Zephyr
Release 3.4.0
1 Introduction 1
1.1 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Distinguishing Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Community Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Fundamental Terms and Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.5.1 API Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.2 API Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.5.3 API Design Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.5.4 API Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.6 Language Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.6.1 C Language Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.6.2 C++ Language Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.7 Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.7.1 Optimizing for Footprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.7.2 Optimization Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.8 Flashing and Hardware Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.8.1 Flash & Debug Host Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.8.2 Debug Probes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.9 Modules (External projects) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.9.1 Modules vs west projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.9.2 Module Repositories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.9.3 Contributing to Zephyr modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.9.4 Licensing requirements and policies . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.9.5 Documentation requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.9.6 Testing requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.9.7 Deprecating and removing modules . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.9.8 Integrate modules in Zephyr build system . . . . . . . . . . . . . . . . . . . . . . 88
2.9.9 Module yaml file description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.9.10 Submitting changes to modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
2.10 West (Zephyr’s meta-tool) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2.10.1 Installing west . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.10.2 West Release Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
2.10.3 Troubleshooting West . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
2.10.4 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
2.10.5 Built-in commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
2.10.6 Workspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
2.10.7 West Manifests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
2.10.8 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
2.10.9 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
2.10.10 Building, Flashing and Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
2.10.11 Signing Binaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
2.10.12 Additional Zephyr extension commands . . . . . . . . . . . . . . . . . . . . . . . 175
2.10.13 History and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
2.10.14 Moving to West . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
2.10.15 Using Zephyr without west . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
2.11 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
2.11.1 Test Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
2.11.2 Test Runner (Twister) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
2.11.3 Integration with pytest test framework . . . . . . . . . . . . . . . . . . . . . . . . 220
2.11.4 Generating coverage reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
2.11.5 BabbleSim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
2.11.6 ZTest Deprecated APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
2.12 Static Code Analysis (SCA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
2.12.1 SCA Tool infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
2.12.2 Native SCA Tool support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
2.13 Toolchains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
2.13.1 Zephyr SDK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
2.13.2 Arm Compiler 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
2.13.3 Cadence Tensilica Xtensa C/C++ Compiler (XCC) . . . . . . . . . . . . . . . . . . 236
2.13.4 DesignWare ARC MetaWare Development Toolkit (MWDT) . . . . . . . . . . . . . 237
2.13.5 GNU Arm Embedded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
2.13.6 Intel oneAPI Toolkit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
2.13.7 Crosstool-NG (Deprecated) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
2.13.8 Host Toolchains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
2.13.9 Other Cross Compilers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
2.13.10 Custom CMake Toolchains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
2.14 Tools and IDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
2.14.1 Coccinelle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
3 Kernel 251
3.1 Kernel Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
3.1.1 Scheduling, Interrupts, and Synchronization . . . . . . . . . . . . . . . . . . . . . 251
3.1.2 Data Passing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
3.1.3 Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
3.1.4 Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
3.1.5 Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
3.2 Device Driver Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
3.2.2 Standard Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
3.2.3 Synchronous Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
3.2.4 Driver APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
3.2.5 Driver Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
3.2.6 Subsystems and API Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
3.2.7 Device-Specific API Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
3.2.8 Single Driver, Multiple Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
3.2.9 Initialization Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
3.2.10 System Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
3.2.11 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
3.2.12 Memory Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
3.2.13 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
3.3 User Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
3.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
3.3.2 Memory Protection Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
3.3.3 Kernel Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
3.3.4 System Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
3.3.5 MPU Stack Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
3.3.6 MPU Backed Userspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
3.4 Memory Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
3.4.1 Memory Heaps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
3.4.2 Shared Multi Heap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
3.4.3 Memory Slabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
3.4.4 Memory Blocks Allocator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
3.4.5 Demand Paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
3.5 Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
3.5.1 Single-linked List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
3.5.2 Double-linked List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
3.5.3 Multi Producer Single Consumer Packet Buffer . . . . . . . . . . . . . . . . . . . . 527
3.5.4 Single Producer Single Consumer Packet Buffer . . . . . . . . . . . . . . . . . . . 528
3.5.5 Balanced Red/Black Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
3.5.6 Ring Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
3.6 Executing Time Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
3.6.1 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
3.6.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
3.6.3 API documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
3.7 Time Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
3.7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
3.7.2 Time Utility APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
3.7.3 Concepts Underlying Time Support in Zephyr . . . . . . . . . . . . . . . . . . . . 552
3.8 Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
3.9 Iterable Sections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
3.9.1 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
3.9.2 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
3.10 Code And Data Relocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
3.10.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
3.10.2 Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
4 OS Services 579
4.1 Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
4.1.1 TinyCrypt Cryptographic Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
4.1.2 Random Number Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
4.1.3 Crypto APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
4.2 Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
4.2.1 Thread analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
4.2.2 Core Dump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
4.2.3 GDB stub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
4.2.4 Cortex-M Debug Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
4.3 Device Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
4.3.1 MCUmgr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
4.3.2 MCUmgr Callbacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
4.3.3 Fixing and backporting fixes to Zephyr v2.7 MCUmgr . . . . . . . . . . . . . . . . 628
4.3.4 SMP Protocol Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
4.3.5 SMP Transport Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
4.3.6 Device Firmware Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
4.3.7 Over-the-Air Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
4.3.8 EC Host Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
4.3.9 SMP Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
4.4 Digital Signal Processing (DSP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
4.4.1 Using zDSP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
4.4.2 Optimizing for your architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
4.4.3 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
4.5 File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
4.5.1 Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
4.5.2 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
4.6 Formatted Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
4.6.1 Cbprintf Packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
4.6.2 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
4.7 Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
4.7.1 Input Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
4.7.2 Input Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
4.7.3 Application API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
4.7.4 Kscan Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
4.7.5 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
4.7.6 Input Event Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
4.8 Interprocessor Communication (IPC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
4.8.1 IPC service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
4.9 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
4.9.1 Global Kconfig Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
4.9.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
4.9.3 Logging panic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
4.9.4 Printk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
4.9.5 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
4.9.6 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
4.9.7 Benchmark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
4.9.8 Stack usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
4.9.9 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
4.10 Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
4.10.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
4.10.2 Serialization Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
4.10.3 Transport Backends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
4.10.4 Using Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
4.10.5 Visualisation Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
4.10.6 Future LTTng Inspiration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
4.10.7 Object tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
4.10.8 API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
4.11 Resource Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
4.11.1 On-Off Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
4.12 Modbus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
4.12.1 Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
4.12.2 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
4.13 Asynchronous Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
4.13.1 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
4.14 Power Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
4.14.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
4.14.2 System Power Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
4.14.3 Device Power Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
4.14.4 Device Runtime Power Management . . . . . . . . . . . . . . . . . . . . . . . . . 815
4.14.5 Power Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
4.14.6 Power Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
4.15 OS Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
4.15.1 POSIX Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
4.15.2 CMSIS RTOS v1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
4.15.3 CMSIS RTOS v2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
4.16 Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
4.16.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
4.16.2 Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
4.16.3 Tab Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
4.16.4 History Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
4.16.5 Wildcards Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
4.16.6 Meta Keys Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
4.16.7 Getopt Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
4.16.8 Obscured Input Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
4.16.9 Shell Logger Backend Feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
4.16.10 RTT Backend Channel Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
4.16.11 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
4.16.12 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
4.17 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
4.17.1 Handlers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
4.17.2 Backends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
4.17.3 Zephyr Storage Backends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
4.17.4 Storage Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
4.17.5 Loading data from persisted storage . . . . . . . . . . . . . . . . . . . . . . . . . . 885
4.17.6 Storing data to persistent storage . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
4.17.7 Secure domain settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
4.17.8 Example: Device Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
4.17.9 Example: Persist Runtime State . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
4.17.10 Example: Custom Backend Implementation . . . . . . . . . . . . . . . . . . . . . 888
4.17.11 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
4.18 State Machine Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
4.18.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
4.18.2 State Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
4.18.3 State Machine Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
4.18.4 State Machine Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
4.18.5 State Machine Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
4.18.6 Flat State Machine Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
4.18.7 Hierarchical State Machine Example . . . . . . . . . . . . . . . . . . . . . . . . . 900
4.18.8 Event Driven State Machine Example . . . . . . . . . . . . . . . . . . . . . . . . . 903
4.19 Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
4.19.1 Non-Volatile Storage (NVS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
4.19.2 Disk Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
4.19.3 Flash map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
4.19.4 Flash Circular Buffer (FCB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
4.19.5 Stream Flash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
4.20 Task Watchdog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
4.20.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
4.20.2 Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
4.20.3 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
4.21 Trusted Firmware-M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
4.21.1 Trusted Firmware-M Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
4.21.2 TF-M Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
4.21.3 TF-M Build System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
4.21.4 Trusted Firmware-M Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
4.21.5 Test Suites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
4.22 Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
4.22.1 Inter-VM Shared Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
4.23 Retention System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
4.23.1 Devicetree setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
4.23.2 Boot mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
4.23.3 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
4.24 Real Time I/O (RTIO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
4.24.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
4.24.2 Inspiration, introducing io_uring . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
4.24.3 Submission Queue and Chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
4.24.4 Completion Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
4.24.5 Executor and IODev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
4.24.6 Memory pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
4.24.7 Outstanding Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
4.24.8 When to Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
4.24.9 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
4.24.10 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
4.25 Zephyr message bus (zbus) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
4.25.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
4.25.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
4.25.3 Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
4.25.4 Suggested Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
4.25.5 Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
4.25.6 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
4.26 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
4.26.1 Checksum APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
4.26.2 Structured Data APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
5.4.2 Built-in snippets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
5.4.3 Writing Snippets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
5.4.4 Snippets Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
5.5 Zephyr CMake Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1277
5.5.1 Zephyr CMake package export (west) . . . . . . . . . . . . . . . . . . . . . . . . . 1278
5.5.2 Zephyr CMake package export (without west) . . . . . . . . . . . . . . . . . . . . 1278
5.5.3 Zephyr Base Environment Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
5.5.4 Zephyr CMake Package Search Order . . . . . . . . . . . . . . . . . . . . . . . . . 1278
5.5.5 Zephyr CMake Package Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
5.5.6 Multiple Zephyr Installations (Zephyr workspace) . . . . . . . . . . . . . . . . . . 1280
5.5.7 Zephyr Build Configuration CMake package . . . . . . . . . . . . . . . . . . . . . 1281
5.5.8 Zephyr Build Configuration CMake package (Freestanding application) . . . . . . 1282
5.5.9 Zephyr CMake package source code . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
5.6 Sysbuild (System build) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
5.6.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
5.6.2 Architectural Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
5.6.3 Building with sysbuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
5.6.4 Configuration namespacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
5.6.5 Sysbuild flashing using west flash . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
5.6.6 Sysbuild debugging using west debug . . . . . . . . . . . . . . . . . . . . . . . . 1287
5.6.7 Building a sample with MCUboot . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
5.6.8 Sysbuild Kconfig file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
5.6.9 Sysbuild targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
5.6.10 Dedicated image build targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289
5.6.11 Adding Zephyr applications to sysbuild . . . . . . . . . . . . . . . . . . . . . . . . 1289
5.6.12 Adding non-Zephyr applications to sysbuild . . . . . . . . . . . . . . . . . . . . . 1291
5.6.13 Extending sysbuild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1292
6 Connectivity 1293
6.1 Bluetooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
6.1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
6.1.2 Bluetooth Stack Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
6.1.3 Bluetooth Low Energy Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . 1301
6.1.4 Bluetooth Audio Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
6.1.5 Bluetooth Qualification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
6.1.6 Bluetooth tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
6.1.7 Developing Bluetooth Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
6.1.8 AutoPTS on Windows 10 with nRF52 board . . . . . . . . . . . . . . . . . . . . . 1345
6.1.9 AutoPTS on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1354
6.1.10 Bluetooth APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
6.1.11 Bluetooth Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1960
6.2 Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
6.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
6.2.2 Network Stack Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1968
6.2.3 Network Connectivity API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1974
6.2.4 Networking with the host system . . . . . . . . . . . . . . . . . . . . . . . . . . . 1974
6.2.5 Monitor Network Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1987
6.2.6 Networking APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1991
6.3 LoRa and LoRaWAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
6.3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
6.3.2 Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2293
6.3.3 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2293
6.4 USB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2303
6.4.1 USB device support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2303
6.4.2 USB device support APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2312
6.4.3 New experimental USB device support . . . . . . . . . . . . . . . . . . . . . . . . 2327
6.4.4 New USB device support APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2327
6.4.5 USB host support APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2345
6.4.6 USB-C device stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
6.4.7 Human Interface Devices (HID) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
7.5.38 Retained Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2733
7.5.39 Secure Digital High Capacity (SDHC) . . . . . . . . . . . . . . . . . . . . . . . . . 2734
7.5.40 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2742
7.5.41 Serial Peripheral Interface (SPI) Bus . . . . . . . . . . . . . . . . . . . . . . . . . 2764
7.5.42 System Management Bus (SMBus) . . . . . . . . . . . . . . . . . . . . . . . . . . 2776
7.5.43 Universal Asynchronous Receiver-Transmitter (UART) . . . . . . . . . . . . . . . . 2787
7.5.44 USB-C VBUS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2805
7.5.45 USB Type-C Port Controller (TCPC) . . . . . . . . . . . . . . . . . . . . . . . . . . 2806
7.5.46 Video . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2841
7.5.47 Watchdog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2849
7.6 Pin Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2852
7.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2852
7.6.2 State model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2854
7.6.3 Dynamic pin control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
7.6.4 Devicetree representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2856
7.6.5 Implementation guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2858
7.6.6 Pin Control API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2860
7.6.7 Other reference material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2865
7.7 Porting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2865
7.7.1 Architecture Porting Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2865
7.7.2 Board Porting Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2900
7.7.3 Shields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2910
8.9 Additional considerations about the main manifest . . . . . . . . . . . . . . . . . . . . . . 2964
8.10 Binary Blobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
8.10.1 Software license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
8.10.2 Hosting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
8.10.3 Fetching blobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
8.10.4 Tainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2965
8.10.5 Allowed types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2965
8.10.6 Precompiled library-specific requirements . . . . . . . . . . . . . . . . . . . . . . 2966
8.10.7 Support and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2967
8.10.8 Submission and review process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2967
10 Security 2995
10.1 Zephyr Security Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2995
10.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2995
10.1.2 Current Security Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2996
10.1.3 Secure Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2998
10.1.4 Secure Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3002
10.1.5 Security Certification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3004
10.2 Security Vulnerability Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
10.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
10.2.2 Security Issue Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
10.2.3 Vulnerability Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3007
10.2.4 Backporting of Security Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . 3008
10.2.5 Need to Know . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3008
10.3 Secure Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3008
10.3.1 Introduction and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3008
10.3.2 Secure Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3009
10.3.3 Secure development knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3010
10.3.4 Code Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
10.3.5 Issues and Bug Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
10.3.6 Modifications to This Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
10.4 Sensor Device Threat Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
10.4.1 Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3012
10.4.2 Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3013
10.4.3 Other Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3016
10.4.4 Threats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3016
10.4.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3016
10.5 Hardening Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3016
10.5.1 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3016
10.6 Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3017
10.6.1 CVE-2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3017
10.6.2 CVE-2019 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3018
10.6.3 CVE-2020 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3018
10.6.4 CVE-2021 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3026
Bibliography 3033
Index 3037
Chapter 1
Introduction
The Zephyr OS is based on a small-footprint kernel designed for use on resource-constrained and em-
bedded systems: from simple embedded environmental sensors and LED wearables to sophisticated
embedded controllers, smart watches, and IoT wireless applications.
The Zephyr kernel supports multiple architectures, including:
• ARCv2 (EM and HS) and ARCv3 (HS6X)
• ARMv6-M, ARMv7-M, and ARMv8-M (Cortex-M)
• ARMv7-A and ARMv8-A (Cortex-A, 32- and 64-bit)
• ARMv7-R, ARMv8-R (Cortex-R, 32- and 64-bit)
• Intel x86 (32- and 64-bit)
• MIPS (MIPS32 Release 1 specification)
• NIOS II Gen 2
• RISC-V (32- and 64-bit)
• SPARC V8
• Tensilica Xtensa
The full list of supported boards based on these architectures can be found here.
1.1 Licensing
Zephyr is permissively licensed using the Apache 2.0 license (as found in the LICENSE file in the project’s
GitHub repo). There are some imported or reused components of the Zephyr project that use other
licensing, as described in Licensing of Zephyr Project components.
Zephyr Project Documentation, Release 3.4.0
1.2 Distinguishing Features
• Memory Allocation Services for dynamic allocation and freeing of fixed-size or variable-size
memory blocks.
• Inter-thread Synchronization Services for binary semaphores, counting semaphores, and mutex
semaphores.
• Inter-thread Data Passing Services for basic message queues, enhanced message queues, and
byte streams.
• Power Management Services such as overarching, application or policy-defined, System Power
Management and fine-grained, driver-defined, Device Power Management.
Multiple Scheduling Algorithms
Zephyr provides a comprehensive set of thread scheduling choices:
• Cooperative and Preemptive Scheduling
• Earliest Deadline First (EDF)
• Meta IRQ scheduling implementing “interrupt bottom half” or “tasklet” behavior
• Timeslicing: Enables time slicing between preemptible threads of equal priority
• Multiple queuing strategies:
– Simple linked-list ready queue
– Red/black tree ready queue
– Traditional multi-queue ready queue
Highly configurable / Modular for flexibility
Allows an application to incorporate only the capabilities it needs as it needs them, and to specify
their quantity and size.
Cross Architecture
Supports a wide variety of boards with different CPU architectures and developer tools. Contributions have added support for an increasing number of SoCs, platforms, and drivers.
Memory Protection
Implements configurable, architecture-specific stack-overflow protection, kernel object and device driver permission tracking, and thread isolation via thread-level memory protection on x86, ARC, and ARM architectures, including userspace and memory domain support.
For platforms without MMU/MPU and memory constrained devices, supports combining
application-specific code with a custom kernel to create a monolithic image that gets loaded and
executed on a system’s hardware. Both the application code and kernel code execute in a single
shared address space.
Compile-time resource definition
Allows system resources to be defined at compile-time, which reduces code size and increases
performance for resource-limited systems.
Optimized Device Driver Model
Provides a consistent device model for configuring the drivers that are part of the platform/system, and a consistent model for initializing all the drivers configured into the system. Allows the reuse of drivers across platforms that have common devices/IP blocks.
Devicetree Support
Use of devicetree to describe hardware. Information from devicetree is used to create the application
image.
Native Networking Stack supporting multiple protocols
Networking support is fully featured and optimized, including LwM2M and BSD sockets compatible support. OpenThread, a mesh networking protocol designed to securely and reliably connect hundreds of products around the home, is also supported on Nordic chipsets.
1.3 Community Support
Community support is provided via mailing lists and Discord; see the Resources below for details.
1.4 Resources
Here’s a quick summary of resources to help you find your way around:
• Help: Asking for Help Tips
1.5 Fundamental Terms and Concepts
See glossary
Chapter 2
macOS
On macOS Mojave or later, select System Preferences > Software Update. Click Update Now if necessary.
On other versions, see this Apple support topic.
Windows
Select Start > Settings > Update & Security > Windows Update. Click Check for updates and install any
that are available.
Next, you’ll install some host dependencies using your package manager.
The current minimum required versions for the main dependencies are:
Ubuntu
1. If using an Ubuntu version older than 22.04, it is necessary to add extra repositories to meet the minimum required versions for the main dependencies listed above. In that case, download, inspect and execute the Kitware archive script to add the Kitware APT repository to your sources list. A detailed explanation of kitware-archive.sh can be found in the kitware third-party apt repository documentation:
wget https://apt.kitware.com/kitware-archive.sh
sudo bash kitware-archive.sh
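The main dependencies themselves are then installed with apt. The package set below is a sketch based on common Zephyr host setups; verify the names against your Ubuntu release before running it:

```shell
# Install Zephyr's main host dependencies (package names may vary by release).
sudo apt install --no-install-recommends git cmake ninja-build gperf \
  ccache dfu-util device-tree-compiler wget \
  python3-dev python3-pip python3-setuptools python3-tk python3-wheel \
  xz-utils file make gcc gcc-multilib g++-multilib libsdl2-dev libmagic1
```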
3. Verify the versions of the main dependencies installed on your system by entering:
cmake --version
python3 --version
dtc --version
Check those against the versions in the table in the beginning of this section. Refer to the Install
Linux Host Dependencies page for additional information on updating the dependencies manually.
macOS
1. Install Homebrew.
2. Use brew to install the host dependencies:
brew install cmake ninja gperf python3 ccache qemu dtc wget libmagic
Windows
Note: Due to issues finding executables, the Zephyr Project doesn't currently support application flashing using the Windows Subsystem for Linux (WSL).
Therefore, we don't recommend using WSL when getting started.
These instructions must be run in a cmd.exe command prompt. The required commands differ on
PowerShell.
These instructions rely on Chocolatey. If Chocolatey isn’t an option, you can install dependencies from
their respective websites and ensure the command line tools are on your PATH environment variable.
1. Install chocolatey.
2. Open a cmd.exe window as Administrator. To do so, press the Windows key, type “cmd.exe”,
right-click the result, and choose Run as Administrator.
3. Disable global confirmation to avoid having to confirm the installation of individual programs:
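With Chocolatey, disabling confirmation and installing the dependencies is typically done as follows, run in the Administrator cmd.exe window. The package names here are indicative and may need adjusting for your setup:

```shell
choco feature enable -n allowGlobalConfirmation
choco install cmake --installargs 'ADD_CMAKE_TO_PATH=System'
choco install ninja gperf python git dtc-msys2 wget 7zip
```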
5. Close the window and open a new cmd.exe window as a regular user to continue.
Next, clone Zephyr and its modules into a new west workspace named zephyrproject. You’ll also install
Zephyr’s additional Python dependencies.
Note: It is easy to run into Python package incompatibilities when installing dependencies at a system
or user level. This situation can happen, for example, if working on multiple Zephyr versions or other
projects using Python on the same machine.
For this reason it is suggested to use Python virtual environments.
Ubuntu
Install within virtual environment
1. Use apt to install Python venv package:
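Assuming the workspace layout used throughout this guide (~/zephyrproject), installing the venv package and creating the environment look like this:

```shell
# Install Python's venv support, then create the virtual environment
# in the workspace directory used by the rest of this guide.
sudo apt install python3-venv
python3 -m venv ~/zephyrproject/.venv
```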
source ~/zephyrproject/.venv/bin/activate
Once activated your shell will be prefixed with (.venv). The virtual environment can be deacti-
vated at any time by running deactivate.
Note: Remember to activate the virtual environment every time you start working.
4. Install west:
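With the virtual environment activated, installing west and then fetching the Zephyr source with it typically looks like this (the ~/zephyrproject path matches the workspace used throughout; adjust if yours differs):

```shell
# Install west inside the active virtual environment.
pip install west

# Initialize the west workspace and fetch Zephyr plus its modules.
west init ~/zephyrproject
cd ~/zephyrproject
west update
```

west update can take some time on the first run, since it clones all of Zephyr's module repositories.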
6. Export a Zephyr CMake package. This allows CMake to automatically load boilerplate code required
for building Zephyr applications.
west zephyr-export
Install globally
1. Install west, and make sure ~/.local/bin is on your PATH environment variable:
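A sketch of the global install, assuming pip's default --user location:

```shell
# Install (or update) west into ~/.local/bin via pip's user scheme.
pip3 install --user -U west
```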
3. Export a Zephyr CMake package. This allows CMake to automatically load boilerplate code required
for building Zephyr applications.
west zephyr-export
macOS
Install within virtual environment
1. Create a new virtual environment:
source ~/zephyrproject/.venv/bin/activate
Once activated your shell will be prefixed with (.venv). The virtual environment can be deacti-
vated at any time by running deactivate.
Note: Remember to activate the virtual environment every time you start working.
3. Install west:
5. Export a Zephyr CMake package. This allows CMake to automatically load boilerplate code required
for building Zephyr applications.
west zephyr-export
Install globally
1. Install west:
3. Export a Zephyr CMake package. This allows CMake to automatically load boilerplate code required
for building Zephyr applications.
west zephyr-export
Windows
Install within virtual environment
1. Create a new virtual environment:
cd %HOMEPATH%
python -m venv zephyrproject\.venv
:: cmd.exe
zephyrproject\.venv\Scripts\activate.bat
:: PowerShell
zephyrproject\.venv\Scripts\Activate.ps1
Once activated your shell will be prefixed with (.venv). The virtual environment can be deacti-
vated at any time by running deactivate.
Note: Remember to activate the virtual environment every time you start working.
3. Install west:
5. Export a Zephyr CMake package. This allows CMake to automatically load boilerplate code required
for building Zephyr applications.
west zephyr-export
Install globally
1. Install west:
cd %HOMEPATH%
west init zephyrproject
cd zephyrproject
west update
3. Export a Zephyr CMake package. This allows CMake to automatically load boilerplate code required
for building Zephyr applications.
west zephyr-export
The Zephyr Software Development Kit (SDK) contains toolchains for each of Zephyr’s supported architec-
tures, which include a compiler, assembler, linker and other programs required to build Zephyr applica-
tions.
It also contains additional host tools, such as custom QEMU and OpenOCD builds that are used to
emulate, flash and debug Zephyr applications.
Ubuntu
1. Download and verify the Zephyr SDK bundle:
cd ~
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_linux-x86_64.tar.xz
wget -O - https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/sha256.sum | shasum --check --ignore-missing
If your host architecture is 64-bit ARM (for example, Raspberry Pi), replace x86_64 with aarch64
in order to download the 64-bit ARM Linux SDK.
2. Extract the Zephyr SDK bundle archive:
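Assuming the bundle was downloaded to the current directory as shown above, extraction is done with tar:

```shell
# Unpack the SDK bundle; this creates the zephyr-sdk-0.16.1 directory.
tar xvf zephyr-sdk-0.16.1_linux-x86_64.tar.xz
```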
Note: It is recommended to extract the Zephyr SDK bundle at one of the following locations:
• $HOME
• $HOME/.local
• $HOME/.local/opt
• $HOME/bin
• /opt
• /usr/local
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted
under $HOME, the resulting installation path will be $HOME/zephyr-sdk-0.16.1.
3. Run the Zephyr SDK bundle setup script:
cd zephyr-sdk-0.16.1
./setup.sh
Note: You only need to run the setup script once after extracting the Zephyr SDK bundle.
You must rerun the setup script if you relocate the Zephyr SDK bundle directory after the initial
setup.
4. Install udev rules, which allow you to flash most Zephyr boards as a regular user:
sudo cp ~/zephyr-sdk-0.16.1/sysroots/x86_64-pokysdk-linux/usr/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
macOS
1. Download and verify the Zephyr SDK bundle:
cd ~
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_macos-x86_64.tar.xz
wget -O - https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/sha256.sum | shasum --check --ignore-missing
If your host architecture is 64-bit ARM (Apple Silicon, also known as M1), replace x86_64 with
aarch64 in order to download the 64-bit ARM macOS SDK.
2. Extract the Zephyr SDK bundle archive:
Note: It is recommended to extract the Zephyr SDK bundle at one of the following locations:
• $HOME
• $HOME/.local
• $HOME/.local/opt
• $HOME/bin
• /opt
• /usr/local
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted
under $HOME, the resulting installation path will be $HOME/zephyr-sdk-0.16.1.
3. Run the Zephyr SDK bundle setup script:
cd zephyr-sdk-0.16.1
./setup.sh
Note: You only need to run the setup script once after extracting the Zephyr SDK bundle.
You must rerun the setup script if you relocate the Zephyr SDK bundle directory after the initial
setup.
Windows
1. Open a cmd.exe window by pressing the Windows key and typing “cmd.exe”.
2. Download the Zephyr SDK bundle:
cd %HOMEPATH%
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_windows-x86_64.7z
3. Extract the Zephyr SDK bundle archive:
7z x zephyr-sdk-0.16.1_windows-x86_64.7z
Note: It is recommended to extract the Zephyr SDK bundle at one of the following locations:
• %HOMEPATH%
• %PROGRAMFILES%
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted
under %HOMEPATH%, the resulting installation path will be %HOMEPATH%\zephyr-sdk-0.16.1.
4. Run the Zephyr SDK bundle setup script:
cd zephyr-sdk-0.16.1
setup.cmd
Note: You only need to run the setup script once after extracting the Zephyr SDK bundle.
You must rerun the setup script if you relocate the Zephyr SDK bundle directory after the initial
setup.
Note: Blinky is compatible with most, but not all, boards. If your board does not meet Blinky’s requirements, then hello_world is a good alternative.
If you are unsure what name west uses for your board, west boards can be used to obtain a list of all
boards Zephyr supports.
Build the blinky-sample with west build, changing <your-board-name> appropriately for your board:
Ubuntu
cd ~/zephyrproject/zephyr
west build -p always -b <your-board-name> samples/basic/blinky
macOS
cd ~/zephyrproject/zephyr
west build -p always -b <your-board-name> samples/basic/blinky
Windows
cd %HOMEPATH%\zephyrproject\zephyr
west build -p always -b <your-board-name> samples\basic\blinky
The -p always option forces a pristine build, and is recommended for new users. Users may also use
the -p auto option, which will use heuristics to determine if a pristine build is required, such as when
building another sample.
Connect your board, usually via USB, and turn it on if there’s a power switch. If in doubt about what to
do, check your board’s page in boards.
Then flash the sample using west flash:
west flash
You may need to install additional host tools required by your board. The west flash command will
print an error if any required dependencies are missing.
If you’re using blinky, the LED will start to blink as shown in this figure:
Here are some tips for fixing common issues related to the installation process.
You can ask for help on a mailing list or on Discord. Please send bug reports and feature requests to
GitHub.
• Mailing Lists: [email protected] is usually the right list to ask for help. Search archives
and sign up here.
• Discord: You can join with this Discord invite.
• GitHub: Use GitHub issues for bugs and feature requests.
How to Ask
Important: Please search this documentation and the mailing list archives first. Your question may have
an answer there.
Don’t just say “this isn’t working” or ask “is this working?”. Include as much detail as you can about:
1. What you want to do
2. What you tried (commands you typed, etc.)
3. What happened (output of each command, etc.)
Use Copy/Paste
Please copy/paste text instead of taking a picture or a screenshot of it. Text includes source code,
terminal commands, and their output.
Doing this makes it easier for people to help you, and also helps other users search the archives. Unnec-
essary screenshots exclude vision impaired developers; some are major Zephyr contributors. Accessibility
has been recognized as a basic human right by the United Nations.
When copy/pasting more than 5 lines of computer text into Discord or GitHub, create a snippet using three backticks to delimit the snippet.
The Getting Started Guide gives a straightforward path to set up your Linux, macOS, or Windows environment for Zephyr development. In this document, we delve deeper into Zephyr development setup issues and alternatives.
Python 3 and its package manager, pip, are used extensively by Zephyr to install and run scripts required to compile and run Zephyr applications, set up and maintain the Zephyr development environment, and build project documentation.
Depending on your operating system, you may need to provide the --user flag to the pip3 command when installing new packages. This is documented throughout the instructions. See Installing Packages in the Python Packaging User Guide for more information about pip, including information on --user.
• On Linux, make sure ~/.local/bin is at the front of your PATH environment variable, or programs
installed with --user won’t be found. Installing with --user avoids conflicts between pip and the
system package manager, and is the default on Debian-based distributions.
• On macOS, Homebrew disables --user.
• On Windows, see the Installing Packages information on --user if you require using this option.
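For the Linux case above, the check-and-fix can be sketched as a POSIX shell fragment suitable for a shell startup file (which file is your shell's own, e.g. ~/.profile):

```shell
# Prepend ~/.local/bin to PATH only if it is not already present.
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                      # already on PATH, nothing to do
  *) PATH="$HOME/.local/bin:$PATH"; export PATH ;;
esac
```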
On all operating systems, pip’s -U flag installs or updates the package if the package is already installed
locally but a more recent version is available. It is good practice to use this flag if the latest version of a
package is required. (Check the scripts/requirements.txt file to see if a specific Python package version
is expected.)
Here are some alternative instructions for more advanced platform setup configurations for supported
development platforms:
Note: If you’re working behind a corporate firewall, you’ll likely need to configure a proxy for accessing
the internet, if you haven’t done so already. While some tools use the environment variables http_proxy
and https_proxy to get their proxy settings, some use their own configuration files, most notably apt
and git.
pip’s install command first tries to reuse packages and package dependencies already installed on your computer. If that is not possible, pip install downloads them from the Python Package Index (PyPI) on the Internet.
The package versions requested by Zephyr’s requirements.txt may conflict with other requirements on your system, in which case you may want to set up a virtualenv for Zephyr development.
Fedora
Clear Linux
Arch Linux
Install Requirements and Dependencies
Note that both Ninja and Make are installed with these instructions; you only need one.
Ubuntu
Fedora
sudo dnf group install "Development Tools" "C Development Tools and Libraries"
sudo dnf install git cmake ninja-build gperf ccache dfu-util dtc wget \
python3-pip python3-tkinter xz file glibc-devel.i686 libstdc++-devel.i686 python38 \
SDL2-devel
Clear Linux
The Clear Linux focus is on native performance and security and not cross-compilation. For that reason, it uniquely exports by default a list of compiler and linker flags to the environment of all users. Zephyr’s CMake build system will either warn or fail because of these. To clear the C/C++ flags among these and fix the Zephyr build, run the following command as root, then log out and back in:
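One way to do this is a profile drop-in that unsets the flags; the file name here is illustrative, not necessarily the exact one from Clear Linux’s own documentation:

```shell
# Run as root: unset the exported C/C++ flags for all login shells.
echo 'unset CFLAGS CXXFLAGS' > /etc/profile.d/unset_cc_flags.sh
```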
Note this command unsets the C/C++ flags for all users on the system. Each Linux distribution has a
unique, relatively complex and potentially evolving sequence of bash initialization files sourcing each
other and Clear Linux is no exception. If you need a more flexible solution, start by looking at the logic
in /usr/share/defaults/etc/profile.
Arch Linux
sudo pacman -S git cmake ninja gperf ccache dfu-util dtc wget \
python-pip python-setuptools python-wheel tk xz file make
CMake
A recent CMake version is required. Check what version you have by using cmake --version. If you have an older version, there are several ways of obtaining a more recent one:
• On Ubuntu, you can follow the instructions for adding the kitware third-party apt repository to get
an updated version of cmake using apt.
• Download and install a packaged cmake from the CMake project site. (Note this won’t uninstall
the previous version of cmake.)
cd ~
wget https://github.com/Kitware/CMake/releases/download/v3.21.1/cmake-3.21.1-Linux-x86_64.sh
chmod +x cmake-3.21.1-Linux-x86_64.sh
sudo ./cmake-3.21.1-Linux-x86_64.sh --skip-license --prefix=/usr/local
hash -r
The hash -r command may be necessary if the installation script put cmake into a new location
on your PATH.
• Download and install from the pre-built binaries provided by the CMake project itself in the CMake
Downloads page. For example, to install version 3.21.1 in ~/bin/cmake:
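A sketch of such an install into ~/bin/cmake; the directory layout and the PATH update are assumptions, so adjust them to taste:

```shell
# Download the installer into ~/bin/cmake and unpack it there.
mkdir -p "$HOME/bin/cmake" && cd "$HOME/bin/cmake"
wget https://github.com/Kitware/CMake/releases/download/v3.21.1/cmake-3.21.1-Linux-x86_64.sh
bash cmake-3.21.1-Linux-x86_64.sh --skip-license --prefix="$HOME/bin/cmake"

# Make the new cmake visible to future shells.
echo 'export PATH="$HOME/bin/cmake/bin:$PATH"' >> "$HOME/.bashrc"
```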
• Use pip3:
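The pip route is a one-liner; --user keeps the install out of system paths, matching the note that follows:

```shell
# Install (or update) CMake from PyPI into ~/.local/bin.
pip3 install --user -U cmake
```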
Note this won’t uninstall the previous version of cmake and will install the new cmake into your
~/.local/bin folder so you’ll need to add ~/.local/bin to your PATH. (See Python and pip for de-
tails.)
• Check your distribution’s beta or unstable release package library for an update.
• On Ubuntu you can also use snap to get the latest version available:
After updating cmake, verify that the newly installed cmake is found using cmake --version. You might
also want to uninstall the CMake provided by your package manager to avoid conflicts. (Use whereis
cmake to find other installed versions.)
DTC (Device Tree Compiler)
A recent DTC version is required. Check what version you have by using dtc --version. If you have an older version, either install a more recent one by building from source, or use the one that is bundled in the Zephyr SDK by installing it.
Python
A modern Python 3 version is required. Check what version you have by using python3 --version.
If you have an older version, you will need to install a more recent Python 3. You can build from source,
or use a backport from your distribution’s package manager channels if one is available. Isolating this
Python in a virtual environment is recommended to avoid interfering with your system Python.
Install the Zephyr Software Development Kit (SDK)
The Zephyr Software Development Kit (SDK) contains toolchains for each of Zephyr’s supported architectures. It also includes additional host tools, such as custom QEMU and OpenOCD.
Use of the Zephyr SDK is highly recommended and may even be required under certain conditions (for
example, running tests in QEMU for some architectures).
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_linux-x86_64.tar.xz
wget -O - https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/sha256.sum | shasum --check --ignore-missing
You can change 0.16.1 to another version if needed; the Zephyr SDK Releases page contains all
available SDK releases.
If your host architecture is 64-bit ARM (for example, Raspberry Pi), replace x86_64 with aarch64
in order to download the 64-bit ARM Linux SDK.
2. Extract the Zephyr SDK bundle archive:
cd zephyr-sdk-0.16.1
./setup.sh
If this fails, make sure Zephyr’s dependencies were installed as described in Install Requirements
and Dependencies.
If you want to uninstall the SDK, remove the directory where you installed it. If you relocate the SDK
directory, you need to re-run the setup script.
Note: It is recommended to extract the Zephyr SDK bundle at one of the following locations:
• $HOME
• $HOME/.local
• $HOME/.local/opt
• $HOME/bin
• /opt
• /usr/local
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted under
$HOME, the resulting installation path will be $HOME/zephyr-sdk-0.16.1.
If you install the Zephyr SDK outside any of these locations, you must register the Zephyr SDK in the
CMake package registry by running the setup script, or set ZEPHYR_SDK_INSTALL_DIR to point to the
Zephyr SDK installation directory.
You can also use ZEPHYR_SDK_INSTALL_DIR for pointing to a directory containing multiple Zephyr SDKs,
allowing for automatic toolchain selection. For example, ZEPHYR_SDK_INSTALL_DIR=/company/tools,
where the company/tools folder contains the following subfolders:
• /company/tools/zephyr-sdk-0.13.2
• /company/tools/zephyr-sdk-a.b.c
• /company/tools/zephyr-sdk-x.y.z
This allows the Zephyr build system to choose the correct version of the SDK, while allowing multiple
Zephyr SDKs to be grouped together at a specific path.
Building on Linux without the Zephyr SDK
The Zephyr SDK is provided for convenience and ease of use. It provides toolchains for all Zephyr target architectures, and does not require any extra flags when building applications or running tests. In addition to cross-compilers, the Zephyr SDK also provides pre-built host tools. It is, however, possible to build without the SDK’s toolchain by using another toolchain as described in the Toolchains section.
As already noted above, the SDK also includes prebuilt host tools. To use the SDK’s prebuilt host tools
with a toolchain from another source, you must set the ZEPHYR_SDK_INSTALL_DIR environment variable
to the Zephyr SDK installation directory. To build without the Zephyr SDK’s prebuilt host tools, the
ZEPHYR_SDK_INSTALL_DIR environment variable must be unset.
To make sure this variable is unset, run:
unset ZEPHYR_SDK_INSTALL_DIR
Important note about Gatekeeper Starting with macOS 10.15 Catalina, applications launched from
the macOS Terminal application (or any other terminal emulator) are subject to the same system security
policies that are applied to applications launched from the Dock. This means that if you download
executable binaries using a web browser, macOS will not let you execute those from the Terminal by
default. To get around this issue, you can take either of two approaches:
• Run xattr -r -d com.apple.quarantine /path/to/folder, where /path/to/folder is the path
to the enclosing folder containing the executables you want to run.
• Open “System Preferences” -> “Security and Privacy” -> “Privacy” and then scroll down to “Devel-
oper Tools”. Then unlock the lock to be able to make changes and check the checkbox correspond-
ing to your terminal emulator of choice. This will apply to any executable being launched from
such terminal program.
Note that this section does not apply to executables installed with Homebrew, since those are
automatically un-quarantined by brew itself. This is, however, relevant for most toolchains.
Additional notes for MacPorts users While MacPorts is not officially supported in this guide, it is
possible to use MacPorts instead of Homebrew to get all the required dependencies on macOS. Note also
that you may need to install rust and cargo for the Python dependencies to install correctly.
Windows 10 WSL (Windows Subsystem for Linux) If you are running a recent version of Windows
10, you can use its built-in functionality to run Ubuntu binaries natively from a standard command
prompt. This allows you to use software such as the Zephyr SDK without setting up a virtual
machine.
Warning: Windows 10 version 1803 has an issue that will cause CMake to not work properly and is
fixed in version 1809 (and later). More information can be found in Zephyr Issue 10420.
Note: For the Zephyr SDK to function properly you will need Windows 10 build 15002 or greater.
You can check which Windows 10 build you are running in the “About your PC” section of the
System Settings. If you are running an older Windows 10 build you might need to install the
Creator’s Update.
2. Follow the Ubuntu instructions in the Install Linux Host Dependencies document.
Zephyr binaries are compiled and linked by a toolchain comprised of a cross-compiler and related tools
which are different from the compiler and tools used for developing software that runs natively on your
host operating system.
You can install the Zephyr SDK to get toolchains for all supported architectures, or install an alternate
toolchain recommended by the SoC vendor or a specific board (check your specific board-level documen-
tation).
You can configure the Zephyr build system to use a specific toolchain by setting environment variables
such as ZEPHYR_TOOLCHAIN_VARIANT to a supported value, along with additional variable(s) specific to
the toolchain variant.
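As a sketch, selecting the LLVM toolchain might look like this (the install path is a hypothetical example; the {TOOLCHAIN}_TOOLCHAIN_PATH naming is described in the environment-variable reference later in this section):

```shell
# Select a toolchain variant and tell the build system where it lives
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
export LLVM_TOOLCHAIN_PATH=/usr/lib/llvm    # hypothetical install path
echo "$ZEPHYR_TOOLCHAIN_VARIANT"
```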
The Zephyr project source is maintained in the GitHub zephyr repo. External modules used by Zephyr
are found in the parent GitHub Zephyr project. Because of these dependencies, it’s convenient to use the
Zephyr-created west tool to fetch and manage the Zephyr and external module source code. See Basics
for more details.
Once your development tools are installed, use West (Zephyr’s meta-tool) to create, initialize, and down-
load sources from the zephyr and external module repos. We’ll use the name zephyrproject, but you
can choose any name that does not contain a space anywhere in the path.
The west update command fetches and keeps Modules (External projects) in the zephyrproject folder
in sync with the code in the local zephyr repo.
Warning: You must run west update any time zephyr/west.yml changes; this happens, for example,
when you pull the zephyr repository, switch branches in it, or perform a git bisect inside of it.
To update the Zephyr project source code, you need to get the latest changes via git. Afterwards, run
west update as mentioned in the previous paragraph.
The Zephyr CMake Package can be exported to CMake’s user package registry if it has not already been
done as part of Getting Started Guide.
Developers who work with multiple boards may find explicit board names cumbersome and want to use
aliases for common targets. This is supported by a CMake file with content like this:
and specifying its location in ZEPHYR_BOARD_ALIASES. This enables use of the alias pca10028 in
contexts like cmake -DBOARD=pca10028 and west build -b pca10028.
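For illustration, such an alias file might contain a line like the following (a sketch; the mapped board name is an assumption based on the upstream nRF51 DK board):

```cmake
# Hypothetical board_aliases.cmake; point ZEPHYR_BOARD_ALIASES at this file
set(pca10028_BOARD_ALIAS nrf51dk_nrf51422)
```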
You can build, flash, and run Zephyr applications on real hardware using a supported host system. De-
pending on your operating system, you can also run it in emulation with QEMU, or as a native POSIX
application. Additional information about building applications can be found in the Building an Applica-
tion section.
Build Blinky
Zephyr applications are built to run on specific hardware, called a “board”2 . We’ll use the Phytec
reel_board here, but you can change the reel_board build target to another value if you have a dif-
ferent board. See boards or run west boards from anywhere inside the zephyrproject directory for a
list of supported boards.
1. Go to the zephyr repository:
cd zephyrproject/zephyr
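From there, the build step might look like this (a sketch assuming the blinky sample lives at samples/basic/blinky, as in upstream Zephyr):

```shell
# Build blinky for the reel_board; -p auto cleans the build directory if needed
west build -p auto -b reel_board samples/basic/blinky
```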
The main build products will be in build/zephyr; build/zephyr/zephyr.elf is the blinky application
binary in ELF format. Other binary formats, disassembly, and map files may be present depending on
your board.
The other sample applications in the samples folder are documented in samples-and-demos.
Note: If you want to re-use an existing build directory for another board or application, you need to
add the parameter -p auto to west build to clean out settings and artifacts from the previous build.
Most hardware boards supported by Zephyr can be flashed by running west flash. This may require
board-specific tool installation and configuration to work properly.
See Run an Application and your specific board’s documentation in boards for additional details.
Flashing a board requires permission to directly access the board hardware, usually managed by installa-
tion of the flashing tools. On Linux systems, if the west flash command fails, you likely need to define
udev rules to grant the needed access permission.
Udev is a device manager for the Linux kernel; the udev daemon handles all user space events raised
when a hardware device is added to (or removed from) the system. We can add a rules file to grant
non-root users access permission to certain USB-connected devices.
The OpenOCD (On-Chip Debugger) project conveniently provides a rules file that defines board-specific
rules for most Zephyr-supported Arm-based boards. We recommend installing this rules file by
downloading it from their SourceForge repo; alternatively, if you've installed the Zephyr SDK, a copy
of this rules file is in the SDK folder:
• Either download the OpenOCD rules file and copy it to the right location:
sudo cp ${ZEPHYR_SDK_INSTALL_DIR}/sysroots/x86_64-pokysdk-linux/usr/share/openocd/contrib/60-openocd.rules /etc/udev/rules.d
2 This has become something of a misnomer over time. While the target can be, and often is, a microprocessor running on its
own dedicated hardware board, Zephyr also supports using QEMU to run targets built for other architectures in emulation, targets
which produce native host system binaries that implement Zephyr’s driver interfaces with POSIX APIs, and even running different
Zephyr-based binaries on CPU cores of differing architectures on the same physical chip. Each of these hardware configurations is
called a “board,” even though that doesn’t always make perfect sense in context.
Then, in either case, ask the udev daemon to reload these rules:
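A typical reload invocation (requires root privileges) is:

```shell
sudo udevadm control --reload
```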
Unplug and plug in the USB connection to your board, and you should have permission to access the
board hardware for flashing. Check your board-specific documentation (boards) for further information
if needed.
On Linux and macOS, you can run Zephyr applications via emulation on your host system using QEMU
when targeting either the x86 or ARM Cortex-M3 architectures. (QEMU is included with the Zephyr SDK
installation.)
For example, you can build and run the hello_world sample using the x86 emulation board configuration
(qemu_x86), with:
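A sketch of the build-and-run commands, assuming the hello_world sample lives at samples/hello_world as in upstream Zephyr:

```shell
# Build hello_world for the qemu_x86 emulation board, then run it in QEMU
west build -b qemu_x86 samples/hello_world
west build -t run
```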
You can compile some samples to run as host processes on a POSIX OS. This is currently only tested
on Linux hosts. See native_posix for more information. On 64-bit host operating systems, you need to
install a 32-bit C library; see native_posix_deps for details.
First, build Hello World for native_posix.
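A sketch of the corresponding commands (sample path as in upstream Zephyr; the executable name follows the native_posix convention):

```shell
# Build hello_world as a native POSIX executable, then run it as a host process
west build -b native_posix samples/hello_world
./build/zephyr/zephyr.exe
```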
Various pages in this documentation refer to setting Zephyr-specific environment variables. This page
describes how.
To set the environment variable MY_VARIABLE to foo for the lifetime of your current terminal window:
Linux/macOS
export MY_VARIABLE=foo
Windows
set MY_VARIABLE=foo
Warning: This is best for experimentation. If you close your terminal window, use another terminal
window or tab, restart your computer, etc., this setting will be lost forever.
Using options 2 or 3 is recommended if you want to keep using the setting.
Linux/macOS
Add the export MY_VARIABLE=foo line to your shell’s startup script in your home directory. For Bash,
this is usually ~/.bashrc on Linux or ~/.bash_profile on macOS. Changes in these startup scripts don’t
affect shell instances already started; try opening a new terminal window to get the new settings.
Windows
You can use the setx program in cmd.exe or the third-party RapidEE program.
To use setx, type this command, then close the terminal window. Any new cmd.exe windows will have
MY_VARIABLE set to foo.
To install RapidEE, a freeware graphical environment variable editor, using Chocolatey in an Adminis-
trator command prompt:
You can then run rapidee from your terminal to launch the program and set environment variables.
Make sure to use the "User" environment variables area – otherwise, you have to run RapidEE as
administrator. Also make sure to save your changes by clicking the Save button at top left before
exiting. Settings you make in RapidEE will be available whenever you open a new terminal window.
Choose this option if you don’t want to make the variable’s setting available to all of your terminals, but
still want to save the value for loading into your environment when you are using Zephyr.
Linux/macOS
Create a file named ~/.zephyrrc if it doesn’t exist, then add this line to it:
export MY_VARIABLE=foo
To get this value back into your current terminal environment, you must run source zephyr-env.sh
from the main zephyr repository. Among other things, this script sources ~/.zephyrrc.
The value will be lost if you close the window, etc.; run source zephyr-env.sh again to get it back.
Windows
Add the line set MY_VARIABLE=foo to the file %userprofile%\zephyrrc.cmd using a text editor such as
Notepad to save the value.
To get this value back into your current terminal environment, you must run zephyr-env.cmd in a
cmd.exe window after changing directory to the main zephyr repository. Among other things, this
script runs %userprofile%\zephyrrc.cmd.
The value will be lost if you close the window, etc.; run zephyr-env.cmd again to get it back.
You can use the zephyr repository scripts zephyr-env.sh (for macOS and Linux) and zephyr-env.cmd
(for Windows) to load Zephyr-specific settings into your current terminal’s environment. To do so, run
this command from the zephyr repository:
Linux/macOS
source zephyr-env.sh
Windows
zephyr-env.cmd
These scripts:
• set ZEPHYR_BASE to the location of the zephyr repository
• add some Zephyr-specific locations (such as zephyr's scripts directory) to your PATH environment
variable
• load any settings from the zephyrrc files described above in Option 3: Using zephyrrc files.
You can thus use them any time you need any of these settings.
Some Important Build System Variables can also be set in the environment. Here is a description of some
of these important environment variables. This is not a comprehensive list.
BOARD
See Important Build System Variables.
CONF_FILE
See Important Build System Variables.
SHIELD
See Shields.
ZEPHYR_BASE
See Important Build System Variables.
EXTRA_ZEPHYR_MODULES
See Important Build System Variables.
ZEPHYR_MODULES
See Important Build System Variables.
ZEPHYR_BOARD_ALIASES
See Board Aliases.
The following additional environment variables are significant when configuring the toolchain used to
build Zephyr applications.
ZEPHYR_SDK_INSTALL_DIR
Path where Zephyr SDK is installed.
ZEPHYR_TOOLCHAIN_VARIANT
The name of the toolchain to use.
{TOOLCHAIN}_TOOLCHAIN_PATH
Path to the toolchain specified by ZEPHYR_TOOLCHAIN_VARIANT . For example, if
ZEPHYR_TOOLCHAIN_VARIANT=llvm, use LLVM_TOOLCHAIN_PATH. (Note the capitalization when
forming the environment variable name.)
You might need to update some of these variables when you update the Zephyr SDK toolchain.
Emulators and boards may also depend on additional programs. The build system will try to locate those
programs automatically, but may rely on additional CMake or environment variables to do so. Please
consult your emulator’s or board’s documentation for more information. The following environment
variables may be useful in such situations:
PATH
PATH is an environment variable used on Unix-like or Microsoft Windows operating systems to
specify a set of directories where executable programs are located.
2.4.1 Overview
The main zephyr repository contains Zephyr’s source code, configuration files, and build system. You also
likely have installed various Modules (External projects) alongside the zephyr repository, which provide
third party source code integration.
The files in the application directory link Zephyr and any modules with the application. This directory
contains all application-specific files, such as application-specific configuration files and source code.
Here are the files in a simple Zephyr application:
<app>
CMakeLists.txt
app.overlay
prj.conf
src
main.c
We distinguish three basic types of Zephyr application based on where <app> is located: Zephyr
repository applications, Zephyr workspace applications, and Zephyr freestanding applications.
We'll discuss these more below. To learn how the build system supports each type, see Zephyr CMake
Package.
An application located within the zephyr source code repository in a Zephyr west workspace is referred
to as a Zephyr repository application. In the following example, the hello_world sample is a Zephyr
repository application:
zephyrproject/
.west/
config
zephyr/
arch/
boards/
cmake/
samples/
hello_world/
...
tests/
...
An application located within a workspace, but outside the zephyr repository itself, is referred to as a
Zephyr workspace application. In the following example, app is a Zephyr workspace application:
zephyrproject/
.west/
config
zephyr/
bootloader/
modules/
tools/
<vendor/private-repositories>/
applications/
app/
An application located outside of a Zephyr workspace is referred to as a Zephyr freestanding
application. In the following example, app is a Zephyr freestanding application:
<home>/
zephyrproject/
.west/
config
zephyr/
bootloader/
modules/
...
app/
CMakeLists.txt
example-application
The easiest way to get started with the example-application repository within an existing Zephyr
workspace is to follow these steps:
cd <home>/zephyrproject
git clone https://github.com/zephyrproject-rtos/example-application my-app
The directory name my-app above is arbitrary: change it as needed. You can now go into this directory
and adapt its contents to suit your needs. Since you are using an existing Zephyr workspace, you can use
west build or any other west commands to build, flash, and debug.
You can also use the example-application repository as a starting point for building your own customized
Zephyr-based software distribution. This lets you do things like:
• remove Zephyr modules you don’t need
• add additional custom repositories of your own
• override repositories provided by Zephyr with your own versions
• share the results with others and collaborate further
The example-application repository contains a west.yml file and is therefore also a west manifest reposi-
tory. Use this to create a new, customized workspace by following these steps:
cd <home>
mkdir my-workspace
cd my-workspace
git clone https://github.com/zephyrproject-rtos/example-application my-manifest-repo
west init -l my-manifest-repo
This will create a new workspace with the T2 topology, with my-manifest-repo as the manifest reposi-
tory. The my-workspace and my-manifest-repo names are arbitrary: change them as needed.
Next, customize the manifest repository. The initial contents of this repository will match the example-
application’s contents when you clone it. You can then edit my-manifest-repo/west.yml to your liking,
changing the set of repositories in it as you wish. See Manifest Imports for many examples of how to add
or remove different repositories from your workspace as needed. Make any other changes you need to
other files.
When you are satisfied, you can run:
west update
From now on, you can collaborate on the shared software by pushing changes to the repositories you are
using and updating my-manifest-repo/west.yml as needed to add and remove repositories, or change
their contents.
You can follow these steps to create a basic application directory from scratch. However, using the
example-application repository or one of Zephyr’s samples-and-demos as a starting point is likely to be
easier.
1. Create an application directory.
For example, in a Unix shell or Windows cmd.exe prompt:
mkdir app
2. It's recommended to place all application source code in a subdirectory named src. This makes it
easier to distinguish between project files and source code. Continuing the example:
cd app
mkdir src
3. Place your application source code in the src sub-directory. For this example, we’ll assume you
created a file named src/main.c.
4. Create a file named CMakeLists.txt in the app directory with the following contents:
cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr)
project(my_zephyr_app)
target_sources(app PRIVATE src/main.c)
Notes:
• The cmake_minimum_required() call is required by CMake. It is also invoked by the Zephyr
package on the next line. CMake will error out if its version is older than either the version in
your CMakeLists.txt or the version number in the Zephyr package.
• find_package(Zephyr) pulls in the Zephyr build system, which creates a CMake target
named app (see Zephyr CMake Package). Adding sources to this target is how you include
them in the build. The Zephyr package will define Zephyr-Kernel as a CMake project and
enable support for the C, CXX, ASM languages.
• project(my_zephyr_app) defines your application’s CMake project. This must be called after
find_package(Zephyr) to avoid interference with Zephyr’s project(Zephyr-Kernel).
• target_sources(app PRIVATE src/main.c) adds your source file to the app target. This
must come after find_package(Zephyr), which defines the target. You can add as many files
as you want with target_sources().
5. Create at least one Kconfig fragment for your application (usually named prj.conf) and set Kconfig
option values needed by your application there. See Kconfig Configuration. If no Kconfig options
need to be set, create an empty file.
6. Configure any devicetree overlays needed by your application, usually in a file named app.
overlay. See Set devicetree overlays.
7. Set up any other files you may need, such as twister configuration files, continuous integration files,
documentation, etc.
You can control the Zephyr build system using many variables. This section describes the most important
ones that every Zephyr developer should know about.
Note: The variables BOARD, CONF_FILE, and DTC_OVERLAY_FILE can be supplied to the build system in
3 ways (in order of precedence):
• As a parameter to the west build or cmake invocation via the -D command-line switch. If you
have multiple overlay files, quote the list: "file1.overlay;file2.overlay"
• As Environment Variables.
• As a set(<VARIABLE> <VALUE>) statement in your CMakeLists.txt
• ZEPHYR_BASE: Zephyr base variable used by the build system. find_package(Zephyr) will auto-
matically set this as a cached CMake variable. But ZEPHYR_BASE can also be set as an environment
variable in order to force CMake to use a specific Zephyr installation.
• BOARD: Selects the board that the application’s build will use for the default configuration. See
boards for built-in boards, and Board Porting Guide for information on adding board support.
• CONF_FILE: Indicates the name of one or more Kconfig configuration fragment files. Multiple file-
names can be separated with either spaces or semicolons. Each file includes Kconfig configuration
values that override the default configuration values.
See The Initial Configuration for more information.
• EXTRA_CONF_FILE: Additional Kconfig configuration fragment files. Multiple filenames can be sep-
arated with either spaces or semicolons. This can be useful in order to leave CONF_FILE at its
default value, but “mix in” some additional configuration options.
• DTC_OVERLAY_FILE: One or more devicetree overlay files to use. Multiple files can be separated
with semicolons. See Set devicetree overlays for examples and Introduction to devicetree for infor-
mation about devicetree and Zephyr.
• SHIELD: see Shields
• ZEPHYR_MODULES: A CMake list containing absolute paths of additional directories with source code,
Kconfig, etc. that should be used in the application build. See Modules (External projects) for details.
If you set this variable, it must be a complete list of all modules to use, as the build system will not
automatically pick up any modules from west.
• EXTRA_ZEPHYR_MODULES: Like ZEPHYR_MODULES, except these will be added to the list of modules
found via west, instead of replacing it.
Note: You can use a Zephyr Build Configuration CMake package to share common settings for these
variables.
Every application must have a CMakeLists.txt file. This file is the entry point, or top level, of the build
system. The final zephyr.elf image contains both the application and the kernel libraries.
This section describes some of what you can do in your CMakeLists.txt. Make sure to follow these
steps in order.
1. If you only want to build for one board, add the name of the board configuration for your applica-
tion on a new line. For example:
set(BOARD qemu_x86)
2. If your application uses a configuration file or files other than the usual prj.conf, add lines
setting the CONF_FILE variable to these files appropriately:
set(CONF_FILE "fragment_file1.conf")
list(APPEND CONF_FILE "fragment_file2.conf")
3. If your application uses devicetree overlays, you may need to set DTC_OVERLAY_FILE. See Set
devicetree overlays.
4. If your application has its own kernel configuration options, create a Kconfig file in the same
directory as your application’s CMakeLists.txt.
See the Kconfig section of the manual for detailed Kconfig documentation.
An (unlikely) advanced use case would be if your application has its own unique configuration
options that are set differently depending on the build configuration.
If you just want to set application specific values for existing Zephyr configuration options, refer
to the CONF_FILE description above.
Structure your Kconfig file like this:
# SPDX-License-Identifier: Apache-2.0

mainmenu "Your Application Name"

source "Kconfig.zephyr"
Note: Environment variables in source statements are expanded directly, so you do not need to
define an option env="ZEPHYR_BASE" Kconfig “bounce” symbol. If you use such a symbol, it must
have the same name as the environment variable.
See Kconfig extensions for more information.
The Kconfig file is automatically detected when placed in the application directory, but it is also
possible for it to be found elsewhere if the CMake variable KCONFIG_ROOT is set with an absolute
path.
5. Specify that the application requires Zephyr on a new line, after any lines added from the steps
above:
find_package(Zephyr)
project(my_zephyr_app)
6. Now add any application source files to the app target library, each on their own line, like so:
target_sources(app PRIVATE src/main.c)
Below is a simple example CMakeLists.txt:
set(BOARD qemu_x86)
find_package(Zephyr)
project(my_zephyr_app)
target_sources(app PRIVATE src/main.c)
The CMake property HEX_FILES_TO_MERGE leverages the application configuration provided by Kconfig
and CMake to let you merge externally built hex files with the hex file generated when building the
Zephyr application. For example:
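A sketch of how such a merge might be declared in CMake (the hex-file variables here are hypothetical placeholders for externally built images):

```cmake
set_property(GLOBAL APPEND PROPERTY HEX_FILES_TO_MERGE
    ${app_bootloader_hex}                      # externally built bootloader hex (placeholder)
    ${PROJECT_BINARY_DIR}/${KERNEL_HEX_NAME}   # hex produced by the Zephyr build
    ${app_provision_hex})                      # externally built provisioning data (placeholder)
```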
2.4.6 CMakeCache.txt
CMake uses a CMakeCache.txt file as persistent key/value string storage used to cache values between
runs, including compile and build options and paths to library dependencies. This cache file is created
when CMake is run in an empty build folder.
For more details about the CMakeCache.txt file, see the official CMake documentation on running CMake.
Zephyr will use configuration files from the application’s configuration directory except for files with an
absolute path provided by the arguments described earlier, for example CONF_FILE, EXTRA_CONF_FILE,
DTC_OVERLAY_FILE, and EXTRA_DTC_OVERLAY_FILE.
The application configuration directory is defined by the APPLICATION_CONFIG_DIR variable.
APPLICATION_CONFIG_DIR will be set by one of the sources below with the highest priority listed first.
1. If APPLICATION_CONFIG_DIR is specified by the user with -DAPPLICATION_CONFIG_DIR=<path> or
in a CMake file before find_package(Zephyr), then this folder is used as the application's
configuration directory.
2. The application’s source directory.
Kconfig Configuration
Application configuration options are usually set in prj.conf in the application directory. For example,
C++ support could be enabled with this assignment:
CONFIG_CPP=y
Experimental features Zephyr is a project under constant development and thus there are features
that are still in early stages of their development cycle. Such features will be marked [EXPERIMENTAL]
in their Kconfig title.
The CONFIG_WARN_EXPERIMENTAL setting can be used to enable warnings at CMake configure time if any
experimental feature is enabled.
CONFIG_WARN_EXPERIMENTAL=y
Devicetree Overlays
Application-specific source code files are normally added to the application’s src directory. If the ap-
plication adds a large number of files the developer can group them into sub-directories under src, to
whatever depth is needed.
Application-specific source code should not use symbol name prefixes that have been reserved by the
kernel for its own use. For more information, see Naming Conventions.
It is possible to build library code outside the application’s src directory but it is important that both
application and library code targets the same Application Binary Interface (ABI). On most architectures
there are compiler flags that control the ABI targeted, making it important that both libraries and ap-
plications have certain compiler flags in common. It may also be useful for glue code to have access to
Zephyr kernel header files.
To make it easier to integrate third-party components, the Zephyr build system defines CMake
functions that give application build scripts access to the Zephyr compiler options. The functions
are documented and defined in cmake/extensions.cmake and follow the naming convention
zephyr_get_<type>_<format>.
The following variables will often need to be exported to the third-party build system.
• CMAKE_C_COMPILER, CMAKE_AR.
• ARCH and BOARD, together with several variables that identify the Zephyr kernel version.
samples/application_development/external_lib is a sample project that demonstrates some of these fea-
tures.
The Zephyr build system compiles and links all components of an application into a single application
image that can be run on simulated hardware or real hardware.
Like any other CMake-based system, the build process takes place in two stages. First, build files (also
known as a buildsystem) are generated using the cmake command-line tool while specifying a generator.
This generator determines the native build tool the buildsystem will use in the second stage. The second
stage runs the native build tool to actually build the source files and generate an image. To learn more
about these concepts refer to the CMake introduction in the official CMake documentation.
Although the default build tool in Zephyr is west, Zephyr’s meta-tool, which invokes cmake and the
underlying build tool (ninja or make) behind the scenes, you can also choose to invoke cmake directly
if you prefer. On Linux and macOS you can choose between the make and ninja generators (i.e. build
tools), whereas on Windows you need to use ninja, since make is not supported on this platform. For
simplicity we will use ninja throughout this guide, and if you choose to use west build to build your
application know that it will default to ninja under the hood.
As an example, let’s build the Hello World sample for the reel_board:
Using west:
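A sketch of the west invocation (sample path as in upstream Zephyr):

```shell
west build -b reel_board samples/hello_world
```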
On Linux and macOS, you can also build with make instead of ninja:
Using west:
• to use make just once, add -- -G"Unix Makefiles" to the west build command line; see the west
build documentation for an example.
• to use make by default from now on, run west config build.generator "Unix Makefiles".
Using CMake directly:
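A sketch of the equivalent direct CMake invocation, generating with Ninja and then building:

```shell
# Generate the build system into build/, then run the native build tool
cmake -Bbuild -GNinja -DBOARD=reel_board samples/hello_world
ninja -Cbuild
```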
If desired, you can build the application using the configuration settings specified in an alternate
.conf file using the CONF_FILE parameter. These settings will override the settings in the applica-
tion’s .config file or its default .conf file. For example:
Using west:
As described in the previous section, you can instead choose to permanently set the board and
configuration settings by either exporting BOARD and CONF_FILE environment variables or by setting
their values in your CMakeLists.txt using set() statements. Additionally, west allows you to set
a default board.
When using the Ninja generator a build directory looks like this:
<app>/build
build.ninja
CMakeCache.txt
CMakeFiles
cmake_install.cmake
rules.ninja
zephyr
Note: The previous version of .config is saved to .config.old whenever the configuration is
updated. This is for convenience, as comparing the old and new versions can be handy.
• Various object files (.o files and .a files) containing compiled kernel and application code.
• zephyr.elf, which contains the final combined application and kernel binary. Other binary output
formats, such as .hex and .bin, are also supported.
Rebuilding an Application
Application development is usually fastest when changes are continually tested. Frequently rebuilding
your application makes debugging less painful as the application becomes more complex. It’s usually a
good idea to rebuild and test after any major changes to the application’s source files, CMakeLists.txt
files, or configuration settings.
Important: The Zephyr build system rebuilds only the parts of the application image potentially affected
by the changes. Consequently, rebuilding an application is often significantly faster than building it the
first time.
Sometimes the build system doesn’t rebuild the application correctly because it fails to recompile one or
more necessary files. You can force the build system to rebuild the entire application from scratch with
the following procedure:
1. Open a terminal console on your host computer, and navigate to the build directory <app>/build.
2. Enter one of the following commands, depending on whether you want to use west or cmake
directly, to delete the application's generated files, except for the .config file that contains the
application's current configuration information:
west build -t clean
or
ninja clean
Alternatively, enter one of the following commands to delete all generated files, including the .config files that contain the application's current configuration information for those board types:
west build -t pristine
or
ninja pristine
If you use west, you can take advantage of its capability to automatically make the build folder
pristine whenever it is required.
3. Rebuild the application normally following the steps specified in Building an Application above.
The Zephyr build system has support for specifying multiple hardware revisions of a single board with
small variations. Using revisions allows the board support files to make minor adjustments to a board
configuration without duplicating all the files described in Create your board directory for each revision.
To build for a particular revision, use <board>@<revision> instead of plain <board>. For example:
Using west:
west build -b <board>@<revision>
Check your board’s documentation for details on whether it has multiple revisions, and what revisions
are supported.
When targeting a board revision, the active revision will be printed at CMake configure time, like this:
Running on a Board
Most boards supported by Zephyr let you flash a compiled binary using the flash target to copy the
binary to the board and run it. Follow these instructions to flash and run an application on real hardware:
1. Build your application, as described in Building an Application.
2. Make sure your board is attached to your host computer. Usually, you’ll do this via USB.
3. Run one of these console commands from the build directory, <app>/build, to flash the compiled
Zephyr image and run it on your board:
west flash
or
ninja flash
The Zephyr build system integrates with the board support files to use hardware-specific tools to flash
the Zephyr binary to your hardware, then run it.
Each time you run the flash command, your application is rebuilt and flashed again.
In cases where board support is incomplete, flashing via the Zephyr build system may not be supported. If
you receive an error message about flash support being unavailable, consult your board’s documentation
for additional information on how to flash your board.
Note: When developing on Linux, it’s common to need to install board-specific udev rules to enable
USB device access to your board as a non-root user. If flashing fails, consult your board’s documentation
to see if this is necessary.
Running in an Emulator
The kernel has built-in emulator support for QEMU (on Linux/macOS only, this is not yet supported
on Windows). It allows you to run and test an application virtually, before (or in lieu of) loading and
running it on actual target hardware. Follow these instructions to run an application via QEMU:
1. Build your application for one of the QEMU boards, as described in Building an Application.
For example, you could set BOARD to:
• qemu_x86 to emulate running on an x86-based board
• qemu_cortex_m3 to emulate running on an ARM Cortex M3-based board
2. Run one of these console commands from the build directory, <app>/build, to run the Zephyr binary in QEMU:
west build -t run
or
ninja run
Each time you execute the run command, your application is rebuilt and run again.
Note: If the (Linux only) Zephyr SDK is installed, the run target will use the SDK’s QEMU binary by
default. To use another version of QEMU, set the environment variable QEMU_BIN_PATH to the path of the
QEMU binary you want to use instead.
Note: You can choose a specific emulator by appending _<emulator> to your target name, for example
west build -t run_qemu or ninja run_qemu for QEMU.
This section is a quick hands-on reference to start debugging your application with QEMU. Most content in this section is already covered in the QEMU and GNU Debugger reference manuals.
In this quick reference, you’ll find shortcuts, specific environmental variables, and parameters that can
help you to quickly set up your debugging environment.
The simplest way to debug an application running in QEMU is using the GNU Debugger and setting a
local GDB server in your development system through QEMU.
You will need an ELF (Executable and Linkable Format) binary image for debugging purposes. The build
system generates the image in the build directory. By default, the kernel binary name is zephyr.elf.
The name can be changed using CONFIG_KERNEL_BIN_NAME.
GDB server
We will use the standard TCP port 1234 to open a GDB (GNU Debugger) server instance. This port number can be changed to one that best suits the development environment. There are multiple ways to do this. Each way starts a QEMU instance with the processor halted at startup and with a GDB server instance listening for a connection.
Running QEMU directly You can run QEMU to listen for a “gdb connection” before it starts executing
any code to debug it.
qemu -s -S <image>
will set up QEMU to listen on TCP port 1234 and wait for a GDB connection.
The options used above have the following meaning:
• -S Do not start CPU at startup; rather, you must type ‘c’ in the monitor.
• -s Shorthand for -gdb tcp::1234: open a GDB server on TCP port 1234.
Running QEMU via ninja Run the following inside the build directory of an application:
ninja debugserver
QEMU will write the console output to the path specified in ${QEMU_PIPE} via CMake, typically
qemu-fifo within the build directory. You may monitor this file during the run with tail -f qemu-fifo.
Running QEMU via west Run the following from your project root:
west build -t run
QEMU will write the console output to the terminal from which you invoked west.
GDB client
$ path/to/gdb path/to/zephyr.elf
(gdb) target remote localhost:1234
(gdb) dir ZEPHYR_BASE
You can use a local GDB configuration .gdbinit to initialize your GDB instance on every run. Your home directory is a typical location for .gdbinit, but you can configure GDB to load from other locations, including the directory from which you invoked gdb. This example file performs the same configuration as above:
target remote localhost:1234
dir ZEPHYR_BASE
Alternate interfaces GDB provides a curses-based interface that runs in the terminal. Pass the --tui
option when invoking gdb or give the tui enable command within gdb.
Note: The GDB version on your development system might not support the --tui option. Please make
sure you use the GDB binary from the SDK which corresponds to the toolchain that has been used to
build the binary.
Finally, the command below connects to the GDB server using DDD (Data Display Debugger), a graphical frontend for GDB. It loads the symbol table from the ELF binary file, in this instance, zephyr.elf:
ddd --gdb --debugger "path/to/gdb path/to/zephyr.elf"
This command also executes gdb. The gdb command name might change depending on the toolchain you are using and your cross-development tools.
ddd may not be installed in your development system by default. Follow your system instructions to
install it. For example, use sudo apt-get install ddd on an Ubuntu system.
Debugging
As configured above, when you connect the GDB client, the application will be stopped at system startup.
You may set breakpoints, step through code, etc. as when running the application directly within gdb.
Note: gdb will not print the system console output as the application runs, unlike when you run a native
application in GDB directly. If you just continue after connecting the client, the application will run, but
nothing will appear to happen. Check the console output as described above.
In cases where the board or platform you are developing for is not yet supported by Zephyr, you can
add board, Devicetree and SOC definitions to your application without having to add them to the Zephyr
tree.
The structure needed to support out-of-tree board and SOC development is similar to how boards and
SOCs are maintained in the Zephyr tree. By using this structure, it will be much easier to upstream your
platform-related work into the Zephyr tree after your initial development is done.
Add the custom board to your application or a dedicated repository using the following structure:
boards/
soc/
CMakeLists.txt
prj.conf
README.rst
src/
where the boards directory hosts the board you are building for:
.
boards
x86
my_custom_board
doc
img
support
src
and the soc directory hosts any SOC code. You can also have boards that are supported by a SOC that is
available in the Zephyr tree.
Boards
Use the proper architecture folder name (e.g., x86, arm, etc.) under boards for my_custom_board. (See
boards for a list of board architectures.)
Documentation (under doc/) and support files (under support/) are optional, but will be needed when
submitting to Zephyr.
The contents of my_custom_board should follow the same guidelines for any Zephyr board, and provide
the following files:
my_custom_board_defconfig
my_custom_board.dts
my_custom_board.yaml
board.cmake
board.h
CMakeLists.txt
doc/
Kconfig.board
Once the board structure is in place, you can build your application targeting this board by specifying the location of your custom board information with the -DBOARD_ROOT parameter to the CMake build system:
Using west:
west build -b <board name> -- -DBOARD_ROOT=<path to boards>
This will use your custom board configuration and will generate the Zephyr binary into your application
directory.
You can also define the BOARD_ROOT variable in the application CMakeLists.txt file. Make sure to do so
before pulling in the Zephyr boilerplate with find_package(Zephyr ...).
Note: When specifying BOARD_ROOT in a CMakeLists.txt, then an absolute path must be provided,
for example list(APPEND BOARD_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/<extra-board-root>). When
using -DBOARD_ROOT=<board-root>, both absolute and relative paths can be used. Relative paths are treated as relative to the application directory.
SOC Definitions
As with board support, the directory structure mirrors how SOCs are maintained in the Zephyr tree, for example:
soc
arm
st_stm32
common
stm32l0
The file soc/Kconfig will create the top-level SoC/CPU/Configuration Selection menu in Kconfig.
Out of tree SoC definitions can be added to this menu using the SOC_ROOT CMake variable. This variable
contains a semicolon-separated list of directories which contain SoC support files.
Following the structure above, the following files can be added to load more SoCs into the menu.
soc
arm
st_stm32
Kconfig
Kconfig.soc
Kconfig.defconfig
The Kconfig files above may describe the SoC or load additional SoC Kconfig files.
An example of loading stm32l0 specific Kconfig files in this structure:
soc
arm
st_stm32
Kconfig.soc
stm32l0
Kconfig.series
rsource "*/Kconfig.series"
Once the SOC structure is in place, you can build your application targeting this platform by specifying
the location of your custom platform information with the -DSOC_ROOT parameter to the CMake build
system:
Using west:
west build -b <board name> -- -DSOC_ROOT=<path to soc definitions>
Using CMake directly, pass -DSOC_ROOT=<path to soc definitions> at configure time, then build with:
ninja -Cbuild
This will use your custom platform configurations and will generate the Zephyr binary into your appli-
cation directory.
See Build settings for information on setting SOC_ROOT in a module’s zephyr/module.yml file.
Or you can define the SOC_ROOT variable in the application CMakeLists.txt file. Make sure to do so
before pulling in the Zephyr boilerplate with find_package(Zephyr ...).
Note: When specifying SOC_ROOT in a CMakeLists.txt, then an absolute path must be provided, for example list(APPEND SOC_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/<extra-soc-root>). When using -DSOC_ROOT=<soc-root>, both absolute and relative paths can be used. Relative paths are treated as relative to the application directory.
Devicetree Definitions
Devicetree directory trees are found in APPLICATION_SOURCE_DIR, BOARD_DIR, and ZEPHYR_BASE, but
additional trees, or DTS_ROOTs, can be added by creating this directory tree:
include/
dts/common/
dts/arm/
dts/
dts/bindings/
Where ‘arm’ is changed to the appropriate architecture. Each directory is optional. The binding directory
contains bindings and the other directories contain files that can be included from DT sources.
Once the directory structure is in place, you can use it by specifying its location through the DTS_ROOT
CMake Cache variable:
Using west:
west build -b <board name> -- -DDTS_ROOT=<path to dts root>
You can also define the variable in the application CMakeLists.txt file. Make sure to do so before
pulling in the Zephyr boilerplate with find_package(Zephyr ...).
Note: When specifying DTS_ROOT in a CMakeLists.txt, then an absolute path must be provided, for example list(APPEND DTS_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/<extra-dts-root>). When using -DDTS_ROOT=<dts-root>, both absolute and relative paths can be used. Relative paths are treated as relative to the application directory.
Devicetree source files are passed through the C preprocessor, so you can include files located in a DTS_ROOT directory. By convention, devicetree include files have a .dtsi extension.
You can also use the preprocessor to control the content of a devicetree file, by specifying directives
through the DTS_EXTRA_CPPFLAGS CMake Cache variable:
Using west:
west build -b <board name> -- -DDTS_EXTRA_CPPFLAGS=<flags>
Using CMake directly, pass -DDTS_EXTRA_CPPFLAGS=<flags> at configure time, then build with:
ninja -Cbuild
Overview
CMake supports generating a project description file that can be imported into the Eclipse Integrated
Development Environment (IDE) and used for graphical debugging.
The GNU MCU Eclipse plug-ins provide a mechanism to debug ARM projects in Eclipse with pyOCD,
Segger J-Link, and OpenOCD debugging tools.
The following tutorial demonstrates how to debug a Zephyr application in Eclipse with pyOCD in Win-
dows. It assumes you have already installed the GCC ARM Embedded toolchain and pyOCD.
# On Windows
cd %userprofile%
Note: If the build directory is a subdirectory of the source directory, as is usually done in Zephyr,
CMake will warn:
“The build directory is a subdirectory of the source directory.
This is not supported well by Eclipse. It is strongly recommended to use a build directory which is
a sibling of the source directory.”
3. Configure your application with CMake and build it with ninja. Note the different CMake gener-
ator specified by the -G"Eclipse CDT4 - Ninja" argument. This will generate an Eclipse project
description file, .project, in addition to the usual ninja build files.
Using west:
west build -b <board name> -- -G"Eclipse CDT4 - Ninja"
Using CMake directly, pass -G"Eclipse CDT4 - Ninja" at configure time, then build with:
ninja -Cbuild
4. In Eclipse, import your generated project by opening the menu File->Import... and selecting the
option Existing Projects into Workspace. Browse to your application build directory in the
choice, Select root directory:. Check the box for your project in the list of projects found and
click the Finish button.
Note: This is optional. It provides the SoC’s memory-mapped register addresses and bitfields
to the debugger.
RTOS Awareness
Support for Zephyr RTOS awareness is implemented in pyOCD v0.11.0 and later. It is compatible with
GDB PyOCD Debugging in Eclipse, but you must enable CONFIG_DEBUG_THREAD_INFO=y in your
application.
The table lists Zephyr’s APIs and information about them, including their current stability level. More
details about API changes between major releases are available in the zephyr_release_notes.
Developers using Zephyr’s APIs need to know how long they can trust that a given API will not change
in future releases. At the same time, developers maintaining and extending Zephyr’s APIs need to be
able to introduce new APIs that aren’t yet fully proven, and to potentially retire old APIs when they’re no
longer optimal or supported by the underlying platforms.
An up-to-date table of all APIs and their maturity level can be found in the API Overview page.
Experimental
Experimental APIs denote that a feature was introduced recently, and may change or be removed in
future versions. Try it out and provide feedback to the community via the Developer mailing list.
The following requirements apply to all new APIs:
• Documentation of the API (usage) explaining its design and assumptions, how it is to be used,
current implementation limitations, and future potential, if appropriate.
• The API introduction should be accompanied by at least one implementation of said API (in the
case of peripheral APIs, this corresponds to one driver)
• At least one sample using the new API (it may build on only a single board)
Peripheral APIs (Hardware Related) When introducing an API (public header file with documen-
tation) for a new peripheral or driver subsystem, review of the API is enforced and is driven by the
Architecture working group consisting of representatives from different vendors.
The API shall be promoted to unstable when it has at least two implementations on different hardware
platforms.
Unstable
The API is in the process of settling, but has not yet had sufficient real-world testing to be considered
stable. The API is considered generic in nature and can be used on different hardware platforms.
Peripheral APIs (Hardware Related) The API shall be promoted from experimental to unstable
when it has at least two implementations on different hardware platforms.
Hardware Agnostic APIs For hardware agnostic APIs, multiple applications using it are required to
promote an API from experimental to unstable.
Stable
The API has proven satisfactory, but cleanup in the underlying code may cause minor changes.
Backwards-compatibility will be maintained if reasonable.
An API can be declared stable after fulfilling the following requirements:
• Test cases for the new API with 100% coverage
• Complete documentation in code. All public interfaces shall be documented and available in online
documentation.
• The API has been in-use and was available in at least 2 development releases
• Stable APIs can get backward compatible updates, bug fixes and security fixes at any time.
In order to declare an API stable, the following steps need to be followed:
1. A Pull Request must be opened that changes the corresponding entry in the API Overview table
2. An email must be sent to the devel mailing list announcing the API upgrade request
3. The Pull Request must be submitted for discussion in the next Zephyr Architecture meeting where,
barring any objections, the Pull Request will be merged
Introducing incompatible changes A stable API, as described above, strives to remain backwards compatible through its life cycle. There are, however, cases where fulfilling this objective prevents technical progress, or is simply unfeasible without an unreasonable burden on the maintenance of the API and its implementation(s).
An incompatible change is defined as one that forces users to modify their existing code in order to
maintain the current behavior of their application. The need for recompilation of applications (without
changing the application itself) is not considered an incompatible change.
In order to restrict and control the introduction of a change that breaks the promise of backwards com-
patibility the following steps must be followed whenever such a change is considered necessary in order
to accept it in the project:
1. An RFC issue must be opened on GitHub with the following content:
Instead of a written description of the changes, the RFC issue may link to a Pull Request containing
those changes in code form.
2. The RFC issue must be labeled with the GitHub Stable API Change label
3. The RFC issue must be submitted for discussion in the next Zephyr Architecture meeting
4. An email must be sent to the devel mailing list with a subject identical to the RFC issue title and
that links to the RFC issue
The RFC will then receive feedback through issue comments and will also be discussed in the Zephyr
Architecture meeting, where the stakeholders and the community at large will have a chance to discuss
it in detail.
Finally, and if not done as part of the first step, a Pull Request must be opened on GitHub. It is left to
the person proposing the change to decide whether to introduce both the RFC and the Pull Request at
the same time or to wait until the RFC has gathered consensus enough so that the implementation can
proceed with confidence that it will be accepted. The Pull Request must include the following:
• A title that matches the RFC issue
• A link to the RFC issue
• The actual changes to the API
– Changes to the API header file
– Changes to the API implementation(s)
– Changes to the relevant API documentation
– Changes to Device Tree source and bindings
• The changes required to adapt in-tree users of the API to the change. Depending on the scope of
this task this might require additional help from the corresponding maintainers
• An entry in the “API Changes” section of the release notes for the next upcoming release
• The labels API, Stable API Change and Release Notes, as well as any others that are applicable
Once the steps above have been completed, the outcome of the proposal will depend on the approval
of the actual Pull Request by the maintainer of the corresponding subsystem. As with any other Pull
Request, the author can request for it to be discussed and ultimately even voted on in the Zephyr TSC
meeting.
If the Pull Request is merged then an email must be sent to the devel and user mailing lists informing
them of the change.
Note: Incompatible changes will be announced in the “API Changes” section of the release notes.
Deprecated
Note: Unstable APIs can be removed without deprecation at any time. Deprecation and removal of APIs
will be announced in the “API Changes” section of the release notes.
• Deprecation Time (stable APIs): 2 Releases The API needs to be marked as deprecated in at least
two full releases. For example, if an API was first deprecated in release 1.14, it will be ready
to be removed in 1.16 at the earliest. There may be special circumstances, determined by the
Architecture working group, where an API is deprecated sooner.
• What is required when deprecating:
– Mark as deprecated. This can be done by using the compiler itself (__deprecated for function
declarations and __DEPRECATED_MACRO for macro definitions), or by introducing a Kconfig
option (typically one that contains the DEPRECATED word in it) that, when enabled, reverts the
APIs back to their previous form
– Document the deprecation
– Include the deprecation in the “API Changes” of the release notes for the next upcoming release
– Code using the deprecated API needs to be modified to remove usage of said API
– The change needs to be atomic and bisectable
– Create a GitHub issue to track the removal of the deprecated API, and add it to the roadmap
targeting the appropriate release (in the example above, 1.16).
During the deprecation waiting period, the API will be in the deprecated state. The Zephyr maintainers
will track usage of deprecated APIs on docs.zephyrproject.org and support developers migrating their
code. Zephyr will continue to provide warnings:
• API documentation will inform users that the API is deprecated.
• Attempts to use a deprecated API at build time will log a warning to the console.
Retired
Zephyr development and evolution is a group effort, and to simplify maintenance and enhancements
there are some general policies that should be followed when developing a new capability or interface.
Using Callbacks
Many APIs involve passing a callback as a parameter or as a member of a configuration structure. The
following policies should be followed when specifying the signature of a callback:
• The first parameter should be a pointer to the object most closely associated with the callback. In
the case of device drivers this would be const struct device *dev. For library functions it may
be a pointer to another object that was referenced when the callback was provided.
• The next parameter(s) should be additional information specific to the callback invocation, such as
a channel identifier, new status value, and/or a message pointer followed by the message length.
• The final parameter should be a void *user_data pointer carrying context that allows a shared
callback function to locate additional material necessary to process the callback.
An exception to providing user_data as the last parameter may be allowed when the callback itself was provided through a structure that will be embedded in another structure. An example of such a case is gpio_callback, normally defined within a data structure specific to the code that also defines the callback function. In those cases further context can be accessed by the callback indirectly via CONTAINER_OF.
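The parameter ordering policy above can be sketched as follows. This is a minimal illustration, not a real Zephyr API: my_sensor and my_sensor_trigger_cb_t are hypothetical names invented for the example.

```c
#include <stddef.h>

struct my_sensor;                       /* object owning the callback */

/* 1st param: the associated object; middle params: invocation-specific
 * information (channel, new value); last param: opaque user context. */
typedef void (*my_sensor_trigger_cb_t)(struct my_sensor *dev,
                                       int channel,
                                       int value,
                                       void *user_data);

struct my_sensor {
    my_sensor_trigger_cb_t cb;
    void *user_data;
};

/* Registration stores both the callback and the user context. */
static void my_sensor_set_trigger(struct my_sensor *dev,
                                  my_sensor_trigger_cb_t cb,
                                  void *user_data)
{
    dev->cb = cb;
    dev->user_data = user_data;
}

/* The driver invokes the callback with the agreed parameter order. */
static void my_sensor_fire(struct my_sensor *dev, int channel, int value)
{
    if (dev->cb != NULL) {
        dev->cb(dev, channel, value, dev->user_data);
    }
}

/* Tiny self-check: a shared callback uses user_data to find its state. */
static void record_cb(struct my_sensor *dev, int channel, int value,
                      void *user_data)
{
    (void)dev;
    (void)channel;
    *(int *)user_data = value;
}

static int my_sensor_demo(void)
{
    struct my_sensor s = {0};
    int seen = 0;

    my_sensor_set_trigger(&s, record_cb, &seen);
    my_sensor_fire(&s, 0, 42);
    return seen;
}
```

Because user_data is last, one callback function can service many objects, locating its per-object state through the pointer it registered.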
Examples
• The requirements of k_timer_expiry_t invoked when a system timer alarm fires are satisfied by:
typedef void (*k_timer_expiry_t)(struct k_timer *timer);
The assumption here, as with gpio_callback, is that the timer is embedded in a structure reachable from CONTAINER_OF that can provide additional context to the callback.
• The requirements of counter_alarm_callback_t invoked when a counter device alarm fires are satisfied by:
typedef void (*counter_alarm_callback_t)(const struct device *dev, uint8_t chan_id, uint32_t ticks, void *user_data);
This provides more complete information, including which counter channel timed out and the counter value at which the timeout occurred, as well as user context, which may or may not be the counter_alarm_cfg used to register the callback, depending on user needs.
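The CONTAINER_OF pattern assumed by k_timer_expiry_t can be sketched in a self-contained form. In Zephyr, CONTAINER_OF comes from the sys/util.h header; it is re-defined here via offsetof, and k_timer_stub is a stand-in for struct k_timer, so the example compiles standalone.

```c
#include <stddef.h>

/* Local re-definition for the sketch; Zephyr provides the real macro. */
#define CONTAINER_OF(ptr, type, field) \
    ((type *)(((char *)(ptr)) - offsetof(type, field)))

/* Stand-in for struct k_timer: the callback receives only this pointer. */
struct k_timer_stub {
    void (*expiry_fn)(struct k_timer_stub *timer);
};

/* User structure embedding the timer plus extra context. */
struct my_service {
    int expiry_count;
    struct k_timer_stub timer;  /* embedded member */
};

/* The callback recovers the enclosing structure from the member pointer. */
static void my_expiry(struct k_timer_stub *timer)
{
    struct my_service *svc = CONTAINER_OF(timer, struct my_service, timer);

    svc->expiry_count++;
}

static int container_of_demo(void)
{
    struct my_service svc = { .expiry_count = 0 };

    svc.timer.expiry_fn = my_expiry;
    /* Simulate the kernel firing the timer with only the member pointer. */
    svc.timer.expiry_fn(&svc.timer);
    return svc.expiry_count;
}
```

The callback receives no user_data parameter, yet still reaches per-instance context, because the timer is known to be embedded in the caller's own structure.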
APIs and libraries may provide features that are expensive in RAM or code size but are optional in the sense that some applications can be implemented without them. Examples of such features include capturing a timestamp or providing an alternative interface. The developer, in coordination with the community, must determine whether enabling the feature is to be controllable through a Kconfig option.
In the case where a feature is determined to be optional the following practices should be followed.
• Any data that is accessed only when the feature is enabled should be conditionally included via
#ifdef CONFIG_MYFEATURE in the structure or union declaration. This reduces memory use for
applications that don’t need the capability.
• Function declarations that are available only when the option is enabled should be provided un-
conditionally. Add a note in the description that the function is available only when the specified
feature is enabled, referencing the required Kconfig symbol by name. In the cases where the func-
tion is used but not enabled the definition of the function shall be excluded from compilation, so
references to the unsupported API will result in a link-time error.
• Where code specific to the feature is isolated in a source file that has no other content that file
should be conditionally included in CMakeLists.txt:
zephyr_sources_ifdef(CONFIG_MYFEATURE foo_funcs.c)
• Where code specific to the feature is part of a source file that has other content the feature-specific
code should be conditionally processed using #ifdef CONFIG_MYFEATURE.
The Kconfig flag used to enable the feature should be added to the PREDEFINED variable in doc/zephyr.
doxyfile.in to ensure the conditional API and functions appear in generated documentation.
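The practices above can be sketched in one fragment. CONFIG_MYFEATURE stands in for a Kconfig symbol (normally injected by the build system rather than defined in source); the structure member exists only when the feature is enabled, and the function is declared unconditionally but defined only under the option, so disabled builds fail at link time rather than misbehave.

```c
/* Sketch only: a real build would get this from Kconfig, not a #define. */
#define CONFIG_MYFEATURE 1

struct my_channel {
    int value;
#ifdef CONFIG_MYFEATURE
    /* Present only when the feature is enabled, saving RAM otherwise. */
    long timestamp;
#endif
};

/* Declared unconditionally; documented as available only when
 * CONFIG_MYFEATURE is enabled. */
long my_channel_get_timestamp(const struct my_channel *ch);

#ifdef CONFIG_MYFEATURE
/* Definition excluded when the option is off, so stray callers get a
 * link-time error instead of silently broken behavior. */
long my_channel_get_timestamp(const struct my_channel *ch)
{
    return ch->timestamp;
}
#endif
```

The corresponding CMake side would gate any feature-only source file with zephyr_sources_ifdef(CONFIG_MYFEATURE ...), as shown earlier.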
Return Codes
Implementations of an API, for example an API for accessing a peripheral, might implement only the subset of functions required for minimal operation. A distinction is needed between APIs that are not supported and those that are not implemented or optional:
• APIs that are supported but not implemented shall return -ENOSYS.
• Optional APIs that are not supported by the hardware should be implemented and the return code
in this case shall be -ENOTSUP.
• When an API is implemented, but the particular combination of options requested in the call cannot
be satisfied by the implementation the call shall return -ENOTSUP. (For example, a request for a
level-triggered GPIO interrupt on hardware that supports only edge-triggered interrupts)
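The return-code convention can be sketched with a hypothetical partial driver. The fake_gpio names are illustrative, not a real Zephyr driver; only the -ENOSYS/-ENOTSUP distinction is the point.

```c
#include <errno.h>

#define TRIGGER_EDGE  0
#define TRIGGER_LEVEL 1

/* Part of the API surface, but not implemented by this driver at all. */
static int fake_gpio_port_toggle(int pin)
{
    (void)pin;
    return -ENOSYS;
}

/* Implemented, but this hardware only supports edge-triggered interrupts:
 * an unsatisfiable option combination yields -ENOTSUP. */
static int fake_gpio_configure_interrupt(int pin, int trigger)
{
    (void)pin;
    if (trigger == TRIGGER_LEVEL) {
        return -ENOTSUP;
    }
    return 0;
}
```

Callers can therefore distinguish "this entry point does not exist in this driver" (-ENOSYS) from "this driver exists but cannot honor these options" (-ENOTSUP).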
The following terms may be used as shorthand API tags to indicate the allowed calling context (thread,
ISR, pre-kernel), the effect of a call on the current thread state, and other behavioral characteristics.
reschedule
if executing the function reaches a reschedule point
sleep
if executing the function can cause the invoking thread to sleep
no-wait
if a parameter to the function can prevent the invoking thread from trying to sleep
isr-ok
if the function can be safely called and will have its specified effect whether invoked from interrupt
or thread context
pre-kernel-ok
if the function can be safely called before the kernel has been fully initialized and will have its
specified effect when invoked from that context.
async
if the function may return before the operation it initiates is complete (i.e. function return and
operation completion are asynchronous)
supervisor
if the calling thread must have supervisor privileges to execute the function
Details on the behavioral impact of each attribute are in the following sections.
reschedule
The reschedule attribute is used on a function that can reach a reschedule point within its execution.
Details The significance of this attribute is that when a rescheduling function is invoked by a thread
it is possible for that thread to be suspended as a consequence of a higher-priority thread being made
ready. Whether the suspension actually occurs depends on the operation associated with the reschedule
point and the relative priorities of the invoking thread and the head of the ready queue.
Note that in the case of timeslicing, or reschedule points executed from interrupts, any thread may be
suspended in any function.
Functions that are not reschedule may be invoked from either thread or interrupt context.
Functions that are reschedule may be invoked from thread context.
Functions that are reschedule but not sleep may be invoked from interrupt context.
sleep
The sleep attribute is used on a function that can cause the invoking thread to sleep.
Explanation This attribute is of relevance specifically when considering applications that use only non-
preemptible threads, because the kernel will not replace a running cooperative-only thread at a resched-
ule point unless that thread has explicitly invoked an operation that caused it to sleep.
This attribute does not imply the function will sleep unconditionally, but that the operation may require
an invoking thread that would have to suspend, wait, or invoke k_yield() before it can complete its
operation. This behavior may be mediated by no-wait.
Functions that are sleep are implicitly reschedule.
Functions that are sleep may be invoked from thread context.
Functions that are sleep may be invoked from interrupt and pre-kernel contexts if and only if invoked in
no-wait mode.
no-wait
The no-wait attribute is used on a function that is also sleep to indicate that a parameter to the function
can force an execution path that will not cause the invoking thread to sleep.
Explanation The paradigmatic case of a no-wait function is a function that takes a timeout, to which
K_NO_WAIT can be passed. The semantics of this special timeout value are to execute the function’s
operation as long as it can be completed immediately, and to return an error code rather than sleep if it
cannot.
It is the use of the no-wait feature that allows functions like k_sem_take() to be invoked from ISRs, since it is not permitted to sleep in interrupt context.
Note that a no-wait path does not guarantee that the function is synchronous when that path is taken.
Functions with this attribute may be invoked from interrupt and pre-kernel contexts only when the
parameter selects the no-wait path.
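The no-wait path can be sketched with a toy semaphore modeled loosely on k_sem_take(). The names (toy_sem, NO_WAIT) and the -EBUSY choice for the failing no-wait path are illustrative assumptions of this sketch, not Zephyr's implementation.

```c
#include <errno.h>

#define NO_WAIT       0     /* stand-in for K_NO_WAIT */
#define WAIT_FOREVER (-1)   /* stand-in for K_FOREVER */

struct toy_sem {
    int count;
};

static int toy_sem_take(struct toy_sem *sem, int timeout)
{
    if (sem->count > 0) {
        sem->count--;
        return 0;          /* acquired immediately; no path sleeps here */
    }
    if (timeout == NO_WAIT) {
        return -EBUSY;     /* no-wait path: fail fast instead of sleeping */
    }
    /* A real implementation would pend the calling thread here; that
     * branch is what makes the function "sleep" and forbidden in ISRs. */
    return -EAGAIN;        /* placeholder for the sleeping path */
}
```

Passing NO_WAIT is what makes a call like this legal from interrupt context: every reachable branch returns without suspending the caller.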
isr-ok
The isr-ok attribute is used on a function to indicate that it works whether it is being invoked from
interrupt or thread context.
Explanation Any function that is not sleep is inherently isr-ok. Functions that are sleep are isr-ok
if the implementation ensures that the documented behavior is implemented even if called from an
interrupt context. This may be achieved by having the implementation detect the calling context and
transfer the operation that would sleep to a thread, or by documenting that when invoked from a non-
thread context the function will return a specific error (generally -EWOULDBLOCK).
Note that a function that is no-wait is safe to call from interrupt context only when the no-wait path is
selected. isr-ok functions need not provide a no-wait path.
pre-kernel-ok
The pre-kernel-ok attribute is used on a function to indicate that it works as documented even when
invoked before the kernel main thread has been started.
Explanation This attribute is similar to isr-ok in function, but is intended for use by any API that is
expected to be called in DEVICE_DEFINE() or SYS_INIT() calls that may be invoked with PRE_KERNEL_1
or PRE_KERNEL_2 initialization levels.
Generally a function that is pre-kernel-ok checks k_is_pre_kernel() when determining whether it can
fulfill its required behavior. In many cases it would also check k_is_in_isr() so it can be isr-ok as well.
async
A function is async (i.e. asynchronous) if it may return before the operation it initiates has completed. An
asynchronous function will generally provide a mechanism by which operation completion is reported,
e.g. a callback or event.
A function that is not asynchronous is synchronous, i.e. the operation will always be complete when the
function returns. As most functions are synchronous this behavior does not have a distinct attribute to
identify it.
Explanation Be aware that async is orthogonal to context-switching. Some APIs may provide comple-
tion information through a callback, but may suspend while waiting for the resource necessary to initiate
the operation; an example is spi_transceive_async().
If a function is both no-wait and async then selecting the no-wait path only guarantees that the function
will not sleep. It does not affect whether the operation will be completed before the function returns.
supervisor
The supervisor attribute is relevant only in user-mode applications, and indicates that the function cannot
be invoked from user mode.
C is a general-purpose low-level programming language that is widely used for writing code for embed-
ded systems.
Zephyr is primarily written in C and natively supports applications written in the C language. All Zephyr
API functions and macros are implemented in C and available as part of the C header files under the
include directory, so writing Zephyr applications in C gives developers access to the most features.
The main() function must have the return type of int as Zephyr applications run in a “hosted” environ-
ment as defined by the C standard. Applications must return zero (0) from main. All non-zero return
values are reserved.
Language Standards
Zephyr does not target a specific version of the C standards; however, the Zephyr codebase makes exten-
sive use of the features newly introduced in the 1999 release of the ISO C standard (ISO/IEC 9899:1999,
hereinafter referred to as C99) such as those listed below, effectively requiring the use of a compiler
toolchain that supports the C99 standard and above:
• inline functions
• standard boolean types (bool in <stdbool.h>)
• fixed-width integer types ([u]intN_t in <stdint.h>)
• designated initializers
• variadic macros
• restrict qualification
Some Zephyr components make use of the features newly introduced in the 2011 release of the ISO
C standard (ISO/IEC 9899:2011, hereinafter referred to as C11) such as the type-generic expressions
using the _Generic keyword. For example, the cbprintf() component, used as the default formatted
output processor for Zephyr, makes use of the C11 type-generic expressions, and this effectively requires
most Zephyr applications to be compiled using a compiler toolchain that supports the C11 standard and
above.
In summary, it is recommended to use a compiler toolchain that supports at least the C11 standard for
developing with Zephyr. It is, however, important to note that some optional Zephyr components and
external modules may make use of the C language features that have been introduced in more recent
versions of the standards, in which case it will be necessary to use a more up-to-date compiler toolchain
that supports such standards.
Standard Library
The C Standard Library is an integral part of any C program, and Zephyr provides the support for a
number of different C libraries for the applications to choose from, depending on the compiler toolchain
being used to build the application.
Common C library code Zephyr provides some C library functions that are designed to be used in
conjunction with multiple C libraries. These either provide functions not available in multiple C libraries
or are designed to replace functionality in the C library with code better suited for use in the Zephyr
environment.
Time function This provides an implementation of the standard C function time(), relying on the
Zephyr function clock_gettime(). This function can be enabled by selecting CONFIG_COMMON_LIBC_TIME.
Dynamic Memory Management The common dynamic memory management implementation can be
enabled by selecting CONFIG_COMMON_LIBC_MALLOC in the application configuration file.
The common C library internally uses the kernel memory heap API to manage the memory heap used by
the standard dynamic memory management interface functions such as malloc() and free().
The internal memory heap is normally located in the .bss section. When userspace is enabled,
however, it is placed in a dedicated memory partition called z_malloc_partition, which can be
accessed from user mode threads. The size of the internal memory heap is specified by
CONFIG_COMMON_LIBC_MALLOC_ARENA_SIZE.
The default heap size for applications using the common C library is zero (no heap). For other C library
users, if there is an MMU present, then the default heap is 16kB. Otherwise, the heap uses all available
memory.
There are also separate controls to select calloc() (CONFIG_COMMON_LIBC_CALLOC) and reallocarray()
(CONFIG_COMMON_LIBC_REALLOCARRAY). Both are enabled by default, as this does not impact memory
usage in applications that do not use them.
The standard dynamic memory management interface functions implemented by the common C library
are thread safe and may be simultaneously called by multiple threads. These functions are implemented
in lib/libc/common/source/stdlib/malloc.c.
Minimal libc The most basic C library, named “minimal libc”, is part of the Zephyr codebase and
provides the minimal subset of the standard C library required to meet the needs of Zephyr and its
subsystems, primarily in the areas of string manipulation and display.
It has a very low footprint and is suitable for projects that do not rely on less frequently used portions of
the ISO C standard library. It can also be used with a number of different toolchains.
The minimal libc implementation can be found in lib/libc/minimal in the main Zephyr tree.
Functions The minimal libc implements the minimal subset of the ISO/IEC 9899:2011 standard C
library functions required to meet the needs of the Zephyr kernel, as defined by the Coding Guidelines
Rule A.4.
Formatted Output The minimal libc does not implement its own formatted output processor; instead,
it maps the C standard formatted output functions such as printf and sprintf to the cbprintf()
function, which is Zephyr’s own C99-compatible formatted output implementation.
For more details, refer to the Formatted Output OS service documentation.
Dynamic Memory Management The minimal libc uses the malloc() API family implementation provided
by the common C library, which is itself built upon the kernel memory heap API.
Error numbers Error numbers are used throughout Zephyr APIs to signal error conditions as return
values from functions. They are typically returned as the negative value of the integer literals defined in
this section, and are defined in the errno.h header file.
A subset of the error numbers defined in the POSIX errno.h specification and other de-facto standard
sources have been added to the minimal libc.
A conscious effort is made in Zephyr to keep the values of the minimal libc error numbers consistent
with the different implementations of the C standard libraries supported by Zephyr. The minimal libc
errno.h is checked against that of the Newlib to ensure that the error numbers are kept aligned.
Below is a list of the error number definitions. For the actual numeric values please refer to errno.h.
group system_errno
System error numbers Error codes returned by functions. Includes a list of those defined by IEEE
Std 1003.1-2017.
Defines
errno
EPERM
Not owner
ENOENT
No such file or directory
ESRCH
No such context
EINTR
Interrupted system call
EIO
I/O error
ENXIO
No such device or address
E2BIG
Arg list too long
ENOEXEC
Exec format error
EBADF
Bad file number
ECHILD
No children
EAGAIN
No more contexts
ENOMEM
Not enough core
EACCES
Permission denied
EFAULT
Bad address
ENOTBLK
Block device required
EBUSY
Mount device busy
EEXIST
File exists
EXDEV
Cross-device link
ENODEV
No such device
ENOTDIR
Not a directory
EISDIR
Is a directory
EINVAL
Invalid argument
ENFILE
File table overflow
EMFILE
Too many open files
ENOTTY
Not a typewriter
ETXTBSY
Text file busy
EFBIG
File too large
ENOSPC
No space left on device
ESPIPE
Illegal seek
EROFS
Read-only file system
EMLINK
Too many links
EPIPE
Broken pipe
EDOM
Argument too large
ERANGE
Result too large
ENOMSG
Unexpected message type
EDEADLK
Resource deadlock avoided
ENOLCK
No locks available
ENOSTR
STREAMS device required
ENODATA
Missing expected message data
ETIME
STREAMS timeout occurred
ENOSR
Insufficient memory
EPROTO
Generic STREAMS error
EBADMSG
Invalid STREAMS message
ENOSYS
Function not implemented
ENOTEMPTY
Directory not empty
ENAMETOOLONG
File name too long
ELOOP
Too many levels of symbolic links
EOPNOTSUPP
Operation not supported on socket
EPFNOSUPPORT
Protocol family not supported
ECONNRESET
Connection reset by peer
ENOBUFS
No buffer space available
EAFNOSUPPORT
Addr family not supported
EPROTOTYPE
Protocol wrong type for socket
ENOTSOCK
Socket operation on non-socket
ENOPROTOOPT
Protocol not available
ESHUTDOWN
Can’t send after socket shutdown
ECONNREFUSED
Connection refused
EADDRINUSE
Address already in use
ECONNABORTED
Software caused connection abort
ENETUNREACH
Network is unreachable
ENETDOWN
Network is down
ETIMEDOUT
Connection timed out
EHOSTDOWN
Host is down
EHOSTUNREACH
No route to host
EINPROGRESS
Operation now in progress
EALREADY
Operation already in progress
EDESTADDRREQ
Destination address required
EMSGSIZE
Message size
EPROTONOSUPPORT
Protocol not supported
ESOCKTNOSUPPORT
Socket type not supported
EADDRNOTAVAIL
Can’t assign requested address
ENETRESET
Network dropped connection on reset
EISCONN
Socket is already connected
ENOTCONN
Socket is not connected
ETOOMANYREFS
Too many references: can’t splice
ENOTSUP
Unsupported value
EILSEQ
Illegal byte sequence
EOVERFLOW
Value overflow
ECANCELED
Operation canceled
EWOULDBLOCK
Operation would block
Newlib Newlib is a complete C library implementation written for embedded systems. It is a
separate open source project and is not included in source code form with Zephyr. Instead, the Zephyr
SDK includes a precompiled library for each supported architecture (libc.a and libm.a).
Note: Other 3rd-party toolchains, such as GNU Arm Embedded, also bundle the Newlib as a precompiled
library.
Zephyr implements the “API hook” functions that are invoked by the C standard library functions in the
Newlib. These hook functions are implemented in lib/libc/newlib/libc-hooks.c and translate the
library internal system calls to the equivalent Zephyr API calls.
Types of Newlib The Newlib included in the Zephyr SDK comes in two versions: ‘full’ and ‘nano’
variants.
Full Newlib The Newlib full variant (libc.a and libm.a) is the most capable variant of the Newlib
available in the Zephyr SDK, and supports almost all standard C library features. It is optimized for
performance (prefers performance over code size) and its footprint is significantly larger than that of the
nano variant.
This variant can be enabled by selecting CONFIG_NEWLIB_LIBC and de-selecting
CONFIG_NEWLIB_LIBC_NANO in the application configuration file.
Nano Newlib The Newlib nano variant (libc_nano.a and libm_nano.a) is the size-optimized version
of the Newlib, and supports all features that the full variant supports except the new format specifiers
introduced in C99, such as the char, long long type format specifiers (i.e. %hhX and %llX).
This variant can be enabled by selecting CONFIG_NEWLIB_LIBC and CONFIG_NEWLIB_LIBC_NANO in
the application configuration file.
Note that the Newlib nano variant is not available for all architectures. The availability of the nano
variant is indicated by CONFIG_HAS_NEWLIB_LIBC_NANO.
Formatted Output Newlib supports all standard C formatted input and output functions, including
printf, fprintf, sprintf and sscanf.
The Newlib formatted input and output function implementation supports all format specifiers defined
by the C standard with the following exceptions:
• Floating point format specifiers (e.g. %f) require CONFIG_NEWLIB_LIBC_FLOAT_PRINTF and
CONFIG_NEWLIB_LIBC_FLOAT_SCANF to be enabled.
• C99 format specifiers are not supported in the Newlib nano variant (i.e. %hhX for char, %llX for
long long, %jX for intmax_t, %zX for size_t, %tX for ptrdiff_t).
Dynamic Memory Management Newlib implements an internal heap allocator to manage the memory
blocks used by the standard dynamic memory management interface functions (for example, malloc()
and free()).
The internal heap allocator implemented by the Newlib may vary across the different Newlib variants.
For example, the heap allocator in the full Newlib (libc.a and libm.a) of the Zephyr SDK requests larger
memory chunks from the operating system and has a significantly higher minimum memory requirement
than that of the nano Newlib (libc_nano.a and libm_nano.a).
The only interface between the Newlib dynamic memory management functions and the Zephyr-side
libc hooks is the sbrk() function, which is used by the Newlib to manage the size of the memory pool
reserved for its internal heap allocator.
The _sbrk() hook function, implemented in libc-hooks.c, handles the memory pool size change re-
quests from the Newlib and ensures that the Newlib internal heap allocator memory pool size does not
exceed the amount of available memory space by returning an error when the system is out of memory.
When userspace is enabled, the Newlib internal heap allocator memory pool is placed in a dedicated
memory partition called z_malloc_partition, which can be accessed from the user mode threads.
The amount of memory space available for the Newlib heap depends on the system configurations:
• When MMU is enabled (CONFIG_MMU is selected), the amount of memory space reserved for the
Newlib heap is set by the size of the free memory space returned by the k_mem_free_get() function
or CONFIG_NEWLIB_LIBC_MAX_MAPPED_REGION_SIZE, whichever is smaller.
• When MPU is enabled and the MPU requires power-of-two partition size and address alignment
(CONFIG_NEWLIB_LIBC_ALIGNED_HEAP_SIZE is set to a non-zero value), the amount of memory
space reserved for the Newlib heap is set by the CONFIG_NEWLIB_LIBC_ALIGNED_HEAP_SIZE.
• Otherwise, the amount of memory space reserved for the Newlib heap is equal to the amount of
free (unallocated) memory in the SRAM region.
The standard dynamic memory management interface functions implemented by the Newlib are thread
safe and may be simultaneously called by multiple threads.
Picolibc Picolibc is a complete C library implementation written for embedded systems, targeting the
C17 (ISO/IEC 9899:2018) and POSIX 2018 (IEEE Std 1003.1-2017) standards. Picolibc is an external
open source project which is provided for Zephyr as a module, and included as part of the Zephyr SDK
in precompiled form for each supported architecture (libc.a).
Note: Picolibc is also available for other 3rd-party toolchains, such as GNU Arm Embedded.
Zephyr implements the “API hook” functions that are invoked by the C standard library functions in the
Picolibc. These hook functions are implemented in lib/libc/picolibc/libc-hooks.c and translate
the library internal system calls to the equivalent Zephyr API calls.
Picolibc Module When built as a Zephyr module, there are several configuration knobs available to
adjust the feature set in the library, balancing what the library supports versus the code size of the
resulting functions. Because the standard C++ library must be compiled for the target C library, the
Picolibc module cannot be used with applications which use the standard C++ library. Building the
Picolibc module will increase the time it takes to compile the application.
The Picolibc module can be enabled by selecting CONFIG_PICOLIBC_USE_MODULE in the application con-
figuration file.
When updating the Picolibc module to a newer version, the toolchain-bundled Picolibc in the Zephyr SDK
must also be updated to the same version.
Toolchain Picolibc Starting with version 0.16, the Zephyr SDK includes precompiled versions of Picol-
ibc for every target architecture, along with precompiled versions of libstdc++.
The toolchain version of Picolibc can be enabled by de-selecting CONFIG_PICOLIBC_USE_MODULE in the
application configuration file.
For every release of Zephyr, the toolchain-bundled Picolibc and the Picolibc module are guaranteed to be
in sync when using the recommended version of Zephyr SDK.
Formatted Output Picolibc supports all standard C formatted input and output functions, including
printf(), fprintf(), sprintf() and sscanf().
Picolibc formatted input and output function implementation supports all format specifiers defined by
the C17 and POSIX 2018 standards with the following exceptions:
• Floating point format specifiers (e.g. %f) require CONFIG_PICOLIBC_IO_FLOAT.
• Long long format specifiers (e.g. %lld) require CONFIG_PICOLIBC_IO_LONG_LONG. This option is
automatically enabled with CONFIG_PICOLIBC_IO_FLOAT.
Printk, cbprintf and friends When using Picolibc, Zephyr formatted output functions are implemented
in terms of stdio calls. This includes:
• printk, snprintk and vsnprintk
• cbprintf and cbvprintf
• fprintfcb, vfprintfcb, printfcb, vprintfcb, snprintfcb and vsnprintfcb
Math Functions Picolibc provides full C17 and IEEE Std 754-2019 support for float, double and long
double math operations, except for the long double versions of the Bessel functions.
Thread Local Storage Picolibc uses Thread Local Storage (TLS) (where supported) for data which is
supposed to remain local to each thread, like errno. This means that TLS support is enabled when
using Picolibc. As all TLS variables are allocated out of the thread stack area, this can affect stack size
requirements by a few bytes.
C Library Local Variables Picolibc uses a few internal variables for things like heap management.
These are collected in a dedicated memory partition called z_libc_partition. Applications using
CONFIG_USERSPACE and memory domains must ensure that this partition is included in any domain
active during Picolibc calls.
Dynamic Memory Management Picolibc uses the malloc() API family implementation provided by the
common C library, which is itself built upon the kernel memory heap API.
Formatted Output
C defines standard formatted output functions such as printf and sprintf and these functions are
implemented by the C standard libraries.
Each C standard library has its own set of requirements and configurations for selecting the formatted
output modes and capabilities. Refer to each C standard library documentation for more details.
C defines a standard dynamic memory management interface (for example, malloc() and free()) and
these functions are implemented by the C standard libraries.
While the details of the dynamic memory management implementation vary across different C standard
libraries, all supported libraries must conform to the following conventions. Every supported C standard
library shall:
• manage its own memory heap either internally or by invoking the hook functions (for example,
sbrk()) implemented in libc-hooks.c.
• maintain the architecture- and memory region-specific alignment requirements for the memory
blocks allocated by the standard dynamic memory allocation interface (for example, malloc()).
• allocate memory blocks inside the z_malloc_partition memory partition when userspace is en-
abled. See Pre-defined Memory Partitions.
For more details regarding the C standard library-specific memory management implementation, refer
to each C standard library documentation.
Note: Native Zephyr applications should use the memory management API supported by the Zephyr
kernel such as k_malloc() in order to take advantage of the advanced features that they offer.
C standard dynamic memory management interface functions such as malloc() should be used only by
portable applications and libraries that target multiple operating systems.
Zephyr supports applications written in both C and C++. However, to use C++ in an application
you must configure Zephyr to include C++ support by selecting CONFIG_CPP in the application
configuration file.
To enable C++ support, the compiler toolchain must also include a C++ compiler and the included
compiler must be supported by the Zephyr build system. The Zephyr SDK, which includes the GNU C++
Compiler (part of GCC), is supported by Zephyr, and the features and their availability documented here
assume the use of the Zephyr SDK.
The default C++ standard level (i.e. the language standard enforced by the compiler flags passed) for
Zephyr apps is C++11. Other standards are available via a Kconfig choice, for example CONFIG_STD_CPP98.
The oldest standard supported and tested in Zephyr is C++98.
When compiling a source file, the build system selects the C++ compiler based on the suffix (extension)
of the files. Files identified with either a cpp or a cxx suffix are compiled using the C++ compiler. For
example, myCplusplusApp.cpp is compiled using C++.
The C++ standard requires the main() function to have the return type of int. Your main() must be
defined as int main(void). Zephyr ignores the return value from main, so applications should not
return status information and should, instead, return zero.
Note: Do not use C++ for kernel, driver, or system initialization code.
Language Features
Zephyr currently provides only a subset of C++ functionality. The following features are not supported:
• Static global object destruction
• OS-specific C++ standard library classes (e.g. std::thread, std::mutex)
While not an exhaustive list, support for the following functionality is included:
• Inheritance
• Virtual functions
• Virtual tables
• Static global object constructors
• Dynamic object management with the new and delete operators
• Exceptions
• RTTI (runtime type information)
• Standard Template Library (STL)
Static global object constructors are initialized after the drivers are initialized but before the application
main() function. Therefore, use of C++ is restricted to application code.
In order to make use of the C++ exceptions, the CONFIG_CPP_EXCEPTIONS must be selected in the
application configuration file.
The Zephyr minimal C++ library (lib/cpp/minimal) provides a minimal subset of the C++ standard library
and application binary interface (ABI) functions to enable basic C++ language support. This includes:
• new and delete operators
• virtual function stub and vtables
• static global initializers for global constructors
The scope of the minimal C++ library is strictly limited to providing the basic C++ language support,
and it does not implement any Standard Template Library (STL) classes and functions. For this reason,
it is only suitable for use in applications that implement their own (non-standard) class library and do
not rely on the Standard Template Library (STL) components.
Any application that makes use of the Standard Template Library (STL) components, such as
std::string and std::vector, must enable the C++ standard library support.
The C++ Standard Library is a collection of classes and functions that are part of the ISO C++ standard
(std namespace).
Zephyr does not include any C++ standard library implementation in source code form. Instead, it
allows configuring the build system to link against the pre-built C++ standard library included in the
C++ compiler toolchain.
To enable the C++ standard library, select an applicable toolchain-specific C++ standard library type from
the CONFIG_LIBCPP_IMPLEMENTATION choice in the application configuration file.
For instance, when building with the Zephyr SDK, the build system can be configured to link against
the GNU C++ Library (libstdc++.a), which is a fully featured C++ standard library that provides all
features required by the ISO C++ standard including the Standard Template Library (STL), by selecting
CONFIG_GLIBCXX_LIBCPP in the application configuration file.
The following C++ standard libraries are supported by Zephyr:
• GNU C++ Library (CONFIG_GLIBCXX_LIBCPP)
• ARC MetaWare C++ Library (CONFIG_ARCMWDT_LIBCPP)
A Zephyr subsystem that requires the features from the full C++ standard library can select, from its
config, CONFIG_REQUIRES_FULL_LIBCPP, which automatically selects a compatible C++ standard library
unless the Kconfig symbol for a specific C++ standard library is selected.
2.7 Optimizations
Stack Sizes
Stack sizes of various system threads are specified generously to allow for usage in different scenarios
on as many supported platforms as possible. You should start the optimization process by reviewing all
stack sizes and adjusting them for your application:
CONFIG_ISR_STACK_SIZE
Set to 2048 by default
CONFIG_MAIN_STACK_SIZE
Set to 1024 by default
CONFIG_IDLE_STACK_SIZE
Set to 320 by default
CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE
Set to 1024 by default
CONFIG_PRIVILEGED_STACK_SIZE
Set to 1024 by default; depends on the userspace feature.
Unused Peripherals
Some peripherals are enabled by default. You can disable unused peripherals in your project configura-
tion, for example:
CONFIG_GPIO=n
CONFIG_SPI=n
The following options are enabled by default to provide more information about the running application
and to provide means for debugging and error handling:
CONFIG_BOOT_BANNER
This option can be disabled to save a few bytes.
CONFIG_DEBUG
This option can be disabled for production builds.
MPU/MMU Support
Depending on your application and platform needs, you can disable MPU/MMU support to reclaim some
memory and improve performance. Consider the consequences of this configuration choice though,
because you'll lose advanced stack checking and memory protection support.
The build system offers three targets to view and analyse RAM, ROM and stack usage in generated images.
The tools run on the final image and report the size of the symbols and code used in both RAM and ROM.
Additionally, with features available through the compiler, a worst-case stack usage analysis can be
generated:
Tools that are available as build system targets:
Build Target: puncover This target uses a third-party tool called puncover. When this target is built,
it launches a local web server that allows you to open a web client and browse the files and view their
ROM, RAM and stack usage. Before you can use this target, you will have to install the puncover Python
module:
Then build the puncover target. Using west:
west build -b <board> -t puncover
To view the worst-case stack usage analysis, build with CONFIG_STACK_USAGE enabled. Using west:
west build -b <board> -t puncover -- -DCONFIG_STACK_USAGE=y
Build Target: ram_report List all compiled objects and their RAM usage in a tabular form with bytes
per symbol and the percentage it uses. The data is grouped based on the file system location of the object
in the tree and the file containing the symbol.
Use the ram_report target with your board. Using west:
west build -b <board> -t ram_report
Path                                      Size      %
=====================================================
...
SystemCoreClock                              4  0.08%
_kernel                                     48  0.99%
...
Build Target: rom_report List all compiled objects and their ROM usage in a tabular form with bytes
per symbol and the percentage it uses. The data is grouped based on the file system location of the object
in the tree and the file containing the symbol.
Use the rom_report target to get the ROM report. Using west:
west build -b <board> -t rom_report
Path                                      Size      %
=====================================================
...
CSWTCH.5                                     4  0.02%
SystemCoreClock                              4  0.02%
__aeabi_idiv0                                2  0.01%
__udivmoddi4                               702  3.37%
_sw_isr_table                              384  1.85%
delay_machine_code.9114                      6  0.03%
levels.8826                                 20  0.10%
mpu_config                                   8  0.04%
transitions.10558                           12  0.06%
arch                                      1194  5.74%
  arm                                     1194  5.74%
    core                                  1194  5.74%
      aarch32                             1194  5.74%
        cortex_m                           852  4.09%
          fault.c                          400  1.92%
            bus_fault.isra.0                60  0.29%
            mem_manage_fault.isra.0         56  0.27%
            usage_fault.isra.0              36  0.17%
            z_arm_fault                    232  1.11%
            z_arm_fault_init               ...
...
Data Structures
Build Target: pahole Poke-a-hole (pahole) is an object-file analysis tool that finds the size of data
structures and the holes caused by the compiler aligning data elements to the word size of the CPU.
Poke-a-hole (pahole) must be installed prior to using this target. It can be obtained from https://git.
kernel.org/pub/scm/devel/pahole/pahole.git and is available in the dwarves package on both Fedora
and Ubuntu. After installing it, build the pahole target. Using west:
west build -b <board> -t pahole
After running this target, pahole outputs the results to the console.
This guide describes the software tools you can run on your host workstation to flash and debug Zephyr
applications.
Zephyr’s west tool has built-in support for all of these in its flash, debug, debugserver, and attach
commands, provided your board hardware supports them and your Zephyr board directory’s board.
cmake file declares that support properly. See Building, Flashing and Debugging for more information on
these commands.
Atmel SAM Boot Assistant (Atmel SAM-BA) allows In-System Programming (ISP) from USB or UART host
without any external programming interface. Zephyr allows users to develop and program boards with
SAM-BA support using west. Zephyr supports devices with/without ROM bootloader and both extensions
from Arduino and Adafruit. Full support was introduced in Zephyr SDK 0.12.0.
The typical command to flash the board is:
west flash
Note: The CONFIG_BOOTLOADER_BOSSA_LEGACY Kconfig option should be used as a last resort. First try
configuring for devices without a ROM bootloader.
Typical flash layout and configuration For bootloaders that reside in flash, the devicetree partition
layout is mandatory. For devices with a ROM bootloader, the partition layout is mandatory only when the
application uses a storage or other non-application partition. In that special case, the boot partition should
be omitted and code_partition should start at offset 0. The partitions must always be defined with sizes
that avoid overlaps.
A typical flash layout for devices without a ROM bootloader is:
/ {
chosen {
zephyr,code-partition = &code_partition;
};
};
&flash0 {
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
boot_partition: partition@0 {
label = "sam-ba";
reg = <0x00000000 0x2000>;
read-only;
};
code_partition: partition@2000 {
label = "code";
reg = <0x2000 0x3a000>;
read-only;
};
/*
* The final 16 KiB is reserved for the application.
* Storage partition will be used by FCB/LittleFS/NVS
* if enabled.
*/
storage_partition: partition@3c000 {
label = "storage";
reg = <0x0003c000 0x00004000>;
};
};
};
A typical flash layout for devices with a ROM bootloader and storage partition is:
/ {
chosen {
zephyr,code-partition = &code_partition;
};
};
&flash0 {
partitions {
compatible = "fixed-partitions";
#address-cells = <1>;
#size-cells = <1>;
code_partition: partition@0 {
label = "code";
reg = <0x0 0xF0000>;
read-only;
};
/*
* The final 64 KiB is reserved for the application.
* Storage partition will be used by FCB/LittleFS/NVS
* if enabled.
*/
storage_partition: partition@F0000 {
label = "storage";
reg = <0x000F0000 0x00010000>;
};
};
};
Enabling SAM-BA runner In order to instruct the Zephyr west tool to use the SAM-BA bootloader, the
board.cmake file must contain the entry include(${ZEPHYR_BASE}/boards/common/bossac.board.cmake).
Note that west accepts additional entries to define multiple runners. By default, the first one is
selected when using the west flash command. The remaining runners are available by passing the
--runner (or -r) option, for instance west flash -r bossac.
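As a sketch, a board.cmake with two runners might look as follows (the bossac include is the entry
named above; the jlink include is an assumed example of an additional runner):

```cmake
# Sketch of a board.cmake: the first runner included becomes the
# default for `west flash`; additional includes register alternative
# runners selectable with `west flash -r <runner>`.
include(${ZEPHYR_BASE}/boards/common/bossac.board.cmake)
include(${ZEPHYR_BASE}/boards/common/jlink.board.cmake)
```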
More implementation details can be found in the boards documentation. As a quick reference, see these
four board documentation pages:
• sam4e_xpro (ROM bootloader)
• adafruit_feather_m0_basic_proto (Adafruit UF2 bootloader)
• arduino_nano_33_iot (Arduino bootloader)
• arduino_nano_33_ble (Arduino legacy bootloader)
Enabling BOSSAC on Windows Native [Experimental] The Zephyr SDK's bossac is currently only supported
on Linux and macOS. Windows support can be achieved by using the bossac version from the official
BOSSA releases. After installing with the default options, bossac.exe must be added to the Windows
PATH. A specific bossac executable can be used by passing the --bossac option, as follows:
west flash -r bossac --bossac="<path to bossac.exe>"
J-Link Debug Host Tools Segger provides a suite of debug host tools for Linux, macOS, and Windows operating systems:
• J-Link GDB Server: GDB remote debugging
• J-Link Commander: Command-line control and flash programming
• RTT Viewer: RTT terminal input and output
• SystemView: Real-time event visualization and recording
These debug host tools are compatible with the following debug probes:
• LPC-Link2 J-Link Onboard Debug Probe
• OpenSDA J-Link Onboard Debug Probe
• J-Link External Debug Probe
• ST-LINK/V2-1 Onboard Debug Probe
Check if your SoC is listed in J-Link Supported Devices.
Download and install the J-Link Software and Documentation Pack to get the J-Link GDB Server and
Commander, and to install the associated USB device drivers. RTT Viewer and SystemView can be
downloaded separately, but are not required.
Note that the J-Link GDB server does not yet support Zephyr RTOS-awareness.
OpenOCD Debug Host Tools OpenOCD is a community open source project that provides GDB remote
debugging and flash programming support for a wide range of SoCs. A fork that adds Zephyr RTOS-awareness
is included in the Zephyr SDK; otherwise see Getting OpenOCD for options to download OpenOCD from
official repositories.
These debug host tools are compatible with the following debug probes:
• OpenSDA DAPLink Onboard Debug Probe
• J-Link External Debug Probe
• ST-LINK/V2-1 Onboard Debug Probe
Check if your SoC is listed in OpenOCD Supported Devices.
Note: On Linux, openocd is available through the Zephyr SDK. Windows users should use the following
steps to install openocd:
• Download openocd for Windows from here: OpenOCD Windows
• Copy the bin and share directories to C:\Program Files\OpenOCD\
• Add C:\Program Files\OpenOCD\bin to the PATH environment variable
pyOCD Debug Host Tools pyOCD is an open source project from Arm that provides GDB remote debugging
and flash programming support for Arm Cortex-M SoCs. It is distributed on PyPI and installed when you
complete the Get Zephyr and install Python dependencies step in the Getting Started Guide. pyOCD
includes support for Zephyr RTOS-awareness.
These debug host tools are compatible with the following debug probes:
• OpenSDA DAPLink Onboard Debug Probe
• ST-LINK/V2-1 Onboard Debug Probe
Check if your SoC is listed in pyOCD Supported Devices.
Lauterbach TRACE32 Debug Host Tools Lauterbach TRACE32 is a product line of microprocessor development
tools, debuggers and real-time tracers with support for JTAG, SWD, NEXUS or ETM over multiple core
architectures, including Arm Cortex-A/-R/-M, RISC-V, Xtensa, etc. Zephyr allows users to develop and
program boards with Lauterbach TRACE32 support using west.
The runner consists of a wrapper around the TRACE32 software and allows a Zephyr board to execute a
custom start-up script (Practice Script) for each of the supported commands, including the ability to
pass extra arguments from CMake. It is up to the board using this runner to define the actions performed
on each command.
Install Lauterbach TRACE32 Software Download Lauterbach TRACE32 software from the Lauter-
bach TRACE32 download website (registration required) and follow the installation steps described in
Lauterbach TRACE32 Installation Guide.
Flashing and Debugging Set the environment variable T32_DIR to the TRACE32 system directory. Then
execute the west flash or west debug commands to flash or debug the Zephyr application, as detailed in
Building, Flashing and Debugging. The debug command launches the TRACE32 GUI to allow debugging the
Zephyr application, while the flash command hides the GUI and performs all operations in the back-
ground.
By default, the t32 runner will launch TRACE32 using the default configuration file named config.t32
located in the TRACE32 system directory. To use a different configuration file, supply the argument
--config CONFIG to the runner.
For more options, run west flash --context -r t32 to print the usage.
Zephyr RTOS Awareness To enable Zephyr RTOS awareness follow the steps described in Lauterbach
TRACE32 Zephyr OS Awareness Manual.
Debug Probes A debug probe is special hardware which allows you to control execution of a Zephyr application running
on a separate board. Debug probes usually allow reading and writing registers and memory, and support
breakpoint debugging of the Zephyr application on your host workstation using tools like GDB. They
may also support other debug software and more advanced features such as tracing program execution.
For details on the related host software supported by Zephyr, see Flash & Debug Host Tools.
Debug probes are usually connected to your host workstation via USB; they are sometimes also accessible
via an IP network or other means. They usually connect to the device running Zephyr using the JTAG or
SWD protocols. Debug probes are either separate hardware devices or circuitry integrated into the same
board which runs Zephyr.
Many supported boards in Zephyr include a second microcontroller that serves as an onboard debug
probe, usb-to-serial adapter, and sometimes a drag-and-drop flash programmer. This eliminates the need
to purchase an external debug probe and provides a variety of debug host tool options.
Several hardware vendors have their own branded onboard debug probe implementations: NXP LPC
boards have LPC-Link2, NXP Kinetis (former Freescale) boards have OpenSDA, and ST boards have
ST-LINK. Each onboard debug probe microcontroller can support one or more types of firmware that
communicate with their respective debug host tools. For example, an OpenSDA microcontroller can be
programmed with DAPLink firmware to communicate with pyOCD or OpenOCD debug host tools, or
with J-Link firmware to communicate with J-Link debug host tools.
Some supported boards in Zephyr do not include an onboard debug probe and therefore require an
external debug probe. In addition, boards that do include an onboard debug probe often also have an
SWD or JTAG header to enable the use of an external debug probe instead. One reason this may be useful
is that the onboard debug probe may have limitations, such as lack of support for advanced debuggers or
high-speed tracing. You may need to adjust jumpers to prevent the onboard debug probe from interfering
with the external debug probe.
The LPC-Link2 J-Link is an onboard debug probe and usb-to-serial adapter supported on many NXP LPC
and i.MX RT development boards.
This debug probe is compatible with the following debug host tools:
• J-Link Debug Host Tools
This probe is realized by programming the LPC-Link2 microcontroller with J-Link LPC-Link2 firmware.
Download and install LPCScrypt to get the firmware and programming scripts.
Note: Verify the firmware supports your board by visiting Firmware for LPCXpresso
1. Put the LPC-Link2 microcontroller into DFU boot mode by attaching the DFU jumper, then powering
up the board.
2. Run the program_JLINK script.
3. Remove the DFU jumper and power cycle the board.
The OpenSDA DAPLink is an onboard debug probe and usb-to-serial adapter supported on many NXP
Kinetis and i.MX RT development boards. It also includes drag-and-drop flash programming support.
This debug probe is compatible with the following debug host tools:
• pyOCD Debug Host Tools
• OpenOCD Debug Host Tools
This probe is realized by programming the OpenSDA microcontroller with DAPLink OpenSDA firmware.
NXP provides OpenSDA DAPLink Board-Specific Firmwares.
Install the debug host tools before you program the firmware.
As with all OpenSDA debug probes, the steps for programming the firmware are:
1. Put the OpenSDA microcontroller into bootloader mode by holding the reset button while you power
on the board. Note that “bootloader mode” in this context applies to the OpenSDA microcontroller
itself, not the target microcontroller of your Zephyr application.
2. After you power on the board, release the reset button. A USB mass storage device called
BOOTLOADER or MAINTENANCE will enumerate.
3. Copy the OpenSDA firmware binary to the USB mass storage device.
4. Power cycle the board, this time without holding the reset button. You should see three USB
devices enumerate: a CDC device (serial port), a HID device (debug port), and a mass storage
device (drag-and-drop flash programming).
The OpenSDA J-Link is an onboard debug probe and usb-to-serial adapter supported on many NXP
Kinetis and i.MX RT development boards.
This debug probe is compatible with the following debug host tools:
• J-Link Debug Host Tools
This probe is realized by programming the OpenSDA microcontroller with J-Link OpenSDA firmware.
Segger provides OpenSDA J-Link Generic Firmwares and OpenSDA J-Link Board-Specific Firmwares,
where the latter is generally recommended when available. Board-specific firmwares are required for
i.MX RT boards to support their external flash memories, whereas generic firmwares are compatible with
all Kinetis boards.
Install the debug host tools before you program the firmware.
As with all OpenSDA debug probes, the steps for programming the firmware are:
1. Put the OpenSDA microcontroller into bootloader mode by holding the reset button while you power
on the board. Note that “bootloader mode” in this context applies to the OpenSDA microcontroller
itself, not the target microcontroller of your Zephyr application.
2. After you power on the board, release the reset button. A USB mass storage device called
BOOTLOADER or MAINTENANCE will enumerate.
3. Copy the OpenSDA firmware binary to the USB mass storage device.
4. Power cycle the board, this time without holding the reset button. You should see two USB devices
enumerate: a CDC device (serial port) and a vendor-specific device (debug port).
Segger J-Link is a family of external debug probes, including J-Link EDU, J-Link PLUS, J-Link ULTRA+,
and J-Link PRO, that support a large number of devices from different hardware architectures and
vendors.
This debug probe is compatible with the following debug host tools:
• J-Link Debug Host Tools
• OpenOCD Debug Host Tools
Install the debug host tools before you program the firmware.
ST-LINK/V2-1 is a serial and debug adapter built into all Nucleo and Discovery boards. It provides a
bridge between your computer (or other USB host) and the embedded target processor, which can be
used for debugging, flash programming, and serial communication, all over a simple USB cable.
It is compatible with the following host debug tools:
• OpenOCD Debug Host Tools
• J-Link Debug Host Tools
For some STM32 based boards, it is also compatible with:
• pyOCD Debug Host Tools
While it works out of the box with OpenOCD, it requires reflashing to work with J-Link. For this
purpose, SEGGER offers a firmware upgrade for the on-board ST-LINK/V2-1 on Nucleo and Discovery
boards. This firmware makes the ST-LINK/V2-1 compatible with J-LinkOB, allowing users to take
advantage of most J-Link features, like the ultra fast flash download and debugging speed, or the
free-to-use GDB Server.
For more information about upgrading the ST-LINK/V2-1 to J-Link, or restoring the original
ST-LINK/V2-1 firmware, please visit: Segger over ST-Link
A board can make J-Link its default flash runner in its board.cmake:
set(BOARD_FLASH_RUNNER jlink)
If you use West (Zephyr’s meta-tool) you can modify the default runner using the --runner (or -r)
option.
To attach a debugger to your board and open up a debug console with jlink, run west debug -r jlink.
For more information about West and available options, see West (Zephyr’s meta-tool).
If you configured your Zephyr application to use the Segger RTT console instead, open a telnet session
to the J-Link RTT server (port 19021 by default):
telnet localhost 19021
If you get no RTT output, you might need to disable other consoles which conflict with the RTT one if
they are enabled by default in the particular sample or application you are running, for example by
disabling UART_CONSOLE in menuconfig.
The board_uid value can be obtained using twister’s generate-hardware-map option. For more information
about twister and available options, see Test Runner (Twister).
Zephyr relies on the source code of several externally maintained projects in order to avoid reinventing
the wheel and to reuse as much well-established, mature code as possible when it makes sense. In the
context of Zephyr’s build system those are called modules. These modules must be integrated with the
Zephyr build system, as described in more detail in other sections on this page.
To be classified as a candidate for being included in the default list of modules, an external project is
required to have its own life-cycle outside the Zephyr Project, that is, reside in its own repository, and
have its own contribution and maintenance workflow and release process. Zephyr modules should not
contain code that is written exclusively for Zephyr. Instead, such code should be contributed to the main
zephyr tree.
Modules to be included in the default manifest of the Zephyr project need to provide functionality or
features endorsed and approved by the project Technical Steering Committee and should comply with
the module licensing requirements and contribution guidelines. They should also have a Zephyr developer
that is committed to maintaining the module codebase.
Zephyr depends on several categories of modules, including but not limited to:
• Debugger integration
• Silicon vendor Hardware Abstraction Layers (HALs)
• Cryptography libraries
• File Systems
• Inter-Process Communication (IPC) libraries
Additionally, in some cases modules (particularly vendor HALs) can contain references to optional binary
blobs.
This page summarizes a list of policies and best practices which aim at better organizing the workflow in
Zephyr modules.
Zephyr modules, described in this page, are not the same as west projects. In fact, modules do not require
west at all. However, when modules are used together with west, the build system uses west to find
them.
In summary:
Modules are repositories that contain a zephyr/module.yml file, so that the Zephyr build system can pull
in the source code from the repository. West projects are entries in the projects: section in the west.yml
manifest file. West projects are often also modules, but not always. There are west projects that are not
included in the final firmware image (e.g. tools) and thus do not need to be modules. Modules are found
by the Zephyr build system either via west itself, or via the ZEPHYR_MODULES CMake variable.
The contents of this page only apply to modules, and not to west projects in general (unless they are a
module themselves).
• All modules included in the default manifest shall be hosted in repositories under the
zephyrproject-rtos GitHub organization.
• The module repository codebase shall include a module.yml file in a zephyr/ folder at the root of
the repository.
• Module repository names should follow the convention of using lowercase letters and dashes instead
of underscores. This rule will apply to all new module repositories, except for repositories
that are directly tracking external projects (hosted in Git repositories); such modules may be named
as their external project counterparts.
Note: Existing module repositories that do not conform to this convention do not need to be
renamed.
Note: Module repositories are not required to maintain a ‘master’ branch mirroring the master
branch of the external repository. Doing so is not recommended, as it may generate confusion around
the module’s main branch, which should be ‘zephyr’.
• Modules should expose all provided header files with an include pathname beginning
with the module name. (E.g., mcuboot should expose its bootutil/bootutil.h as
“mcuboot/bootutil/bootutil.h”.)
Synchronizing with upstream It is preferred to synchronize a module repository with the latest stable
release of the corresponding external project. It is permitted, however, to update a Zephyr module
repository with the latest development branch tip, if this is required to get important updates in the
module codebase. When synchronizing a module with upstream it is mandatory to document the rationale
for performing the particular update.
Requirements for allowed practices Changes to the main branch of a module repository, including
synchronization with upstream code base, may only be applied via pull requests. These pull requests shall
be verifiable by Zephyr CI and mergeable (e.g. with the Rebase and merge, or Create a merge commit option
using Github UI). This ensures that the incoming changes are always reviewable, and the downstream
module repository history is incremental (that is, existing commits, tags, etc. are always preserved). This
policy also allows running Zephyr CI, git lint, identity, and license checks directly on the set of changes
that are to be brought into the module repository.
Allowed practices The following practices conform to the above requirements and should be followed
in all module repositories. It is up to the module code owner to select the preferred synchronization
practice, however, it is required that the selected practice is consistently followed in the respective mod-
ule repository.
Updating modules with a diff from upstream: Upstream changes are brought in as a single snapshot
commit (manual diff) in a pull request against the module’s main branch, which may be merged using the
Rebase & merge operation. This approach is simple and should be applicable to all modules, with the
downside of suppressing the upstream history in the module repository.
Note: The above practice is the only allowed practice in modules where the external project
is not hosted in an upstream Git repository.
The commit message is expected to identify the upstream project URL, the version to which the module
is updated (upstream version, tag, commit SHA, if applicable, etc.), and the reason for doing the
update.
Updating modules by merging the upstream branch: Upstream changes are brought in by performing a
Git merge of the intended upstream branch (e.g. main branch, latest release branch, etc.), submitting
the result in a pull request against the module main branch, and merging the pull request using the
Create a merge commit operation. This approach is applicable to modules with an upstream project Git
repository. The main advantage of this approach is that the upstream repository history (that is, the
original commit SHAs) is preserved in the module repository. The downside is that two additional merge
commits are generated in the downstream main branch.
To facilitate management of Zephyr module repositories, the following individual roles are defined.
Administrator: Each Zephyr module shall have an administrator who is responsible for managing access
to the module repository, for example, for adding individuals as Collaborators in the repository at the
request of the module owner. Module administrators are members of the Administrators team, which is a
group of project members with admin rights to module GitHub repositories.
Module owner: Each module shall have a module code owner. Module owners will have the overall
responsibility of the contents of a Zephyr module repository. In particular, a module owner will:
• coordinate code reviewing in the module repository
• be the default assignee in pull-requests against the repository’s main branch
• request additional collaborators to be added to the repository, as they see fit
• regularly synchronize the module repository with its upstream counterpart following the policies
described in Synchronizing with upstream
• be aware of security vulnerability issues in the external project and update the module repository
to include security fixes, as soon as the fixes are available in the upstream code base
• list any known security vulnerability issues, present in the module codebase, in Zephyr release
notes.
Merger: The Zephyr Release Engineering team has the right and the responsibility to merge approved
pull requests in the main branch of a module repository.
Updates in the zephyr main tree, for example in public Zephyr APIs, may require patching a module’s
codebase. The responsibility for keeping the module codebase up to date is shared between the
contributor of such updates in Zephyr and the module owner. In particular:
• the contributor of the original changes in Zephyr is required to submit the corresponding changes
that are required in module repositories, to ensure that Zephyr CI on the pull request with the
original changes, as well as the module integration testing are successful.
• the module owner has the overall responsibility for synchronizing and testing the module codebase
with the zephyr main tree. This includes occasional advanced testing of the module’s codebase in
addition to the testing performed by Zephyr’s CI. The module owner is required to fix issues in the
module’s codebase that have not been caught by Zephyr pull request CI runs.
Submitting and merging changes directly to a module’s codebase, that is, before they have been merged
in the corresponding external project repository, should be limited to:
• changes required due to updates in the zephyr main tree
• urgent changes that should not wait to be merged in the external project first, such as fixes to
security vulnerabilities.
Non-trivial changes to a module’s codebase, including changes in the module design or functionality,
should be discouraged if the module has an upstream project repository. In that case, such changes
shall be submitted directly to the upstream project.
Submitting changes to modules describes in detail the process of contributing changes to module
repositories.
Contribution guidelines Contributing to Zephyr modules shall follow the generic project Contribution
guidelines.
Pull Requests: may be merged with a minimum of two approvals, including an approval by the PR assignee.
In addition to this, pull requests in module repositories may only be merged if the introduced changes
are verified with Zephyr CI tools, as described in more detail in other sections on this page.
The merging of pull requests in the main branch of a module repository must be coupled with the
corresponding manifest file update in the zephyr main tree.
Issue Reporting: GitHub issues are intentionally disabled in module repositories, in favor of a
centralized policy for issue reporting. Tickets concerning, for example, bugs or enhancements in
modules shall be opened in the main zephyr repository. Issues should be appropriately labeled using
GitHub labels corresponding to each module, where applicable.
Note: It is allowed to file bug reports for zephyr modules to track the corresponding upstream
project bugs in Zephyr. These bug reports shall not affect the Release Quality Criteria.
All source files in a module’s codebase shall include a license header, unless the module repository has
a main license file that covers source files that do not include license headers.
Main license files shall be added in the module’s codebase by Zephyr developers, only if they exist as part
of the external project, and they contain a permissive OSI-compliant license. Main license files should
preferably contain the full license text instead of including an SPDX license identifier. If multiple main
license files are present it shall be made clear which license applies to each source file in a module’s
codebase.
Individual license headers in module source files supersede the main license.
Any new content to be added to a module repository is required to have license coverage.
Note: Zephyr recommends conveying module licensing via individual license headers and
main license files. This is not a hard requirement; should an external project have its own
practice of conveying how licensing applies in the module’s codebase (for example, by having
a single or multiple main license files), this practice may be accepted by and be referred to
in the Zephyr module, as long as licensing requirements, for example OSI compliance, are
satisfied.
License policies
License checks License checks (via CI tools) shall be enabled on every pull request that adds new
content in module repositories.
All Zephyr modules should provide some level of integration testing, ensuring that the integration with
Zephyr works correctly. Integration tests:
• may be in the form of a minimal set of samples and tests that reside in the zephyr main tree
• should verify basic usage of the module (configuration, functional APIs, etc.) that is integrated
with Zephyr.
• shall be built and executed (for example in QEMU) as part of twister runs in pull requests that
introduce changes in module repositories.
Note: New modules, that are candidates for being included in the Zephyr default manifest, shall
provide some level of integration testing.
Note: Vendor HALs are implicitly tested via Zephyr tests built or executed on target platforms, so
they do not need to provide integration tests.
The purpose of integration testing is not to provide functional verification of the module; this should be
part of the testing framework of the external project.
Certain external projects provide test suites that reside in the upstream testing infrastructure but are
written explicitly for Zephyr. These tests may (but are not required to) be part of the Zephyr test
framework.
Modules may be deprecated for reasons including, but not limited to:
• Lack of maintainership in the module
• Licensing changes in the external project
• Codebase becoming obsolete
The module information shall indicate whether a module is deprecated and the build system shall issue
a warning when trying to build Zephyr using a deprecated module.
Deprecated modules may be removed from the Zephyr default manifest after 2 Zephyr releases.
Note: Repositories of removed modules shall remain accessible via their original URL, as
they are required by older Zephyr versions.
The build system variable ZEPHYR_MODULES is a CMake list of absolute paths to the directories containing
Zephyr modules. These modules contain CMakeLists.txt and Kconfig files describing how to build
and configure them, respectively. Module CMakeLists.txt files are added to the build using CMake’s
add_subdirectory() command, and the Kconfig files are included in the build’s Kconfig menu tree.
If you have west installed, you don’t need to worry about how this variable is defined unless you are
adding a new module. The build system knows how to use west to set ZEPHYR_MODULES. You can add
additional modules to this list by setting the EXTRA_ZEPHYR_MODULES CMake variable or by adding an
EXTRA_ZEPHYR_MODULES line to .zephyrrc (see the section on Environment Variables for more details).
This can be useful if you want to keep the list of modules found with west and also add your own.
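For example, a .zephyrrc fragment adding a single out-of-tree module (the module path is hypothetical):

```shell
# Hypothetical .zephyrrc fragment: make the Zephyr build system pick
# up an extra out-of-tree module in addition to those found via west.
export EXTRA_ZEPHYR_MODULES=/path/to/my_module
```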
Note: If the module FOO is provided by west but also given with -DEXTRA_ZEPHYR_MODULES=/<path>/foo
then the module given by the command line variable EXTRA_ZEPHYR_MODULES will take precedence.
This allows you to use a custom version of FOO when building and still use other Zephyr modules provided
by west. This can for example be useful for special test purposes.
If you want to permanently add modules to the zephyr workspace and you are using zephyr as your
manifest repository, you can also add a west manifest file into the submanifests directory. See
submanifests/README.txt for more details.
See Basics for more on west workspaces.
Finally, you can also specify the list of modules yourself in various ways, or not use modules at all if your
application doesn’t need them.
A module can be described using a file named zephyr/module.yml. The format of zephyr/module.yml
is described in the following:
Module name
Each Zephyr module is given a name by which it can be referred to in the build system.
The name should be specified in the zephyr/module.yml file. This will ensure the module name is not
changeable through user-defined directory names or west manifest files:
name: <name>
In CMake the location of the Zephyr module can then be referred to using the CMake variable
ZEPHYR_<MODULE_NAME>_MODULE_DIR, and the variable ZEPHYR_<MODULE_NAME>_CMAKE_DIR holds the location
of the directory containing the module’s CMakeLists.txt file.
Note: When used for CMake and Kconfig variables, all letters in module names are converted to
uppercase and all non-alphanumeric characters are converted to underscores (_). For example, the module
foo-bar must be referred to as ZEPHYR_FOO_BAR_MODULE_DIR in CMake and Kconfig.
name: foo
Note: If the name field is not specified then the Zephyr module name will be set to the name of the
module folder. For example, the Zephyr module located in <workspace>/modules/bar will use bar as its
module name if nothing is specified in zephyr/module.yml.
build:
cmake: <cmake-directory>
kconfig: <directory>/Kconfig
The cmake: <cmake-directory> part specifies that <cmake-directory> contains the CMakeLists.txt
to use. The kconfig: <directory>/Kconfig part specifies the Kconfig file to use. Neither is required:
cmake defaults to zephyr, and kconfig defaults to zephyr/Kconfig.
Here is an example module.yml file referring to CMakeLists.txt and Kconfig files in the root directory
of the module:
build:
cmake: .
kconfig: Kconfig
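Combining this with the name field described above, a minimal zephyr/module.yml for a hypothetical
module foo that keeps its build files in the repository root might read:

```yaml
# Hypothetical zephyr/module.yml for a module named "foo"; in CMake
# the module is then referred to via ZEPHYR_FOO_MODULE_DIR.
name: foo
build:
  cmake: .
  kconfig: Kconfig
```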
Sysbuild integration
Sysbuild is the Zephyr build system that allows for building multiple images as part of a single
application. The sysbuild build process can be extended externally with modules as needed, for example
to add custom build steps or additional targets to a build. Inclusion of sysbuild-specific build files,
CMakeLists.txt and Kconfig, can be described as:
build:
sysbuild-cmake: <cmake-directory>
sysbuild-kconfig: <directory>/Kconfig
For example, for sysbuild build files located in a sysbuild/ folder:
build:
sysbuild-cmake: sysbuild
sysbuild-kconfig: sysbuild/Kconfig
The module description file zephyr/module.yml can also be used to specify that the build files,
CMakeLists.txt and Kconfig, are located in a MODULE_EXT_ROOT (see Module integration files (external)).
Build files located in a MODULE_EXT_ROOT can be described as:
build:
sysbuild-cmake-ext: True
sysbuild-kconfig-ext: True
This allows control of the build inclusion to be described externally to the Zephyr module.
When a module has a module.yml file, it will automatically be included into the Zephyr build system.
The path to the module is then accessible through Kconfig and CMake variables.
Zephyr modules In both Kconfig and CMake, the variable ZEPHYR_<MODULE_NAME>_MODULE_DIR contains
the absolute path to the module.
In CMake, ZEPHYR_<MODULE_NAME>_CMAKE_DIR contains the absolute path to the directory containing
the CMakeLists.txt file that is included into the CMake build system. This variable’s value is empty if the
module.yml file does not specify a CMakeLists.txt.
To read these variables for a Zephyr module named foo:
• In CMake: use ${ZEPHYR_FOO_MODULE_DIR} for the module’s top level directory, and
${ZEPHYR_FOO_CMAKE_DIR} for the directory containing its CMakeLists.txt
• In Kconfig: use $(ZEPHYR_FOO_MODULE_DIR) for the module’s top level directory
Notice how a lowercase module name foo is capitalized to FOO in both CMake and Kconfig.
These variables can also be used to test whether a given module exists. For example, to verify that foo
is the name of a Zephyr module:
if(ZEPHYR_FOO_MODULE_DIR)
# Do something if FOO exists.
endif()
In Kconfig, the variable may be used to find additional files to include. For example, to include the file
some/Kconfig in module foo:
source "$(ZEPHYR_FOO_MODULE_DIR)/some/Kconfig"
During CMake processing of each Zephyr module, the following two variables are also available:
• the current module’s top level directory: ${ZEPHYR_CURRENT_MODULE_DIR}
• the current module’s CMakeLists.txt directory: ${ZEPHYR_CURRENT_CMAKE_DIR}
This removes the need for a Zephyr module to know its own name during CMake processing. The module
can source additional CMake files using these CURRENT variables. For example:
include(${ZEPHYR_CURRENT_MODULE_DIR}/cmake/code.cmake)
It is possible to append values to a Zephyr CMake list variable from the module’s first CMakeLists.txt file.
To do so, append the value to the list and then set the list in the PARENT_SCOPE of the CMakeLists.txt
file. For example, to append bar to the FOO_LIST variable in the Zephyr CMakeLists.txt scope:
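A minimal sketch of such an append, using the placeholder list name FOO_LIST from the text above:

```cmake
# Append "bar" to FOO_LIST, then propagate the updated list to the
# parent (Zephyr) CMake scope so the rest of the build sees it.
list(APPEND FOO_LIST bar)
set(FOO_LIST ${FOO_LIST} PARENT_SCOPE)
```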
An example of a Zephyr list where this is useful is when adding additional directories to the
SYSCALL_INCLUDE_DIRS list.
Sysbuild modules

In both Kconfig and CMake, the variable SYSBUILD_CURRENT_MODULE_DIR contains the absolute path to
the sysbuild module. In CMake, SYSBUILD_CURRENT_CMAKE_DIR contains the absolute path to the
directory containing the CMakeLists.txt file that is included into the CMake build system.
This variable’s value is empty if the module.yml file does not specify a CMakeLists.txt.
To read these variables for a sysbuild module:
• In CMake: use ${SYSBUILD_CURRENT_MODULE_DIR} for the module’s top level directory, and
${SYSBUILD_CURRENT_CMAKE_DIR} for the directory containing its CMakeLists.txt
• In Kconfig: use $(SYSBUILD_CURRENT_MODULE_DIR) for the module’s top level directory
In Kconfig, the variable may be used to find additional files to include. For example, to include the file
some/Kconfig:
source "$(SYSBUILD_CURRENT_MODULE_DIR)/some/Kconfig"
The module can source additional CMake files using these variables. For example:
include(${SYSBUILD_CURRENT_MODULE_DIR}/cmake/code.cmake)
It is possible to append values to a Zephyr CMake list variable from the module’s first CMakeLists.txt file.
To do so, append the value to the list and then set the list in the PARENT_SCOPE of the CMakeLists.txt
file. For example, to append bar to the FOO_LIST variable in the Zephyr CMakeLists.txt scope:
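A minimal sketch of such an append from a sysbuild CMakeLists.txt, again using the placeholder list name FOO_LIST:

```cmake
# Append "bar" to FOO_LIST, then propagate the updated list to the
# parent (Zephyr) CMake scope so the rest of the build sees it.
list(APPEND FOO_LIST bar)
set(FOO_LIST ${FOO_LIST} PARENT_SCOPE)
```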
Sysbuild modules hooks

Sysbuild provides an infrastructure which allows a sysbuild module to define a function which will be
invoked by sysbuild at a pre-defined point in the CMake flow.
Functions invoked by sysbuild:
• <module-name>_pre_cmake(IMAGES <images>): This function is called for each sysbuild module
before CMake configure is invoked for all images.
• <module-name>_post_cmake(IMAGES <images>): This function is called for each sysbuild module
after CMake configure has completed for all images.
• <module-name>_pre_domains(IMAGES <images>): This function is called for each sysbuild module
before the domains.yaml file is created by sysbuild.
• <module-name>_post_domains(IMAGES <images>): This function is called for each sysbuild module
after the domains.yaml file has been created by sysbuild.
Arguments passed from sysbuild to the function defined by a module:
• <images> is the list of Zephyr images that will be created by the build system.
If a module foo wants to provide a post-CMake-configure function, then the module’s sysbuild
CMakeLists.txt file must define the function foo_post_cmake().
To facilitate naming of functions, the module name is provided by sysbuild CMake through the
SYSBUILD_CURRENT_MODULE_NAME CMake variable when loading the module’s sysbuild CMakeLists.txt
file.
Example of how the foo sysbuild module can define foo_post_cmake():
function(${SYSBUILD_CURRENT_MODULE_NAME}_post_cmake)
  cmake_parse_arguments(POST_CMAKE "" "" "IMAGES" ${ARGN})
  # The list of images passed by sysbuild is now available in POST_CMAKE_IMAGES.
endfunction()
A Zephyr module may depend on other Zephyr modules being present in order to function correctly. It
might also be that a given Zephyr module must be processed after another Zephyr module, due to
dependencies between certain CMake targets.
Such a dependency can be described using the depends field.
build:
  depends:
    - <module>
Here is an example for the Zephyr module foo that is dependent on the Zephyr module bar to be present
in the build system:
name: foo
build:
  depends:
    - bar
This example will ensure that bar is present when foo is included into the build system, and it will also
ensure that bar is processed before foo.
Module integration files can be located externally to the Zephyr module itself. The MODULE_EXT_ROOT
variable holds a list of roots containing integration files located externally to Zephyr modules.
Module integration files in Zephyr

The Zephyr repository contains CMakeLists.txt and Kconfig build files for certain known Zephyr modules.
Those files are located under:

<ZEPHYR_BASE>
└── modules
    └── <module_name>
        ├── CMakeLists.txt
        └── Kconfig
Module integration files in a custom location

You can create a similar MODULE_EXT_ROOT for additional modules, and make those modules known to
the Zephyr build system.
Create a MODULE_EXT_ROOT with the following structure:

<MODULE_EXT_ROOT>
└── modules
    ├── modules.cmake
    └── <module_name>
        ├── CMakeLists.txt
        └── Kconfig
and then build your application by passing the -DMODULE_EXT_ROOT parameter to the CMake build system.
MODULE_EXT_ROOT accepts a CMake list of roots as its argument.
A Zephyr module can automatically be added to the MODULE_EXT_ROOT list using the module description
file zephyr/module.yml, see Build settings.
Note: ZEPHYR_BASE is always added as a MODULE_EXT_ROOT with the lowest priority. This allows you
to overrule any integration files under <ZEPHYR_BASE>/modules/<module_name> with your own
implementation in your own MODULE_EXT_ROOT.
The modules.cmake file must contain the logic that specifies the integration files for Zephyr modules via
specifically named CMake variables.
To include a module’s CMake file, set the variable ZEPHYR_<MODULE_NAME>_CMAKE_DIR to the directory
containing the module’s CMakeLists.txt file.
To include a module’s Kconfig file, set the variable ZEPHYR_<MODULE_NAME>_KCONFIG to the path of the
Kconfig file.
The following is an example of how to add support for the FOO module.
Create the following structure:

<MODULE_EXT_ROOT>
└── modules
    ├── modules.cmake
    └── foo
        ├── CMakeLists.txt
        └── Kconfig

and inside the modules.cmake file, add the following:

set(ZEPHYR_FOO_CMAKE_DIR ${CMAKE_CURRENT_LIST_DIR}/foo)
set(ZEPHYR_FOO_KCONFIG ${CMAKE_CURRENT_LIST_DIR}/foo/Kconfig)
Module integration files (zephyr/module.yml)

The module description file zephyr/module.yml can be used to specify that the build files,
CMakeLists.txt and Kconfig, are located in a MODULE_EXT_ROOT; see Module integration files
(external).
Build files located in a MODULE_EXT_ROOT can be described as:
build:
  cmake-ext: True
  kconfig-ext: True
This allows control of the build inclusion to be described externally to the Zephyr module.
The Zephyr repository itself is always added as a Zephyr module ext root.
Build settings
It is possible to specify additional build settings that must be used when including the module into the
build system.
All root settings are relative to the root of the module.
Build settings supported in the module.yml file are:
• board_root: Contains additional boards that are available to the build system. Additional boards
must be located in a <board_root>/boards folder.
• dts_root: Contains additional dts files related to the architecture/soc families. Additional dts files
must be located in a <dts_root>/dts folder.
• snippet_root: Contains additional snippets that are available for use. These snippets must
be defined in snippet.yml files underneath the <snippet_root>/snippets folder. For exam-
ple, if you have snippet_root: foo, then you should place your module’s snippet.yml files in
<your-module>/foo/snippets or any nested subdirectory.
• soc_root: Contains additional SoCs that are available to the build system. Additional SoCs must
be located in a <soc_root>/soc folder.
• arch_root: Contains additional architectures that are available to the build system. Additional
architectures must be located in a <arch_root>/arch folder.
• module_ext_root: Contains CMakeLists.txt and Kconfig files for Zephyr modules, see also Mod-
ule integration files (external).
• sca_root: Contains additional SCA tool implementations available to the build system. Each tool
must be located in a <sca_root>/sca/<tool> folder. The folder must contain a sca.cmake file.
Example of a module.yml file containing additional roots:
build:
  settings:
    board_root: .
    dts_root: .
    soc_root: .
    arch_root: .
    module_ext_root: .
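With all roots set to . as above, the corresponding file system layout might look like this (a sketch; each folder is only needed if the matching root is declared):

```text
<module>
├── arch/
├── boards/
├── dts/
├── modules/
├── soc/
└── zephyr/
    └── module.yml
```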
To execute both tests and samples available in modules, the Zephyr test runner (twister) should be
pointed to the directories containing those samples and tests. This can be done by specifying the path
to both samples and tests in the zephyr/module.yml file. Additionally, if a module defines out of tree
boards, the module file can point twister to the path where those files are maintained in the module. For
example:
build:
  cmake: .
  samples:
Binary Blobs
Zephyr supports fetching and using binary blobs, and their metadata is contained entirely in zephyr/
module.yml. This is because a binary blob must always be associated with a Zephyr module, and thus
the blob metadata belongs in the module’s description itself.
Binary blobs are fetched using west blobs. If west is not used, they must be downloaded and verified
manually.
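For example, from within a west workspace the blobs declared by modules can be listed and fetched like this (a sketch; <module> is a placeholder for a module name):

```text
west blobs list            # list all blobs declared by Zephyr modules
west blobs fetch <module>  # download and verify the blobs of one module
west blobs clean <module>  # delete the fetched blobs of one module
```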
The blobs section in zephyr/module.yml consists of a sequence of maps, each of which has the following
entries:
• path: The path to the binary blob, relative to the zephyr/blobs/ folder in the module repository
• sha256: SHA-256 checksum of the binary blob file
• type: The type of binary blob. Currently limited to img or lib
• version: A version string
• license-path: Path to the license file for this blob, relative to the root of the module repository
• url: URL that identifies the location the blob will be fetched from, as well as the fetching scheme
to use
• description: Human-readable description of the binary blob
• doc-url: A URL pointing to the location of the official documentation for this blob
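A hypothetical blobs entry in zephyr/module.yml using the fields above (every name, checksum, and URL here is illustrative, not real):

```yaml
blobs:
  - path: lib/libfoo.a                # resolves under zephyr/blobs/ in the module repo
    sha256: c0ffee...                 # placeholder checksum
    type: lib
    version: "1.2.0"
    license-path: LICENSE-blob.txt    # relative to the module repository root
    url: https://example.com/blobs/libfoo-1.2.0.a
    description: Binary library for the hypothetical foo peripheral
    doc-url: https://example.com/docs/foo
```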
Module Inclusion
Using West

If west is installed and ZEPHYR_MODULES is not already set, the build system finds all the modules in
your west installation and uses those. It does this by running west list to get the paths of all the
projects in the installation, then filters the results to just those projects which have the necessary
module metadata files.
Each project in the west list output is tested like this:
• If the project contains a file named zephyr/module.yml, then the content of that file will be used
to determine which files should be added to the build, as described in the previous section.
• Otherwise (i.e. if the project has no zephyr/module.yml), the build system looks for zephyr/
CMakeLists.txt and zephyr/Kconfig files in the project. If both are present, the project is con-
sidered a module, and those files will be added to the build.
• If neither of those checks succeed, the project is not considered a module, and is not added to
ZEPHYR_MODULES.
Without West

If you don’t have west installed or don’t want the build system to use it to find Zephyr modules, you
can set ZEPHYR_MODULES yourself using one of the following options. Each of the directories in the
list must contain either a zephyr/module.yml file or the files zephyr/CMakeLists.txt and
zephyr/Kconfig, as described in the previous section.

1. At the CMake command line, like this:

   cmake -DZEPHYR_MODULES=<path-to-module1>[;<path-to-module2>[...]] ...

2. At the top of your application’s CMakeLists.txt, like this:

   set(ZEPHYR_MODULES <path-to-module1> <path-to-module2> [...])

   If you choose this option, make sure to set the variable before calling find_package(Zephyr ...),
   as shown above.

3. In a separate CMake script which is pre-loaded to populate the CMake cache, like this:

   # Put this in a file with a name like "zephyr-modules.cmake"
   set(ZEPHYR_MODULES <path-to-module1> <path-to-module2> CACHE STRING "" FORCE)

   You can tell the build system to use this file by adding -C zephyr-modules.cmake to your CMake
   command line.
Not using modules

If you don’t have west installed and don’t specify ZEPHYR_MODULES yourself, then no additional
modules are added to the build. You will still be able to build any applications that don’t require
code or Kconfig options defined in an external repository.
When submitting new modules or making changes to existing ones, the main Zephyr repository needs a
reference to the changes in order to verify them. In the main tree this is done using revisions.
For code that is already merged and part of the tree we use the commit hash, a tag, or a branch name.
For pull requests, however, we require specifying the pull request number in the revision field, to
allow building the Zephyr main tree with the changes submitted to the module.
To avoid merging changes to master with pull request information, the pull request should be marked as
DNM (Do Not Merge) or preferably a draft pull request to make sure it is not merged by mistake and to
allow for the module to be merged first and be assigned a permanent commit hash. Drafts reduce noise
by not automatically notifying anyone until marked as “Ready for review”. Once the module is merged,
the revision will need to be changed either by the submitter or by the maintainer to the commit hash of
the module which reflects the changes.
Note that multiple and dependent changes to different modules can be submitted using exactly the same
process. In this case you will update the entries of all modules that have a pull request against
them.
Please follow the process in Submission and review process and obtain TSC approval to integrate the
external source code as a module.
If the request is approved, a new repository will be created by the project team and initialized with
basic information that allows submitting code to the module project following the project
contribution guidelines.
If a module is maintained as a fork of another project on GitHub, the Zephyr module related files and
changes relative to upstream need to be maintained in a special branch named zephyr.
Maintainers from the Zephyr project will create the repository and initialize it. You will be added as a
collaborator in the new repository. Submit the module content (code) to the new repository following
the guidelines described here, and then add a new entry to the west.yml with the following information:
- name: my_module
  path: modules/lib/my_module
  revision: pull/23/head
Here, 23 in the example above indicates the pull request number submitted to the my_module
repository. Once the module changes are reviewed and merged, the revision needs to be changed to the
commit hash from the module repository.
1. Submit the changes using a pull request to an existing repository following the contribution guide-
lines and expectations.
2. Submit a pull request changing the entry referencing the module into the west.yml of the main
Zephyr tree with the following information:
- name: my_module
  path: modules/lib/my_module
  revision: pull/23/head
Here, 23 in the example above indicates the pull request number submitted to the my_module
repository. Once the module changes are reviewed and merged, the revision needs to be changed to the
commit hash from the module repository.
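Once merged, the entry might then be updated like this (the revision shown is a placeholder, not a real commit hash):

```yaml
- name: my_module
  path: modules/lib/my_module
  revision: 1a2b3c4d...   # placeholder: commit hash of the merged module change
```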
The Zephyr project includes a swiss-army-knife command line tool named west. West is developed in
its own repository.
West’s built-in commands provide a multiple repository management system with features inspired by
Google’s Repo tool and Git submodules. West is also “pluggable”: you can write your own west extension
commands which add additional features to west. Zephyr uses this to provide conveniences for building
applications, flashing and debugging them, and more.
Like git and docker, the top-level west command takes some common options, a sub-command to run,
and then options and arguments for that sub-command:

west [common-opts] <command> [opts] <args>

Since west v0.8, you can also run west like this:

python3 -m west [common-opts] <command> [opts] <args>
You can run west --help (or west -h for short) to get top-level help for available west commands, and
west <command> -h for detailed help on each command.
The following pages document west’s v1.0.y releases, and provide additional context about the tool.
West is written in Python 3 and distributed through PyPI. Use pip3 to install or upgrade west.

On Linux:

pip3 install --user -U west

On macOS and Windows:

pip3 install -U west

Note: See Python and pip for additional clarification on using the --user switch.
Afterwards, you can run pip3 show -f west for information on where the west binary and related files
were installed.
Once west is installed, you can use it to clone the Zephyr repositories.
Structure
West’s code is distributed via PyPI in a Python package named west. This distribution includes a launcher
executable, which is also named west (or west.exe on Windows).
When west is installed, pip3 places the launcher somewhere in the user’s filesystem (exactly where
depends on the operating system, but it should be on the PATH environment variable). This launcher is
the command-line entry point for running built-in commands like west init and west update, along
with any extensions discovered in the workspace.
In addition to its command-line interface, you can also use west’s Python APIs directly. See west-apis for
details.
West currently supports shell completion in the following combinations of platform and shell:
• Linux: bash
• macOS: bash
• Windows: not available
In order to enable shell completion, you will need to obtain the corresponding completion script and
have it sourced every time you enter a new shell session.
To obtain the completion script you can use the west completion command:
cd /path/to/zephyr/
west completion bash > ~/west-completion.bash
Note: Remember to update your local copy of the completion script using west completion when you
update Zephyr.
On macOS, if you installed the bash-completion package via Homebrew, also make sure the completion
framework itself is sourced from your shell startup file, for example:

source /usr/local/etc/profile.d/bash_completion.sh
v1.1.0
Major changes:
• west compare: new command that compares the state of the workspace against the manifest.
• Support for a new manifest.project-filter configuration option. See Built-in Configuration Op-
tions for details. The west manifest --freeze and west manifest --resolve commands cur-
rently cannot be used when this option is set. This restriction can be removed in a later release.
• Project names which contain comma (,) or whitespace now generate warnings. These warnings
are errors if the new manifest.project-filter configuration option is set. The warnings may be
promoted to errors in a future major version of west.
Other changes:
• west forall now takes a --group argument that can be used to restrict the command to only
run in one or more groups. Run west help forall for details.
• All west commands will now output log messages from west API modules at warning level or
higher. In addition, the --verbose argument to west can be used once to include informational
messages, or twice to include debug messages, from all commands.
Bug fixes:
• Various improvements to error messages, debug logging, and error handling.
API changes:
• west.manifest.Manifest.is_active() now respects the manifest.project-filter configura-
tion option’s value.
v1.0.1
Major changes:
• Manifest schema version “1.0” is now available for use in this release. This is identical to the “0.13”
schema version in terms of features, but can be used by applications that do not wish to use a “0.x”
manifest “version:” field. See Version for details on this feature.
Bug fixes:
• West no longer exits with a successful error code when sent an interrupt signal. Instead, it exits
with a platform-specific error code and signals to the calling environment that the process was
interrupted.
v1.0.0
v0.14.0
Bug fixes:
• West commands that were run with a bad local configuration file dumped stack in a confusing way.
This has been fixed and west now prints a sensible error message in this case.
• A bug in the way west looks for the zephyr repository was fixed. The bug itself usually appeared
when running an extension command like west build in a new workspace for the first time; this
used to fail (just for the first time, not on subsequent command invocations) unless you ran the
command in the workspace’s top level directory.
• West now prints sensible error messages when the user lacks permission to open the manifest file
instead of dumping stack traces.
API changes:
• The west.manifest.MalformedConfig exception type has been moved to the west.
configuration module
• The west.configuration.Configuration class now raises MalformedConfig instead of
RuntimeError in some cases
v0.13.1
Bug fix:
• When calling west.manifest.Manifest.from_file() when outside of a workspace, west again falls
back on the ZEPHYR_BASE environment variable to locate the workspace.
v0.13.0
New features:
• You can now associate arbitrary user data with the manifest repository itself in the manifest:
self: userdata: value, like so:
manifest:
self:
userdata: <any YAML value can go here>
Bug fixes:
• The path to the manifest repository reported by west could be incorrect in certain circumstances
detailed in [issue #572](https://github.com/zephyrproject-rtos/west/issues/572). This has been
fixed as part of a larger overhaul of path handling support in the west.manifest API module.
• The west.manifest.ManifestProject.__repr__ return value was fixed.
API changes:
• west.configuration.Configuration: new object-oriented interface to the current configuration.
This reflects the system, global, and workspace-local configuration values, and allows you to read,
write, and delete configuration options from any or all of these locations.
• west.commands.WestCommand:
– config: new attribute, returns a Configuration object or aborts the program if none is set.
This is always usable from within extension command do_run() implementations.
– has_config: new boolean attribute, which is True if and only if reading self.config will
not abort the program.
• The path handling in the west.manifest package has been overhauled in a backwards-
incompatible way. For more details, see commit [56cfe8d1d1](https://github.com/
zephyrproject-rtos/west/commit/56cfe8d1d1f3c9b45de3e793c738acd62db52aca).
• west.manifest.Manifest.validate(): this now returns the validated data as a Python dict. This
can be useful if the value passed to this function was a str, and the dict is desired.
• west.manifest.Manifest: new:
– path attributes abspath, posixpath, relative_path, yaml_path, repo_path,
repo_posixpath
– userdata attribute, which contains the parsed value from manifest: self: userdata:,
or is None
– from_topdir() factory method
• west.manifest.ManifestProject: new userdata attribute, which also contains the parsed value
from manifest: self: userdata:, or is None
• west.manifest.ManifestImportFailed: the constructor can now take any value; this can be used
to reflect failed imports from a map or other compound value.
• Deprecated configuration APIs:
The following APIs are now deprecated in favor of using a Configuration object. Usually this
will be done via self.config from a WestCommand instance, but this can be done directly by
instantiating a Configuration object for other usages.
– west.configuration.config
– west.configuration.read_config
– west.configuration.update_config
– west.configuration.delete_config
v0.12.0
New features:
• West now works on the MSYS2 platform.
• West manifest files can now contain arbitrary user data associated with each project. See Repository
user data for details.
Bug fixes:
• The west list command’s {sha} format key has been fixed for the manifest repository; it now
prints N/A (“not applicable”) as expected.
API changes:
• The west.manifest.Project.userdata attribute was added to support project user data.
v0.11.1
New features:
• west status now only prints output for projects which have a nonempty status.
Bug fixes:
• The manifest file parser was incorrectly allowing project names which contain the path separator
characters / and \. These invalid characters are now rejected.
Note: if you need to place a project within a subdirectory of the workspace topdir, use the
path: key. If you need to customize a project’s fetch URL relative to its remote url-base:, use
repo-path:. See Projects for examples.
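A sketch of the two keys from the note (all names and the URL are illustrative):

```yaml
manifest:
  remotes:
    - name: my-remote
      url-base: https://example.com/org
  projects:
    - name: proj
      remote: my-remote
      path: nested/dir/proj        # where the clone is placed in the workspace
      repo-path: actual-repo-name  # fetch URL becomes <url-base>/actual-repo-name
```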
• The changes made in west v0.10.1 to the west init --manifest-rev option which selected the
default branch name were leaving the manifest repository in a detached HEAD state. This has
been fixed by using git clone internally instead of git init and git fetch. See issue #522 for
details.
• The WEST_CONFIG_LOCAL environment variable now correctly overrides the default location,
<workspace topdir>/.west/config.
• west update --fetch=smart (smart is the default) now correctly skips fetches for project revi-
sions which are lightweight tags (it already worked correctly for annotated tags; only lightweight
tags were unnecessarily fetched).
Other changes:
• The fix for issue #522 mentioned above introduces a new restriction. The west init
--manifest-rev option value, if given, must now be either a branch or a tag. In particular,
“pseudo-branches” like GitHub’s pull/1234/head references which could previously be used to
fetch a pull request can no longer be passed to --manifest-rev. Users must now fetch and check
out such revisions manually after running west init.
API changes:
• west.manifest.Manifest.get_projects() avoids incorrect results in some edge cases described
in issue #523.
• west.manifest.Project.sha() now works correctly for tag revisions. (This applies to both
lightweight and annotated tags.)
v0.11.0
New features:
• west update now supports --narrow, --name-cache, and --path-cache options. These can be
influenced by the update.narrow, update.name-cache, and update.path-cache Configuration
options. These can be used to optimize the speed of the update.
• west update now supports a --fetch-opt option that will be passed to the git fetch command
used to fetch remote revisions when updating each project.
Bug fixes:
• west update now synchronizes Git submodules in projects by default. This avoids issues if the
URL changes in the manifest file from when the submodule was first initialized. This behavior can
be disabled by setting the update.sync-submodules configuration option to false.
Other changes:
• the west-apis-manifest module has fixed docstrings for the Project class
v0.10.1
New features:
• The west init command’s --manifest-rev (--mr) option no longer defaults to master. Instead, the
command will query the repository for its default branch name and use that instead. This allows
users to move from master to main without breaking scripts that do not provide this option.
v0.10.0
New features:
• The name key in a project’s submodules list is now optional.
Bug fixes:
• West now checks that the manifest schema version is one of the explicitly allowed values
documented in Version. The old behavior was just to check that the schema version was newer than
the west version where the manifest: version: key was introduced. This incorrectly allowed
invalid schema versions, like 0.8.2.
Other changes:
• A manifest file’s group-filter is now propagated through an import. This is a change from how
west v0.9.x handled this. In west v0.9.x, only the top level manifest file’s group-filter had any
effect; the group filter lists from any imported manifests were ignored.
Starting with west v0.10.0, the group filter lists from imported manifests are also imported. For
details, see Group Filters and Imports.
The new behavior will take effect if manifest: version: is not given or is at least 0.10. The old
behavior is still available in the top level manifest file only with an explicit manifest: version:
0.9. See Version for more information on schema versions.
See west pull request #482 for the motivation for this change and additional context.
v0.9.1
Bug fixes:
• Commands like west manifest --resolve now correctly include group and group filter informa-
tion.
Other changes:
• West now warns if you combine import with group-filter. Semantics for this combination have
changed starting with v0.10.x. See the v0.10.0 release notes above for more information.
v0.9.0
Warning: The west config fix described below comes at a cost: any comments or other manual
edits in configuration files will be removed when setting a configuration option via that command or
the west.configuration API.
Warning: Combining the group-filter feature introduced in this release with manifest imports is
discouraged. The resulting behavior has changed in west v0.10.
New features:
• West manifests now support Git Submodules in Projects. This allows you to clone Git submodules
into a west project repository in addition to the project repository itself.
• West manifests now support Project Groups. Project groups can be enabled and disabled to de-
termine what projects are “active”, and therefore will be acted upon by the following commands:
west update, west list, west diff, west status, west forall.
• west update no longer updates inactive projects by default. It now supports a --group-filter
option which allows for one-time modifications to the set of enabled and disabled project groups.
• Running west list, west diff, west status, or west forall with no arguments does not print
information for inactive projects by default. If the user specifies a list of projects explicitly at the
command line, output for them is included regardless of whether they are active.
These commands also now support --all arguments to include all projects, even inactive ones.
• west list now supports a {groups} format string key in its --format argument.
Bug fixes:
• The west config command and west.configuration API did not correctly store some configura-
tion values, such as strings which contain commas. This has been fixed; see commit 36f3f91e for
details.
• A manifest file with an empty manifest: self: path: value is invalid, but west used to let it
pass silently. West now rejects such manifests.
• A bug affecting the behavior of the west init -l . command was fixed; see issue #435.
API changes:
• added west.manifest.Manifest.is_active()
• added west.manifest.Manifest.group_filter
• added submodules attribute to west.manifest.Project, which has newly added type west.
manifest.Submodule
Other changes:
• The Manifest Imports feature now supports the terms allowlist and blocklist instead of
whitelist and blacklist, respectively.
The old terms are still supported for compatibility, but the documentation has been updated to use
the new ones exclusively.
v0.8.0
This is a feature release which changes the manifest schema by adding support for a path-prefix: key
in an import: mapping, along with some other features and fixes.
• Manifest import mappings now support a path-prefix: key, which places the project and its im-
ported repositories in a subdirectory of the workspace. See Example 3.4: Import into a subdirectory
for an example.
• The west command line application can now also be run using python3 -m west. This makes it
easier to run west under a particular Python interpreter without modifying the PATH environment
variable.
• west manifest --path prints the absolute path to west.yml
• west init now supports an --mf foo.yml option, which initializes the workspace using foo.yml
instead of west.yml.
• west list now prints the manifest repository’s path using the manifest.path configuration option,
which may differ from the self: path: value in the manifest data. The old behavior is still
available, but requires passing a new --manifest-path-from-yaml option.
• Various Python API changes; see west-apis for details.
v0.7.3
v0.7.2
v0.7.1
v0.7.0
The main user-visible feature in west 0.7 is the Manifest Imports feature. This allows users to load west
manifest data from multiple different files, resolving the results into a single logical manifest.
Additional user-visible changes:
• The idea of a “west installation” has been renamed to “west workspace” in this documentation and
in the west API documentation. The new term seems to be easier for most people to work with
than the old one.
• West manifests now support a schema version.
• The “west config” command can now be run outside of a workspace, e.g. to run west config
--global section.key value to set a configuration option’s value globally.
• There is a new west topdir command, which prints the root directory of the current west workspace.
• The west -vv init command now prints the git operations being performed, and their results.
• The restriction that no project can be named “manifest” is now enforced; the name “manifest” is reserved for the manifest repository, and can be used to refer to it in commands like west list manifest (previously, west list path-to-manifest-repository was the only way to refer to it)
• It’s no longer an error if there is no project named “zephyr”. This is part of an effort to make west
generally usable for non-Zephyr use cases.
• Various bug fixes.
The developer-visible changes to the west-apis are:
• west.build and west.cmake: deprecated; this is Zephyr-specific functionality and should never
have been part of west. Since Zephyr v1.14 LTS relies on it, it will continue to be included in the
distribution, but will be removed when that version of Zephyr is obsoleted.
• west.commands:
– WestCommand.requires_installation: deprecated; use requires_workspace instead
– WestCommand.requires_workspace: new
– WestCommand.has_manifest: new
– WestCommand.manifest: this is now settable
• west.configuration: callers can now identify the workspace directory when reading and writing
configuration files
• west.log:
– msg(): new
• west.manifest:
– The module now uses the standard logging module instead of west.log
– QUAL_REFS_WEST: new
– SCHEMA_VERSION: new
– Defaults: removed
– Manifest.as_dict(): new
– Manifest.as_frozen_yaml(): new
– Manifest.as_yaml(): new
– Manifest.from_file() and from_data(): these factory methods are more flexible to use and less
reliant on global state
– Manifest.validate(): new
– ManifestImportFailed: new
– ManifestProject: semi-deprecated and will likely be removed later.
– Project: the constructor now takes a topdir argument
– Project.format() and its callers are removed. Use f-strings instead.
– Project.name_and_path: new
– Project.remote_name: new
– Project.sha() now captures stderr
– Remote: removed
West now requires Python 3.6 or later. Additionally, some features may rely on Python dictionaries being
insertion-ordered; this is only an implementation detail in CPython 3.6, but it is part of the language
specification as of Python 3.7.
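A quick illustration of the dictionary-ordering guarantee mentioned above (the keys form a made-up, manifest-like mapping):

```python
# As of Python 3.7 the language guarantees that dicts preserve
# insertion order (CPython 3.6 already behaved this way as an
# implementation detail), so mappings loaded from a manifest
# keep their key order.
manifest = {"remotes": [], "projects": [], "self": {"path": "zephyr"}}
manifest["group-filter"] = []  # later insertions stay last

assert list(manifest) == ["remotes", "projects", "self", "group-filter"]
```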
v0.6.3
This point release fixes an error in the behavior of the deprecated west.cmake module.
v0.6.2
This point release fixes an error in the behavior of west update --fetch=smart, introduced in v0.6.1.
All v0.6.1 users must upgrade.
v0.6.1
Warning: Do not use this point release. Make sure to use v0.6.2 instead.
v0.6.0
• No separate bootstrapper
In west v0.5.x, the program was split into two components, a bootstrapper and a per-installation
clone. See Multiple Repository Management in the v1.14 documentation for more details.
This was similar to how Google’s Repo tool works, and let west iterate quickly at first. It caused
confusion, however, and west is now stable enough to be distributed entirely as one piece via PyPI.
From v0.6.x onwards, all of the core west commands and helper classes are part of the west package
distributed via PyPI. This eliminates complexity and makes it possible to import west modules from
anywhere in the system, not just extension commands.
• The selfupdate command still exists for backwards compatibility, but now simply exits after print-
ing an error message.
• Manifest syntax changes
– A west manifest file’s projects elements can now specify their fetch URLs directly, like so:
manifest:
projects:
- name: example-project-name
url: https://github.com/example/example-project
Project elements with url attributes set in this way may not also have remote attributes.
– Project names must be unique: this restriction is needed to support future work, but was not
possible in west v0.5.x because distinct projects may have URLs with the same final pathname
component, like so:
manifest:
remotes:
- name: remote-1
url-base: https://github.com/remote-1
- name: remote-2
url-base: https://github.com/remote-2
projects:
- name: project
remote: remote-1
path: remote-1-project
- name: project
remote: remote-2
path: remote-2-project
These manifests can now be written with projects that use url instead of remote, like so:
manifest:
projects:
- name: remote-1-project
url: https://github.com/remote-1/project
- name: remote-2-project
url: https://github.com/remote-2/project
• The west list command now supports a {sha} format string key
• The default format string for west list was changed to "{name:12} {path:28} {revision:40}
{url}".
• The command west manifest --validate can now be run to load and validate the current man-
ifest file, among other error-handling fixes related to manifest parsing.
• Incompatible changes were made to west’s APIs. Further changes are expected until API stability is
declared in west v1.0.
– The west.manifest.Project constructor’s remote and defaults positional arguments are
now kwargs. A new url kwarg was also added; if given, the Project URL is set to that value,
and the remote kwarg is ignored.
– west.manifest.MANIFEST_SECTIONS was removed. There is only one section now, namely
manifest. The sections kwargs in the west.manifest.Manifest factory methods and con-
structor were also removed.
– The west.manifest.SpecialProject class was removed. Use west.manifest.
ManifestProject instead.
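The default format string for west list shown above is interpreted with Python's str.format() field widths; a small sketch with made-up project values:

```python
# Each {field:N} is left-aligned and padded to a minimum of N
# characters, which is what produces west list's aligned columns.
fmt = "{name:12} {path:28} {revision:40} {url}"
line = fmt.format(
    name="zephyr",
    path="zephyr",
    revision="v2.5.0",
    url="https://github.com/zephyrproject-rtos/zephyr",
)
print(line)
```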
v0.5.x
West v0.5.x is the first version used widely by the Zephyr Project as part of its v1.14 Long-Term Support
(LTS) release. The west v0.5.x documentation is available as part of the Zephyr’s v1.14 documentation.
Tags in the west repository before v0.5.x are prototypes which are of historical interest only.
This page covers common issues with west and how to solve them.
One good way to troubleshoot fetching issues is to run west update in verbose mode, like this:
west -v update
The output includes the Git commands run by west and their output. Look for the git fetch command for
the project that is failing to update; that command is what needs to succeed.
One strategy is to go to /path/to/your/project, copy/paste and run the entire git fetch command,
then debug from there using the documentation for your credential storage helper.
If you’re behind a corporate firewall and may have proxy or other issues, curl -v FETCH_URL (for HTTPS
URLs) or ssh -v FETCH_URL (for SSH URLs) may be helpful.
If you can get the git fetch command to run successfully without prompting for a password when you
run it directly, you will be able to run west update without entering your password in that same shell.
“‘west’ is not recognized as an internal or external command, operable program or batch file.”
On Windows, this means that either west is not installed, or your PATH environment variable does not
contain the directory where pip installed west.exe.
First, make sure you’ve installed west; see Installing west. Then try running west from a new cmd.exe
window. If that still doesn’t work, keep reading.
You need to find the directory containing west.exe, then add it to your PATH. (This PATH change should
have been done for you when you installed Python and pip, so ordinarily you should not need to follow
these steps.)
Run pip3 show west in cmd.exe. Then:
1. Look for a line in the output that looks like Location: C:\foo\python\python38\lib\
site-packages. The exact location will be different on your computer.
2. Look for a file named west.exe in the scripts directory C:\foo\python\python38\scripts.
Important: Notice how lib\site-packages in the pip3 show output was changed to scripts!
3. If you see west.exe in the scripts directory, add the full path to scripts to your PATH using a
command like setx PATH "%PATH%;C:\foo\python\python38\scripts". Do not just copy/paste this
command: the scripts directory location will be different on your system.
4. Close your cmd.exe window and open a new one. You should be able to run west.
This error occurs on some Linux distributions after upgrading to west 0.7.0 or later from 0.6.x. For
example:
$ west update
[... stack trace ...]
TypeError: __init__() got an unexpected keyword argument 'requires_workspace'
This appears to be a problem with the distribution’s pip; see this comment in west issue 373 for details.
Some versions of Ubuntu and Linux Mint are known to have this problem. Some users report issues on
Fedora as well.
Neither macOS nor Windows users have reported this issue. There have been no reports of this issue on
other Linux distributions, like Arch Linux, either.
Workaround 1: remove the old version, then upgrade (for example, with pip3 uninstall west followed by pip3 install west).
If you see an unexpected error like this when trying to run a Zephyr extension command (like west flash,
west build, etc.):
The most likely cause is that you’re running the command outside of a west workspace. West needs to
know where your workspace is to find Extensions.
To fix this, you have two choices:
1. Run the command from inside a workspace (e.g. the zephyrproject directory you created when
you got started).
For example, create your build directory inside the workspace, or run west flash --build-dir
YOUR_BUILD_DIR from inside the workspace.
2. Set the ZEPHYR_BASE environment variable and re-run the west extension command. If set, west
will use ZEPHYR_BASE to find your workspace.
If you’re unsure whether a command is built-in or an extension, run west help from inside your
workspace. The output prints extension commands separately, and looks like this for mainline Zephyr:
$ west help
This error means you have an old version of west installed, and are trying to use it in a workspace that
requires a more recent version.
The easiest way to resolve this issue is to upgrade west and retry as follows:
1. Install the latest west with the -U option for pip3 install as shown in Installing west.
2. Back up any contents of zephyrproject/.west/config that you want to save. (If you don’t have
any configuration options set, it’s safe to skip this step.)
3. Completely remove the zephyrproject/.west directory (if you don’t, you will get the “already in
a workspace” error message discussed next).
4. Run west init again.
“already in an installation”
You may see this error when running west init with west 0.6.
If this is unexpected and you’re really trying to create a new west workspace, then it’s likely that west is
using the ZEPHYR_BASE environment variable to locate a workspace elsewhere on your system.
This is intentional; it allows you to put your Zephyr applications in any directory and still use west to
build, flash, and debug them, for example.
To resolve this issue, unset ZEPHYR_BASE and try again.
2.10.4 Basics
This page introduces west’s basic concepts and provides references to further reading.
West’s built-in commands allow you to work with projects (Git repositories) under a common workspace
directory.
Example workspace
If you’ve followed the upstream Zephyr getting started guide, your workspace looks like this:
Workspace concepts
Here are the basic concepts you should understand about this structure. Additional details are in
Workspaces.
topdir
Above, zephyrproject is the name of the workspace’s top level directory, or topdir. (The name
zephyrproject is just an example – it could be anything, like z, my-zephyr-workspace, etc.)
You’ll typically create the topdir and a few other files and directories using west init.
.west directory
The topdir contains the .west directory. When west needs to find the topdir, it searches for .west and
uses its parent directory. The search starts from the current working directory, falling back to the
location given by the ZEPHYR_BASE environment variable if that search fails.
configuration file
The file .west/config is the workspace’s local configuration file.
manifest repository
Every west workspace contains exactly one manifest repository, which is a Git repository containing
a manifest file. The location of the manifest repository is given by the manifest.path configuration
option in the local configuration file.
For upstream Zephyr, zephyr is the manifest repository, but you can configure west to use any Git
repository in the workspace as the manifest repository. The only requirement is that it contains a
valid manifest file. See Topologies supported for information on other options, and West Manifests
for details on the manifest file format.
manifest file
The manifest file is a YAML file that defines projects, which are the additional Git repositories in
the workspace managed by west. The manifest file is named west.yml by default; this can be
overridden using the manifest.file local configuration option.
You use the west update command to update the workspace’s projects based on the contents of the
manifest file.
projects
Projects are Git repositories managed by west. Projects are defined in the manifest file and can be
located anywhere inside the workspace. In the above example workspace, zcbor and net-tools
are projects.
By default, the Zephyr build system uses west to get the locations of all the projects in the
workspace, so any code they contain can be used as Modules (External projects). Note however
that modules and projects are conceptually different.
extensions
Any repository known to west (either the manifest repository or any project repository) can define
Extensions. Extensions are extra west commands you can run when using that workspace.
The zephyr repository uses this feature to provide Zephyr-specific commands like west build. Defin-
ing these as extensions keeps west’s core agnostic to the specifics of any workspace’s Zephyr version,
etc.
ignored files
A workspace can contain additional Git repositories or other files and directories not managed by
west. West basically ignores anything in the workspace except .west, the manifest repository, and
the projects specified in the manifest file.
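The topdir search described above (find .west, use its parent directory) can be sketched in Python; this is a simplified illustration, not west's actual implementation:

```python
import os
from typing import Optional

def find_topdir(start: str) -> Optional[str]:
    """Walk upward from 'start' until a directory containing .west
    is found; return that directory (the topdir), or None."""
    cur = os.path.abspath(start)
    while True:
        if os.path.isdir(os.path.join(cur, ".west")):
            return cur
        parent = os.path.dirname(cur)
        if parent == cur:  # reached the filesystem root
            return None
        cur = parent
```

West additionally retries the search from ZEPHYR_BASE when this walk fails, as described above.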
The two most important workspace-related commands are west init and west update.
Important: West doesn’t change your manifest repository contents after west init is run. Use ordinary
Git commands to pull new versions, etc.
Running a command like west init -m https://github.com/zephyrproject-rtos/zephyr --mr v2.5.0 zephyrproject will:
1. Create the topdir, zephyrproject, along with .west and .west/config inside it
2. Clone the manifest repository from https://github.com/zephyrproject-rtos/zephyr, placing it into
zephyrproject/zephyr
3. Check out the v2.5.0 git tag in your local zephyr clone
4. Set manifest.path to zephyr in .west/config
5. Set manifest.file to west.yml
Your workspace is now almost ready to use; you just need to run west update to clone the rest of the
projects into the workspace to finish.
For more details, see west init.
west update basics
This command makes sure your workspace contains Git repositories matching the projects in the manifest
file.
Important: Whenever you check out a different revision in your manifest repository, you should run
west update to make sure your workspace contains the project repositories the new revision expects.
The west update command reads the manifest file’s contents by:
1. Finding the topdir. In the west init example above, that means finding zephyrproject.
2. Loading .west/config in the topdir to read the manifest.path (e.g. zephyr) and manifest.file
(e.g. west.yml) options.
3. Loading the manifest file given by these options (e.g. zephyrproject/zephyr/west.yml).
It then uses the manifest file to decide where missing projects should be placed within the workspace,
what URLs to clone them from, and what Git revisions should be checked out locally. Project repositories
which already exist are updated in place by fetching and checking out their respective Git revisions in
the manifest file.
For more details, see west update.
Zephyr Extensions
Troubleshooting
This page describes west’s built-in commands, some of which were introduced in Basics, in more detail.
Some commands are related to Git commands with the same name, but operate on the entire workspace.
For example, west diff shows local changes in multiple Git repositories in the workspace.
Some commands take projects as arguments. These arguments can be project names as specified in
the manifest file, or (as a fallback) paths to them on the local file system. Omitting project arguments
to commands which accept them (such as west list, west forall, etc.) usually defaults to using all
projects in the manifest file plus the manifest repository itself.
For additional help, run west <command> -h (e.g. west init -h).
west init
The new workspace is created in the given directory, creating a new .west inside this directory. You
can give the manifest URL using the -m switch, the initial revision to check out using --mr, and the
location of the manifest file within the repository using --mf.
For example, running west init -m https://github.com/zephyrproject-rtos/zephyr --mr v1.14.0 zp
would clone the upstream official zephyr repository into zp/zephyr and check out the v1.14.0 release.
This command creates zp/.west and sets the manifest.path configuration option to zephyr to record
the location of the manifest repository in the workspace. The default manifest file location is used.
The -m option defaults to https://github.com/zephyrproject-rtos/zephyr. The --mf option defaults
to west.yml. Since west v0.10.1, west will use the default branch in the manifest repository unless the
--mr option is used to override it. (In prior versions, --mr defaulted to master.)
If no directory is given, the current working directory is used.
Option 2: to create a workspace around an existing local manifest repository, use west init -l directory.
This creates .west next to directory in the file system, and sets manifest.path to directory.
As above, --mf defaults to west.yml.
Reconfiguring the workspace:
If you change your mind later, you are free to change manifest.path and manifest.file using west
config after running west init. Just be sure to run west update afterwards to update your workspace
to match the new manifest file.
west update
For each project defined in the manifest file, west update:
1. Initializes a local Git repository for the project in the workspace, if it does not already exist
2. Inspects the project’s revision field in the manifest, and fetches it from the remote if it is not
already available locally
3. Sets the project’s manifest-rev branch to the commit specified by the revision in the previous step
4. Checks out manifest-rev in the local working copy as a detached HEAD
5. If the manifest file specifies a submodules key for the project, recursively updates the project’s
submodules as described below.
To avoid unnecessary fetches, west update will not fetch project revision values which are Git SHAs or
tags that are already available locally. This is the behavior when the -f (--fetch) option has its default
value, smart. To force this command to fetch from project remotes even if the revisions appear to be
available locally, either use -f always or set the update.fetch configuration option to always. SHAs
may be given as unique prefixes as long as they are acceptable to Git.
If the project revision is a Git ref that is neither a tag nor a SHA (i.e. if the project is tracking a branch),
west update always fetches, regardless of -f and update.fetch.
Some branch names might look like short SHAs, like deadbeef. West treats these like SHAs. You can dis-
ambiguate by prefixing the revision value with refs/heads/, e.g. revision: refs/heads/deadbeef.
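As a manifest fragment (the project name and URL here are made up for illustration):

```yaml
manifest:
  projects:
    - name: some-project
      url: https://git.example.com/some-project
      # "deadbeef" alone would be treated as a SHA prefix; the
      # refs/heads/ prefix makes it unambiguously a branch name.
      revision: refs/heads/deadbeef
```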
For safety, west update uses git checkout --detach to check out a detached HEAD at the manifest
revision for each updated project, leaving behind any branches which were already checked out. This is
typically a safe operation that will not modify any of your local branches.
However, if you had added some local commits onto a previously detached HEAD checked out by west,
then git will warn you that you’ve left behind some commits which are no longer referred to by any
branch. These may be garbage-collected and lost at some point in the future. To avoid this if you
have local commits in the project, make sure you have a local branch checked out before running west
update.
If you would rather rebase any locally checked out branches instead, use the -r (--rebase) option.
If you would like west update to keep local branches checked out as long as they point to commits that
are descendants of the new manifest-rev, use the -k (--keep-descendants) option.
Note: west update --rebase will fail in projects that have git conflicts between your branch and new
commits brought in by the manifest. You should immediately resolve these conflicts as you usually do
with git, or you can use git -C <project_path> rebase --abort to ignore incoming changes for the
moment.
With a clean working tree, a plain west update never fails because it does not try to hold on to your
commits and simply leaves them aside.
west update --keep-descendants offers an intermediate option that never fails either but does not
treat all projects the same:
• in projects where your branch diverged from the incoming commits, it does not even try to rebase
and leaves your branches behind just like a plain west update does;
• in all other projects where no rebase or merge is needed it keeps your branches in place.
For example, running west update --group-filter=+foo,-bar would behave the same way as if you
had temporarily appended the string "+foo,-bar" to the value of manifest.group-filter, run west
update, then restored manifest.group-filter to its original value.
Note that using the syntax --group-filter=VALUE instead of --group-filter VALUE avoids issues pars-
ing command line options if you just want to disable a single group, e.g. --group-filter=-bar.
Submodule update procedure:
If a project in the manifest has a submodules key, the submodules are updated as follows, depending on
the value of the submodules key.
If the project has submodules: true, west first synchronizes the project’s submodules with:
West then runs one of the following in the project repository, depending on whether you run west
update with the --rebase option or without it:
Otherwise, the project has submodules: <list-of-submodules>. In this case, west synchronizes the
project’s submodules with:
Then it updates each submodule in the list as follows, depending on whether you run west update with
the --rebase option or without it:
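The exact commands are not shown above; the sequence is presumably along these lines (a sketch based on standard git submodule usage, demonstrated in a throwaway repository with no submodules, where each command is a harmless no-op):

```shell
# Create a scratch repository so the commands below run self-contained.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .

# Synchronize submodule URLs with .gitmodules:
git submodule sync --recursive

# Update without --rebase (detached checkout of the recorded commits):
git submodule update --init --checkout --recursive

# Update with --rebase:
git submodule update --init --rebase --recursive
```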
The git submodule sync commands are skipped if the update.sync-submodules configuration option
is false.
West has a few more commands for managing the projects in the workspace, which are summarized
here. Run west <command> -h for detailed help.
• west list: print a line of information about each project in the manifest, according to a format
string
• west manifest: manage the manifest file. See Manifest Command.
• west diff: run git diff in local project repositories
• west status: run git status in local project repositories
• west forall: run an arbitrary command in local project repositories
• west compare: compare the state of the workspace against the manifest
2.10.6 Workspaces
This page describes the west workspace concept introduced in Basics in more detail.
West creates and controls a Git branch named manifest-rev in each project. This branch points to
the revision that the manifest file specified for the project at the time west update was last run. Other
workspace management commands may use manifest-rev as a reference point for the upstream revision
as of this latest update. Among other purposes, the manifest-rev branch allows the manifest file to use
SHAs as project revisions.
Although manifest-rev is a normal Git branch, west will recreate and/or reset it on the next update. For
this reason, it is dangerous to check it out or otherwise modify it yourself. For instance, any commits you
manually add to this branch may be lost the next time you run west update. Instead, check out a local
branch with another name, and either rebase it on top of a new manifest-rev, or merge manifest-rev
into it.
Note: West does not create a manifest-rev branch in the manifest repository, since west does not
manage the manifest repository’s branches or revisions.
West also reserves all Git refs that begin with refs/west/ (such as refs/west/foo) for itself in local
project repositories. Unlike manifest-rev, these refs are not regular branches. West’s behavior here is
an implementation detail; users should not rely on these refs’ existence or behavior.
Private repositories
You can use west to fetch from private repositories. There is nothing west-specific about this.
The west update command essentially runs git fetch YOUR_PROJECT_URL when a project’s
manifest-rev branch must be updated to a newly fetched commit. It’s up to your environment to
make sure the fetch succeeds.
You can either enter the password manually or use any of the credential helpers built in to Git. Since Git
has credential storage built in, there is no need for a west-specific feature.
The following sections cover common cases for running west update without having to enter your
password, as well as how to troubleshoot issues.
Fetching via HTTPS
On Windows when fetching from GitHub, recent versions of Git prompt you for
your GitHub password in a graphical window once, then store it for future use (in a default installation).
Passwordless fetching from GitHub should therefore work “out of the box” on Windows after you have
done it once.
In general, you can store your credentials on disk using the “store” git credential helper. See the git-
credential-store manual page for details.
To use this helper for all the repositories in your workspace, run:
To use this helper on just the projects foo and bar, run:
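The commands themselves are omitted above; they presumably run git config credential.helper store in each relevant repository, e.g. via west forall -c. The per-repository effect, demonstrated in a scratch repository:

```shell
# Hypothetical workspace-wide forms (these require a west workspace):
#   west forall -c "git config credential.helper store"
#   west forall -c "git config credential.helper store" foo bar

# What each per-repository invocation does:
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config credential.helper store
git config credential.helper    # prints "store"
```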
On GitHub, you can set up a personal access token to use in place of your account password. (This may
be required if your account has two-factor authentication enabled, and may be preferable to storing your
account password in plain text even if two-factor authentication is disabled.)
You can use the Git credential store to authenticate with a GitHub PAT (Personal Access Token) like so:
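The example is missing above; one way to do this uses git's credential-store plumbing directly to seed the store file (the username and token values here are placeholders):

```shell
# Use a throwaway store file; real use would rely on the default
# ~/.git-credentials written by the "store" helper.
store=$(mktemp)
git credential-store --file "$store" store <<'EOF'
protocol=https
host=github.com
username=YOUR_GITHUB_USERNAME
password=ghp_YOUR_PERSONAL_ACCESS_TOKEN
EOF

# Git can now look the credential up when fetching over HTTPS:
git credential-store --file "$store" get <<'EOF'
protocol=https
host=github.com
EOF
```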
If you don’t want to store any credentials on the file system, you can store them in memory temporarily
using git-credential-cache instead.
If you set up fetching via SSH, you can use Git’s URL rewriting feature. The following command instructs
Git to use SSH URLs for GitHub instead of HTTPS ones:
Fetching via SSH
If your SSH key has no password, fetching should just work. If it does have a
password, you can avoid entering it manually every time using ssh-agent.
On GitHub, see Connecting to GitHub with SSH for details on configuration and key creation.
Project locations
Projects can be located anywhere inside the workspace, but they may not “escape” it.
In other words, project repositories need not be located in subdirectories of the manifest repository or as
immediate subdirectories of the topdir. However, projects must have paths inside the workspace.
You may replace a project’s repository directory within the workspace with a symbolic link to elsewhere
on your computer, but west will not do this for you.
Topologies supported
west-workspace/
    application/       # .git/ project, never modified by west
        CMakeLists.txt
        prj.conf
        src/
            main.c
        west.yml       # main manifest with optional import(s) and override(s)
    modules/
        lib/
            zcbor/     # .git/ project from either the main manifest or some import
Here is an example application/west.yml which uses Manifest Imports, available since west 0.7, to
import Zephyr v2.5.0 and its modules into the application manifest file:
You can still selectively “override” individual Zephyr modules if you use import: in this way; see Example
1.3: Downstream of a Zephyr release, with module fork for an example.
Another way to do the same thing is to copy/paste zephyr/west.yml to application/west.yml, adding
an entry for the zephyr project itself, like this:
(The west-commands key is there for Building, Flashing and Debugging and other Zephyr-specific Extensions.
It is not necessary when using import.)
The main advantage to using import is not having to track the revisions of imported projects separately.
In the above example, using import means Zephyr’s module versions are automatically determined from
the zephyr/west.yml revision, instead of having to be copy/pasted (and maintained) on their own.
west-workspace/
app1/ # .git/ project
CMakeLists.txt
prj.conf
src/
main.c
app2/ # .git/ project
CMakeLists.txt
prj.conf
src/
main.c
manifest-repo/ # .git/ never modified by west
west.yml # main manifest with optional import(s) and override(s)
modules/
lib/
zcbor/ # .git/ project from either the main manifest or
# from some import
Here is an example T3 manifest-repo/west.yml which uses Manifest Imports, available since west 0.7,
to import Zephyr v2.5.0 and its modules, then add the app1 and app2 projects:
manifest:
remotes:
- name: zephyrproject-rtos
url-base: https://github.com/zephyrproject-rtos
- name: your-git-server
url-base: https://git.example.com/your-company
defaults:
remote: your-git-server
projects:
- name: zephyr
remote: zephyrproject-rtos
revision: v2.5.0
import: true
- name: app1
revision: SOME_SHA_OR_BRANCH_OR_TAG
- name: app2
revision: ANOTHER_SHA_OR_BRANCH_OR_TAG
self:
path: manifest-repo
You can also do this “by hand” by copy/pasting zephyr/west.yml as shown above for the T2 topology,
with the same caveats.
This page contains detailed information about west’s multiple repository model, manifest files, and
the west manifest command. For API documentation on the west.manifest module, see west-apis-
manifest. For a more general introduction and command overview, see Basics.
West’s view of the repositories in a west workspace, and their history, looks like the following figure
(though some parts of this example are specific to upstream Zephyr’s use of west):
The history of the manifest repository is the line of Git commits which is “floating” on top of the gray
plane. Parent commits point to child commits using solid arrows. The plane below contains the Git
commit history of the repositories in the workspace, with each project repository boxed in by a rectangle.
Parent/child commit relationships in each repository are also shown with solid arrows.
The commits in the manifest repository (again, for upstream Zephyr this is the zephyr repository itself)
each have a manifest file. The manifest file in each commit specifies the corresponding commits which
it expects in each of the project repositories. This relationship is shown using dotted line arrows in the
diagram. Each dotted line arrow points from a commit in the manifest repository to a corresponding
commit in a project repository.
Notice the following important details:
• Projects can be added (like P1 between manifest repository commits D and E) and removed (P2
between the same manifest repository commits)
• Project and manifest repository histories don’t have to move forwards or backwards together:
– P2 stays the same from A → B, as do P1 and P3 from F → G.
– P3 moves forward from A → B.
– P3 moves backward from C → D.
One use for moving backward in project history is to “revert” a regression by going back to a
revision before it was introduced.
• Project repository commits can be “skipped”: P3 moves forward multiple commits in its history
from B → C.
• In the above diagram, no project repository has two revisions “at the same time”: every manifest
file refers to exactly one commit in the projects it cares about. This can be relaxed by using a
branch name as a manifest revision, at the cost of losing the ability to bisect manifest repository
history.
Manifest Files
West manifests are YAML files. Manifests have a top-level manifest section with some subsections, like
this:
manifest:
remotes:
# short names for project URLs
projects:
# a list of projects managed by west
defaults:
# default project attributes
self:
# configuration related to the manifest repository itself,
# i.e. the repository containing west.yml
version: "<schema-version>"
group-filter:
# a list of project groups to enable or disable
In YAML terms, the manifest file contains a mapping, with a manifest key. Any other keys and their
contents are ignored (west v0.5 also required a west key, but this is ignored starting with v0.6).
The manifest contains subsections, like defaults, remotes, projects, and self. In YAML terms, the
value of the manifest key is also a mapping, with these “subsections” as keys.
Remotes The remotes subsection contains a sequence which specifies the base URLs where projects
can be fetched from.
Each remotes element has a name and a “URL base”. These are used to form the complete Git fetch URL
for each project. A project’s fetch URL can be set by appending a project-specific path onto a remote URL
base. (As we’ll see below, projects can also specify their complete fetch URLs.)
For example:
manifest:
# ...
remotes:
- name: remote1
url-base: https://git.example.com/base1
- name: remote2
url-base: https://git.example.com/base2
Above, two remotes are given, with names remote1 and remote2. Their URL bases are respectively
https://git.example.com/base1 and https://git.example.com/base2. You can use SSH URL bases
as well; for example, you might use git@example.com:base1 if remote1 supported Git over SSH as well.
Anything acceptable to Git will work.
The remotes keys and their usage are in the following table.
Projects The projects subsection contains a sequence describing the project repositories in the west
workspace. Every project has a unique name. You can specify what Git remote URLs to use when cloning
and fetching the projects, what revisions to track, and where the project should be stored on the local
file system. Note that west projects are different from modules.
Here is an example. We’ll assume the remotes given above.
manifest:
# [... same remotes as above...]
projects:
- name: proj1
remote: remote1
path: extra/project-1
- name: proj2
repo-path: my-path
remote: remote2
revision: v1.3
- name: proj3
url: https://github.com/user/project-three
revision: abcde413a111
In this manifest:
• proj1 has remote remote1, so its Git fetch URL is https://git.example.com/base1/proj1. The
remote url-base is appended with a / and the project name to form the URL.
Locally, this project will be cloned at path extra/project-1 relative to the west workspace’s root
directory, since it has an explicit path attribute with this value.
Since the project has no revision specified, master is used by default. The current tip of this
branch will be fetched and checked out as a detached HEAD when west next updates this project.
• proj2 has a remote and a repo-path, so its fetch URL is https://git.example.com/base2/
my-path. The repo-path attribute, if present, overrides the default name when forming the fetch
URL.
Since the project has no path attribute, its name is used by default. It will be cloned into a directory
named proj2. The commit pointed to by the v1.3 tag will be checked out when west updates the
project.
• proj3 has an explicit url, so it will be fetched from https://github.com/user/project-three.
Its local path defaults to its name, proj3. Commit abcde413a111 will be checked out when it is
next updated.
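The URL rules illustrated by these three projects can be sketched in a few lines of Python. This is an illustrative helper (the function name fetch_url and the dict layout are assumptions, not west's implementation): an explicit url wins, otherwise the remote's URL base is joined with repo-path, falling back to the project name.

```python
# A sketch (not west's code) of how a project's Git fetch URL is derived
# from its manifest attributes, per the rules described above.
def fetch_url(project, remotes):
    if 'url' in project:
        # An explicit url always wins.
        return project['url']
    base = remotes[project['remote']]
    # repo-path overrides the project name when forming the URL;
    # the local checkout directory is a separate attribute (path).
    repo = project.get('repo-path', project['name'])
    return f'{base}/{repo}'

remotes = {
    'remote1': 'https://git.example.com/base1',
    'remote2': 'https://git.example.com/base2',
}

# proj1: remote url-base + "/" + name
fetch_url({'name': 'proj1', 'remote': 'remote1'}, remotes)
# proj2: repo-path replaces the name in the URL
fetch_url({'name': 'proj2', 'remote': 'remote2', 'repo-path': 'my-path'}, remotes)
```

Running this against the three example projects reproduces the URLs given in the bullet list above.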
The available project keys and their usage are in the following table. Sometimes we’ll refer to the
defaults subsection; it will be described next.
Defaults The defaults subsection can provide default values for project attributes. In particular, the
default remote name and revision can be specified here. Another way to write the same manifest we
have been describing so far using defaults is:
manifest:
defaults:
remote: remote1
revision: v1.3
remotes:
- name: remote1
url-base: https://git.example.com/base1
- name: remote2
url-base: https://git.example.com/base2
projects:
- name: proj1
path: extra/project-1
revision: master
- name: proj2
repo-path: my-path
remote: remote2
- name: proj3
url: https://github.com/user/project-three
revision: abcde413a111
The available defaults keys and their usage are in the following table.
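The effect of defaults can be sketched as a simple merge. This helper (apply_defaults is a hypothetical name, not west's code) fills in remote and revision only when the project omits them; master is west's fallback revision when none is given anywhere:

```python
# A sketch (not west's implementation) of how the defaults subsection
# supplies values for attributes a project omits.
def apply_defaults(project, defaults):
    resolved = dict(project)
    if 'url' not in resolved:
        # url-based projects do not use remotes at all
        resolved.setdefault('remote', defaults.get('remote'))
    # "master" is the default revision when none is specified anywhere
    resolved.setdefault('revision', defaults.get('revision', 'master'))
    return resolved

defaults = {'remote': 'remote1', 'revision': 'v1.3'}
```

With these defaults, proj2 keeps its explicit remote2 but picks up revision v1.3, matching the equivalent manifest shown above.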
Self The self subsection can be used to control the manifest repository itself.
As an example, let’s consider this snippet from the zephyr repository’s west.yml:
manifest:
# ...
self:
path: zephyr
west-commands: scripts/west-commands.yml
This ensures that the zephyr repository is cloned into path zephyr, though as explained above
that would have happened anyway if cloning from the default manifest URL, https://github.com/
zephyrproject-rtos/zephyr. Since the zephyr repository does contain extension commands, its self
entry declares the location of the corresponding west-commands.yml relative to the repository root.
The available self keys and their usage are in the following table.
1 In git, HEAD is a reference, whereas HEAD~<n> is a valid revision but not a reference. West fetches references, such as
refs/heads/main or HEAD, and commits not available locally, but will not fetch commits if they are already available. HEAD~0
is resolved to a specific commit that is locally available, and therefore west will simply checkout the locally available commit,
identified by HEAD~0.
Version The version subsection declares that the manifest file uses features which were introduced in
some version of west. Attempts to load the manifest with older versions of west will fail with an error
message that explains the minimum required version of west which is needed.
Here is an example:
manifest:
# Marks that this file uses version 0.10 of the west manifest
# file format.
#
# An attempt to load this manifest file with west v0.8.0 will
# fail with an error message saying that west v0.10.0 or
# later is required.
version: "0.10"
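The behavior described above amounts to a simple version comparison. The following is a hypothetical helper, not west's code, sketching the check under the assumption that versions are dotted integer strings:

```python
# Sketch of the manifest version check: loading fails when the running
# west is older than the version the manifest declares.
def check_manifest_version(required, running):
    parse = lambda v: tuple(int(part) for part in v.split('.'))
    if parse(running) < parse(required):
        raise RuntimeError(f'west v{required} or later is required')

# west v0.8.0 loading a version: "0.10" manifest would fail;
# west v0.10.0 or later succeeds.
```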
The pykwalify schema manifest-schema.yml in the west source code repository is used to validate the
manifest section.
Here is a table with the valid version values, along with information about the manifest file features
that were introduced in that version.
Note: Versions of west without any new features in the manifest file format do not change the list
of valid version values. For example, version: "0.11" is not valid, because west v0.11.x did not
introduce new manifest file format features.
Quoting the version value as shown above forces the YAML parser to treat it as a string. Without quotes,
0.10 in YAML is just the floating point value 0.1. You can omit the quotes if the value is the same when
cast to string, but it’s best to include them. Always use quotes if you’re not sure.
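The pitfall can be seen in plain Python terms: a YAML parser treats an unquoted 0.10 roughly the way float() does, so the trailing zero that distinguishes "0.10" from "0.1" is lost.

```python
# Roughly what a YAML parser does with an unquoted 0.10:
unquoted = float('0.10')
str(unquoted)        # '0.1' -- the trailing zero is gone

# Quoting keeps the exact version string intact:
quoted = '0.10'
```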
If you do not include a version in your manifest, each new release of west assumes that it should try to
load it using the features that were available in that release. This may result in error messages that are
harder to understand if that version of west is too old to load the manifest.
Active and Inactive Projects
Projects defined in the west manifest can be inactive or active. The difference is that an inactive project
is generally ignored by west. For example, west update will not update inactive projects, and west
list will not print information about them by default. As another example, any Manifest Imports in an
inactive project will be ignored by west.
There are two ways to make a project inactive:
1. Using the manifest.project-filter configuration option. If a project is made active or inactive
using this option, then the rules for making a project inactive via its groups are ignored.
That is, if a regular expression in manifest.project-filter applies to a project, the project’s
groups have no effect on whether it is active or inactive.
See the entry for this option in Built-in Configuration Options for details.
2. Otherwise, if a project has groups, and they are all disabled, then the project is inactive.
See the following section for details.
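Rule 2 can be written as a small predicate. This is a sketch under the assumption that the final set of disabled group names is already known (see the following section); the name is_active is illustrative, not west's code:

```python
# Rule 2 above: a project with groups is inactive iff every one of its
# groups is disabled; a project with no groups is always active.
def is_active(project_groups, disabled):
    if not project_groups:
        return True
    return any(g not in disabled for g in project_groups)
```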
Project Groups
You can use the groups and group-filter keys briefly described above to place projects into groups,
and to enable or disable groups.
For example, this lets you run a west forall command only on the projects in the group by using
west forall --group. This can also let you make projects inactive; see the previous section for more
information on inactive projects.
The next section introduces project groups. The following section describes Enabled and Disabled Project
Groups. There are some basic examples in Project Group Examples. Finally, Group Filters and Imports
provides a simplified overview of how group-filter interacts with the Manifest Imports feature.
Groups Basics The groups: and group-filter: keys appear in the manifest like this:
manifest:
projects:
- name: some-project
groups: ...
group-filter: ...
The groups key’s value is a list of group names. Group names are strings.
You can enable or disable project groups using group-filter. Projects whose groups are all disabled,
and which are not otherwise made active by a manifest.project-filter configuration option, are
inactive.
For example, in this manifest fragment:
manifest:
projects:
- name: project-1
groups:
- groupA
- name: project-2
groups:
- groupB
- groupC
- name: project-3
Here, project-1 is in group groupA, project-2 is in groups groupB and groupC, and project-3 has
no groups.
Enabled and Disabled Project Groups All project groups are enabled by default. You can enable or
disable groups in both your manifest file and Configuration.
Within a manifest file, manifest: group-filter: is a YAML list of groups to enable and disable.
To enable a group, prefix its name with a plus sign (+). For example, groupA is enabled in this manifest
fragment:
manifest:
group-filter: [+groupA]
Although this is redundant for groups that are already enabled by default, it can be used to override
settings in an imported manifest file. See Group Filters and Imports for more information.
To disable a group, prefix its name with a dash (-). For example, groupA and groupB are disabled in this
manifest fragment:
manifest:
group-filter: [-groupA,-groupB]
Note: Since group-filter is a YAML list, you could have written this fragment as follows:
manifest:
  group-filter:
    - -groupA
    - -groupB
In addition to the manifest file, you can control which groups are enabled and disabled using the
manifest.group-filter configuration option. This option is a comma-separated list of groups to enable
and/or disable.
To enable a group, add its name to the list prefixed with +. To disable a group, add its name prefixed
with -. For example, setting manifest.group-filter to +groupA,-groupB enables groupA, and disables
groupB.
The value of the configuration option overrides any data in the manifest file. You can think of this as if
the manifest.group-filter configuration option is appended to the manifest: group-filter: list
from YAML, with “last entry wins” semantics.
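This "appended, last entry wins" merge can be sketched as follows. The function name and structure are illustrative, not west's internals; each entry is a +name or -name string:

```python
# Combine the manifest's group-filter list with the
# manifest.group-filter configuration option: the configuration value
# is treated as appended to the YAML list, and the last entry for a
# given group wins.
def disabled_groups(manifest_filter, config_filter):
    state = {}
    for entry in manifest_filter + config_filter:
        state[entry[1:]] = entry.startswith('-')
    return {group for group, off in state.items() if off}
```

For instance, a manifest [-groupA] combined with a configuration value +groupA leaves no groups disabled, which is the situation in Example 5 below.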
Project Group Examples This section contains example situations involving project groups and active
projects. The examples use both manifest: group-filter: YAML lists and manifest.group-filter
configuration lists, to show how they work together.
Note that the defaults and remotes data in the following manifests isn’t relevant except to make the
examples complete and self-contained.
Note: In all of the examples that follow, the manifest.project-filter option is assumed to be unset.
Example 1: no disabled groups The entire manifest file is:
manifest:
projects:
- name: foo
groups:
- groupA
- name: bar
groups:
- groupA
- groupB
- name: baz
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is not set (you can ensure this by running west
config -D manifest.group-filter).
No groups are disabled, because all groups are enabled by default. Therefore, all three projects (foo,
bar, and baz) are active. Note that there is no way to make project baz inactive, since it has no groups.
Example 2: Disabling one group via manifest The entire manifest file is:
manifest:
projects:
- name: foo
groups:
- groupA
- name: bar
groups:
- groupA
- groupB
group-filter: [-groupA]
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is not set (you can ensure this by running west
config -D manifest.group-filter).
Since groupA is disabled, project foo is inactive. Project bar is active, because groupB is enabled.
Example 3: Disabling multiple groups via manifest The entire manifest file is:
manifest:
projects:
- name: foo
groups:
- groupA
- name: bar
groups:
- groupA
- groupB
group-filter: [-groupA,-groupB]
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is not set (you can ensure this by running west
config -D manifest.group-filter).
Both foo and bar are inactive, because all of their groups are disabled.
Example 4: Disabling a group via configuration The entire manifest file is:
manifest:
projects:
- name: foo
groups:
- groupA
- name: bar
groups:
- groupA
- groupB
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is set to -groupA (you can ensure this by running
west config manifest.group-filter -- -groupA; the extra -- is required so the argument parser
does not treat -groupA as a command line option -g with value roupA).
Project foo is inactive because groupA has been disabled by the manifest.group-filter configuration
option. Project bar is active because groupB is enabled.
Example 5: Overriding a disabled group via configuration The entire manifest file is:
manifest:
projects:
- name: foo
- name: bar
groups:
- groupA
- name: baz
groups:
- groupA
- groupB
group-filter: [-groupA]
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is set to +groupA (you can ensure this by running
west config manifest.group-filter +groupA).
In this case, groupA is enabled: the manifest.group-filter configuration option has higher precedence
than the manifest: group-filter: [-groupA] content in the manifest file.
Therefore, projects foo and bar are both active.
Example 6: Overriding multiple disabled groups via configuration The entire manifest file is:
manifest:
projects:
- name: foo
- name: bar
groups:
- groupA
- name: baz
groups:
- groupA
- groupB
group-filter: [-groupA,-groupB]
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is set to +groupA,+groupB (you can ensure this by
running west config manifest.group-filter "+groupA,+groupB").
In this case, both groupA and groupB are enabled, because the configuration value overrides the manifest
file for both groups.
Therefore, projects foo and bar are both active.
Example 7: Disabling multiple groups via configuration The entire manifest file is:
manifest:
projects:
- name: foo
- name: bar
groups:
- groupA
- name: baz
groups:
- groupA
- groupB
defaults:
remote: example-remote
remotes:
- name: example-remote
url-base: https://git.example.com
The manifest.group-filter configuration option is set to -groupA,-groupB (you can ensure this by
running west config manifest.group-filter -- "-groupA,-groupB").
In this case, both groupA and groupB are disabled.
Therefore, projects foo and bar are both inactive.
Group Filters and Imports This section provides a simplified description of how the manifest:
group-filter: value behaves when combined with Manifest Imports. For complete details, see Man-
ifest Import Details.
Warning: The below semantics apply to west v0.10.0 and later. West v0.9.x semantics are different,
and combining group-filter with import in west v0.9.x is discouraged.
In short:
• if you only import one manifest, any groups it disables in its group-filter are also disabled in
your manifest
• you can override this in your manifest file’s manifest: group-filter: value, your workspace’s
manifest.group-filter configuration option, or both
Example 1: disabling a group via an imported manifest You are using this parent/west.yml
manifest:
# parent/west.yml:
manifest:
projects:
- name: child
url: https://git.example.com/child
import: true
- name: project-1
url: https://git.example.com/project-1
groups:
- unstable
# child/west.yml:
manifest:
group-filter: [-unstable]
projects:
- name: project-2
url: https://git.example.com/project-2
- name: project-3
url: https://git.example.com/project-3
groups:
- unstable
Since child/west.yml disables the unstable group and nothing re-enables it, projects project-1 and
project-3 are inactive; only child and project-2 are active.
Example 2: overriding an imported group-filter via manifest You are using this parent/west.yml
manifest:
# parent/west.yml:
manifest:
group-filter: [+unstable,-optional]
projects:
- name: child
url: https://git.example.com/child
import: true
- name: project-1
url: https://git.example.com/project-1
groups:
- unstable
# child/west.yml:
manifest:
group-filter: [-unstable]
projects:
- name: project-2
url: https://git.example.com/project-2
Example 3: overriding an imported group-filter via configuration You are using this parent/
west.yml manifest:
# parent/west.yml:
manifest:
projects:
- name: child
url: https://git.example.com/child
import: true
- name: project-1
url: https://git.example.com/project-1
groups:
- unstable
# child/west.yml:
manifest:
group-filter: [-unstable]
projects:
- name: project-2
url: https://git.example.com/project-2
groups:
- optional
- name: project-3
url: https://git.example.com/project-3
groups:
- unstable
If you run:
west config manifest.group-filter +unstable,-optional
Then only the child, project-1, and project-3 projects are active.
The -unstable group filter in child/west.yml is overridden in the manifest.group-filter configu-
ration option, so the unstable group is enabled. Since project-1 and project-3 are in the unstable
group, they are active.
The same configuration option disables the optional group, so project-2 is inactive.
The final group filter specified by parent/west.yml and the manifest.group-filter configuration op-
tion is [+unstable,-optional].
You can use the submodules key briefly described above to force west update to also handle any Git
submodules configured in a project’s Git repository. The submodules key can appear inside projects, like
this:
manifest:
projects:
- name: some-project
submodules: ...
The submodules key can be a boolean or a list of mappings. We’ll describe these in order.
Option 1: Boolean Setting submodules to true tells west update to handle all of a project’s sub-
modules. For example:
manifest:
projects:
- name: foo
submodules: true
- name: bar
Here, west update will initialize and update all submodules in foo. If bar has any submodules, they are
ignored, because bar does not have a submodules value.
Option 2: List of mappings The submodules key may be a list of mappings, one list element for each
desired submodule. Each listed submodule is updated recursively. You can still track and update unlisted
submodules manually with Git commands; whether present or not, they are completely ignored by west.
The path key must match exactly the path of one submodule relative to its parent west project, as shown
in the output of git submodule status. The name key is optional and not used by west for now; it’s not
passed to git submodule commands either. The name key was briefly mandatory in west version 0.9.0,
but was made optional in 0.9.1.
For example, let’s say you have a source code repository foo, which has many submodules, and you
want west update to keep some but not all of them in sync, along with another project named bar in
the same workspace.
You can do that with this manifest file:
manifest:
projects:
- name: foo
submodules:
- path: path/to/foo-first-sub
- name: foo-second-sub
path: path/to/foo-second-sub
- name: bar
Here, west update will recursively initialize and update just the submodules in foo with paths path/
to/foo-first-sub and path/to/foo-second-sub. Any submodules in bar are still ignored.
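The two forms can be summarized with a small helper. This is a sketch, not west's code (the name submodule_paths is an assumption): boolean true means every submodule, a list of mappings selects only the named paths, and anything else means none.

```python
# Which submodule paths would west update touch for this project?
def submodule_paths(project):
    subs = project.get('submodules', False)
    if subs is True:
        return 'all'        # boolean true: every configured submodule
    if not subs:
        return []           # absent or false: submodules are ignored
    # list of mappings: only the listed paths are handled
    return [mapping['path'] for mapping in subs]
```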
West versions v0.12 and later support an optional userdata key in projects.
West versions v0.13 and later also support this key in the manifest: self: section.
It is meant for consumption by programs that require user-specific project metadata. Beyond parsing it
as YAML, west itself ignores the value completely.
The key’s value is arbitrary YAML. West parses the value and makes it accessible to programs using
west-apis as the userdata attribute of the corresponding west.manifest.Project object.
Example manifest fragment:
manifest:
projects:
- name: foo
- name: bar
userdata: a-string
- name: baz
userdata:
key: value
self:
userdata: blub
You can read these values back through the west APIs, for example:
import west.manifest
manifest = west.manifest.Manifest.from_file()
foo, bar, baz = manifest.get_projects(['foo', 'bar', 'baz'])
foo.userdata # None
bar.userdata # 'a-string'
baz.userdata # {'key': 'value'}
manifest.userdata # 'blub'
Manifest Imports
You can use the import key briefly described above to include projects from other manifest files in your
west.yml. This key can be either a project or self section attribute:
manifest:
projects:
- name: some-project
import: ...
self:
import: ...
You can use a self: import: to load additional files from the repository containing your west.yml. You
can use a project’s import: to load additional files defined in that project’s Git history.
West resolves the final manifest from individual manifest files in this order:
1. imported files in self
2. your west.yml file
3. imported files in projects
During resolution, west ignores projects which have already been defined in other files. For example, a
project named foo in your west.yml makes west ignore other projects named foo imported from your
projects list.
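The "first definition wins" rule follows directly from the resolution order. Here is a sketch of that logic (the resolve name and list-of-dicts shape are assumptions, not west's implementation):

```python
# Manifest files are processed in the resolution order described above
# (self imports, then west.yml, then imports from projects); a project
# name that has already been seen is ignored in later files.
def resolve(*manifest_project_lists):
    projects = {}
    for project_list in manifest_project_lists:
        for project in project_list:
            projects.setdefault(project['name'], project)
    return projects
```

So a foo defined in your own west.yml shadows any foo coming from an imported manifest.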
The import key can be a boolean, path, mapping, or sequence. We’ll describe these in order, using
examples:
• Boolean
– Example 1.1: Downstream of a Zephyr release
– Example 1.2: “Rolling release” Zephyr downstream
– Example 1.3: Downstream of a Zephyr release, with module fork
• Relative path
– Example 2.1: Downstream of a Zephyr release with explicit path
– Example 2.2: Downstream with directory of manifest files
– Example 2.3: Continuous Integration overrides
• Mapping with additional configuration
– Example 3.1: Downstream with name allowlist
– Example 3.2: Downstream with path allowlist
– Example 3.3: Downstream with path blocklist
– Example 3.4: Import into a subdirectory
• Sequence of paths and mappings
– Example 4.1: Downstream with sequence of manifest files
– Example 4.2: Import order illustration
A more formal description of how this works is last, after the examples.
Troubleshooting Note If you’re using this feature and find west’s behavior confusing, try resolving your
manifest to see the final results after imports are done.
Option 1: Boolean If the value is true, west imports projects from the west.yml file in the project’s
root directory, at the project’s manifest revision; false (the default) disables importing from that
project. For example:
manifest:
# ...
projects:
- name: p1
revision: v1.0
import: true # Import west.yml from p1's v1.0 git tag
- name: p2
import: false # Nothing is imported from p2.
- name: p3 # Nothing is imported from p3 either.
It’s an error to set import to either true or false inside self, like this:
manifest:
# ...
self:
import: true # Error
Example 1.1: Downstream of a Zephyr release You have a source code repository you want to use
with Zephyr v1.14.1 LTS. You want to maintain the whole thing using west. You don’t want to modify
any of the mainline repositories.
In other words, the west workspace you want looks like this:
my-downstream/
.west/ # west directory
zephyr/ # mainline zephyr repository
west.yml # the v1.14.1 version of this file is imported
modules/ # modules from mainline zephyr
hal/
[...other directories..]
[ ... other projects ...] # other mainline repositories
my-repo/ # your downstream repository
west.yml # main manifest importing zephyr/west.yml v1.14.1
[...other files..]
# my-repo/west.yml:
manifest:
remotes:
- name: zephyrproject-rtos
url-base: https://github.com/zephyrproject-rtos
projects:
- name: zephyr
remote: zephyrproject-rtos
revision: v1.14.1
import: true
You can then create the workspace on your computer like this, assuming my-repo is hosted at https://
git.example.com/my-repo:
west init -m https://git.example.com/my-repo my-downstream
cd my-downstream
west update
Example 1.2: “Rolling release” Zephyr downstream This is similar to Example 1.1: Downstream of a
Zephyr release, except we’ll use revision: main for the zephyr repository:
# my-repo/west.yml:
manifest:
remotes:
- name: zephyrproject-rtos
url-base: https://github.com/zephyrproject-rtos
projects:
- name: zephyr
remote: zephyrproject-rtos
revision: main
import: true
This time, whenever you run west update, the special manifest-rev branch in the zephyr reposi-
tory will be updated to point at a newly fetched main branch tip from the URL https://github.com/
zephyrproject-rtos/zephyr.
The contents of zephyr/west.yml at the new manifest-rev will then be used to import projects from
Zephyr. This lets you stay up to date with the latest changes in the Zephyr project. The cost is that
running west update will not produce reproducible results, since the remote main branch can change
every time you run it.
It’s also important to understand that west ignores your working tree’s zephyr/west.yml entirely when
resolving imports. West always uses the contents of imported manifests as they were committed to the
latest manifest-rev when importing from a project.
You can only import manifest files from the file system if they are in your manifest repository’s working
tree. See Example 2.2: Downstream with directory of manifest files for an example.
Example 1.3: Downstream of a Zephyr release, with module fork This manifest is similar to the
one in Example 1.1: Downstream of a Zephyr release, except it:
• is a downstream of Zephyr 2.0
• includes a downstream fork of the modules/hal/nordic module which was included in that release
# my-repo/west.yml:
manifest:
remotes:
- name: zephyrproject-rtos
url-base: https://github.com/zephyrproject-rtos
- name: my-remote
url-base: https://git.example.com
projects:
- name: hal_nordic # higher precedence
remote: my-remote
revision: my-sha
path: modules/hal/nordic
- name: zephyr
remote: zephyrproject-rtos
revision: v2.0.0
import: true # imported projects have lower precedence
Option 2: Relative path The import value can also be a relative path to a manifest file or a directory
containing manifest files. The path is relative to the root directory of the repository the import key
appears in: the project’s repository for projects entries, and the manifest repository for self.
Here is an example:
manifest:
projects:
- name: project-1
revision: v1.0
import: west.yml
- name: project-2
revision: main
import: p2-manifests
self:
import: submanifests
Example 2.1: Downstream of a Zephyr release with explicit path This is an explicit way to write an
equivalent manifest to the one in Example 1.1: Downstream of a Zephyr release.
manifest:
remotes:
- name: zephyrproject-rtos
url-base: https://github.com/zephyrproject-rtos
projects:
- name: zephyr
remote: zephyrproject-rtos
revision: v1.14.1
import: west.yml
The setting import: west.yml means to use the file west.yml inside the zephyr project. This example
is contrived, but shows the idea.
This can be useful in practice when the name of the manifest file you want to import is not west.yml.
Example 2.2: Downstream with directory of manifest files Your Zephyr downstream has a lot of
additional repositories. So many, in fact, that you want to split them up into multiple manifest files, but
keep track of them all in a single manifest repository, like this:
my-repo/
submanifests
01-libraries.yml
02-vendor-hals.yml
03-applications.yml
west.yml
You want to add all the files in my-repo/submanifests to the main manifest file, my-repo/west.yml, in
addition to projects in zephyr/west.yml. You want to track the latest development code in the Zephyr
repository’s main branch instead of using a fixed revision.
Here’s how:
# my-repo/west.yml:
manifest:
remotes:
- name: zephyrproject-rtos
url-base: https://github.com/zephyrproject-rtos
projects:
- name: zephyr
remote: zephyrproject-rtos
revision: main
import: true
self:
import: submanifests
Note: The .yml file names are prefixed with numbers in this example to make sure they are imported
in the specified order.
You can pick arbitrary names. West sorts files in a directory by name before importing.
Notice how the manifests in submanifests are imported before my-repo/west.yml and zephyr/west.
yml. In general, an import in the self section is processed before the manifest files in projects and the
main manifest file.
This means projects defined in my-repo/submanifests take highest precedence. For example, if
01-libraries.yml defines hal_nordic, the project by the same name in zephyr/west.yml is simply
ignored. As usual, see Resolving Manifests for troubleshooting advice.
This may seem strange, but it allows you to redefine projects “after the fact”, as we’ll see in the next
example.
Example 2.3: Continuous Integration overrides Your continuous integration system needs to fetch
and test multiple repositories in your west workspace from a developer’s forks instead of your mainline
development trees, to see if the changes all work well together.
Starting with Example 2.2: Downstream with directory of manifest files, the CI scripts add a file 00-ci.yml
in my-repo/submanifests, with these contents:
# my-repo/submanifests/00-ci.yml:
manifest:
projects:
- name: a-vendor-hal
url: https://github.com/a-developer/hal
revision: a-pull-request-branch
- name: an-application
url: https://github.com/a-developer/application
revision: another-pull-request-branch
The CI scripts run west update after generating this file in my-repo/submanifests. The projects defined
in 00-ci.yml have higher precedence than other definitions in my-repo/submanifests, because the
name 00-ci.yml comes before the other file names.
Thus, west update always checks out the developer’s branches in the projects named a-vendor-hal and
an-application, even if those same projects are also defined elsewhere.
Option 3: Mapping The import key can also contain a mapping with the following keys:
• file: Optional. The name of the manifest file or directory to import. This defaults to west.yml if
not present.
• name-allowlist: Optional. If present, a name or sequence of project names to include.
• path-allowlist: Optional. If present, a path or sequence of project paths to match against. This
is a shell-style globbing pattern, currently implemented with pathlib. Note that this means case
sensitivity is platform specific.
• name-blocklist: Optional. Like name-allowlist, but contains project names to exclude rather
than include.
• path-blocklist: Optional. Like path-allowlist, but contains project paths to exclude rather
than include.
• path-prefix: Optional (new in v0.8.0). If given, this will be prepended to the project’s path in the
workspace, as well as the paths of any imported projects. This can be used to place these projects
in a subdirectory of the workspace.
Allowlists override blocklists if both are given. For example, if a project is blocked by path, then allowed
by name, it will still be imported.
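These rules can be sketched as a filter predicate. This is illustrative only, not west's implementation; it assumes each option key holds a list (the docs also allow a single string), and uses pathlib for the shell-style path patterns as the text notes:

```python
from pathlib import PurePosixPath

# Should a project with this name and path be imported, given the
# allowlist/blocklist options above? Allowlists override blocklists.
def imported(name, path, opts):
    def name_match(key):
        return name in opts.get(key, [])
    def path_match(key):
        return any(PurePosixPath(path).match(pat) for pat in opts.get(key, []))
    if name_match('name-allowlist') or path_match('path-allowlist'):
        return True     # allowlisted projects are always imported
    if name_match('name-blocklist') or path_match('path-blocklist'):
        return False
    # with no allowlist given at all, everything not blocked is imported;
    # with an allowlist, only matching projects are imported
    return not (opts.get('name-allowlist') or opts.get('path-allowlist'))
```

For example, with path-allowlist: libraries/*, a project at libraries/lib is imported while one at examples/app is not.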
Example 3.1: Downstream with name allowlist Here is a pair of manifest files, representing a main-
line and a downstream. The downstream doesn’t want to use all the mainline projects, however. We’ll
assume the mainline west.yml is hosted at https://git.example.com/mainline/manifest.
# mainline west.yml:
manifest:
projects:
- name: mainline-app # included
path: examples/app
url: https://git.example.com/mainline/app
- name: lib
path: libraries/lib
url: https://git.example.com/mainline/lib
- name: lib2 # included
path: libraries/lib2
url: https://git.example.com/mainline/lib2
# downstream west.yml:
manifest:
projects:
- name: mainline
url: https://git.example.com/mainline/manifest
import:
name-allowlist:
- mainline-app
- lib2
- name: downstream-app
url: https://git.example.com/downstream/app
- name: lib3
path: libraries/lib3
url: https://git.example.com/downstream/lib3
If an allowlist had not been used, the lib project from the mainline manifest would have been imported.
Example 3.2: Downstream with path allowlist Here is an example showing how to allowlist main-
line’s libraries only, using path-allowlist.
# mainline west.yml:
manifest:
projects:
- name: app
path: examples/app
url: https://git.example.com/mainline/app
- name: lib
path: libraries/lib
url: https://git.example.com/mainline/lib
- name: lib2
path: libraries/lib2
url: https://git.example.com/mainline/lib2
# downstream west.yml:
manifest:
projects:
- name: mainline
url: https://git.example.com/mainline/manifest
import:
path-allowlist: libraries/*
- name: app
url: https://git.example.com/downstream/app
- name: lib3
path: libraries/lib3
url: https://git.example.com/downstream/lib3
An equivalent manifest in a single file would be:
manifest:
projects:
- name: lib # imported
path: libraries/lib
url: https://git.example.com/mainline/lib
- name: lib2 # imported
path: libraries/lib2
url: https://git.example.com/mainline/lib2
- name: mainline
url: https://git.example.com/mainline/manifest
- name: app
url: https://git.example.com/downstream/app
- name: lib3
path: libraries/lib3
url: https://git.example.com/downstream/lib3
Example 3.3: Downstream with path blocklist Here’s an example showing how to block all vendor
HALs from mainline by common path prefix in the workspace, add your own version for the chip you’re
targeting, and keep everything else.
# mainline west.yml:
manifest:
defaults:
remote: mainline
remotes:
- name: mainline
url-base: https://git.example.com/mainline
projects:
- name: app
- name: lib
path: libraries/lib
- name: lib2
path: libraries/lib2
- name: hal_foo
path: modules/hals/foo
- name: hal_bar
path: modules/hals/bar
# downstream west.yml:
manifest:
projects:
- name: mainline
url: https://git.example.com/mainline/manifest
import:
path-blocklist: modules/hals/*
- name: hal_foo
path: modules/hals/foo
url: https://git.example.com/downstream/hal_foo
An equivalent manifest in a single file would be:
manifest:
defaults:
remote: mainline
remotes:
- name: mainline
url-base: https://git.example.com/mainline
projects:
- name: app # imported
- name: lib # imported
path: libraries/lib
- name: lib2 # imported
path: libraries/lib2
- name: mainline
repo-path: manifest
- name: hal_foo
path: modules/hals/foo
url: https://git.example.com/downstream/hal_foo
Example 3.4: Import into a subdirectory You want to import a manifest and its projects, placing
everything into a subdirectory of your west workspace.
For example, suppose you want to import this manifest from project foo, adding this project and its
projects bar and baz to your workspace:
# foo/west.yml:
manifest:
defaults:
remote: example
remotes:
- name: example
url-base: https://git.example.com
projects:
- name: bar
- name: baz
Instead of importing these into the top level workspace, you want to place all three project repositories
in an external-code subdirectory, like this:
workspace/
external-code/
foo/
bar/
baz/
You can do this using this manifest file:
manifest:
projects:
- name: foo
url: https://git.example.com/foo
import:
path-prefix: external-code
An equivalent manifest in a single file would be:
manifest:
defaults:
remote: example
remotes:
- name: example
url-base: https://git.example.com
projects:
- name: foo
path: external-code/foo
- name: bar
path: external-code/bar
- name: baz
path: external-code/baz
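The path-prefix behavior can be sketched directly: the prefix is prepended to each project’s workspace path, and a project with no explicit path defaults to its name. The function name and data shapes here are illustrative, not west’s API.

```python
from posixpath import join  # manifest paths use forward slashes

def apply_path_prefix(prefix, projects):
    """Prepend an import's path-prefix to each project's workspace path;
    a project with no explicit path defaults to its name."""
    return {name: join(prefix, path or name)
            for name, path in projects.items()}

# Reproduces the workspace layout shown above.
resolved = apply_path_prefix('external-code',
                             {'foo': None, 'bar': None, 'baz': None})
```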
Option 4: Sequence The import key can also contain a sequence of files, directories, and mappings.
Example 4.1: Downstream with sequence of manifest files This example manifest is equivalent to
the manifest in Example 2.2: Downstream with directory of manifest files, with a sequence of explicitly
named files.
# my-repo/west.yml:
manifest:
projects:
- name: zephyr
url: https://github.com/zephyrproject-rtos/zephyr
import: west.yml
self:
import:
- submanifests/01-libraries.yml
- submanifests/02-vendor-hals.yml
- submanifests/03-applications.yml
Example 4.2: Import order illustration This more complicated example shows the order that west
imports manifest files:
# my-repo/west.yml
manifest:
# ...
projects:
- name: my-library
- name: my-app
- name: zephyr
import: true
- name: another-manifest-repo
import: submanifests
self:
import:
- submanifests/libraries.yml
- submanifests/vendor-hals.yml
- submanifests/applications.yml
defaults:
remote: my-remote
Manifest Import Details This section describes how west resolves a manifest file that uses import a
bit more formally.
Overview The import key can appear in a west manifest’s projects and self sections. The general
case looks like this:
# Top-level west.yml:
manifest:
projects:
- name: foo
revision: rev-1
import: import-1
- name: bar
revision: rev-2
import: import-2
# ...
- name: baz
revision: rev-N
import: import-N
self:
import: self-import
Import keys are optional. If any of import-1, ..., import-N are missing, west will not import additional
manifest data from that project. If self-import is missing, no additional files in the manifest
repository (beyond the top-level file) are imported.
Projects This section describes how the final projects list is created.
Projects are identified by name. If the same name occurs in multiple manifests, the first definition is
used, and subsequent definitions are ignored. For example, if import-1 contains a project named bar,
that is ignored, because the top-level west.yml has already defined a project by that name.
The contents of files named by import-1 through import-N are imported from Git at the latest
manifest-rev revisions in their projects. These revisions can be updated to the values rev-1 through
rev-N by running west update. If any manifest-rev reference is missing or out of date, west update
also fetches project data from the remote fetch URL and updates the reference.
Also note that all imported manifests, from the root manifest to the repository which defines a project P,
must be up to date in order for west to update P itself. For example, this means west update P would up-
date manifest-rev in the baz project if baz/west.yml defines P, as well as updating the manifest-rev
branch in the local git clone of P. Confusingly, updating baz may result in the removal of P from baz/
west.yml, which “should” cause west update P to fail with an unrecognized project!
For this reason, it’s not possible to run west update P if P is defined in an imported manifest; you must
update this project along with all the others with a plain west update.
By default, west won’t fetch any project data over the network if a project’s revision is a SHA or tag which
is already available locally, so updating the extra projects shouldn’t take too much time unless it’s really
needed. See the documentation for the update.fetch configuration option for more information.
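The “smart” fetching rule just described can be sketched as follows. This is a simplification: the SHA check below is a plain regular expression, whereas west actually asks Git whether the revision is a tag or commit it already has, and needs_fetch is an invented name.

```python
import re

def needs_fetch(revision, fetch_mode='smart', available_locally=False):
    """Return True when west update should contact the remote."""
    if fetch_mode == 'always':
        return True  # unconditional fetching: the pre-v0.6.1 behavior
    # Treat a 40-hex-digit string as a SHA (simplified; tags need Git).
    is_sha = re.fullmatch(r'[0-9a-f]{40}', revision) is not None
    return not (is_sha and available_locally)
```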
Extensions All extension commands defined using west-commands keys discovered while handling
imports are available in the resolved manifest.
If an imported manifest file has a west-commands: definition in its self: section, the extension com-
mands defined there are added to the set of available extensions at the time the manifest is imported.
They will thus take precedence over any extension commands with the same names added later on.
Group filters The resolved manifest has a group-filter value which is the result of concatenating the
group-filter values in the top-level manifest and any imported manifests.
Manifest files which appear earlier in the import order have higher precedence and are therefore con-
catenated later into the final group-filter.
In other words, let:
• the submanifest resolved from self-import have group filter self-filter
• the top-level manifest file have group filter top-filter
• the submanifests resolved from import-1 through import-N have group filters filter-1 through
filter-N respectively
The final resolved group-filter value is then filter-N + ... + filter-2 + filter-1 + top-filter
+ self-filter, where + here refers to list concatenation.
Important: The order that filters appear in the above list matters.
The last filter element in the final concatenated list “wins” and determines if the group is enabled or
disabled.
For example, in [-foo] + [+foo], group foo is enabled. However, in [+foo] + [-foo], group foo is
disabled.
For simplicity, west and this documentation may elide concatenated group filter elements which are
redundant using these rules. For example, [+foo] + [-foo] could be written more simply as [-foo],
for the reasons given above. As another example, [-foo] + [+foo] could be written as the empty list
[], since all groups are enabled by default.
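The concatenation and last-element-wins rules can be modeled directly. This sketch (with the invented helper resolve_group_filter) reduces a sequence of filters, listed lowest precedence first, to the disabled-group form used in the examples above:

```python
def resolve_group_filter(*filters):
    """Reduce concatenated group filters: the last +g/-g entry for a
    group wins, and unmentioned groups stay enabled by default."""
    state = {}
    for flt in filters:            # lowest precedence first
        for element in flt:
            sign, group = element[0], element[1:]
            state[group] = (sign == '+')
    # Only disabled groups need to appear in the reduced filter.
    return sorted('-' + g for g, enabled in state.items() if not enabled)
```

For example, resolve_group_filter(['-foo'], ['+foo']) reduces to the empty list, while reversing the order disables foo, matching the [-foo] + [+foo] and [+foo] + [-foo] cases above.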
Manifest Command
The west manifest command can be used to manipulate manifest files. It takes an action, and action-
specific arguments.
The following sections describe each action and provide a basic signature for simple uses. Run west
manifest --help for full details on all options.
Resolving Manifests The --resolve action outputs a single manifest file equivalent to your current
manifest and all its imported manifests:
west manifest --resolve [-o outfile]
The main use for this action is to see the “final” manifest contents after performing any imports.
To print detailed information about each imported manifest file and how projects are handled during
manifest resolution, set the maximum verbosity level using -v:
A “frozen” manifest is a manifest file where every project’s revision is a SHA. You can use --freeze to
produce a frozen manifest that’s equivalent to your current manifest file:
west manifest --freeze [-o outfile]
The -o option specifies an output file; if not given, standard output is used.
Validating Manifests The --validate action either succeeds if the current manifest file is valid, or
fails with an error:
west manifest --validate
Get the manifest path The --path action prints the path to the top level manifest file:
west manifest --path
The output is something like /path/to/workspace/west.yml. The path format depends on your oper-
ating system.
2.10.8 Configuration
This page documents west’s configuration file system, the west config command, and configuration
options used by built-in commands. For API documentation on the west.configuration module, see
west-apis-configuration.
West configuration files are INI-style: options are grouped into sections, and each option has a value.
For example:
[manifest]
path = zephyr
Above, the manifest section has option path set to zephyr. Another way to say the same thing is that
manifest.path is zephyr in this file.
There are three types of configuration file:
1. System: Settings in this file affect west’s behavior for every user logged in to the computer. Its
location depends on the platform:
• Linux: /etc/westconfig
• macOS: /usr/local/etc/westconfig
• Windows: %PROGRAMDATA%\west\config
2. Global (per user): Settings in this file affect how west behaves when run by a particular user on
the computer.
• All platforms: the default is .westconfig in the user’s home directory.
• Linux note: if the environment variable XDG_CONFIG_HOME is set, then $XDG_CONFIG_HOME/
west/config is used.
• Windows note: the following environment variables are tested to find the home directory:
%HOME%, then %USERPROFILE%, then a combination of %HOMEDRIVE% and %HOMEPATH%.
3. Local: Settings in this file affect west’s behavior for the current west workspace. The file is .west/
config, relative to the workspace’s root directory.
A setting in a file which appears lower down on this list overrides an earlier setting. For example, if
color.ui is true in the system’s configuration file, but false in the workspace’s, then the final value is
false. Similarly, settings in the user configuration file override system settings, and so on.
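The override order is a simple layered lookup. A minimal sketch (effective_config is an invented name, not west’s API):

```python
def effective_config(system, global_, local):
    """Merge the three configuration files; later (more specific)
    layers override earlier ones."""
    merged = {}
    for layer in (system, global_, local):
        merged.update(layer)
    return merged

# Reproduces the color.ui example above: local wins over system.
cfg = effective_config({'color.ui': 'true'},   # system
                       {},                     # global (per user)
                       {'color.ui': 'false'})  # local (workspace)
```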
west config
The built-in config command can be used to get and set configuration values. You can pass west config
the options --system, --global, or --local to specify which configuration file to use. Only one of these
can be used at a time. If none is given, then writes default to --local, and reads show the final value
after applying overrides.
Some examples for common uses follow; run west config -h for detailed help, and see Built-in Config-
uration Options for more details on built-in options.
To set manifest.path to some-other-manifest:
west config manifest.path some-other-manifest
Doing the above means that commands like west update will look for the west manifest inside the
some-other-manifest directory (relative to the workspace root directory) instead of the directory given
to west init, so be careful!
To read zephyr.base, the value which will be used as ZEPHYR_BASE if it is unset in the calling environ-
ment (also relative to the workspace root):
west config zephyr.base
You can switch to another zephyr repository without changing manifest.path – and thus the behavior
of commands like west update – using:
west config zephyr.base some-directory
This can be useful if you use commands like git worktree to create your own zephyr directories, and
want commands like west build to use them instead of the zephyr repository specified in the manifest.
(You can go back to using the directory in the upstream manifest by running west config zephyr.base
zephyr.)
To set color.ui to false in the global (user-wide) configuration file, so that west will no longer print
colored output for that user when run in any workspace:
west config --global color.ui false
The following table documents configuration options supported by west’s built-in commands. Configu-
ration options supported by Zephyr’s extension commands are documented in the pages for those com-
mands.
Option Description
color.ui Boolean. If true (the default), then west output is colorized when stdout is
a terminal.
commands.allow_extensions Boolean, default true. If false, disables Extensions.
manifest.file String, default west.yml. Relative path from the manifest repository root
directory to the manifest file used by west init and other commands which
parse the manifest.
manifest.group-filter String, default empty. A comma-separated list of project groups to enable
and disable within the workspace. Prefix enabled groups with + and disabled groups with -. For
example, the value "+foo,-bar" enables group foo and disables bar. See Project Groups.
manifest.path String, relative path from the west workspace root directory to the mani-
fest repository used by west update and other commands which parse the
manifest. Set locally by west init.
manifest.project-filter Comma-separated list of strings.
The option’s value is a comma-separated list of regular expressions, each
prefixed with + or -, like this:
+re1,-re2,-re3
Project names are matched against each regular expression (re1, re2, re3,
. . . ) in the list, in order. If the entire project name matches the regular ex-
pression, that element of the list either deactivates or activates the project.
The project is deactivated if the element begins with -. The project is acti-
vated if the element begins with +. (Project names cannot contain , if this
option is used, so the regular expressions do not need to contain a literal ,
character.)
If a project’s name matches multiple regular expressions in the list, the re-
sult from the last regular expression is used. For example, if manifest.
project-filter is:
-hal_.*,+hal_foo
Then a project named hal_bar is inactive, but a project named hal_foo is
active.
If a project is made inactive or active by a list element, the project is active
or not regardless of whether any or all of its groups are disabled. (This is
currently the only way to make a project that has no groups inactive.)
Otherwise, i.e. if a project does not match any regular expressions in the
list, it is active or inactive according to the usual rules related to its groups
(see Project Group Examples for examples in that case).
Within an element of a manifest.project-filter list, leading and trailing
whitespace are ignored. That means these example values are equivalent:
+foo,-bar
+foo , -bar
Any empty elements are ignored. That means these example values are
equivalent:
+foo,,-bar
+foo,-bar
update.fetch String, one of "smart" (the default behavior starting in v0.6.1) or "always"
(the previous behavior). If set to "smart", the west update command will
skip fetching from project remotes when those projects’ revisions in the
manifest file are SHAs or tags which are already available locally. The
"always" behavior is to unconditionally fetch from the remote.
update.name-cache String. If non-empty, west update will use its value as the --name-cache
option’s value if not given on the command line.
update.narrow Boolean. If true, west update behaves as if --narrow was given on the
command line. The default is false.
update.path-cache String. If non-empty, west update will use its value as the --path-cache
option’s value if not given on the command line.
update.sync-submodules Boolean. If true (the default), west update will synchronize Git submodules
before updating them.
zephyr.base String, default value to set for the ZEPHYR_BASE environment variable while west is
running. Set locally by west init.
2.10.9 Extensions
West is “pluggable”: you can add your own commands to west without editing its source code. These are
called west extension commands, or just “extensions” for short. Extensions show up in the west --help
output in a special section for the project which defines them. This page provides general information
on west extension commands, and has a tutorial for writing your own.
Some commands you can run when using west with Zephyr, like the ones used to build, flash, and debug
and the ones described here, are extensions. That’s why help for them shows up like this in west --help:
To disable support for extension commands, set the commands.allow_extensions configuration option
to false. To set this globally for whenever you run west, use:
west config --global commands.allow_extensions false
If you want to, you can then re-enable them in a particular west workspace with:
west config --local commands.allow_extensions true
Note that the files containing extension commands are not imported by west unless the commands are
explicitly run. See below for details.
Step 1: Implement Your Command Create a Python file to contain your command implementation
(see the “Meta > Requires” information on the west PyPI page for details on the currently supported
versions of Python). You can put it anywhere in any project tracked by your west manifest, or the
manifest repository itself. This file must contain a subclass of the west.commands.WestCommand class;
this class will be instantiated and used when your extension is run.
Here is a basic skeleton you can use to get started. It contains a subclass of WestCommand, with imple-
mentations for all the abstract methods. For more details on the west APIs you can use, see west-apis.
'''my_west_extension.py

Basic example of a west extension.'''

from textwrap import dedent             # just for nicer code indentation
from west.commands import WestCommand   # your extension must subclass this
from west import log                    # use this for user output

class MyCommand(WestCommand):
    def __init__(self):
        super().__init__(
            'my-command-name',  # gets stored as self.name
            'one-line help for what my-command-name does',  # self.help
            # self.description:
            dedent('''
            A multi-line description of my-command.

            You can split this up into multiple paragraphs and they'll get
            reflowed for you. You can also pass
            formatter_class=argparse.RawDescriptionHelpFormatter when calling
            parser_adder.add_parser() below if you want to keep your line
            endings.'''))

    def do_add_parser(self, parser_adder):
        # parser_adder is the return value of an
        # argparse.ArgumentParser.add_subparsers() call.
        parser = parser_adder.add_parser(self.name, help=self.help,
                                         description=self.description)
        # Add some example options using the standard argparse module API.
        parser.add_argument('-o', '--optional', help='an optional argument')
        parser.add_argument('required', help='a required argument')
        return parser  # gets stored as self.parser

    def do_run(self, args, unknown_args):
        # This gets called when the user runs the command, e.g.:
        #   $ west my-command-name -o FOO BAR
        log.inf('--optional is', args.optional)
        log.inf('required is', args.required)
You can ignore the second argument to do_run() (unknown_args above), as WestCommand will reject
unknown arguments by default. If you want to be passed a list of unknown arguments instead, add
accepts_unknown_args=True to the super().__init__() arguments.
Step 2: Add or Update Your west-commands.yml You now need to add a west-commands.yml file to
your project which describes your extension to west.
Here is an example for the above class definition, assuming it’s in my_west_extension.py at the project
root directory:
west-commands:
- file: my_west_extension.py
commands:
- name: my-command-name
class: MyCommand
help: one-line help for what my-command-name does
The top level of this YAML file is a map with a west-commands key. The key’s value is a sequence
of “command descriptors”. Each command descriptor gives the location of a file implementing west
extensions, along with the names of those extensions, and optionally the names of the classes which
define them (if not given, the class value defaults to the same thing as name).
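The class-defaults-to-name rule can be shown with plain dictionaries standing in for the parsed YAML (a sketch with an invented helper, not west’s actual loader):

```python
def command_classes(descriptor):
    """Expand one command descriptor into (command, class) pairs;
    'class' falls back to the command name when omitted."""
    return [(cmd['name'], cmd.get('class', cmd['name']))
            for cmd in descriptor['commands']]

# Mirrors the another_file.py descriptor shown earlier on this page.
pairs = command_classes({
    'file': 'another_file.py',
    'commands': [
        {'name': 'command2', 'help': 'another cool west extension'},
        {'name': 'a-third-command', 'class': 'ThirdCommand'},
    ],
})
```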
Some information in this file is redundant with definitions in the Python code. This is because west won’t
import my_west_extension.py until the user runs west my-command-name, since:
• It allows users to run west update with a manifest from an untrusted source, then use other west
commands without your code being imported along the way. Since importing a Python module is
shell-equivalent, this provides some peace of mind.
• It’s a small optimization, since your code will only be imported if it is needed.
So, unless your command is explicitly run, west will just load the west-commands.yml file to get the basic
information it needs to display information about your extension to the user in west --help output, etc.
If you have multiple extensions, or want to split your extensions across multiple files, your
west-commands.yml will look something like this:
west-commands:
- file: my_west_extension.py
commands:
- name: my-command-name
class: MyCommand
help: one-line help for what my-command-name does
- file: another_file.py
commands:
- name: command2
help: another cool west extension
- name: a-third-command
class: ThirdCommand
help: a third command in the same file as command2
Above:
• my_west_extension.py defines extension my-command-name with class MyCommand
• another_file.py defines two extensions:
1. command2 with class command2
2. a-third-command with class ThirdCommand
See the file west-commands-schema.yml in the west repository for a schema describing the contents of
a west-commands.yml.
Step 3: Update Your Manifest Finally, you need to specify the location of the west-commands.yml you
just edited in your west manifest. If your extension is in a project, add it like this:
manifest:
# [... other contents ...]
projects:
- name: your-project
west-commands: path/to/west-commands.yml
Where path/to/west-commands.yml is relative to the root of the project. Note that the name
west-commands.yml, while encouraged, is just a convention; you can name the file something else if
you need to.
Alternatively, if your extension is in the manifest repository, just do the same thing in the manifest’s self
section, like this:
manifest:
# [... other contents ...]
self:
west-commands: path/to/west-commands.yml
That’s it; you can now run west my-command-name. Your command’s name, help, and the project which
contains its code will now also show up in the west --help output. If you share the updated repositories
with others, they’ll be able to use it, too.
Zephyr provides several west extension commands for building, flashing, and interacting with Zephyr
programs running on a board: build, flash, debug, debugserver and attach.
For information on adding board support for the flashing and debugging commands, see Flash and debug
support in the board porting guide.
The build command helps you build Zephyr applications from source. You can use west config to config-
ure its behavior.
Its default behavior tries to “do what you mean”:
• If there is a Zephyr build directory named build in your current working directory, it is incremen-
tally re-compiled. The same is true if you run west build from a Zephyr build directory.
• Otherwise, if you run west build from a Zephyr application’s source directory and no build direc-
tory is found, a new one is created and the application is compiled in it.
Basics The easiest way to use west build is to go to an application’s root directory (i.e. the folder
containing the application’s CMakeLists.txt) and then run:
west build -b <BOARD>
Where <BOARD> is the name of the board you want to build for. This is exactly the same name you would
supply to CMake if you were to invoke it with: cmake -DBOARD=<BOARD>.
Tip: You can use the west boards command to list all supported boards.
A build directory named build will be created, and the application will be compiled there after west
build runs CMake to create a build system in that directory. If west build finds an existing build
directory, the application is incrementally re-compiled there without re-running CMake. You can force
CMake to run again with --cmake.
You don’t need to use the --board option if you’ve already got an existing build directory; west build
can figure out the board from the CMake cache. For new builds, the --board option, BOARD environment
variable, or build.board configuration option are checked (in that order).
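The fallback order for new builds can be sketched as a chain of lookups. resolve_board is an invented helper; west’s real logic also consults the CMake cache for existing build directories.

```python
def resolve_board(cli_board=None, environ=None, config=None):
    """--board wins, then the BOARD environment variable, then the
    build.board configuration option; None if nothing is set."""
    environ = environ or {}
    config = config or {}
    return cli_board or environ.get('BOARD') or config.get('build.board')
```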
Sysbuild (multi-domain builds) Sysbuild (System build) can be used to create a multi-domain build
system combining multiple images for a single or multiple boards.
Use --sysbuild to select the Sysbuild (System build) build infrastructure with west build to build
multiple domains.
More detailed information regarding the use of sysbuild can be found in the Sysbuild (System build)
guide.
Tip: The build.sysbuild configuration option can be enabled to tell west build to use sysbuild by
default. --no-sysbuild can be used to disable sysbuild for a specific build.
west build will build all domains through the top-level build folder of the domains specified by sysbuild.
A single domain from a multi-domain project can be built by using the --domain argument.
Examples Here are some west build usage examples, grouped by area.
Forcing CMake to Run Again To force a CMake re-run, use the --cmake (or -c) option:
west build -c
Setting a Default Board To configure west build to build for the reel_board by default:
west config build.board reel_board
(You can use any other board supported by Zephyr here; it doesn’t have to be reel_board.)
Setting Source and Build Directories To set the application source directory explicitly, give its path as
a positional argument:
west build -b reel_board samples/hello_world
To change the default build directory from build, use the build.dir-fmt configuration option. This lets
you name build directories using format strings, like this:
west config build.dir-fmt "build/{board}/{app}"
With the above, running west build -b reel_board samples/hello_world will use build directory
build/reel_board/hello_world. See Configuration Options for more details on this option.
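build.dir-fmt values behave like Python format strings over fields such as board, source_dir, and app (see Configuration Options). A sketch of the expansion, enough to reproduce the example above; the helper name and the sample format string are illustrative:

```python
def build_dir(fmt, **fields):
    """Expand a build.dir-fmt format string; available fields include
    board, source_dir and app (see Configuration Options)."""
    return fmt.format(**fields)

# 'build/{board}/{app}' is an assumed dir-fmt value for illustration.
path = build_dir('build/{board}/{app}',
                 board='reel_board', app='hello_world')
```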
Setting the Build System Target To specify the build system target to run, use --target (or -t).
For example, on host platforms with QEMU, you can use the run target to build and run the hello_world
sample for the emulated qemu_x86 board in one command:
west build -b qemu_x86 -t run samples/hello_world
As a final example, to use -t to run the pristine target, which deletes all the files in the build directory:
west build -t pristine
Pristine Builds A pristine build directory is essentially a new build directory. All byproducts from
previous builds have been removed.
To force west build to make the build directory pristine before re-running CMake to generate a build
system, use the --pristine=always (or -p=always) option.
Giving --pristine or -p without a value has the same effect as giving it the value always. For example,
these commands are equivalent:
west build -p -b reel_board samples/hello_world
west build -p=always -b reel_board samples/hello_world
By default, west build applies a heuristic to detect if the build directory needs to be made pristine. This
is the same as using --pristine=auto.
Tip: You can run west config build.pristine always to always do a pristine build, or west config
build.pristine never to disable the heuristic. See the west build Configuration Options for details.
Verbose Builds To print the CMake and compiler commands run by west build, use the global west
verbosity option, -v:
west -v build -b reel_board samples/hello_world
One-Time CMake Arguments To pass additional arguments to the CMake invocation performed by
west build, pass them after a -- at the end of the command line.
Important: Passing additional CMake arguments like this forces west build to re-run the CMake build
configuration step, even if a build system has already been generated. This will make incremental builds
slower (but still much faster than building from scratch).
After using -- once to generate the build directory, use west build -d <build-dir> on subsequent
runs to do incremental builds.
Alternatively, make your CMake arguments permanent as described in the next section; it will not slow
down incremental builds.
For example, to use the Unix Makefiles CMake generator instead of Ninja (which west build uses by
default), run:
west build -b reel_board -- -G'Unix Makefiles'
To use Unix Makefiles and also set CMAKE_VERBOSE_MAKEFILE to ON:
west build -b reel_board -- -G'Unix Makefiles' -DCMAKE_VERBOSE_MAKEFILE=ON
Notice how the -- only appears once, even though multiple CMake arguments are given. All command-
line arguments to west build after a -- are passed to CMake.
To set DTC_OVERLAY_FILE to enable-modem.overlay, using that file as a devicetree overlay:
west build -b reel_board samples/hello_world -- -DDTC_OVERLAY_FILE=enable-modem.overlay
Permanent CMake Arguments The previous section describes how to add CMake arguments for a
single west build command. If you want to save CMake arguments for west build to use every time
it generates a new build system instead, you should use the build.cmake-args configuration option.
Whenever west build runs CMake to generate a build system, it splits this option’s value according to
shell rules and includes the results in the cmake command line.
Remember that, by default, west build tries to avoid generating a new build system if one is present
in your build directory. Therefore, you need to either delete any existing build directories or do a pristine
build after setting build.cmake-args to make sure it will take effect.
For example, to always enable CMAKE_EXPORT_COMPILE_COMMANDS, you can run:
west config build.cmake-args -- -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
(The extra -- is used to force the rest of the command to be treated as a positional argument. Without
it, west config would treat the -DVAR=VAL syntax as a use of its -D option.)
To enable CMAKE_VERBOSE_MAKEFILE, so CMake always produces a verbose build system:
west config build.cmake-args -- -DCMAKE_VERBOSE_MAKEFILE=ON
To save more than one argument in build.cmake-args, use a single string whose value can be split into
distinct arguments (west build uses the Python function shlex.split() internally to split the value).
For example, to enable both CMAKE_EXPORT_COMPILE_COMMANDS and CMAKE_VERBOSE_MAKEFILE:
west config build.cmake-args -- "-DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_VERBOSE_MAKEFILE=ON"
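Since the value is split with Python’s shlex.split(), shell-style quoting lets a single configuration string carry arguments that themselves contain spaces. The -DEXTRA_FLAGS name below is made up purely for illustration:

```python
import shlex

# shlex.split() honors shell-style quoting, so the quoted part stays
# one CMake argument despite the embedded space.
value = '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON "-DEXTRA_FLAGS=-g -O0"'
args = shlex.split(value)
```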
If you want to save your CMake arguments in a separate file instead, you can combine CMake’s -C
<initial-cache> option with build.cmake-args. For instance, another way to set the options used in
the previous example is to create a file named ~/my-cache.cmake with the following contents:
set(CMAKE_EXPORT_COMPILE_COMMANDS ON CACHE BOOL "")
set(CMAKE_VERBOSE_MAKEFILE ON CACHE BOOL "")
Then run:
west config build.cmake-args -- "-C ~/my-cache.cmake"
See the cmake(1) manual page and the set() command documentation for more details.
Build tool arguments Use -o to pass options to the underlying build tool.
This works with both ninja (the default) and make based build systems.
For example, to pass -dexplain to ninja:
west build -o=-dexplain
Note that using -o=--foo instead of -o --foo is required to prevent --foo from being treated as a west
build option.
Build parallelism By default, ninja uses all of your cores to build, while make uses only one. You can
control this explicitly with the -j option supported by both tools.
For example, to build with 4 cores:
west build -o=-j4
Build a single domain In a multi-domain build with hello_world and MCUboot, you can use --domain
hello_world to only build this domain:
west build --sysbuild --domain hello_world
The --domain argument can be combined with the --target argument to build a specific target for
that domain, for example:
west build --sysbuild --domain hello_world --target help
Configuration Options You can configure west build using these options.
Option Description
build.board String. If given, this is the board used by west build when --board is not given
and BOARD is unset in the environment.
build.board_warn Boolean, default true. If false, disables warnings when west build can’t
figure out the target board.
build.cmake-args String. If present, the value will be split according to shell rules and passed
to CMake whenever a new build system is generated. See Permanent CMake
Arguments.
build.dir-fmt String, default build. The build folder format string, used by west when-
ever it needs to create or locate a build folder. The currently available
arguments are:
• board: The board name
• source_dir: The relative path from the current working directory to
the source directory. If the current working directory is inside the
source directory this will be set to an empty string.
• app: The name of the source directory.
build.generator String, default Ninja. The CMake Generator to use to create a build system.
(To set a generator for a single build, see the above example)
build.guess-dir String, instructs west whether to try to guess what build folder to use when
build.dir-fmt is in use and not enough information is available to resolve
the build folder name. Can take these values:
• never (default): Never try to guess, bail out instead and require the
user to provide a build folder with -d.
• runners: Try to guess the folder when using any of the ‘runner’ com-
mands. These are typically all commands that invoke an external tool,
such as flash and debug.
build.pristine String. Controls the way in which west build may clean the build folder
before building. Can take the following values:
• never (default): Never automatically make the build folder pristine.
• auto: west build will automatically make the build folder pristine
before building, if a build system is present and the build would fail
otherwise (e.g. the user has specified a different board or application
from the one previously used to make the build directory).
• always: Always make the build folder pristine before building, if a
build system is present.
build.sysbuild Boolean, default false. If true, build application using the sysbuild infras-
tructure.
Basics From a Zephyr build directory, re-build the binary and flash it to your board:
west flash
Without options, the behavior is the same as ninja flash (or make flash, etc.).
To specify the build directory, use --build-dir (or -d):
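For example (the path is illustrative):

```shell
west flash --build-dir path/to/build
```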
If you don’t specify the build directory, west flash searches for one in build, then the current working
directory. If you set the build.dir-fmt configuration option (see Setting Source and Build Directories),
west flash searches there instead of build.
Choosing a Runner If your board’s Zephyr integration supports flashing with multiple programs, you
can specify which one to use with the --runner (or -r) option. For example, if West flashes your board
with nrfjprog by default, but it also supports JLink, you can override the default with:
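For instance:

```shell
west flash --runner jlink
```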
You can override the default flash runner at build time by using the BOARD_FLASH_RUNNER CMake vari-
able, and the debug runner with BOARD_DEBUG_RUNNER.
For example:
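One way to set both variables at build time (here [...] stands for your usual west build arguments):

```shell
west build [...] -- -DBOARD_FLASH_RUNNER=jlink -DBOARD_DEBUG_RUNNER=jlink
```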
See One-Time CMake Arguments and Permanent CMake Arguments for more information on setting CMake
arguments.
See Flash and debug runners below for more information on the runner library used by West. The list
of runners which support flashing can be obtained with west flash -H; if run from a build directory or
with --build-dir, this will print additional information on available runners for your board.
Configuration Overrides The CMake cache contains default values West uses while flashing, such as
where the board directory is on the file system, the path to the zephyr binaries to flash in several formats,
and more. You can override any of this configuration at runtime with additional options.
For example, to override the HEX file containing the Zephyr image to flash (assuming your runner expects
a HEX file), but keep other flash configuration at default values:
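For instance (the path is illustrative):

```shell
west flash --hex-file path/to/some/other.hex
```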
The west flash -h output includes a complete list of overrides supported by all runners.
Runner-Specific Overrides Each runner may support additional options related to flashing. For exam-
ple, some runners support an --erase flag, which mass-erases the flash storage on your board before
flashing the Zephyr image.
To view all of the available options for the runners your board supports, as well as their usage informa-
tion, use --context (or -H):
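That is:

```shell
west flash --context
```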
Important: Note the capital H in the short option name. This re-runs the build in order to ensure the
information displayed is up to date!
When running West outside of a build directory, west flash -H just prints a list of runners. You can use
west flash -H -r <runner-name> to print usage information for options supported by that runner.
For example, to print usage information about the jlink runner:
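That is:

```shell
west flash -H -r jlink
```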
Multi-domain flashing When a Sysbuild (multi-domain builds) folder is detected, then west flash will
flash all domains in the order defined by sysbuild.
It is possible to flash the image from a single domain in a multi-domain project by using --domain.
For example, in a multi-domain build with hello_world and MCUboot, you can use --domain
hello_world to flash only the image from this domain:
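A sketch of such an invocation:

```shell
west flash --domain hello_world
```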
Basics From a Zephyr build directory, to attach a debugger to your board and open up a debug console
(e.g. a GDB session):
west debug
To attach a debugger to your board and open up a local network port you can connect a debugger to
(e.g. an IDE debugger):
west debugserver
Without options, the behavior is the same as ninja debug and ninja debugserver (or make debug,
etc.).
To specify the build directory, use --build-dir (or -d):
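For example (the path is illustrative):

```shell
west debug --build-dir path/to/build
```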
If you don’t specify the build directory, these commands search for one in build, then the current working
directory. If you set the build.dir-fmt configuration option (see Setting Source and Build Directories),
west debug searches there instead of build.
Choosing a Runner If your board’s Zephyr integration supports debugging with multiple programs,
you can specify which one to use with the --runner (or -r) option. For example, if West debugs your
board with pyocd-gdbserver by default, but it also supports JLink, you can override the default with:
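For instance:

```shell
west debug --runner jlink
```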
See Flash and debug runners below for more information on the runner library used by West. The list of
runners which support debugging can be obtained with west debug -H; if run from a build directory or
with --build-dir, this will print additional information on available runners for your board.
Configuration Overrides The CMake cache contains default values West uses for debugging, such as
where the board directory is on the file system, the path to the zephyr binaries containing symbol tables,
and more. You can override any of this configuration at runtime with additional options.
For example, to override the ELF file containing the Zephyr binary and symbol tables (assuming your
runner expects an ELF file), but keep other debug configuration at default values:
west debug --elf-file path/to/some/other.elf
west debugserver --elf-file path/to/some/other.elf
The west debug -h output includes a complete list of overrides supported by all runners.
Runner-Specific Overrides Each runner may support additional options related to debugging. For
example, some runners support flags which allow you to set the network ports used by debug servers.
To view all of the available options for the runners your board supports, as well as their usage informa-
tion, use --context (or -H):
west debug --context
(The command west debugserver --context will print the same output.)
Important: Note the capital H in the short option name. This re-runs the build in order to ensure the
information displayed is up to date!
When running West outside of a build directory, west debug -H just prints a list of runners. You can use
west debug -H -r <runner-name> to print usage information for options supported by that runner.
For example, to print usage information about the jlink runner:
west debug -H -r jlink
Multi-domain debugging west debug can only debug a single domain at a time. When a Sysbuild
(multi-domain builds) folder is detected, west debug will debug the default domain specified by sys-
build.
The default domain will be the application given as the source directory. See the following example:
west build --sysbuild path/to/source/directory
For example, when building hello_world with MCUboot using sysbuild, hello_world becomes the de-
fault domain:
west build --sysbuild samples/hello_world
You can then debug the default hello_world domain by simply running west debug, or equivalently:
west debug --domain hello_world
If you wish to debug MCUboot, you must explicitly specify MCUboot as the domain to debug:
west debug --domain mcuboot
The flash and debug commands use Python wrappers around various Flash & Debug Host Tools. These
wrappers are all defined in a Python library at scripts/west_commands/runners. Each wrapper is called
a runner. Runners can flash and/or debug Zephyr programs.
The central abstraction within this library is ZephyrBinaryRunner, an abstract class which represents
runners. The set of available runners is determined by the imported subclasses of ZephyrBinaryRunner.
ZephyrBinaryRunner is available in the runners.core module; individual runner implementations are
in other submodules, such as runners.nrfjprog, runners.openocd, etc.
Hacking
This section documents the runners.core module used by the flash and debug commands. This is the
core abstraction used to implement support for these features.
Warning: These APIs are provided for reference, but they are more “shared code” used to implement
multiple extension commands than a stable API.
Developers can add support for new ways to flash and debug Zephyr programs by implementing addi-
tional runners. To get this support into upstream Zephyr, the runner should be added into a new or
existing runners module, and imported from runners/__init__.py.
Note: The test cases in scripts/west_commands/tests add unit test coverage for the runners package
and individual runner classes.
Please try to add tests when adding new runners. Note that if your changes break existing test cases, CI
testing on upstream pull requests will fail.
exception runners.core.MissingProgram(program)
FileNotFoundError subclass for missing program dependencies.
No significant changes from the parent FileNotFoundError; this is useful for explicitly signaling that
the file in question is a program that some class requires to proceed.
The filename attribute contains the missing program.
class runners.core.NetworkPortHelper
Helper class for dealing with local IP network ports.
get_unused_ports(starting_from)
Find unused network ports, starting at given values.
starting_from is an iterable of ports the caller would like to use.
The return value is an iterable of ports, in the same order, using the given values if they were
unused, or the next sequentially available unused port otherwise.
Ports may be bound between this call’s check and actual usage, so callers still need to handle
errors involving returned ports.
class runners.core.RunnerCaps(commands: Set[str] = {'attach', 'debug', 'debugserver', 'flash'}, dev_id:
bool = False, flash_addr: bool = False, erase: bool = False, tool_opt:
bool = False, file: bool = False)
This class represents a runner class’s capabilities.
Each capability is represented as an attribute with the same name. Flag attributes are True or False.
Available capabilities:
• commands: set of supported commands; default is {‘flash’, ‘debug’, ‘debugserver’, ‘attach’}.
• dev_id: whether the runner supports device identifiers, in the form of an -i, --dev-id option.
This is useful when the user has multiple debuggers connected to a single computer, in order
to select which one will be used with the command provided.
• flash_addr: whether the runner supports flashing to an arbitrary address. Default is False. If
true, the runner must honor the --dt-flash option.
• erase: whether the runner supports an --erase option, which does a mass-erase of the entire
addressable flash on the target before flashing. On multi-core SoCs, this may only erase
portions of flash specific to the actual target core. (This option can be useful for things like
clearing out old settings values or other subsystem state that may affect the behavior of the
zephyr image. It is also sometimes needed by SoCs which have flash-like areas that can’t be
sector erased by the underlying tool before flashing; UICR on nRF SoCs is one example.)
• tool_opt: whether the runner supports a --tool-opt (-O) option, which can be given multiple
times and is passed on to the underlying tool that the runner wraps.
class runners.core.RunnerConfig(build_dir: str, board_dir: str, elf_file: Optional[str], hex_file:
Optional[str], bin_file: Optional[str], uf2_file: Optional[str], file:
Optional[str], file_type: Optional[FileType] = FileType.OTHER,
gdb: Optional[str] = None, openocd: Optional[str] = None,
openocd_search: List[str] = [])
Runner execution-time configuration.
This is a common object shared by all runners. Individual runners can register specific configuration
options using their do_add_parser() hooks.
bin_file: Optional[str]
Alias for field number 4
board_dir: str
Alias for field number 1
build_dir: str
Alias for field number 0
elf_file: Optional[str]
Alias for field number 2
file: Optional[str]
Alias for field number 6
file_type: Optional[FileType]
Alias for field number 7
gdb: Optional[str]
Alias for field number 8
hex_file: Optional[str]
Alias for field number 3
openocd: Optional[str]
Alias for field number 9
openocd_search: List[str]
Alias for field number 10
uf2_file: Optional[str]
Alias for field number 5
class runners.core.ZephyrBinaryRunner(cfg: RunnerConfig)
Abstract superclass for binary runners (flashers, debuggers).
Note: this class’s API has changed relatively rarely since it was added, but it is not considered a
stable Zephyr API, and may change without notice.
With some exceptions, boards supported by Zephyr must provide generic means to be flashed (have
a Zephyr firmware binary permanently installed on the device for running) and debugged (have a
breakpoint debugger and program loader on a host workstation attached to a running target).
This is supported by four top-level commands managed by the Zephyr build system:
• ‘flash’: flash a previously configured binary to the board, start execution on the target, then
return.
• ‘debug’: connect to the board via a debugging protocol, program the flash, then drop the user
into a debugger interface with symbol tables loaded from the current binary, and block until
it exits.
• ‘debugserver’: connect via a board-specific debugging protocol, then reset and halt the target.
Ensure the user is now able to connect to a debug server with symbol tables loaded from the
binary.
• ‘attach’: connect to the board via a debugging protocol, then drop the user into a debugger
interface with symbol tables loaded from the current binary, and block until it exits. Unlike
‘debug’, this command does not program the flash.
This class provides an API for these commands. Every subclass is called a ‘runner’ for short. Each
runner has a name (like ‘pyocd’), and declares commands it can handle (like ‘flash’). Boards (like
‘nrf52dk_nrf52832’) declare which runner(s) are compatible with them to the Zephyr build system,
along with information on how to configure the runner to work with the board.
The build system will then place enough information in the build directory to create and use
runners with this class’s create() method, which provides a command line argument parsing API.
You can also create runners by instantiating subclasses directly.
In order to define your own runner, you need to:
1. Define a ZephyrBinaryRunner subclass, and implement its abstract methods. You may need to
override capabilities().
2. Make sure the Python module defining your runner class is imported, e.g. by editing this
package’s __init__.py (otherwise, get_runners() won’t work).
3. Give your runner’s name to the Zephyr build system in your board’s board.cmake.
Additional advice:
• If you need to import any non-standard-library modules, make sure to catch ImportError and
defer complaints about it to a RuntimeError if one is missing. This avoids affecting users that
don’t require your runner, while still making it clear what went wrong to users that do require
it that don’t have the necessary modules installed.
• If you need to ask the user something (e.g. using input()), do it in your create() classmethod,
not do_run(). That ensures your __init__() really has everything it needs to call do_run(),
and also avoids calling input() when not instantiating within a command line application.
• Use self.logger to log messages using the standard library’s logging API; your logger is named
“runner.<your-runner-name()>”
For command-line invocation from the Zephyr build system, runners define their own argparse-
based interface through the common add_parser() (and runner-specific do_add_parser() it dele-
gates to), and provide a way to create instances of themselves from a RunnerConfig and parsed
runner-specific arguments via create().
Runners use a variety of host tools and configuration values, the user interface to which is ab-
stracted by this class. Each runner subclass should take any values it needs to execute one of these
commands in its constructor. The actual command execution is handled in the run() method.
classmethod add_parser(parser)
Adds a sub-command parser for this runner.
The given object, parser, is a sub-command parser from the argparse module. For more details,
refer to the documentation for argparse.ArgumentParser.add_subparsers().
The lone common optional argument is:
• --dt-flash (if the runner capabilities include flash_addr)
Runner-specific options are added through the do_add_parser() hook.
property build_conf: BuildConfiguration
Get a BuildConfiguration for the build directory.
call(cmd: List[str], **kwargs) → int
Subclass subprocess.call() wrapper.
Subclasses should use this method to run command in a subprocess and get its return code,
rather than using subprocess directly, to keep accurate debug logs.
classmethod capabilities() → RunnerCaps
Returns a RunnerCaps representing this runner’s capabilities.
This implementation returns the default capabilities.
Subclasses should override appropriately if needed.
cfg
RunnerConfig for this instance.
check_call(cmd: List[str], **kwargs)
Subclass subprocess.check_call() wrapper.
Subclasses should use this method to run command in a subprocess and check that it executed
correctly, rather than using subprocess directly, to keep accurate debug logs.
check_output(cmd: List[str], **kwargs) → bytes
Subclass subprocess.check_output() wrapper.
Subclasses should use this method to run command in a subprocess and check that it executed
correctly, rather than using subprocess directly, to keep accurate debug logs.
classmethod require(program) → str
If program is an absolute path to an existing program binary, this call succeeds. Otherwise,
it tries to find the program by name on the system PATH.
If the program can be found, its path is returned. Otherwise, raises MissingProgram.
run(command: str, **kwargs)
Runs command (‘flash’, ‘debug’, ‘debugserver’, ‘attach’).
This is the main entry point to this runner.
run_client(client)
Run a client that handles SIGINT.
run_server_and_client(server, client)
Run a server that ignores SIGINT, and a client that handles it.
This routine portably:
• creates a Popen object for the server command which ignores SIGINT
• runs client in a subprocess while temporarily ignoring SIGINT
• cleans up the server after the client exits.
It’s useful to e.g. open a GDB server and client.
property thread_info_enabled: bool
Returns True if self.build_conf has CONFIG_DEBUG_THREAD_INFO enabled.
classmethod tool_opt_help() → str
Get the ArgParse help text for the --tool-opt option.
Doing it By Hand
If you prefer not to use West to flash or debug your board, simply inspect the build directory for the
binaries output by the build system. These will be named something like zephyr/zephyr.elf, zephyr/
zephyr.hex, etc., depending on your board’s build system integration. These binaries may be flashed
to a board using alternative tools of your choice, or used for debugging as needed, e.g. as a source of
symbol tables.
By default, these West commands rebuild binaries before flashing and debugging. This can of course
also be accomplished using the usual targets provided by Zephyr’s build system (in fact, that’s how these
commands do it).
The west sign extension command can be used to sign a Zephyr application binary for consumption by
a bootloader using an external tool. Run west sign -h for command line help.
MCUboot / imgtool
The Zephyr build system has special support for signing binaries for use with the MCUboot bootloader
using the imgtool program provided by its developers. You can both build and sign this type of application
binary in one step by setting some Kconfig options. If you do, west flash will use the signed binaries.
If you use this feature, you don’t need to run west sign yourself; the build system will do it for you.
Here is an example workflow, which builds and flashes MCUboot, as well as the hello_world application
for chain-loading by MCUboot. Run these commands from the zephyrproject workspace you created
in the Getting Started Guide.
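A sketch of the workflow (the board name and signature key file are illustrative; adjust them for your hardware):

```shell
# Build MCUboot itself in its own build directory
west build -b YOUR_BOARD bootloader/mcuboot/boot/zephyr -d build-mcuboot

# Build and sign hello_world for chain-loading by MCUboot
west build -b YOUR_BOARD zephyr/samples/hello_world -d build-hello-signed -- \
  -DCONFIG_BOOTLOADER_MCUBOOT=y \
  -DCONFIG_MCUBOOT_SIGNATURE_KEY_FILE=\"bootloader/mcuboot/root-rsa-2048.pem\"

# Flash the bootloader, then the signed application
west flash -d build-mcuboot
west flash -d build-hello-signed
```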
Then, you should see something like this when you run west flash -d build-hello-signed:
Whether west flash supports this feature depends on your runner. The nrfjprog and pyocd runners
work with the above flow. If your runner does not support this flow and you would like it to, please send
a patch or file an issue for adding support.
The signing script used when running west flash can be extended or replaced to change features or
introduce different signing mechanisms. By default with MCUboot enabled, signing is set up by the
cmake/mcuboot.cmake file in Zephyr which adds extra post build commands for generating the signed
images. The file used for signing can be replaced by adjusting the SIGNING_SCRIPT property on the
zephyr_property_target, ideally done by a module using:
if(CONFIG_BOOTLOADER_MCUBOOT)
  set_target_properties(zephyr_property_target PROPERTIES SIGNING_SCRIPT ${CMAKE_CURRENT_LIST_DIR}/custom_signing.cmake)
endif()
This will include the custom signing CMake file instead of the default Zephyr one when projects are
built with MCUboot signing support enabled. The base Zephyr MCUboot signing file can be used as a
reference for creating a new signing system or extending the default behaviour.
rimage
rimage configuration uses a different approach that does not rely on Kconfig or CMake but on west config
instead, similar to Permanent CMake Arguments.
Signing involves a number of “wrapper” scripts stacked on top of each other: west flash invokes west
build which invokes cmake and ninja which invokes west sign which invokes imgtool or rimage. As
long as the desired signing parameters are the default ones and fairly static, these indirections are not
a problem. On the other hand, passing imgtool or rimage options through all these layers can cause
issues typical of layers that don’t abstract anything. First, this usually requires boilerplate code in
each layer. Quoting whitespace or other special characters through all the wrappers can be difficult.
Reproducing a lower west sign command to debug some build-time issue can be very time-consuming:
it requires at least enabling and searching verbose build logs to find which exact options were used.
Copying these options from the build logs can be unreliable: it may produce different results because of
subtle environment differences. Last and worst: new signing features and options are impossible to use
until more boilerplate code has been added in each layer.
To avoid these issues, rimage parameters can be set in west config instead. Here’s a
workspace/.west/config example:
[sign]
# Not needed when invoked from CMake
tool = rimage
[rimage]
# Quoting is optional and works like in Unix shells
# Not needed when rimage can be found in the default PATH
path = "/home/me/zworkspace/build-rimage/rimage"
extra-args = -i 4 -k 'keys/key argument with space.pem'
In order to support quoting, values are parsed by Python’s shlex.split() like in One-Time CMake Argu-
ments.
The extra-args are passed directly to the rimage command. The example above has the same effect as
appending them on command line after -- like this: west sign --tool rimage -- -i 4 -k 'keys/
key argument with space.pem'. In case both are used, the command-line arguments go last.
The boards command can be used to list the boards that are supported by Zephyr without having to
resort to additional sources of information.
It can be run by typing:
west boards
This command lists all supported boards in a default format. If you prefer to specify the display format
yourself you can use the --format (or -f) flag. Additional help about the formatting options can be
found by running:
west boards -h
The completion extension command outputs shell completion scripts that can then be used directly to
enable shell completion for the supported shells.
It currently supports the following shells:
• bash
• zsh
Additional instructions are available in the command’s help:
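That is:

```shell
west completion -h
```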
This command registers the current Zephyr installation as a CMake config package in the CMake user
package registry.
On Windows, the CMake user package registry is found in HKEY_CURRENT_USER\Software\Kitware\
CMake\Packages.
On Linux and macOS, the CMake user package registry is found in ~/.cmake/packages.
You may run this command when setting up a Zephyr workspace. If you do, application CMakeLists.txt
files that are outside of your workspace will be able to find the Zephyr repository with the following:
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
This command generates SPDX 2.2 tag-value documents, creating relationships from source files to the
corresponding generated build files. SPDX-License-Identifier comments in source files are scanned
and filled into the SPDX documents.
To use this command:
1. Pre-populate a build directory BUILD_DIR like this:
This step ensures the build directory contains CMake metadata required for SPDX document gen-
eration.
2. Build your application using this pre-created build directory, like so:
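The steps above can be sketched as follows (BUILD_DIR stands for your build directory, and your usual board and application arguments go with the build step):

```shell
west spdx --init -d BUILD_DIR   # step 1: pre-populate CMake metadata
west build -d BUILD_DIR         # step 2: build using the pre-created directory
west spdx -d BUILD_DIR          # generate the SPDX documents
```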
The blobs command allows users to interact with binary blobs declared in one or more modules via their
module.yml file.
The blobs command has three sub-commands, used to list, fetch or clean (i.e. delete) the binary blobs
themselves.
You can list binary blobs while specifying the format of the output:
For the full set of variables available in -f/--format run west blobs -h.
Fetching blobs works in a similar manner:
Note that, as described in the modules section, fetched blobs are stored in a zephyr/blobs/ folder relative
to the root of the corresponding module repository.
As does deleting them:
Additionally the tool allows you to specify the modules you want to list, fetch or clean blobs for by typing
the module names as a command-line parameter.
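A sketch of the three sub-commands (the module name and format string are illustrative; run west blobs -h for the exact format variables available):

```shell
west blobs list -f '{module}: {path}'
west blobs fetch some_module
west blobs clean some_module
```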
West was added to the Zephyr project to fulfill two fundamental requirements:
• The ability to work with multiple Git repositories
• The ability to provide an extensible and user-friendly command-line interface for basic Zephyr
workflows
During the development of west, a set of Design Constraints were identified to avoid the common pitfalls
of tools of this kind.
Requirements
Although the motivation behind splitting the Zephyr codebase into multiple repositories is outside of the
scope of this page, the fundamental requirements, along with a clear justification of the choice not to use
existing tools and instead develop a new one, do belong here.
The basic requirements are:
• R1: Keep externally maintained code in separately maintained repositories outside of the main
zephyr repository, without requiring users to manually clone each of the external repositories
• R2: Provide a tool that both Zephyr users and distributors can make use of to benefit from and
extend
• R3: Allow users and downstream distributions to override or remove repositories without having
to make changes to the zephyr repository
• R4: Support both continuous tracking and commit-based (bisectable) project updating
Some of west’s features are similar to those provided by Git Submodules and Google’s repo.
Existing tools were considered during west’s initial design and development. None were found suitable
for Zephyr’s requirements. In particular, these were examined in detail:
• Google repo
– Does not cleanly support using zephyr as the manifest repository (R4)
– Python 2 only
– Does not play well with Windows
– Assumes Gerrit is used for code review
• Git submodules
– Does not fully support R1, since the externally maintained repositories would still need to be
inside the main zephyr Git tree
– Does not support R3, since downstream copies would need to either delete or replace sub-
module definitions
– Does not support continuous tracking of the latest HEAD in external repositories (R4)
– Requires hardcoding of the paths/locations of the external repositories
Zephyr intends to provide all required building blocks needed to deploy complex IoT applications. This
in turn means that the Zephyr project is much more than an RTOS kernel, and is instead a collection
of components that work together. In this context, there are a few reasons to work with multiple Git
repositories in a standardized manner within the project:
• Clean separation of Zephyr original code and imported projects and libraries
• Avoidance of license incompatibilities between original and imported code
• Reduction in size and scope of the core Zephyr codebase, with additional repositories containing
optional components instead of being imported directly into the tree
• Safety and security certifications
• Enforcement of modularization of the components
• Out-of-tree development based on subsets of the supported boards and SoCs
See Basics for information on how west workspaces manage multiple git repositories.
Design Constraints
West is:
• Optional: it is always possible to drop back to “raw” command-line tools, i.e. use Zephyr without
using west (although west itself might need to be installed and accessible to the build system). It
may not always be convenient to do so, however. (If all of west’s features were already conveniently
available, there would be no reason to develop it.)
• Compatible with CMake: building, flashing and debugging, and emulator support will always
remain compatible with direct use of CMake.
• Cross-platform: West is written in Python 3, and works on all platforms supported by Zephyr.
• Usable as a Library: whenever possible, west features are implemented as libraries that can be
used standalone in other programs, along with separate command line interfaces that wrap them.
West itself is a Python package named west; its libraries are implemented as subpackages.
• Conservative about features: no features will be accepted without strong and compelling moti-
vation.
• Clearly specified: West’s behavior in cases where it wraps other commands is clearly specified and
documented. This enables interoperability with third party tools, and means Zephyr developers
can always find out what is happening “under the hood” when using west.
See Zephyr issue #6205 for more details and discussion.
To convert a “pre-west” Zephyr setup on your computer to west, follow these steps. If you are starting
from scratch, use the Getting Started Guide instead. See Troubleshooting West for advice on common
issues.
1. Install west.
2. Move your zephyr repository into a new zephyrproject parent directory, and change directory
into that parent directory.
On Linux:
mkdir zephyrproject
mv zephyr zephyrproject
cd zephyrproject
On Windows cmd.exe:
mkdir zephyrproject
move zephyr zephyrproject
chdir zephyrproject
The name zephyrproject is recommended, but you can choose any name with no spaces anywhere
in the path.
3. Create a west workspace using the zephyr repository as a local manifest repository:
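The west init -l option initializes a workspace from a local manifest repository:

```shell
west init -l zephyr/
```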
This creates zephyrproject/.west, marking the root of your workspace, and does some other
setup. It will not change the contents of the zephyr repository in any way.
4. Clone the rest of the repositories used by zephyr:
west update
Make sure to run this command whenever you pull zephyr. Otherwise, your local repositories
will get out of sync. (Run west list for current information on these repositories.)
You are done: zephyrproject is now set up to use west.
This page provides information on using Zephyr without west. This is not recommended for beginners
due to the extra effort involved. In particular, you will have to do work “by hand” to replace these
features:
• cloning the additional source code repositories used by Zephyr in addition to the main zephyr
repository, and keeping them up to date
• specifying the locations of these repositories to the Zephyr build system
• flashing and debugging without understanding detailed usage of the relevant host tools
Note: If you have previously installed west and want to stop using it, uninstall it first:
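For example, if west was installed with pip:

```shell
pip3 uninstall west
```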
Otherwise, Zephyr’s build system will find it and may try to use it.
In addition to downloading the zephyr source code repository itself, you will need to manually clone the
additional projects listed in the west manifest file inside that repository.
mkdir zephyrproject
cd zephyrproject
git clone https://github.com/zephyrproject-rtos/zephyr
# clone additional repositories listed in zephyr/west.yml,
# and check out the specified revisions as well.
As you pull changes in the zephyr repository, you will also need to maintain those additional repositories,
adding new ones as necessary and keeping existing ones up to date at the latest revisions.
Building applications
You can build a Zephyr application using CMake and Ninja (or make) directly, without west
installed, as long as you specify any required modules manually.
When building with west installed, the Zephyr build system will use it to set ZEPHYR_MODULES.
If you don’t have west installed and your application does not need any of these repositories, the build
will still work.
If you don’t have west installed and your application does need one of these repositories, you must set
ZEPHYR_MODULES yourself, for example as a CMake variable listing the paths of the module repositories.
See Modules (External projects) for more details.
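As a sketch, setting ZEPHYR_MODULES on the CMake command line could look like the following (the board, application path, and module paths are placeholders, not values from this page):

```shell
# All paths and the board name below are placeholders -- substitute your own.
cmake -Bbuild -GNinja \
  -DBOARD=qemu_x86 \
  -DZEPHYR_MODULES="/path/to/module1;/path/to/module2" \
  samples/hello_world
ninja -Cbuild
```

Note that the module paths are separated by semicolons, as is usual for CMake list variables.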
Similarly, if your application requires binary blobs and you are not using west, you will need to download
and place those blobs in the right places instead of using west blobs. See Binary Blobs for more details.
Running build system targets like ninja flash, ninja debug, etc. is just a call to the corresponding
west command. For example, ninja flash calls west flash1 . If you don’t have west installed on your
system, running those targets will fail. You can of course still flash and debug using any Flash & Debug
Host Tools which work for your board (and which those west commands wrap).
If you want to use these build system targets but do not want to install west on your system using pip,
you can do so by manually creating a west workspace: inside zephyrproject, create a .west directory
containing a file named config with the following contents:
[manifest]
path = zephyr
[zephyr]
base = zephyr
After that, and in order for ninja to be able to invoke west to flash and debug, you must specify the west
directory. This can be done by setting the environment variable WEST_DIR to point to zephyrproject/.
west/west before running CMake to set up a build directory.
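The manual workspace setup above can be scripted as follows (directory names follow the zephyrproject layout used earlier on this page):

```shell
# Mark zephyrproject as a west workspace by hand: .west/config tells the
# build system where the manifest repository (zephyr) lives.
mkdir -p zephyrproject/.west
cat > zephyrproject/.west/config <<'EOF'
[manifest]
path = zephyr

[zephyr]
base = zephyr
EOF
```

With the file in place, export WEST_DIR as described above before running CMake.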
For details on west’s Python APIs, see west-apis.
2.11 Testing
The Zephyr Test Framework (Ztest) provides a simple testing framework intended to be used during
development. It provides basic assertion macros and a generic test structure.
The framework can be used in two ways, either as a generic framework for integration testing, or for
unit testing specific modules.
To enable support for the latest Ztest API, set CONFIG_ZTEST_NEW_API to y. There is also a legacy API
that is deprecated and will eventually be removed.
Using Ztest to create a test suite is as easy as calling the ZTEST_SUITE macro, which accepts the
following arguments:
• suite_name - The name of the suite. This name must be unique within a single binary.
• ztest_suite_predicate_t - An optional predicate function to allow choosing when the test will
run. The predicate will get a pointer to the global state passed in through ztest_run_all() and
should return a boolean to decide if the suite should run.
• ztest_suite_setup_t - An optional setup function which returns a test fixture. This will be called
and run once per test suite run.
1 Note that west build invokes ninja, among other tools. There’s no recursive invocation of either west or ninja involved
by default, however, as west build does not invoke ninja flash, debug, etc. The one exception is if you specifically run one of
these build system targets with a command line like west build -t flash. In that case, west is run twice: once for west build,
and in a subprocess, again for west flash. Even in this case, ninja is only run once, as ninja flash. This is because these build
system targets depend on an up to date build of the Zephyr application, so it’s compiled before west flash is run.
• ztest_suite_before_t - An optional before function which will run before every single test in this
suite.
• ztest_suite_after_t - An optional after function which will run after every single test in this
suite.
• ztest_suite_teardown_t - An optional teardown function which will run at the end of all the
tests in the suite.
Below is an example of a test suite using a predicate (the predicate and state names follow the
power-sequence example shown later in this section):
#include <zephyr/ztest.h>
#include "test_state.h"

static bool phase0_predicate(const void *global_state)
{
	const struct power_sequence_state *state = global_state;

	return state->phase == PWR_PHASE_0;
}

ZTEST_SUITE(phase0_suite, phase0_predicate, NULL, NULL, NULL, NULL);
Test fixtures Test fixtures can be used to help simplify repeated test setup operations. In many cases,
tests in the same suite will require some initial setup followed by some form of reset between each test.
This is achieved via fixtures in the following way:
#include <stdlib.h>
#include <string.h>

#include <zephyr/ztest.h>

struct my_suite_fixture {
	size_t max_size;
	size_t size;
	uint8_t buff[1];
};

static void *my_suite_setup(void)
{
	/* Allocate the fixture with room for 256 bytes in buff. */
	struct my_suite_fixture *fixture = malloc(sizeof(struct my_suite_fixture) + 255);

	zassume_not_null(fixture, NULL);
	fixture->max_size = 256;
	return fixture;
}

static void my_suite_before(void *f)
{
	struct my_suite_fixture *fixture = (struct my_suite_fixture *)f;

	/* Reset the buffer between tests. */
	memset(fixture->buff, 0, fixture->max_size);
	fixture->size = 0;
}

static void my_suite_teardown(void *f)
{
	free(f);
}

ZTEST_SUITE(my_suite, NULL, my_suite_setup, my_suite_before, NULL, my_suite_teardown);
ZTEST_F(my_suite, test_feature_x)
{
zassert_equal(0, fixture->size);
zassert_equal(256, fixture->max_size);
}
Advanced features
Test result expectations Some tests were made to be broken. In cases where the test is expected to
fail or skip due to the nature of the code, it’s possible to annotate the test as such. For example:
#include <zephyr/ztest.h>
ZTEST_EXPECT_FAIL(my_suite, test_fail)
ZTEST(my_suite, test_fail)
{
/** This will fail the test */
zassert_true(false, NULL);
}
ZTEST_EXPECT_SKIP(my_suite, test_skip)
ZTEST(my_suite, test_skip)
{
/** This will skip the test */
zassume_true(false, NULL);
}
In this example, the tests would normally be reported as failed and skipped, respectively. Because of
the declared expectations, Ztest instead marks both as passed.
Test rules Test rules are a way to run the same logic for every test and every suite. There are a lot of
cases where you might want to reset some state for every test in the binary (regardless of which suite is
currently running). As an example, this could be to reset mocks, reset emulators, flush the UART, etc.:
#include <zephyr/fff.h>
#include <zephyr/ztest.h>

#include "test_mocks.h"

DEFINE_FFF_GLOBALS;

DEFINE_FAKE_VOID_FUNC(my_weak_func);

static void fff_reset_rule_before(const struct ztest_unit_test *test, void *fixture)
{
	ARG_UNUSED(test);
	ARG_UNUSED(fixture);

	RESET_FAKE(my_weak_func);
}

ZTEST_RULE(fff_reset_rule, fff_reset_rule_before, NULL);
A custom test_main While the Ztest framework provides a default test_main() function, it’s pos-
sible that some applications will want to provide custom behavior. This is particularly true if there’s
some global state that the tests depend on and that state either cannot be replicated or is difficult to
replicate without starting the process over. For example, one such state could be a power sequence.
Assuming there’s a board with several steps in the power-on sequence a test suite can be written using
the predicate to control when it would run. In that case, the test_main() function can be written as
follows:
#include <zephyr/ztest.h>

#include "my_test.h"
void test_main(void)
{
struct power_sequence_state state;
	/* Only suites that use a predicate checking for phase == PWR_PHASE_0 will run. */
	state.phase = PWR_PHASE_0;
	ztest_run_all(&state);

	/* Only suites that use a predicate checking for phase == PWR_PHASE_1 will run. */
	state.phase = PWR_PHASE_1;
	ztest_run_all(&state);

	/* Only suites that use a predicate checking for phase == PWR_PHASE_2 will run. */
	state.phase = PWR_PHASE_2;
	ztest_run_all(&state);
/* Check that all the suites in this binary ran at least once. */
ztest_verify_all_test_suites_ran();
}
A simple working base is located at samples/subsys/testsuite/integration. Just copy the files to tests/
and edit them for your needs. The test will then be automatically built and run by the twister script. If
you are testing the bar component of foo, you should copy the sample folder to tests/foo/bar. It can
then be tested with:
./scripts/twister -s tests/foo/bar/test-identifier
In the example above tests/foo/bar signifies the path to the test and the test-identifier references
a test defined in the testcase.yaml file.
To run all tests defined in a test project, run:
./scripts/twister -T tests/foo/bar/
CMakeLists.txt
# SPDX-License-Identifier: Apache-2.0

cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(integration)
testcase.yaml
tests:
  # section.subsection
  sample.testing.ztest:
    build_only: true
    platform_allow: native_posix
    integration_platforms:
      - native_posix
    tags: test_framework
prj.conf
CONFIG_ZTEST=y
CONFIG_ZTEST_NEW_API=y
src/main.c
/*
 * Copyright (c) 2016 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/ztest.h>

/**
 * @brief Test Asserts
 */
• Listing Tests
• Skipping Tests
A test case project may consist of multiple sub-tests or smaller tests that test either functionality
or APIs. Functions implementing a test should follow the guidelines below:
• Test case function names should be prefixed with test_
• Test cases should be documented using doxygen
• Test function names should be unique within the section or component being tested
For example:
/**
* @brief Test Asserts
*
* This test verifies the zassert_true macro.
*/
ZTEST(my_suite, test_assert)
{
zassert_true(1, "1 was false");
}
Listing Tests Tests (test projects) in the Zephyr tree consist of many testcases that run as part of a
project and test similar functionality, for example an API or a feature. The twister script can parse the
testcases in all test projects or a subset of them, and can generate reports on a granular level, i.e. if cases
have passed or failed or if they were blocked or skipped.
Twister parses the source files looking for test case names, so you can list all kernel test cases, for
example, by running:
./scripts/twister --list-tests -T tests/kernel
Skipping Tests Special- or architecture-specific tests cannot run on all platforms and architectures,
however we still want to count those and report them as being skipped. Because the test inventory
and the list of tests is extracted from the code, adding conditionals inside the test suite is sub-optimal.
Tests that need to be skipped for a certain platform or feature need to explicitly report a skip using
ztest_test_skip() or Z_TEST_SKIP_IFDEF. If the test runs, it needs to report either a pass or fail. For
example:
#ifdef CONFIG_TEST1
ZTEST(common, test_test1)
{
	zassert_true(1, "true");
}
#else
ZTEST(common, test_test1)
{
	ztest_test_skip();
}
#endif
ZTEST(common, test_test2)
{
Z_TEST_SKIP_IFDEF(CONFIG_BUGxxxxx);
zassert_equal(1, 0, NULL);
}
Ztest can be used for unit testing. This means that rather than including the entire Zephyr OS for testing
a single function, you can focus the testing efforts into the specific module in question. This will speed
up testing since only the module will have to be compiled in, and the tested functions will be called
directly.
Since you won’t be including basic kernel data structures that most code depends on, you have to provide
function stubs in the test. Ztest provides some helpers for mocking functions, as demonstrated below.
In a unit test, mock objects can simulate the behavior of complex real objects and are used to decide
whether a test failed or passed by verifying whether an interaction with an object occurred, and if
required, to assert the order of that interaction.
Best practices for declaring the test suite twister and other validation tools need to obtain the list of
subcases that a Zephyr ztest test image will expose.
Rationale
All of this serves traceability. It is not enough to have only a semaphore test project; we also need to
show that there are testpoints for all APIs and functionality, and that they trace back to the API
documentation and functional requirements.
The idea is that test reports show results for every sub-testcase as passed, failed, blocked, or skipped.
Reporting on only the high-level test project level, particularly when tests do too many things, is too
vague.
Other questions:
• Why not pre-scan with CPP and then parse? or post scan the ELF file?
If C pre-processing or building fails because of any issue, then we won’t be able to tell the subcases.
• Why not declare them in the YAML testcase description?
A separate testcase description file would be harder to maintain than just keeping the information
in the test source files themselves – only one file to update when changes are made eliminates
duplication.
Zephyr stress test framework (Ztress) provides an environment for executing user functions in multiple
priority contexts. It can be used to validate that code is resilient to preemptions. The framework tracks
the number of executions and preemptions for each context. Execution can have various completion
conditions like timeout, number of executions or number of preemptions.
The framework sets up the environment by creating the requested number of threads (each at a
different priority) and optionally starting a timer. For each context, a user function (different for each
context) is called, and then the context sleeps for a randomized number of system ticks. The framework
tracks CPU load and adjusts the sleep periods to achieve a higher CPU load. To increase the
probability of preemptions, the system clock frequency should be relatively high. The default 100 Hz on
QEMU x86 is much too low; it is recommended to increase it to 100 kHz.
The stress test environment is setup and executed using ZTRESS_EXECUTE which accepts a variable num-
ber of arguments. Each argument is a context that is specified by ZTRESS_TIMER or ZTRESS_THREAD
macros. Contexts are specified in priority descending order. Each context specifies completion condi-
tions by providing the minimum number of executions and preemptions. When all conditions are met
and the execution has completed, an execution report is printed and the macro returns. Note that while
the test is executing, a progress report is periodically printed.
Execution can be prematurely completed by specifying a test timeout (ztress_set_timeout() ) or an
explicit abort (ztress_abort() ).
The user function’s parameters contain an execution counter and a flag indicating whether this is the
last execution.
The example below shows how to set up and run three contexts (one of which is a k_timer interrupt
handler context). The completion criteria are set to at least 10000 executions of each context and 1000
preemptions of the lowest-priority context. Additionally, a timeout is configured to complete the test
after 10 seconds if those conditions are not met. The last argument of each context is the initial sleep
time, which will be adjusted throughout the test to achieve the highest CPU load.
ztress_set_timeout(K_MSEC(10000));

ZTRESS_EXECUTE(ZTRESS_TIMER(foo_0, user_data_0, 10000, Z_TIMEOUT_TICKS(20)),
               ZTRESS_THREAD(foo_1, user_data_1, 10000, 0, Z_TIMEOUT_TICKS(20)),
               ZTRESS_THREAD(foo_2, user_data_2, 10000, 1000, Z_TIMEOUT_TICKS(20)));
API reference
Running tests
group ztest_test
This module eases the testing process by providing helpful macros and other testing structures.
Defines
ZTEST(suite, fn)
Create and register a new unit test.
Calling this macro will create a new unit test and attach it to the declared suite. The suite
does not need to be defined in the same compilation unit.
Parameters
• suite – The name of the test suite to attach this test
• fn – The test function to call.
ZTEST_USER(suite, fn)
Define a test function that should run as a user thread.
This macro behaves exactly the same as ZTEST, but calls the test function in user space if
CONFIG_USERSPACE was enabled.
Parameters
• suite – The name of the test suite to attach this test
• fn – The test function to call.
ZTEST_F(suite, fn)
Define a test function.
This macro behaves exactly the same as ZTEST(), but the function takes an argument for the
fixture of type struct suite##_fixture* named this.
Parameters
• suite – The name of the test suite to attach this test
• fn – The test function to call.
ZTEST_USER_F(suite, fn)
Define a test function that should run as a user thread.
If CONFIG_USERSPACE is not enabled, this is functionally identical to ZTEST_F(). The test
function takes a single fixture argument of type struct suite##_fixture* named this.
Parameters
• suite – The name of the test suite to attach this test
• fn – The test function to call.
ZTEST_RULE(name, before_each_fn, after_each_fn)
Define a test rule that will run before/after each unit test.
Functions defined here will run before/after each unit test for every test suite. Along with the
callback, the test functions are provided a pointer to the test being run, and the data. This
provides a mechanism for tests to perform custom operations depending on the specific test
or the data (for example logging may use the test’s name).
Ordering:
• Test rule’s before function will run before the suite’s before function. This is done to
allow the test suite’s customization to take precedence over the rule which is applied to
all suites.
• Test rule’s after function is not guaranteed to run in any particular order.
Parameters
• name – The name for the test rule (must be unique within the compilation unit)
• before_each_fn – The callback function (ztest_rule_cb) to call before each test
(may be NULL)
• after_each_fn – The callback function (ztest_rule_cb) to call after each test
(may be NULL)
ztest_run_test_suite(suite)
Run the specified test suite.
Parameters
• suite – Test suite to run.
Typedefs
Functions
void ztest_test_fail(void)
Fail the currently running test.
This is the function called from failed assertions and the like. You probably don’t need to call
it yourself.
void ztest_test_pass(void)
Pass the currently running test.
Normally a test passes just by returning without an assertion failure. However, if
the success case for your test involves a fatal fault, you can call this function from
k_sys_fatal_error_handler to indicate that the test passed before aborting the thread.
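A sketch of that pattern (the suite, test name, and provoked fault below are illustrative, not taken from the Zephyr sources): a test whose success case is a fatal fault, with the overridden handler reporting the pass:

```c
#include <zephyr/ztest.h>
#include <zephyr/fatal.h>

/* Illustrative test: dereferencing NULL should trigger a fatal fault. */
ZTEST(fatal_suite, test_null_deref_faults)
{
	volatile uint32_t *bad = NULL;

	*bad = 0xdead;     /* expected to fault */
	ztest_test_fail(); /* reached only if no fault occurred */
}

void k_sys_fatal_error_handler(unsigned int reason, const z_arch_esf_t *esf)
{
	ARG_UNUSED(reason);
	ARG_UNUSED(esf);

	ztest_test_pass(); /* the fault was the expected outcome */
}
```

Overriding the handler this way replaces the default behavior of aborting the test as failed.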
void ztest_test_skip(void)
Skip the current test.
void ztest_skip_failed_assumption(void)
struct ztest_test_rule
struct ztest_arch_api
#include <ztest_test_new.h> Structure for architecture specific APIs.
Assertions These macros will instantly fail the test if the related assertion fails. When an assertion
fails, it will print the current file, line and function, alongside a reason for the failure and an optional
message. If the config CONFIG_ZTEST_ASSERT_VERBOSE is 0, the assertions will only print the file and
line numbers, reducing the binary size of the test.
Example output for a failed macro from zassert_equal(buf->ref, 2, "Invalid refcount"):
START - test_get_single_buffer
Assertion failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->
˓→ref not equal to 2)
group ztest_assert
This module provides assertions when using Ztest.
Defines
zassert_unreachable(...)
Assert that this function call won’t be reached.
Parameters
• ... – Optional message and variables to print if the assertion fails
zassert_true(cond, ...)
Assert that cond is true.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the assertion fails
zassert_false(cond, ...)
Assert that cond is false.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the assertion fails
zassert_ok(cond, ...)
Assert that cond is 0 (success)
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the assertion fails
zassert_is_null(ptr, ...)
Assert that ptr is NULL.
Parameters
• ptr – Pointer to compare
• ... – Optional message and variables to print if the assertion fails
zassert_not_null(ptr, ...)
Assert that ptr is not NULL.
Parameters
• ptr – Pointer to compare
• ... – Optional message and variables to print if the assertion fails
zassert_equal(a, b, ...)
Assert that a equals b.
a and b won’t be converted and will be compared directly.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the assertion fails
zassert_not_equal(a, b, ...)
Assert that a does not equal b.
a and b won’t be converted and will be compared directly.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the assertion fails
zassert_equal_ptr(a, b, ...)
Assert that a equals b.
a and b will be converted to void * before comparing.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the assertion fails
zassert_within(a, b, d, ...)
Assert that a is within b with delta d.
Parameters
• a – Value to compare
• b – Value to compare
• d – Delta
• ... – Optional message and variables to print if the assertion fails
zassert_between_inclusive(a, l, u, ...)
Assert that a is greater than or equal to l and less than or equal to u.
Parameters
• a – Value to compare
• l – Lower limit
• u – Upper limit
• ... – Optional message and variables to print if the assertion fails
zassert_mem_equal(...)
Assert that 2 memory buffers have the same contents.
This macro calls the final memory comparison assertion macro. Using double expansion allows
providing some arguments by macros that would expand to more than one value (C99 requires that all
macro arguments be expanded before the macro call).
Parameters
• ... – Arguments, see zassert_mem_equal__ for real arguments accepted.
zassert_mem_equal__(buf, exp, size, ...)
Internal assert that 2 memory buffers have the same contents.
Parameters
• buf – Buffer to compare
• exp – Buffer with expected contents
• size – Size of buffers
• ... – Optional message and variables to print if the assertion fails
Expectations These macros will continue test execution if the related expectation fails and subse-
quently fail the test at the end of its execution. When an expectation fails, it will print the current file,
line, and function, alongside a reason for the failure and an optional message but continue executing the
test. If the config CONFIG_ZTEST_ASSERT_VERBOSE is 0, the expectations will only print the file and line
numbers, reducing the binary size of the test.
For example, if the following expectations fail:
START - test_get_single_buffer
Expectation failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->
˓→ref not equal to 2)
group ztest_expect
This module provides expectations when using Ztest.
Defines
zexpect_true(cond, ...)
Expect that cond is true, otherwise mark test as failed but continue its execution.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the expectation fails
zexpect_false(cond, ...)
Expect that cond is false, otherwise mark test as failed but continue its execution.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the expectation fails
zexpect_ok(cond, ...)
Expect that cond is 0 (success), otherwise mark test as failed but continue its execution.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the expectation fails
zexpect_is_null(ptr, ...)
Expect that ptr is NULL, otherwise mark test as failed but continue its execution.
Parameters
• ptr – Pointer to compare
• ... – Optional message and variables to print if the expectation fails
zexpect_not_null(ptr, ...)
Expect that ptr is not NULL, otherwise mark test as failed but continue its execution.
Parameters
• ptr – Pointer to compare
• ... – Optional message and variables to print if the expectation fails
zexpect_equal(a, b, ...)
Expect that a equals b, otherwise mark test as failed but continue its execution.
a and b won’t be converted and will be compared directly.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the expectation fails
zexpect_not_equal(a, b, ...)
Expect that a does not equal b, otherwise mark test as failed but continue its execution.
a and b won’t be converted and will be compared directly.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the expectation fails
zexpect_equal_ptr(a, b, ...)
Expect that a equals b, otherwise mark test as failed but continue its execution.
a and b will be converted to void * before comparing.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the expectation fails
zexpect_within(a, b, delta, ...)
Expect that a is within b with delta d, otherwise mark test as failed but continue its execution.
Parameters
• a – Value to compare
• b – Value to compare
• delta – Difference between a and b
• ... – Optional message and variables to print if the expectation fails
zexpect_between_inclusive(a, lower, upper, ...)
Expect that a is greater than or equal to lower and less than or equal to upper, otherwise mark
test as failed but continue its execution.
Parameters
• a – Value to compare
• lower – Lower limit
• upper – Upper limit
• ... – Optional message and variables to print if the expectation fails
zexpect_mem_equal(buf, exp, size, ...)
Expect that 2 memory buffers have the same contents, otherwise mark test as failed but con-
tinue its execution.
Parameters
• buf – Buffer to compare
• exp – Buffer with expected contents
• size – Size of buffers
• ... – Optional message and variables to print if the expectation fails
Assumptions These macros will instantly skip the test or suite if the related assumption fails. When an
assumption fails, it will print the current file, line, and function, alongside a reason for the failure and
an optional message. If the config CONFIG_ZTEST_ASSERT_VERBOSE is 0, the assumptions will only print
the file and line numbers, reducing the binary size of the test.
Example output for a failed macro from zassume_equal(buf->ref, 2, "Invalid refcount"):
START - test_get_single_buffer
Assumption failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->
˓→ref not equal to 2)
group ztest_assume
This module provides assumptions when using Ztest.
Defines
zassume_true(cond, ...)
Assume that cond is true.
If the assumption fails, the test will be marked as “skipped”.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the assumption fails
zassume_false(cond, ...)
Assume that cond is false.
If the assumption fails, the test will be marked as “skipped”.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the assumption fails
zassume_ok(cond, ...)
Assume that cond is 0 (success)
If the assumption fails, the test will be marked as “skipped”.
Parameters
• cond – Condition to check
• ... – Optional message and variables to print if the assumption fails
zassume_is_null(ptr, ...)
Assume that ptr is NULL.
If the assumption fails, the test will be marked as “skipped”.
Parameters
• ptr – Pointer to compare
• ... – Optional message and variables to print if the assumption fails
zassume_not_null(ptr, ...)
Assume that ptr is not NULL.
If the assumption fails, the test will be marked as “skipped”.
Parameters
• ptr – Pointer to compare
• ... – Optional message and variables to print if the assumption fails
zassume_equal(a, b, ...)
Assume that a equals b.
a and b won’t be converted and will be compared directly. If the assumption fails, the test will
be marked as “skipped”.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the assumption fails
zassume_not_equal(a, b, ...)
Assume that a does not equal b.
a and b won’t be converted and will be compared directly. If the assumption fails, the test will
be marked as “skipped”.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the assumption fails
zassume_equal_ptr(a, b, ...)
Assume that a equals b.
a and b will be converted to void * before comparing. If the assumption fails, the test will be
marked as “skipped”.
Parameters
• a – Value to compare
• b – Value to compare
• ... – Optional message and variables to print if the assumption fails
zassume_within(a, b, d, ...)
Assume that a is within b with delta d.
If the assumption fails, the test will be marked as “skipped”.
Parameters
• a – Value to compare
• b – Value to compare
• d – Delta
• ... – Optional message and variables to print if the assumption fails
zassume_between_inclusive(a, l, u, ...)
Assume that a is greater than or equal to l and less than or equal to u.
If the assumption fails, the test will be marked as “skipped”.
Parameters
• a – Value to compare
• l – Lower limit
• u – Upper limit
• ... – Optional message and variables to print if the assumption fails
zassume_mem_equal(...)
Assume that 2 memory buffers have the same contents.
This macro calls the final memory comparison assumption macro. Using double expansion allows
providing some arguments by macros that would expand to more than one value (C99 requires that all
macro arguments be expanded before the macro call).
Parameters
• ... – Arguments, see zassume_mem_equal__ for real arguments accepted.
zassume_mem_equal__(buf, exp, size, ...)
Internal assume that 2 memory buffers have the same contents.
Parameters
• buf – Buffer to compare
• exp – Buffer with expected contents
• size – Size of buffers
• ... – Optional message and variables to print if the assumption fails
Ztress
group ztest_ztress
This module provides test stress when using Ztest.
Defines
ZTRESS_TIMER(handler, user_data, exec_cnt, init_timeout)
Descriptor of a k_timer stress test context.
Note: There can only be up to one k_timer context in the set and it must be the first argument
of ZTRESS_EXECUTE.
Parameters
• handler – User handler of type ztress_handler.
• user_data – User data passed to the handler.
• exec_cnt – Number of handler executions to complete the test. If 0 then this
is not included in completion criteria.
• init_timeout – Initial backoff time base (given in k_timeout_t). It is adjusted
during the test to optimize CPU load. The actual timeout used for the timer is
randomized.
ZTRESS_THREAD(handler, user_data, exec_cnt, preempt_cnt, init_timeout)
Descriptor of a thread stress test context.
Note: thread sleeps for random amount of time. Additionally, the thread busy-waits for a
random length of time to further increase randomization in the test.
Parameters
Typedefs
typedef bool (*ztress_handler)(void *user_data, uint32_t cnt, bool last, int prio)
User handler called in one of the configured contexts.
Param user_data
User data provided in the context descriptor.
Param cnt
Current execution counter. Counted from 0.
Param last
Flag set to true indicates that it is the last execution because completion criteria
are met, test timed out or was aborted.
Param prio
Context priority counting from 0 which indicates the highest priority.
Retval true
continue test.
Retval false
stop executing the current context.
Functions
struct ztress_context_data
#include <ztress.h>
Mocking via FFF Zephyr has integrated with FFF for mocking. See FFF for documentation. To use it,
include the relevant header:
#include <zephyr/fff.h>
Zephyr provides several FFF-based fake drivers which can be used as either stubs or mocks. Fake driver
instances are configured via Devicetree and Configuration System (Kconfig). See the following devicetree
bindings for more information:
• zephyr,fake-can
• zephyr,fake-eeprom
Zephyr also has defined extensions to FFF for simplified declarations of fake functions. See FFF Exten-
sions.
The way output is presented when running tests can be customized. An example can be found in
tests/ztest/custom_output.
Customization is enabled by setting CONFIG_ZTEST_TC_UTIL_USER_OVERRIDE to “y” and adding a file
tc_util_user_override.h with your overrides.
Add the line zephyr_include_directories(my_folder) to your project’s CMakeLists.txt to let Zephyr
find your header file during builds.
See the file subsys/testsuite/include/zephyr/tc_util.h to see which macros and/or defines can be over-
ridden. These will be surrounded by blocks such as:
#ifndef SOMETHING
#define SOMETHING <default implementation>
#endif /* SOMETHING */
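For instance, a minimal tc_util_user_override.h might override an output macro such as PRINT_DATA (the specific override below is an illustrative assumption; check tc_util.h for the macros actually available):

```c
/* tc_util_user_override.h -- illustrative sketch, not from the Zephyr tree. */
#ifndef PRINT_DATA
/* Prefix every line of test output with a custom tag. */
#define PRINT_DATA(fmt, ...) printk("[my-ci] " fmt, ##__VA_ARGS__)
#endif /* PRINT_DATA */
```

Because each default definition is guarded by #ifndef, any macro you define in the override header takes precedence.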
By default, the tests are sorted and run in alphanumeric order. Test cases may be dependent on this
sequence. Enable CONFIG_ZTEST_SHUFFLE to randomize the order. The output from the test will display
the seed for failed tests. For native POSIX builds, you can provide the seed as an argument to twister
with --seed.
Static configuration of ZTEST_SHUFFLE contains:
• CONFIG_ZTEST_SHUFFLE_SUITE_REPEAT_COUNT - Number of iterations the test suite will run.
• CONFIG_ZTEST_SHUFFLE_TEST_REPEAT_COUNT - Number of iterations the test will run.
Test Selection
For POSIX-enabled builds with ZTEST_NEW_API, use command line arguments to list or select tests to
run. The -test argument expects a comma-separated list of suite::test. You can substitute the test
name with an * to run all tests within a suite.
For example
$ zephyr.exe -list
$ zephyr.exe -test="fixture_tests::test_fixture_pointer,framework_tests::test_assert_
˓→mem_equal"
$ zephyr.exe -test="framework_tests::*"
FFF Extensions
group fff_extensions
This module provides extensions to FFF for simplifying the configuration and usage of fakes.
Defines
RETURN_HANDLED_CONTEXT(FUNCNAME, CONTEXTTYPE, RESULTFIELD, CONTEXTPTRNAME, HANDLERBODY)
Insert per-test context handling logic into a custom fake body. Example:
int FUNCNAME##_custom_fake(
const struct instance **instance_out)
{
RETURN_HANDLED_CONTEXT(
FUNCNAME,
struct FUNCNAME##_custom_fake_context,
result,
context,
{
if (context != NULL)
{
if (context->result == 0)
{
if (instance_out != NULL)
{
*instance_out = context->instance;
}
}
return context->result;
}
        });

    return FUNCNAME##_fake.return_val;
}
Parameters
• FUNCNAME – Name of function being faked
• CONTEXTTYPE – type of custom defined fake context struct
• RESULTFIELD – name of field holding the return type & value
• CONTEXTPTRNAME – expected name of pointer to custom defined fake context
struct
• HANDLERBODY – in-line custom fake handling logic
This script scans for the set of unit test applications in the git repository and attempts to execute them.
By default, it tries to build each test case on boards marked as default in the board definition file.
The default options will build the majority of the tests on a defined set of boards and will run in an
emulated environment if available for the architecture or configuration being tested.
In normal use, twister runs a limited set of kernel tests (inside an emulator). Because of its limited test
execution coverage, twister cannot guarantee local changes will succeed in the full build environment,
but it does sufficient testing by building samples and tests for different boards and different configura-
tions to help keep the complete code tree buildable.
When using (at least) one -v option, twister’s console output shows for every test how the test is run
(qemu, native_posix, etc.) or whether the binary was just built. There are a few reasons why twister
only builds a test and doesn’t run it:
• The test is marked as build_only: true in its .yaml configuration file.
• The test configuration has defined a harness but you don’t have it or haven’t set it up.
• The target device is not connected and not available for flashing
• You or some higher level automation invoked twister with --build-only.
To run the script in the local tree, follow the steps below:
Linux
$ source zephyr-env.sh
$ ./scripts/twister
Windows
zephyr-env.cmd
python .\scripts\twister
If you have a system with a large number of cores and plenty of free storage space, you can build and
run all possible tests using the following options:
Linux
Windows
This will build for all available boards and run all applicable tests in a simulated (for example QEMU)
environment.
If you want to run tests on one or more specific platforms, you can use the --platform option. It is a
platform filter for testing; with this option, test suites will only be built/run on the platforms specified.
This option also supports different revisions of the same board: use --platform board@revision to
test a specific revision.
The list of command line options supported by twister can be viewed using:
Linux
$ ./scripts/twister --help
Windows
Board Configuration
To build tests for a specific board and to execute some of the tests on real hardware or in an emulation
environment such as QEMU, a board configuration file is required. The file is generic enough to be used
for other tasks that require a board inventory with details about the board and its configuration that
would otherwise only be available at build time.
The board metadata file is located in the board directory and is structured using the YAML markup
language. The example below shows a board with the data required for best test coverage for this specific
board:
identifier: frdm_k64f
name: NXP FRDM-K64F
type: mcu
arch: arm
toolchain:
- zephyr
- gnuarmemb
- xtools
supported:
- arduino_gpio
- arduino_i2c
- netif:eth
- adc
- i2c
- nvs
- spi
- gpio
- usb_device
- watchdog
- can
- pwm
testing:
default: true
identifier:
A string that matches how the board is defined in the build system. This same string is used when
building, for example when calling west build or cmake:
# with west
west build -b reel_board
# with cmake
cmake -DBOARD=reel_board ..
name:
The actual name of the board as it appears in marketing material.
type:
Type of the board or configuration; currently two types are supported: mcu and qemu
simulation:
Simulator used to simulate the platform, e.g. qemu.
arch:
Architecture of the board
toolchain:
The list of supported toolchains that can build this board. This should match one of the values used
for ‘ZEPHYR_TOOLCHAIN_VARIANT’ when building on the command line
ram:
Available RAM on the board (specified in KB). This is used to match testcase requirements. If not
specified we default to 128KB.
flash:
Available FLASH on the board (specified in KB). This is used to match testcase requirements. If not
specified we default to 512KB.
supported:
A list of features this board supports. This can be specified as a single word feature or as a variant
of a feature class. For example:
supported:
- pci
This indicates the board does support PCI. You can make a testcase build or run only on such
boards, or:
supported:
- netif:eth
- sensor:bmi16
A testcase can then depend on 'eth' to test only Ethernet, or on 'netif' to run on any board with a
networking interface.
testing:
Testing-related keywords to provide best coverage for the features of this board.
default: [True|False]:
This is a default board; it will be tested with the highest priority and is covered when invoking
the simplified twister without any additional arguments.
ignore_tags:
Do not attempt to build (and therefore run) tests marked with this list of tags.
only_tags:
Only execute tests with this list of tags on a specific platform.
Test Cases
Test cases are detected by the presence of a 'testcase.yaml' or a 'sample.yaml' file in the application's
project directory. This file may contain one or more entries in the test section, each identifying a test
scenario.
The name of each testcase needs to be unique in the context of the overall testsuite and has to follow
basic rules:
1. The format of the test identifier shall be a string without any spaces or special characters (allowed
characters: alphanumeric and [_=]) consisting of multiple sections delimited with a dot (.).
2. Each test identifier shall start with a section followed by a subsection separated by a dot. For
example, a test that covers semaphores in the kernel shall start with kernel.semaphore.
3. All test identifiers within a testcase.yaml file need to be unique. For example a testcase.yaml file
covering semaphores in the kernel can have:
• kernel.semaphore: For general semaphore tests
• kernel.semaphore.stress: Stress testing semaphores in the kernel.
4. Depending on the nature of the test, an identifier can consist of at least two sections:
• Ztest tests: The individual testcases in the ztest testsuite will be concatenated with the identifier
in the testcase.yaml file, generating unique identifiers for every testcase in the suite.
• Standalone tests and samples: This type of test should at least have 3 sections in the test
identifier in the testcase.yaml (or sample.yaml) file. The last section of the name shall signify
the test itself.
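The naming rules above can be captured in a small Python check. This is a sketch of the stated rules only (allowed characters, at least two dot-separated sections); the exact character set twister accepts may differ:

```python
import re

# Rule 1: alphanumerics plus '_' and '=', no spaces or other special characters.
# Rule 2: at least a section and a subsection, delimited with a dot.
IDENT_RE = re.compile(r"^[A-Za-z0-9_=]+(\.[A-Za-z0-9_=]+)+$")

def is_valid_test_identifier(name):
    """Return True if the test identifier follows the basic naming rules."""
    return bool(IDENT_RE.match(name))

assert is_valid_test_identifier("kernel.semaphore")
assert is_valid_test_identifier("kernel.semaphore.stress")
assert not is_valid_test_identifier("kernel")      # needs a subsection
assert not is_valid_test_identifier("kernel sem")  # no spaces allowed
```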
Test cases are written using the YAML syntax and share the same structure as samples. The following is
an example test with a few options that are explained in this document.
tests:
bluetooth.gatt:
build_only: true
platform_allow: qemu_cortex_m3 qemu_x86
tags: bluetooth
bluetooth.gatt.br:
build_only: true
extra_args: CONF_FILE="prj_br.conf"
filter: not CONFIG_DEBUG
platform_exclude: up_squared
platform_allow: qemu_cortex_m3 qemu_x86
tags: bluetooth
A sample with tests will have the same structure with additional information related to the sample and
what is being demonstrated:
sample:
name: hello world
description: Hello World sample, the simplest Zephyr application
tests:
sample.basic.hello_world:
build_only: true
tags: tests
min_ram: 16
sample.basic.hello_world.singlethread:
build_only: true
extra_args: CONF_FILE=prj_single.conf
filter: not CONFIG_BT
tags: tests
min_ram: 16
Each test block in the testcase metadata can define the following key/value pairs:
tags: <list of tags> (required)
A set of string tags for the testcase. Usually pertains to functional domains but can be anything.
Command line invocations of this script can filter the set of tests to run based on tag.
skip: <True|False> (default False)
skip testcase unconditionally. This can be used for broken tests.
slow: <True|False> (default False)
Don't run this test case unless --enable-slow was passed in on the command line. Intended for
time-consuming test cases that are only run under certain circumstances, like daily builds. These
test cases are still compiled.
extra_args: <list of extra arguments>
Extra arguments to pass to Make when building or running the test case.
extra_configs: <list of extra configurations>
Extra configuration options to be merged with a master prj.conf when building or running the test
case. For example:
common:
tags: drivers adc
tests:
test:
depends_on: adc
test_async:
extra_configs:
- CONFIG_ADC_ASYNC=y
Using namespacing, it is possible to apply a configuration only to some hardware. Currently both
architectures and platforms are supported:
common:
tags: drivers adc
tests:
test:
depends_on: adc
test_async:
extra_configs:
- arch:x86:CONFIG_ADC_ASYNC=y
- platform:qemu_x86:CONFIG_DEBUG=y
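Conceptually, resolving such namespaced entries for a given build target means keeping plain entries plus the arch:/platform:-scoped entries that match the target. The following Python sketch illustrates the idea (a hypothetical helper, not twister's actual code):

```python
def applicable_configs(extra_configs, arch, platform):
    """Keep plain entries plus arch:/platform:-scoped entries matching the target."""
    result = []
    for entry in extra_configs:
        if entry.startswith("arch:"):
            _, entry_arch, cfg = entry.split(":", 2)
            if entry_arch == arch:
                result.append(cfg)
        elif entry.startswith("platform:"):
            _, entry_plat, cfg = entry.split(":", 2)
            if entry_plat == platform:
                result.append(cfg)
        else:
            result.append(entry)
    return result

configs = ["arch:x86:CONFIG_ADC_ASYNC=y", "platform:qemu_x86:CONFIG_DEBUG=y"]
# Both entries apply when building for qemu_x86 (an x86 platform)...
assert applicable_configs(configs, arch="x86", platform="qemu_x86") == [
    "CONFIG_ADC_ASYNC=y", "CONFIG_DEBUG=y"]
# ...and neither applies on an unrelated ARM board.
assert applicable_configs(configs, arch="arm", platform="frdm_k64f") == []
```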
Harnesses ztest, gtest and console are based on parsing of the output and matching certain
phrases. ztest and gtest harnesses look for pass/fail/etc. frames defined in those frameworks.
Use gtest harness if you’ve already got tests written in the gTest framework and do not wish to
update them to zTest. The console harness tells Twister to parse a test’s text output for a regex
defined in the test’s YAML file. The robot harness is used to execute Robot Framework test suites
in the Renode simulation framework.
Some widely used harnesses that are not supported yet:
• keyboard
• net
• bluetooth
platform_key: <list of platform attributes>
Often a test needs to only be built and run once to qualify as passing. Imagine a library of code
that depends on the platform architecture where passing the test on a single platform for each arch
is enough to qualify the tests and code as passing. The platform_key attribute enables doing just
that.
For example to key on (arch, simulation) to ensure a test is run once per arch and simulation (as
would be most common):
platform_key:
- arch
- simulation
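In effect, platform_key deduplicates the set of eligible platforms by the value tuple of the listed attributes, keeping one platform per unique combination. A simplified Python sketch of this idea (not twister's implementation; the platform entries are illustrative):

```python
def dedupe_by_key(platforms, key_attrs):
    """Keep the first platform seen for each unique combination of key attributes."""
    seen = set()
    kept = []
    for plat in platforms:
        key = tuple(plat.get(attr) for attr in key_attrs)
        if key not in seen:
            seen.add(key)
            kept.append(plat["name"])
    return kept

platforms = [
    {"name": "board_a", "arch": "x86", "simulation": "qemu"},
    {"name": "board_b", "arch": "x86", "simulation": "qemu"},  # same (arch, simulation): dropped
    {"name": "board_c", "arch": "arm", "simulation": "qemu"},
]
assert dedupe_by_key(platforms, ["arch", "simulation"]) == ["board_a", "board_c"]
```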
Adding platform (board) attributes to include things such as SoC name, SoC family, and perhaps
sets of IP blocks implementing each peripheral interface would enable other interesting uses. For
example, this could enable building and running SPI tests once for each unique IP block.
harness_config: <harness configuration options>
Extra harness configuration options, used to select a board and/or for handling generic console
output with regex matching. A board configuration can announce the features it supports; this option
will enable the test to run only on those platforms that fulfill this external dependency.
The following options are currently supported:
type: <one_line|multi_line> (required)
Depends on the regex string to be matched
record: <recording options>
regex: <expression> (required)
Any string that the particular test case prints to record test results.
regex: <expression> (required)
Any string that the particular test case prints to confirm test runs as expected.
ordered: <True|False> (default False)
Check the regular expression strings in an ordered or random fashion
repeat: <integer>
Number of times to validate the repeated regex expression
fixture: <expression>
Specify a test case dependency on an external device (e.g., a sensor), and identify setups that
fulfill this dependency. Specific test setup and board selection logic picks the particular
board(s) out of multiple boards that fulfill the dependency in an automation
setup, based on the "fixture" keyword. Some sample fixture names are i2c_hts221, i2c_bme280,
i2c_FRAM, ble_fw and gpio_loop.
Only one fixture can be defined per testcase, and the fixture name has to be unique across all
tests in the test suite.
sample:
name: HTS221 Temperature and Humidity Monitor
common:
tags: sensor
harness: console
harness_config:
type: multi_line
ordered: false
regex:
- "Temperature:(.*)C"
- "Relative Humidity:(.*)%"
fixture: i2c_hts221
tests:
test:
tags: sensors
depends_on: i2c
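The multi_line, unordered matching above amounts to requiring that every regex matches some line of the captured output, while ordered matching requires them to appear in sequence. A minimal Python sketch of that behavior (illustrative; not the actual console harness code):

```python
import re

def console_harness_passes(regexes, output_lines, ordered=False):
    """Return True if every regex matches; 'ordered' requires matches in sequence."""
    if not ordered:
        # Unordered: each pattern may match any line.
        return all(any(re.search(rx, line) for line in output_lines) for rx in regexes)
    # Ordered: each pattern must match at or after the previous match position.
    idx = 0
    for rx in regexes:
        while idx < len(output_lines) and not re.search(rx, output_lines[idx]):
            idx += 1
        if idx == len(output_lines):
            return False
        idx += 1
    return True

output = ["Relative Humidity:49.2%", "Temperature:25.3 C"]
patterns = [r"Temperature:(.*)C", r"Relative Humidity:(.*)%"]
assert console_harness_passes(patterns, output, ordered=False)
# Ordered matching fails here because the lines arrive in the opposite order.
assert not console_harness_passes(patterns, output, ordered=True)
```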
The following is an example yaml file with pytest harness_config options. The default pytest_root
name "pytest" will be used if pytest_root is not specified. Please refer to the example in
samples/subsys/testsuite/pytest/.
tests:
pytest.example:
harness: pytest
harness_config:
pytest_root: [pytest directory name]
tests:
robot.example:
harness: robot
harness_config:
robot_test_path: [robot file path]
filter: <expression>
Filter whether the testcase should be run by evaluating an expression against an environment
containing the following values:
{ ARCH : <architecture>,
PLATFORM : <platform>,
<all CONFIG_* key/value pairs in the test's generated defconfig>,
*<env>: any environment variable available
}
Twister will first evaluate the expression to find if a "limited" cmake call, i.e. using the package_helper
cmake script, can be done. Existence of "dt_*" entries indicates devicetree is needed. Existence
of "CONFIG_*" entries indicates kconfig is needed. If there are no other types of entries in the
expression, filtration can be done without creating a complete build system. If there are entries of
other types, a full cmake run is required.
The grammar for the expression language is as follows:

expression ::= expression "and" expression
             | expression "or" expression
             | "not" expression
             | "(" expression ")"
             | symbol "==" constant
             | symbol "!=" constant
             | symbol "<" number
             | symbol ">" number
             | symbol ">=" number
             | symbol "<=" number
             | symbol "in" list
             | symbol ":" string
             | symbol
For the case where expression ::= symbol, it evaluates to true if the symbol is defined to a non-
empty string.
Operator precedence, starting from lowest to highest:
• or (left associative)
• and (left associative)
• not (right associative)
• all comparison operators (non-associative)
arch_allow, arch_exclude, platform_allow, platform_exclude are all syntactic sugar for these ex-
pressions. For instance

arch_exclude = x86 arc

is the same as:

filter = not ARCH in ["x86", "arc"]
The ':' operator compiles the string argument as a regular expression, and then returns a
true value only if the symbol's value in the environment matches. For example, if
CONFIG_SOC="stm32f107xc" then

filter = CONFIG_SOC : "stm.*"

would match it.
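The two evaluation rules just described, a bare symbol being true when defined to a non-empty string, and the ':' operator performing a regex match on the symbol's value, can be sketched in Python as follows (illustrative only; twister's parser is more general):

```python
import re

def symbol_true(env, name):
    """A bare symbol evaluates to true when defined to a non-empty string."""
    return bool(env.get(name))

def regex_match(env, name, pattern):
    """The ':' operator: true when the symbol's value matches the compiled regex."""
    value = env.get(name, "")
    return re.match(pattern, value) is not None

env = {"ARCH": "arm", "CONFIG_SOC": "stm32f107xc"}

# filter = CONFIG_SOC : "stm.*"
assert regex_match(env, "CONFIG_SOC", "stm.*")
# filter = not ARCH in ["x86", "arc"]
assert not (env["ARCH"] in ["x86", "arc"])
# A defined, non-empty symbol is truthy; an undefined one is not.
assert symbol_true(env, "ARCH")
assert not symbol_true(env, "CONFIG_MISSING")
```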
The set of test cases that actually run depends on directives in the testcase files and options passed
in on the command line. If there is any confusion, running with -v or examining the discard report
(twister_discard.csv) can help show why particular test cases were skipped.
Metrics (such as pass/fail state and binary size) for the last code release are stored in
scripts/release/twister_last_release.csv. To update this, pass the --all --release options.
To load arguments from a file, write '+' before the file name, e.g., +file_name. The file content must be
one or more valid arguments separated by line breaks instead of whitespace.
This mode is used in continuous integration (CI) and other automated environments to give developers
fast feedback on changes. The mode can be activated using the --integration option of twister
and narrows down the scope of builds and tests, if applicable, to platforms defined under the integration
keyword in the testcase definition file (testcase.yaml and sample.yaml).
Besides being able to run tests in QEMU and other simulated environments, twister supports running
most of the tests on real devices and produces reports for each run with detailed FAIL/PASS results.
Executing tests on a single device To use this feature on a single connected device, run twister with
the following options:
Linux
Windows
The --device-serial option denotes the serial device the board is connected to. This needs to be
accessible by the user running twister. You can run this on only one board at a time, specified using the
--platform option.
The --device-serial-baud option is only needed if your device does not run at 115200 baud.
To support devices without a physical serial port, use the --device-serial-pty option. In this case,
log messages are captured, for example, using a script. You can then run twister with the following
options:
Linux
Windows
The script is user-defined and handles delivering the messages which can be used by twister to determine
the test execution status.
The --device-flash-timeout option allows you to set an explicit timeout on the device flash operation,
for example when device flashing takes a significantly long time.
The --device-flash-with-test option indicates that on the platform the flash operation also executes
a test case, so the flash timeout is increased by a test case timeout.
Executing tests on multiple devices To build and execute tests on multiple devices connected to the
host PC, a hardware map needs to be created with all connected devices and their details such as the
serial device, baud and their IDs if available. Run the following command to produce the hardware map:
Linux
Windows
The generated hardware map file (map.yml) will have the list of connected devices, for example:
Linux
- connected: true
id: OSHW000032254e4500128002ab98002784d1000097969900
platform: unknown
product: DAPLink CMSIS-DAP
runner: pyocd
serial: /dev/cu.usbmodem146114202
- connected: true
id: 000683759358
platform: unknown
product: J-Link
runner: unknown
serial: /dev/cu.usbmodem0006837593581
Windows
- connected: true
id: OSHW000032254e4500128002ab98002784d1000097969900
platform: unknown
product: unknown
runner: unknown
serial: COM1
- connected: true
id: 000683759358
platform: unknown
product: unknown
runner: unknown
serial: COM2
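A quick way to spot entries in a freshly generated map that still need manual editing is to scan for 'unknown' values. A Python sketch operating on one parsed hardware-map entry, represented here as a plain dict (illustrative helper, not part of twister):

```python
def fields_needing_edit(device):
    """Return the keys of a hardware-map entry still set to 'unknown'."""
    return [key for key in ("platform", "product", "runner")
            if device.get(key) == "unknown"]

# Entry taken from the generated map above: platform and runner still
# need to be filled in by hand, product was detected correctly.
device = {
    "connected": True,
    "id": "000683759358",
    "platform": "unknown",
    "product": "J-Link",
    "runner": "unknown",
    "serial": "/dev/cu.usbmodem0006837593581",
}
assert fields_needing_edit(device) == ["platform", "runner"]
```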
Any options marked as 'unknown' need to be changed and set to the correct values; in the above
example the platform names, the products and the runners need to be replaced with the values
corresponding to the connected hardware. In this example we are using a reel_board and an
nrf52840dk_nrf52840:
Linux
- connected: true
id: OSHW000032254e4500128002ab98002784d1000097969900
platform: reel_board
product: DAPLink CMSIS-DAP
runner: pyocd
serial: /dev/cu.usbmodem146114202
baud: 9600
- connected: true
id: 000683759358
Windows
- connected: true
id: OSHW000032254e4500128002ab98002784d1000097969900
platform: reel_board
product: DAPLink CMSIS-DAP
runner: pyocd
serial: COM1
baud: 9600
- connected: true
id: 000683759358
platform: nrf52840dk_nrf52840
product: J-Link
runner: nrfjprog
serial: COM2
baud: 9600
The above command will result in twister building tests for the platforms defined in the hardware map
and subsequently flashing and running the tests on those platforms.
Note: Currently only boards with support for both pyocd and nrfjprog are supported with the hardware
map features. Boards that require other runners to flash the Zephyr binary are still work in progress.
- connected: true
id: None
platform: intel_adsp_cavs25
product: None
runner: intel_adsp
serial_pty: path/to/script.py
The runner_params field indicates the parameters you want to pass to the west runner. For some boards
the west runner needs extra parameters to work. It is equivalent to the following west and twister
commands.
Linux
Windows
Note: For serial PTY, the --generate-hardware-map option cannot scan the device and generate a correct
hardware map automatically. You have to edit the map manually, following the above example. This is
because the serial port of the PTY is not fixed and is allocated in the system at runtime.
Fixtures Some tests require additional setup or special wiring specific to the test. Running the tests
without this setup or test fixture may fail. A testcase can specify the fixture it needs, which can then be
matched with the hardware capability of a board and the fixtures it supports, via the command line or
using the hardware map file.
Fixtures are defined in the hardware map file as a list:
- connected: true
fixtures:
- gpio_loopback
id: 0240000026334e450015400f5e0e000b4eb1000097969900
platform: frdm_k64f
product: DAPLink CMSIS-DAP
runner: pyocd
serial: /dev/ttyACM9
When running twister with --device-testing, the configured fixture in the hardware map file will be
matched to testcases requesting the same fixtures and these tests will be executed on the boards that
provide this fixture.
Fixtures can also be provided via the twister command option --fixture; this option can be used multiple
times and all given fixtures will be appended as a list. The given fixtures will be assigned to all
boards; this means that all boards set by the current twister command can run those testcases which
request the same fixtures.
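The matching rule can be sketched as: a testcase runs on a board only if the board's fixture set (its hardware-map entries plus any --fixture options, which apply to every board) contains the testcase's requested fixture. An illustrative Python sketch (not twister's selection code):

```python
def board_can_run(testcase_fixture, board_fixtures, cli_fixtures=()):
    """Fixtures given via --fixture apply to every board, on top of the map entries."""
    available = set(board_fixtures) | set(cli_fixtures)
    # A testcase with no fixture requirement can run anywhere.
    return testcase_fixture is None or testcase_fixture in available

board = {"platform": "frdm_k64f", "fixtures": ["gpio_loopback"]}
assert board_can_run("gpio_loopback", board["fixtures"])
assert not board_can_run("i2c_hts221", board["fixtures"])
# --fixture i2c_hts221 extends every board's fixture set.
assert board_can_run("i2c_hts221", board["fixtures"], cli_fixtures=["i2c_hts221"])
```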
Notes It may be useful to annotate board descriptions in the hardware map file with additional infor-
mation. Use the “notes” keyword to do this. For example:
- connected: false
fixtures:
- gpio_loopback
notes: sensor XYZ testcase (harness: console). To use this board in
this file you will need to update serial to reference the third port, and platform
to nrf5340dk_nrf5340_cpuapp or another supported board target.
platform: nrf52840dk_nrf52840
product: J-Link
runner: jlink
serial: null
Overriding Board Identifier When (re-)generated, the hardware map file will contain an "id" keyword
that serves as the argument to --board-id when flashing. In some cases the detected ID is not the
correct one to use, for example when using an external J-Link probe. The "probe_id" keyword overrides
the "id" keyword for this purpose. For example:
- connected: false
id: 0229000005d9ebc600000000000000000000000097969905
platform: mimxrt1060_evk
probe_id: 000609301751
product: DAPLink CMSIS-DAP
runner: jlink
serial: null
Quarantine Twister allows users to provide configuration files defining a list of tests or platforms to be
put under quarantine. Such tests will be skipped and marked accordingly in the output reports. This
feature is especially useful when running larger test suites, where a failure of one test can affect the
execution of other tests (e.g. putting the physical board in a corrupted state).
To use the quarantine feature one has to add the argument --quarantine-list
<PATH_TO_QUARANTINE_YAML> to a twister call. Multiple quarantine files can be used. The cur-
rent status of tests on the quarantine list can also be verified by adding --quarantine-verify to the
above argument. This will make twister skip all tests which are not on the given list.
A quarantine yaml has to be a sequence of dictionaries. Each dictionary has to have “scenarios” and
“platforms” entries listing combinations of scenarios and platforms to put under quarantine. In addition,
an optional entry “comment” can be used, where some more details can be given (e.g. link to a reported
issue). These comments will also be added to the output reports.
When quarantining a class of tests or many scenarios in a single testsuite or when dealing with mul-
tiple issues within a subsystem, it is possible to use regular expressions, for example, kernel.* would
quarantine all kernel tests.
An example of entries in a quarantine yaml:
- scenarios:
- sample.basic.helloworld
comment: "Link to the issue: https://github.com/zephyrproject-rtos/zephyr/pull/33287
˓→"
- scenarios:
- kernel.common
- kernel.common.(misra|tls)
- kernel.common.nano64
platforms:
- platforms:
- qemu_x86
comment: "broken qemu"
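Scenario entries are treated as regular expressions, so quarantining a whole class of tests reduces to matching test identifiers against those patterns. A Python sketch of the matching logic (illustrative; twister's exact semantics may differ):

```python
import re

def is_quarantined(test_id, quarantine_patterns):
    """A test is quarantined when any pattern fully matches its identifier."""
    return any(re.fullmatch(p, test_id) for p in quarantine_patterns)

patterns = ["kernel.*", "sample.basic.helloworld"]
assert is_quarantined("kernel.common", patterns)        # covered by kernel.*
assert is_quarantined("sample.basic.helloworld", patterns)
assert not is_quarantined("drivers.adc", patterns)
```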
Additionally, you can quarantine entire architectures or a specific simulator used for executing tests.
Test Configuration
A test configuration can be used to customize various aspects of twister and the default enabled options
and features. This allows tweaking the filtering capabilities depending on the environment and makes it
possible to adapt and improve coverage when targeting different sets of platforms.
The test configuration also adds support for test levels and the ability to assign a specific test to one or
more levels. Using command line options of twister it is then possible to select a level and just execute
the tests included in this level.
Additionally, the test configuration allows defining level dependencies and additional inclusion of tests
into a specific level if the test itself does not have this information already.
In the configuration file you can include complete components using regular expressions and you can
specify which test level to import from the same file, making management of levels easier.
To help with testing outside of the upstream CI infrastructure, additional options are available in the
configuration file, which can be hosted locally. As of now, these options are available:
• Ability to ignore default platforms as defined in board definitions (Those are mostly emulation
platforms used to run tests in upstream CI)
• Option to specify your own list of default platforms overriding what upstream defines.
• Ability to override build_on_all options used in some testcases. This will treat tests or samples as
any other and just build for the default platforms you specify in the configuration file or on the
command line.
• Ignore some logic in twister to expand platform coverage in cases where default platforms are not
in scope.
platforms:
override_default_platforms: true
increased_platform_scope: false
Test Level Configuration The test configuration allows defining test levels, level dependencies and
additional inclusion of tests into a specific test level if the test itself does not have this information
already.
In the configuration file you can include complete components using regular expressions and you can
specify which test level to import from the same file, making management of levels simple.
An example test level configuration:
levels:
- name: my-test-level
description: >
my custom test level
adds:
- kernel.threads.*
- kernel.timer.behavior
- arch.interrupt
- boards.*
Combined configuration To mix the platform and level configuration, you can take the example
below.
An example platforms-plus-levels configuration:
platforms:
override_default_platforms: true
default_platforms:
- frdm_k64f
levels:
- name: smoke
description: >
A plan to be used verifying basic zephyr features.
- name: unit
description: >
A plan to be used verifying unit test.
- name: integration
description: >
A plan to be used verifying integration.
- name: acceptance
description: >
A plan to be used verifying acceptance.
- name: system
description: >
A plan to be used verifying system.
- name: regression
description: >
A plan to be used verifying regression.
When running with the above test_config.yaml file, only the given default_platforms with the given test level's test cases will run.
Linux
Enable the ZTEST framework's CONFIG_ZTEST_SHUFFLE config option to run your tests in random order. This
can be beneficial for identifying dependencies between test cases. For native_posix platforms, you can
provide the seed to the random number generator by providing --seed=value as an argument to twister.
See Shuffling Test Sequence for more details.
Windows
Writing Robot tests For the list of keywords provided by the Robot Framework itself, refer to the
official Robot documentation.
Information on writing and running Robot Framework tests in Renode can be found in the testing section
of Renode documentation. It provides a list of the most commonly used keywords together with links to
the source code where those are defined.
It’s possible to extend the framework by adding new keywords expressed directly in Robot test suite
files, as an external Python library or, like Renode does it, dynamically via XML-RPC. For details see the
extending Robot Framework section in the official Robot documentation.
Please mind that integration of twister with pytest is still work in progress. Not every platform type is
supported in pytest (yet). If you find any issue with the integration or have an idea for an improvement,
please, let us know about it and open a GitHub issue/enhancement.
Introduction
Pytest is a python framework that "makes it easy to write small, readable tests, and can scale to support
complex functional testing for applications and libraries" (https://docs.pytest.org/en/7.3.x/). Python is
known for its free libraries and ease of use for scripting. In addition, pytest utilizes the concept of
plugins and fixtures, increasing its extensibility and reusability. A pytest plugin pytest-twister-harness
was introduced to provide an integration between pytest and twister, allowing Zephyr's community to
utilize pytest functionality while keeping twister as the main framework.
By default, there is nothing to be done to enable pytest support in twister. The plugin is developed as a
part of Zephyr's tree. To enable install-less operation, twister first extends PYTHONPATH with the path to
this plugin, and then during the pytest call, it appends the command with the -p twister_harness.plugin
argument. If one prefers to use the installed version of the plugin, they must add the
--allow-installed-plugin flag to twister's call.
Pytest-based test suites are discovered the same way as other twister tests, i.e., by the presence of a test-
case/sample.yaml file. Inside, a keyword harness tells twister how to handle a given test. In the case of
harness: pytest, most of the twister workflow (test suite discovery, parallelization, building and report-
ing) remains the same as for other harnesses. The change happens during the execution step. The picture
below presents a simplified overview of the integration.
(figure: Twister collects tests, generates test configurations, applies filtration, spawns workers in parallel
and builds; during test execution, it either runs pytest with the pytest-twister-harness plugin or executes
the test directly in twister; finally, twister generates reports.)
If harness: pytest is used, twister delegates the test execution to pytest, by calling it as a subprocess.
Required parameters (such as build directory, device to be used, etc.) are passed through a CLI command.
When pytest is done, twister looks for a pytest report (results.xml) and sets the test result accordingly.
The first enables pytest-twister-harness plugin indirectly, as it is added with pytest. It also gives access to
dut fixture. The second is important for type checking and enabling IDE hints for duts. The dut fixture
is the core of pytest harness plugin. When used as an argument of a test function it gives access to a
DeviceAbstract type object. The fixture yields a device prepared according to the requested type (native
posix, qemu, hardware, etc.). All types of devices share the same API. This allows for writing tests which
are device-type-agnostic.
Limitations
• The whole pytest call is reported as one test in the final twister report (xml or json).
• Device adapters in the pytest plugin provide an iter_stdout method to read from devices. In some
cases, it is not the most convenient way, and improvements are being considered (for example,
replacing it with a simple read function with given byte-size and timeout arguments).
• Not every platform type is supported in the plugin (yet).
With Zephyr, you can generate code coverage reports to analyze which parts of the code are covered by
a given test or application.
You can do this in two ways:
• In a real embedded target or QEMU, using Zephyr’s gcov integration
• Directly in your host computer, by compiling your application targeting the POSIX architecture
Overview GCC GCOV is a test coverage program used together with the GCC compiler to analyze and
create test coverage reports for your programs, helping you create more efficient, faster-running code
and discover untested code paths.
In Zephyr, gcov collects coverage profiling data in RAM (and not to a file system) while your application
is running. Support for gcov collection and reporting is limited by available RAM size and so is currently
enabled only for QEMU emulation of embedded targets.
Details There are two parts to enabling this feature: the first is to enable coverage for the device, and
the second is to enable it in the test application. As explained earlier, code coverage with gcov is a function
of available RAM. Therefore, ensure that the device has enough RAM when enabling coverage for it.
For example, a small device like frdm_k64f can run a simple test application, but more complex test
cases that consume more RAM will crash when coverage is enabled.
To enable the device for coverage, select CONFIG_HAS_COVERAGE_SUPPORT in the Kconfig.board file.
To report the coverage for the particular test application set CONFIG_COVERAGE.
Steps to generate code coverage reports These steps will produce an HTML coverage report for a
single application.
1. Build the code with CONFIG_COVERAGE=y.
2. Capture the emulator output into a log file. You may need to terminate the emulator with Ctrl-A
X for this to complete after the coverage dump has been printed:
3. Generate the gcov .gcda and .gcno files from the log file that was saved:
4. Find the gcov binary placed in the SDK. You will need to pass the path to the gcov binary for the
appropriate architecture when you later invoke gcovr:
$ mkdir -p gcov_report
When compiling for the POSIX architecture, you use your host’s native tooling to build a native executable which contains your application, the Zephyr OS, and some basic HW emulation.
That means you can use the same tools you would while developing any other desktop application.
To build your application with gcc’s gcov, simply set CONFIG_COVERAGE before compiling it. When you
run your application, gcov coverage data will be dumped into the respective gcda and gcno files. You
may postprocess these with your preferred tools. For example:
$ ./build/zephyr/zephyr.exe
# Press Ctrl+C to exit
lcov --capture --directory ./ --output-file lcov.info -q --rc lcov_branch_coverage=1
genhtml lcov.info --output-directory lcov_html -q --ignore-errors source --branch-coverage --highlight --legend
Note: You need a recent version of lcov (at least 1.14) with support for intermediate text format. Such
packages exist in recent Linux distributions.
Zephyr’s twister script can automatically generate a coverage report from the tests which were executed. You just need to invoke it with the --coverage command line option.
2.11.5 BabbleSim
In the Zephyr project we use the BabbleSim simulator to test some of the Zephyr radio protocols, including the BLE stack, 802.15.4, and some of the networking stack.
BabbleSim is a physical layer simulator, which in combination with the Zephyr bsim boards can be used
to simulate a network of BLE and 15.4 devices. When we build Zephyr targeting an nrf52_bsim board
we produce a Linux executable, which includes the application, Zephyr OS, and models of the HW.
When there is radio activity, this Linux executable will connect to the BabbleSim Phy simulation to
simulate the radio channel.
In the BabbleSim documentation you can find more information on how to get and build the simulator (https://babblesim.github.io/building.html). In the nrf52_bsim board documentation
you can find more information about how to build Zephyr targeting that particular board, and a few
examples.
Types of tests
Tests without radio activity: bsim tests with twister The bsim boards can be used without radio activity, and in that case it is not necessary to connect them to a physical layer simulation. Thanks to this, these target boards can be used just like native_posix with twister, to run all standard Zephyr twister tests, but with models of real SoC HW and their drivers.
Tests with radio activity When there is radio activity, BabbleSim tests require at the very least a running physical layer simulation, and most require more than one simulated device. Because of this, these tests are not built and run with twister, but with a dedicated set of test scripts.
These tests are kept in the tests/bsim/ folder. There you can find a README with more information
about how to build and run them, as well as the convention they follow.
There are two main sets of tests of this type:
• Self-checking embedded applications/tests: some of the simulated devices’ applications are built with checks which decide if the test is passing or failing. These embedded application tests use the bs_tests system to report the pass or failure, and in many cases to build several tests into the same binary.
• Tests using the EDTT tool, in which an EDTT (Python) test controls the embedded applications over an RPC mechanism and decides whether the test passes. Today these tests include a very significant subset of the BT qualification test suite.
More information about how different tests types relate to BabbleSim and the bsim boards can be found
in the bsim boards tests section.
As the nrf52_bsim is based on the POSIX architecture, you can easily collect test coverage information.
You can use the script tests/bsim/generate_coverage_report.sh to generate an HTML coverage report from tests.
Check the page on coverage generation for more info.
Ztest is currently being migrated to a new API; this documentation covers the deprecated APIs, which will eventually be removed. See Test Framework for the new API. Similarly, Ztest’s mocking framework is also deprecated (see Mocking via FFF).
Ztest can be used for unit testing. This means that rather than including the entire Zephyr OS for testing a single function, you can focus the testing effort on the specific module in question. This speeds up testing since only the module has to be compiled in, and the tested functions are called directly.
Since you won’t be including basic kernel data structures that most code depends on, you have to provide
function stubs in the test. Ztest provides some helpers for mocking functions, as demonstrated below.
In a unit test, mock objects can simulate the behavior of complex real objects and are used to decide
whether a test failed or passed by verifying whether an interaction with an object occurred, and if
required, to assert the order of that interaction.
Best practices for declaring the test suite twister and other validation tools need to obtain the list of
subcases that a Zephyr ztest test image will expose.
Rationale
All of this is for the purpose of traceability. It’s not enough to have only a semaphore test project. We also need to show that we have test points for all APIs and functionality, and that they trace back to the documentation of the API and to functional requirements.
The idea is that test reports show results for every sub-testcase as passed, failed, blocked, or skipped.
Reporting on only the high-level test project level, particularly when tests do too many things, is too
vague.
There exist two alternatives to writing tests. The first, and more verbose, approach is to directly
declare and run the test suites. Here is a generic template for a test showing the expected use of
ztest_test_suite():
#include <zephyr/ztest.h>

void test_main(void)
{
	ztest_test_suite(common,
		ztest_unit_test(test_sometest1),
		ztest_unit_test(test_sometest2),
		ztest_unit_test(test_sometest3),
		ztest_unit_test(test_sometest4)
	);

	ztest_run_test_suite(common);
}
Alternatively, it is possible to split tests across multiple files using ztest_register_test_suite() which
bypasses the need for extern:
#include <zephyr/ztest.h>

void test_sometest1(void)
{
	zassert_true(1, "true");
}

ztest_register_test_suite(common, NULL,
	ztest_unit_test(test_sometest1)
);
The above sample simply registers the test suite and uses a NULL pragma function (more on that later). It is important to note that the test suite isn’t directly run in this file. Instead, two alternatives exist for running the suite. The first is to do nothing: a default test_main function is provided by ztest. This is the preferred approach if the test doesn’t involve state and doesn’t require use of the pragma.
In cases of an integration test it is possible that some general state needs to be set between test suites.
This can be thought of as a state diagram in which test_main simply goes through various actions that
modify the board’s state and different test suites need to run. This is achieved in the following:
#include <zephyr/ztest.h>

struct state {
	bool is_hibernating;
	bool is_usb_connected;
};

ztest_register_test_suite(baseline, pragma_always,
	ztest_unit_test(test_case0));
ztest_register_test_suite(before_usb, pragma_not_hibernating_not_connected,
	ztest_unit_test(test_case1),
	ztest_unit_test(test_case2));
ztest_register_test_suite(with_usb, pragma_usb_connected,
	ztest_unit_test(test_case3),
	ztest_unit_test(test_case4));
void test_main(void)
{
struct state state;
For twister to parse source files and create a list of subcases, the declarations of ztest_test_suite()
and ztest_register_test_suite() must follow a few rules:
• one declaration per line
• conditional execution handled with ztest_test_skip() rather than preprocessor conditionals
What to avoid:
• packing multiple testcases in one source file
void test_main(void)
{
#ifdef TEST_feature1
	ztest_test_suite(feature1,
		ztest_unit_test(test_1a),
		ztest_unit_test(test_1b),
		ztest_unit_test(test_1c)
	);
	ztest_run_test_suite(feature1);
#endif
#ifdef TEST_feature2
	ztest_test_suite(feature2,
		ztest_unit_test(test_2a),
		ztest_unit_test(test_2b)
	);
	ztest_run_test_suite(feature2);
#endif
}
• using preprocessor conditionals inside the test suite declaration
ztest_test_suite(common,
	ztest_unit_test(test_sometest1),
	ztest_unit_test(test_sometest2),
#ifdef CONFIG_WHATEVER
	ztest_unit_test(test_sometest3),
#endif
	ztest_unit_test(test_sometest4),
	...
• placing comments inside the test suite declaration, which breaks subcase parsing
ztest_test_suite(common,
	ztest_unit_test(test_sometest1),
	ztest_unit_test(test_sometest2) /* will fail */ ,
	/* will fail! */ ztest_unit_test(test_sometest3),
	ztest_unit_test(test_sometest4),
	...
• defining multiple unit / user unit test cases on a single line
ztest_test_suite(common,
	ztest_unit_test(test_sometest1), ztest_unit_test(test_sometest2),
	ztest_unit_test(test_sometest3),
	ztest_unit_test(test_sometest4),
	...
Other questions:
• Why not pre-scan with CPP and then parse? or post scan the ELF file?
If C pre-processing or building fails for any reason, we won’t be able to determine the subcases.
• Why not declare them in the YAML testcase description?
A separate testcase description file would be harder to maintain than just keeping the information
in the test source files themselves – only one file to update when changes are made eliminates
duplication.
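A rough sketch of why these rules matter: a scanner works on raw source text, without preprocessing, so declarations it cannot see on a single clean line are silently missed. The snippet below is illustrative only, not twister’s actual implementation.

```python
# Rough sketch of source-level subcase scanning (not twister's real code).
# Because the scanner works on raw text, one declaration per line keeps
# the pattern below reliable.
import re

SUBCASE_RE = re.compile(r"^\s*ztest_(?:user_)?unit_test\((\w+)\)", re.MULTILINE)


def scan_subcases(source: str):
    """Return the subcase names declared in a test source string."""
    return SUBCASE_RE.findall(source)


sample = """
ztest_test_suite(common,
    ztest_unit_test(test_sometest1),
    ztest_unit_test(test_sometest2),
);
"""
```

Note how a second declaration placed on the same line would not be matched by the anchored pattern, which is exactly why the one-declaration-per-line rule exists.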
Mocking
These functions allow abstracting callbacks and related functions and controlling them from specific tests. You can enable the mocking framework by setting CONFIG_ZTEST_MOCKING to “y” in the configuration file of the test. The number of concurrent return values and expected parameters is limited by CONFIG_ZTEST_PARAMETER_COUNT.
Here is an example for configuring the function expect_two_parameters to expect the values a=2 and
b=3, and telling returns_int to return 5:
#include <zephyr/ztest.h>

/* ... definitions of parameter_test and return_value_test elided ... */

void test_main(void)
{
	ztest_test_suite(mock_framework_tests,
		ztest_unit_test(parameter_test),
		ztest_unit_test(return_value_test)
	);

	ztest_run_test_suite(mock_framework_tests);
}
group ztest_mock
This module provides simple mocking functions for unit testing. These need CONFIG_ZTEST_MOCKING=y.
Defines
ztest_copy_return_data(param, length)
Copy the data set by ztest_return_data to the memory pointed by param.
This will first check that param is not null and then copy the data. This must be called from
the called function.
Parameters
• param – Parameter to return data for
• length – Length of the data to return
ztest_returns_value(func, value)
Tell func that it should return value.
Parameters
• func – Function that should return value
• value – Value to return from func
ztest_get_return_value()
Get the return value for current function.
The return value must have been set previously with ztest_returns_value(). If no return value
exists, the current test will fail.
Returns
The value the current function should return
ztest_get_return_value_ptr()
Get the return value as a pointer for current function.
The return value must have been set previously with ztest_returns_value(). If no return value
exists, the current test will fail.
Returns
The value the current function should return as a void *
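The semantics of ztest_returns_value() / ztest_get_return_value() can be modeled in a few lines. This is a Python model of the behavior only; the real implementation is C, and the number of values it can hold is bounded by CONFIG_ZTEST_PARAMETER_COUNT.

```python
# Tiny Python model of ztest's mocked-return-value semantics.
from collections import defaultdict, deque

PARAMETER_COUNT = 4  # models CONFIG_ZTEST_PARAMETER_COUNT

_returns = defaultdict(deque)


def ztest_returns_value(func: str, value):
    """Tell func that it should return value (queued FIFO)."""
    if len(_returns[func]) >= PARAMETER_COUNT:
        raise RuntimeError("too many queued return values")
    _returns[func].append(value)


def ztest_get_return_value(func: str):
    """Fetch the next queued return value; fail if none was set."""
    if not _returns[func]:
        # In ztest, the current test fails when no value was set.
        raise AssertionError(f"no return value set for {func}")
    return _returns[func].popleft()


# Usage mirroring the documented API:
ztest_returns_value("returns_int", 5)
assert ztest_get_return_value("returns_int") == 5
```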
Support for static code analysis tools in Zephyr is possible through CMake.
The build setting ZEPHYR_SCA_VARIANT can be used to specify the SCA tool to use. ZEPHYR_SCA_VARIANT is also supported as an environment variable.
Use -DZEPHYR_SCA_VARIANT=<tool>, for example -DZEPHYR_SCA_VARIANT=sparse to enable the static
analysis tool sparse.
Support for an SCA tool is implemented in an sca.cmake file, which must be placed under <SCA_ROOT>/cmake/sca/<tool>/sca.cmake. Zephyr itself is always added as an SCA_ROOT, but the build system offers the possibility to add additional folders to the SCA_ROOT setting.
You can provide support for out of tree SCA tools by creating the following structure:
/path/to/my_tools
cmake/
sca/
foo/
sca.cmake
The following is a list of SCA tools natively supported by the Zephyr build system.
Sparse support
Sparse is a static code analysis tool. Apart from performing common code analysis tasks it also supports
an address_space attribute, which allows introduction of distinct address spaces in C code with subse-
quent verification that pointers to different address spaces do not get confused. Additionally it supports
a force attribute which should be used to cast pointers between different address spaces. At the moment
Zephyr introduces a single custom address space __cache used to identify pointers from the cached ad-
dress range on the Xtensa architecture. This helps identify cases where cached and uncached addresses
are confused.
Running with sparse To run a sparse verification build, call west build with the -DZEPHYR_SCA_VARIANT=sparse parameter.
2.13 Toolchains
The Zephyr Software Development Kit (SDK) contains toolchains for each of Zephyr’s supported archi-
tectures. It also includes additional host tools, such as custom QEMU and OpenOCD.
Use of the Zephyr SDK is highly recommended and may even be required under certain conditions (for
example, running tests in QEMU for some architectures).
Supported architectures
The Zephyr SDK bundle supports all major operating systems (Linux, macOS and Windows) and is
delivered as a compressed file. The installation consists of extracting the file and running the included
setup script. Additional OS-specific instructions are described in the sections below.
If no toolchain is selected, the build system looks for the Zephyr SDK and uses the toolchain from there. You can enforce this by setting the environment variable ZEPHYR_TOOLCHAIN_VARIANT to zephyr.
If you install the Zephyr SDK outside any of the default locations (listed in the operating system specific
instructions below) and you want automatic discovery of the Zephyr SDK, then you must register the
Zephyr SDK in the CMake package registry by running the setup script. If you decide not to register
the Zephyr SDK in the CMake registry, then the ZEPHYR_SDK_INSTALL_DIR can be used to point to the
Zephyr SDK installation directory.
You can also set ZEPHYR_SDK_INSTALL_DIR to point to a directory containing multiple Zephyr SDKs, allowing for automatic toolchain selection. For example, you can set ZEPHYR_SDK_INSTALL_DIR to /company/tools, where the company/tools folder contains the following subfolders:
• /company/tools/zephyr-sdk-0.13.2
• /company/tools/zephyr-sdk-a.b.c
• /company/tools/zephyr-sdk-x.y.z
This allows the Zephyr build system to choose the correct version of the SDK, while allowing multiple
Zephyr SDKs to be grouped together at a specific path.
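The selection amounts to scanning the folder for zephyr-sdk-<version> names and choosing an acceptable version. The sketch below is illustrative only: the real lookup is performed by Zephyr’s CMake code, and the pick_sdk helper and its minimum parameter are invented.

```python
# Sketch of version-based SDK folder selection (the real lookup is done
# by Zephyr's CMake code; this only illustrates the naming convention).
import re


def pick_sdk(folders, minimum=(0, 16, 0)):
    """Return the newest zephyr-sdk-x.y.z folder at or above `minimum`."""
    best = None
    for name in folders:
        m = re.fullmatch(r"zephyr-sdk-(\d+)\.(\d+)\.(\d+)", name)
        if not m:
            continue  # ignore unrelated folders
        version = tuple(int(x) for x in m.groups())
        if version >= minimum and (best is None or version > best[0]):
            best = (version, name)
    return best[1] if best else None
```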
In general, the Zephyr SDK version referenced in this page should be considered the recommended
version for the corresponding Zephyr version.
For the full list of compatible Zephyr and Zephyr SDK versions, refer to the Zephyr SDK Version Compat-
ibility Matrix.
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_linux-x86_64.tar.xz
wget -O - https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/sha256.sum | shasum --check --ignore-missing
You can change 0.16.1 to another version if needed; the Zephyr SDK Releases page contains all
available SDK releases.
If your host architecture is 64-bit ARM (for example, Raspberry Pi), replace x86_64 with aarch64
in order to download the 64-bit ARM Linux SDK.
2. Extract the Zephyr SDK bundle archive:
cd zephyr-sdk-0.16.1
./setup.sh
If this fails, make sure Zephyr’s dependencies were installed as described in Install Requirements
and Dependencies.
If you want to uninstall the SDK, remove the directory where you installed it. If you relocate the SDK
directory, you need to re-run the setup script.
Note: It is recommended to extract the Zephyr SDK bundle at one of the following default locations:
• $HOME
• $HOME/.local
• $HOME/.local/opt
• $HOME/bin
• /opt
• /usr/local
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted under
$HOME, the resulting installation path will be $HOME/zephyr-sdk-0.16.1.
cd ~
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_macos-x86_64.tar.xz
wget -O - https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/sha256.sum | shasum --check --ignore-missing
If your host architecture is 64-bit ARM (Apple Silicon, also known as M1), replace x86_64 with
aarch64 in order to download the 64-bit ARM macOS SDK.
2. Extract the Zephyr SDK bundle archive:
Note: It is recommended to extract the Zephyr SDK bundle at one of the following default loca-
tions:
• $HOME
• $HOME/.local
• $HOME/.local/opt
• $HOME/bin
• /opt
• /usr/local
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted
under $HOME, the resulting installation path will be $HOME/zephyr-sdk-0.16.1.
cd zephyr-sdk-0.16.1
./setup.sh
Note: You only need to run the setup script once after extracting the Zephyr SDK bundle.
You must rerun the setup script if you relocate the Zephyr SDK bundle directory after the initial
setup.
cd %HOMEPATH%
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.1/zephyr-sdk-0.16.1_windows-x86_64.7z
7z x zephyr-sdk-0.16.1_windows-x86_64.7z
Note: It is recommended to extract the Zephyr SDK bundle at one of the following default loca-
tions:
• %HOMEPATH%
• %PROGRAMFILES%
The Zephyr SDK bundle archive contains the zephyr-sdk-0.16.1 directory and, when extracted
under %HOMEPATH%, the resulting installation path will be %HOMEPATH%\zephyr-sdk-0.16.1.
cd zephyr-sdk-0.16.1
setup.cmd
Note: You only need to run the setup script once after extracting the Zephyr SDK bundle.
You must rerun the setup script if you relocate the Zephyr SDK bundle directory after the initial
setup.
1. Download and install a development suite containing the Arm Compiler 6 for your operating sys-
tem.
2. Set these environment variables:
• Set ZEPHYR_TOOLCHAIN_VARIANT to armclang.
• Set ARMCLANG_TOOLCHAIN_PATH to the toolchain installation directory.
3. The Arm Compiler 6 needs the ARMLMD_LICENSE_FILE environment variable to point to your license
file or server.
For example:
1. If the Arm Compiler 6 was installed as part of an Arm Development Studio, then you must set ARM_PRODUCT_DEF to point to the product definition file (see also: Product and toolkit configuration). For example, if the Arm Development Studio is installed in /opt/armds-2020-1 with a Gold license, then set ARM_PRODUCT_DEF to point to /opt/armds-2020-1/gold.elmap.
Note: The Arm Compiler 6 uses armlink for linking. This is incompatible with Zephyr’s linker script template, which works with GNU ld. Zephyr’s Arm Compiler 6 support uses Zephyr’s CMake linker script generator, which supports generating scatter files. Basic scatter file support is in place, but there are still areas covered in ld templates which are not yet fully supported by the CMake linker script generator.
Some Zephyr subsystems or modules may also contain C or assembly code that relies on GNU
intrinsics and have not yet been updated to work fully with armclang.
1. Obtain the Tensilica Software Development Toolkit targeting the specific SoC at hand. This usually contains two parts:
• The Xtensa Xplorer which contains the necessary executables and libraries.
• A SoC-specific add-on to be installed on top of Xtensa Xplorer.
– This add-on allows the compiler to generate code for the SoC on hand.
2. Install Xtensa Xplorer and then the SoC add-on.
• Follow the instruction from Cadence on how to install the SDK.
• Depending on the SDK, there are two sets of compilers:
– GCC-based compiler: xt-xcc and its friends.
– Clang-based compiler: xt-clang and its friends.
3. Make sure you have obtained a license to use the SDK, or have access to a remote licensing server.
# Linux
export ZEPHYR_TOOLCHAIN_VARIANT=xcc
export XTENSA_TOOLCHAIN_PATH=/opt/xtensa/XtDevTools/install/tools/
export XTENSA_CORE=X6H3SUE_RI_2018_0
export TOOLCHAIN_VER=RI-2018.0-linux
# Linux
export XCC_NO_G_FLAG=1
Note: Even though the ARC MWDT compiler is used to compile Zephyr RTOS sources, the GNU preprocessor and GNU objcopy might still be used for some steps, such as DTS preprocessing and .bin file generation. Hence either ARC or host GNU tools must be in PATH. Currently Zephyr looks for:
• objcopy binaries: arc-elf32-objcopy or arc-linux-objcopy or objcopy
• gcc binaries: arc-elf32-gcc or arc-linux-gcc or gcc
This list can be extended or modified in the future.
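The fallback search described above can be sketched as follows. The find_tool helper is hypothetical; the real check happens in Zephyr’s CMake modules, and `available` stands in for a PATH lookup such as shutil.which.

```python
# Sketch of the first-available-tool fallback search described above.
def find_tool(candidates, available):
    """Return the first candidate for which `available` is truthy."""
    for name in candidates:
        if available(name):
            return name
    return None


OBJCOPY_CANDIDATES = ["arc-elf32-objcopy", "arc-linux-objcopy", "objcopy"]
GCC_CANDIDATES = ["arc-elf32-gcc", "arc-linux-gcc", "gcc"]
```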
3. To check that you have set these variables correctly in your current environment, follow these
example shell sessions (the ARCMWDT_TOOLCHAIN_PATH values may be different on your system):
# Linux:
$ echo $ZEPHYR_TOOLCHAIN_VARIANT
arcmwdt
$ echo $ARCMWDT_TOOLCHAIN_PATH
/home/you/ARC/MWDT_2019.12/
# Windows:
> echo %ZEPHYR_TOOLCHAIN_VARIANT%
arcmwdt
> echo %ARCMWDT_TOOLCHAIN_PATH%
C:\ARC\MWDT_2019.12\
1. Download and install a GNU Arm Embedded build for your operating system and extract it on your
file system.
Note: On Windows, we’ll assume for this guide that you install into the directory C:\gnu_arm_embedded. You can also choose the default installation path used by the ARM GCC installer, in which case you will need to adjust the path accordingly in the guide below.
Warning: On macOS Catalina or later you might need to change a security policy for the
toolchain to be able to run from the terminal.
# Linux, macOS:
$ echo $ZEPHYR_TOOLCHAIN_VARIANT
gnuarmemb
$ echo $GNUARMEMB_TOOLCHAIN_PATH
/home/you/Downloads/gnu_arm_embedded
# Windows:
> echo %ZEPHYR_TOOLCHAIN_VARIANT%
gnuarmemb
> echo %GNUARMEMB_TOOLCHAIN_PATH%
C:\gnu_arm_embedded
Warning: On macOS, if you are having trouble with the suggested procedure, there is an
unofficial package on brew that might help you. Run brew install gcc-arm-embedded and
configure the variables
• Set ZEPHYR_TOOLCHAIN_VARIANT to gnuarmemb.
# Linux, macOS:
export ONEAPI_TOOLCHAIN_PATH=/opt/intel/oneapi
source $ONEAPI_TOOLCHAIN_PATH/compiler/latest/env/vars.sh
# Windows:
> set ONEAPI_TOOLCHAIN_PATH=C:\Users\Intel\oneapi
source /opt/intel/oneapi/setvars.sh
The above will also change the python environment to the one used by the toolchain and might
conflict with what Zephyr uses.
3. Set ZEPHYR_TOOLCHAIN_VARIANT to oneApi.
Warning: xtools toolchain variant is deprecated. The cross-compile toolchain variant should be used
when using a custom toolchain built with Crosstool-NG.
./go.sh <arch>
Note: Currently, only i586 and Arm toolchain builds are verified.
# Linux, macOS:
$ echo $ZEPHYR_TOOLCHAIN_VARIANT
xtools
(continues on next page)
In some specific configurations, like when building for non-MCU x86 targets on a Linux host, you may
be able to re-use the native development tools provided by your operating system.
To use your host gcc, set the ZEPHYR_TOOLCHAIN_VARIANT environment variable to host. To use clang,
set ZEPHYR_TOOLCHAIN_VARIANT to llvm.
This toolchain variant is borrowed from the Linux kernel build system’s mechanism of using a
CROSS_COMPILE environment variable to set up a GNU-based cross toolchain.
Examples of such “other cross compilers” are cross toolchains that your Linux distribution packaged, that
you compiled on your own, or that you downloaded from the net. Unlike toolchains specifically listed in
Toolchains, the Zephyr build system may not have been tested with them, and doesn’t officially support
them. (Nonetheless, the toolchain set-up mechanism itself is supported.)
Follow these steps to use one of these toolchains.
1. Install a cross compiler suitable for your host and target systems.
For example, you might install the gcc-arm-none-eabi package on Debian-based Linux systems,
or arm-none-eabi-newlib on Fedora or Red Hat:
# On Debian or Ubuntu
sudo apt-get install gcc-arm-none-eabi
# On Fedora or Red Hat
sudo dnf install arm-none-eabi-newlib
# Linux, macOS:
$ echo $ZEPHYR_TOOLCHAIN_VARIANT
cross-compile
$ echo $CROSS_COMPILE
/usr/bin/arm-none-eabi-
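The CROSS_COMPILE convention is simply a common prefix: each tool is located by appending its short name to the prefix. A minimal sketch (tool_path is an invented helper, not Zephyr code):

```python
# CROSS_COMPILE is a common path/name prefix; each tool is found by
# appending its short name (a sketch of the convention only).
def tool_path(cross_compile: str, tool: str) -> str:
    return f"{cross_compile}{tool}"


cc = tool_path("/usr/bin/arm-none-eabi-", "gcc")
objcopy = tool_path("/usr/bin/arm-none-eabi-", "objcopy")
```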
To use a custom toolchain defined in an external CMake file, set these environment variables:
• Set ZEPHYR_TOOLCHAIN_VARIANT to your toolchain’s name
• Set TOOLCHAIN_ROOT to the path to the directory containing your toolchain’s CMake configuration
files.
Zephyr will then include the toolchain cmake files located in the TOOLCHAIN_ROOT directory:
• cmake/toolchain/<toolchain name>/generic.cmake: configures the toolchain for “generic” use,
which mostly means running the C preprocessor on the generated Devicetree file.
• cmake/toolchain/<toolchain name>/target.cmake: configures the toolchain for “target” use,
i.e. building Zephyr and your application’s source code.
Here <toolchain name> is the same as the name provided in ZEPHYR_TOOLCHAIN_VARIANT. See the Zephyr files cmake/modules/FindHostTools.cmake and cmake/modules/FindTargetTools.cmake for more details on what your generic.cmake and target.cmake files should contain.
You can also set ZEPHYR_TOOLCHAIN_VARIANT and TOOLCHAIN_ROOT as CMake variables when generating
a build system for a Zephyr application, like so:
If you do this, the -C <initial-cache> cmake option may be useful. If you save your ZEPHYR_TOOLCHAIN_VARIANT, TOOLCHAIN_ROOT, and other settings in a file named my-toolchain.cmake, you can then invoke cmake as cmake -C my-toolchain.cmake ... to save typing.
Zephyr includes include/toolchain.h, which in turn includes a toolchain-specific header based on the compiler identifier, such as __llvm__ or __GNUC__. Some custom compilers identify themselves as the compiler on which they are based; for example, a compiler identifying itself as llvm gets toolchain/llvm.h included. This included file may, however, not be right for the custom toolchain. To solve this, and thus get include/other.h included instead, add the set(TOOLCHAIN_USE_CUSTOM 1) cmake line to the generic.cmake and/or target.cmake files located under <TOOLCHAIN_ROOT>/cmake/toolchain/<toolchain name>/.
When TOOLCHAIN_USE_CUSTOM is set, other.h must be available out-of-tree and it must include the correct header for the custom toolchain. A good location for the other.h header file would be a directory under the directory specified in TOOLCHAIN_ROOT, such as include/toolchain. To get the toolchain header included in Zephyr’s build, set USERINCLUDE to point to that include directory.
2.14.1 Coccinelle
Coccinelle is a tool for pattern matching and text transformation that has many uses in kernel develop-
ment, including the application of complex, tree-wide patches and detection of problematic programming
patterns.
Note: Linux and macOS development environments are supported, but not Windows.
Getting Coccinelle
The semantic patches included in the kernel use features and options which are provided by Coccinelle
version 1.0.0-rc11 and above. Using earlier versions will fail as the option names used by the Coccinelle
files and coccicheck have been updated.
Coccinelle is available through the package manager of many distributions, e.g.:
• Debian
• Fedora
• Ubuntu
• OpenSUSE
• Arch Linux
• NetBSD
• FreeBSD
Some distribution packages are obsolete, so it is recommended to use the latest version released from the Coccinelle homepage at http://coccinelle.lip6.fr/ or from GitHub at https://github.com/coccinelle/coccinelle
Once you have it, run the following commands:
./autogen
./configure
make
More detailed installation instructions to build from source can be found at:
https://github.com/coccinelle/coccinelle/blob/master/install.txt
Supplemental documentation
The coccicheck checker is the front-end to the Coccinelle infrastructure and has various modes. Four basic modes are defined: patch, report, context, and org. The mode to use is specified by setting --mode=<mode> or -m=<mode>.
• patch proposes a fix, when possible.
• report generates a list in the following format: file:line:column-column: message
• context highlights lines of interest and their context in a diff-like style. Lines of interest are indicated with -.
• org generates a report in the Org mode format of Emacs.
Note that not all semantic patches implement all modes. For easy use of Coccinelle, the default mode is
report.
Two other modes provide some common combinations of these modes.
• chain tries the previous modes in the order above until one succeeds.
• rep+ctxt successively runs the report mode and the context mode. It should be used with the C option (described later), which checks the code on a file basis.
Examples
To make a report for every semantic patch, run the following command:
./scripts/coccicheck --mode=report
To produce patches instead, run:
./scripts/coccicheck --mode=patch
The coccicheck target applies every semantic patch available in the sub-directories of scripts/
coccinelle to the entire source code tree.
For each semantic patch, a commit message is proposed. It gives a description of the problem being
checked by the semantic patch, and includes a reference to Coccinelle.
As with any static code analyzer, Coccinelle produces false positives. Thus, reports must be carefully checked, and patches reviewed.
To enable verbose messages, set the --verbose=1 option.
Coccinelle parallelization
By default, coccicheck tries to run in parallel as much as possible. To change the parallelism, set the --jobs=<number> option. For example, to run across 4 CPUs:
./scripts/coccicheck --mode=report --jobs=4
As of Coccinelle 1.0.2, Coccinelle uses the OCaml parmap library for parallelization; if support for this is detected, you will benefit from parmap parallelization.
When parmap is enabled, coccicheck will enable dynamic load balancing by using the --chunksize 1 argument. This ensures threads are fed work one item at a time, avoiding the situation where most work gets done by only a few threads; if a thread finishes early, it is simply fed more work.
When parmap is enabled and an error occurs in Coccinelle, the error value is propagated back, and the return value of the coccicheck command captures it.
The --cocci option can be used to check a single semantic patch. In that case, the option must be given the name of the semantic patch to apply.
The report mode is the default. You can select another one with the --mode=<mode> option explained
above.
Using coccicheck is best, as it provides the spatch command line with include options matching those used when we compile the kernel. You can learn what these options are by using the verbose option; you can then manually run Coccinelle with debug options added.
Alternatively, you can debug running Coccinelle against SmPL patches by asking for stderr to be redirected. By default stderr is redirected to /dev/null; if you’d like to capture it, specify the --debug=file.err option to coccicheck. For instance:
rm -f cocci.err
./scripts/coccicheck --mode=patch --debug=cocci.err
cat cocci.err
Additional Flags
Additional flags can be passed to spatch through the SPFLAGS variable. This works as Coccinelle respects
the last flags given to it when options are in conflict.
./scripts/coccicheck --sp-flag="--use-glimpse"
Coccinelle supports idutils as well, but requires Coccinelle >= 1.0.6. When no ID file is specified, Coccinelle assumes your ID database file is the file .id-utils.index at the top level of the kernel; Coccinelle carries a script, scripts/idutils_index.sh, which creates the database. If you have a database under another filename, you can also just symlink it to this name.
./scripts/coccicheck --sp-flag="--use-idutils"
Alternatively, you can specify the database filename explicitly.
Sometimes Coccinelle doesn't recognize or parse complex macro variables due to an insufficient defini-
tion. To make them parsable, explicitly provide the prototype of the complex macro using the
--macro-file-builtins <headerfile.h> flag.
The <headerfile.h> should contain the complete prototype of the complex macro, from which the
spatch engine can extract the type information required during transformation.
For example:
Z_SYSCALL_HANDLER is not recognized by coccinelle. Therefore, we put its prototype in a header file, say
for example mymacros.h.
$ cat mymacros.h
#define Z_SYSCALL_HANDLER int xxx
SmPL patches can have their own requirements for options passed to Coccinelle. SmPL patch specific
options can be provided by providing them at the top of the SmPL patch, for instance:
// Options: --no-includes --include-headers
New semantic patches can be proposed and submitted by kernel developers. For the sake of clarity, they
should be organized in the sub-directories of scripts/coccinelle/.
The cocci script should have the following properties:
• The script must have report mode.
• The first few lines should state the purpose of the script using /// comments. Usually, this message
would be used as the commit log when proposing a patch based on the script.
Example
/// Use ARRAY_SIZE instead of dividing sizeof array with sizeof an element
• More detailed information about the script, such as exceptional cases or false positives (if any),
can be listed using //# comments.
Example
//# This makes an effort to find cases where ARRAY_SIZE can be used such as
//# where there is a division of sizeof the array by the sizeof its first
//# element or by any indexed element or the element type. It replaces the
//# division of the two sizeofs by ARRAY_SIZE.
• Confidence: It is a property defined to specify the accuracy level of the script. It can be either High,
Moderate or Low depending upon the number of false positives observed.
Example
// Confidence: High
• Virtual rules: These are required to support the various modes provided by the script. Each virtual
rule specified in the script should have a corresponding mode-handling rule.
Example
virtual context
@depends on context@
type T;
T[] E;
@@
(
* (sizeof(E)/sizeof(*E))
|
* (sizeof(E)/sizeof(E[...]))
|
* (sizeof(E)/sizeof(T))
)
In report mode, entries are generated on the standard output in the following format:
file:line:column-column: message
Example: running
./scripts/coccicheck --mode=report --cocci=scripts/coccinelle/misc/array_size.cocci
would generate output such as:
ext/lib/encoding/tinycbor/src/cborvalidation.c:328:52-53: WARNING should use ARRAY_SIZE
When the patch mode is available, it proposes a fix for each problem identified.
Example: running
./scripts/coccicheck --mode=patch --cocci=scripts/coccinelle/misc/array_size.cocci
<smpl>
@depends on patch@
type T;
T[] E;
@@
(
- (sizeof(E)/sizeof(*E))
+ ARRAY_SIZE(E)
|
- (sizeof(E)/sizeof(E[...]))
+ ARRAY_SIZE(E)
|
- (sizeof(E)/sizeof(T))
+ ARRAY_SIZE(E)
)
</smpl>
This SmPL excerpt generates patch hunks on the standard output, as illustrated below:
--- a/ext/lib/encoding/tinycbor/src/cborvalidation.c
+++ b/ext/lib/encoding/tinycbor/src/cborvalidation.c
@@ -325,7 +325,7 @@ static inline CborError validate_number(
static inline CborError validate_tag(CborValue *it, CborTag tag, int flags, int␣
˓→recursionLeft)
{
CborType type = cbor_value_get_type(it);
- const size_t knownTagCount = sizeof(knownTagData) / sizeof(knownTagData[0]);
+ const size_t knownTagCount = ARRAY_SIZE(knownTagData);
const struct KnownTagData *tagData = knownTagData;
const struct KnownTagData * const knownTagDataEnd = knownTagData + knownTagCount;
Note: The diff-like output generated is NOT an applicable patch. The intent of the context mode is
to highlight the important lines (annotated with minus, -) and to give some surrounding context lines.
This output can be used with the diff mode of Emacs to review the code.
Example: running
./scripts/coccicheck --mode=context --cocci=scripts/coccinelle/misc/array_size.cocci
<smpl>
@depends on context@
type T;
T[] E;
@@
(
* (sizeof(E)/sizeof(*E))
|
* (sizeof(E)/sizeof(E[...]))
|
* (sizeof(E)/sizeof(T))
)
</smpl>
This SmPL excerpt generates diff hunks on the standard output, as illustrated below:
diff -u -p ext/lib/encoding/tinycbor/src/cborvalidation.c /tmp/nothing/ext/lib/
˓→encoding/tinycbor/src/cborvalidation.c
--- ext/lib/encoding/tinycbor/src/cborvalidation.c
+++ /tmp/nothing/ext/lib/encoding/tinycbor/src/cborvalidation.c
@@ -325,7 +325,6 @@ static inline CborError validate_number(
static inline CborError validate_tag(CborValue *it, CborTag tag, int flags, int␣
˓→recursionLeft)
{
CborType type = cbor_value_get_type(it);
- const size_t knownTagCount = sizeof(knownTagData) / sizeof(knownTagData[0]);
const struct KnownTagData *tagData = knownTagData;
const struct KnownTagData * const knownTagDataEnd = knownTagData + knownTagCount;
Example: running
./scripts/coccicheck --mode=org --cocci=scripts/coccinelle/misc/array_size.cocci
This SmPL excerpt generates Org entries on the standard output, as illustrated below:
* TODO [[view:ext/lib/encoding/tinycbor/src/cborvalidation.c::face=ovl-
˓→face1::linb=328::colb=52::cole=53][WARNING should use ARRAY_SIZE]]
Kernel
The Zephyr kernel lies at the heart of every Zephyr application. It provides a low footprint, high per-
formance, multi-threaded execution environment with a rich set of available features. The rest of the
Zephyr ecosystem, including device drivers, networking stack, and application-specific code, uses the
kernel’s features to create a complete application.
The configurable nature of the kernel allows you to incorporate only those features needed by your
application, making it ideal for systems with limited amounts of memory (as little as 2 KB!) or with
simple multi-threading requirements (such as a set of interrupt handlers and a single background task).
Examples of such systems include: embedded sensor hubs, environmental sensors, simple LED wearable,
and store inventory tags.
Applications requiring more memory (50 to 900 KB), multiple communication devices (like Wi-Fi and
Bluetooth Low Energy), and complex multi-threading, can also be developed using the Zephyr kernel.
Examples of such systems include: fitness wearables, smart watches, and IoT wireless gateways.
These pages cover basic kernel services related to thread scheduling and synchronization.
Threads
Note: There is also limited support for running Zephyr without threads; see Operation without Threads.
• Lifecycle
– Thread Creation
– Thread Termination
– Thread Aborting
– Thread Suspension
• Thread States
• Thread Stack objects
– Kernel-only Stacks
– Thread stacks
• Thread Priorities
– Meta-IRQ Priorities
• Thread Options
• Thread Custom Data
• Implementation
– Spawning a Thread
– Dropping Permissions
– Terminating a Thread
• Runtime Statistics
• Suggested Uses
• Configuration Options
• API Reference
This section describes kernel services for creating, scheduling, and deleting independently executable
threads of instructions.
A thread is a kernel object that is used for application processing that is too lengthy or too complex to be
performed by an ISR.
Any number of threads can be defined by an application (limited only by available RAM). Each thread is
referenced by a thread id that is assigned when the thread is spawned.
A thread has the following key properties:
• A stack area, which is a region of memory used for the thread’s stack. The size of the stack area
can be tailored to conform to the actual needs of the thread’s processing. Special macros exist to
create and work with stack memory regions.
• A thread control block for private kernel bookkeeping of the thread’s metadata. This is an instance
of type k_thread .
• An entry point function, which is invoked when the thread is started. Up to 3 argument values
can be passed to this function.
• A scheduling priority, which instructs the kernel’s scheduler how to allocate CPU time to the
thread. (See Scheduling.)
• A set of thread options, which allow the thread to receive special treatment by the kernel under
specific circumstances. (See Thread Options.)
• A start delay, which specifies how long the kernel should wait before starting the thread.
• An execution mode, which can either be supervisor or user mode. By default, threads run in
supervisor mode and allow access to privileged CPU instructions, the entire memory address
space, and peripherals. User mode threads have a reduced set of privileges. This depends on
the CONFIG_USERSPACE option. See User Mode.
Lifecycle
Thread Creation A thread must be created before it can be used. The kernel initializes the thread
control block as well as one end of the stack portion. The remainder of the thread’s stack is typically left
uninitialized.
Specifying a start delay of K_NO_WAIT instructs the kernel to start thread execution immediately. Alter-
natively, the kernel can be instructed to delay execution of the thread by specifying a timeout value – for
example, to allow device hardware used by the thread to become available.
The kernel allows a delayed start to be canceled before the thread begins executing. A cancellation
request has no effect if the thread has already started. A thread whose delayed start was successfully
canceled must be re-spawned before it can be used.
Thread Termination Once a thread is started it typically executes forever. However, a thread may
synchronously end its execution by returning from its entry point function. This is known as termination.
A thread that terminates is responsible for releasing any shared resources it may own (such as mutexes
and dynamically allocated memory) prior to returning, since the kernel does not reclaim them automat-
ically.
In some cases a thread may want to sleep until another thread terminates. This can be accomplished
with the k_thread_join() API. This will block the calling thread until either the timeout expires, the
target thread self-exits, or the target thread aborts (either due to a k_thread_abort() call or triggering
a fatal error).
Once a thread has terminated, the kernel guarantees that no use will be made of the thread struct. The
memory of such a struct can then be re-used for any purpose, including spawning a new thread. Note that
the thread must be fully terminated, which presents race conditions where a thread’s own logic signals
completion which is seen by another thread before the kernel processing is complete. Under normal
circumstances, application code should use k_thread_join() or k_thread_abort() to synchronize on
thread termination state and not rely on signaling from within application logic.
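The spawn-then-join pattern described above can be sketched as follows. This is a minimal illustration, not taken from the text: the entry function, stack size, priority, and one-second timeout are all hypothetical values.

```c
#include <zephyr/kernel.h>

#define WORKER_STACK_SIZE 1024
#define WORKER_PRIORITY   5

K_THREAD_STACK_DEFINE(worker_stack, WORKER_STACK_SIZE);
static struct k_thread worker_thread;

/* Worker runs to completion and terminates by returning. */
static void worker_entry(void *p1, void *p2, void *p3)
{
	/* ... one-shot processing ... */
}

void spawn_and_join(void)
{
	k_tid_t tid = k_thread_create(&worker_thread, worker_stack,
				      K_THREAD_STACK_SIZEOF(worker_stack),
				      worker_entry, NULL, NULL, NULL,
				      WORKER_PRIORITY, 0, K_NO_WAIT);

	/* Block until the worker terminates, or give up after one second. */
	if (k_thread_join(tid, K_SECONDS(1)) == -EAGAIN) {
		/* Worker did not exit in time; abort it. */
		k_thread_abort(tid);
	}
}
```

Because k_thread_join() returned success here, the kernel guarantees the thread struct is no longer in use and may be re-used.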
Thread Aborting A thread may asynchronously end its execution by aborting. The kernel automati-
cally aborts a thread if the thread triggers a fatal error condition, such as dereferencing a null pointer.
A thread can also be aborted by another thread (or by itself) by calling k_thread_abort() . However, it
is typically preferable to signal a thread to terminate itself gracefully, rather than aborting it.
As with thread termination, the kernel does not reclaim shared resources owned by an aborted thread.
Note: The kernel does not currently make any claims regarding an application’s ability to respawn a
thread that aborts.
Thread Suspension A thread can be prevented from executing for an indefinite period of time if it
becomes suspended. The function k_thread_suspend() can be used to suspend any thread, including
the calling thread. Suspending a thread that is already suspended has no additional effect.
Once suspended, a thread cannot be scheduled until another thread calls k_thread_resume() to remove
the suspension.
Note: A thread can prevent itself from executing for a specified period of time using k_sleep() . How-
ever, this is different from suspending a thread since a sleeping thread becomes executable automatically
when the time limit is reached.
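A sketch of the suspend/resume pair, assuming a hypothetical worker_tid that refers to a previously spawned thread:

```c
#include <zephyr/kernel.h>

/* Hypothetical: ID of a previously spawned worker thread. */
extern k_tid_t worker_tid;

void pause_worker(void)
{
	/* The worker becomes unready and stays that way indefinitely. */
	k_thread_suspend(worker_tid);
}

void resume_worker(void)
{
	/* Removes the suspension; the worker becomes schedulable again. */
	k_thread_resume(worker_tid);
}
```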
Thread States A thread that has no factors that prevent its execution is deemed to be ready, and is
eligible to be selected as the current thread.
A thread that has one or more factors that prevent its execution is deemed to be unready, and cannot be
selected as the current thread.
The following factors make a thread unready:
• The thread has not been started.
• The thread is waiting for a kernel object to complete an operation (for example, taking a semaphore
that is unavailable).
• The thread is waiting for a timeout to occur.
• The thread has been suspended.
• The thread has terminated or aborted.
[Figure: thread life cycle — a New thread is started into the Ready state; a Ready thread is dispatched
to Running and can be interrupted back to Ready; a Running thread may move to Waiting, be suspended
(resume returns it to Ready), or be aborted into Terminated.]
Note: Although the diagram above may appear to suggest that both Ready and Running are distinct
thread states, that is not the correct interpretation. Ready is a thread state, and Running is a schedule
state that only applies to Ready threads.
Thread Stack objects Every thread requires its own stack buffer for the CPU to push context. Depend-
ing on configuration, there are several constraints that must be met:
• There may need to be additional memory reserved for memory management structures
• If guard-based stack overflow detection is enabled, a small write- protected memory management
region must immediately precede the stack buffer to catch overflows.
• If userspace is enabled, a separate fixed-size privilege elevation stack must be reserved to serve as
a private kernel stack for handling system calls.
• If userspace is enabled, the thread’s stack buffer must be appropriately sized and aligned such that
a memory protection region may be programmed to exactly fit.
The alignment constraints can be quite restrictive; for example, some MPUs require their regions to be
a power of two in size and aligned to their own size.
Because of this, portable code can’t simply pass an arbitrary character buffer to k_thread_create() .
Special macros exist to instantiate stacks, prefixed with K_KERNEL_STACK and K_THREAD_STACK.
Kernel-only Stacks If it is known that a thread will never run in user mode, or the stack is being used
for special contexts like handling interrupts, it is best to define stacks using the K_KERNEL_STACK macros.
These stacks save memory because an MPU region will never need to be programmed to cover the stack
buffer itself, and the kernel will not need to reserve additional room for the privilege elevation stack, or
memory management data structures which only pertain to user mode threads.
Attempts from user mode to use stacks declared in this way will result in a fatal error for the caller.
If CONFIG_USERSPACE is not enabled, the set of K_THREAD_STACK macros have an identical effect to the
K_KERNEL_STACK macros.
Thread stacks If it is known that a stack will need to host user threads, or if this cannot be determined,
define the stack with K_THREAD_STACK macros. This may use more memory but the stack object is suitable
for hosting user threads.
If CONFIG_USERSPACE is not enabled, the set of K_THREAD_STACK macros have an identical effect to the
K_KERNEL_STACK macros.
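The two macro families can be sketched side by side; the stack sizes, names, and priority below are illustrative assumptions:

```c
#include <zephyr/kernel.h>

/* Stack for a thread that will never run in user mode: no MPU region or
 * privilege-elevation stack needs to be reserved for it. */
K_KERNEL_STACK_DEFINE(kernel_only_stack, 1024);

/* Stack that may host a user-mode thread: sized and aligned so that a
 * memory protection region can be programmed to exactly fit it. */
K_THREAD_STACK_DEFINE(maybe_user_stack, 1024);

static struct k_thread kernel_only_thread;

static void kernel_entry(void *p1, void *p2, void *p3)
{
	/* ... kernel-only work ... */
}

void start_kernel_only_thread(void)
{
	/* Use the *_SIZEOF macro matching the family used to define the stack. */
	k_thread_create(&kernel_only_thread, kernel_only_stack,
			K_KERNEL_STACK_SIZEOF(kernel_only_stack),
			kernel_entry, NULL, NULL, NULL,
			5, 0, K_NO_WAIT);
}
```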
Thread Priorities A thread's priority is an integer value, and can be either negative or non-negative.
Numerically lower priorities take precedence over numerically higher values. For example, the scheduler
considers thread A of priority 4 to have higher priority than thread B of priority 7; likewise thread C of
priority -2 has higher priority than both thread A and thread B.
The scheduler distinguishes between two classes of threads, based on each thread’s priority.
• A cooperative thread has a negative priority value. Once it becomes the current thread, a coopera-
tive thread remains the current thread until it performs an action that makes it unready.
• A preemptible thread has a non-negative priority value. Once it becomes the current thread, a
preemptible thread may be supplanted at any time if a cooperative thread, or a preemptible thread
of higher or equal priority, becomes ready.
A thread’s initial priority value can be altered up or down after the thread has been started. Thus it is
possible for a preemptible thread to become a cooperative thread, and vice versa, by changing its priority.
Note: The scheduler does not make heuristic decisions to re-prioritize threads. Thread priorities are set
and changed only at the application’s request.
The kernel supports a virtually unlimited number of thread priority levels. The configuration options
CONFIG_NUM_COOP_PRIORITIES and CONFIG_NUM_PREEMPT_PRIORITIES specify the number of priority
levels for each class of thread, resulting in the following usable priority ranges:
• cooperative threads: (-CONFIG_NUM_COOP_PRIORITIES) to -1
• preemptive threads: 0 to (CONFIG_NUM_PREEMPT_PRIORITIES - 1)
[Figure: priority scale — priorities run from -CONFIG_NUM_COOP_PRIORITIES (highest, cooperative)
up through -2, -1, then 0, 1, 2, and so on up to CONFIG_NUM_PREEMPT_PRIORITIES - 1 (lowest,
preemptive).]
For example, configuring 5 cooperative priorities and 10 preemptive priorities results in the ranges -5 to
-1 and 0 to 9, respectively.
Thread Options The kernel supports a small set of thread options that allow a thread to receive special
treatment under specific circumstances. The set of options associated with a thread are specified when
the thread is spawned.
A thread that does not require any thread option has an option value of zero. A thread that requires a
thread option specifies it by name, using the | character as a separator if multiple options are needed
(i.e. combine options using the bitwise OR operator).
The following thread options are supported.
K_ESSENTIAL
This option tags the thread as an essential thread. This instructs the kernel to treat the termination
or aborting of the thread as a fatal system error.
By default, the thread is not considered to be an essential thread.
K_SSE_REGS
This x86-specific option indicates that the thread uses the CPU's SSE registers. Also see K_FP_REGS .
By default, the kernel does not attempt to save and restore the contents of these registers when
scheduling the thread.
K_FP_REGS
This option indicates that the thread uses the CPU's floating point registers. This instructs the
kernel to take additional steps to save and restore the contents of these registers when scheduling
the thread. (For more information see Floating Point Services.)
By default, the kernel does not attempt to save and restore the contents of these registers when
scheduling the thread.
K_USER
If CONFIG_USERSPACE is enabled, this thread will be created in user mode and will have reduced
privileges. See User Mode. Otherwise this flag does nothing.
K_INHERIT_PERMS
If CONFIG_USERSPACE is enabled, this thread will inherit all kernel object permissions that the
parent thread had, except the parent thread object. See User Mode.
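The options above are combined with the bitwise OR operator when spawning a thread. A sketch, with hypothetical names and a hypothetical priority, that creates a user-mode thread inheriting its parent's permissions (this assumes CONFIG_USERSPACE is enabled):

```c
#include <zephyr/kernel.h>

#define APP_STACK_SIZE 1024
#define APP_PRIORITY   5

K_THREAD_STACK_DEFINE(app_stack, APP_STACK_SIZE);
static struct k_thread app_thread;

static void app_entry(void *p1, void *p2, void *p3)
{
	/* ... user-mode application work ... */
}

void spawn_user_thread(void)
{
	/* Combine options with bitwise OR: a user-mode thread that inherits
	 * the parent's kernel object permissions. */
	k_thread_create(&app_thread, app_stack,
			K_THREAD_STACK_SIZEOF(app_stack),
			app_entry, NULL, NULL, NULL,
			APP_PRIORITY, K_USER | K_INHERIT_PERMS, K_NO_WAIT);
}
```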
Thread Custom Data Every thread has a 32-bit custom data area, accessible only by the thread itself,
and may be used by the application for any purpose it chooses. The default custom data value for a
thread is zero.
Note: Custom data support is not available to ISRs because they operate within a single shared kernel
interrupt handling context.
Note: Obviously, only a single routine can use this technique, since it monopolizes the use of the custom
data feature.
int call_tracking_routine(void)
{
    uint32_t call_count;

    if (k_is_in_isr()) {
        /* ignore any call made by an ISR */
    } else {
        call_count = (uint32_t)k_thread_custom_data_get();
        call_count++;
        k_thread_custom_data_set((void *)call_count);
    }

    /* do remainder of routine's processing */
}
Use thread custom data to allow a routine to access thread-specific information, by using the custom
data as a pointer to a data structure owned by the thread.
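A sketch of the pointer technique, assuming CONFIG_THREAD_CUSTOM_DATA is enabled; the per_thread_info structure and helper names are hypothetical:

```c
#include <zephyr/kernel.h>

/* Hypothetical per-thread bookkeeping structure. */
struct per_thread_info {
	uint32_t error_count;
	const char *name;
};

/* Each thread calls this once at startup with its own structure. */
void register_thread_info(struct per_thread_info *info)
{
	k_thread_custom_data_set(info);
}

/* Any routine the thread later calls can retrieve its private data
 * without it being passed down through every call. */
void record_error(void)
{
	struct per_thread_info *info = k_thread_custom_data_get();

	if (info != NULL) {
		info->error_count++;
	}
}
```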
Implementation
Spawning a Thread A thread is spawned by defining its stack area and its thread control block, and
then calling k_thread_create() .
The stack area must be defined using K_THREAD_STACK_DEFINE or K_KERNEL_STACK_DEFINE to ensure it
is properly set up in memory.
The size parameter for the stack must be one of three values:
• The original requested stack size passed to K_THREAD_STACK or K_KERNEL_STACK family of stack
instantiation macros.
• For a stack object defined with the K_THREAD_STACK family of macros, the return value of
K_THREAD_STACK_SIZEOF() for that object.
• For a stack object defined with the K_KERNEL_STACK family of macros, the return value of
K_KERNEL_STACK_SIZEOF() for that object.
The thread spawning function returns its thread id, which can be used to reference the thread.
The following code spawns a thread that starts immediately.
K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_thread my_thread_data;

k_tid_t my_tid = k_thread_create(&my_thread_data, my_stack_area,
                                 K_THREAD_STACK_SIZEOF(my_stack_area),
                                 my_entry_point,
                                 NULL, NULL, NULL,
                                 MY_PRIORITY, 0, K_NO_WAIT);
Alternatively, a thread can be declared at compile time by calling K_THREAD_DEFINE . Observe that the
macro defines the stack area, control block, and thread id variables automatically.
The following code has the same effect as the code segment above.
K_THREAD_DEFINE(my_tid, MY_STACK_SIZE,
my_entry_point, NULL, NULL, NULL,
MY_PRIORITY, 0, 0);
User Mode Constraints This section only applies if CONFIG_USERSPACE is enabled, and a user thread
tries to create a new thread. The k_thread_create() API is still used, but there are additional con-
straints which must be met or the calling thread will be terminated:
• The calling thread must have permissions granted on both the child thread and stack parameters;
both are tracked by the kernel as kernel objects.
• The child thread and stack objects must be in an uninitialized state, i.e. it is not currently running
and the stack memory is unused.
• The stack size parameter passed in must be equal to or less than the bounds of the stack object
when it was declared.
• The K_USER option must be used, as user threads can only create other user threads.
• The K_ESSENTIAL option must not be used, user threads may not be considered essential threads.
• The priority of the child thread must be a valid priority value, and equal to or lower than the parent
thread.
Dropping Permissions A thread running in supervisor mode may drop its permissions and become a
user thread by calling k_thread_user_mode_enter(). This is a one-way operation which will reset and
zero the thread's stack memory. The thread will be marked as non-essential.
Terminating a Thread A thread terminates itself by returning from its entry point function.
The following code illustrates the ways a thread can terminate.
If CONFIG_USERSPACE is enabled, aborting a thread will additionally mark the thread and stack objects
as uninitialized so that they may be re-used.
Runtime Statistics Thread runtime statistics can be gathered and retrieved if
CONFIG_THREAD_RUNTIME_STATS is enabled; for example, the total number of execution cycles of a
thread. The following code retrieves the runtime statistics of the current thread:
k_thread_runtime_stats_t rt_stats_thread;
k_thread_runtime_stats_get(k_current_get(), &rt_stats_thread);
Suggested Uses Use threads to handle processing that cannot be handled in an ISR.
Use separate threads to handle logically distinct processing operations that can execute in parallel.
Configuration Options Related configuration options include:
• CONFIG_TIMESLICE_PRIORITY
• CONFIG_USERSPACE
API Reference
group thread_apis
Defines
K_ESSENTIAL
system thread that must not abort
K_FP_IDX
FPU registers are managed by context switch.
This option indicates that the thread uses the CPU’s floating point registers. This instructs
the kernel to take additional steps to save and restore the contents of these registers when
scheduling the thread. No effect if CONFIG_FPU_SHARING is not enabled.
K_FP_REGS
K_USER
user mode thread
This thread has dropped from supervisor mode to user mode and consequently has additional
restrictions
K_INHERIT_PERMS
Inherit Permissions.
Indicates that the thread being created should inherit all kernel object permissions from the
thread that created it. No effect if CONFIG_USERSPACE is not enabled.
K_CALLBACK_STATE
Callback item state.
This is a single bit of state reserved for "callback manager" utilities (p4wq initially) which need
to track operations invoked from within a user-provided callback they have been invoked out of.
Effectively it serves as a tiny bit of zero-overhead TLS data.
k_thread_access_grant(thread, ...)
Grant a thread access to a set of kernel objects.
This is a convenience function. For the provided thread, grant access to the remaining argu-
ments, which must be pointers to kernel objects.
The thread object must be initialized (i.e. running). The objects don’t need to be. Note that
NULL shouldn’t be passed as an argument.
Parameters
• thread – Thread to grant access to objects
• ... – list of kernel object pointers
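A sketch of granting a child thread access to two kernel objects; the object and helper names are hypothetical, and this assumes CONFIG_USERSPACE is enabled:

```c
#include <zephyr/kernel.h>

K_SEM_DEFINE(shared_sem, 0, 1);
K_MUTEX_DEFINE(shared_mutex);

/* Hypothetical helper: before starting a child user thread, grant it
 * access to the kernel objects it will use. */
void grant_child_access(k_tid_t child)
{
	k_thread_access_grant(child, &shared_sem, &shared_mutex);
}
```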
Note: Static threads with zero delay should not normally have MetaIRQ priority levels. This
can preempt the system initialization handling (depending on the priority of the main thread)
and cause surprising ordering side effects. It will not affect anything in the OS per se, but
consider it bad practice. Use a SYS_INIT() callback if you need to run code before entrance to
the application main().
Parameters
• name – Name of the thread.
• stack_size – Stack size in bytes.
• entry – Thread entry function.
• p1 – 1st entry point parameter.
• p2 – 2nd entry point parameter.
• p3 – 3rd entry point parameter.
• prio – Thread priority.
• options – Thread options.
• delay – Scheduling delay (in milliseconds), zero for no delay.
Note: Threads defined by this can only run in kernel mode, and cannot be transformed into a
user thread via k_thread_user_mode_enter().
Warning: Depending on the architecture, the stack size (stack_size) may need to be a multiple
of CONFIG_MMU_PAGE_SIZE (if an MMU is present) or a power of two in size (if an MPU is present).
Parameters
• name – Name of the thread.
Typedefs
Functions
void k_thread_foreach(k_thread_user_cb_t user_cb, void *user_data)
Iterate over all the threads in the system.
Note: This API uses k_spin_lock to protect the _kernel.threads list, which means creation of
new threads and termination of existing threads are blocked until this API returns.
Parameters
• user_cb – Pointer to the user callback function.
• user_data – Pointer to user data.
void k_thread_foreach_unlocked(k_thread_user_cb_t user_cb, void *user_data)
Iterate over all the threads in the system without locking.
Note: This API uses k_spin_lock only when accessing the _kernel.threads queue elements. It
unlocks it during user callback function processing. If a new task is created while this foreach
function is in progress, the newly added task would not be included in the enumeration. If a
task is aborted during this enumeration, there is a race and it is possible that the aborted task
would be included in the enumeration.
Note: If the task is aborted and the memory occupied by its k_thread structure is reused while
this k_thread_foreach_unlocked is in progress, the system may even become unstable. This
function might never return, as it would follow stale next-task pointers, treating the given
pointer as a pointer to a k_thread structure while it is now something different. Do not reuse
the memory that was occupied by the k_thread structure of an aborted task if it was aborted
after this function was called in any context.
Parameters
• user_cb – Pointer to the user callback function.
• user_data – Pointer to user data.
int k_thread_join(struct k_thread *thread, k_timeout_t timeout)
Sleep until a thread exits.
This API may only be called from ISRs with a K_NO_WAIT timeout, where it can be useful as
a predicate to detect when a thread has aborted.
Parameters
• thread – Thread to wait to exit
• timeout – upper bound time to wait for the thread to exit.
Return values
• 0 – success, target thread has exited or wasn’t running
• -EBUSY – returned without waiting
• -EAGAIN – waiting period timed out
• -EDEADLK – target thread is joining on the caller, or target thread is the caller
int32_t k_sleep(k_timeout_t timeout)
Put the current thread to sleep.
This routine puts the current thread to sleep for duration, specified as a k_timeout_t object.
Parameters
• timeout – Desired duration of sleep.
Returns
Zero if the requested time has elapsed, or the number of milliseconds left to sleep
if the thread was woken up by a k_wakeup() call.
Note: The clock used for the microsecond-resolution delay here may be skewed relative
to the clock used for system timeouts like k_sleep(). For example k_busy_wait(1000) may
take slightly more or less time than k_sleep(K_MSEC(1)), with the offset dependent on clock
tolerances.
bool k_can_yield(void)
Check whether it is possible to yield in the current context.
This routine checks whether the kernel is in a state where it is possible to yield or call blocking
API’s. It should be used by code that needs to yield to perform correctly, but can feasibly be
called from contexts where that is not possible. For example in the PRE_KERNEL initialization
step, or when being run from the idle thread.
Returns
True if it is possible to yield in the current context, false otherwise.
void k_yield(void)
Yield the current thread.
This routine causes the current thread to yield execution to another thread of the same or
higher priority. If there are no other ready threads of the same or higher priority, the routine
returns immediately.
void k_wakeup(k_tid_t thread)
Wake up a sleeping thread.
This routine prematurely wakes up thread from sleeping.
If thread is not currently sleeping, the routine has no effect.
Parameters
• thread – ID of thread to wake.
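The sleep/wakeup pair can be sketched as follows; sleeper_tid and the 500 ms duration are hypothetical:

```c
#include <zephyr/kernel.h>

/* Hypothetical: ID of the thread executing sleeper_body(). */
extern k_tid_t sleeper_tid;

void sleeper_body(void)
{
	/* Sleep up to 500 ms; returns early with the time left (in ms)
	 * if another thread calls k_wakeup() on this thread. */
	int32_t remaining = k_sleep(K_MSEC(500));

	if (remaining > 0) {
		/* Woken prematurely by k_wakeup(). */
	}
}

void notifier(void)
{
	k_wakeup(sleeper_tid);
}
```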
__attribute_const__ static inline k_tid_t k_current_get(void)
Get thread ID of the current thread.
Returns
ID of current thread.
void k_thread_abort(k_tid_t thread)
Abort a thread.
This routine permanently stops execution of thread. The thread is taken off all kernel queues
it is part of (i.e. the ready queue, the timeout queue, or a kernel object wait queue). However,
any kernel resources the thread might currently own (such as mutexes or memory blocks) are
not released. It is the responsibility of the caller of this routine to ensure all necessary cleanup
is performed.
After k_thread_abort() returns, the thread is guaranteed not to be running or to become
runnable anywhere on the system. Normally this is done via blocking the caller (in the same
manner as k_thread_join()), but in interrupt context on SMP systems the implementation is
required to spin for threads that are running on other CPUs. Note that as specified, this
means that on SMP platforms it is possible for application code to create a deadlock condition
by simultaneously aborting a cycle of threads using at least one termination from interrupt
context. Zephyr cannot detect all such conditions.
Parameters
• thread – ID of thread to abort.
void k_thread_priority_set(k_tid_t thread, int prio)
Set a thread's priority.
This routine immediately changes the priority of thread. Rescheduling can occur immediately
depending on the new priority:
• If its priority is raised above the priority of the caller of this function, and the caller is
preemptible, thread will be scheduled in.
• If the caller operates on itself, it lowers its priority below that of other threads in the
system, and the caller is preemptible, the thread of highest priority will be scheduled in.
Priority can be assigned in the range of -CONFIG_NUM_COOP_PRIORITIES to
CONFIG_NUM_PREEMPT_PRIORITIES-1, where -CONFIG_NUM_COOP_PRIORITIES is the
highest priority.
Warning: Changing the priority of a thread currently involved in mutex priority inheri-
tance may result in undefined behavior.
Parameters
• thread – ID of thread whose priority is to be set.
• prio – New priority.
Note: Deadlines are stored internally using 32 bit unsigned integers. The number of cy-
cles between the “first” deadline in the scheduler queue and the “last” deadline must be less
than 2^31 (i.e a signed non-negative quantity). Failure to adhere to this rule may result in
scheduled threads running in an incorrect deadline order.
Note: Despite the API naming, the scheduler makes no guarantees that the thread WILL be
scheduled within that deadline, nor does it take extra metadata (like e.g. the “runtime” and
“period” parameters in Linux sched_setattr()) that allows the kernel to validate the scheduling
for achievability. Such features could be implemented above this call, which is simply input
to the priority selection logic.
Parameters
• thread – A thread on which to set the deadline
• deadline – A time delta, in cycle units
Parameters
• thread – Thread to operate upon
Returns
Zero on success, otherwise error code
Parameters
• thread – Thread to operate upon
Returns
Zero on success, otherwise error code
Parameters
• thread – Thread to operate upon
• cpu – CPU index
Returns
Zero on success, otherwise error code
Parameters
• thread – Thread to operate upon
• cpu – CPU index
Returns
Zero on success, otherwise error code
Note: Unlike the older API, the time slice parameter here is specified in ticks, not millisec-
onds. Ticks have always been the internal unit, and not all platforms have integer conversions
between the two.
Note: Threads with a non-zero slice time set will be timesliced always, even if they are higher
priority than the maximum timeslice priority set via k_sched_time_slice_set().
Note: The callback notification for slice expiration happens, as it must, while the thread is
still “current”, and thus it happens before any registered timeouts at this tick. This has the
somewhat confusing side effect that the tick time (c.f. k_uptime_get()) does not yet reflect the
expired ticks. Applications wishing to make fine-grained timing decisions within this callback
should use the cycle API, or derived facilities like k_thread_runtime_stats_get().
Parameters
• th – A valid, initialized thread
• slice_ticks – Maximum timeslice, in ticks
• expired – Callback function called on slice expiration
• data – Parameter for the expiration handler
void k_sched_lock(void)
Lock the scheduler.
This routine prevents the current thread from being preempted by another thread by instruct-
ing the scheduler to treat it as a cooperative thread. If the thread subsequently performs an
operation that makes it unready, it will be context switched out in the normal manner. When
the thread again becomes the current thread, its non-preemptible status is maintained.
This routine can be called recursively.
Owing to clever implementation details, scheduler locks are extremely fast for non-userspace
threads (just a one-byte increment/decrement in the thread struct).
Note: This works by elevating the thread priority temporarily to a cooperative priority,
allowing cheap synchronization vs. other preemptible or cooperative threads running on the
current CPU. It does not prevent preemption or asynchrony of other types. It does not prevent
threads from running on other CPUs when CONFIG_SMP=y. It does not prevent interrupts
from happening, nor does it prevent threads with MetaIRQ priorities from preempting the
current thread. In general this is a historical API not well-suited to modern applications, use
with care.
void k_sched_unlock(void)
Unlock the scheduler.
This routine reverses the effect of a previous call to k_sched_lock(). A thread must call the
routine once for each time it called k_sched_lock() before the thread becomes preemptible.
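As a sketch of the pairing described above (the function name and the critical operation are illustrative), a preemptible thread might bracket a critical section like this:

```c
#include <zephyr/kernel.h>

void update_counters(void)
{
	/* prevent preemption by other threads on this CPU; calls may nest */
	k_sched_lock();

	/* ... perform the critical operation; an operation that makes the
	 * thread unready (e.g. k_sleep()) would still switch it out ...
	 */

	/* restore normal preemptible status; must balance k_sched_lock() 1:1 */
	k_sched_unlock();
}
```

Note that this only guards against preemption by other threads on the current CPU; ISRs, MetaIRQ threads, and threads on other CPUs are unaffected.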
void k_thread_custom_data_set(void *value)
Set current thread’s custom data.
This routine sets the custom data for the current thread to value.
Custom data is not used by the kernel itself, and is freely available for a thread to use as it
sees fit. It can be used as a framework upon which to build thread-local storage.
Parameters
• value – New custom data value.
void *k_thread_custom_data_get(void)
Get current thread’s custom data.
This routine returns the custom data for the current thread.
Returns
Current custom data value.
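A minimal sketch of using custom data as crude thread-local storage (the per-thread error counter is purely illustrative):

```c
#include <zephyr/kernel.h>

/* hypothetical thread entry keeping a per-thread error count in custom data */
void thread_entry(void *p1, void *p2, void *p3)
{
	long error_count = 0;

	k_thread_custom_data_set((void *)error_count);

	for (;;) {
		/* ... do work; on failure, bump this thread's private count ... */
		error_count = (long)k_thread_custom_data_get();
		error_count++;
		k_thread_custom_data_set((void *)error_count);
	}
}
```

Because the kernel never touches this value, each thread running this entry point maintains its own independent counter without any shared state.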
struct k_thread
#include <thread.h> Thread Structure
Public Members
void *init_data
static thread init data
_wait_q_t join_queue
threads waiting in k_thread_join()
void *custom_data
crude thread-local storage
k_thread_stack_t *stack_obj
Base address of thread stack
void *syscall_frame
current syscall frame pointer
int swap_retval
z_swap() return value
void *switch_handle
Context handle returned via arch_switch()
group thread_stack_api
Thread Stack APIs.
Defines
K_KERNEL_STACK_DECLARE(sym, size)
Declare a reference to a thread stack.
This macro declares the symbol of a thread stack defined elsewhere in the current scope.
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
K_KERNEL_STACK_ARRAY_DECLARE(sym, nmemb, size)
Declare a reference to a thread stack array.
This macro declares the symbol of a thread stack array defined elsewhere in the current scope.
Parameters
• sym – Thread stack symbol name
• nmemb – Number of stacks defined
• size – Size of the stack memory region
K_KERNEL_PINNED_STACK_ARRAY_DECLARE(sym, nmemb, size)
Declare a reference to a pinned thread stack array.
This macro declares the symbol of a pinned thread stack array defined elsewhere in the current
scope.
Parameters
• sym – Thread stack symbol name
• nmemb – Number of stacks defined
• size – Size of the stack memory region
K_KERNEL_STACK_EXTERN(sym)
Obtain an extern reference to a stack.
This macro properly brings the symbol of a thread stack defined elsewhere into scope.
Deprecated:
Use K_KERNEL_STACK_DECLARE() instead.
Parameters
• sym – Thread stack symbol name
Deprecated:
Use K_KERNEL_STACK_ARRAY_DECLARE() instead.
Parameters
• sym – Thread stack symbol name
• nmemb – Number of stacks defined
Deprecated:
Use K_KERNEL_PINNED_STACK_ARRAY_DECLARE() instead.
Parameters
• sym – Thread stack symbol name
• nmemb – Number of stacks defined
• size – Size of the stack memory region
K_KERNEL_STACK_DEFINE(sym, size)
Define a toplevel kernel stack memory region.
This defines a region of memory for use as a thread stack, for threads that exclusively run in
supervisor mode. This is also suitable for declaring special stacks for interrupt or exception
handling.
Stacks defined with this macro may not host user mode threads.
It is legal to precede this definition with the ‘static’ keyword.
It is NOT legal to take sizeof(sym) and pass that to the stackSize parameter of
k_thread_create(); it may not be the same as the ‘size’ parameter. Use
K_KERNEL_STACK_SIZEOF() instead.
The total amount of memory allocated may be increased to accommodate fixed-size stack
overflow guards.
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
K_KERNEL_PINNED_STACK_DEFINE(sym, size)
Define a toplevel kernel stack memory region in pinned section.
See K_KERNEL_STACK_DEFINE() for more information and constraints.
This puts the stack into the pinned noinit linker section if
CONFIG_LINKER_USE_PINNED_SECTION is enabled; otherwise the stack is placed in the
same section as K_KERNEL_STACK_DEFINE().
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
K_KERNEL_STACK_ARRAY_DEFINE(sym, nmemb, size)
Define a toplevel array of kernel stack memory regions.
Stacks defined with this macro may not host user mode threads.
Parameters
• sym – Kernel stack array symbol name
• nmemb – Number of stacks to define
• size – Size of each stack memory region
K_THREAD_STACK_DECLARE(sym, size)
Declare a reference to a thread stack.
This macro declares the symbol of a thread stack defined elsewhere in the current scope.
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
K_THREAD_STACK_ARRAY_DECLARE(sym, nmemb, size)
Declare a reference to a thread stack array.
This macro declares the symbol of a thread stack array defined elsewhere in the current scope.
Parameters
• sym – Thread stack symbol name
• nmemb – Number of stacks defined
• size – Size of the stack memory region
K_THREAD_STACK_EXTERN(sym)
Obtain an extern reference to a stack.
This macro properly brings the symbol of a thread stack defined elsewhere into scope.
Deprecated:
Use K_THREAD_STACK_DECLARE() instead.
Parameters
• sym – Thread stack symbol name
Deprecated:
Use K_THREAD_STACK_ARRAY_DECLARE() instead.
Parameters
• sym – Thread stack symbol name
• nmemb – Number of stacks defined
• size – Size of the stack memory region
K_THREAD_STACK_SIZEOF(sym)
Return the size in bytes of a stack memory region.
Convenience macro for passing the desired stack size to k_thread_create() since the underlying
implementation may actually create something larger (for instance a guard area).
The value returned here is not guaranteed to match the ‘size’ parameter passed to
K_THREAD_STACK_DEFINE and may be larger, but is always safe to pass to k_thread_create()
for the associated stack object.
Parameters
• sym – Stack memory symbol
Returns
Size of the stack buffer
K_THREAD_STACK_DEFINE(sym, size)
Define a toplevel thread stack memory region.
This defines a region of memory suitable for use as a thread’s stack.
This is the generic, historical definition. The stack is aligned to Z_THREAD_STACK_OBJ_ALIGN
and placed in the ‘noinit’ section so that it is not zeroed at boot.
The defined symbol will always be a k_thread_stack_t which can be passed to
k_thread_create(), but should otherwise not be manipulated. If the buffer inside needs to be
examined, examine thread->stack_info for the associated thread object to obtain the bound-
aries.
It is legal to precede this definition with the ‘static’ keyword.
It is NOT legal to take sizeof(sym) and pass that to the stackSize parameter of
k_thread_create(); it may not be the same as the ‘size’ parameter. Use
K_THREAD_STACK_SIZEOF() instead.
Some arches may round the size of the usable stack region up to satisfy alignment constraints.
K_THREAD_STACK_SIZEOF() will return the aligned size.
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
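For instance (the entry function, stack size, and priority below are illustrative), a stack defined this way is passed to k_thread_create() together with K_THREAD_STACK_SIZEOF():

```c
#include <zephyr/kernel.h>

#define MY_STACK_SIZE 1024

K_THREAD_STACK_DEFINE(my_stack, MY_STACK_SIZE);
static struct k_thread my_thread;

static void my_entry(void *p1, void *p2, void *p3)
{
	/* ... thread body ... */
}

void start_worker(void)
{
	/* pass K_THREAD_STACK_SIZEOF(my_stack), never sizeof(my_stack) */
	k_thread_create(&my_thread, my_stack,
			K_THREAD_STACK_SIZEOF(my_stack),
			my_entry, NULL, NULL, NULL,
			7 /* priority */, 0, K_NO_WAIT);
}
```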
K_THREAD_PINNED_STACK_DEFINE(sym, size)
Define a toplevel thread stack memory region in pinned section.
This defines a region of memory suitable for use as a thread’s stack.
This puts the stack into the pinned noinit linker section if
CONFIG_LINKER_USE_PINNED_SECTION is enabled; otherwise the stack is placed in the
same section as K_THREAD_STACK_DEFINE().
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
K_THREAD_STACK_MEMBER(sym, size)
Define an embedded stack memory region.
Used for stacks embedded within other data structures. Use is highly discouraged but in some
cases necessary. For memory protection scenarios, it is very important that any RAM preceding
this member not be writable by threads else a stack overflow will lead to silent corruption. In
other words, the containing data structure should live in RAM owned by the kernel.
A user thread can only be started with a stack defined in this way if the thread starting it is in
supervisor mode.
This is now deprecated, as stacks defined in this way are not usable from user mode. Use
K_KERNEL_STACK_MEMBER.
Parameters
• sym – Thread stack symbol name
• size – Size of the stack memory region
Scheduling
The kernel’s priority-based scheduler allows an application’s threads to share the CPU.
Concepts The scheduler determines which thread is allowed to execute at any point in time; this thread
is known as the current thread.
There are various points in time when the scheduler is given an opportunity to change the identity of the
current thread. These points are called reschedule points. Some potential reschedule points are:
• transition of a thread from running state to a suspended or waiting state, for example by
k_sem_take() or k_sleep() .
• transition of a thread to the ready state, for example by k_sem_give() or k_thread_start()
• return to thread context after processing an interrupt
• when a running thread invokes k_yield()
A thread sleeps when it voluntarily initiates an operation that transitions itself to a suspended or waiting
state.
Whenever the scheduler changes the identity of the current thread, or when execution of the current
thread is replaced by an ISR, the kernel first saves the current thread’s CPU register values. These
register values get restored when the thread later resumes execution.
Scheduling Algorithm The kernel’s scheduler selects the highest priority ready thread to be the current
thread. When multiple ready threads of the same priority exist, the scheduler chooses the one that has
been waiting longest.
A thread’s relative priority is primarily determined by its static priority. However, when both earliest-
deadline-first scheduling is enabled (CONFIG_SCHED_DEADLINE) and a choice of threads have equal static
priority, then the thread with the earlier deadline is considered to have the higher priority. Thus, when
earliest-deadline-first scheduling is enabled, two threads are only considered to have the same priority
when both their static priorities and deadlines are equal. The routine k_thread_deadline_set() is
used to set a thread’s deadline.
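A sketch of ordering two equal-static-priority threads by deadline (the thread roles and the cycle values derived below are illustrative):

```c
#include <zephyr/kernel.h>

/* With CONFIG_SCHED_DEADLINE, of two ready threads at the same static
 * priority, the one with the earlier deadline runs first.
 */
void set_deadlines(k_tid_t audio_tid, k_tid_t logging_tid)
{
	/* deadlines are relative time deltas, expressed in hardware cycles */
	k_thread_deadline_set(audio_tid, k_ms_to_cyc_ceil32(2));
	k_thread_deadline_set(logging_tid, k_ms_to_cyc_ceil32(50));
}
```

With these settings, whenever both threads are ready the audio thread is selected first, because its deadline expires sooner.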
Note: Execution of ISRs takes precedence over thread execution, so the execution of the current thread
may be replaced by an ISR at any time unless interrupts have been masked. This applies to both cooper-
ative threads and preemptive threads.
The kernel can be built with one of several choices for the ready queue implementation, offering differ-
ent choices between code size, constant factor runtime overhead and performance scaling when many
threads are added.
• Simple linked-list ready queue (CONFIG_SCHED_DUMB)
The scheduler ready queue will be implemented as a simple unordered list, with very fast con-
stant time performance for single threads and very low code size. This implementation should
be selected on systems with constrained code size that will never see more than a small number
(3, maybe) of runnable threads in the queue at any given time. On most platforms (that are not
otherwise using the red/black tree) this results in a savings of ~2k of code size.
• Red/black tree ready queue (CONFIG_SCHED_SCALABLE)
The scheduler ready queue will be implemented as a red/black tree. This has rather slower
constant-time insertion and removal overhead, and on most platforms (that are not otherwise
using the red/black tree somewhere) requires an extra ~2kb of code. The resulting behavior will
scale cleanly and quickly into the many thousands of threads.
Use this for applications needing many concurrent runnable threads (> 20 or so). Most applications
won’t need this ready queue implementation.
• Traditional multi-queue ready queue (CONFIG_SCHED_MULTIQ)
When selected, the scheduler ready queue will be implemented as the classic/textbook array of
lists, one per priority (max 32 priorities).
This corresponds to the scheduler algorithm used in Zephyr versions prior to 1.12.
It incurs only a tiny code size overhead vs. the “dumb” scheduler and runs in O(1) time in almost all
circumstances with a very low constant factor. But it requires a fairly large RAM budget to store the
list heads, and its limited feature set makes it incompatible with deadline scheduling (which needs
to sort threads more finely) and with SMP affinity (which needs to traverse the list of threads).
Typical applications with small numbers of runnable threads probably want the DUMB scheduler.
The wait_q abstraction used in IPC primitives to pend threads for later wakeup shares the same backend
data structure choices as the scheduler, and can use the same options.
• Scalable wait_q implementation (CONFIG_WAITQ_SCALABLE)
When selected, the wait_q will be implemented with a balanced tree. Choose this if you expect
to have many threads waiting on individual primitives. There is a ~2kb code size increase over
CONFIG_WAITQ_DUMB (which may be shared with CONFIG_SCHED_SCALABLE) if the red/black tree
is not used elsewhere in the application, and pend/unpend operations on “small” queues will be
somewhat slower (though this is not generally a performance path).
• Simple linked-list wait_q (CONFIG_WAITQ_DUMB)
When selected, the wait_q will be implemented with a doubly-linked list. Choose this if you expect
to have only a few threads blocked on any single IPC primitive.
Cooperative Time Slicing Once a cooperative thread becomes the current thread, it remains the cur-
rent thread until it performs an action that makes it unready. Consequently, if a cooperative thread
performs lengthy computations, it may cause an unacceptable delay in the scheduling of other threads,
including those of higher priority and equal priority.
[Figure: cooperative time slicing — an ISR makes a higher-priority thread (Thread 2) ready, but
it cannot run until the low-priority current thread (Thread 1) relinquishes the CPU.]
To overcome such problems, a cooperative thread can voluntarily relinquish the CPU from time to time
to permit other threads to execute. A thread can relinquish the CPU in two ways:
• Calling k_yield() puts the thread at the back of the scheduler’s prioritized list of ready threads,
and then invokes the scheduler. All ready threads whose priority is higher or equal to that of the
yielding thread are then allowed to execute before the yielding thread is rescheduled. If no such
ready threads exist, the scheduler immediately reschedules the yielding thread without context
switching.
• Calling k_sleep() makes the thread unready for a specified time period. Ready threads of all
priorities are then allowed to execute; however, there is no guarantee that threads whose priority
is lower than that of the sleeping thread will actually be scheduled before the sleeping thread
becomes ready once again.
Preemptive Time Slicing Once a preemptive thread becomes the current thread, it remains the cur-
rent thread until a higher priority thread becomes ready, or until the thread performs an action that
makes it unready. Consequently, if a preemptive thread performs lengthy computations, it may cause an
unacceptable delay in the scheduling of other threads, including those of equal priority.
[Figure: preemption — Thread 2 is preempted when higher-priority Thread 3 becomes ready, and
resumes only after Thread 3 completes.]
To overcome such problems, a preemptive thread can perform cooperative time slicing (as described
above), or the scheduler’s time slicing capability can be used to allow other threads of the same priority
to execute.
[Figure: preemptive time slicing — equal-priority threads share the CPU, each running for one time
slice in turn until preempted or completed.]
The scheduler divides time into a series of time slices, where slices are measured in system clock ticks.
The time slice size is configurable, and it can be changed while the application is running.
At the end of every time slice, the scheduler checks to see if the current thread is preemptible and, if so,
implicitly invokes k_yield() on behalf of the thread. This gives other ready threads of the same priority
the opportunity to execute before the current thread is scheduled again. If no threads of equal priority
are ready, the current thread remains the current thread.
Threads with a priority higher than a specified limit are exempt from preemptive time slicing, and are
never preempted by a thread of equal priority. This allows an application to use preemptive time slicing
only when dealing with lower priority threads that are less time-sensitive.
Note: The kernel’s time slicing algorithm does not ensure that a set of equal-priority threads receive an
equitable amount of CPU time, since it does not measure the amount of time a thread actually gets to
execute. However, the algorithm does ensure that a thread never executes for longer than a single time
slice without being required to yield.
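A sketch of enabling round-robin slicing among low-priority threads (the slice length and priority ceiling are illustrative values):

```c
#include <zephyr/kernel.h>

void enable_time_slicing(void)
{
	/* Give each ready thread of priority 10 or lower (numerically >= 10)
	 * up to 100 ms before it is implicitly made to yield to its peers.
	 * Threads at priorities above the ceiling are never timesliced.
	 */
	k_sched_time_slice_set(100, 10);
}
```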
Scheduler Locking A preemptible thread that does not wish to be preempted while performing a
critical operation can instruct the scheduler to temporarily treat it as a cooperative thread by calling
k_sched_lock() . This prevents other threads from interfering while the critical operation is being
performed.
Once the critical operation is complete the preemptible thread must call k_sched_unlock() to restore
its normal, preemptible status.
If a thread calls k_sched_lock() and subsequently performs an action that makes it unready, the sched-
uler will switch the locking thread out and allow other threads to execute. When the locking thread
again becomes the current thread, its non-preemptible status is maintained.
Note: Locking out the scheduler is a more efficient way for a preemptible thread to prevent preemption
than changing its priority level to a negative value.
Thread Sleeping A thread can call k_sleep() to delay its processing for a specified time period.
During the time the thread is sleeping the CPU is relinquished to allow other ready threads to execute.
Once the specified delay has elapsed the thread becomes ready and is eligible to be scheduled once again.
A sleeping thread can be woken up prematurely by another thread using k_wakeup() . This technique
can sometimes be used to permit the secondary thread to signal the sleeping thread that something has
occurred without requiring the threads to define a kernel synchronization object, such as a semaphore.
Waking up a thread that is not sleeping is allowed, but has no effect.
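A sketch of this wakeup pattern (the waiter/notifier split is illustrative, and waiter_tid is assumed to be set when the thread is created):

```c
#include <zephyr/kernel.h>

static k_tid_t waiter_tid;   /* assumed set at thread creation */

void waiter(void *p1, void *p2, void *p3)
{
	for (;;) {
		/* sleep for up to a second unless woken early */
		k_sleep(K_SECONDS(1));
		/* ... check whether the notifier signaled something ... */
	}
}

void notify_waiter(void)
{
	/* wake the sleeping thread early; a no-op if it is not sleeping */
	k_wakeup(waiter_tid);
}
```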
Busy Waiting A thread can call k_busy_wait() to perform a busy wait that delays its processing for
a specified time period without relinquishing the CPU to another ready thread.
A busy wait is typically used instead of thread sleeping when the required delay is too short to warrant
having the scheduler context switch from the current thread to another thread and then back again.
Suggested Uses Use cooperative threads for device drivers and other performance-critical work.
Use cooperative threads to implement mutual exclusion without the need for a kernel object, such as a
mutex.
Use preemptive threads to give priority to time-sensitive processing over less time-sensitive processing.
CPU Idling
Although normally reserved for the idle thread, in certain special applications, a thread might want to
make the CPU idle.
• Concepts
• Implementation
– Making the CPU idle
– Making the CPU idle in an atomic fashion
• Suggested Uses
• API Reference
Concepts Making the CPU idle causes the kernel to pause all operations until an event, normally an
interrupt, wakes up the CPU. In a regular system, the idle thread is responsible for this. However, in
some constrained systems, it is possible that another thread takes this duty.
Implementation
Making the CPU idle Making the CPU idle is simple: call the k_cpu_idle() API. The CPU will stop
executing instructions until an event occurs. Most likely, the function will be called within a loop. Note
that in certain architectures, upon return, k_cpu_idle() unconditionally unmasks interrupts.
static struct k_sem my_sem;
int main(void)
{
    k_sem_init(&my_sem, 0, 1);
    for (;;) {
        /* wait for semaphore from ISR; if acquired, do related work */
        if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {
            /* ... do processing */
        }
        /* put CPU to sleep to save power */
        k_cpu_idle();
    }
}
Making the CPU idle in an atomic fashion It is possible that there is a need to do some work atomi-
cally before making the CPU idle. In such a case, k_cpu_atomic_idle() should be used instead.
In fact, the previous example contains a race condition: an interrupt could occur between the point where
the semaphore is found to be unavailable and the point where the CPU is made idle. On some systems,
this can cause the CPU to idle until another interrupt occurs, which might be never, thus hanging the
system completely. To prevent this, k_cpu_atomic_idle() should be used, as in this example.
static struct k_sem my_sem;
int main(void)
{
    k_sem_init(&my_sem, 0, 1);
    for (;;) {
        unsigned int key = irq_lock();
        /*
         * Wait for semaphore from ISR; if acquired, do related work, then
         * go to next loop iteration (the semaphore might have been given
         * again); else, make the CPU idle.
         */
        if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {
            irq_unlock(key);
            /* ... do processing */
        } else {
            /* put CPU to sleep; interrupt state is handled atomically */
            k_cpu_atomic_idle(key);
        }
    }
}
Suggested Uses Use k_cpu_atomic_idle() when a thread has to do some real work in addition to idling
the CPU to wait for an event. See example above.
Use k_cpu_idle() only when a thread’s sole responsibility is idling the CPU, i.e. it does no real work,
as in the example below.
int main(void)
{
    /* ... do some system/application initialization */

    /* thread does no real work from this point on; just idle the CPU */
    for (;;) {
        k_cpu_idle();
    }
}
Note: Do not use these APIs unless absolutely necessary. In a normal system, the idle thread takes
care of power management, including CPU idling.
API Reference
group cpu_idle_apis
Functions
void k_cpu_idle(void)
Make the CPU idle.
Note: In some architectures, before returning, the function unmasks interrupts uncondition-
ally.
void k_cpu_atomic_idle(unsigned int key)
Make the CPU idle in an atomic fashion.
After waking up from the low-power mode, the interrupt lockout state will be restored as if
by irq_unlock(key).
Parameters
• key – Interrupt locking key obtained from irq_lock().
System Threads
• Implementation
– Writing a main() function
• Suggested Uses
A system thread is a thread that the kernel spawns automatically during system initialization.
The kernel spawns the following system threads:
Main thread
This thread performs kernel initialization, then calls the application’s main() function (if one is
defined).
By default, the main thread uses the highest configured preemptible thread priority (i.e. 0). If the
kernel is not configured to support preemptible threads, the main thread uses the lowest configured
cooperative thread priority (i.e. -1).
The main thread is an essential thread while it is performing kernel initialization or executing the
application’s main() function; this means a fatal system error is raised if the thread aborts. If
main() is not defined, or if it executes and then does a normal return, the main thread terminates
normally and no error is raised.
Idle thread
This thread executes when there is no other work for the system to do. If possible, the idle thread
activates the board’s power management support to save power; otherwise, the idle thread simply
performs a “do nothing” loop. The idle thread remains in existence as long as the system is running
and never terminates.
The idle thread always uses the lowest configured thread priority. If this makes it a cooperative
thread, the idle thread repeatedly yields the CPU to allow the application’s other threads to run
when they need to.
The idle thread is an essential thread, which means a fatal system error is raised if the thread
aborts.
Additional system threads may also be spawned, depending on the kernel and board configuration op-
tions specified by the application. For example, enabling the system workqueue spawns a system thread
that services the work items submitted to it. (See Workqueue Threads.)
Implementation
Writing a main() function An application-supplied main() function begins executing once kernel ini-
tialization is complete. The kernel does not pass any arguments to the function.
The following code outlines a trivial main() function. The function used by a real application can be as
complex as needed.
int main(void)
{
    /* initialize a semaphore */
    ...

    /* register an ISR that gives the semaphore */
    ...

    /* monitor the semaphore forever */
    for (;;) {
        /* wait for the semaphore to be given by the ISR */
        ...
        /* do whatever processing is appropriate when the semaphore is given */
        ...
    }
}
Suggested Uses Use the main thread to perform thread-based processing in an application that only
requires a single thread, rather than defining an additional application-specific thread.
Workqueue Threads
A workqueue is a kernel object that uses a dedicated thread to process work items in a first in, first out
manner. Each work item is processed by calling the function specified by the work item. A workqueue
is typically used by an ISR or a high-priority thread to offload non-urgent processing to a lower-priority
thread so it does not impact time-sensitive processing.
Any number of workqueues can be defined (limited only by available RAM). Each workqueue is refer-
enced by its memory address.
A workqueue has the following key properties:
• A queue of work items that have been added, but not yet processed.
• A thread that processes the work items in the queue. The priority of the thread is configurable,
allowing it to be either cooperative or preemptive as required.
Regardless of the workqueue thread’s priority, the workqueue thread will yield between each submitted
work item, to prevent a cooperative workqueue from starving other threads.
A workqueue must be initialized before it can be used. This sets its queue to empty and spawns the
workqueue’s thread. The thread runs forever, but sleeps when no work items are available.
Note: The behavior described here is changed from the Zephyr workqueue implementation used prior
to release 2.6. Among the changes are:
• Precise tracking of the status of cancelled work items, so that the caller need not be concerned that
an item may be processing when the cancellation returns. Checking of return values on cancellation
is still required.
• Direct submission of delayable work items to the queue with K_NO_WAIT rather than always going
through the timeout API, which could introduce delays.
• The ability to wait until a work item has completed or a queue has been drained.
• Finer control of behavior when scheduling a delayable work item, specifically allowing a previous
deadline to remain unchanged when a work item is scheduled again.
• Safe handling of work item resubmission when the item is being processed on another workqueue.
Using the return values of k_work_busy_get() or k_work_is_pending() , or measurements of remain-
ing time until delayable work is scheduled, should be avoided to prevent race conditions of the type
observed with the previous implementation. See also Workqueue Best Practices.
Work Item Lifecycle Any number of work items can be defined. Each work item is referenced by its
memory address.
A work item is assigned a handler function, which is the function executed by the workqueue’s thread
when the work item is processed. This function accepts a single argument, which is the address of the
work item itself. The work item also maintains information about its status.
A work item must be initialized before it can be used. This records the work item’s handler function and
marks it as not pending.
A work item may be queued (K_WORK_QUEUED ) by submitting it to a workqueue by an ISR or a thread.
Submitting a work item appends the work item to the workqueue’s queue. Once the workqueue’s thread
has processed all of the preceding work items in its queue the thread will remove the next work item
from the queue and invoke the work item’s handler function. Depending on the scheduling priority of
the workqueue’s thread, and the work required by other items in the queue, a queued work item may be
processed quickly or it may remain in the queue for an extended period of time.
A delayable work item may be scheduled (K_WORK_DELAYED ) to a workqueue; see Delayable Work.
A work item will be running (K_WORK_RUNNING ) when it is running on a work queue, and may also be
canceling (K_WORK_CANCELING ) if it started running before a thread has requested that it be cancelled.
A work item can be in multiple states; for example it can be:
• running on a queue;
• marked canceling (because a thread used k_work_cancel_sync() to wait until the work item
completed);
• queued to run again on the same queue;
• scheduled to be submitted to a (possibly different) queue
all simultaneously. A work item that is in any of these states is pending (k_work_is_pending() ) or busy
(k_work_busy_get() ).
A handler function can use any kernel API available to threads. However, operations that are poten-
tially blocking (e.g. taking a semaphore) must be used with care, since the workqueue cannot process
subsequent work items in its queue until the handler function finishes executing.
The single argument that is passed to a handler function can be ignored if it is not required. If the
handler function requires additional information about the work it is to perform, the work item can
be embedded in a larger data structure. The handler function can then use the argument value to
compute the address of the enclosing data structure with CONTAINER_OF , and thereby obtain access to
the additional information it needs.
A work item is typically initialized once and then submitted to a specific workqueue whenever work
needs to be performed. If an ISR or a thread attempts to submit a work item that is already queued the
work item is not affected; the work item remains in its current place in the workqueue’s queue, and the
work is only performed once.
A handler function is permitted to re-submit its work item argument to the workqueue, since the work
item is no longer queued at that time. This allows the handler to execute work in stages, without unduly
delaying the processing of other work items in the workqueue’s queue.
Important: A pending work item must not be altered until the item has been processed by the
workqueue thread. This means a work item must not be re-initialized while it is busy. Furthermore,
any additional information the work item’s handler function needs to perform its work must not be
altered until the handler function has finished executing.
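A minimal sketch of defining a dedicated workqueue and submitting a work item to it from an ISR (stack size, priority, and the ISR itself are illustrative):

```c
#include <zephyr/kernel.h>

#define WQ_STACK_SIZE 1024
#define WQ_PRIORITY   5

K_THREAD_STACK_DEFINE(wq_stack, WQ_STACK_SIZE);
static struct k_work_q my_wq;
static struct k_work my_work;

static void my_work_handler(struct k_work *work)
{
	/* ... non-urgent processing, runs in the workqueue thread ... */
}

void init_offload(void)
{
	k_work_queue_init(&my_wq);
	k_work_queue_start(&my_wq, wq_stack,
			   K_THREAD_STACK_SIZEOF(wq_stack),
			   WQ_PRIORITY, NULL);

	/* records the handler and marks the item not pending */
	k_work_init(&my_work, my_work_handler);
}

void my_isr(const void *arg)
{
	/* offload from interrupt context; a no-op if already queued */
	k_work_submit_to_queue(&my_wq, &my_work);
}
```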
Delayable Work An ISR or a thread may need to schedule a work item that is to be processed only
after a specified period of time, rather than immediately. This can be done by scheduling a delayable
work item to be submitted to a workqueue at a future time.
A delayable work item contains a standard work item but adds fields that record when and where the
item should be submitted.
A delayable work item is initialized and scheduled to a workqueue in a similar manner to a standard work
item, although different kernel APIs are used. When the schedule request is made the kernel initiates a
timeout mechanism that is triggered after the specified delay has elapsed. Once the timeout occurs the
kernel submits the work item to the specified workqueue, where it remains queued until it is processed
in the standard manner.
Note that the work handler used for a delayable work item still receives a pointer to the underlying non-delayable
work structure, which is not publicly accessible from k_work_delayable . To get access to an object that
contains the delayable work object use this idiom:
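The idiom combines k_work_delayable_from_work() with CONTAINER_OF(). In the sketch below, struct my_data and its dwork member are hypothetical names for an application-defined container:

struct my_data {
    struct k_work_delayable dwork;
    /* ... other state used by the handler ... */
};

static void my_handler(struct k_work *work)
{
    /* Recover the delayable item from the plain k_work pointer. */
    struct k_work_delayable *dwork = k_work_delayable_from_work(work);

    /* Recover the containing object from the delayable item. */
    struct my_data *data = CONTAINER_OF(dwork, struct my_data, dwork);

    /* ... use data ... */
}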
Triggered Work The k_work_poll_submit() interface schedules a triggered work item in response to
a poll event (see Polling API): a user-defined function is called when a monitored resource becomes
available, a poll signal is raised, or a timeout occurs. In contrast to k_poll() , triggered work does
not require a dedicated thread waiting or actively polling for a poll event.
A triggered work item is a standard work item that has the following added properties:
• A pointer to an array of poll events that will trigger work item submissions to the workqueue.
• The size of the array containing the poll events.
A triggered work item is initialized and submitted to a workqueue in a similar manner to a standard
work item, although dedicated kernel APIs are used. When a submit request is made, the kernel begins
observing the kernel objects specified by the poll events. Once at least one of the observed kernel objects
changes state, the work item is submitted to the specified workqueue, where it remains queued until it
is processed in the standard manner.
Important: The triggered work item, as well as the referenced array of poll events, must remain valid
and unmodified for the complete lifecycle of the triggered work item, from submission until the work
item is executed or cancelled.
An ISR or a thread may cancel a triggered work item it has submitted as long as it is still waiting for a
poll event. In such a case, the kernel stops waiting for the attached poll events and the specified work is
not executed. Otherwise the cancellation cannot be performed.
System Workqueue The kernel defines a workqueue known as the system workqueue, which is available
to any application or kernel code that requires workqueue support. The system workqueue is optional,
and only exists if the application makes use of it.
Important: Additional workqueues should only be defined when it is not possible to submit new work
items to the system workqueue, since each new workqueue incurs a significant cost in memory footprint.
A new workqueue can be justified if it is not possible for its work items to co-exist with existing system
workqueue work items without an unacceptable impact; for example, if the new work items perform
blocking operations that would delay other system workqueue processing to an unacceptable degree.
Defining and Controlling a Workqueue A workqueue is defined using a variable of type k_work_q .
The workqueue is initialized by defining the stack area used by its thread, initializing the k_work_q
(either by zeroing its memory or by calling k_work_queue_init() ), and then calling k_work_queue_start() .
The stack area must be defined using K_THREAD_STACK_DEFINE to ensure it is properly set up in memory.
The following code defines and initializes a workqueue:
K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);

struct k_work_q my_work_q;

k_work_queue_init(&my_work_q);

k_work_queue_start(&my_work_q, my_stack_area,
                   K_THREAD_STACK_SIZEOF(my_stack_area), MY_PRIORITY,
                   NULL);
In addition, the queue identity and certain behaviors related to thread rescheduling can be controlled by
the optional final parameter; see k_work_queue_start() for details.
The following API can be used to interact with a workqueue:
• k_work_queue_drain() can be used to block the caller until the work queue has no items left.
Work items resubmitted from the workqueue thread are accepted while a queue is draining, but
work items from any other thread or ISR are rejected. The restriction on submitting more work
can be extended past the completion of the drain operation in order to allow the blocking thread
to perform additional work while the queue is “plugged”. Note that draining a queue has no effect
on scheduling or processing delayable items, but if the queue is plugged and the deadline expires
the item will silently fail to be submitted.
• k_work_queue_unplug() removes any previous block on submission to the queue due to a previous
drain operation.
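As a sketch, a thread might drain the queue, keep it plugged while performing exclusive work, and then unplug it again ( my_work_q is a hypothetical, already-started workqueue):

/* Block until the queue is empty, and keep it plugged so other
 * contexts cannot submit new items while this thread works. */
(void)k_work_queue_drain(&my_work_q, true);

/* ... perform work that must not race with queued items ... */

/* Allow submissions to the queue again. */
(void)k_work_queue_unplug(&my_work_q);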
Submitting a Work Item A work item is defined using a variable of type k_work . It must be initial-
ized by calling k_work_init() , unless it is defined using K_WORK_DEFINE in which case initialization is
performed at compile-time.
An initialized work item can be submitted to the system workqueue by calling k_work_submit() , or to
a specified workqueue by calling k_work_submit_to_queue() .
The following code demonstrates how an ISR can offload the printing of error messages to the system
workqueue. Note that if the ISR attempts to resubmit the work item while it is still queued, the work
item is left unchanged and the associated error message will not be printed.
struct device_info {
    struct k_work work;
    char name[16];
} my_device;
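The handler and the initialization might look like the following sketch ( print_error and the initialization site are illustrative, not part of the original listing):

static void print_error(struct k_work *item)
{
    /* Recover the containing object from the work item pointer. */
    struct device_info *the_device =
        CONTAINER_OF(item, struct device_info, work);

    printk("Got error on device %s\n", the_device->name);
}

/* during driver setup: initialize the work item once */
k_work_init(&my_device.work, print_error);

/* from the ISR: offload the printing to the system workqueue */
k_work_submit(&my_device.work);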
The following API can be used to check the status of or synchronize with the work item:
• k_work_busy_get() returns a snapshot of flags indicating work item state. A zero value indicates
the work is not scheduled, submitted, being executed, or otherwise still being referenced by the
workqueue infrastructure.
• k_work_is_pending() is a helper that indicates true if and only if the work is scheduled, queued,
or running.
• k_work_flush() may be invoked from threads to block until the work item has completed. It
returns immediately if the work is not pending.
• k_work_cancel() attempts to prevent the work item from being executed. This may or may not
be successful. This is safe to invoke from ISRs.
• k_work_cancel_sync() may be invoked from threads to block until the work completes; it will
return immediately if the cancellation was successful or not necessary (the work wasn’t submitted
or running). This can be used after k_work_cancel() is invoked (from an ISR) to confirm
completion of an ISR-initiated cancellation.
Scheduling a Delayable Work Item A delayable work item is defined using a variable of type
k_work_delayable . It must be initialized by calling k_work_init_delayable() .
For delayed work there are two common use cases, depending on whether a deadline should be extended
if a new event occurs. An example is collecting data that comes in asynchronously, e.g. characters from
a UART associated with a keyboard. There are two APIs that submit work after a delay:
• k_work_schedule() (or k_work_schedule_for_queue() ) schedules work to be executed at a spe-
cific time or after a delay. Further attempts to schedule the same item with this API before the delay
completes will not change the time at which the item will be submitted to its queue. Use this if
the policy is to keep collecting data until a specified delay since the first unprocessed data was
received;
• k_work_reschedule() (or k_work_reschedule_for_queue() ) unconditionally sets the deadline
for the work, replacing any previous incomplete delay and changing the destination queue if neces-
sary. Use this if the policy is to keep collecting data until a specified delay since the last unprocessed
data was received.
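As an illustration of the second policy, the following sketch debounces UART input ( rx_work , uart_rx_byte , and buffer_byte are hypothetical names):

/* Called for each received byte; rx_work is a k_work_delayable. */
static void uart_rx_byte(uint8_t byte)
{
    buffer_byte(byte); /* hypothetical: store the byte somewhere safe */

    /* Extend the deadline on every byte: the handler runs only after
     * the line has been idle for 50 ms since the LAST byte.  Using
     * k_work_schedule() here instead would keep the deadline measured
     * from the FIRST unprocessed byte. */
    k_work_reschedule(&rx_work, K_MSEC(50));
}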
If the work item is not scheduled both APIs behave the same. If K_NO_WAIT is specified as the delay the
behavior is as if the item was immediately submitted directly to the target queue, without waiting for a
minimal timeout (unless k_work_schedule() is used and a previous delay has not completed).
Both also have variants that allow control of the queue used for submission.
The helper function k_work_delayable_from_work() can be used to get a pointer to the containing
k_work_delayable from a pointer to k_work that is passed to a work handler function.
The following additional API can be used to check the status of or synchronize with the work item:
• k_work_delayable_busy_get() is the analog to k_work_busy_get() for delayable work.
• k_work_delayable_is_pending() is the analog to k_work_is_pending() for delayable work.
• k_work_flush_delayable() is the analog to k_work_flush() for delayable work.
• k_work_cancel_delayable() is the analog to k_work_cancel() for delayable work; similarly
with k_work_cancel_delayable_sync() .
Synchronizing with Work Items While the state of both regular and delayable work items can be
determined from any context using k_work_busy_get() and k_work_delayable_busy_get() some
use cases require synchronizing with work items after they’ve been submitted. k_work_flush() ,
k_work_cancel_sync() , and k_work_cancel_delayable_sync() can be invoked from thread context
to wait until the requested state has been reached.
These APIs must be provided with a k_work_sync object that has no application-inspectable components
but is needed to provide the synchronization objects. These objects should not be allocated on a stack if
the code is expected to work on architectures with CONFIG_KERNEL_COHERENCE.
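A minimal sketch of a synchronous cancellation from thread context ( my_work is a hypothetical work item):

struct k_work_sync sync;

/* Blocks until any in-progress handler invocation completes;
 * sync carries the internal synchronization state. */
bool was_pending = k_work_cancel_sync(&my_work, &sync);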
Avoid Race Conditions Sometimes the data a work item must process is naturally thread-safe, for
example when it’s put into a k_queue by some thread and processed in the work thread. More often
external synchronization is required to avoid data races: cases where the work thread might inspect or
manipulate shared state that’s being accessed by another thread or interrupt. Such state might be a flag
indicating that work needs to be done, or a shared object that is filled by an ISR or thread and read by
the work handler.
For simple flags Atomic Services may be sufficient. In other cases spin locks (k_spinlock_t) or thread-
aware locks (k_sem, k_mutex , . . . ) may be used to ensure data races don’t occur.
If the selected lock mechanism can sleep, then allowing the work queue thread to sleep will starve other
work queue items, which may need to make progress in order to get the lock released. Work handlers
should instead attempt to take the lock using its no-wait path. For example:
if (k_mutex_lock(&parent->lock, K_NO_WAIT) != 0) {
    /* NB: Submit will fail if the work item is being cancelled. */
    (void)k_work_submit(work);
    return;
}
Be aware that if the lock is held by a thread with a lower priority than the work queue the resubmission
may starve the thread that would release the lock, causing the application to fail. Where the idiom above
is required a delayable work item is preferred, and the work should be (re-)scheduled with a non-zero
delay to allow the thread holding the lock to make progress.
Note that submitting from the work handler can fail if the work item had been cancelled. Generally this
is acceptable, since the cancellation will complete once the handler finishes. If it is not, the code above
must take other steps to notify the application that the work could not be performed.
Work items in isolation are self-locking, so you don’t need to hold an external lock just to submit or
schedule them. Even if you use external state protected by such a lock to prevent further resubmission,
it’s safe to do the resubmit as long as you’re sure that eventually the item will take its lock and check that
state to determine whether it should do anything. Where a delayable work item is being rescheduled in
its handler because the lock could not be taken, some other self-locking state, such as an atomic flag set
by the application or driver when the cancel is initiated, is required to detect the cancellation and avoid
the cancelled work item being submitted again after the deadline.
Check Return Values All work API functions return the status of the underlying operation, and in many
cases it is important to verify that the intended result was obtained.
• Submitting a work item (k_work_submit_to_queue() ) can fail if the work is being cancelled or
the queue is not accepting new items. If this happens the work will not be executed, which could
cause a subsystem that is animated by work handler activity to become non-responsive.
• Asynchronous cancellation (k_work_cancel() or k_work_cancel_delayable() ) can complete
while the work item is still being run by a handler. Proceeding to manipulate state shared with
the work handler will result in data races that can cause failures.
Many race conditions have been present in Zephyr code because the results of an operation were not
checked.
There may be good reason to believe that a return value indicating that the operation did not complete as
expected is not a problem. In those cases the code should clearly document this, by (1) casting the return
value to void to indicate that the result is intentionally ignored, and (2) documenting what happens in
the unexpected case. For example:
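A sketch of such a documented, intentionally ignored result ( my_work is a hypothetical work item; the comment records why the failure is acceptable):

/* If this fails the work item is being cancelled, in which case the
 * cancellation path is responsible for cleanup; nothing to do here. */
(void)k_work_submit(&my_work);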
However, in such a case the code that follows must still avoid data races, as it cannot guarantee that the
work thread is not accessing work-related state.
Don’t Optimize Prematurely The workqueue API is designed to be safe when invoked from multiple
threads and interrupts. Attempts to externally inspect a work item’s state and make decisions based on
the result are likely to create new problems.
So when new work comes in, just submit it. Don’t attempt to “optimize” by checking whether
the work item is already submitted by inspecting snapshot state with k_work_is_pending() or
k_work_busy_get() , or checking for a non-zero delay from k_work_delayable_remaining_get() .
Those checks are fragile: a “busy” indication can be obsolete by the time the result is checked, and a
“not-busy” indication can also be wrong if work is submitted from multiple contexts, or (for delayable
work) if the deadline has passed but the work is still in the queued or running state.
A general best practice is to always maintain in shared state some condition that can be checked by the
handler to confirm whether there is work to be done. This way you can use the work handler as the
standard cleanup path: rather than having to deal with cancellation and cleanup at points where items
are submitted, you may be able to have everything done in the work handler itself.
A rare case where you could safely use k_work_is_pending() is as a check to avoid invoking
k_work_flush() or k_work_cancel_sync() , if you are certain that nothing else might submit the work
while you’re checking (generally because you’re holding a lock that prevents access to state used for
submission).
Suggested Uses Use the system workqueue to defer complex interrupt-related processing from an ISR
to a shared thread. This allows the interrupt-related processing to be done promptly without compro-
mising the system’s ability to respond to subsequent interrupts, and does not require the application to
define and manage an additional thread to do the processing.
API Reference
group workqueue_apis
Defines
K_WORK_DELAYABLE_DEFINE(work, work_handler)
Initialize a statically-defined delayable work item.
This macro can be used to initialize a statically-defined delayable work item, prior to its first
use. For example,
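A minimal usage sketch ( my_work and my_work_handler are placeholder names):

static K_WORK_DELAYABLE_DEFINE(my_work, my_work_handler);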
Note that if the runtime dependencies permit initialization with k_work_init_delayable() , using
that function instead eliminates the initialized object in ROM that is produced by this macro and copied
in at system startup.
Parameters
• work – Symbol name for delayable work item object
• work_handler – Function to invoke each time work item is processed.
K_WORK_USER_DEFINE(work, work_handler)
Initialize a statically-defined user work item.
This macro can be used to initialize a statically-defined user work item, prior to its first use.
For example,
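A minimal usage sketch ( my_work and my_work_handler are placeholder names):

static K_WORK_USER_DEFINE(my_work, my_work_handler);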
Parameters
• work – Symbol name for work item object
• work_handler – Function to invoke each time work item is processed.
K_WORK_DEFINE(work, work_handler)
Initialize a statically-defined work item.
This macro can be used to initialize a statically-defined workqueue work item, prior to its first
use. For example,
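A minimal usage sketch ( my_work and my_work_handler are placeholder names):

static K_WORK_DEFINE(my_work, my_work_handler);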
Parameters
• work – Symbol name for work item object
• work_handler – Function to invoke each time work item is processed.
Typedefs
Enums
enum [anonymous]
Values:
Functions
Parameters
• work – the work structure to be initialized.
• handler – the handler to be invoked by the work item.
Note: This is a live snapshot of state, which may change before the result is checked. Use
locks where appropriate.
Parameters
• work – pointer to the work item.
Returns
a mask of flags K_WORK_DELAYED, K_WORK_QUEUED, K_WORK_RUNNING,
and K_WORK_CANCELING.
Note: This is a live snapshot of state, which may change before the result is checked. Use
locks where appropriate.
Parameters
• work – pointer to the work item.
Returns
true if and only if k_work_busy_get() returns a non-zero value.
Parameters
• queue – pointer to the work queue on which the item should run. If NULL the
queue from the most recent submission will be used.
• work – pointer to the work item.
Return values
• 0 – if work was already submitted to a queue
• 1 – if work was not submitted and has been queued to queue
• 2 – if work was running and has been queued to the queue that was running it
• -EBUSY –
– if work submission was rejected because the work item is cancelling; or
– queue is draining; or
– queue is plugged.
• -EINVAL – if queue is null and the work item has never been run.
• -ENODEV – if queue has not been started.
Parameters
• work – pointer to the work item.
Returns
as with k_work_submit_to_queue().
Note: Be careful of caller and work queue thread relative priority. If this function sleeps
it will not return until the work queue thread completes the tasks that allow this thread to
resume.
Note: Behavior is undefined if this function is invoked on work from a work queue running
work.
Parameters
• work – pointer to the work item.
• sync – pointer to an opaque item containing state related to the pending can-
cellation. The object must persist until the call returns, and be accessible from
both the caller thread and the work queue thread. The object must not be
used for any other flush or cancel operation until this one completes. On ar-
chitectures with CONFIG_KERNEL_COHERENCE the object must be allocated
in coherent memory.
Return values
• true – if call had to wait for completion
• false – if work was already idle
Parameters
• work – pointer to the work item.
Returns
the k_work_busy_get() status indicating the state of the item after all cancellation
steps performed by this call are completed.
Note: Be careful of caller and work queue thread relative priority. If this function sleeps
it will not return until the work queue thread completes the tasks that allow this thread to
resume.
Note: Behavior is undefined if this function is invoked on work from a work queue running
work.
Parameters
• work – pointer to the work item.
• sync – pointer to an opaque item containing state related to the pending can-
cellation. The object must persist until the call returns, and be accessible from
both the caller thread and the work queue thread. The object must not be
used for any other flush or cancel operation until this one completes. On ar-
chitectures with CONFIG_KERNEL_COHERENCE the object must be allocated
in coherent memory.
Return values
• true – if work was pending (call had to wait for cancellation of a running
handler to complete, or scheduled or submitted operations were cancelled);
• false – otherwise
Parameters
• queue – the queue structure to be initialized.
Parameters
• queue – pointer to the queue structure.
Return values
• 0 – if successfully unplugged
• -EALREADY – if the work queue was not plugged.
Parameters
• dwork – the delayable work structure to be initialized.
• handler – the handler to be invoked by the work item.
Note: This is a live snapshot of state, which may change before the result can be inspected.
Use locks where appropriate.
Parameters
• dwork – pointer to the delayable work item.
Returns
a mask of flags K_WORK_DELAYED, K_WORK_QUEUED, K_WORK_RUNNING,
and K_WORK_CANCELING. A zero return value indicates the work item appears
to be idle.
Note: This is a live snapshot of state, which may change before the result can be inspected.
Use locks where appropriate.
Parameters
• dwork – pointer to the delayable work item.
Returns
true if and only if k_work_delayable_busy_get() returns a non-zero value.
Note: This is a live snapshot of state, which may change before the result can be inspected.
Use locks where appropriate.
Parameters
• dwork – pointer to the delayable work item.
Returns
the tick count when the timer that will schedule the work item will expire, or the
current tick count if the work is not scheduled.
Note: This is a live snapshot of state, which may change before the result can be inspected.
Use locks where appropriate.
Parameters
• dwork – pointer to the delayable work item.
Returns
the number of ticks until the timer that will schedule the work item will expire,
or zero if the item is not scheduled.
Parameters
• queue – the queue on which the work item should be submitted after the delay.
• dwork – pointer to the delayable work item.
• delay – the time to wait before submitting the work item. If K_NO_WAIT and
the work is not pending this is equivalent to k_work_submit_to_queue().
Return values
• 0 – if work was already scheduled or submitted.
• 1 – if work has been scheduled.
• -EBUSY – if delay is K_NO_WAIT and k_work_submit_to_queue() fails with this
code.
• -EINVAL – if delay is K_NO_WAIT and k_work_submit_to_queue() fails with this
code.
• -ENODEV – if delay is K_NO_WAIT and k_work_submit_to_queue() fails with this
code.
Note: If delay is K_NO_WAIT (“no delay”) the return values are as with
k_work_submit_to_queue().
Parameters
• queue – the queue on which the work item should be submitted after the delay.
• dwork – pointer to the delayable work item.
• delay – the time to wait before submitting the work item. If K_NO_WAIT this
is equivalent to k_work_submit_to_queue() after canceling any previous sched-
uled submission.
Return values
• 0 – if delay is K_NO_WAIT and work was already on a queue
• 1 – if
– delay is K_NO_WAIT and work was not submitted but has now been queued
to queue; or
– delay not K_NO_WAIT and work has been scheduled
• 2 – if delay is K_NO_WAIT and work was running and has been queued to the
queue that was running it
• -EBUSY – if delay is K_NO_WAIT and k_work_submit_to_queue() fails with this
code.
• -EINVAL – if delay is K_NO_WAIT and k_work_submit_to_queue() fails with this
code.
• -ENODEV – if delay is K_NO_WAIT and k_work_submit_to_queue() fails with this
code.
Note: Be careful of caller and work queue thread relative priority. If this function sleeps
it will not return until the work queue thread completes the tasks that allow this thread to
resume.
Note: Behavior is undefined if this function is invoked on dwork from a work queue running
dwork.
Parameters
• dwork – pointer to the delayable work item.
• sync – pointer to an opaque item containing state related to the pending can-
cellation. The object must persist until the call returns, and be accessible from
both the caller thread and the work queue thread. The object must not be
used for any other flush or cancel operation until this one completes. On ar-
chitectures with CONFIG_KERNEL_COHERENCE the object must be allocated
in coherent memory.
Return values
Note: The work may still be running when this returns. Use k_work_flush_delayable() or
k_work_cancel_delayable_sync() to ensure it is not running.
Note: Canceling delayable work does not prevent rescheduling it. It does prevent submitting
it until the cancellation completes.
Parameters
• dwork – pointer to the delayable work item.
Returns
the k_work_delayable_busy_get() status indicating the state of the item after all
cancellation steps performed by this call are completed.
Note: Canceling delayable work does not prevent rescheduling it. It does prevent submitting
it until the cancellation completes.
Note: Be careful of caller and work queue thread relative priority. If this function sleeps
it will not return until the work queue thread completes the tasks that allow this thread to
resume.
Note: Behavior is undefined if this function is invoked on dwork from a work queue running
dwork.
Parameters
• dwork – pointer to the delayable work item.
• sync – pointer to an opaque item containing state related to the pending can-
cellation. The object must persist until the call returns, and be accessible from
both the caller thread and the work queue thread. The object must not be
used for any other flush or cancel operation until this one completes. On ar-
chitectures with CONFIG_KERNEL_COHERENCE the object must be allocated
in coherent memory.
Return values
• true – if work was not idle (call had to wait for cancellation of a running
handler to complete, or scheduled or submitted operations were cancelled);
• false – otherwise
Note: Checking if the work is pending gives no guarantee that the work will still be pending
when this information is used. It is up to the caller to make sure that this information is used
in a safe manner.
Parameters
• work – Address of work item.
Returns
true if work item is pending, or false if it is not pending.
Parameters
• work_q – Address of workqueue.
• work – Address of work item.
Return values
• -EBUSY – if the work item was already in some workqueue
• -ENOMEM – if no memory for thread resource pool allocation
• 0 – Success
Warning: Provided array of events as well as a triggered work item must be placed in
persistent memory (valid until work handler execution or work cancellation) and cannot
be modified after submission.
Parameters
• work_q – Address of workqueue.
• work – Address of delayed work item.
• events – An array of events which trigger the work.
• num_events – The number of events in the array.
• timeout – Timeout after which the work will be scheduled for execution even
if not triggered.
Return values
• 0 – Work item started watching for events.
• -EINVAL – Work item is being processed or has completed its work.
• -EADDRINUSE – Work item is pending on a different workqueue.
Warning: Provided array of events as well as a triggered work item must not be modified
until the item has been processed by the workqueue.
Parameters
• work – Address of delayed work item.
• events – An array of events which trigger the work.
• num_events – The number of events in the array.
• timeout – Timeout after which the work will be scheduled for execution even
if not triggered.
Return values
• 0 – Work item started watching for events.
• -EINVAL – Work item is being processed or has completed its work.
• -EADDRINUSE – Work item is pending on a different workqueue.
Parameters
• work – Address of delayed work item.
Return values
• 0 – Work item canceled.
• -EINVAL – Work item is being processed or has completed its work.
struct k_work
#include <kernel.h> A structure used to submit work.
struct k_work_delayable
#include <kernel.h> A structure used to submit work after a delay.
struct k_work_sync
#include <kernel.h> A structure holding internal state for a pending synchronous operation
on a work item or queue.
Instances of this type are provided by the caller for invocation of k_work_flush(),
k_work_cancel_sync() and sibling flush and cancel APIs. A referenced object must persist until
the call returns, and be accessible from both the caller thread and the work queue thread.
struct k_work_queue_config
#include <kernel.h> A structure holding optional configuration items for a work queue.
This structure, and values it references, are not retained by k_work_queue_start().
Public Members
bool no_yield
Control whether the work queue thread should yield between items.
Yielding between items helps guarantee the work queue thread does not starve other
threads, including cooperative ones released by a work item. This is the default behavior.
Set this to true to prevent the work queue thread from yielding between items. This may
be appropriate when a sequence of items should complete without yielding control.
struct k_work_q
#include <kernel.h> A structure used to hold work until it can be processed.
What Can be Expected to Work These core capabilities shall function correctly when
CONFIG_MULTITHREADING is disabled:
• The build system
• The ability to boot the application to main()
• Interrupt management
• The system clock including k_uptime_get()
• Timers, i.e. k_timer()
• Non-sleeping delays e.g. k_busy_wait() .
• Sleeping k_cpu_idle() .
• Pre main() drivers and subsystems initialization e.g. SYS_INIT.
• Memory Management
• Specifically identified drivers in certain subsystems, listed below.
The expectations above affect selection of other features; for example CONFIG_SYS_CLOCK_EXISTS cannot
be set to n.
What Cannot be Expected to Work Functionality that will not work when CONFIG_MULTITHREADING
is disabled includes the majority of the kernel API:
• Threads
• Scheduling
• Workqueue Threads
• Polling API
• Semaphores
• Mutexes
• Condition Variables
• Data Passing
Subsystem Behavior Without Thread Support The sections below list driver and functional subsys-
tems that are expected to work to some degree when CONFIG_MULTITHREADING is disabled. Subsystems
that are not listed here should not be expected to work.
Some existing drivers within the listed subsystems do not work when threading is disabled, but are within
scope based on their subsystem, or may be sufficiently isolated that supporting them on a particular
platform is low-impact. Enhancements to add support to existing capabilities that were not originally
implemented to work with threads disabled will be considered.
Flash The Flash API is expected to work for all SoC flash peripheral drivers. Bus-accessed devices like
serial memories may not be supported.
List/table of supported drivers to go here
GPIO The General-Purpose Input/Output (GPIO) API is expected to work for all SoC GPIO peripheral
drivers. Bus-accessed devices like GPIO extenders may not be supported.
List/table of supported drivers to go here
UART A subset of the Universal Asynchronous Receiver-Transmitter (UART) is expected to work for all
SoC UART peripheral drivers.
• Applications that select CONFIG_UART_INTERRUPT_DRIVEN may work, depending on driver imple-
mentation.
• Applications that select CONFIG_UART_ASYNC_API may work, depending on driver implementation.
• Applications that do not select either CONFIG_UART_ASYNC_API or
CONFIG_UART_INTERRUPT_DRIVEN are expected to work.
List/table of supported drivers to go here, including which API options are supported
Interrupts
An interrupt service routine (ISR) is a function that executes asynchronously in response to a hardware or
software interrupt. An ISR normally preempts the execution of the current thread, allowing the response
to occur with very low overhead. Thread execution resumes only once all ISR work has been completed.
• Concepts
– Multi-level Interrupt handling
– Preventing Interruptions
– Offloading ISR Work
• Implementation
– Defining a regular ISR
– Defining a ‘direct’ ISR
– Implementation Details
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of ISRs can be defined (limited only by available RAM), subject to the constraints
imposed by underlying hardware.
An ISR has the following key properties:
• An interrupt request (IRQ) signal that triggers the ISR.
• A priority level associated with the IRQ.
• An interrupt handler function that is invoked to handle the interrupt.
• An argument value that is passed to that function.
An IDT (Interrupt Descriptor Table) or a vector table is used to associate a given interrupt source with a
given ISR. Only a single ISR can be associated with a specific IRQ at any given time.
Multiple ISRs can utilize the same function to process interrupts, allowing a single function to service
a device that generates multiple types of interrupts or to service multiple devices (usually of the same
type). The argument value passed to an ISR’s function allows the function to determine which interrupt
has been signaled.
The kernel provides a default ISR for all unused IDT entries. This ISR generates a fatal system error if
an unexpected interrupt is signaled.
The kernel supports interrupt nesting. This allows an ISR to be preempted in mid-execution if a higher
priority interrupt is signaled. The lower priority ISR resumes execution once the higher priority ISR has
completed its processing.
An ISR’s interrupt handler function executes in the kernel’s interrupt context. This context has its own
dedicated stack area (or, on some architectures, stack areas). The size of the interrupt context stack must
be capable of handling the execution of multiple concurrent ISRs if interrupt nesting support is enabled.
Important: Many kernel APIs can be used only by threads, and not by ISRs. In cases where a routine
may be invoked by both threads and ISRs the kernel provides the k_is_in_isr() API to allow the
routine to alter its behavior depending on whether it is executing as part of a thread or as part of an ISR.
Multi-level Interrupt handling A hardware platform can support more interrupt lines than natively
provided by the CPU through the use of one or more nested interrupt controllers. Sources of hardware
interrupts are combined into one line that is then routed to the parent controller.
If nested interrupt controllers are supported, CONFIG_MULTI_LEVEL_INTERRUPTS should be set to 1, and
CONFIG_2ND_LEVEL_INTERRUPTS and CONFIG_3RD_LEVEL_INTERRUPTS configured as well, based on the
hardware architecture.
A unique 32-bit interrupt number is assigned with information embedded in it to select and invoke the
correct Interrupt Service Routine (ISR). Each interrupt level is given a byte within this 32-bit number,
providing support for up to four interrupt levels using this scheme, as illustrated and explained below:
LEVEL 1 controller
├── line 2 ─> LEVEL 2 controller
│              └── line 2 ─> device B
├── line 4 ─> device A
└── line 9 ─> LEVEL 2 controller
               ├── line 3 ─> device C
               └── line 5 ─> LEVEL 3 controller
                              └── line 2 ─> device D
• One of the LEVEL 2 controllers has interrupt line 5 connected to a LEVEL 3 nested controller and
one device ‘C’ on line 3.
• The other LEVEL 2 controller has no nested controllers but has one device ‘B’ on line 2.
• The LEVEL 3 controller has one device ‘D’ on line 2.
Here’s how unique interrupt numbers are generated for each hardware interrupt. Let’s consider four
interrupts shown above as A, B, C, and D:
A -> 0x00000004
B -> 0x00000302
C -> 0x00000409
D -> 0x00030609
Note: The bit positions for LEVEL 2 and onward are offset by 1, as 0 means that interrupt number is not
present for that level. For our example, the LEVEL 3 controller has device D on line 2, connected to the
LEVEL 2 controller’s line 5, that is connected to the LEVEL 1 controller’s line 9 (2 -> 5 -> 9). Because of
the encoding offset for LEVEL 2 and onward, device D is given the number 0x00030609.
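As a check on these rules, the four numbers above can be reproduced with a small stand-alone helper (`encode_irq` is illustrative only, not a Zephyr API):

```c
#include <stdint.h>

/* Illustrative encoding of a multi-level interrupt number: bits 0-7
 * hold the LEVEL 1 line, bits 8-15 the LEVEL 2 line plus 1, and bits
 * 16-23 the LEVEL 3 line plus 1, so a zero byte means "no interrupt
 * present at that level".
 */
static uint32_t encode_irq(uint32_t l1_line,
                           int has_l2, uint32_t l2_line,
                           int has_l3, uint32_t l3_line)
{
    uint32_t irq = l1_line;

    if (has_l2) {
        irq |= (l2_line + 1) << 8;       /* offset by 1: 0 = absent */
        if (has_l3) {
            irq |= (l3_line + 1) << 16;
        }
    }
    return irq;
}
```

For example, device D on LEVEL 3 line 2, behind LEVEL 2 line 5 and LEVEL 1 line 9, encodes as `encode_irq(9, 1, 5, 1, 2)`, matching 0x00030609 above.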
Preventing Interruptions In certain situations it may be necessary for the current thread to prevent
ISRs from executing while it is performing time-sensitive or critical section operations.
A thread may temporarily prevent all IRQ handling in the system using an IRQ lock. This lock can be
applied even when it is already in effect, so routines can use it without having to know if it is already
in effect. The thread must unlock its IRQ lock the same number of times it was locked before interrupts
can be once again processed by the kernel while the thread is running.
Important: The IRQ lock is thread-specific. If thread A locks out interrupts then performs an operation
that puts itself to sleep (e.g. sleeping for N milliseconds), the thread’s IRQ lock no longer applies once
thread A is swapped out and the next ready thread B starts to run.
This means that interrupts can be processed while thread B is running unless thread B has also locked
out interrupts using its own IRQ lock. (Whether interrupts can be processed while the kernel is switching
between two threads that are using the IRQ lock is architecture-specific.)
When thread A eventually becomes the current thread once again, the kernel re-establishes thread A’s
IRQ lock. This ensures thread A won’t be interrupted until it has explicitly unlocked its IRQ lock.
If thread A does not sleep but does make a higher-priority thread B ready, the IRQ lock will inhibit any
preemption that would otherwise occur. Thread B will not run until the next reschedule point reached
after releasing the IRQ lock.
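The nested lock/unlock behavior can be sketched with a toy single-CPU model (`toy_irq_lock` and `toy_irq_unlock` are illustrative stand-ins, not the real architecture-specific implementation):

```c
#include <stdbool.h>

/* Toy model of the nested IRQ-lock semantics: each lock call returns
 * a key recording the previous interrupt state, so routines can lock
 * without knowing whether a lock is already in effect. Interrupts are
 * re-enabled only when the outermost key is passed back.
 */
static bool irqs_enabled = true;

static unsigned int toy_irq_lock(void)
{
    unsigned int key = irqs_enabled ? 1u : 0u; /* prior state */

    irqs_enabled = false;                      /* lock out IRQs */
    return key;
}

static void toy_irq_unlock(unsigned int key)
{
    irqs_enabled = (key != 0);                 /* restore prior state */
}
```

Unlocking with the inner key leaves interrupts disabled; only releasing the outermost key restores them, mirroring the balanced lock/unlock requirement described above.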
Alternatively, a thread may temporarily disable a specified IRQ so its associated ISR does not execute
when the IRQ is signaled. The IRQ must be subsequently enabled to permit the ISR to execute.
Important: Disabling an IRQ prevents all threads in the system from being preempted by the associated
ISR, not just the thread that disabled the IRQ.
Zero Latency Interrupts Preventing interruptions by applying an IRQ lock may increase the observed
interrupt latency. A high interrupt latency, however, may not be acceptable for certain low-latency use-
cases.
The kernel addresses such use-cases by allowing interrupts with critical latency constraints to execute at
a priority level that cannot be blocked by interrupt locking. These interrupts are defined as zero-latency
interrupts. The support for zero-latency interrupts requires CONFIG_ZERO_LATENCY_IRQS to be enabled.
Offloading ISR Work An ISR should execute quickly to ensure predictable system operation. If time
consuming processing is required the ISR should offload some or all processing to a thread, thereby
restoring the kernel’s ability to respond to other interrupts.
The kernel supports several mechanisms for offloading interrupt-related processing to a thread.
• An ISR can signal a helper thread to do interrupt-related processing using a kernel object, such as
a FIFO, LIFO, or semaphore.
• An ISR can instruct the system workqueue thread to execute a work item. (See Workqueue Threads.)
When an ISR offloads work to a thread, there is typically a single context switch to that thread when
the ISR completes, allowing interrupt-related processing to continue almost immediately. However, de-
pending on the priority of the thread handling the offload, it is possible that the currently executing
cooperative thread or other higher-priority threads may execute before the thread handling the offload
is scheduled.
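The division of labor can be sketched in plain C (the ring buffer and function names are illustrative; a Zephyr application would hand off through a FIFO, semaphore, or work item as listed above):

```c
#include <stddef.h>

/* Sketch of the offload pattern: the ISR only records the event, and
 * a worker running in thread context does the expensive processing
 * later.
 */
#define EVT_QUEUE_SIZE 8u

static int evt_queue[EVT_QUEUE_SIZE];
static unsigned int evt_head, evt_tail;

static void my_isr(int event)
{
    /* Keep ISR work minimal: just enqueue the event. */
    evt_queue[evt_head++ % EVT_QUEUE_SIZE] = event;
}

static int worker_process(int *sum)
{
    int handled = 0;

    /* Time-consuming processing happens here, in thread context. */
    while (evt_tail != evt_head) {
        *sum += evt_queue[evt_tail++ % EVT_QUEUE_SIZE];
        handled++;
    }
    return handled;
}
```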
Implementation
Defining a regular ISR An ISR is defined at runtime by calling IRQ_CONNECT . It must then be enabled
by calling irq_enable() .
Important: IRQ_CONNECT() is not a C function and does some inline assembly magic behind the
scenes. All its arguments must be known at build time. Drivers that have multiple instances may need to
define per-instance config functions to configure each instance of the interrupt.
void my_isr_installer(void)
{
    ...
    IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG, MY_IRQ_FLAGS);
    irq_enable(MY_DEV_IRQ);
    ...
}
Since the IRQ_CONNECT macro requires that all its parameters be known at build time, in some cases this
may not be acceptable. It is also possible to install interrupts at runtime with irq_connect_dynamic() .
It is used in exactly the same way as IRQ_CONNECT :
void my_isr_installer(void)
{
...
irq_connect_dynamic(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG,
MY_IRQ_FLAGS);
irq_enable(MY_DEV_IRQ);
...
}
Defining a ‘direct’ ISR Regular Zephyr interrupts introduce some overhead which may be unacceptable
for some low-latency use-cases. Specifically:
• The argument to the ISR is retrieved and passed to the ISR
• If power management is enabled and the system was idle, all the hardware will be resumed from
low-power state before the ISR is executed, which can be very time-consuming
• Although some architectures will do this in hardware, other architectures need to switch to the
interrupt stack in code
• After the interrupt is serviced, the OS then performs some logic to potentially make a scheduling
decision.
Zephyr supports so-called ‘direct’ interrupts, which are installed via IRQ_DIRECT_CONNECT . These direct
interrupts have some special implementation requirements and a reduced feature set; see the definition
of IRQ_DIRECT_CONNECT for details.
The following code demonstrates a direct ISR:
ISR_DIRECT_DECLARE(my_isr)
{
do_stuff();
ISR_DIRECT_PM(); /* PM done after servicing interrupt for best latency */
return 1; /* We should check if scheduling decision should be made */
}
void my_isr_installer(void)
{
    ...
    IRQ_DIRECT_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_IRQ_FLAGS);
    irq_enable(MY_DEV_IRQ);
    ...
}
Implementation Details Interrupt tables are set up at build time using some special build tools. The
details laid out here apply to all architectures except x86, which are covered in the x86 Details section
below.
Any invocation of IRQ_CONNECT will declare an instance of struct _isr_list which is placed in a special
.intList section:
struct _isr_list {
/** IRQ line number */
int32_t irq;
/** Flags for this IRQ, see ISR_FLAG_* definitions */
int32_t flags;
/** ISR to call */
void *func;
/** Parameter for non-direct IRQs */
void *param;
};
Zephyr is built in two phases; the first phase of the build produces ${ZEPHYR_PREBUILT_EXECUTABLE}.elf
which contains all the entries in the .intList section preceded by a header:
struct {
void *spurious_irq_handler;
void *sw_irq_handler;
uint32_t num_isrs;
uint32_t num_vectors;
struct _isr_list isrs[]; <- of size num_isrs
};
This data consisting of the header and instances of struct _isr_list inside
${ZEPHYR_PREBUILT_EXECUTABLE}.elf is then used by the gen_isr_tables.py script to generate a C
file defining a vector table and software ISR table that are then compiled and linked into the final
application.
The priority level of any interrupt is not encoded in these tables, instead IRQ_CONNECT also has a runtime
component which programs the desired priority level of the interrupt to the interrupt controller. Some
architectures do not support the notion of interrupt priority, in which case the priority argument is
ignored.
Vector Table A vector table is generated when CONFIG_GEN_IRQ_VECTOR_TABLE is enabled. This data
structure is used natively by the CPU and is simply an array of function pointers, where each element n
corresponds to the IRQ handler for IRQ line n, and the function pointers are:
1. For ‘direct’ interrupts declared with IRQ_DIRECT_CONNECT , the handler function will be placed
here.
2. For regular interrupts declared with IRQ_CONNECT , the address of the common software IRQ han-
dler is placed here. This code does common kernel interrupt bookkeeping and looks up the ISR
and parameter from the software ISR table.
3. For interrupt lines that are not configured at all, the address of the spurious IRQ handler will be
placed here. The spurious IRQ handler causes a system fatal error if encountered.
Some architectures (such as the Nios II internal interrupt controller) have a common entry point for all
interrupts and do not support a vector table, in which case the CONFIG_GEN_IRQ_VECTOR_TABLE option
should be disabled.
Some architectures may reserve some initial vectors for system exceptions and declare this in a table
elsewhere, in which case CONFIG_GEN_IRQ_START_VECTOR needs to be set to properly offset the
indices in the table.
SW ISR Table A software ISR table is generated when CONFIG_GEN_SW_ISR_TABLE is enabled. It is an
array of entries of the following type, indexed by IRQ line:
struct _isr_table_entry {
void *arg;
void (*isr)(void *);
};
This is used by the common software IRQ handler to look up the ISR and its argument and execute it.
The active IRQ line is looked up in an interrupt controller register and used to index this table.
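A minimal sketch of that dispatch step, reusing the entry type shown earlier (the table contents, `my_isr`, and `common_sw_irq_handler` are made up for illustration):

```c
/* Sketch of what the common software IRQ handler does: use the active
 * IRQ line to index the software ISR table and invoke the stored ISR
 * with its stored argument.
 */
struct _isr_table_entry {
    void *arg;
    void (*isr)(void *);
};

static int events_seen;

static void my_isr(void *arg)
{
    events_seen += *(int *)arg;
}

static int my_isr_arg = 5;

/* Hypothetical 4-entry table with an ISR installed on line 2. */
static struct _isr_table_entry sw_isr_table[4] = {
    [2] = { .arg = &my_isr_arg, .isr = my_isr },
};

static void common_sw_irq_handler(unsigned int active_irq)
{
    /* On real hardware, active_irq is read from an interrupt
     * controller register.
     */
    struct _isr_table_entry *entry = &sw_isr_table[active_irq];

    entry->isr(entry->arg);
}
```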
x86 Details The x86 architecture has a special type of vector table called the Interrupt Descriptor
Table (IDT) which must be laid out in a certain way per the x86 processor documentation. It is still
fundamentally a vector table, and the arch/x86/gen_idt.py tool uses the .intList section to create it.
However, on APIC-based systems the indexes in the vector table do not correspond to the IRQ line. The
first 32 vectors are reserved for CPU exceptions, and all remaining vectors (up to index 255) correspond
to the priority level, in groups of 16. In this scheme, interrupts of priority level 0 will be placed in vectors
32-47, level 1 48-63, and so forth. When the arch/x86/gen_idt.py tool is constructing the IDT, when it
configures an interrupt it will look for a free vector in the appropriate range for the requested priority
level and set the handler there.
On x86 when an interrupt or exception vector is executed by the CPU, there is no foolproof way to de-
termine which vector was fired, so a software ISR table indexed by IRQ line is not used. Instead, the
IRQ_CONNECT call creates a small assembly language function which calls the common interrupt code
in _interrupt_enter() with the ISR and parameter as arguments. It is the address of this assembly
interrupt stub which gets placed in the IDT. For interrupts declared with IRQ_DIRECT_CONNECT the pa-
rameterless ISR is placed directly in the IDT.
On systems where the position in the vector table corresponds to the interrupt’s priority level, the inter-
rupt controller needs to know at runtime what vector is associated with an IRQ line. arch/x86/gen_idt.py
additionally creates an _irq_to_interrupt_vector array which maps an IRQ line to its configured vector
in the IDT. This is used at runtime by IRQ_CONNECT to program the IRQ-to-vector association in the
interrupt controller.
For dynamic interrupts, the build must generate some 4-byte dynamic interrupt stubs, one stub per
dynamic interrupt in use. The number of stubs is controlled by the CONFIG_X86_DYNAMIC_IRQ_STUBS op-
tion. Each stub pushes a unique identifier which is then used to fetch the appropriate handler function
and parameter out of a table populated when the dynamic interrupt was connected.
Suggested Uses Use a regular or direct ISR to perform interrupt processing that requires a very rapid
response, and can be done quickly without blocking.
Note: Interrupt processing that is time consuming, or involves blocking, should be handed off to a
thread. See Offloading ISR Work for a description of various techniques that can be used in an application.
API Reference
group isr_apis
Defines
IRQ_CONNECT(irq_p, priority_p, isr_p, isr_param_p, flags_p)
Initialize an interrupt handler.
This routine initializes an interrupt handler for an IRQ. The IRQ must be subsequently
enabled before the interrupt handler begins servicing interrupts.
Warning: Although this routine is invoked at run-time, all of its arguments must be
computable by the compiler at build time.
Parameters
• irq_p – IRQ line number.
• priority_p – Interrupt priority.
• isr_p – Address of interrupt service routine.
• isr_param_p – Parameter passed to interrupt service routine.
• flags_p – Architecture-specific IRQ configuration flags.
IRQ_DIRECT_CONNECT(irq_p, priority_p, isr_p, flags_p)
Initialize a ‘direct’ interrupt handler.
Warning: Although this routine is invoked at run-time, all of its arguments must be
computable by the compiler at build time.
Parameters
• irq_p – IRQ line number.
• priority_p – Interrupt priority.
• isr_p – Address of interrupt service routine.
• flags_p – Architecture-specific IRQ configuration flags.
ISR_DIRECT_HEADER()
Common tasks before executing the body of an ISR.
This macro must be at the beginning of all direct interrupts and performs minimal
architecture-specific tasks before the ISR itself can run. It takes no arguments and has no
return value.
ISR_DIRECT_FOOTER(check_reschedule)
Common tasks before exiting the body of an ISR.
This macro must be at the end of all direct interrupts and performs minimal architecture-
specific tasks like EOI. It has no return value.
In a normal interrupt, a check is done at end of interrupt to invoke z_swap() logic if the
current thread is preemptible and there is another thread ready to run in the kernel’s ready
queue cache. This is now optional and controlled by the check_reschedule argument. If
unsure, set to nonzero. On systems that do stack switching and nested interrupt tracking in
software, z_swap() should only be called if this was a non-nested interrupt.
Parameters
• check_reschedule – If nonzero, additionally invoke scheduling logic
ISR_DIRECT_PM()
Perform power management idle exit logic.
This macro may optionally be invoked somewhere in between ISR_DIRECT_HEADER() and
ISR_DIRECT_FOOTER() invocations. It performs tasks necessary to exit power management
idle state. It takes no parameters and returns no value. It may be omitted, but be careful!
ISR_DIRECT_DECLARE(name)
Helper macro to declare a direct interrupt service routine.
This will declare the function in a proper way and automatically include the
ISR_DIRECT_FOOTER() and ISR_DIRECT_HEADER() macros. The function should re-
turn nonzero status if a scheduling decision should potentially be made. See
ISR_DIRECT_FOOTER() for more details on the scheduling decision.
For architectures that support ‘regular’ and ‘fast’ interrupt types, where these interrupt types
require different assembly language handling of registers by the ISR, this will always generate
code for the ‘fast’ interrupt type.
Example usage:
ISR_DIRECT_DECLARE(my_isr)
{
bool done = do_stuff();
ISR_DIRECT_PM(); // done after do_stuff() due to latency concerns
if (!done) {
return 0; // don't bother checking if we have to z_swap()
}
k_sem_give(some_sem);
return 1;
}
Parameters
• name – Name of the ISR function to declare.
irq_lock()
Lock interrupts.
This routine disables all interrupts on the CPU. It returns an unsigned integer “lock-out key”,
which is an architecture-dependent indicator of whether interrupts were locked prior to the
call. The lock-out key must be passed to irq_unlock() to re-enable interrupts.
This routine can be called recursively, as long as the caller keeps track of each lock-out key
that is generated. Interrupts are re-enabled by passing each of the keys to irq_unlock() in
the reverse order they were acquired. (That is, each call to irq_lock() must be balanced by a
corresponding call to irq_unlock().)
This routine can only be invoked from supervisor mode. Some architectures (for example,
ARM) will fail silently if invoked from user mode instead of generating an exception.
Note: This routine must also serve as a memory barrier to ensure the uniprocessor imple-
mentation of k_spinlock_t is correct.
Note: This routine can be called by ISRs or by threads. If it is called by a thread, the
interrupt lock is thread-specific; this means that interrupts remain disabled only while the
thread is running. If the thread performs an operation that allows another thread to run
(for example, giving a semaphore or sleeping for N milliseconds), the interrupt lock no longer
applies and interrupts may be re-enabled while other processing occurs. When the thread once
again becomes the current thread, the kernel re-establishes its interrupt lock; this ensures the
thread won’t be interrupted until it has explicitly released the interrupt lock it established.
Warning: The lock-out key should never be used to manually re-enable interrupts or to
inspect or manipulate the contents of the CPU’s interrupt bits.
Returns
An architecture-dependent lock-out key representing the “interrupt disable state”
prior to the call.
irq_unlock(key)
Unlock interrupts.
This routine reverses the effect of a previous call to irq_lock() using the associated lock-out
key. The caller must call the routine once for each time it called irq_lock(), supplying the keys
in the reverse order they were acquired, before interrupts are enabled.
This routine can only be invoked from supervisor mode. Some architectures (for example,
ARM) will fail silently if invoked from user mode instead of generating an exception.
Note: This routine must also serve as a memory barrier to ensure the uniprocessor imple-
mentation of k_spinlock_t is correct.
Parameters
• key – Lock-out key generated by irq_lock().
irq_enable(irq)
Enable an IRQ.
This routine enables interrupts from source irq.
Parameters
• irq – IRQ line.
irq_disable(irq)
Disable an IRQ.
This routine disables interrupts from source irq.
Parameters
• irq – IRQ line.
irq_is_enabled(irq)
Get IRQ enable state.
This routine indicates if interrupts from source irq are enabled.
Parameters
• irq – IRQ line.
Returns
interrupt enable state, true or false
Functions
static inline int irq_connect_dynamic(unsigned int irq, unsigned int priority, void
(*routine)(const void *parameter), const void *parameter,
uint32_t flags)
Configure a dynamic interrupt.
Use this instead of IRQ_CONNECT() if arguments cannot be known at build time.
Parameters
• irq – IRQ line number
• priority – Interrupt priority
• routine – Interrupt service routine
• parameter – ISR parameter
• flags – Arch-specific IRQ configuration flags
Returns
The vector assigned to this interrupt
static inline unsigned int irq_get_level(unsigned int irq)
Return IRQ level.
This routine returns the interrupt level number of the provided interrupt.
Parameters
• irq – IRQ number in its zephyr format
Returns
1 if IRQ level 1, 2 if IRQ level 2, 3 if IRQ level 3
bool k_is_in_isr(void)
Determine if code is running at interrupt level.
This routine allows the caller to customize its actions, depending on whether it is a thread or
an ISR.
Returns
false if invoked by a thread.
Returns
true if invoked by an ISR.
int k_is_preempt_thread(void)
Determine if code is running in a preemptible thread.
This routine allows the caller to customize its actions, depending on whether it can be pre-
empted by another thread. The routine returns a ‘true’ value if all of the following conditions
are met:
• The code is running in a thread, not at ISR.
• The thread’s priority is in the preemptible range.
• The thread has not locked the scheduler.
Returns
0 if invoked by an ISR or by a cooperative thread.
Returns
Non-zero if invoked by a preemptible thread.
static inline bool k_is_pre_kernel(void)
Test whether startup is in the before-main phase.
This routine allows the caller to customize its actions, depending on whether it is being invoked
before the kernel is fully active.
Returns
true if invoked before post-kernel initialization
Returns
false if invoked during/after post-kernel initialization
Polling API
The polling API is used to wait concurrently for any one of multiple conditions to be fulfilled.
• Concepts
• Implementation
– Using k_poll()
– Using k_poll_signal_raise()
• Suggested Uses
• Configuration Options
• API Reference
Concepts The polling API’s main function is k_poll() , which is very similar in concept to the POSIX
poll() function, except that it operates on kernel objects rather than on file descriptors.
The polling API allows a single thread to wait concurrently for one or more conditions to be fulfilled
without actively looking at each one individually.
There is a limited set of such conditions:
• a semaphore becomes available
• a kernel FIFO contains data ready to be retrieved
• a poll signal is raised
A thread that wants to wait on multiple conditions must define an array of poll events, one for each
condition.
All events in the array must be initialized before the array can be polled on.
Each event must specify which type of condition must be satisfied so that its state is changed to signal
the requested condition has been met.
Each event must specify the kernel object on which it wants the condition to be satisfied.
Each event must specify which mode of operation is used when the condition is satisfied.
Each event can optionally specify a tag to group multiple events together, to the user’s discretion.
Apart from the kernel objects, there is also a poll signal pseudo-object type that can be directly signaled.
The k_poll() function returns as soon as one of the conditions it is waiting for is fulfilled. It is possible
for more than one to be fulfilled when k_poll() returns, if they were fulfilled before k_poll() was
called, or due to the preemptive multi-threading nature of the kernel. The caller must look at the state
of all the poll events in the array to figure out which ones were fulfilled and what actions to take.
Currently, there is only one mode of operation available: the object is not acquired. As an example, this
means that when k_poll() returns and the poll event states that the semaphore is available, the caller
of k_poll() must then invoke k_sem_take() to take ownership of the semaphore. If the semaphore is
contested, there is no guarantee that it will be still available when k_sem_take() is called.
Implementation
Using k_poll() The main API is k_poll() , which operates on an array of poll events of type
k_poll_event . Each entry in the array represents one event whose condition a call to k_poll() will
wait for.
Poll events can be initialized using either the runtime initializers K_POLL_EVENT_INITIALIZER()
or k_poll_event_init() , or the static initializer K_POLL_EVENT_STATIC_INITIALIZER() . An ob-
ject that matches the type specified must be passed to the initializers. The mode must be set to
K_POLL_MODE_NOTIFY_ONLY . The state must be set to K_POLL_STATE_NOT_READY (the initializers take
care of this). The user tag is optional and completely opaque to the API: it is there to help a user
to group similar events together. Being optional, it is passed to the static initializer, but not the run-
time ones for performance reasons. If using runtime initializers, the user must set it separately in the
k_poll_event data structure. If an event in the array is to be ignored, most likely temporarily, its type
can be set to K_POLL_TYPE_IGNORE.
For example:
struct k_poll_event events[2] = {
    K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
                                    K_POLL_MODE_NOTIFY_ONLY,
                                    &my_sem, 0),
    K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                                    K_POLL_MODE_NOTIFY_ONLY,
                                    &my_fifo, 0),
};
or at runtime
k_poll_event_init(&events[1],
K_POLL_TYPE_FIFO_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&my_fifo);
After the events are initialized, the array can be passed to k_poll() . A timeout can be specified to wait
only for a specified amount of time, or the special values K_NO_WAIT and K_FOREVER to either not wait
or wait until an event condition is satisfied and not sooner.
Each semaphore or FIFO maintains a list of pollers, and as many events as the application wants can
wait on it. Note that waiters are served in first-come-first-served order, not in priority order.
In case of success, k_poll() returns 0. If it times out, it returns -EAGAIN .
void do_stuff(void)
{
    rc = k_poll(events, 2, K_MSEC(1000));
    if (rc == 0) {
        if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
            k_sem_take(events[0].sem, K_NO_WAIT);
        } else if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
            data = k_fifo_get(events[1].fifo, K_NO_WAIT);
            // handle data
        }
    } else {
        // handle timeout
    }
}
When k_poll() is called in a loop, the events state must be reset to K_POLL_STATE_NOT_READY by the
user.
void do_stuff(void)
{
    for (;;) {
        rc = k_poll(events, 2, K_FOREVER);
        if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
            k_sem_take(events[0].sem, K_NO_WAIT);
        } else if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
            data = k_fifo_get(events[1].fifo, K_NO_WAIT);
            // handle data
        }
        events[0].state = K_POLL_STATE_NOT_READY;
        events[1].state = K_POLL_STATE_NOT_READY;
    }
}
Using k_poll_signal_raise() One of the types of events is K_POLL_TYPE_SIGNAL : this is a “direct” signal
to a poll event. This can be seen as a lightweight binary semaphore that only one thread can wait for.
A poll signal is a separate object of type k_poll_signal that must be attached to a k_poll_event, sim-
ilar to a semaphore or FIFO. It must first be initialized either via K_POLL_SIGNAL_INITIALIZER() or
k_poll_signal_init() .
It is signaled via the k_poll_signal_raise() function. This function takes a user result parameter that
is opaque to the API and can be used to pass extra information to the thread waiting on the event.
// thread A
void do_stuff(void)
{
    k_poll_signal_init(&signal);

    struct k_poll_event events[1] = {
        K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
                                 K_POLL_MODE_NOTIFY_ONLY,
                                 &signal),
    };

    k_poll(events, 1, K_FOREVER);

    if (events[0].signal->result == 0x1337) {
        // A-OK!
    } else {
        // weird error
    }
}
// thread B
void signal_do_stuff(void)
{
    k_poll_signal_raise(&signal, 0x1337);
}
If the signal is to be polled in a loop, both its event state and its signaled field must be reset on each
iteration if it has been signaled.
for (;;) {
    k_poll(events, 1, K_FOREVER);

    if (events[0].signal->result == 0x1337) {
        // A-OK!
    } else {
        // weird error
    }

    events[0].signal->signaled = 0;
    events[0].state = K_POLL_STATE_NOT_READY;
}
Note that poll signals are not internally synchronized. A k_poll() call that is passed a signal will return
after any code in the system calls k_poll_signal_raise() . But if the signal is being externally managed
and reset via k_poll_signal_init() , it is possible that by the time the application checks, the event
state may no longer be equal to K_POLL_STATE_SIGNALED , and a (naive) application will miss events.
Best practice is always to reset the signal only from within the thread invoking the k_poll() loop, or
else to use some other event type which tracks event counts: semaphores and FIFOs are more error-proof
in this sense because they can’t “miss” events, architecturally.
Suggested Uses Use k_poll() to consolidate multiple threads that would be pending on one object
each, saving possibly large amounts of stack space.
Use a poll signal as a lightweight binary semaphore if only one thread pends on it.
Note: Because objects are only signaled if no other thread is waiting for them to become available
and only one thread can poll on a specific object, polling is best used when objects are not subject
of contention between multiple threads, basically when a single thread operates as a main “server” or
“dispatcher” for multiple objects and is the only one trying to acquire these objects.
API Reference
group poll_apis
Defines
K_POLL_TYPE_IGNORE
K_POLL_TYPE_SIGNAL
K_POLL_TYPE_SEM_AVAILABLE
K_POLL_TYPE_DATA_AVAILABLE
K_POLL_TYPE_FIFO_DATA_AVAILABLE
K_POLL_TYPE_MSGQ_DATA_AVAILABLE
K_POLL_TYPE_PIPE_DATA_AVAILABLE
K_POLL_STATE_NOT_READY
K_POLL_STATE_SIGNALED
K_POLL_STATE_SEM_AVAILABLE
K_POLL_STATE_DATA_AVAILABLE
K_POLL_STATE_FIFO_DATA_AVAILABLE
K_POLL_STATE_MSGQ_DATA_AVAILABLE
K_POLL_STATE_PIPE_DATA_AVAILABLE
K_POLL_STATE_CANCELLED
K_POLL_SIGNAL_INITIALIZER(obj)
Enums
enum k_poll_modes
Values:
enumerator K_POLL_MODE_NOTIFY_ONLY = 0
enumerator K_POLL_NUM_MODES
Functions
void k_poll_event_init(struct k_poll_event *event, uint32_t type, int mode, void *obj)
Initialize one struct k_poll_event instance.
After this routine is called on a poll event, the event is ready to be placed in an event array to
be passed to k_poll().
Parameters
• event – The event to initialize.
• type – A bitfield of the types of event, from the K_POLL_TYPE_xxx values.
Only values that apply to the same object being polled can be used together.
Choosing K_POLL_TYPE_IGNORE disables the event.
• mode – Future. Use K_POLL_MODE_NOTIFY_ONLY.
• obj – Kernel object or poll signal.
int k_poll(struct k_poll_event *events, int num_events, k_timeout_t timeout)
Wait for one or many of multiple poll events to occur.
This routine allows a thread to wait concurrently for one or many of multiple poll events to
have occurred. Such events can be a kernel object being available, like a semaphore, or a poll
signal event.
When an event notifies that a kernel object is available, the kernel object is not “given” to
the thread calling k_poll(): it merely signals the fact that the object was available when the
k_poll() call was in effect. Also, all threads trying to acquire an object the regular way, i.e.
by pending on the object, have precedence over the thread polling on the object. This means
that the polling thread will never get the poll event on an object until the object becomes
available and its pend queue is empty. For this reason, the k_poll() call is more effective when
the objects being polled only have one thread, the polling thread, trying to acquire them.
When k_poll() returns 0, the caller should loop on all the events that were passed to k_poll()
and check the state field for the values that were expected and take the associated actions.
Before being reused for another call to k_poll(), the user has to reset the state field to
K_POLL_STATE_NOT_READY.
When called from user mode, a temporary memory allocation is required from the caller’s
resource pool.
Parameters
• events – An array of events to be polled for.
• num_events – The number of events in the array.
• timeout – Waiting period for an event to be ready, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – One or more events are ready.
• -EAGAIN – Waiting period timed out.
int k_poll_signal_raise(struct k_poll_signal *sig, int result)
Signal a poll signal object.
This routine signals a poll signal.
Note: The result is stored and the ‘signaled’ field is set even if this function returns an error
indicating that an expiring poll was not notified. The next k_poll() will detect the missed
raise.
Parameters
• sig – A poll signal.
• result – The value to store in the result field of the signal.
Return values
• 0 – The signal was delivered successfully.
• -EAGAIN – The polling thread’s timeout is in the process of expiring.
struct k_poll_signal
#include <kernel.h>
Public Members
sys_dlist_t poll_events
PRIVATE - DO NOT TOUCH
unsigned int signaled
1 if the signal was signaled, 0 otherwise; stays set until reset by the user
int result
custom result value passed to k_poll_signal_raise() if needed
struct k_poll_event
#include <kernel.h> Poll Event.
Public Members
uint32_t tag
optional user-specified tag, opaque, untouched by the API
uint32_t type
bitfield of event types (bitwise-ORed K_POLL_TYPE_xxx values)
uint32_t state
bitfield of event states (bitwise-ORed K_POLL_STATE_xxx values)
uint32_t mode
mode of operation, from enum k_poll_modes
uint32_t unused
unused bits in 32-bit word
Semaphores
• Concepts
• Implementation
– Defining a Semaphore
– Giving a Semaphore
– Taking a Semaphore
• Suggested Uses
• Configuration Options
• API Reference
• User Mode Semaphore API Reference
Concepts Any number of semaphores can be defined (limited only by available RAM). Each semaphore
is referenced by its memory address.
A semaphore has the following key properties:
• A count that indicates the number of times the semaphore can be taken. A count of zero indicates
that the semaphore is unavailable.
• A limit that indicates the maximum value the semaphore’s count can reach.
A semaphore must be initialized before it can be used. Its count must be set to a non-negative value that
is less than or equal to its limit.
A semaphore may be given by a thread or an ISR. Giving the semaphore increments its count, unless the
count is already equal to the limit.
A semaphore may be taken by a thread. Taking the semaphore decrements its count, unless the
semaphore is unavailable (i.e. at zero). When a semaphore is unavailable a thread may choose to
wait for it to be given. Any number of threads may wait on an unavailable semaphore simultaneously.
When the semaphore is given, it is taken by the highest priority thread that has waited longest.
Note: You may initialize a “full” semaphore (count equal to limit) to limit the number of threads able to
execute the critical section at the same time. You may also initialize an empty semaphore (count equal
to 0, with a limit greater than 0) to create a gate through which no waiting thread may pass until the
semaphore is incremented. All standard use cases of the common semaphore are supported.
Note: The kernel does allow an ISR to take a semaphore, however the ISR must not attempt to wait if
the semaphore is unavailable.
Implementation
Defining a Semaphore A semaphore is defined using a variable of type k_sem. It must then be initial-
ized by calling k_sem_init() .
The following code defines a semaphore, then configures it as a binary semaphore by setting its count to
0 and its limit to 1.
struct k_sem my_sem;

k_sem_init(&my_sem, 0, 1);
Alternatively, a semaphore can be defined and initialized at compile time by calling K_SEM_DEFINE .
The following code has the same effect as the code segment above.
K_SEM_DEFINE(my_sem, 0, 1);
...
}
void consumer_thread(void)
{
...
if (k_sem_take(&my_sem, K_MSEC(50)) != 0) {
printk("Input data not available!");
} else {
/* fetch available data */
...
}
...
}
Suggested Uses Use a semaphore to control access to a set of resources by multiple threads.
Use a semaphore to synchronize processing between producing and consuming threads or ISRs.
API Reference
group semaphore_apis
Defines
K_SEM_MAX_LIMIT
Maximum limit value allowed for a semaphore.
This is intended for use when a semaphore does not have an explicit maximum limit, and
instead is just used for counting purposes.
K_SEM_DEFINE(name, initial_count, count_limit)
Statically define and initialize a semaphore.
The semaphore can be accessed outside the module where it is defined using:
Parameters
• name – Name of the semaphore.
• initial_count – Initial semaphore count.
• count_limit – Maximum permitted semaphore count.
Functions
int k_sem_init(struct k_sem *sem, unsigned int initial_count, unsigned int limit)
Initialize a semaphore.
This routine initializes a semaphore object, prior to its first use.
See also:
K_SEM_MAX_LIMIT
Parameters
• sem – Address of the semaphore.
• initial_count – Initial semaphore count.
• limit – Maximum permitted semaphore count.
Return values
• 0 – Semaphore created successfully
• -EINVAL – Invalid values
Parameters
• sem – Address of the semaphore.
• timeout – Waiting period to take the semaphore, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Semaphore taken.
• -EBUSY – Returned without waiting.
• -EAGAIN – Waiting period timed out, or the semaphore was reset during the
waiting period.
Parameters
• sem – Address of the semaphore.
User Mode Semaphore API Reference The sys_sem type exists in user memory and works as a counting
semaphore for user mode threads when user mode is enabled. When user mode isn’t enabled, sys_sem
behaves like k_sem.
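A brief usage sketch of the sys_sem functions documented below (my_sem and the producer/consumer split are illustrative):

```c
struct sys_sem my_sem;

sys_sem_init(&my_sem, 0, 1);

/* producer side: signal that data is available */
sys_sem_give(&my_sem);

/* consumer side: wait up to 50 ms for the signal */
if (sys_sem_take(&my_sem, K_MSEC(50)) != 0) {
    /* timed out or access error */
}
```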
group user_semaphore_apis
Defines
Functions
int sys_sem_init(struct sys_sem *sem, unsigned int initial_count, unsigned int limit)
Initialize a semaphore.
This routine initializes a semaphore instance, prior to its first use.
Parameters
• sem – Address of the semaphore.
• initial_count – Initial semaphore count.
• limit – Maximum permitted semaphore count.
Return values
• 0 – Initialized successfully.
• -EINVAL – Bad parameters; the value of limit must lie in (0, INT_MAX] and
initial_count must not be greater than limit.
int sys_sem_give(struct sys_sem *sem)
Give a semaphore.
This routine gives sem, unless the semaphore is already at its maximum permitted count.
Parameters
• sem – Address of the semaphore.
Return values
• 0 – Semaphore given.
• -EINVAL – Parameter address not recognized.
• -EACCES – Caller does not have enough access.
• -EAGAIN – The count has already reached the maximum permitted count; try again.
int sys_sem_take(struct sys_sem *sem, k_timeout_t timeout)
Take a sys_sem.
This routine takes sem.
Parameters
• sem – Address of the sys_sem.
• timeout – Waiting period to take the sys_sem, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – sys_sem taken.
• -EINVAL – Parameter address not recognized.
• -ETIMEDOUT – Waiting period timed out.
• -EACCES – Caller does not have enough access.
unsigned int sys_sem_count_get(struct sys_sem *sem)
Get sys_sem’s value.
This routine returns the current value of sem.
Parameters
• sem – Address of the sys_sem.
Returns
Current value of sys_sem.
Mutexes
A mutex is a kernel object that implements a traditional reentrant mutex. A mutex allows multiple threads
to safely share an associated hardware or software resource by ensuring mutually exclusive access to the
resource.
• Concepts
– Reentrant Locking
– Priority Inheritance
• Implementation
– Defining a Mutex
– Locking a Mutex
– Unlocking a Mutex
• Suggested Uses
• Configuration Options
• API Reference
• Futex API Reference
• User Mode Mutex API Reference
Concepts Any number of mutexes can be defined (limited only by available RAM). Each mutex is
referenced by its memory address.
A mutex has the following key properties:
• A lock count that indicates the number of times the mutex has been locked by the thread that has
locked it. A count of zero indicates that the mutex is unlocked.
• An owning thread that identifies the thread that has locked the mutex, when it is locked.
A mutex must be initialized before it can be used. This sets its lock count to zero.
A thread that needs to use a shared resource must first gain exclusive rights to access it by locking the
associated mutex. If the mutex is already locked by another thread, the requesting thread may choose to
wait for the mutex to be unlocked.
After locking a mutex, the thread may safely use the associated resource for as long as needed; however,
it is considered good practice to hold the lock for as short a time as possible to avoid negatively impacting
other threads that want to use the resource. When the thread no longer needs the resource it must unlock
the mutex to allow other threads to use the resource.
Any number of threads may wait on a locked mutex simultaneously. When the mutex becomes unlocked
it is then locked by the highest-priority thread that has waited the longest.
Reentrant Locking A thread is permitted to lock a mutex it has already locked. This allows the thread
to access the associated resource at a point in its execution when the mutex may or may not already be
locked.
A mutex that is repeatedly locked by a thread must be unlocked an equal number of times before the
mutex becomes fully unlocked so it can be claimed by another thread.
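For example, assuming my_mutex is an initialized k_mutex (the name is illustrative), the lock count rises and falls with each nested lock/unlock pair:

```c
void nested_access(void)
{
    k_mutex_lock(&my_mutex, K_FOREVER);    /* lock count: 1 */
    k_mutex_lock(&my_mutex, K_FOREVER);    /* same owner, lock count: 2 */

    /* ... safely use the shared resource ... */

    k_mutex_unlock(&my_mutex);             /* lock count: 1, still owned */
    k_mutex_unlock(&my_mutex);             /* lock count: 0, fully unlocked */
}
```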
Priority Inheritance The thread that has locked a mutex is eligible for priority inheritance. This means
the kernel will temporarily elevate the thread’s priority if a higher priority thread begins waiting on the
mutex. This allows the owning thread to complete its work and release the mutex more rapidly by
executing at the same priority as the waiting thread. Once the mutex has been unlocked, the unlocking
thread resets its priority to the level it had before locking that mutex.
Note: The CONFIG_PRIORITY_CEILING configuration option limits how high the kernel can raise a
thread’s priority due to priority inheritance. The default value of 0 permits unlimited elevation.
The owning thread’s base priority is saved in the mutex when it obtains the lock. Each time a higher
priority thread waits on a mutex, the kernel adjusts the owning thread’s priority. When the owning
thread releases the lock (or if the high priority waiting thread times out), the kernel restores the thread’s
base priority from the value saved in the mutex.
This works well for priority inheritance as long as only one locked mutex is involved. However, if
multiple mutexes are involved, sub-optimal behavior will be observed if the mutexes are not unlocked
in the reverse order to which the owning thread’s priority was previously raised. Consequently it is
recommended that a thread lock only a single mutex at a time when multiple mutexes are shared between
threads of different priorities.
Implementation
Defining a Mutex A mutex is defined using a variable of type k_mutex . It must then be initialized by
calling k_mutex_init() .
The following code defines and initializes a mutex.
struct k_mutex my_mutex;

k_mutex_init(&my_mutex);
Alternatively, a mutex can be defined and initialized at compile time by calling K_MUTEX_DEFINE .
The following code has the same effect as the code segment above.
K_MUTEX_DEFINE(my_mutex);
k_mutex_lock(&my_mutex, K_FOREVER);
The following code waits up to 100 milliseconds for the mutex to become available, and gives a warning
if the mutex does not become available.
if (k_mutex_lock(&my_mutex, K_MSEC(100)) == 0) {
/* mutex successfully locked */
} else {
printf("Cannot lock XYZ display\n");
}
k_mutex_unlock(&my_mutex);
Suggested Uses Use a mutex to provide exclusive access to a resource, such as a physical device.
API Reference
group mutex_apis
Defines
K_MUTEX_DEFINE(name)
Statically define and initialize a mutex.
The mutex can be accessed outside the module where it is defined using:
Parameters
• name – Name of the mutex.
Functions
• timeout – Waiting period to lock the mutex, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Mutex locked.
• -EBUSY – Returned without waiting.
• -EAGAIN – Waiting period timed out.
int k_mutex_unlock(struct k_mutex *mutex)
Unlock a mutex.
This routine unlocks mutex. The mutex must already be locked by the calling thread.
The mutex cannot be claimed by another thread until it has been unlocked by the calling
thread as many times as it was previously locked by that thread.
Mutexes may not be unlocked in ISRs, as mutexes must only be manipulated in thread context
due to ownership and priority inheritance semantics.
Parameters
• mutex – Address of the mutex.
Return values
• 0 – Mutex unlocked.
• -EPERM – The current thread does not own the mutex
• -EINVAL – The mutex is not locked
struct k_mutex
#include <kernel.h> Mutex Structure
Public Members
_wait_q_t wait_q
Mutex wait queue
uint32_t lock_count
Current lock count
int owner_orig_prio
Original thread priority
Futex API Reference k_futex is a lightweight mutual exclusion primitive designed to minimize kernel
involvement. Uncontended operation relies only on atomic access to shared memory. k_futex structures
are tracked as kernel objects and can live in user memory, so any access bypasses the kernel object
permission management mechanism.
group futex_apis
Functions
User Mode Mutex API Reference sys_mutex behaves almost exactly like k_mutex, with the added ad-
vantage that a sys_mutex instance can reside in user memory. When user mode isn’t enabled, sys_mutex
behaves like k_mutex.
group user_mutex_apis
Defines
SYS_MUTEX_DEFINE(name)
Statically define and initialize a sys_mutex.
The mutex can be accessed outside the module where it is defined using:
Functions
• -EINVAL – Provided mutex not recognized by the kernel or mutex wasn’t locked
• -EPERM – Caller does not own the mutex
Condition Variables
A condition variable is a synchronization primitive that enables threads to wait until a particular condition
occurs.
• Concepts
• Implementation
– Defining a Condition Variable
– Waiting on a Condition Variable
– Signaling a Condition Variable
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of condition variables can be defined (limited only by available RAM). Each
condition variable is referenced by its memory address.
To wait for a condition to become true, a thread can make use of a condition variable.
A condition variable is basically a queue of threads onto which a thread can put itself when some
state of execution (i.e., some condition) is not as desired (by waiting on the condition). The function
k_condvar_wait() atomically performs the following steps:
1. Releases the last acquired mutex.
2. Puts the current thread in the condition variable’s queue.
Some other thread, when it changes said state, can then wake one (or more) of those waiting threads,
allowing them to continue, by signaling on the condition using k_condvar_signal() or
k_condvar_broadcast() . The woken thread then:
1. Re-acquires the mutex previously released.
2. Returns from k_condvar_wait() .
A condition variable must be initialized before it can be used.
Implementation
Defining a Condition Variable A condition variable is defined using a variable of type k_condvar. It
must then be initialized by calling k_condvar_init() .
The following code defines a condition variable:
struct k_condvar my_condvar;

k_condvar_init(&my_condvar);
Alternatively, a condition variable can be defined and initialized at compile time by calling
K_CONDVAR_DEFINE .
The following code has the same effect as the code segment above.
K_CONDVAR_DEFINE(my_condvar);
K_MUTEX_DEFINE(mutex);
K_CONDVAR_DEFINE(condvar);

int main(void)
{
    k_mutex_lock(&mutex, K_FOREVER);

    /* atomically releases the mutex until signaled, then re-acquires it */
    k_condvar_wait(&condvar, &mutex, K_FOREVER);

    k_mutex_unlock(&mutex);
}
void worker_thread(void)
{
k_mutex_lock(&mutex, K_FOREVER);
/*
* Do some work and fulfill the condition
*/
...
...
k_condvar_signal(&condvar);
k_mutex_unlock(&mutex);
}
Suggested Uses Use condition variables with a mutex to signal changing states (conditions) from one
thread to another thread. Condition variables are not the condition itself and they are not events. The
condition is contained in the surrounding programming logic.
Mutexes alone are not designed for use as a notification/synchronization mechanism. They are meant to
provide mutually exclusive access to a shared resource only.
API Reference
group condvar_apis
Defines
K_CONDVAR_DEFINE(name)
Statically define and initialize a condition variable.
The condition variable can be accessed outside the module where it is defined using:
Parameters
• name – Name of the condition variable.
Functions
Events
• Concepts
• Implementation
– Defining an Event Object
– Setting Events
– Posting Events
– Waiting for Events
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of event objects can be defined (limited only by available RAM). Each event
object is referenced by its memory address. One or more threads may wait on an event object until the
desired set of events has been delivered to the event object. When new events are delivered to the event
object, all threads whose wait conditions have been satisfied become ready simultaneously.
An event object has the following key properties:
• A 32-bit value that tracks which events have been delivered to it.
An event object must be initialized before it can be used.
Events may be delivered by a thread or an ISR. When delivering events, the events may either overwrite
the existing set of events or add to them in a bitwise fashion. When overwriting the existing set of events,
this is referred to as setting. When adding to them in a bitwise fashion, this is referred to as posting.
Both posting and setting events have the potential to fulfill match conditions of multiple threads waiting
on the event object. All threads whose match conditions have been met are made active at the same
time.
Threads may wait on one or more events. They may either wait for all of the requested events, or for
any of them. Furthermore, threads making a wait request have the option of resetting the current set of
events tracked by the event object prior to waiting. Care must be taken with this option when multiple
threads wait on the same event object.
Note: The kernel does allow an ISR to query an event object, however the ISR must not attempt to wait
for the events.
Implementation
Defining an Event Object An event object is defined using a variable of type k_event . It must then be
initialized by calling k_event_init() .
The following code defines an event object.
struct k_event my_event;

k_event_init(&my_event);
Alternatively, an event object can be defined and initialized at compile time by calling K_EVENT_DEFINE .
The following code has the same effect as the code segment above.
K_EVENT_DEFINE(my_event);
k_event_set(&my_event, 0x001);
...
}
k_event_post(&my_event, 0x120);
...
}
Alternatively, the consumer thread may desire to wait for all the events before continuing.
void consumer_thread(void)
{
    uint32_t events;

    events = k_event_wait_all(&my_event, 0x121, false, K_MSEC(50));
    ...
}
Suggested Uses Use events to indicate that a set of conditions have occurred.
Use events to pass small amounts of data to multiple threads at once.
API Reference
group event_apis
Defines
K_EVENT_DEFINE(name)
Statically define and initialize an event object.
The event can be accessed outside the module where it is defined using:
Parameters
• name – Name of the event object.
Functions
Note: The caller must be careful when resetting if there are multiple threads waiting for the
event object event.
Parameters
• event – Address of the event object
• events – Set of desired events on which to wait
• reset – If true, clear the set of events tracked by the event object before wait-
ing. If false, do not clear the events.
• timeout – Waiting period for the desired set of events or one of the special
values K_NO_WAIT and K_FOREVER.
Return values
• set – of matching events upon success
• 0 – if matching events were not received within the specified time
Note: The caller must be careful when resetting if there are multiple threads waiting for the
event object event.
Parameters
• event – Address of the event object
• events – Set of desired events on which to wait
• reset – If true, clear the set of events tracked by the event object before wait-
ing. If false, do not clear the events.
• timeout – Waiting period for the desired set of events or one of the special
values K_NO_WAIT and K_FOREVER.
Return values
• set – of matching events upon success
• 0 – if matching events were not received within the specified time
struct k_event
#include <kernel.h> Event Structure
Symmetric Multiprocessing
On multiprocessor architectures, Zephyr supports the use of multiple physical CPUs running Zephyr
application code. This support is “symmetric” in the sense that no specific CPU is treated specially by
default. Any processor is capable of running any Zephyr thread, with access to all standard Zephyr APIs
supported.
No special application code needs to be written to take advantage of this feature. If there are two Zephyr
application threads runnable on a supported dual processor device, they will both run simultaneously.
SMP configuration is controlled under the CONFIG_SMP kconfig variable. This must be set to “y” to enable
SMP features, otherwise a uniprocessor kernel will be built. In general the platform default will have
enabled this anywhere it’s supported. When enabled, the number of physical CPUs available is visible at
build time as CONFIG_MP_NUM_CPUS. Likewise, the default for this will be the number of available CPUs
on the platform and it is not expected that typical apps will change it. But it is legal and supported to set
this to a smaller (but obviously not larger) number for special purposes (e.g. for testing, or to reserve a
physical CPU for running non-Zephyr code).
Synchronization At the application level, core Zephyr IPC and synchronization primitives all behave
identically under an SMP kernel. For example semaphores used to implement blocking mutual exclusion
continue to be a proper application choice.
At the lowest level, however, Zephyr code has often used the irq_lock() /irq_unlock() primitives to
implement fine grained critical sections using interrupt masking. These APIs continue to work via an
emulation layer (see below), but the masking technique does not: the fact that your CPU will not be
interrupted while you are in your critical section says nothing about whether a different CPU will be
running simultaneously and be inspecting or modifying the same data!
Spinlocks SMP systems provide a more constrained k_spin_lock() primitive that not only masks
interrupts locally, as done by irq_lock() , but also atomically validates that a shared lock variable has
been modified before returning to the caller, “spinning” on the check if needed to wait for the other CPU
to exit the lock. The default Zephyr implementation of k_spin_lock() and k_spin_unlock() is built
on top of the pre-existing atomic_ layer (itself usually implemented using compiler intrinsics), though
facilities exist for architectures to define their own for performance reasons.
One important difference between IRQ locks and spinlocks is that the earlier API was naturally recursive:
the lock was global, so it was legal to acquire a nested lock inside of a critical section. Spinlocks are
separable: you can have many locks for separate subsystems or data structures, preventing CPUs from
contending on a single global resource. But that means that spinlocks must not be used recursively. Code
that holds a specific lock must not try to re-acquire it or it will deadlock (it is perfectly legal to nest
distinct spinlocks, however). A validation layer is available to detect and report bugs like this.
When used on a uniprocessor system, the data component of the spinlock (the atomic lock variable)
is unnecessary and elided. Except for the recursive semantics above, spinlocks in single-CPU contexts
produce identical code to legacy IRQ locks. In fact the entirety of the Zephyr core kernel has now been
ported to use spinlocks exclusively.
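A typical usage sketch (illustrative names; note that the key value returned by k_spin_lock() must be passed back to the matching k_spin_unlock()):

```c
static struct k_spinlock my_lock;
static int shared_counter;

void increment_counter(void)
{
    k_spinlock_key_t key = k_spin_lock(&my_lock);

    /* critical section: interrupts masked locally, other CPUs excluded */
    shared_counter++;

    k_spin_unlock(&my_lock, key);
}
```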
Legacy irq_lock() emulation For the benefit of applications written to the uniprocessor locking API,
irq_lock() and irq_unlock() continue to work compatibly on SMP systems with identical semantics
to their legacy versions. They are implemented as a single global spinlock, with a nesting count and the
ability to be atomically reacquired on context switch into locked threads. The kernel will ensure that
only one thread across all CPUs can hold the lock at any time, that it is released on context switch, and
that it is re-acquired when necessary to restore the lock state when a thread is switched in. Other CPUs
will spin waiting for the release to happen.
The overhead involved in this process has measurable performance impact, however. Unlike uniprocessor
apps, SMP apps using irq_lock() are not simply invoking a very short (often ~1 instruction) interrupt
masking operation. That, and the fact that the IRQ lock is global, means that code expecting to be run
in an SMP context should be using the spinlock API wherever possible.
CPU Mask It is often desirable for real time applications to deliberately partition work across physical
CPUs instead of relying solely on the kernel scheduler to decide on which threads to execute. Zephyr
provides an API, controlled by the CONFIG_SCHED_CPU_MASK kconfig variable, which can associate a
specific set of CPUs with each thread, indicating on which CPUs it can run.
By default, new threads can run on any CPU. Calling k_thread_cpu_mask_disable() with a par-
ticular CPU ID will prevent that thread from running on that CPU in the future. Likewise
k_thread_cpu_mask_enable() will re-enable execution. There are also k_thread_cpu_mask_clear()
and k_thread_cpu_mask_enable_all() APIs available for convenience. For obvious reasons, these
APIs are illegal if called on a runnable thread. The thread must be blocked or suspended, otherwise
-EINVAL will be returned.
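A sketch of pinning a thread to CPU 0 (my_thread is an illustrative name, assumed to have been created with a K_FOREVER start delay so that it is not yet runnable when the mask is changed):

```c
/* clear the mask, then allow execution only on CPU 0 */
k_thread_cpu_mask_clear(&my_thread);
k_thread_cpu_mask_enable(&my_thread, 0);

/* the thread may now be started; it will run only on CPU 0 */
k_thread_start(&my_thread);
```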
Note that when this feature is enabled, the scheduler algorithm involved in doing the per-CPU mask test
requires that the list be traversed in full. The kernel does not keep a per-CPU run queue. That means
that the performance benefits from the CONFIG_SCHED_SCALABLE and CONFIG_SCHED_MULTIQ scheduler
backends cannot be realized. CPU mask processing is available only when CONFIG_SCHED_DUMB is the
selected backend. This requirement is enforced in the configuration layer.
SMP Boot Process A Zephyr SMP kernel begins boot identically to a uniprocessor kernel. Auxiliary
CPUs begin in a disabled state in the architecture layer. All standard kernel initialization, including
device initialization, happens on a single CPU before other CPUs are brought online.
Just before entering the application main() function, the kernel calls z_smp_init() to launch the SMP
initialization process. This enumerates over the configured CPUs, calling into the architecture layer
using arch_start_cpu() for each one. This function is passed a memory region to use as a stack on the
foreign CPU (in practice it uses the area that will become that CPU’s interrupt stack), the address of a
local smp_init_top() callback function to run on that CPU, and a pointer to a “start flag” address which
will be used as an atomic signal.
The local SMP initialization (smp_init_top()) on each CPU is then invoked by the architecture
layer. Note that interrupts are still masked at this point. This routine is responsible for calling
smp_timer_init() to set up any needed state in the timer driver. On many architectures the timer is
a per-CPU device and needs to be configured specially on auxiliary CPUs. Then it waits (spinning) for
the atomic “start flag” to be released in the main thread, to guarantee that all SMP initialization is
complete before any Zephyr application code runs, and finally calls z_swap() to transfer control to the
appropriate runnable thread via the standard scheduler API.
Fig. 1: Example SMP initialization process, showing a configuration with two CPUs and two app threads
which begin operating simultaneously.
SMP Kernel Internals In general, Zephyr kernel code is SMP-agnostic and, like application code, will
work correctly regardless of the number of CPUs available. But in a few areas there are notable changes
in structure or behavior.
Per-CPU data Many elements of the core kernel data need to be implemented for each CPU in SMP
mode. For example, the _current thread pointer obviously needs to reflect what is running locally, as
there are many threads running concurrently. Likewise a kernel-provided interrupt stack needs to be created
and assigned for each physical CPU, as does the interrupt nesting count used to detect ISR state.
These fields are now moved into a separate struct _cpu instance within the _kernel struct, which has
a cpus[] array indexed by ID. Compatibility fields are provided for legacy uniprocessor code trying to
access the fields of cpus[0] using the older syntax and assembly offsets.
Note that an important requirement on the architecture layer is that the pointer to this CPU struct be
available rapidly when in kernel context. The expectation is that arch_curr_cpu() will be implemented
using a CPU-provided register or addressing mode that can store this value across arbitrary context
switches or interrupts and make it available to any kernel-mode code.
Similarly, where on a uniprocessor system Zephyr could simply create a global “idle thread” at the lowest
priority, in SMP we may need one for each CPU. This makes the internal predicate test for “_is_idle()” in
the scheduler, which is a hot path performance environment, more complicated than simply testing the
thread pointer for equality with a known static variable. In SMP mode, idle threads are distinguished by
a separate field in the thread struct.
Switch-based context switching The traditional Zephyr context switch primitive has been z_swap().
Unfortunately, this function takes no argument specifying a thread to switch to. The expectation has
always been that the scheduler has already made its preemption decision when its state was last modified
and cached the resulting “next thread” pointer in a location where architecture context switch primitives
can find it via a simple struct offset. That technique will not work in SMP, because the other CPU may
have modified scheduler state since the current CPU last exited the scheduler (for example: it might
already be running that cached thread!).
Instead, the SMP “switch to” decision needs to be made synchronously with the swap call, and as we don’t
want per-architecture assembly code to be handling scheduler internal state, Zephyr requires a somewhat
lower-level context switch primitive for SMP systems: arch_switch() is always called with interrupts
masked, and takes exactly two arguments. The first is an opaque (architecture defined) handle to the
context to which it should switch, and the second is a pointer to such a handle into which it should
store the handle resulting from the thread that is being switched out. The kernel then implements a
portable z_swap() implementation on top of this primitive which includes the relevant scheduler logic
in a location where the architecture doesn’t need to understand it.
Similarly, on interrupt exit, switch-based architectures are expected to call
z_get_next_switch_handle() to retrieve the next thread to run from the scheduler. The argu-
ment to z_get_next_switch_handle() is either the interrupted thread’s “handle” reflecting the same
opaque type used by arch_switch() , or NULL if that thread cannot be released to the scheduler just yet.
The choice between a handle value or NULL depends on the way CPU interrupt mode is implemented.
Architectures with a large CPU register file would typically preserve only the caller-saved registers on
the current thread’s stack when interrupted in order to minimize interrupt latency, and preserve the
callee-saved registers only when arch_switch() is called to minimize context switching latency. Such
architectures must use NULL as the argument to z_get_next_switch_handle() to determine if there
is a new thread to schedule, and follow through with their own arch_switch() or derivative if so,
or directly leave interrupt mode otherwise. In the former case it is up to that switch code to store the
handle resulting from the thread that is being switched out in that thread’s “switch_handle” field after
its context has fully been saved.
Architectures whose entry in interrupt mode already preserves the entire thread state may pass that
thread’s handle directly to z_get_next_switch_handle() and be done in one step.
Note that while SMP requires CONFIG_USE_SWITCH, the reverse is not true. A uniprocessor archi-
tecture built with CONFIG_SMP set to No might still decide to implement its context switching using
arch_switch() .
API Reference
group spinlock_apis
Spinlock APIs.
Typedefs
Functions
struct k_spinlock
#include <spinlock.h> Kernel Spin Lock.
This struct defines a spin lock record on which CPUs can wait with k_spin_lock(). Any number
of spinlocks may be defined in application code.
These pages cover kernel objects which can be used to pass data between threads and ISRs.
The following table summarizes their high-level features.
Object Bidirec- Data Data Data ISRs can ISRs can Overrun handling
tional? structure item size Align- receive? send?
ment
FIFO No Queue Arbi- 4 B [2] Yes [3] Yes N/A
trary
[1]
LIFO No Queue Arbi- 4 B [2] Yes [3] Yes N/A
trary
[1]
Stack No Array Word Word Yes [3] Yes Undefined be-
havior
Message No Ring Power Power of Yes [3] Yes Pend thread or
queue buffer of two two return -errno
Mailbox Yes Queue Arbi- Arbitrary No No N/A
trary
[1]
Pipe No Ring Arbi- Arbitrary Yes [5] Yes [5] Pend thread or
buffer trary return -errno
[4]
[1] Callers allocate space for queue overhead in the data elements themselves.
[2] Objects added with k_fifo_alloc_put() and k_lifo_alloc_put() do not have alignment constraints, but
use temporary memory from the calling thread’s resource pool.
[3] ISRs can receive only when passing K_NO_WAIT as the timeout argument.
[4] Optional.
[5] ISRs can send and/or receive only when passing K_NO_WAIT as the timeout argument.
Queues
A Queue in Zephyr is a kernel object that implements a traditional queue, allowing threads and ISRs
to add and remove data items of any size. The queue is similar to a FIFO and serves as the underlying
implementation for both k_fifo and k_lifo. For more information on usage see k_fifo.
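A minimal usage sketch follows (the names my_queue and struct msg_t are illustrative; as with FIFOs, the first word of each item is reserved for the kernel's use):

```c
#include <zephyr/kernel.h>

struct msg_t {
    void *reserved;   /* 1st word reserved for use by the queue */
    int payload;
};

/* statically define and initialize the queue */
K_QUEUE_DEFINE(my_queue);

void producer(void)
{
    static struct msg_t msg = { .payload = 42 };

    /* append to the tail of the queue (FIFO order) */
    k_queue_append(&my_queue, &msg);
}

void consumer(void)
{
    /* block until an item is available */
    struct msg_t *msg = k_queue_get(&my_queue, K_FOREVER);

    /* ... process msg->payload ... */
}
```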
API Reference
group queue_apis
Defines
K_QUEUE_DEFINE(name)
Statically define and initialize a queue.
The queue can be accessed outside the module where it is defined using:
extern struct k_queue <name>;
Parameters
• name – Name of the queue.
Functions
Parameters
• queue – Address of the queue.
Parameters
• queue – Address of the queue.
• data – Address of the data item.
Parameters
• queue – Address of the queue.
• data – Address of the data item.
Return values
• 0 – on success
• -ENOMEM – if there isn’t sufficient RAM in the caller’s resource pool
Parameters
• queue – Address of the queue.
• data – Address of the data item.
Parameters
• queue – Address of the queue.
• data – Address of the data item.
Return values
• 0 – on success
• -ENOMEM – if there isn’t sufficient RAM in the caller’s resource pool
Parameters
• queue – Address of the queue.
• prev – Address of the previous data item.
• data – Address of the data item.
Parameters
• queue – Address of the queue.
• head – Pointer to first node in singly-linked list.
• tail – Pointer to last node in singly-linked list.
Return values
• 0 – on success
• -EINVAL – on invalid supplied data
Parameters
• queue – Address of the queue.
• list – Pointer to sys_slist_t object.
Return values
• 0 – on success
• -EINVAL – on invalid data
Parameters
• queue – Address of the queue.
• timeout – Non-negative waiting period to obtain a data item or one of the
special values K_NO_WAIT and K_FOREVER.
Returns
Address of the data item if successful; NULL if returned without waiting, or wait-
ing period timed out.
Parameters
• queue – Address of the queue.
• data – Address of the data item.
Returns
true if data item was removed
Parameters
• queue – Address of the queue.
• data – Address of the data item.
Returns
true if data item was added, false if not
Parameters
• queue – Address of the queue.
Returns
Non-zero if the queue is empty.
Returns
0 if data is available.
FIFOs
A FIFO is a kernel object that implements a traditional first in, first out (FIFO) queue, allowing threads
and ISRs to add and remove data items of any size.
• Concepts
• Implementation
– Defining a FIFO
– Writing to a FIFO
– Reading from a FIFO
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of FIFOs can be defined (limited only by available RAM). Each FIFO is refer-
enced by its memory address.
A FIFO has the following key properties:
• A queue of data items that have been added but not yet removed. The queue is implemented as a
simple linked list.
A FIFO must be initialized before it can be used. This sets its queue to empty.
FIFO data items must be aligned on a word boundary, as the kernel reserves the first word of an item
for use as a pointer to the next data item in the queue. Consequently, a data item that holds N bytes
of application data requires N+4 (or N+8) bytes of memory. There are no alignment or reserved space
requirements for data items if they are added with k_fifo_alloc_put() , instead additional memory is
temporarily allocated from the calling thread’s resource pool.
Note: FIFO data items are restricted to a single active instance across all FIFO data queues. Any attempt
to re-add a FIFO data item to a queue before it has been removed from the queue to which it was
previously added will result in undefined behavior.
A data item may be added to a FIFO by a thread or an ISR. The item is given directly to a waiting thread,
if one exists; otherwise the item is added to the FIFO’s queue. There is no limit to the number of items
that may be queued.
A data item may be removed from a FIFO by a thread. If the FIFO’s queue is empty a thread may choose
to wait for a data item to be given. Any number of threads may wait on an empty FIFO simultaneously.
When a data item is added, it is given to the highest priority thread that has waited longest.
Note: The kernel does allow an ISR to remove an item from a FIFO, however the ISR must not attempt
to wait if the FIFO is empty.
If desired, multiple data items can be added to a FIFO in a single operation if they are chained together
into a singly-linked list. This capability can be useful if multiple writers are adding sets of related data
items to the FIFO, as it ensures the data items in each set are not interleaved with other data items.
Adding multiple data items to a FIFO is also more efficient than adding them one at a time, and can
be used to guarantee that anyone who removes the first data item in a set will be able to remove the
remaining data items without waiting.
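The chained-list approach can be sketched as follows (batch_fifo and the item layout are illustrative; items are linked through their reserved first word and the list is NULL-terminated, assuming n >= 1):

```c
#include <zephyr/kernel.h>

struct data_item_t {
    void *fifo_reserved;  /* 1st word: link to next item, reserved for kernel */
    int value;
};

K_FIFO_DEFINE(batch_fifo);

void send_batch(struct data_item_t items[], size_t n)
{
    /* chain the items through their reserved first word */
    for (size_t i = 0; i < n - 1; i++) {
        items[i].fifo_reserved = &items[i + 1];
    }
    items[n - 1].fifo_reserved = NULL;  /* list must be NULL-terminated */

    /* add the whole set in one operation, so it is never interleaved
     * with items added by other writers */
    k_fifo_put_list(&batch_fifo, &items[0], &items[n - 1]);
}
```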
Implementation
Defining a FIFO A FIFO is defined using a variable of type k_fifo. It must then be initialized by calling
k_fifo_init() .
The following code defines and initializes an empty FIFO.
struct k_fifo my_fifo;
k_fifo_init(&my_fifo);
Alternatively, an empty FIFO can be defined and initialized at compile time by calling K_FIFO_DEFINE .
The following code has the same effect as the code segment above.
K_FIFO_DEFINE(my_fifo);
Writing to a FIFO A data item is added to a FIFO by calling k_fifo_put() .
The following code builds on the example above, and uses the FIFO to send data to one or more consumer
threads.
struct data_item_t {
    void *fifo_reserved;   /* 1st word reserved for use by kernel */
    ...
};

struct data_item_t tx_data;

void producer_thread(void)
{
    while (1) {
        /* create data item to send */
        tx_data = ...

        /* send data to consumers */
        k_fifo_put(&my_fifo, &tx_data);

        ...
    }
}
Additionally, a singly-linked list of data items can be added to a FIFO by calling k_fifo_put_list() or
k_fifo_put_slist() .
Finally, a data item can be added to a FIFO with k_fifo_alloc_put() . With this API, there is no need
to reserve space for the kernel’s use in the data item, instead additional memory will be allocated from
the calling thread’s resource pool until the item is read.
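For instance, an item with neither a reserved first word nor any particular alignment can be queued this way (a sketch building on the my_fifo defined above; struct plain_item is illustrative):

```c
#include <zephyr/kernel.h>

/* no reserved first word and no alignment requirement needed here */
struct plain_item {
    uint8_t payload[3];
};

int send_item(struct plain_item *item)
{
    /* the kernel allocates a temporary bookkeeping node from the
     * calling thread's resource pool; returns -ENOMEM if it cannot */
    return k_fifo_alloc_put(&my_fifo, item);
}
```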
Reading from a FIFO A data item is removed from a FIFO by calling k_fifo_get() .
The following code builds on the example above, and uses the FIFO to obtain data items from a producer
thread, which are then processed in some manner.
void consumer_thread(void)
{
    struct data_item_t *rx_data;

    while (1) {
        rx_data = k_fifo_get(&my_fifo, K_FOREVER);

        /* process FIFO data item */
        ...
    }
}
Suggested Uses Use a FIFO to asynchronously transfer data items of arbitrary size in a “first in, first
out” manner.
API Reference
group fifo_apis
Defines
k_fifo_init(fifo)
Initialize a FIFO queue.
This routine initializes a FIFO queue, prior to its first use.
Parameters
• fifo – Address of the FIFO queue.
k_fifo_cancel_wait(fifo)
Cancel waiting on a FIFO queue.
This routine causes the first thread pending on fifo, if any, to return from its k_fifo_get() call
with a NULL value (as if its timeout expired).
Parameters
• fifo – Address of the FIFO queue.
k_fifo_put(fifo, data)
Add an element to a FIFO queue.
This routine adds a data item to fifo. A FIFO data item must be aligned on a word boundary,
and the first word of the item is reserved for the kernel’s use.
Parameters
• fifo – Address of the FIFO.
• data – Address of the data item.
k_fifo_alloc_put(fifo, data)
Add an element to a FIFO queue.
This routine adds a data item to fifo. There is an implicit memory allocation to create an ad-
ditional temporary bookkeeping data structure from the calling thread’s resource pool, which
is automatically freed when the item is removed. The data itself is not copied.
Parameters
• fifo – Address of the FIFO.
• data – Address of the data item.
Return values
• 0 – on success
• -ENOMEM – if there isn’t sufficient RAM in the caller’s resource pool
k_fifo_put_list(fifo, head, tail)
Atomically add a list of elements to a FIFO queue.
This routine adds a list of data items to fifo in one operation. The data items must be in a
singly-linked list, with the first word of each data item pointing to the next data item; the list
must be NULL-terminated.
Parameters
• fifo – Address of the FIFO queue.
• head – Pointer to first node in singly-linked list.
• tail – Pointer to last node in singly-linked list.
k_fifo_put_slist(fifo, list)
Atomically add a list of elements to a FIFO queue.
This routine adds a list of data items to fifo in one operation. The data items must be in
a singly-linked list implemented using a sys_slist_t object. Upon completion, the sys_slist_t
object is invalid and must be re-initialized via sys_slist_init().
Parameters
• fifo – Address of the FIFO queue.
• list – Pointer to sys_slist_t object.
k_fifo_get(fifo, timeout)
Get an element from a FIFO queue.
This routine removes a data item from fifo in a “first in, first out” manner. The first word of
the data item is reserved for the kernel’s use.
Parameters
• fifo – Address of the FIFO queue.
• timeout – Waiting period to obtain a data item, or one of the special values
K_NO_WAIT and K_FOREVER.
Returns
Address of the data item if successful; NULL if returned without waiting, or wait-
ing period timed out.
k_fifo_is_empty(fifo)
Query a FIFO queue to see if it has data available.
Note that the data might already be gone by the time this function returns if other threads are
also trying to read from the FIFO.
Parameters
• fifo – Address of the FIFO queue.
Returns
Non-zero if the FIFO queue is empty.
Returns
0 if data is available.
k_fifo_peek_head(fifo)
Peek element at the head of a FIFO queue.
Return the element at the head of the FIFO queue without removing it. A use case for this is when
the elements of the FIFO are themselves containers: on each iteration of processing, the head
container is peeked, some data is processed out of it, and only when the container is empty is it
completely removed from the FIFO queue.
Parameters
• fifo – Address of the FIFO queue.
Returns
Head element, or NULL if the FIFO queue is empty.
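The container-draining pattern described above can be sketched as follows (work_fifo and struct container are illustrative assumptions, not part of the API):

```c
#include <zephyr/kernel.h>

/* each FIFO element is itself a container holding several records */
struct container {
    void *fifo_reserved;  /* 1st word reserved for use by kernel */
    size_t remaining;     /* records left to process in this container */
    /* ... records ... */
};

K_FIFO_DEFINE(work_fifo);

void process_one_record(void)
{
    /* look at the head container without removing it */
    struct container *c = k_fifo_peek_head(&work_fifo);

    if (c == NULL) {
        return;  /* FIFO is empty */
    }

    /* ... process one record out of the container ... */
    c->remaining--;

    /* only once the container is drained is it actually removed */
    if (c->remaining == 0) {
        (void)k_fifo_get(&work_fifo, K_NO_WAIT);
    }
}
```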
k_fifo_peek_tail(fifo)
Peek element at the tail of FIFO queue.
Return the element at the tail of the FIFO queue (without removing it). A use case for this is when
the elements of the FIFO queue are themselves containers: it may then be useful to add more data
to the last container in the FIFO queue.
Parameters
• fifo – Address of the FIFO queue.
Returns
Tail element, or NULL if the FIFO queue is empty.
K_FIFO_DEFINE(name)
Statically define and initialize a FIFO queue.
The FIFO queue can be accessed outside the module where it is defined using:
extern struct k_fifo <name>;
Parameters
• name – Name of the FIFO queue.
LIFOs
A LIFO is a kernel object that implements a traditional last in, first out (LIFO) queue, allowing threads
and ISRs to add and remove data items of any size.
• Concepts
• Implementation
– Defining a LIFO
– Writing to a LIFO
– Reading from a LIFO
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of LIFOs can be defined (limited only by available RAM). Each LIFO is refer-
enced by its memory address.
A LIFO has the following key properties:
• A queue of data items that have been added but not yet removed. The queue is implemented as a
simple linked list.
A LIFO must be initialized before it can be used. This sets its queue to empty.
LIFO data items must be aligned on a word boundary, as the kernel reserves the first word of an item
for use as a pointer to the next data item in the queue. Consequently, a data item that holds N bytes
of application data requires N+4 (or N+8) bytes of memory. There are no alignment or reserved space
requirements for data items if they are added with k_lifo_alloc_put() , instead additional memory is
temporarily allocated from the calling thread’s resource pool.
Note: LIFO data items are restricted to a single active instance across all LIFO data queues. Any attempt
to re-add a LIFO data item to a queue before it has been removed from the queue to which it was
previously added will result in undefined behavior.
A data item may be added to a LIFO by a thread or an ISR. The item is given directly to a waiting thread,
if one exists; otherwise the item is added to the LIFO’s queue. There is no limit to the number of items
that may be queued.
A data item may be removed from a LIFO by a thread. If the LIFO’s queue is empty a thread may choose
to wait for a data item to be given. Any number of threads may wait on an empty LIFO simultaneously.
When a data item is added, it is given to the highest priority thread that has waited longest.
Note: The kernel does allow an ISR to remove an item from a LIFO, however the ISR must not attempt
to wait if the LIFO is empty.
Implementation
Defining a LIFO A LIFO is defined using a variable of type k_lifo. It must then be initialized by calling
k_lifo_init() .
The following defines and initializes an empty LIFO.
struct k_lifo my_lifo;
k_lifo_init(&my_lifo);
Alternatively, an empty LIFO can be defined and initialized at compile time by calling K_LIFO_DEFINE .
The following code has the same effect as the code segment above.
K_LIFO_DEFINE(my_lifo);
Writing to a LIFO A data item is added to a LIFO by calling k_lifo_put() .
The following code builds on the example above, and uses the LIFO to send data to one or more consumer
threads.
struct data_item_t {
    void *LIFO_reserved; /* 1st word reserved for use by LIFO */
    ...
};

struct data_item_t tx_data;

void producer_thread(void)
{
    while (1) {
        /* create data item to send */
        tx_data = ...

        /* send data to consumers */
        k_lifo_put(&my_lifo, &tx_data);

        ...
    }
}
A data item can be added to a LIFO with k_lifo_alloc_put() . With this API, there is no need to reserve
space for the kernel’s use in the data item, instead additional memory will be allocated from the calling
thread’s resource pool until the item is read.
Reading from a LIFO A data item is removed from a LIFO by calling k_lifo_get() .
The following code builds on the example above, and uses the LIFO to obtain data items from a producer
thread, which are then processed in some manner.
void consumer_thread(void)
{
    struct data_item_t *rx_data;

    while (1) {
        rx_data = k_lifo_get(&my_lifo, K_FOREVER);

        /* process LIFO data item */
        ...
    }
}
Suggested Uses Use a LIFO to asynchronously transfer data items of arbitrary size in a “last in, first
out” manner.
API Reference
group lifo_apis
Defines
k_lifo_init(lifo)
Initialize a LIFO queue.
This routine initializes a LIFO queue object, prior to its first use.
Parameters
• lifo – Address of the LIFO queue.
k_lifo_put(lifo, data)
Add an element to a LIFO queue.
This routine adds a data item to lifo. A LIFO queue data item must be aligned on a word
boundary, and the first word of the item is reserved for the kernel’s use.
Parameters
• lifo – Address of the LIFO queue.
• data – Address of the data item.
k_lifo_alloc_put(lifo, data)
Add an element to a LIFO queue.
This routine adds a data item to lifo. There is an implicit memory allocation to create an ad-
ditional temporary bookkeeping data structure from the calling thread’s resource pool, which
is automatically freed when the item is removed. The data itself is not copied.
Parameters
• lifo – Address of the LIFO.
• data – Address of the data item.
Return values
• 0 – on success
• -ENOMEM – if there isn’t sufficient RAM in the caller’s resource pool
k_lifo_get(lifo, timeout)
Get an element from a LIFO queue.
This routine removes a data item from lifo in a “last in, first out” manner. The first word of
the data item is reserved for the kernel’s use.
Parameters
• lifo – Address of the LIFO queue.
• timeout – Waiting period to obtain a data item, or one of the special values
K_NO_WAIT and K_FOREVER.
Returns
Address of the data item if successful; NULL if returned without waiting, or wait-
ing period timed out.
K_LIFO_DEFINE(name)
Statically define and initialize a LIFO queue.
The LIFO queue can be accessed outside the module where it is defined using:
extern struct k_lifo <name>;
Parameters
• name – Name of the LIFO queue.
Stacks
A stack is a kernel object that implements a traditional last in, first out (LIFO) queue, allowing threads
and ISRs to add and remove a limited number of integer data values.
• Concepts
• Implementation
– Defining a Stack
– Pushing to a Stack
– Popping from a Stack
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of stacks can be defined (limited only by available RAM). Each stack is refer-
enced by its memory address.
A stack has the following key properties:
• A queue of integer data values that have been added but not yet removed. The queue is imple-
mented using an array of stack_data_t values and must be aligned on a native word boundary. The
stack_data_t type corresponds to the native word size i.e. 32 bits or 64 bits depending on the CPU
architecture and compilation mode.
• A maximum quantity of data values that can be queued in the array.
A stack must be initialized before it can be used. This sets its queue to empty.
A data value can be added to a stack by a thread or an ISR. The value is given directly to a waiting
thread, if one exists; otherwise the value is added to the stack’s queue.
Note: If CONFIG_NO_RUNTIME_CHECKS is enabled, the kernel will not detect and prevent attempts to add
a data value to a stack that has already reached its maximum quantity of queued values. Adding a data
value to a stack that is already full will result in array overflow, and lead to unpredictable behavior.
A data value may be removed from a stack by a thread. If the stack’s queue is empty a thread may
choose to wait for it to be given. Any number of threads may wait on an empty stack simultaneously.
When a data item is added, it is given to the highest priority thread that has waited longest.
Note: The kernel does allow an ISR to remove an item from a stack, however the ISR must not attempt
to wait if the stack is empty.
Implementation
Defining a Stack A stack is defined using a variable of type k_stack. It must then be initialized by
calling k_stack_init() or k_stack_alloc_init() . In the latter case, a buffer is not provided and it is
instead allocated from the calling thread’s resource pool.
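The buffer-allocating variant can be sketched as follows (dyn_stack and POOL_ITEMS are illustrative; this assumes the calling thread has a resource pool configured):

```c
#include <zephyr/kernel.h>

#define POOL_ITEMS 10

struct k_stack dyn_stack;

void setup_and_teardown(void)
{
    /* allocate the backing array from the calling thread's resource pool */
    if (k_stack_alloc_init(&dyn_stack, POOL_ITEMS) != 0) {
        return;  /* -ENOMEM: pool could not satisfy the allocation */
    }

    /* ... use k_stack_push() / k_stack_pop() as usual ... */

    /* release the dynamically allocated buffer when done */
    k_stack_cleanup(&dyn_stack);
}
```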
The following code defines and initializes an empty stack capable of holding up to ten word-sized data
values.
#define MAX_ITEMS 10

stack_data_t my_stack_array[MAX_ITEMS];
struct k_stack my_stack;

k_stack_init(&my_stack, my_stack_array, MAX_ITEMS);
Alternatively, a stack can be defined and initialized at compile time by calling K_STACK_DEFINE .
The following code has the same effect as the code segment above. Observe that the macro defines both
the stack and its array of data values.
K_STACK_DEFINE(my_stack, MAX_ITEMS);
Pushing to a Stack A data item is added to a stack by calling k_stack_push() .
The following code builds on the example above, and shows how a thread can create a pool of data
structures by saving their memory addresses in a stack.
struct my_buffer_type {
    int field1;
    ...
};
struct my_buffer_type my_buffers[MAX_ITEMS];

/* save address of each buffer in a stack */
for (int i = 0; i < MAX_ITEMS; i++) {
    k_stack_push(&my_stack, (stack_data_t)&my_buffers[i]);
}
Popping from a Stack A data item is taken from a stack by calling k_stack_pop() .
The following code builds on the example above, and shows how a thread can dynamically allocate an
unused data structure. When the data structure is no longer required, the thread must push its address
back on the stack to allow the data structure to be reused.
struct my_buffer_type *new_buffer;

k_stack_pop(&my_stack, (stack_data_t *)&new_buffer, K_FOREVER);

/* process the data structure */
...
Suggested Uses Use a stack to store and retrieve integer data values in a “last in, first out” manner,
when the maximum number of stored items is known.
API Reference
group stack_apis
Defines
K_STACK_DEFINE(name, stack_num_entries)
Statically define and initialize a stack.
The stack can be accessed outside the module where it is defined using:
extern struct k_stack <name>;
Parameters
• name – Name of the stack.
• stack_num_entries – Maximum number of values that can be stacked.
Functions
int32_t k_stack_alloc_init(struct k_stack *stack, uint32_t num_entries)
Initialize a stack object with internal buffer allocation.
This routine initializes a stack object, prior to its first use. Internal buffers are allocated from
the calling thread’s resource pool.
Parameters
• stack – Address of the stack.
• num_entries – Maximum number of values that can be stacked.
Returns
-ENOMEM if memory couldn’t be allocated
int k_stack_cleanup(struct k_stack *stack)
Release a stack’s allocated buffer.
If a stack object was given a dynamically allocated buffer via k_stack_alloc_init(), this will free
it. This function does nothing if the buffer wasn’t dynamically allocated.
Parameters
• stack – Address of the stack.
Return values
• 0 – on success
• -EAGAIN – when object is still in use
int k_stack_push(struct k_stack *stack, stack_data_t data)
Push an element onto a stack.
This routine adds a stack_data_t value data to stack.
Parameters
• stack – Address of the stack.
• data – Value to push onto the stack.
Return values
• 0 – on success
• -ENOMEM – if stack is full
int k_stack_pop(struct k_stack *stack, stack_data_t *data, k_timeout_t timeout)
Pop an element from a stack.
This routine removes a stack_data_t value from stack in a “last in, first out” manner and stores
the value in data.
Parameters
• stack – Address of the stack.
• data – Address of area to hold the value popped from the stack.
• timeout – Waiting period to obtain a value, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Element popped from stack.
• -EBUSY – Returned without waiting.
Message Queues
A message queue is a kernel object that implements a simple message queue, allowing threads and ISRs
to asynchronously send and receive fixed-size data items.
• Concepts
• Implementation
– Defining a Message Queue
– Writing to a Message Queue
– Reading from a Message Queue
– Peeking into a Message Queue
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of message queues can be defined (limited only by available RAM). Each mes-
sage queue is referenced by its memory address.
A message queue has the following key properties:
• A ring buffer of data items that have been sent but not yet received.
• A data item size, measured in bytes.
• A maximum quantity of data items that can be queued in the ring buffer.
The message queue’s ring buffer must be aligned to an N-byte boundary, where N is a power of 2 (i.e. 1,
2, 4, 8, . . . ). To ensure that the messages stored in the ring buffer are similarly aligned to this boundary,
the data item size must also be a multiple of N.
A message queue must be initialized before it can be used. This sets its ring buffer to empty.
A data item can be sent to a message queue by a thread or an ISR. The data item pointed at by the
sending thread is copied to a waiting thread, if one exists; otherwise the item is copied to the message
queue’s ring buffer, if space is available. In either case, the size of the data area being sent must equal
the message queue’s data item size.
If a thread attempts to send a data item when the ring buffer is full, the sending thread may choose to
wait for space to become available. Any number of sending threads may wait simultaneously when the
ring buffer is full; when space becomes available it is given to the highest priority sending thread that
has waited the longest.
A data item can be received from a message queue by a thread. The data item is copied to the area
specified by the receiving thread; the size of the receiving area must equal the message queue’s data item
size.
If a thread attempts to receive a data item when the ring buffer is empty, the receiving thread may choose
to wait for a data item to be sent. Any number of receiving threads may wait simultaneously when the
ring buffer is empty; when a data item becomes available it is given to the highest priority receiving
thread that has waited the longest.
A thread can also peek at the message on the head of a message queue without removing it from the
queue. The data item is copied to the area specified by the receiving thread; the size of the receiving
area must equal the message queue’s data item size.
Note: The kernel does allow an ISR to receive an item from a message queue, however the ISR must
not attempt to wait if the message queue is empty.
Implementation
Defining a Message Queue A message queue is defined using a variable of type k_msgq . It must then
be initialized by calling k_msgq_init() .
The following code defines and initializes an empty message queue that is capable of holding 10 items,
each of which is 12 bytes long.
struct data_item_type {
    uint32_t field1;
    uint32_t field2;
    uint32_t field3;
};

char __aligned(4) my_msgq_buffer[10 * sizeof(struct data_item_type)];
struct k_msgq my_msgq;

k_msgq_init(&my_msgq, my_msgq_buffer, sizeof(struct data_item_type), 10);
Alternatively, a message queue can be defined and initialized at compile time by calling K_MSGQ_DEFINE .
The following code has the same effect as the code segment above. Observe that the macro defines both
the message queue and its buffer.
K_MSGQ_DEFINE(my_msgq, sizeof(struct data_item_type), 10, 4);
The following code demonstrates how to enforce alignment for the structure defined in the previous
example. The aligned attribute makes each data_item_type begin on the specified byte boundary;
aligned(4) aligns the structure to an address divisible by 4.
typedef struct {
uint32_t field1;
uint32_t field2;
uint32_t field3;
}__attribute__((aligned(4))) data_item_type;
Writing to a Message Queue A data item is added to a message queue by calling k_msgq_put() .
The following code builds on the example above, and uses the message queue to pass data items from a
producing thread to one or more consuming threads. If the message queue fills up because the consumers
can’t keep up, the producing thread throws away all existing data so the newer data can be saved.
void producer_thread(void)
{
    struct data_item_type data;

    while (1) {
        /* create data item to send (e.g. measurement, timestamp, ...) */
        data = ...

        /* send data to consumers */
        while (k_msgq_put(&my_msgq, &data, K_NO_WAIT) != 0) {
            /* message queue is full: purge old data & try again */
            k_msgq_purge(&my_msgq);
        }
    }
}
Reading from a Message Queue A data item is taken from a message queue by calling k_msgq_get() .
The following code builds on the example above, and uses the message queue to process data items
generated by one or more producing threads. Note that the return value of k_msgq_get() should be
tested as -ENOMSG can be returned due to k_msgq_purge() .
void consumer_thread(void)
{
    struct data_item_type data;

    while (1) {
        /* get a data item */
        k_msgq_get(&my_msgq, &data, K_FOREVER);

        /* process data item */
        ...
    }
}
Peeking into a Message Queue A data item is read from a message queue by calling k_msgq_peek() .
The following code peeks into the message queue to read the data item at the head of the queue that is
generated by one or more producing threads.
void consumer_thread(void)
{
    struct data_item_type data;

    while (1) {
        /* read a data item by peeking into the queue */
        k_msgq_peek(&my_msgq, &data);

        /* process data item */
        ...
    }
}
Suggested Uses Use a message queue to transfer small data items between threads in an asynchronous
manner.
Note: A message queue can be used to transfer large data items, if desired. However, this can increase
interrupt latency as interrupts are locked while a data item is written or read. The time to write or read
a data item increases linearly with its size since the item is copied in its entirety to or from the buffer in
memory. For this reason, it is usually preferable to transfer large data items by exchanging a pointer to
the data item, rather than the data item itself.
A synchronous transfer can be achieved by using the kernel’s mailbox object type.
API Reference
group msgq_apis
Defines
K_MSGQ_FLAG_ALLOC
K_MSGQ_DEFINE(q_name, q_msg_size, q_max_msgs, q_align)
Statically define and initialize a message queue.
The message queue’s ring buffer contains space for q_max_msgs messages, each of which is
q_msg_size bytes long.
Parameters
• q_name – Name of the message queue.
• q_msg_size – Message size (in bytes).
• q_max_msgs – Maximum number of messages that can be queued.
• q_align – Alignment of the message queue’s ring buffer.
Functions
void k_msgq_init(struct k_msgq *msgq, char *buffer, size_t msg_size, uint32_t max_msgs)
Initialize a message queue.
This routine initializes a message queue object, prior to its first use.
The message queue’s ring buffer must contain space for max_msgs messages, each of which is
msg_size bytes long. The buffer must be aligned to an N-byte boundary, where N is a power
of 2 (i.e. 1, 2, 4, . . . ). To ensure that each message is similarly aligned to this boundary,
q_msg_size must also be a multiple of N.
Parameters
• msgq – Address of the message queue.
• buffer – Pointer to ring buffer that holds queued messages.
• msg_size – Message size (in bytes).
• max_msgs – Maximum number of messages that can be queued.
int k_msgq_put(struct k_msgq *msgq, const void *data, k_timeout_t timeout)
Send a message to a message queue.
This routine sends a message to message queue msgq.
Note: The message content is copied from data into msgq and the data pointer is not retained,
so the message content will not be modified by this function.
Parameters
• msgq – Address of the message queue.
• data – Pointer to the message.
• timeout – Non-negative waiting period to add the message, or one of the spe-
cial values K_NO_WAIT and K_FOREVER.
Return values
• 0 – Message sent.
• -ENOMSG – Returned without waiting or queue purged.
• -EAGAIN – Waiting period timed out.
int k_msgq_get(struct k_msgq *msgq, void *data, k_timeout_t timeout)
Receive a message from a message queue.
This routine receives a message from message queue msgq in a “first in, first out” manner.
Parameters
• msgq – Address of the message queue.
• data – Address of area to hold the received message.
• timeout – Waiting period to receive the message, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Message received.
• -ENOMSG – Returned without waiting.
• -EAGAIN – Waiting period timed out.
int k_msgq_peek(struct k_msgq *msgq, void *data)
Peek/read a message from a message queue.
This routine reads a message from message queue msgq in a “first in, first out” manner and leaves
the message in the queue.
Parameters
• msgq – Address of the message queue.
• data – Address of area to hold the message read from the queue.
Return values
• 0 – Message read.
• -ENOMSG – Returned when the queue has no message.
struct k_msgq
#include <kernel.h> Message Queue Structure.
Public Members
_wait_q_t wait_q
Message queue wait queue
size_t msg_size
Message size
uint32_t max_msgs
Maximal number of messages
char *buffer_start
Start of message buffer
char *buffer_end
End of message buffer
char *read_ptr
Read pointer
char *write_ptr
Write pointer
uint32_t used_msgs
Number of used messages
uint8_t flags
Message queue
struct k_msgq_attrs
#include <kernel.h> Message Queue Attributes.
Public Members
size_t msg_size
Message Size
uint32_t max_msgs
Maximal number of messages
uint32_t used_msgs
Used messages
Mailboxes
A mailbox is a kernel object that provides enhanced message queue capabilities that go beyond the
capabilities of a message queue object. A mailbox allows threads to send and receive messages of any
size synchronously or asynchronously.
• Concepts
– Message Format
– Message Lifecycle
– Thread Compatibility
– Message Flow Control
• Implementation
– Defining a Mailbox
– Message Descriptors
– Sending a Message
– Receiving a Message
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of mailboxes can be defined (limited only by available RAM). Each mailbox is
referenced by its memory address.
A mailbox has the following key properties:
• A send queue of messages that have been sent but not yet received.
• A receive queue of threads that are waiting to receive a message.
A mailbox must be initialized before it can be used. This sets both of its queues to empty.
A mailbox allows threads, but not ISRs, to exchange messages. A thread that sends a message is known
as the sending thread, while a thread that receives the message is known as the receiving thread. Each
message may be received by only one thread (i.e. point-to-multipoint and broadcast messaging are not
supported).
Messages exchanged using a mailbox are handled non-anonymously, allowing both threads participating
in an exchange to know (and even specify) the identity of the other thread.
Message Format A message descriptor is a data structure that specifies where a message’s data is
located, and how the message is to be handled by the mailbox. Both the sending thread and the receiving
thread supply a message descriptor when accessing a mailbox. The mailbox uses the message descriptors
to perform a message exchange between compatible sending and receiving threads. The mailbox also
updates certain message descriptor fields during the exchange, allowing both threads to know what has
occurred.
A mailbox message contains zero or more bytes of message data. The size and format of the message
data is application-defined, and can vary from one message to the next.
A message buffer is an area of memory provided by the thread that sends or receives the message data.
An array or structure variable can often be used for this purpose.
A message that has neither form of message data is called an empty message.
Note: A message whose message buffer exists, but contains zero bytes of actual data, is not an empty
message.
Message Lifecycle The life cycle of a message is straightforward. A message is created when it is given
to a mailbox by the sending thread. The message is then owned by the mailbox until it is given to a
receiving thread. The receiving thread may retrieve the message data when it receives the message from
the mailbox, or it may perform data retrieval during a second, subsequent mailbox operation. Only when
data retrieval has occurred is the message deleted by the mailbox.
Thread Compatibility A sending thread can specify the address of the thread to which the message is
sent, or send it to any thread by specifying K_ANY. Likewise, a receiving thread can specify the address
of the thread from which it wishes to receive a message, or it can receive a message from any thread by
specifying K_ANY. A message is exchanged only when the requirements of both the sending thread and
receiving thread are satisfied; such threads are said to be compatible.
For example, if thread A sends a message to thread B (and only thread B) it will be received by thread B
if thread B tries to receive a message from thread A or if thread B tries to receive from any thread. The
exchange will not occur if thread B tries to receive a message from thread C. The message can never be
received by thread C, even if it tries to receive a message from thread A (or from any thread).
Implementation
Defining a Mailbox A mailbox is defined using a variable of type k_mbox . It must then be initialized
by calling k_mbox_init() .
The following code defines and initializes an empty mailbox.
k_mbox_init(&my_mailbox);
Alternatively, a mailbox can be defined and initialized at compile time by calling K_MBOX_DEFINE .
The following code has the same effect as the code segment above.
K_MBOX_DEFINE(my_mailbox);
Message Descriptors A message descriptor is a structure of type k_mbox_msg . Only the fields listed
below should be used; any other fields are for internal mailbox use only.
info
A 32-bit value that is exchanged by the message sender and receiver, and whose meaning is defined
by the application. This exchange is bi-directional, allowing the sender to pass a value to the
receiver during any message exchange, and allowing the receiver to pass a value to the sender
during a synchronous message exchange.
size
The message data size, in bytes. Set it to zero when sending an empty message, or when sending a
message buffer with no actual data. When receiving a message, set it to the maximum amount of
data desired, or to zero if the message data is not wanted. The mailbox updates this field with the
actual number of data bytes exchanged once the message is received.
tx_data
A pointer to the sending thread’s message buffer. Set it to NULL when sending an empty message.
Leave this field uninitialized when receiving a message.
tx_target_thread
The address of the desired receiving thread. Set it to K_ANY to allow any thread to receive the
message. Leave this field uninitialized when receiving a message. The mailbox updates this field
with the actual receiver’s address once the message is received.
rx_source_thread
The address of the desired sending thread. Set it to K_ANY to receive a message sent by any thread.
Leave this field uninitialized when sending a message. The mailbox updates this field with the
actual sender’s address when the message is put into the mailbox.
Sending a Message A thread sends a message by first creating its message data, if any.
Next, the sending thread creates a message descriptor that characterizes the message to be sent, as
described in the previous section.
Finally, the sending thread calls a mailbox send API to initiate the message exchange. The message is
immediately given to a compatible receiving thread, if one is currently waiting. Otherwise, the message
is added to the mailbox’s send queue.
Any number of messages may exist simultaneously on a send queue. The messages in the send queue
are sorted according to the priority of the sending thread. Messages of equal priority are sorted so that
the oldest message can be received first.
For a synchronous send operation, the operation normally completes when a receiving thread has both
received the message and retrieved the message data. If the message is not received before the waiting
period specified by the sending thread is reached, the message is removed from the mailbox’s send
queue and the send operation fails. When a send operation completes successfully the sending thread
can examine the message descriptor to determine which thread received the message, how much data
was exchanged, and the application-defined info value supplied by the receiving thread.
Note: A synchronous send operation may block the sending thread indefinitely, even when the thread
specifies a maximum waiting period. The waiting period only limits how long the mailbox waits before
the message is received by another thread. Once a message is received there is no limit to the time the
receiving thread may take to retrieve the message data and unblock the sending thread.
For an asynchronous send operation, the operation always completes immediately. This allows the send-
ing thread to continue processing regardless of whether the message is given to a receiving thread im-
mediately or added to the send queue. The sending thread may optionally specify a semaphore that the
mailbox gives when the message is deleted by the mailbox, for example, when the message has been
received and its data retrieved by a receiving thread. The use of a semaphore allows the sending thread
to easily implement a flow control mechanism that ensures that the mailbox holds no more than an
application-specified number of messages from a sending thread (or set of sending threads) at any point
in time.
Note: A thread that sends a message asynchronously has no way to determine which thread received the
message, how much data was exchanged, or the application-defined info value supplied by the receiving
thread.
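The flow-control pattern described above can be sketched as follows. This is an illustration only: the
semaphore’s initial count bounds the number of in-flight messages, and the mailbox gives the semaphore
each time one of those messages is deleted. (Asynchronous puts require CONFIG_NUM_MBOX_ASYNC_MSGS;
the message contents here are placeholders.)

/* Sketch: allow at most 4 asynchronous messages in flight at once. */
K_SEM_DEFINE(tx_slots, 4, 4);

void producer_thread(void)
{
    struct k_mbox_msg send_msg;

    while (1) {
        /* wait for a free slot before sending another message */
        k_sem_take(&tx_slots, K_FOREVER);

        send_msg.info = 0;
        send_msg.size = 0;
        send_msg.tx_data = NULL;
        send_msg.tx_target_thread = K_ANY;

        /* mailbox gives tx_slots back when this message is deleted */
        k_mbox_async_put(&my_mailbox, &send_msg, &tx_slots);
    }
}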
Sending an Empty Message This code uses a mailbox to synchronously pass 4-byte random values to
any consuming thread that wants one. The message “info” field is large enough to carry the information
being exchanged, so the data portion of the message isn’t used.
void producer_thread(void)
{
    struct k_mbox_msg send_msg;

    while (1) {

        /* generate random value to send */
        uint32_t random_value = sys_rand32_get();

        /* prepare to send empty message */
        send_msg.info = random_value;
        send_msg.size = 0;
        send_msg.tx_data = NULL;
        send_msg.tx_target_thread = K_ANY;

        /* send message and wait until a consumer receives it */
        k_mbox_put(&my_mailbox, &send_msg, K_FOREVER);
    }
}
Sending Data Using a Message Buffer This code uses a mailbox to synchronously pass variable-sized
requests from a producing thread to any consuming thread that wants it. The message “info” field is
used to exchange information about the maximum size message buffer that each thread can handle.
void producer_thread(void)
{
    char buffer[100];
    int buffer_bytes_used;

    struct k_mbox_msg send_msg;

    while (1) {

        /* generate data to send */
        ...
        buffer_bytes_used = ... ;
        memcpy(buffer, source, buffer_bytes_used);

        /* prepare to send message */
        send_msg.info = buffer_bytes_used;
        send_msg.size = buffer_bytes_used;
        send_msg.tx_data = buffer;
        send_msg.tx_target_thread = K_ANY;

        /* send message and wait until a consumer receives it */
        k_mbox_put(&my_mailbox, &send_msg, K_FOREVER);

        /* info value indicates size of message buffer the consumer used */
        ...
    }
}
Receiving a Message A thread receives a message by first creating a message descriptor that character-
izes the message it wants to receive. It then calls one of the mailbox receive APIs. The mailbox searches
its send queue and takes the message from the first compatible thread it finds. If no compatible thread
exists, the receiving thread may choose to wait for one. If no compatible thread appears before the
waiting period specified by the receiving thread is reached, the receive operation fails. Once a receive
operation completes successfully the receiving thread can examine the message descriptor to determine
which thread sent the message, how much data was exchanged, and the application-defined info value
supplied by the sending thread.
Any number of receiving threads may wait simultaneously on a mailbox’s receive queue. The threads
are sorted according to their priority; threads of equal priority are sorted so that the one that started
waiting first can receive a message first.
Note: Receiving threads do not always receive messages in a first in, first out (FIFO) order, due to the
thread compatibility constraints specified by the message descriptors. For example, if thread A waits to
receive a message only from thread X and then thread B waits to receive a message from thread Y, an
incoming message from thread Y to any thread will be given to thread B and thread A will continue to
wait.
The receiving thread controls both the quantity of data it retrieves from an incoming message and where
the data ends up. The thread may choose to take all of the data in the message, to take only the initial
part of the data, or to take no data at all. Similarly, the thread may choose to have the data copied into
a message buffer of its choice.
The following sections outline various approaches a receiving thread may use when retrieving message
data.
Retrieving Data at Receive Time The most straightforward way for a thread to retrieve message data
is to specify a message buffer when the message is received. The thread indicates both the location of
the message buffer (which must not be NULL) and its size.
The mailbox copies the message’s data to the message buffer as part of the receive operation. If the
message buffer is not big enough to contain all of the message’s data, any uncopied data is lost. If the
message is not big enough to fill all of the buffer with data, the unused portion of the message buffer
is left unchanged. In all cases the mailbox updates the receiving thread’s message descriptor to indicate
how many data bytes were copied (if any).
The immediate data retrieval technique is best suited for small messages where the maximum size of a
message is known in advance.
The following code uses a mailbox to process variable-sized requests from any producing thread, using
the immediate data retrieval technique. The message “info” field is used to exchange information about
the maximum size message buffer that each thread can handle.
void consumer_thread(void)
{
    struct k_mbox_msg recv_msg;
    char buffer[100];
    int i;
    int total;

    while (1) {
        /* prepare to receive message */
        recv_msg.info = 100;
        recv_msg.size = 100;
        recv_msg.rx_source_thread = K_ANY;

        /* get a data item, waiting as long as needed */
        k_mbox_get(&my_mailbox, &recv_msg, buffer, K_FOREVER);

        /* info value (passed by the sender) indicates the data size */
        ...

        /* process data in "buffer" */
        total = 0;
        for (i = 0; i < recv_msg.size; i++) {
            total += buffer[i];
        }
        ...
    }
}
Retrieving Data Later Using a Message Buffer A receiving thread may choose to defer message data
retrieval at the time the message is received, so that it can retrieve the data into a message buffer at a
later time. The thread does this by specifying a message buffer location of NULL and a size indicating the
maximum amount of data it is willing to retrieve later.
The mailbox does not copy any message data as part of the receive operation. However, the mailbox
still updates the receiving thread’s message descriptor to indicate how many data bytes are available for
retrieval.
The receiving thread must then respond as follows:
• If the message descriptor size is zero, then either the sender’s message contained no data or the
receiving thread did not want to receive any data. The receiving thread does not need to take any
further action, since the mailbox has already completed data retrieval and deleted the message.
• If the message descriptor size is non-zero and the receiving thread still wants to retrieve the data,
the thread must call k_mbox_data_get() and supply a message buffer large enough to hold the
data. The mailbox copies the data into the message buffer and deletes the message.
• If the message descriptor size is non-zero and the receiving thread does not want to retrieve the
data, the thread must call k_mbox_data_get() and specify a message buffer of NULL. The mailbox
deletes the message without copying the data.
The subsequent data retrieval technique is suitable for applications where immediate retrieval of message
data is undesirable. For example, it can be used when memory limitations make it impractical for the
receiving thread to always supply a message buffer capable of holding the largest possible incoming
message.
The following code uses a mailbox’s deferred data retrieval mechanism to get message data from a
producing thread only if the message meets certain criteria, thereby eliminating unneeded data copying.
The message “info” field supplied by the sender is used to classify the message.
void consumer_thread(void)
{
    struct k_mbox_msg recv_msg;
    char buffer[10000];

    while (1) {
        /* prepare to receive message */
        recv_msg.size = 10000;
        recv_msg.rx_source_thread = K_ANY;

        /* get message, but not its data */
        k_mbox_get(&my_mailbox, &recv_msg, NULL, K_FOREVER);

        /* get message data for only certain types of messages */
        if (is_message_type_ok(recv_msg.info)) {
            /* retrieve message data and delete the message */
            k_mbox_data_get(&recv_msg, buffer);

            /* process data in "buffer" */
            ...
        } else {
            /* ignore message data and delete the message */
            k_mbox_data_get(&recv_msg, NULL);
        }
    }
}
Suggested Uses Use a mailbox to transfer data items between threads whenever the capabilities of a
message queue are insufficient.
API Reference
group mailbox_apis
Defines
K_MBOX_DEFINE(name)
Statically define and initialize a mailbox.
The mailbox can be accessed outside the module where it is defined using:

extern struct k_mbox name;

Parameters
• name – Name of the mailbox.
Functions
int k_mbox_get(struct k_mbox *mbox, struct k_mbox_msg *rx_msg, void *buffer, k_timeout_t timeout)
Receive a mailbox message.
This routine receives a message from mailbox, then optionally retrieves its data and disposes
of the message.
Return values
• 0 – Message received.
• -ENOMSG – Returned without waiting.
• -EAGAIN – Waiting period timed out.
void k_mbox_data_get(struct k_mbox_msg *rx_msg, void *buffer)
Retrieve mailbox message data into a buffer.
This routine completes the processing of a received message by retrieving its data into a buffer,
then disposing of the message.
Alternatively, this routine can be used to dispose of a received message without retrieving its
data.
Parameters
• rx_msg – Address of the receive message descriptor.
• buffer – Address of the buffer to receive data, or NULL to discard the data.
struct k_mbox_msg
#include <kernel.h> Mailbox Message Structure.
Public Members
size_t size
size of message (in bytes)
uint32_t info
application-defined information value
void *tx_data
sender’s message data buffer
k_tid_t rx_source_thread
source thread id
k_tid_t tx_target_thread
target thread id
struct k_mbox
#include <kernel.h> Mailbox Structure.
Public Members
_wait_q_t tx_msg_queue
Transmit message queue
_wait_q_t rx_msg_queue
Receive message queue
Pipes
A pipe is a kernel object that allows a thread to send a byte stream to another thread. Pipes can be used
to synchronously transfer chunks of data in whole or in part.
• Concepts
• Implementation
– Writing to a Pipe
– Reading from a Pipe
– Flushing a Pipe’s Buffer
– Flushing a Pipe
• Suggested uses
• Configuration Options
• API Reference
Concepts The pipe can be configured with a ring buffer which holds data that has been sent but not
yet received; alternatively, the pipe may have no ring buffer.
Any number of pipes can be defined (limited only by available RAM). Each pipe is referenced by its
memory address.
A pipe has the following key property:
• A size that indicates the size of the pipe’s ring buffer. Note that a size of zero defines a pipe with
no ring buffer.
A pipe must be initialized before it can be used. The pipe is initially empty.
Data is synchronously sent either in whole or in part to a pipe by a thread. If the specified minimum
number of bytes can not be immediately satisfied, then the operation will either fail immediately or
attempt to send as many bytes as possible and then pend in the hope that the send can be completed
later. Accepted data is either copied to the pipe’s ring buffer or directly to the waiting reader(s).
Data is synchronously received from a pipe by a thread. If the specified minimum number of bytes can
not be immediately satisfied, then the operation will either fail immediately or attempt to receive as
many bytes as possible and then pend in the hope that the receive can be completed later. Accepted data
is either copied from the pipe’s ring buffer or directly from the waiting sender(s).
Data may also be flushed from a pipe by a thread. Flushing can be performed either on the entire pipe
or on only its ring buffer. Flushing the entire pipe is equivalent to reading both all the data in the ring
buffer and all the data waiting to be written into a giant temporary buffer which is then discarded.
Flushing the ring buffer is equivalent to reading only the data in the ring buffer into a temporary buffer
which is then discarded. Flushing the ring buffer does not guarantee that the ring buffer will stay empty;
flushing it may allow a pended writer to fill the ring buffer.
Note: The kernel allows an ISR to flush a pipe. It also allows an ISR to send/receive data to/from a
pipe, provided the ISR does not attempt to wait for space/data.
Implementation A pipe is defined using a variable of type k_pipe and an optional character buffer of
type unsigned char. It must then be initialized by calling k_pipe_init() .
The following code defines and initializes an empty pipe that has a ring buffer capable of holding 100
bytes and is aligned to a 4-byte boundary.

unsigned char __aligned(4) my_ring_buffer[100];
struct k_pipe my_pipe;

k_pipe_init(&my_pipe, my_ring_buffer, sizeof(my_ring_buffer));

Alternatively, a pipe can be defined and initialized at compile time by calling K_PIPE_DEFINE .
The following code has the same effect as the code segment above. Observe that the macro defines both
the pipe and its ring buffer.

K_PIPE_DEFINE(my_pipe, 100, 4);
Writing to a Pipe Data is sent to a pipe by calling k_pipe_put() .
The following code uses the pipe to pass variable-sized messages from a producing thread to one or
more consuming threads.

struct message_header {
    ...
};

void producer_thread(void)
{
    unsigned char *data;
    size_t total_size;
    size_t bytes_written;
    int rc;
    ...

    while (1) {
        /* Craft message to send in the pipe */
        data = ...;
        total_size = ...;

        /* send data to the consumers */
        rc = k_pipe_put(&my_pipe, data, total_size, &bytes_written,
                        sizeof(struct message_header), K_NO_WAIT);
if (rc < 0) {
/* Incomplete message header sent */
...
} else if (bytes_written < total_size) {
/* Some of the data was sent */
...
} else {
/* All data sent */
...
}
}
}
Reading from a Pipe Data is read from the pipe by calling k_pipe_get() .
The following code builds on the example above, and uses the pipe to process data items generated by
one or more producing threads.
void consumer_thread(void)
{
    unsigned char buffer[120];
    size_t bytes_read;
    int rc;
    struct message_header *header = (struct message_header *)buffer;

    while (1) {
        rc = k_pipe_get(&my_pipe, buffer, sizeof(buffer), &bytes_read,
                        sizeof(*header), K_MSEC(100));

        if ((rc < 0) || (bytes_read < sizeof(*header))) {
            /* Incomplete message header received */
            ...
        } else {
            /* process the message in "buffer" */
            ...
        }
    }
}
Note: A pipe can be used to transfer long streams of data if desired. However it is often preferable to
send pointers to large data items to avoid copying the data.
Flushing a Pipe’s Buffer Data is flushed from the pipe’s ring buffer by calling
k_pipe_buffer_flush() .
The following code builds on the examples above, and flushes the pipe’s buffer.
void monitor_thread(void)
{
while (1) {
...
/* Pipe buffer contains stale data. Flush it. */
k_pipe_buffer_flush(&my_pipe);
...
}
}
Flushing a Pipe All data in a pipe, whether in its ring buffer or waiting to be written, is flushed by
calling k_pipe_flush() .
The following code builds on the examples above, and flushes the pipe’s entire contents.

void monitor_thread(void)
{
    while (1) {
        ...
        /* Critical error detected. Flush the entire pipe to reset it. */
        k_pipe_flush(&my_pipe);
        ...
    }
}
Suggested Uses Use a pipe to send streams of data between threads.
Note: A pipe can be used to transfer long streams of data if desired. However it is often preferable
to send pointers to large data items to avoid copying the data. Copying large data items will negatively
impact interrupt latency, as a spinlock is held while copying that data.
API Reference
group pipe_apis
Defines
K_PIPE_DEFINE(name, pipe_buffer_size, pipe_align)
Statically define and initialize a pipe.
Parameters
• name – Name of the pipe.
• pipe_buffer_size – Size of the pipe’s ring buffer (in bytes), or zero if no ring
buffer is used.
• pipe_align – Alignment of the pipe’s ring buffer (power of 2).
Functions
int k_pipe_get(struct k_pipe *pipe, void *data, size_t bytes_to_read, size_t *bytes_read, size_t min_xfer, k_timeout_t timeout)
Read data from a pipe.
Parameters
• pipe – Address of the pipe.
• data – Address to place the data read from pipe.
• bytes_to_read – Maximum number of data bytes to read.
• bytes_read – Address of area to hold the number of bytes read.
• min_xfer – Minimum number of data bytes to read.
• timeout – Waiting period to wait for the data to be read, or one of the special
values K_NO_WAIT and K_FOREVER.
Return values
• 0 – At least min_xfer bytes of data were read.
• -EINVAL – Invalid parameters supplied.
• -EIO – Returned without waiting; zero data bytes were read.
• -EAGAIN – Waiting period timed out; between zero and min_xfer minus one
data bytes were read.
size_t k_pipe_read_avail(struct k_pipe *pipe)
Query the number of bytes that may be read from pipe.
Parameters
• pipe – Address of the pipe.
Returns
A number n such that 0 <= n <= k_pipe::size; the result is zero for unbuffered
pipes.
size_t k_pipe_write_avail(struct k_pipe *pipe)
Query the number of bytes that may be written to pipe.
Parameters
• pipe – Address of the pipe.
Returns
A number n such that 0 <= n <= k_pipe::size; the result is zero for unbuffered
pipes.
void k_pipe_flush(struct k_pipe *pipe)
Flush the pipe of write data.
This routine flushes the pipe. Flushing the pipe is equivalent to reading both all the data in
the pipe’s buffer and all the data waiting to go into that pipe into a large temporary buffer
and discarding the buffer. Any writers that were previously pended become unpended.
Parameters
• pipe – Address of the pipe.
void k_pipe_buffer_flush(struct k_pipe *pipe)
Flush the pipe’s internal buffer.
This routine flushes the pipe’s internal buffer. This is equivalent to reading up to N bytes from
the pipe (where N is the size of the pipe’s buffer) into a temporary buffer and then discarding
that buffer. If there were writers previously pending, then some may unpend as they try to fill
up the pipe’s emptied buffer.
Parameters
• pipe – Address of the pipe.
struct k_pipe
#include <kernel.h> Pipe Structure
Public Members
size_t size
Buffer size
size_t bytes_used
Number of bytes used in buffer
size_t read_index
Where in buffer to read from
size_t write_index
Where in buffer to write
_wait_q_t readers
Reader wait queue
_wait_q_t writers
Writer wait queue
uint8_t flags
Wait queue Flags
3.1.4 Timing
Kernel Timing
Zephyr provides a robust and scalable timing framework to enable reporting and tracking of timed events
from hardware timing sources of arbitrary precision.
Time Units Kernel time is tracked in several units which are used for different purposes.
Real time values, typically specified in milliseconds or microseconds, are the default presentation of time
to application code. They have the advantages of being universally portable and pervasively understood,
though they may not match the precision of the underlying hardware perfectly.
The kernel presents a “cycle” count via the k_cycle_get_32() and k_cycle_get_64() APIs. The in-
tent is that this counter represents the fastest cycle counter that the operating system is able to present
to the user (for example, a CPU cycle counter) and that the read operation is very fast. The expec-
tation is that very sensitive application code might use this in a polling manner to achieve maximal
precision. The frequency of this counter is required to be steady over time, and is available from
sys_clock_hw_cycles_per_sec() (which on almost all platforms is a runtime constant that evaluates
to CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC).
For asynchronous timekeeping, the kernel defines a “ticks” concept. A “tick” is the internal count in
which the kernel does all its internal uptime and timeout bookkeeping. Interrupts are expected to be
delivered on tick boundaries to the extent practical, and no fractional ticks are tracked. The choice of
tick rate is configurable via CONFIG_SYS_CLOCK_TICKS_PER_SEC. Defaults on most hardware platforms
(ones that support setting arbitrary interrupt timeouts) are expected to be in the range of 10 kHz, with
software emulation platforms and legacy drivers using a more traditional 100 Hz value.
Conversion Zephyr provides an extensively enumerated conversion library with rounding control for
all time units. Any unit of “ms” (milliseconds), “us” (microseconds), “tick”, or “cyc” can be converted to
any other. Control of rounding is provided, and each conversion is available in “floor” (round down to
nearest output unit), “ceil” (round up) and “near” (round to nearest). Finally the output precision can
be specified as either 32 or 64 bits.
For example: k_ms_to_ticks_ceil32() will convert a millisecond input value to the next higher number
of ticks, returning a result truncated to 32 bits of precision; and k_cyc_to_us_floor64() will convert
a measured cycle count to an elapsed number of microseconds in a full 64 bits of precision. See the
reference documentation for the full enumeration of conversion routines.
On most platforms, where the various counter rates are integral multiples of each other and where the
output fits within a single word, these conversions expand to a 2-4 operation sequence, requiring full
precision only where actually required and requested.
Uptime The kernel tracks a system uptime count on behalf of the application. This is available at all
times via k_uptime_get() , which provides an uptime value in milliseconds since system boot. This is
expected to be the utility used by most portable application code.
The internal tracking, however, is as a 64 bit integer count of ticks. Apps with precise timing require-
ments (that are willing to do their own conversions to portable real time units) may access this with
k_uptime_ticks() .
Timeouts The Zephyr kernel provides many APIs with a “timeout” parameter. Conceptually, this indi-
cates the time at which an event will occur. For example:
• Kernel blocking operations like k_sem_take() or k_queue_get() may provide a timeout after
which the routine will return with an error code if no data is available.
• Kernel k_timer objects must specify delays for their duration and period.
• The kernel k_work_delayable API provides a timeout parameter indicating when a work queue
item will be added to the system queue.
All these values are specified using a k_timeout_t value. This is an opaque struct type that must be
initialized using one of a family of kernel timeout macros. The most common, K_MSEC , defines a time
in milliseconds after the current time (strictly: the time at which the kernel receives the timeout value).
Other options for timeout initialization follow the unit conventions described above: K_NSEC() , K_USEC ,
K_TICKS and K_CYC() specify timeout values that will expire after specified numbers of nanoseconds,
microseconds, ticks and cycles, respectively.
Precision of k_timeout_t values is configurable, with the default being 32 bits. Large uptime counts
in non-tick units will experience complicated rollover semantics, so it is expected that timing-sensitive
applications with long uptimes will be configured to use a 64 bit timeout type.
Finally, it is possible to specify timeouts as absolute times since system boot. A timeout initialized with
K_TIMEOUT_ABS_MS indicates a timeout that will expire after the system uptime reaches the specified
value. There are likewise nanosecond, microsecond, cycles and ticks variants of this API.
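A few illustrative uses of these constructors (a sketch; my_sem and the durations are placeholders):

/* Relative timeouts: expire a given duration after the call. */
k_sem_take(&my_sem, K_MSEC(50));     /* wait up to 50 milliseconds */
k_sem_take(&my_sem, K_TICKS(3));     /* wait up to 3 ticks */

/* Absolute timeout: expire when system uptime reaches 10 seconds. */
k_sem_take(&my_sem, K_TIMEOUT_ABS_MS(10000));

/* Special values: do not wait at all, or wait indefinitely. */
k_sem_take(&my_sem, K_NO_WAIT);
k_sem_take(&my_sem, K_FOREVER);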
Timing Internals
Timeout Queue All Zephyr k_timeout_t events specified using the API above are managed in a single,
global queue of events. Each event is stored in a double-linked list, with an attendant delta count in ticks
from the previous event. The action to take on an event is specified as a callback function pointer
provided by the subsystem requesting the event, along with a _timeout tracking struct that is expected
to be embedded within subsystem-defined data structures (for example: a wait_q struct, or a k_tid_t
thread struct).
Note that all variant units passed via a k_timeout_t are converted to ticks once on insertion into the
list. There are no multiple-conversion steps internal to the kernel, so precision is guaranteed at the tick
level no matter how many events exist or how long a timeout might be.
Note that the list structure means that the CPU work involved in managing large numbers of timeouts is
quadratic in the number of active timeouts. The API design of the timeout queue was intended to permit
a more scalable backend data structure, but no such implementation exists currently.
Timer Drivers Kernel timing at the tick level is driven by a timer driver with a comparatively simple
API.
• The driver is expected to be able to “announce” new ticks to the kernel via the
sys_clock_announce() call, which passes an integer number of ticks that have elapsed since the
last announce call (or system boot). These calls can occur at any time, but the driver is expected to
attempt to ensure (to the extent practical given interrupt latency interactions) that they occur near
tick boundaries (i.e. not “halfway through” a tick), and most importantly that they be correct over
time and subject to minimal skew vs. other counters and real world time.
• The driver is expected to provide a sys_clock_set_timeout() call to the kernel which indicates
how many ticks may elapse before the kernel must receive an announce call to trigger registered
timeouts. It is legal to announce new ticks before that moment (though they must be correct) but
delay after that will cause events to be missed. Note that the timeout value passed here is in a
delta from current time, but that does not absolve the driver of the requirement to provide ticks
at a steady rate over time. Naive implementations of this function are subject to bugs where the
fractional tick gets “reset” incorrectly and causes clock skew.
• The driver is expected to provide a sys_clock_elapsed() call which provides a current indica-
tion of how many ticks have elapsed (as compared to a real world clock) since the last call to
sys_clock_announce() , which the kernel needs to test newly arriving timeouts for expiration.
Note that a natural implementation of this API results in a “tickless” kernel, which receives and processes
timer interrupts only for registered events, relying on programmable hardware counters to provide ir-
regular interrupts. But a traditional, “ticked” or “dumb” counter driver can be trivially implemented
also:
• The driver can receive interrupts at a regular rate corresponding to the OS tick rate, calling
sys_clock_announce() with an argument of one each time.
• The driver can ignore calls to sys_clock_set_timeout() , as every tick will be announced regard-
less of timeout status.
• The driver can return zero for every call to sys_clock_elapsed() as no more than one tick can be
detected as having elapsed (because otherwise an interrupt would have been received).
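A minimal “ticked” driver along these lines might look like the following sketch. Hardware setup and
ISR registration are assumed, not shown; only the three kernel-facing pieces are illustrated.

/* Sketch of a "dumb" ticked timer driver: a periodic hardware
 * interrupt at CONFIG_SYS_CLOCK_TICKS_PER_SEC announces one tick
 * at a time.
 */
static void timer_isr(const void *arg)
{
    ARG_UNUSED(arg);
    sys_clock_announce(1);          /* exactly one tick has elapsed */
}

void sys_clock_set_timeout(int32_t ticks, bool idle)
{
    /* Ignored: every tick is announced regardless of timeout status. */
}

uint32_t sys_clock_elapsed(void)
{
    /* No fractional progress is observable between interrupts. */
    return 0;
}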
SMP Details In general, the timer API described above does not change when run in a multiproces-
sor context. The kernel will internally synchronize all access appropriately, and ensure that all critical
sections are small and minimal. But some notes are important to detail:
• Zephyr is agnostic about which CPU services timer interrupts. It is not illegal (though probably
undesirable in some circumstances) to have every timer interrupt handled on a single processor.
Existing SMP architectures implement symmetric timer drivers.
• The sys_clock_announce() call is expected to be globally synchronized at the driver level. The
kernel does not do any per-CPU tracking, and expects that if two timer interrupts fire near simulta-
neously, that only one will provide the current tick count to the timing subsystem. The other may
legally provide a tick count of zero if no ticks have elapsed. It should not “skip” the announce call
because of timeslicing requirements (see below).
• Some SMP hardware uses a single, global timer device, others use a per-CPU counter. The complex-
ity here (for example: ensuring counter synchronization between CPUs) is expected to be managed
by the driver, not the kernel.
• The next timeout value passed back to the driver via sys_clock_set_timeout() is done iden-
tically for every CPU. So by default, every CPU will see simultaneous timer interrupts for ev-
ery event, even though by definition only one of them should see a non-zero ticks argument to
sys_clock_announce() . This is probably a correct default for timing sensitive applications (be-
cause it minimizes the chance that an errant ISR or interrupt lock will delay a timeout), but may
be a performance problem in some cases. The current design expects that any such optimization is
the responsibility of the timer driver.
Time Slicing An auxiliary job of the timing subsystem is to provide tick counters to the scheduler that
allow implementation of time slicing of threads. A thread time-slice cannot be a timeout value, as it does
not reflect a global expiration but instead a per-CPU value that needs to be tracked independently on
each CPU in an SMP context.
Because there may be no other hardware available to drive timeslicing, Zephyr multiplexes the existing
timer driver. This means that the value passed to sys_clock_set_timeout() may be clamped to a
smaller value than the current next timeout when a time sliced thread is currently scheduled.
Subsystems that keep millisecond APIs In general, code like this will port just like application code
will. Millisecond values from the user may be treated any way the subsystem likes, and then converted
into kernel timeouts using K_MSEC() at the point where they are presented to the kernel.
Obviously this comes at the cost of not being able to use new features, like the higher precision timeout
constructors or absolute timeouts. But for many subsystems with simple needs, this may be acceptable.
One complexity is K_FOREVER . Subsystems that might have been able to accept this value to their mil-
lisecond API in the past no longer can, because it is no longer an integral type. Such code will need to
use a different, integer-valued token to represent “forever”. K_NO_WAIT has the same type-safety concern,
of course, but as it is (and has always been) simply a numerical zero, it has a natural porting path.
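A hypothetical subsystem keeping a millisecond API might therefore look like this sketch (the
MY_FOREVER token and my_subsys type are illustrative, not a real API):

#define MY_FOREVER (-1)   /* integer stand-in for "wait forever" */

int my_subsys_wait(struct my_subsys *obj, int32_t ms)
{
    /* Convert to a kernel timeout only at the kernel boundary. */
    k_timeout_t timeout = (ms == MY_FOREVER) ? K_FOREVER : K_MSEC(ms);

    return k_sem_take(&obj->sem, timeout);
}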
Subsystems using k_timeout_t Ideally, code that takes a “timeout” parameter specifying a time to
wait should be using the kernel native abstraction where possible. But k_timeout_t is opaque, and
needs to be converted before it can be inspected by an application.
Some conversions are simple. Code that needs to test for K_FOREVER can simply use the K_TIMEOUT_EQ()
macro to test the opaque struct for equality and take special action.
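For example (a sketch; the my_subsys type and its expires field are placeholders):

/* Detect the special "wait forever" value without inspecting the
 * opaque k_timeout_t representation.
 */
void my_subsys_start(struct my_subsys *obj, k_timeout_t timeout)
{
    if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
        /* no expiration bookkeeping needed */
        obj->expires = false;
        return;
    }
    ...
}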
The more complicated case is when the subsystem needs to take a timeout and loop, waiting for it to
finish while doing some processing that may require multiple blocking operations on underlying kernel
code. For example, consider this design:
void my_wait_for_event(struct my_subsys *obj, int32_t timeout_in_ms)
{
    while (true) {
        uint32_t start = k_uptime_get_32();

        if (is_event_complete(obj)) {
            return;
        }

        /* wait, then deduct the elapsed time from the remaining timeout */
        k_sem_take(&obj->sem, K_MSEC(timeout_in_ms));
        timeout_in_ms -= (int32_t)(k_uptime_get_32() - start);
    }
}
This code requires that the timeout value be inspected, which is no longer possible. For situations
like this, the new API provides an internal sys_clock_timeout_end_calc() routine that converts an
arbitrary timeout to the uptime value in ticks at which it will expire. So such a loop might look like:
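A sketch, reusing the hypothetical is_event_complete() helper and a subsystem semaphore:

void my_wait_for_event(struct my_subsys *obj, k_timeout_t timeout)
{
    /* Convert once, as close as possible to where the user created
     * the timeout, into an absolute expiry expressed in ticks.
     */
    uint64_t end = sys_clock_timeout_end_calc(timeout);

    while (k_uptime_ticks() < end) {
        if (is_event_complete(obj)) {
            return;
        }

        /* Wait no later than the computed absolute end time. */
        k_sem_take(&obj->sem, K_TIMEOUT_ABS_TICKS(end));
    }
}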
Note that sys_clock_timeout_end_calc() returns values in units of ticks, to prevent conversion alias-
ing, is always presented at 64 bit uptime precision to prevent rollover bugs, handles special K_FOREVER
naturally (as UINT64_MAX), and works identically for absolute timeouts as well as conventional ones.
But some care is still required for subsystems that use it. Delta timeouts must be interpreted relative
to a “current time”, and that time is the time of the call to sys_clock_timeout_end_calc(). The user,
however, expects it to be the time at which they passed the timeout to your API. Call this function
exactly once, as close as possible to the point where user code created the timeout. It should not be
used on a “stored” timeout value, and should never be called iteratively in a loop.
API Reference
group clock_apis
Clock APIs.
Defines
K_NO_WAIT
Generate null timeout delay.
This macro generates a timeout delay that instructs a kernel API not to wait if the requested
operation cannot be performed immediately.
Returns
Timeout delay value.
K_NSEC(t)
Generate timeout delay from nanoseconds.
This macro generates a timeout delay that instructs a kernel API to wait up to t nanoseconds
to perform the requested operation. Note that timer precision is limited to the tick rate, not
the requested value.
Parameters
• t – Duration in nanoseconds.
Returns
Timeout delay value.
K_USEC(t)
Generate timeout delay from microseconds.
This macro generates a timeout delay that instructs a kernel API to wait up to t microseconds
to perform the requested operation. Note that timer precision is limited to the tick rate, not
the requested value.
Parameters
• t – Duration in microseconds.
Returns
Timeout delay value.
K_CYC(t)
Generate timeout delay from cycles.
This macro generates a timeout delay that instructs a kernel API to wait up to t cycles to
perform the requested operation.
Parameters
• t – Duration in cycles.
Returns
Timeout delay value.
K_TICKS(t)
Generate timeout delay from system ticks.
This macro generates a timeout delay that instructs a kernel API to wait up to t ticks to perform
the requested operation.
Parameters
• t – Duration in system ticks.
Returns
Timeout delay value.
K_MSEC(ms)
Generate timeout delay from milliseconds.
This macro generates a timeout delay that instructs a kernel API to wait up to ms milliseconds
to perform the requested operation.
Parameters
• ms – Duration in milliseconds.
Returns
Timeout delay value.
K_SECONDS(s)
Generate timeout delay from seconds.
This macro generates a timeout delay that instructs a kernel API to wait up to s seconds to
perform the requested operation.
Parameters
• s – Duration in seconds.
Returns
Timeout delay value.
K_MINUTES(m)
Generate timeout delay from minutes.
This macro generates a timeout delay that instructs a kernel API to wait up to m minutes to
perform the requested operation.
Parameters
• m – Duration in minutes.
Returns
Timeout delay value.
K_HOURS(h)
Generate timeout delay from hours.
This macro generates a timeout delay that instructs a kernel API to wait up to h hours to
perform the requested operation.
Parameters
• h – Duration in hours.
Returns
Timeout delay value.
K_FOREVER
Generate infinite timeout delay.
This macro generates a timeout delay that instructs a kernel API to wait as long as necessary
to perform the requested operation.
Returns
Timeout delay value.
K_TICKS_FOREVER
K_TIMEOUT_EQ(a, b)
Compare timeouts for equality.
The k_timeout_t object is an opaque struct that should not be inspected by application code.
This macro exists so that users can test timeout objects for equality with known constants
(e.g. K_NO_WAIT and K_FOREVER) when implementing their own APIs in terms of Zephyr
timeout constants.
Returns
True if the timeout objects are identical
Typedefs
Functions
void sys_clock_idle_exit(void)
Timer idle exit notification.
This notifies the timer driver that the system is exiting the idle state, and allows it to do whatever
bookkeeping is needed to restore timer operation and compute elapsed ticks.
Note: Legacy timer drivers also use this opportunity to call back into sys_clock_announce()
to notify the kernel of expired ticks. This is allowed for compatibility, but not recommended.
The kernel will figure that out on its own.
Note: Not all system timer drivers have the capability of being disabled. The config option
CONFIG_SYSTEM_TIMER_HAS_DISABLE_SUPPORT can be used to check whether the system timer can
be disabled.
uint32_t sys_clock_cycle_get_32(void)
Hardware cycle counter.
Timer drivers are generally responsible for the system cycle counter as well as the tick
announcements. This function is generally called out of the architecture layer (see
arch_k_cycle_get_32()) to implement the cycle counter, though the user-facing API
is owned by the architecture, not the driver. The rate must match
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC.
Note: If the counter is clocked fast enough to wrap its full range within a few seconds
(i.e. CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC is greater than 50 MHz), then it is recommended
to also implement sys_clock_cycle_get_64().
Returns
The current cycle time. This should count up monotonically through the full 32
bit space, wrapping at 0xffffffff. Hardware with fewer bits of precision in the
timer is expected to synthesize a 32 bit count.
uint64_t sys_clock_cycle_get_64(void)
64 bit hardware cycle counter
As for sys_clock_cycle_get_32(), but with a 64 bit return value. Not all hardware
has 64 bit counters. This function need only be implemented if
CONFIG_TIMER_HAS_64BIT_CYCLE_COUNTER is set.
Note: If the counter is clocked fast enough for sys_clock_cycle_get_32() to wrap its full
range within a few seconds (i.e. CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC is greater than
50 MHz), then it is recommended to implement this API.
Returns
The current cycle time. This should count up monotonically through the full 64
bit space, wrapping at 2^64-1. Hardware with fewer bits of precision in the
timer is generally not expected to implement this API.
int64_t k_uptime_ticks(void)
Get system uptime, in system ticks.
This routine returns the elapsed time since the system booted, in ticks (c.f.
CONFIG_SYS_CLOCK_TICKS_PER_SEC ), which is the fundamental unit of resolution of kernel
timekeeping.
Returns
Current uptime in ticks.
static inline int64_t k_uptime_get(void)
Get system uptime.
This routine returns the elapsed time since the system booted, in milliseconds.
Note: While this function returns time in milliseconds, it does not mean it has millisecond
resolution. The actual resolution depends on the CONFIG_SYS_CLOCK_TICKS_PER_SEC config
option.
Returns
Current uptime in milliseconds.
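The relationship between tick rate and millisecond resolution can be illustrated with a short sketch. The tick rate below is an assumed example value, not a Zephyr default:

```c
#include <stdint.h>

/* Assumed tick rate for illustration; the actual value comes from
 * CONFIG_SYS_CLOCK_TICKS_PER_SEC and varies by platform. */
#define TICKS_PER_SEC 100

/* With 100 ticks/s, each tick is 10 ms, so millisecond uptime values
 * derived from ticks advance in 10 ms steps: that step size is the
 * real resolution of k_uptime_get(), whatever unit it reports. */
static int64_t ticks_to_ms(int64_t ticks)
{
    return (ticks * 1000) / TICKS_PER_SEC;
}
```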
static inline uint32_t k_uptime_get_32(void)
Get system uptime (32-bit version).
This routine returns the lower 32 bits of the system uptime in milliseconds.
Note: While this function returns time in milliseconds, it does not mean it has millisecond
resolution. The actual resolution depends on the CONFIG_SYS_CLOCK_TICKS_PER_SEC config
option.
Returns
The low 32 bits of the current uptime, in milliseconds.
static inline uint64_t k_cycle_get_64(void)
Read the 64-bit hardware clock.
This routine returns the current time in 64 bits, as measured by the system’s hardware clock,
if available.
See also:
CONFIG_TIMER_HAS_64BIT_CYCLE_COUNTER
Returns
Current hardware clock up-counter (in cycles).
struct k_timeout_t
#include <sys_clock.h> Kernel timeout type.
Timeout arguments presented to kernel APIs are stored in this opaque type, which
is capable of representing times in various formats and units. It should be constructed
from application data using one of the macros defined for this purpose (e.g.
K_MSEC(), K_TIMEOUT_ABS_TICKS(), etc.), or be one of the two constants K_NO_WAIT
or K_FOREVER. Applications should not inspect the internal data once constructed. Timeout
values may be compared for equality with the K_TIMEOUT_EQ() macro.
Timers
A timer is a kernel object that measures the passage of time using the kernel’s system clock. When
a timer’s specified time limit is reached it can perform an application-defined action, or it can simply
record the expiration and wait for the application to read its status.
• Concepts
• Implementation
– Defining a Timer
– Using a Timer Expiry Function
– Reading Timer Status
Concepts Any number of timers can be defined (limited only by available RAM). Each timer is
referenced by its memory address.
A timer has the following key properties:
• A duration specifying the time interval before the timer expires for the first time. This is a
k_timeout_t value that may be initialized via different units.
• A period specifying the time interval between all timer expirations after the first one, also a
k_timeout_t. It must be non-negative. A period of K_NO_WAIT (i.e. zero) or K_FOREVER means
that the timer is a one-shot timer that stops after a single expiration. (For example, if a timer
is started with a duration of 200 ms and a period of 75 ms, it will first expire after 200 ms and
then every 75 ms after that.)
• An expiry function that is executed each time the timer expires. The function is executed by the
system clock interrupt handler. If no expiry function is required a NULL function can be specified.
• A stop function that is executed if the timer is stopped prematurely while running. The function
is executed by the thread that stops the timer. If no stop function is required a NULL function can
be specified.
• A status value that indicates how many times the timer has expired since the status value was last
read.
A timer must be initialized before it can be used. This specifies its expiry function and stop function
values, sets the timer’s status to zero, and puts the timer into the stopped state.
A timer is started by specifying a duration and a period. The timer’s status is reset to zero, then the
timer enters the running state and begins counting down towards expiry.
Note that the timer’s duration and period parameters specify minimum delays that will elapse. Because
of internal system timer precision (and potentially runtime interactions like interrupt delay) it is possible
that more time may have passed as measured by reads from the relevant system time APIs. But at least
this much time is guaranteed to have elapsed.
When a running timer expires, its status is incremented and the timer executes its expiry function, if one
exists; if a thread is waiting on the timer, it is unblocked. If the timer’s period is zero, the timer enters
the stopped state; otherwise the timer restarts with a new duration equal to its period.
A running timer can be stopped in mid-countdown, if desired. The timer’s status is left unchanged, then
the timer enters the stopped state and executes its stop function, if one exists. If a thread is waiting on
the timer, it is unblocked. Attempting to stop a non-running timer is permitted, but has no effect on the
timer since it is already stopped.
A running timer can be restarted in mid-countdown, if desired. The timer’s status is reset to zero, then
the timer begins counting down using the new duration and period values specified by the caller. If a
thread is waiting on the timer, it continues waiting.
A timer’s status can be read directly at any time to determine how many times the timer has expired since
its status was last read. Reading a timer’s status resets its value to zero. The amount of time remaining
before the timer expires can also be read; a value of zero indicates that the timer is stopped.
A thread may read a timer’s status indirectly by synchronizing with the timer. This blocks the thread
until the timer’s status is non-zero (indicating that it has expired at least once) or the timer is stopped; if
the timer status is already non-zero or the timer is already stopped the thread continues without waiting.
The synchronization operation returns the timer’s status and resets it to zero.
Note: Only a single user should examine the status of any given timer, since reading the status (directly
or indirectly) changes its value. Similarly, only a single thread at a time should synchronize with a given
timer. ISRs are not permitted to synchronize with timers, since ISRs are not allowed to block.
Implementation
Defining a Timer A timer is defined using a variable of type k_timer. It must then be initialized by
calling k_timer_init() .
The following code defines and initializes a timer.
Alternatively, a timer can be defined and initialized at compile time by calling K_TIMER_DEFINE .
The following code has the same effect as the code segment above.
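Both forms can be sketched as follows. Since these pages’ examples cannot run outside the kernel, the struct and macro here are simplified stand-ins that mirror the documented semantics (expiry/stop functions recorded, status zeroed); a real application gets k_timer, k_timer_init(), and K_TIMER_DEFINE from Zephyr’s kernel headers.

```c
#include <stddef.h>

/* Simplified stand-ins mirroring the documented k_timer semantics;
 * real code uses the definitions from <zephyr/kernel.h>. */
struct k_timer {
    void (*expiry_fn)(struct k_timer *);
    void (*stop_fn)(struct k_timer *);
    unsigned int status;
};

static void k_timer_init(struct k_timer *timer,
                         void (*expiry_fn)(struct k_timer *),
                         void (*stop_fn)(struct k_timer *))
{
    timer->expiry_fn = expiry_fn;
    timer->stop_fn = stop_fn;
    timer->status = 0;      /* documented: status starts at zero */
}

/* Compile-time analogue of K_TIMER_DEFINE(name, expiry_fn, stop_fn) */
#define K_TIMER_DEFINE(name, expiry_fn, stop_fn) \
    struct k_timer name = { expiry_fn, stop_fn, 0 }

static void my_expiry_function(struct k_timer *timer_id) { (void)timer_id; }

/* Run-time form: define the timer, then initialize it by calling
 * k_timer_init(&my_timer, my_expiry_function, NULL); */
struct k_timer my_timer;

/* Compile-time form, equivalent to the run-time form above: */
K_TIMER_DEFINE(my_static_timer, my_expiry_function, NULL);
```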
Using a Timer Expiry Function The following code uses a timer to perform a non-trivial action on a
periodic basis. Since the required work cannot be done at interrupt level, the timer’s expiry function
submits a work item to the system workqueue, whose thread performs the work.
K_WORK_DEFINE(my_work, my_work_handler);
...
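The shape of this pattern can be sketched self-containedly as follows. The one-slot pending queue and drain function are hypothetical stand-ins for the system workqueue; in real code the expiry function calls k_work_submit() on a k_work item defined with K_WORK_DEFINE, and the workqueue thread invokes my_work_handler().

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical one-slot "workqueue": the expiry function only records
 * that work is pending; a lower-priority context drains it later. */
static void (*pending_work)(void);

static void work_submit(void (*handler)(void))  /* cf. k_work_submit() */
{
    pending_work = handler;
}

static bool workqueue_drain(void)  /* cf. the system workqueue thread */
{
    if (pending_work == NULL) {
        return false;
    }
    void (*handler)(void) = pending_work;
    pending_work = NULL;
    handler();                  /* heavy work runs at thread level */
    return true;
}

static int work_done_count;
static void my_work_handler(void)  /* the non-trivial periodic work */
{
    work_done_count++;
}

/* Timer expiry function: runs in interrupt context, so it must only
 * submit the work, never perform it directly. */
static void my_timer_handler(void)
{
    work_submit(my_work_handler);
}
```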
Reading Timer Status The following code reads a timer’s status directly to determine whether the timer
has expired or not.
...
/* do work */
...
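The read-and-reset behavior of a timer’s status can be sketched with a stand-in that mirrors the documented semantics of k_timer_status_get(); the helpers below are hypothetical, and real code calls the kernel API on a struct k_timer.

```c
#include <stdint.h>

/* Stand-in status counter; in real code this lives inside
 * struct k_timer and is incremented on each expiry. */
static uint32_t timer_status;

static void fake_expire(void)       /* cf. a timer expiry event */
{
    timer_status++;
}

/* Mirrors documented k_timer_status_get() semantics: return how many
 * times the timer has expired since the last read, then reset to 0. */
static uint32_t status_get(void)
{
    uint32_t s = timer_status;
    timer_status = 0;
    return s;
}
```

This read-and-reset behavior is why only a single user should examine any given timer’s status.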
Using Timer Status Synchronization The following code performs timer status synchronization to
allow a thread to do useful work while ensuring that a pair of protocol operations are separated by the
specified time interval.
...
/* do other work */
...
Note: If the thread had no other work to do it could simply sleep between the two protocol operations,
without using a timer.
Suggested Uses Use a timer to initiate an asynchronous operation after a specified amount of time.
Use a timer to determine whether or not a specified amount of time has elapsed. In particular, timers
should be used when higher precision and/or unit control is required than that afforded by the simpler
k_sleep() and k_usleep() calls.
Use a timer to perform other work while carrying out operations involving time limits.
Note: If a thread needs to measure the time required to perform an operation it can read the system
clock or the hardware clock directly, rather than using a timer.
API Reference
group timer_apis
Defines
Parameters
• name – Name of the timer variable.
• expiry_fn – Function to invoke each time the timer expires.
• stop_fn – Function to invoke if the timer is stopped while running.
Typedefs
Functions
Note: The stop handler has to be callable from ISRs if k_timer_stop is to be called from ISRs.
Parameters
• timer – Address of timer.
3.1.5 Other
Atomic Services
An atomic variable is one that can be read and modified by threads and ISRs in an uninterruptible manner.
It is 32-bit on 32-bit machines and 64-bit on 64-bit machines.
• Concepts
• Implementation
– Defining an Atomic Variable
– Manipulating an Atomic Variable
– Manipulating an Array of Atomic Variables
– Memory Ordering
• Suggested Uses
• Configuration Options
• API Reference
Concepts Any number of atomic variables can be defined (limited only by available RAM).
Using the kernel’s atomic APIs to manipulate an atomic variable guarantees that the desired operation
occurs correctly, even if higher priority contexts also manipulate the same variable.
The kernel also supports the atomic manipulation of a single bit in an array of atomic variables.
Implementation
Defining an Atomic Variable An atomic variable is defined using a variable of type atomic_t.
By default an atomic variable is initialized to zero. However, it can be given a different value using
ATOMIC_INIT :
Manipulating an Atomic Variable An atomic variable is manipulated using the APIs listed at the end
of this section.
The following code shows how an atomic variable can be used to keep track of the number of times a
function has been invoked. Since the count is incremented atomically, there is no risk that it will become
corrupted in mid-increment if a thread calling the function is interrupted by a higher priority context
that also calls the routine.
atomic_t call_count;
int call_counting_routine(void)
{
/* increment invocation counter */
atomic_inc(&call_count);
/* do rest of routine's processing */
...
}
Manipulating an Array of Atomic Variables An array of 32-bit atomic variables can be defined
in the conventional manner. However, you can also define an N-bit array of atomic variables using
ATOMIC_DEFINE .
A single bit in an array of atomic variables can be manipulated using the APIs listed at the end of this
section that end with _bit().
The following code shows how a set of 200 flag bits can be implemented using an array of atomic
variables.
ATOMIC_DEFINE(flag_bits, NUM_FLAG_BITS);
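A portable analogue of the _bit() APIs can be sketched with C11 atomics. The helper names below are hypothetical; the word-count computation mirrors what ATOMIC_BITMAP_SIZE() and ATOMIC_DEFINE() do.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_FLAG_BITS 200
#define BITS_PER_WORD 32

/* Enough 32-bit words for 200 bits, cf. ATOMIC_DEFINE(flag_bits, 200) */
static _Atomic uint32_t flag_words[(NUM_FLAG_BITS + BITS_PER_WORD - 1)
                                   / BITS_PER_WORD];

/* Cf. atomic_test_and_set_bit(): set the bit, report its prior value. */
static bool flag_test_and_set(int bit)
{
    uint32_t mask = 1u << (bit % BITS_PER_WORD);
    uint32_t old = atomic_fetch_or(&flag_words[bit / BITS_PER_WORD], mask);
    return (old & mask) != 0;
}

/* Cf. atomic_test_bit(): read the bit without modifying it. */
static bool flag_test(int bit)
{
    uint32_t mask = 1u << (bit % BITS_PER_WORD);
    return (atomic_load(&flag_words[bit / BITS_PER_WORD]) & mask) != 0;
}
```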
Memory Ordering For consistency and correctness, all Zephyr atomic APIs are expected to include a
full memory barrier (in the sense of e.g. “serializing” instructions on x86, “DMB” on ARM, or a
“sequentially consistent” operation as defined by the C++ memory model) where needed by hardware to
guarantee a reliable picture across contexts. Any architecture-specific implementations are responsible
for ensuring this behavior.
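In C11 terms, this means Zephyr’s atomics behave like sequentially consistent operations. A minimal illustration of the resulting guarantee (a writer publishes data before setting a flag, so any reader that observes the flag also observes the data):

```c
#include <stdatomic.h>

static _Atomic int ready;
static int payload;

/* Writer: publish payload, then set the flag. With seq_cst ordering
 * (the C11 default, matching the barrier behavior described above),
 * a reader that observes ready == 1 also observes payload == 42. */
static void publish(void)
{
    payload = 42;
    atomic_store(&ready, 1);    /* full barrier, cf. atomic_set() */
}

static int consume(void)
{
    if (atomic_load(&ready) == 1) {   /* cf. atomic_get() */
        return payload;
    }
    return -1;                  /* not published yet */
}
```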
Suggested Uses Use an atomic variable to implement critical section processing that only requires the
manipulation of a single 32-bit value.
Use multiple atomic variables to implement critical section processing on a set of flag bits in a bit array
longer than 32 bits.
Note: Using atomic variables is typically far more efficient than using other techniques to implement
critical sections such as using a mutex or locking interrupts.
API Reference
Important: All atomic services APIs can be used by both threads and ISRs.
group atomic_apis
Defines
ATOMIC_INIT(i)
Initialize an atomic variable.
This macro can be used to initialize an atomic variable. For example,
Parameters
• i – Value to assign to atomic variable.
ATOMIC_PTR_INIT(p)
Initialize an atomic pointer variable.
This macro can be used to initialize an atomic pointer variable. For example,
atomic_ptr_t my_ptr = ATOMIC_PTR_INIT(&data);
Parameters
• p – Pointer value to assign to atomic pointer variable.
ATOMIC_BITMAP_SIZE(num_bits)
This macro computes the number of atomic variables necessary to represent a bitmap with
num_bits.
Parameters
• num_bits – Number of bits.
ATOMIC_DEFINE(name, num_bits)
Define an array of atomic variables.
This macro defines an array of atomic variables containing at least num_bits bits.
Note: If used from file scope, the bits of the array are initialized to zero; if used from within
a function, the bits are left uninitialized.
Parameters
• name – Name of array of atomic variables.
• num_bits – Number of bits needed.
Functions
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable or array.
• bit – Bit number (starting from 0).
Returns
true if the bit was set, false if it wasn’t.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable or array.
• bit – Bit number (starting from 0).
Returns
true if the bit was set, false if it wasn’t.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable or array.
• bit – Bit number (starting from 0).
Returns
true if the bit was set, false if it wasn’t.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable or array.
• bit – Bit number (starting from 0).
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable or array.
• bit – Bit number (starting from 0).
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable or array.
• bit – Bit number (starting from 0).
• val – true for 1, false for 0.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• old_value – Original value to compare against.
• new_value – New value to store.
Returns
true if new_value is written, false otherwise.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• old_value – Original value to compare against.
• new_value – New value to store.
Returns
true if new_value is written, false otherwise.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to add.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to subtract.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
Returns
Value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of pointer variable.
Returns
Value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to write to target.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to OR.
Returns
Previous value of target.
This routine atomically sets target to the bitwise exclusive OR (XOR) of target and value.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to XOR
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to AND.
Returns
Previous value of target.
Note: As for all atomic APIs, includes a full/sequentially-consistent memory barrier (where
applicable).
Parameters
• target – Address of atomic variable.
• value – Value to NAND.
Returns
Previous value of target.
The kernel allows threads to use floating point registers on board configurations that support these
registers.
Note: Floating point services are currently available only for boards based on ARM Cortex-M SoCs
supporting the Floating Point Extension, the Intel x86 architecture, the SPARC architecture and ARCv2
SoCs supporting the Floating Point Extension. The services provided are architecture specific.
The kernel does not support the use of floating point registers by ISRs.
• Concepts
– No FP registers mode
– Unshared FP registers mode
– Shared FP registers mode
• Implementation
– Performing Floating Point Arithmetic
• Suggested Uses
• Configuration Options
• API Reference
Concepts The kernel can be configured to provide only the floating point services required by an
application. Three modes of operation are supported, which are described below. In addition, the kernel’s
support for the SSE registers can be included or omitted, as desired.
No FP registers mode This mode is used when the application has no threads that use floating point
registers. It is the kernel’s default floating point services mode.
If a thread uses any floating point register, the kernel generates a fatal error condition and aborts the
thread.
Unshared FP registers mode This mode is used when the application has only a single thread that
uses floating point registers.
On x86 platforms, the kernel initializes the floating point registers so they can be used by any thread
(initialization is skipped on ARM Cortex-M platforms and ARCv2 platforms). The floating point registers
are left unchanged whenever a context switch occurs.
Note: The behavior is undefined if two or more threads attempt to use the floating point registers, as
the kernel does not attempt to detect (or prevent) multiple threads from using these registers.
Shared FP registers mode This mode is used when the application has two or more threads that use
floating point registers. Depending upon the underlying CPU architecture, the kernel supports one or
more of the following thread sub-classes:
• non-user: A thread that cannot use any floating point registers
• FPU user: A thread that can use the standard floating point registers
• SSE user: A thread that can use both the standard floating point registers and SSE registers
The kernel initializes and enables access to the floating point registers, so they can be used by any thread,
then saves and restores these registers during context switches to ensure the computations performed by
each FPU user or SSE user are not impacted by the computations performed by the other users.
Note: The Shared FP registers mode is the default Floating Point Services mode in ARM Cortex-M.
On the ARM Cortex-M architecture with the Floating Point Extension, the kernel treats all threads as FPU
users when shared FP registers mode is enabled. This means that any thread is allowed to access the
floating point registers. The ARM kernel automatically detects that a given thread is using the floating
point registers the first time the thread accesses them.
Pretag a thread that intends to use the FP registers by using one of the techniques listed below.
• A statically-created ARM thread can be pretagged by passing the K_FP_REGS option to
K_THREAD_DEFINE .
• A dynamically-created ARM thread can be pretagged by passing the K_FP_REGS option to
k_thread_create() .
Pretagging a thread with the K_FP_REGS option instructs the MPU-based stack protection mechanism to
properly configure the size of the thread’s guard region to always guarantee stack overflow detection,
and enable lazy stacking for the given thread upon thread creation.
During thread context switching the ARM kernel saves the callee-saved floating point registers, if the
switched-out thread has been using them. Additionally, the caller-saved floating point registers are saved
on the thread’s stack. If the switched-in thread has been using the floating point registers, the kernel
restores the callee-saved FP registers of the switched-in thread and the caller-saved FP context is restored
from the thread’s stack. Thus, the kernel does not save or restore the FP context of threads that are not
using the FP registers.
Each thread that intends to use the floating point registers must provide an extra 72 bytes of stack space
where the callee-saved FP context can be saved.
Lazy stacking is currently enabled in Zephyr applications on the ARM Cortex-M architecture, minimizing
interrupt latency when the floating point context is active.
When the MPU-based stack protection mechanism is not enabled, lazy stacking is always active in the
Zephyr application. When the MPU-based stack protection is enabled, the following rules apply with
respect to lazy stacking:
• Lazy stacking is activated by default on threads that are pretagged with K_FP_REGS
• Lazy stacking is activated dynamically on threads that are not pretagged with K_FP_REGS , as soon
as the kernel detects that they are using the floating point registers.
If an ARM thread does not require use of the floating point registers any more, it can call
k_float_disable(). This instructs the kernel not to save or restore its FP context during thread context
switching.
ARM64 architecture
Note: The Shared FP registers mode is the default Floating Point Services mode on ARM64. The
compiler is free to optimize code using FP/SIMD registers, and library functions such as memcpy are
known to make use of them.
On the ARM64 (Aarch64) architecture the kernel treats each thread as a FPU user on a case-by-case
basis. A “lazy save” algorithm is used during context switching which updates the floating point registers
only when it is absolutely necessary. For example, the registers are not saved when switching from an
FPU user to a non-user thread, and then back to the original FPU user.
FPU register usage by ISRs is supported although not recommended. When an ISR uses floating point
or SIMD registers, the access is trapped, the current FPU user context is saved in the thread object,
and the ISR is resumed with interrupts disabled so as to prevent another IRQ from interrupting the ISR
and potentially requesting FPU usage. Because ISRs don’t have a persistent register context, there is no
provision for saving an ISR’s FPU context either, hence the IRQ disabling.
Each thread object becomes 512 bytes larger when Shared FP registers mode is enabled.
ARCv2 architecture On the ARCv2 architecture, the kernel treats each thread as a non-user or FPU
user and the thread must be tagged by one of the following techniques.
• A statically-created ARC thread can be tagged by passing the K_FP_REGS option to
K_THREAD_DEFINE .
• A dynamically-created ARC thread can be tagged by passing the K_FP_REGS to
k_thread_create() .
If an ARC thread does not require use of the floating point registers any more, it can call
k_float_disable(). This instructs the kernel not to save or restore its FP context during thread context
switching.
During thread context switching the ARC kernel saves the callee-saved floating point registers, if the
switched-out thread has been using them. Additionally, the caller-saved floating point registers are saved
on the thread’s stack. If the switched-in thread has been using the floating point registers, the kernel
restores the callee-saved FP registers of the switched-in thread and the caller-saved FP context is restored
from the thread’s stack. Thus, the kernel does not save or restore the FP context of threads that are not
using the FP registers. An extra 16 bytes (single floating point hardware) or 32 bytes (double floating
point hardware) of stack space is required to load and store floating point registers.
RISC-V architecture On the RISC-V architecture the kernel treats each thread as an FPU user on a
case-by-case basis with the FPU access allocated on demand. A “lazy save” algorithm is used during
context switching which updates the floating point registers only when it is absolutely necessary. For
example, the FPU registers are not saved when switching from an FPU user to a non-user thread (or an
FPU user that doesn’t touch the FPU during its scheduling slot), and then back to the original FPU user.
FPU register usage by ISRs is supported although not recommended. When an ISR uses floating point
or SIMD registers, the access is trapped, the current FPU user context is saved in the thread object,
and the ISR is resumed with interrupts disabled so as to prevent another IRQ from interrupting the ISR
and potentially requesting FPU usage. Because ISRs don’t have a persistent register context, there is no
provision for saving an ISR’s FPU context either, hence the IRQ disabling.
As an optimization, the FPU context is preemptively restored upon scheduling back an “active FPU user”
thread that had its FPU context saved away due to FPU usage by another thread. Active FPU users are so
designated when they make the FPU state “dirty” during their most recent scheduling slot before being
scheduled out. So if a thread doesn’t modify the FPU state within its scheduling slot and another thread
claims the FPU for itself afterwards then that first thread will be subjected to the on-demand regime and
won’t have its FPU context restored until it attempts to access it again. But if that thread does modify
the FPU before being scheduled out then it is likely to continue using it when scheduled back in and
preemptively restoring its FPU context saves on the exception trap overhead that would occur otherwise.
Each thread object becomes 136 bytes (single-precision floating point hardware) or 264 bytes (double-
precision floating point hardware) larger when Shared FP registers mode is enabled.
SPARC architecture On the SPARC architecture, the kernel treats each thread as a non-user or FPU
user and the thread must be tagged by one of the following techniques:
• A statically-created thread can be tagged by passing the K_FP_REGS option to K_THREAD_DEFINE .
• A dynamically-created thread can be tagged by passing the K_FP_REGS option to k_thread_create() .
During thread context switch at exit from interrupt handler, the SPARC kernel saves all floating point
registers, if the FPU was enabled in the switched-out thread. Floating point registers are saved on the
thread’s stack. Floating point registers are restored when a thread context is restored iff they were saved
at the context save. Saving and restoring of the floating point registers is synchronous and thus not lazy.
The FPU is always disabled when an ISR is called (independent of CONFIG_FPU_SHARING).
Floating point disabling with k_float_disable() is not implemented.
When CONFIG_FPU_SHARING is used, then 136 bytes of stack space is required for each FPU user thread
to load and store floating point registers. No extra stack is required if CONFIG_FPU_SHARING is not used.
x86 architecture On the x86 architecture the kernel treats each thread as a non-user, FPU user or SSE
user on a case-by-case basis. A “lazy save” algorithm is used during context switching which updates
the floating point registers only when it is absolutely necessary. For example, the registers are not saved
when switching from an FPU user to a non-user thread, and then back to the original FPU user. The
following table indicates the amount of additional stack space a thread must provide so the registers can
be saved properly.
The x86 kernel automatically detects that a given thread is using the floating point registers the first
time the thread accesses them. The thread is tagged as an SSE user if the kernel has been configured to
support the SSE registers, or as an FPU user if the SSE registers are not supported. If this would result
in a thread that is an FPU user being tagged as an SSE user, or if the application wants to avoid the
exception handling overhead involved in auto-tagging threads, it is possible to pretag a thread using one
of the techniques listed below.
• A statically-created x86 thread can be pretagged by passing the K_FP_REGS or K_SSE_REGS option
to K_THREAD_DEFINE .
• A dynamically-created x86 thread can be pretagged by passing the K_FP_REGS or K_SSE_REGS
option to k_thread_create() .
• An already-created x86 thread can pretag itself once it has started by passing the K_FP_REGS or
K_SSE_REGS option to k_float_enable().
If an x86 thread uses the floating point registers infrequently it can call k_float_disable() to remove
its tagging as an FPU user or SSE user. This eliminates the need for the kernel to take steps to preserve
the contents of the floating point registers during context switches when there is no need to do so. When
the thread again needs to use the floating point registers it can re-tag itself as an FPU user or SSE user
by calling k_float_enable().
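The tagging flow described above can be sketched as follows (the worker function and its workload are illustrative; k_float_enable() and k_float_disable() are the APIs named in this section):

```c
#include <zephyr/kernel.h>

/* Illustrative x86 worker that tags itself as an FPU user only while
 * it actually needs the floating point registers, avoiding FP context
 * save/restore overhead the rest of the time.
 */
void fp_worker(void *p1, void *p2, void *p3)
{
    /* Pretag the current thread as an FPU user */
    k_float_enable(k_current_get(), K_FP_REGS);

    /* ... perform floating point work ... */

    /* Drop the tagging so context switches no longer preserve FP state */
    k_float_disable(k_current_get());
}
```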
Implementation
Performing Floating Point Arithmetic No special coding is required for a thread to use floating point
arithmetic if the kernel is properly configured.
The following code shows how a routine can use floating point arithmetic to avoid overflow issues when
computing the average of a series of integer values.
sum = 0.0;
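The snippet above is truncated in this rendering; a complete routine along these lines might look like the following (a sketch; the function name average and its signature are illustrative):

```c
/* Average a series of integers. Accumulating in a double avoids
 * overflowing an integer running sum; adding 0.5 rounds the result
 * to the nearest integer.
 */
int average(int *values, int n)
{
    double sum = 0.0;

    for (int i = 0; i < n; i++) {
        sum += values[i];
    }
    return (int)((sum / n) + 0.5);
}
```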
Suggested Uses Use the kernel floating point services when an application needs to perform floating
point operations.
Configuration Options To configure unshared FP registers mode, enable the CONFIG_FPU configuration
option and leave the CONFIG_FPU_SHARING configuration option disabled.
To configure shared FP registers mode, enable both the CONFIG_FPU configuration option and the
CONFIG_FPU_SHARING configuration option. Also, ensure that any thread that uses the floating point reg-
isters has sufficient added stack space for saving floating point register values during context switches,
as described above.
For x86, use the CONFIG_X86_SSE configuration option to enable support for SSEx instructions.
API Reference
group float_apis
Version
Kernel version handling and APIs related to kernel version being used.
API Reference
uint32_t sys_kernel_version_get(void)
Return the kernel version of the present build.
The kernel version is a four-byte value, whose format is described in the file “kernel_version.h”.
Returns
kernel version
SYS_KERNEL_VER_MAJOR(ver)
SYS_KERNEL_VER_MINOR(ver)
SYS_KERNEL_VER_PATCHLEVEL(ver)
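As a sketch of how these macros combine with sys_kernel_version_get() (the function name show_kernel_version is illustrative; printk is assumed available):

```c
#include <zephyr/kernel.h>
#include <zephyr/kernel_version.h>
#include <zephyr/sys/printk.h>

void show_kernel_version(void)
{
    uint32_t ver = sys_kernel_version_get();

    /* Decompose the packed four-byte version word into its fields */
    printk("Zephyr kernel %u.%u.%u\n",
           SYS_KERNEL_VER_MAJOR(ver),
           SYS_KERNEL_VER_MINOR(ver),
           SYS_KERNEL_VER_PATCHLEVEL(ver));
}
```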
Fatal Errors
Software Errors Triggered in Source Code Zephyr provides several methods for inducing fatal error
conditions through either build-time checks, conditionally compiled assertions, or deliberately invoked
panic or oops conditions.
Runtime Assertions Zephyr provides some macros to perform runtime assertions which may be con-
ditionally compiled. Their definitions may be found in include/zephyr/sys/__assert.h.
Assertions are enabled by setting the __ASSERT_ON preprocessor symbol to a non-zero value. There are
two ways to do this:
• Use the CONFIG_ASSERT and CONFIG_ASSERT_LEVEL kconfig options.
• Add -D__ASSERT_ON=<level> to the project’s CFLAGS, either on the build command line or in a
CMakeLists.txt.
The __ASSERT_ON method takes precedence over the kconfig option if both are used.
Specifying an assertion level of 1 causes the compiler to issue warnings that the kernel contains debug-
type __ASSERT() statements; this reminder is issued since assertion code is not normally present in a
final product. Specifying assertion level 2 suppresses these warnings.
Assertions are enabled by default when running Zephyr test cases, as configured by the CONFIG_TEST
option.
The policy for what to do when encountering a failed assertion is controlled by the implementation of
assert_post_action(). Zephyr provides a default implementation with weak linkage which invokes a
kernel oops if the thread that failed the assertion was running in user mode, and a kernel panic otherwise.
__ASSERT() The __ASSERT() macro can be used inside kernel and application code to perform optional
runtime checks which will induce a fatal error if the check does not pass. The macro takes a string
message which will be printed to provide context to the assertion. In addition, the kernel will print a
text representation of the expression code that was evaluated, and the file and line number where the
assertion can be found.
For example:
If at runtime foo had some unexpected value, the error produced may look like the following:
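A sketch of such a check (the variable foo and the expected value 42 are hypothetical):

```c
/* Fails fatally at runtime if foo does not hold the expected value;
 * the message is formatted printf()-style with the extra arguments.
 */
__ASSERT(foo == 42, "Invalid value of foo: %d", foo);
```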
__ASSERT_EVAL() The __ASSERT_EVAL() macro can also be used inside kernel and application code,
with special semantics for the evaluation of its arguments.
It makes use of the __ASSERT() macro, but has some extra flexibility. It allows the developer to specify
different actions depending whether the __ASSERT() macro is enabled or not. This can be particularly
useful to prevent the compiler from generating comments (errors, warnings or remarks) about variables
that are only used with __ASSERT() being assigned a value, but otherwise unused when the __ASSERT()
macro is disabled.
Consider the following example:
int x;
x = foo();
__ASSERT(x != 0, "foo() returned zero!");
If __ASSERT() is disabled, then ‘x’ is assigned a value, but never used. This type of situation can be
resolved using the __ASSERT_EVAL() macro.
The first parameter tells __ASSERT_EVAL() what to do if __ASSERT() is disabled. The second parameter
tells __ASSERT_EVAL() what to do if __ASSERT() is enabled. The third and fourth parameters are the
parameters it passes to __ASSERT().
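Applied to the example above, a sketch might look like:

```c
__ASSERT_EVAL((void) foo(),
              int x = foo(),
              x != 0,
              "foo() returned zero!");
```

If __ASSERT() is disabled, foo() is still evaluated for its side effects but no variable is assigned; if enabled, x receives the return value and the assertion checks it.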
__ASSERT_NO_MSG() The __ASSERT_NO_MSG() macro can be used to perform an assertion that re-
ports the failed test and its location, but lacks additional debugging information provided to assist the
user in diagnosing the problem; its use is discouraged.
Build Assertions Zephyr provides two macros for performing build-time assertion checks. These are
evaluated completely at compile-time, and are always checked.
BUILD_ASSERT() This has the same semantics as C’s _Static_assert or C++’s static_assert. If
the evaluation fails, a build error will be generated by the compiler. If the compiler supports it, the
provided message will be printed to provide further context.
Unlike __ASSERT(), the message must be a static string, without printf()-like format codes or extra
arguments.
For example, suppose this check fails:
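For instance (FOO is a hypothetical compile-time constant):

```c
#define FOO 2500

/* Fails at compile time because FOO is not 2000 */
BUILD_ASSERT(FOO == 2000, "Invalid value of FOO");
```

The compiler then reports a static assertion failure quoting the provided message, and the build stops.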
Kernel Oops A kernel oops is a software triggered fatal error invoked by k_oops(). This should be
used to indicate an unrecoverable condition in application logic.
The fatal error reason code generated will be K_ERR_KERNEL_OOPS.
Kernel Panic A kernel panic is a software triggered fatal error invoked by k_panic(). This
should be used to indicate that the Zephyr kernel is in an unrecoverable state. Implementations of
k_sys_fatal_error_handler() should not return if the kernel encounters a panic condition, as the
entire system needs to be reset.
Threads running in user mode are not permitted to invoke k_panic(), and doing so will generate a
kernel oops instead. Otherwise, the fatal error reason code generated will be K_ERR_KERNEL_PANIC.
Exceptions
Spurious Interrupts If the CPU receives a hardware interrupt on an interrupt line that has not had a
handler installed with IRQ_CONNECT() or irq_connect_dynamic() , then the kernel will generate a fatal
error with the reason code K_ERR_SPURIOUS_IRQ.
Stack Overflows In the event that a thread pushes more data onto its execution stack than its stack
buffer provides, the kernel may be able to detect this situation and generate a fatal error with a reason
code of K_ERR_STACK_CHK_FAIL.
If a thread is running in user mode, then stack overflows are always caught, as the thread will simply
not have permission to write to adjacent memory addresses outside of the stack buffer. Because this is
enforced by the memory protection hardware, there is no risk of data corruption to memory that the
thread would not otherwise be able to write to.
If a thread is running in supervisor mode, or if CONFIG_USERSPACE is not enabled, depending on con-
figuration stack overflows may or may not be caught. CONFIG_HW_STACK_PROTECTION is supported on
some architectures and will catch stack overflows in supervisor mode, including when handling a system
call on behalf of a user thread. Typically this is implemented via dedicated CPU features, or read-only
MMU/MPU guard regions placed immediately adjacent to the stack buffer. Stack overflows caught in
this way can detect the overflow, but cannot guarantee against data corruption and should be treated as
a very serious condition impacting the health of the entire system.
If a platform lacks memory management hardware support, CONFIG_STACK_SENTINEL is a software-only
stack overflow detection feature which periodically checks if a sentinel value at the end of the stack
buffer has been corrupted. It does not require hardware support, but provides no protection against
data corruption. Since the checks are typically done at interrupt exit, the overflow may be detected a
nontrivial amount of time after the stack actually overflowed.
Finally, Zephyr supports GCC compiler stack canaries via CONFIG_STACK_CANARIES. If enabled, the
compiler will insert a canary value randomly generated at boot into function stack frames, checking
that the canary has not been overwritten at function exit. If the check fails, the compiler invokes
__stack_chk_fail(), whose Zephyr implementation invokes a fatal stack overflow error. An error in
this case does not indicate that the entire stack buffer has overflowed, but instead that the current func-
tion stack frame has been corrupted. See the compiler documentation for more details.
Other Exceptions Any other type of unhandled CPU exception will generate an error code of
K_ERR_CPU_EXCEPTION.
Fatal Error Handling The policy for what to do when encountering a fatal error is determined by
the implementation of the k_sys_fatal_error_handler() function. This function has a default imple-
mentation with weak linkage that calls LOG_PANIC() to dump all pending logging messages and then
unconditionally halts the system with k_fatal_halt() .
Applications are free to implement their own error handling policy by overriding the implementation of
k_sys_fatal_error_handler() . If the implementation returns, the faulting thread will be aborted and
the system will otherwise continue to function. See the documentation for this function for additional
details and constraints.
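A sketch of such an override, assuming the application prefers to reboot rather than halt (the policy is illustrative; header and type names follow Zephyr 3.4):

```c
#include <zephyr/kernel.h>
#include <zephyr/fatal.h>
#include <zephyr/logging/log_ctrl.h>
#include <zephyr/sys/reboot.h>

/* Overrides the weak default handler. esf may be NULL in some cases. */
void k_sys_fatal_error_handler(unsigned int reason, const z_arch_esf_t *esf)
{
    ARG_UNUSED(esf);

    LOG_PANIC(); /* flush pending log messages */
    printk("Fatal error %u, rebooting\n", reason);
    sys_reboot(SYS_REBOOT_COLD);
    CODE_UNREACHABLE;
}
```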
API Reference
group fatal_apis
Functions
Parameters
• reason – The reason for the fatal error
• esf – Exception context, with details and partial or full register state when the
error occurred. May in some cases be NULL.
Thread Local Storage (TLS) allows variables to be allocated on a per-thread basis. These variables are
stored in the thread stack which means every thread has its own copy of these variables.
Zephyr currently requires toolchain support for TLS.
Declaring and Using Thread Local Variables The keyword __thread can be used to declare thread
local variables.
For example, to declare a thread local variable in header files:
__thread int i;
Keyword static can also be used to limit the variable to the current source file:
static __thread int j;
Using a thread local variable is the same as using any other variable, for example:
void testing(void) {
i = 10;
}
3.2.1 Introduction
The Zephyr kernel supports a variety of device drivers. Whether a driver is available depends on the
board and the driver.
The Zephyr device model provides a consistent device model for configuring the drivers that are part of
a system. The device model is responsible for initializing all the drivers configured into the system.
Each type of driver (e.g. UART, SPI, I2C) is supported by a generic type API.
In this model the driver fills in the pointer to the structure containing the function pointers to its API
functions during driver initialization. These structures are placed into the RAM section in initialization
level order.
[Figure: the application calls generic subsystem APIs (API 1, API 2, API 3); each call is dispatched through the API structure to the corresponding implementation (API Impl 1, API Impl 2, API Impl 3) in an instance of Device Driver 1, 2, or 3 belonging to Subsystem 1.]
Device drivers which are present on all supported board configurations are listed below.
• Interrupt controller: This device driver is used by the kernel’s interrupt management subsystem.
• Timer: This device driver is used by the kernel’s system clock and hardware clock subsystem.
• Serial communication: This device driver is used by the kernel’s system console subsystem.
• Entropy: This device driver provides a source of entropy numbers for the random number genera-
tor subsystem.
Important: Use the random API functions for random values. Entropy functions should not be di-
rectly used as a random number generator source as some hardware implementations are designed
to be an entropy seed source for random number generators and will not provide cryptographically
secure random number streams.
Zephyr provides a set of device drivers for multiple boards. Each driver should support an interrupt-based
implementation, rather than polling, unless the specific hardware does not provide any interrupt.
High-level calls accessed through device-specific APIs, such as i2c.h or spi.h, are usually intended as
synchronous. Thus, these calls should be blocking.
The following APIs for device drivers are provided by device.h. The APIs are intended for use in device
drivers only and should not be used in applications.
DEVICE_DEFINE()
Create device object and related data structures including setting it up for boot-time initialization.
DEVICE_NAME_GET()
Converts a device identifier to the global identifier for a device object.
DEVICE_GET()
Obtain a pointer to a device object by name.
DEVICE_DECLARE()
Declare a device object. Use this when you need a forward reference to a device that has not yet
been defined.
The device initialization macros populate some data structures at build time which are split into read-
only and runtime-mutable parts. At a high level we have:
struct device {
const char *name;
const void *config;
const void *api;
void * const data;
};
The config member is for read-only configuration data set at build time. For example, base memory
mapped IO addresses, IRQ line numbers, or other fixed physical characteristics of the device. This is the
config pointer passed to DEVICE_DEFINE() and related macros.
The data struct is kept in RAM, and is used by the driver for per-instance runtime housekeeping. For
example, it may contain reference counts, semaphores, scratch buffers, etc.
The api struct maps generic subsystem APIs to the device-specific implementations in the driver. It is
typically read-only and populated at build time. The next section describes this in more detail.
Most drivers will be implementing a device-independent subsystem API. Applications can simply program
to that generic API, and application code is not specific to any particular driver implementation.
A subsystem API definition typically looks like this:
typedef int (*subsystem_do_this_t)(const struct device *dev, int foo, int bar);
typedef void (*subsystem_do_that_t)(const struct device *dev, void *baz);
struct subsystem_api {
subsystem_do_this_t do_this;
subsystem_do_that_t do_that;
};
static inline int subsystem_do_this(const struct device *dev, int foo, int bar)
{
    struct subsystem_api *api;

    api = (struct subsystem_api *)dev->api;
    return api->do_this(dev, foo, bar);
}
A driver implementing a particular subsystem will define the real implementation of these APIs, and
populate an instance of subsystem_api structure:
static int my_driver_do_this(const struct device *dev, int foo, int bar)
{
    ...
}

static struct subsystem_api my_driver_api_funcs = {
    .do_this = my_driver_do_this,
};
The driver would then pass my_driver_api_funcs as the api argument to DEVICE_DEFINE().
Note: Since pointers to the API functions are referenced in the api struct, they will always be included
in the binary even if unused; gc-sections linker option will always see at least one reference to them.
Providing for link-time size optimizations with driver APIs in most cases requires that the optional feature
be controlled by a Kconfig option.
Some devices can be cast as an instance of a driver subsystem such as GPIO, but provide additional func-
tionality that cannot be exposed through the standard API. These devices combine subsystem operations
with device-specific APIs, described in a device-specific header.
A device-specific API definition typically looks like this:
#include <zephyr/drivers/subsystem.h>
A driver implementing extensions to the subsystem will define the real implementation of both the
subsystem API and the specific APIs:
#ifdef CONFIG_USERSPACE
#include <zephyr/syscall_handler.h>
#include <syscalls/specific_from_user_mrsh.c>
#endif /* CONFIG_USERSPACE */
Applications use the device through both the subsystem and specific APIs.
Note: Public API for device-specific extensions should be prefixed with the compatible for the device to
which it applies. For example, if adding special functions to support the Maxim DS3231 the identifier
fragment specific in the examples above would be maxim_ds3231.
Some drivers may be instantiated multiple times in a given system. For example there can be multiple
GPIO banks, or multiple UARTS. Each instance of the driver will have a different config struct and data
struct.
Configuring interrupts for multiple drivers instances is a special case. If each instance needs to config-
ure a different interrupt line, this can be accomplished through the use of per-instance configuration
functions, since the parameters to IRQ_CONNECT() need to be resolvable at build time.
For example, let’s say we need to configure two instances of my_driver, each with a different interrupt
line. In drivers/subsystem/subsystem_my_driver.h:
typedef void (*my_driver_config_irq_t)(const struct device *dev);

struct my_driver_config {
    DEVICE_MMIO_ROM;
    my_driver_config_irq_t config_func;
};

The driver's init function maps the MMIO region, performs any other setup, and then invokes the per-instance configuration function:

static int my_driver_init(const struct device *dev)
{
    const struct my_driver_config *config = dev->config;

    DEVICE_MMIO_MAP(dev, K_MEM_CACHE_NONE);
    ...
    config->config_func(dev);
    return 0;
}
#if CONFIG_MY_DRIVER_0
DEVICE_DECLARE(my_driver_0);
#endif /* CONFIG_MY_DRIVER_0 */
Note the use of DEVICE_DECLARE() to avoid a circular dependency on providing the IRQ handler argu-
ment and the definition of the device itself.
Drivers may depend on other drivers being initialized first, or require the use of kernel services.
DEVICE_DEFINE() and related APIs allow the user to specify at what time during the boot sequence
the init function will be executed. Any driver will specify one of four initialization levels:
EARLY
Used very early in the boot process, right after entering the C domain (z_cstart()). This can be
used in architectures and SoCs that extend or implement architecture code and use drivers or sys-
tem services that have to be initialized before the Kernel calls any architecture specific initialization
code.
PRE_KERNEL_1
Used for devices that have no dependencies, such as those that rely solely on hardware present in
the processor/SOC. These devices cannot use any kernel services during configuration, since the
kernel services are not yet available. The interrupt subsystem will be configured however so it’s OK
to set up interrupts. Init functions at this level run on the interrupt stack.
PRE_KERNEL_2
Used for devices that rely on the initialization of devices initialized as part of the PRE_KERNEL_1
level. These devices cannot use any kernel services during configuration, since the kernel services
are not yet available. Init functions at this level run on the interrupt stack.
POST_KERNEL
Used for devices that require kernel services during configuration. Init functions at this level run
in context of the kernel main task.
APPLICATION
Used for application components (i.e. non-kernel components) that need automatic configuration.
These devices can use all services provided by the kernel during configuration. Init functions at
this level run on the kernel main task.
Within each initialization level you may specify a priority level, relative to other devices in the same
initialization level. The priority level is specified as an integer value in the range 0 to 99; lower val-
ues indicate earlier initialization. The priority level must be a decimal integer literal without leading
zeroes or sign (e.g. 32), or an equivalent symbolic name (e.g. #define MY_INIT_PRIO 32); symbolic
expressions are not permitted (e.g. CONFIG_KERNEL_INIT_PRIORITY_DEFAULT + 5).
Drivers and other system utilities can determine whether startup is still in pre-kernel states by using the
k_is_pre_kernel() function.
In some cases you may just need to run a function at boot. For such cases, the SYS_INIT() macro can be used.
This macro does not take any config or runtime data structures and there isn’t a way to later get a device
pointer by name. The same device policies for initialization level and priority apply.
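A sketch of SYS_INIT() usage (the function name is illustrative; in recent Zephyr versions the init function takes no arguments, while older versions passed an unused const struct device pointer):

```c
#include <zephyr/init.h>
#include <zephyr/sys/printk.h>

static int my_boot_setup(void)
{
    printk("one-time boot setup\n");
    return 0; /* non-zero is reported as an init failure */
}

/* Run during the POST_KERNEL level at the default priority */
SYS_INIT(my_boot_setup, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```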
In general, it’s best to use __ASSERT() macros instead of propagating return values unless the failure is
expected to occur during the normal course of operation (such as a storage device full). Bad parameters,
programming errors, consistency checks, pathological/unrecoverable failures, etc., should be handled by
assertions.
When it is appropriate to return error conditions for the caller to check, 0 should be returned on success
and a POSIX errno.h code returned on failure. See https://github.com/zephyrproject-rtos/zephyr/wiki/
Naming-Conventions#return-codes for details about this.
On some systems, the linear address of peripheral memory-mapped I/O (MMIO) regions cannot be
known at build time:
• The I/O ranges must be probed at runtime from the bus, such as with PCI express
• A memory management unit (MMU) is active, and the physical address of the MMIO range must
be mapped into the page tables at some virtual memory location determined by the kernel.
These systems must maintain storage for the MMIO range within RAM and establish the mapping within
the driver’s init function. Other systems do not care about this and can use MMIO physical addresses
directly from DTS and do not need any RAM-based storage for it.
For drivers that may need to deal with this situation, a set of APIs under the DEVICE_MMIO scope are
defined, along with a mapping function device_map().
The simplest case is for drivers which need to maintain one MMIO region. These drivers will need
to use the DEVICE_MMIO_ROM and DEVICE_MMIO_RAM macros in the definitions for their config_info and
driver_data structures, with initialization of the config_info from DTS using DEVICE_MMIO_ROM_INIT.
A call to DEVICE_MMIO_MAP() is made within the init function:
struct my_driver_config {
    DEVICE_MMIO_ROM; /* Must be first */
    ...
};

struct my_driver_dev_data {
    DEVICE_MMIO_RAM; /* Must be first */
    ...
};
The particular expansion of these macros depends on configuration. On a device with no MMU or PCI-e,
DEVICE_MMIO_MAP and DEVICE_MMIO_RAM expand to nothing.
Some drivers may have multiple MMIO regions. In addition, some drivers may already be implement-
ing a form of inheritance which requires some other data to be placed first in the config_info and
driver_data structures.
This can be managed with the DEVICE_MMIO_NAMED variant macros. These require that DEV_CFG() and
DEV_DATA() macros be defined to obtain a properly typed pointer to the driver’s config_info or dev_data
structs. For example:
struct my_driver_config {
    ...
    DEVICE_MMIO_NAMED_ROM(corge);
    DEVICE_MMIO_NAMED_ROM(grault);
    ...
};

struct my_driver_dev_data {
    ...
    DEVICE_MMIO_NAMED_RAM(corge);
    DEVICE_MMIO_NAMED_RAM(grault);
    ...
};

#define DEV_CFG(_dev) \
    ((const struct my_driver_config *)((_dev)->config))
#define DEV_DATA(_dev) \
    ((struct my_driver_dev_data *)((_dev)->data))
Drivers with multiple MMIO regions in the same DT node
Some drivers may have multiple MMIO regions defined into the same DT device node using the
reg-names property to differentiate them, for example:
/dts-v1/;
/ {
a-driver@40000000 {
reg = <0x40000000 0x1000>,
<0x40001000 0x1000>;
reg-names = "corge", "grault";
};
};
This can be managed as seen in the previous section but this time using the
DEVICE_MMIO_NAMED_ROM_INIT_BY_NAME macro instead. So the only difference would be in the
driver config struct:
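A sketch of such a config initializer, matching each named region to its reg-names entry (the instance number and surrounding fields are illustrative):

```c
const static struct my_driver_config my_driver_config_0 = {
    ...
    DEVICE_MMIO_NAMED_ROM_INIT_BY_NAME(corge, DT_DRV_INST(0)),
    DEVICE_MMIO_NAMED_ROM_INIT_BY_NAME(grault, DT_DRV_INST(0)),
    ...
};
```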
Some drivers or driver-like code may not use Zephyr's device model, and alternative storage must be
arranged for the MMIO data. Examples of this are timer drivers and interrupt controller code.
This can be managed with the DEVICE_MMIO_TOPLEVEL set of macros, for example:
DEVICE_MMIO_TOPLEVEL_STATIC(my_regs, DT_DRV_INST(..));
void some_init_code(...)
{
...
DEVICE_MMIO_TOPLEVEL_MAP(my_regs, K_MEM_CACHE_NONE);
...
}
void some_function(...)
{
    ...
    sys_write32(0xDEADBEEF, DEVICE_MMIO_TOPLEVEL_GET(my_regs));
    ...
}
Some drivers may not obtain the MMIO physical address from DTS, such as is the case with PCI-E. In
this case the device_map() function may be used directly:
void some_init_code(...)
{
    ...
    /* phys_addr and size are illustrative, e.g. probed from the PCI bus */
    device_map(DEVICE_MMIO_RAM_PTR(dev), phys_addr, size, K_MEM_CACHE_NONE);
    ...
}
group device_model
Device Model.
Defines
DEVICE_HANDLE_SEP
Flag value used in lists of device handles to separate distinct groups.
This is the minimum value for the device_handle_t type.
DEVICE_HANDLE_ENDS
Flag value used in lists of device handles to indicate the end of the list.
This is the maximum value for the device_handle_t type.
DEVICE_HANDLE_NULL
Flag value used to identify an unknown device.
DEVICE_NAME_GET(dev_id)
Expands to the name of a global device object.
Return the full name of a device object symbol created by DEVICE_DEFINE(), using the dev_id
provided to DEVICE_DEFINE(). This is the name of the global variable storing the device
structure, not a pointer to the string in the device::name field.
It is meant to be used for declaring extern symbols pointing to device objects before using the
DEVICE_GET macro to get the device object.
This macro is normally only useful within device driver source code. In other situations, you
are probably looking for device_get_binding().
Parameters
• dev_id – Device identifier.
Returns
The full name of the device object defined by device definition macros.
DEVICE_DEFINE(dev_id, name, init_fn, pm, data, config, level, prio, api)
Create a device object and set it up for boot time initialization.
This macro defines a device that is automatically configured by the kernel during system ini-
tialization. This macro should only be used when the device is not being allocated from a de-
vicetree node. If you are allocating a device from a devicetree node, use DEVICE_DT_DEFINE()
or DEVICE_DT_INST_DEFINE() instead.
Parameters
• dev_id – A unique token which is used in the name of the global device struc-
ture as a C identifier.
• name – A string name for the device, which will be stored in device::name. This
name can be used to look up the device with device_get_binding(). This must be
less than Z_DEVICE_MAX_NAME_LEN characters (including terminating NULL)
in order to be looked up from user mode.
• init_fn – Pointer to the device’s initialization function, which will be run by
the kernel during system initialization. Can be NULL.
• pm – Pointer to the device’s power management resources, a pm_device, which
will be stored in device::pm field. Use NULL if the device does not use PM.
• data – Pointer to the device’s private mutable data, which will be stored in
device::data.
• config – Pointer to the device’s private constant data, which will be stored in
device::config.
• level – The device’s initialization level. See System Initialization for details.
• prio – The device’s priority within its initialization level. See SYS_INIT() for
details.
• api – Pointer to the device’s API structure. Can be NULL.
DEVICE_DT_NAME(node_id)
Return a string name for a devicetree node.
This macro returns a string literal usable as a device’s name from a devicetree node identifier.
Parameters
• node_id – The devicetree node identifier.
Returns
The value of the node’s label property, if it has one. Otherwise, the node’s full
name in node-name@unit-address form.
DEVICE_DT_DEFINE(node_id, init_fn, pm, data, config, level, prio, api, ...)
Create a device object from a devicetree node identifier and set it up for boot time initializa-
tion.
This macro defines a device that is automatically configured by the kernel during system ini-
tialization. The global device object’s name as a C identifier is derived from the node’s depen-
dency ordinal. device::name is set to DEVICE_DT_NAME(node_id) .
The device is declared with extern visibility, so a pointer to a global device object can be
obtained with DEVICE_DT_GET(node_id) from any source file that includes <zephyr/device.
h>. Before using the pointer, the referenced object should be checked using device_is_ready().
Parameters
• node_id – The devicetree node identifier.
• init_fn – Pointer to the device’s initialization function, which will be run by
the kernel during system initialization. Can be NULL.
• pm – Pointer to the device’s power management resources, a pm_device, which
will be stored in device::pm. Use NULL if the device does not use PM.
• data – Pointer to the device’s private mutable data, which will be stored in
device::data.
• config – Pointer to the device’s private constant data, which will be stored in
device::config field.
DEVICE_INIT_DT_GET(node_id)
Get an init_entry reference from a devicetree node.
Parameters
• node_id – A devicetree node identifier
Returns
A pointer to the init_entry object created for that node
DEVICE_INIT_GET(dev_id)
Get an init_entry reference from a device identifier.
Parameters
• dev_id – Device identifier.
Returns
A pointer to the init_entry object created for that device
Typedefs
See also:
device_handle_get()
See also:
device_from_handle()
See also:
device_required_foreach()
See also:
device_supported_foreach()
Param dev
a device in the set being iterated
Param context
state used to support the visitor function
Return
A non-negative number to allow walking to continue, and a negative error code
to cause the iteration to stop.
Functions
static inline const device_handle_t *device_required_handles_get(const struct device *dev,
size_t *count)
Get the set of handles for the devicetree dependencies of this device.
Parameters
• dev – the device for which dependency handles are desired.
• count – pointer to where this function should store the length of the returned
array. No value is stored if the call returns a null pointer. The value may be set
to zero if the device has no devicetree dependencies.
Returns
a pointer to a sequence of *count device handles, or a null pointer if dev does
not have any dependency data.
static inline const device_handle_t *device_supported_handles_get(const struct device *dev,
size_t *count)
Get the set of handles that this device supports.
This function returns a pointer to an array of device handles. The length of the array is stored
in the count parameter.
The array contains a handle for each device that dev “supports” — that is, devices
that require dev directly — as determined from the devicetree. This does not include
transitive dependencies; you must recursively determine those.
Parameters
• dev – the device for which supports are desired.
• count – pointer to where this function should store the length of the returned
array. No value is stored if the call returns a null pointer. The value may be set
to zero if nothing in the devicetree depends on dev.
Returns
a pointer to a sequence of *count device handles, or a null pointer if dev does
not have any dependency data.
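As a sketch (names are illustrative), the returned handle array can be walked and each handle converted back to a device with device_from_handle():

```c
#include <zephyr/device.h>
#include <zephyr/sys/printk.h>

/* Hypothetical helper: list every device that directly requires dev. */
static void print_supported(const struct device *dev)
{
    size_t count;
    const device_handle_t *handles = device_supported_handles_get(dev, &count);

    if (handles == NULL) {
        return; /* no dependency data for this device */
    }

    for (size_t i = 0; i < count; i++) {
        const struct device *sup = device_from_handle(handles[i]);

        if (sup != NULL) {
            printk("%s is required by %s\n", dev->name, sup->name);
        }
    }
}
```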
int device_required_foreach(const struct device *dev, device_visitor_callback_t visitor_cb, void
*context)
Visit every device that dev directly requires.
Zephyr maintains information about which devices are directly required by another device;
for example an I2C-based sensor driver will require an I2C controller for communication.
Required devices can derive from statically-defined devicetree relationships or dependencies
registered at runtime.
This API supports operating on the set of required devices. Example uses include making sure
required devices are ready before the requiring device is used, and releasing them when the
requiring device is no longer needed.
There is no guarantee on the order in which required devices are visited.
If the visitor_cb function returns a negative value iteration is halted, and the returned value
from the visitor is returned from this function.
Parameters
• dev – a device of interest. The devices that this device depends on will be used
as the set of devices to visit. This parameter must not be null.
• visitor_cb – the function that should be invoked on each device in the de-
pendency set. This parameter must not be null.
• context – state that is passed through to the visitor function. This parameter
may be null if visitor_cb tolerates a null context.
Returns
The number of devices that were visited if all visits succeed, or the negative value
returned from the first visit that did not succeed.
Parameters
• dev – a device of interest. The devices that this device supports will be used as
the set of devices to visit. This parameter must not be null.
• visitor_cb – the function that should be invoked on each device in the support
set. This parameter must not be null.
• context – state that is passed through to the visitor function. This parameter
may be null if visitor_cb tolerates a null context.
Returns
The number of devices that were visited if all visits succeed, or the negative value
returned from the first visit that did not succeed.
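A sketch of a visitor callback built on this API, checking that every directly required device is ready (the helper names are assumptions, not part of the Zephyr tree):

```c
#include <errno.h>
#include <zephyr/device.h>

/* Hypothetical visitor: fail fast if any required device is not ready. */
static int check_ready_cb(const struct device *dev, void *context)
{
    ARG_UNUSED(context);
    return device_is_ready(dev) ? 0 : -ENODEV;
}

int all_requirements_ready(const struct device *dev)
{
    /* A negative return from the visitor halts iteration
     * and is returned from device_required_foreach().
     */
    return device_required_foreach(dev, check_ready_cb, NULL);
}
```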
Parameters
• dev – pointer to the device in question.
Return values
• true – If the device is ready for use.
• false – If the device is not ready for use or if a NULL device pointer is passed
as argument.
struct device_state
#include <device.h> Runtime device dynamic structure (in RAM) per driver instance.
Fields in this are expected to be default-initialized to zero. The kernel driver infrastructure
and driver access functions are responsible for ensuring that any non-zero initialization is
done before they are accessed.
Public Members
uint8_t init_res
Device initialization return code (positive errno value).
Device initialization functions return a negative errno code if they fail. In Zephyr, errno
values do not exceed 255, so we can store the positive result value in a uint8_t type.
bool initialized
Indicates the device initialization function has been invoked.
struct device
#include <device.h> Runtime device structure (in ROM) per driver instance.
Public Members
void *data
Address of the device instance private data
This encodes a sequence of sets of device handles that have some relationship
to this node. The individual sets are extracted with dedicated API, such as de-
vice_required_handles_get().
Zephyr offers the capability to run threads at a reduced privilege level which we call user mode. The
current implementation is designed for devices with MPU hardware.
For details on creating threads that run in user mode, please see Lifecycle.
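As a brief illustration (a sketch with placeholder names), a user mode thread is created by passing the K_USER option to k_thread_create():

```c
#include <zephyr/kernel.h>

#define USER_STACK_SIZE 1024

K_THREAD_STACK_DEFINE(user_stack, USER_STACK_SIZE);
static struct k_thread user_thread;

static void user_entry(void *p1, void *p2, void *p3)
{
    /* Runs with reduced privileges; may only touch memory and
     * kernel objects that have been granted to it.
     */
}

void start_user_thread(void)
{
    k_thread_create(&user_thread, user_stack,
                    K_THREAD_STACK_SIZEOF(user_stack),
                    user_entry, NULL, NULL, NULL,
                    K_PRIO_PREEMPT(5), K_USER, K_NO_WAIT);
}
```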
3.3.1 Overview
Threat Model
User mode threads are considered to be untrusted by Zephyr and are therefore isolated from other user
mode threads and from the kernel. A flawed or malicious user mode thread cannot leak or modify the
private data/resources of another thread or the kernel, and cannot interfere with or control another user
mode thread or the kernel.
Example use-cases of Zephyr’s user mode features:
• The kernel can protect against many unintentional programming errors which could otherwise
silently or spectacularly corrupt the system.
• The kernel can sandbox complex data parsers such as interpreters, network protocols, and filesys-
tems such that malicious third-party code or data cannot compromise the kernel or other threads.
• The kernel can support the notion of multiple logical “applications”, each with their own group
of threads and private data structures, which are isolated from each other if one crashes or is
otherwise compromised.
Design Goals For threads running in a non-privileged CPU state (hereafter referred to as ‘user mode’)
we aim to protect against the following:
• We prevent access to memory not specifically granted, or incorrect access to memory that has an
incompatible policy, such as attempting to write to a read-only area.
– Access to thread stack buffers will be controlled with a policy which partially depends on the
underlying memory protection hardware.
* A user thread will by default have read/write access to its own stack buffer.
* A user thread will never by default have access to user thread stacks that are not members
of the same memory domain.
* A user thread will never by default have access to thread stacks owned by a supervisor
thread, or thread stacks used to handle system call privilege elevations, interrupts, or CPU
exceptions.
* A user thread may have read/write access to the stacks of other user threads in the same
memory domain, depending on hardware.
· On MPU systems, threads may only access their own stack buffer.
· On MMU systems, threads may access any user thread stack in the same memory
domain. Portable code should not assume this.
– By default, program text and read-only data are accessible to all threads on a read-only
basis, kernel-wide. This policy may be adjusted.
– User threads are not granted access by default to any memory except what is noted
above.
• We prevent use of device drivers or kernel objects not specifically granted, with the permission
granularity on a per object or per driver instance basis.
• We validate kernel or driver API calls with incorrect parameters that would otherwise cause a crash
or corruption of data structures private to the kernel. This includes:
– Using the wrong kernel object type.
– Using parameters outside of proper bounds or with nonsensical values.
– Passing memory buffers that the calling thread does not have sufficient access to read or write,
depending on the semantics of the API.
– Use of kernel objects that are not in a proper initialization state.
• We ensure the detection and safe handling of user mode stack overflows.
• We prevent invoking system calls to functions excluded by the kernel configuration.
• We prevent disabling of or tampering with kernel-defined and hardware-enforced memory
protections.
• We prevent re-entry from user to supervisor mode except through the kernel-defined system
calls and interrupt handlers.
• We prevent the introduction of new executable code by user mode threads, except to the extent to
which this is supported by kernel system calls.
We are specifically not protecting against the following attacks:
• The kernel itself, and any threads that are executing in supervisor mode, are assumed to be trusted.
• The toolchain and any supplemental programs used by the build system are assumed to be trusted.
• The kernel build is assumed to be trusted. There is considerable build-time logic for creating the
tables of valid kernel objects, defining system calls, and configuring interrupts. The .elf binary files
that are worked with during this process are all assumed to be trusted code.
• We can’t protect against mistakes made in memory domain configuration done in kernel mode that
exposes private kernel data structures to a user thread. RAM for kernel objects should always be
configured as supervisor-only.
• It is possible to make top-level declarations of user mode threads and assign them permissions
to kernel objects. In general, all C and header files that are part of the kernel build producing
zephyr.elf are assumed to be trusted.
• We do not protect against denial of service attacks through thread CPU starvation. Zephyr has
no thread priority aging and a user thread of a particular priority can starve all threads of lower
priority, and also other threads of the same priority if time-slicing is not enabled.
• There are build-time defined limits on how many threads can be active simultaneously, after which
creation of new user threads will fail.
• Stack overflows for threads running in supervisor mode may be caught, but the integrity of the
system cannot be guaranteed.
Broadly speaking, we accomplish these thread-level memory protection goals through the following
mechanisms:
• Any user thread will only have access to a subset of memory: typically its stack, program text,
read-only data, and any partitions configured in the Memory Protection Design it belongs to. Access
to any other RAM must be done on the thread’s behalf through system calls, or specifically granted
by a supervisor thread using the memory domain APIs. Newly created threads inherit the memory
domain configuration of the parent. Threads may communicate with each other by having shared
membership of the same memory domains, or via kernel objects such as semaphores and pipes.
• User threads cannot directly access memory belonging to kernel objects. Although pointers to
kernel objects are used to reference them, actual manipulation of kernel objects is done through
system call interfaces. Device drivers and thread stacks are also considered kernel objects. This
ensures that any data inside a kernel object that is private to the kernel cannot be tampered with.
• User threads by default have no permission to access any kernel object or driver other than their
own thread object. Such access must be granted by another thread that is either in supervisor mode
or has permission on both the receiving thread object and the kernel object being granted access
to. The creation of new threads has an option to automatically inherit permissions of all kernel
objects granted to the parent, except the parent thread itself.
• For performance and footprint reasons Zephyr normally does little or no parameter error checking
for kernel object or device driver APIs. Access from user mode through system calls involves an
extra layer of handler functions, which are expected to rigorously validate access permissions and
type of the object, check the validity of other parameters through bounds checking or other means,
and verify proper read/write access to any memory buffers involved.
• Thread stacks are defined in such a way that exceeding the specified stack space will generate a
hardware fault. The way this is done specifically varies per architecture.
Constraints
All kernel objects, thread stacks, and device driver instances must be defined at build time if they are to
be used from user mode. Dynamic use-cases for kernel objects will need to go through pre-defined pools
of available objects.
There are some constraints if additional application binary data is loaded for execution after the kernel
starts:
• Loaded object code will not be able to define any kernel objects that will be recognized by the
kernel. This code will instead need to use APIs for requesting kernel objects from pools.
• Similarly, since the loaded object code will not be part of the kernel build process, this code will not
be able to install interrupt handlers, instantiate device drivers, or define system calls, regardless of
what mode it runs in.
• Loaded object code that does not come from a verified source should always be entered with the
CPU already in user mode.
Zephyr’s memory protection design is geared towards microcontrollers with MPU (Memory Protection
Unit) hardware. We do support some architectures, such as x86, which have a paged MMU (Memory
Management Unit), but in that case the MMU is used like an MPU with an identity page table.
All of the discussion below will be using MPU terminology; systems with MMUs can be considered to
have an MPU with an unlimited number of programmable regions.
There are a few different levels on how memory access is configured when Zephyr memory protection
features are enabled, which we will describe here:
This is the configuration of the MPU after the kernel has started up. It should contain the following:
• Any configuration of memory regions which need to have special caching or write-back policies for
basic hardware and driver function. Note that most MPUs have the concept of a default memory
access policy map, which can be enabled as a “background” mapping for any area of memory that
doesn’t have an MPU region configuring it. It is strongly recommended to use this to maximize
the number of available MPU regions for the end user. On ARMv7-M/ARMv8-M this is called the
System Address Map, other CPUs may have similar capabilities.
• A read-only, executable region or regions for program text and ro-data, that is accessible to user
mode. This could be further sub-divided into a read-only region for ro-data, and a read-only,
executable region for text, but this will require an additional MPU region. This is required so that
threads running in user mode can read ro-data and fetch instructions.
• Depending on configuration, user-accessible read-write regions to support extra features like GCOV,
HEP, etc.
Assuming there is a background map which allows supervisor mode to access any memory it needs, and
regions are defined which grant user mode access to text/ro-data, this is sufficient for the boot time
configuration.
CONFIG_HW_STACK_PROTECTION is an optional feature which detects stack buffer overflows when the
system is running in supervisor mode. It catches cases where the entire stack buffer has overflowed,
not overflows of individual stack frames; use the compiler-assisted CONFIG_STACK_CANARIES for
frame-level detection.
Like any crash in supervisor mode, no guarantees can be made about the overall health of the system
after a supervisor mode stack overflow, and any instances of this should be treated as a serious error.
However it’s still very useful to know when these overflows happen, as without robust detection logic
the system will either crash in mysterious ways or behave in an undefined manner when the stack buffer
overflows.
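Both options are Kconfig switches; a prj.conf fragment enabling them might look like:

```
CONFIG_HW_STACK_PROTECTION=y
CONFIG_STACK_CANARIES=y
```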
Some systems implement this feature by creating at runtime a ‘guard’ MPU region which is set to be
read-only and is located either at the beginning of, or immediately preceding, the supervisor mode
stack buffer. If the stack overflows into the guard region, an exception will be generated.
This feature is optional and is not required to catch stack overflows in user mode; disabling this may free
1-2 MPU regions depending on the MPU design.
Other systems may have dedicated CPU support for catching stack overflows and no extra MPU regions
will be required.
Thread Stack
Any thread running in user mode will need access to its own stack buffer. On context switch into a user
mode thread, a dedicated MPU region will be programmed with the bounds of the stack buffer. A thread
exceeding its stack buffer will start pushing data onto memory it doesn’t have access to and a memory
access violation exception will be generated.
A small subset of kernel APIs, invoked as system calls, require heap memory allocations. This memory is
used only by the kernel and is not accessible directly by user mode. In order to use these system calls,
invoking threads must assign themselves to a resource pool, which is a k_heap object. Memory is drawn
from a thread’s resource pool using z_thread_malloc() and freed with k_free() .
The APIs which use resource pools are as follows, with any alternatives noted for users who do not want
heap allocations within their application:
• k_stack_alloc_init() sets up a k_stack with its storage buffer allocated out of a resource pool
instead of a buffer provided by the user. An alternative is to declare k_stacks that are automatically
initialized at boot with K_STACK_DEFINE() , or to initialize the k_stack in supervisor mode with
k_stack_init() .
• k_pipe_alloc_init() sets up a k_pipe object with its storage buffer allocated out of a resource
pool instead of a buffer provided by the user. An alternative is to declare k_pipes that are automat-
ically initialized at boot with K_PIPE_DEFINE() , or to initialize the k_pipe in supervisor mode with
k_pipe_init() .
• k_msgq_alloc_init() sets up a k_msgq object with its storage buffer allocated out of a resource
pool instead of a buffer provided by the user. An alternative is to declare a k_msgq that is auto-
matically initialized at boot with K_MSGQ_DEFINE() , or to initialize the k_msgq in supervisor mode
with k_msgq_init() .
• k_poll() when invoked from user mode, needs to make a kernel-side copy of the provided events
array while waiting for an event. This copy is freed when k_poll() returns for any reason.
• k_queue_alloc_prepend() and k_queue_alloc_append() allocate a container structure to place
the data in, since the internal bookkeeping information that defines the queue cannot be placed in
the memory provided by the user.
• k_object_alloc() allows for entire kernel objects to be dynamically allocated at runtime and a
usable pointer to them returned to the caller.
The relevant API is k_thread_heap_assign() which assigns a k_heap to draw these allocations from for
the target thread.
If the system heap is enabled, then the system heap may be used with
k_thread_system_pool_assign() , but it is preferable for different logical applications running
on the system to have their own pools.
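A sketch of assigning a per-application pool (the names are illustrative):

```c
#include <zephyr/kernel.h>

/* 2 KiB heap from which kernel-side allocations made on behalf
 * of the target thread (e.g. k_poll() event copies) are drawn.
 */
K_HEAP_DEFINE(app_pool, 2048);

void setup_thread_pool(struct k_thread *app_thread)
{
    k_thread_heap_assign(app_thread, &app_pool);
}
```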
Memory Domains
The kernel ensures that any user thread will have access to its own stack buffer, plus program text and
read-only data. The memory domain APIs are the way to grant access to additional blocks of memory to
a user thread.
Conceptually, a memory domain is a collection of some number of memory partitions. The maximum
number of memory partitions in a domain is limited by the number of available MPU regions. This is
why it is important to minimize the number of boot-time MPU regions.
Memory domains are not intended to control access to memory from supervisor mode. In some cases
this may be unavoidable; for example some architectures do not allow for the definition of regions which
are read-only to user mode but read-write to supervisor mode. A great deal of care must be taken when
working with such regions to avoid unintentionally crashing the kernel when it accesses them.
Any attempt to use memory domain APIs to control supervisor mode access is at best undefined behavior;
supervisor mode access policy is only intended to be controlled by boot-time memory regions.
Memory domain APIs are only available to supervisor mode. The only control user mode has over
memory domains is that any user thread’s child threads will automatically become members of the
parent’s domain.
All threads are members of a memory domain, including supervisor threads (even though this has no
implications on their memory access). There is a default domain k_mem_domain_default which will be
assigned to threads if they have not been specifically assigned to a domain, or inherited a memory domain
membership from their parent thread. The main thread starts as a member of the default domain.
Memory Partitions Each memory partition consists of a memory address, a size, and access attributes.
It is intended that memory partitions are used to control access to system memory. Defining memory
partitions is subject to the following constraints:
• The partition must represent a memory region that can be programmed by the underlying memory
management hardware, and needs to conform to any underlying hardware constraints. For exam-
ple, many MPU-based systems require that partitions be sized to some power of two, and aligned
to their own size. For MMU-based systems, the partition must be aligned to a page and the size
some multiple of the page size.
• Partitions within the same memory domain may not overlap each other. There is no notion of
precedence among partitions within a memory domain. Partitions within a memory domain are
assumed to have a higher precedence than any boot-time memory regions, however whether a
memory domain partition can overlap a boot-time memory region is architecture specific.
• The same partition may be specified in multiple memory domains. For example there may be a
shared memory area that multiple domains grant access to.
• Care must be taken in determining what memory to expose in a partition. It is not appropriate to
provide direct user mode access to any memory containing private kernel data.
• Memory domain partitions are intended to control access to system RAM. Configuration of memory
partitions which do not correspond to RAM may not be supported by the architecture; this is true
for MMU-based systems.
There are two ways to define memory partitions: either manually or automatically.
Manual Memory Partitions The following code declares a global array buf, and then declares a read-
write partition for it which may be added to a domain:
uint8_t __aligned(32) buf[32];

K_MEM_PARTITION_DEFINE(my_partition, buf, sizeof(buf),
                       K_MEM_PARTITION_P_RW_U_RW);
This does not scale particularly well when we are trying to contain multiple objects spread out across
several C files into a single partition.
Automatic Memory Partitions Automatic memory partitions are created by the build system. All
globals which need to be placed inside a partition are tagged with their destination partition. The build
system will then coalesce all of these into a single contiguous block of memory, zero any BSS variables at
boot, and define a memory partition of appropriate base address and size which contains all the tagged
data.
Automatic memory partitions are only configured as read-write regions. They are defined with
K_APPMEM_PARTITION_DEFINE(). Global variables are then routed to this partition using K_APP_DMEM()
for initialized data and K_APP_BMEM() for BSS.
# include <zephyr/app_memory/app_memdomain.h>
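A minimal sketch of the pattern, assuming a partition named my_partition:

```c
#include <zephyr/app_memory/app_memdomain.h>

K_APPMEM_PARTITION_DEFINE(my_partition);

/* Initialized data routed to the partition */
K_APP_DMEM(my_partition) int initialized_var = 42;

/* Zero-initialized (BSS) data routed to the partition */
K_APP_BMEM(my_partition) int bss_var;
```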
The build system will ensure that the base address of my_partition will be properly aligned, and the
total size of the region conforms to the memory management hardware requirements, adding padding if
necessary.
If multiple partitions are being created, a variadic preprocessor macro can be used as provided in
app_macro_support.h:
Automatic Partitions for Static Library Globals The build-time logic for setting up automatic memory
partitions is in scripts/build/gen_app_partitions.py. If a static library is linked into Zephyr, it
is possible to route all the globals in that library to a specific memory partition with the --library
argument.
For example, if the Newlib C library is enabled, the Newlib globals all need to be placed in
z_libc_partition. The invocation of the script in the top-level CMakeLists.txt adds the following:
For pre-compiled libraries there is no support for expressing this in the project-level configuration or
build files; the top-level CMakeLists.txt must be edited.
For Zephyr libraries created using zephyr_library or zephyr_library_named the
zephyr_library_app_memory function can be used to specify the memory partition where all
globals in the library should be placed.
Pre-defined Memory Partitions There are a few memory partitions which are pre-defined by the sys-
tem:
• z_malloc_partition - This partition contains the system-wide pool of memory used by libc
malloc(). Due to possible starvation issues, it is not recommended to draw heap memory from a
global pool; instead, it is better to define various sys_heap objects and assign them to specific
memory domains.
• z_libc_partition - Contains globals required by the C library and runtime. Required when using
either the Minimal C library or the Newlib C Library. Required when CONFIG_STACK_CANARIES is
enabled.
Library-specific partitions are listed in include/app_memory/partitions.h. For example, to use the
MBEDTLS library from user mode, the k_mbedtls_partition must be added to the domain.
Create a Memory Domain A memory domain is defined using a variable of type k_mem_domain . It
must then be initialized by calling k_mem_domain_init() .
The following code defines and initializes an empty memory domain.
struct k_mem_domain app0_domain;

k_mem_domain_init(&app0_domain, 0, NULL);
Add Memory Partitions into a Memory Domain There are two ways to add memory partitions into a
memory domain.
This first code sample shows how to add memory partitions while creating a memory domain.
/* the start address of the MPU region needs to align with its size */
uint8_t __aligned(32) app0_buf[32];
uint8_t __aligned(32) app1_buf[32];
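Assuming a read-write and a read-only partition over these two buffers (the attribute names shown are MPU-style and vary by architecture), the domain can be created with its partitions in one call:

```c
K_MEM_PARTITION_DEFINE(app0_part0, app0_buf, sizeof(app0_buf),
                       K_MEM_PARTITION_P_RW_U_RW);
K_MEM_PARTITION_DEFINE(app0_part1, app1_buf, sizeof(app1_buf),
                       K_MEM_PARTITION_P_RW_U_RO);

struct k_mem_partition *app0_parts[] = { &app0_part0, &app0_part1 };
struct k_mem_domain app0_domain;

k_mem_domain_init(&app0_domain, ARRAY_SIZE(app0_parts), app0_parts);
```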
This second code sample shows how to add memory partitions into an initialized memory domain one
by one.
/* the start address of the MPU region needs to align with its size */
uint8_t __aligned(32) app0_buf[32];
uint8_t __aligned(32) app1_buf[32];

K_MEM_PARTITION_DEFINE(app0_part0, app0_buf, sizeof(app0_buf),
                       K_MEM_PARTITION_P_RW_U_RW);
K_MEM_PARTITION_DEFINE(app0_part1, app1_buf, sizeof(app1_buf),
                       K_MEM_PARTITION_P_RW_U_RO);

k_mem_domain_add_partition(&app0_domain, &app0_part0);
k_mem_domain_add_partition(&app0_domain, &app0_part1);
Note: The maximum number of memory partitions is limited by the maximum number of MPU regions
or the maximum number of MMU tables.
Memory Domain Assignment Any thread may join a memory domain, and any memory domain may
have multiple threads assigned to it. Threads are assigned to memory domains with an API call:
k_mem_domain_add_thread(&app0_domain, app_thread_id);
If the thread was already a member of some other domain (including the default domain), it will be
removed from it in favor of the new one.
In addition, if a thread is a member of a memory domain, and it creates a child thread, that thread will
belong to the domain as well.
Remove a Memory Partition from a Memory Domain The following code shows how to remove a
memory partition from a memory domain.
k_mem_domain_remove_partition(&app0_domain, &app0_part1);
The k_mem_domain_remove_partition() API finds the memory partition that matches the given param-
eter and removes that partition from the memory domain.
Available Partition Attributes When defining a partition, we need to set access permission attributes
to the partition. Since the access control of memory partitions relies on either an MPU or MMU, the
available partition attributes are architecture dependent.
The complete list of available partition attributes for a specific architecture is found in the architecture-
specific include file include/arch/<arch name>/arch.h, (for example, include/arch/arm/aarch32/
arch.h.) Some examples of partition attributes are:
Configuration Options
API Reference
group mem_domain_apis
Defines
Functions
• Partitions in the same memory domain may not overlap each other.
• Partitions must not be defined which expose private kernel data structures or kernel ob-
jects.
• The starting address alignment, and the partition size must conform to the constraints of
the underlying memory management hardware, which varies per architecture.
• Memory domain partitions are only intended to control access to memory from user mode
threads.
• If CONFIG_EXECUTE_XOR_WRITE is enabled, the partition must not allow both writes
and execution.
Violating these constraints may lead to CPU exceptions or undefined behavior.
Parameters
• domain – The memory domain to which the memory partition will be added.
• part – The memory partition to be added.
Return values
• 0 – if successful
Variables
struct k_mem_partition
#include <mem_domain.h> Memory Partition.
A memory partition is a region of memory in the linear address space with a specific access
policy.
The alignment of the starting address, and the alignment of the size value may have varying
requirements based on the capabilities of the underlying memory management hardware;
arbitrary values are unlikely to work.
Public Members
uintptr_t start
start address of memory partition
size_t size
size of memory partition
k_mem_partition_attr_t attr
attribute of memory partition
struct k_mem_domain
#include <mem_domain.h> Memory Domain.
A memory domain is a collection of memory partitions, used to represent a user thread’s
access policy for the linear address space. A thread may be a member of only one memory
domain, but any memory domain may have multiple threads that are members.
Supervisor threads may also be a member of a memory domain; this has no implications
on their memory access but can be useful as any child threads inherit the memory domain
membership of the parent.
A user thread belonging to a memory domain with no active partitions will have guaranteed
access to its own stack buffer, program text, and read-only data.
Public Members
sys_dlist_t mem_domain_q
Doubly linked list of member threads
uint8_t num_partitions
number of active partitions in the domain
Permission on an object also has the semantics of a reference to an object. This is significant for certain
object APIs which do temporary allocations, or objects which themselves have been allocated from a
runtime memory pool.
If an object loses all references, two events may happen:
• If the object has an associated cleanup function, the cleanup function may be called to release any
runtime-allocated buffers the object was using.
• If the object itself was dynamically allocated, the memory for the object will be freed.
Object Placement
Kernel objects that are only used by supervisor threads have no restrictions and can be located anywhere
in the binary, or even declared on stacks. However, to prevent accidental or intentional corruption by
user threads, they must not be located in any memory that user threads have direct access to.
In order for a static kernel object to be usable by a user thread via system call APIs, several conditions
must be met on how the kernel object is declared:
• The object must be declared as a top-level global at build time, such that it appears in the ELF
symbol table. It is permitted to declare kernel objects with static scope. The post-build script
scripts/build/gen_kobject_list.py scans the generated ELF file to find kernel objects and places their
memory addresses in a special table of kernel object metadata. Kernel objects may be members of
arrays or embedded within other data structures.
• Kernel objects must be located in memory reserved for the kernel. They must not be located in any
memory partitions that are user-accessible.
• Any memory reserved for a kernel object must be used exclusively for that object. Kernel objects
may not be members of a union data type.
Kernel objects that are found but do not meet the above conditions will not be included in the generated
table that is used to validate kernel object pointers passed in from user mode.
The debug output of the scripts/build/gen_kobject_list.py script may be useful when debugging why some
object was unexpectedly not being tracked. This information will be printed if the script is run with the
--verbose flag, or if the build system is invoked with verbose output.
Dynamic Objects
Kernel objects may also be allocated at runtime if CONFIG_DYNAMIC_OBJECTS is enabled. In this case,
the k_object_alloc() API may be used to instantiate an object from the calling thread’s resource pool.
Such allocations may be freed in two ways:
• Supervisor threads may call k_object_free() to force a dynamic object to be released.
• If an object’s references drop to zero (which happens when no threads have permissions on it)
the object will be automatically freed. User threads may drop their own permission on an ob-
ject with k_object_release() , and their permissions are automatically cleared when a thread
terminates. Supervisor threads may additionally revoke references for another thread using
k_object_access_revoke() .
Because permissions are also used for reference counting, it is important for supervisor threads to acquire
permissions on objects they are using even though the access control aspects of the permission system
are not enforced.
Any instances of structs or arrays corresponding to kernel objects that meet the object placement criteria
will have their memory addresses placed in a special perfect hash table of kernel objects generated by
the ‘gperf’ tool. When a system call is made and the kernel is presented with a memory address of what
may or may not be a valid kernel object, the address can be validated with a constant-time lookup in this
table.
Drivers are a special case. All drivers are instances of device , but it is important to know what subsystem
a driver belongs to so that incorrect operations, such as calling a UART API on a sensor driver object, can
be prevented. When a device struct is found, its API pointer is examined to determine what subsystem
the driver belongs to.
The table itself maps kernel object memory addresses to instances of z_object, which has all the meta-
data for that object. This includes:
• A bitfield indicating permissions on that object. All threads have a numerical ID assigned to them
at build time, used to index the permission bitfield for an object to see if that thread has permission
on it. The size of this bitfield is controlled by the CONFIG_MAX_THREAD_BYTES option and the build
system will generate an error if this value is too low.
• A type field indicating what kind of object this is, which is an instance of enum k_objects.
• A set of flags for that object. This is currently used to track initialization state and whether an
object is public or not.
• An extra data field. The semantics of this field vary by object type, see the definition of
z_object_data.
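The permission bitfield can be pictured with a short self-contained sketch. This is not the actual Zephyr data structure, just an illustration of how a per-object byte array, indexed by a build-time thread ID, can record permissions (the names and the 2-byte size are assumptions standing in for CONFIG_MAX_THREAD_BYTES):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_THREAD_BYTES 2  /* stand-in for CONFIG_MAX_THREAD_BYTES */

/* Toy metadata record: one permission bit per thread ID. */
struct toy_object_meta {
        uint8_t perms[MAX_THREAD_BYTES];
};

/* Grant permission to thread 'tid' by setting its bit. */
static void toy_grant(struct toy_object_meta *meta, unsigned int tid)
{
        meta->perms[tid / 8] |= (uint8_t)(1U << (tid % 8));
}

/* Check whether thread 'tid' has permission on the object. */
static bool toy_has_perm(const struct toy_object_meta *meta, unsigned int tid)
{
        return (meta->perms[tid / 8] & (1U << (tid % 8))) != 0U;
}
```

With 2 bytes, thread IDs 0 through 15 can be represented; a build where more threads exist would need CONFIG_MAX_THREAD_BYTES raised, which is exactly the error the build system reports.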
Dynamic objects allocated at runtime are tracked in a runtime red/black tree which is used in parallel to
the gperf table when validating object pointers.
Supervisor threads can access any kernel object. However, permissions for supervisor threads are still
tracked for two reasons:
• If a supervisor thread calls k_thread_user_mode_enter() , the thread will then run in user mode
with any permissions it had been granted (in many cases, by itself) when it was a supervisor thread.
• If a supervisor thread creates a user thread with the K_INHERIT_PERMS option, the child thread will
be granted the same permissions as the parent thread, except the parent thread object.
By default, when a user thread is created, it will only have access permissions on its own thread object.
Other kernel objects by default are not usable. Access to them needs to be explicitly or implicitly granted.
There are several ways to do this.
• If a thread is created with the K_INHERIT_PERMS option, that thread will inherit all the permissions of the parent thread, except the parent thread object.
• A thread that has permission on an object, or is running in supervisor mode, may grant permis-
sion on that object to another thread via the k_object_access_grant() API. The convenience
pseudo-function k_thread_access_grant() may also be used, which accepts an arbitrary number
of pointers to kernel objects and calls k_object_access_grant() on each of them. The thread
being granted permission, or the object whose access is being granted, do not need to be in an ini-
tialized state. If the caller is from user mode, the caller must have permissions on both the kernel
object and the target thread object.
• Supervisor threads may declare a particular kernel object to be a public object, usable by all cur-
rent and future threads with the k_object_access_all_grant() API. You must assume that any
untrusted or exploited code will then be able to access the object. Use this API with caution!
Initialization State
Most operations on kernel objects will fail if the object is considered to be in an uninitialized state; the appropriate init function for the object must be called first.
Some objects will be implicitly initialized at boot:
• Kernel objects that were declared with static initialization macros (such as K_SEM_DEFINE for
semaphores) will be in an initialized state at build time.
• Device driver objects are considered initialized after their init function is run by the kernel early in
the boot process.
If a kernel object is initialized with a private static initializer, the object must have z_object_init()
called on it at some point by a supervisor thread, otherwise the kernel will consider the object unini-
tialized if accessed by a user thread. This is very uncommon, typically only for kernel objects that are
embedded within some larger struct and initialized statically.
struct foo {
    struct k_sem sem;
    ...
};

struct foo my_foo;
...
z_object_init(&my_foo.sem);
...
When implementing new kernel features or driver subsystems, it may be necessary to define some new
kernel object types. There are different steps needed for creating core kernel objects and new driver
subsystems.
Creating New Driver Subsystem Kernel Objects All driver instances are instances of device . They are differentiated by the API struct they are set to.
• In scripts/build/gen_kobject_list.py, add the name of the API struct for the new subsystem
to the subsystems list.
Driver instances of the new subsystem should now be tracked.
Configuration Options
API Reference
group usermode_apis
Defines
K_THREAD_ACCESS_GRANT(name_, ...)
Grant a static thread access to a list of kernel objects.
For threads declared with K_THREAD_DEFINE(), grant the thread access to a set of kernel
objects. These objects do not need to be in an initialized state. The permissions will be
granted when the threads are initialized in the early boot sequence.
All arguments beyond the first must be pointers to kernel objects.
Parameters
• name_ – Name of the thread, as passed to K_THREAD_DEFINE()
K_OBJ_FLAG_INITIALIZED
Object initialized
K_OBJ_FLAG_PUBLIC
Object is Public
K_OBJ_FLAG_ALLOC
Object allocated
K_OBJ_FLAG_DRIVER
Driver Object
Functions
void k_object_access_grant(const void *object, struct k_thread *thread)
Grant a thread access to a kernel object.
The thread will be granted access to the object if the caller is from supervisor mode, or the caller is from user mode AND has permissions on both the object and the thread whose access is being granted.
Parameters
• object – Address of kernel object
• thread – Thread to grant access to the object
void k_object_access_revoke(const void *object, struct k_thread *thread)
Revoke a thread’s access to a kernel object
The thread will lose access to the object if the caller is from supervisor mode, or the caller is
from user mode AND has permissions on both the object and the thread whose access is being
revoked.
Parameters
• object – Address of kernel object
• thread – Thread to remove access to the object
void k_object_release(const void *object)
Release an object.
Allows user threads to drop their own permission on an object. Their permissions are automatically cleared when a thread terminates.
Parameters
• object – The object to be released
void k_object_access_all_grant(const void *object)
Grant all present and future threads access to an object
If the caller is from supervisor mode, or is from user mode and has sufficient permissions on the object, then that object will be granted permissions for all current and future threads running in the system, effectively becoming a public kernel object.
Use of this API should be avoided on systems that are running untrusted code as it is possible
for such code to derive the addresses of kernel objects and perform unwanted operations on
them.
It is not possible to revoke permissions on public objects; once public, any thread may use it.
Parameters
• object – Address of kernel object
static inline void k_object_free(void *obj)
Free an object.
Parameters
• obj – Kernel object to free
User threads run with fewer privileges than supervisor threads: certain CPU instructions may not be used, and they have access to only a limited part of the memory map. System calls allow user threads to perform operations that are not directly available to them.
When defining system calls, it is very important to ensure that access to the API’s private data is done
exclusively through system call interfaces. Private kernel data should never be made available to user
mode threads directly. For example, the k_queue APIs were intentionally not made available as they
store bookkeeping information about the queue directly in the queue buffers which are visible from user
mode.
APIs that allow the user to register callback functions that run in supervisor mode should never be
exposed as system calls. Reserve these for supervisor-mode access only.
This section describes how to declare new system calls and discusses a few implementation details rele-
vant to them.
Components
C Prototype
The C prototype represents how the API is invoked from either user or supervisor mode. For example, to
initialize a semaphore:
__syscall void k_sem_init(struct k_sem *sem, unsigned int initial_count,
unsigned int limit);
The __syscall attribute is very special. To the C compiler, it simply expands to ‘static inline’. However
to the post-build scripts/build/parse_syscalls.py script, it indicates that this API is a system call. The
scripts/build/parse_syscalls.py script does some parsing of the function prototype, to determine the data
types of its return value and arguments, and has some limitations:
• Array arguments must be passed as pointers, not arrays. For example, int foo[] or int foo[12] is not allowed; express these as int *foo instead.
• Function pointers confuse the limited parser. The workaround is to typedef them first, and then express the argument in terms of that typedef.
• __syscall must be the first thing in the prototype.
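The typedef workaround can be sketched as follows. The API name and callback type are hypothetical, and the #define is only here so the fragment stands alone; in a real Zephyr tree, __syscall is provided by the kernel headers and expands to 'static inline' as described above:

```c
#include <stddef.h>

/* Stand-in for the real Zephyr definition, so the sketch is self-contained. */
#define __syscall static inline

/* The limited parser cannot handle a raw function-pointer argument such as
 * 'void (*cb)(int)', so give the pointer type a name first... */
typedef void (*result_cb_t)(int result);

/* ...and use the typedef in the __syscall prototype. */
__syscall void my_poll_api(const int *values, size_t count, result_cb_t cb);

/* A toy callback to exercise the sketch. */
static int poll_sum;
static void add_result(int result) { poll_sum += result; }

/* Trivial body so the sketch compiles and can be run. */
static inline void my_poll_api(const int *values, size_t count, result_cb_t cb)
{
        for (size_t i = 0; i < count; i++) {
                cb(values[i]);
        }
}
```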
The preprocessor is intentionally not used when determining the set of system calls to generate. How-
ever, any generated system calls that don’t actually have a verification function defined (because the
related feature is not enabled in the kernel configuration) will instead point to a special verification function for unimplemented system calls. Data type definitions for APIs should not have conditional visibility to the
compiler.
Any header file that declares system calls must include a special generated header at the very bottom
of the header file. This header follows the naming convention syscalls/<name of header file>. For
example, at the bottom of include/sensor.h:
#include <syscalls/sensor.h>
C prototype functions must be declared in one of the directories listed in the CMake variable
SYSCALL_INCLUDE_DIRS. This list always contains ${ZEPHYR_BASE}/include, but will also contain
APPLICATION_SOURCE_DIR when CONFIG_APPLICATION_DEFINED_SYSCALL is set, or ${ZEPHYR_BASE}/
subsys/testsuite/ztest/include when CONFIG_ZTEST is set. Additional paths can be added to the
list through the CMake command line or in CMake code that is run before find_package(Zephyr ...)
is run.
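For instance, an application could append its own directory to the list before pulling in Zephyr. This is a hedged sketch; the directory name is an assumption:

```cmake
# Must run before find_package(Zephyr ...) so the syscall
# generation scripts pick up the extra directory.
list(APPEND SYSCALL_INCLUDE_DIRS ${CMAKE_CURRENT_SOURCE_DIR}/include)

find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(my_app)
```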
Invocation Context Source code that uses system call APIs can be made more efficient if it is known
that all the code inside a particular C file runs exclusively in user mode, or exclusively in supervisor
mode. The system will look for the definition of the macros __ZEPHYR_SUPERVISOR__ or __ZEPHYR_USER__; typically these are added to the compiler flags in the build system for the related files.
• If CONFIG_USERSPACE is not enabled, all APIs just directly call the implementation function.
• Otherwise, the default case is to make a runtime check to see if the processor is currently running
in user mode, and either make the system call or directly call the implementation function as
appropriate.
• If __ZEPHYR_SUPERVISOR__ is defined, then it is assumed that all the code runs in supervisor mode
and all APIs just directly call the implementation function. If the code was actually running in user
mode, there will be a CPU exception as soon as it tries to do something it isn’t allowed to do.
• If __ZEPHYR_USER__ is defined, then it is assumed that all the code runs in user mode and system
calls are unconditionally made.
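The dispatch logic above can be mimicked with a small self-contained sketch. All names here are stand-ins for the generated Zephyr code, with z_syscall_trap() stubbed to report a simulated CPU mode:

```c
#include <stdbool.h>

static bool simulated_user_mode;   /* pretend CPU state for the sketch */
static int last_path;              /* 1 = syscall trap, 2 = direct call */

static bool z_syscall_trap_stub(void) { return simulated_user_mode; }
static void arch_syscall_invoke_stub(void) { last_path = 1; }
static void z_impl_do_thing_stub(void) { last_path = 2; }

static inline void do_thing(void)
{
#if defined(__ZEPHYR_SUPERVISOR__)
        /* Whole file known to run in supervisor mode: call impl directly. */
        z_impl_do_thing_stub();
#elif defined(__ZEPHYR_USER__)
        /* Whole file known to run in user mode: always make the syscall. */
        arch_syscall_invoke_stub();
#else
        /* Default: decide at runtime. */
        if (z_syscall_trap_stub()) {
                arch_syscall_invoke_stub();
                return;
        }
        z_impl_do_thing_stub();
#endif
}
```

Compiling the file with -D__ZEPHYR_SUPERVISOR__ or -D__ZEPHYR_USER__ removes the runtime check entirely, which is the efficiency gain the text describes.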
Implementation Details Declaring an API with __syscall causes some code to be generated in C
and header files by the scripts/build/gen_syscalls.py script, all of which can be found in the project out
directory under include/generated/:
• The system call is added to the enumerated type of system call IDs, which is expressed in include/
generated/syscall_list.h. It is the name of the API in uppercase, prefixed with K_SYSCALL_.
• An entry for the system call is created in the dispatch table _k_syscall_table, expressed in
include/generated/syscall_dispatch.c
• A weak verification function is declared, which is just an alias of the ‘unimplemented system call’
verifier. This is necessary since the real verification function may or may not be built depending on
the kernel configuration. For example, if a user thread makes a sensor subsystem API call, but the
sensor subsystem is not enabled, the weak verifier will be invoked instead.
• An unmarshalling function is defined in include/generated/<name>_mrsh.c
The body of the API is created in the generated system header. Using the example of k_sem_init() , this
API is declared in include/kernel.h. At the bottom of include/kernel.h is:
#include <syscalls/kernel.h>
static inline void k_sem_init(struct k_sem *sem, unsigned int initial_count,
                              unsigned int limit)
{
#ifdef CONFIG_USERSPACE
    if (z_syscall_trap()) {
        arch_syscall_invoke3(*(uintptr_t *)&sem, *(uintptr_t *)&initial_count,
                             *(uintptr_t *)&limit, K_SYSCALL_K_SEM_INIT);
        return;
    }
    compiler_barrier();
#endif
    z_impl_k_sem_init(sem, initial_count, limit);
}
This generates an inline function that takes three arguments with void return value. Depending on
context it will either directly call the implementation function or go through a system call elevation. A
prototype for the implementation function is also automatically generated.
The final layer is the invocation of the system call itself. All architectures implementing system calls must
implement the seven inline functions _arch_syscall_invoke0() through _arch_syscall_invoke6().
These functions marshal arguments into designated CPU registers and perform the necessary privilege elevation. Before being passed as system call arguments, the parameters of the API inline function are cast to uintptr_t, which matches the register size. The exception is 64-bit parameters on 32-bit systems, which are split into low and high halves and passed as two consecutive arguments. There is always a uintptr_t return value, which may be ignored if not needed.
Some system calls may have more than six arguments, but the number of arguments passed via registers is limited to six on all architectures. Additional arguments must be passed in an array in the source memory space, which must be treated as untrusted memory in the verification function. The code for packing, unpacking, and validating these extra arguments is generated automatically as needed, both in the stub above and in the unmarshalling function.
System calls return a uintptr_t value that the wrapper casts to the return type of the API prototype. This means a 64-bit value cannot be returned directly from a system call to its wrapper on 32-bit systems. To solve this, the automatically generated wrapper function defines a 64-bit intermediate variable on its stack, treats it as an untrusted buffer, and passes a pointer to it to the system call as a final argument. Upon return from the system call, the value written to that buffer is returned by the wrapper function. The problem does not exist on 64-bit systems, which can return 64-bit values directly.
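The 64-bit splitting on a 32-bit target can be illustrated in isolation. This is a sketch of the arithmetic only, not the generated code; here uint32_t plays the role of the 32-bit uintptr_t:

```c
#include <stdint.h>

/* Split a 64-bit parameter into two consecutive 32-bit "register"
 * arguments, as a generated wrapper would on a 32-bit system. */
static void split_u64(uint64_t v, uint32_t *lo, uint32_t *hi)
{
        *lo = (uint32_t)(v & 0xffffffffU);
        *hi = (uint32_t)(v >> 32);
}

/* The kernel side reassembles the original value from the two halves. */
static uint64_t join_u64(uint32_t lo, uint32_t hi)
{
        return ((uint64_t)hi << 32) | lo;
}
```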
Implementation Function
The implementation function is what actually does the work for the API. Zephyr normally does little
to no error checking of arguments, or does this kind of checking with assertions. When writing the
implementation function, validation of any parameters is optional and should be done with assertions.
All implementation functions must follow the naming convention: the name of the API prefixed with z_impl_. Implementation functions may be declared in the same header as the API as a static inline function, or defined in some C file. No prototype is needed for implementation functions; these are generated automatically.
Verification Function
The verification function runs on the kernel side when a user thread makes a system call. When the user
thread makes a software interrupt to elevate to supervisor mode, the common system call entry point
uses the system call ID provided by the user to look up the appropriate unmarshalling function for that
system call and jump into it. This in turn calls the verification function.
Verification and unmarshalling functions only run when system call APIs are invoked from user mode. If
an API is invoked from supervisor mode, the implementation is simply called and there is no software
trap.
The purpose of the verification function is to validate all the arguments passed in. This includes:
• Any kernel object pointers provided. For example, the semaphore APIs must ensure that the
semaphore object passed in is a valid semaphore and that the calling thread has permission on
it.
• Any memory buffers passed in from user mode. Checks must be made that the calling thread has
read or write permissions on the provided buffer.
• Any other arguments that have a limited range of valid values.
Verification functions involve a great deal of boilerplate code which has been made simpler by some
macros in include/zephyr/syscall_handler.h. Verification functions should be declared using these
macros.
Verifier Definition All system calls are dispatched to a verifier function with a prefixed z_vrfy_ name
based on the system call. They have exactly the same return type and argument types as the wrapped
system call. Their job is to execute the system call (generally by calling the implementation function)
after having validated all arguments.
The verifier is itself invoked by an automatically generated unmarshaller function which takes care of
unpacking the register arguments from the architecture layer and casting them to the correct type. This
is defined in a header file that must be included from user code, generally somewhere after the definition
of the verifier in a translation unit (so that it can be inlined).
For example:
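A verifier for a semaphore-give style call might look like the sketch below. The stub types and macros stand in for the real Zephyr definitions so the fragment is self-contained; only the z_vrfy_/z_impl_ naming and the validate-then-call structure reflect the convention described above:

```c
#include <stdlib.h>

/* --- stand-ins for Zephyr types/macros (not the real definitions) --- */
struct k_sem { int initialized; unsigned int count; };
#define Z_OOPS(expr) do { if (expr) { abort(); /* would kill the thread */ } } while (0)
/* Pretend object-table check: nonzero means validation failed. */
static int Z_SYSCALL_OBJ(struct k_sem *sem, int type)
{
        (void)type;
        return (sem == NULL || !sem->initialized);
}

static void z_impl_k_sem_give(struct k_sem *sem) { sem->count++; }

/* The verifier: same signature as the API, prefixed z_vrfy_.  Validate
 * the kernel object pointer, then call the implementation. */
static void z_vrfy_k_sem_give(struct k_sem *sem)
{
        Z_OOPS(Z_SYSCALL_OBJ(sem, /* K_OBJ_SEM */ 0));
        z_impl_k_sem_give(sem);
}
/* In a real tree, the generated unmarshaller would be included here:
 * #include <syscalls/k_sem_give_mrsh.c>  */
```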
Verification Memory Access Policies Parameters passed to system calls by reference require special
handling, because the value of these parameters can be changed at any time by any user thread that has
access to the memory that parameter points to. If the kernel makes any logical decisions based on the
contents of this memory, this can open up the kernel to attacks even if checking is done. This is a class
of exploits known as TOCTOU (Time Of Check to Time Of Use).
The proper procedure to mitigate these attacks is to make copies in the verification function and perform parameter checks only on the copies, which user threads never have access to. The implementation function is passed the copy rather than the original data sent by the user. The z_user_to_copy() and z_user_from_copy() APIs exist for this purpose.
There is one exception, for large data buffers that merely provide a memory area that is either only written to, or whose contents are never used for any validation or control flow. This case is discussed further later in this section.
As a first example, consider a parameter which is used as an output parameter for some integral value:
static int z_vrfy_some_syscall(int *out_param)
{
    int local_out_param;
    int ret;

    ret = z_impl_some_syscall(&local_out_param);
    Z_OOPS(z_user_to_copy(out_param, &local_out_param, sizeof(*out_param)));
    return ret;
}
Here we have allocated local_out_param on the stack, passed its address to the implementation func-
tion, and then used z_user_to_copy() to fill in the memory passed in by the caller.
It might be tempting to do something more concise:
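A sketch of that concise pattern: check the caller's write access, then hand the user pointer straight to the implementation. The names follow the hypothetical example above, and the macros are simplified stubs so the fragment stands alone (the real Z_OOPS kills the thread rather than returning):

```c
#include <stddef.h>

/* Simplified stand-ins for the real checks, so the sketch compiles. */
#define Z_SYSCALL_MEMORY_WRITE(ptr, size) ((ptr) == NULL)
#define Z_OOPS(expr) do { if (expr) { return -1; } } while (0)

static int z_impl_some_syscall(int *out_param) { *out_param = 42; return 0; }

/* Concise but unsafe: the implementation works directly on user memory,
 * which another user thread could modify concurrently (TOCTOU). */
static int z_vrfy_some_syscall(int *out_param)
{
        Z_OOPS(Z_SYSCALL_MEMORY_WRITE(out_param, sizeof(*out_param)));
        return z_impl_some_syscall(out_param);
}
```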
However, this is unsafe if the implementation ever does any reads to this memory as part of its logic. For
example, it could be used to store some counter value, and this could be meddled with by user threads
that have access to its memory. It is by far safest for small integral values to do the copying as shown in
the first example.
Some parameters may be input/output. For instance, it’s not uncommon to see APIs which pass in a
pointer to some size_t which is a maximum allowable size, which is then updated by the implementa-
tion to reflect the actual number of bytes processed. This too should use a stack copy:
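A sketch of the in/out pattern, with the user copy helpers stubbed as plain memcpy so the fragment stands alone (in real code, z_user_from_copy()/z_user_to_copy() perform the access checks and can fail):

```c
#include <stddef.h>
#include <string.h>

/* Stubs for the user<->kernel copy helpers (real ones validate access). */
static int user_from_copy_stub(void *dst, const void *src, size_t n)
{ memcpy(dst, src, n); return 0; }
static int user_to_copy_stub(void *dst, const void *src, size_t n)
{ memcpy(dst, src, n); return 0; }

/* Implementation: consumes at most *max_len bytes, reports bytes used. */
static int z_impl_process(size_t *max_len)
{
        if (*max_len > 16) {
                *max_len = 16;  /* pretend only 16 bytes were processed */
        }
        return 0;
}

/* Verifier: copy the size in, operate on the kernel-side copy only,
 * then copy the updated value back out to the caller. */
static int z_vrfy_process(size_t *max_len)
{
        size_t len_copy;
        int ret;

        if (user_from_copy_stub(&len_copy, max_len, sizeof(len_copy)) != 0) {
                return -1;
        }
        ret = z_impl_process(&len_copy);
        if (user_to_copy_stub(max_len, &len_copy, sizeof(len_copy)) != 0) {
                return -1;
        }
        return ret;
}
```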
Many system calls pass in structures or even linked data structures. All should be copied. Typically this
is done by allocating copies on the stack:
struct bar {
...
};
struct foo {
...
struct bar *bar_left;
struct bar *bar_right;
};
static int z_vrfy_copy_syscall(struct foo *foo)
{
    struct foo foo_copy;
    struct bar bar_left_copy;
    struct bar bar_right_copy;

    Z_OOPS(z_user_from_copy(&foo_copy, foo, sizeof(*foo)));
    Z_OOPS(z_user_from_copy(&bar_left_copy, foo_copy.bar_left,
                            sizeof(struct bar)));
    foo_copy.bar_left = &bar_left_copy;
    Z_OOPS(z_user_from_copy(&bar_right_copy, foo_copy.bar_right,
                            sizeof(struct bar)));
    foo_copy.bar_right = &bar_right_copy;

    return z_impl_copy_syscall(&foo_copy);
}
In some cases the amount of data isn’t known at compile time or may be too large to allocate on
the stack. In this scenario, it may be necessary to draw memory from the caller’s resource pool via
z_thread_malloc(). This should always be considered a last resort. Functional safety programming guidelines heavily discourage use of the heap, and the fact that a resource pool is used must be clearly documented. Any allocation failure must be reported to the caller by returning -ENOMEM; Z_OOPS() should never be used to check whether a resource allocation succeeded.
struct bar {
...
};
struct foo {
size_t count;
struct bar *bar_list; /* array of struct bar of size count */
};
static int z_vrfy_must_alloc(struct foo *foo)
{
    int ret;
    struct foo foo_copy;
    struct bar *bar_list_copy;
    size_t bar_list_bytes;

    /* Safely copy the foo parameter */
    Z_OOPS(z_user_from_copy(&foo_copy, foo, sizeof(*foo)));

    /* Compute the size of the bar_list array, checking for overflow */
    if (size_mul_overflow(foo_copy.count, sizeof(struct bar),
                          &bar_list_bytes)) {
        return -EINVAL;
    }

    /* Draw memory from the caller's resource pool; report failure
     * with -ENOMEM rather than Z_OOPS() */
    bar_list_copy = z_thread_malloc(bar_list_bytes);
    if (bar_list_copy == NULL) {
        return -ENOMEM;
    }

    /* Copy the array of struct bar and update the pointer in the copy */
    Z_OOPS(z_user_from_copy(bar_list_copy, foo_copy.bar_list,
                            bar_list_bytes));
    foo_copy.bar_list = bar_list_copy;

    ret = z_impl_must_alloc(&foo_copy);

    /* Free the temporary copy */
    k_free(bar_list_copy);

    return ret;
}
Finally, we must consider large data buffers. These represent areas of user memory which either have
data copied out of, or copied into. It is permitted to pass these pointers to the implementation function
directly. The caller’s access to the buffer still must be validated with Z_SYSCALL_MEMORY APIs. The
following constraints need to be met:
• If the buffer is used by the implementation function to write data, such as data captured from some
MMIO region, the implementation function must only write this data, and never read it.
• If the buffer is used by the implementation function to read data, such as a block of memory to
write to some hardware destination, this data must be read without any processing. No conditional
logic can be implemented due to the data buffer’s contents. If such logic is required a copy must
be made.
• The buffer must only be used synchronously with the call. The implementation must not ever save
the buffer address and use it asynchronously, such as when an interrupt fires.
Verification Return Value Policies When verifying system calls, it’s important to note which kinds
of verification failures should propagate a return value to the caller, and which should simply invoke
Z_OOPS() which kills the calling thread. The current conventions are as follows:
1. For system calls that are defined but not compiled, invocations of these missing system calls are
routed to handler_no_syscall() which invokes Z_OOPS().
2. Any invalid access to memory found by the set of Z_SYSCALL_MEMORY APIs, z_user_from_copy(),
z_user_to_copy() should trigger a Z_OOPS. This happens when the caller doesn’t have appropriate
permissions on the memory buffer or some size calculation overflowed.
3. Most system calls take kernel object pointers as an argument, checked either with one of the
Z_SYSCALL_OBJ functions, Z_SYSCALL_DRIVER_nnnnn, or manually using z_object_validate().
These can fail for a variety of reasons: missing driver API, bad kernel object pointer, wrong kernel
object type, or improper initialization state. These issues should always invoke Z_OOPS().
4. Any error resulting from a failed memory heap allocation, often from invoking
z_thread_malloc(), should propagate -ENOMEM to the caller.
5. General parameter checks should be done in the implementation function, in most cases using
CHECKIF().
• The behavior of CHECKIF() depends on the kernel configuration, but if user mode is enabled,
CONFIG_RUNTIME_ERROR_CHECKS is enforced, which guarantees that these checks will be made
and a return value propagated.
6. It is totally forbidden for any kind of kernel mode callback function to be registered from user mode.
APIs which simply install callbacks shall not be exposed as system calls. Some driver subsystem
APIs may take optional function callback pointers. User mode verification functions for these APIs
must enforce that these are NULL and should invoke Z_OOPS() if not.
7. Some parameter checks are enforced only from user mode. These should be checked in the verifi-
cation function and propagate a return value to the caller if possible.
There are some known exceptions to these policies currently in Zephyr:
• k_thread_join() and k_thread_abort() are no-ops if the thread object isn’t initialized. This is
because for threads, the initialization bit pulls double-duty to indicate whether a thread is running,
cleared upon exit. See #23030.
• k_thread_create() invokes Z_OOPS() for parameter checks, due to a great deal of existing code
ignoring the return value. This will also be addressed by #23030.
• k_thread_abort() invokes Z_OOPS() if an essential thread is aborted, as the function has no
return value.
• Various system calls related to logging invoke Z_OOPS() when bad parameters are passed in as they
do not propagate errors.
Configuration Options
APIs
Helper macros for creating system call verification functions are provided in include/zephyr/syscall_handler.h:
• Z_SYSCALL_OBJ()
• Z_SYSCALL_OBJ_INIT()
• Z_SYSCALL_OBJ_NEVER_INIT()
• Z_OOPS()
• Z_SYSCALL_MEMORY_READ()
• Z_SYSCALL_MEMORY_WRITE()
• Z_SYSCALL_MEMORY_ARRAY_READ()
• Z_SYSCALL_MEMORY_ARRAY_WRITE()
• Z_SYSCALL_VERIFY_MSG()
• Z_SYSCALL_VERIFY()
Functions for invoking system calls are defined in include/zephyr/syscall.h:
• _arch_syscall_invoke0()
• _arch_syscall_invoke1()
• _arch_syscall_invoke2()
• _arch_syscall_invoke3()
• _arch_syscall_invoke4()
• _arch_syscall_invoke5()
• _arch_syscall_invoke6()
Thread stacks are declared statically with K_THREAD_STACK_DEFINE() or embedded within structures using K_THREAD_STACK_MEMBER().
For architectures which utilize memory protection unit (MPU) hardware, stacks are physically contiguous
allocations. This contiguous allocation has implications for the placement of stacks in memory, as well
as the implementation of other features such as stack protection and userspace. The implications for
placement are directly attributed to the alignment requirements for MPU regions. This is discussed in
the memory placement section below.
Stack Guards
Stack protection mechanisms require hardware support that can restrict access to memory. Memory
protection units can provide this kind of support. The MPU provides a fixed number of regions. Each
region contains information about the start, end, size, and access attributes to be enforced on that
particular region.
Stack guards are implemented by using a single MPU region and setting the attributes for that region to
not allow write access. If invalid accesses occur, a fault ensues. The stack guard is defined at the bottom
(the lowest address) of the stack.
Memory Placement
During stack creation, a set of constraints are enforced on the allocation of memory. These constraints
include determining the alignment of the stack and the correct sizing of the stack. During linking of the
binary, these constraints are used to place the stacks properly.
The main source of the memory constraints is the MPU design for the SoC. The MPU design may re-
quire specific constraints on the region definition. These can include alignment of beginning and end
addresses, sizes of allocations, or even interactions between overlapping regions.
Some MPUs require that each region be aligned to a power of two. These SoCs will have
CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT defined. This means that a 1500 byte stack should
be aligned to a 2kB boundary and the stack size should also be adjusted to 2kB to ensure that nothing
else is placed in the remainder of the region. SoCs which include the unmodified ARM v7m MPU will
have these constraints.
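The 1500-byte example can be checked with a few lines of arithmetic: rounding the requested size up to the next power of two yields both the adjusted size and the required alignment. This is a sketch of the arithmetic only, not Zephyr's actual stack-declaration macros:

```c
#include <stddef.h>

/* Round 'size' up to the next power of two, e.g. 1500 -> 2048.
 * On power-of-two MPUs, the result serves as both the adjusted
 * stack size and the required alignment of its base address. */
static size_t stack_round_pow2(size_t size)
{
        size_t p = 1;

        while (p < size) {
                p <<= 1;
        }
        return p;
}
```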
Some ARM MPUs use start and end addresses to define MPU regions and both the start and end addresses
require 32 byte alignment. An example of this kind of MPU is found in the NXP FRDM K64F.
MPUs may have region priority mechanisms that use the highest-priority region covering the memory access to determine the enforcement policy. Others may logically OR overlapping regions to determine the enforcement policy.
Size and alignment constraints may result in stack allocations being larger than the requested size.
Region priority mechanisms may result in some added complexity when implementing stack guards.
The MPU-backed userspace implementation requires the creation of a secondary set of stacks. These
stacks exist in a 1:1 relationship with each thread stack defined in the system. The privileged stacks are
created as a part of the build process.
A post-build script scripts/build/gen_kobject_list.py scans the generated ELF file and finds all of the thread
stack objects. A set of privileged stacks, a lookup table, and a set of helper functions are created and
added to the image.
During the process of dropping a thread to user mode, the privileged stack information is filled in and
later used by the swap and system call infrastructure to configure the MPU regions properly for the
thread stack and guard (if applicable).
During system calls, the user mode thread’s access to the system call and the passed-in parameters are
all validated. The user mode thread is then elevated to privileged mode, the stack is switched to use
the privileged stack, and the call is made to the specified kernel API. On return from the kernel API, the
thread is set back to user mode and the stack is restored to the user stack.
Zephyr provides a collection of utilities that allow threads to dynamically allocate memory.
Creating a Heap The simplest way to define a heap is statically, with the K_HEAP_DEFINE macro. This
creates a static k_heap variable with a given name that manages a memory region of the specified size.
Heaps can also be created to manage arbitrary regions of application-controlled memory using
k_heap_init() .
Allocating Memory Memory can be allocated from a heap using k_heap_alloc() , passing it the
address of the heap object and the number of bytes desired. This functions similarly to standard C
malloc(), returning a NULL pointer on an allocation failure.
The heap supports blocking operation, allowing threads to go to sleep until memory is available. The final
argument is a k_timeout_t timeout value indicating how long the thread may sleep before returning,
or else one of the constant timeout values K_NO_WAIT or K_FOREVER .
Releasing Memory Memory allocated with k_heap_alloc() must be released using k_heap_free() .
Similar to standard C free(), the pointer provided must be either a NULL value or a pointer previously
returned by k_heap_alloc() for the same heap. Freeing a NULL value is defined to have no effect.
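The allocate/free flow looks like the following sketch. The k_heap_alloc()/k_heap_free() names and K_NO_WAIT follow the real Zephyr API, but here they are backed by malloc/free stubs so the pattern can be shown self-contained:

```c
#include <stdlib.h>
#include <stddef.h>

/* --- minimal stand-ins for the Zephyr heap API (not the real kernel) --- */
struct k_heap { int unused; };
#define K_NO_WAIT 0
static struct k_heap my_heap;   /* real code: K_HEAP_DEFINE(my_heap, 1024); */

static void *k_heap_alloc(struct k_heap *h, size_t bytes, int timeout)
{
        (void)h; (void)timeout;
        return malloc(bytes);   /* real version draws from the heap region */
}

static void k_heap_free(struct k_heap *h, void *mem)
{
        (void)h;
        free(mem);              /* freeing NULL is a no-op, as with free() */
}
```

With the real API, passing K_FOREVER instead of K_NO_WAIT would block the calling thread until memory becomes available.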
The underlying implementation of the k_heap abstraction is provided by a data structure named sys_heap. This implements exactly the same allocation semantics, but provides no kernel synchronization tools. It
is available for applications that want to manage their own blocks of memory in contexts (for example,
userspace) where synchronization is unavailable or more complicated. Unlike k_heap, all calls to any
sys_heap functions on a single heap must be serialized by the caller. Simultaneous use from separate
threads is disallowed.
Implementation Internally, the sys_heap memory block is partitioned into “chunks” of 8 bytes. All
allocations are made out of a contiguous region of chunks. The first chunk of every allocation or unused
block is prefixed by a chunk header that stores the length of the chunk, the length of the next lower
(“left”) chunk in physical memory, a bit indicating whether the chunk is in use, and chunk-indexed link
pointers to the previous and next chunk in a “free list” to which unused chunks are added.
The heap code takes reasonable care to avoid fragmentation. Free block lists are stored in “buckets” by
their size, each bucket storing blocks within one power of two (i.e. a bucket for blocks of 3-4 chunks,
another for 5-8, another for 9-16, and so on). This allows new allocations to be made from the smallest/most-fragmented
blocks available. Also, as allocations are freed and added to the heap, they are automatically combined
with adjacent free blocks to prevent fragmentation.
All metadata is stored at the beginning of the contiguous block of heap memory, including the
variable-length list of bucket list heads (which depends on heap size). The only external memory required is the
sys_heap structure itself.
The sys_heap functions are unsynchronized. Care must be taken by any users to prevent concurrent
access. Only one context may be inside one of the API functions at a time.
The heap code takes care to provide high performance and reliable latency. All sys_heap API functions
are guaranteed to complete within constant time. On typical architectures, they will all complete within
1-200 cycles. One complexity is that the search of the minimum bucket size for an allocation (the set
of free blocks that “might fit”) has a compile-time upper bound of iterations to prevent unbounded list
searches, at the expense of some fragmentation resistance. This CONFIG_SYS_HEAP_ALLOC_LOOPS value
may be chosen by the user at build time, and defaults to a value of 3.
The sys_heap utility requires that all managed memory be in a single contiguous block. It is common for
complex microcontroller applications to have more complicated memory setups that they still want
to manage dynamically as a “heap”. For example, the memory might exist as separate discontiguous
regions, different areas may have different cache, performance or power behavior, peripheral devices
may only be able to perform DMA to certain regions, etc. . .
For those situations, Zephyr provides a sys_multi_heap utility. Effectively this is a simple wrapper
around a set of one or more sys_heap objects. It should be initialized after its child
heaps via sys_multi_heap_init(), after which each heap can be added to the managed set via
sys_multi_heap_add_heap(). No destruction utility is provided; just as for sys_heap, applications
that want to destroy a multi heap should simply ensure all allocated blocks are freed (or at least will
never be used again) and repurpose the underlying memory for another usage.
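For illustration, initialization might look like the sketch below. The heap names and the choice_fn callback are placeholders, and the sys_multi_heap_add_heap() signature (including its trailing user-data argument) is assumed here rather than taken from this section:

```c
#include <zephyr/sys/multi_heap.h>

static struct sys_multi_heap multi;
static struct sys_heap fast_heap, slow_heap;  /* child heaps; names illustrative */

void multi_heap_setup(void)
{
	/* choice_fn is a user-supplied callback (declaration elided) that
	 * maps an opaque cfg value passed at allocation time to one of the
	 * child heaps. */
	sys_multi_heap_init(&multi, choice_fn);
	sys_multi_heap_add_heap(&multi, &fast_heap, NULL);
	sys_multi_heap_add_heap(&multi, &slow_heap, NULL);
}
```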
System Heap
The system heap is a predefined memory allocator that allows threads to dynamically allocate memory
from a common memory region in a malloc()-like manner.
Only a single system heap is defined. Unlike other heaps or memory pools, the system heap cannot be
directly referenced using its memory address.
The size of the system heap is configurable to arbitrary sizes, subject to space availability.
A thread can dynamically allocate a chunk of heap memory by calling k_malloc() . The address of the
allocated chunk is guaranteed to be aligned on a multiple of pointer sizes. If a suitable chunk of heap
memory cannot be found NULL is returned.
When the thread is finished with a chunk of heap memory it can release the chunk back to the system
heap by calling k_free() .
Defining the Heap Memory Pool The size of the heap memory pool is specified using the
CONFIG_HEAP_MEM_POOL_SIZE configuration option.
By default, the heap memory pool size is zero bytes. This value instructs the kernel not to define the heap
memory pool object. The maximum size is limited by the amount of available memory in the system.
The project build will fail in the link stage if the size specified can not be supported.
mem_ptr = k_malloc(200);
if (mem_ptr != NULL) {
memset(mem_ptr, 0, 200);
...
} else {
printf("Memory not allocated\n");
}
mem_ptr = k_malloc(75);
... /* use memory block */
k_free(mem_ptr);
Suggested Uses Use the heap memory pool to dynamically allocate memory in a malloc()-like manner.
API Reference
group heap_apis
Defines
K_HEAP_DEFINE(name, bytes)
Define a static k_heap.
This macro defines and initializes a static memory region and k_heap of the requested size.
After kernel start, &name can be used as if k_heap_init() had been called.
Note that this macro enforces a minimum size on the memory region to accommodate metadata
requirements. Very small heaps will be padded to fit.
Parameters
• name – Symbol name for the struct k_heap object
• bytes – Size of memory region, in bytes
K_HEAP_DEFINE_NOCACHE(name, bytes)
Define a static k_heap in uncached memory.
This macro defines and initializes a static memory region and k_heap of the requested size in
uncached memory. After kernel start, &name can be used as if k_heap_init() had been called.
Note that this macro enforces a minimum size on the memory region to accommodate metadata
requirements. Very small heaps will be padded to fit.
Parameters
• name – Symbol name for the struct k_heap object
• bytes – Size of memory region, in bytes
Functions
void *k_heap_aligned_alloc(struct k_heap *h, size_t align, size_t bytes, k_timeout_t timeout)
Allocate aligned memory from a k_heap.
Behaves in all ways like k_heap_alloc(), except that the returned memory (if available) will
have a starting address in memory which is a multiple of the specified power-of-two alignment
value in bytes. The resulting memory can be returned to the heap using k_heap_free().
Parameters
• h – Heap from which to allocate
• align – Alignment in bytes, must be a power of two
• bytes – Number of bytes requested
• timeout – How long to wait, or K_NO_WAIT
Returns
Pointer to memory the caller can now use
void *k_heap_alloc(struct k_heap *h, size_t bytes, k_timeout_t timeout)
Allocate memory from a k_heap.
Allocates and returns a memory buffer from the memory region owned by the heap. If no
memory is available immediately, the call will block for the specified timeout waiting for
memory to be freed. If the allocation cannot be performed by the expiration of the timeout,
NULL will be returned.
Parameters
• h – Heap from which to allocate
• bytes – Desired size of block to allocate
• timeout – How long to wait, or K_NO_WAIT
Returns
A pointer to valid heap memory, or NULL
Returns
Address of the allocated memory if successful; otherwise NULL.
struct k_heap
#include <kernel.h>
Heap listener
group heap_listener_apis
Defines
HEAP_ID_FROM_POINTER(heap_pointer)
Construct heap identifier from heap pointer.
Construct a heap identifier from a pointer to the heap object, such as sys_heap.
Parameters
• heap_pointer – Pointer to the heap object
HEAP_ID_LIBC
Libc heap identifier.
Identifier of the global libc heap.
HEAP_LISTENER_ALLOC_DEFINE(name, _heap_id, _alloc_cb)
Define heap event listener node for allocation event.
Sample usage:
Parameters
• name – Name of the heap event listener object
• _heap_id – Identifier of the heap to be listened to
• _alloc_cb – Function to be called for allocation event
HEAP_LISTENER_FREE_DEFINE(name, _heap_id, _free_cb)
Define heap event listener node for free event.
Parameters
• name – Name of the heap event listener object
• _heap_id – Identifier of the heap to be listened to
• _free_cb – Function to be called for free event
HEAP_LISTENER_RESIZE_DEFINE(name, _heap_id, _resize_cb)
Define heap event listener node for heap resize event.
Sample usage:
void on_heap_resized(uintptr_t heap_id, void *old_heap_end, void *new_heap_end)
{
LOG_INF("Libc heap end moved from %p to %p", old_heap_end, new_heap_end);
}
Parameters
• name – Name of the heap event listener object
• _heap_id – Identifier of the heap to be listened to
• _resize_cb – Function to be called when the listened heap is resized
Typedefs
Param heap_id
Identifier of heap being resized
Param old_heap_end
Pointer to end of heap before resize
Param new_heap_end
Pointer to end of heap after resize
Note: Heaps managed by libraries outside of code in Zephyr main code repository may not
emit this event.
Note: The number of bytes allocated may not exactly match the request to the allocation
function. The internal mechanism of the heap may allocate more than requested.
Param heap_id
Heap identifier
Param mem
Pointer to the allocated memory
Param bytes
Size of allocated memory
Note: Heaps managed by libraries outside of code in Zephyr main code repository may not
emit this event.
Note: The number of bytes freed may not exactly match the original allocation request. The
internal mechanism of the heap dictates how memory is allocated or freed.
Param heap_id
Heap identifier
Param mem
Pointer to the freed memory
Param bytes
Size of freed memory
Enums
enum heap_event_types
Values:
enumerator HEAP_EVT_UNKNOWN = 0
enumerator HEAP_RESIZE
enumerator HEAP_ALLOC
enumerator HEAP_FREE
enumerator HEAP_REALLOC
enumerator HEAP_MAX_EVENTS
Functions
void heap_listener_register(struct heap_listener *listener)
Register heap event listener.
Add the listener to the global list of heap listeners that can be notified by different heap
implementations upon certain events related to the heap usage.
Parameters
• listener – Pointer to the heap_listener object
void heap_listener_unregister(struct heap_listener *listener)
Unregister heap event listener.
Remove the listener from the global list of heap listeners that can be notified by different heap
implementations upon certain events related to the heap usage.
Parameters
• listener – Pointer to the heap_listener object
void heap_listener_notify_alloc(uintptr_t heap_id, void *mem, size_t bytes)
Notify listeners of heap allocation event.
Notify registered heap event listeners with matching heap identifier that an allocation has
been done on heap
Parameters
• heap_id – Heap identifier
• mem – Pointer to the allocated memory
• bytes – Size of allocated memory
void heap_listener_notify_free(uintptr_t heap_id, void *mem, size_t bytes)
Notify listeners of heap free event.
Notify registered heap event listeners with matching heap identifier that memory is freed on
heap
Parameters
• heap_id – Heap identifier
• mem – Pointer to the freed memory
• bytes – Size of freed memory
void heap_listener_notify_resize(uintptr_t heap_id, void *old_heap_end, void
*new_heap_end)
Notify listeners of heap resize event.
Notify registered heap event listeners with matching heap identifier that the heap has been
resized.
Parameters
• heap_id – Heap identifier
• old_heap_end – Address of the heap end before the change
• new_heap_end – Address of the heap end after the change
struct heap_listener
#include <heap_listener.h>
Public Members
sys_snode_t node
Singly linked list node
uintptr_t heap_id
Identifier of the heap whose events are listened.
It can be a heap pointer, if the heap is represented as an object, or 0 in the case of the
global libc heap.
The shared multi-heap memory pool manager uses the multi-heap allocator to manage a set of reserved
memory regions with different capabilities / attributes (cacheable, non-cacheable, etc. . . ).
All the different regions can be added at run-time to the shared multi-heap pool providing an opaque
“attribute” value (an integer or enum value) that can be used by drivers or applications to request
memory with certain capabilities.
This framework is commonly used as follows:
1. At boot time some platform code initializes the shared multi-heap framework using
shared_multi_heap_pool_init() and adds the memory regions to the pool with
shared_multi_heap_add() , possibly gathering the needed information for the regions from the
DT.
2. Each memory region is encoded in a shared_multi_heap_region structure. This structure also
carries an opaque and user-defined integer value that is used to define the region capabilities (for
example: cacheability, cpu affinity, etc.)
shared_multi_heap_add(&cacheable_r0, NULL);
shared_multi_heap_add(&non_cacheable_r2, NULL);
3. When a driver or application needs some dynamic memory with a certain capability, it can use
shared_multi_heap_alloc() (or the aligned version) to request the memory by using the opaque
parameter to select the correct set of attributes for the needed memory. The framework will take
care of selecting the correct heap (thus memory region) to carve memory from, based on the
opaque parameter and the runtime state of the heaps (available memory, heap state, etc. . . )
The API does not enforce any attributes, but it defines the two most common ones:
SMH_REG_ATTR_CACHEABLE and SMH_REG_ATTR_NON_CACHEABLE
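A request for cacheable memory might then look like the sketch below. The 512-byte size is arbitrary, the header path is assumed, and shared_multi_heap_free() is assumed to be the matching release call:

```c
#include <zephyr/multi_heap/shared_multi_heap.h>

void use_cacheable_buffer(void)
{
	/* Ask the pool for 512 bytes backed by a cacheable region */
	void *buf = shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 512);

	if (buf != NULL) {
		/* ... use the buffer ... */
		shared_multi_heap_free(buf);
	}
}
```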
group shared_multi_heap
Shared Multi-Heap (SMH) interface.
The shared multi-heap manager uses the multi-heap allocator to manage a set of memory regions
with different capabilities / attributes (cacheable, non-cacheable, etc. . . ).
All the different regions can be added at run-time to the shared multi-heap pool providing an
opaque “attribute” value (an integer or enum value) that can be used by drivers or applications to
request memory with certain capabilities.
This framework is commonly used as follows:
• At boot time some platform code initializes the shared multi-heap framework using
shared_multi_heap_pool_init and adds the memory regions to the pool with
shared_multi_heap_add, possibly gathering the needed information for the regions from the
DT.
• Each memory region is encoded in a shared_multi_heap_region structure. This structure also
carries an opaque and user-defined integer value that is used to define the region capabilities
(for example: cacheability, cpu affinity, etc.)
• When a driver or application needs some dynamic memory with a certain capability, it can use
shared_multi_heap_alloc (or the aligned version) to request the memory by using the opaque
parameter to select the correct set of attributes for the needed memory. The framework will
take care of selecting the correct heap (thus memory region) to carve memory from, based
on the opaque parameter and the runtime state of the heaps (available memory, heap state,
etc. . . )
Defines
MAX_SHARED_MULTI_HEAP_ATTR
Maximum number of standard attributes.
Enums
enum smh_reg_attr
SMH region attributes enumeration type.
Enumeration type for some common memory region attributes.
Values:
enumerator SMH_REG_ATTR_CACHEABLE
cacheable
enumerator SMH_REG_ATTR_NON_CACHEABLE
non-cacheable
enumerator SMH_REG_ATTR_NUM
must be the last item
Functions
int shared_multi_heap_pool_init(void)
Init the pool.
This must be the first function to be called to initialize the shared multi-heap pool. All the
individual heaps must be added later with shared_multi_heap_add.
Note: As for the generic multi-heap allocator the expectation is that this function will be
called at soc- or board-level.
Return values
• 0 – on success.
• -EALREADY – when the pool was already initialized.
• other – errno codes
struct shared_multi_heap_region
#include <shared_multi_heap.h> SMH region struct.
This struct is carrying information about the memory region to be added in the multi-heap
pool.
Public Members
uintptr_t addr
Memory heap starting virtual address
size_t size
Memory heap size in bytes
A memory slab is a kernel object that allows memory blocks to be dynamically allocated from a designated
memory region. All memory blocks in a memory slab have a single fixed size, allowing them to be
allocated and released efficiently and avoiding memory fragmentation concerns.
• Concepts
– Internal Operation
• Implementation
– Defining a Memory Slab
– Allocating a Memory Block
– Releasing a Memory Block
• Suggested Uses
• Configuration Options
• API Reference
Concepts
Any number of memory slabs can be defined (limited only by available RAM). Each memory slab is
referenced by its memory address.
A memory slab has the following key properties:
• The block size of each block, measured in bytes. It must be at least 4N bytes long, where N is
greater than 0.
• The number of blocks available for allocation. It must be greater than zero.
• A buffer that provides the memory for the memory slab’s blocks. It must be at least “block size”
times “number of blocks” bytes long.
The memory slab’s buffer must be aligned to an N-byte boundary, where N is a power of 2 larger than 2
(i.e. 4, 8, 16, . . . ). To ensure that all memory blocks in the buffer are similarly aligned to this boundary,
the block size must also be a multiple of N.
A memory slab must be initialized before it can be used. This marks all of its blocks as unused.
A thread that needs to use a memory block simply allocates it from a memory slab. When the thread
finishes with a memory block, it must release the block back to the memory slab so the block can be
reused.
If all the blocks are currently in use, a thread can optionally wait for one to become available. Any
number of threads may wait on an empty memory slab simultaneously; when a memory block becomes
available, it is given to the highest-priority thread that has waited the longest.
Unlike a heap, more than one memory slab can be defined, if needed. This allows for a memory slab with
smaller blocks and others with larger-sized blocks. Alternatively, a memory pool object may be used.
Internal Operation A memory slab’s buffer is an array of fixed-size blocks, with no wasted space
between the blocks.
The memory slab keeps track of unallocated blocks using a linked list; the first 4 bytes of each unused
block provide the necessary linkage.
Implementation
Defining a Memory Slab A memory slab is defined using a variable of type k_mem_slab. It must then
be initialized by calling k_mem_slab_init() .
The following code defines and initializes a memory slab that has 6 blocks of 400 bytes each, aligned
to a 4-byte boundary:
struct k_mem_slab my_slab;
char __aligned(4) my_slab_buffer[6 * 400];
k_mem_slab_init(&my_slab, my_slab_buffer, 400, 6);
Alternatively, a memory slab can be defined and initialized at compile time by calling
K_MEM_SLAB_DEFINE .
The following code has the same effect as the code segment above. Observe that the macro defines both
the memory slab and its buffer:
K_MEM_SLAB_DEFINE(my_slab, 400, 6, 4);
Allocating a Memory Block A memory block is allocated by calling k_mem_slab_alloc() .
The following code waits up to 100 milliseconds for a memory block to become available, then fills it
with zeroes:
char *block_ptr;
if (k_mem_slab_alloc(&my_slab, (void **)&block_ptr, K_MSEC(100)) == 0) {
memset(block_ptr, 0, 400);
...
} else {
... /* timed out, no block available */
}
Releasing a Memory Block A memory block is released by calling k_mem_slab_free() .
The following code allocates a memory block, then releases it once it is no longer needed:
char *block_ptr;
k_mem_slab_alloc(&my_slab, (void **)&block_ptr, K_FOREVER);
... /* use memory block pointed at by block_ptr */
k_mem_slab_free(&my_slab, (void **)&block_ptr);
Suggested Uses
Configuration Options
API Reference
group mem_slab_apis
Defines
K_MEM_SLAB_DEFINE(name, slab_block_size, slab_num_blocks, slab_align)
Statically define and initialize a memory slab in a public (non-static) scope.
Note: This macro cannot be used together with a static keyword. If such a use-case is desired,
use K_MEM_SLAB_DEFINE_STATIC instead.
Parameters
• name – Name of the memory slab.
• slab_block_size – Size of each memory block (in bytes).
• slab_num_blocks – Number of memory blocks.
• slab_align – Alignment of the memory slab’s buffer (power of 2).
Functions
int k_mem_slab_alloc(struct k_mem_slab *slab, void **mem, k_timeout_t timeout)
Allocate memory from a memory slab.
Parameters
• slab – Address of the memory slab.
• mem – Pointer to block address area.
• timeout – Non-negative waiting period to wait for operation to complete. Use
K_NO_WAIT to return without waiting, or K_FOREVER to wait as long as necessary.
Return values
• 0 – Memory allocated. The block address area pointed at by mem is set to the
starting address of the memory block.
• -ENOMEM – Returned without waiting.
• -EAGAIN – Waiting period timed out.
• -EINVAL – Invalid data supplied
Return values
• 0 – Success
• -EINVAL – Memory slab is NULL
The Memory Blocks Allocator allows memory blocks to be dynamically allocated from a designated
memory region, where:
• All memory blocks have a single fixed size.
• Multiple blocks can be allocated or freed at the same time.
• A group of blocks allocated together may not be contiguous. This is useful for operations such as
scatter-gather DMA transfers.
• Bookkeeping of allocated blocks is done outside of the associated buffer (unlike a memory slab).
This allows the buffer to reside in memory regions that can be powered down to conserve
energy.
• Concepts
– Internal Operation
• Memory Blocks Allocator
• Multi Memory Blocks Allocator Group
• Usage
– Defining a Memory Blocks Allocator
– Allocating Memory Blocks
– Releasing a Memory Block
– Using Multi Memory Blocks Allocator Group
• API Reference
Concepts
Any number of memory blocks allocators can be defined (limited only by available RAM). Each allocator
is referenced by its memory address.
A memory blocks allocator has the following key properties:
• The block size of each block, measured in bytes. It must be at least 4N bytes long, where N is
greater than 0.
• The number of blocks available for allocation. It must be greater than zero.
• A buffer that provides the memory for the allocator’s blocks. It must be at least “block size”
times “number of blocks” bytes long.
• A blocks bitmap to keep track of which blocks have been allocated.
The buffer must be aligned to an N-byte boundary, where N is a power of 2 larger than 2 (i.e. 4, 8, 16,
. . . ). To ensure that all memory blocks in the buffer are similarly aligned to this boundary, the block size
must also be a multiple of N.
Due to the use of internal bookkeeping structures and their creation, each memory blocks allocator must
be declared and defined at compile time.
Internal Operation Each buffer associated with an allocator is an array of fixed-size blocks, with no
wasted space between the blocks.
The memory blocks allocator uses a bitmap to keep track of which blocks have been allocated.
Each allocator, utilizing the sys_bitarray interface, gets memory blocks one by one from the backing
buffer up to the requested number of blocks. All the metadata about an allocator is stored outside of the
backing buffer. This allows the memory region of the backing buffer to be powered down to conserve
energy, as the allocator code never touches the content of the buffer.
The Multi Memory Blocks Allocator Group utility functions provide a convenient way to manage a group of
allocators. A custom allocator choosing function is used to choose which allocator to use among this
group.
An allocator group should be initialized at runtime via sys_multi_mem_blocks_init() . Each allocator
can then be added via sys_multi_mem_blocks_add_allocator() .
To allocate memory blocks from group, sys_multi_mem_blocks_alloc() is called with an opaque “con-
figuration” parameter. This parameter is passed directly to the allocator choosing function so that an
appropriate allocator can be chosen. After an allocator is chosen, memory blocks are allocated via
sys_mem_blocks_alloc() .
Allocated memory blocks can be freed via sys_multi_mem_blocks_free() . The caller does not need to
pass a configuration parameter. The allocator code matches the passed in memory addresses to find the
correct allocator and then memory blocks are freed via sys_mem_blocks_free() .
Usage
Defining a Memory Blocks Allocator A memory blocks allocator is defined using a variable
of type sys_mem_blocks_t . It needs to be defined and initialized at compile time by calling
SYS_MEM_BLOCKS_DEFINE .
The following code defines and initializes a memory blocks allocator which has 4 blocks that are 64 bytes
long, each of which is aligned to a 4-byte boundary:
SYS_MEM_BLOCKS_DEFINE(allocator, 64, 4, 4);
A pre-defined buffer can also be provided to the allocator where the buffer can be placed separately.
Note that the alignment of the buffer needs to be done at its definition.
uint8_t __aligned(4) backing_buffer[64 * 4];
SYS_MEM_BLOCKS_DEFINE_WITH_EXT_BUF(allocator, 64, 4, backing_buffer);
Allocating Memory Blocks The following code allocates 2 blocks from the allocator by calling
sys_mem_blocks_alloc() :
int ret;
uintptr_t blocks[2];
ret = sys_mem_blocks_alloc(allocator, 2, (void **)blocks);
If ret == 0, the array blocks will contain an array of memory addresses pointing to the allocated blocks.
Releasing a Memory Block Allocated blocks are released by calling sys_mem_blocks_free() :
sys_mem_blocks_free(allocator, 2, (void **)blocks);
Using Multi Memory Blocks Allocator Group The following code demonstrates how to initialize an
allocator group:
sys_multi_mem_blocks_init(&alloc_group, choice_fn);
sys_multi_mem_blocks_add_allocator(&alloc_group, &allocator0);
sys_multi_mem_blocks_add_allocator(&alloc_group, &allocator1);
To allocate from the group, pass a configuration value that the choosing function understands:
int ret;
uintptr_t blocks[1];
size_t blk_size;
ret = sys_multi_mem_blocks_alloc(&alloc_group, UINT_TO_POINTER(1), 1,
(void **)blocks, &blk_size);
API Reference
group mem_blocks_apis
Defines
Typedefs
Param group
Multi memory blocks allocator structure.
Param cfg
An opaque user-provided value. It may be interpreted in any way by the application.
Return
A pointer to the chosen allocator, or NULL if none is chosen.
Functions
Return values
• 0 – Successful
• -EINVAL – Invalid argument supplied.
• -ENOMEM – Some of the blocks are taken and cannot be allocated
int sys_mem_blocks_is_region_free(sys_mem_blocks_t *mem_block, void *in_block, size_t
count)
Check if the region is free.
Parameters
• mem_block – [in] Pointer to memory block object.
• in_block – [in] Address of the first block to check
• count – [in] Number of blocks to check.
Return values
• 1 – All memory blocks are free
• 0 – At least one of the memory blocks is taken
int sys_mem_blocks_free(sys_mem_blocks_t *mem_block, size_t count, void **in_blocks)
Free multiple memory blocks.
Free multiple memory blocks according to the array of memory block pointers.
Parameters
• mem_block – [in] Pointer to memory block object.
• count – [in] Number of blocks to free.
• in_blocks – [in] Input array of pointers to the memory blocks.
Return values
• 0 – Successful
• -EINVAL – Invalid argument supplied.
• -EFAULT – Invalid pointers supplied.
int sys_mem_blocks_free_contiguous(sys_mem_blocks_t *mem_block, void *block, size_t
count)
Free contiguous multiple memory blocks.
Free contiguous multiple memory blocks
Parameters
• mem_block – [in] Pointer to memory block object.
• block – [in] Pointer to the first memory block
• count – [in] Number of blocks to free.
Return values
• 0 – Successful
• -EINVAL – Invalid argument supplied.
• -EFAULT – Invalid pointer supplied.
Return values
• 0 – Successful
• -EINVAL – Invalid argument supplied, or no allocator chosen.
• -EFAULT – Invalid pointer(s) supplied.
Demand paging provides a mechanism where data is only brought into physical memory as required by
the current execution context. The physical memory is conceptually divided into page-sized page frames
as regions to hold data.
• When the processor tries to access data and the data page exists in one of the page frames, the
execution continues without any interruptions.
• When the processor tries to access a data page that does not exist in any page frame, a page
fault occurs. The paging code then brings the corresponding data page from the backing store into
physical memory if there is a free page frame. If there are no more free page frames, the eviction
algorithm is invoked to select a data page to be paged out, thus freeing up a page frame for new
data to be paged in. If this data page has been modified since it was first paged in, it is written
back to the backing store; otherwise the write-back is skipped. Either way, the data page is then
considered paged out and the corresponding page frame is free. The paging code then invokes
the backing store to page in the data page corresponding to the location of the requested data.
The backing store copies that data page into the free page frame. Now the data page is in
physical memory and execution can continue.
Paging in and out can also be invoked manually, using k_mem_page_in() and
k_mem_page_out() . k_mem_page_in() can be used to page in data pages in anticipation that they will
be required in the near future. This minimizes the number of page faults, since these data pages are
already in physical memory, and thus minimizes latency. k_mem_page_out() can be used to page out
data pages that are not going to be accessed for a considerable amount of time. This frees up
page frames so that the next page-in can be executed faster, as the paging code does not need to invoke
the eviction algorithm.
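As a sketch, where frame_buf and FRAME_BUF_SIZE are illustrative placeholders for a page-aligned region and its page-aligned size:

```c
void render_burst(void)
{
	/* Fault the pages in ahead of the latency-sensitive phase */
	k_mem_page_in(frame_buf, FRAME_BUF_SIZE);

	/* ... hot path touches frame_buf with no page faults ... */

	/* Release the frames once the region will sit idle for a while */
	k_mem_page_out(frame_buf, FRAME_BUF_SIZE);
}
```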
Terminology
Data Page
A data page is a page-sized region of data. It may exist in a page frame, or be paged out to some
backing store. Its location can always be looked up in the CPU’s page tables (or equivalent) by
virtual address. The data type will always be void * or in some cases uint8_t * when doing
pointer arithmetic.
Page Frame
A page frame is a page-sized physical memory region in RAM. It is a container where a data page
may be placed. It is always referred to by physical address. Zephyr has a convention of using
uintptr_t for physical addresses. For every page frame, a struct z_page_frame is instantiated
to store metadata. Flags for each page frame:
• Z_PAGE_FRAME_PINNED indicates a page frame is pinned in memory and should never be paged
out.
• Z_PAGE_FRAME_RESERVED indicates a physical page reserved by hardware and should not be
used at all.
• Z_PAGE_FRAME_MAPPED is set when a physical page is mapped to virtual memory address.
• Z_PAGE_FRAME_BUSY indicates a page frame is currently involved in a page-in/out operation.
• Z_PAGE_FRAME_BACKED indicates a page frame has a clean copy in the backing store.
Z_SCRATCH_PAGE
The virtual address of a special page provided to the backing store to:
• Copy a data page from Z_SCRATCH_PAGE to the specified location; or,
• Copy a data page from the provided location to Z_SCRATCH_PAGE.
This is used as an intermediate page for page in/out operations. This scratch page needs to be
mapped read/write for backing store code to access it. However, the data page itself may only
be mapped as read-only in the virtual address space. If that page were provided as-is to the
backing store, the data page would have to be re-mapped as read/write, which has security
implications as the data page would no longer be read-only to other parts of the application.
Paging Statistics
Eviction Algorithm
The eviction algorithm is used to determine which data page and its corresponding page frame can be
paged out to free up a page frame for the next page in operation. There are two functions which are
called from the kernel paging code:
• k_mem_paging_eviction_init() is called to initialize the eviction algorithm. This is called at
POST_KERNEL.
• k_mem_paging_eviction_select() is called to select a data page to evict. A function argument
dirty is written to signal the caller whether the selected data page has been modified since it
was first paged in. If the dirty bit is returned as set, the paging code signals the backing store to
write the data page back into storage (thus updating its content). The function returns a pointer
to the page frame corresponding to the selected data page.
Currently, an NRU (Not-Recently-Used) eviction algorithm is implemented as a sample. This is a
very simple algorithm which ranks each data page on whether it has been accessed and modified.
The selection is based on this ranking.
To implement a new eviction algorithm, the two functions mentioned above must be implemented.
Backing Store
The backing store is responsible for paging data pages in and out between their corresponding page
frames and storage. These are the functions which must be implemented:
• k_mem_paging_backing_store_init() is called to initialize the backing store at POST_KERNEL.
API Reference
group mem-demand-paging
Functions
Parameters
• addr – Base page-aligned virtual address
• size – Page-aligned data region size
void k_mem_pin(void *addr, size_t size)
Pin an aligned virtual data region, paging in as necessary
After the function completes, all the page frames associated with this region will be resi-
dent in memory and pinned such that they stay that way. This is a stronger version of
z_mem_page_in().
If CONFIG_DEMAND_PAGING_ALLOW_IRQ is enabled, this function may not be called by
ISRs as the backing store may be in-use.
Parameters
• addr – Base page-aligned virtual address
• size – Page-aligned data region size
void k_mem_unpin(void *addr, size_t size)
Un-pin an aligned virtual data region
After the function completes, all the page frames associated with this region will be no longer
marked as pinned. This does not evict the region, follow this with z_mem_page_out() if you
need that.
Parameters
• addr – Base page-aligned virtual address
• size – Page-aligned data region size
void k_mem_paging_stats_get(struct k_mem_paging_stats_t *stats)
Get the paging statistics since system startup
This populates the paging statistics struct being passed in as argument.
Parameters
• stats – [inout] Paging statistics struct to be filled.
void k_mem_paging_thread_stats_get(struct k_thread *thread, struct k_mem_paging_stats_t
*stats)
Get the paging statistics since system startup for a thread
This populates the paging statistics struct being passed in as argument for a particular thread.
Parameters
• thread – [in] Thread
• stats – [inout] Paging statistics struct to be filled.
void k_mem_paging_histogram_eviction_get(struct k_mem_paging_histogram_t *hist)
Get the eviction timing histogram
This populates the timing histogram struct being passed in as argument.
Parameters
• hist – [inout] Timing histogram struct to be filled.
void k_mem_paging_histogram_backing_store_page_in_get(struct
k_mem_paging_histogram_t
*hist)
Get the backing store page-in timing histogram
This populates the timing histogram struct being passed in as argument.
Parameters
• hist – [inout] Timing histogram struct to be filled.
void k_mem_paging_histogram_backing_store_page_out_get(struct
k_mem_paging_histogram_t
*hist)
Get the backing store page-out timing histogram
This populates the timing histogram struct being passed in as argument.
Parameters
• hist – [inout] Timing histogram struct to be filled.
group mem-demand-paging-eviction
Eviction algorithm APIs
Functions
group mem-demand-paging-backing-store
Backing store APIs
Functions
int k_mem_paging_backing_store_location_get(struct z_page_frame *pf, uintptr_t *location, bool page_fault)
Reserve or fetch a storage location for a data page loaded into a page frame
This function may be called multiple times on the same data page. If its page frame has its Z_PAGE_FRAME_BACKED bit set, it is expected to return the previous backing store location for the data page containing a cached clean copy. This clean copy may be updated on page-out, or used to discard clean pages without needing to write out their contents.
If the backing store is full, some other backing store location which caches a loaded
data page may be selected, in which case its associated page frame will have the
Z_PAGE_FRAME_BACKED bit cleared (as it is no longer cached).
pf->addr will indicate the virtual address the page is currently mapped to. Large, sparse
backing stores which can contain the entire address space may simply generate location tokens
purely as a function of pf->addr with no other management necessary.
This function distinguishes whether it was called on behalf of a page fault. A free backing
store location must always be reserved in order for page faults to succeed. If the page_fault
parameter is not set, this function should return -ENOMEM even if one location is available.
This function is invoked with interrupts locked.
Parameters
• pf – Virtual address to obtain a storage location
• location – [out] storage location token
• page_fault – Whether this request was for a page fault
Return values
• 0 – Success
• -ENOMEM – Backing store is full
void k_mem_paging_backing_store_location_free(uintptr_t location)
Free a backing store location
Any stored data may be discarded, and the location token associated with this address may
be re-used for some other data page.
This function is invoked with interrupts locked.
Parameters
• location – Location token to free
void k_mem_paging_backing_store_page_out(uintptr_t location)
Copy a data page from Z_SCRATCH_PAGE to the specified location
Immediately before this is called, Z_SCRATCH_PAGE will be mapped read-write to the intended source page frame for the calling context.
Calls to this and k_mem_paging_backing_store_page_in() will always be serialized, but interrupts may be enabled.
Parameters
• location – Location token for the data page, for later retrieval
void k_mem_paging_backing_store_page_in(uintptr_t location)
Copy a data page from the provided location to Z_SCRATCH_PAGE.
Immediately before this is called, Z_SCRATCH_PAGE will be mapped read-write to the intended destination page frame for the calling context.
Calls to this and k_mem_paging_backing_store_page_out() will always be serialized, but interrupts may be enabled.
Parameters
• location – Location token for the data page
Zephyr provides a library of common general purpose data structures used within the kernel, but useful
by application code in general. These include list and balanced tree structures for storing ordered data,
and a ring buffer for managing “byte stream” data in a clean way.
Note that in general, the collections are implemented as “intrusive” data structures. The “node” data is
the only struct used by the library code, and it does not store a pointer or other metadata to indicate
what user data is “owned” by that node. Instead, the expectation is that the node will be itself embedded
within a user-defined struct. Macros are provided to retrieve a user struct address from the embedded
node pointer in a clean way. The purpose behind this design is to allow the collections to be used in
contexts where dynamic allocation is disallowed (i.e. there is no need to allocate node objects because
the memory is provided by the user).
Note also that these libraries are uniformly unsynchronized; access to them is not threadsafe by default.
These are data structures, not synchronization primitives. The expectation is that any locking needed
will be provided by the user.
Zephyr provides a sys_slist_t type for storing simple singly-linked list data (i.e. data where each list
element stores a pointer to the next element, but not the previous one). This supports constant-time
access to the first (head) and last (tail) elements of the list, insertion before the head and after the tail of the list, and constant-time removal of the head. Removal of other nodes requires access to the
"previous" pointer and thus can only be performed in linear time by searching the list.
The sys_slist_t struct may be instantiated by the user in any accessible memory. It should be initialized
with either sys_slist_init() or by static assignment from SYS_SLIST_STATIC_INIT before use. Its
interior fields are opaque and should not be accessed by user code.
The end nodes of a list may be retrieved with sys_slist_peek_head() and sys_slist_peek_tail() ,
which will return NULL if the list is empty, otherwise a pointer to a sys_snode_t struct.
The sys_snode_t struct represents the data to be inserted. In general, it is expected to be allo-
cated/controlled by the user, usually embedded within a struct which is to be added to the list. The
container struct pointer may be retrieved from a list node using SYS_SLIST_CONTAINER , passing it the
struct name of the containing struct and the field name of the node. Internally, the sys_snode_t struct
contains only a next pointer, which may be accessed with sys_slist_peek_next() .
Lists may be modified by adding a single node at the head or tail with sys_slist_prepend() and
sys_slist_append() . They may also have a node added to an interior point with sys_slist_insert() ,
which inserts a new node after an existing one. Similarly sys_slist_remove() will remove a node given
a pointer to its predecessor. These operations are all constant time.
Convenience routines exist for more complicated modifications to a list. sys_slist_merge_slist() will
append an entire list to an existing one. sys_slist_append_list() will append a bounded subset of
an existing list in constant time. And sys_slist_find_and_remove() will search a list (in linear time)
for a given node and remove it if present.
Finally the slist implementation provides a set of "for each" macros that allows for iterating over a list in a natural way without needing to manually traverse the next pointers. SYS_SLIST_FOR_EACH_NODE will enumerate every node in a list given a local variable to store the node pointer. SYS_SLIST_FOR_EACH_NODE_SAFE behaves similarly, but has a more complicated implementation that requires an extra scratch variable for storage and allows the user to delete the iterated node during the iteration. Each of those macros also exists in a "container" variant (SYS_SLIST_FOR_EACH_CONTAINER and SYS_SLIST_FOR_EACH_CONTAINER_SAFE) which assigns a local variable of a type that matches the user's container struct and not the node struct, performing the required offsets internally. And SYS_SLIST_ITERATE_FROM_NODE exists to allow for enumerating a node and all its successors only, without inspecting the earlier part of the list.
The slist code is designed to be minimal and conventional. Internally, a sys_slist_t struct is nothing
more than a pair of “head” and “tail” pointer fields. And a sys_snode_t stores only a single “next”
pointer.
The specific implementation of the list code, however, is done with an internal “Z_GENLIST” template
API which allows for extracting those fields from arbitrary structures and emits an arbitrarily named set
of functions. This allows for implementing more complicated single-linked list variants using the same
basic primitives. The genlist implementor is responsible for a custom implementation of the primitive
operations only: an “init” step for each struct, and a “get” and “set” primitives for each of head, tail and
next pointers on their relevant structs. These inline functions are passed as parameters to the genlist
macro expansion.
Only one such variant, sflist, exists in Zephyr at the moment.
Flagged List
The sys_sflist_t is implemented using the described genlist template API. With the exception of sym-
bol naming (“sflist” instead of “slist”) and the additional API described next, it operates in all ways
identically to the slist API.
It adds the ability to associate exactly two bits of user defined "flags" with each list node. These can be accessed and modified with sys_sfnode_flags_get() and sys_sfnode_flags_set(). Internally, the flags are stored unioned with the bottom bits of the next pointer and incur no SRAM storage overhead when compared with the simpler slist code.
group single-linked-list_apis
Defines
SYS_SLIST_FOR_EACH_NODE(__sl, __sn)
Provide the primitive to iterate on a list Note: the loop is unsafe and thus __sn should not be removed.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SLIST_FOR_EACH_NODE(l, n) {
<user code>
}
This and other SYS_SLIST_*() macros are not thread safe.
Parameters
• __sl – A pointer on a sys_slist_t to iterate on
• __sn – A sys_snode_t pointer to peek each node of the list
SYS_SLIST_ITERATE_FROM_NODE(__sl, __sn)
Provide the primitive to iterate on a list, from a node in the list Note: the loop is unsafe and thus __sn should not be removed.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SLIST_ITERATE_FROM_NODE(l, n) {
<user code>
}
Like SYS_SLIST_FOR_EACH_NODE(), but __sn already contains a node in the list where to start searching for the next entry from. If NULL, it starts from the head.
This and other SYS_SLIST_*() macros are not thread safe.
Parameters
• __sl – A pointer on a sys_slist_t to iterate on
• __sn – A sys_snode_t pointer to peek each node of the list; it contains the starting node, or NULL to start from the head
SYS_SLIST_FOR_EACH_NODE_SAFE(__sl, __sn, __sns)
Provide the primitive to safely iterate on a list Note: __sn can be removed, it will not break
the loop.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SLIST_FOR_EACH_NODE_SAFE(l, n, s) {
<user code>
}
SYS_SLIST_PEEK_NEXT_CONTAINER(__cn, __n)
Peek the next container from the current container node.
SYS_SLIST_FOR_EACH_CONTAINER(__sl, __cn, __n)
Provide the primitive to iterate on a list under a container Note: the loop is unsafe and thus __cn should not be detached.
Parameters
• __sl – A pointer on a sys_slist_t to iterate on
• __cn – A pointer to peek each entry of the list
• __n – The field name of sys_snode_t within the container struct
SYS_SLIST_FOR_EACH_CONTAINER_SAFE(__sl, __cn, __cns, __n)
Provide the primitive to safely iterate on a list under a container Note: __cn can be detached, it will not break the loop.
Parameters
• __sl – A pointer on a sys_slist_t to iterate on
• __cn – A pointer to peek each entry of the list
• __cns – A pointer for the loop to run safely
• __n – The field name of sys_snode_t within the container struct
SYS_SLIST_STATIC_INIT(ptr_to_list)
Functions
static inline sys_snode_t *sys_slist_peek_next(sys_snode_t *node)
Peek the next node from current node.
Parameters
• node – A pointer on the node where to peek the next node
Returns
a pointer on the next node (or NULL if none)
static inline void sys_slist_prepend(sys_slist_t *list, sys_snode_t *node)
Prepend a node to the given list.
This and other sys_slist_*() functions are not thread safe.
Parameters
• list – A pointer on the list to affect
• node – A pointer on the node to prepend
static inline void sys_slist_append(sys_slist_t *list, sys_snode_t *node)
Append a node to the given list.
This and other sys_slist_*() functions are not thread safe.
Parameters
• list – A pointer on the list to affect
• node – A pointer on the node to append
static inline void sys_slist_append_list(sys_slist_t *list, void *head, void *tail)
Append a list to the given list.
Append a singly-linked, NULL-terminated list consisting of nodes containing the pointer to the
next node as the first element of a node, to list. This and other sys_slist_*() functions are not
thread safe.
FIXME: Why are the element parameters void *?
Parameters
• list – A pointer on the list to affect
• head – A pointer to the first element of the list to append
• tail – A pointer to the last element of the list to append
static inline void sys_slist_merge_slist(sys_slist_t *list, sys_slist_t *list_to_append)
merge two slists, appending the second one to the first
When the operation is completed, the appending list is empty. This and other sys_slist_*()
functions are not thread safe.
Parameters
• list – A pointer on the list to affect
• list_to_append – A pointer to the list to append.
static inline void sys_slist_insert(sys_slist_t *list, sys_snode_t *prev, sys_snode_t *node)
Insert a node to the given list.
This and other sys_slist_*() functions are not thread safe.
Parameters
• list – A pointer on the list to affect
• prev – A pointer on the previous node
• node – A pointer on the node to insert
static inline sys_snode_t *sys_slist_get_not_empty(sys_slist_t *list)
Fetch and remove the first node of the given list.
List must be known to be non-empty. This and other sys_slist_*() functions are not thread
safe.
Parameters
• list – A pointer on the list to affect
Returns
A pointer to the first node of the list
static inline sys_snode_t *sys_slist_get(sys_slist_t *list)
Fetch and remove the first node of the given list.
This and other sys_slist_*() functions are not thread safe.
Parameters
• list – A pointer on the list to affect
Returns
A pointer to the first node of the list (or NULL if empty)
static inline void sys_slist_remove(sys_slist_t *list, sys_snode_t *prev_node, sys_snode_t
*node)
Remove a node.
This and other sys_slist_*() functions are not thread safe.
Parameters
• list – A pointer on the list to affect
• prev_node – A pointer on the previous node (can be NULL, which means the
node is the list’s head)
• node – A pointer on the node to remove
static inline bool sys_slist_find_and_remove(sys_slist_t *list, sys_snode_t *node)
Find and remove a node from a list.
This and other sys_slist_*() functions are not thread safe.
Parameters
• list – A pointer on the list to affect
• node – A pointer on the node to remove from the list
Returns
true if node was removed
group flagged-single-linked-list_apis
Defines
SYS_SFLIST_FOR_EACH_NODE(__sl, __sn)
Provide the primitive to iterate on a list Note: the loop is unsafe and thus __sn should not be
removed.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SFLIST_FOR_EACH_NODE(l, n) {
<user code>
}
Parameters
• __sl – A pointer on a sys_sflist_t to iterate on
• __sn – A sys_sfnode_t pointer to peek each node of the list
SYS_SFLIST_ITERATE_FROM_NODE(__sl, __sn)
Provide the primitive to iterate on a list, from a node in the list Note: the loop is unsafe and
thus __sn should not be removed.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SFLIST_ITERATE_FROM_NODE(l, n) {
<user code>
}
Like SYS_SFLIST_FOR_EACH_NODE(), but __sn already contains a node in the list where to start searching for the next entry from. If NULL, it starts from the head.
This and other SYS_SFLIST_*() macros are not thread safe.
Parameters
• __sl – A pointer on a sys_sflist_t to iterate on
• __sn – A sys_sfnode_t pointer to peek each node of the list; it contains the starting node, or NULL to start from the head
SYS_SFLIST_FOR_EACH_NODE_SAFE(__sl, __sn, __sns)
Provide the primitive to safely iterate on a list Note: __sn can be removed, it will not break
the loop.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SFLIST_FOR_EACH_NODE_SAFE(l, n, s) {
<user code>
}
SYS_SFLIST_PEEK_NEXT_CONTAINER(__cn, __n)
Peek the next container from the current container node.
SYS_SFLIST_FOR_EACH_CONTAINER(__sl, __cn, __n)
Provide the primitive to iterate on a list under a container Note: the loop is unsafe and thus __cn should not be detached.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SFLIST_FOR_EACH_CONTAINER(l, c, n) {
<user code>
}
Parameters
• __sl – A pointer on a sys_sflist_t to iterate on
• __cn – A pointer to peek each entry of the list
• __n – The field name of sys_sfnode_t within the container struct
SYS_SFLIST_FOR_EACH_CONTAINER_SAFE(__sl, __cn, __cns, __n)
Provide the primitive to safely iterate on a list under a container Note: __cn can be detached, it will not break the loop.
User MUST add the loop statement curly braces enclosing its own code:
SYS_SFLIST_FOR_EACH_CONTAINER_SAFE(l, c, cn, n) {
<user code>
}
Parameters
• __sl – A pointer on a sys_sflist_t to iterate on
• __cn – A pointer to peek each entry of the list
• __cns – A pointer for the loop to run safely
• __n – The field name of sys_sfnode_t within the container struct
SYS_SFLIST_STATIC_INIT(ptr_to_list)
SYS_SFLIST_FLAGS_MASK
Functions
static inline sys_sfnode_t *sys_sflist_peek_tail(sys_sflist_t *list)
Peek the last node from the list.
Parameters
• list – A pointer on the list to peek the last node from
Returns
A pointer on the last node of the list (or NULL if none)
static inline void sys_sfnode_init(sys_sfnode_t *node, uint8_t flags)
Initialize an sflist node.
Set an initial flags value for this sflist node, which can be a value between 0 and 3. These flags will persist even if the node is moved around within a list, removed, or transplanted to a different sflist.
This is ever so slightly faster than sys_sfnode_flags_set() and should only be used on a node
that hasn’t been added to any list.
Parameters
• node – A pointer to the node to set the flags on
• flags – A value between 0 and 3 to set the flags value
static inline void sys_sfnode_flags_set(sys_sfnode_t *node, uint8_t flags)
Set flags value for an sflist node.
Set a flags value for this sflist node, which can be a value between 0 and 3. These flags will persist even if the node is moved around within a list, removed, or transplanted to a different sflist.
Parameters
• node – A pointer to the node to set the flags on
• flags – A value between 0 and 3 to set the flags value
static inline bool sys_sflist_is_empty(sys_sflist_t *list)
Test if the given list is empty.
Parameters
• list – A pointer on the list to test
Returns
a boolean, true if it’s empty, false otherwise
static inline sys_sfnode_t *sys_sflist_peek_next_no_check(sys_sfnode_t *node)
Peek the next node from current node, node is not NULL.
Faster than sys_sflist_peek_next() if node is known not to be NULL.
Parameters
• node – A pointer on the node where to peek the next node
Returns
a pointer on the next node (or NULL if none)
static inline sys_sfnode_t *sys_sflist_peek_next(sys_sfnode_t *node)
Peek the next node from current node.
Parameters
• node – A pointer on the node where to peek the next node
Returns
a pointer on the next node (or NULL if none)
static inline void sys_sflist_prepend(sys_sflist_t *list, sys_sfnode_t *node)
Prepend a node to the given list.
This and other sys_sflist_*() functions are not thread safe.
Parameters
Similar to the single-linked list in many respects, Zephyr includes a double-linked implementation. This
provides the same algorithmic behavior for all the existing slist operations, but also allows for constant-
time removal and insertion (at all points: before or after the head, tail or any internal node). To do this,
the list stores two pointers per node, and thus has somewhat higher runtime code and memory space
needs.
A sys_dlist_t struct may be instantiated by the user in any accessible memory. It must be initialized
with sys_dlist_init() or SYS_DLIST_STATIC_INIT before use. The sys_dnode_t struct is expected
to be provided by the user for any nodes added to the list (typically embedded within the struct to be
tracked, as described above). It must be initialized in zeroed/bss memory or with sys_dnode_init()
before use.
Primitive operations may retrieve the head/tail of a list and the next/prev pointers of
a node with sys_dlist_peek_head() , sys_dlist_peek_tail() , sys_dlist_peek_next() and
sys_dlist_peek_prev() . These can all return NULL where appropriate (i.e. for empty lists, or nodes
at the endpoints of the list).
A dlist can be modified in constant time by removing a node with sys_dlist_remove() , by adding a
node to the head or tail of a list with sys_dlist_prepend() and sys_dlist_append() , or by inserting
a node before an existing node with sys_dlist_insert() .
As for slist, each node in a dlist can be processed in a natural code block style using
SYS_DLIST_FOR_EACH_NODE . This macro also exists in a "FROM_NODE" form which allows for iterating from a known starting point, a "SAFE" variant that allows for removing the node being inspected
within the code block, a “CONTAINER” style that provides the pointer to a containing struct instead of
the raw node, and a “CONTAINER_SAFE” variant that provides both properties.
Convenience utilities provided by dlist include sys_dlist_insert_at() , which linearly searches through a list to find the right insertion point for a node, using a comparison provided by the user as a C callback function pointer, and sys_dnode_is_linked() , which will affirmatively return whether or not a node is currently linked into a dlist (via an implementation that has zero overhead vs. the normal list processing).
Internally, the dlist implementation is minimal: the sys_dlist_t struct contains “head” and “tail”
pointer fields, the sys_dnode_t contains “prev” and “next” pointers, and no other data is stored. But in
practice the two structs are internally identical, and the list struct is inserted as a node into the list itself.
This allows for a very clean symmetry of operations:
• An empty list has backpointers to itself in the list struct, which can be trivially detected.
• The head and tail of the list can be detected by comparing the prev/next pointers of a node vs. the
list struct address.
• An insertion or deletion never needs to check for the special case of inserting at the head or tail.
There are never any NULL pointers within the list to be avoided. Exactly the same operations are
run, without tests or branches, for all list modification primitives.
Effectively, a dlist of N nodes can be thought of as a “ring” of “N+1” nodes, where one node represents
the list tracking struct.
Fig. 6: A dlist containing three elements. Note that the list struct appears as a fourth “element” in the
list.
group doubly-linked-list_apis
Defines
SYS_DLIST_FOR_EACH_NODE(__dl, __dn)
Provide the primitive to iterate on a list Note: the loop is unsafe and thus __dn should not be removed.
User MUST add the loop statement curly braces enclosing its own code:
SYS_DLIST_FOR_EACH_NODE(l, n) {
<user code>
}
This and other SYS_DLIST_*() macros are not thread safe.
Parameters
• __dl – A pointer on a sys_dlist_t to iterate on
• __dn – A sys_dnode_t pointer to peek each node of the list
SYS_DLIST_ITERATE_FROM_NODE(__dl, __dn)
Provide the primitive to iterate on a list, from a node in the list Note: the loop is unsafe and thus __dn should not be removed.
User MUST add the loop statement curly braces enclosing its own code:
SYS_DLIST_ITERATE_FROM_NODE(l, n) {
<user code>
}
Like SYS_DLIST_FOR_EACH_NODE(), but __dn already contains a node in the list where to start searching for the next entry from. If NULL, it starts from the head.
This and other SYS_DLIST_*() macros are not thread safe.
Parameters
• __dl – A pointer on a sys_dlist_t to iterate on
• __dn – A sys_dnode_t pointer to peek each node of the list; it contains the starting node, or NULL to start from the head
SYS_DLIST_FOR_EACH_NODE_SAFE(__dl, __dn, __dns)
Provide the primitive to safely iterate on a list Note: __dn can be removed, it will not break
the loop.
User MUST add the loop statement curly braces enclosing its own code:
SYS_DLIST_FOR_EACH_NODE_SAFE(l, n, s) {
<user code>
}
SYS_DLIST_FOR_EACH_CONTAINER(__dl, __cn, __n)
Provide the primitive to iterate on a list under a container Note: the loop is unsafe and thus __cn should not be detached.
User MUST add the loop statement curly braces enclosing its own code:
SYS_DLIST_FOR_EACH_CONTAINER(l, c, n) {
<user code>
}
Parameters
• __dl – A pointer on a sys_dlist_t to iterate on
• __cn – A pointer to peek each entry of the list
• __n – The field name of sys_dnode_t within the container struct
SYS_DLIST_FOR_EACH_CONTAINER_SAFE(__dl, __cn, __cns, __n)
Provide the primitive to safely iterate on a list under a container Note: __cn can be detached, it will not break the loop.
User MUST add the loop statement curly braces enclosing its own code:
SYS_DLIST_FOR_EACH_CONTAINER_SAFE(l, c, cn, n) {
<user code>
}
Parameters
• __dl – A pointer on a sys_dlist_t to iterate on
• __cn – A pointer to peek each entry of the list
• __cns – A pointer for the loop to run safely
• __n – The field name of sys_dnode_t within the container struct
SYS_DLIST_STATIC_INIT(ptr_to_list)
Typedefs
Functions
static inline sys_dnode_t *sys_dlist_peek_head(sys_dlist_t *list)
get a reference to the head item in the list
Parameters
• list – the doubly-linked list to operate on
Returns
a pointer to the head element, NULL if list is empty
static inline sys_dnode_t *sys_dlist_peek_next_no_check(sys_dlist_t *list, sys_dnode_t *node)
get a reference to the next item in the list, node is not NULL
Faster than sys_dlist_peek_next() if node is known not to be NULL.
Parameters
• list – the doubly-linked list to operate on
• node – the node from which to get the next element in the list
Returns
a pointer to the next element from a node, NULL if node is the tail
static inline sys_dnode_t *sys_dlist_peek_next(sys_dlist_t *list, sys_dnode_t *node)
get a reference to the next item in the list
Parameters
• list – the doubly-linked list to operate on
• node – the node from which to get the next element in the list
Returns
a pointer to the next element from a node, NULL if node is the tail or NULL (when
node comes from reading the head of an empty list).
static inline sys_dnode_t *sys_dlist_peek_prev_no_check(sys_dlist_t *list, sys_dnode_t *node)
get a reference to the previous item in the list, node is not NULL
Faster than sys_dlist_peek_prev() if node is known not to be NULL.
Parameters
• list – the doubly-linked list to operate on
• node – the node from which to get the previous element in the list
Returns
a pointer to the previous element from a node, NULL if node is the head
static inline sys_dnode_t *sys_dlist_peek_prev(sys_dlist_t *list, sys_dnode_t *node)
get a reference to the previous item in the list
Parameters
• list – the doubly-linked list to operate on
• node – the node from which to get the previous element in the list
Returns
a pointer to the previous element from a node, NULL if node is the head or NULL
(when node comes from reading the head of an empty list).
static inline sys_dnode_t *sys_dlist_peek_tail(sys_dlist_t *list)
get a reference to the tail item in the list
Parameters
• list – the doubly-linked list to operate on
Returns
a pointer to the tail element, NULL if list is empty
A Multi Producer Single Consumer Packet Buffer (MPSC_PBUF) is a circular buffer whose contents are stored in first-in-first-out order. Variable-size packets are stored in the buffer. The packet buffer works under the assumption that there is a single context that consumes the data. However, it is possible that another context may interfere to flush the data and never come back (the panic case). A packet is produced in two steps: first the requested amount of data is allocated, then the producer fills the data and commits it. Consuming a packet is also performed in two steps: the consumer claims the packet, getting a pointer to it and its length, and later on the packet is freed. This approach reduces memory copying.
A MPSC Packet Buffer has the following key properties:
• Allocate, commit scheme used for packet producing.
• Claim, free scheme used for packet consuming.
• Allocator ensures that contiguous memory of requested length is allocated.
• The following policies can be applied when requested space cannot be allocated:
– Overwrite - oldest entries are dropped until requested amount of memory can be allocated.
For each dropped packet user callback is called.
– No overwrite - When requested amount of space cannot be allocated, allocation fails.
• Dedicated, optimized API for storing short packets.
• Allocation with timeout.
Internals
Each packet in the buffer contains an MPSC_PBUF-specific header which is used for internal management. The header consists of 2 bits of flags. In order to optimize memory usage, the header can be added on top of the user header using MPSC_PBUF_HDR, and the remaining bits in the first word can be application specific. The header consists of the following flags:
• valid - bit set to one when the packet contains a valid user packet
• busy - bit set when the packet is being consumed (claimed but not freed)
Header state:
Packet buffer space contains free space, valid user packets, and internal skip packets. Internal skip packets indicate padding, e.g. at the end of the buffer.
Allocation Available space is determined using pairs of read and write indexes. If space can be allocated, a temporary write index is moved and a pointer to a space within the buffer is returned. The packet header is reset. If the allocation required wrapping of the write index, a skip packet is added to the end of the buffer. If space cannot be allocated and overwrite is disabled, then a NULL pointer is returned, or the context blocks if the allocation was with a timeout.
Allocation with overwrite If overwrite is enabled, the oldest packets are dropped until the requested amount of space can be allocated. When packets are dropped, the busy flag is checked in the header to ensure that a currently consumed packet is not overwritten. In that case, a skip packet is added before the busy packet and the packets following the busy packet are dropped instead. When the busy packet is finally freed, this situation is detected and the packet is converted to a skip packet to avoid double processing.
Usage
# include <zephyr/sys/mpsc_packet.h>
struct foo_header {
MPSC_PBUF_HDR;
uint32_t length: 32 - MPSC_PBUF_HDR_BITS;
};
Packet buffer configuration The configuration structure contains buffer details, configuration flags and callbacks. The following callbacks are used by the packet buffer:
• Drop notification - callback called whenever a packet is dropped due to overwrite.
• Get packet length - callback to determine packet length
Packet write A standard write consists of allocating, filling and committing the packet:
packet = mpsc_pbuf_alloc(buffer, len, K_NO_WAIT);
fill_data(packet);
mpsc_pbuf_commit(buffer, packet);
A single-word packet can be stored using the dedicated, optimized calls:
mpsc_pbuf_put_word(buffer, data);
mpsc_pbuf_put_word_ext(buffer, data, ptr);
Packet read A read consists of claiming, processing and freeing the packet:
packet = mpsc_pbuf_claim(buffer);
process(packet);
mpsc_pbuf_free(buffer, packet);
A Single Producer Single Consumer Packet Buffer (SPSC_PBUF) is a circular buffer, whose contents are
stored in first-in-first-out order. Variable size packets are stored in the buffer. Packet buffer works under
assumption that there is a single context that produces packets and a single context that consumes the
data.
Implementation is focused on performance and memory footprint.
Packets are added to the buffer using spsc_pbuf_write(), which copies data into the buffer. If the buffer is full, an error is returned.
Packets are copied out of the buffer using spsc_pbuf_read().
For circumstances where sorted containers may become large at runtime, a list becomes problematic due to the algorithmic cost of searching it. For these situations, Zephyr provides a balanced tree implementation whose search and removal operations are bounded by O(log2(N)) for a tree of size N. This is implemented using a conventional red/black tree as described by multiple academic sources.
The rbtree tracking struct may be initialized anywhere in user accessible memory. It should contain only zero bits before first use. No specific initialization API is needed.
Unlike a list, where position is explicit, the ordering of nodes within an rbtree must be provided as a predicate function by the user. A function of type rb_lessthan_t() should be assigned to the lessthan_fn field of the rbtree struct before any tree operations are attempted. This function should, as its name suggests, return a boolean True value if the first node argument is "less than" the second in the ordering desired by the tree. Note that "equal" is not allowed; nodes within a tree must have a single fixed order for the algorithm to work correctly.
As with the slist and dlist containers, nodes within an rbtree are represented as a rbnode structure which exists in user-managed memory, typically embedded within the data structure being tracked in the tree. Unlike the list code, the data within an rbnode is entirely opaque. It is not possible for the user to extract the binary tree topology and "manually" traverse the tree as it is for a list.
Nodes can be inserted into a tree with rb_insert() and removed with rb_remove() . Access to the
“first” and “last” nodes within a tree (in the sense of the order defined by the comparison function)
is provided by rb_get_min() and rb_get_max() . There is also a predicate, rb_contains() , which
returns a boolean True if the provided node pointer exists as an element within the tree. As described
above, all of these routines are guaranteed to have at most log time complexity in the size of the tree.
There are two mechanisms provided for enumerating all elements in an rbtree. The first, rb_walk() , is a
simple callback implementation where the caller specifies a C function pointer and an untyped argument
to be passed to it, and the tree code calls that function for each node in order. This has the advantage of
a very simple implementation, at the cost of a somewhat more cumbersome API for the user (not unlike
ISO C’s bsearch() routine). It is a recursive implementation, however, and is thus not always available
in environments that forbid the use of unbounded stack techniques like recursion.
There is also a RB_FOR_EACH iterator provided, which, like the similar APIs for the lists, works to iterate over the tree in a more natural way, using a nested code block instead of a callback. It is also nonrecursive, though it requires log-sized space on the stack by default (however, it can be configured to use a fixed, maximally-sized buffer instead where needed to avoid the dynamic allocation). As with the lists, this is also available in a RB_FOR_EACH_CONTAINER variant which enumerates using a pointer to a container field and not the raw node pointer.
Tree Internals
As described, the Zephyr rbtree implementation is a conventional red/black tree as described pervasively
in academic sources. Low level details about the algorithm are out of scope for this document, as they
match existing conventions. This discussion will be limited to details notable or specific to the Zephyr
implementation.
The core invariant guaranteed by the tree is that the path from the root of the tree to any leaf is no more
than twice as long as the path to any other leaf. This is achieved by associating one bit of “color” with
each node, either red or black, and enforcing a rule that no red node may be the child of another red
node (i.e. the number of black nodes on every path from the root to a leaf must be the same, and no
more than that number of “extra” red nodes may be present). This rule is enforced by a set of rotation
rules used to “fix” trees following modification.
Fig. 9: A maximally unbalanced rbtree with a black height of two. No more nodes can be added under-
neath the rightmost node without rebalancing.
These rotations are conceptually implemented on top of a primitive that “swaps” the position of one node
with another in the tree. Typical implementations effect this by simply swapping the nodes’ internal “data”
pointers, but because the Zephyr rbnode is intrusive, that cannot work. Zephyr must include somewhat
more elaborate code to handle the edge cases (for example, one swapped node may be the root, or the
two may already be parent and child).
The rbnode struct for a Zephyr rbtree contains only two pointers, representing the “left”, and “right”
children of a node within the binary tree. Traversal of a tree for rebalancing following modification,
however, routinely requires the ability to iterate “upwards” from a node as well. It is very common for
red/black trees in the industry to store a third “parent” pointer for this purpose. Zephyr avoids this
requirement by building a “stack” of node pointers locally as it traverses downward through the tree and
updating it appropriately as modifications are made. So a Zephyr rbtree can be implemented with no
more runtime storage overhead than a dlist.
These properties, of a balanced tree data structure that works with only two pointers of data per node
and that works without any need for a memory allocation API, are quite rare in the industry and are
somewhat unique to Zephyr.
group rbtree_apis
Defines
RB_FOR_EACH(tree, node)
Walk a tree in-order without recursing.
While rb_walk() is very simple, recursing on the C stack can be clumsy for some purposes and
on some architectures wastes significant memory in stack frames. This macro implements a
non-recursive “foreach” loop that can iterate directly on the tree, at a moderate cost in code
size.
Note that the resulting loop is not safe against modifications to the tree. Changes to the
tree structure during the loop will produce incorrect results, as nodes may be skipped or
duplicated. Unlike linked lists, no _SAFE variant exists.
Note also that the macro expands its arguments multiple times, so they should not be expres-
sions with side effects.
Parameters
• tree – A pointer to a struct rbtree to walk
• node – The symbol name of a local struct rbnode* variable to use as the iterator
RB_FOR_EACH_CONTAINER(tree, node, field)
Loop over rbtree with implicit container field logic.
As for RB_FOR_EACH(), but “node” can have an arbitrary type containing a struct rbnode.
Parameters
• tree – A pointer to a struct rbtree to walk
• node – The symbol name of a local iterator
• field – The field name of a struct rbnode inside node
Typedefs
Functions
struct rbtree
#include <rb.h>
A ring buffer is a circular buffer whose contents are stored in first-in, first-out order.
For circumstances where an application needs to implement asynchronous “streaming” copying of data,
Zephyr provides a struct ring_buf abstraction to manage copies of such data in and out of a shared
buffer of memory.
Two content data modes are supported:
• Byte mode: raw bytes can be enqueued and dequeued.
• Data item mode: Multiple 32-bit word data items with metadata can be enqueued and dequeued
from the ring buffer in chunks of up to 1020 bytes. Each data item also has two associated metadata
values: a type identifier and a 16-bit integer value, both of which are application-specific.
While the underlying data structure is the same, it is not legal to mix these two modes on a single ring
buffer instance: a ring buffer initialized with a byte count must be used only with the “bytes” API, while
one initialized with a word count must use the “items” calls.
• Concepts
– Byte mode
– Data item mode
– Concurrency
– Internal Operation
• Implementation
– Defining a Ring Buffer
– Enqueuing Data
– Retrieving Data
• Configuration Options
• API Reference
Concepts
Any number of ring buffers can be defined (limited only by available RAM). Each ring buffer is referenced
by its memory address.
A ring buffer has the following key properties:
• A data buffer of bytes or 32-bit words. The data buffer contains the raw bytes or 32-bit words that
have been added to the ring buffer but not yet removed.
• A data buffer size, measured in bytes or 32-bit words. This governs the maximum amount of
data (including possible metadata values) the ring buffer can hold.
A ring buffer must be initialized before it can be used. This sets its data buffer to empty.
A struct ring_buf may be placed anywhere in user-accessible memory, and must be initialized with
ring_buf_init() or ring_buf_item_init() before use. It must be provided a region of user-
controlled memory for use as the buffer itself. Note carefully that the units of the buffer size
passed (either bytes or words) depend on how the ring buffer will be used later. Macros
that combine these steps in a single static declaration exist for convenience. RING_BUF_DECLARE will
declare and statically initialize a ring buffer with a specified byte count, while RING_BUF_ITEM_DECLARE
will declare and statically initialize a buffer with a given count of 32-bit words. RING_BUF_ITEM_SIZEOF
will compute the size in 32-bit words corresponding to a type or an expression, rounding up if the
size is not a multiple of 32 bits.
“Bytes” data may be copied into the ring buffer using ring_buf_put() , passing a data pointer and
byte count. These bytes will be copied into the buffer in order, as many as will fit in the allocated
buffer. The total number of bytes copied (which may be fewer than provided) will be returned. Likewise
ring_buf_get() will copy bytes out of the ring buffer in the order that they were written, into a user-
provided buffer, returning the number of bytes that were transferred.
To avoid multiply-copied-data situations, a “claim” API exists for byte mode. ring_buf_put_claim()
takes a byte size value from the user and returns a pointer to memory internal to the ring buffer that
can be used to receive those bytes, along with a size of the contiguous internal region (which may be
smaller than requested). The user can then copy data into that region at a later time without assembling
all the bytes in a single region first. When complete, ring_buf_put_finish() can be used to signal the
buffer that the transfer is complete, passing the number of bytes actually transferred. At this point a new
transfer can be initiated. Similarly, ring_buf_get_claim() returns a pointer to internal ring buffer data
from which the user can read without making a verbatim copy, and ring_buf_get_finish() signals the
buffer with how many bytes have been consumed and allows for a new transfer to begin.
“Items” mode works similarly to bytes mode, except that all transfers are in units of 32-bit words
and all memory is assumed to be aligned on 32-bit boundaries. The write and read operations
are ring_buf_item_put() and ring_buf_item_get() , and otherwise work identically to the bytes
mode APIs. There is no “claim” API for items mode. One important difference is that unlike
ring_buf_put() , ring_buf_item_put() will not do a partial transfer; it returns an error if the
provided data does not fit in its entirety.
The user can query the free capacity of a ring buffer without modifying it using either
ring_buf_space_get() or ring_buf_item_space_get() , which return the number of free bytes or
free 32-bit words respectively, or by testing the ring_buf_is_empty() predicate.
Finally, a ring_buf_reset() call exists to immediately empty a ring buffer, discarding the tracking of
any bytes or items already written to the buffer. It does not modify the memory contents of the buffer
itself, however.
Byte mode A byte mode ring buffer instance is declared using RING_BUF_DECLARE() and
accessed using: ring_buf_put_claim() , ring_buf_put_finish() , ring_buf_get_claim() ,
ring_buf_get_finish() , ring_buf_put() and ring_buf_get() .
Data can be copied into the ring buffer (see ring_buf_put() ) or ring buffer memory can be used directly
by the user. In the latter case, the operation is split into three stages:
1. allocating the buffer (ring_buf_put_claim() ) when user requests the destination location where
data can be written.
2. writing the data by the user (e.g. buffer written by DMA).
3. indicating the amount of data written to the provided buffer (ring_buf_put_finish() ). The
amount can be less than or equal to the allocated amount.
Data can be retrieved from a ring buffer through copying (see ring_buf_get() ) or accessed directly by
address. In the latter case, the operation is split into three stages:
1. retrieving source location with valid data written to a ring buffer (see ring_buf_get_claim() ).
2. processing data
3. freeing processed data (see ring_buf_get_finish() ). The amount freed can be less than or equal
to the retrieved amount.
Data item mode A data item mode ring buffer instance is declared using RING_BUF_ITEM_DECLARE()
and accessed using ring_buf_item_put() and ring_buf_item_get() .
A ring buffer data item is an array of 32-bit words from 0 to 1020 bytes in length. When a data item is
enqueued (ring_buf_item_put() ) its contents are copied to the data buffer, along with its associated
metadata values (which occupy one additional 32-bit word). If the ring buffer has insufficient space to
hold the new data item the enqueue operation fails.
A data item is dequeued (ring_buf_item_get() ) from a ring buffer by removing the oldest enqueued
item. The contents of the dequeued data item, as well as its two metadata values, are copied to areas
supplied by the retriever. If the ring buffer is empty, or if the data array supplied by the retriever is not
large enough to hold the data item’s data, the dequeue operation fails.
Concurrency The ring buffer APIs do not provide any concurrency control. Depending on usage (par-
ticularly with respect to number of concurrent readers/writers) applications may need to protect the ring
buffer with mutexes and/or use semaphores to notify consumers that there is data to read.
For the trivial case of one producer and one consumer, concurrency control shouldn’t be needed.
Internal Operation Data streamed through a ring buffer is always written to the next byte within the
buffer, wrapping around to the first element after reaching the end, thus the “ring” structure. Internally,
the struct ring_buf contains its own buffer pointer and its size, and also a set of “head” and “tail”
indices representing where the next read and write operations may occur.
This boundary is invisible to the user using the normal put/get APIs, but becomes a barrier to the “claim”
API, because obviously no contiguous region can be returned that crosses the end of the buffer. This
can be surprising to application code, and produce performance artifacts when transfers need to happen
close to the end of the buffer, as the number of calls to claim/finish needs to double for such transfers.
Implementation
Defining a Ring Buffer A ring buffer is defined using a variable of type ring_buf. It must then be
initialized by calling ring_buf_init() or ring_buf_item_init() .
The following code defines and initializes an empty data item mode ring buffer (which is part of a
larger data structure). The ring buffer’s data buffer is capable of holding 64 words of data and metadata
information.
#define MY_RING_BUF_WORDS 64

struct my_struct {
    struct ring_buf rb;
    uint32_t buffer[MY_RING_BUF_WORDS];
};
struct my_struct ms;

void init_my_struct(void)
{
    ring_buf_item_init(&ms.rb, MY_RING_BUF_WORDS, ms.buffer);
    ...
}
Alternatively, a ring buffer can be defined and initialized at compile time using one of two macros at file
scope. Each macro defines both the ring buffer itself and its data buffer.
The following code defines a data item mode ring buffer:
# define MY_RING_BUF_WORDS 93
RING_BUF_ITEM_DECLARE(my_ring_buf, MY_RING_BUF_WORDS);
The following code defines a ring buffer intended to be used for raw bytes:
# define MY_RING_BUF_BYTES 93
RING_BUF_DECLARE(my_ring_buf, MY_RING_BUF_BYTES);
Enqueuing Data Bytes are copied to a byte mode ring buffer by calling ring_buf_put() .
uint8_t my_data[MY_RING_BUF_BYTES];
uint32_t ret;

ret = ring_buf_put(&ring_buf, my_data, MY_RING_BUF_BYTES);
if (ret != MY_RING_BUF_BYTES) {
    /* not enough room; only ret bytes were copied */
    ...
}
Data can be added to a byte mode ring buffer by directly accessing the ring buffer’s memory. For
example:
uint32_t size;
uint32_t rx_size;
uint8_t *data;
int err;

/* Allocate a buffer within the ring buffer memory. */
size = ring_buf_put_claim(&ring_buf, &data, MY_RING_BUF_BYTES);

/* Work directly on the ring buffer memory; receive_data() stands in
 * for application code (e.g. a driver writing via DMA). */
rx_size = receive_data(data, size);

/* Indicate amount of valid data. rx_size can be equal or less than size. */
err = ring_buf_put_finish(&ring_buf, rx_size);
if (err != 0) {
    /* This shouldn't happen unless rx_size > size */
    ...
}
A data item is written to an item mode ring buffer by calling ring_buf_item_put() . For example:

uint32_t data[MY_DATA_WORDS];
int ret;

ret = ring_buf_item_put(&ring_buf, MY_TYPE, MY_VALUE, data, MY_DATA_WORDS);
if (ret == -EMSGSIZE) {
    /* not enough room for the data item */
    ...
}
If the data item requires only the type or application-specific integer value (i.e. it has no data array), a
size of 0 and data pointer of NULL can be specified.
int ret;

ret = ring_buf_item_put(&ring_buf, MY_TYPE, MY_VALUE, NULL, 0);
if (ret == -EMSGSIZE) {
    /* not enough room for the data item */
    ...
}
Retrieving Data Data bytes are copied out from a byte mode ring buffer by calling ring_buf_get() .
For example:
uint8_t my_data[MY_DATA_BYTES];
size_t ret;

ret = ring_buf_get(&ring_buf, my_data, sizeof(my_data));
if (ret != sizeof(my_data)) {
    /* fewer bytes were available than requested */
    ...
}
Data can be retrieved from a byte mode ring buffer by direct operations on the ring buffer’s memory.
For example:
uint32_t size;
uint32_t proc_size;
uint8_t *data;
int err;

/* Get a pointer to valid data within the ring buffer. */
size = ring_buf_get_claim(&ring_buf, &data, MY_RING_BUF_BYTES);

/* Process the data directly; process_data() stands in for application
 * code (e.g. transmitting the bytes). */
proc_size = process_data(data, size);

/* Indicate amount of data that can be freed. proc_size can be equal or less
 * than size.
 */
err = ring_buf_get_finish(&ring_buf, proc_size);
if (err != 0) {
    /* proc_size exceeds amount of valid data in a ring buffer. */
    ...
}
A data item is read from an item mode ring buffer by calling ring_buf_item_get() . For example:

uint32_t my_data[MY_DATA_WORDS];
uint16_t my_type;
uint8_t my_value;
uint8_t my_size;
int ret;
my_size = MY_DATA_WORDS;
ret = ring_buf_item_get(&ring_buf, &my_type, &my_value, my_data, &my_size);
if (ret == -EMSGSIZE) {
printk("Buffer is too small, need %d uint32_t\n", my_size);
} else if (ret == -EAGAIN) {
printk("Ring buffer is empty\n");
} else {
printk("Got item of type %u value %u of size %u dwords\n",
my_type, my_value, my_size);
...
}
Configuration Options
API Reference
group ring_buffer_apis
Defines
RING_BUF_DECLARE(name, size8)
Define and initialize a ring buffer for byte data.
This macro establishes a ring buffer of an arbitrary size. The basic storage unit is a byte.
The ring buffer can be accessed outside the module where it is defined using:

extern struct ring_buf <name>;
Parameters
• name – Name of the ring buffer.
• size8 – Size of ring buffer (in bytes).
RING_BUF_ITEM_DECLARE(name, size32)
Define and initialize an “item based” ring buffer.
This macro establishes an “item based” ring buffer. Each data item is an array of 32-bit words
(from zero to 1020 bytes in length), coupled with a 16-bit type identifier and an 8-bit integer
value.
The ring buffer can be accessed outside the module where it is defined using:

extern struct ring_buf <name>;
Parameters
• name – Name of the ring buffer.
• size32 – Size of ring buffer (in 32-bit words).
RING_BUF_ITEM_DECLARE_SIZE(name, size32)
Define and initialize an “item based” ring buffer.
This exists for backward compatibility reasons. RING_BUF_ITEM_DECLARE should be used
instead.
Parameters
• name – Name of the ring buffer.
• size32 – Size of ring buffer (in 32-bit words).
RING_BUF_ITEM_DECLARE_POW2(name, pow)
Define and initialize a power-of-2 sized “item based” ring buffer.
This macro establishes an “item based” ring buffer by specifying its size using a power of 2.
This exists mainly for backward compatibility reasons. RING_BUF_ITEM_DECLARE should be
used instead.
Parameters
• name – Name of the ring buffer.
• pow – Ring buffer size exponent.
RING_BUF_ITEM_SIZEOF(expr)
Compute the ring buffer size in 32-bit words needed to store an element.
The argument can be a type or an expression. Note: rounds up if the size is not a multiple of
32 bits.
Parameters
• expr – Expression or type to compute the size of
Functions
static inline void ring_buf_init(struct ring_buf *buf, uint32_t size, uint8_t *data)
Initialize a ring buffer for byte data.
This routine initializes a ring buffer, prior to its first use. It is only used for ring buffers not
defined using RING_BUF_DECLARE.
Parameters
• buf – Address of ring buffer.
• size – Ring buffer size (in bytes).
• data – Ring buffer data area (uint8_t data[size]).
static inline void ring_buf_item_init(struct ring_buf *buf, uint32_t size, uint32_t *data)
Initialize an “item based” ring buffer.
This routine initializes a ring buffer, prior to its first use. It is only used for ring buffers not
defined using RING_BUF_ITEM_DECLARE.
Each data item is an array of 32-bit words (from zero to 1020 bytes in length), coupled with
a 16-bit type identifier and an 8-bit integer value.
Parameters
• buf – Address of ring buffer.
• size – Ring buffer size (in 32-bit words)
• data – Ring buffer data area (uint32_t data[size]).
static inline bool ring_buf_is_empty(struct ring_buf *buf)
Determine if a ring buffer is empty.
Parameters
• buf – Address of ring buffer.
Returns
true if the ring buffer is empty, or false if not.
static inline void ring_buf_reset(struct ring_buf *buf)
Reset ring buffer state.
Parameters
• buf – Address of ring buffer.
static inline uint32_t ring_buf_space_get(struct ring_buf *buf)
Determine free space in a ring buffer.
Parameters
• buf – Address of ring buffer.
Returns
Ring buffer free space (in bytes).
static inline uint32_t ring_buf_item_space_get(struct ring_buf *buf)
Determine free space in an “item based” ring buffer.
Parameters
• buf – Address of ring buffer.
Returns
Ring buffer free space (in 32-bit words).
static inline uint32_t ring_buf_capacity_get(struct ring_buf *buf)
Return ring buffer capacity.
Parameters
• buf – Address of ring buffer.
Returns
Ring buffer capacity (in bytes).
static inline uint32_t ring_buf_size_get(struct ring_buf *buf)
Determine used space in a ring buffer.
Parameters
• buf – Address of ring buffer.
Returns
Ring buffer space used (in bytes).
uint32_t ring_buf_put_claim(struct ring_buf *buf, uint8_t **data, uint32_t size)
Allocate a buffer for writing data to a ring buffer.
Warning: Use cases involving multiple writers to the ring buffer must prevent concurrent
write operations, either by preventing all writers from being preempted or by using a
mutex to govern writes to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item access (calls prefixed
with ring_buf_item_).
Parameters
• buf – [in] Address of ring buffer.
• data – [out] Pointer to the address. It is set to a location within ring buffer.
• size – [in] Requested allocation size (in bytes).
Returns
Size of allocated buffer which can be smaller than requested if there is not enough
free space or buffer wraps.
int ring_buf_put_finish(struct ring_buf *buf, uint32_t size)
Indicate the number of bytes written to the allocated buffer.
Warning: Use cases involving multiple writers to the ring buffer must prevent concurrent
write operations, either by preventing all writers from being preempted or by using a
mutex to govern writes to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item access (calls prefixed
with ring_buf_item_).
Parameters
• buf – Address of ring buffer.
• size – Number of valid bytes in the allocated buffers.
Return values
• 0 – Successful operation.
• -EINVAL – Provided size exceeds free space in the ring buffer.
uint32_t ring_buf_put(struct ring_buf *buf, const uint8_t *data, uint32_t size)
Write (copy) data to a ring buffer.
Warning: Use cases involving multiple writers to the ring buffer must prevent concurrent
write operations, either by preventing all writers from being preempted or by using a
mutex to govern writes to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item access (calls prefixed
with ring_buf_item_).
Parameters
• buf – Address of ring buffer.
• data – Address of data.
• size – Data size (in bytes).
Return values
Number – of bytes written.
uint32_t ring_buf_get_claim(struct ring_buf *buf, uint8_t **data, uint32_t size)
Get the address of valid data in a ring buffer.
Warning: Use cases involving multiple reads of the ring buffer must prevent concurrent
read operations, either by preventing all readers from being preempted or by using a mutex
to govern reads to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item access (calls prefixed
with ring_buf_item_).
Parameters
• buf – [in] Address of ring buffer.
• data – [out] Pointer to the address. It is set to a location within ring buffer.
• size – [in] Requested size (in bytes).
Returns
Number of valid bytes in the provided buffer which can be smaller than requested
if there is not enough free space or buffer wraps.
int ring_buf_get_finish(struct ring_buf *buf, uint32_t size)
Indicate the number of bytes read from the claimed buffer.
Warning: Use cases involving multiple reads of the ring buffer must prevent concurrent
read operations, either by preventing all readers from being preempted or by using a mutex
to govern reads to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item mode (calls prefixed
with ring_buf_item_).
Parameters
• buf – Address of ring buffer.
• size – Number of bytes that can be freed.
Return values
• 0 – Successful operation.
• -EINVAL – Provided size exceeds valid bytes in the ring buffer.
uint32_t ring_buf_get(struct ring_buf *buf, uint8_t *data, uint32_t size)
Read data from a ring buffer.
Warning: Use cases involving multiple reads of the ring buffer must prevent concurrent
read operations, either by preventing all readers from being preempted or by using a mutex
to govern reads to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item mode (calls prefixed
with ring_buf_item_).
Parameters
• buf – Address of ring buffer.
• data – Address of the output buffer. Can be NULL to discard data.
• size – Data size (in bytes).
Return values
Number – of bytes written to the output buffer.
uint32_t ring_buf_peek(struct ring_buf *buf, uint8_t *data, uint32_t size)
Peek at data in a ring buffer without removing it.
Warning: Use cases involving multiple reads of the ring buffer must prevent concurrent
read operations, either by preventing all readers from being preempted or by using a mutex
to govern reads to the ring buffer.
Warning: Ring buffer instance should not mix byte access and item mode (calls prefixed
with ring_buf_item_).
Warning: Multiple calls to peek will result in the same data being ‘peeked’ multi-
ple times. To remove data, use either ring_buf_get or ring_buf_get_claim followed by
ring_buf_get_finish with a non-zero size.
Parameters
• buf – Address of ring buffer.
• data – Address of the output buffer. Cannot be NULL.
• size – Data size (in bytes).
Return values
Number – of bytes written to the output buffer.
int ring_buf_item_put(struct ring_buf *buf, uint16_t type, uint8_t value, uint32_t *data, uint8_t
size32)
Write a data item to a ring buffer.
This routine writes a data item to ring buffer buf. The data item is an array of 32-bit words
(from zero to 1020 bytes in length), coupled with a 16-bit type identifier and an 8-bit integer
value.
Warning: Use cases involving multiple writers to the ring buffer must prevent concurrent
write operations, either by preventing all writers from being preempted or by using a
mutex to govern writes to the ring buffer.
Parameters
• buf – Address of ring buffer.
• type – Data item’s type identifier (application specific).
• value – Data item’s integer value (application specific).
• data – Address of data item.
• size32 – Data item size (number of 32-bit words).
Return values
• 0 – Data item was written.
• -EMSGSIZE – Ring buffer has insufficient free space.
int ring_buf_item_get(struct ring_buf *buf, uint16_t *type, uint8_t *value, uint32_t *data,
uint8_t *size32)
Read a data item from a ring buffer.
This routine reads a data item from ring buffer buf. The data item is an array of 32-bit words
(up to 1020 bytes in length), coupled with a 16-bit type identifier and an 8-bit integer value.
Warning: Use cases involving multiple reads of the ring buffer must prevent concurrent
read operations, either by preventing all readers from being preempted or by using a mutex
to govern reads to the ring buffer.
Parameters
• buf – Address of ring buffer.
• type – Area to store the data item’s type identifier.
• value – Area to store the data item’s integer value.
• data – Area to store the data item. Can be NULL to discard data.
• size32 – Size of the data item storage area (number of 32-bit chunks).
Return values
• 0 – Data item was fetched; size32 now contains the number of 32-bit words
read into data area data.
• -EAGAIN – Ring buffer is empty.
• -EMSGSIZE – Data area data is too small; size32 now contains the number of
32-bit words needed.
The timing functions can be used to obtain execution time of a section of code to aid in analysis and
optimization.
Please note that the timing functions may use a different timer than the default kernel timer, where the
timer being used is specified by architecture, SoC or board configuration.
3.6.1 Configuration
3.6.2 Usage
Example
void gather_timing(void)
{
    timing_t start_time, end_time;
    uint64_t total_cycles;
    uint64_t total_ns;

    timing_init();
    timing_start();

    start_time = timing_counter_get();

    code_execution_to_be_measured();

    end_time = timing_counter_get();

    total_cycles = timing_cycles_get(&start_time, &end_time);
    total_ns = timing_cycles_to_ns(total_cycles);

    timing_stop();
}
group timing_api
Timing Measurement APIs.
Functions
void timing_init(void)
Initialize the timing subsystem.
Perform the necessary steps to initialize the timing subsystem.
void timing_start(void)
Signal the start of the timing information gathering.
Signal to the timing subsystem that timing information will be gathered from this point for-
ward.
void timing_stop(void)
Signal the end of the timing information gathering.
Signal to the timing subsystem that timing information is no longer being gathered from this
point forward.
static inline timing_t timing_counter_get(void)
Return timing counter.
Returns
Timing counter.
static inline uint64_t timing_cycles_get(volatile timing_t *const start, volatile timing_t *const
end)
Get number of cycles between start and end.
For some architectures or SoCs, the raw numbers from counter need to be scaled to obtain
actual number of cycles.
Parameters
• start – Pointer to counter at start of a measured execution.
• end – Pointer to counter at stop of a measured execution.
Returns
Number of cycles between start and end.
3.7.1 Overview
Uptime in Zephyr is based on a tick counter. With the default CONFIG_TICKLESS_KERNEL this counter
advances at a nominally constant rate from zero at the instant the system started. The POSIX equivalent
to this counter is something like CLOCK_MONOTONIC or, in Linux, CLOCK_MONOTONIC_RAW. k_uptime_get()
provides a millisecond representation of this time.
Applications often need to correlate the Zephyr internal time with external time scales used in daily life,
such as local time or Coordinated Universal Time. These systems interpret time in different ways and
may have discontinuities due to leap seconds and local time offsets like daylight saving time.
Because of these discontinuities, as well as significant inaccuracies in the clocks underlying the cycle
counter, the offset between time estimated from the Zephyr clock and the actual time in a “real” civil
time scale is not constant and can vary widely over the runtime of a Zephyr application.
The time utilities API supports:
• converting between time representations
• synchronizing and aligning time scales
For terminology and concepts that support these functions see Concepts Underlying Time Support in
Zephyr.
Representation Transformation
group timeutil_repr_apis
Functions
int64_t timeutil_timegm64(const struct tm *tm)
Convert broken-down time to a POSIX epoch offset in seconds.
See also:
http://man7.org/linux/man-pages/man3/timegm.3.html
Parameters
• tm – pointer to broken down time.
Returns
the corresponding time in the POSIX epoch time scale.
time_t timeutil_timegm(const struct tm *tm)
Convert broken-down time to a POSIX epoch offset in seconds.
See also:
http://man7.org/linux/man-pages/man3/timegm.3.html
Parameters
• tm – pointer to broken down time.
Returns
the corresponding time in the POSIX epoch time scale. If the time cannot be
represented then (time_t)-1 is returned and errno is set to ERANGE.
group timeutil_sync_apis
Functions
int timeutil_sync_state_set_skew(struct timeutil_sync_state *tsp, float skew, const struct
timeutil_sync_instant *base)
Update the state with a new skew and possibly a new base value.
Optionally update the base timestamp. If the base is replaced the latest instant will be cleared
until timeutil_sync_state_update() is invoked.
Parameters
• tsp – pointer to a time synchronization state.
• skew – the skew to be used. The value must be positive and shouldn’t be too
far away from 1.
• base – optional new base to be set. If provided this becomes the base times-
tamp that will be used along with skew to convert between reference and local
timescale instants. Setting the base clears the captured latest value.
Returns
0 if skew was updated
Returns
-EINVAL if skew was not valid
float timeutil_sync_estimate_skew(const struct timeutil_sync_state *tsp)
Estimate the skew based on current state.
Using the base and latest syncpoints from the state determine the skew of the local clock
relative to the reference clock. See timeutil_sync_state::skew.
Parameters
• tsp – pointer to a time synchronization state. The base and latest syncpoints
must be present and the latest syncpoint must be after the base point in the
local time scale.
Returns
the estimated skew, or zero if skew could not be estimated.
int timeutil_sync_ref_from_local(const struct timeutil_sync_state *tsp, uint64_t local,
uint64_t *refp)
Interpolate a reference timescale instant from a local instant.
Parameters
• tsp – pointer to a time synchronization state. This must have a base and a
skew installed.
• local – an instant measured in the local timescale. This may be before or after
the base instant.
• refp – where the corresponding instant in the reference timescale should be
stored. A negative interpolated reference time produces an error. If interpola-
tion fails the referenced object is not modified.
Return values
• 0 – if interpolated using a skew of 1
• 1 – if interpolated using a skew not equal to 1
• -EINVAL –
– the time synchronization state is not adequately initialized
– refp is null
• -ERANGE – the interpolated reference time would be negative
int timeutil_sync_local_from_ref(const struct timeutil_sync_state *tsp, uint64_t ref, int64_t
*localp)
Interpolate a local timescale instant from a reference instant.
Parameters
• tsp – pointer to a time synchronization state. This must have a base and a
skew installed.
• ref – an instant measured in the reference timescale. This may be before or
after the base instant.
• localp – where the corresponding instant in the local timescale should be
stored. An interpolated value before local time 0 is provided without error. If
interpolation fails the referenced object is not modified.
Return values
• 0 – if successful with a skew of 1
• 1 – if successful with a skew not equal to 1
• -EINVAL –
– the time synchronization state is not adequately initialized
– localp is null
int32_t timeutil_sync_skew_to_ppb(float skew)
Convert from a skew to an error in parts-per-billion.
A skew of 1.0 has zero error. A skew less than 1 has a positive error (clock is faster than it
should be). A skew greater than one has a negative error (clock is slower than it should be).
Note that due to the limited precision of float compared with double the smallest error that
can be represented is about 120 ppb. A “precise” time source may have error on the order of
2000 ppb.
A skew greater than 3.14748 may underflow the 32-bit representation; this represents a clock
running at less than 1/3 its nominal rate.
Returns
skew error represented as parts-per-billion, or INT32_MIN if the skew cannot be
represented in the return type.
struct timeutil_sync_config
#include <timeutil.h> Immutable state for synchronizing two clocks.
Values required to convert durations between two time scales.
Note: The accuracy of the translation and calculated skew between sources depends on
the resolution of these frequencies. A reference frequency with microsecond or nanosecond
resolution would produce the most accurate tracking when the local reference is the Zephyr
tick counter. A reference source like an RTC chip with 1 Hz resolution requires a much larger
interval between sampled instants to detect relative clock drift.
Public Members
uint32_t ref_Hz
The nominal instance counter rate in Hz.
This value is assumed to be precise, but may drift depending on the reference clock source.
The value must be positive.
uint32_t local_Hz
The nominal local counter rate in Hz.
This value is assumed to be inaccurate but reasonably stable. For a local clock driven by a
crystal oscillator an error of 25 ppm is common; for an RC oscillator larger errors should
be expected. The timeutil_sync infrastructure can calculate the skew between the local
and reference clocks and apply it when converting between time scales.
The value must be positive.
struct timeutil_sync_instant
#include <timeutil.h> Representation of an instant in two time scales.
Capturing the same instant in two time scales provides a registration point that can be used
to convert between those time scales.
Public Members
uint64_t ref
An instant in the reference time scale.
This must never be zero in an initialized timeutil_sync_instant object.
uint64_t local
The corresponding instant in the local time scale.
This may be zero in a valid timeutil_sync_instant object.
struct timeutil_sync_state
#include <timeutil.h> State required to convert instants between time scales.
This state in conjunction with functions that manipulate it capture the offset information
necessary to convert between two timescales along with information that corrects for skew
due to inaccuracies in clock rates.
State objects should be zero-initialized before use.
Public Members
float skew
The scale factor used to correct for clock skew.
The nominal rate for the local counter is assumed to be inaccurate but stable, i.e. it will
generally be some parts-per-million faster or slower than specified.
A duration in observed local clock ticks must be multiplied by this value to produce a
duration in ticks of a clock operating at the nominal local rate.
A zero value indicates that the skew has not been initialized. If the value is zero when
base is initialized the skew will be set to 1. Otherwise the skew is assigned through
timeutil_sync_state_set_skew().
International Atomic Time (TAI) is a time scale based on averaging clocks that count in SI seconds. TAI
is a monotonic and continuous time scale.
Universal Time (UT) is a time scale based on Earth’s rotation. UT is a discontinuous time scale as it
requires occasional adjustments (leap seconds) to maintain alignment to changes in Earth’s rotation.
Thus the difference between TAI and UT varies over time. There are several variants of UT, with UTC
being the most common.
UT times are independent of location. UT is the basis for Standard Time (or “local time”) which is the
time at a particular location. Standard time has a fixed offset from UT at any given instant, primarily
influenced by longitude, but the offset may be adjusted (“daylight saving time”) to align standard time
to the local solar time. In a sense local time is “more discontinuous” than UT.
POSIX Time is a time scale that counts seconds since the “POSIX epoch” at 1970-01-01T00:00:00Z
(i.e. the start of 1970 UTC). UNIX Time is an extension of POSIX time using negative values to rep-
resent times before the POSIX epoch. Both of these scales assume that every day has exactly 86400
seconds. In normal use instants in these scales correspond to times in the UTC scale, so they inherit the
discontinuity.
The continuous analogue is UNIX Leap Time which is UNIX time plus all leap-second corrections added
after the POSIX epoch (when TAI-UTC was 8 s).
Example of Time Scale Differences A positive leap second was introduced at the end of 2016, in-
creasing the difference between TAI and UTC from 36 seconds to 37 seconds. There was no leap second
introduced at the end of 1999, when the difference between TAI and UTC was only 32 seconds. The
following table shows relevant civil and epoch times in several scales:
UTC Date UNIX time TAI Date TAI-UTC UNIX Leap Time
1970-01-01T00:00:00Z 0 1970-01-01T00:00:08 +8 0
1999-12-31T23:59:28Z 946684768 2000-01-01T00:00:00 +32 946684792
1999-12-31T23:59:59Z 946684799 2000-01-01T00:00:31 +32 946684823
2000-01-01T00:00:00Z 946684800 2000-01-01T00:00:32 +32 946684824
2016-12-31T23:59:59Z 1483228799 2017-01-01T00:00:35 +36 1483228827
2016-12-31T23:59:60Z undefined 2017-01-01T00:00:36 +36 1483228828
2017-01-01T00:00:00Z 1483228800 2017-01-01T00:00:37 +37 1483228829
Functional Requirements The Zephyr tick counter has no concept of leap seconds or standard time
offsets and is a continuous time scale. However it can be relatively inaccurate, with drifts as much as
three minutes per hour (assuming an RC timer with 5% tolerance).
There are two stages required to support conversion between Zephyr time and common human time
scales:
• Translation between the continuous but inaccurate Zephyr time scale and an accurate external
stable time scale;
• Translation between the stable time scale and the (possibly discontinuous) civil time scale.
The API around timeutil_sync_state_update() supports the first step of converting between contin-
uous time scales.
The second step requires external information including schedules of leap seconds and local time offset
changes. This may be best provided by an external library, and is not currently part of the time utility
APIs.
Selecting an External Source and Time Scale If an application requires civil time accuracy within
several seconds then UTC could be used as the stable time source. However, if the external source
adjusts to a leap second there will be a discontinuity: the elapsed time between two observations taken
at 1 Hz is not equal to the numeric difference between their timestamps.
For precise activities a continuous scale that is independent of local and solar adjustments simplifies
things considerably. Suitable continuous scales include:
• GPS time: epoch of 1980-01-06T00:00:00Z, continuous following TAI with an offset of TAI-GPS = 19 s.
• Bluetooth mesh time: epoch of 2000-01-01T00:00:00Z, continuous following TAI with an offset of -32 s.
• UNIX Leap Time: epoch of 1970-01-01T00:00:00Z, continuous following TAI with an offset of -8 s.
Because C and Zephyr library functions support conversion between integral and calendar time repre-
sentations using the UNIX epoch, UNIX Leap Time is an ideal choice for the external time scale.
The mechanism used to populate synchronization points is not relevant: it may involve reading from
a local high-precision RTC peripheral, exchanging packets over a network using a protocol like NTP or
PTP, or processing NMEA messages received from a GPS, with or without a 1 PPS signal.
3.8 Utilities
This page contains reference documentation for <sys/util.h>, which provides miscellaneous utility
functions and macros.
group sys-util
Defines
POINTER_TO_UINT(x)
Cast x, a pointer, to an unsigned integer.
UINT_TO_POINTER(x)
Cast x, an unsigned integer, to a void*.
POINTER_TO_INT(x)
Cast x, a pointer, to a signed integer.
INT_TO_POINTER(x)
Cast x, a signed integer, to a void*.
BITS_PER_LONG
Number of bits in a long int.
BITS_PER_LONG_LONG
Number of bits in a long long int.
GENMASK(h, l)
Create a contiguous bitmask starting at bit position l and ending at position h.
GENMASK64(h, l)
Create a contiguous 64-bit bitmask starting at bit position l and ending at position h.
LSB_GET(value)
Extract the Least Significant Bit from value.
FIELD_GET(mask, value)
Extract a bitfield element from value corresponding to the field mask mask.
FIELD_PREP(mask, value)
Prepare a bitfield element using value with mask representing its field position and width.
The result should be combined with other fields using a logical OR.
ZERO_OR_COMPILE_ERROR(cond)
0 if cond is true-ish; causes a compile error otherwise.
IS_ARRAY(array)
Zero if array has an array type, a compile error otherwise.
This macro is available only from C, not C++.
ARRAY_SIZE(array)
Number of elements in the given array.
In C++, due to language limitations, this will accept as array any type that implements
operator[]. The results may not be particularly meaningful in this case.
In C, passing a pointer as array causes a compile error.
IS_ARRAY_ELEMENT(array, ptr)
Whether ptr is an element of array.
This macro can be seen as a slightly stricter version of PART_OF_ARRAY in that it also ensures
that ptr is aligned to an array-element boundary of array.
In C, passing a pointer as array causes a compile error.
Parameters
• array – the array in question
• ptr – the pointer in question
DIV_ROUND_UP(n, d)
Divide and round up.
Example:
DIV_ROUND_UP(1, 2); // 1
DIV_ROUND_UP(3, 2); // 2
Parameters
• n – Numerator.
• d – Denominator.
Returns
The result of n / d, rounded up.
ceiling_fraction(numerator, divider)
Ceiling function applied to numerator / divider as a fraction.
Deprecated:
Use DIV_ROUND_UP() instead.
MAX(a, b)
Obtain the maximum of two values.
Note: Arguments are evaluated twice. Use Z_MAX for a GCC-only, single evaluation version
Parameters
• a – First value.
• b – Second value.
Returns
Maximum value of a and b.
MIN(a, b)
Obtain the minimum of two values.
Note: Arguments are evaluated twice. Use Z_MIN for a GCC-only, single evaluation version
Parameters
• a – First value.
• b – Second value.
Returns
Minimum value of a and b.
CLAMP(val, low, high)
Clamp a value to a given range.
Note: Arguments are evaluated multiple times. Use Z_CLAMP for a GCC-only, single evalua-
tion version.
Parameters
• val – Value to be clamped.
• low – Lowest allowed value (inclusive).
• high – Highest allowed value (inclusive).
Returns
Clamped value.
IN_RANGE(val, min, max)
Check if a value is within the given range.
Parameters
• val – Value to be checked.
• min – Lower bound (inclusive).
• max – Upper bound (inclusive).
Return values
• true – If value is within range
• false – If the value is not within range
LOG2(x)
Compute log2(x)
Note: This macro expands its argument multiple times (to permit use in constant expres-
sions), which must not have side effects.
Parameters
• x – An unsigned integral value to compute logarithm of (positive only)
Returns
log2(x) when 1 <= x <= max(type(x)), -1 when x < 1
LOG2CEIL(x)
Compute ceil(log2(x))
Note: This macro expands its argument multiple times (to permit use in constant expres-
sions), which must not have side effects.
Parameters
• x – An unsigned integral value
Returns
ceil(log2(x)) when 1 <= x <= max(type(x)), 0 when x < 1
NHPOT(x)
Compute next highest power of two.
Equivalent to 2^ceil(log2(x))
Note: This macro expands its argument multiple times (to permit use in constant expres-
sions), which must not have side effects.
Parameters
• x – An unsigned integral value
Returns
2^ceil(log2(x)) or 0 if 2^ceil(log2(x)) would saturate 64-bits
KB(x)
Number of bytes in x kibibytes.
MB(x)
Number of bytes in x mebibytes.
GB(x)
Number of bytes in x gibibytes.
KHZ(x)
Number of Hz in x kHz.
MHZ(x)
Number of Hz in x MHz.
WAIT_FOR(expr, timeout, delay_stmt)
Wait for an expression to return true with a timeout.
Spin on an expression with a timeout and optional delay between iterations
Commonly needed when waiting on hardware to complete an asynchronous request to
read/write/initialize/reset, but useful for any expression.
Parameters
• expr – Truth expression upon which to poll, e.g.: XYZREG & XYZREG_EN
IS_ENABLED(config_macro)
Check for macro definition in compiler-visible expressions.
This turns a macro that may be defined to 1, or not defined at all, into a literal expression the C compiler can evaluate, so much #ifdef usage can be replaced with equivalents like:
if (IS_ENABLED(CONFIG_FOO)) {
do_something_with_foo
}
This is cleaner since the compiler can generate errors and warnings for
do_something_with_foo even when CONFIG_FOO is undefined.
Note: Use of IS_ENABLED in a #if statement is discouraged as it doesn’t provide any benefit
vs plain #if defined()
Parameters
• config_macro – Macro to check
Returns
1 if config_macro is defined to 1, 0 otherwise (including if config_macro is not
defined)
COND_CODE_1(_flag, _if_1_code, _else_code)
Insert code depending on whether _flag expands to 1 or not.
This relies on similar tricks as IS_ENABLED(), but depending on the result of _flag expansion,
either _if_1_code or _else_code is expanded.
To prevent the preprocessor from treating commas as argument separators, the _if_1_code
and _else_code expressions must be inside brackets/parentheses: (). These are stripped
away during macro expansion.
Example:
COND_CODE_1(CONFIG_FLAG, (uint32_t x;), ())
If CONFIG_FLAG is defined to 1, this expands to uint32_t x; and to nothing otherwise. A similar
effect can be achieved with an #if block that defines a MAYBE_DECLARE(x) helper and then invokes
MAYBE_DECLARE(x) separately.
However, the advantage of COND_CODE_1() is that code is resolved in place where it is used,
while the #if method defines MAYBE_DECLARE on two lines and requires it to be invoked again
on a separate line. This makes COND_CODE_1() more concise and also sometimes more useful
when used within another macro’s expansion.
Note: _flag can be the result of preprocessor expansion, e.g. an expression involving
NUM_VA_ARGS_LESS_1(...) . However, _if_1_code is only expanded if _flag expands to the
integer literal 1. Integer expressions that evaluate to 1, e.g. after doing some arithmetic, will
not work.
Parameters
• _flag – evaluated flag
• _if_1_code – result if _flag expands to 1; must be in parentheses
• _else_code – result otherwise; must be in parentheses
See also:
COND_CODE_1()
Parameters
• _flag – evaluated flag
• _if_0_code – result if _flag expands to 0; must be in parentheses
• _else_code – result otherwise; must be in parentheses
IF_ENABLED(_flag, _code)
Insert code if _flag is defined and equals 1.
Like COND_CODE_1(), this expands to _code if _flag is defined to 1; it expands to nothing
otherwise.
Example:
IF_ENABLED(CONFIG_FLAG, (uint32_t foo;))
Parameters
• _flag – evaluated flag
• _code – result if _flag expands to 1; must be in parentheses
IS_EMPTY(...)
Check if a macro has a replacement expression.
If the argument is a macro that expands to nothing (an empty replacement), this returns true; if it
expands to a nonempty value, it returns false. It only works with defined macros, so an additional
#ifdef test may be needed in some cases.
This macro may be used with COND_CODE_1() and COND_CODE_0() while processing
__VA_ARGS__ to avoid processing empty arguments.
Example:
#define EMPTY
#define NON_EMPTY 1
#undef UNDEFINED
IS_EMPTY(EMPTY)
IS_EMPTY(NON_EMPTY)
In the above examples, IS_EMPTY(EMPTY) returns true and IS_EMPTY(NON_EMPTY) returns false.
Parameters
• ... – macro to check for emptiness (may be __VA_ARGS__)
IS_EQ(a, b)
Like a == b, but does evaluation and short-circuiting at C preprocessor time.
This, however, only works for integer literals from 0 to 255.
LIST_DROP_EMPTY(...)
Remove empty arguments from list.
During macro expansion, __VA_ARGS__ and other preprocessor generated lists may contain
empty elements, e.g.:
EMPTY, a, b, EMPTY, d
When processing such lists, e.g. using FOR_EACH(), all empty elements will be pro-
cessed, and may require filtering out. To make that process easier, it is enough to invoke
LIST_DROP_EMPTY which will remove all empty elements.
Example:
#define LIST EMPTY, a, b, EMPTY, d
LIST_DROP_EMPTY(LIST)
expands to:
a, b, d
Parameters
• ... – list to be processed
EMPTY
Macro with an empty expansion.
This trivial definition is provided for readability when a macro should expand to an empty
result, which e.g. is sometimes needed to silence checkpatch.
IDENTITY(V)
Macro that expands to its argument.
This is useful in macros like FOR_EACH() when there is no transformation required on the list
elements.
Parameters
• V – any value
GET_ARG_N(N, ...)
Get nth argument from argument list.
Parameters
• N – Argument index to fetch. Counting from 1.
• ... – Variable list of arguments from which one argument is returned.
Returns
Nth argument.
GET_ARGS_LESS_N(N, ...)
Strips the first N arguments from the argument list.
Parameters
• N – Number of arguments to discard.
• ... – Variable list of arguments.
Returns
Argument list without the first N arguments.
UTIL_OR(a, b)
Like a || b, but does evaluation and short-circuiting at C preprocessor time.
This is not the same as the binary || operator; in particular, a should expand to an integer
literal 0 or 1. However, b can be any value.
This can be useful when b is an expression that would cause a build error when a is 1.
UTIL_AND(a, b)
Like a && b, but does evaluation and short-circuiting at C preprocessor time.
This is not the same as the binary && operator; in particular, a should expand to an integer
literal 0 or 1, while b can be any value.
This can be useful when b is an expression that would cause a build error when a is 0.
UTIL_INC(x)
UTIL_INC(x) for an integer literal x from 0 to 255 expands to an integer literal whose value is
x+1.
Similarly, UTIL_DEC(x) is (x-1) as an integer literal.
UTIL_DEC(x)
UTIL_X2(y)
UTIL_X2(y) for an integer literal y from 0 to 255 expands to an integer literal whose value is
2 * y.
LISTIFY(LEN, F, sep, ...)
Generates a sequence of code with configurable separator.
Example:
#define FOO(i, _) MY_PWM ## i
{ LISTIFY(PWM_COUNT, FOO, (,)) }
Parameters
• LEN – The length of the sequence. Must be an integer literal less than 255.
• F – A macro function that accepts at least two arguments: F(i, ...). F is
called repeatedly in the expansion. Its first argument i is the index in the
sequence, and the variable list of arguments passed to LISTIFY are passed
through to F.
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
int a4;
int a5;
int a6;
Parameters
• F – Macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – Variable argument list. The macro F is invoked as F(element) for each
element in the list.
int my_array[] = {
FOR_EACH_NONEMPTY_TERM(SQUARE, (,), FOO(...))
FOR_EACH_NONEMPTY_TERM(SQUARE, (,), BAR(...))
FOR_EACH_NONEMPTY_TERM(SQUARE, (,), BAZ(...))
};
a. figuring out whether the FOO, BAR, and BAZ expansions are empty and adding a comma
manually (or not) between FOR_EACH() calls
b. rewriting SQUARE so it reacts appropriately when “x” is empty (which would be necessary
if e.g. FOO expands to nothing)
Parameters
• F – Macro to invoke on each nonempty element of the variable arguments
• term – Terminator (e.g. comma or semicolon) placed after each invocation
of F. Must be in parentheses; this is required to enable providing a comma as
separator.
• ... – Variable argument list. The macro F is invoked as F(element) for each
nonempty element in the list.
int a0 = 4;
int a1 = 5;
int a2 = 6;
Parameters
• F – Macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
func(4, dev);
func(5, dev);
func(6, dev);
Parameters
• F – Macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• fixed_arg – Fixed argument passed to F as the second macro parameter.
• ... – Variable argument list. The macro F is invoked as F(element,
fixed_arg) for each element in the list.
int a0 = 4;
int a1 = 5;
int a2 = 6;
Parameters
• F – Macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; This is
required to enable providing a comma as separator.
• fixed_arg – Fixed argument passed to F as the third macro parameter.
• ... – Variable list of arguments. The macro F is invoked as F(index,
element, fixed_arg) for each element in the list.
REVERSE_ARGS(...)
Reverse arguments order.
Parameters
• ... – Variable argument list.
MACRO_MAP_CAT(...)
Mapping macro that pastes results together.
Example:
#define FOO(x) item_##x##_
MACRO_MAP_CAT(FOO, a, b, c)
expands to:
item_a_item_b_item_c_
Parameters
• ... – Macro to expand on each argument, followed by its arguments. (The
macro should take exactly one argument.)
Returns
The results of expanding the macro on each argument, all pasted together
MACRO_MAP_CAT_N(N, ...)
Mapping macro that pastes a fixed number of results together.
Similar to MACRO_MAP_CAT(), but expects a fixed number of arguments. If more arguments
are given than are expected, the rest are ignored.
Parameters
• N – Number of arguments to map
• ... – Macro to expand on each argument, followed by its arguments. (The
macro should take exactly one argument.)
Returns
The results of expanding the macro on each argument, all pasted together
Functions
Returns
Pointer to the utf8_str
char *utf8_lcpy(char *dst, const char *src, size_t n)
Copies a UTF-8 encoded string from src to dst.
The resulting dst will always be NULL terminated if n is larger than 0, and the dst string will
always be properly UTF-8 truncated.
Parameters
• dst – The destination of the UTF-8 string.
• src – The source string
• n – The size of the dst buffer. Maximum number of characters copied is n - 1.
If 0 nothing will be done, and the dst will not be NULL terminated.
Returns
Pointer to the dst
This page contains the reference documentation for the iterable sections APIs, which can be
used for defining iterable areas of equally-sized data structures, that can be iterated on using
STRUCT_SECTION_FOREACH .
3.9.1 Usage
Iterable section elements are typically used by defining the data structure and associated initializer in a
common header file, so that they can be instantiated anywhere in the code base.
struct my_data {
int a, b;
};
...
DEFINE_DATA(d1, 1, 2);
DEFINE_DATA(d2, 3, 4);
DEFINE_DATA(d3, 5, 6);
Then the linker has to be set up to place the structures in a contiguous segment using one of the
linker macros such as ITERABLE_SECTION_RAM or ITERABLE_SECTION_ROM . Custom linker snippets are
normally declared using one of the zephyr_linker_sources() CMake functions, using the appropriate
section identifier: DATA_SECTIONS for RAM structures and SECTIONS for ROM ones.
# CMakeLists.txt
zephyr_linker_sources(DATA_SECTIONS iterables.ld)
# iterables.ld
ITERABLE_SECTION_RAM(my_data, 4)
STRUCT_SECTION_FOREACH(my_data, data) {
printk("%p: a: %d, b: %d\n", data, data->a, data->b);
}
Note: The linker is going to place the entries sorted by name, so the example above would visit d1, d2
and d3 in that order, regardless of how they were defined in the code.
group iterable_section_apis
Iterable Sections APIs.
Defines
ITERABLE_SECTION_ROM(struct_type, subalign)
Define a read-only iterable section output.
Define an output section which will set up an iterable area of equally-sized data structures.
For use with STRUCT_SECTION_ITERABLE(). Input sections will be sorted by name, per ld’s
SORT_BY_NAME.
This macro should be used for read-only data.
Note that this keeps the symbols in the image even though they are not being directly refer-
enced. Use this when symbols are indirectly referenced by iterating through the section.
ITERABLE_SECTION_ROM_NUMERIC(struct_type, subalign)
Define a read-only iterable section output, sorted numerically.
This version of ITERABLE_SECTION_ROM() sorts the entries numerically, that is, SECNAME_10
will come after SECNAME_2. The _ separator is required, and up to two numeric digits are handled
(0-99).
See also:
ITERABLE_SECTION_ROM()
ITERABLE_SECTION_ROM_GC_ALLOWED(struct_type, subalign)
Define a garbage collectable read-only iterable section output.
Define an output section which will set up an iterable area of equally-sized data structures.
For use with STRUCT_SECTION_ITERABLE(). Input sections will be sorted by name, per ld’s
SORT_BY_NAME.
This macro should be used for read-only data.
Note that the symbols within the section can be garbage collected.
ITERABLE_SECTION_RAM(struct_type, subalign)
Define a read-write iterable section output.
Define an output section which will set up an iterable area of equally-sized data structures.
For use with STRUCT_SECTION_ITERABLE(). Input sections will be sorted by name, per ld’s
SORT_BY_NAME.
This macro should be used for read-write data that is modified at runtime.
Note that this keeps the symbols in the image even though they are not being directly refer-
enced. Use this when symbols are indirectly referenced by iterating through the section.
ITERABLE_SECTION_RAM_NUMERIC(struct_type, subalign)
Define a read-write iterable section output, sorted numerically.
This version of ITERABLE_SECTION_RAM() sorts the entries numerically, that is, SECNAME10
will come after SECNAME2. Up to 2 numeric digits are handled (0-99).
See also:
ITERABLE_SECTION_RAM()
ITERABLE_SECTION_RAM_GC_ALLOWED(struct_type, subalign)
Define a garbage collectable read-write iterable section output.
Define an output section which will set up an iterable area of equally-sized data structures.
For use with STRUCT_SECTION_ITERABLE(). Input sections will be sorted by name, per ld’s
SORT_BY_NAME.
This macro should be used for read-write data that is modified at runtime.
Note that the symbols within the section can be garbage collected.
TYPE_SECTION_ITERABLE(type, varname, secname, section_postfix)
Defines a new element for an iterable section for a generic type.
Convenience helper combining __in_section() and Z_DECL_ALIGN(). The section name will
be ‘.[SECNAME].static.[SECTION_POSTFIX]’
In the linker script, create output sections for these using ITERABLE_SECTION_ROM() or IT-
ERABLE_SECTION_RAM().
Note: In order to store the element in ROM, a const specifier has to be added to the declara-
tion: const TYPE_SECTION_ITERABLE(. . . );
Parameters
• type – [in] data type of variable
• varname – [in] name of variable to place in section
• secname – [in] type name of iterable section.
• section_postfix – [in] postfix to use in section name
TYPE_SECTION_START(secname)
iterable section start symbol for a generic type
will return ‘_<SECNAME>_list_start’.
Parameters
• secname – [in] type name of iterable section. For ‘struct foobar’ this would be
TYPE_SECTION_START(foobar)
TYPE_SECTION_END(secname)
iterable section end symbol for a generic type
will return ‘_<SECNAME>_list_end’.
Parameters
• secname – [in] type name of iterable section. For ‘struct foobar’ this would be
TYPE_SECTION_START(foobar)
TYPE_SECTION_START_EXTERN(type, secname)
iterable section extern for start symbol for a generic type
Helper macro to give extern for start of iterable section. The macro typically will be called
TYPE_SECTION_START_EXTERN(struct foobar, foobar). This allows the macro to hand differ-
ent types as well as cases where the type and section name may differ.
Parameters
• type – [in] data type of section
• secname – [in] name of output section
TYPE_SECTION_END_EXTERN(type, secname)
iterable section extern for end symbol for a generic type
Helper macro to give extern for end of iterable section. The macro typically will be called
TYPE_SECTION_END_EXTERN(struct foobar, foobar). This allows the macro to hand different
types as well as cases where the type and section name may differ.
Parameters
• type – [in] data type of section
• secname – [in] name of output section
TYPE_SECTION_FOREACH(type, secname, iterator)
Iterate over a specified iterable section for a generic type.
Iterator for structure instances gathered by TYPE_SECTION_ITERABLE(). The linker must
provide a _<SECNAME>_list_start symbol and a _<SECNAME>_list_end symbol to mark
the start and the end of the list of struct objects to iterate over. This is normally done using
ITERABLE_SECTION_ROM() or ITERABLE_SECTION_RAM() in the linker script.
TYPE_SECTION_GET(type, secname, i, dst)
Get element from section for a generic type.
Parameters
• type – [in] type of element
• secname – [in] name of output section
• i – [in] Index.
• dst – [out] Pointer to location where pointer to element is written.
STRUCT_SECTION_START_EXTERN(struct_type)
iterable section extern for start symbol for a struct
Helper macro to give extern for start of iterable section.
Parameters
• struct_type – [in] data type of section
STRUCT_SECTION_END(struct_type)
iterable section end symbol for a struct type
Parameters
• struct_type – [in] data type of section
STRUCT_SECTION_END_EXTERN(struct_type)
iterable section extern for end symbol for a struct
Helper macro to give extern for end of iterable section.
Parameters
• struct_type – [in] data type of section
STRUCT_SECTION_ITERABLE_ALTERNATE(secname, struct_type, varname)
Defines a new element of alternate data type for an iterable section.
Special variant of STRUCT_SECTION_ITERABLE(), for placing alternate data types within the
iterable section of a specific data type. The data type sizes and semantics must be equivalent!
STRUCT_SECTION_ITERABLE_ARRAY_ALTERNATE(secname, struct_type, varname, size)
Defines an array of elements of alternate data type for an iterable section.
See also:
STRUCT_SECTION_ITERABLE_ALTERNATE
STRUCT_SECTION_ITERABLE(struct_type, varname)
Defines a new element for an iterable section.
Convenience helper combining __in_section() and Z_DECL_ALIGN(). The section name is the
struct type prepended with an underscore. The subsection is “static” and the subsubsection is
the variable name.
In the linker script, create output sections for these using ITERABLE_SECTION_ROM() or IT-
ERABLE_SECTION_RAM().
Note: In order to store the element in ROM, a const specifier has to be added to the declara-
tion: const STRUCT_SECTION_ITERABLE(. . . );
See also:
STRUCT_SECTION_ITERABLE
STRUCT_SECTION_ITERABLE_NAMED(struct_type, name, varname)
Defines a new element for an iterable section with a custom name.
The name can be used to customize how iterable section entries are sorted.
See also:
STRUCT_SECTION_ITERABLE()
STRUCT_SECTION_FOREACH_ALTERNATE(secname, struct_type, iterator)
Iterate over a specified iterable section (alternate).
Iterator for structure instances gathered by STRUCT_SECTION_ITERABLE(). The linker must
provide a _<SECNAME>_list_start symbol and a _<SECNAME>_list_end symbol to mark
the start and the end of the list of struct objects to iterate over. This is normally done using
ITERABLE_SECTION_ROM() or ITERABLE_SECTION_RAM() in the linker script.
STRUCT_SECTION_FOREACH(struct_type, iterator)
Iterate over a specified iterable section.
Iterator for structure instances gathered by STRUCT_SECTION_ITERABLE(). The linker must
provide a _<struct_type>_list_start symbol and a _<struct_type>_list_end symbol to mark
the start and the end of the list of struct objects to iterate over. This is normally done using
ITERABLE_SECTION_ROM() or ITERABLE_SECTION_RAM() in the linker script.
STRUCT_SECTION_GET(struct_type, i, dst)
Get element from section.
Parameters
• struct_type – [in] Struct type.
• i – [in] Index.
• dst – [out] Pointer to location where pointer to element is written.
STRUCT_SECTION_COUNT(struct_type, dst)
Count elements in a section.
Parameters
• struct_type – [in] Struct type
• dst – [out] Pointer to location where result is written.
3.10.1 Overview
This feature allows relocating the .text, .rodata, .data, and .bss sections of selected files and
placing them in the required memory region. The memory region and file are given to the
scripts/build/gen_relocate_app.py script in the form of a string. This script is always invoked from in-
side CMake.
This script provides a robust way to reorder the memory contents without actually having to modify the
code. In simple terms, this script does the job of __attribute__((section("name"))) for a set of
files together.
3.10.2 Details
The memory region and file are given to the scripts/build/gen_relocate_app.py script in the form of a
string.
Note: The text section is split into 2 parts in the main linker script. The first section will have some
info regarding vector tables and other debug related info. The second section will have the complete text
section. This is needed to force the required functions and data variables to the correct locations. This
is due to the behavior of the linker. The linker will only link once and hence this text section had to be
split to make room for the generated linker script.
The code_relocation.c file has code that is needed for initializing data sections, and a copy of the text
sections (if XIP). It also contains the code needed for zeroing the bss and for copying data from ROM
to the required memory type.
The procedure to invoke this feature is:
• Enable CONFIG_CODE_DATA_RELOCATION in the prj.conf file
• Inside the CMakeLists.txt file in the project, mention all the files that need relocation.
zephyr_code_relocate(FILES src/*.c LOCATION SRAM2)
Where the first argument is the file/files and the second argument is the memory where it must be
placed.
Note: function zephyr_code_relocate() can be called as many times as required. This step has to
be performed before calling find_package(Zephyr . . . ) in the application’s CMakeLists.txt.
Additional Configurations
This section shows additional configuration options that can be set in CMakeLists.txt
• If the memory is SRAM1, SRAM2, CCD, or AON, the full object file is placed in the region, for
example:
• If the memory type is appended with _DATA, _TEXT, _RODATA or _BSS, only the selected section
is placed in the required memory region, for example:
• Multiple regions can also be appended together such as: SRAM2_DATA_BSS. This will place data
and bss inside SRAM2.
• Multiple files can be passed to the FILES argument, or CMake generator expressions can be used
to relocate a comma-separated list of files
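The options above can be sketched as CMake calls in the application's CMakeLists.txt (file names here are hypothetical; SRAM2 stands in for any supported region):

```cmake
# Whole object (text, data, rodata, bss) placed in SRAM2
zephyr_code_relocate(FILES src/file1.c LOCATION SRAM2)

# Only the .data section of this file placed in SRAM2
zephyr_code_relocate(FILES src/file2.c LOCATION SRAM2_DATA)

# .data and .bss of this file placed in SRAM2
zephyr_code_relocate(FILES src/file3.c LOCATION SRAM2_DATA_BSS)
```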
NOCOPY flag
When the NOCOPY option is passed to the zephyr_code_relocate() function, the relocation code is not
generated in code_relocation.c. This flag can be used to move the content of a specific
file (or set of files) to an XIP area.
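A sketch of such a relocation, using the file and region names described in the surrounding text (the exact region names depend on the board's linker configuration):

```cmake
# Execute .text in place from external flash; no copy code is generated
zephyr_code_relocate(FILES src/xip_external_flash.c LOCATION EXTFLASH_TEXT NOCOPY)

# .data is still copied into SRAM at boot as usual
zephyr_code_relocate(FILES src/xip_external_flash.c LOCATION SRAM_DATA)
```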
This example will place the .text section of the xip_external_flash.c file to the EXTFLASH memory
region where it will be executed from (XIP). The .data will be relocated as usual into SRAM.
Relocating libraries
Libraries can be relocated using the LIBRARY argument to zephyr_code_relocate() with the library
name. For example, the following snippet will relocate kernel code to ITCM and serial drivers to SRAM2:
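A plausible form of that snippet (library names are assumptions; drivers__serial is the CMake target name Zephyr generates for the serial driver library):

```cmake
zephyr_code_relocate(LIBRARY kernel LOCATION ITCM_TEXT)
zephyr_code_relocate(LIBRARY drivers__serial LOCATION SRAM2)
```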
Samples/ Tests
OS Services
4.1 Cryptography
The crypto section contains information regarding the cryptographic primitives supported by the Zephyr
kernel. Use the information to understand the principles behind the operation of the different algorithms
and how they were implemented.
The following crypto libraries have been included:
Overview
The TinyCrypt Library provides an implementation for targeting constrained devices with a minimal set
of standard cryptography primitives, as listed below. To better serve applications targeting constrained
devices, TinyCrypt implementations differ from the standard specifications (see the Important Remarks
section for some important differences). Certain cryptographic primitives depend on other primitives, as
mentioned in the list below.
Aside from the Important Remarks section below, valuable information on the usage, security, and
technicalities of each cryptographic primitive is found in the corresponding header file.
• SHA-256:
– Type of primitive: Hash function.
– Standard Specification: NIST FIPS PUB 180-4.
– Requires: –
• HMAC-SHA256:
– Type of primitive: Message authentication code.
– Standard Specification: RFC 2104.
– Requires: SHA-256
• HMAC-PRNG:
– Type of primitive: Pseudo-random number generator.
– Standard Specification: NIST SP 800-90A.
– Requires: SHA-256 and HMAC-SHA256.
• AES-128:
– Type of primitive: Block cipher.
– Standard Specification: NIST FIPS PUB 197.
– Requires: –
Design Goals
• Minimize the code size of each cryptographic primitive. This means minimizing the size of a board-
independent implementation, as presented in TinyCrypt. Note that various applications may require
further features, optimizations with respect to other metrics, and countermeasures for particular
threats. These peculiarities would increase the code size and thus are not considered here.
• Minimize the dependencies among the cryptographic primitives. This means that it is unneces-
sary to build and allocate object code for more primitives than the ones strictly required by the
intended application. In other words, one can select and compile only the primitives required by
the application.
Important Remarks
The cryptographic implementations in TinyCrypt library have some limitations. Some of these limitations
are inherent to the cryptographic primitives themselves, while others are specific to TinyCrypt. Some of
these limitations are discussed in-depth below.
General Remarks
• TinyCrypt does not intend to be fully side-channel resistant. Side-channel attacks come in many
varieties, and many of them make particular boards vulnerable. Rather than penalizing all
library users with side-channel countermeasures that would increase the overall code size, TinyCrypt
only implements certain generic timing-attack countermeasures.
Specific Remarks
• SHA-256:
– The number of bits_hashed in the state is not checked for overflow. Note however that this
will only be a problem if you intend to hash more than 2^64 bits, which is an extremely large
window.
• HMAC:
– The HMAC verification process is assumed to be performed by the application. This compares
the computed tag with some given tag. Note that conventional memory-comparison
methods (such as the memcmp function) might be vulnerable to timing attacks; thus be sure to
use a constant-time memory comparison function (such as the compare_constant_time function
provided in lib/utils.c).
• HMAC-PRNG:
– Before using HMAC-PRNG, you must find an entropy source to produce a seed. PRNGs only
stretch the seed into a seemingly random output of arbitrary length. The security of the output
is exactly equal to the unpredictability of the seed.
– NIST SP 800-90A requires three items as seed material in the initialization step: entropy seed,
personalization and a nonce (which is not implemented). TinyCrypt requires the personal-
ization byte array and automatically creates the entropy seed using a mandatory call to the
re-seed function.
• AES-128:
– The current implementation does not support other key lengths (such as 256 bits). Note that
if you need AES-256, it doesn't sound as though your application is running in a constrained
environment. AES-256 requires keys twice the size of those for AES-128, and the key schedule is
40% larger.
• CTR mode:
– The AES-CTR mode limits the size of a data message it encrypts to 2^32 blocks. If you
need to encrypt larger data sets, your application would need to replace the key after 2^32
block encryptions.
• CBC mode:
– TinyCrypt CBC decryption assumes that the iv and the ciphertext are contiguous (as produced
by TinyCrypt CBC encryption). This allows for a very efficient decryption algorithm that would
not otherwise be possible.
• CMAC mode:
– AES128-CMAC mode of operation offers 64 bits of security against collision attacks. Note
however that an external attacker cannot generate the tags themselves without knowing the
MAC key. In this sense, to attack the collision property of AES128-CMAC, an external attacker
would need the cooperation of the legal user to produce an exponentially high number of tags
(e.g. 2^64) to finally be able to look for collisions and benefit from them. As an extra pre-
caution, the current implementation allows at most 2^48 calls to the tc_cmac_update function
before tc_cmac_setup must be called again (allowing a new key to be set), as suggested in Appendix B of
SP 800-38B.
• CCM mode:
– There are a few tradeoffs in the selection of the parameters of CCM mode. In particular, there
is a tradeoff between the maximum number of invocations of CCM under a given key and the
maximum payload length for those invocations. Both are related to the parameter 'q'
of CCM mode. The maximum number of invocations of CCM under a given key is determined
by the nonce size, which is 15-q bytes. The maximum payload length for those invocations is
defined as 2^(8q) bytes.
To achieve minimal code size, the TinyCrypt CCM implementation fixes q = 2, which is quite a
reasonable choice for constrained applications. The implications of this choice are:
The nonce size is: 13 bytes.
The maximum payload length is: 2^16 bytes = 65,536 bytes (64 KiB).
The mac size parameter is an important parameter to estimate the security against collision
attacks (that aim at finding different messages that produce the same authentication tag).
TinyCrypt CCM implementation accepts any even integer between 4 and 16, as suggested in
SP 800-38C.
– TinyCrypt CCM implementation accepts associated data of any length between 0 and (2^16
- 2^8) = 65280 bytes.
– TinyCrypt CCM implementation accepts:
* Both non-empty payload and associated data (it encrypts and authenticates the payload
and only authenticates the associated data);
* Non-empty payload and empty associated data (it encrypts and authenticates the pay-
load);
* Non-empty associated data and empty payload (it degenerates to an authentication
mode on the associated data).
Examples of Applications
It is possible to do useful cryptography with only the given small set of primitives. With this list of
primitives it becomes feasible to support a range of cryptography usages:
• Measurement of code, data structures, and other digital artifacts (SHA256);
• Generate commitments (SHA256);
• Construct keys (HMAC-SHA256);
• Extract entropy from strings containing some randomness (HMAC-SHA256);
• Construct random mappings (HMAC-SHA256);
• Construct nonces and challenges (HMAC-PRNG);
Test Vectors
The library provides a test program for each cryptographic primitive (see ‘test’ folder). Besides illustrating
how to use the primitives, these tests evaluate the correctness of the implementations by checking the
results against well-known publicly validated test vectors.
For the case of the HMAC-PRNG, due to the necessity of performing an extensive battery of tests to produce
meaningful conclusions, we suggest that users evaluate the unpredictability of the implementation by
using the NIST Statistical Test Suite (see References).
For the case of the EC-DH and EC-DSA implementations, most of the test vectors were obtained from the
site of the NIST Cryptographic Algorithm Validation Program (CAVP), see References.
References
Random Number Generation
The random API subsystem provides random number generation APIs in both cryptographically and non-
cryptographically secure instances. Which random API to use is based on the cryptographic requirements
of the random number. The non-cryptographic APIs return random values much faster when non-
cryptographic values are needed.
The cryptographically secure random functions shall be compliant with the FIPS 140-2 recommended
algorithms. Hardware-based random number generators (RNG) can be used on platforms with appro-
priate hardware support. Platforms without hardware RNG support shall use the CTR-DRBG algorithm.
The algorithm can be provided by TinyCrypt or mbedTLS depending on your application's performance
and resource requirements.
Note: The CTR-DRBG generator needs an entropy source to establish and maintain the
cryptographic security of the PRNG.
Kconfig Options
choice RNG_GENERATOR_CHOICE
default XOSHIRO_RANDOM_GENERATOR
endchoice
choice CSPRNG_GENERATOR_CHOICE
default CTR_DRBG_CSPRNG_GENERATOR
endchoice
API Reference
group random_api
Random Function APIs.
Functions
uint32_t sys_rand32_get(void)
Return a 32-bit random value that should pass general randomness tests.
Note: The random value returned is not a cryptographically secure random number value.
Returns
32-bit random value.
void sys_rand_get(void *dst, size_t len)
Fill the destination buffer with random data values that should pass general randomness tests.
Note: The random values returned are not considered cryptographically secure random
number values.
Parameters
• dst – [out] destination buffer to fill with random data.
• len – size of the destination buffer.
int sys_csrand_get(void *dst, size_t len)
Fill the destination buffer with cryptographically secure random data values.
Note: If the random values requested do not need to be cryptographically secure then use
sys_rand_get() instead.
Parameters
• dst – [out] destination buffer to fill.
• len – size of the destination buffer.
Returns
0 if success, -EIO if entropy reseed error
Overview
API Reference
group crypto
Crypto APIs.
Defines
CAP_OPAQUE_KEY_HNDL
CAP_RAW_KEY
CAP_KEY_LOADING_API
CAP_INPLACE_OPS
Whether the output is placed in separate buffer or not
CAP_SEPARATE_IO_BUFS
CAP_SYNC_OPS
These denote whether the output (completion of a cipher_xxx_op) is conveyed by the op function
returning, or by an async notification
CAP_ASYNC_OPS
CAP_AUTONONCE
Whether the hardware/driver supports autononce feature
CAP_NO_IV_PREFIX
Don’t prefix IV to cipher blocks
Functions
struct crypto_driver_api
#include <crypto.h> Crypto driver API definition.
Ciphers API
group crypto_cipher
Crypto Cipher APIs.
Typedefs
typedef int (*cbc_op_t)(struct cipher_ctx *ctx, struct cipher_pkt *pkt, uint8_t *iv)
typedef int (*ctr_op_t)(struct cipher_ctx *ctx, struct cipher_pkt *pkt, uint8_t *ctr)
typedef int (*ccm_op_t)(struct cipher_ctx *ctx, struct cipher_aead_pkt *pkt, uint8_t *nonce)
typedef int (*gcm_op_t)(struct cipher_ctx *ctx, struct cipher_aead_pkt *pkt, uint8_t *nonce)
Enums
enum cipher_algo
Cipher Algorithm
Values:
enumerator CRYPTO_CIPHER_ALGO_AES = 1
enum cipher_op
Cipher Operation
Values:
enumerator CRYPTO_CIPHER_OP_DECRYPT = 0
enumerator CRYPTO_CIPHER_OP_ENCRYPT = 1
enum cipher_mode
Possible cipher mode options.
More to be added as required.
Values:
enumerator CRYPTO_CIPHER_MODE_ECB = 1
enumerator CRYPTO_CIPHER_MODE_CBC = 2
enumerator CRYPTO_CIPHER_MODE_CTR = 3
enumerator CRYPTO_CIPHER_MODE_CCM = 4
enumerator CRYPTO_CIPHER_MODE_GCM = 5
Functions
static inline int cipher_begin_session(const struct device *dev, struct cipher_ctx *ctx, enum cipher_algo algo, enum cipher_mode mode, enum cipher_op optype)
Setup a crypto session.
Initializes one time parameters, like the session key, algorithm and cipher mode which may
remain constant for all operations in the session. The state may be cached in hardware and/or
driver data state variables.
Parameters
• dev – Pointer to the device structure for the driver instance.
• ctx – Pointer to the context structure. Various one time parameters like key,
keylength, etc. are supplied via this structure. The structure documentation
specifies which fields are to be populated by the app before making this call.
• algo – The crypto algorithm to be used in this session, e.g. AES
• mode – The cipher mode to be used in this session, e.g. CBC, CTR
• optype – Whether we should encrypt or decrypt in this session
Returns
0 on success, negative errno code on fail.
static inline int cipher_free_session(const struct device *dev, struct cipher_ctx *ctx)
Cleanup a crypto session.
Clears the hardware and/or driver state of a previous session.
Parameters
• dev – Pointer to the device structure for the driver instance.
• ctx – Pointer to the crypto context structure of the session to be freed.
Returns
0 on success, negative errno code on fail.
static inline int cipher_callback_set(const struct device *dev, cipher_completion_cb cb)
Registers an async crypto op completion callback with the driver.
The application can register an async crypto op completion callback handler to be invoked by
the driver, on completion of a prior request submitted via cipher_do_op(). Based on crypto
device hardware semantics, this is likely to be invoked from an ISR context.
Parameters
• dev – Pointer to the device structure for the driver instance.
• cb – Pointer to application callback to be called by the driver.
Returns
0 on success, -ENOTSUP if the driver does not support async op, negative errno
code on other error.
static inline int cipher_block_op(struct cipher_ctx *ctx, struct cipher_pkt *pkt)
Perform single-block crypto operation (ECB cipher mode). This should not be overloaded to
operate on multiple blocks for security reasons.
Parameters
• ctx – Pointer to the crypto context of this op.
• pkt – Structure holding the input/output buffer pointers.
Returns
0 on success, negative errno code on fail.
static inline int cipher_cbc_op(struct cipher_ctx *ctx, struct cipher_pkt *pkt, uint8_t *iv)
Perform Cipher Block Chaining (CBC) crypto operation.
Parameters
• ctx – Pointer to the crypto context of this op.
• pkt – Structure holding the input/output buffer pointers.
• iv – Initialization Vector (IV) for the operation. Same IV value should not be
reused across multiple operations (within a session context) for security.
Returns
0 on success, negative errno code on fail.
static inline int cipher_ctr_op(struct cipher_ctx *ctx, struct cipher_pkt *pkt, uint8_t *iv)
Perform Counter (CTR) mode crypto operation.
Parameters
• ctx – Pointer to the crypto context of this op.
• pkt – Structure holding the input/output buffer pointers.
• iv – Initialization Vector (IV) for the operation. We use a split counter formed
by appending IV and ctr. Consequently ivlen = keylen - ctrlen. ‘ctrlen’ is spec-
ified during session setup through the ‘ctx.mode_params.ctr_params.ctr_len’
parameter. IV should not be reused across multiple operations (within a ses-
sion context) for security. The non-IV part of the split counter is transparent to
the caller and is fully managed by the crypto provider.
Returns
0 on success, negative errno code on fail.
static inline int cipher_ccm_op(struct cipher_ctx *ctx, struct cipher_aead_pkt *pkt, uint8_t *nonce)
Perform Counter with CBC-MAC (CCM) mode crypto operation.
Parameters
• ctx – Pointer to the crypto context of this op.
• pkt – Structure holding the input/output, Associated Data (AD) and auth tag
buffer pointers.
• nonce – Nonce for the operation. Same nonce value should not be reused
across multiple operations (within a session context) for security.
Returns
0 on success, negative errno code on fail.
static inline int cipher_gcm_op(struct cipher_ctx *ctx, struct cipher_aead_pkt *pkt, uint8_t *nonce)
Perform Galois/Counter Mode (GCM) crypto operation.
Parameters
• ctx – Pointer to the crypto context of this op.
• pkt – Structure holding the input/output, Associated Data (AD) and auth tag
buffer pointers.
• nonce – Nonce for the operation. Same nonce value should not be reused
across multiple operations (within a session context) for security.
Returns
0 on success, negative errno code on fail.
struct cipher_ops
#include <cipher.h>
struct ccm_params
#include <cipher.h>
struct ctr_params
#include <cipher.h>
struct gcm_params
#include <cipher.h>
struct cipher_ctx
#include <cipher.h> Structure encoding session parameters.
Refer to comments for individual fields to know the contract in terms of who fills what and
when w.r.t. the begin_session() call.
Public Members
void *drv_sessn_state
If the driver supports multiple simultaneous crypto sessions, this will identify the spe-
cific driver state this crypto session relates to. Since dynamic memory allocation is not
possible, it is suggested that at build time drivers allocate space for the maximum number of
simultaneous sessions they intend to support. To be populated by the driver on return from
begin_session().
void *app_sessn_state
Place for the user app to put info relevant for resuming when the completion callback
happens for async ops. Totally managed by the app.
uint16_t keylen
Cryptographic key length in bytes. To be populated by the app before calling
begin_session()
uint16_t flags
How certain fields are to be interpreted for this session. (A bitmask of CAP_* below.) To
be populated by the app before calling begin_session(). An app can obtain the capability
flags supported by a hw/driver by calling crypto_query_hwcaps().
struct cipher_pkt
#include <cipher.h> Structure encoding IO parameters of one cryptographic operation like
encrypt/decrypt.
The fields which have not been explicitly called out have to be filled in by the app before making
the cipher_xxx_op() call.
Public Members
uint8_t *in_buf
Start address of input buffer
int in_len
Bytes to be operated upon
uint8_t *out_buf
Start of the output buffer, to be allocated by the application. Can be NULL for in-place
ops. To be populated with contents by the driver on return from op / async callback.
int out_buf_max
Size of the out_buf area allocated by the application. Drivers should not write past the
size of output buffer.
int out_len
To be populated by driver on return from cipher_xxx_op() and holds the size of the actual
result.
struct cipher_aead_pkt
#include <cipher.h> Structure encoding IO parameters in AEAD (Authenticated Encryption
with Associated Data) scenario like in CCM.
App has to furnish valid contents prior to making cipher_ccm_op() call.
Public Members
uint8_t *ad
Start address for Associated Data. This has to be supplied by app.
uint32_t ad_len
Size of Associated Data. This has to be supplied by the app.
uint8_t *tag
Start address for the auth hash. For an encryption op this will be populated by the driver
when it returns from cipher_ccm_op call. For a decryption op this has to be supplied by
the app.
4.2 Debugging
The thread analyzer module enables all the Zephyr options required to track thread information, e.g.
thread stack size usage and other runtime thread statistics.
The analysis is performed on demand when the application calls thread_analyzer_run() or
thread_analyzer_print() .
For example, to build the synchronization sample with the Thread Analyzer enabled, do the following:
west build -b qemu_x86 samples/synchronization/ -- -DCONFIG_QEMU_ICOUNT=n \
    -DCONFIG_THREAD_ANALYZER=y -DCONFIG_THREAD_ANALYZER_USE_PRINTK=y \
    -DCONFIG_THREAD_ANALYZER_AUTO=y -DCONFIG_THREAD_ANALYZER_AUTO_INTERVAL=5
When you run the generated application in QEMU, you will get the additional information from the Thread
Analyzer:
thread_a: Hello World from cpu 0 on qemu_x86!
Thread analyze:
thread_b : STACK: unused 740 usage 284 / 1024 (27 %); CPU: 0 %
thread_analyzer : STACK: unused 8 usage 504 / 512 (98 %); CPU: 0 %
thread_a : STACK: unused 648 usage 376 / 1024 (36 %); CPU: 98 %
idle : STACK: unused 204 usage 116 / 320 (36 %); CPU: 0 %
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
Thread analyze:
thread_b : STACK: unused 648 usage 376 / 1024 (36 %); CPU: 7 %
thread_analyzer : STACK: unused 8 usage 504 / 512 (98 %); CPU: 0 %
thread_a : STACK: unused 648 usage 376 / 1024 (36 %); CPU: 9 %
idle : STACK: unused 204 usage 116 / 320 (36 %); CPU: 82 %
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
thread_b: Hello World from cpu 0 on qemu_x86!
thread_a: Hello World from cpu 0 on qemu_x86!
Thread analyze:
thread_b : STACK: unused 648 usage 376 / 1024 (36 %); CPU: 7 %
thread_analyzer : STACK: unused 8 usage 504 / 512 (98 %); CPU: 0 %
thread_a : STACK: unused 648 usage 376 / 1024 (36 %); CPU: 8 %
Configuration
API documentation
group thread_analyzer
Module for analyzing threads.
This module implements functions and the configuration that simplifies thread analysis.
Typedefs
Functions
void thread_analyzer_print(void)
Run the thread analyzer and print stack size statistics.
This function runs the thread analyzer and prints the output in standard form.
struct thread_analyzer_info
#include <thread_analyzer.h>
Public Members
size_t stack_size
The total size of the stack
size_t stack_used
Stack size in use
The core dump module enables dumping the CPU registers and memory content for offline debugging.
This module is called when a fatal error is encountered and prints or stores data according to which
backends are enabled.
Configuration
Usage
When the core dump module is enabled, during a fatal error, CPU registers and memory content are
printed or stored according to which backends are enabled. This core dump data can be fed into a custom-
made GDB server as a remote target for GDB (and other GDB-compatible debuggers). CPU registers,
memory content and stack can be examined in the debugger.
This usually involves the following steps:
1. Get the core dump log from the device depending on enabled backends. For example, if the log
module backend is used, get the log output from the log module backend.
2. Convert the core dump log into a binary format that can be parsed by the GDB server. For example,
scripts/coredump/coredump_serial_log_parser.py can be used to convert the serial console log into
a binary file.
3. Start the custom GDB server using the script scripts/coredump/coredump_gdbserver.py with the
core dump binary log file, and the Zephyr ELF file as parameters.
4. Start the debugger corresponding to the target architecture.
Note: Developers for Intel ADSP CAVS 15-25 platforms using ZEPHYR_TOOLCHAIN_VARIANT=zephyr
should use the debugger in the xtensa-intel_apl_adsp toolchain of the SDK.
Example
This example uses the log module backend tied to the serial console. This was done on qemu_x86
where a null pointer was dereferenced.
This is the core dump log from the serial console, and is stored in coredump.log:
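The log contents themselves are omitted here. Converting that log and starting the GDB server (steps 1 and 2 of the numbered example) typically look like the following; the file names coredump.log and coredump.bin are assumptions based on the description above:

```shell
# 1. Convert the serial console log into a binary core dump file:
./scripts/coredump/coredump_serial_log_parser.py coredump.log coredump.bin

# 2. Start the custom GDB server with the Zephyr ELF and the binary log:
./scripts/coredump/coredump_gdbserver.py build/zephyr/zephyr.elf coredump.bin
```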
3. Start GDB:
<path to SDK>/x86_64-zephyr-elf/bin/x86_64-zephyr-elf-gdb build/zephyr/zephyr.elf
(gdb) bt
File Format
The core dump binary file consists of one file header, one architecture-specific block, and multiple
memory blocks. All numbers in the headers below are little endian.
Architecture-specific Block The architecture-specific block contains the byte stream of data specific to
the target architecture (e.g. CPU registers)
Memory Block The memory block contains the start and end addresses and the data within the memory
region.
The architecture-specific block is target specific and requires a new dumping routine and parser for each
new target. To add a new target, the following needs to be done:
1. Add a new target code to the enum coredump_tgt_code in include/zephyr/debug/coredump.h.
2. Implement arch_coredump_tgt_code_get() simply to return the newly introduced target code.
3. Implement arch_coredump_info_dump() to construct a target architecture block and call
coredump_buffer_output() to output the block to core dump backend.
4. Add a parser to the core dump GDB stub scripts under scripts/coredump/gdbstubs/
1. Extend the gdbstubs.gdbstub.GdbStub class.
2. During __init__, store the GDB signal corresponding to the exception reason in
self.gdb_signal.
3. Parse the architecture-specific block from self.logfile.get_arch_data(). This needs to
match the format as implemented in step 3 (inside arch_coredump_info_dump() ).
4. Implement the abstract method handle_register_group_read_packet so that it returns the
register group as GDB expects. Refer to GDB's code and documentation on what it is expect-
ing for the new target.
5. Optionally implement handle_register_single_read_packet for registers not covered in
the g packet.
5. Extend get_gdbstub() in scripts/coredump/gdbstubs/__init__.py to return the newly
implemented GDB stub.
API documentation
group coredump_apis
Coredump APIs.
Functions
void coredump(unsigned int reason, const z_arch_esf_t *esf, struct k_thread *thread)
Perform coredump.
Normally, this is called inside z_fatal_error() to generate coredump when a fatal error is
encountered. This can also be called on demand whenever a coredump is desired.
Parameters
• reason – Reason for the fatal error
• esf – Exception context
• thread – Thread information to dump
void coredump_memory_dump(uintptr_t start_addr, uintptr_t end_addr)
Dump memory region.
Parameters
• start_addr – Start address of memory region to be dumped
• end_addr – End address of memory region to be dumped
void coredump_buffer_output(uint8_t *buf, size_t buflen)
Output the buffer via coredump.
This outputs the buffer of byte array to the coredump backend. For example, this can be called
to output the coredump section containing registers, or a section for memory dump.
Parameters
• buf – Buffer to be sent to coredump output
• buflen – Buffer length
int coredump_query(enum coredump_query_id query_id, void *arg)
Perform query on coredump subsystem.
Query the coredump subsystem for information, for example, if there is an error.
Parameters
• query_id – [in] Query ID
• arg – [inout] Pointer to argument for exchanging information
Returns
Depends on the query
int coredump_cmd(enum coredump_cmd_id cmd_id, void *arg)
Perform command on coredump subsystem.
Perform certain commands on the coredump subsystem, for example, outputting the stored coredump
via logging.
Parameters
• cmd_id – [in] Command ID
• arg – [inout] Pointer to argument for exchanging information
Returns
Depends on the command
group arch-coredump
Functions
• Overview
• Features
• Enabling GDB Stub
– Using Serial Backend
• Debugging
– Using Serial Backend
• Example
Overview
The gdbstub feature provides an implementation of the GDB Remote Serial Protocol (RSP) that allows
you to remotely debug Zephyr using GDB.
The protocol supports different connection types: serial, UDP/IP and TCP/IP. Zephyr currently supports
only serial device communication.
The GDB program acts as the client while Zephyr acts as the server. When this feature is enabled, Zephyr
stops its execution after gdb_init() starts the gdbstub service and waits for a GDB connection. Once a
connection is established, it is possible to synchronously interact with Zephyr. Note that it is currently not
possible to asynchronously send commands to the target.
Features
Using Serial Backend The serial backend for GDB stub can be enabled with the
CONFIG_GDBSTUB_SERIAL_BACKEND option.
Since the serial backend utilizes UART devices to send and receive GDB commands:
• If there are spare UART devices on the board, set the zephyr,gdbstub-uart property of the chosen
node to the spare UART device so that printk() and log messages are not printed to the
same UART device used for GDB.
• For boards with only one UART device, printk() and logging must be disabled if they are also
using the same UART device for output. GDB-related messages may interleave with log messages,
which may have unintended consequences. Usually this can be done by disabling CONFIG_PRINTK
and CONFIG_LOG.
Debugging
For example,
Example
Note that QEMU is set up to redirect the serial device used for GDB stub in the Zephyr image to a
networking port. Hence the connection to localhost, port 5678.
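The GDB command that produces the response below is the standard remote-target connection, with the port matching the QEMU redirection just described:

```
(gdb) target remote :5678
```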
Response from GDB:
Remote debugging using :5678
arch_gdb_init () at <ZEPHYR_BASE>/arch/x86/core/ia32/gdbstub.c:232
232 }
GDB also shows where the code execution is stopped. In this case, it is at arch/x86/core/
ia32/gdbstub.c, line 232.
3. Use command bt or backtrace to show the backtrace of stack frames.
(gdb) bt
#0 arch_gdb_init () at <ZEPHYR_BASE>/arch/x86/core/ia32/gdbstub.c:232
#1 0x00105068 in gdb_init (arg=0x0) at <ZEPHYR_BASE>/subsys/debug/gdbstub.
˓→c:833
4. Use command list to show the source code and surroundings where code execution is
stopped.
(gdb) list
227 }
228
229 void arch_gdb_init(void)
230 {
231 __asm__ volatile ("int3");
232 }
233
234 /* Hook current IDT. */
235 _EXCEPTION_CONNECT_NOCODE(z_gdb_debug_isr, IV_DEBUG, 3);
236 _EXCEPTION_CONNECT_NOCODE(z_gdb_break_isr, IV_BREAKPOINT, 3);
5. Use command s or step to step through the program until it reaches a different source line. Now
it has finished executing arch_gdb_init() and is continuing in gdb_init().
(gdb) s
gdb_init (arg=0x0) at <ZEPHYR_BASE>/subsys/debug/gdbstub.c:834
834 return 0;
(gdb) list
829 LOG_ERR("Could not initialize gdbstub backend.");
830 return -1;
831 }
6. Use command br or break to set up a breakpoint. This example sets up a breakpoint at main(),
and lets code execution continue without any intervention using command c (or continue).
(gdb) continue
Continuing.
Once code execution reaches main(), execution will be stopped and GDB prompt returns.
32 ret = test();
(gdb) list
27
28 int main(void)
29 {
30 int ret;
31
32 ret = test();
33 printk("%d\n", ret);
34 }
35
36 K_THREAD_DEFINE(thread, STACKSIZE, thread_entry, NULL, NULL, NULL,
7. Use command p or print to examine the value of a variable:
(gdb) p ret
$1 = 0x11318c
Since ret has not been assigned a value yet, what it contains is simply a random value.
8. If step (s or step) is used here, it will continue execution until printk() is reached, thus
skipping the interior of test(). To examine code execution inside test(), a breakpoint can
be set for test(), or simply use si (or stepi) to execute one machine instruction, which
has the side effect of stepping into the function.
(gdb) si
test () at <ZEPHYR_BASE>/samples/subsys/debug/gdbstub/src/main.c:13
13 {
(gdb) list
8 #include <zephyr/sys/printk.h>
9
10 #define STACKSIZE 512
11
12 static int test(void)
9. Here, step can be used to go through all code inside test() until it returns. Or the command
finish can be used to continue execution without intervention until the function returns.
(gdb) finish
Run till exit from #0 test () at <ZEPHYR_BASE>/samples/subsys/debug/gdbstub/
˓→src/main.c:13
32 ret = test();
Value returned is $2 = 0x1e
(gdb) p ret
$3 = 0x11318c
10. Use command s to complete the assignment to ret, then print its value again:
(gdb) s
33 printk("%d\n", ret);
(gdb) p ret
$4 = 0x1e
11. If continue is issued here, code execution will continue indefinitely, as there are no further breakpoints to stop it. Breaking into execution via Ctrl-C does not currently work, as the GDB stub does not support this functionality yet.
Monitor mode debugging is a Cortex-M feature that provides a non-halting approach to debugging. With it, high-priority interrupts keep executing even while the core is waiting on a breakpoint. This makes it possible to debug time-sensitive software that would otherwise crash when the core halts (e.g. applications that need to keep communication links alive).
Zephyr provides support for enabling and configuring the Debug Monitor exception. It also contains a ready-made implementation of the interrupt handler, which can be used with SEGGER J-Link debuggers.
Configuration
Usage
When monitor mode debugging is enabled, hitting a breakpoint will not halt the processor, but will instead generate an interrupt whose ISR is implemented under the z_arm_debug_monitor symbol. The CONFIG_CORTEX_M_DEBUG_MONITOR_HOOK option configures this interrupt to have the lowest available priority, which allows other interrupts to execute while the processor spins on a breakpoint.
Using another custom ISR To provide a custom debug monitor interrupt, override the z_arm_debug_monitor symbol. Additionally, manual configuration of some registers is required (see the debug monitor sample).
4.3.1 MCUmgr
Overview
The management subsystem allows remote management of Zephyr-enabled devices. The following man-
agement operations are available:
• Image management
• File System management
• OS management
• Shell management
• Statistic management
• Zephyr-basic management
over the following transports:
• BLE (Bluetooth Low Energy)
• Serial (UART)
• UDP over IP
The management subsystem is based on the Simple Management Protocol (SMP) provided by MCUmgr,
an open source project that provides a management subsystem that is portable across multiple real-time
operating systems.
The management subsystem is located in subsys/mgmt/ inside of the Zephyr tree.
Additionally, there is a sample that provides management functionality over BLE and serial.
Command-line Tool
MCUmgr provides a command-line tool, mcumgr, for managing remote devices. The tool is written in the
Go programming language.
Note: This tool is provided for evaluation use only and is not recommended for use in a production environment. It has known issues and does not respect the MCUmgr protocol properly; for example, when an error is received, instead of aborting it will, in some circumstances, sit in an endless loop resending the same command. A universal replacement for this tool is currently in development; once released, support for the Go tool will be dropped entirely.
go get github.com/apache/mynewt-mcumgr-cli/mcumgr
go install github.com/apache/mynewt-mcumgr-cli/mcumgr@latest
There are two command-line options responsible for setting and configuring the transport layer to use when communicating with a managed device:
• --conntype is used to choose the transport used, and
• --connstring is used to pass a comma separated list of options in the key=value format, where
each valid key depends on the particular conntype.
Valid transports for --conntype are serial, ble and udp. Each transport expects a different set of
key/value options:
serial
--connstring accepts the following key values:
• dev — the device name on the OS mcumgr is running on (e.g. /dev/ttyUSB0, /dev/tty.usbserial, COM1).
• baud — the communication speed; must match the baud rate of the server.
• mtu — the Maximum Transmission Unit, i.e. the maximum protocol packet size.
ble
--connstring accepts the following key values:
udp
--connstring takes the form [addr]:port where:
• addr — can be a DNS name (if it can be resolved to the device IP), an IPv4 address (the app must be built with CONFIG_MCUMGR_TRANSPORT_UDP_IPV4), or an IPv6 address (the app must be built with CONFIG_MCUMGR_TRANSPORT_UDP_IPV6).
• port — any valid UDP port.
The transport configuration can be managed with the conn sub-command and later used with the --conn (or -c) parameter to skip typing both --conntype and --connstring. For example, a saved config named acm0 for a serial device that would otherwise require typing mcumgr --conntype serial --connstring dev=/dev/ttyACM0,baud=115200,mtu=512 can afterwards be used as:
mcumgr -c acm0
General options
Some options work for every mcumgr command and can be helpful to debug and fix communication issues; among them, the following deserve special mention:
• -l <log-level> — configures the log level, one of critical, error, warn, info or debug, from least to most verbose. When there are communication issues, -lDEBUG might be useful to dump the packets for later inspection.
• -t <timeout> — changes the timeout waiting for a response from the default of 10s to the given value. Some commands might take a long time to process, e.g. the erase before an image upload, and might need a larger timeout.
• -r <tries> — changes the number of retries on timeout from the default of 1 to the given value.
List of Commands
Not all commands defined by mcumgr (and the SMP protocol) are currently supported on Zephyr. The supported ones are described in the following table:
Tip: Running mcumgr with no parameters, or -h will display the list of commands.
• echo — Send data to a device and display the echoed-back data. This command is part of the OS group, which must be enabled by setting CONFIG_MCUMGR_GRP_OS. The echo command itself can be enabled by setting CONFIG_MCUMGR_GRP_OS_ECHO.
• fs — Access files on a device. More info in Filesystem Management.
• image — Manage images on a device. More info in Image Management.
• reset — Perform a soft reset of a device. This command is part of the OS group, which must be enabled by setting CONFIG_MCUMGR_GRP_OS. The reset command itself is always enabled; the time taken for a reset to happen can be set with CONFIG_MCUMGR_GRP_OS_RESET_MS (in ms).
• shell — Execute a command in the remote shell. This option is disabled by default and can be enabled with CONFIG_MCUMGR_GRP_SHELL=y. To know more about the shell in Zephyr check Shell.
• stat — Read statistics from a device. More info in Statistics Management.
• taskstat — Read task statistics from a device. This command is part of the OS group, which must be enabled by setting CONFIG_MCUMGR_GRP_OS. The taskstat command itself can be enabled by setting CONFIG_MCUMGR_GRP_OS_TASKSTAT. CONFIG_THREAD_MONITOR also needs to be enabled, otherwise a -8 (MGMT_ERR_ENOTSUP) will be returned.
Tip: taskstat has a few options that might require tweaking. CONFIG_THREAD_NAME must be set to display the task names; otherwise, the priority is displayed. Since the taskstat packets are large, increasing the CONFIG_MCUMGR_TRANSPORT_NETBUF_SIZE option might be needed.
Warning: To display the correct stack size in the taskstat command, the
CONFIG_THREAD_STACK_INFO option must be set. To display the correct stack usage in the
taskstat command, both CONFIG_THREAD_STACK_INFO and CONFIG_INIT_STACKS options must be
set.
On boards with a J-Link OB that supports both CDC and MSC (virtual Mass Storage Device, also known as drag-and-drop), the MSD functionality can prevent MCUmgr commands over the CDC UART port from working, because of how USB endpoints are configured in the J-Link firmware (for example on the Nordic nrf52840dk_nrf52840 board), which limits the maximum packet size. The problem is most likely to occur when using image management commands to update firmware. It can be resolved by disabling the MSD functionality on the J-Link device; follow the instructions in nordic_segger_msd to disable MSD support.
Image Management
The image management provided by mcumgr is based on the image format defined by MCUboot. For
more details on the internals see MCUboot design and Signing Binaries.
To list available images in a device:
Where image is the number of the image pair in a multi-image system, and slot is the number of the slot where the image is stored: 0 for primary and 1 for secondary. An image being active and confirmed means it will run again on the next reset. Also relevant is the hash, which is used by other commands to refer to this specific image when performing operations.
An image can be manually erased using:
The behavior of erase is defined by the server (MCUmgr in the device). The current implementation is
limited to erasing the image in the secondary partition.
To upload a new image:
• -n: This option allows uploading a new image to a specific set of images in a multi-image sys-
tem, and is currently only supported by MCUboot when the CONFIG_MCUBOOT_SERIAL option is
enabled.
• -e: This option avoids performing a full erase of the partition before starting a new upload.
Tip: The -e option should always be passed in because the upload command already checks if an erase
is required, respecting the CONFIG_IMG_ERASE_PROGRESSIVELY setting.
Tip: If the upload command times out while waiting for a response from the device, -t might be used
to increase the wait time to something larger than the default of 10s. See general_options.
Warning: mcumgr does not understand .hex files; when uploading a new image, always use the .bin file.
Tip: If the window size option is set to a value lower than the default, for example -w 1, fewer chunks are transmitted in the window, resulting in a lower risk of errors. Conversely, setting a value higher than 5 increases the risk of errors and may impact performance.
After an image upload is finished, a new image list would now have an output like this:
This command marks a test upgrade, which means that after the next reboot the bootloader will execute the upgrade and jump into the new image. If no other image operations are executed on the newly running image, it will revert to the image that was previously running on the device on the subsequent reset. When a test is requested, flags will be updated with pending to inform that a new image will be run after a reset:
$ mcumgr -c acm0 image test e8cf0dcef3ec8addee07e8c4d5dc89e64ba3fae46a2c5267fc4efbea4ca0e9f4
Images:
image=0 slot=0
version: 1.0.0
bootable: true
flags: active confirmed
hash: 86dca73a3439112b310b5e033d811ec2df728d2264265f2046fced5a9ed00cc7
image=0 slot=1
version: 1.1.0
bootable: true
flags: pending
hash: e8cf0dcef3ec8addee07e8c4d5dc89e64ba3fae46a2c5267fc4efbea4ca0e9f4
Split status: N/A (0)
Tip: It’s important to mention that an upgrade only ever happens if the image is valid. The first thing MCUboot does when an upgrade is requested is to validate the image, using the SHA-256 and/or the signature (depending on the configuration), so before uploading an image it is worth making sure it is valid for the target configuration.
The confirmed flag in the secondary slot indicates that after the next reset a revert upgrade will be performed, switching back to the original layout.
The confirm command confirms that an image is OK so that no revert happens (an empty hash is required):
The confirm command can also be passed a hash so that, instead of doing a test/revert procedure, the image in the secondary partition is directly upgraded to, e.g.:
Tip: The whole test/revert cycle does not need to be done using only the mcumgr command-line
tool. A better alternative is to perform a test and allow the new running image to self-confirm after any
checks by calling boot_write_img_confirmed() .
Tip: Building with CONFIG_MCUMGR_GRP_IMG_VERBOSE_ERR enables better error messages when failures
happen (but increases the application size).
Statistics Management
Statistics are used for troubleshooting, maintenance, and usage monitoring; they consist basically of user-defined counters which are tightly connected to mcumgr and can be used to track any information for easy retrieval. The available sub-commands are:
Statistics are organized in sections (also called groups), and each section can be individually queried.
Defining new statistics sections is done by using macros available under zephyr/stats/stats.h. Each
section consists of multiple variables (or counters), all with the same size (16, 32 or 64 bits).
To create a new section my_stats:
STATS_SECT_START(my_stats)
STATS_SECT_ENTRY(my_stat_counter1)
STATS_SECT_ENTRY(my_stat_counter2)
STATS_SECT_ENTRY(my_stat_counter3)
STATS_SECT_END;
STATS_SECT_DECL(my_stats) my_stats;
Each entry can be declared with STATS_SECT_ENTRY (or the equivalent STATS_SECT_ENTRY32),
STATS_SECT_ENTRY16 or STATS_SECT_ENTRY64. All statistics in a section must be declared with the
same size.
The statistics counters can either have names or not, depending on the setting of the
CONFIG_STATS_NAMES option. Using names requires an extra declaration step:
STATS_NAME_START(my_stats)
STATS_NAME(my_stats, my_stat_counter1)
STATS_NAME(my_stats, my_stat_counter2)
STATS_NAME(my_stats, my_stat_counter3)
STATS_NAME_END(my_stats);
Tip: Disabling CONFIG_STATS_NAMES will free resources. When this option is disabled the STATS_NAME*
macros output nothing, so adding them in the code does not increase the binary size.
Tip: CONFIG_MCUMGR_GRP_STAT_MAX_NAME_LEN sets the maximum length of a section name that can be accepted as a parameter for showing the section data, and might require tweaking for long section names.
The final step in using a statistics section is to initialize and register it:
In the running code a statistics counter can be incremented by 1 using STATS_INC, by N using STATS_INCN
or reset with STATS_CLEAR.
Let’s suppose we want to increment those counters by 1, 2 and 3 every second. To get a list of stats:
Filesystem Management
The filesystem module is disabled by default due to security concerns: because of the lack of access control, by default every file in the FS will be accessible, including secrets. To enable it, CONFIG_MCUMGR_GRP_FS must be set (y). Once enabled, the following sub-commands can be used:
Using the fs command requires CONFIG_FILE_SYSTEM to be enabled, and some particular filesystem to be enabled and properly mounted by the running application. For littlefs this would mean enabling CONFIG_FILE_SYSTEM_LITTLEFS, defining a storage partition in the Flash map and mounting the filesystem at startup (fs_mount()).
Uploading a new file to a littlefs storage, mounted under /lfs, can be done with:
Warning: The commands might exhaust the system workqueue if its stack is not large enough, so increasing CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE might be required for correct behavior.
The size of the stack-allocated buffer used to store blocks while transferring a file can be adjusted with CONFIG_MCUMGR_GRP_FS_DL_CHUNK_SIZE; this allows saving RAM resources.
Tip: CONFIG_MCUMGR_GRP_FS_PATH_LEN sets the maximum path length accepted for a file name. It might require tweaking for longer file names.
Note: To add security to the filesystem management group, callbacks for MCUmgr hooks can be registered by a user application; they are invoked when the upload/download functions are run, which allows the application to control whether access to a file is allowed or denied. See the MCUmgr Callbacks section for details.
Bootloader Integration
The Device Firmware Upgrade subsystem integrates the management subsystem with the bootloader,
providing the ability to send and upgrade a Zephyr image to a device.
Currently only the MCUboot bootloader is supported. See MCUboot for more information.
Discord channel
Developers welcome!
• Discord mcumgr channel: https://discord.com/invite/Ck7jw53nU2
API Reference
group mcumgr_mgmt_api
MCUmgr mgmt API.
Defines
MGMT_HDR_SIZE
MGMT_CTXT_SET_RC_RSN(mc, rsn)
MGMT_CTXT_RC_RSN(mc)
Typedefs
Enums
enum mcumgr_op_t
Opcodes; encoded in first byte of header.
Values:
enumerator MGMT_OP_READ = 0
Read op-code
enumerator MGMT_OP_READ_RSP
Read response op-code
enumerator MGMT_OP_WRITE
Write op-code
enumerator MGMT_OP_WRITE_RSP
Write response op-code
enum mcumgr_group_t
MCUmgr groups. The first 64 groups are reserved for system-level mcumgr commands. Per-user commands are then defined from group 64 onwards.
Values:
enumerator MGMT_GROUP_ID_OS = 0
OS (operating system) group
enumerator MGMT_GROUP_ID_IMAGE
Image management group, used for uploading firmware images
enumerator MGMT_GROUP_ID_STAT
Statistic management group, used for retrieving statistics
enumerator MGMT_GROUP_ID_CONFIG
System configuration group (unused)
enumerator MGMT_GROUP_ID_LOG
Log management group (unused)
enumerator MGMT_GROUP_ID_CRASH
Crash group (unused)
enumerator MGMT_GROUP_ID_SPLIT
Split image management group (unused)
enumerator MGMT_GROUP_ID_RUN
Run group (unused)
enumerator MGMT_GROUP_ID_FS
FS (file system) group, used for performing file IO operations
enumerator MGMT_GROUP_ID_SHELL
Shell management group, used for executing shell commands
enumerator MGMT_GROUP_ID_PERUSER = 64
User groups defined from 64 onwards
enum mcumgr_err_t
MCUmgr error codes.
Values:
enumerator MGMT_ERR_EOK = 0
No error (success).
enumerator MGMT_ERR_EUNKNOWN
Unknown error.
enumerator MGMT_ERR_ENOMEM
Insufficient memory (likely not enough space for CBOR object).
enumerator MGMT_ERR_EINVAL
Error in input value.
enumerator MGMT_ERR_ETIMEOUT
Operation timed out.
enumerator MGMT_ERR_ENOENT
No such file/entry.
enumerator MGMT_ERR_EBADSTATE
Current state disallows command.
enumerator MGMT_ERR_EMSGSIZE
Response too large.
enumerator MGMT_ERR_ENOTSUP
Command not supported.
enumerator MGMT_ERR_ECORRUPT
Corrupt
enumerator MGMT_ERR_EBUSY
Command blocked by processing of other command
enumerator MGMT_ERR_EACCESSDENIED
Access to specific function, command or resource denied
enumerator MGMT_ERR_UNSUPPORTED_TOO_OLD
Requested SMP MCUmgr protocol version is not supported (too old)
enumerator MGMT_ERR_UNSUPPORTED_TOO_NEW
Requested SMP MCUmgr protocol version is not supported (too new)
Functions
struct mgmt_handler
#include <mgmt.h> Read handler and write handler for a single command ID.
struct mgmt_group
#include <mgmt.h> A collection of handlers for an entire command group.
Public Members
sys_snode_t node
Entry list node.
Overview
MCUmgr has a customisable callback/notification system that allows application (and module) code to receive callbacks for the MCUmgr events it is interested in and react to them, or to return a status code to the calling function that controls whether the action should be allowed. An example of this is the fs_mgmt group, where file access can be gated: the callback allows the application to inspect the requested path and allow or deny access to the file, or rewrite the provided path to a different one for transparent file redirection.
Implementation
#include <zephyr/kernel.h>
#include <zephyr/mgmt/mcumgr/mgmt/mgmt.h>
#include <zephyr/mgmt/mcumgr/mgmt/callbacks.h>

/* my_function is the application's mgmt_cb callback handler
 * (definition not shown in this fragment) */
static struct mgmt_callback my_callback;

int main()
{
my_callback.callback = my_function;
my_callback.event_id = MGMT_EVT_OP_CMD_DONE;
mgmt_callback_register(&my_callback);
}
This code registers a handler for the MGMT_EVT_OP_CMD_DONE event, which will be called after an MCUmgr command has been processed and output generated. Note that CONFIG_MCUMGR_SMP_COMMAND_STATUS_HOOKS must be enabled to receive this callback.
Multiple callbacks can be set up to use a single function as a common callback, and many different functions can be used for each event by registering each group once. All notifications for a whole group can be enabled with one of the MGMT_EVT_OP_*_ALL events; alternatively, a handler can be set up for every notification by using MGMT_EVT_OP_ALL. When setting up handlers, only events in the same group can be combined: for example, 5 img_mgmt callbacks can be set up with a single registration call, but to also set up a callback for an os_mgmt event, a separate registration is required. Group IDs are numerical increments while event IDs are bitmask values, hence the restriction.
As an example, the following registration is allowed, which will register for 3 SMP events with a single
callback function in a single registration:
my_callback.callback = my_function;
my_callback.event_id = (MGMT_EVT_OP_CMD_RECV |
MGMT_EVT_OP_CMD_STATUS |
MGMT_EVT_OP_CMD_DONE);
mgmt_callback_register(&my_callback);
The following code is not allowed and will cause undefined operation, because it mixes the IMG management group with the OS management group; the group is not a bitmask value, only the event is:
my_callback.callback = my_function;
my_callback.event_id = (MGMT_EVT_OP_IMG_MGMT_DFU_STARTED |
MGMT_EVT_OP_OS_MGMT_RESET);
mgmt_callback_register(&my_callback);
Actions Some callbacks expect a return status to either allow or disallow an operation; an example is the fs_mgmt access hook, which allows access to files to be granted or denied. With these handlers, the first non-OK error code returned by a handler will be returned to the MCUmgr client.
An example of selectively denying file access:
#include <zephyr/kernel.h>
#include <zephyr/mgmt/mcumgr/mgmt/mgmt.h>
#include <zephyr/mgmt/mcumgr/mgmt/callbacks.h>
#include <string.h>

enum mgmt_cb_return my_function(uint32_t event, enum mgmt_cb_return prev_status,
int32_t *rc, uint16_t *group, bool *abort_more,
void *data, size_t data_size)
{
struct fs_mgmt_file_access *fs_data = (struct fs_mgmt_file_access *)data;

/* Check if this is an upload and deny access if it is, otherwise check
 * the path and deny if it matches a name
 */
if (fs_data->access == FS_MGMT_FILE_ACCESS_WRITE) {
/* Return an access denied error code to the client and abort calling
 * further handlers
 */
*abort_more = true;
*rc = MGMT_ERR_EACCESSDENIED;
return MGMT_CB_ERROR_RC;
} else if (strcmp(fs_data->filename, "/lfs1/false_deny.txt") == 0) {
/* Return a no entry error code to the client, call additional handlers
 * (which will have failed set to true)
 */
*rc = MGMT_ERR_ENOENT;
return MGMT_CB_ERROR_RC;
}

return MGMT_CB_OK;
}

static struct mgmt_callback my_callback;
int main()
{
my_callback.callback = my_function;
my_callback.event_id = MGMT_EVT_OP_FS_MGMT_FILE_ACCESS;
mgmt_callback_register(&my_callback);
}
This code registers a handler for the MGMT_EVT_OP_FS_MGMT_FILE_ACCESS event, which will be called after a fs_mgmt file read/write command has been received, to check whether access to the file should be allowed. Note that CONFIG_MCUMGR_GRP_FS_FILE_ACCESS_HOOK must be enabled to receive this callback. Two types of errors can be returned: the rc parameter can be set to an mcumgr_err_t error code and MGMT_CB_ERROR_RC returned, or a group error code (introduced in version 2 of the MCUmgr protocol) can be returned by setting the group value to the group and the rc value to that group's error code and returning MGMT_CB_ERROR_RET.
MCUmgr Command Callback Usage/Adding New Event Types To add a callback to an MCUmgr command, mgmt_callback_notify() can be called with the event ID and, optionally, a data struct to pass to the callback (which can be modified by handlers). If no data needs to be passed back, NULL can be used instead, with the size of the data set to 0.
An example MCUmgr command handler:
# include <zephyr/kernel.h>
# include <zcbor_common.h>
# include <zcbor_encode.h>
# include <zephyr/mgmt/mcumgr/smp/smp.h>
# include <zephyr/mgmt/mcumgr/mgmt/mgmt.h>
# include <zephyr/mgmt/mcumgr/mgmt/callbacks.h>
enum user_one_group_events {
/** Callback on first post, data is test_struct. */
MGMT_EVT_OP_USER_ONE_FIRST = MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_USER_ONE, 0),
};

struct test_struct {
uint8_t some_value;
};
rc = mgmt_callback_notify(MGMT_EVT_OP_USER_ONE_FIRST, &test_data,
sizeof(test_data), &ret_rc, &ret_group);
if (rc != MGMT_CB_OK) {
/* A handler returned a failure code */
if (rc == MGMT_CB_ERROR_RC) {
/* The failure code is the RC value */
return ret_rc;
}
}

end:
rc = (ok ? MGMT_ERR_EOK : MGMT_ERR_EMSGSIZE);
return rc;
}
If no response is required for the callback, the function can be called and its result cast to void.
Migration
If there is existing code using the previous callback system(s) from Zephyr 3.2 or earlier, it will need to be migrated to the new system. To migrate code, the following callback registration functions need to be replaced with registrations through mgmt_callback_register() (note that CONFIG_MCUMGR_MGMT_NOTIFICATION_HOOKS must be set to enable the new notification system, in addition to any migrations):
• mgmt_evt
Using MGMT_EVT_OP_CMD_RECV , MGMT_EVT_OP_CMD_STATUS , or MGMT_EVT_OP_CMD_DONE as
drop-in replacements for events of the same name, where the provided data is
mgmt_evt_op_cmd_arg . CONFIG_MCUMGR_SMP_COMMAND_STATUS_HOOKS needs to be set.
• fs_mgmt_register_evt_cb
Using MGMT_EVT_OP_FS_MGMT_FILE_ACCESS where the provided data is fs_mgmt_file_access. Instead of returning true to allow the action or false to deny it, an MCUmgr result code needs to be returned: MGMT_ERR_EOK allows the action; any other return code disallows it and returns that code to the client (MGMT_ERR_EACCESSDENIED can be used for an access denied error). CONFIG_MCUMGR_GRP_FS_FILE_ACCESS_HOOK needs to be set.
• img_mgmt_register_callbacks
Using MGMT_EVT_OP_IMG_MGMT_DFU_STARTED if dfu_started_cb was
used, MGMT_EVT_OP_IMG_MGMT_DFU_STOPPED if dfu_stopped_cb was used,
MGMT_EVT_OP_IMG_MGMT_DFU_PENDING if dfu_pending_cb was used or
MGMT_EVT_OP_IMG_MGMT_DFU_CONFIRMED if dfu_confirmed_cb was used. These call-
backs do not have any return status. CONFIG_MCUMGR_GRP_IMG_STATUS_HOOKS needs to be
set.
• img_mgmt_set_upload_cb
Using MGMT_EVT_OP_IMG_MGMT_DFU_CHUNK where the provided data is
img_mgmt_upload_check . Instead of returning true to allow the action or false to deny, a
MCUmgr result code needs to be returned, MGMT_ERR_EOK will allow the action, any other
return code will disallow it and return that code to the client (MGMT_ERR_EACCESSDENIED can
be used for an access denied error). CONFIG_MCUMGR_GRP_IMG_UPLOAD_CHECK_HOOK needs to
be set.
• os_mgmt_register_reset_evt_cb
Using MGMT_EVT_OP_OS_MGMT_RESET . Instead of returning true to allow the action or
false to deny, a MCUmgr result code needs to be returned, MGMT_ERR_EOK will
allow the action, any other return code will disallow it and return that code
to the client (MGMT_ERR_EACCESSDENIED can be used for an access denied error).
CONFIG_MCUMGR_SMP_COMMAND_STATUS_HOOKS needs to be set.
API Reference
group mcumgr_callback_api
MCUmgr callback API.
Defines
MGMT_EVT_GET_GROUP(event)
Get group from event.
MGMT_EVT_GET_ID(event)
Get event ID from event.
Typedefs
Enums
enum mgmt_cb_return
MGMT event callback return value.
Values:
enumerator MGMT_CB_OK
No error.
enumerator MGMT_CB_ERROR_RC
SMP protocol error and ret_rc contains the mcumgr_err_t error code.
enumerator MGMT_CB_ERROR_RET
Group (application-level) error; ret_group contains the group ID that caused the error and ret_rc contains the error code of that group to return.
enum mgmt_cb_groups
MGMT event callback group IDs. Note that this is not a 1:1 mapping with mcumgr_group_t
values.
Values:
enumerator MGMT_EVT_GRP_ALL = 0
enumerator MGMT_EVT_GRP_SMP
enumerator MGMT_EVT_GRP_OS
enumerator MGMT_EVT_GRP_IMG
enumerator MGMT_EVT_GRP_FS
enum smp_all_events
MGMT event opcodes for all command processing.
Values:
enum smp_group_events
MGMT event opcodes for base SMP command processing.
Values:
enum fs_mgmt_group_events
MGMT event opcodes for filesystem management group.
Values:
enumerator MGMT_EVT_OP_FS_MGMT_FILE_ACCESS =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_FS, 0)
Callback when a file has been accessed, data is fs_mgmt_file_access().
enum img_mgmt_group_events
MGMT event opcodes for image management group.
Values:
enumerator MGMT_EVT_OP_IMG_MGMT_DFU_CHUNK =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_IMG, 0)
Callback when a client sends a file upload chunk, data is img_mgmt_upload_check().
enumerator MGMT_EVT_OP_IMG_MGMT_DFU_STOPPED =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_IMG, 1)
Callback when a DFU operation is stopped.
enumerator MGMT_EVT_OP_IMG_MGMT_DFU_STARTED =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_IMG, 2)
Callback when a DFU operation is started.
enumerator MGMT_EVT_OP_IMG_MGMT_DFU_PENDING =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_IMG, 3)
Callback when a DFU operation has finished being transferred.
enumerator MGMT_EVT_OP_IMG_MGMT_DFU_CONFIRMED =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_IMG, 4)
Callback when an image has been confirmed.
enumerator MGMT_EVT_OP_IMG_MGMT_ALL =
MGMT_DEF_EVT_OP_ALL(MGMT_EVT_GRP_IMG)
Used to enable all img_mgmt_group events.
enum os_mgmt_group_events
MGMT event opcodes for operating system management group.
Values:
enumerator MGMT_EVT_OP_OS_MGMT_RESET =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_OS, 0)
Callback when a reset command has been received.
enumerator MGMT_EVT_OP_OS_MGMT_INFO_CHECK =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_OS, 1)
Callback when an info command is processed, data is os_mgmt_info_check.
enumerator MGMT_EVT_OP_OS_MGMT_INFO_APPEND =
MGMT_DEF_EVT_OP_ID(MGMT_EVT_GRP_OS, 2)
Callback when an info command needs to output data, data is os_mgmt_info_append.
Functions
struct mgmt_callback
#include <callbacks.h> MGMT callback struct
Public Members
sys_snode_t node
Entry list node.
mgmt_cb callback
Callback that will be called.
uint32_t event_id
MGMT_EVT_[...] event ID for the handler to be called on. This has special meaning if MGMT_EVT_OP_ALL is used (which covers all events for all groups) or MGMT_EVT_OP_*_MGMT_ALL (which covers all events for a single group). Events that are part of a single group can be OR'd together so that one registration triggers on multiple events; note that this only works within a single group. To register for events in different groups, separate registrations must be used.
struct mgmt_evt_op_cmd_arg
#include <callbacks.h> Arguments for MGMT_EVT_OP_CMD_RECV,
MGMT_EVT_OP_CMD_STATUS and MGMT_EVT_OP_CMD_DONE
Public Members
uint16_t group
mcumgr_group_t
uint8_t id
Message ID within group
int err
mcumgr_err_t, used in MGMT_EVT_OP_CMD_DONE
int status
img_mgmt_id_upload_t, used in MGMT_EVT_OP_CMD_STATUS
MCUmgr fs_mgmt callback API.
Enums
enum fs_mgmt_file_access_types
The type of operation that is being requested for a given file access callback.
Values:
enumerator FS_MGMT_FILE_ACCESS_READ
Access to read file (file upload).
enumerator FS_MGMT_FILE_ACCESS_WRITE
Access to write file (file download).
enumerator FS_MGMT_FILE_ACCESS_STATUS
Access to get status of file.
enumerator FS_MGMT_FILE_ACCESS_HASH_CHECKSUM
Access to calculate hash or checksum of file.
struct fs_mgmt_file_access
#include <fs_mgmt_callbacks.h> Structure provided in the MGMT_EVT_OP_FS_MGMT_FILE_ACCESS notification callback. This callback function is used to notify the application about a pending file read/write request and to authorise or deny it. Access will be allowed so long as all notification handlers return MGMT_ERR_EOK; if one returns an error, access will be denied.
Public Members
char *filename
Path and filename of the file being accessed; note that this can be changed by handlers to redirect file access if needed (as long as it does not exceed the maximum path string size).
MCUmgr img_mgmt callback API.
struct img_mgmt_upload_check
#include <img_mgmt_callbacks.h> Structure provided in the MGMT_EVT_OP_IMG_MGMT_DFU_CHUNK notification callback. This callback function is used to notify the application about a pending firmware upload packet from a client and to authorise or deny it. The upload will be allowed so long as all notification handlers return MGMT_ERR_EOK; if one returns an error, the upload will be denied.
Public Members
The processes described in this document apply to both the zephyr repository itself and the MCUmgr
module defined in west.yml.
Note: Currently, the backporting process described in this document is required only when providing changes to Zephyr version 2.7 LTS.
There are two different processes: one for issues that have also been fixed in the current version of
Zephyr (backports), and one for issues that are being fixed only in a previous version.
The upstream MCUmgr repository is located on this page. The Zephyr fork used in version 2.7 and earlier is located here. Versions of Zephyr past 2.7 use the MCUmgr library that is part of the Zephyr code base. In Zephyr version 2.7 and earlier, you must first apply the fix to the upstream repository of MCUmgr and then bring it into Zephyr with snapshot updates.
As such, there are four possible ways to apply a change to the 2.7 branch:
• The fix, done directly to the Zephyr-held code of the MCUmgr library, is backported to the v2.7-branch.
• The fix, ported to the Zephyr-held code from the upstream repository, is backported to the v2.7-branch.
• The fix, done upstream and no longer relevant to the current version, is directly backported to the v2.7-branch.
• The fix, not present upstream and not relevant for the current version of Zephyr, is directly applied to the v2.7-branch.
The first three cases are backports; the last one is a new fix and has no corresponding fix in the
current version.
Creating a bug report Every proposed fix requires a bug report submitted for the specified version of
Zephyr affected by the bug.
In case the reported bug in a previous version has already been fixed in the current version, the descrip-
tion of the bug must be copied with the following:
• Additional references to the bug in the current version
• The PR for the current version
• The SHAs of the commits, if the PR has already been merged
You must also apply the backport v2.7-branch label to the bug report.
Creating the pull request for the fix You can either create a backport pull request or a new-fix pull
request.
Creating backport pull requests Backporting a fix means that some or all of the fix commits, as they
exist in the current version, are ported to a previous version.
Note: Backporting requires the fix for the current version to be already merged.
``<sha>`` indicates the SHA of the commit after it has already been merged in the current version.
Creating new-fix pull requests When the fix needed does not have a corresponding fix in the current
version, the bug report must follow the ordinary process.
1. Create the pull request selecting v2.7-branch as the merge target.
2. Update west.yml within Zephyr, creating a pull-request to update the MCUmgr library referenced
in Zephyr 2.7.
Configuration management
This chapter describes the maintainers’ side of accepting and merging fixes and backports.
Prerequisites As a maintainer, these are the steps required before proceeding with the merge process:
1. Check if the author has followed the correct steps that are required to apply the fix, as described in
Applying fixes to previous versions of MCUmgr.
2. Ensure that the author of the fix has also provided the west.yml update for Zephyr 2.7.
The specific merging process depends on where the fix comes from and whether it is a backport or a new
fix.
backport-<source>-<pr_num>-to_v2.7-branch
Merging a new fix Merging a new fix that is not a backport of either an upstream or a Zephyr fix does
not require any special treatment. Apply the fix directly on top of the v2.7-branch.
Merge west.yml As an MCUmgr maintainer, you may not be able to merge the west.yml update that
introduces the fix to Zephyr. However, you are responsible for making sure that such a merge happens as
soon as possible after the MCUmgr fixes have been applied to the v2.7-branch of MCUmgr.
This is a description of the Simple Management Protocol (SMP) that is used by MCUmgr to pass requests
to devices and receive responses from them.
SMP is an application layer protocol. The underlying transport layer is not in the scope of this
documentation.
Note: SMP in this context refers to SMP for MCUmgr (Simple Management Protocol); it is unrelated to
SMP in Bluetooth (Security Manager Protocol), but there is an MCUmgr SMP transport for Bluetooth.
Each frame consists of a header and data. The Data Length field in the header may be used for reassembly purposes if the underlying transport layer supports fragmentation. Fields longer than one byte are encoded in “Big Endian” (network endianness). A frame takes the following form:

byte 0: Res (3 bits), Ver (2 bits), OP (3 bits)
byte 1: Flags
bytes 2-3: Data Length
bytes 4-5: Group ID
byte 6: Sequence Num
byte 7: Command ID
bytes 8+: Data . . .
Note: The original specification states that SMP should support receiving both the “Little-endian” and
“Big-endian” frames but in reality the MCUmgr library is hardcoded to always treat “Network” side as
“Big-endian”.
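As a concrete illustration of the header layout above, here is a minimal C sketch that packs the 8-byte header with big-endian multi-byte fields. The function name and signature are illustrative only, not the MCUmgr library API:

```c
#include <stdint.h>

/* Pack the 8-byte SMP header described above. Byte 0 carries Res (bits 7-5),
 * Ver (bits 4-3) and OP (bits 2-0); multi-byte fields are big-endian.
 * Illustrative sketch only, not the MCUmgr library API. */
static void smp_hdr_pack(uint8_t out[8], uint8_t op, uint8_t ver,
                         uint16_t data_len, uint16_t group, uint8_t seq,
                         uint8_t cmd)
{
    out[0] = (uint8_t)(((ver & 0x3u) << 3) | (op & 0x7u)); /* Res = 0 */
    out[1] = 0;                              /* Flags: none defined yet */
    out[2] = (uint8_t)(data_len >> 8);       /* Data Length, big-endian */
    out[3] = (uint8_t)(data_len & 0xffu);
    out[4] = (uint8_t)(group >> 8);          /* Group ID, big-endian */
    out[5] = (uint8_t)(group & 0xffu);
    out[6] = seq;                            /* Sequence Num */
    out[7] = cmd;                            /* Command ID */
}
```

For example, a version-1 read request (OP 0) for group 0 yields a first byte of 0x08, since only the Ver bits are set.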
Data is optional and is not present when Data Length is zero. The encoding of data depends on the
target of group/ID.
A description of the various fields and their meaning:

Res This is a reserved, not-used field and must always be set to 0.
Ver (Version) This indicates the version of the protocol being used. It should be set to 0b01 to use the newer SMP transport, where error codes are more detailed and returned in the map; otherwise it is left as 0b00 to use the legacy SMP protocol. Versions 0b10 and 0b11 are reserved for future use and should not be used.
OP mcumgr_op_t , determines whether information is written to a device or requested from it and whether a packet contains a request to an SMP server or a response from it.
Flags Reserved for flags; there are no flags defined yet, so the field should be set to 0.
Data Length Length of the Data field.
Group ID mcumgr_group_t , see Management Group ID’s for further details.
Sequence Num This is a frame sequence number. The number is increased by one with each request frame. The Sequence Num of a response should match the one in the request.
Command ID This is a command, within the Group.
Data This is a data payload of Data Length size. It is optional, as Data Length may be set to zero, which means that no data follows the header.
Note: The contents of Data depend on the values of OP, Group ID, and Command ID.
Management Group ID’s The SMP protocol supports predefined common groups and allows user de-
fined groups. The following table presents a list of common groups:
The payload for the above groups, except for user groups (64 and above), is always CBOR encoded.
Groups 64 and above can define their own scheme for data communication.
Minimal response
Regardless of the command issued, as long as there is an SMP client on the other side of a request, a
response should be issued containing the header followed by a CBOR map container. Lack of a response
is only allowed when there is no SMP service or the device is non-responsive.
{
(str)"rc" : (int)
}
where:
“rc” mcumgr_err_t
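For reference, this minimal response occupies only five bytes on the wire. A hand-rolled sketch of the CBOR encoding of {"rc": rc} (MCUmgr itself uses a CBOR library; this just shows the bytes):

```c
#include <stddef.h>
#include <stdint.h>

/* Encode the minimal SMP response {"rc": rc} by hand. Valid for rc in 0..23,
 * where CBOR encodes the unsigned integer inline in a single byte.
 * Illustrative only; real implementations use a CBOR library. */
static size_t encode_minimal_rc(uint8_t *out, uint8_t rc)
{
    out[0] = 0xA1; /* map with one key/value pair */
    out[1] = 0x62; /* text string of length 2     */
    out[2] = 'r';
    out[3] = 'c';
    out[4] = rc;   /* unsigned int 0..23, inline  */
    return 5;
}
```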
Echo command The echo command responds by sending back the string that it has received.
OP Group ID Command ID
0 or 2 0 0
{
(str)"d" : (str)
}
where:
{
(str)"r" : (str)
}
{
(str)"rc" : (int)
}
where:
Task statistics command The command responds with some system statistics.
OP Group ID Command ID
0 0 2
OP Group ID Command ID
1 0 2
{
(str)"tasks" : {
(str)<task_name> : {
(str)"prio" : (uint)
(str)"tid" : (uint)
(str)"state" : (uint)
(str)"stkuse" : (uint)
(str)"stksiz" : (uint)
(str)"cswcnt" : (uint)
(str)"runtime" : (uint)
(str)"last_checkin" : (uint)
(str)"next_checkin" : (uint)
}
...
}
}
{
(str)"rc" : (int)
}
where:
Note: The unit for “stkuse” and “stksiz” is system dependent; in the case of Zephyr it is the number of
4-byte words.
Memory pool statistics The command is used to obtain information on memory pools active in a
running system.
Memory pool statistic request Memory pool statistics request header fields:
OP Group ID Command ID
0 0 3
Memory pool statistics response Memory pool statistics response header fields:
OP Group ID Command ID
1 0 3
{
(str)<pool_name> {
(str)"blksiz" : (int)
(str)"nblks" : (int)
(str)"nfree" : (int)
(str)"min" : (int)
}
...
}
{
(str)"rc" : (int)
}
where:
<pool_name> string representing the pool name, used as a key for dictionary with pool statistics
data
“blksiz” size of the memory block in the pool
“nblks” number of blocks in the pool
“nfree” number of free blocks
“min” lowest number of free blocks the pool reached during run-time
“rc” mcumgr_err_t only appears if non-zero (error condition).
Date-time command The command allows obtaining a string representing the current date and time
on a device, or setting a new date and time on a device. The time format used, by both set and get
operations, is:
“yyyy-MM-dd’T’HH:mm:ss.SSSSSSZZZZZ”
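A host-side sketch of producing a timestamp in that shape with standard C follows; the microseconds and UTC offset are appended by hand, and format_datetime is a hypothetical helper, not part of MCUmgr:

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Format a UTC time in the "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZZZZZ" shape used
 * by the date-time command. Hypothetical helper for illustration; a
 * device-side implementation would use its own clock APIs. */
static void format_datetime(char *out, size_t n, time_t t, long usec)
{
    char base[32];
    struct tm tm_utc;

    gmtime_r(&t, &tm_utc); /* POSIX; output assumes a UTC offset of zero */
    strftime(base, sizeof(base), "%Y-%m-%dT%H:%M:%S", &tm_utc);
    snprintf(out, n, "%s.%06ld+00:00", base, usec);
}
```

For example, the Unix epoch formats as "1970-01-01T00:00:00.000000+00:00".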
OP Group ID Command ID
0 0 4
OP Group ID Command ID
1 0 4
{
(str)"datetime" : (str)
}
{
(str)"rc" : (int)
}
where:
OP Group ID Command ID
2 0 4
{
(str)"datetime" : (str)
}
where:
OP Group ID Command ID
1 0 4
The command sends an empty CBOR map as data if successful. In case of error the CBOR data takes the
form:
{
(str)"rc" : (int)
}
where:
System reset Performs a reset of the system. The device should issue a response before resetting so
that the SMP client can receive information that the command has been accepted. By default, this
command is accepted in all conditions; however, if CONFIG_MCUMGR_GRP_OS_RESET_HOOK is enabled and
an application registers a callback, the callback will be called when this command is issued and can be
used to perform any necessary tidy-up operations prior to the module rebooting, or to reject the reset
request outright with an error response. For details on this functionality, see MCUmgr callbacks.
OP Group ID Command ID
2 0 5
Normally the command sends an empty CBOR map as data, but if a previous reset attempt has responded
with “rc” equal to MGMT_ERR_EBUSY then the following map may be sent to force a reset:
{
(opt)"force" : (int)
}
where:
OP Group ID Command ID
3 0 5
The command sends an empty CBOR map as data if successful. In case of error the CBOR data takes the
form:
{
(str)"rc" : (int)
}
where:
OP Group ID Command ID
0 0 6
OP Group ID Command ID
2 0 6
{
(str)"buf_size" : (uint)
(str)"buf_count" : (uint)
}
{
(str)"rc" : (int)
}
where:
“buf_size” Single SMP buffer size; this includes the SMP header and CBOR payload
“buf_count” Number of SMP buffers supported
“rc” mcumgr_err_t ; only appears if non-zero (error condition).
OS/Application Info Used to obtain information on the running image, with functionality similar to
the Linux uname command, allowing details such as kernel name, kernel version, build date/time, pro-
cessor type and application-defined details to be returned. This functionality can be enabled with
CONFIG_MCUMGR_GRP_OS_INFO.
OP Group ID Command ID
0 0 7
{
(str,opt)"format" : (str)
}
where:
“format” Format specifier of the returned response. Fields are appended in their natural ascending index order, not the order of characters that are received by the command. Format specifiers:
• s Kernel name
• n Node name
• r Kernel release
• v Kernel version
• b Build date and time (requires CONFIG_MCUMGR_GRP_OS_INFO_BUILD_DATE_TIME)
• m Machine
• p Processor
• i Hardware platform
• o Operating system
• a All fields (shorthand for all above options)
If this option is not provided, the s (Kernel name) option will be used.
OP Group ID Command ID
2 0 7
{
(str)"output" : (str)
(opt,str)"rc" : (int)
}
where:
Notion of “slots” and “images” in Zephyr The “slot” and “image” definitions come from MCUboot,
where an “image” consists of two “slots”, named “primary” and “secondary”; the application is supposed
to run from the “primary slot”, an update is supposed to be uploaded to the “secondary slot”, and
MCUboot is responsible for swapping the slots on boot. This means that a pair of slots is dedicated
to a single upgradable application. In the case of Zephyr this gets a little bit confusing, because DTS
will use “slot0_partition” and “slot1_partition” as labels of the fixed-partitions dedicated to a single
application, but will name them “image-0” and “image-1” respectively.
Currently Zephyr supports at most two images, in which case mapping is as follows:
State of images The command is used to set the state of images and obtain the list of images with their
current state.
Get state of images request Get state of images request header fields:
OP Group ID Command ID
0 1 0
Get state of images response Get state of images response header fields:
OP Group ID Command ID
1 1 0
Note: The definition of the response below contains an “image” field that has been marked as optional
(opt): the field may not appear in a response when the target application does not support more than
one image. The field is mandatory when the application supports more than one application image, to
allow identifying which image the information is listed for.
A response will only contain information for valid images; if an image can not be identified as valid it is
simply skipped.
CBOR data of successful response:
{
(str)"images" : [
{
(str,opt)"image" : (uint)
(str)"slot" : (uint)
(str)"version" : (str)
(str,opt*)"hash" : (byte str)
(str,opt)"bootable" : (bool)
(str,opt)"pending" : (bool)
(str,opt)"confirmed" : (bool)
(str,opt)"active" : (bool)
(str,opt)"permanent" : (bool)
}
...
]
(str,opt)"splitStatus" : (int)
}
{
(str)"rc" : (int)
(str,opt)"rsn" : (str)
}
where:
“image” semi-optional image number; the field is not required when only one image is supported by the running application
“slot” slot number within “image”; each image has two slots: primary (running one) = 0 and secondary (for DFU dual-bank purposes) = 1
“version” string representing the image version, as set with imgtool
“hash” SHA256 hash of the image header and body. Note that this will not be the same as the SHA256 of the whole file; it is the field in the MCUboot TLV section that contains a hash of the data which is used for signature verification purposes. This field is only optional when using MCUboot’s serial recovery feature with one pair of image slots, where the Kconfig option CONFIG_BOOT_SERIAL_IMG_GRP_HASH can be disabled to remove support for hashes in this configuration. MCUmgr in applications must support sending hashes.
Note: See IMAGE_TLV_SHA256 in the MCUboot image format documentation link below.
“bootable” true if the image has the bootable flag set; this field does not have to be present if false
“pending” true if the image is set for the next swap; this field does not have to be present if false
“confirmed” true if the image has been confirmed; this field does not have to be present if false
“active” true if the image is the currently active application; this field does not have to be present if false
“permanent” true if the image is to stay in the primary slot after the next boot; this field does not have to be present if false
“splitStatus” states whether the loader of a split image is compatible with the application part; this is unused by Zephyr
“rc” mcumgr_err_t ; only appears if non-zero (error condition)
“rsn” optional string that clarifies the reason for an error; specifically useful for error code 1, unknown error
Note: For more information on how images/slots function, please refer to the MCUboot documen-
tation https://docs.mcuboot.com/design.html#image-slots. For information on the MCUboot image
format, please refer to the MCUboot documentation https://docs.mcuboot.com/design.html#image-format.
Set state of image request Set state of image request header fields:
OP Group ID Command ID
2 1 0
{
{
(str,opt)"hash" : (str)
(str)"confirm" : (bool)
}
}
If “confirm” is false or not provided, the image with the given “hash” will be set for test, which means
that it will not be marked as permanent and upon hard reset the previous application will be restored
to the primary slot. When “confirm” is true, the “hash” is optional, as the currently running application
will be assumed as the target for confirmation.
Set state of image response The response takes the same format as Get state of images response
Image upload The image upload command allows updating the application image.
Image upload request The image upload request is sent for each chunk of the image that is uploaded,
until the complete image has been uploaded to a device.
Image upload request header fields:
OP Group ID Command ID
2 1 1
{
{
(str,opt)"image" : (uint)
(str,opt)"len" : (uint)
(str)"off" : (uint)
(str,opt)"sha" : (byte str)
(str,opt)"data" : (byte str)
(str,opt)"upgrade" : (bool)
}
}
where:
“image” optional image number; it does not have to appear in the request at all, in which case it is assumed to be 0; only a request with “off” 0 can contain the image number
“len” optional length of an image; it only appears in the first packet of a request, where “off” is 0
“off” offset of the image chunk the request carries
“sha” SHA256 hash of an upload; this is used to identify an upload session (e.g. to allow MCUmgr to continue a broken session), and for image verification purposes. This must be a full SHA256 hash of the whole image being uploaded, or not included if the hash is not available (in which case, upload session continuation and image verification functionality will be unavailable). Should only be present if “off” is zero.
“data” optional image data
“upgrade” optional flag that states that only an upgrade should be allowed, so if the version of the uploaded software is not higher than the one already on a device, the image upload will be rejected. Zephyr only compares major, minor and revision (x.y.z).
Note: There is no field representing the size of the chunk carried as “data” because that information is
embedded within the “data” field itself.
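The "upgrade" comparison described above amounts to a plain major/minor/revision comparison; a sketch with illustrative names (not the MCUmgr code):

```c
/* x.y.z version as compared for the "upgrade" flag: only major, minor and
 * revision are considered. Illustrative sketch. */
struct img_ver {
    int major;
    int minor;
    int rev;
};

/* Returns 1 if a is strictly higher than b, 0 otherwise. */
static int ver_is_higher(struct img_ver a, struct img_ver b)
{
    if (a.major != b.major) {
        return a.major > b.major;
    }
    if (a.minor != b.minor) {
        return a.minor > b.minor;
    }
    return a.rev > b.rev;
}
```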
The MCUmgr library uses the “sha” field to tag an ongoing update session, to be able to continue it in
case it gets broken, and for upload verification purposes. If the library gets a request with “off” equal to
zero, it checks the “sha” stored within its state and, if it matches, it will respond to the update client
application with the offset that it should continue with. If this hash is not available (e.g. because a file
is being streamed), then it must not be provided; image verification and upload session continuation
features will be unavailable in this case.
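From the client's perspective, the rules above amount to a simple chunking plan: only the first request (with "off" 0) carries "len" (and optionally "sha" and "image"), and every later request carries just "off" and "data". A sketch with hypothetical names:

```c
#include <stddef.h>

/* One planned image-upload request; has_len marks the first chunk, the only
 * one that carries the "len" (and optional "sha"/"image") fields. */
struct upload_req {
    size_t off;      /* "off" field              */
    size_t data_len; /* size of the "data" chunk */
    int has_len;     /* nonzero for the first request only */
};

/* Split an image of image_len bytes into requests of at most chunk bytes. */
static size_t plan_upload(struct upload_req *reqs, size_t max_reqs,
                          size_t image_len, size_t chunk)
{
    size_t n = 0;

    for (size_t off = 0; off < image_len && n < max_reqs; off += chunk, n++) {
        reqs[n].off = off;
        reqs[n].data_len = (image_len - off < chunk) ? image_len - off : chunk;
        reqs[n].has_len = (off == 0);
    }
    return n;
}
```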
OP Group ID Command ID
3 1 1
{
(str,opt)"off" : (uint)
(str,opt)"match" : (bool)
}
{
(str)"rc" : (int)
(str,opt)"rsn" : (str)
}
where:
The “off” field is only included in responses to successfully processed requests; if “rc” is negative then
“off” may not appear.
Image erase The command is used for erasing an image slot on a target device.
Note: This is a synchronous command, which means that the sender of the request will not receive a
response until the command completes, which can take a long time.
OP Group ID Command ID
2 1 5
{
{
(str,opt)"slot" : (uint)
}
}
where:
“slot” optional slot number; it does not have to appear in the request at all, in which case it is
assumed to be 1.
OP Group ID Command ID
3 1 5
The command sends an empty CBOR map as data if successful. In case of error the CBOR data takes the
form:
{
(str)"rc" : (int)
(str,opt)"rsn" : (str)
}
where:
Note: A response from a device running Zephyr may have an “rc” value of MGMT_ERR_EBADSTATE , which
means that the secondary image has already been marked for the next boot and may not be erased.
Statistics management Statistics management allows obtaining data gathered by the statistics subsys-
tem of Zephyr, enabled with CONFIG_STATS.
The statistics management group defines the following commands:
Statistics: group data The command is used to obtain data for the group specified by a name. The
name is one of the group names as registered, with the STATS_INIT_AND_REG macro or a
stats_init_and_reg() function call, within the module that gathers the statistics.
OP Group ID Command ID
0 2 0
{
(str)"name" : (str)
}
where:
OP Group ID Command ID
1 2 0
{
(str)"name" : (str)
(str)"fields" : {
(str)<entry_name> : (uint)
...
}
}
{
(str)"rc" : (int)
}
where:
Statistics: list of groups The command is used to obtain the list of groups of statistics that are gath-
ered on a device. This is a list of names as given to groups with STATS_INIT_AND_REG macro or
stats_init_and_reg() function calls, within the modules that gather the statistics; this means that
this command may be considered optional, as it is known at compile time which groups will be included
in a build, so listing them is not needed prior to issuing a query.
OP Group ID Command ID
0 2 1
OP Group ID Command ID
1 2 1
{
(str)"stat_list" : [
(str)<stat_group_name>, ...
]
}
{
(str)"rc" : (int)
}
where:
“stat_list” array of strings representing group names; this array may be empty if there are no groups
“rc” mcumgr_err_t only appears if non-zero (error condition).
File management The file management group provides commands that allow uploading and down-
loading files to/from a device.
The file management group defines the following commands:
File download The command allows downloading the contents of an existing file from a specified path
of a target device. Client applications must keep track of the data they have already downloaded and
their position in the file (MCUmgr will cache these also), and issue subsequent requests, with modified
offset, to gather the entire file. A request does not carry the size of the requested chunk; the size is
specified by the application itself. Note that file handles will remain open for consecutive requests (as
long as an idle timeout has not been reached and another transport does not make use of uploading/
downloading files using fs_mgmt), but files are not exclusively owned by MCUmgr for the duration of
the download session, and may change between requests or even be removed.
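The client-side bookkeeping described above can be sketched as a small state tracker: the first response (at offset 0) carries "len", and every later response just advances the offset. Names are hypothetical, not an MCUmgr client API:

```c
#include <stddef.h>

/* Minimal download-progress tracker for the file download command. */
struct dl_state {
    size_t off;     /* next expected offset            */
    size_t total;   /* file length from first response */
    int have_total;
};

/* Feed one response into the tracker. Returns 1 when the file is complete,
 * 0 when more requests are needed, -1 on an out-of-order chunk. */
static int dl_process(struct dl_state *s, size_t off, size_t data_len,
                      size_t len_field, int len_present)
{
    if (off != s->off) {
        return -1;
    }
    if (off == 0 && len_present) {
        s->total = len_field;
        s->have_total = 1;
    }
    s->off += data_len;
    return (s->have_total && s->off >= s->total) ? 1 : 0;
}
```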
Note: By default, all file upload/download requests are unconditionally allowed. However, if the
Kconfig option CONFIG_MCUMGR_GRP_FS_FILE_ACCESS_HOOK is enabled, then an application can register
a callback handler for MGMT_EVT_OP_FS_MGMT_FILE_ACCESS (see MCUmgr callbacks), which allows for
allowing or declining access to reading/writing a particular file, or for rewriting the path supplied by the
client.
OP Group ID Command ID
0 8 0
{
(str)"off" : (uint)
(str)"name" : (str)
}
where:
OP Group ID Command ID
1 8 0
{
(str)"off" : (uint)
(str)"data" : (byte str)
(str,opt)"len" : (uint)
}
{
(str)"rc" : (int)
}
where:
When “rc” is not 0 (i.e. not success), the other fields will not appear.
File upload Allows uploading a file to a specified location. The command will automatically overwrite
an existing file or create a new one if it does not exist at the specified path. The protocol supports
stateless upload, where each request carries a different chunk of a file and it is the client side’s
responsibility to track the progress of the upload.
Note that file handles will remain open for consecutive requests (as long as an idle timeout has not been
reached and another transport does not make use of uploading/downloading files using fs_mgmt), but
files are not exclusively owned by MCUmgr for the duration of the upload session, and may change
between requests or even be removed.
Note: The current Zephyr implementation is half-stateless: it is able to hold a single upload context,
holding information on an ongoing upload, that consists only of a bool flag indicating an in-progress
upload, the last successfully uploaded offset and the total length.
Note: By default, all file upload/download requests are unconditionally allowed. However, if the
Kconfig option CONFIG_MCUMGR_GRP_FS_FILE_ACCESS_HOOK is enabled, then an application can register
a callback handler for MGMT_EVT_OP_FS_MGMT_FILE_ACCESS (see MCUmgr callbacks), which allows for
allowing or declining access to reading/writing a particular file, or for rewriting the path supplied by the
client.
OP Group ID Command ID
2 8 0
{
(str)"off" : (uint)
(str)"data" : (str)
(str)"name" : (str)
(str,opt)"len" : (uint)
}
where:
OP Group ID Command ID
3 8 0
{
(str)"off" : (uint)
}
{
(str)"rc" : (int)
}
where:
File status The command allows retrieving the status of an existing file from a specified path of a target device.
OP Group ID Command ID
0 8 1
{
(str)"name" : (str)
}
where:
OP Group ID Command ID
1 8 1
{
(str)"len" : (uint)
}
{
(str)"rc" : (int)
}
where:
When “rc” is not 0 (i.e. not success), the other fields will not appear.
OP Group ID Command ID
0 8 2
{
(str)"name" : (str)
(str,opt)"type" : (str)
(str,opt)"off" : (uint)
(str,opt)"len" : (uint)
}
where:
Hash/checksum types
Note that the default type will be crc32 if it is enabled, or sha256 if crc32 is not enabled.
OP Group ID Command ID
1 8 2
{
(str)"type" : (str)
(str,opt)"off" : (uint)
(str)"len" : (uint)
(str)"output" : (uint or bstr)
}
{
(str)"rc" : (int)
}
where:
When “rc” is not 0 (i.e. not success), the other fields will not appear.
Supported file hash/checksum types The command allows listing which hash and checksum types are
available on a device. It requires the Kconfig option CONFIG_MCUMGR_GRP_FS_CHECKSUM_HASH_SUPPORTED_CMD
to be enabled.
Supported file hash/checksum types request Supported file hash/checksum types request header:
OP Group ID Command ID
0 8 3
Supported file hash/checksum types response Supported file hash/checksum types response header:
OP Group ID Command ID
1 8 3
{
(str)"rc" : (int)
}
where:
When “rc” is not 0 (i.e. not success), the other fields will not appear.
File close The command allows closing any open file handles held by fs_mgmt upload/download re-
quests that might have stalled or be incomplete.
OP Group ID Command ID
2 8 4
OP Group ID Command ID
3 8 4
The command sends an empty CBOR map as data if successful. In case of error the CBOR data takes the
form:
{
(str)"rc" : (int)
}
where:
Shell management Shell management allows passing commands to the shell subsystem over the SMP
protocol.
The shell management group defines the following commands:
Shell command line execute The command allows executing a command line in a similar way to
typing it into a shell, but both the request and the response are transported over SMP.
OP Group ID Command ID
2 9 0
{
(str)"argv" : [
(str)<cmd>
(str,opt)<arg>
...
]
}
where:
Shell command line execute response Command line execute response header fields:
OP Group ID Command ID
3 9 0
{
(str)"o" : (str)
(str)"ret" : (int)
}
{
(str)"rc" : (int)
}
where:
Note: In older versions of Zephyr, “rc” was used for both the mcumgr status code
and the shell command execution return code; this legacy behaviour can be restored by enabling
CONFIG_MCUMGR_GRP_SHELL_LEGACY_RC_RETURN_CODE.
This document specifies the information needed for implementing server and client side SMP transports.
MCUmgr clients need to use the following BLE characteristics when implementing an SMP client:
• Service UUID: 8D53DC1D-1DB7-4CD3-868B-8A527460AA84
• Characteristic UUID: DA2E7828-FBCE-4E01-AE9E-261174997C48
All SMP communication utilizes a single GATT characteristic. An SMP request is sent via a GATT Write
Without Response command. An SMP response is sent in the form of a GATT Notification.
If an SMP request or response is too large to fit in a single GATT command, the sender fragments it
across several packets. No additional framing is introduced when a request or response is fragmented;
the payload is simply split among several packets. Since GATT guarantees ordered delivery of packets,
the SMP header in the first fragment contains sufficient information for reassembly.
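Because the Data Length field sits at a fixed position in the header, the first fragment alone tells the receiver how many bytes to expect. A sketch of that calculation (illustrative, not the Zephyr transport code):

```c
#include <stddef.h>
#include <stdint.h>

/* Compute the total expected SMP packet size from the first GATT fragment:
 * the 8-byte header plus the big-endian Data Length at bytes 2-3.
 * Returns 0 if the fragment is too short to contain a full header
 * (a real implementation would buffer until 8 bytes have arrived). */
static size_t smp_expected_len(const uint8_t *first_frag, size_t frag_len)
{
    size_t data_len;

    if (frag_len < 8) {
        return 0;
    }
    data_len = ((size_t)first_frag[2] << 8) | first_frag[3];
    return 8 + data_len;
}
```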
The SMP protocol specification used by the MCUmgr subsystem of Zephyr applies basic framing of data
to allow multiplexing of a UART channel. Multiplexing requires prefixing each frame with a two byte
marker and terminating it with a newline. Currently MCUmgr imposes a 127 byte limit on frame size,
although there are no real protocol constraints that require that limit. The limit includes the prefix and
the newline character, so the allowed payload size is actually 124 bytes.
Although no such transport exists in Zephyr, it is possible to implement an MCUmgr client/server over
a UART transport that has no framing at all, uses hardware serial port control, or uses other means of
framing.
Frame fragmenting The SMP protocol over serial is fragmented into MTU size frames; each frame
consists of a two byte start marker, a body and a terminating newline character.
There are four types of frames: initial, partial, partial-final and initial-final; each frame type differs by
its start marker and/or body contents.
Frame formats An initial frame is required to be followed by an optional sequence of partial frames and
finally by a partial-final frame. The body is always Base64 encoded, so a body of size MTU - 3 is actually
able to carry N = (MTU - 3) / 4 * 3 bytes of raw data.
The body of an initial frame is preceded by a two byte total packet length, encoded in Big Endian, which
equals the size of the raw body plus two bytes, the size of the CRC16; this means that the actual body
size allowed into an initial frame is N - 2.
If a body size is smaller than N - 4, then it is possible to carry the entire body, with the preceding length
and the following CRC, in a single frame, here called initial-final; for the description of the initial-final
frame look below.
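The capacity arithmetic above can be written out directly; with the default 127-byte frame limit, a frame carries 93 raw bytes and an initial frame 91 (sketch only):

```c
/* Raw-data capacity of one serial frame: MTU minus the two-byte start
 * marker and the newline, then the Base64 4-to-3 ratio. */
static int raw_capacity(int mtu)
{
    return (mtu - 3) / 4 * 3;
}

/* An initial frame also loses the two-byte total packet length prefix,
 * giving the N - 2 figure from the text above. */
static int initial_frame_capacity(int mtu)
{
    return raw_capacity(mtu) - 2;
}
```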
Initial frame format:
Initial-final frame format is similar to initial frame format, but differs by <base64-i> definition.
<base64-i> of initial-final frame, is Base64 encoded data taking form:
Partial frame is continuation after previous initial or other partial frame. Partial frame takes form:
CRC Details The CRC16 included in final type frames is calculated over the raw data only and does not
include the packet length. The CRC16 polynomial is 0x1021 and the initial value is 0.
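These parameters (polynomial 0x1021, initial value 0, no final XOR) correspond to the CRC-16/XMODEM variant; a bitwise sketch, with an illustrative function name:

```c
#include <stddef.h>
#include <stdint.h>

/* CRC16 as used by the SMP serial framing: polynomial 0x1021, initial
 * value 0, computed over the raw data only (the CRC-16/XMODEM variant). */
static uint16_t crc16_xmodem(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}
```

The standard check value applies: the CRC of the ASCII string "123456789" is 0x31C3.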
API Reference
group mcumgr_transport_smp
MCUmgr transport SMP API.
Typedefs
Functions
struct smp_transport_api_t
#include <smp.h> Function pointers of SMP transport functions, if a handler is NULL then it
is not supported/implemented.
Public Members
smp_transport_out_fn output
Transport’s send function.
smp_transport_get_mtu_fn get_mtu
Transport’s get-MTU function.
smp_transport_ud_copy_fn ud_copy
Transport buffer user_data copy function.
smp_transport_ud_free_fn ud_free
Transport buffer user_data free function.
smp_transport_query_valid_check_fn query_valid_check
Transport’s check function for if a query is valid.
struct smp_transport
#include <smp.h> SMP transport object for sending SMP responses.
Overview
The Device Firmware Upgrade subsystem provides the necessary frameworks to upgrade the image of a
Zephyr-based application at run time. It currently consists of two different modules:
• subsys/dfu/boot/: Interface code to bootloaders
• subsys/dfu/img_util/: Image management code
The DFU subsystem deals with image management, but not with the transport or management protocols
themselves required to send the image to the target device. For information on these protocols and
frameworks please refer to the Device Management section.
Flash Image The flash image API as part of the Device Firmware Upgrade (DFU) subsystem provides
an abstraction on top of Flash Stream to simplify writing firmware image chunks to flash.
API Reference
group flash_img_api
Abstraction layer to write firmware images to flash.
Functions
Returns
0 on success, negative errno code on fail
int flash_img_init(struct flash_img_context *ctx)
Initialize context needed for writing the image to the flash.
Parameters
• ctx – context to be initialized
Returns
0 on success, negative errno code on fail
size_t flash_img_bytes_written(struct flash_img_context *ctx)
Read number of bytes of the image written to the flash.
Parameters
• ctx – context
Returns
Number of bytes written to the image flash.
int flash_img_buffered_write(struct flash_img_context *ctx, const uint8_t *data, size_t len,
bool flush)
Process input buffers to be written to the image slot flash memory in single blocks. Will
store the remainder between calls.
A final call to this function with flush set to true will write out the remaining block buffer to
flash. Since flash is written to in blocks, the contents of flash from the last byte written up to
the next multiple of CONFIG_IMG_BLOCK_BUF_SIZE is padded with 0xff.
Parameters
• ctx – context
• data – data to write
• len – Number of bytes to write
• flush – when true this forces any buffered data to be written to flash
Returns
0 on success, negative errno code on fail
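The flush-time padding described above can be illustrated with plain C; the block size here is a small illustrative constant standing in for CONFIG_IMG_BLOCK_BUF_SIZE, and the helper is not part of the flash image API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_BUF_SIZE 16 /* stand-in for CONFIG_IMG_BLOCK_BUF_SIZE */

/* Pad the final partial block up to the next block boundary with 0xff, as
 * flash_img_buffered_write() does on flush. Returns the padded length;
 * buf must have room for the full padded block. */
static size_t pad_block(uint8_t *buf, size_t used)
{
    size_t padded = ((used + BLOCK_BUF_SIZE - 1) / BLOCK_BUF_SIZE)
                    * BLOCK_BUF_SIZE;

    memset(buf + used, 0xff, padded - used);
    return padded;
}
```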
int flash_img_check(struct flash_img_context *ctx, const struct flash_img_check *fic, uint8_t
area_id)
Verify the integrity of length bytes of flash memory from a flash area. The start point is indicated by an
offset value.
The function is enabled via the CONFIG_IMG_ENABLE_IMAGE_CHECK Kconfig option.
Parameters
• ctx – [in] context.
• fic – [in] flash img check data.
• area_id – [in] flash area id of partition where the image should be verified.
Returns
0 on success, negative errno code on fail
struct flash_img_context
#include <flash_img.h>
struct flash_img_check
#include <flash_img.h> Structure for verifying flash region integrity.
The match vector length is fixed and depends on the output size of the hash algorithm used to verify
flash integrity. The currently available algorithm is SHA-256.
Public Members
size_t clen
Match vector data
MCUboot API The MCUboot API is provided to get version information and boot status of application
images. It allows selecting the application image and boot type for the next boot.
API Reference
group mcuboot_api
MCUboot public API for MCUboot control of image boot process.
Defines
BOOT_SWAP_TYPE_NONE
Attempt to boot the contents of slot 0.
BOOT_SWAP_TYPE_TEST
Swap to slot 1. Absent a confirm command, revert back on next boot.
BOOT_SWAP_TYPE_PERM
Swap to slot 1, and permanently switch to booting its contents.
BOOT_SWAP_TYPE_REVERT
Swap back to alternate slot. A confirm changes this state to NONE.
BOOT_SWAP_TYPE_FAIL
Swap failed because image to be run is not valid
BOOT_IMG_VER_STRLEN_MAX
BOOT_UPGRADE_TEST
Boot upgrade request modes
BOOT_UPGRADE_PERMANENT
Functions
bool boot_is_img_confirmed(void)
Determines if the currently running image is confirmed as OK.
See also:
boot_write_img_confirmed()
Returns
True if the image is confirmed as OK, false otherwise.
int boot_write_img_confirmed(void)
Marks the currently running image as confirmed.
This routine attempts to mark the currently running firmware image as OK, which will install
it permanently, preventing MCUboot from reverting to an older image at the next reset.
This routine is safe to call if the current image has already been confirmed. It will return a
successful result in this case.
Returns
0 on success, negative errno code on fail.
int boot_write_img_confirmed_multi(int image_index)
Marks the image with the given index in the primary slot as confirmed.
This routine attempts to mark the firmware image in the primary slot as OK, which will install
it permanently, preventing MCUboot from reverting to an older image at the next reset.
This routine is safe to call if the current image has already been confirmed. It will return a
successful result in this case.
Parameters
• image_index – Image pair index.
Returns
0 on success, negative errno code on fail.
int mcuboot_swap_type(void)
Determines the action, if any, that mcuboot will take on the next reboot.
Returns
a BOOT_SWAP_TYPE_[. . . ] constant on success, negative errno code on fail.
int mcuboot_swap_type_multi(int image_index)
Determines the action, if any, that mcuboot will take on the next reboot.
Parameters
• image_index – Image pair index.
Returns
a BOOT_SWAP_TYPE_[. . . ] constant on success, negative errno code on fail.
int boot_request_upgrade(int permanent)
Marks the image in slot 1 as pending. On the next reboot, the system will perform a boot of
the slot 1 image.
Parameters
• permanent – Whether the image should be used permanently or only tested
once: BOOT_UPGRADE_TEST=run image once, then confirm or revert.
BOOT_UPGRADE_PERMANENT=run image forever.
Returns
0 on success, negative errno code on fail.
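A common test-then-confirm flow built from the calls above can be sketched as follows; the self-test hook and the error policy are hypothetical, not part of the API:

```c
#include <zephyr/dfu/mcuboot.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical application hook: validates the new firmware. */
extern bool app_self_test(void);

/* Before rebooting into the new image: mark slot 1 as pending for a
 * one-shot test boot. Without a later confirm, MCUboot reverts on
 * the following reset. */
int start_test_boot(void)
{
	return boot_request_upgrade(BOOT_UPGRADE_TEST);
}

/* In the newly booted image: confirm only after the application has
 * validated itself. */
int confirm_if_healthy(void)
{
	if (!app_self_test()) {
		/* Leave the image unconfirmed; MCUboot will revert. */
		return -EAGAIN;
	}
	return boot_write_img_confirmed();
}
```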
int boot_request_upgrade_multi(int image_index, int permanent)
Marks the image with the given index in the secondary slot as pending. On the next reboot,
the system will perform a boot of the secondary slot image.
Parameters
• image_index – Image pair index.
• permanent – Whether the image should be used permanently or only tested
once: BOOT_UPGRADE_TEST=run image once, then confirm or revert.
BOOT_UPGRADE_PERMANENT=run image forever.
Returns
0 on success, negative errno code on fail.
int boot_erase_img_bank(uint8_t area_id)
Erase the image Bank.
Parameters
• area_id – flash_area ID of image bank to be erased.
Returns
0 on success, negative errno code on fail.
ssize_t boot_get_area_trailer_status_offset(uint8_t area_id)
Get the offset of the status in the image bank.
Parameters
• area_id – flash_area ID of image bank to get the status offset
Returns
a positive offset on success, negative errno code on fail
ssize_t boot_get_trailer_status_offset(size_t area_size)
Get the offset of the status from an image bank size.
Parameters
struct mcuboot_img_sem_ver
#include <mcuboot.h> MCUboot image header representation for image version.
The header for an MCUboot firmware image contains an embedded version number, in semantic
versioning format. This structure represents the information it contains.
struct mcuboot_img_header_v1
#include <mcuboot.h> Model for the MCUboot image header as of version 1.
This represents the data present in the image header, in version 1 of the header format.
Some information present in the header but not currently relevant to applications is omitted.
Public Members
uint32_t image_size
The size of the image, in bytes.
struct mcuboot_img_header
#include <mcuboot.h> Model for the MCUBoot image header.
This contains the decoded image header, along with the major version of MCUboot that the
header was built for.
(The MCUboot project guarantees that incompatible changes to the image header will result
in major version changes to the bootloader itself, and will be detectable in the persistent
representation of the header.)
Public Members
uint32_t mcuboot_version
The version of MCUboot the header is built for.
The value 1 corresponds to MCUboot versions 1.x.y.
struct mcuboot_img_header_v1 v1
Header information for MCUboot version 1.
union mcuboot_img_header.[anonymous] h
The header information. It is only valid to access fields in the union member correspond-
ing to the mcuboot_version field above.
Bootloaders
MCUboot Zephyr is directly compatible with the open-source, cross-RTOS MCUboot bootloader. Zephyr
interfaces with MCUboot and is aware of the image format it requires, so that Device Firmware
Upgrade is available when MCUboot is used as the bootloader with Zephyr. The source code itself is
hosted on the MCUboot GitHub project page.
In order to use MCUboot with Zephyr you need to take the following into account:
1. You will need to define the flash partitions required by MCUboot; see Flash map for details.
2. You will have to specify your flash partition as the chosen code partition:
/ {
chosen {
zephyr,code-partition = &slot0_partition;
};
};
3. Your application’s .conf file needs to enable the CONFIG_BOOTLOADER_MCUBOOT Kconfig option in
order for Zephyr to be built in an MCUboot-compatible manner
4. You need to build and flash MCUboot itself on your device
5. You might need to take precautions to avoid mass erasing the flash and also to flash the Zephyr
application image at the correct offset (right after the bootloader)
More detailed information regarding the use of MCUboot with Zephyr can be found in the MCUboot with
Zephyr documentation page on the MCUboot website.
Overview
Over-the-Air (OTA) Update is a method for delivering firmware updates to remote devices using a net-
work connection. Although the name implies a wireless connection, updates received over a wired
connection (such as Ethernet) are still commonly referred to as OTA updates. This approach requires
server infrastructure to host the firmware binary and implement a method of signaling when an update is
available. Security is a concern with OTA updates; firmware binaries should be cryptographically signed
and verified before upgrading.
The Device Firmware Upgrade section discusses upgrading Zephyr firmware using MCUboot. The same
method can be used as part of OTA. The binary is first downloaded into an unoccupied code partition,
usually named slot1_partition, then upgraded using the MCUboot process.
Examples of OTA
Golioth Golioth is an IoT management platform that includes OTA updates. Devices are configured to
observe your available firmware revisions on the Golioth Cloud. When a new version is available, the
device downloads and flashes the binary. In this implementation, the connection between cloud and
device is secured using TLS/DTLS, and the signed firmware binary is confirmed by MCUboot before the
upgrade occurs.
1. A working sample can be found on the Golioth Zephyr-SDK repository
2. The Golioth OTA documentation includes complete information about the versioning process
Eclipse hawkBit™ Eclipse hawkBit™ is an update server framework that uses polling on a REST API
to detect firmware updates. When a new update is detected, the binary is downloaded and installed.
MCUboot can be used to verify the signature before upgrading the firmware.
There is a hawkbit-api-sample included in the Zephyr mgmt-samples section.
UpdateHub UpdateHub is a platform for remotely updating embedded devices. Updates can be man-
ually triggered or monitored via polling. When a new update is detected, the binary is downloaded and
installed. MCUboot can be used to verify the signature before upgrading the firmware.
There is an updatehub_fota_sample included in the Zephyr mgmt-samples section.
SMP Server A Simple Management Protocol (SMP) server can be used to update firmware via Blue-
tooth Low Energy (BLE) or UDP. MCUmgr is used to send a signed firmware binary to the remote device
where it is verified by MCUboot before the upgrade occurs.
There is an smp_svr_sample included in the Zephyr mgmt-samples section.
Lightweight M2M (LWM2M) The Lightweight M2M (LWM2M) protocol includes support for firmware
update via CONFIG_LWM2M_FIRMWARE_UPDATE_OBJ_SUPPORT. Devices securely connect to an LwM2M
server using DTLS. An lwm2m-client-sample is available, but it does not demonstrate the firmware
update feature.
Overview
The host command protocol defines the interface for a host, or application processor, to communicate
with a target embedded controller (EC). The EC Host command subsystem implements the target side of
the protocol, generating responses to commands sent by the host. The host command protocol interface
supports multiple versions, but this subsystem implementation only supports protocol version 3.
Architecture
SHI (Serial Host Interface) differs from the other backends because it is used only for communication
with a host. SHI does not have an API of its own, so the backend and peripheral driver layers are
combined into one backend layer.
Initialization
If the application configures one of the following backend chosen nodes and
CONFIG_EC_HOST_CMD_INITIALIZE_AT_BOOT is set, then the corresponding backend initializes the
host command subsystem by calling ec_host_cmd_init() :
• zephyr,host-cmd-espi-backend
• zephyr,host-cmd-shi-backend
• zephyr,host-cmd-uart-backend
If no backend chosen node is configured, the application must call the ec_host_cmd_init() function
directly. This way of initialization is useful if a backend is chosen at runtime, e.g. based on a GPIO state.
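A runtime selection might be sketched as below. The strap GPIO, the devicetree node labels, and the backend accessor names (`ec_host_cmd_backend_get_espi()`, `ec_host_cmd_backend_get_uart()`) are assumptions for illustration; check the backend headers for the actual helpers available in your tree:

```c
#include <zephyr/mgmt/ec_host_cmd/ec_host_cmd.h>
#include <zephyr/mgmt/ec_host_cmd/backend.h>
#include <zephyr/drivers/gpio.h>

/* Hypothetical sketch: pick the host command backend at runtime based
 * on a board strap GPIO, then initialize the subsystem manually
 * instead of relying on a chosen node. */
int app_host_cmd_init(const struct gpio_dt_spec *strap)
{
	struct ec_host_cmd_backend *backend;

	if (gpio_pin_get_dt(strap) == 1) {
		backend = ec_host_cmd_backend_get_espi(
			DEVICE_DT_GET(DT_NODELABEL(espi0)));
	} else {
		backend = ec_host_cmd_backend_get_uart(
			DEVICE_DT_GET(DT_NODELABEL(uart1)));
	}

	return ec_host_cmd_init(backend);
}
```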
Buffers
The host command communication requires buffers for rx and tx. The buffers are provided
by the general handler if CONFIG_EC_HOST_CMD_HANDLER_RX_BUFFER_SIZE > 0 for the rx buffer and
CONFIG_EC_HOST_CMD_HANDLER_TX_BUFFER_SIZE > 0 for the tx buffer. The shared buffers are useful
for applications that use multiple backends, since defining separate buffers for every backend would
increase memory usage. However, some buffers can be defined by a peripheral driver, e.g. eSPI;
these should be reused as much as possible.
API Reference
group ec_host_cmd_interface
EC Host Command Interface.
Defines
Typedefs
Param rx_ctx
[inout] Pointer to the receive context object. These objects are used to receive
data from the driver when the host sends data. The buf member can be assigned
by the backend.
Param tx
[inout] Pointer to the transmit buffer object. The buf and len_max members can
be assigned by the backend. These objects are used to send data by the backend
with the ec_host_cmd_backend_api_send function.
Retval 0
if successful
Enums
enum ec_host_cmd_status
Values:
enumerator EC_HOST_CMD_SUCCESS = 0
Host command was successful.
enumerator EC_HOST_CMD_INVALID_COMMAND = 1
The specified command id is not recognized or supported.
enumerator EC_HOST_CMD_ERROR = 2
Generic Error.
enumerator EC_HOST_CMD_INVALID_PARAM = 3
One or more of the input request parameters is invalid.
enumerator EC_HOST_CMD_ACCESS_DENIED = 4
Host command is not permitted.
enumerator EC_HOST_CMD_INVALID_RESPONSE = 5
Response was invalid (e.g. not version 3 of header).
enumerator EC_HOST_CMD_INVALID_VERSION = 6
Host command id version unsupported.
enumerator EC_HOST_CMD_INVALID_CHECKSUM = 7
Checksum did not match.
enumerator EC_HOST_CMD_IN_PROGRESS = 8
A host command is currently being processed.
enumerator EC_HOST_CMD_UNAVAILABLE = 9
Requested information is currently unavailable.
enumerator EC_HOST_CMD_TIMEOUT = 10
Timeout during processing.
enumerator EC_HOST_CMD_OVERFLOW = 11
Data or table overflow.
enumerator EC_HOST_CMD_INVALID_HEADER = 12
Header is invalid or unsupported (e.g. not version 3 of header).
enumerator EC_HOST_CMD_REQUEST_TRUNCATED = 13
Did not receive all expected request data.
enumerator EC_HOST_CMD_RESPONSE_TOO_BIG = 14
Response was too big to send within one response packet.
enumerator EC_HOST_CMD_BUS_ERROR = 15
Error on underlying communication bus.
enumerator EC_HOST_CMD_BUSY = 16
System busy. Should retry later.
enumerator EC_HOST_CMD_INVALID_HEADER_VERSION = 17
Header version invalid.
enumerator EC_HOST_CMD_INVALID_HEADER_CRC = 18
Header CRC invalid.
enumerator EC_HOST_CMD_INVALID_DATA_CRC = 19
Data CRC invalid.
enumerator EC_HOST_CMD_DUP_UNAVAILABLE = 20
Can’t resend response.
Functions
Return values
0 – if successful.
const struct ec_host_cmd *ec_host_cmd_get_hc(void)
Get the main ec host command structure.
This routine returns a pointer to the main host command structure. It allows the application
code to get inside information for any reason e.g. the host command thread id.
Returns
A pointer to the main host command structure.
FUNC_NORETURN void ec_host_cmd_task(void)
The thread function for Host Command subsystem.
This routine calls the Host Command thread entry function. If
CONFIG_EC_HOST_CMD_DEDICATED_THREAD is not defined, a new thread is not created,
and this function has to be called by application code. It doesn’t return.
struct ec_host_cmd_rx_ctx
#include <backend.h> Context for host command backend and handler to pass rx data.
Public Members
uint8_t *buf
Buffer to hold received data. The buffer is provided by the handler if
CONFIG_EC_HOST_CMD_HANDLER_RX_BUFFER_SIZE > 0. Otherwise, the backend should
provide the buffer on its own and overwrite the buf pointer in the init function.
size_t len
Number of bytes written to buf by backend.
struct ec_host_cmd_tx_buf
#include <backend.h> Context for host command backend and handler to pass tx data.
Public Members
void *buf
Data to write to the host. The buffer is provided by the handler if
CONFIG_EC_HOST_CMD_HANDLER_TX_BUFFER_SIZE > 0. Otherwise, the backend should
provide the buffer on its own and overwrite the buf pointer and len_max in the init function.
size_t len
Number of bytes to write from buf.
size_t len_max
Size of buf.
struct ec_host_cmd_backend_api
#include <backend.h>
struct ec_host_cmd
#include <ec_host_cmd.h>
struct ec_host_cmd_handler_args
#include <ec_host_cmd.h> Arguments passed into every installed host command handler.
Public Members
void *reserved
Reserved for compatibility.
uint16_t command
Command identifier.
uint8_t version
The version of the host command that is being requested. This will be a value that has
been statically registered as valid for the handler.
uint16_t input_buf_size
The number of valid bytes that can be read from input_buf.
void *output_buf
The data written to this buffer will be sent to the host.
uint16_t output_buf_max
Maximum number of bytes that can be written to the output_buf.
uint16_t output_buf_size
Number of bytes of output_buf to send to the host.
struct ec_host_cmd_handler
#include <ec_host_cmd.h> Structure used for statically registering host command handlers.
Public Members
ec_host_cmd_handler_cb handler
Callback routine to process commands that match id.
uint16_t id
The numerical command id used as the lookup for commands.
uint16_t version_mask
The bitfield of all versions that the handler supports, where each bit value represents that
the handler supports that version. E.g. BIT(0) corresponds to version 0.
uint16_t min_rqt_size
The minimum input_buf_size enforced by the framework before passing to the handler.
uint16_t min_rsp_size
The minimum output_buf_size enforced by the framework before passing to the handler.
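Tying the fields above together, a handler definition might look like the following sketch. Plain designated initializers are shown for illustration; the command id, the request layout, and the handler body are hypothetical, while the field names come from the structure described above:

```c
#include <zephyr/mgmt/ec_host_cmd/ec_host_cmd.h>

/* Hypothetical request layout for an example command id 0x0100. */
struct example_rqt {
	uint32_t value;
};

static enum ec_host_cmd_status example_handler(
	struct ec_host_cmd_handler_args *args)
{
	const struct example_rqt *rqt = args->input_buf;

	if (args->input_buf_size < sizeof(*rqt)) {
		return EC_HOST_CMD_REQUEST_TRUNCATED;
	}
	args->output_buf_size = 0; /* no response payload */
	return EC_HOST_CMD_SUCCESS;
}

/* version_mask BIT(0) means only version 0 is supported; the framework
 * enforces min_rqt_size/min_rsp_size before invoking the handler. */
static const struct ec_host_cmd_handler example_cmd = {
	.handler = example_handler,
	.id = 0x0100,
	.version_mask = BIT(0),
	.min_rqt_size = sizeof(struct example_rqt),
	.min_rsp_size = 0,
};
```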
struct ec_host_cmd_request_header
#include <ec_host_cmd.h> Header for requests from host to embedded controller.
Represents the over-the-wire header in LE format for host command requests. This is version
3 of the host command header. Requests are always sent from the host to the embedded
controller.
Public Members
uint8_t prtcl_ver
Should be 3. The EC will return EC_HOST_CMD_INVALID_HEADER if it receives a header
with a version it doesn’t know how to parse.
uint8_t checksum
Checksum of response and data; sum of all bytes including checksum. Should total to 0.
uint16_t cmd_id
Id of command that is being sent.
uint8_t cmd_ver
Version of the specific cmd_id being requested. Valid versions start at 0.
uint8_t reserved
Unused byte in current protocol version; set to 0.
uint16_t data_len
Length of data which follows this header.
struct ec_host_cmd_response_header
#include <ec_host_cmd.h> Header for responses from embedded controller to host.
Represents the over-the-wire header in LE format for host command responses. This is version
3 of the host command header. Responses are always sent from the embedded controller
to the host.
Public Members
uint8_t prtcl_ver
Should be 3.
uint8_t checksum
Checksum of response and data; sum of all bytes including checksum. Should total to 0.
uint16_t result
A ec_host_cmd_status response code for specific command.
uint16_t data_len
Length of data which follows this header.
uint16_t reserved
Unused bytes in current protocol version; set to 0.
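Both headers define the checksum the same way: the sum of every byte of the message, including the checksum byte itself, must wrap to 0. A small self-contained helper illustrating the rule (the function name is hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

/* Compute the value for the checksum field, assuming that field is
 * zeroed when the sum is taken: the two's complement of the byte sum
 * makes the total, checksum included, wrap to 0 modulo 256. */
uint8_t ec_host_cmd_checksum(const uint8_t *msg, size_t len)
{
	uint8_t sum = 0;

	for (size_t i = 0; i < len; i++) {
		sum += msg[i];
	}
	return (uint8_t)(0u - sum);
}
```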
• Using zDSP
• Optimizing for your architecture
• API Reference
The DSP API provides an architecture-agnostic way to perform signal processing. Currently, the API will
work on any architecture, but it will likely not be optimized. The status of the various architectures can
be found below:
Architecture Status
ARC Optimized
ARM Optimized
ARM64 Optimized
MIPS Unoptimized
NIOS2 Unoptimized
POSIX Unoptimized
RISCV Unoptimized
RISCV64 Unoptimized
SPARC Unoptimized
X86 Unoptimized
XTENSA Unoptimized
zDSP provides various backend options which are selected automatically for the application. By default,
including the CMSIS module will enable all architectures to use the zDSP APIs. This can be done by
setting:
CONFIG_CMSIS_DSP=y
If your architecture is listed as Unoptimized, it is possible to add a new zDSP backend to better support
it. To do that, add a new Kconfig option to subsys/dsp/Kconfig along with the required dependencies,
and set it as the default for the DSP_BACKEND Kconfig choice.
Next, add the implementation at subsys/dsp/<backend>/ and link it in at subsys/dsp/
CMakeLists.txt. To add architecture-specific attributes, add a corresponding Kconfig option to
subsys/dsp/Kconfig and use it to update DSP_DATA and DSP_STATIC_DATA in include/
zephyr/dsp/dsp.h.
group math_dsp
DSP Interface.
Typedefs
Zephyr RTOS Virtual Filesystem Switch (VFS) allows applications to mount multiple file systems at dif-
ferent mount points (e.g., /fatfs and /lfs). The mount point data structure contains all the necessary
information required to instantiate, mount, and operate on a file system. The File System Switch
decouples applications from directly accessing an individual file system's specific API or internal
functions by introducing file system registration mechanisms.
In Zephyr, any file system implementation or library can be plugged into or pulled out through a file
system registration API. Each file system implementation must have a globally unique integer identifier;
use FS_TYPE_EXTERNAL_BASE to avoid clashes with in-tree identifiers.
Zephyr RTOS supports multiple instances of a file system by making use of the mount point as the disk
volume name, which is used by the file system library while formatting or mounting a disk.
A file system is declared by defining a struct fs_mount_t, where
• FS_FATFS is the file system type like FATFS or LittleFS.
• FATFS_MNTP is the mount point where the file system will be mounted.
• fat_fs is the file system data which will be used by fs_mount() API.
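Putting the fields above together, a FatFs declaration might look like the following sketch; the mount point string and variable names are illustrative:

```c
#include <zephyr/fs/fs.h>
#include <ff.h>

#define FATFS_MNTP "/fatfs"

/* File system specific data consumed by the FatFs driver. */
static FATFS fat_fs;

static struct fs_mount_t fatfs_mnt = {
	.type = FS_FATFS,
	.mnt_point = FATFS_MNTP,
	.fs_data = &fat_fs,
};

/* Later, e.g. from main(): fs_mount(&fatfs_mnt); */
```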
4.5.1 Samples
Samples for the VFS are mainly supplied in samples/subsys/fs, although various examples of the VFS
usage are provided as important functionalities in samples for different subsystems. Here is the list of
samples worth looking at:
• samples/subsys/fs/fat_fs is an example of FAT file system usage with SDHC media;
• samples/subsys/shell/fs is an example of the Shell fs subsystem, using an internal flash partition
formatted to LittleFS;
• samples/subsys/usb/mass is an example of a USB Mass Storage device that uses the FAT FS driver
with RAM- or SPI-connected flash, or LittleFS in flash, depending on the sample configuration.
group file_system_api
File System APIs.
FS_O_READ
Open for read flag
FS_O_WRITE
Open for write flag
FS_O_RDWR
Open for read-write flag combination
FS_O_MODE_MASK
Bitmask for read and write flags
FS_O_CREATE
Create file if it does not exist
FS_O_APPEND
Open/create file for append
FS_O_FLAGS_MASK
Bitmask for open/create flags
FS_O_MASK
Bitmask for open flags
FS_SEEK_SET
Seek from the beginning of file
FS_SEEK_CUR
Seek from a current position
FS_SEEK_END
Seek from the end of file
Defines
FS_MOUNT_FLAG_NO_FORMAT
Flag prevents formatting the device if the requested file system is not found
FS_MOUNT_FLAG_READ_ONLY
Flag makes mounted file system read-only
FS_MOUNT_FLAG_AUTOMOUNT
Flag used in pre-defined mount structures that are to be mounted on startup.
This flag has no impact in user-defined mount structures.
FS_MOUNT_FLAG_USE_DISK_ACCESS
Flag requests the file system driver to use the Disk Access API. When the flag is set in
fs_mount_t.flags prior to the fs_mount call, the file system must use the Disk Access API; other-
wise the mount callback for the driver should return -ENOTSUP. When the flag is not set, the file
system driver should use the Flash API by default, unless it only supports the Disk Access API. When
the file system will use the Disk Access API and the flag is not set, the mount callback for the file
system should set the flag on success.
FSTAB_ENTRY_DT_MOUNT_FLAGS(node_id)
FS_FSTAB_ENTRY(node_id)
The name under which a zephyr,fstab entry mount structure is defined.
FS_FSTAB_DECLARE_ENTRY(node_id)
Generate a declaration for the externally defined fstab entry.
This will evaluate to the name of a struct fs_mount_t object.
Enums
enum fs_dir_entry_type
Values:
enumerator FS_DIR_ENTRY_FILE = 0
Identifier for file entry
enumerator FS_DIR_ENTRY_DIR
Identifier for directory entry
enum [anonymous]
Enumeration to uniquely identify file system types.
Zephyr supports in-tree file systems and external ones. Each requires a unique identifier used
to register the file system implementation and to associate a mount point with the file system
type. This anonymous enum defines global identifiers for the in-tree file systems.
External file systems should be registered using unique identifiers starting at
FS_TYPE_EXTERNAL_BASE. It is the responsibility of applications that use external file
systems to ensure that these identifiers are unique if multiple file system implementations are
used by the application.
Values:
enumerator FS_FATFS = 0
Identifier for in-tree FatFS file system.
enumerator FS_LITTLEFS
Identifier for in-tree LittleFS file system.
enumerator FS_TYPE_EXTERNAL_BASE
Base identifier for external file systems.
Functions
Note: Current implementation does not allow moving files between mount points.
Parameters
• from – The source path
• to – The destination path
Return values
• 0 – on success;
• -EINVAL – when a bad file name is given, or when rename would cause move
between mount points;
• -EROFS – if file is read-only, or when file system has been mounted with the
FS_MOUNT_FLAG_READ_ONLY flag;
• -ENOTSUP – when not implemented by underlying file system driver;
• <0 – another negative errno code on error.
Return values
• >=0 – a number of bytes read, on success;
• -EBADF – when invoked on zfp that represents unopened/closed file;
• -ENOTSUP – when not implemented by underlying file system driver;
• <0 – a negative errno code on error.
ssize_t fs_write(struct fs_file_t *zfp, const void *ptr, size_t size)
Write file.
Attempts to write size bytes to the specified file. If a negative value is returned
from the function, the file pointer has not been advanced. If the function returns a non-
negative number lower than size, the global errno variable should be checked for an
error code, as the device may have run out of free space for the data.
Parameters
• zfp – Pointer to the file object
• ptr – Pointer to the data buffer
• size – Number of bytes to be written
Return values
• >=0 – a number of bytes written, on success;
• -EBADF – when invoked on zfp that represents unopened/closed file;
• -ENOTSUP – when not implemented by underlying file system driver;
• <0 – another negative errno code on error.
int fs_seek(struct fs_file_t *zfp, off_t offset, int whence)
Seek file.
Moves the file position to a new location in the file. The offset is added to file position based
on the whence parameter.
Parameters
• zfp – Pointer to the file object
• offset – Relative location to move the file pointer to
• whence – Relative location from where offset is to be calculated.
– FS_SEEK_SET for the beginning of the file;
– FS_SEEK_CUR for the current position;
– FS_SEEK_END for the end of the file.
Return values
• 0 – on success;
• -EBADF – when invoked on zfp that represents unopened/closed file;
• -ENOTSUP – if not supported by underlying file system driver;
• <0 – another negative errno code on error.
off_t fs_tell(struct fs_file_t *zfp)
Get current file position.
Retrieves and returns the current position in the file stream.
Parameters
• zfp – Pointer to the file object
Return values
• >=0 – the current position in the file;
• -EBADF – when invoked on zfp that represents unopened/closed file;
• -ENOTSUP – if not supported by underlying file system driver;
• <0 – another negative errno code on error.
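The calls above combine naturally with fs_open() and fs_close() (documented elsewhere in this API). A sketch of appending a record, where the mount path and the short-write policy are illustrative:

```c
#include <zephyr/fs/fs.h>
#include <errno.h>

/* Sketch: append a record to a log file on an already mounted volume.
 * The path "/lfs/log.bin" is a placeholder. */
int append_record(const void *rec, size_t len)
{
	struct fs_file_t file;
	ssize_t written;
	int err;

	fs_file_t_init(&file);
	err = fs_open(&file, "/lfs/log.bin", FS_O_CREATE | FS_O_WRITE);
	if (err) {
		return err;
	}

	err = fs_seek(&file, 0, FS_SEEK_END);
	if (err == 0) {
		written = fs_write(&file, rec, len);
		if (written < 0) {
			err = (int)written;
		} else if ((size_t)written < len) {
			/* Short write: check errno, the volume may be full. */
			err = -ENOSPC;
		}
	}

	fs_close(&file);
	return err;
}
```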
int fs_truncate(struct fs_file_t *zfp, off_t length)
Truncate or extend an open file to a given size.
Truncates the file to the new length if it is shorter than the current size of the file, or expands the
file if the new length is greater than the current size. The expanded region is filled with zeroes.
Note: In the case of expansion, if the volume becomes full during the expansion process, the
function will expand the file to the maximum possible length and return success. The caller should
check whether the expanded size matches the requested length.
Parameters
• zfp – Pointer to the file object
• length – New size of the file in bytes
Return values
• 0 – on success;
• -EBADF – when invoked on zfp that represents unopened/closed file;
• -ENOTSUP – when not implemented by underlying file system driver;
• <0 – another negative errno code on error.
Note: Closing a file causes caches to be flushed correctly, so this function need not be
called when the file is about to be closed.
Parameters
• zfp – Pointer to the file object
Return values
• 0 – on success;
• -EBADF – when invoked on zfp that represents unopened/closed file;
• -ENOTSUP – when not implemented by underlying file system driver;
• <0 – a negative errno code on error.
Note: Most existing underlying file systems do not generate the POSIX special directory entries
“.” or “..”. For consistency, the abstraction layer removes these from lower-layer results so that
higher layers see consistent results.
Parameters
• zdp – Pointer to the directory object
• entry – Pointer to zfs_dirent structure to read the entry into
Return values
• 0 – on success or end-of-dir;
• -ENOENT – when no such directory found;
• -ENOTSUP – when not implemented by underlying file system driver;
• <0 – a negative errno code on error.
Note: The current implementation of the ELM FAT driver allows only the following mount points:
“/RAM:”, “/NAND:”, “/CF:”, “/SD:”, “/SD2:”, “/USB:”, “/USB2:”, “/USB3:”, or mount points that
consist of a single digit, e.g. “/0:”, “/1:”, and so forth.
Parameters
• mp – Pointer to the fs_mount_t structure. Referenced object is not changed if the
mount operation failed. A reference is captured in the fs infrastructure if the
mount operation succeeds, and the application must not mutate the structure
contents until fs_unmount is successfully invoked on the same pointer.
Return values
• 0 – on success;
• -ENOENT – when file system type has not been registered;
• -ENOTSUP – when not supported by underlying file system driver; when
FS_MOUNT_FLAG_USE_DISK_ACCESS is set but driver does not support it.
• -EROFS – if system requires formatting but FS_MOUNT_FLAG_READ_ONLY has
been set;
• <0 – another negative errno code on error.
Note: The file on a storage device may not be updated until it is closed.
Parameters
• path – Path to the file or directory
• entry – Pointer to the zfs_dirent structure to fill if the file or directory exists.
Return values
• 0 – on success;
• -EINVAL – when a bad directory or file name is given;
• -ENOENT – when no such directory or file is found;
• -ENOTSUP – when not supported by underlying file system driver;
• <0 – negative errno code on error.
struct fs_mount_t
#include <fs.h> File system mount info structure.
Param node
Entry for the fs_mount_list list
Param type
File system type
Param mnt_point
Mount point directory name (ex: “/fatfs”)
Param fs_data
Pointer to file system specific data
Param storage_dev
Pointer to backend storage device
Param mountp_len
Length of Mount point string
Param fs
Pointer to File system interface of the mount point
Param flags
Mount flags
struct fs_dirent
#include <fs.h> Structure to receive file or directory information.
Used in functions that read directory entries to get file or directory information.
Param dir_entry_type
Whether file or directory
• FS_DIR_ENTRY_FILE
• FS_DIR_ENTRY_DIR
Param name
Name of directory or file
Param size
Size of file. 0 if directory
struct fs_statvfs
#include <fs.h> Structure to receive volume statistics.
Used to retrieve information about total and available space in the volume.
Param f_bsize
Optimal transfer block size
Param f_frsize
Allocation unit size
Param f_blocks
Size of FS in f_frsize units
Param f_bfree
Number of free blocks
struct fs_file_t
#include <fs_interface.h> File object representing an open file.
The object needs to be initialized with function fs_file_t_init().
Param filep
Pointer to the FATFS file object structure
Param mp
Pointer to mount point structure
struct fs_dir_t
#include <fs_interface.h> Directory object representing an open directory.
The object needs to be initialized with function fs_dir_t_init().
Param dirp
Pointer to directory object structure
Param mp
Pointer to mount point structure
struct fs_file_system_t
#include <fs_sys.h> File System interface structure.
Param open
Opens or creates a file, depending on flags given
Param read
Reads nbytes number of bytes
Param write
Writes nbytes number of bytes
Param lseek
Moves the file position to a new location in the file
Param tell
Retrieves the current position in the file
Param truncate
Truncates/expands the file to the new length
Param sync
Flushes the cache of an open file
Param close
Flushes the associated stream and closes the file
Param opendir
Opens an existing directory specified by the path
Param readdir
Reads directory entries of an open directory
Param closedir
Closes an open directory
Param mount
Mounts a file system
Param unmount
Unmounts a file system
Param unlink
Deletes the specified file or directory
Param rename
Renames a file or directory
Param mkdir
Creates a new directory using specified path
Param stat
Checks the status of a file or directory specified by the path
Param statvfs
Returns the total and available space on the file system volume
Param mkfs
Formats a device to specified file system type. Note that this operation destroys
existing data on a target device.
Applications, as well as Zephyr itself, require infrastructure to format values for user consumption. The
standard C99 library *printf() functionality fulfills this need for streaming output devices or memory
buffers, but in an embedded system devices may not accept streamed data and memory may not be
available to store the formatted output.
Internal Zephyr API traditionally provided this both for printk() and for Zephyr’s internal minimal libc,
but with separate internal interfaces. Logging, tracing, shell, and other applications made use of either
these APIs or standard libc routines based on build options.
The cbprintf() public APIs convert C99 format strings and arguments, providing output produced one
character at a time through a callback mechanism, replacing the original internal functions and providing
support for almost all C99 format specifications. Existing use of s*printf() C libraries in Zephyr can be
converted to snprintfcb() to avoid pulling in libc implementations.
Several Kconfig options control which features are enabled, providing some control over functionality
and memory usage:
• CONFIG_CBPRINTF_FULL_INTEGRAL or CONFIG_CBPRINTF_REDUCED_INTEGRAL
• CONFIG_CBPRINTF_FP_SUPPORT
• CONFIG_CBPRINTF_FP_A_SUPPORT
• CONFIG_CBPRINTF_FP_ALWAYS_A
• CONFIG_CBPRINTF_N_SPECIFIER
CONFIG_CBPRINTF_LIBC_SUBSTS can be used to provide functions that behave like standard libc functions
but use the selected cbprintf formatter rather than pulling in another formatter from libc.
In addition, CONFIG_CBPRINTF_NANO can be used to revert to the very space-optimized but limited
formatter used for printk() before this capability was added.
Typically, strings are formatted synchronously when a function from the printf family is called. However,
in some cases it is beneficial to defer formatting. To do so, the state (the format string and its arguments)
must be captured in a self-contained package. Additionally, the package may contain copies of strings
involved in the formatting (the format string itself or any %s argument). The package's primary content
resembles a va_list stack frame, so standard formatting functions can be used to process it. Since the
package contains data that is processed as a va_list frame, strict alignment must be maintained. Due to
the required padding, the size of the package depends on its alignment. When a package is copied, it
should be copied to a memory block with the same alignment as the original.
A package can have one of the following variants:
• Self-contained - non-read-only strings are appended to the package. A string can be formatted from
such a package as long as the read-only string locations are accessible. The package may contain
information about where the read-only strings are located within the package; that information can
be used to convert the package to a fully self-contained package.
• Fully self-contained - all strings are appended to the package. A string can be formatted from such
a package without any external data.
• Transient - only the arguments are stored. The package contains information about where pointers
to non-read-only strings are located within the package. Optionally, it may also contain read-only
string location information. A string can be formatted from such a package as long as the non-read-
only strings are still valid and the read-only strings are accessible. Alternatively, the package can be
converted to a self-contained package, or to a fully self-contained package if information about the
read-only string locations is present.
A package can be created using one of two methods:
• runtime - using cbprintf_package() or cbvprintf_package(). This method scans the format string
and builds the package based on the detected format specifiers.
• static - the types of the arguments are detected at compile time by the preprocessor and the pack-
age is created as simple assignments to the provided memory. This method is significantly faster
than runtime packaging (more than 15 times) but has the following limitations: it requires the
_Generic keyword (a C11 feature) to be supported by the compiler, and it cannot distinguish be-
tween %p and %s when a char pointer is used. It treats all (unsigned) char pointers as %s and will
therefore attempt to append the string to the package. This case can be handled correctly during
conversion from a transient package to a self-contained package using the
CBPRINTF_PACKAGE_CONVERT_PTR_CHECK flag. However, that requires access to the format string,
which is not always possible, so it is recommended to cast char pointers used for %p to void *. A
logging warning is generated when cbprintf_package_convert() is called with the
CBPRINTF_PACKAGE_CONVERT_PTR_CHECK flag and a char pointer is used with %p.
Several Kconfig options control the behavior of packaging:
• CONFIG_CBPRINTF_PACKAGE_LONGDOUBLE
• CONFIG_CBPRINTF_STATIC_PACKAGE_CHECK_ALIGNMENT
It is possible to convert a package to a variant which contains more information, e.g. a transient pack-
age can be converted to a self-contained one. Conversion to a fully self-contained package is possible if
the CBPRINTF_PACKAGE_ADD_RO_STR_POS flag was used when the package was created.
cbprintf_package_copy() is used to calculate space needed for the new package and to copy and
convert a package.
The package format contains paddings which are platform specific. A package consists of a header, which
holds the size of the package (excluding appended strings) and the number of appended strings. It is
followed by the arguments, which contain alignment paddings and resemble a va_list stack frame. Next
comes the data associated with character pointer arguments used by the string that are not appended to
the package (but may be appended later by cbprintf_package_convert()). Finally, the package option-
ally contains the appended strings. Each string is preceded by a 1-byte header holding the index of the
location where the address argument is stored. During packaging the address is set to null; before string
formatting it is updated to point to the current string location within the package. Updating the address
argument must happen just before string formatting, since the address changes whenever the package is
copied.
Header (sizeof(void *) bytes):
    1 byte: Argument list size including header and fmt (in 32-bit words)
    1 byte: Number of strings appended to the package
    1 byte: Number of read-only string argument locations
    1 byte: Number of transient string argument locations
    Platform-specific padding to sizeof(void *)

Arguments:
    Pointer to fmt (or null if fmt is appended to the package)
    (optional padding for platform-specific alignment)
    argument 0
    (optional padding for platform-specific alignment)
    argument 1
    ...

String location information (optional):
    Indexes of the words within the package where read-only strings are located
    Pairs of argument index and argument location index where transient strings are located

Appended strings (optional):
    1 byte: Index within the package to the location of the associated argument
    Null-terminated string
    ...
• C11 _Generic support is required by the compiler to use static (fast) packaging.
• It is recommended to cast any character pointer used with the %p format specifier to another pointer
type (e.g. void *). If the format string is not accessible, only static packaging is possible and it will
append all detected strings; a character pointer used for %p will be considered a string pointer.
Copying from an unexpected location can have serious consequences (e.g., a memory fault or a
security violation).
group cbprintf_apis
Defines
CBPRINTF_PACKAGE_ALIGNMENT
Required alignment of the buffer used for packaging.
CBPRINTF_MUST_RUNTIME_PACKAGE(flags, ...)
Determine if string must be packaged in run time.
Static packaging can be applied if size of the package can be determined at compile time.
In general, the package size can be determined at compile time if there are no string arguments
that might be copied into the package body (i.e. arguments considered transient).
Note: By default, any char pointer is considered to point at a transient string. This can
be narrowed down to non-const pointers by using CBPRINTF_PACKAGE_CONST_CHAR_RO.
Parameters
• ... – String with arguments.
• flags – option flags. See Package flags..
Return values
• 1 – if string must be packaged in run time.
• 0 – string can be statically packaged.
• packaged – pointer to where the packaged data can be stored. Pass a null
pointer to skip packaging but still calculate the total space required. The data
stored here is relocatable, that is it can be moved to another contiguous block
of memory. It must be aligned to the size of the longest argument. It is recom-
mended to use CBPRINTF_PACKAGE_ALIGNMENT for alignment.
• inlen – set to the number of bytes available at packaged. If packaged is NULL
the value is ignored.
• outlen – variable updated to the number of bytes required to completely store
the packed information. If the input buffer was too small, it is set to -ENOSPC.
• align_offset – input buffer alignment offset in bytes. Where offset 0 means
that buffer is aligned to CBPRINTF_PACKAGE_ALIGNMENT. Xtensa requires
that packaged is aligned to CBPRINTF_PACKAGE_ALIGNMENT, so it must be a
multiple of CBPRINTF_PACKAGE_ALIGNMENT or 0.
• flags – option flags. See Package flags..
• ... – formatted string with arguments. Format string must be constant.
Typedefs
• c a character to output. The output behavior should be as if this was cast to an unsigned
char.
• ctx a pointer to an object that provides context for the output operation.
The declaration does not specify the parameter types. This allows a function like fputc to be
used without requiring the context pointer to be a FILE object.
Return
the value of c cast to an unsigned char then back to int, or a negative error code
that will be returned from cbprintf().
Param out
the function used to emit each generated character.
Param ctx
a pointer to an object that provides context for the external formatter.
Param fmt
a standard ISO C format string with characters and conversion specifications.
Param ap
captured stack arguments corresponding to the conversion specifications found
within fmt.
Return
vprintf like return values: the number of characters printed, or a negative error
value returned from external formatter.
Functions
int cbprintf_package(void *packaged, size_t len, uint32_t flags, const char *format, ...)
Capture state required to output formatted data later.
Like cbprintf() but instead of processing the arguments and emitting the formatted results
immediately all arguments are captured so this can be done in a different context, e.g. when
the output function can block.
In addition to the values extracted from arguments this will ensure that copies are made of the
necessary portions of any string parameters that are not confirmed to be stored in read-only
memory (hence assumed to be safe to refer to directly later).
Parameters
• packaged – pointer to where the packaged data can be stored. Pass a null
pointer to store nothing but still calculate the total space required. The data
stored here is relocatable, that is it can be moved to another contiguous block
of memory, provided that alignment is maintained. It must be
aligned to at least the size of a pointer.
• len – this must be set to the number of bytes available at packaged if it is not
null. If packaged is null then it indicates hypothetical buffer alignment offset
in bytes compared to CBPRINTF_PACKAGE_ALIGNMENT alignment. Buffer
alignment offset impacts returned size of the package. Xtensa requires that
buffer is always aligned to CBPRINTF_PACKAGE_ALIGNMENT, so it must be a
multiple of CBPRINTF_PACKAGE_ALIGNMENT or 0 when packaged is null.
• flags – option flags. See Package flags..
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ... – arguments corresponding to the conversion specifications found within
format.
Return values
• non-negative – the number of bytes successfully stored at packaged. This will
not exceed len.
• -EINVAL – if format is not acceptable
• -EFAULT – if packaged alignment is not acceptable
• -ENOSPC – if packaged was not null and the space required to store exceeds len.
int cbvprintf_package(void *packaged, size_t len, uint32_t flags, const char *format, va_list ap)
Capture state required to output formatted data later.
Like cbprintf() but instead of processing the arguments and emitting the formatted results
immediately all arguments are captured so this can be done in a different context, e.g. when
the output function can block.
In addition to the values extracted from arguments this will ensure that copies are made of the
necessary portions of any string parameters that are not confirmed to be stored in read-only
memory (hence assumed to be safe to refer to directly later).
Parameters
• packaged – pointer to where the packaged data can be stored. Pass a null
pointer to store nothing but still calculate the total space required. The data
stored here is relocatable, that is it can be moved to another contiguous block
of memory. The pointer must be aligned to a multiple of the largest element in
the argument list.
• len – this must be set to the number of bytes available at packaged. Ignored if
packaged is NULL.
• flags – option flags. See Package flags..
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ap – captured stack arguments corresponding to the conversion specifications
found within format.
Return values
• non-negative – the number of bytes successfully stored at packaged. This will
not exceed len.
• -EINVAL – if format is not acceptable
• -ENOSPC – if packaged was not null and the space required to store exceeds len.
int cbprintf_package_convert(void *in_packaged, size_t in_len, cbprintf_convert_cb cb, void
*ctx, uint32_t flags, uint16_t *strl, size_t strl_len)
Convert a package.
Converting may include appending strings used in the package to the package
body. If input package was created with CBPRINTF_PACKAGE_ADD_RO_STR_POS or
CBPRINTF_PACKAGE_ADD_RW_STR_POS, it contains information where strings are located
within the package. This information can be used to copy strings during the conversion.
cb is called with portions of the output package. At the end of the conversion cb is called with
null buffer.
Parameters
• in_packaged – Input package.
• in_len – Input package length. If 0, the package length will be retrieved from
in_packaged.
• cb – callback called with portions of the converted package. If null only length
of the output package is calculated.
• ctx – Context provided to the cb.
• flags – Flags. See Package flags..
• strl – [inout] if packaged is null, it is a pointer to the array where the first
strl_len string lengths will be stored. If packaged is not null, it contains the
lengths of the first strl_len strings. It can be used to optimize copying so that
string length is calculated only once (at the length calculation phase, when
packaged is null).
Parameters
• out – the function used to emit each generated character.
• formatter – external formatter function.
• ctx – a pointer to an object that provides context for the external formatter.
• packaged – the data required to generate the formatted output, as captured by
cbprintf_package() or cbvprintf_package(). The alignment requirement on this
data is the same as when it was initially created.
Returns
printf like return values: the number of characters printed, or a negative error
value returned from external formatter.
Parameters
• out – the function used to emit each generated character.
• ctx – context provided when invoking out
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ... – arguments corresponding to the conversion specifications found within
format.
Returns
the number of characters printed, or a negative error value returned from invok-
ing out.
static inline int cbvprintf(cbprintf_cb out, void *ctx, const char *format, va_list ap)
varargs-aware *printf-like output through a callback.
This is essentially vsprintf() except the output is generated character-by-character using the
provided out function. This allows formatting text of unbounded length without incurring
the cost of a temporary buffer.
Parameters
• out – the function used to emit each generated character.
• ctx – context provided when invoking out
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ap – a reference to the values to be converted.
Returns
the number of characters generated, or a negative error value returned from
invoking out.
static inline int cbvprintf_tagged_args(cbprintf_cb out, void *ctx, const char *format, va_list
ap)
varargs-aware *printf-like output through a callback with tagged arguments.
This is essentially vsprintf() except the output is generated character-by-character using the
provided out function. This allows formatting text of unbounded length without incurring
the cost of a temporary buffer.
Note that the argument list ap is tagged.
Parameters
• out – the function used to emit each generated character.
• ctx – context provided when invoking out
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ap – a reference to the values to be converted.
Returns
the number of characters generated, or a negative error value returned from
invoking out.
Parameters
• out – the function used to emit each generated character.
• ctx – context provided when invoking out
• packaged – the data required to generate the formatted output, as captured by
cbprintf_package() or cbvprintf_package(). The alignment requirement on this
data is the same as when it was initially created.
Returns
the number of characters printed, or a negative error value returned from invok-
ing out.
Parameters
• stream – the stream to which the output should be written.
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ... – arguments corresponding to the conversion specifications found within
format.
Parameters
• stream – the stream to which the output should be written.
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ap – a reference to the values to be converted.
Returns
The number of characters printed.
Parameters
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ... – arguments corresponding to the conversion specifications found within
format.
Returns
The number of characters printed.
Parameters
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ap – a reference to the values to be converted.
Returns
The number of characters printed.
Parameters
• str – where the formatted content should be written
• size – maximum number of characters for the formatted output, including the
terminating null byte.
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ... – arguments corresponding to the conversion specifications found within
format.
Returns
The number of characters that would have been written to str, excluding the
terminating null byte. This is greater than the number actually written if size is
too small.
int vsnprintfcb(char *str, size_t size, const char *format, va_list ap)
vsnprintf using Zephyr's cbprintf infrastructure.
Parameters
• str – where the formatted content should be written
• size – maximum number of characters for the formatted output, including the
terminating null byte.
• format – a standard ISO C format string with characters and conversion speci-
fications.
• ap – a reference to the values to be converted.
Returns
The number of characters that would have been written to str, excluding the
terminating null byte. This is greater than the number actually written if size is
too small.
4.7 Input
The input subsystem provides an API for dispatching input events from input devices to the application.
The subsystem is built around the input_event structure. An input event represents a change in an
individual event entity, for example the state of a single button, or a movement in a single axis.
The input_event structure describes the specific event, and includes a synchronization bit to indicate
that the device reached a stable state, for example when the events corresponding to multiple axes of a
multi-axis device have been reported.
An input device can report input events directly using input_report() or any related function; for
example buttons or other on-off input entities would use input_report_key() .
Complex devices may use a combination of multiple events, and set the sync bit once the output is stable.
The input_report* functions take a device pointer, which is used to indicate which device reported
the event and can be used by subscribers to only receive events from a specific device. If there’s no actual
device associated with the event, it can be set to NULL, in which case only subscribers with no device
filter will receive the event.
An application can register a callback using the INPUT_LISTENER_CB_DEFINE macro. If a device node
is specified, the callback is only invoked for events from the specific device, otherwise the callback will
receive all the events in the system. This is the only type of filtering supported, any more complex
filtering logic has to be implemented in the callback itself.
The subsystem can operate synchronously or by using an event queue, depending on the
CONFIG_INPUT_MODE option. If the input thread is used, all events are added to a queue and exe-
cuted in a common input thread. If the thread is not used, the callbacks are invoked directly in the
input driver context.
The synchronous mode can be used in a simple application to keep a minimal footprint, or in a complex
application with an existing event model, where the callback is just a wrapper to pipe back the event in
a more complex application specific event system.
Input devices generating X/Y/Touch events can be used in existing applications based on the Keyboard
Scan API by enabling both CONFIG_INPUT and CONFIG_KSCAN, defining a zephyr,kscan-input node as
a child node of the corresponding input device and pointing the zephyr,keyboard-scan chosen node to
the compatibility device node, for example:
chosen {
zephyr,keyboard-scan = &kscan_input;
};
ft5336@38 {
...
kscan_input: kscan-input {
compatible = "zephyr,kscan-input";
};
};
group input_interface
Input Interface.
Defines
INPUT_LISTENER_CB_DEFINE(_dev, _callback)
Register a callback structure for input events.
The _dev field can be used to invoke the callback only for events generated by a specific device.
Setting _dev to NULL causes the callback to be invoked for every event.
Parameters
• _dev – device pointer or NULL.
• _callback – The callback function.
Functions
int input_report(const struct device *dev, uint8_t type, uint16_t code, int32_t value, bool sync,
k_timeout_t timeout)
Report a new input event.
This causes all the listeners for the specified device to be triggered, either synchronously or
through the input thread if utilized.
Parameters
• dev – Device generating the event or NULL.
• type – Event type (see INPUT_EV_CODES).
• code – Event code (see INPUT_KEY_CODES, INPUT_BTN_CODES, IN-
PUT_ABS_CODES, INPUT_REL_CODES, INPUT_MSC_CODES).
• value – Event value.
• sync – Set the synchronization bit for the event.
• timeout – Timeout for reporting the event, ignored if
CONFIG_INPUT_MODE_SYNCHRONOUS is used.
Return values
• 0 – if the message has been processed.
• negative – if CONFIG_INPUT_MODE_THREAD is enabled and the message failed
to be enqueued.
static inline int input_report_key(const struct device *dev, uint16_t code, int32_t value, bool
sync, k_timeout_t timeout)
Report a new INPUT_EV_KEY input event, note that value is converted to either 0 or 1.
See also:
input_report() for more details.
static inline int input_report_rel(const struct device *dev, uint16_t code, int32_t value, bool
sync, k_timeout_t timeout)
Report a new INPUT_EV_REL input event.
See also:
input_report() for more details.
static inline int input_report_abs(const struct device *dev, uint16_t code, int32_t value, bool
sync, k_timeout_t timeout)
Report a new INPUT_EV_ABS input event.
See also:
input_report() for more details.
bool input_queue_empty(void)
Returns true if the input queue is empty.
This can be used to batch input event processing until the whole queue has been emptied.
Always returns true if CONFIG_INPUT_MODE_SYNCHRONOUS is enabled.
struct input_event
#include <input.h> Input event structure.
This structure represents a single input event, for example a key or button press for a single
button, or an absolute or relative coordinate for a single axis.
Public Members
uint8_t sync
Sync flag.
uint8_t type
Event type (see INPUT_EV_CODES).
uint16_t code
Event code (see INPUT_KEY_CODES, INPUT_BTN_CODES, INPUT_ABS_CODES, IN-
PUT_REL_CODES, INPUT_MSC_CODES).
int32_t value
Event value.
struct input_listener
#include <input.h> Input listener callback structure.
Public Members
group input_events
INPUT_EV_KEY
INPUT_EV_REL
INPUT_EV_ABS
INPUT_EV_MSC
INPUT_EV_VENDOR_START
INPUT_EV_VENDOR_STOP
INPUT_KEY_0
INPUT_KEY_1
INPUT_KEY_2
INPUT_KEY_3
INPUT_KEY_4
INPUT_KEY_5
INPUT_KEY_6
INPUT_KEY_7
INPUT_KEY_8
INPUT_KEY_9
INPUT_KEY_A
INPUT_KEY_B
INPUT_KEY_C
INPUT_KEY_D
INPUT_KEY_E
INPUT_KEY_F
INPUT_KEY_G
INPUT_KEY_H
INPUT_KEY_I
INPUT_KEY_J
INPUT_KEY_K
INPUT_KEY_L
INPUT_KEY_M
INPUT_KEY_N
INPUT_KEY_O
INPUT_KEY_P
INPUT_KEY_Q
INPUT_KEY_R
INPUT_KEY_S
INPUT_KEY_T
INPUT_KEY_U
INPUT_KEY_V
INPUT_KEY_VOLUMEDOWN
INPUT_KEY_VOLUMEUP
INPUT_KEY_W
INPUT_KEY_X
INPUT_KEY_Y
INPUT_KEY_Z
INPUT_BTN_DPAD_DOWN
INPUT_BTN_DPAD_LEFT
INPUT_BTN_DPAD_RIGHT
INPUT_BTN_DPAD_UP
INPUT_BTN_EAST
INPUT_BTN_LEFT
INPUT_BTN_MIDDLE
INPUT_BTN_MODE
INPUT_BTN_NORTH
INPUT_BTN_RIGHT
INPUT_BTN_SELECT
INPUT_BTN_SOUTH
INPUT_BTN_START
INPUT_BTN_THUMBL
INPUT_BTN_THUMBR
INPUT_BTN_TL
INPUT_BTN_TL2
INPUT_BTN_TOUCH
INPUT_BTN_TR
INPUT_BTN_TR2
INPUT_BTN_WEST
INPUT_ABS_RX
INPUT_ABS_RY
INPUT_ABS_RZ
INPUT_ABS_X
INPUT_ABS_Y
INPUT_ABS_Z
INPUT_REL_RX
INPUT_REL_RY
INPUT_REL_RZ
INPUT_REL_X
INPUT_REL_Y
INPUT_REL_Z
INPUT_MSC_SCAN
• Overview
• Simple data exchange
• Data exchange using the no-copy API
– Backends
– API Reference
• IPC service API
• IPC service backend API
The IPC service API provides an interface to exchange data between two domains or CPUs.
Overview
An IPC service communication channel consists of one instance and one or several endpoints associated
with the instance.
An instance is the external representation of a physical communication channel between two domains or
CPUs. The actual implementation and internal representation of the instance are specific to each backend.
An individual instance is not used to send data between domains/CPUs. To send and receive the data,
the user must create (register) an endpoint in the instance. This allows for the connection of the two
domains of interest.
It is possible to have zero or multiple endpoints for one single instance, possibly with different priorities,
and to use each to exchange data. Endpoint prioritization and multi-instance ability highly depend on
the backend used.
The endpoint is an entity the user must use to send and receive data between two domains (connected
by the instance). An endpoint is always associated to an instance.
The creation of the instances is left to the backend, usually at init time. The registration of the endpoints
is left to the user, usually at run time.
The API does not mandate a way for the backend to create instances but it is strongly recommended
to use the devicetree to retrieve the configuration parameters for an instance. Currently, each backend
defines its own DT-compatible configuration that is used to configure the interface at boot time.
The following usage scenarios are supported:
• Simple data exchange.
• Data exchange using the no-copy API.
To send data between domains or CPUs, an endpoint must be registered onto an instance.
See the following example:
Note: Before registering an endpoint, the instance must be opened using the
ipc_service_open_instance() function.
#include <zephyr/ipc/ipc_service.h>

static struct ipc_ept_cfg ept0_cfg = {
    .name = "ept0",
    /* .cb callbacks (bound, received, ...) omitted for brevity */
};

int main(void)
{
    const struct device *inst0;
    struct ipc_ept ept0;
    int ret;

    inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
    ret = ipc_service_open_instance(inst0);
    ret = ipc_service_register_endpoint(inst0, &ept0, &ept0_cfg);
}
If the backend supports the no-copy API you can use it to directly write and read to and from shared
memory regions.
See the following example:
#include <zephyr/ipc/ipc_service.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    const struct device *inst0;
    struct ipc_ept ept0;
    void *data;
    uint32_t len = sizeof(uint32_t);
    int ret;

    inst0 = DEVICE_DT_GET(DT_NODELABEL(ipc0));
    ret = ipc_service_open_instance(inst0);
    ret = ipc_service_register_endpoint(inst0, &ept0, &ept0_cfg);

    /* Once the endpoint is bound, write directly into a TX buffer. */
    ret = ipc_service_get_tx_buffer(&ept0, &data, &len, K_FOREVER);
    memcpy(data, &(uint32_t){ 0xDEADBEEF }, sizeof(uint32_t));
    ret = ipc_service_send_nocopy(&ept0, data, sizeof(uint32_t));
}
Backends The requirements for implementing backends give flexibility to the IPC service.
They allow for the addition of dedicated backends providing only a subset of features for specific use
cases.
The backend must support at least the following:
• The init-time creation of instances.
• The run-time registration of an endpoint in an instance.
Additionally, the backend can also support the following:
• The run-time deregistration of an endpoint from the instance.
• The run-time closing of an instance.
• The no-copy API.
Each backend can have its own limitations and features that make the backend unique and dedicated to
a specific use case. The IPC service API can be used with multiple backends simultaneously, combining
the pros and cons of each backend.
ICMsg backend The inter core messaging backend (ICMsg) is a lighter alternative to the heavier RPMsg
static vrings backend. It offers a minimal feature set in a small memory footprint. The ICMsg backend is
built on top of the Single Producer Single Consumer Packet Buffer.
Overview The ICMsg backend uses shared memory and MBOX devices for exchanging data. Shared
memory is used to store the data, MBOX devices are used to signal that the data has been written.
The backend supports the registration of a single endpoint on a single instance. If the application re-
quires more than one communication channel, you must define multiple instances, each having its own
dedicated endpoint.
Configuration The backend is configured via Kconfig and devicetree. When configuring the backend,
do the following:
• Define two memory regions and assign them to tx-region and rx-region of an instance. Ensure
that the memory regions used for data exchange are unique (not overlapping any other region)
and accessible by both domains (or CPUs).
• Define MBOX devices which are used to send the signal that informs the other domain (or CPU)
that data has been written. Ensure that the other domain (or CPU) is able to receive the signal.
See the following configuration example for one of the instances:
reserved-memory {
tx: memory@20070000 {
reg = <0x20070000 0x0800>;
};
rx: memory@20078000 {
reg = <0x20078000 0x0800>;
};
};
ipc {
ipc0: ipc0 {
compatible = "zephyr,ipc-icmsg";
tx-region = <&tx>;
rx-region = <&rx>;
mboxes = <&mbox 0>, <&mbox 1>;
mbox-names = "tx", "rx";
status = "okay";
};
};
You must provide a similar configuration for the other side of the communication (domain or CPU) but
you must swap the MBOX channels and memory regions (tx-region and rx-region).
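Assuming the same node labels as in the example above, the mirrored instance on the remote side might look like the following sketch (the MBOX channel assignment is illustrative; one side's tx-region is the other side's rx-region):

```devicetree
ipc {
	ipc0: ipc0 {
		compatible = "zephyr,ipc-icmsg";
		/* Swapped relative to the first side. */
		tx-region = <&rx>;
		rx-region = <&tx>;
		mboxes = <&mbox 1>, <&mbox 0>;
		mbox-names = "tx", "rx";
		status = "okay";
	};
};
```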
Bonding When the endpoint is registered, the following happens on each domain (or CPU) connected
through the IPC instance:
1. The domain (or CPU) writes a magic number to its tx-region of the shared memory.
2. It then sends a signal to the other domain (or CPU), informing it that the data has been written.
Sending the signal is repeated with a timeout specified by the
CONFIG_IPC_SERVICE_ICMSG_BOND_NOTIFY_REPEAT_TO_MS option.
3. When the signal from the other domain (or CPU) is received, the magic number is read from
rx-region. If it is correct, the bonding process is finished and the backend informs the application
by calling the ipc_service_cb.bound callback.
Samples
• ipc_icmsg_sample
API Reference
group ipc_service_api
IPC Service API.
Functions
Note: Keep the variable pointed to by cfg alive while the endpoint is in use.
Parameters
• instance – [in] Instance to register the endpoint onto.
• ept – [in] Endpoint object.
• cfg – [in] Endpoint configuration.
Return values
• -EIO – when no backend is registered.
• -EINVAL – when instance, endpoint or configuration is invalid.
• -EBUSY – when the instance is busy.
• 0 – on success.
• other – errno codes depending on the implementation of the backend.
• -EBADMSG – when the data is invalid (i.e. invalid data format, invalid length,
...)
• -EBUSY – when the instance is busy.
• bytes – number of bytes sent.
• other – errno codes depending on the implementation of the backend.
int ipc_service_hold_rx_buffer(struct ipc_ept *ept, void *data)
Holds the RX buffer for usage outside the receive callback.
Calling this function prevents the receive buffer from being released back to the pool of shmem
buffers. This function can be called in the receive callback when the user does not want to
copy the message out in the callback itself.
After the message is processed, the application must release the buffer using the
ipc_service_release_rx_buffer function.
Parameters
• ept – [in] Registered endpoint by ipc_service_register_endpoint.
• data – [in] Pointer to the RX buffer to hold.
Return values
• -EIO – when no backend is registered or release hook is missing from backend.
• -EINVAL – when instance or endpoint is invalid.
• -ENOENT – when the endpoint is not registered with the instance.
• -EALREADY – when the buffer data has already been held.
• -ENOTSUP – when this is not supported by backend.
• 0 – on success.
• other – errno codes depending on the implementation of the backend.
int ipc_service_release_rx_buffer(struct ipc_ept *ept, void *data)
Release the RX buffer for future reuse.
When supported by the backend, this function can be called after the received message has
been processed and the buffer can be marked as reusable again.
It is possible to release only RX buffers on which ipc_service_hold_rx_buffer was previously
used.
Parameters
• ept – [in] Registered endpoint by ipc_service_register_endpoint.
• data – [in] Pointer to the RX buffer to release.
Return values
• -EIO – when no backend is registered or release hook is missing from backend.
• -EINVAL – when instance or endpoint is invalid.
• -ENOENT – when the endpoint is not registered with the instance.
• -EALREADY – when the buffer data has been already released.
• -ENOTSUP – when this is not supported by backend.
• -ENXIO – when the buffer was not held using ipc_service_hold_rx_buffer beforehand.
• 0 – on success.
• other – errno codes depending on the implementation of the backend.
struct ipc_service_cb
#include <ipc_service.h> Event callback structure.
It is registered during endpoint registration. This structure is part of the endpoint configura-
tion.
Public Members
Param data
[in] Pointer to data buffer.
Param len
[in] Length of data.
Param priv
[in] Private user data.
struct ipc_ept
#include <ipc_service.h> Endpoint instance.
The token is not important to the user of the API; it is implemented by the specific backend.
Public Members
void *token
Backend-specific token used to identify an endpoint in an instance.
struct ipc_ept_cfg
#include <ipc_service.h> Endpoint configuration structure.
Public Members
int prio
Endpoint priority. If the backend supports priorities.
struct ipc_service_cb cb
Event callback structure.
void *priv
Private user data.
group ipc_service_backend
IPC service backend.
struct ipc_service_backend
#include <ipc_service_backend.h> IPC backend configuration structure.
This structure is used to configure the backend during registration.
Public Members
int (*send)(const struct device *instance, void *token, const void *data, size_t len)
Pointer to the function that will be used to send data to the endpoint.
Param instance
[in] Instance pointer.
Param token
[in] Backend-specific token.
Param data
[in] Pointer to the buffer to send.
Param len
[in] Number of bytes to send.
Retval -EINVAL
when instance is invalid.
Retval -ENOENT
when the endpoint is not registered with the instance.
Retval -EBADMSG
when the message is invalid.
Retval -EBUSY
when the instance is busy or not ready.
Retval -ENOMEM
when no memory / buffers are available.
Retval bytes
number of bytes sent.
Retval other
errno codes depending on the implementation of the backend.
int (*get_tx_buffer)(const struct device *instance, void *token, void **data, uint32_t *len,
k_timeout_t wait)
Pointer to the function that will return an empty TX buffer.
Param instance
[in] Instance pointer.
Param token
[in] Backend-specific token.
Param data
[out] Pointer to the empty TX buffer.
Param len
[inout] Pointer to store the TX buffer size.
Param wait
[in] Timeout waiting for an available TX buffer.
Retval -EINVAL
when instance is invalid.
Retval -ENOENT
when the endpoint is not registered with the instance.
Retval -ENOTSUP
when the operation or the timeout is not supported.
Retval -ENOBUFS
when there are no TX buffers available.
Retval -EALREADY
when a buffer was already claimed and not yet released.
Retval -ENOMEM
when the requested size is too big (and the size parameter contains the maxi-
mum allowed size).
Retval 0
on success
Retval other
errno codes depending on the implementation of the backend.
int (*drop_tx_buffer)(const struct device *instance, void *token, const void *data)
Pointer to the function that will drop a TX buffer.
Param instance
[in] Instance pointer.
Param token
[in] Backend-specific token.
Param data
[in] Pointer to the TX buffer.
Retval -EINVAL
when instance is invalid.
Retval -ENOENT
when the endpoint is not registered with the instance.
Retval -ENOTSUP
when this function is not supported.
Retval -EALREADY
when the buffer was already dropped.
Retval 0
on success
Retval other
errno codes depending on the implementation of the backend.
int (*send_nocopy)(const struct device *instance, void *token, const void *data, size_t len)
Pointer to the function that will be used to send data to the endpoint when the TX buffer
has been obtained using ipc_service_get_tx_buffer.
Param instance
[in] Instance pointer.
Param token
[in] Backend-specific token.
Param data
[in] Pointer to the buffer to send.
Param len
[in] Number of bytes to send.
Retval -EINVAL
when instance is invalid.
Retval -ENOENT
when the endpoint is not registered with the instance.
Retval -EBADMSG
when the data is invalid (i.e. invalid data format, invalid length, . . . )
Retval -EBUSY
when the instance is busy or not ready.
Retval bytes
number of bytes sent.
Retval other
errno codes depending on the implementation of the backend.
4.9 Logging
– Log message
– Logger backend interface
– Logger output formatting
The logging API provides a common interface to process messages issued by developers. Messages are
passed through a frontend and are then processed by active backends. Custom frontend and backends
can be used if needed.
Summary of the logging features:
• Deferred logging reduces the time needed to log a message by shifting time consuming operations
to a known context instead of processing and sending the log message when called.
• Multiple backends supported (up to 9 backends).
• Custom frontend support. It can work together with backends.
• Compile time filtering on module level.
• Run time filtering independent for each backend.
• Additional run time filtering on module instance level.
• Timestamping with user provided function. Timestamp can have 32 or 64 bits.
• Dedicated API for dumping data.
• Dedicated API for handling transient strings.
• Panic support - in panic mode logging switches to blocking, synchronous processing.
• Printk support - printk message can be redirected to the logging.
• Design ready for multi-domain/multi-processor system.
• Support for logging floating point variables and long long arguments.
• Built-in copying of transient strings used as arguments.
• Support for multi-domain logging.
The logging API is highly configurable at compile time as well as at run time. Using Kconfig options (see Global Kconfig Options), logs can be gradually removed from compilation to reduce image size and execution time when logs are not needed. During compilation, logs can be filtered out on a per-module basis and by severity level.
Logs can also be compiled in but filtered at run time using a dedicated API. Run-time filtering is independent for each backend and each source of log messages. A source of log messages can be a module or a specific instance of a module.
There are four severity levels available in the system: error, warning, info and debug. For each severity level, the logging API (include/zephyr/logging/log.h) has a set of dedicated macros. The logger API also has macros for logging data.
For each level, the following set of macros is available:
• LOG_X for standard printf-like messages, e.g. LOG_ERR .
• LOG_HEXDUMP_X for dumping data, e.g. LOG_HEXDUMP_WRN .
• LOG_INST_X for standard printf-like messages associated with a particular instance, e.g. LOG_INST_INF.
• LOG_INST_HEXDUMP_X for dumping data associated with a particular instance, e.g. LOG_INST_HEXDUMP_DBG.
There are two configuration categories: per-module and global. When logging is enabled globally, it works for all modules; however, each module can disable logging locally. Every module can specify its own logging level; the module must define the LOG_LEVEL macro before using the API. Unless a global override is set, the module logging level is honored. The global override can only increase the logging level; it cannot lower module logging levels that were set higher. It is also possible to globally limit logs by providing the maximal severity level present in the system, where maximal means lowest severity (e.g. if the maximal level in the system is set to info, then errors, warnings and info levels are present but debug messages are excluded).
Each module which uses logging must specify a unique name and register itself with the logging subsystem. If a module consists of more than one file, registration is performed in one file, but each file must define the module name.
The logger's default frontend is designed to be thread-safe and to minimize the time needed to log a message. Time-consuming operations like string formatting or access to the transport are not performed by default when the logging API is called; instead, a message is created and added to a list. A dedicated, configurable buffer holds the pool of log messages. There are two types of messages: standard and hexdump. Each message contains a source ID (module or instance ID, and a domain ID which might be used for multiprocessor systems), a timestamp and a severity level. A standard message contains a pointer to the string and the arguments; a hexdump message contains copied data and a string.
4.9.2 Usage
Logging in a module
In order to use logging in a module, a unique module name must be specified and the module must be registered using LOG_MODULE_REGISTER. Optionally, a compile-time log level for the module can be specified as the second parameter. The default log level (CONFIG_LOG_DEFAULT_LEVEL) is used if a custom log level is not provided.
#include <zephyr/logging/log.h>
LOG_MODULE_REGISTER(foo, CONFIG_FOO_LOG_LEVEL);
If the module consists of multiple files, then LOG_MODULE_REGISTER() should appear in exactly one
of them. Each other file should use LOG_MODULE_DECLARE to declare its membership in the module.
Optionally, a compile time log level for the module can be specified as the second parameter. Default log
level (CONFIG_LOG_DEFAULT_LEVEL) is used if custom log level is not provided.
#include <zephyr/logging/log.h>
/* In all files comprising the module but one */
LOG_MODULE_DECLARE(foo, CONFIG_FOO_LOG_LEVEL);
In order to use the logging API in a function implemented in a header file, the LOG_MODULE_DECLARE macro must be used in the function body before the logging API is called. Optionally, a compile-time log level for the module can be specified as the second parameter. The default log level (CONFIG_LOG_DEFAULT_LEVEL) is used if a custom log level is not provided.
#include <zephyr/logging/log.h>

static inline void foo(void)
{
	LOG_MODULE_DECLARE(foo, CONFIG_FOO_LOG_LEVEL);

	LOG_INF("foo");
}
A log level configuration option for the module can be added using the Kconfig template:
module = FOO
module-str = foo
source "subsys/logging/Kconfig.template.log_config"
In the case of multi-instance modules whose instances are widely used across the system, enabling logs may lead to flooding. The logger provides tools to filter on the instance level rather than the module level; in that case, logging can be enabled for a particular instance only.
In order to use instance-level filtering, the following steps must be performed:
• A pointer to a dedicated logging structure is declared in the instance structure, using LOG_INSTANCE_PTR_DECLARE.
#include <zephyr/logging/log_instance.h>
struct foo_object {
	LOG_INSTANCE_PTR_DECLARE(log);
	uint32_t id;
};
• The module must provide a macro for instantiation. In that macro, the logging instance is registered and the log instance pointer is initialized in the object structure.
#define FOO_OBJECT_DEFINE(_name) \
LOG_INSTANCE_REGISTER(foo, _name, CONFIG_FOO_LOG_LEVEL) \
struct foo_object _name = { \
LOG_INSTANCE_PTR_INIT(log, foo, _name) \
}
Note that when logging is disabled, the logging instance and the pointer to it are not created.
In order to use the instance logging API in a source file, a compile-time log level must be set using
LOG_LEVEL_SET .
LOG_LEVEL_SET(CONFIG_FOO_LOG_LEVEL);
In order to use the instance logging API in a header file, a compile-time log level must be set using
LOG_LEVEL_SET .
LOG_LEVEL_SET(CONFIG_FOO_LOG_LEVEL);

static inline void foo_init(struct foo_object *f)
{
	LOG_INST_INF(f->log, "Initialized.");
}
By default, logging processing in deferred mode is handled internally by a dedicated thread which starts automatically. However, it might not be available if multithreading is disabled. It can also be disabled by unsetting CONFIG_LOG_PROCESS_TRIGGER_THRESHOLD. In that case, logging can be controlled using the API defined in include/zephyr/logging/log_ctrl.h. Logging must be initialized before it can be used. Optionally, the user can provide a function which returns the timestamp value. If not provided, k_cycle_get or k_cycle_get_32 is used for timestamping. The log_process() function is used to trigger processing of one log message (if pending), and returns true if more messages are pending. It is recommended to use the macro wrappers (LOG_INIT and LOG_PROCESS), which handle the case when logging is disabled.
The following snippet shows how logging can be processed in a simple forever loop.
#include <zephyr/logging/log_ctrl.h>
int main(void)
{
LOG_INIT();
/* If multithreading is enabled provide thread id to the logging. */
log_thread_set(k_current_get());
while (1) {
if (LOG_PROCESS() == false) {
/* sleep */
}
}
}
If logs are processed from a thread (user or internal), it is possible to enable a feature which wakes up the processing thread when a certain number of log messages are buffered (see CONFIG_LOG_PROCESS_TRIGGER_THRESHOLD).
Under an error condition, the system usually can no longer rely on the scheduler or interrupts. In that situation, deferred log message processing is not an option. The logger controlling API provides a function for entering panic mode (log_panic()) which should be called in such a situation.
When log_panic() is called, a panic notification is sent to all active backends. Once all backends are notified, all buffered messages are flushed. From that moment on, all logs are processed in a blocking way.
4.9.4 Printk
Typically, logging and printk() use the same output, for which they compete. This can lead to issues if the output does not support preemption, and may also result in corrupted output because logging data is interleaved with printk data. However, it is possible to redirect printk messages to the logging subsystem by enabling CONFIG_LOG_PRINTK (enabled by default). In that case, printk entries are treated as log messages with level 0 (they cannot be disabled), and logging manages the output so there is no interleaving. Note that in deferred mode this changes the behavior of printk, because output is delayed until the logging thread processes the data.
4.9.5 Architecture
• Core
• Backends
Log message is generated by a source of logging which can be a module or instance of a module.
Default Frontend
The default frontend is engaged when the logging API is called in a source of logging (e.g. LOG_INF) and is responsible for filtering the message (at compile time and run time), allocating a buffer for the message, creating the message and committing it. Since the logging API can be called in an interrupt, the frontend is optimized to log the message as fast as possible.
Log message A log message contains a message descriptor (source, domain and level), a timestamp, formatted string details (see Cbprintf Packaging) and optional data. Log messages are stored in a contiguous block of memory. Memory is allocated from a circular packet buffer (Multi Producer Single Consumer Packet Buffer). This has a few consequences:
• Each message is a self-contained, contiguous block of memory, thus it is well suited for copying (e.g. for offline processing).
• Messages must be freed sequentially. Backend processing is synchronous; a backend can make a copy for deferred processing.
A log message has the following format: a header (descriptor and timestamp)1, followed by the cbprintf package with optional arguments2 and any appended strings.
Log message allocation The frontend may fail to allocate a message. This happens if the system generates more log messages than it can process in a certain time frame. There are two strategies to handle that case:
• No overflow – a new log message is dropped if space for it cannot be allocated.
• Overflow – the oldest pending messages are freed until the new message can be allocated. Enabled by CONFIG_LOG_MODE_OVERFLOW. Note that this degrades performance, thus it is recommended to adjust the buffer size and the number of enabled logs to limit dropping.
1 Depending on the platform and the timestamp size fields may be swapped.
2 It may be required for cbprintf package alignment
Run-time filtering If run-time filtering is enabled, then for each source of logging a filter structure in RAM is declared. The filter uses 32 bits divided into ten 3-bit slots. Except for slot 0, each slot stores the current filter for one backend in the system. Slot 0 (bits 0-2) aggregates the maximal filter setting for the given source of logging. The aggregate slot determines whether a log message is created for a given entry, since it indicates whether at least one backend expects that log entry. Backend slots are examined when a message is processed by the core, to determine whether the message is accepted by the given backend. Contrary to compile-time filtering, the binary footprint is increased because the logs are compiled in.
In the example below, backend 1 is set to receive errors (slot 1) and backend 2 up to the info level (slot 2). Slots 3-9 are not used. The aggregated filter (slot 0) is set to the info level, and messages from that particular source up to this level will be buffered.
Custom Frontend
Custom frontend is enabled using CONFIG_LOG_FRONTEND. Logs are directed to functions declared in
include/zephyr/logging/log_frontend.h. If option CONFIG_LOG_FRONTEND_ONLY is enabled then log mes-
sage is not created and no backend is handled. Otherwise, custom frontend can coexist with backends.
In some cases, logs need to be redirected at the macro level. For these cases, CONFIG_LOG_CUSTOM_HEADER
can be used to inject an application provided header named zephyr_custom_log.h at the end of in-
clude/zephyr/logging/log.h.
Logging strings
String arguments are handled by Cbprintf Packaging. See Limitations and recommendations for limitations
and recommendations.
Multi-domain support
More complex systems can consist of multiple domains where each domain is an independent binary.
Examples of domains are a core in a multicore SoC or one of the binaries (Secure or Nonsecure) on an
ARM TrustZone core.
Tracing and debugging on a multi-domain system is more complex and requires an efficient logging
system. Two approaches can be used to structure this logging system:
• Log inside each domain independently. This option is not always possible as it requires that each
domain has an available backend (for example, UART). This approach can also be troublesome to
use and not scalable, as logs are presented on independent outputs.
• Use a multi-domain logging system where log messages from each domain end up in one root do-
main, where they are processed exactly as in a single domain case. In this approach, log messages
are passed between domains using a connection between domains created from the backend on
one side and linked to the other.
The Log link is an interface introduced in this multi-domain approach. The Log link is responsible
for receiving any log message from another domain, creating a copy, and putting that local log mes-
sage copy (including remote data) into the message queue. This specific log link implementation
matches the complementary backend implementation to allow log messages exchange and logger
control like configuring filtering, getting log source names, and so on.
There are three types of domains in a multi-domain system:
• The end domain has the logging core implementation and a cross-domain backend. It can also have
other backends in parallel.
• The relay domain has one or more links to other domains but does not have backends that output
logs to the user. It has a cross-domain backend either to another relay or to the root domain.
• The root domain has one or multiple links and a backend that outputs logs to the user.
See the following image for an example of a multi-domain setup:
In this architecture, a link can handle multiple domains. For example, let’s consider an SoC with two
ARM Cortex-M33 cores with TrustZone: cores A and B (see the example illustrated above). There are
four domains in the system, as each core has both a Secure and a Nonsecure domain. If core A nonsecure
(A_NS) is the root domain, it has two links: one to core A secure (A_NS-A_S) and one to core B nonsecure
(A_NS-B_NS). The B_NS domain has one link to core B secure (B_NS-B_S), and a backend to A_NS.
Since in all instances there is a standard logging subsystem, it is always possible to have multiple back-
ends and simultaneously output messages to them. An example of this is shown in the illustration above
as a dotted UART backend on the B_NS domain.
Domain ID The source of each log message can be identified by the following fields in the header:
source_id and domain_id.
The value assigned to the domain_id is relative. Whenever a domain creates a log message, it sets its domain_id to 0. When a message crosses a domain boundary, its domain_id is increased by the link offset. Link offsets are assigned during initialization, when the logger core iterates over all the registered links and assigns their offsets.
The first link has the offset set to 1. The following offset equals the previous link offset plus the number
of domains in the previous link.
The following example shows the assigned domain_ids for each domain:
Let’s consider a log message created on the B_S domain:
1. Initially, it has its domain_id set to 0.
2. When the B_NS-B_S link receives the message, it increases the domain_id to 1 by adding the B_NS-
B_S offset.
3. The message is passed to A_NS.
4. When the A_NS-B_NS link receives the message, it adds the offset (2) to the domain_id. The
message ends up with the domain_id set to 3, which uniquely identifies the message originator.
Cross-domain log message In most cases, the address space of each domain is unique, and one domain cannot directly access data in another domain. For this reason, the backend can partially process a message before it is passed to another domain. Partial processing can include converting a string package to a fully self-contained version (copying read-only strings into the package body).
Each domain can have a different timestamp source in terms of frequency and offset. Logging does not
perform any timestamp conversion.
Runtime filtering In the single-domain case, each log source has a dedicated variable with runtime
filtering for each backend in the system. In the multi-domain case, the originator of the log message is
not aware of the number of backends in the root domain.
As such, to filter logs in multiple domains, each source requires a runtime filtering setting in each domain
on the way to the root domain. As the number of sources in other domains is not known during the
compilation, the runtime filtering of remote sources must use dynamically allocated memory (one word
per source). When a backend in the root domain changes the filtering of the module from a remote
domain, the local filter is updated. After the update, the aggregated filter (the maximum from all the
local backends) is checked and, if changed, the remote domain is informed about this change. With this
approach, the runtime filtering works identically in both multi-domain and single-domain scenarios.
Message ordering Logging does not provide any mechanism for synchronizing timestamps across mul-
tiple domains:
• If domains have different timestamp sources, messages will be processed in the order of arrival to
the buffer in the root domain.
• If domains have the same timestamp source or if there is an out-of-bound mechanism that recalcu-
lates timestamps, there are 2 options:
– Messages are processed as they arrive in the buffer in the root domain. Messages are un-
ordered but they can be sorted by the host as the timestamp indicates the time of the message
generation.
– Links have dedicated buffers. During processing, the head of each buffer is checked and the
oldest message is processed first.
With this approach, it is possible to maintain the order of the messages at the cost of a sub-
optimal memory utilization (since the buffer is not shared) and increased processing latency
(see CONFIG_LOG_PROCESSING_LATENCY_US).
Logging backends
Logging backends are registered using LOG_BACKEND_DEFINE. The macro creates an instance in the dedicated memory section. Backends can be dynamically enabled (log_backend_enable()) and disabled. When Run-time filtering is enabled, log_filter_set() can be used to dynamically change the filtering of a module's logs for a given backend. A module is identified by source ID and domain ID. The source ID can be retrieved, if the source name is known, by iterating through all registered sources.
Logging supports up to 9 concurrent backends. A log message is passed to each backend in the processing phase. Additionally, a backend is notified when logging enters panic mode with log_backend_panic(). On that call, the backend should switch to synchronous, interrupt-less operation, or shut itself down if that is not supported. Occasionally, logging may inform a backend about the number of dropped messages with log_backend_dropped(). The message processing API is version specific: log_backend_msg2_process() is used for processing a message. It is common for standard and hexdump messages, because a log message holds the string with arguments and the data; it is also common for deferred and immediate logging.
Message formatting Logging provides a set of functions that can be used by a backend to format a message. Helper functions are available in include/zephyr/logging/log_output.h.
An example message formatted using log_output_msg2_process().
Dictionary-based Logging
Dictionary-based logging, instead of human readable texts, outputs the log messages in binary format.
This binary format encodes arguments to formatted strings in their native storage formats which can be
more compact than their text equivalents. For statically defined strings (including the format strings and
any string arguments), references to the ELF file are encoded instead of the whole strings. A dictionary
created at build time contains the mappings between these references and the actual strings. This allows
the offline parser to obtain the strings from the dictionary to parse the log messages. This binary format
allows a more compact representation of log messages in certain scenarios. However, this requires the
use of an offline parser and is not as intuitive to use as text-based log messages.
Note that long double is not supported by Python’s struct module. Therefore, log messages with long
double will not display the correct values.
Usage When dictionary-based logging is enabled via enabling related logging backends, a JSON
database file, named log_dictionary.json, will be created in the build directory. This database file
contains information for the parser to correctly parse the log data. Note that this database file only
works with the same build, and cannot be used for any other builds.
To use the log parser:
The parser takes two required arguments: the first is the full path to the JSON database file, and the second is the file containing the log data. Add the optional argument --hex at the end if the log data file contains hexadecimal characters (e.g. when CONFIG_LOG_BACKEND_UART_OUTPUT_DICTIONARY_HEX=y). This tells the parser to convert the hexadecimal characters to binary before parsing.
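An invocation following the argument order described above might look like this; the parser script path within the Zephyr tree and the capture file name are assumptions for illustration:

```shell
# Parse dictionary-based log data captured from a UART backend.
python3 scripts/logging/dictionary/log_parser.py \
        build/log_dictionary.json captured_log.bin --hex
```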
Please refer to logging_dictionary_sample on how to use the log parser.
4.9.6 Recommendations
If the content of a transient string does not need to be displayed, it is recommended to log its address (cast to void *) rather than its content, since the content would otherwise have to be copied or processed:
LOG_WRN("%s", str);
LOG_WRN("%p", (void *)str);
4.9.7 Benchmark

Feature                                 Value
Kernel logging                          7 us3 / 11 us
User logging                            13 us
Kernel logging with overwrite           10 us3 / 15 us
Logging transient string                42 us
Logging transient string from user      50 us
Memory utilization4                     518
Memory footprint (test)5                2 k
Memory footprint (application)6         3.5 k
Message footprint7                      47 bytes3 / 32 bytes
Benchmark details
When logging is enabled, it impacts the stack usage of the context that uses the logging API. If the stack size is tightly optimized, this may lead to stack overflow. Stack usage depends on the mode and optimization level, and also varies significantly between platforms. In general, when CONFIG_LOG_MODE_DEFERRED is used, stack usage is smaller, since logging is limited to creating and storing the log message. When CONFIG_LOG_MODE_IMMEDIATE is used, the log message is processed by the backend, which includes string formatting; in that mode, stack usage depends on which backends are used.
The tests/subsys/logging/log_stack test is used to characterize stack usage depending on the mode, optimization level and platform used. The test uses only the default backend.
Characterization for a log message with two integer arguments on some platforms is listed below:
Logger API
group log_api
Logger API.
Defines
LOG_ERR(...)
Writes an ERROR level message to the log.
It’s meant to report severe errors, such as those from which it’s not possible to recover.
Parameters
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
LOG_WRN(...)
Writes a WARNING level message to the log.
It’s meant to register messages related to unusual situations that are not necessarily errors.
Parameters
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
3 CONFIG_LOG_SPEED enabled.
4 Number of log messages with various number of arguments that fits in 2048 bytes dedicated for logging.
5 Logging subsystem memory footprint in tests/subsys/logging/log_benchmark where filtering and formatting features are not
used.
6 Logging subsystem memory footprint in samples/subsys/logging/logger.
7 Average size of a log message (excluding string) with 2 arguments on Cortex M3
LOG_INF(...)
Writes an INFO level message to the log.
It’s meant to write generic user oriented messages.
Parameters
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
LOG_DBG(...)
Writes a DEBUG level message to the log.
It’s meant to write developer oriented information.
Parameters
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
LOG_PRINTK(...)
Unconditionally print raw log message.
The result is the same as if printk was used, but the message goes through the logging infrastructure, thus utilizing the logging mode, e.g. deferred mode.
Parameters
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
LOG_RAW(...)
Unconditionally print raw log message.
Provided string is printed as is without appending any characters (e.g., color or newline).
Parameters
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
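All of these macros share one calling convention: a printk-style format string followed by as many values as there are conversion specifiers. A minimal sketch of that convention, using snprintf-based stand-ins instead of the real Zephyr macros (log_buf and report_sensor_failure() are illustrative names, not part of the API):

```c
#include <stdio.h>
#include <string.h>

/* snprintf-based stand-ins for the Zephyr LOG_* macros, for illustration
 * only; a real application includes <zephyr/logging/log.h> and registers
 * a module with LOG_MODULE_REGISTER(). */
static char log_buf[128];

#define LOG_ERR(...) snprintf(log_buf, sizeof(log_buf), __VA_ARGS__)
#define LOG_WRN(...) snprintf(log_buf, sizeof(log_buf), __VA_ARGS__)

/* Each macro takes a format string followed by as many values as there
 * are conversion specifiers in it. */
const char *report_sensor_failure(int err)
{
    LOG_ERR("sensor read failed: %d", err);
    return log_buf;
}
```

In a real Zephyr application the same call site would be `LOG_ERR("sensor read failed: %d", err);` with the message routed through the configured backend instead of a local buffer.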
LOG_INST_ERR(_log_inst, ...)
Writes an ERROR level message associated with the instance to the log.
Message is associated with a specific instance of the module which has independent filtering
settings (if runtime filtering is enabled) and message prefix (<module_name>.<instance_name>).
It’s meant to report severe errors, such as those from which it’s not possible to recover.
Parameters
• _log_inst – Pointer to the log structure associated with the instance.
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
LOG_INST_WRN(_log_inst, ...)
Writes a WARNING level message associated with the instance to the log.
Message is associated with a specific instance of the module which has independent filtering
settings (if runtime filtering is enabled) and message prefix (<module_name>.<instance_name>).
It’s meant to register messages related to unusual situations that are not necessarily errors.
Parameters
• _log_inst – Pointer to the log structure associated with the instance.
• ... – A string optionally containing printk valid conversion specifier, followed
by as many values as specifiers.
LOG_MODULE_REGISTER(...)
Create module-specific state and register the module with Logger.
This macro normally must be used after including <zephyr/logging/log.h> to complete the
initialization of the module.
Module registration can be skipped in two cases:
• The module consists of more than one file, and another file invokes this macro.
(LOG_MODULE_DECLARE() should be used instead in all of the module’s other files.)
• Instance logging is used and there is no need to create module entry. In that case
LOG_LEVEL_SET() should be used to set log level used within the file.
Macro accepts one or two parameters:
• module name
• optional log level. If not provided then default log level is used in the file.
Example usage:
• LOG_MODULE_REGISTER(foo, CONFIG_FOO_LOG_LEVEL)
• LOG_MODULE_REGISTER(foo)
See also:
LOG_MODULE_DECLARE
Note: The module’s state is defined, and the module is registered, only if LOG_LEVEL for
the current source file is non-zero or it is not defined and CONFIG_LOG_DEFAULT_LEVEL is
non-zero. In other cases, this macro has no effect.
LOG_MODULE_DECLARE(...)
Macro for declaring a log module (not registering it).
Modules which are split up over multiple files must have exactly one file use
LOG_MODULE_REGISTER() to create module-specific state and register the module with the
logger core.
The other files in the module should use this macro instead to declare that same state.
(Otherwise, LOG_INF() etc. will not be able to refer to module-specific state variables.)
Macro accepts one or two parameters:
• module name
• optional log level. If not provided then default log level is used in the file.
Example usage:
• LOG_MODULE_DECLARE(foo, CONFIG_FOO_LOG_LEVEL)
• LOG_MODULE_DECLARE(foo)
See also:
LOG_MODULE_REGISTER
Note: The module’s state is declared only if LOG_LEVEL for the current source file is
non-zero or it is not defined and CONFIG_LOG_DEFAULT_LEVEL is non-zero. In other cases, this
macro has no effect.
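The one-REGISTER / many-DECLARE pattern described above can be sketched as follows. The stand-in macro bodies and foo_module_level() are illustrative only; the real Zephyr macros create and reference the module's logging state rather than a plain int:

```c
/* Stand-ins illustrating the one-REGISTER / many-DECLARE pattern for a
 * module split across files. LOG_MODULE_REGISTER() creates the state,
 * LOG_MODULE_DECLARE() only references it. */
#define LOG_MODULE_REGISTER(name, level) int log_level_##name = (level)
#define LOG_MODULE_DECLARE(name, level)  extern int log_level_##name

/* foo_main.c: exactly one file in the module registers it. */
LOG_MODULE_REGISTER(foo, 3);

/* foo_util.c: every other file of the module only declares the state,
 * so LOG_INF() etc. can refer to the same variables. */
LOG_MODULE_DECLARE(foo, 3);

int foo_module_level(void)
{
    return log_level_foo;
}
```

If two files both used LOG_MODULE_REGISTER(), the linker would see duplicate definitions of the module state, which is why the other files must use LOG_MODULE_DECLARE().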
LOG_LEVEL_SET(level)
Macro for setting log level in the file or function where instance logging API is used.
Parameters
• level – Level used in file or in function.
Logger control
group log_ctrl
Logger control API.
Defines
LOG_CORE_INIT()
LOG_INIT()
LOG_PANIC()
LOG_PROCESS()
Typedefs
Functions
void log_core_init(void)
Function for system initialization of the logger.
Function is called during start up to allow logging before user can explicitly initialize the
logger.
void log_init(void)
Function for user initialization of the logger.
void log_thread_set(k_tid_t process_tid)
Function for providing the thread that is processing logs.
See CONFIG_LOG_PROCESS_TRIGGER_THRESHOLD.
Note: Function asserts and has no effect when CONFIG_LOG_PROCESS_THREAD is set.
Parameters
• process_tid – Process thread id. Used to wake up the thread.
int log_mem_get_max_usage(uint32_t *max)
Get maximum memory usage of the buffer used for log messages.
Parameters
• max – [out] Maximum number of bytes used for pending log messages.
Return values
• -EINVAL – if logging mode does not use the buffer.
• -ENOTSUP – if instrumentation is not enabled.
• 0 – successfully collected usage data.
Log message
group log_msg
Log message API.
Defines
LOG_MSG_GENERIC_HDR
Functions
struct log_msg_desc
#include <log_msg.h>
union log_msg_source
#include <log_msg.h>
Public Members
void *raw
struct log_msg_hdr
#include <log_msg.h>
struct log_msg
#include <log_msg.h>
struct log_msg_generic_hdr
#include <log_msg.h>
union log_msg_generic
#include <log_msg.h>
Public Members
group log_backend
Logger backend interface.
Defines
Enums
enum log_backend_evt
Backend events.
Values:
enumerator LOG_BACKEND_EVT_PROCESS_THREAD_DONE
Event when process thread finishes processing.
This event is emitted when the process thread finishes processing pending log messages.
Note: This is not emitted when there are no pending log messages being processed.
enumerator LOG_BACKEND_EVT_MAX
Maximum number of backend events.
Functions
Parameters
• backend – Pointer to the backend instance.
• id – ID.
Parameters
• backend – [in] Pointer to the backend instance.
Returns
Id.
union log_backend_evt_arg
#include <log_backend.h> Argument(s) for backend events.
Public Members
void *raw
Unspecified argument(s).
struct log_backend_api
#include <log_backend.h> Logger backend API.
struct log_backend_control_block
#include <log_backend.h> Logger backend control block.
struct log_backend
#include <log_backend.h> Logger backend structure.
group log_output
Log output API.
Unnamed Group
Defines
LOG_OUTPUT_TEXT
Supported backend logging format types for use with the log_format_set() API to switch log
format at runtime.
LOG_OUTPUT_SYST
LOG_OUTPUT_DICT
LOG_OUTPUT_CUSTOM
Typedefs
Note: If the log output function cannot process all of the data, it is its responsibility to mark
them as dropped or discarded by returning the corresponding number of bytes dropped or
discarded to the caller.
Param buf
The buffer data.
Param size
The buffer size.
Param ctx
User context.
Return
Number of bytes processed, dropped or discarded.
Functions
Returns
Timestamp value in us.
struct log_output_control_block
#include <log_output.h>
struct log_output
#include <log_output.h> Log_output instance structure.
4.10 Tracing
4.10.1 Overview
The tracing feature provides hooks that permit you to collect data from your application and allow
tools running on a host to visualize the inner workings of the kernel and various subsystems.
Every system has application-specific events to trace out. Historically, that has implied:
1. Determining the application-specific payload,
2. Choosing a suitable serialization format,
3. Writing the on-target serialization code,
4. Deciding on and writing the I/O transport mechanics,
5. Writing the PC-side deserializer/parser,
6. Writing custom ad-hoc tools for filtering and presentation.
An application can use one of the existing formats or define a custom format by overriding the macros
declared in include/zephyr/tracing/tracing.h.
Different formats, transports and host tools are available and supported in Zephyr.
In fact, I/O varies greatly from system to system. It is therefore instructive to create a taxonomy
of I/O types, so that the interface between the payload/format (top layer) and the transport
mechanics (bottom layer) is generic and efficient enough to model them. See the I/O taxonomy
section below.
Common Trace Format, CTF, is an open format and language to describe trace formats. This enables tool
reuse, of which line-textual (babeltrace) and graphical (TraceCompass) variants already exist.
CTF should look familiar to C programmers but adds stronger typing. See CTF - A Flexible,
High-performance Binary Trace Format.
CTF allows us to formally describe application specific payload and the serialization format,
which enables common infrastructure for host tools and parsers and tools for filtering and
presentation.
A Generic Interface
In CTF, an event is serialized to a packet containing one or more fields. As seen from the I/O
taxonomy section below, a bottom layer may:
• perform actions at transaction-start (e.g. mutex-lock),
• process each field in some way (e.g. sync-push emit, concat, enqueue to thread-bound FIFO),
• perform actions at transaction-stop (e.g. mutex-release, emit of concat buffer).
CTF Top-Layer Example
The CTF_EVENT macro will serialize each argument to a field:
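The original code listing is not reproduced here; the sketch below illustrates the idea with self-contained stand-ins. emit_field() and ctf_top_thread_switched_in() are hypothetical names, and the real CTF_EVENT macro resolves field emission statically at compile time rather than via a runtime helper:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for a CTF-style bottom layer: each argument is
 * emitted as one field into a packet buffer. */
static uint8_t packet[64];
static size_t packet_len;

static void emit_field(const void *field, size_t size)
{
    memcpy(&packet[packet_len], field, size);
    packet_len += size;
}

/* Serialize a hypothetical "thread switched in" event: every argument
 * becomes a field of the packet, here an event ID plus a thread ID. */
size_t ctf_top_thread_switched_in(uint8_t event_id, uint32_t thread_id)
{
    packet_len = 0;
    emit_field(&event_id, sizeof(event_id));
    emit_field(&thread_id, sizeof(thread_id));
    return packet_len;
}
```
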
How to serialize and emit fields, as well as how to handle alignment, can be done internally and
statically at compile time in the bottom layer.
The CTF top layer is enabled using the configuration option CONFIG_TRACING_CTF and can be used with
the different transport backends both in synchronous and asynchronous modes.
Zephyr provides built-in support for SEGGER SystemView that can be enabled in any application for
platforms that have the required hardware support.
The payload and format used with SystemView is custom to the application and relies on RTT as a
transport. Newer versions of SystemView support other transports, such as UART or snapshot
mode (neither of which is yet supported in Zephyr).
To enable tracing support with SEGGER SystemView add the configuration option
CONFIG_SEGGER_SYSTEMVIEW to your project configuration file and set it to y. For example,
this can be added to the synchronization_sample to visualize fast switching between threads.
SystemView can also be used for post-mortem tracing, which can be enabled with
CONFIG_SEGGER_SYSVIEW_POST_MORTEM_MODE. In this mode, a debugger can be attached after
the system has crashed using west attach, after which the latest data from the internal RAM buffer
can be loaded into SystemView:
CONFIG_STDOUT_CONSOLE=y
# enable to use thread names
CONFIG_THREAD_NAME=y
CONFIG_SEGGER_SYSTEMVIEW=y
CONFIG_USE_SEGGER_RTT=y
CONFIG_TRACING=y
# enable for post-mortem tracing
CONFIG_SEGGER_SYSVIEW_POST_MORTEM_MODE=n
Recent versions of SEGGER SystemView come with an API translation table for Zephyr which is
incomplete and does not match the current level of support available in Zephyr. To use the latest
Zephyr API description table, copy the file available in the tree to your local configuration
directory to override the builtin table:
User-Defined Tracing
This tracing format allows the user to define functions to perform any work desired when a task is
switched in or out, when an interrupt is entered or exited, and when the CPU is idle.
Examples include:
• simple toggling of a GPIO for external scope tracing while minimizing extra CPU load
• generating/outputting trace data in a non-standard or proprietary format that cannot be
supported by the other tracing systems
The following functions can be defined by the user:
• void sys_trace_thread_create_user(struct k_thread *thread)
• void sys_trace_thread_abort_user(struct k_thread *thread)
• void sys_trace_thread_suspend_user(struct k_thread *thread)
• void sys_trace_thread_resume_user(struct k_thread *thread)
• void sys_trace_thread_name_set_user(struct k_thread *thread)
• void sys_trace_thread_switched_in_user(struct k_thread *thread)
• void sys_trace_thread_switched_out_user(struct k_thread *thread)
• void sys_trace_thread_info_user(struct k_thread *thread)
• void sys_trace_thread_sched_ready_user(struct k_thread *thread)
• void sys_trace_thread_pend_user(struct k_thread *thread)
• void sys_trace_thread_priority_set_user(struct k_thread *thread, int prio)
• void sys_trace_isr_enter_user(int nested_interrupts)
• void sys_trace_isr_exit_user(int nested_interrupts)
• void sys_trace_idle_user()
Enable this format with the CONFIG_TRACING_USER option.
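A minimal self-contained sketch of two of these hooks. Here the hooks just record what happened, where a real implementation might toggle a GPIO or emit a proprietary record; struct k_thread is left opaque and trace_idle_count() is an illustrative helper, not part of the API:

```c
#include <stdint.h>

/* Opaque stand-in for Zephyr's thread object so the sketch compiles on
 * its own. */
struct k_thread;

static uint32_t idle_entries;
static struct k_thread *last_switched_in;

/* Called whenever the CPU enters idle; a real hook might toggle a GPIO
 * for external scope tracing while adding minimal CPU load. */
void sys_trace_idle_user(void)
{
    idle_entries++;
}

/* Called whenever a thread is switched in; here we only remember it. */
void sys_trace_thread_switched_in_user(struct k_thread *thread)
{
    last_switched_in = thread;
}

uint32_t trace_idle_count(void)
{
    return idle_entries;
}
```
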
The sample samples/subsys/tracing demonstrates tracing with different formats and backends.
To get started, the simplest way is to use the CTF format with the native_posix port. Build the
sample as follows:
Using west:
You can then run the resulting binary with the option -trace-file to generate the tracing data:
mkdir data
cp $ZEPHYR_BASE/subsys/tracing/ctf/tsdl/metadata data/
./build/zephyr/zephyr.exe -trace-file=data/channel0_0
The resulting CTF output can be visualized using babeltrace or TraceCompass by pointing the tool to the
data directory with the metadata and trace files.
For devices that do not have I/O available for tracing, such as USB or UART, but have enough RAM
to collect trace data, the RAM backend can be enabled with the configuration option
CONFIG_TRACING_BACKEND_RAM. Adjust CONFIG_RAM_TRACING_BUFFER_SIZE to be able to record
enough traces for your needs. Then, using a runtime debugger such as GDB, this buffer can be
fetched from the target to a host computer:
The resulting channel0_0 file has to be placed in a directory with the metadata file, as for the
other backends.
TraceCompass
TraceCompass is an open source tool that visualizes CTF events such as thread scheduling and interrupts,
and is helpful to find unintended interactions and resource conflicts on complex systems.
See also the presentation by Ericsson, Advanced Trouble-shooting Of Real-time Systems.
Currently, the top-layer provided here is quite simple and bare-bones, and needlessly copied from
Zephyr’s Segger SystemView debug module.
For an OS like Zephyr, it would make sense to draw inspiration from Linux’s LTTng and change the
top-layer to serialize to the same format. Doing this would enable direct reuse of TraceCompass’ canned
analyses for Linux. Alternatively, LTTng-analyses in TraceCompass could be customized to Zephyr. It is
ongoing work to enable TraceCompass visibility of Zephyr in a target-agnostic and open source way.
I/O Taxonomy
• Atomic Push/Produce/Write/Enqueue:
– synchronous:
means data-transmission has completed with the return of the call.
– asynchronous:
means data-transmission is pending or ongoing with the return of the call. Usually,
interrupts/callbacks/signals or polling is used to determine completion.
– buffered:
means data-transmissions are copied and grouped together to form larger ones. Usually
for amortizing overhead (burst dequeue) or jitter-mitigation (steady dequeue).
Examples:
– sync unbuffered
E.g. PIO via GPIOs having steady stream, no extra FIFO memory needed. Low jitter
but may be less efficient (can’t amortize the overhead of writing).
– sync buffered
E.g. fwrite() or enqueuing into FIFO. Blockingly burst the FIFO when its
buffer-waterlevel exceeds threshold. Jitter due to bursts may lead to missed deadlines.
– async unbuffered
E.g. DMA, or zero-copying in shared memory. Be careful of data hazards, race conditions, etc!
– async buffered
E.g. enqueuing into FIFO.
• Atomic Pull/Consume/Read/Dequeue:
– synchronous:
means data-reception has completed with the return of the call.
– asynchronous:
means data-reception is pending or ongoing with the return of the call. Usually,
interrupts/callbacks/signals or polling is used to determine completion.
– buffered:
means data is copied-in in larger chunks than request-size. Usually for amortizing
wait-time.
Examples:
– sync unbuffered
E.g. Blocking read-call, fread() or SPI-read, zero-copying in shared memory.
– sync buffered
E.g. Blocking read-call with caching applied. Makes sense if read pattern exhibits
spatial locality.
– async unbuffered
E.g. zero-copying in shared memory. Be careful of data hazards, race conditions, etc!
– async buffered
E.g. aio_read() or DMA.
Unfortunately, I/O may not be atomic and may, therefore, require locking. Locking may not be needed if
multiple independent channels are available.
• The system has non-atomic write and one shared channel
E.g. UART. Locking required.
lock(); emit(a); emit(b); emit(c); release();
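The pattern above can be sketched in C as follows; the flag-based lock, channel buffer, and emit_event() are illustrative stand-ins (real Zephyr code would use a k_mutex or spinlock and an actual UART driver):

```c
#include <stddef.h>

/* Sketch of the lock(); emit(...); release(); pattern for a single
 * shared, non-atomic channel (e.g. a UART). A flag stands in for the
 * mutex so the sketch is self-contained. */
static int channel_locked;
static char channel[64];
static size_t channel_len;

static void lock(void)    { channel_locked = 1; }
static void release(void) { channel_locked = 0; }

static void emit(char c)
{
    if (channel_locked) {          /* only the lock holder may write */
        channel[channel_len++] = c;
    }
}

/* Emit one complete event (three fields) as an atomic transaction, so
 * fields from concurrent events cannot interleave on the channel. */
size_t emit_event(char a, char b, char c)
{
    lock();
    emit(a);
    emit(b);
    emit(c);
    release();
    return channel_len;
}
```
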
The kernel can also maintain lists of objects that can be used to track their usage. Currently, the following
lists can be enabled:
Those global variables are the head of each list - they can be traversed with the help of macro
SYS_PORT_TRACK_NEXT. For instance, to traverse all initialized mutexes, one can write:
struct k_mutex *cur = _track_list_k_mutex;
while (cur != NULL) {
    /* do something with cur */
    cur = SYS_PORT_TRACK_NEXT(cur);
}
To enable object tracking, enable CONFIG_TRACING_OBJECT_TRACKING. Note that each list can be enabled
or disabled via their tracing configuration. For example, to disable tracking of semaphores, one can
disable CONFIG_TRACING_SEMAPHORE.
Object tracking is behind tracing configuration as it currently leverages tracing infrastructure to perform
the tracking.
4.10.8 API
Common
group subsys_tracing_apis
Tracing APIs.
Functions
void sys_trace_isr_enter(void)
Called when entering an ISR.
void sys_trace_isr_exit(void)
Called when exiting an ISR.
void sys_trace_isr_exit_to_scheduler(void)
Called when exiting an ISR and switching to scheduler.
void sys_trace_idle(void)
Called when the cpu enters the idle state.
Threads
group subsys_tracing_apis_thread
Thread Tracing APIs.
Defines
sys_port_trace_k_thread_foreach_enter()
Called when entering a k_thread_foreach call.
sys_port_trace_k_thread_foreach_exit()
Called when exiting a k_thread_foreach call.
sys_port_trace_k_thread_foreach_unlocked_enter()
Called when entering a k_thread_foreach_unlocked.
sys_port_trace_k_thread_foreach_unlocked_exit()
Called when exiting a k_thread_foreach_unlocked.
sys_port_trace_k_thread_create(new_thread)
Trace creating a Thread.
Parameters
• new_thread – Thread object
sys_port_trace_k_thread_user_mode_enter()
Trace Thread entering user mode.
sys_port_trace_k_thread_join_enter(thread, timeout)
Called when entering a k_thread_join.
Parameters
• thread – Thread object
• timeout – Timeout period
sys_port_trace_k_thread_join_blocking(thread, timeout)
Called when k_thread_join blocks.
Parameters
• thread – Thread object
• timeout – Timeout period
sys_port_trace_k_thread_resume_exit(thread)
Called when a thread exits the resume-from-suspension function.
Parameters
• thread – Thread object
sys_port_trace_k_thread_sched_lock()
Called when the thread scheduler is locked.
sys_port_trace_k_thread_sched_unlock()
Called when the thread scheduler is unlocked.
sys_port_trace_k_thread_name_set(thread, ret)
Called when a thread name is set.
Parameters
• thread – Thread object
• ret – Return value
sys_port_trace_k_thread_switched_out()
Called before a thread has been selected to run.
sys_port_trace_k_thread_switched_in()
Called after a thread has been selected to run.
sys_port_trace_k_thread_ready(thread)
Called when a thread is ready to run.
Parameters
• thread – Thread object
sys_port_trace_k_thread_pend(thread)
Called when a thread is pending.
Parameters
• thread – Thread object
sys_port_trace_k_thread_info(thread)
Provide information about specific thread.
Parameters
• thread – Thread object
sys_port_trace_k_thread_sched_wakeup(thread)
Trace implicit thread wakeup invocation by the scheduler.
Parameters
• thread – Thread object
sys_port_trace_k_thread_sched_abort(thread)
Trace implicit thread abort invocation by the scheduler.
Parameters
• thread – Thread object
sys_port_trace_k_thread_sched_priority_set(thread, prio)
Trace implicit thread set priority invocation by the scheduler.
Parameters
• thread – Thread object
• prio – Thread priority
Work Queues
group subsys_tracing_apis_work
Work Tracing APIs.
Defines
sys_port_trace_k_work_init(work)
Trace initialisation of a Work structure.
Parameters
• work – Work structure
sys_port_trace_k_work_submit_to_queue_enter(queue, work)
Trace submit work to work queue call entry.
Parameters
• queue – Work queue structure
• work – Work structure
sys_port_trace_k_work_submit_to_queue_exit(queue, work, ret)
Trace submit work to work queue call exit.
Parameters
• queue – Work queue structure
• work – Work structure
• ret – Return value
sys_port_trace_k_work_submit_enter(work)
Trace submit work to system work queue call entry.
Parameters
• work – Work structure
sys_port_trace_k_work_submit_exit(work, ret)
Trace submit work to system work queue call exit.
Parameters
• work – Work structure
• ret – Return value
sys_port_trace_k_work_flush_enter(work)
Trace flush work call entry.
Parameters
• work – Work structure
sys_port_trace_k_work_flush_blocking(work, timeout)
Trace flush work call blocking.
Parameters
• work – Work structure
• timeout – Timeout period
sys_port_trace_k_work_flush_exit(work, ret)
Trace flush work call exit.
Parameters
• work – Work structure
• ret – Return value
sys_port_trace_k_work_cancel_enter(work)
Trace cancel work call entry.
Parameters
• work – Work structure
sys_port_trace_k_work_cancel_exit(work, ret)
Trace cancel work call exit.
Parameters
• work – Work structure
• ret – Return value
sys_port_trace_k_work_cancel_sync_enter(work, sync)
Trace cancel sync work call entry.
Parameters
• work – Work structure
• sync – Sync object
sys_port_trace_k_work_cancel_sync_blocking(work, sync)
Trace cancel sync work call blocking.
Parameters
• work – Work structure
• sync – Sync object
Poll
group subsys_tracing_apis_poll
Poll Tracing APIs.
Defines
sys_port_trace_k_poll_api_event_init(event)
Trace initialisation of a Poll Event.
Parameters
• event – Poll Event
sys_port_trace_k_poll_api_poll_enter(events)
Trace Polling call start.
Parameters
• events – Poll Events
sys_port_trace_k_poll_api_poll_exit(events, ret)
Trace Polling call outcome.
Parameters
• events – Poll Events
• ret – Return value
sys_port_trace_k_poll_api_signal_init(signal)
Trace initialisation of a Poll Signal.
Parameters
• signal – Poll Signal
sys_port_trace_k_poll_api_signal_reset(signal)
Trace resetting of Poll Signal.
Parameters
• signal – Poll Signal
sys_port_trace_k_poll_api_signal_check(signal)
Trace checking of Poll Signal.
Parameters
• signal – Poll Signal
sys_port_trace_k_poll_api_signal_raise(signal, ret)
Trace raising of Poll Signal.
Parameters
• signal – Poll Signal
• ret – Return value
Semaphore
group subsys_tracing_apis_sem
Semaphore Tracing APIs.
Defines
sys_port_trace_k_sem_init(sem, ret)
Trace initialisation of a Semaphore.
Parameters
• sem – Semaphore object
• ret – Return value
sys_port_trace_k_sem_give_enter(sem)
Trace giving a Semaphore entry.
Parameters
• sem – Semaphore object
sys_port_trace_k_sem_give_exit(sem)
Trace giving a Semaphore exit.
Parameters
• sem – Semaphore object
sys_port_trace_k_sem_take_enter(sem, timeout)
Trace taking a Semaphore attempt start.
Parameters
• sem – Semaphore object
• timeout – Timeout period
sys_port_trace_k_sem_take_blocking(sem, timeout)
Trace taking a Semaphore attempt blocking.
Parameters
• sem – Semaphore object
• timeout – Timeout period
sys_port_trace_k_sem_take_exit(sem, timeout, ret)
Trace taking a Semaphore attempt outcome.
Parameters
• sem – Semaphore object
• timeout – Timeout period
• ret – Return value
Mutex
group subsys_tracing_apis_mutex
Mutex Tracing APIs.
Defines
sys_port_trace_k_mutex_init(mutex, ret)
Trace initialization of Mutex.
Parameters
• mutex – Mutex object
• ret – Return value
sys_port_trace_k_mutex_lock_enter(mutex, timeout)
Trace Mutex lock attempt start.
Parameters
• mutex – Mutex object
• timeout – Timeout period
sys_port_trace_k_mutex_lock_blocking(mutex, timeout)
Trace Mutex lock attempt blocking.
Parameters
• mutex – Mutex object
• timeout – Timeout period
sys_port_trace_k_mutex_lock_exit(mutex, timeout, ret)
Trace Mutex lock attempt outcome.
Parameters
• mutex – Mutex object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_mutex_unlock_enter(mutex)
Trace Mutex unlock entry.
Parameters
• mutex – Mutex object
sys_port_trace_k_mutex_unlock_exit(mutex, ret)
Trace Mutex unlock exit.
Parameters
• mutex – Mutex object
• ret – Return value
Condition Variables
group subsys_tracing_apis_condvar
Conditional Variable Tracing APIs.
Defines
sys_port_trace_k_condvar_init(condvar, ret)
Trace initialization of Conditional Variable.
Parameters
• condvar – Conditional Variable object
• ret – Return value
sys_port_trace_k_condvar_signal_enter(condvar)
Trace Conditional Variable signaling start.
Parameters
• condvar – Conditional Variable object
sys_port_trace_k_condvar_signal_blocking(condvar, timeout)
Trace Conditional Variable signaling blocking.
Parameters
• condvar – Conditional Variable object
• timeout – Timeout period
sys_port_trace_k_condvar_signal_exit(condvar, ret)
Trace Conditional Variable signaling outcome.
Parameters
• condvar – Conditional Variable object
• ret – Return value
sys_port_trace_k_condvar_broadcast_enter(condvar)
Trace Conditional Variable broadcast enter.
Parameters
• condvar – Conditional Variable object
sys_port_trace_k_condvar_broadcast_exit(condvar, ret)
Trace Conditional Variable broadcast exit.
Parameters
• condvar – Conditional Variable object
• ret – Return value
sys_port_trace_k_condvar_wait_enter(condvar)
Trace Conditional Variable wait enter.
Parameters
• condvar – Conditional Variable object
sys_port_trace_k_condvar_wait_exit(condvar, ret)
Trace Conditional Variable wait exit.
Parameters
• condvar – Conditional Variable object
• ret – Return value
Queues
group subsys_tracing_apis_queue
Queue Tracing APIs.
Defines
sys_port_trace_k_queue_init(queue)
Trace initialization of Queue.
Parameters
• queue – Queue object
sys_port_trace_k_queue_cancel_wait(queue)
Trace Queue cancel wait.
Parameters
• queue – Queue object
sys_port_trace_k_queue_queue_insert_enter(queue, alloc)
Trace Queue insert attempt entry.
Parameters
• queue – Queue object
• alloc – Allocation flag
sys_port_trace_k_queue_queue_insert_blocking(queue, alloc, timeout)
Trace Queue insert attempt blocking.
Parameters
• queue – Queue object
• alloc – Allocation flag
• timeout – Timeout period
sys_port_trace_k_queue_queue_insert_exit(queue, alloc, ret)
Trace Queue insert attempt outcome.
Parameters
• queue – Queue object
• alloc – Allocation flag
• ret – Return value
sys_port_trace_k_queue_append_enter(queue)
Trace Queue append enter.
Parameters
• queue – Queue object
sys_port_trace_k_queue_append_exit(queue)
Trace Queue append exit.
Parameters
• queue – Queue object
sys_port_trace_k_queue_alloc_append_enter(queue)
Trace Queue alloc append enter.
Parameters
• queue – Queue object
sys_port_trace_k_queue_alloc_append_exit(queue, ret)
Trace Queue alloc append exit.
Parameters
• queue – Queue object
• ret – Return value
sys_port_trace_k_queue_prepend_enter(queue)
Trace Queue prepend enter.
Parameters
• queue – Queue object
sys_port_trace_k_queue_prepend_exit(queue)
Trace Queue prepend exit.
Parameters
• queue – Queue object
sys_port_trace_k_queue_alloc_prepend_enter(queue)
Trace Queue alloc prepend enter.
Parameters
• queue – Queue object
sys_port_trace_k_queue_alloc_prepend_exit(queue, ret)
Trace Queue alloc prepend exit.
Parameters
• queue – Queue object
• ret – Return value
sys_port_trace_k_queue_insert_enter(queue)
Trace Queue insert attempt entry.
Parameters
• queue – Queue object
sys_port_trace_k_queue_insert_blocking(queue, timeout)
Trace Queue insert attempt blocking.
Parameters
• queue – Queue object
• timeout – Timeout period
sys_port_trace_k_queue_insert_exit(queue)
Trace Queue insert attempt exit.
Parameters
• queue – Queue object
sys_port_trace_k_queue_append_list_enter(queue)
Trace Queue append list enter.
Parameters
• queue – Queue object
sys_port_trace_k_queue_append_list_exit(queue, ret)
Trace Queue append list exit.
Parameters
• queue – Queue object
• ret – Return value
sys_port_trace_k_queue_merge_slist_enter(queue)
Trace Queue merge slist enter.
Parameters
• queue – Queue object
sys_port_trace_k_queue_merge_slist_exit(queue, ret)
Trace Queue merge slist exit.
Parameters
• queue – Queue object
• ret – Return value
sys_port_trace_k_queue_get_enter(queue, timeout)
Trace Queue get attempt enter.
Parameters
• queue – Queue object
• timeout – Timeout period
sys_port_trace_k_queue_get_blocking(queue, timeout)
Trace Queue get attempt blocking.
Parameters
• queue – Queue object
• timeout – Timeout period
sys_port_trace_k_queue_get_exit(queue, timeout, ret)
Trace Queue get attempt outcome.
Parameters
• queue – Queue object
• timeout – Timeout period
• ret – Return value
FIFO
group subsys_tracing_apis_fifo
FIFO Tracing APIs.
Defines
sys_port_trace_k_fifo_init_enter(fifo)
Trace initialization of FIFO Queue entry.
Parameters
• fifo – FIFO object
sys_port_trace_k_fifo_init_exit(fifo)
Trace initialization of FIFO Queue exit.
Parameters
• fifo – FIFO object
sys_port_trace_k_fifo_cancel_wait_enter(fifo)
Trace FIFO Queue cancel wait entry.
Parameters
• fifo – FIFO object
sys_port_trace_k_fifo_cancel_wait_exit(fifo)
Trace FIFO Queue cancel wait exit.
Parameters
• fifo – FIFO object
sys_port_trace_k_fifo_put_enter(fifo, data)
Trace FIFO Queue put entry.
Parameters
• fifo – FIFO object
• data – Data item
sys_port_trace_k_fifo_put_exit(fifo, data)
Trace FIFO Queue put exit.
Parameters
• fifo – FIFO object
• data – Data item
sys_port_trace_k_fifo_alloc_put_enter(fifo, data)
Trace FIFO Queue alloc put entry.
Parameters
• fifo – FIFO object
• data – Data item
sys_port_trace_k_fifo_alloc_put_exit(fifo, data, ret)
Trace FIFO Queue alloc put exit.
Parameters
• fifo – FIFO object
• data – Data item
• ret – Return value
sys_port_trace_k_fifo_put_list_enter(fifo, head, tail)
Trace FIFO Queue put list entry.
Parameters
• fifo – FIFO object
• head – First item in the list
• tail – Last item in the list
sys_port_trace_k_fifo_peek_tail_enter(fifo)
Trace FIFO Queue peek tail entry.
Parameters
• fifo – FIFO object
sys_port_trace_k_fifo_peek_tail_exit(fifo, ret)
Trace FIFO Queue peek tail exit.
Parameters
• fifo – FIFO object
• ret – Return value
LIFO
group subsys_tracing_apis_lifo
LIFO Tracing APIs.
Defines
sys_port_trace_k_lifo_init_enter(lifo)
Trace initialization of LIFO Queue entry.
Parameters
• lifo – LIFO object
sys_port_trace_k_lifo_init_exit(lifo)
Trace initialization of LIFO Queue exit.
Parameters
• lifo – LIFO object
sys_port_trace_k_lifo_put_enter(lifo, data)
Trace LIFO Queue put entry.
Parameters
• lifo – LIFO object
• data – Data item
sys_port_trace_k_lifo_put_exit(lifo, data)
Trace LIFO Queue put exit.
Parameters
• lifo – LIFO object
• data – Data item
sys_port_trace_k_lifo_alloc_put_enter(lifo, data)
Trace LIFO Queue alloc put entry.
Parameters
• lifo – LIFO object
• data – Data item
Stacks
group subsys_tracing_apis_stack
Stack Tracing APIs.
Defines
sys_port_trace_k_stack_init(stack)
Trace initialization of Stack.
Parameters
• stack – Stack object
sys_port_trace_k_stack_alloc_init_enter(stack)
Trace Stack alloc init attempt entry.
Parameters
• stack – Stack object
sys_port_trace_k_stack_alloc_init_exit(stack, ret)
Trace Stack alloc init outcome.
Parameters
• stack – Stack object
• ret – Return value
sys_port_trace_k_stack_cleanup_enter(stack)
Trace Stack cleanup attempt entry.
Parameters
• stack – Stack object
sys_port_trace_k_stack_cleanup_exit(stack, ret)
Trace Stack cleanup outcome.
Parameters
• stack – Stack object
• ret – Return value
sys_port_trace_k_stack_push_enter(stack)
Trace Stack push attempt entry.
Parameters
• stack – Stack object
sys_port_trace_k_stack_push_exit(stack, ret)
Trace Stack push attempt outcome.
Parameters
• stack – Stack object
• ret – Return value
sys_port_trace_k_stack_pop_enter(stack, timeout)
Trace Stack pop attempt entry.
Parameters
• stack – Stack object
• timeout – Timeout period
sys_port_trace_k_stack_pop_blocking(stack, timeout)
Trace Stack pop attempt blocking.
Parameters
• stack – Stack object
• timeout – Timeout period
sys_port_trace_k_stack_pop_exit(stack, timeout, ret)
Trace Stack pop attempt outcome.
Parameters
• stack – Stack object
• timeout – Timeout period
• ret – Return value
Message Queues
group subsys_tracing_apis_msgq
Message Queue Tracing APIs.
Defines
sys_port_trace_k_msgq_init(msgq)
Trace initialization of Message Queue.
Parameters
• msgq – Message Queue object
sys_port_trace_k_msgq_alloc_init_enter(msgq)
Trace Message Queue alloc init attempt entry.
Parameters
• msgq – Message Queue object
sys_port_trace_k_msgq_alloc_init_exit(msgq, ret)
Trace Message Queue alloc init attempt outcome.
Parameters
• msgq – Message Queue object
• ret – Return value
sys_port_trace_k_msgq_cleanup_enter(msgq)
Trace Message Queue cleanup attempt entry.
Parameters
• msgq – Message Queue object
sys_port_trace_k_msgq_cleanup_exit(msgq, ret)
Trace Message Queue cleanup attempt outcome.
Parameters
• msgq – Message Queue object
• ret – Return value
sys_port_trace_k_msgq_put_enter(msgq, timeout)
Trace Message Queue put attempt entry.
Parameters
• msgq – Message Queue object
• timeout – Timeout period
sys_port_trace_k_msgq_put_blocking(msgq, timeout)
Trace Message Queue put attempt blocking.
Parameters
• msgq – Message Queue object
• timeout – Timeout period
sys_port_trace_k_msgq_put_exit(msgq, timeout, ret)
Trace Message Queue put attempt outcome.
Parameters
• msgq – Message Queue object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_msgq_get_enter(msgq, timeout)
Trace Message Queue get attempt entry.
Parameters
• msgq – Message Queue object
• timeout – Timeout period
sys_port_trace_k_msgq_get_blocking(msgq, timeout)
Trace Message Queue get attempt blocking.
Parameters
• msgq – Message Queue object
• timeout – Timeout period
sys_port_trace_k_msgq_get_exit(msgq, timeout, ret)
Trace Message Queue get attempt outcome.
Parameters
• msgq – Message Queue object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_msgq_peek(msgq, ret)
Trace Message Queue peek.
Parameters
• msgq – Message Queue object
• ret – Return value
sys_port_trace_k_msgq_purge(msgq)
Trace Message Queue purge.
Parameters
• msgq – Message Queue object
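These hooks are no-ops unless a tracing backend (e.g. CTF or SEGGER SystemView) defines them. The host-side sketch below is illustrative only; `simulated_msgq_put` is an invented stand-in showing the bracketing pattern, in which the kernel fires the enter hook before the operation and the exit hook with its outcome:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative tracing backend: count invocations so the enter/exit
 * bracketing is visible. Real backends would emit trace records here. */
static int put_enter_count;
static int put_exit_count;

#define sys_port_trace_k_msgq_put_enter(msgq, timeout) (put_enter_count++)
#define sys_port_trace_k_msgq_put_exit(msgq, timeout, ret) (put_exit_count++)

/* Simulated k_msgq_put(): the kernel fires the enter hook before the
 * operation and the exit hook with the outcome afterwards. */
static int simulated_msgq_put(void *msgq, int timeout)
{
	int ret = 0; /* pretend the message was queued successfully */

	(void)msgq;
	(void)timeout;
	sys_port_trace_k_msgq_put_enter(msgq, timeout);
	sys_port_trace_k_msgq_put_exit(msgq, timeout, ret);
	return ret;
}
```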
Mailbox
group subsys_tracing_apis_mbox
Mailbox Tracing APIs.
Defines
sys_port_trace_k_mbox_init(mbox)
Trace initialization of Mailbox.
Parameters
• mbox – Mailbox object
sys_port_trace_k_mbox_message_put_enter(mbox, timeout)
Trace Mailbox message put attempt entry.
Parameters
• mbox – Mailbox object
• timeout – Timeout period
sys_port_trace_k_mbox_get_blocking(mbox, timeout)
Trace Mailbox get attempt blocking.
Parameters
• mbox – Mailbox object
• timeout – Timeout period
sys_port_trace_k_mbox_get_exit(mbox, timeout, ret)
Trace Mailbox get attempt outcome.
Parameters
• mbox – Mailbox object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_mbox_data_get(rx_msg)
Trace Mailbox data get.
Parameters
• rx_msg – Receive Message object
Pipes
group subsys_tracing_apis_pipe
Pipe Tracing APIs.
Defines
sys_port_trace_k_pipe_init(pipe)
Trace initialization of Pipe.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_cleanup_enter(pipe)
Trace Pipe cleanup entry.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_cleanup_exit(pipe, ret)
Trace Pipe cleanup exit.
Parameters
• pipe – Pipe object
• ret – Return value
sys_port_trace_k_pipe_alloc_init_enter(pipe)
Trace Pipe alloc init entry.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_alloc_init_exit(pipe, ret)
Trace Pipe alloc init exit.
Parameters
• pipe – Pipe object
• ret – Return value
sys_port_trace_k_pipe_flush_enter(pipe)
Trace Pipe flush entry.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_flush_exit(pipe)
Trace Pipe flush exit.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_buffer_flush_enter(pipe)
Trace Pipe buffer flush entry.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_buffer_flush_exit(pipe)
Trace Pipe buffer flush exit.
Parameters
• pipe – Pipe object
sys_port_trace_k_pipe_put_enter(pipe, timeout)
Trace Pipe put attempt entry.
Parameters
• pipe – Pipe object
• timeout – Timeout period
sys_port_trace_k_pipe_put_blocking(pipe, timeout)
Trace Pipe put attempt blocking.
Parameters
• pipe – Pipe object
• timeout – Timeout period
sys_port_trace_k_pipe_put_exit(pipe, timeout, ret)
Trace Pipe put attempt outcome.
Parameters
• pipe – Pipe object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_pipe_get_enter(pipe, timeout)
Trace Pipe get attempt entry.
Parameters
• pipe – Pipe object
• timeout – Timeout period
Heaps
group subsys_tracing_apis_heap
Heap Tracing APIs.
Defines
sys_port_trace_k_heap_init(h)
Trace initialization of Heap.
Parameters
• h – Heap object
sys_port_trace_k_heap_aligned_alloc_enter(h, timeout)
Trace Heap aligned alloc attempt entry.
Parameters
• h – Heap object
• timeout – Timeout period
sys_port_trace_k_heap_aligned_alloc_blocking(h, timeout)
Trace Heap aligned alloc attempt blocking.
Parameters
• h – Heap object
• timeout – Timeout period
sys_port_trace_k_heap_aligned_alloc_exit(h, timeout, ret)
Trace Heap aligned alloc attempt outcome.
Parameters
• h – Heap object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_heap_alloc_enter(h, timeout)
Trace Heap alloc enter.
Parameters
• h – Heap object
• timeout – Timeout period
sys_port_trace_k_heap_alloc_exit(h, timeout, ret)
Trace Heap alloc exit.
Parameters
• h – Heap object
• timeout – Timeout period
• ret – Return value
sys_port_trace_k_heap_free(h)
Trace Heap free.
Parameters
• h – Heap object
sys_port_trace_k_heap_sys_k_aligned_alloc_enter(heap)
Trace System Heap aligned alloc enter.
Parameters
• heap – Heap object
sys_port_trace_k_heap_sys_k_aligned_alloc_exit(heap, ret)
Trace System Heap aligned alloc exit.
Parameters
• heap – Heap object
• ret – Return value
sys_port_trace_k_heap_sys_k_malloc_enter(heap)
Trace System Heap malloc enter.
Parameters
• heap – Heap object
sys_port_trace_k_heap_sys_k_malloc_exit(heap, ret)
Trace System Heap malloc exit.
Parameters
• heap – Heap object
• ret – Return value
sys_port_trace_k_heap_sys_k_free_enter(heap, heap_ref)
Trace System Heap free entry.
Parameters
• heap – Heap object
• heap_ref – Heap reference
sys_port_trace_k_heap_sys_k_free_exit(heap, heap_ref)
Trace System Heap free exit.
Parameters
• heap – Heap object
• heap_ref – Heap reference
sys_port_trace_k_heap_sys_k_calloc_enter(heap)
Trace System heap calloc enter.
Parameters
• heap – Heap object
sys_port_trace_k_heap_sys_k_calloc_exit(heap, ret)
Trace System heap calloc exit.
Parameters
• heap – Heap object
• ret – Return value
Memory Slabs
group subsys_tracing_apis_mslab
Memory Slab Tracing APIs.
Defines
sys_port_trace_k_mem_slab_init(slab, rc)
Trace initialization of Memory Slab.
Parameters
• slab – Memory Slab object
• rc – Return value
sys_port_trace_k_mem_slab_alloc_enter(slab, timeout)
Trace Memory Slab alloc attempt entry.
Parameters
• slab – Memory Slab object
• timeout – Timeout period
Timers
group subsys_tracing_apis_timer
Timer Tracing APIs.
Defines
sys_port_trace_k_timer_init(timer)
Trace initialization of Timer.
Parameters
• timer – Timer object
sys_port_trace_k_timer_start(timer, duration, period)
Trace Timer start.
Parameters
• timer – Timer object
• duration – Timer duration
• period – Timer period
sys_port_trace_k_timer_stop(timer)
Trace Timer stop.
Parameters
• timer – Timer object
sys_port_trace_k_timer_status_sync_enter(timer)
Trace Timer status sync entry.
Parameters
• timer – Timer object
sys_port_trace_k_timer_status_sync_blocking(timer, timeout)
Trace Timer status sync blocking.
Parameters
• timer – Timer object
• timeout – Timeout period
sys_port_trace_k_timer_status_sync_exit(timer, result)
Trace Timer status sync outcome.
Parameters
• timer – Timer object
• result – Return value
Object tracking
group subsys_tracing_object_tracking
Object tracking.
Object tracking provides lists of kernel objects so that their existence and current status can be tracked.
The following global variables are the heads of available lists:
• _track_list_k_timer
• _track_list_k_mem_slab
• _track_list_k_sem
• _track_list_k_mutex
• _track_list_k_stack
• _track_list_k_msgq
• _track_list_k_mbox
• _track_list_k_pipe
• _track_list_k_queue
• _track_list_k_event
Defines
SYS_PORT_TRACK_NEXT(list)
Gets a node’s next element in an object tracking list.
Parameters
• list – Node to get next element from.
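The tracking lists above can be walked with SYS_PORT_TRACK_NEXT(). The following host-side sketch models the idea with an invented `fake_timer` type and a local re-definition of the macro; Zephyr's actual field names and macro expansion differ:

```c
#include <assert.h>
#include <stddef.h>

/* Host-side model of object tracking: each object carries a next pointer
 * and newly initialized objects are pushed onto a global list head. */
struct fake_timer {
	int id;
	struct fake_timer *next;
};

#define SYS_PORT_TRACK_NEXT(obj) ((obj)->next)

static struct fake_timer *track_list_head;

/* Called when an object is initialized: push it onto the tracking list. */
static void track(struct fake_timer *t)
{
	t->next = track_list_head;
	track_list_head = t;
}

/* Walk the list with SYS_PORT_TRACK_NEXT, as a monitor or debugger would. */
static int count_tracked(void)
{
	int n = 0;

	for (struct fake_timer *t = track_list_head; t != NULL;
	     t = SYS_PORT_TRACK_NEXT(t)) {
		n++;
	}
	return n;
}
```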
Syscalls
group subsys_tracing_apis_syscall
Syscall Tracing APIs.
Defines
There are various situations where it’s necessary to coordinate resource use at runtime among multiple
clients. These include power rails, clocks, other peripherals, and binary device power management. The
complexity of properly managing multiple consumers of a device in a multithreaded system, especially
when transitions may be asynchronous, suggests that a shared implementation is desirable.
Zephyr provides managers for several coordination policies. These managers are embedded into services
that use them for specific functions.
• On-Off Manager
An on-off manager supports an arbitrary number of clients of a service which has a binary state. Example
applications are power rails, clocks, and binary device power management.
The manager has the following properties:
• The stable states are off, on, and error. The service always begins in the off state. The service may
also be in a transition to a given state.
• The core operations are request (add a dependency) and release (remove a dependency). Supporting operations are reset (to clear an error state) and cancel (to reclaim client data from an in-progress transition). The service manages the state based on calls to functions that initiate these operations.
• The service transitions from off to on when the first client request is received.
• The service transitions from on to off when the last client release is received.
• Each service configuration provides functions that implement the transition from off to on, from
on to off, and optionally from an error state to off. Transitions must be invokable from both thread
and interrupt context.
• The request and reset operations are asynchronous using Asynchronous Notifications. Both opera-
tions may be cancelled, but cancellation has no effect on the in-progress transition.
• Requests to turn on may be queued while a transition to off is in progress: when the service has
turned off successfully it will be immediately turned on again (where context allows) and waiting
clients notified when the start completes.
Requests are reference counted, but not tracked. That means clients are responsible for recording
whether their requests were accepted, and for initiating a release only if they have previously successfully
completed a request. Improper use of the API can cause an active client to be shut out, and the manager
does not maintain a record of specific clients that have been granted a request.
Failures in executing a transition are recorded and inhibit further requests or releases until the manager
is reset. Pending requests are notified (and cancelled) when errors are discovered.
Transition operation completion notifications are provided through Asynchronous Notifications.
Clients and other components interested in tracking all service state changes, including when a service
begins turning off or enters an error state, can be informed of state transitions by registering a monitor
with onoff_monitor_register(). Notification of changes are provided before issuing completion notifica-
tions associated with the new state.
Note: A generic API may be implemented by multiple drivers where the common case is asynchronous. The on-off client structure may be an appropriate solution for the generic API. Where a driver can guarantee synchronous, context-independent transitions, it may use onoff_sync_service and its supporting API rather than onoff_manager, with only a small reduction in functionality (primarily no support for the monitor API).
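The reference-counting semantics described above can be sketched as a small host-side state machine: the first request runs the off-to-on transition, the last release runs on-to-off, and a latched error inhibits requests. This is an illustrative model, not the Zephyr implementation; names like `model_request` are invented:

```c
#include <assert.h>
#include <errno.h>

enum model_state { ST_OFF, ST_ON, ST_ERROR };

struct onoff_model {
	enum model_state st;
	int refs; /* number of outstanding client requests */
};

static int model_request(struct onoff_model *m)
{
	if (m->st == ST_ERROR) {
		return -EIO; /* latched error: a reset is required first */
	}
	if (m->refs++ == 0) {
		m->st = ST_ON; /* first request: invoke the start transition */
	}
	return m->st; /* observed state, as with onoff_request() */
}

static int model_release(struct onoff_model *m)
{
	if (m->st != ST_ON || m->refs == 0) {
		return -ENOTSUP; /* state does not permit release */
	}
	if (--m->refs == 0) {
		m->st = ST_OFF; /* last release: invoke the stop transition */
	}
	return ST_ON; /* observed state at the time of release */
}
```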
group resource_mgmt_onoff_apis
Defines
ONOFF_FLAG_ERROR
Flag indicating an error state.
Error states are cleared using onoff_reset().
ONOFF_FLAG_ONOFF
ONOFF_FLAG_TRANSITION
ONOFF_STATE_MASK
Mask used to isolate bits defining the service state.
Mask a value with this then test for ONOFF_FLAG_ERROR to determine whether the machine has an unfixed error, or compare against ONOFF_STATE_ON, ONOFF_STATE_OFF, ONOFF_STATE_TO_ON, ONOFF_STATE_TO_OFF, or ONOFF_STATE_RESETTING.
ONOFF_STATE_OFF
Value exposed by ONOFF_STATE_MASK when service is off.
ONOFF_STATE_ON
Value exposed by ONOFF_STATE_MASK when service is on.
ONOFF_STATE_ERROR
Value exposed by ONOFF_STATE_MASK when the service is in an error state (and not in the
process of resetting its state).
ONOFF_STATE_TO_ON
Value exposed by ONOFF_STATE_MASK when service is transitioning to on.
ONOFF_STATE_TO_OFF
Value exposed by ONOFF_STATE_MASK when service is transitioning to off.
ONOFF_STATE_RESETTING
Value exposed by ONOFF_STATE_MASK when service is in the process of resetting.
ONOFF_TRANSITIONS_INITIALIZER(_start, _stop, _reset)
Initializer for an onoff_transitions object.
Parameters
• _start – a function used to transition from off to on state.
• _stop – a function used to transition from on to off state.
• _reset – a function used to clear errors and force the service to an off state.
Can be null.
ONOFF_MANAGER_INITIALIZER(_transitions)
ONOFF_CLIENT_EXTENSION_POS
Identify region of sys_notify flags available for containing services.
Bits of the flags field of the sys_notify structure contained within the queued_operation structure at and above this position may be used by extensions to the onoff_client structure. These bits are intended for use by containing service implementations to record client-specific information and are subject to other conditions of use specified on the sys_notify API.
Typedefs
Param mgr
the manager for which transition was requested.
Param res
the result of the transition. This shall be non-negative on success, or a negative
error code. If an error is indicated the service shall enter an error state.
Param mgr
the manager for which transition was requested.
Param notify
the function to be invoked when the transition has completed. If the transition is
synchronous, notify shall be invoked by the implementation before the transition
function returns. Otherwise the implementation shall capture this parameter and
invoke it when the transition completes.
This is similar to onoff_client_callback but provides information about all transitions, not just
ones associated with a specific client. Monitor callbacks are invoked before any completion
notifications associated with the state change are made.
These functions may be invoked from any context including pre-kernel, ISR, or cooperative or
pre-emptible threads. Compatible functions must be isr-ok and not sleep.
The callback is permitted to unregister itself from the manager, but must not register or un-
register any other monitors.
Param mgr
the manager for which a transition has completed.
Param mon
the monitor instance through which this notification arrived.
Param state
the state of the machine at the time of completion, restricted by
ONOFF_STATE_MASK. All valid states may be observed.
Param res
the result of the operation. Expected values are service- and state-specific, but
the value shall be non-negative if the operation succeeded, and negative if the
operation failed.
Functions
If initiation of the operation succeeds, the result of the request operation is provided through the configured client notification method, possibly before this call returns.
Note that the call to this function may succeed in a case where the actual request fails. Always
check the operation completion result.
Parameters
• mgr – the manager that will be used.
• cli – a non-null pointer to client state providing instructions on synchronous
expectations and how to notify the client when the request completes. Behavior
is undefined if client passes a pointer object associated with an incomplete
service operation.
Return values
• non-negative – the observed state of the machine at the time the request was
processed, if successful.
• -EIO – if the service has recorded an error.
• -EINVAL – if the parameters are invalid.
• -EAGAIN – if the reference count would overflow.
int onoff_release(struct onoff_manager *mgr)
Release a reserved use of an on-off service.
This synchronously releases the caller’s previous request. If the last request is released
the manager will initiate a transition to off, which can be observed by registering an
onoff_monitor.
Note: Behavior is undefined if this is not paired with a preceding onoff_request() call that
completed successfully.
Parameters
• mgr – the manager for which a request was successful.
Return values
• non-negative – the observed state (ONOFF_STATE_ON) of the machine at the
time of the release, if the release succeeds.
• -EIO – if the service has recorded an error.
• -ENOTSUP – if the machine is not in a state that permits release.
• cli – a pointer to the same client state that was provided when the operation
to be cancelled was issued.
Return values
• non-negative – the observed state of the machine at the time of the cancellation, if the cancellation succeeds. On successful cancellation ownership of *cli reverts to the client.
• -EINVAL – if the parameters are invalid.
• -EALREADY – if cli was not a record of an uncompleted notification at the time
the cancellation was processed. This likely indicates that the operation and
client notification had already completed.
static inline int onoff_cancel_or_release(struct onoff_manager *mgr, struct onoff_client *cli)
Helper function to safely cancel a request.
Some applications may want to issue requests on an asynchronous event (such as connection
to a USB bus) and to release on a paired event (such as loss of connection to a USB bus).
Applications cannot precisely determine that an in-progress request is still pending without
using onoff_monitor and carefully avoiding race conditions.
This function is a helper that attempts to cancel the operation and issues a release if cancellation fails because the request was completed. This synchronously ensures that ownership of the client data reverts to the client so it is available for a future request.
Parameters
• mgr – the manager for which an operation is to be cancelled.
• cli – a pointer to the same client state that was provided when onoff_request()
was invoked. Behavior is undefined if this is a pointer to client data associated
with an onoff_reset() request.
Return values
• ONOFF_STATE_TO_ON – if the cancellation occurred before the transition completed.
• ONOFF_STATE_ON – if the cancellation occurred after the transition completed.
• -EINVAL – if the parameters are invalid.
• negative – other errors produced by onoff_release().
int onoff_reset(struct onoff_manager *mgr, struct onoff_client *cli)
Clear errors on an on-off service and reset it to its off state.
A service can only be reset when it is in an error state as indicated by onoff_has_error().
The return value indicates the success or failure of an attempt to initiate an operation to reset
the resource. If initiation of the operation succeeds the result of the reset operation itself is
provided through the configured client notification method, possibly before this call returns.
Multiple clients may request a reset; all are notified when it is complete.
Note that the call to this function may succeed in a case where the actual reset fails. Always
check the operation completion result.
Note: Due to the conditions on state transition all incomplete asynchronous operations will
have been informed of the error when it occurred. There need be no concern about dangling
requests left after a reset completes.
Parameters
• mgr – the manager to be reset.
• cli – pointer to client state, including instructions on how to notify the client
when reset completes. Behavior is undefined if cli references an object associ-
ated with an incomplete service operation.
Return values
• non-negative – the observed state of the machine at the time of the reset, if
the reset succeeds.
• -ENOTSUP – if reset is not supported by the service.
• -EINVAL – if the parameters are invalid.
• -EALREADY – if the service does not have a recorded error.
Note: If an error state is returned it is the caller’s responsibility to decide whether to preserve
it (finalize with the same error state) or clear the error (finalize with a non-error result).
Parameters
• srv – pointer to the synchronous service state.
• keyp – pointer to where the lock key should be stored
Returns
negative if the service is in an error state, otherwise the number of active requests
at the time the lock was taken. The lock is held on return regardless of whether
a negative state is returned.
Parameters
• srv – pointer to the synchronous service state
• key – the key returned by the preceding invocation of onoff_sync_lock().
• cli – pointer to the onoff client through which completion information is re-
turned. If a null pointer is passed only the state of the service is updated. For
compatibility with the behavior of callbacks used with the manager API cli
must be null when on is false (the manager does not support callbacks when
turning off devices).
• res – the result of the transition. A negative value places the service into an
error state. A non-negative value increments or decrements the reference count
as specified by on.
• on – Only when res is non-negative, the service reference count will be incremented if on is true, and decremented if on is false.
Returns
negative if the service is left or put into an error state, otherwise the number of
active requests at the time the lock was released.
struct onoff_transitions
#include <onoff.h> On-off service transition functions.
struct onoff_manager
#include <onoff.h> State associated with an on-off manager.
No fields in this structure are intended for use by service providers or clients. The state is
to be initialized once, using onoff_manager_init(), when the service provider is initialized. In
case of error it may be reset through the onoff_reset() API.
struct onoff_client
#include <onoff.h> State associated with a client of an on-off service.
Objects of this type are allocated by a client, which is responsible for zero-initializing the node
field and invoking the appropriate sys_notify init function to configure notification.
Control of the object content transfers to the service provider when a pointer to the object
is passed to any on-off manager function. While the service provider controls the object the
client must not change any object fields. Control reverts to the client concurrent with release
of the owned sys_notify structure, or when indicated by an onoff_cancel() return value.
After control has reverted to the client the notify field must be reinitialized for the next operation.
Public Members
struct onoff_monitor
#include <onoff.h> Registration state for notifications of onoff service transitions.
Any given onoff_monitor structure can be associated with at most one onoff_manager instance.
Public Members
onoff_monitor_callback callback
Callback to be invoked on state change.
This must not be null.
struct onoff_sync_service
#include <onoff.h> State used when a driver uses the on-off service API for synchronous
operations.
This is useful when a subsystem API uses the on-off API to support asynchronous operations but the transitions required by a particular driver are isr-ok and not sleep. It serves
as a substitute for onoff_manager, with locking and persisted state updates supported by
onoff_sync_lock() and onoff_sync_finalize().
4.12 Modbus
Modbus is an industrial messaging protocol. The protocol is specified for different types of networks or buses. The Zephyr OS implementation supports communication over a serial line and may be used with different physical interfaces, like RS485 or RS232. TCP support is not implemented directly, but helper functions are provided to realize TCP support according to the application’s needs.
Modbus communication is based on a client/server model. Only one client may be present on the bus, and it can communicate with several server devices. Server devices themselves are passive and must not send requests or unsolicited responses. Services requested by the client are specified by function codes (FCxx) and can be found in the specification or in the documentation of the API below.
Zephyr RTOS implementation supports both client and server roles.
More information about Modbus and Modbus RTU can be found on the website MODBUS Protocol
Specifications.
4.12.1 Samples
modbus-rtu-server-sample and modbus-rtu-client-sample demonstrate the RTU server and RTU client implementations on an evaluation board.
modbus-tcp-server-sample is a simple Modbus TCP server.
modbus-gateway-sample is an example of how to build a TCP to serial line gateway with Zephyr OS.
group modbus
MODBUS transport protocol API.
Defines
MODBUS_MBAP_LENGTH
Length of MBAP Header
MODBUS_MBAP_AND_FC_LENGTH
Length of MBAP Header plus function code
Typedefs
typedef int (*modbus_raw_cb_t)(const int iface, const struct modbus_adu *adu, void *user_data)
ADU raw callback function signature.
Param iface
Modbus RTU interface index
Param adu
Pointer to the RAW ADU struct to send
Param user_data
Pointer to the user data
Retval 0
If transfer was successful
Enums
enum modbus_mode
Modbus interface mode.
Values:
enumerator MODBUS_MODE_RTU
Modbus over serial line RTU mode
enumerator MODBUS_MODE_ASCII
Modbus over serial line ASCII mode
enumerator MODBUS_MODE_RAW
Modbus raw ADU mode
Functions
int modbus_read_coils(const int iface, const uint8_t unit_id, const uint16_t start_addr, uint8_t
*const coil_tbl, const uint16_t num_coils)
Coil read (FC01)
Sends a Modbus message to read the status of coils from a server.
Parameters
• iface – Modbus interface index
• unit_id – Modbus unit ID of the server
• start_addr – Coil starting address
• coil_tbl – Pointer to an array of bytes containing the value of the coils read.
The format is:
MSB LSB
B7 B6 B5 B4 B3 B2 B1 B0
-------------------------------------
coil_tbl[0] #8 #7 #1
coil_tbl[1] #16 #15 #9
:
:
Note that the array that will be receiving the coil values must be greater than
or equal to: (num_coils - 1) / 8 + 1
• num_coils – Quantity of coils to read
Return values
0 – If the function was successful
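The bit layout above can be decoded with a small helper. This is an application-side sketch; `coil_state` and `coil_buf_len` are not part of the Zephyr API. Coil #1 lands in bit 0 of coil_tbl[0], coil #9 in bit 0 of coil_tbl[1], and so on:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* State of coil `idx` (0-based, relative to start_addr) in a buffer
 * filled by modbus_read_coils(). */
static bool coil_state(const uint8_t *coil_tbl, uint16_t idx)
{
	return ((coil_tbl[idx / 8U] >> (idx % 8U)) & 1U) != 0U;
}

/* Required buffer size in bytes: (num_coils - 1) / 8 + 1. */
static uint16_t coil_buf_len(uint16_t num_coils)
{
	return (num_coils - 1U) / 8U + 1U;
}
```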
int modbus_read_dinputs(const int iface, const uint8_t unit_id, const uint16_t start_addr, uint8_t
*const di_tbl, const uint16_t num_di)
Read discrete inputs (FC02)
Sends a Modbus message to read the status of discrete inputs from a server.
Parameters
• iface – Modbus interface index
• unit_id – Modbus unit ID of the server
• start_addr – Discrete input starting address
• di_tbl – Pointer to an array that will receive the state of the discrete inputs.
The format of the array is as follows:
MSB LSB
B7 B6 B5 B4 B3 B2 B1 B0
-------------------------------------
di_tbl[0] #8 #7 #1
di_tbl[1] #16 #15 #9
:
:
Note that the array that will be receiving the discrete input values must be
greater than or equal to: (num_di - 1) / 8 + 1
• num_di – Quantity of discrete inputs to read
Return values
0 – If the function was successful
int modbus_read_holding_regs(const int iface, const uint8_t unit_id, const uint16_t start_addr,
uint16_t *const reg_buf, const uint16_t num_regs)
Read holding registers (FC03)
Sends a Modbus message to read the value of holding registers from a server.
Parameters
• iface – Modbus interface index
• unit_id – Modbus unit ID of the server
• start_addr – Register starting address
• reg_buf – Is a pointer to an array that will receive the current values of the
holding registers from the server. The array pointed to by ‘reg_buf’ needs to be
able to hold at least ‘num_regs’ entries.
Parameters
• iface – Modbus interface index
• unit_id – Modbus unit ID of the server
• sfunc – Diagnostic sub-function code
• data – Sub-function data
• data_out – Pointer to the data value
Return values
0 – If the function was successful
int modbus_write_coils(const int iface, const uint8_t unit_id, const uint16_t start_addr, uint8_t
*const coil_tbl, const uint16_t num_coils)
Write coils (FC15)
Sends a Modbus message to write to coils on a server unit.
Parameters
• iface – Modbus interface index
• unit_id – Modbus unit ID of the server
• start_addr – Coils starting address
• coil_tbl – Pointer to an array of bytes containing the value of the coils to
write. The format is:
MSB LSB
B7 B6 B5 B4 B3 B2 B1 B0
-------------------------------------
coil_tbl[0] #8 #7 #1
coil_tbl[1] #16 #15 #9
:
:
Note that the array containing the coil values to write must be greater than or equal to: (num_coils - 1) / 8 + 1
• num_coils – Quantity of coils to write
Return values
0 – If the function was successful
int modbus_write_holding_regs(const int iface, const uint8_t unit_id, const uint16_t start_addr,
uint16_t *const reg_buf, const uint16_t num_regs)
Write holding registers (FC16)
Sends a Modbus message to write integer holding registers on a server unit.
Parameters
• iface – Modbus interface index
• unit_id – Modbus unit ID of the server
• start_addr – Register starting address
• reg_buf – Is a pointer to an array containing the values of the holding registers to write. The array pointed to by ‘reg_buf’ must hold at least ‘num_regs’ entries.
• num_regs – Quantity of registers to write
Return values
0 – If the function was successful
struct modbus_adu
#include <modbus.h> Frame struct used internally and for raw ADU support.
Public Members
uint16_t trans_id
Transaction Identifier
uint16_t proto_id
Protocol Identifier
uint16_t length
Length of the data only (not the length of unit ID + PDU)
uint8_t unit_id
Unit Identifier
uint8_t fc
Function Code
uint8_t data[CONFIG_MODBUS_BUFFER_SIZE - 4]
Transaction Data
uint16_t crc
RTU CRC
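For raw ADU support over TCP, these fields map onto the 7-byte MBAP header. The sketch below uses a local stand-in struct rather than the real modbus_adu, and assumes, per the length field comment above, that the on-wire length field counts unit ID + function code + data:

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in mirroring the header fields of modbus_adu. */
struct adu_hdr {
	uint16_t trans_id; /* Transaction Identifier */
	uint16_t proto_id; /* Protocol Identifier (0 for Modbus/TCP) */
	uint16_t length;   /* length of the data only */
	uint8_t unit_id;   /* Unit Identifier */
	uint8_t fc;        /* Function Code */
};

#define MBAP_LENGTH 7U /* corresponds to MODBUS_MBAP_LENGTH */

/* Serialize the MBAP header, big endian per Modbus/TCP framing. The
 * on-wire length field counts the bytes that follow it (unit ID +
 * function code + data), hence the `+ 2U` adjustment. */
static void mbap_pack(const struct adu_hdr *adu, uint8_t out[MBAP_LENGTH])
{
	uint16_t wire_len = adu->length + 2U;

	out[0] = adu->trans_id >> 8;
	out[1] = adu->trans_id & 0xFF;
	out[2] = adu->proto_id >> 8;
	out[3] = adu->proto_id & 0xFF;
	out[4] = wire_len >> 8;
	out[5] = wire_len & 0xFF;
	out[6] = adu->unit_id;
}
```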
struct modbus_user_callbacks
#include <modbus.h> Modbus Server User Callback structure
Public Members
struct modbus_serial_param
#include <modbus.h> Modbus serial line parameter.
Public Members
uint32_t baud
Baudrate of the serial line
struct modbus_server_param
#include <modbus.h> Modbus server parameter.
Public Members
uint8_t unit_id
Modbus unit ID of the server
struct modbus_raw_cb
#include <modbus.h>
struct modbus_iface_param
#include <modbus.h> User parameter structure to configure Modbus interface as client or
server.
Public Members
uint32_t rx_timeout
Amount of time client will wait for a response from the server.
Zephyr APIs often include asynchronous functions where an operation is initiated and the application needs to be informed when it completes, and whether it succeeded. Using k_poll() is often a good method, but some application architectures may be better suited to a callback notification, and operations like enabling clocks and power rails may need to be invoked before kernel functions are available, so a busy-wait for completion may be needed.
This API is intended to be embedded within specific subsystems such as On-Off Manager and other APIs
that support async transactions. The subsystem wrappers are responsible for extracting operation-specific
data from requests that include a notification element, and for invoking callbacks with the parameters
required by the API.
A limitation is that this API is not suitable for System Calls because:
• sys_notify is not a kernel object;
• copying the notification content from userspace will break use of CONTAINER_OF in the implementing function;
• neither the spin-wait nor callback notification methods can be accepted from userspace callers.
Where a notification is required for an asynchronous operation invoked from a user mode thread the
subsystem or driver should provide a syscall API that uses k_poll_signal for notification.
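The completion-posting behavior can be modeled on the host as follows; `fake_notify`, `notify_finalize`, and `notify_fetch_result` are illustrative stand-ins for the sys_notify API, not the real implementation (which encodes completion state in a flags field):

```c
#include <assert.h>
#include <errno.h>

/* Host-side model of completion posting: finalize stores the result and
 * marks the operation complete; fetch_result reports -EAGAIN until then. */
struct fake_notify {
	int finalized;
	int result;
};

static void notify_finalize(struct fake_notify *n, int res)
{
	n->result = res;
	n->finalized = 1; /* completion is "posted" from this point on */
}

static int notify_fetch_result(const struct fake_notify *n, int *result)
{
	if (!n->finalized) {
		return -EAGAIN; /* operation still in progress */
	}
	*result = n->result;
	return 0;
}
```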
group sys_notify_apis
Typedefs
• a pointer to a specific client request structure, i.e. the one that contains the sys_notify
structure.
• the result of the operation, either as passed to sys_notify_finalize() or extracted afterwards
using sys_notify_fetch_result(). Expected values are service-specific, but the value shall be
non-negative if the operation succeeded, and negative if the operation failed.
Functions
Parameters
• notify – pointer to the notification configuration object.
• sigp – pointer to the signal to use for notification. The value must not be null.
The signal must be reset before the client object is passed to the on-off service
API.
struct sys_notify
#include <notify.h> State associated with notification for an asynchronous operation.
Objects of this type are allocated by a client, which must use an initialization function (e.g.
sys_notify_init_signal()) to configure them. Generally the structure is a member of a service-
specific client structure, such as onoff_client.
Control of the containing object transfers to the service provider when a pointer to the ob-
ject is passed to a service function that is documented to take control of the object, such as
onoff_service_request(). While the service provider controls the object the client must not
change any object fields. Control reverts to the client:
• if the call to the service API returns an error;
• when operation completion is posted. This may occur before the call to the service API
returns.
Operation completion is technically posted when the flags field is updated so that
sys_notify_fetch_result() returns success. This will happen before the signal is posted or call-
back is invoked. Note that although the manager will no longer reference the sys_notify object
past this point, the containing object may have state that will be referenced within the call-
back. Where callbacks are used control of the containing object does not revert to the client
until the callback has been invoked. (Re-use within the callback is explicitly permitted.)
After control has reverted to the client the notify object must be reinitialized for the next
operation.
The content of this structure is not public API to clients: all configuration and inspection
should be done with functions like sys_notify_init_callback() and sys_notify_fetch_result().
However, services that use this structure may access certain fields directly.
union method
#include <notify.h>
Public Members
sys_notify_generic_callback callback
4.14 Power Management
The Zephyr RTOS power management subsystem provides several means for a system integrator to
implement power management support that can take full advantage of the power saving features of SoCs.
4.14.1 Overview
The interfaces and APIs provided by the power management subsystem are designed to be architec-
ture and SOC independent. This enables power management implementations to be easily adapted to
different SOCs and architectures.
The architecture and SOC independence is achieved by separating the core infrastructure and the SOC
specific implementations. The SOC specific implementations are abstracted to the application and the
OS using hardware abstraction layers.
The power management features are classified into the following categories.
• System Power Management
• Device Power Management
The kernel enters the idle state when it has nothing to schedule. If enabled via the CONFIG_PM Kconfig
option, the Power Management Subsystem can put an idle system in one of the supported power states,
based on the selected power management policy and the duration of the idle time allotted by the kernel.
It is the application's responsibility to set up a wake-up event. A wake-up event will typically be an interrupt
triggered by one of the SoC peripheral modules, such as a SysTick, RTC, counter, or GPIO. Depending on
the power mode entered, only some SoC peripheral modules may be active and can be used as a wake-up
source.
The following diagram describes system power management:
[Flow diagram: the idle thread locks interrupts (arch_irq_lock()); without CONFIG_PM it simply calls
k_cpu_idle(), otherwise it calls pm_system_suspend(ticks). pm_policy_next_state() selects the next state;
for states such as SUSPEND_TO_RAM devices are suspended (pm_suspend_devices(), or
pm_low_power_devices() for RUNTIME_IDLE), the scheduler is locked, pm_state_notify() is called, and the
SoC implementation enters the state via pm_power_state_set(state). On wake-up, devices are resumed
(pm_resume_devices()), pm_state_exit_post_ops() runs, pm_state_notify() is called again and the
scheduler is unlocked (k_sched_unlock()).]
Power States
The power management subsystem contains a set of states based on power consumption and context
retention.
The list of available power states is defined by pm_state . In general power states with higher indexes
will offer greater power savings and have higher wake latencies.
The power management subsystem supports the following power management policies:
• Residency based
• Application defined
The policy manager is responsible for informing the power subsystem which power state the system
should transition to based on states defined by the platform and other constraints such as a list of
allowed states.
More details on the states definition can be found in the zephyr,power-state binding documentation.
Residency The power management system enters the power state that offers the highest power savings
and whose minimum residency value (see zephyr,power-state) is less than or equal to the scheduled
system idle time.
This policy also accounts for the time necessary to become active again. The core logic used by this policy
to select the best power state is:
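In outline, the selection can be sketched as follows. This is a self-contained simplification, not the actual Zephyr sources; the struct and function names are illustrative, with fields named after the zephyr,power-state binding properties:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct pm_state_info. */
struct state_info {
	uint32_t min_residency_us;
	uint32_t exit_latency_us;
};

/* Pick the deepest state (highest index) whose minimum residency plus
 * exit latency fits within the expected idle time; -1 means no state
 * qualifies and the system stays active.
 */
static int pick_state(const struct state_info *states, size_t n,
		      uint32_t idle_us)
{
	int best = -1;

	for (size_t i = 0; i < n; i++) {
		if (states[i].min_residency_us +
		    states[i].exit_latency_us <= idle_us) {
			best = (int)i;
		}
	}

	return best;
}
```

For example, with the two states from the devicetree fragments below (10000/100 us and 50000/500 us), an idle time of 60000 us selects the deeper state, 20000 us selects the lighter one, and 5000 us selects none.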
Application The application defines the power management policy by implementing the
pm_policy_next_state() function. In this policy the application is free to decide which power state
the system should transition to based on the remaining time for the next scheduled timeout.
An example of an application that defines its own policy can be found in tests/subsys/pm/power_mgmt/.
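A sketch of what such an application-defined policy might look like is shown below. The enum values, function name, and threshold numbers are illustrative only; the real hook is the pm_policy_next_state() function, which works with the Zephyr pm_state types:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the Zephyr power states. */
enum app_pm_state {
	APP_STATE_ACTIVE,
	APP_STATE_RUNTIME_IDLE,
	APP_STATE_SUSPEND_TO_RAM,
};

/* Hypothetical application policy: choose a state from the number of
 * ticks remaining until the next scheduled timeout (-1 meaning no
 * timeout is scheduled). Thresholds are made up for illustration.
 */
static enum app_pm_state app_next_state(int32_t ticks)
{
	if (ticks == -1 || ticks > 1000) {
		return APP_STATE_SUSPEND_TO_RAM; /* long idle period */
	}
	if (ticks > 10) {
		return APP_STATE_RUNTIME_IDLE;   /* short idle period */
	}
	return APP_STATE_ACTIVE;                 /* not worth sleeping */
}
```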
Policy and Power States The power management subsystem allows different Zephyr components and
applications to configure the policy manager to block the system from transitioning into certain power
states. This can be used by devices while executing tasks in the background, to prevent the system from
going into a specific state where context would be lost.
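The bookkeeping behind this can be pictured as a per-state lock counter, roughly what pm_policy_state_lock_get() and pm_policy_state_lock_put() maintain. The code below is a simplified mock of that idea, not the actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_STATES 3 /* illustrative number of platform states */

static uint8_t lock_cnt[NUM_STATES];

/* Block transitions into a state (mock of pm_policy_state_lock_get()). */
static void state_lock_get(int state)
{
	lock_cnt[state]++;
}

/* Allow transitions again (mock of pm_policy_state_lock_put()). */
static void state_lock_put(int state)
{
	lock_cnt[state]--;
}

/* The policy manager skips states with a non-zero lock count. */
static bool state_is_available(int state)
{
	return lock_cnt[state] == 0;
}
```

The lock is a counter rather than a flag so that multiple independent components can block the same state; it only becomes available again once every lock has been released.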
Introduction
Device power management (PM) on Zephyr is a feature that enables devices to save energy when they
are not being used. This feature can be enabled by setting CONFIG_PM_DEVICE to y. When this option is
selected, device drivers implementing power management will be able to take advantage of the device
power management subsystem.
Zephyr supports two types of device power management:
• Device Runtime Power Management
• System Power Management
Device Runtime Power Management
In this method, the application or any component that deals with devices directly and has the best
knowledge of their use, performs the device power management. This saves power if some devices that
are not in use can be turned off or put in power saving mode. This method allows saving power even
when the CPU is active. The components that use the devices need to be power aware and should be
able to make decisions related to managing device power.
When using this type of device power management, the kernel can change CPU power states quickly
when pm_system_suspend() gets called. This is because it does not need to spend time doing device
power management if the devices are already put in the appropriate power state by the application or
component managing the devices.
For more information, see Device Runtime Power Management.
System Power Management
When using this type, device power management is mostly done inside pm_system_suspend() along
with entering a CPU or SOC power state.
If a decision to enter a CPU lower power state is made, the power management subsystem will suspend
devices before changing state. The subsystem takes care of suspending devices following their initializa-
tion order, ensuring that possible dependencies between them are satisfied. As soon as the CPU wakes
up from a sleep state, devices are resumed in the opposite order that they were suspended.
Note: When using System Power Management, device transitions can be run from the idle thread. As
functions in this context cannot block, transitions that intend to use blocking APIs must check whether
they can do so with k_can_yield() .
This type of device power management can be useful when the application is not power aware and does
not implement runtime device power management. However, Device Runtime Power Management is the
preferred option for device power management.
Note: When using this type of device power management, the CPU will only enter a low power state
if no device is in the middle of a hardware transaction that cannot be interrupted.
Note: Devices are suspended only when the last active core is entering a low power state and devices
are resumed by the first core that becomes active.
The power management subsystem defines device states in pm_device_state . This type is used to track
power states of a particular device. It is important to emphasize that, although the state is tracked by
the subsystem, it is the responsibility of each device driver to handle the device actions (pm_device_action)
which change the device state.
Each pm_device_action has a direct and unambiguous relationship with a pm_device_state.
[State diagram: PM_DEVICE_ACTION_SUSPEND moves a device from PM_DEVICE_STATE_ACTIVE (via
PM_DEVICE_STATE_SUSPENDING) to PM_DEVICE_STATE_SUSPENDED; PM_DEVICE_ACTION_RESUME moves it
back to PM_DEVICE_STATE_ACTIVE; PM_DEVICE_ACTION_TURN_OFF moves active or suspended devices to
PM_DEVICE_STATE_OFF; PM_DEVICE_ACTION_TURN_ON moves a device from PM_DEVICE_STATE_OFF to
PM_DEVICE_STATE_SUSPENDED.]
As mentioned above, device drivers do not directly change between these states. This is entirely done
by the power management subsystem. Instead, drivers are responsible for implementing any hardware-
specific tasks needed to handle state changes.
Drivers initialize devices using macros. See Device Driver Model for details on how these macros are
used. A driver which implements device power management support must provide these macros with
arguments that describe its power management implementation.
Use PM_DEVICE_DEFINE or PM_DEVICE_DT_DEFINE to define the power management resources required
by a driver. These macros allocate the driver-specific state which is required by the power management
subsystem.
Drivers can use PM_DEVICE_GET or PM_DEVICE_DT_GET to get a pointer to this state. These pointers
should be passed to DEVICE_DEFINE or DEVICE_DT_DEFINE to initialize the power management field in
each device .
Here is some example code showing how to implement device power management support in a device
driver.
static int dummy_driver_pm_action(const struct device *dev,
				  enum pm_device_action action)
{
	/* handle pm_device_action values (suspend, resume, ...) here */
	return 0;
}

PM_DEVICE_DT_INST_DEFINE(0, dummy_driver_pm_action);

DEVICE_DT_INST_DEFINE(0, &dummy_init,
		      PM_DEVICE_DT_INST_GET(0), NULL, NULL, POST_KERNEL,
		      CONFIG_KERNEL_INIT_PRIORITY_DEFAULT, NULL);
When the system is idle and the SoC is going to sleep, the power management subsystem can suspend
devices, as described in System Power Management. This can cause device hardware to lose some states.
Suspending a device which is in the middle of a hardware transaction, such as writing to a flash memory,
may lead to undefined behavior or inconsistent states. This API guards such transactions by indicating
to the kernel that the device is in the middle of an operation and should not be suspended.
When pm_device_busy_set() is called, the device is marked as busy and the system will not do power
management on it. After the device is no longer doing an operation and can be suspended, it should call
pm_device_busy_clear() .
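Conceptually the busy marker behaves like the flag below. This is a simplified mock of pm_device_busy_set()/pm_device_busy_clear() (the real calls take a const struct device * argument and the check is done by the power management subsystem):

```c
#include <assert.h>
#include <stdbool.h>

static bool dev_busy;

/* Mark the device as being in the middle of a transaction
 * (mock of pm_device_busy_set()). */
static void device_busy_set(void)
{
	dev_busy = true;
}

/* Mark the transaction as finished (mock of pm_device_busy_clear()). */
static void device_busy_clear(void)
{
	dev_busy = false;
}

/* System power management checks this before suspending the device. */
static bool device_can_suspend(void)
{
	return !dev_busy;
}
```

A flash driver, for example, would call the set function before starting a write and the clear function once the write completes, so the system never suspends it mid-transaction.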
Wakeup capability
Some devices are capable of waking the system up from a sleep state. When a device has
such capability, applications can enable or disable this feature on a device dynamically using
pm_device_wakeup_enable() .
This capability is declared by setting the wakeup-source property on the device's node in device-
tree. For example, this devicetree fragment sets the gpio0 device as a “wakeup” source:
gpio0: gpio@40022000 {
compatible = "ti,cc13xx-cc26xx-gpio";
reg = <0x40022000 0x400>;
interrupts = <0 0>;
status = "disabled";
label = "GPIO_0";
gpio-controller;
wakeup-source;
#gpio-cells = <2>;
};
By default, “wakeup” capable devices do not have this functionality enabled during device initializa-
tion. Applications can enable this functionality later by calling pm_device_wakeup_enable() .
Note: This property is only used by the system power management to identify devices that should not
be suspended. It is the responsibility of the driver or the application to do any additional configuration required
by the device to support it.
Power Domain
Power domain on Zephyr is represented as a regular device. The power management subsystem ensures
that a domain is resumed before and suspended after devices using it. For more details, see Power
Domain.
Introduction
The device runtime power management (PM) framework is an active power management mechanism
which reduces the overall system power consumption by suspending devices that are idle or unused,
independently of the system state. It can be enabled by setting CONFIG_PM_DEVICE_RUNTIME. In this
model the device driver is responsible for indicating when it needs the device and when it does not. This
information is used to determine when to suspend or resume a device, based on a usage count.
When device runtime power management is enabled on a device, its state is initially set to
PM_DEVICE_STATE_SUSPENDED, indicating that it is not used. On the first device request, it will be resumed
and put into the PM_DEVICE_STATE_ACTIVE state. The device will remain in this state until it is no
longer used. At this point, the device will be suspended until the next device request. If the suspension
is performed synchronously the device will be immediately put into the PM_DEVICE_STATE_SUSPENDED
state, whereas if it is performed asynchronously, it will be put into the PM_DEVICE_STATE_SUSPENDING
state first and then into the PM_DEVICE_STATE_SUSPENDED state when the action is run.
The device runtime power management framework has been designed to minimize devices power con-
sumption with minimal application work. Device drivers are responsible for indicating when they need
the device to be operational and when they do not. Therefore, applications cannot manually suspend
or resume a device. An application can, however, decide when to disable or enable runtime power
management for a device. This can be useful, for example, if an application wants a particular device to be
always active.
Design principles
When runtime PM is enabled on a device it will no longer be resumed or suspended during system power
transitions. Instead, the driver is fully responsible for indicating when the device is needed and when it
is not. The device runtime PM API uses reference counting to keep track of a device's usage. This allows
the API to determine when a device needs to be resumed or suspended. The API uses the get and put
terminology to indicate when a device is needed or not, respectively. This mechanism plays a key role
when we account for device dependencies. For example, if a bus device is used by multiple sensors, we
can keep the bus active until the last sensor has finished using it.
Note: As of today, the device runtime power management API does not manage device dependencies.
This effectively means that, if a device depends on other devices to operate (e.g. a sensor may depend
on a bus device), the bus will be resumed and suspended on every transaction. In general, it is more
efficient to keep parent devices active when their children are used, since the children may perform
multiple transactions in a short period of time. Until this feature is added, devices can manually get or
put their dependencies.
The pm_device_runtime_get() function can be used by a device driver to indicate it needs the device
to be active or operational. This function will increase device usage count and resume the device if
necessary. Similarly, the pm_device_runtime_put() function can be used to indicate that the device is
no longer needed. This function will decrease the device usage count and suspend the device if necessary.
It is worth noting that in both cases the operation is carried out synchronously. The sequence diagram
shown below illustrates how a device can use this API and the expected sequence of events.
[Sequence diagram: operation(dev) calls pm_device_runtime_get(dev), which increases the usage count
and runs PM_DEVICE_ACTION_RESUME when the count becomes 1; after the operation,
pm_device_runtime_put(dev) decreases the count and runs PM_DEVICE_ACTION_SUSPEND when it
reaches 0.]
The synchronous model is as simple as it gets. However, it may introduce unnecessary delays, since the
caller will not get the operation result until the device is suspended (in case the device is no longer
used). This is unlikely to be a problem if the operation is fast, e.g. a register toggle, but it is a different
matter if suspension involves sending packets through a slow bus. For this reason device drivers can
also make use of the pm_device_runtime_put_async() function. This function schedules the suspend
operation, again only if the device is no longer used. The suspension is then carried out when the system
work queue gets the chance to run. The sequence diagram shown below illustrates this scenario.
[Sequence diagram: as above, but pm_device_runtime_put_async(dev) decreases the usage count and,
when it reaches 0, schedules the suspend; PM_DEVICE_ACTION_SUSPEND runs later from the system
work queue.]
Implementation guidelines
First, a device driver needs to implement the PM action callback used by the PM subsystem to
suspend or resume devices:

static int mydev_pm_action(const struct device *dev,
			   enum pm_device_action action)
{
	switch (action) {
	case PM_DEVICE_ACTION_SUSPEND:
		/* suspend the device */
		...
		break;
	case PM_DEVICE_ACTION_RESUME:
		/* resume the device */
		...
		break;
	default:
		return -ENOTSUP;
	}

	return 0;
}
The PM action callback calls are serialized by the PM subsystem, therefore, no special synchronization is
required.
To enable device runtime power management on a device, the driver needs to call
pm_device_runtime_enable() at initialization time. Note that this function will suspend the device
if its state is PM_DEVICE_STATE_ACTIVE . In case the device is physically suspended, the init function
should call pm_device_init_suspended() before calling pm_device_runtime_enable() .
Device runtime power management can also be automatically enabled on a device instance by
adding the zephyr,pm-device-runtime-auto flag onto the corresponding devicetree node. If enabled,
pm_device_runtime_enable() is called immediately after the init function of the device runs and
returns successfully.
foo {
/* ... */
zephyr,pm-device-runtime-auto;
};
Assuming an example device driver that implements an operation API call, the get and put operations
could be carried out as follows:
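A minimal sketch is shown below. mydev_operation is a hypothetical driver function, and the two stubs stand in for the real pm_device_runtime_get()/pm_device_runtime_put() from <zephyr/pm/device_runtime.h> so that the snippet is self-contained; they mirror only the reference-counting behavior described above:

```c
#include <assert.h>
#include <stddef.h>

struct device { int unused; }; /* stand-in for the Zephyr type */

/* Stub: usage count mirroring pm_device_runtime_get() semantics. */
static int usage_cnt;

static int pm_device_runtime_get(const struct device *dev)
{
	(void)dev;
	usage_cnt++; /* resume would run when the count becomes 1 */
	return 0;
}

static int pm_device_runtime_put(const struct device *dev)
{
	(void)dev;
	usage_cnt--; /* suspend would run when the count reaches 0 */
	return 0;
}

/* Hypothetical driver operation bracketing device use with get/put. */
static int mydev_operation(const struct device *dev)
{
	int ret;

	/* resume the device if suspended; increase the usage count */
	ret = pm_device_runtime_get(dev);
	if (ret < 0) {
		return ret;
	}

	/* ... do something with the device ... */

	/* decrease the usage count; suspend if no longer used */
	return pm_device_runtime_put(dev);
}
```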
In case the suspend operation is slow, the device driver can use the asynchronous API:
static int mydev_operation(const struct device *dev)
{
	int ret;

	ret = pm_device_runtime_get(dev);
	if (ret < 0) {
		return ret;
	}

	/* do something with the device */
	...

	/* schedule the device suspension asynchronously */
	return pm_device_runtime_put_async(dev);
}
Introduction
The Zephyr power domain abstraction is designed to allow groupings of devices powered by a common
source to be notified of power source state changes in a generic fashion. Application code that uses
device A does not need to know that device B is on the same power domain and should also be configured
into a low power state.
Power domains are optional on Zephyr; to enable this feature, the option
CONFIG_PM_DEVICE_POWER_DOMAIN has to be set.
When a power domain turns itself on or off, it is the responsibility of the power domain to notify all
devices using it through their power management callback called with PM_DEVICE_ACTION_TURN_ON or
PM_DEVICE_ACTION_TURN_OFF respectively. This work flow is illustrated in the diagram below.
[Diagram: a pm_device_get() call on gpio0 or gpio1 first causes their parent domain, gpio_domain, to be
resumed.]
Internal Power Domains Most of the devices in an SoC have independent power control that can be
turned on or off to reduce power consumption. But there is a significant amount of static current leakage
that can’t be controlled only using device power management. To solve this problem, SoCs normally are
divided into several regions grouping devices that are generally used together, so that an unused region
can be completely powered off to eliminate this leakage. These regions are called “power domains”; they
can be arranged in a hierarchy and can be nested.
External Power Domains Devices external to a SoC can be powered from sources other than the main
power source of the SoC. These external sources are typically a switch, a regulator, or a dedicated power
IC. Multiple devices can be powered from the same source, and this grouping of devices is typically called
a “power domain”.
Placing devices on power domains can be done for a variety of reasons, including allowing devices with
high power consumption in low power mode to be completely turned off when not in use.
Implementation guidelines
First, a device that acts as a power domain needs to declare itself compatible with power-domain.
Taking Power domain work flow as example, the following code defines a domain called gpio_domain.
gpio_domain: gpio_domain@4 {
compatible = "power-domain";
...
};
A power domain needs to implement the PM action callback used by the PM subsystem to turn devices
on and off.
static int gpio_domain_pm_action(const struct device *dev,
				 enum pm_device_action action)
{
	/* sketch: switch the domain's power rail on or off here
	 * (gpio_domain_pm_action is a hypothetical callback name)
	 */
	...
	return 0;
}
Devices belonging to this domain are declared by referring to it in their power-domain property. The
example below declares devices gpio0 and gpio1 as belonging to the domain gpio_domain.
&gpio0 {
compatible = "zephyr,gpio-emul";
gpio-controller;
power-domain = <&gpio_domain>;
};
&gpio1 {
compatible = "zephyr,gpio-emul";
gpio-controller;
power-domain = <&gpio_domain>;
};
All devices under a domain will be notified when the domain changes state. These notifications are sent
as actions in the device PM action callback and can be used by the devices to do any additional work
required. They can also be safely ignored.
static int mydev_pm_action(const struct device *dev,
enum pm_device_action action)
{
switch (action) {
case PM_DEVICE_ACTION_SUSPEND:
/* suspend the device */
...
break;
case PM_DEVICE_ACTION_RESUME:
/* resume the device */
...
break;
case PM_DEVICE_ACTION_TURN_ON:
/* configure the device into low power mode */
...
break;
case PM_DEVICE_ACTION_TURN_OFF:
/* prepare the device for power down */
...
break;
default:
return -ENOTSUP;
}
return 0;
}
Note: It is the responsibility of the driver or the application to set the domain as a “wakeup” source if a
device depending on it is used as a “wakeup” source.
System PM APIs
group subsys_pm_sys
System Power Management API.
Functions
Parameters
• cpu – CPU index.
• info – Power state which should be used in the ongoing suspend operation.
struct pm_notifier
#include <pm.h> Power management notifier struct
This struct contains callbacks that are called when the target enters and exits power states.
As currently implemented the entry callback is invoked when transitioning from
PM_STATE_ACTIVE to another state, and the exit callback is invoked when transitioning from
a non-active state to PM_STATE_ACTIVE. This behavior may change in the future.
Note: These callbacks can be called from the ISR of the event that caused the kernel exit
from idling.
Public Members
States
group subsys_pm_states
System Power Management States.
Defines
PM_STATE_INFO_DT_INIT(node_id)
Initializer for struct pm_state_info given a DT node identifier with zephyr,power-state compat-
ible.
Parameters
• node_id – A node identifier with compatible zephyr,power-state
PM_STATE_DT_INIT(node_id)
Initializer for enum pm_state given a DT node identifier with zephyr,power-state compatible.
Parameters
• node_id – A node identifier with compatible zephyr,power-state
DT_NUM_CPU_POWER_STATES(node_id)
Obtain number of CPU power states supported by the given CPU node identifier.
Parameters
• node_id – A CPU node identifier.
Returns
Number of supported CPU power states.
PM_STATE_INFO_LIST_FROM_DT_CPU(node_id)
Initialize an array of struct pm_state_info with information from all the states present in the
given CPU node identifier.
Example devicetree fragment:
cpus {
...
cpu0: cpu@0 {
device_type = "cpu";
...
cpu-power-states = <&state0 &state1>;
};
};
...
power-states {
state0: state0 {
compatible = "zephyr,power-state";
power-state-name = "suspend-to-idle";
min-residency-us = <10000>;
exit-latency-us = <100>;
};
state1: state1 {
compatible = "zephyr,power-state";
power-state-name = "suspend-to-ram";
min-residency-us = <50000>;
exit-latency-us = <500>;
};
};
Example usage:
Parameters
• node_id – A CPU node identifier.
PM_STATE_LIST_FROM_DT_CPU(node_id)
Initialize an array of struct pm_state with information from all the states present in the given
CPU node identifier.
Example devicetree fragment:
cpus {
...
cpu0: cpu@0 {
device_type = "cpu";
...
cpu-power-states = <&state0 &state1>;
};
};
...
power-states {
state0: state0 {
compatible = "zephyr,power-state";
power-state-name = "suspend-to-idle";
min-residency-us = <10000>;
exit-latency-us = <100>;
};
state1: state1 {
compatible = "zephyr,power-state";
power-state-name = "suspend-to-ram";
min-residency-us = <50000>;
exit-latency-us = <500>;
};
};
Example usage:
Parameters
• node_id – A CPU node identifier.
Enums
enum pm_state
Power management state
Values:
enumerator PM_STATE_ACTIVE
Runtime active state.
The system is fully powered and active.
enumerator PM_STATE_RUNTIME_IDLE
Runtime idle state.
Runtime idle is a system sleep state in which all of the cores enter the deepest possible idle
state and wait for interrupts. There are no requirements for the devices, which are left in
whatever state they are in.
enumerator PM_STATE_SUSPEND_TO_IDLE
Suspend to idle state.
The system goes through a normal platform suspend where it puts all of the cores into the
deepest possible idle state and may put peripherals into low-power states. No operating
state is lost (i.e. the CPU core does not lose execution context), so the system can easily go
back to where it left off.
enumerator PM_STATE_STANDBY
Standby state.
In addition to putting peripherals into low-power states, all non-boot CPUs are powered
off. This should allow more energy to be saved relative to suspend-to-idle, but the resume
latency will generally be greater than for that state. On a uniprocessor system, this state is
equivalent to suspend-to-idle.
enumerator PM_STATE_SUSPEND_TO_RAM
Suspend to ram state.
This state offers significant energy savings by powering off as much of the system as
possible; memory is placed in self-refresh mode to retain its contents. The state of devices
and CPUs is saved and held in memory, and it may require some boot-strapping code in
ROM to resume the system from it.
enumerator PM_STATE_SUSPEND_TO_DISK
Suspend to disk state.
This state offers significant energy savings by powering off as much of the system as
possible, including the memory. The contents of memory are written to disk or other non-
volatile storage; on resume they are read back into memory with the help of boot-strapping
code, which restores the system to the point of execution where it entered suspend-to-disk.
enumerator PM_STATE_SOFT_OFF
Soft off state.
This state consumes a minimal amount of power and requires a large latency in order
to return to the runtime active state. The contents of the system (CPU and memory) are not
preserved, so the system is restarted as if from initial power-up and kernel boot.
enumerator PM_STATE_COUNT
Number of power management states (internal use)
Functions
struct pm_state_info
#include <state.h> Information about a power management state
Public Members
uint8_t substate_id
Some platforms have multiple states that map to one Zephyr power state. This property
allows the platform to distinguish them. E.g.:
power-states {
state0: state0 {
compatible = "zephyr,power-state";
power-state-name = "suspend-to-idle";
substate-id = <1>;
min-residency-us = <10000>;
exit-latency-us = <100>;
};
state1: state1 {
compatible = "zephyr,power-state";
power-state-name = "suspend-to-idle";
substate-id = <2>;
min-residency-us = <20000>;
exit-latency-us = <200>;
};
};
uint32_t min_residency_us
Minimum residency duration in microseconds. This is the minimum time for which a given
idle state is worthwhile energy-wise.
Note: 0 means that this property is not available for this state.
uint32_t exit_latency_us
Worst case latency in microseconds required to exit the idle state.
Note: 0 means that this property is not available for this state.
Policy
group subsys_pm_sys_policy
System Power Management Policy API.
Defines
PM_ALL_SUBSTATES
Special value for ‘all substates’.
Typedefs
Param latency
New maximum latency. Positive value represents latency in microseconds.
SYS_FOREVER_US value lifts the latency constraint. Other values are forbidden.
Functions
See also:
pm_policy_state_lock_put()
Parameters
• state – Power state.
• substate_id – Power substate ID. Use PM_ALL_SUBSTATES to affect all the
substates in the given power state.
See also:
pm_policy_state_lock_get()
Parameters
• state – Power state.
• substate_id – Power substate ID. Use PM_ALL_SUBSTATES to affect all the
substates in the given power state.
See also:
pm_policy_event_unregister
Parameters
• evt – Event.
• time_us – When the event will occur, in microseconds from now.
See also:
pm_policy_event_register
Parameters
• evt – Event.
• time_us – When the event will occur, in microseconds from now.
See also:
pm_policy_event_register
Parameters
• evt – Event.
struct pm_policy_latency_subscription
#include <policy.h> Latency change subscription.
Note: All fields in this structure are meant for private usage.
struct pm_policy_latency_request
#include <policy.h> Latency request.
Note: All fields in this structure are meant for private usage.
struct pm_policy_event
#include <policy.h> Event.
Note: All fields in this structure are meant for private usage.
Hooks
group subsys_pm_sys_hooks
System Power Management Hooks.
Functions
Device PM APIs
group subsys_pm_device
Device Power Management API.
Defines
PM_DEVICE_DEFINE(dev_id, pm_action_cb)
Define device PM resources for the given device name.
See also:
PM_DEVICE_DT_DEFINE, PM_DEVICE_DT_INST_DEFINE
Parameters
• dev_id – Device id.
• pm_action_cb – PM control callback.
PM_DEVICE_DT_DEFINE(node_id, pm_action_cb)
Define device PM resources for the given node identifier.
See also:
PM_DEVICE_DT_INST_DEFINE, PM_DEVICE_DEFINE
Parameters
• node_id – Node identifier.
• pm_action_cb – PM control callback.
PM_DEVICE_DT_INST_DEFINE(idx, pm_action_cb)
Define device PM resources for the given instance.
See also:
PM_DEVICE_DT_DEFINE, PM_DEVICE_DEFINE
Parameters
• idx – Instance index.
• pm_action_cb – PM control callback.
PM_DEVICE_GET(dev_id)
Obtain a reference to the device PM resources for the given device.
Parameters
• dev_id – Device id.
Returns
Reference to the device PM resources (NULL if device CONFIG_PM_DEVICE is dis-
abled).
PM_DEVICE_DT_GET(node_id)
Obtain a reference to the device PM resources for the given node.
Parameters
• node_id – Node identifier.
Returns
Reference to the device PM resources (NULL if device CONFIG_PM_DEVICE is dis-
abled).
PM_DEVICE_DT_INST_GET(idx)
Obtain a reference to the device PM resources for the given instance.
Parameters
• idx – Instance index.
Returns
Reference to the device PM resources (NULL if device CONFIG_PM_DEVICE is dis-
abled).
Typedefs
Enums
enum pm_device_state
Device power states.
Values:
enumerator PM_DEVICE_STATE_ACTIVE
Device is in active or regular state.
enumerator PM_DEVICE_STATE_SUSPENDED
Device is suspended.
enumerator PM_DEVICE_STATE_SUSPENDING
Device is being suspended.
enumerator PM_DEVICE_STATE_OFF
Device is turned off (power removed).
enum pm_device_action
Device PM actions.
Values:
enumerator PM_DEVICE_ACTION_SUSPEND
Suspend.
enumerator PM_DEVICE_ACTION_RESUME
Resume.
enumerator PM_DEVICE_ACTION_TURN_OFF
Turn off.
enumerator PM_DEVICE_ACTION_TURN_ON
Turn on.
Functions
See also:
pm_device_busy_clear()
Parameters
See also:
pm_device_busy_set()
Parameters
• dev – Device instance.
bool pm_device_is_any_busy(void)
Check if any device is busy.
Return values
• false – If no device is busy
• true – If one or more devices are busy
bool pm_device_is_busy(const struct device *dev)
Check if a device is busy.
Parameters
• dev – Device instance.
Return values
• false – If the device is not busy
• true – If the device is busy
bool pm_device_wakeup_enable(const struct device *dev, bool enable)
Enable or disable a device as a wake up source.
A device marked as a wake up source will not be suspended when the system goes into low-
power modes, thus allowing to use it as a wake up source for the system.
Parameters
• dev – Device instance.
• enable – true to enable or false to disable
Return values
• true – If the wakeup source was successfully enabled.
• false – If the wakeup source was not successfully enabled.
bool pm_device_wakeup_is_enabled(const struct device *dev)
Check if a device is enabled as a wake up source.
Parameters
• dev – Device instance.
Return values
• true – if the wakeup source is enabled.
• false – if the wakeup source is not enabled.
See also:
pm_device_state_unlock
Note: The given device should not have device runtime enabled.
Parameters
• dev – Device instance.
See also:
pm_device_state_lock
Parameters
• dev – Device instance.
struct pm_device
#include <device.h> Device PM info.
Public Members
uint32_t usage
Device usage count
pm_device_action_cb_t action_cb
Device PM action callback
group subsys_pm_device_runtime
Device Runtime Power Management API.
Functions
Note: Must not be called from application code. See the zephyr,pm-device-runtime-auto
property in pm.yaml and z_sys_init_run_level.
Parameters
• dev – Device instance.
Return values
• 0 – If the device runtime PM is enabled successfully or it has not been requested
for this device in devicetree.
• -errno – Other negative errno, result of enabling device runtime PM.
See also:
pm_device_init_suspended()
Parameters
• dev – Device instance.
Return values
• 0 – If the device runtime PM is enabled successfully.
• -EPERM – If device has power state locked.
• -ENOTSUP – If the device does not support PM.
• -errno – Other negative errno, result of suspending the device.
Parameters
• dev – Device instance.
Return values
• 0 – If the device runtime PM is disabled successfully.
• -ENOTSUP – If the device does not support PM.
• -errno – Other negative errno, result of resuming the device.
Parameters
• dev – Device instance.
Return values
• 0 – If it succeeds. In case device runtime PM is not enabled or not available
this function will be a no-op and will also return 0.
• -errno – Other negative errno, result of the PM action callback.
See also:
pm_device_runtime_put_async()
Parameters
• dev – Device instance.
Return values
• 0 – If it succeeds. In case device runtime PM is not enabled or not available
this function will be a no-op and will also return 0.
• -EALREADY – If device is already suspended (can only happen if get/put calls
are unbalanced).
• -errno – Other negative errno, result of the action callback.
See also:
pm_device_runtime_put()
Note: Asynchronous operations are not supported when in pre-kernel mode. In this case, the
function will be blocking (equivalent to pm_device_runtime_put()).
Parameters
• dev – Device instance.
Return values
• 0 – If it succeeds. In case device runtime PM is not enabled or not available
this function will be a no-op and will also return 0.
• -EBUSY – If the device is busy.
• -EALREADY – If device is already suspended (can only happen if get/put calls
are unbalanced).
See also:
pm_device_runtime_enable()
Parameters
• dev – Device instance.
Return values
• true – If device has device runtime PM enabled.
• false – If the device has device runtime PM disabled.
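A common get/put pattern for the runtime PM calls described above might look like this (sketch only; the device handle and the surrounding flow are illustrative):

```c
/* Sketch: bracket device usage with runtime PM get/put so the device is
 * powered only while needed. "dev" is obtained elsewhere. */
static int do_transfer(const struct device *dev)
{
        int ret;

        ret = pm_device_runtime_get(dev);   /* resume device if suspended */
        if (ret < 0) {
                return ret;
        }

        /* ... use the device while it is guaranteed to be active ... */

        return pm_device_runtime_put(dev);  /* allow it to suspend again */
}
```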
4.15 OS Abstraction
OS abstraction layers (OSAL) provide wrapper function APIs that encapsulate common system functions
offered by any operating system. These APIs make it easier and quicker to develop for, and port code to
multiple software and hardware platforms.
These sections describe the software and hardware abstraction layers supported by the Zephyr RTOS.
The Portable Operating System Interface (POSIX) is a family of standards specified by the IEEE Computer
Society for maintaining compatibility between operating systems. Zephyr implements a subset of the
embedded profiles PSE51 and PSE52, and BSD Sockets API.
With the POSIX support available in Zephyr, an existing POSIX compliant application can be ported
to run on the Zephyr kernel, and therefore leverage Zephyr features and functionality. Additionally, a
library designed for use with POSIX threading compatible operating systems can be ported to Zephyr
kernel based applications with minimal or no changes.
(Figure: layered stack showing the application running on top of the Zephyr kernel, BSP, and hardware.)
The POSIX API subset is an increasingly popular OSAL (operating system abstraction layer) for IoT and
embedded applications, as can be seen in Zephyr, AWS:FreeRTOS, TI-RTOS, and NuttX.
Benefits of POSIX support in Zephyr include:
• Offering a familiar API to non-embedded programmers, especially from Linux
System Overview
Units of Functionality The system profile is defined in terms of component profiles that specify Units
of Functionality that can be combined to realize the application platform. A Unit of Functionality is
a defined set of services which can be implemented. If a Unit is implemented, the standard
prescribes that all services in that Unit must be implemented.
A Minimal Realtime System Profile implementation must support the following Units of Functionality as
defined in IEEE Std. 1003.1 (also referred to as POSIX.1-2017).
Option Requirements An implementation supporting the Minimal Realtime System Profile must
support the POSIX.1 Option Requirements which are defined in the standard. Option Requirements
are used for further sub-profiling within the Units of Functionality: they further define the
functional behavior of a system service (normally adding extra functionality). Depending on the
profile to which the POSIX implementation complies, parameters and/or the precise functionality
of certain services may differ.
The following list shows the option requirements that are implemented in Zephyr.
Units of Functionality
This section describes the Units of Functionality (fixed sets of interfaces) which are implemented (par-
tially or completely) in Zephyr. Please refer to the standard for a full description of each listed interface.
POSIX_THREADS_BASE The basic assumption in this profile is that the system consists of a single
(implicit) process with multiple threads. Therefore, the standard requires all basic thread services, except
those related to multiple processes.
Table 6: POSIX_THREADS_BASE
API Supported
pthread_atfork()
pthread_attr_destroy() yes
pthread_attr_getdetachstate() yes
pthread_attr_getschedparam() yes
pthread_attr_init() yes
pthread_attr_setdetachstate() yes
pthread_attr_setschedparam() yes
pthread_barrier_destroy() yes
pthread_barrier_init() yes
pthread_barrier_wait() yes
pthread_barrierattr_destroy()
pthread_barrierattr_getpshared()
pthread_barrierattr_init()
pthread_barrierattr_setpshared()
pthread_cancel() yes
pthread_cleanup_pop()
continues on next page
Table 7: XSI_THREAD_EXT
API Supported
pthread_attr_getguardsize()
pthread_attr_getstack() yes
pthread_attr_setguardsize()
pthread_attr_setstack() yes
pthread_getconcurrency()
pthread_setconcurrency()
Table 8: XSI_THREAD_MUTEX_EXT
API Supported
pthread_mutexattr_gettype() yes
pthread_mutexattr_settype() yes
Table 9: POSIX_C_LANG_SUPPORT
API Supported
abs() yes
asctime()
asctime_r()
atof()
atoi() yes
atol()
atoll()
bsearch() yes
calloc() yes
ctime()
ctime_r()
difftime()
div()
feclearexcept()
fegetenv()
fegetexceptflag()
fegetround()
feholdexcept()
feraiseexcept()
fesetenv()
fesetexceptflag()
fesetround()
fetestexcept()
feupdateenv()
free() yes
gmtime() yes
gmtime_r() yes
imaxabs()
imaxdiv()
isalnum() yes
isalpha() yes
isblank()
iscntrl() yes
isdigit() yes
isgraph() yes
islower()
isprint() yes
ispunct()
isspace() yes
isupper() yes
isxdigit() yes
labs() yes
continues on next page
POSIX_SIGNALS Signal services are a basic mechanism within POSIX-based systems and are required
for error and event handling.
POSIX_DEVICE_IO
making it generic. For more information on CMSIS RTOS v1, please refer http://www.keil.com/pack/
doc/CMSIS/RTOS/html/index.html
Kernel
osKernelGetState, osKernelSuspend, osKernelResume, osKernelInitialize and
osKernelStart are not supported.
Mutex
osMutexPrioInherit is supported by default and is not configurable, you cannot select/unselect
this attribute.
osMutexRecursive is also supported by default. If this attribute is not set, an error is
returned when the same thread tries to acquire the mutex a second time.
osMutexRobust is not supported in Zephyr.
osEventFlagsSet, osEventFlagsClear
osFlagsErrorUnknown (Unspecified error) and osFlagsErrorResource (Event flags object specified
by parameter ef_id is not ready to be used) are not supported.
osEventFlagsDelete
osErrorParameter (the value of the parameter ef_id is incorrect) is not supported.
osThreadFlagsSet
osFlagsErrorUnknown (Unspecified error) and osFlagsErrorResource (Thread specified by pa-
rameter thread_id is not active to receive flags) are not supported.
osThreadFlagsClear
osFlagsErrorResource (Running thread is not active to receive flags) is not supported.
osDelayUntil
osParameter (the time cannot be handled) is not supported.
4.16 Shell
• Overview
– Connecting to Segger RTT via TCP (on macOS, for example)
• Commands
– Creating commands
– Dictionary commands
– Commands execution
– Built-in commands
• Tab Feature
• History Feature
• Wildcards Feature
• Meta Keys Feature
• Getopt Feature
• Obscured Input Feature
• Shell Logger Backend Feature
• RTT Backend Channel Selection
• Usage
• API Reference
4.16.1 Overview
This module allows you to create and handle a shell with a user-defined command set. You can use it in
examples where more than simple button or LED user interaction is required. This module is a Unix-like
shell with these features:
• Support for multiple instances.
• Advanced cooperation with the Logging.
• Support for static and dynamic commands.
Note: Some of these features have a significant impact on RAM and flash usage, but many can be
disabled when not needed. To default to options which favor reduced RAM and flash requirements
instead of features, you should enable CONFIG_SHELL_MINIMAL and selectively enable just the features
you want.
The module can be connected to any transport for command input and output. At this point, the follow-
ing transport layers are implemented:
• Segger RTT
• SMP
• Telnet
• UART
• USB
• DUMMY - not a physical transport layer.
On macOS JLinkRTTClient won’t let you enter input. Instead, please use the following procedure:
• Open up a first Terminal window and enter:
nc localhost 19021
• Now you should have a network connection to RTT that will let you enter input to the shell.
4.16.2 Commands
Shell commands are organized in a tree structure and grouped into the following types:
• Root command (level 0): Gathered and alphabetically sorted in a dedicated memory section.
• Static subcommand (level > 0): Number and syntax must be known at compile time. Created
in the software module.
• Dynamic subcommand (level > 0): Number and syntax do not need to be known at compile
time. Created in the software module.
Creating commands
Static commands Example code demonstrating how to create a root command with static subcom-
mands.
Dictionary commands
These are a special kind of static command. Dictionary commands can be used whenever you want
to use a pair (string <-> corresponding data) in a command handler. The string is usually a
verbal description of the data. The idea is to use the string as a command syntax that can be
prompted by the shell, while the corresponding data is used to process the command.
Let’s use an example. Suppose you created a command to set an ADC gain. It is a perfect place where a
dictionary can be used. The dictionary would be a set of pairs: (string: gain_value, int: value) where int
value could be used with the ADC driver API.
Abstract code for this task would look like this:
/* Illustrative handler sketch: the shell passes the value paired with
 * the typed string through the data argument. */
static int gain_cmd_handler(const struct shell *sh, size_t argc,
			    char **argv, void *data)
{
	int gain = (int)(intptr_t)data;

	/* e.g. forward the gain value to the ADC driver here */
	shell_print(sh, "%s: gain value %d", argv[0], gain);
	return 0;
}
SHELL_SUBCMD_DICT_SET_CREATE(sub_gain, gain_cmd_handler,
(gain_1, 1, "gain 1"), (gain_2, 2, "gain 2"),
(gain_1_2, 3, "gain 1/2"), (gain_1_4, 4, "gain 1/4")
);
SHELL_CMD_REGISTER(gain, &sub_gain, "Set ADC gain", NULL);
Dynamic commands Example code demonstrating how to create a root command with static and
dynamic subcommands. At the beginning dynamic command list is empty. New commands can be added
by typing:
Newly added commands can be prompted or autocompleted with the Tab key.
/* commands counter */
static uint8_t dynamic_cmd_cnt;
SHELL_DYNAMIC_CMD_CREATE(m_sub_dynamic_set, dynamic_cmd_get);
SHELL_STATIC_SUBCMD_SET_CREATE(m_sub_dynamic,
SHELL_CMD(add, NULL,"Add new command to dynamic_cmd_buffer and"
" sort them alphabetically.",
cmd_dynamic_add),
SHELL_CMD(execute, &m_sub_dynamic_set,
"Execute a command.", cmd_dynamic_execute),
SHELL_CMD(remove, &m_sub_dynamic_set,
"Remove a command from dynamic_cmd_buffer.",
cmd_dynamic_remove),
SHELL_CMD(show, NULL,
"Show all commands in dynamic_cmd_buffer.",
cmd_dynamic_show),
SHELL_SUBCMD_SET_END
);
SHELL_CMD_REGISTER(dynamic, &m_sub_dynamic,
"Demonstrate dynamic command usage.", cmd_dynamic);
Commands execution
Each command or subcommand may have a handler. The shell executes the handler found deepest
in the command tree; further subcommands (without a handler) are passed as arguments. Characters
within parentheses are treated as one argument. If the shell does not find a handler, it
displays an error message.
Commands can be also executed from a user application using any active backend and a function
shell_execute_cmd() , as shown in this example:
int main(void)
{
	/* The code below executes the "clear" command on the DUMMY backend. */
	shell_execute_cmd(NULL, "clear");
	return 0;
}
Commands execution example Let’s assume a command structure as in the following figure, where:
• root_cmd - root command without a handler
• cmd_xxx_h - command has a handler
• cmd_xxx - command does not have a handler
Example 1 Sequence: root_cmd cmd_1_h cmd_12_h cmd_121_h parameter will execute command
cmd_121_h and parameter will be passed as an argument.
Example 2 Sequence: root_cmd cmd_2 cmd_22_h parameter1 parameter2 will execute command
cmd_22_h and parameter1 parameter2 will be passed as arguments.
Example 3 Sequence: root_cmd cmd_1_h parameter1 cmd_121_h parameter2 will execute command
cmd_1_h and parameter1, cmd_121_h and parameter2 will be passed as arguments.
Example 4 Sequence: root_cmd parameter cmd_121_h parameter2 will not execute any command.
Function shell_fprintf() or the shell print macros: shell_print , shell_info , shell_warn and
shell_error can be used from the command handler or from threads, but not from an interrupt context.
Instead, interrupt handlers should use Logging for printing.
Command help Every user-defined command or subcommand can have its own help description. The
help for commands and subcommands can be created with respective macros: SHELL_CMD_REGISTER ,
SHELL_CMD_ARG_REGISTER , SHELL_CMD , and SHELL_CMD_ARG .
Shell prints this help message when you call a command or subcommand with -h or --help parameter.
Parent commands In the subcommand handler, you can access both the parameters passed to com-
mands or the parent commands, depending on how you index argv.
• When indexing argv with positive numbers, you can access the parameters.
• When indexing argv with negative numbers, you can access the parent commands.
• The subcommand to which the handler belongs has the argv index of 0.
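The indexing rules above can be sketched in a handler like this (names are illustrative):

```c
/* Sketch: inside a subcommand handler, argv[0] is this subcommand,
 * argv[1] the first parameter, and argv[-1] the parent command. */
static int cmd_sub(const struct shell *sh, size_t argc, char **argv)
{
        shell_print(sh, "this command:   %s", argv[0]);
        shell_print(sh, "parent command: %s", argv[-1]);
        if (argc > 1) {
                shell_print(sh, "first parameter: %s", argv[1]);
        }
        return 0;
}
```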
Built-in commands
The Tab key can be used to suggest commands or subcommands. This feature is enabled by
CONFIG_SHELL_TAB set to y. It can also be used for partial or complete auto-completion of
commands. This feature is activated by CONFIG_SHELL_TAB_AUTOCOMPLETION set to y. When the user
starts writing a command and presses the Tab key, the shell will do one of three things:
• Autocomplete the command.
• Prompt available commands and, if possible, partly complete the command.
• Do nothing if there are no available or matching commands.
This feature enables command history in the shell. It is activated by CONFIG_SHELL_HISTORY set
to y. History can be accessed using the ↑ ↓ keys, or Ctrl + n and Ctrl + p if meta keys are
active. The number of commands that can be stored depends on the size of the
CONFIG_SHELL_HISTORY_BUFFER parameter.
The shell module can handle wildcards. Wildcards are interpreted correctly when the expanded
command and its subcommands do not have a handler. For example, if you want to set the logging
level to err for the app and app_test modules, you can execute the following command:
Apart from subcommands, some shell users might need to use options as well: the shell parses
the arguments string, looking for supported options. Typically, this task is accomplished by the
getopt family of functions.
For this purpose the shell supports the getopt and getopt_long libraries available in the FreeBSD
project. This feature is activated by CONFIG_GETOPT set to y and CONFIG_GETOPT_LONG set to y.
This feature can be used in a thread-safe as well as a non-thread-safe manner. The former is
fully compatible with regular getopt usage, while the latter differs slightly.
An example of non-thread-safe usage:
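A minimal sketch of the non-thread-safe variant inside a command handler (the option letters and messages are illustrative):

```c
/* Sketch: parse -h and -v options in a command handler using getopt(). */
static int cmd_opts(const struct shell *sh, size_t argc, char **argv)
{
        int c;

        while ((c = getopt(argc, argv, "hv")) != -1) {
                switch (c) {
                case 'h':
                        shell_print(sh, "usage: opts [-h] [-v]");
                        break;
                case 'v':
                        shell_print(sh, "verbose output enabled");
                        break;
                default:
                        return -EINVAL;
                }
        }
        return 0;
}
```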
With the obscured input feature, the shell can be used for implementing a login prompt or other user in-
teraction whereby the characters the user types should not be revealed on screen, such as when entering
a password.
Once the obscured input has been accepted, it is normally desired to return the shell to normal operation.
Such runtime control is possible with the shell_obscure_set function.
An example of login and logout commands using this feature is lo-
cated in samples/subsys/shell/shell_module/src/main.c and the config file sam-
ples/subsys/shell/shell_module/prj_login.conf.
This feature is activated upon startup by CONFIG_SHELL_START_OBSCURED set to y. With this set either
way, the option can still be controlled later at runtime. CONFIG_SHELL_CMDS_SELECT is useful to prevent
entry of any other command besides a login command, by means of the shell_set_root_cmd function.
Likewise, CONFIG_SHELL_PROMPT_UART allows you to set the prompt upon startup, but it can be changed
later with the shell_prompt_change function.
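For example, a login handler might restore normal operation along these lines (sketch; the surrounding login flow is illustrative):

```c
/* Sketch: after the password has been accepted, echo input again and
 * restore a normal prompt. */
static void login_success(const struct shell *sh)
{
        shell_obscure_set(sh, false);          /* stop hiding typed characters */
        shell_prompt_change(sh, "uart:~$ ");   /* restore the default prompt */
}
```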
A shell instance can act as a Logging backend. The shell ensures that log messages are correctly
multiplexed with shell output. Log messages from the logger thread are enqueued and processed in
the shell thread. The logger thread will block for a configurable amount of time if the queue is
full, blocking the logger thread context for that time. The oldest log message is removed from
the queue after the timeout and the new message is enqueued. Use the shell stats show command to
retrieve the number of log messages dropped by the shell instance. Log queue size and timeout
are SHELL_DEFINE arguments.
This feature is activated by: CONFIG_SHELL_LOG_BACKEND set to y.
Warning: Enqueuing timeout must be set carefully when multiple backends are used in the system.
The shell instance could have a slow transport or could block, for example, by a UART with hardware
flow control. If timeout is set too high, the logger thread could be blocked and impact other logger
backends.
Warning: As the shell is a complex logger backend, it can not output logs if the application crashes
before the shell thread is running. In this situation, you can enable one of the simple logging backends
instead, such as UART (CONFIG_LOG_BACKEND_UART) or RTT (CONFIG_LOG_BACKEND_RTT), which are
available earlier during system initialization.
Instead of using the shell as a logger backend, the RTT shell backend and the RTT log backend
can also be used simultaneously, but over different channels. By separating them, the log can be
captured or monitored without shell output, or the shell may be scripted without log
interference. Enabling both the Shell RTT backend and the Log RTT backend does not work by
default, because both default to channel 0. There are two options:
1. The Shell buffer can use an alternate channel, for example using SHELL_BACKEND_RTT_BUFFER set to
1. This allows monitoring the log using JLinkRTTViewer while a script interfaces over channel 1.
2. The Log buffer can use an alternate channel, for example using LOG_BACKEND_RTT_BUFFER set to 1.
This allows interactive use of the shell through JLinkRTTViewer, while the log is written to file.
Warning: Regardless of the channel selection, the RTT log backend must be explicitly enabled using
LOG_BACKEND_RTT set to y, because it defaults to n when the Shell RTT backend is also enabled using
SHELL_BACKEND_RTT being set to y.
4.16.11 Usage
To create a new shell instance, the user needs to activate the requested backend using menuconfig.
The following code shows a simple use case of this library:
/* Illustrative sketch: register a "ping" command that replies with "pong". */
static int cmd_ping(const struct shell *sh, size_t argc, char **argv)
{
	shell_print(sh, "pong");
	return 0;
}

SHELL_CMD_REGISTER(ping, NULL, "Respond with pong.", cmd_ping);
Users may use the Tab key to complete a command/subcommand or to see the available subcommands
for the currently entered command level. For example, when the cursor is positioned at the beginning of
the command line and the Tab key is pressed, the user will see all root (level 0) commands:
Note: To view the subcommands that are available for a specific command, you must first type a space
after this command and then hit Tab.
params ping
group shell_api
Shell API.
Defines
Note: Each root command shall have a unique syntax. If a command is called with the wrong
number of arguments, the shell prints an error message and the command handler is not
called.
Parameters
• syntax – [in] Command syntax (for example: history).
• subcmd – [in] Pointer to a subcommands array.
• help – [in] Pointer to a command help string.
• handler – [in] Pointer to a function handler.
• mandatory – [in] Number of mandatory arguments including command name.
• optional – [in] Number of optional arguments.
This macro can be used to create a command which is conditionally present. It is an alternative
to #ifdefs around command registration and the command handler. If the command is disabled, the
handler and subcommands are removed from the application.
See also:
SHELL_CMD_ARG_REGISTER for details.
Parameters
• flag – [in] Compile time flag. Command is present only if flag exists and
equals 1.
• syntax – [in] Command syntax (for example: history).
• subcmd – [in] Pointer to a subcommands array.
• help – [in] Pointer to a command help string.
• handler – [in] Pointer to a function handler.
• mandatory – [in] Number of mandatory arguments including command name.
• optional – [in] Number of optional arguments.
Parameters
• syntax – [in] Command syntax (for example: history).
• subcmd – [in] Pointer to a subcommands array.
• help – [in] Pointer to a command help string.
See also:
SHELL_COND_CMD_ARG_REGISTER.
Parameters
• flag – [in] Compile time flag. Command is present only if flag exists and
equals 1.
• syntax – [in] Command syntax (for example: history).
• subcmd – [in] Pointer to a subcommands array.
• help – [in] Pointer to a command help string.
• handler – [in] Pointer to a function handler.
SHELL_STATIC_SUBCMD_SET_CREATE(name, ...)
Macro for creating a subcommand set. It must be used outside of any function body.
Example usage: SHELL_STATIC_SUBCMD_SET_CREATE(foo, SHELL_CMD(abc, ...),
SHELL_CMD(def, ...), SHELL_SUBCMD_SET_END)
Parameters
• name – [in] Name of the subcommand set.
• ... – [in] List of commands created with SHELL_CMD_ARG or SHELL_CMD.
SHELL_SUBCMD_SET_CREATE(_name, _parent)
Create set of subcommands.
Commands to this set are added using SHELL_SUBCMD_ADD and
SHELL_SUBCMD_COND_ADD. Commands can be added from multiple files.
Parameters
• _name – [in] Name of the set. _name is used to refer to the set in the parent
command.
• _parent – [in] Set of comma separated parent commands in parenthesis, e.g.
(foo_cmd) if subcommands are for the root command “foo_cmd”.
SHELL_SUBCMD_COND_ADD(_flag, _parent, _syntax, _subcmd, _help, _handler, _mand, _opt)
Conditionally add command to the set of subcommands.
Add command to the set created with SHELL_SUBCMD_SET_CREATE.
Note: The name of the section is formed as concatenation of number of parent commands,
names of all parent commands and own syntax. Number of parent commands is added to
ensure that section prefix is unique. Without it subcommands of (foo) and (foo, cmd1) would
mix.
Parameters
• _flag – [in] Compile time flag. Command is present only if flag exists and
equals 1.
• _parent – [in] Parent command sequence. Comma separated in parenthesis.
SHELL_SUBCMD_SET_END
Define ending subcommands set.
SHELL_DYNAMIC_CMD_CREATE(name, get)
Macro for creating a dynamic entry.
Parameters
• name – [in] Name of the dynamic entry.
• get – [in] Pointer to the function returning dynamic commands array
SHELL_CMD_ARG(syntax, subcmd, help, handler, mand, opt)
Initializes a shell command with arguments.
Note: If a command is called with the wrong number of arguments, the shell prints an error
message and the command handler is not called.
Parameters
• syntax – [in] Command syntax (for example: history).
• subcmd – [in] Pointer to a subcommands array.
• help – [in] Pointer to a command help string.
• handler – [in] Pointer to a function handler.
• mand – [in] Number of mandatory arguments including command name.
• opt – [in] Number of optional arguments.
See also:
SHELL_CMD_ARG. Based on the flag, this creates a valid entry or an empty command which is
ignored by the shell. It is an alternative to #ifdefs around command registration and the
command handler. However, an empty structure is present in flash even if the command is
disabled (subcommands and the handler are removed). The macro internally handles the case
where the flag is not defined, so the flag must be provided without any wrapper, e.g.:
SHELL_COND_CMD_ARG(CONFIG_FOO, ...)
Parameters
• flag – [in] Compile time flag. Command is present only if flag exists and
equals 1.
• syntax – [in] Command syntax (for example: history).
• subcmd – [in] Pointer to a subcommands array.
• help – [in] Pointer to a command help string.
• handler – [in] Pointer to a function handler.
• mand – [in] Number of mandatory arguments including command name.
• opt – [in] Number of optional arguments.
See also:
SHELL_CMD_ARG. Based on the expression, this creates a valid entry or an empty command which
is ignored by the shell. It should be used instead of SHELL_COND_CMD_ARG if the condition is
not a single configuration flag, e.g.: SHELL_EXPR_CMD_ARG(IS_ENABLED(CONFIG_FOO) &&
IS_ENABLED(CONFIG_FOO_SETTING_1), ...)
Parameters
• _expr – [in] Expression.
• _syntax – [in] Command syntax (for example: history).
• _subcmd – [in] Pointer to a subcommands array.
• _help – [in] Pointer to a command help string.
• _handler – [in] Pointer to a function handler.
• _mand – [in] Number of mandatory arguments including command name.
• _opt – [in] Number of optional arguments.
See also:
SHELL_COND_CMD_ARG.
Parameters
• _flag – [in] Compile time flag. Command is present only if flag exists and
equals 1.
• _syntax – [in] Command syntax (for example: history).
• _subcmd – [in] Pointer to a subcommands array.
• _help – [in] Pointer to a command help string.
• _handler – [in] Pointer to a function handler.
See also:
SHELL_EXPR_CMD_ARG.
Parameters
• _expr – [in] Compile time expression. Command is present only if expression
is non-zero.
• _syntax – [in] Command syntax (for example: history).
• _subcmd – [in] Pointer to a subcommands array.
• _help – [in] Pointer to a command help string.
• _handler – [in] Pointer to a function handler.
SHELL_CMD_DICT_CREATE(_data, _handler)
SHELL_DEFAULT_BACKEND_CONFIG_FLAGS
SHELL_NORMAL
Terminal default text color for shell_fprintf function.
SHELL_INFO
Green text color for shell_fprintf function.
SHELL_OPTION
Cyan text color for shell_fprintf function.
SHELL_WARNING
Yellow text color for shell_fprintf function.
SHELL_ERROR
Red text color for shell_fprintf function.
shell_info(_sh, _ft, ...)
Print info message to the shell.
See shell_fprintf.
Parameters
• _sh – [in] Pointer to the shell instance.
• _ft – [in] Format string.
• ... – [in] List of parameters to print.
SHELL_CMD_HELP_PRINTED
Typedefs
typedef int (*shell_cmd_handler)(const struct shell *sh, size_t argc, char **argv)
Shell command handler prototype.
Param sh
Shell instance.
Param argc
Arguments count.
Param argv
Arguments.
Retval 0
Successful command execution.
Retval 1
Help printed and command not executed.
Retval -EINVAL
Argument validation failed.
Retval -ENOEXEC
Command not executed.
typedef int (*shell_dict_cmd_handler)(const struct shell *sh, size_t argc, char **argv, void
*data)
Shell dictionary command handler prototype.
Param sh
Shell instance.
Param argc
Arguments count.
Param argv
Arguments.
Param data
Pointer to the user data.
Retval 0
Successful command execution.
Retval 1
Help printed and command not executed.
Retval -EINVAL
Argument validation failed.
Retval -ENOEXEC
Command not executed.
typedef void (*shell_bypass_cb_t)(const struct shell *sh, uint8_t *data, size_t len)
Bypass callback.
Param sh
Shell instance.
Param data
Raw data from transport.
Param len
Data length.
Enums
enum shell_receive_state
Values:
enumerator SHELL_RECEIVE_DEFAULT
enumerator SHELL_RECEIVE_ESC
enumerator SHELL_RECEIVE_ESC_SEQ
enumerator SHELL_RECEIVE_TILDE_EXP
enum shell_state
Values:
enumerator SHELL_STATE_UNINITIALIZED
enumerator SHELL_STATE_INITIALIZED
enumerator SHELL_STATE_ACTIVE
enumerator SHELL_STATE_PANIC_MODE_ACTIVE
Panic activated.
enumerator SHELL_STATE_PANIC_MODE_INACTIVE
Panic requested, not supported.
enum shell_transport_evt
Shell transport event.
Values:
enumerator SHELL_TRANSPORT_EVT_RX_RDY
enumerator SHELL_TRANSPORT_EVT_TX_RDY
enum shell_signal
Values:
enumerator SHELL_SIGNAL_RXRDY
enumerator SHELL_SIGNAL_LOG_MSG
enumerator SHELL_SIGNAL_KILL
enumerator SHELL_SIGNAL_TXDONE
enumerator SHELL_SIGNALS
enum shell_flag
Flags for setting shell output newline sequence.
Values:
Functions
Returns
Standard error code.
void shell_fprintf(const struct shell *sh, enum shell_vt100_color color, const char *fmt, ...)
printf-like function which sends formatted data stream to the shell.
This function can be used from the command handler or from threads, but not from an inter-
rupt context.
Parameters
• sh – [in] Pointer to the shell instance.
• color – [in] Printed text color.
• fmt – [in] Format string.
• ... – [in] List of parameters to print.
void shell_vfprintf(const struct shell *sh, enum shell_vt100_color color, const char *fmt,
va_list args)
vprintf-like function which sends formatted data stream to the shell.
This function can be used from the command handler or from threads, but not from an inter-
rupt context. It is similar to shell_fprintf() but takes a va_list instead of variable arguments.
Parameters
• sh – [in] Pointer to the shell instance.
• color – [in] Printed text color.
• fmt – [in] Format string.
• args – [in] List of parameters to print.
void shell_hexdump_line(const struct shell *sh, unsigned int offset, const uint8_t *data, size_t
len)
Print a line of data in hexadecimal format.
Each line shows the offset, bytes and then ASCII representation.
For example:
00008010: 20 25 00 20 2f 48 00 08 80 05 00 20 af 46 00 | %. /H.. . . . .F. |
Parameters
• sh – [in] Pointer to the shell instance.
• offset – [in] Offset to show for this line.
• data – [in] Pointer to data.
• len – [in] Length of data.
void shell_hexdump(const struct shell *sh, const uint8_t *data, size_t len)
Print data in hexadecimal format.
Parameters
• sh – [in] Pointer to the shell instance.
• data – [in] Pointer to data.
• len – [in] Length of data.
void shell_process(const struct shell *sh)
Process function, which should be executed when data is ready in the transport interface. To
be used if shell thread is disabled.
Parameters
Variables
union shell_cmd_entry
#include <shell.h> Shell command descriptor.
Public Members
shell_dynamic_get dynamic_get
Pointer to function returning dynamic commands. Pointer to array of static commands.
struct shell_static_args
#include <shell.h>
Public Members
uint8_t mandatory
Number of mandatory arguments.
uint8_t optional
Number of optional arguments.
struct shell_static_entry
#include <shell.h>
Public Members
shell_cmd_handler handler
Command handler.
struct shell_transport_api
#include <shell.h> Unified shell transport interface.
Public Members
Param context
[in] Pointer to the context passed to event handler.
Return
Standard error code.
int (*write)(const struct shell_transport *transport, const void *data, size_t length, size_t
*cnt)
Function for writing data to the transport interface.
Param transport
[in] Pointer to the transfer instance.
Param data
[in] Pointer to the source buffer.
Param length
[in] Source buffer length.
Param cnt
[out] Pointer to the sent bytes counter.
Return
Standard error code.
int (*read)(const struct shell_transport *transport, void *data, size_t length, size_t *cnt)
Function for reading data from the transport interface.
Param p_transport
[in] Pointer to the transfer instance.
Param p_data
[in] Pointer to the destination buffer.
Param length
[in] Destination buffer length.
Param cnt
[out] Pointer to the received bytes counter.
Return
Standard error code.
struct shell_transport
#include <shell.h>
struct shell_stats
#include <shell.h> Shell statistics structure.
Public Members
atomic_t log_lost_cnt
Lost log counter.
struct shell_backend_config_flags
#include <shell.h>
Public Members
uint32_t insert_mode
Controls insert mode for text introduction
uint32_t echo
Controls shell echo
uint32_t obscure
If echo on, print asterisk instead
uint32_t mode_delete
Operation mode of backspace key
uint32_t use_colors
Controls colored syntax
uint32_t use_vt100
Controls VT100 commands usage in shell
struct shell_backend_ctx_flags
#include <shell.h>
Public Members
uint32_t processing
Shell is executing a process function.
uint32_t history_exit
Request to exit history mode.
uint32_t last_nl
Last received newline character.
uint32_t cmd_ctx
Shell is executing a command.
uint32_t print_noinit
Print request from an uninitialized shell.
uint32_t sync_mode
Shell is in synchronous mode.
union shell_backend_cfg
#include <shell.h>
Public Members
atomic_t value
union shell_backend_ctx
#include <shell.h>
Public Members
uint32_t value
struct shell_ctx
#include <shell.h> Shell instance context.
Public Members
shell_uninit_cb_t uninit_cb
Callback called when shell uninitialization is completed.
shell_bypass_cb_t bypass
When bypass is set, all incoming data is passed to the callback.
uint16_t cmd_buff_len
Command length.
uint16_t cmd_buff_pos
Command buffer cursor position.
uint16_t cmd_tmp_buff_len
Command length in tmp buffer.
char cmd_buff[0]
Command input buffer.
char temp_buff[0]
Command temporary buffer.
char printf_buff[0]
Printf buffer.
struct shell
#include <shell.h> Shell instance internals.
Public Members
4.17 Settings
The settings subsystem gives modules a way to store persistent per-device configuration and runtime
state. A variety of storage implementations are provided behind a common API using FCB, NVS, or
a file system. These different implementations give the application developer flexibility to select an
appropriate storage medium, and even change it later as needs change. This subsystem is used by
various Zephyr components and can be used simultaneously by user applications.
Settings items are stored as key-value pair strings. By convention, the keys can be organized by the package and subtree defining the key; for example, the key id/serial would define the serial configuration element for the package id.
Convenience routines are provided for converting a key value to and from a string type.
For an example of the settings subsystem refer to the sample.
Note: As of Zephyr release 2.1 the recommended backend for non-filesystem storage is NVS.
4.17.1 Handlers
Settings handlers for a subtree implement a set of handler functions. These are registered using a call to settings_register().
h_get
This gets called when asking for a settings element value by its name using
settings_runtime_get() from the runtime backend.
h_set
This gets called when the value is loaded from persisted storage with settings_load(), or when
using settings_runtime_set() from the runtime backend.
h_commit
This gets called after the settings have been loaded in full. Sometimes you don't want an individual setting value to take effect right away, for example if there are multiple settings which are interdependent.
h_export
This gets called to write all current settings. This happens when settings_save() tries to save the settings or transfer them to any user-implemented backend.
4.17.2 Backends
Backends are meant to load and save data to/from setting handlers, and implement a set of handler
functions. These are registered using a call to settings_src_register() for backends that can load
data, and/or settings_dst_register() for backends that can save data. The current implementation
allows for multiple source backends but only a single destination backend.
csi_load
This gets called when loading values from persistent storage using settings_load().
csi_save
This gets called when saving a single setting to persistent storage using settings_save_one().
csi_save_start
This gets called when starting a save of all current settings using settings_save().
csi_save_end
This gets called after having saved of all current settings using settings_save().
Zephyr has three storage backends: a Flash Circular Buffer (CONFIG_SETTINGS_FCB), a file in the filesystem (CONFIG_SETTINGS_FILE), or non-volatile storage (CONFIG_SETTINGS_NVS).
You can declare multiple sources for settings; settings from all of these are restored when
settings_load() is called.
There can be only one target for writing settings; this is where data is stored when you call
settings_save(), or settings_save_one().
FCB read target is registered using settings_fcb_src(), and write target using settings_fcb_dst().
As a side-effect, settings_fcb_src() initializes the FCB area, so it must be called before calling
settings_fcb_dst(). File read target is registered using settings_file_src(), and write target by
using settings_file_dst(). Non-volatile storage read target is registered using settings_nvs_src(),
and write target by using settings_nvs_dst().
The FCB and non-volatile storage (NVS) backends both look for a fixed partition with label “storage” by
default. A different partition can be selected by setting the zephyr,settings-partition property of the
chosen node in the devicetree.
The file path used by the file backend to store settings is selected via the option
CONFIG_SETTINGS_FILE_PATH.
A call to settings_load() uses an h_set implementation to load settings data from storage to volatile
memory. After all data is loaded, the h_commit handler is issued, signalling the application that the
settings were successfully retrieved.
Technically, the FCB and file backends may store some history of the entities: the newest data entity is stored after any older existing data entities. Starting with Zephyr 2.1, the backend must filter out all old entities and call the callback with only the newest entity.
A call to settings_save_one() uses a backend implementation to store settings data to the storage medium. A call to settings_save() uses an h_export implementation to store different data in one operation using settings_save_one(). A key needs to be covered by an h_export implementation only if it is supposed to be stored by a settings_save() call.
For both the FCB and file backends, only storage requests whose data changes the key's most recent value are actually stored, so there is no need for the application to check whether a value has changed. This storage mechanism implies that storage can contain multiple value assignments for a key, while only the last one is the current value for the key.
Garbage collection
When storage becomes full (FCB) or consumes too much space (file), the backend removes outdated key-value pair records and unnecessary key-delete records.
Currently the settings subsystem does not provide a scheme for using secure and non-secure configuration storage simultaneously for the same instance. It is recommended that the secure domain use its own settings instance; it might provide data to the non-secure domain through a dedicated interface if needed (case dependent).
This is a simple example, where the settings handler only implements h_set and h_export. h_set is called when the value is restored from storage (or when set initially), and h_export is used to write the value to storage via storage_func(). The user can also implement other export functionality, for example writing to the shell console.
#define DEFAULT_FOO_VAL_VALUE 1
return -ENOENT;
}
This is a simple example showing how to persist runtime state. In this example, only h_set is defined,
which is used when restoring value from persisted storage.
In this example, the main function increments foo_val, and then persists the latest number. When
the system restarts, the application calls settings_load() while initializing, and foo_val will continue
counting up from where it was before restart.
#include <zephyr/kernel.h>
#include <zephyr/sys/reboot.h>
#include <zephyr/settings/settings.h>
#include <zephyr/sys/printk.h>
#include <inttypes.h>
#define DEFAULT_FOO_VAL_VALUE 0
return rc;
}
return -ENOENT;
}
int main(void)
{
settings_subsys_init();
settings_register(&my_conf);
settings_load();
foo_val++;
settings_save_one("foo/bar", &foo_val, sizeof(foo_val));
k_msleep(1000);
sys_reboot(SYS_REBOOT_COLD);
}
This is a simple example showing how to register a simple custom backend handler
(CONFIG_SETTINGS_CUSTOM).
int settings_backend_init(void)
{
/* register custom backend */
settings_dst_register(&settings_custom_store);
settings_src_register(&settings_custom_store);
return 0;
}
group settings
Defines
SETTINGS_MAX_DIR_DEPTH
SETTINGS_MAX_NAME_LEN
SETTINGS_MAX_VAL_LEN
SETTINGS_NAME_SEPARATOR
SETTINGS_NAME_END
SETTINGS_EXTRA_LEN
Typedefs
Param param
[inout] parameter given to the settings_load_subtree_direct function.
Return
When nonzero value is returned, further subtree searching is stopped.
Functions
int settings_subsys_init(void)
Initialization of the settings subsystem and backend.
Can be called at application startup. If the backend is a file system, remember to call it after the file system has been mounted. The FCB backend can be called without such a restriction.
Returns
0 on success, non-zero on failure.
int settings_register(struct settings_handler *cf)
Register a handler for settings items stored in RAM.
Parameters
• cf – Structure containing registration info.
Returns
0 on success, non-zero on failure.
int settings_load(void)
Load serialized items from registered persistence sources. Handlers for serialized item subtrees registered earlier will be called for encountered values.
Returns
0 on success, non-zero on failure.
int settings_load_subtree(const char *subtree)
Load limited set of serialized items from registered persistence sources. Handlers for serialized item subtrees registered earlier will be called for encountered values that belong to the subtree.
Parameters
• subtree – [in] name of the subtree to be loaded.
Returns
0 on success, non-zero on failure.
int settings_load_subtree_direct(const char *subtree, settings_load_direct_cb cb, void
*param)
Load limited set of serialized items using given callback.
This function bypasses the normal data workflow in settings module. All the settings values
that are found are passed to the given callback.
Note: This function does not call the commit function. It works as a blocking function, so it is up to the user to call any kind of commit function when this operation ends.
Parameters
• subtree – [in] subtree name of the subtree to be loaded.
• cb – [in] pointer to the callback function.
• param – [inout] parameter to be passed when callback function is called.
Returns
0 on success, non-zero on failure.
int settings_save(void)
Save currently running serialized items. All serialized items which are different from currently
persisted values will be saved.
Returns
0 on success, non-zero on failure.
int settings_save_one(const char *name, const void *value, size_t val_len)
Write a single serialized value to persisted storage (if it has changed value).
Parameters
• name – Name/key of the settings item.
• value – Pointer to the value of the settings item. This value will be transferred
to the settings_handler::h_export handler implementation.
• val_len – Length of the value.
Returns
0 on success, non-zero on failure.
int settings_delete(const char *name)
Delete a single serialized value in persisted storage.
Deleting an existing key-value pair in the settings means setting its value to NULL.
Parameters
• name – Name/key of the settings item.
Returns
0 on success, non-zero on failure.
int settings_commit(void)
Call commit for all settings handlers. This should apply all settings which have been set, but not yet applied.
Returns
0 on success, non-zero on failure.
int settings_commit_subtree(const char *subtree)
Call commit for settings handlers that belong to a subtree. This should apply all settings which have been set, but not yet applied.
Parameters
• subtree – [in] name of the subtree to be committed.
Returns
0 on success, non-zero on failure.
struct settings_handler
#include <settings.h> Config handlers for subtree implement a set of handler functions. These
are registered using a call to settings_register.
Public Members
int (*h_set)(const char *key, size_t len, settings_read_cb read_cb, void *cb_arg)
Set value handler of settings items identified by keyword names.
Parameters:
• key [in] the name, with the part that was used as the name in handler registration skipped
• len[in] the size of the data found in the backend.
• read_cb[in] function provided to read the data from the backend.
• cb_arg[in] arguments for the read function provided by the backend.
Return: 0 on success, non-zero on failure.
int (*h_commit)(void)
This handler gets called after settings have been loaded in full. The user might use it to apply settings to the application.
Return: 0 on success, non-zero on failure.
int (*h_export)(int (*export_func)(const char *name, const void *val, size_t val_len))
This gets called to dump all current settings items.
This happens when settings_save tries to save the settings. Parameters:
• export_func: pointer to the internal function which appends a single key-value
pair to persisted settings. Do not store duplicate values. The name is the
subtree/key string, val is the value.
Remark
The user might limit a handler implementation to serving only one keyword per
call, which imposes the limit that values must be get/set using the full
subtree/key name.
sys_snode_t node
Linked list node info for module internal usage.
struct settings_handler_static
#include <settings.h> Config handlers without the node element, used for static handlers.
These are registered using a call to SETTINGS_STATIC_HANDLER_DEFINE().
Public Members
int (*h_set)(const char *key, size_t len, settings_read_cb read_cb, void *cb_arg)
Set value handler of settings items identified by keyword names.
Parameters:
• key [in] the name, with the part that was used as the name in handler registration skipped
• len[in] the size of the data found in the backend.
• read_cb[in] function provided to read the data from the backend.
• cb_arg[in] arguments for the read function provided by the backend.
Return: 0 on success, non-zero on failure.
int (*h_commit)(void)
This handler gets called after settings have been loaded in full. The user might use it to apply settings to the application.
int (*h_export)(int (*export_func)(const char *name, const void *val, size_t val_len))
This gets called to dump all current settings items.
This happens when settings_save tries to save the settings. Parameters:
• export_func: pointer to the internal function which appends a single key-value
pair to persisted settings. Do not store duplicate values. The name is the
subtree/key string, val is the value.
Remark
The user might limit a handler implementation to serving only one keyword per
call, which imposes the limit that values must be get/set using the full
subtree/key name.
group settings_name_proc
API for const name processing.
Functions
int settings_name_steq(const char *name, const char *key, const char **next)
Compares the start of name with a key.
REMARK: This routine could be simplified if the settings_handler names included a separator at the end.
Parameters
• name – [in] in string format
• key – [in] comparison string
• next – [out] pointer to remaining of name, when the remaining part starts
with a separator the separator is removed from next
Returns
0: no match 1: match, next can be used to check if match is full
int settings_name_next(const char *name, const char **next)
Determine the number of characters before the first separator.
Parameters
• name – [in] in string format
• next – [out] pointer to remaining of name (excluding separator)
Returns
index of the first separator, in case no separator was found this is the size of name
group settings_rt
API for runtime settings.
Functions
group settings_backend
settings
Functions
struct settings_store
#include <settings.h> Backend handler node for storage handling.
Public Members
sys_snode_t cs_next
Linked list node info for internal usage.
struct settings_load_arg
#include <settings.h> Arguments for data loading. Holds all parameters that change the way data is loaded from the backend.
Public Members
settings_load_direct_cb cb
Pointer to the callback function.
If NULL, the matching registered function is used.
void *param
Parameter to be passed to the callback function.
struct settings_store_itf
#include <settings.h> Backend handler functions. Sources are registered using a call to set-
tings_src_register. Destinations are registered using a call to settings_dst_register.
Public Members
Note: The backend is expected not to provide duplicate entities. This means that if the
backend does not contain any functionality to actually delete old keys, it has to filter out
old entities and call the load callback only on the final entity.
int (*csi_save)(struct settings_store *cs, const char *name, const char *value, size_t val_len)
Save a single key-value pair to storage.
Parameters:
• cs - Corresponding backend handler node
• name - Key in string format
• value - Binary value
• val_len - Length of value in bytes.
4.18.1 Overview
The State Machine Framework (SMF) is an application-agnostic framework that provides an easy way for developers to integrate state machines into their applications. The framework can be added to any project by enabling the CONFIG_SMF option.
A state is represented by three functions, where one function implements the Entry actions, another
function implements the Run actions, and the last function implements the Exit actions. The prototype
for these functions is as follows: void funct(void *obj), where the obj parameter is a user defined
structure that has the state machine context, struct smf_ctx, as its first member. For example:
struct user_object {
struct smf_ctx ctx;
/* All User Defined Data Follows */
};
The struct smf_ctx member must be first because the state machine framework's functions cast the user-defined object to the struct smf_ctx type with the SMF_CTX(o) macro.
For example, instead of writing (struct smf_ctx *)&user_obj, you can use SMF_CTX(&user_obj).
By default, a state can have no ancestor states, resulting in a flat state machine. But to enable the
creation of a hierarchical state machine, the CONFIG_SMF_ANCESTOR_SUPPORT option must be enabled.
The following macro can be used for easy state creation:
• SMF_CREATE_STATE Create a state
NOTE: The SMF_CREATE_STATE macro takes an additional parameter when
CONFIG_SMF_ANCESTOR_SUPPORT is enabled.
A state machine is created by defining a table of states that’s indexed by an enum. For example, the
following creates three flat states:
To set the initial state, the smf_set_initial function should be called. It has the following prototype:
void smf_set_initial(smf_ctx *ctx, smf_state *state)
To transition from one state to another, the smf_set_state function is used and it has the following
prototype: void smf_set_state(smf_ctx *ctx, smf_state *state)
NOTE: While the state machine is running, smf_set_state should only be called from the Entry and Run
functions. Calling smf_set_state from the Exit functions doesn’t make sense and will generate a warning.
To run the state machine, the smf_run_state function should be called in some application dependent
way. An application should cease calling smf_run_state if it returns a non-zero value. The function has
the following prototype: int32_t smf_run_state(smf_ctx *ctx)
To terminate the state machine, the smf_terminate function should be called. It can be called from
the entry, run, or exit action. The function takes a non-zero user defined value that’s returned by the
smf_run_state function. The function has the following prototype: void smf_terminate(smf_ctx
*ctx, int32_t val)
This example turns the following state diagram into code using the SMF, where the initial state is S0.
Code:
#include <zephyr/smf.h>
[State diagram: states S0, S1, and S2; initial state S0]
/* State S0 */
static void s0_entry(void *o)
{
/* Do something */
}
static void s0_run(void *o)
{
smf_set_state(SMF_CTX(&s_obj), &demo_states[S1]);
}
static void s0_exit(void *o)
{
/* Do something */
}
/* State S1 */
static void s1_run(void *o)
/* State S2 */
static void s2_entry(void *o)
{
/* Do something */
}
static void s2_run(void *o)
{
smf_set_state(SMF_CTX(&s_obj), &demo_states[S0]);
}
int main(void)
{
int32_t ret;
This example turns the following state diagram into code using the SMF, where S0 and S1 share a parent
state and S0 is the initial state.
Code:
#include <zephyr/smf.h>
[State diagram: PARENT containing S0 and S1, plus S2]
/* Parent State */
static void parent_entry(void *o)
{
/* Do something */
}
static void parent_exit(void *o)
{
/* Do something */
}
/* State S0 */
static void s0_run(void *o)
{
/* State S1 */
static void s1_run(void *o)
{
smf_set_state(SMF_CTX(&s_obj), &demo_states[S2]);
}
/* State S2 */
static void s2_run(void *o)
{
smf_set_state(SMF_CTX(&s_obj), &demo_states[S0]);
}
int main(void)
{
int32_t ret;
Events are not explicitly part of the State Machine Framework but an event driven state machine can be
implemented using Zephyr Events.
[State diagram: states S0 and S1]
Code:
#include <zephyr/kernel.h>
#include <zephyr/drivers/gpio.h>
#include <zephyr/smf.h>
/* List of events */
#define EVENT_BTN_PRESS BIT(0)
/* Events */
struct k_event smf_event;
/* State S0 */
static void s0_entry(void *o)
{
printk("STATE0\n");
}
/* State S1 */
static void s1_entry(void *o)
{
printk("STATE1\n");
}
int main(void)
{
int ret;
if (!device_is_ready(button.port)) {
printk("Error: button device %s is not ready\n",
ret = gpio_pin_interrupt_configure_dt(&button,
GPIO_INT_EDGE_TO_ACTIVE);
if (ret != 0) {
printk("Error %d: failed to configure interrupt on %s pin %d\n",
ret, button.port->name, button.pin);
return ret;
}
4.19 Storage
Elements, represented as id-data pairs, are stored in flash using a FIFO-managed circular buffer. The
flash area is divided into sectors. Elements are appended to a sector until storage space in the sector is
exhausted. Then a new sector in the flash area is prepared for use (erased). Before erasing a sector, it is checked whether its id-data pairs also exist in the sectors still in use; if not, the id-data pairs are copied forward.
The id is a 16-bit unsigned number. NVS ensures that for each used id there is at least one id-data pair stored in flash at all times.
NVS allows storage of binary blobs, strings, integers, longs, and any combination of these.
Each element is stored in flash as metadata (8 bytes) and data. The metadata is written in a table starting from the end of an NVS sector, while the data is written one entry after another from the start of the sector. The metadata consists of: id, data offset in sector, data length, part (unused), and a CRC.
A write of data to NVS always starts with writing the data, followed by a write of the metadata. Data that is written in flash without metadata is ignored during initialization.
During initialization, NVS will verify the data stored in flash; if it encounters an error, it will ignore any data with missing or incorrect metadata.
NVS checks the id-data pair before writing data to flash. If the id-data pair is unchanged, no write to flash is performed.
To protect the flash area against frequent erases it is important that there is sufficient free space. NVS has a protection mechanism to avoid getting into an endless loop of flash page erases when there is limited free space. When such a loop is detected, NVS reports that there is no more space available.
For NVS the file system is declared as:
where
• NVS_FLASH_DEVICE is a reference to the flash device that will be used. The device needs to be
operational.
• NVS_SECTOR_SIZE is the sector size; it has to be a multiple of the flash erase page size and a power of 2.
• NVS_SECTOR_COUNT is the number of sectors; it must be at least 2, as one sector is always kept empty to allow copying of existing data.
• NVS_STORAGE_OFFSET is the offset of the storage area in flash.
Flash wear
When writing data to flash, a study of the flash wear is important. Flash has a limited life which is determined by the number of times flash can be erased. Flash is erased one page at a time, and the page size is determined by the hardware. As an example, an nRF51822 device has a page size of 1024 bytes and each page can be erased about 20,000 times.
Calculating expected device lifetime Suppose we use a 4-byte state variable that is changed every minute and needs to be restored after reboot. NVS has been defined with a sector_size equal to the page size (1024 bytes) and 2 sectors have been defined.
Each write of the state variable requires 12 bytes of flash storage: 8 bytes for the metadata and 4 bytes
for the data. When storing the data the first sector will be full after 1024/12 = 85.33 minutes. After
another 85.33 minutes, the second sector is full. When this happens, because we’re using only two
sectors, the first sector will be used for storage and will be erased after 171 minutes of system time. With
the expected device life of 20,000 writes, with two sectors writing every 171 minutes, the device should
last about 171 * 20,000 minutes, or about 6.5 years.
More generally then, with
• NS as the number of storage requests per minute,
• DS as the data size in bytes,
• SECTOR_SIZE and SECTOR_COUNT as defined above, and
• PAGE_ERASE_CYCLES as the number of times a flash page can be erased,
the expected device lifetime in minutes is approximately
SECTOR_COUNT * SECTOR_SIZE * PAGE_ERASE_CYCLES / (NS * (DS + 8)).
From this formula it is also clear what to do in case the expected life is too short: increase SECTOR_COUNT or SECTOR_SIZE.
It is possible that during a DFU process, the flash driver used by NVS changes the supported minimal write block size. The NVS in-flash image will stay compatible unless the physical ATE size changes. In particular, migration between 1-, 2-, 4-, and 8-byte write block sizes is allowed.
Sample
Troubleshooting
API Reference
group nvs_data_structures
Non-volatile Storage Data Structures.
struct nvs_fs
#include <nvs.h> Non-volatile Storage File system structure.
Param offset
File system offset in flash
Param ate_wra
Allocation table entry write address. Addresses are stored as uint32_t: high 2
bytes correspond to the sector, low 2 bytes are the offset in the sector
Param data_wra
Data write address
Param sector_size
File system is split into sectors; each sector must be a multiple of the page size
Param sector_count
Number of sectors in the file system
Param ready
Flag indicating if the filesystem is initialized
Param nvs_lock
Mutex
Param flash_device
Flash Device runtime structure
Param flash_parameters
Flash memory parameters structure
group nvs_high_level_api
Non-volatile Storage APIs.
Functions
Returns
Number of bytes free. On success, it will be equal to the number of bytes that can
still be written to the file system. Calculating the free space is a time-consuming
operation, especially on SPI flash. On error, returns a negative errno.h-defined
error code.
Overview
SD Card support
Zephyr has support for some SD card controllers and for interfacing SD cards via SPI. These drivers use the disk driver interface, and a file system can access the SD cards via the disk access API. Both standard and high-capacity SD cards are supported.
Note: The system does not support inserting or removing cards while the system is running. The cards
must be present at boot and must not be removed. This may be fixed in future releases.
FAT filesystems are not power-safe, so the filesystem may become corrupted if power is lost or if the card is removed.
SD Memory Card subsystem Zephyr supports SD memory cards via the disk driver API, or via the SDMMC subsystem. This subsystem can be used transparently via the disk driver API, but also supports direct block-level access to cards. The SDMMC subsystem interacts with the SD host controller API to communicate with attached SD cards.
SD Card support via SPI The example devicetree fragment below shows how to add an SD card node to the spi1 interface. It uses pin PA27 for chip select, and runs the SPI bus at 24 MHz once the SD card has been initialized:
&spi1 {
status = "okay";
cs-gpios = <&porta 27 GPIO_ACTIVE_LOW>;
sdhc0: sdhc@0 {
compatible = "zephyr,sdhc-spi-slot";
reg = <0>;
status = "okay";
mmc {
compatible = "zephyr,sdmmc-disk";
status = "okay";
};
spi-max-frequency = <24000000>;
};
};
The SD card will be automatically detected and initialized by the filesystem driver when the board boots. To read and write files and directories, see the File Systems API in include/zephyr/fs/fs.h, such as fs_open(), fs_read(), and fs_write().
The Zephyr flashdisk driver makes it possible to use a flash memory partition as a block device. The flashdisk instances are defined in devicetree:
/ {
msc_disk0 {
compatible = "zephyr,flash-disk";
partition = <&storage_partition>;
disk-name = "NAND";
cache-size = <4096>;
};
};
The cache size specified in the zephyr,flash-disk node should be equal to the backing partition's minimum erasable block size.
NVMe NVMe is a standardized logical device interface on the PCIe bus for exposing storage devices. NVMe controllers and disks are supported. Disks can be accessed via the Disk Access API they expose and can thus be used through the File System API.
NVMe configuration
DTS Any board exposing an NVMe disk should provide a DTS overlay to enable its use within Zephyr:
#include <zephyr/dt-bindings/pcie/pcie.h>
/ {
pcie0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "intel,pcie";
ranges;
Where VENDOR_ID and DEVICE_ID are the ones from the exposed NVMe controller.
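The overlay above is truncated; inside the pcie0 node, the NVMe controller child would look roughly like the following sketch. It is based on the "nvme-controller" compatible, and the property names should be verified against the actual Zephyr binding; VENDOR_ID and DEVICE_ID are the placeholders described in the text:

```dts
nvme0: nvme0 {
	compatible = "nvme-controller";
	/* Fill in the IDs reported by the exposed NVMe controller. */
	vendor-id = <VENDOR_ID>;
	device-id = <DEVICE_ID>;
	status = "okay";
};
```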
Options
• CONFIG_NVME
Note that NVME requires the target to support PCIe multi-vector MSI-X in order to function.
• CONFIG_NVME_MAX_NAMESPACES
API Reference
group disk_access_interface
Disk Access APIs.
Functions
group disk_driver_interface
Disk Driver Interface.
Defines
DISK_IOCTL_GET_SECTOR_COUNT
Possible Cmd Codes for disk_ioctl()
Get the number of sectors in the disk
DISK_IOCTL_GET_SECTOR_SIZE
Get the size of a disk SECTOR in bytes
DISK_IOCTL_RESERVED
Reserved. It used to be DISK_IOCTL_GET_DISK_SIZE
DISK_IOCTL_GET_ERASE_BLOCK_SZ
How many sectors constitute a FLASH Erase block
DISK_IOCTL_CTRL_SYNC
Commit any cached read/writes to disk
DISK_STATUS_OK
Possible return bitmasks for disk_status()
Disk status okay
DISK_STATUS_UNINIT
Disk status uninitialized
DISK_STATUS_NOMEDIA
Disk status no media
DISK_STATUS_WR_PROTECT
Disk status write protected
Functions
struct disk_info
#include <disk.h> Disk info.
Public Members
sys_dnode_t node
Internally used list node
char *name
Disk name
struct disk_operations
#include <disk.h> Disk operations.
The <zephyr/storage/flash_map.h> API allows accessing information about device flash partitions via
flash_area structures.
Each flash_area describes a flash partition. The API provides access to a "flash map", which contains predefined flash areas accessible via globally unique ID numbers. The map is created from "fixed-partition" compatible entries in the DTS file. Users may also create flash_area objects at runtime for application-specific purposes.
This documentation uses “flash area” when referencing single “fixed-partition” entities.
The flash_area contains a pointer to a device, which can be used to directly access, via the flash API, the flash device an area is placed on. Each flash area is characterized by the device it is placed on, its offset from the beginning of the device, and its size on the device. An additional identifier parameter is used by the flash_area_open() function to find the flash area in the flash map.
The flash_map.h API provides functions for operating on a flash_area . The main examples are
flash_area_read() and flash_area_write() . These functions are basically wrappers around the flash
API with additional offset and size checks, to limit flash operations to a predefined area.
Most <zephyr/storage/flash_map.h> API functions require a flash_area object pointer characterizing
the flash area they will be working on. There are two possible methods to obtain such a pointer:
• obtaining it using flash_area_open();
• defining a flash_area object directly, which requires providing a valid device object pointer with the offset and size of the area within the flash device.
flash_area_open() uses numeric identifiers to search flash map for flash_area objects and returns,
if found, a pointer to an object representing area with given ID. The ID number for a flash area can be
obtained from a fixed-partition DTS node label using FIXED_PARTITION_ID() ; these labels are obtained
from the devicetree as described below.
The flash_map.h API uses data generated from the Devicetree API, in particular its Fixed flash partitions.
Zephyr additionally has some partitioning conventions used for Device Firmware Upgrade via the MCUboot bootloader, as well as for defining partitions usable by file systems or other nonvolatile storage.
Here is an example devicetree fragment which uses fixed flash partitions for both MCUboot and a storage
partition. Some details were left out for clarity.
/ {
soc {
flashctrl: flash-controller@deadbeef {
flash0: flash@0 {
compatible = "soc-nv-flash";
reg = <0x0 0x100000>;
partitions {
compatible = "fixed-partitions";
#address-cells = <0x1>;
#size-cells = <0x1>;
boot_partition: partition@0 {
reg = <0x0 0x10000>;
read-only;
};
storage_partition: partition@1e000 {
reg = <0x1e000 0x2000>;
};
slot0_partition: partition@20000 {
reg = <0x20000 0x60000>;
};
slot1_partition: partition@80000 {
reg = <0x80000 0x60000>;
};
scratch_partition: partition@e0000 {
reg = <0xe0000 0x20000>;
};
};
};
};
};
};
A partition's offset is expressed relative to the beginning address of the flash memory to which the
partition belongs.
The boot_partition, slot0_partition, slot1_partition, and scratch_partition node labels are
defined for MCUboot, though not all MCUboot configurations require all of them to be defined. See the
MCUboot documentation for more details.
The storage_partition node is defined for use by a file system or other nonvolatile storage API.
The numeric flash area ID is obtained by passing the DTS node label to FIXED_PARTITION_ID() ; for
example, to obtain the ID number for slot0_partition, one would invoke FIXED_PARTITION_ID(slot0_partition).
All FIXED_PARTITION_ macros take DTS node labels as partition identifiers.
Users do not have to obtain a flash_area object pointer using flash_area_open() to get information on
the size, offset or device of a flash area, if the area is defined in the DTS file. Knowing the DTS node label
of an area, users may use FIXED_PARTITION_OFFSET() , FIXED_PARTITION_SIZE() or FIXED_PARTITION_DEVICE()
respectively to obtain such information directly from the DTS node definition. For example, to obtain
the offset of storage_partition it is enough to invoke FIXED_PARTITION_OFFSET(storage_partition).
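For example, a sketch using the storage_partition label from the devicetree fragment above (error handling elided):

```c
#include <zephyr/storage/flash_map.h>

/* Compile-time information for the storage_partition node label,
 * taken straight from the devicetree; no flash_area_open() needed. */
#define STORAGE_OFFSET FIXED_PARTITION_OFFSET(storage_partition) /* 0x1e000 */
#define STORAGE_SIZE   FIXED_PARTITION_SIZE(storage_partition)   /* 0x2000 */

/* Device the partition resides on, resolved at compile time. */
const struct device *storage_dev = FIXED_PARTITION_DEVICE(storage_partition);
```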
The example below shows how to obtain a flash_area object pointer using flash_area_open() and a DTS
node label:
const struct flash_area *my_area;
int err = flash_area_open(FIXED_PARTITION_ID(storage_partition), &my_area);

if (err != 0) {
handle_the_error(err);
} else {
flash_area_read(my_area, ...);
}
API Reference
group flash_area_api
Abstraction over flash partitions/areas and their drivers.
Defines
SOC_FLASH_0_ID
Provided for compatibility with MCUboot
SPI_FLASH_0_ID
Provided for compatibility with MCUboot
FLASH_AREA_LABEL_EXISTS(label)
FLASH_AREA_LABEL_STR(lbl)
FLASH_AREA_ID(label)
FLASH_AREA_OFFSET(label)
FLASH_AREA_SIZE(label)
FIXED_PARTITION_EXISTS(label)
Returns a non-zero value if a fixed-partition with the given DTS node label exists.
Parameters
• label – DTS node label
Returns
non-zero if the fixed-partition node exists and is enabled; 0 if the node does not exist,
is not enabled, or is not a fixed-partition.
FIXED_PARTITION_ID(label)
Get flash area ID from fixed-partition DTS node label
Parameters
• label – DTS node label of a partition
Returns
flash area ID
FIXED_PARTITION_OFFSET(label)
Get fixed-partition offset from DTS node label
Parameters
• label – DTS node label of a partition
Returns
fixed-partition offset, as defined for the partition in DTS.
FIXED_PARTITION_SIZE(label)
Get fixed-partition size for DTS node label
Parameters
• label – DTS node label
Returns
fixed-partition size, as defined for the partition in DTS.
FLASH_AREA_DEVICE(label)
Get device pointer for device the area/partition resides on
Parameters
• label – DTS node label of a partition
Returns
const struct device type pointer
FIXED_PARTITION_DEVICE(label)
Get device pointer for device the area/partition resides on
Parameters
• label – DTS node label of a partition
Returns
Pointer to a device.
Typedefs
Functions
Returns
0 on success, -EACCES if the flash_map is not available , -ENOENT if ID is un-
known, -ENODEV if there is no driver attached to the area.
void flash_area_close(const struct flash_area *fa)
Close flash_area.
Reserved for future use and for compatibility with external projects. Currently a no-op.
Parameters
• fa – [in] Flash area to be closed.
int flash_area_read(const struct flash_area *fa, off_t off, void *dst, size_t len)
Read flash area data.
Read data from the flash area. Area readout boundaries are asserted before the read request. The API has
the same limitations regarding read-block alignment and size as the wrapped flash driver.
Parameters
• fa – [in] Flash area
• off – [in] Offset relative from beginning of flash area to read
• dst – [out] Buffer to store read data
• len – [in] Number of bytes to read
Returns
0 on success, negative errno code on fail.
int flash_area_write(const struct flash_area *fa, off_t off, const void *src, size_t len)
Write data to flash area.
Write data to the flash area. Area write boundaries are asserted before the write request. The API has the
same limitations regarding write-block alignment and size as the wrapped flash driver.
Parameters
• fa – [in] Flash area
• off – [in] Offset relative from beginning of flash area to write
• src – [in] Buffer with data to be written
• len – [in] Number of bytes to write
Returns
0 on success, negative errno code on fail.
int flash_area_erase(const struct flash_area *fa, off_t off, size_t len)
Erase flash area.
Erase the given flash area range. Area boundaries are asserted before the erase request. The API has the
same limitations regarding erase-block alignment and size as the wrapped flash driver.
Parameters
• fa – [in] Flash area
• off – [in] Offset relative from beginning of flash area.
• len – [in] Number of bytes to be erased
Returns
0 on success, negative errno code on fail.
struct flash_area
#include <flash_map.h> Flash partition.
This structure represents a fixed-size partition on a flash device. Each partition contains one
or more flash sectors.
Public Members
uint8_t fa_id
ID number
off_t fa_off
Start offset from the beginning of the flash device
size_t fa_size
Total size
struct flash_sector
#include <flash_map.h> Structure for transferring flash sector boundaries.
This structure is used to describe the layout of flash memory. It consumes much less RAM
than flash_area
Public Members
off_t fs_off
Sector offset from the beginning of the flash device
size_t fs_size
Sector size in bytes
Flash circular buffer provides an abstraction through which you can treat flash like a FIFO. You append
entries to the end, and read data from the beginning.
Note: As of Zephyr release 2.1 the NVS storage API is recommended over FCB for use as a back-end for
the settings API.
Description
Entries in the flash contain the length of the entry, the data within the entry, and checksum over the
entry contents.
Storage of entries in flash is done in a FIFO fashion. When you request space for the next entry, space is
located at the end of the used area. When you start reading, the first entry served is the oldest entry in
flash.
Entries can be appended to the end of the area until storage space is exhausted. You have control over
what happens next: either erase the oldest sector of data, thereby freeing up some space, or stop writing
new data until existing data has been collected. FCB treats the underlying storage as an array of flash
sectors; when it erases old data, it does so one sector at a time.
Entries in the flash are checksummed. That is how FCB detects whether writing an entry to flash
completed successfully. It will skip over entries which don’t have a valid checksum.
Usage
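A minimal usage sketch follows; the storage_partition label, sector count, and magic value are application choices (assumptions), not part of the FCB API:

```c
#include <errno.h>
#include <zephyr/fs/fcb.h>
#include <zephyr/storage/flash_map.h>

#define SECTOR_CNT 4 /* application choice */

static struct flash_sector sectors[SECTOR_CNT];
static struct fcb fcb = {
	.f_magic = 0x0BADCAFE, /* application choice; must not be 0xFFFFFFFF */
	.f_version = 1,
	.f_sector_cnt = SECTOR_CNT,
	.f_sectors = sectors,
};

int log_init(void)
{
	uint32_t cnt = SECTOR_CNT;
	/* Describe the partition's sectors, then initialize the FCB on it. */
	int rc = flash_area_get_sectors(FIXED_PARTITION_ID(storage_partition),
					&cnt, sectors);
	if (rc == 0) {
		rc = fcb_init(FIXED_PARTITION_ID(storage_partition), &fcb);
	}
	return rc;
}

int log_append(const void *data, uint16_t len)
{
	struct fcb_entry loc;
	int rc = fcb_append(&fcb, len, &loc); /* reserve space at the end */

	if (rc == -ENOSPC) {
		/* Storage exhausted: erase the oldest sector, then retry. */
		fcb_rotate(&fcb);
		rc = fcb_append(&fcb, len, &loc);
	}
	if (rc != 0) {
		return rc;
	}
	rc = flash_area_write(fcb.fap, FCB_ENTRY_FA_DATA_OFF(loc), data, len);
	if (rc == 0) {
		rc = fcb_append_finish(&fcb, &loc); /* writes the checksum */
	}
	return rc;
}
```

Entries are read back in FIFO order with fcb_getnext() or fcb_walk(), using flash_area_read() with the entry location fields as arguments.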
API Reference
Data structures
group fcb_data_structures
Defines
FCB_MAX_LEN
Max length of element
FCB_ENTRY_FA_DATA_OFF(entry)
Helper macro for calculating the data offset related to the fcb flash_area start offset.
Parameters
• entry – fcb entry structure
FCB_FLAGS_CRC_DISABLED
Flag to disable CRC for the fcb_entries in flash.
struct fcb_entry
#include <fcb.h> FCB entry info structure. This data structure describes the element location
in the flash.
You would use it to figure out what parameters to pass to flash_area_read() to read element
contents, or to flash_area_write() when adding a new element. The entry location is a pointer to
an area (within fcb->f_sectors), and an offset within that area.
Public Members
uint32_t fe_elem_off
Offset from the start of the sector to beginning of element.
uint32_t fe_data_off
Offset from the start of the sector to the start of the element's data.
uint16_t fe_data_len
Size of data area in fcb entry
struct fcb_entry_ctx
#include <fcb.h> Structure for transferring complete information about FCB entry location
within flash memory.
Public Members
struct fcb
#include <fcb.h> FCB instance structure.
The following data structure describes the FCB itself. The first part should be filled in by the user
before calling fcb_init. The second part is used by FCB for its internal bookkeeping.
Public Members
uint32_t f_magic
Magic value; should not be 0xFFFFFFFF. It is XORed with the inversion of f_erase_value and
placed at the beginning of the FCB flash sector. FCB uses this when determining whether a
sector contains valid data or not. Giving it a value of 0xFFFFFFFF leaves the bytes of
the field in the "erased" state.
uint8_t f_version
Current version number of the data
uint8_t f_sector_cnt
Number of elements in sector array
uint8_t f_scratch_cnt
Number of sectors to keep empty. This can be used if you need to have scratch space for
garbage collecting when FCB fills up.
uint16_t f_active_id
Flash location where the newest data is, internal state
uint8_t f_align
Writes to flash have to be aligned to this, internal state
uint8_t f_erase_value
The value flash takes when it is erased. This is read from flash parameters and initialized
upon call to fcb_init.
API functions
group fcb_api
Flash Circular Buffer APIs.
Typedefs
Entry data can be read using flash_area_read(), using loc_ctx fields as arguments. If cb wants
to stop the walk, it should return non-zero value.
Param loc_ctx
[in] entry location information (full context)
Param arg
[inout] callback context, transferred from fcb_walk.
Return
0 continue walking, non-zero stop walking.
Functions
Returns
0 on success; a negative value on failure (or a negative value transferred from the callback
return value); a positive value transferred from the callback return value.
int fcb_getnext(struct fcb *fcb, struct fcb_entry *loc)
Get next fcb entry location.
Function to obtain the location of the fcb entry following the one pointed to by loc.
If loc->fe_sector is set and loc->fe_elem_off is not 0, the function fetches the next fcb entry
location. If loc->fe_sector is NULL, the function fetches the oldest entry location within the
FCB storage. If loc->fe_sector is set and loc->fe_elem_off is 0, the function fetches the first
entry location in that fcb sector.
Parameters
• fcb – [in] FCB instance structure.
• loc – [inout] entry location information
Returns
0 on success, non-zero on failure.
int fcb_rotate(struct fcb *fcb)
Rotate fcb sectors.
The function erases the data from the oldest sector; the next sector then becomes the oldest.
The active sector is also switched if needed.
Parameters
• fcb – [in] FCB instance structure.
int fcb_append_to_scratch(struct fcb *fcb)
Start using the scratch block.
Takes one of the scratch blocks into use, so that a scratch sector becomes the active sector to
which entries can be appended.
Parameters
• fcb – [in] FCB instance structure.
Returns
0 on success, non-zero on failure.
int fcb_free_sector_cnt(struct fcb *fcb)
Get free sector count.
Parameters
• fcb – [in] FCB instance structure.
Returns
Number of free sectors.
int fcb_is_empty(struct fcb *fcb)
Check whether FCB has any data.
Parameters
• fcb – [in] FCB instance structure.
Returns
Positive value if fcb is empty, otherwise 0.
int fcb_offset_last_n(struct fcb *fcb, uint8_t entries, struct fcb_entry *last_n_entry)
Finds the fcb entry that gives back up to n entries at the end.
Parameters
The Stream Flash module takes contiguous fragments of a stream of data (e.g. from radio packets),
aggregates them into a user-provided buffer, then when the buffer fills (or stream ends) writes it to
a raw flash partition. It supports providing the read-back buffer to the client to use in validating the
persisted stream content.
One typical use of a stream write operation is when receiving a new firmware image to be used in a DFU
operation.
There are several reasons why one might want to use buffered writes instead of writing the data directly
as it is made available. Some devices have hardware limitations which do not allow flash writes to be
performed in parallel with other operations, such as radio RX and TX. Also, fewer write operations result
in faster response times as seen from the application.
Some stream write operations, such as DFU operations, may run for a long time. When performing such
long running operations it can be useful to be able to save the stream write progress to persistent storage
so that the operation can resume at the same point after an unexpected interruption.
The Stream Flash module offers an API for loading, saving and clearing stream write progress to persis-
tent storage using the Settings module. The API can be enabled using CONFIG_STREAM_FLASH_PROGRESS.
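A minimal sketch of the flow described above; the buffer size, offsets, and the "dfu/progress" settings key are application choices (assumptions), and the progress calls additionally require CONFIG_STREAM_FLASH_PROGRESS:

```c
#include <zephyr/storage/stream_flash.h>

/* Buffer must be a multiple of the flash device write-block-size
 * (512 is an assumption for illustration). */
static uint8_t buf[512];
static struct stream_flash_ctx ctx;

int dfu_start(const struct device *fdev, size_t offset, size_t size)
{
	int rc = stream_flash_init(&ctx, fdev, buf, sizeof(buf),
				   offset, size, NULL);
	if (rc == 0) {
		/* Optionally resume an interrupted transfer. */
		rc = stream_flash_progress_load(&ctx, "dfu/progress");
	}
	return rc;
}

int dfu_fragment(const uint8_t *data, size_t len, bool last)
{
	/* Flush on the final fragment so the buffered tail is written. */
	int rc = stream_flash_buffered_write(&ctx, data, len, last);
	if (rc == 0) {
		rc = stream_flash_progress_save(&ctx, "dfu/progress");
	}
	return rc;
}
```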
API Reference
group stream_flash
Abstraction over stream writes to flash.
Typedefs
Param buf
Pointer to the data read.
Param len
The length of the data read.
Param offset
The offset the data was read from.
Functions
int stream_flash_init(struct stream_flash_ctx *ctx, const struct device *fdev, uint8_t *buf, size_t
buf_len, size_t offset, size_t size, stream_flash_callback_t cb)
Initialize context needed for stream writes to flash.
Parameters
• ctx – context to be initialized
• fdev – Flash device to operate on
• buf – Write buffer
• buf_len – Length of write buffer. Can not be larger than the page size. Must
be multiple of the flash device write-block-size.
• offset – Offset within flash device to start writing to
• size – Number of bytes available for performing buffered write. If this is ‘0’,
the size will be set to the total size of the flash device minus the offset.
• cb – Callback to be invoked on completed flash write operations.
Returns
non-negative on success, negative errno code on fail
size_t stream_flash_bytes_written(struct stream_flash_ctx *ctx)
Read number of bytes written to the flash.
Parameters
• ctx – context
Returns
Number of payload bytes written to flash.
int stream_flash_buffered_write(struct stream_flash_ctx *ctx, const uint8_t *data, size_t len, bool
flush)
Process input buffers to be written to the flash device in order.
Parameters
• ctx – context
• data – data to write
• len – Number of bytes to write
• flush – when true this forces any buffered data to be written to flash A flush
write should be the last write operation in a sequence of write operations for
given context (although this is not mandatory if the total data size is a multiple
of the buffer size).
Returns
non-negative on success, negative errno code on fail
int stream_flash_erase_page(struct stream_flash_ctx *ctx, off_t off)
Erase the flash page to which a given offset belongs.
This function erases a flash page to which an offset belongs if this page is not the page previ-
ously erased by the provided ctx (ctx->last_erased_page_start_offset).
Parameters
• ctx – context
• off – offset from the base address of the flash device
Returns
non-negative on success, negative errno code on fail
int stream_flash_progress_load(struct stream_flash_ctx *ctx, const char *settings_key)
Load persistent stream write progress stored with key settings_key .
This function should be called directly after stream_flash_init to load previous stream write
progress before writing any data. If the loaded progress has fewer bytes written than ctx then
it will be ignored.
Parameters
• ctx – context
• settings_key – key to use with the settings module for loading the stream
write progress
Returns
non-negative on success, negative errno code on fail
int stream_flash_progress_save(struct stream_flash_ctx *ctx, const char *settings_key)
Save persistent stream write progress using key settings_key .
Parameters
• ctx – context
• settings_key – key to use with the settings module for storing the stream
write progress
Returns
non-negative on success, negative errno code on fail
int stream_flash_progress_clear(struct stream_flash_ctx *ctx, const char *settings_key)
Clear persistent stream write progress stored with key settings_key .
Parameters
• ctx – context
• settings_key – key previously used for storing the stream write progress
Returns
non-negative on success, negative errno code on fail
struct stream_flash_ctx
#include <stream_flash.h> Structure for stream flash context.
Users should treat these structures as opaque values and only interact with them through the
below API.
4.20.1 Overview
Many microcontrollers feature a hardware watchdog timer peripheral. Its purpose is to trigger an action
(usually a system reset) in case of severe software malfunctions. Once initialized, the watchdog timer
has to be restarted (“fed”) in regular intervals to prevent it from timing out. If the software got stuck and
does not manage to feed the watchdog anymore, the corrective action is triggered to bring the system
back to normal operation.
In real-time operating systems with multiple tasks running in parallel, a single watchdog instance may
not be sufficient anymore, as it can be used for only one task. This software watchdog based on kernel
timers provides a method to supervise multiple threads or tasks (called watchdog channels).
An existing hardware watchdog can be used as an optional fallback if the task watchdog itself or the
scheduler has a malfunction.
The task watchdog uses a kernel timer as its backend. If configured properly, the timer ISR is never
actually called during normal operation, as the timer is continuously updated in the feed calls.
It’s currently not possible to have multiple instances of task watchdogs. Instead, the task watchdog API
can be accessed globally to add or delete new channels without passing around a context or device
pointer in the firmware.
The maximum number of channels is predefined via Kconfig and should be adjusted to match exactly the
number of channels required by the application.
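A minimal sketch of the API; the 1000 ms reload period, the NULL callback (selecting the default reset action), and the absence of a hardware watchdog fallback are application choices:

```c
#include <zephyr/kernel.h>
#include <zephyr/task_wdt/task_wdt.h>

void control_thread(void *a, void *b, void *c)
{
	/* One channel per supervised thread; 1000 ms reload period. */
	int ch = task_wdt_add(1000U, NULL, NULL);

	while (1) {
		/* ... do the thread's periodic work ... */
		task_wdt_feed(ch); /* must be called before the period expires */
		k_sleep(K_MSEC(100));
	}
}

int main(void)
{
	/* Pass a hardware watchdog device as fallback, or NULL for none. */
	return task_wdt_init(NULL);
}
```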
group task_wdt_api
Task Watchdog APIs.
Typedefs
Functions
Return values
• 0 – If successful.
• -EINVAL – If there is no installed timeout for supplied channel.
Trusted Firmware-M (TF-M) is a reference implementation of the Platform Security Architecture (PSA)
IoT Security Framework. It defines and implements an architecture and a set of software components
that aim to address some of the main security concerns in IoT products.
Zephyr RTOS has been PSA Certified since Zephyr 2.0.0 with TF-M 1.0, and is currently integrated with
TF-M 1.4.1.
When using TF-M with a supported platform, TF-M will be automatically built and linked in the background
as part of the standard Zephyr build process. This build process makes a number of assumptions about
how TF-M is being used, and has certain implications about what the Zephyr application image can and
cannot do:
• The secure processing environment (secure boot and TF-M) starts first
• Resource allocation for Zephyr relies on choices made in the secure image.
Architecture Overview
A TF-M application will, generally, have the following three parts, from most to least trusted, left-to-right,
with code execution happening in the same order (secure boot > secure image > ns image).
While the secure bootloader is optional, it is enabled by default, and secure boot is an important part of
providing a secure solution:
+-------------------------------------+ +--------------+
| Secure Processing Environment (SPE) | | NSPE |
| +----------++---------------------+ | | +----------+ |
| | || | | | | | |
| | bl2.bin || tfm_s_signed.bin | | | |zephyr.bin| |
| +----------++---------------------+ | | +----------+ |
+-------------------------------------+ +--------------+
Communication between the (Zephyr) Non-Secure Processing Environment (NSPE) and the (TF-M) Se-
cure Processing Environment image happens based on a set of PSA APIs, and normally makes use
of an IPC mechanism that is included as part of the TF-M build, and implemented in Zephyr (see
modules/trusted-firmware-m/interface).
Root of Trust (RoT) Architecture TF-M is based upon a Root of Trust (RoT) architecture. This allows
for hierarchies of trust from most, to less, to least trusted, providing a sound foundation upon which to
build or access trusted services and resources.
The benefit of this approach is that less trusted components are prevented from accessing or compro-
mising more critical parts of the system, and error conditions in less trusted environments won’t corrupt
more trusted, isolated resources.
The following RoT hierarchy is defined for TF-M, from most to least trusted:
• PSA Root of Trust (PRoT), which consists of:
– PSA Immutable Root of Trust: secure boot
– PSA Updateable Root of Trust: most trusted secure services
• Application Root of Trust (ARoT): isolated secure services
The PSA Immutable Root of Trust is the most trusted piece of code in the system, to which subsequent
Roots of Trust are anchored. In TF-M, this is the secure boot image, which verifies that the secure and
non-secure images are valid, have not been tampered with, and come from a reliable source. The secure
bootloader also verifies new images during the firmware update process, thanks to the public signing
key(s) built into it. As the name implies, this image is immutable.
The PSA Updateable Root of Trust implements the most trusted secure services and components in
TF-M, such as the Secure Partition Manager (SPM), and shared secure services like PSA Crypto, Internal
Trusted Storage (ITS), etc. Services in the PSA Updateable Root of Trust have access to other resources
in the same Root of Trust.
The Application Root of Trust is a reduced-privilege area in the secure processing environment which,
depending on the isolation level chosen when building TF-M, has limited access to the PRoT, or even
other ARoT services at the highest isolation levels. Some standard services exist in the ARoT, such as
Protected Storage (PS), and generally custom secure services that you implement should be placed in
the ARoT, unless a compelling reason is present to place them in the PRoT.
These divisions are distinct from the untrusted code, which runs in the non-secure environment, and
has the least privilege in the system. This is the Zephyr application image in this case.
Isolation Levels At present, there are three distinct isolation levels defined in TF-M, with increasingly
rigid boundaries between regions. The isolation level used will depend on your security requirements,
and the system resources available to you.
• Isolation Level 1 is the lowest isolation level, and the only major boundary is between the secure
and non-secure processing environment, usually by means of Arm TrustZone on Armv8-M pro-
cessors. There is no distinction here between the PSA Updateable Root of Trust (PRoT) and the
Application Root of Trust (ARoT). They execute at the same privilege level. This isolation level will
lead to the smallest combined application images.
• Isolation Level 2 builds upon level one by introducing a distinction between the PSA Updateable
Root of Trust and the Application Root of Trust, where ARoT services have limited access to PRoT
services, and can only communicate with them through public APIs exposed by the PRoT services.
ARoT services, however, are not strictly isolated from one another.
• Isolation Level 3 is the highest isolation level, and builds upon level 2 by isolating ARoT services
from each other, so that each ARoT is essentially silo’ed from other services. This provides the
highest level of isolation, but also comes at the cost of additional overhead and code duplication
between services.
The current isolation level can be checked via CONFIG_TFM_ISOLATION_LEVEL.
Secure Boot The default secure bootloader in TF-M is based on MCUBoot, and is referred to as BL2
in TF-M (for the second-stage bootloader, potentially after a HW-based bootloader on the secure MCU,
etc.).
All images in TF-M are hashed and signed, with the hash and signature verified by MCUBoot during the
firmware update process.
Some key features of MCUBoot as used in TF-M are:
• Public signing key(s) are baked into the bootloader
• S and NS images can be signed using different keys
• Firmware images can optionally be encrypted
• Client software is responsible for writing a new image to the secondary slot
• By default, uses static flash layout of two identically-sized memory regions
• Optional security counter for rollback protection
When dealing with (optionally) encrypted images:
• Only the payload is encrypted (header, TLVs are plain text)
• Hashing and signing are applied over the un-encrypted data
• Uses AES-CTR-128 or AES-CTR-256 for encryption
• Encryption key randomized every encryption cycle (via imgtool)
• The AES-CTR key is included in the image and can be encrypted using:
– RSA-OAEP
– AES-KW (128 or 256 bits depending on the AES-CTR key length)
– ECIES-P256
– ECIES-X25519
Key config properties to control secure boot in Zephyr are:
• CONFIG_TFM_BL2 toggles the bootloader (default = y).
• CONFIG_TFM_KEY_FILE_S overrides the secure signing key.
• CONFIG_TFM_KEY_FILE_NS overrides the non-secure signing key.
Secure Processing Environment Once the secure bootloader has finished executing, a TF-M based
secure image will begin execution in the secure processing environment. This is where our device will
be initially configured, and any secure services will be initialised.
Note that the starting state of our device is controlled by the secure firmware, meaning that when the
non-secure Zephyr application starts, peripherals may not be in the HW-default reset state. In case
of doubts, be sure to consult the board support packages in TF-M, available in the platform/ext/
target/ folder of the TF-M module (which is in modules/tee/tf-m/trusted-firmware-m/ within a
default Zephyr west workspace.)
Secure Services As of TF-M 1.4.1, the following secure services are generally available (although ven-
dor support may vary):
• Audit Logging (Audit)
• Crypto (Crypto)
• Firmware Update (FWU)
• Initial Attestation (IAS)
• Platform (Platform)
• Secure Storage, which has two parts:
– Internal Trusted Storage (ITS)
– Protected Storage (PS)
A template also exists for creating your own custom services.
For full details on these services, and their exposed APIs, please consult the TF-M Documentation.
Key Management and Derivation Key and secret management is a critical part of any secure device.
You need to ensure that key material is available to regions that require it, but not to anything else, and
that it is stored securely in a way that makes it difficult to tamper with or maliciously access.
The Internal Trusted Storage service in TF-M is used by the PSA Crypto service (which itself makes use
of mbedtls) to store keys, and ensure that private keys are only ever accessible to the secure processing
environment. Crypto operations that make use of key material, such as when signing payloads or when
decrypting sensitive data, all take place via key handles. At no point should the key material ever be
exposed to the NS environment.
One exception is that private keys can be provisioned into the secure processing environment as a one-
way operation, such as during a factory provisioning process, but even this should be avoided where
possible, and a request should be made to the SPE (via the PSA Crypto service) to generate a new private
key itself, and the public key for that can be requested during provisioning and logged in the factory.
This ensures the private key material is never exposed, or even known during the provisioning phase.
TF-M also makes extensive use of the Hardware Unique Key (HUK), which every TF-M device must
provide. This device-unique key is used by the Protected Storage service, for example, to encrypt
information stored in external memory. For example, this ensures that the contents of flash memory
can’t be decrypted if they are removed and placed on a new device, since each device has its own unique
HUK used while encrypting the memory contents the first time.
HUKs provide an additional advantage for developers, in that they can be used to derive new keys, and
the derived keys don’t need to be stored since they can be regenerated from the HUK at startup, using an
additional salt/seed value (depending on the key derivation algorithm used). This removes the storage
issue and a frequent attack vector. The HUK itself is usually highly protected in secure devices, and
inaccessible directly by users.
TFM_CRYPTO_ALG_HUK_DERIVATION identifies the default key derivation algorithm used if a software im-
plementation is used. The current default algorithm is HKDF (RFC 5869) with a SHA-256 hash. Other
hardware implementations may be available on some platforms.
Non-Secure Processing Environment Zephyr is used for the NSPE, using a board that is supported by
TF-M where the CONFIG_BUILD_WITH_TFM flag has been enabled.
Generally, you simply need to select the *_ns variant of a valid target (for example mps2_an521_ns),
which will configure your Zephyr application to run in the NSPE, correctly build and link it with the
TF-M secure images, sign the secure and non-secure images, and merge the three binaries into a single
tfm_merged.hex file. The west flash command will flash tfm_merged.hex by default in this configura-
tion.
At present, Zephyr cannot be configured to be used as the secure processing environment.
The following are some of the boards that can be used with TF-M:
You can run west boards -n _ns$ to search for non-secure variants of different board targets. To make
sure TF-M is supported for a board in its output, check that CONFIG_TRUSTED_EXECUTION_NONSECURE is
set to y in that board’s default configuration.
Software Requirements
The following Python modules are required when building TF-M binaries:
• cryptography
• pyasn1
• pyyaml
• cbor>=1.0.0
• imgtool>=1.9.0
• jinja2
• click
You can install them via:
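For example, with pip (the quotes keep the version constraints from being interpreted by the shell):

```shell
pip3 install --user cryptography pyasn1 pyyaml 'cbor>=1.0.0' 'imgtool>=1.9.0' jinja2 click
```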
They are used by TF-M’s signing utility to prepare firmware images for validation by the bootloader.
Part of the process of generating binaries for QEMU and merging signed secure and non-secure binaries
on certain platforms also requires the use of the srec_cat utility.
On Linux it is typically available through the distribution package manager, and on OS X via Homebrew.
For Windows-based systems, please make sure you have a copy of the utility available on your system
path. See, for example: SRecord for Windows
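Assuming common packaging (the package providing srec_cat is named srecord on both Debian-based distributions and Homebrew):

```shell
# Debian/Ubuntu
sudo apt-get install srecord

# macOS (Homebrew)
brew install srecord
```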
When building a valid _ns board target, TF-M will be built in the background and linked with the Zephyr
non-secure application. No knowledge of TF-M’s build system is required in most cases, and the following
will build a TF-M and Zephyr image pair, and run it in QEMU with no additional steps required:
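For example (the board target is one of the non-secure variants mentioned above; the sample path is a placeholder for your own application):

```shell
west build -b mps2_an521_ns zephyr/samples/hello_world -t run
```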
The outputs and certain key steps in this build process are described here, however, since you will need
to understand and interact with the outputs, and deal with signing the secure and non-secure images
before deploying them.
Build outputs are exposed as properties on the tfm CMake target; for example, the secure firmware hex
file can be referenced in CMake as:
$<TARGET_PROPERTY:tfm,TFM_S_HEX_FILE>
See the top level CMakeLists.txt file in the tfm module for an overview of all the properties.
Signing Images
When CONFIG_TFM_BL2 is set to y, TF-M uses a secure bootloader (BL2) and firmware images must be
signed with a private key. The firmware image is validated by the bootloader during updates using the
corresponding public key, which is stored inside the secure bootloader firmware image.
By default, <tfm-dir>/bl2/ext/mcuboot/root-rsa-3072.pem is used to sign secure images, and
<tfm-dir>/bl2/ext/mcuboot/root-rsa-3072_1.pem is used to sign non-secure images. These
default .pem keys can (and should) be overridden using the CONFIG_TFM_KEY_FILE_S and
CONFIG_TFM_KEY_FILE_NS config flags.
To satisfy PSA Certified Level 1 requirements, you MUST replace the default .pem files with a new key
pair!
To generate a new public/private key pair, run the following commands:
$ imgtool keygen -k root-rsa-3072_s.pem -t rsa-3072
$ imgtool keygen -k root-rsa-3072_ns.pem -t rsa-3072
You can then place the new .pem files in an alternate location, such as your Zephyr application folder,
and reference them in the prj.conf file via the CONFIG_TFM_KEY_FILE_S and CONFIG_TFM_KEY_FILE_NS
config flags.
Warning: Be sure to keep your private key file in a safe, reliable location! If you lose this
key file, you will be unable to sign any future firmware images, and it will no longer be
possible to update your devices in the field!
After the built-in signing script has run, it creates a tfm_merged.hex file that contains all three binaries:
bl2, tfm_s, and the Zephyr app. This hex file can then be flashed to your development board or run in
QEMU.
Custom CMake arguments
When building a Zephyr application with TF-M it might be necessary to
control the CMake arguments passed to the TF-M build.
The Zephyr TF-M build offers several Kconfig options for controlling the build, but doesn't cover every CMake
argument supported by the TF-M build system.
The TFM_CMAKE_OPTIONS property on the zephyr_property_target can be used to pass custom CMake
arguments to the TF-M build system.
To pass the CMake argument -DFOO=bar to the TF-M build system, place the following CMake snippet in
your CMakeLists.txt file.
set_property(TARGET zephyr_property_target
APPEND PROPERTY TFM_CMAKE_OPTIONS
-DFOO=bar
)
Note: TFM_CMAKE_OPTIONS is a list, so it is possible to append multiple options. CMake generator
expressions are also supported, such as $<1:-DFOO=bar>.
Since TFM_CMAKE_OPTIONS is a list argument it will be expanded before it is passed to the TF-M build
system. Options that have list arguments must therefore be properly escaped to avoid being expanded
as a list.
set_property(TARGET zephyr_property_target
APPEND PROPERTY TFM_CMAKE_OPTIONS
-DFOO="bar\\\;baz"
)
The build system offers targets to view and analyse RAM and ROM usage in generated images. The tools
run on the final images and give information about size of symbols and code being used in both RAM
and ROM. For more information on these tools look here: Footprint and Memory Usage
Use the tfm_ram_report to get the RAM report for TF-M secure firmware (tfm_s).
Using west:
Use the tfm_rom_report to get the ROM report for TF-M secure firmware (tfm_s).
Using west:
Use the bl2_ram_report to get the RAM report for TF-M MCUboot, if enabled.
Using west:
Use the bl2_rom_report to get the ROM report for TF-M MCUboot, if enabled.
Using west:
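Each of the report targets above is invoked through west's build command. As a sketch (assuming the application has already been built for a TF-M-enabled board), the four invocations would look like:

```
west build -t tfm_ram_report
west build -t tfm_rom_report
west build -t bl2_ram_report
west build -t bl2_rom_report
```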
The Trusted Firmware-M (TF-M) section contains information about the integration between TF-M and
Zephyr RTOS. Use this information to help understand how to integrate TF-M with Zephyr for Cortex-M
platforms and make use of its secure run-time services in Zephyr applications.
Board Definitions
TF-M will be built for the secure processing environment along with Zephyr if the
CONFIG_BUILD_WITH_TFM flag is set to y.
Generally, however, this value should never be set at the application level; instead, all config flags required
for TF-M should be set in a board variant with the _ns suffix.
This board variant must define an appropriate flash, SRAM and peripheral configuration that takes into
account the initialisation process in the secure processing environment. CONFIG_TFM_BOARD must also be
set via modules/trusted-firmware-m/Kconfig.tfm to the board name that TF-M expects for this target, so
that it knows which target to build for the secure processing environment.
Example: mps2_an521_ns
The mps2_an521 target is a dual-core Arm Cortex-M33 evaluation board that, when using the default
board variant, would generate a secure Zephyr binary.
The optional mps2_an521_ns target, however, sets additional Kconfig flags indicating that Zephyr should
be built as a non-secure image, linked with TF-M as an external project, and optionally with the secure
bootloader:
• CONFIG_TRUSTED_EXECUTION_NONSECURE y
• CONFIG_ARM_TRUSTZONE_M y
Comparing the mps2_an521.dts and mps2_an521_ns.dts files, we can see that the _ns version defines
offsets in flash and SRAM memory, which leave the required space for TF-M and the secure bootloader:
reserved-memory {
#address-cells = <1>;
#size-cells = <1>;
ranges;
/* The memory regions defined below must match what the TF-M
* project has defined for that board - a single image boot is
* assumed. Please see the memory layout in:
* https://git.trustedfirmware.org/TF-M/trusted-firmware-m.git/tree/platform/ext/target/mps2/an521/partition/flash_layout.h
*/
code: memory@100000 {
reg = <0x00100000 DT_SIZE_K(512)>;
};
ram: memory@28100000 {
reg = <0x28100000 DT_SIZE_M(1)>;
};
};
This reserves 1 MB of code memory and 1 MB of RAM for secure boot and TF-M, such that our non-secure
Zephyr application code will start at 0x100000, with RAM at 0x28100000. 512 KB of code memory is
available for the NS Zephyr image, along with 1 MB of RAM.
This matches the flash memory layout we see in flash_layout.h in TF-M:
mps2/an521 will be passed to TF-M as the board target, specified via CONFIG_TFM_BOARD.
The regression test suite can be run via the tfm_regression_test sample.
This sample tests various services and communication mechanisms across the NS/S boundary via the
PSA APIs. These tests provide a useful sanity check for proper integration between the NS RTOS (Zephyr in
this case) and the secure application (TF-M).
The PSA Arch Test suite, available via tfm_psa_test, contains a number of test suites that can be used
to validate that PSA API specifications are being followed by the secure application, TF-M being an
implementation of the Platform Security Architecture (PSA).
Only one of these suites can be run at a time, with the available test suites described via
CONFIG_TFM_PSA_TEST_* Kconfig flags:
Purpose
The output of these test suites is required to obtain PSA Certification for your specific board, RTOS
(Zephyr here), and PSA implementation (TF-M in this case).
They also provide a useful test case to validate any PRs that make meaningful changes to TF-M, such
as enabling a new TF-M board target, or making changes to the core TF-M module(s). They should
generally be run as a coherence check before publishing a new PR for new board support, etc.
4.22 Virtualization
• Overview
• Support
• ivshmem-v2
• API Reference
Overview
As Zephyr can run as a guest OS on QEMU and ACRN, it might be necessary to make VMs aware
of each other, or aware of the host. This is made possible by exposing shared memory among the parties
via a feature called ivshmem, which stands for inter-VM Shared Memory.
Two types are supported: plain shared memory (ivshmem-plain), and shared memory with the ability
for a VM to raise an interrupt on another VM, and thus to be interrupted itself as well (ivshmem-doorbell).
Please refer to the official QEMU ivshmem documentation for more information.
Support
Zephyr supports both versions: plain and doorbell. The ivshmem driver can be built by enabling
CONFIG_IVSHMEM. By default, this exposes the plain version; CONFIG_IVSHMEM_DOORBELL needs to
be enabled to get the doorbell version.
Because the doorbell version uses MSI-X vectors to support notification vectors,
CONFIG_IVSHMEM_MSI_X_VECTORS has to be set to the number of vectors that will be needed.
Note that a tiny shell module can be exposed to test the ivshmem feature by enabling
CONFIG_IVSHMEM_SHELL.
ivshmem-v2
API Reference
group ivshmem
Inter-VM Shared Memory (ivshmem) reference API.
Defines
IVSHMEM_V2_PROTO_UNDEFINED
IVSHMEM_V2_PROTO_NET
Typedefs
typedef int (*ivshmem_int_peer_f)(const struct device *dev, uint32_t peer_id, uint16_t vector)
Functions
Note: The returned status, if positive, of a raised signal is the vector that generated the signal.
This allows the user to have one signal for all vectors, or one per vector.
Parameters
• dev – Pointer to the device structure for the driver instance
• signal – A pointer to a valid and ready to be signaled struct k_poll_signal. Or
NULL to unregister any handler registered for the given vector.
• vector – The interrupt vector to get notification from
Returns
0 on success, a negative errno otherwise
struct ivshmem_driver_api
#include <ivshmem.h>
The retention system provides an API which allows applications to read and write data from and to
memory areas or devices that retain the data while the device is powered. This allows for sharing
information between different applications or within a single application without losing state information
when a device reboots. The stored data should not persist in the event of a power failure (or during some
low-power modes on some devices) nor should it be stored to a non-volatile storage like Flash, Electrically
Erasable Programmable Read-Only Memory (EEPROM), or battery-backed RAM.
The retention system builds on top of the retained data driver, and adds additional software-level features
to it for ensuring the validity of data. Optionally, a magic header can be used to check if the front of the
retained data memory section contains this specific value, and an optional checksum (1, 2, or 4-bytes
in size) of the stored data can be appended to the end of the data. Additionally, the retention system
API allows partitioning of the retained data sections into multiple distinct areas. For example, a 64-byte
retained data area could be split into 4 bytes for a boot mode, 16 bytes for a timestamp, and 44 bytes
for a last log message. All of these sections can be accessed or updated independently. The prefix and
checksum can be set per-instance using devicetree.
To use the retention system, a retained data driver must be set up for the board you are using; there is
a Zephyr driver which can use a portion of RAM as non-initialised memory for this purpose. The retention
system is then initialised as a child node of this device one or more times - note that the main memory
region will need to be reduced to account for this reserved portion of RAM. See the following example
(examples in this guide are based on the nrf52840dk_nrf52840 board and memory layout):
/ {
sram@2003FC00 {
compatible = "zephyr,memory-region", "mmio-sram";
reg = <0x2003FC00 DT_SIZE_K(1)>;
zephyr,memory-region = "RetainedMem";
status = "okay";
retainedmem {
The retention areas can then be accessed using the data retention API (once enabled with
CONFIG_RETENTION, which requires that CONFIG_RETAINED_MEM be enabled) by getting the device
using:
#include <zephyr/device.h>
#include <zephyr/retention/retention.h>
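As a sketch of what this looks like in practice (the node label retention0 is borrowed from the boot mode example below, and the offset and stored value are illustrative assumptions):

```
/* Hypothetical usage fragment; retention0 is assumed to exist in devicetree */
const struct device *retention_area = DEVICE_DT_GET(DT_NODELABEL(retention0));

uint8_t value = 0x01;

/* Writing stamps the magic header/checksum (if enabled) and marks the area valid */
retention_write(retention_area, 0, &value, sizeof(value));

/* After a reboot, check validity before trusting the contents */
if (retention_is_valid(retention_area) == 1) {
        retention_read(retention_area, 0, &value, sizeof(value));
}
```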
When the write function is called, the magic header and checksum (if enabled) will be set on the area,
and it will be marked as valid from that point onwards.
An addition to the retention subsystem is the boot mode interface; this can be used to dynamically change
the state of an application, or to run a different application with a minimal set of functions when a device is
rebooted (an example is having a buttonless way of entering MCUboot's serial recovery feature from the
main application).
To use the boot mode feature, a data retention entry must exist in the devicetree which is dedicated for
use as the boot mode selection (the user area data size only needs to be a single byte), and this area must
be assigned to the zephyr,boot-mode chosen node. See the following example:
/ {
sram@2003FFFF {
compatible = "zephyr,memory-region", "mmio-sram";
reg = <0x2003FFFF 0x1>;
zephyr,memory-region = "RetainedMem";
status = "okay";
retainedmem {
compatible = "zephyr,retained-ram";
status = "okay";
#address-cells = <1>;
#size-cells = <1>;
retention0: retention@0 {
compatible = "zephyr,retention";
status = "okay";
reg = <0x0 0x1>;
};
};
};
chosen {
zephyr,boot-mode = &retention0;
};
};
The boot mode interface can be enabled with CONFIG_RETENTION_BOOT_MODE and then accessed
by using the boot mode functions. If using MCUboot with serial recovery, it can be built with
CONFIG_MCUBOOT_SERIAL and CONFIG_BOOT_SERIAL_BOOT_MODE enabled, which will allow rebooting
directly into the serial recovery mode by using:
#include <zephyr/retention/bootmode.h>
#include <zephyr/sys/reboot.h>
bootmode_set(BOOT_MODE_TYPE_BOOTLOADER);
sys_reboot(0);
group retention_api
Retention API.
Typedefs
typedef int (*retention_read_api)(const struct device *dev, off_t offset, uint8_t *buffer, size_t
size)
typedef int (*retention_write_api)(const struct device *dev, off_t offset, const uint8_t *buffer,
size_t size)
Functions
struct retention_api
#include <retention.h>
group boot_mode_interface
Boot mode interface.
Enums
enum BOOT_MODE_TYPES
Values:
enumerator BOOT_MODE_TYPE_BOOTLOADER
Bootloader boot mode (e.g. serial recovery for MCUboot)
Functions
• Problem
• Inspiration, introducing io_uring
• Submission Queue and Chaining
• Completion Queue
• Executor and IODev
• Memory pools
• Outstanding Questions
– Timeouts and Deadlines
– Cancellation
– Userspace Support
– IODev and Executor API
– Special Hardware: Intel HDA
• When to Use
• Examples
– Chained Blocking Requests
– Non blocking device to device
– Nested iodevs for Devices on Buses (Sensors), Theoretical
• API Reference
– RTIO API
– RTIO SPSC API
RTIO provides a framework for doing asynchronous operation chains with event driven I/O. This section
covers the RTIO API, queues, executor, iodev, and common usage patterns with peripheral devices.
RTIO takes a lot of inspiration from Linux’s io_uring in its operations and API as that API matches up
well with hardware DMA transfer queues and descriptions.
A quick sales pitch on why RTIO works well in many scenarios:
1. API is DMA and interrupt friendly
2. No buffer copying
3. No callbacks
4. Blocking or non-blocking operation
4.24.1 Problem
An application wishing to do complex DMA or interrupt driven operations today in Zephyr requires direct
knowledge of the hardware and how it works. The DMA API has no understanding of other Zephyr
devices and how they relate.
This means doing complex audio, video, or sensor streaming requires direct hardware knowledge or
leaky abstractions over DMA controllers. Neither is ideal.
To enable asynchronous operations, especially with DMA, a description of what to do rather than direct
operations through C and callbacks is needed. Enabling DMA features such as channels with priority,
and sequences of transfers requires more than a simple list of descriptions.
Using DMA and/or interrupt driven I/O shouldn't dictate whether or not the call is blocking.
It’s better not to reinvent the wheel (or ring in this case) and io_uring as an API from the Linux kernel
provides a winning model. In io_uring there are two lock-free ring buffers acting as queues shared
between the kernel and a userland application. One queue for submission entries which may be chained
and flushed to create concurrent sequential requests. A second queue for completion queue events. Only
a single syscall is actually required to execute many operations, the io_uring_submit call. This call may
block the caller when a number of operations to wait on is given.
This model maps well to DMA and interrupt driven transfers. A request to do a sequence of operations
in an asynchronous way directly relates to the way hardware typically works with interrupt driven state
machines potentially involving multiple peripheral IPs like bus and DMA controllers.
The submission queue (sq), is the description of the operations to perform in concurrent chains.
For example, imagine a typical SPI transfer where you wish to write a register address and then read from
it. The sequence of operations might be:
1. Chip Select
2. Clock Enable
3. Write register address into SPI transmit register
4. Read from the SPI receive register into a buffer
5. Disable clock
6. Disable Chip Select
If anything in this chain of operations fails, give up. Some of those operations can be embodied in a
device abstraction that understands that a read or write implicitly means setting up the clock and chip select.
The transactional nature of the request also needs to be embodied in some manner. Of the operations
above, perhaps the read could be done using DMA, as it's large enough to make sense. That requires an
understanding of how to set up the device's particular DMA to do so.
The above sequence of operations is embodied in RTIO as a chain of submission queue entries (sqe).
Chaining is done by setting a bit flag in an sqe to signify that the next sqe must wait on the current one.
Because the chip select and clocking is common to a particular SPI controller and device on the bus, it is
embodied in what RTIO calls an iodev.
Multiple operations against the same iodev are done in the order provided, as soon as possible. If two
operation chains have varying points using the same device, it's possible one chain will have to wait for
another to complete.
In order to know when an sqe has completed, there is a completion queue (cq) with completion queue
events (cqe). Once an sqe has completed, a cqe is pushed into the cq. The ordering of cqes may
not match the order of sqes. A chain of sqes will, however, ensure ordering and failure cascading.
Other potential schemes are possible but a completion queue is a well trod idea with io_uring and other
similar operating system APIs.
Turning submission queue entries (sqe) into completion queue events (cqe) is the job of objects
implementing the executor and iodev APIs. These APIs coordinate with each other to enable
things like DMA transfers.
The end result of these APIs should be a method to resolve the request by deciding some of the following
questions with heuristic/constraint based decision making.
• Polling, Interrupt, or DMA transfer?
• If DMA, are the requirements met (peripheral supported by the DMAC, etc.)?
The executor is meant to provide policy for when to use each transfer type, and provide the common code
for walking through submission queue chains by providing calls the iodev may use to signal completion,
error, or a need to suspend and wait.
In some cases, the consumer may not know how much data will be produced. Alternatively, a consumer
might be handling data from multiple producers where the frequency of the data is unpredictable. In
these cases, read operations may not want to bind memory at the time of allocation, but leave it to the
IODev. In such cases, there exists a macro RTIO_DEFINE_WITH_MEMPOOL . It allows creating the RTIO
context with a dedicated pool of “memory blocks” which can be consumed by the IODev. Below is a
snippet setting up the RTIO context with a memory pool. The memory pool has 128 blocks; each block
is 16 bytes, and the data is 4-byte aligned.
#include <zephyr/rtio/rtio.h>
#define SQ_SIZE 4
#define CQ_SIZE 4
#define MEM_BLK_COUNT 128
#define MEM_BLK_SIZE 16
#define MEM_BLK_ALIGN 4
RTIO_EXECUTOR_SIMPLE_DEFINE(simple_exec);
RTIO_DEFINE_WITH_MEMPOOL(rtio_context, (struct rtio_executor *)&simple_exec,
SQ_SIZE, CQ_SIZE, MEM_BLK_COUNT, MEM_BLK_SIZE, MEM_BLK_ALIGN);
When a read is needed, the consumer simply needs to replace the call to rtio_sqe_prep_read() (which
takes a pointer to a buffer and a length) with a call to rtio_sqe_prep_read_with_pool(). The IODev
requires only a small change, which works with both pre-allocated data buffers and the mempool.
When the read is ready, instead of getting the buffers directly from the rtio_iodev_sqe, the IODev
should get the buffer and count by calling rtio_sqe_rx_buf() like so:
uint8_t *buf;
uint32_t buf_len;
int rc = rtio_sqe_rx_buf(iodev_sqe, MIN_BUF_LEN, DESIRED_BUF_LEN, &buf, &buf_len);
Finally, the consumer will be able to access the allocated buffer via rtio_cqe_get_mempool_buffer().
uint8_t *buf;
uint32_t buf_len;
int rc = rtio_cqe_get_mempool_buffer(&rtio_context, &cqe, &buf, &buf_len);
if (rc != 0) {
LOG_ERR("Failed to get mempool buffer");
return rc;
}
/* Release the cqe events (note that the buffer is not released yet) */
rtio_cqe_release_all(&rtio_context);
RTIO is not a complete API and solution; it is currently evolving to best fit the nature of an RTOS. The
general idea of a pair of queues to describe requests and completions seems sound and has been
proven out in other contexts. Questions remain, though.
Timeouts and Deadlines
Timeouts and deadlines are key to being real-time. Real-time in Zephyr means being able to do things
when an application wants them done. That could mean different things, from a deadline with best-effort
attempts to a timeout and failure.
These features would surely be useful in many cases, but would likely add some significant complexities.
It’s something to decide upon, and even if enabled would likely be a compile time optional feature
leading to complex testing.
Cancellation
Canceling an already queued operation could be possible with a small API addition to perhaps take both
the RTIO context and a pointer to the submission queue entry. However, cancellation as an API induces
many potential complexities that might not be appropriate. It’s something to be decided upon.
Userspace Support
RTIO with userspace is certainly plausible but would require the equivalent of a memory map call to map
the shared ringbuffers and also potentially dma buffers.
Additionally a DMA buffer interface would likely need to be provided for coherence and MMU usage.
In some cases there’s a need to always do things in a specific order with a specific buffer allocation
strategy. Consider a DMA that requires the usage of a circular buffer segmented into blocks that may
only be transferred one after another. This is the case of the Intel HDA stream for audio.
In this scenario the above API can still work, but would require an additional buffer allocator to work
with fixed sized segments.
It’s important to understand when DMA like transfers are useful and when they are not. It’s a poor
idea to assume that something made for high throughput will work for you. There is a computational,
memory, and latency cost to setup the description of transfers.
Polling at 1Hz an air sensor will almost certainly result in a net negative result compared to ad-hoc sensor
(i2c/spi) requests to get the sample.
Continuous transfers, driven by timer or interrupt, of data from a peripheral's on-board FIFO over I2C,
I3C, SPI, MIPI, I2S, etc.: maybe, but not always!
4.24.9 Examples
Examples speak loudly about the intended uses and goals of an API, so several key examples are
presented below. Some are entirely plausible today without a big leap. Others (the sensor example)
would require additional work in other APIs outside of RTIO as a subsystem and are theoretical.
Chained Blocking Requests
A common scenario is needing to write a register address and then read from it. This can be accomplished
by chaining a write into a read operation.
The transaction on I2C is implicit for each operation chain.
RTIO_I2C_IODEV(i2c_dev, I2C_DT_SPEC_INST(n));
RTIO_DEFINE(ez_io, 4, 4);
static uint16_t reg_addr;
static uint8_t buf[32];
if(read_cqe->result < 0) {
LOG_ERR("read failed!");
}
if(write_cqe->result < 0) {
LOG_ERR("write failed!");
}
rtio_spsc_release(ez_io.cq);
rtio_spsc_release(ez_io.cq);
}
Non blocking device to device
Imagine wishing to read from one device on an I2C bus and then write the same buffer to a device on a
SPI bus without blocking the thread or setting up callbacks or other IPC notification mechanisms.
Perhaps an I2C temperature sensor and a SPI LoRaWAN module. The following is a simplified version of
that potential operation chain.
RTIO_I2C_IODEV(i2c_dev, I2C_DT_SPEC_INST(n));
RTIO_SPI_IODEV(spi_dev, SPI_DT_SPEC_INST(m));
RTIO_DEFINE(ez_io, 4, 4);
static uint8_t buf[32];
int do_some_io(void)
{
uint32_t read, write;
struct rtio_sqe *read_sqe = rtio_spsc_acquire(ez_io.sq);
rtio_sqe_prep_read(read_sqe, i2c_dev, RTIO_PRIO_LOW, buf, 32);
read_sqe->flags = RTIO_SQE_CHAINED; /* the next item in the queue will wait on this one */
/* These calls might return NULL if the operations have not yet completed! */
for (int i = 0; i < 2; i++) {
struct rtio_cqe *cqe = rtio_spsc_consume(ez_io.cq);
while(cqe == NULL) {
cqe = rtio_spsc_consume(ez_io.cq);
k_yield();
}
if(cqe->userdata == &read && cqe->result < 0) {
LOG_ERR("read from i2c failed!");
}
if(cqe->userdata == &write && cqe->result < 0) {
LOG_ERR("write to spi failed!");
}
/* Must release the completion queue event after consume */
rtio_spsc_release(ez_io.cq);
}
}
Nested iodevs for Devices on Buses (Sensors), Theoretical
/* Note that the sensor device itself can use RTIO to get data over I2C/SPI
* potentially with DMA, but we don't need to worry about that here
* All we need to know is the device tree node_id and that it can be an iodev
*/
RTIO_SENSOR_IODEV(sensor_dev, DEVICE_DT_GET(DT_NODE(super6axis)));
RTIO_DEFINE(ez_io, 4, 4);
/* The sensor driver decides the minimum buffer size for us, we decide how
* many bufs. This could be a typical multiple of a fifo packet the sensor
* produces, ICM42688 for example produces a FIFO packet of 20 bytes in
* 20-bit mode at 32 kHz, so perhaps we'd like to get 4 buffers of 4 ms of data
* each in this setup to process on, and it's already been defined here for us.
*/
#include <sensors/icm42688_p.h>
static uint8_t bufs[4][ICM42688_RTIO_BUF_SIZE];
int do_some_sensors(void) {
/* Obtain a dmac executor from the DMA device */
struct device *dma = DEVICE_DT_GET(DT_NODE(dma0));
const struct rtio_executor *rtio_dma_exec =
/*
* Set the executor for our queue context
*/
rtio_set_executor(ez_io, rtio_dma_exec);
/* Mostly we want to feed the sensor driver enough buffers to fill while
* we wait and process! Small enough to process quickly with low latency,
* big enough to not spend all the time setting transfers up.
*
* It's assumed here that the sensor has been configured already
* and each FIFO watermark interrupt that occurs it attempts
* to pull from the queue, fill the buffer with a small metadata
* offset using its own rtio request to the SPI bus using DMA.
*/
for(int i = 0; i < 4; i++) {
struct rtio_sqe *read_sqe = rtio_spsc_acquire(ez_io.sq);
next:
/* Release completion queue event */
rtio_spsc_release(ez_io.cq);
/* resubmit a read request with the newly freed buffer to the sensor */
struct rtio_sqe *read_sqe = rtio_spsc_acquire(ez_io.sq);
rtio_sqe_prep_read(read_sqe, sensor_dev, RTIO_PRIO_HIGH, buf, ICM42688_RTIO_BUF_SIZE);
}
}
RTIO API
group rtio_api
RTIO API.
Defines
RTIO_OP_NOP
An operation that does nothing and will complete immediately
RTIO_OP_RX
An operation that receives (reads)
RTIO_OP_TX
An operation that transmits (writes)
RTIO_OP_TINY_TX
An operation that transmits tiny writes by copying the data to write
RTIO_OP_CALLBACK
An operation that calls a given function (callback)
RTIO_OP_TXRX
An operation that transceives (reads and writes simultaneously)
RTIO_IODEV_DEFINE(name, iodev_api, iodev_data)
Statically define and initialize an RTIO IODev.
Parameters
• name – Name of the iodev
• iodev_api – Pointer to struct rtio_iodev_api
• iodev_data – Data pointer
RTIO_BMEM
Allocate to bss if available.
If CONFIG_USERSPACE is selected, allocate to the rtio_partition bss. Maps to:
K_APP_BMEM(rtio_partition) static
If CONFIG_USERSPACE is disabled, allocate as plain static: static
RTIO_DMEM
Allocate as initialized memory if available.
If CONFIG_USERSPACE is selected, allocate to the rtio_partition init. Maps to:
K_APP_DMEM(rtio_partition) static
If CONFIG_USERSPACE is disabled, allocate as plain static: static
RTIO_DEFINE(name, sq_sz, cq_sz)
Statically define and initialize an RTIO context.
Parameters
• name – Name of the RTIO
• sq_sz – Size of the submission queue entry pool
• cq_sz – Size of the completion queue entry pool
RTIO_DEFINE_WITH_MEMPOOL(name, sq_sz, cq_sz, num_blks, blk_size, balign)
Statically define and initialize an RTIO context.
Parameters
• name – Name of the RTIO
• sq_sz – Size of the submission queue, must be power of 2
• cq_sz – Size of the completion queue, must be power of 2
• num_blks – Number of blocks in the memory pool
• blk_size – The number of bytes in each block
• balign – The block alignment
Typedefs
typedef void (*rtio_callback_t)(struct rtio *r, const struct rtio_sqe *sqe, void *arg0)
Callback signature for RTIO_OP_CALLBACK.
Param r
RTIO context being used with the callback
Param sqe
Submission for the callback op
Param arg0
Argument option as part of the sqe
Functions
static inline void rtio_sqe_prep_nop(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, void
*userdata)
Prepare a nop (no op) submission.
static inline void rtio_sqe_prep_read(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t
prio, uint8_t *buf, uint32_t len, void *userdata)
Prepare a read op submission.
static inline void rtio_sqe_prep_read_with_pool(struct rtio_sqe *sqe, const struct rtio_iodev
*iodev, int8_t prio, void *userdata)
Prepare a read op submission with context’s mempool.
See also:
rtio_sqe_prep_read()
static inline void rtio_sqe_prep_read_multishot(struct rtio_sqe *sqe, const struct rtio_iodev
*iodev, int8_t prio, void *userdata)
static inline void rtio_sqe_prep_write(struct rtio_sqe *sqe, const struct rtio_iodev *iodev, int8_t
prio, uint8_t *buf, uint32_t len, void *userdata)
Prepare a write op submission.
static inline void rtio_sqe_prep_tiny_write(struct rtio_sqe *sqe, const struct rtio_iodev *iodev,
int8_t prio, const uint8_t *tiny_write_data, uint8_t
tiny_write_len, void *userdata)
Prepare a tiny write op submission.
Unlike the normal write operation, where the source buffer must outlive the call, the tiny write
data in this case is copied to the sqe. It must be tiny to fit within the specified size of an rtio_sqe.
This is useful in many scenarios with RTL logic where a write of the register to subsequently
read must be done.
static inline void rtio_sqe_prep_callback(struct rtio_sqe *sqe, rtio_callback_t callback, void
*arg0, void *userdata)
Prepare a callback op submission.
A somewhat special operation in that it may only be done in kernel mode.
Used where general purpose logic is required in a queue of io operations to do transforms or
logic.
static inline void rtio_sqe_prep_transceive(struct rtio_sqe *sqe, const struct rtio_iodev *iodev,
int8_t prio, uint8_t *tx_buf, uint8_t *rx_buf,
uint32_t buf_len, void *userdata)
Prepare a transceive op submission.
static inline struct rtio_iodev_sqe *rtio_sqe_pool_alloc(struct rtio_sqe_pool *pool)
Return values
0 – On success
Variables
struct rtio_sqe
#include <rtio.h> A submission queue event.
Public Members
uint8_t op
Op code
uint8_t prio
Op priority
uint16_t flags
Op Flags
uint16_t iodev_flags
Op iodev flags
void *userdata
User provided data which is returned upon operation completion. Could be a pointer or
integer.
If unique identification of completions is desired this should be unique as well.
uint32_t buf_len
Length of buffer
uint8_t *buf
Buffer to use
uint8_t tiny_buf_len
Length of tiny buffer
uint8_t tiny_buf[7]
Tiny buffer
void *arg0
Last argument given to callback
struct rtio_cqe
#include <rtio.h> A completion queue event.
Public Members
int32_t result
Result from operation
void *userdata
Associated userdata with operation
uint32_t flags
Flags associated with the operation
struct rtio_sqe_pool
#include <rtio.h>
struct rtio_cqe_pool
#include <rtio.h>
struct rtio_block_pool
#include <rtio.h>
struct rtio
#include <rtio.h> An RTIO context containing what can be viewed as a pair of queues.
A queue for submissions (available and in queue to be produced) as well as a queue of com-
pletions (available and ready to be consumed).
The rtio executor along with any objects implementing the rtio_iodev interface are the con-
sumers of submissions and producers of completions.
No work is started until rtio_submit is called.
struct rtio_iodev_sqe
#include <rtio.h> Compute the mempool block index for a given pointer.
struct rtio_iodev_api
#include <rtio.h> API that an RTIO IO device should implement.
Public Members
struct rtio_iodev
#include <rtio.h> An IO device with a function table for submitting requests.
group rtio_spsc
RTIO Single Producer Single Consumer (SPSC) Queue API.
Defines
RTIO_SPSC_INITIALIZER(sz, buf)
Statically initialize an rtio_spsc.
Parameters
• sz – Size of the spsc, must be power of 2 (ex: 2, 4, 8)
• buf – Buffer pointer
RTIO_SPSC_DECLARE(name, type)
Declare an anonymous struct type for an rtio_spsc.
Parameters
• name – Name of the spsc symbol to be provided
• type – Type stored in the spsc
RTIO_SPSC_DEFINE(name, type, sz)
Define an rtio_spsc with a fixed size.
Parameters
• name – Name of the spsc symbol to be provided
• type – Type stored in the spsc
• sz – Size of the spsc, must be power of 2 (ex: 2, 4, 8)
rtio_spsc_size(spsc)
Size of the SPSC queue.
Parameters
• spsc – SPSC reference
rtio_spsc_reset(spsc)
Initialize/reset a spsc such that it's empty.
Note that this is not safe to do while being used in a producer/consumer situation with mul-
tiple calling contexts (isrs/threads).
Parameters
• spsc – SPSC to initialize/reset
rtio_spsc_acquire(spsc)
Acquire an element to produce from the SPSC.
Parameters
• spsc – SPSC to acquire an element from for producing
Returns
A pointer to the acquired element or null if the spsc is full
rtio_spsc_produce(spsc)
Produce one previously acquired element to the SPSC.
This makes one element available to the consumer immediately
Parameters
• spsc – SPSC to produce the previously acquired element or do nothing
rtio_spsc_produce_all(spsc)
Produce all previously acquired elements to the SPSC.
This makes all previously acquired elements available to the consumer immediately
Parameters
• spsc – SPSC to produce all previously acquired elements or do nothing
rtio_spsc_drop_all(spsc)
Drop all previously acquired elements.
This makes all previously acquired elements available to be acquired again
Parameters
• spsc – SPSC to drop all previously acquired elements or do nothing
rtio_spsc_consume(spsc)
Consume an element from the spsc.
Parameters
• spsc – Spsc to consume from
Returns
Pointer to element or null if no consumable elements left
rtio_spsc_release(spsc)
Release a consumed element.
Parameters
• spsc – SPSC to release consumed element or do nothing
rtio_spsc_release_all(spsc)
Release all consumed elements.
Parameters
• spsc – SPSC to release consumed elements or do nothing
rtio_spsc_acquirable(spsc)
Count of acquirable elements in the SPSC.
Parameters
• spsc – SPSC to get item count for
rtio_spsc_consumable(spsc)
Count of consumable elements in the SPSC.
Parameters
• spsc – SPSC to get item count for
rtio_spsc_peek(spsc)
Peek at the first available item in queue.
Parameters
• spsc – Spsc to peek into
Returns
Pointer to element or null if no consumable elements left
rtio_spsc_next(spsc, item)
Peek at the next item in the queue from a given one.
Parameters
• spsc – SPSC to peek at
• item – Pointer to an item in the queue
Returns
Pointer to element or null if none left
rtio_spsc_prev(spsc, item)
Get the previous item in the queue from a given one.
Parameters
• spsc – SPSC to peek at
• item – Pointer to an item in the queue
Returns
Pointer to element or null if none left
struct rtio_spsc
Common SPSC attributes.
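The acquire/produce/consume/release split documented above can be modeled in plain C. The sketch below is not Zephyr's implementation; it only illustrates the bookkeeping that the power-of-2 size requirement enables: counters grow monotonically, and `(index & (size - 1))` selects the slot.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal single-producer single-consumer ring model (a sketch, not
 * Zephyr's rtio_spsc implementation). The size must be a power of 2 so
 * that indices can increase forever while (index & mask) picks the slot.
 */
#define SPSC_SZ 4 /* must be a power of 2 */

struct spsc_model {
	uint32_t in;  /* count of produced elements */
	uint32_t out; /* count of released elements */
	uint32_t acq; /* acquired but not yet produced */
	uint32_t con; /* consumed but not yet released */
	int buf[SPSC_SZ];
};

/* Acquire a slot for producing, or NULL if the ring is full. */
static int *spsc_acquire(struct spsc_model *s)
{
	if (s->in + s->acq - s->out >= SPSC_SZ) {
		return NULL; /* full */
	}
	return &s->buf[(s->in + s->acq++) & (SPSC_SZ - 1)];
}

/* Make one previously acquired element visible to the consumer. */
static void spsc_produce(struct spsc_model *s)
{
	if (s->acq > 0) {
		s->acq--;
		s->in++;
	}
}

/* Consume the oldest produced element, or NULL if none is available. */
static int *spsc_consume(struct spsc_model *s)
{
	if (s->out + s->con >= s->in) {
		return NULL; /* empty */
	}
	return &s->buf[(s->out + s->con++) & (SPSC_SZ - 1)];
}

/* Release one consumed element back to the producer. */
static void spsc_release(struct spsc_model *s)
{
	if (s->con > 0) {
		s->con--;
		s->out++;
	}
}
```

Note how an acquired element is invisible to the consumer until it is produced, matching the `rtio_spsc_acquire`/`rtio_spsc_produce` contract above.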
The Zephyr message bus - Zbus is a lightweight and flexible message bus enabling a simple way for threads
to talk to one another.
• Concepts
– Virtual Distributed Event Dispatcher
– Limitations
• Usage
– Publishing to a channel
– Reading from a channel
– Forcing channel notification
– Declaring channels and observers
– Iterating over channels and observers
– Advanced channel control
• Samples
• Suggested Uses
• Configuration Options
• API Reference
4.25.1 Concepts
Threads can broadcast messages to all interested observers using zbus, so many-to-many communication
is possible. The bus implements message-passing and publish/subscribe communication paradigms that
enable threads to communicate synchronously or asynchronously through shared memory. Communication
through zbus is channel-based: threads publish messages to channels and read messages from them.
Additionally, threads can observe channels and receive notifications from the bus when the channels are
modified. The figure below shows an example of a typical application using zbus in which the (hardware-
independent) application logic talks to other threads via the message bus. Note that the threads are
decoupled from each other: they only use zbus channels and do not need to know about each other to
communicate.
• Threads (subscribers) and callbacks (listeners) publishing, reading, and receiving notifications
from the bus.
The bus makes the publish, read, and subscribe actions available over channels. Publishing and reading
are available in all RTOS thread contexts. However, they cannot run inside Interrupt Service Routines (ISRs)
because zbus uses mutexes to control channel access, and mutexes cannot be used inside ISRs.
The publish and read operations are simple and fast: the procedure is a mutex lock, followed by a
memory copy to or from a shared memory region, and then a mutex unlock. Another essential
aspect of zbus is the observers, which can be:
• Static: defined at compile time. A static observer cannot be removed at runtime, but it can be
suppressed by calling zbus_obs_set_enable() ;
• Dynamic: added to and removed from a channel at runtime.
For illustration purposes, consider the usual sensor-based solution in the figure below. When the timer is
triggered, it pushes an action to a work queue that publishes to the Start trigger channel. Because the
sensor thread is subscribed to the Start trigger channel, it fetches the sensor data. Notice the VDED
executes the blink callback because it also listens to the Start trigger channel. When the sensor data
is ready, the sensor thread publishes it to the Sensor data channel. The core thread, as a Sensor data
channel subscriber, processes the sensor data and stores it in an internal sample buffer. It repeats until
the sample buffer is full; when it happens, the core thread aggregates the sample buffer information,
prepares a package, and publishes that to the Payload channel. The Lora thread receives that because it
is a Payload channel subscriber and sends the payload to the cloud. When it completes the transmission,
the Lora thread publishes to the Transmission done channel. The VDED executes the blink callback
again since it listens to the Transmission done channel.
This way of implementing the solution makes the application more flexible, enabling us to change things
independently. For example, if we want to change the trigger from a timer to a button press, we can do
so without affecting other parts of the system. Likewise, if we would like to change the communication
interface from LoRa to Bluetooth, we only need to change the LoRa thread; no other change is required
to make that work. The same holds for every block in the figure, which suggests that zbus promotes
decoupling in the system architecture.
Another important aspect of using zbus is the reuse of system modules. If a code portion with well-
defined behavior (we call that a module) only uses zbus channels and not hardware interfaces, it can
easily be reused in other solutions. The new solution only needs to implement the interfaces (sets of
channels) the module requires. In that way, zbus can improve module reuse.
The last important note is the reach of the zbus solution. Zbus can be used in many ways, leaving
developers as free as possible to create what they need. For example, messages can be dynamically or
statically allocated; notifications can be synchronous or asynchronous; developers can control a channel
in several ways by claiming it; developers can attach their own metadata to a channel by using the
user-data field; and the discretionary use of a validator enables a system to be strict about message
format. These characteristics increase the range of solutions that can be built with zbus and make it a
good fit as an open-source community tool.
The VDED execution always happens in the publishing thread’s context, so it cannot occur inside
an Interrupt Service Routine (ISR). Therefore, ISRs must only access channels indirectly. The basic
description of the execution is as follows:
• The channel mutex is acquired;
• The channel receives the new message via direct copy (by a raw memcpy());
• The event dispatcher logic executes the listeners in the same sequence they appear on the channel
observers’ list. The listeners can perform non-copy quick access to the constant message reference
directly (via the zbus_chan_const_msg() function) since the channel is still locked;
• The event dispatcher logic pushes the channel’s reference to the subscribers’ notification message
queue. The pushing sequence is the same as the subscribers appear in the channel observers’ list;
• At last, the publishing function unlocks the channel.
To illustrate the VDED execution, consider the example illustrated below. We have four threads in as-
cending priority T1, T2, T3, and T4 (the highest priority); two listeners, L1 and L2; and channel A.
Suppose L1, L2, T2, T3, and T4 observe channel A.
The following code implements channel A. Note the struct a_msg is illustrative only.
ZBUS_CHAN_DEFINE(a_chan, /* Name */
struct a_msg, /* Message type */
NULL, /* Validator */
NULL, /* User Data */
ZBUS_OBSERVERS(L1, L2, T2, T3, T4), /* observers */
ZBUS_MSG_INIT(0) /* Initial value {0} */
);
In the figure below, the letters indicate some action related to the VDED execution. The X-axis represents
the time, and the Y-axis represents the priority of threads. Channel A’s message, represented by a voice
balloon, is only one memory portion (shared memory). It appears several times only as an illustration of
the message at that point in time.
The figure above illustrates the actions performed during the VDED execution when T1 publishes to
channel A. Thus, the figure below describes the actions (represented by a letter) of the VDED execution.
Limitations
Based on the fact that developers can use zbus to solve many different problems, some challenges arise.
Zbus will not solve every problem, so it is necessary to analyze the situation to be sure zbus is applicable.
For instance, based on the zbus benchmark, it would not be well suited to a high-speed stream of bytes
between threads. The Pipe kernel object solves this kind of need.
Delivery guarantees Zbus always delivers messages to the listeners. However, there are no message
delivery guarantees for subscribers, because zbus only sends the notification; the message reading
depends on the subscriber’s implementation. This is because each channel’s message is a mutex-protected
singleton object used for message transfer. In other words, a channel can be seen as a single-slot queue
where publishers always overwrite the slot when it is full. It is possible to increase the delivery rate by
following these design tips:
• Keep listeners as quick as possible (treat them like ISRs). If some processing is needed, consider
submitting a work item to a work queue;
• Try to give producers a high priority to avoid losses;
• Leave spare CPU time for observers to consume the data produced;
• Consider using message queues or pipes for intensive byte transfers.
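The "single-slot queue where publishers always overwrite" behavior can be modeled in a few lines of plain C (a sketch, not the zbus implementation) to show why a slow subscriber can miss messages:

```c
#include <stdbool.h>

/* Plain-C model of a zbus-like channel: one shared message slot that
 * every publish overwrites, plus a pending-notification flag. A reader
 * that runs late only ever sees the latest value.
 */
struct channel_model {
	int msg;       /* the single message slot */
	bool notified; /* pending notification for the subscriber */
};

static void model_pub(struct channel_model *ch, int msg)
{
	ch->msg = msg;       /* always overwrites the previous message */
	ch->notified = true; /* at most one pending notification */
}

/* Returns true and copies the latest message if a notification is pending. */
static bool model_sub_read(struct channel_model *ch, int *out)
{
	if (!ch->notified) {
		return false;
	}
	ch->notified = false;
	*out = ch->msg;
	return true;
}
```

If two publishes happen before the subscriber runs, the first message is lost: the subscriber reads only the second value, which is exactly the loss scenario the tips above mitigate.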
Message delivery sequence The listeners (synchronous observers) are notified, and consume the
message, in the channel definition sequence. The subscribers, given their asynchronous nature, also
receive the notification in the channel definition sequence, but only consume the data when they execute
again; so delivery respects the order, but the priority assigned to the subscribers defines the reaction
sequence. All the listeners (static or dynamic) receive the message before subscribers receive the
notification. The sequence of delivery is: (i) static listeners; (ii) runtime listeners; (iii) static subscribers;
and at last (iv) runtime subscribers.
4.25.2 Usage
Zbus operation depends on channels and observers. Therefore, it is necessary to determine its message
and observers list during the channel definition. A message is a regular C struct; the observer can be a
subscriber (asynchronous) or a listener (synchronous).
The following code defines and initializes a regular channel and its dependencies. This channel ex-
changes accelerometer data, for example.
struct acc_msg {
int x;
int y;
int z;
};
ZBUS_CHAN_DEFINE(acc_chan, /* Name */
struct acc_msg, /* Message type */
NULL, /* Validator */
NULL, /* User Data */
ZBUS_OBSERVERS(my_listener, my_subscriber), /* observers */
ZBUS_MSG_INIT(.x = 0, .y = 0, .z = 0) /* Initial value */
);
void listener_callback_example(const struct zbus_channel *chan)
{
        const struct acc_msg *acc;
        if (&acc_chan == chan) {
                acc = zbus_chan_const_msg(chan); // Direct message access
                LOG_DBG("From listener -> Acc x=%d, y=%d, z=%d", acc->x, acc->y, acc->z);
        }
}
ZBUS_LISTENER_DEFINE(my_listener, listener_callback_example);
ZBUS_SUBSCRIBER_DEFINE(my_subscriber, 4);
void subscriber_task(void)
{
        const struct zbus_channel *chan;

        while (!zbus_sub_wait(&my_subscriber, &chan, K_FOREVER)) {
                struct acc_msg acc;

                if (&acc_chan == chan) {
                        // Indirect message access
                        zbus_chan_read(&acc_chan, &acc, K_NO_WAIT);
                        LOG_DBG("From subscriber -> Acc x=%d, y=%d, z=%d", acc.x, acc.y, acc.z);
                }
        }
}
K_THREAD_DEFINE(subscriber_task_id, 512, subscriber_task, NULL, NULL, NULL, 3, 0, 0);
Note: It is unnecessary to claim/lock a channel before accessing the message inside the listener since
the event dispatcher calls listeners with the notifying channel already locked. Subscribers, however, must
claim/lock the channel, or use regular read operations, to access the message after being notified.
Channels can have a validator function that enables a channel to accept only valid messages. Publish
attempts rejected by such hard channels return immediately with an error code. This allows the original
creators of a channel to exert some authority over other developers/publishers who may want to piggy-
back on their channels. The following code defines and initializes a hard channel and its dependencies.
Only valid messages can be published to a hard channel. This is possible because a validator function
is passed to the channel’s definition. In this example, only messages with move equal to 0, -1, or 1 are
valid; the publish function will reject all other values of move.
struct control_msg {
int move;
};
ZBUS_CHAN_DEFINE(control_chan, /* Name */
struct control_msg, /* Message type */
control_validator, /* Validator */
&message_count, /* User data */
ZBUS_OBSERVERS_EMPTY, /* observers */
ZBUS_MSG_INIT(.move = 0) /* Initial value */
);
Publishing to a channel
Messages are published to a channel in zbus by calling zbus_chan_pub() . For example, the following
code builds on the examples above and publishes to channel acc_chan. The code is trying to publish the
message acc1 to channel acc_chan, and it will wait up to one second for the message to be published.
Otherwise, the operation fails. As can be inferred from the code sample, it’s OK to use stack allocated
messages since VDED copies the data internally.
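The publish call described above might look like the following (a minimal sketch reusing acc_chan and struct acc_msg from the earlier examples; the message values are illustrative):

```c
struct acc_msg acc1 = {.x = 1, .y = 10, .z = 100};

zbus_chan_pub(&acc_chan, &acc1, K_SECONDS(1));
```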
Reading from a channel
Messages are read from a channel in zbus by calling zbus_chan_read() . For example, the following
code tries to read the channel acc_chan, waiting up to 500 milliseconds for the message. Otherwise,
the operation fails.
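A minimal sketch of such a read, again reusing acc_chan and struct acc_msg from the earlier examples:

```c
struct acc_msg acc;

zbus_chan_read(&acc_chan, &acc, K_MSEC(500));
```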
Forcing channel notification
It is possible to force zbus to notify a channel’s observers by calling zbus_chan_notify() . For example,
the following code builds on the examples above and forces a notification for the channel acc_chan.
Note this can send events with no message, which does not require any data exchange. See the code
example under Claim and finish a channel where this may become useful.
zbus_chan_notify(&acc_chan, K_NO_WAIT);
Declaring channels and observers
To access channels or observers from files other than their defining files, it is necessary to declare them
by calling ZBUS_CHAN_DECLARE and ZBUS_OBS_DECLARE . In other words, zbus channel definitions and
declarations with the same channel names in different files point to the same (global) channel. Thus,
developers should be careful about names: reusing an existing channel name for a new channel will
make linking fail. It is possible to declare more than one channel or observer in the same call. The
following code builds on the examples above and declares the defined channels and observers.
ZBUS_OBS_DECLARE(my_listener, my_subscriber);
ZBUS_CHAN_DECLARE(acc_chan, version_chan);
Iterating over channels and observers
The zbus subsystem also implements Iterable Sections for channels and observers, with supporting
APIs like zbus_iterate_over_channels() and zbus_iterate_over_observers() . This feature enables
developers to call a procedure over all declared channels, where the procedure parameter is a
zbus_channel . The execution sequence follows the alphabetical order of the channel names (see the
Iterable Sections documentation for details). Zbus also implements this feature for zbus_observer .
static int count;

static bool print_channel_data_iterator(const struct zbus_channel *chan)
{
        LOG_DBG("%d - Channel %s:", count, zbus_chan_name(chan));
        LOG_DBG("      Message size: %d", (int)zbus_chan_msg_size(chan));
        ++count;
        return true;
}

static bool print_observer_data_iterator(const struct zbus_observer *obs)
{
        LOG_DBG("%d - %s", count, obs->queue ? "Subscriber" : "Listener");
        ++count;
        return true;
}

int main(void)
{
        LOG_DBG("Channel list:");
        count = 0;
        zbus_iterate_over_channels(print_channel_data_iterator);

        LOG_DBG("Observers list:");
        count = 0;
        zbus_iterate_over_observers(print_observer_data_iterator);
}
D: Channel list:
D: 0 - Channel acc_chan:
D: Message size: 12
D: Observers:
D: - my_listener
D: - my_subscriber
D: 1 - Channel version_chan:
D: Message size: 4
D: Observers:
D: Observers list:
D: 0 - Listener my_listener
D: 1 - Subscriber my_subscriber
Advanced channel control
Zbus was designed to be as flexible and extensible as possible. Thus, some features are designed to
provide control and extensibility to the bus.
Listeners message access For performance purposes, listeners can access the receiving channel’s mes-
sage directly, since they already hold the mutex lock for it. To access the channel’s message, the listener
should use zbus_chan_const_msg() , because the channel passed as an argument to the listener func-
tion is a constant pointer to the channel. The const pointer return type tells developers not to modify the
message.
void listener_callback_example(const struct zbus_channel *chan)
{
        if (&acc_chan == chan) {
                const struct acc_msg *acc = zbus_chan_const_msg(chan);

                LOG_DBG("From listener -> Acc x=%d, y=%d, z=%d", acc->x, acc->y, acc->z);
        }
}
User Data It is possible to pass custom data into the channel’s user_data for various purposes, such as
writing channel metadata. That can be achieved by passing a pointer to the channel definition macro’s
user_data field, which will then be accessible by others. Note that user_data is individual for each
channel. Also, note that user_data access is not thread-safe. For thread-safe access to user_data, see
the next section.
Claim and finish a channel To take more control over channels, two functions were added:
zbus_chan_claim() and zbus_chan_finish() . With these functions, it is possible to access the chan-
nel’s metadata safely. While a channel is claimed, no other actions are available on that channel. After
the channel is finished, all the actions are available again.
Warning: Never change the fields of the channel struct directly. It may cause zbus behavior incon-
sistencies and scheduling issues.
The following code builds on the examples above and claims acc_chan to set its user_data. Suppose
we would like to count how many times the channel exchanges messages. We define the user_data to
point to a 32-bit integer. This code could be added to the listener code described above.
if (!zbus_chan_claim(&acc_chan, K_MSEC(200))) {
int *message_counting = (int *) zbus_chan_user_data(&acc_chan);
*message_counting += 1;
zbus_chan_finish(&acc_chan);
}
The following code has the exact behavior of the code in Publishing to a channel.
if (!zbus_chan_claim(&acc_chan, K_MSEC(200))) {
        struct acc_msg *acc1 = (struct acc_msg *) zbus_chan_msg(&acc_chan);

        acc1->x = 1;
        acc1->y = 1;
        acc1->z = 1;

        zbus_chan_finish(&acc_chan);
        zbus_chan_notify(&acc_chan, K_SECONDS(1));
}
The following code has the exact behavior of the code in Reading from a channel.
if (!zbus_chan_claim(&acc_chan, K_MSEC(200))) {
        const struct acc_msg *acc1 = (const struct acc_msg *) zbus_chan_const_msg(&acc_chan);

        /* use acc1 here, while the channel is still claimed */

        zbus_chan_finish(&acc_chan);
}
Runtime observer registration It is possible to add observers to channels at runtime. This feature
uses the object pool pattern, in which the dynamic nodes are pre-allocated and can be used and recycled.
To enable this feature, it is necessary to set the pool size via the CONFIG_ZBUS_RUNTIME_OBSERVERS_POOL_SIZE
option. Furthermore, the pool uses memory slabs; when necessary, turn on the
CONFIG_MEM_SLAB_TRACE_MAX_UTILIZATION configuration to track the maximum usage of the pool.
The following example illustrates the runtime registration usage.
ZBUS_LISTENER_DEFINE(my_listener, callback);
// ...
void thread_entry(void) {
        // ...
        /* Adding the observer to channel chan1 */
        zbus_chan_add_obs(&chan1, &my_listener);

        /* Removing the observer from channel chan1 */
        zbus_chan_rm_obs(&chan1, &my_listener);
}
Zbus can only use a limited number of dynamic observers. The configuration option
CONFIG_ZBUS_RUNTIME_OBSERVERS_POOL_SIZE represents the size of the runtime observers pool (a mem-
ory slab); change it to fit the solution’s needs. Use k_mem_slab_num_used_get() to verify how many
runtime observer slots are in use, and k_mem_slab_max_used_get() to obtain the maximum number of
used slots reached during execution. Use that to set an appropriate pool size and avoid waste.
Warning: Do not use _zbus_runtime_obs_pool memory slab directly. It may lead to inconsistencies.
4.25.3 Samples
For a complete overview of zbus usage, take a look at the samples. The following samples are
available:
• zbus-hello-world-sample illustrates the code used above in action;
• zbus-work-queue-sample shows how to define and use different kinds of observers. Note there is
an example of using a work queue instead of executing the listener as an execution option;
• zbus-dyn-channel-sample demonstrates how to use dynamically allocated exchanging data in zbus;
• zbus-uart-bridge-sample shows an example of sending the operation of the channel to a host via
serial;
• zbus-remote-mock-sample illustrates how to implement an external mock (on the host) to send
and receive messages to and from the bus.
• zbus-runtime-obs-registration-sample illustrates a way of using the runtime observer registration
feature;
• zbus-benchmark-sample implements a benchmark with different combinations of inputs.
4.25.4 Suggested Uses
Use zbus to transfer data (messages) between threads in one-to-one, one-to-many, and many-to-many
topologies, synchronously or asynchronously. Choosing the proper observer type is crucial. Use sub-
scribers for scenarios that can tolerate message losses and duplications; when they cannot, use listeners.
In addition to a listener, another asynchronous message processing mechanism (such as message queues)
may be necessary to retain the pending message until it gets processed.
Note: Zbus can be used to transfer streams from a producer to a consumer. However, this can in-
crease zbus’ communication latency, so consider a Pipe as an alternative for this communication
topology.
group zbus_apis
Zbus API.
Defines
ZBUS_OBS_DECLARE(...)
This macro lists the observers to be used in a file. Internally, it declares the observers with the
extern statement. Note it is only necessary when the observers are declared outside the file.
ZBUS_CHAN_DECLARE(...)
This macro lists the channels to be used in a file. Internally, it declares the channels with the
extern statement. Note it is only necessary when the channels are declared outside the file.
ZBUS_OBSERVERS_EMPTY
This macro indicates the channel has no observers.
ZBUS_OBSERVERS(...)
This macro indicates the channel has the listed observers. Note that the observers are notified
in the order listed.
ZBUS_CHAN_DEFINE(_name, _type, _validator, _user_data, _observers, _init_val)
Define and initialize a zbus channel.
See also:
struct zbus_channel
Parameters
• _name – The channel’s name.
• _type – The Message type. It must be a struct or union.
• _validator – The validator function.
• _user_data – A pointer to the user data.
• _observers – The observers list. The sequence indicates the priority of the
observer; the first is the highest priority.
• _init_val – The message initialization.
ZBUS_MSG_INIT(_val, ...)
Initialize a message.
This macro initializes a message by passing the values to initialize the message struct or union.
Parameters
• _val – [in] Variadic with the initial values. ZBUS_MSG_INIT(0) means {0}, and
ZBUS_MSG_INIT(.a=10, .b=30) means {.a=10, .b=30}.
ZBUS_SUBSCRIBER_DEFINE(_name, _queue_size)
Define and initialize a subscriber.
This macro defines an observer of subscriber type. It defines a message queue where the sub-
scriber will receive notifications asynchronously, and initializes the struct zbus_observer
defining the subscriber.
Parameters
• _name – [in] The subscriber’s name.
• _queue_size – [in] The notification queue’s size.
ZBUS_LISTENER_DEFINE(_name, _cb)
Define and initialize a listener.
This macro defines an observer of listener type. It establishes the callback through which the
listener will be notified synchronously, and initializes the struct zbus_observer defining the
listener.
Parameters
• _name – [in] The listener’s name.
• _cb – [in] The callback function.
Functions
int zbus_chan_pub(const struct zbus_channel *chan, const void *msg, k_timeout_t timeout)
Publish to a channel.
This routine publishes a message to a channel.
Parameters
• chan – The channel’s reference.
• msg – Reference to the message where the publish function copies the channel’s
message data from.
• timeout – Waiting period to publish the channel, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Channel published.
• -ENOMSG – The message is invalid based on the validator function or some of
the observers could not receive the notification.
• -EBUSY – The channel is busy.
• -EAGAIN – Waiting period timed out.
• -EFAULT – A parameter is incorrect, the notification could not be sent to one or
more observer, or the function context is invalid (inside an ISR). The function
only returns this value when the CONFIG_ZBUS_ASSERT_MOCK is enabled.
int zbus_chan_read(const struct zbus_channel *chan, void *msg, k_timeout_t timeout)
Read a channel.
This routine reads a message from a channel.
Parameters
• chan – [in] The channel’s reference.
• msg – [out] Reference to the message where the read function copies the chan-
nel’s message data to.
• timeout – [in] Waiting period to read the channel, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Channel read.
• -EBUSY – The channel is busy.
• -EAGAIN – Waiting period timed out.
• -EFAULT – A parameter is incorrect, or the function context is invalid
(inside an ISR). The function only returns this value when the CON-
FIG_ZBUS_ASSERT_MOCK is enabled.
int zbus_chan_claim(const struct zbus_channel *chan, k_timeout_t timeout)
Claim a channel.
This routine claims a channel. During the claiming period the channel is blocked for publish-
ing, reading, notifying or claiming again. Finishing is the only available action.
Warning: After calling this routine, the channel cannot be used by any other thread until the
zbus_chan_finish routine is performed.
Parameters
• chan – [in] The channel’s reference.
• timeout – [in] Waiting period to claim the channel, or one of the special values
K_NO_WAIT and K_FOREVER.
Return values
• 0 – Channel claimed.
• -EBUSY – The channel is busy.
• -EAGAIN – Waiting period timed out.
• -EFAULT – A parameter is incorrect, or the function context is invalid
(inside an ISR). The function only returns this value when the CON-
FIG_ZBUS_ASSERT_MOCK is enabled.
int zbus_chan_finish(const struct zbus_channel *chan)
Finish a channel claim.
This routine finishes a previously claimed channel, making it available to other threads again.
Parameters
• chan – The channel’s reference.
Return values
• 0 – Channel finished.
• -EPERM – The channel was claimed by another thread.
• -EINVAL – The channel’s mutex is not locked.
• -EFAULT – A parameter is incorrect, or the function context is invalid
(inside an ISR). The function only returns this value when the CON-
FIG_ZBUS_ASSERT_MOCK is enabled.
void *zbus_chan_msg(const struct zbus_channel *chan)
Get the channel’s message reference.
Warning: This function must only be used directly on acquired (mutex-locked) channels.
This can be done inside a listener for the receiving channel or after claiming a channel.
Parameters
• chan – The channel’s reference.
Returns
Channel’s message reference.
const void *zbus_chan_const_msg(const struct zbus_channel *chan)
Get the channel’s constant message reference.
Warning: This function must only be used directly on acquired (mutex-locked) channels.
This can be done inside a listener for the receiving channel or after claiming a channel.
Parameters
• chan – The channel’s constant reference.
Returns
A constant channel’s message reference.
struct zbus_channel
#include <zbus.h> Type used to represent a channel.
Every channel has a zbus_channel structure associated used to control the channel access and
usage.
Public Members
sys_slist_t *runtime_observers
Dynamic channel observer list. Represents the channel’s observers list; it can be empty or
contain listeners and subscribers mixed in any sequence, and it can be changed at runtime.
struct zbus_observer
#include <zbus.h> Type used to represent an observer.
Every observer has a representation structure containing the relevant information. An ob-
server is a code portion interested in some channel. The observer can be notified syn-
chronously or asynchronously, and it is then called a listener or a subscriber, respectively. The
observer can be enabled or disabled at runtime by changing the enabled boolean field of the
structure. Listeners have a callback function that is executed by the bus with the index
of the changed channel as argument when the notification is sent. Subscribers have a
message queue where the bus enqueues the index of the changed channel when a notification
is sent.
See also:
zbus_obs_set_enable function to properly change the observer’s enabled field.
Public Members
bool enabled
Enabled flag. Indicates if observer is receiving notification.
4.26 Miscellaneous
CRC
group crc
Enums
enum crc_type
CRC algorithm enumeration.
These values should be used with the CRC dispatch function.
Values:
enumerator CRC7_BE
Use crc7_be
enumerator CRC8
Use crc8
enumerator CRC8_CCITT
Use crc8_ccitt
enumerator CRC16
Use crc16
enumerator CRC16_ANSI
Use crc16_ansi
enumerator CRC16_CCITT
Use crc16_ccitt
enumerator CRC16_ITU_T
Use crc16_itu_t
enumerator CRC32_C
Use crc32_c
enumerator CRC32_IEEE
Use crc32_ieee
Functions
uint16_t crc16(uint16_t poly, uint16_t seed, const uint8_t *src, size_t len)
Generic function for computing a CRC-16 without input or output reflection.
Compute CRC-16 by passing in the address of the input, the input length and polynomial used
in addition to the initial value. This is O(n*8) where n is the length of the buffer provided.
No reflection is performed.
Note: If you are planning to use a CRC based on poly 0x1021, the function crc16_itu_t() is
faster and thus recommended over this one.
Parameters
• poly – The polynomial to use omitting the leading x^16 coefficient
• seed – Initial value for the CRC computation
• src – Input bytes for the computation
• len – Length of the input in bytes
Returns
The computed CRC16 value (without any XOR applied to it)
uint16_t crc16_reflect(uint16_t poly, uint16_t seed, const uint8_t *src, size_t len)
Generic function for computing a CRC-16 with input and output reflection.
Compute CRC-16 by passing in the address of the input, the input length and polynomial used
in addition to the initial value. This is O(n*8) where n is the length of the buffer provided.
Both input and output are reflected.
The following checksums can, among others, be calculated by this function, depending on the
value provided for the initial seed and the value the final calculated CRC is XORed with.
Note: If you are planning to use a CRC based on poly 0x1021, the function crc16_ccitt() is
faster and thus recommended over this one.
Parameters
• poly – The polynomial to use omitting the leading x^16 coefficient. Impor-
tant: please reflect the poly. For example, use 0xA001 instead of 0x8005 for
CRC-16-MODBUS.
• seed – Initial value for the CRC computation
• src – Input bytes for the computation
• len – Length of the input in bytes
Returns
The computed CRC16 value (without any XOR applied to it)
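The reflected CRC-16 described above can be modeled in a few lines of plain C (a sketch, not Zephyr's implementation). LSB-first processing realizes both input and output reflection, which is why the polynomial must be passed in reflected form:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise reflected CRC-16 model. The polynomial is given in reflected
 * form, e.g. 0xA001 for the CRC-16-MODBUS polynomial 0x8005. No final
 * XOR is applied, matching the documentation above.
 */
uint16_t crc16_reflect_model(uint16_t poly, uint16_t seed,
			     const uint8_t *src, size_t len)
{
	uint16_t crc = seed;

	for (size_t i = 0; i < len; i++) {
		crc ^= src[i];
		for (int j = 0; j < 8; j++) {
			/* shift out the LSB; XOR in the poly if it was set */
			crc = (crc & 1U) ? (uint16_t)((crc >> 1) ^ poly)
					 : (uint16_t)(crc >> 1);
		}
	}
	return crc; /* no final XOR applied */
}
```

With poly 0xA001 and seed 0xFFFF, the standard test input "123456789" yields 0x4B37, the published CRC-16/MODBUS check value.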
uint8_t crc8(const uint8_t *src, size_t len, uint8_t polynomial, uint8_t initial_value, bool reversed)
Generic function for computing CRC 8.
Compute CRC 8 by passing in the address of the input, the input length and polynomial used
in addition to the initial value.
Parameters
• src – Input bytes for the computation
• len – Length of the input in bytes
• polynomial – The polynomial to use omitting the leading x^8 coefficient
• initial_value – Initial value for the CRC computation
• reversed – Should we use reflected/reversed values or not
Returns
The computed CRC8 value
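The non-reversed (MSB-first) CRC-8 computation can likewise be sketched in plain C (a model, not Zephyr's table-driven implementation; the reversed path is omitted):

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise MSB-first CRC-8 model: XOR each input byte into the register,
 * then shift left eight times, XORing in the polynomial whenever the
 * top bit was set.
 */
uint8_t crc8_model(const uint8_t *src, size_t len,
		   uint8_t polynomial, uint8_t initial_value)
{
	uint8_t crc = initial_value;

	for (size_t i = 0; i < len; i++) {
		crc ^= src[i];
		for (int j = 0; j < 8; j++) {
			crc = (crc & 0x80U) ? (uint8_t)((crc << 1) ^ polynomial)
					    : (uint8_t)(crc << 1);
		}
	}
	return crc;
}
```

With polynomial 0x07 and initial value 0x00, the standard test input "123456789" yields 0xF4, the published check value for plain CRC-8.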
uint16_t crc16_ccitt(uint16_t seed, const uint8_t *src, size_t len)
Compute the CRC-16/CCITT checksum of a buffer.
Note: To calculate the CRC across non-contiguous blocks use the return value from block
N-1 as the seed for block N.
Parameters
• seed – Value to seed the CRC with
• src – Input bytes for the computation
• len – Length of the input in bytes
Returns
The computed CRC16 value (without any XOR applied to it)
uint16_t crc16_itu_t(uint16_t seed, const uint8_t *src, size_t len)
Compute the CRC-16/XMODEM checksum of a buffer.
Note: To calculate the CRC across non-contiguous blocks use the return value from block
N-1 as the seed for block N.
Parameters
• seed – Value to seed the CRC with
• src – Input bytes for the computation
• len – Length of the input in bytes
Returns
The computed CRC16 value (without any XOR applied to it)
JSON
group json
Defines
JSON_OBJ_DESCR_PRIM(struct_, field_name_, type_)
Helper macro to declare a descriptor for supported primitive values.
Example:
struct foo {
int32_t some_int;
};
Parameters
• struct_ – Struct packing the values
• field_name_ – Field name in the struct
• type_ – Token type for JSON value corresponding to a primitive type. Must
be one of: JSON_TOK_STRING for strings, JSON_TOK_NUMBER for numbers,
JSON_TOK_TRUE (or JSON_TOK_FALSE) for booleans.
struct nested {
int32_t foo;
struct {
int32_t baz;
} bar;
};
Parameters
• struct_ – Struct packing the values
• field_name_ – Field name in the struct
• sub_descr_ – Array of json_obj_descr describing the subobject
struct example {
int32_t foo[10];
size_t foo_len;
};
Parameters
• struct_ – Struct packing the values
• field_name_ – Field name in the struct
• max_len_ – Maximum number of elements in array
• len_field_ – Field name in the struct for the number of elements in the array
• elem_type_ – Element type, must be a primitive type
struct person_height {
const char *name;
int32_t height;
};
struct people_heights {
struct person_height heights[10];
size_t heights_len;
};
Parameters
• struct_ – Struct packing the values
• field_name_ – Field name in the struct containing the array
struct person_height {
const char *name;
int32_t height;
};
struct person_heights_array {
    struct person_height heights;
};
struct people_heights {
    struct person_heights_array heights[10];
    size_t heights_len;
};
Parameters
• struct_ – Struct packing the values
• field_name_ – Field name in the struct containing the array
• max_len_ – Maximum number of elements in the array
• len_field_ – Field name in the struct for the number of elements in the array
• elem_descr_ – Element descriptor, pointer to a descriptor array
• elem_descr_len_ – Number of elements in elem_descr_
See also:
JSON_OBJ_DESCR_PRIM
Parameters
• struct_ – Struct packing the values.
• json_field_name_ – String, field name in JSON strings
• struct_field_name_ – Field name in the struct
• type_ – Token type for JSON value corresponding to a primitive type.
See also:
JSON_OBJ_DESCR_OBJECT
Parameters
• struct_ – Struct packing the values
• json_field_name_ – String, field name in JSON strings
• struct_field_name_ – Field name in the struct
• sub_descr_ – Array of json_obj_descr describing the subobject
See also:
JSON_OBJ_DESCR_ARRAY
Parameters
• struct_ – Struct packing the values
• json_field_name_ – String, field name in JSON strings
• struct_field_name_ – Field name in the struct
• max_len_ – Maximum number of elements in array
• len_field_ – Field name in the struct for the number of elements in the array
• elem_type_ – Element type, must be a primitive type
struct person_height {
const char *name;
int32_t height;
};
struct people_heights {
struct person_height heights[10];
size_t heights_len;
};
Parameters
• struct_ – Struct packing the values
• json_field_name_ – String, field name of the array in JSON strings
• struct_field_name_ – Field name in the struct containing the array
• max_len_ – Maximum number of elements in the array
• len_field_ – Field name in the struct for the number of elements in the array
• elem_descr_ – Element descriptor, pointer to a descriptor array
• elem_descr_len_ – Number of elements in elem_descr_
Typedefs
Param data
User-provided pointer
Return
This callback function should return a negative number on error (which will be
propagated to the return value of json_obj_encode()), or 0 on success.
Enums
enum json_tokens
Values:
Functions
int64_t json_obj_parse(char *json, size_t len, const struct json_obj_descr *descr, size_t
descr_len, void *val)
Parses the JSON-encoded object pointed to by json, with size len, according to the descriptor
pointed to by descr. Values are stored in a struct pointed to by val. Set up the descriptor like
this:
struct s { int32_t foo; char *bar; };

struct json_obj_descr descr[] = {
    JSON_OBJ_DESCR_PRIM(struct s, foo, JSON_TOK_NUMBER),
    JSON_OBJ_DESCR_PRIM(struct s, bar, JSON_TOK_STRING),
};
Since this parser is designed for machine-to-machine communications, some liberties were
taken to simplify the design: (1) strings are not unescaped (but only valid escape sequences
are accepted); (2) no UTF-8 validation is performed; and (3) only integer numbers are sup-
ported (no strtod() in the minimal libc).
Parameters
• json – Pointer to JSON-encoded value to be parsed
• len – Length of JSON-encoded value
• descr – Pointer to the descriptor array
• descr_len – Number of elements in the descriptor array. Must be less than
63 due to implementation detail reasons (if more fields are necessary, use two
descriptors)
• val – Pointer to the struct to hold the decoded values
Returns
< 0 if error, bitmap of decoded fields on success (bit 0 is set if first field in the
descriptor has been properly decoded, etc).
int json_arr_parse(char *json, size_t len, const struct json_obj_descr *descr, void *val)
Parses the JSON-encoded array pointed to by json, with size len, according to the descriptor
pointed to by descr. Values are stored in a struct pointed to by val. Set up the descriptor like
this:
struct s { int32_t foo; char *bar; };

struct json_obj_descr descr[] = {
    JSON_OBJ_DESCR_PRIM(struct s, foo, JSON_TOK_NUMBER),
    JSON_OBJ_DESCR_PRIM(struct s, bar, JSON_TOK_STRING),
};

struct a { struct s baz[10]; size_t count; };

struct json_obj_descr array[] = {
    JSON_OBJ_DESCR_OBJ_ARRAY(struct a, baz, 10, count, descr, ARRAY_SIZE(descr)),
};
Since this parser is designed for machine-to-machine communications, some liberties were
taken to simplify the design: (1) strings are not unescaped (but only valid escape sequences
are accepted); (2) no UTF-8 validation is performed; and (3) only integer numbers are sup-
ported (no strtod() in the minimal libc).
Parameters
• json – Pointer to JSON-encoded array to be parsed
• len – Length of JSON-encoded array
• descr – Pointer to the descriptor array
• val – Pointer to the struct to hold the decoded values
Returns
0 if array has been successfully parsed. A negative value indicates an error (as
defined on errno.h).
int json_arr_separate_object_parse_init(struct json_obj *json, char *payload, size_t len)
Initialize single-object array parsing.
JSON-encoded array data is going to be parsed one object at a time. Data is provided by
payload with the size of len bytes.
The function validates that the start of a JSON array is detected and initializes the json object
so that the array's objects can be parsed one at a time.
Parameters
• json – Provide storage for parser states. To be used when parsing the array.
• payload – Pointer to JSON-encoded array to be parsed
• len – Length of JSON-encoded array
Returns
0 if array start is detected and initialization is successful or negative error code in
case of failure.
int json_arr_separate_parse_object(struct json_obj *json, const struct json_obj_descr *descr,
size_t descr_len, void *val)
Parse a single object from array.
Parses the JSON-encoded object pointed to by json object array, with size len, according to the
descriptor pointed to by descr.
Parameters
• json – Pointer to JSON-object message state
• descr – Pointer to the descriptor array
• descr_len – Number of elements in the descriptor array. Must be less than 31.
• val – Pointer to the struct to hold the decoded values
Returns
< 0 if error, 0 for end of message, bitmap of decoded fields on success (bit 0 is
set if first field in the descriptor has been properly decoded, etc).
ssize_t json_escape(char *str, size_t *len, size_t buf_size)
Escapes the string so it can be used to encode JSON objects.
Parameters
• str – The string to escape; the escaped string is stored in the buffer pointed to
by this parameter
• len – Points to a size_t containing the size before and after the escaping process
• buf_size – The size of buffer str points to
Returns
0 if string has been escaped properly, or -ENOMEM if there was not enough space
to escape the buffer
size_t json_calc_escaped_len(const char *str, size_t len)
Calculates the JSON-escaped string length.
Parameters
• str – The string to analyze
• len – String size
Returns
The length str would have if it were escaped
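The length rule can be sketched as follows. This is a model of the calculation, not Zephyr's json.c; it assumes the escape set JSON requires (the double quote, the backslash, and the control characters \b \f \n \r \t), each of which grows from one byte to two (a backslash plus the escape character):

```c
#include <stddef.h>

/* Model of the escaped-length calculation: each character needing a
 * JSON escape costs two output bytes, every other character costs one. */
static size_t escaped_len_model(const char *str, size_t len)
{
    size_t out = 0;

    for (size_t i = 0; i < len; i++) {
        switch (str[i]) {
        case '"':
        case '\\':
        case '\b':
        case '\f':
        case '\n':
        case '\r':
        case '\t':
            out += 2; /* becomes a backslash escape, e.g. \" or \n */
            break;
        default:
            out += 1;
        }
    }
    return out;
}
```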
ssize_t json_calc_encoded_len(const struct json_obj_descr *descr, size_t descr_len, const void
*val)
Calculates the string length to fully encode an object.
Parameters
• descr – Pointer to the descriptor array
• descr_len – Number of elements in the descriptor array
• val – Struct holding the values
Returns
The number of bytes necessary to encode the values if positive; otherwise, a negative error code.
ssize_t json_calc_encoded_arr_len(const struct json_obj_descr *descr, const void *val)
Calculates the string length to fully encode an array.
Parameters
• descr – Pointer to the descriptor array
• val – Struct holding the values
Returns
The number of bytes necessary to encode the values if positive; otherwise, a negative error code.
int json_obj_encode_buf(const struct json_obj_descr *descr, size_t descr_len, const void *val,
char *buffer, size_t buf_size)
Encodes an object in a contiguous memory location.
Parameters
• descr – Pointer to the descriptor array
• descr_len – Number of elements in the descriptor array
• val – Struct holding the values
• buffer – Buffer to store the JSON data
• buf_size – Size of buffer, in bytes, with space for the terminating NUL charac-
ter
Returns
0 if object has been successfully encoded. A negative value indicates an error (as
defined on errno.h).
int json_arr_encode_buf(const struct json_obj_descr *descr, const void *val, char *buffer, size_t
buf_size)
Encodes an array in a contiguous memory location.
Parameters
• descr – Pointer to the descriptor array
• val – Struct holding the values
• buffer – Buffer to store the JSON data
• buf_size – Size of buffer, in bytes, with space for the terminating NUL charac-
ter
Returns
0 if object has been successfully encoded. A negative value indicates an error (as
defined on errno.h).
int json_obj_encode(const struct json_obj_descr *descr, size_t descr_len, const void *val,
json_append_bytes_t append_bytes, void *data)
Encodes an object using an arbitrary writer function.
Parameters
• descr – Pointer to the descriptor array
• descr_len – Number of elements in the descriptor array
• val – Struct holding the values
• append_bytes – Function to append bytes to the output
• data – Data pointer to be passed to the append_bytes callback function.
Returns
0 if object has been successfully encoded. A negative value indicates an error.
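A typical append_bytes writer accumulates the encoder's output into a caller-owned buffer. The sketch below assumes the callback signature matches Zephyr's json_append_bytes_t, i.e. int (*)(const char *bytes, size_t len, void *data); the out_buf type is illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative destination for encoded output. */
struct out_buf {
    char *buf;
    size_t cap;
    size_t used;
};

/* Writer callback: copies the encoder's bytes into out_buf. A negative
 * return value is propagated as the return value of json_obj_encode(). */
static int append_to_buf(const char *bytes, size_t len, void *data)
{
    struct out_buf *out = data;

    if (out->used + len > out->cap) {
        return -1; /* out of space; aborts encoding */
    }
    memcpy(out->buf + out->used, bytes, len);
    out->used += len;
    return 0; /* success */
}
```

The data pointer passed to json_obj_encode() would be the address of the out_buf, which the encoder hands back to the callback on every write.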
int json_arr_encode(const struct json_obj_descr *descr, const void *val, json_append_bytes_t
append_bytes, void *data)
Encodes an array using an arbitrary writer function.
Parameters
• descr – Pointer to the descriptor array
• val – Struct holding the values
• append_bytes – Function to append bytes to the output
• data – Data pointer to be passed to the append_bytes callback function.
Returns
0 if object has been successfully encoded. A negative value indicates an error.
struct json_token
#include <json.h>
struct json_lexer
#include <json.h>
struct json_obj
#include <json.h>
struct json_obj_token
#include <json.h>
struct json_obj_descr
#include <json.h>
JWT
JSON Web Tokens (JWT) are an open, industry standard method (RFC 7519, https://tools.ietf.org/html/rfc7519)
for representing claims securely between two parties. Although JWT is fairly flexible,
this API is limited to creating the simplistic tokens needed to authenticate with the Google Core IoT
infrastructure.
group jwt
JSON Web Token (JWT)
Functions
struct jwt_builder
#include <jwt.h> JWT data tracking.
JSON Web Tokens contain several sections, each encoded in base-64. This structure tracks
the token as it is being built, including limits on the amount of available space. It should be
initialized with jwt_init().
Public Members
char *base
The base of the buffer we are writing to.
char *buf
The place in this buffer where we are currently writing.
size_t len
The length remaining to write.
bool overflowed
Flag that is set if we try to write past the end of the buffer. If set, the token is not valid.
CMake is used to build your application together with the Zephyr kernel. A CMake build is done in two
stages. The first stage is called configuration. During configuration, the CMakeLists.txt build scripts
are executed. After configuration is finished, CMake has an internal model of the Zephyr build, and can
generate build scripts that are native to the host platform.
CMake supports generating scripts for several build systems, but only Ninja and Make are tested and
supported by Zephyr. After configuration, you begin the build stage by executing the generated build
scripts. These build scripts can recompile the application without involving CMake following most code
changes. However, after certain changes, the configuration step must be executed again before building.
The build scripts can detect some of these situations and reconfigure automatically, but there are cases
when this must be done manually.
Zephyr uses CMake’s concept of a ‘target’ to organize the build. A target can be an executable, a library,
or a generated file. For application developers, the library target is the most important to understand. All
source code that goes into a Zephyr build does so by being included in a library target, even application
code.
Library targets have source code that is added through CMakeLists.txt build scripts like this:
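A minimal sketch of such a build script, assuming the conventional Zephyr application skeleton (the project name is illustrative):

```cmake
cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(my_app)

# Add the application's source file to the existing 'app' library target.
target_sources(app PRIVATE src/main.c)
```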
In the above CMakeLists.txt, an existing library target named app is configured to include the source
file src/main.c. The PRIVATE keyword indicates that we are modifying the internals of how the library is
being built. Using the keyword PUBLIC would modify how other libraries that link with app are built. In
this case, using PUBLIC would cause libraries that link with app to also include the source file src/main.
c, behavior that we surely do not want. The PUBLIC keyword could however be useful when modifying
the include paths of a target library.
The Zephyr build process can be divided into two main phases: a configuration phase (driven by CMake)
and a build phase (driven by Make or Ninja).
Configuration Phase
The configuration phase begins when the user invokes CMake to generate a build system, specifying a
source application directory and a board target.
Figure: Configuration overview — devicetree sources (*.dts/*.dtsi) pass through the C preprocessor
to produce the preprocessed devicetree and devicetree_generated.h, while prj.conf and the Kconfig
files are processed by the scripts in scripts/kconfig to produce the Kconfig outputs.
CMake begins by processing the CMakeLists.txt file in the application directory, which refers to the
CMakeLists.txt file in the Zephyr top-level directory, which in turn refers to CMakeLists.txt files
throughout the build tree (directly and indirectly). Its primary output is a set of Makefiles or Ninja files
to drive the build process, but the CMake scripts also do some processing of their own, which is explained
here.
Note that paths beginning with build/ below refer to the build directory you create when running
CMake.
Devicetree
*.dts (devicetree source) and *.dtsi (devicetree source include) files are collected from the target’s
architecture, SoC, board, and application directories.
*.dtsi files are included by *.dts files via the C preprocessor (often abbreviated cpp, which should
not be confused with C++). The C preprocessor is also used to merge in any devicetree *.overlay
files, and to expand macros in *.dts, *.dtsi, and *.overlay files. The preprocessor output is
placed in build/zephyr/zephyr.dts.pre.
The preprocessed devicetree sources are parsed by gen_defines.py to generate a build/zephyr/
include/generated/devicetree_generated.h header with preprocessor macros.
Source code should access preprocessor macros generated from devicetree by including the
devicetree.h header, which includes devicetree_generated.h.
gen_defines.py also writes the final devicetree to build/zephyr/zephyr.dts in the build direc-
tory. This file’s contents may be useful for debugging.
If the devicetree compiler dtc is installed, it is run on build/zephyr/zephyr.dts to catch any
extra warnings and errors generated by this tool. The output from dtc is unused otherwise, and
this step is skipped if dtc is not installed.
The above is just a brief overview. For more information on devicetree, see Devicetree Guide.
Kconfig
Kconfig files define available configuration options for the target architecture, SoC, board, and
application, as well as dependencies between options.
Kconfig configurations are stored in configuration files. The initial configuration is generated by
merging configuration fragments from the board and application (e.g. prj.conf).
The output from Kconfig is an autoconf.h header with preprocessor assignments, and a .config
file that acts both as a saved configuration and as configuration output (used by CMake). The
definitions in autoconf.h are automatically exposed at compile time, so there is no need to include
this header.
Information from devicetree is available to Kconfig, through the functions defined in
kconfigfunctions.py.
See the Kconfig section of the manual for more information.
Build Phase
The build phase begins when the user invokes make or ninja. Its ultimate output is a complete Zephyr
application in a format suitable for loading/flashing on the desired target board (zephyr.elf,
zephyr.hex, etc.). The build phase can be broken down, conceptually, into four stages: the pre-build,
first-pass binary, final binary, and post-processing.
Pre-build Pre-build occurs before any source files are compiled, because during this phase header files
used by the source files are generated.
Offset generation
Access to high-level data structures and members is sometimes required when the definitions of
those structures are not immediately accessible (e.g., from assembly language). The generation of
offsets.h (by gen_offset_header.py) facilitates this.
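Conceptually, the generated header is a list of member offsets and sizes computed from the C definitions. A minimal sketch with illustrative names (these are not the real Zephyr symbols):

```c
#include <stddef.h>

/* Hypothetical kernel structure whose layout assembly code must know. */
struct thread_sketch {
    void *stack_ptr;
    int prio;
};

/* gen_offset_header.py emits macros of this shape so that assembly
 * sources can address members without parsing the C definition. */
#define THREAD_SKETCH_STACK_PTR_OFFSET offsetof(struct thread_sketch, stack_ptr)
#define THREAD_SKETCH_PRIO_OFFSET      offsetof(struct thread_sketch, prio)
#define THREAD_SKETCH_SIZEOF           sizeof(struct thread_sketch)
```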
System call boilerplate
The gen_syscall.py and parse_syscalls.py scripts work together to bind potential system call functions
with their implementations.
Intermediate binaries Compilation proper begins with the first intermediate binary. Source files (C
and assembly) are collected from various subsystems (which ones is decided during the configuration
phase), and compiled into archives (with reference to header files in the tree, as well as those generated
during the configuration phase and the pre-build stage(s)).
Figure: Build Stage II — Generation and Compilation. Sources such as arch/x86/*.c, kernel/*.c, and
other subsystem sources are compiled into archives; scripts/build/gen_app_partitions.py produces
the app_smem_unaligned linker input.
The exact number of intermediate binaries is decided during the configuration phase.
If memory protection is enabled, then:
Partition grouping
The gen_app_partitions.py script scans all the generated archives and outputs linker scripts to en-
sure that application partitions are properly grouped and aligned for the target’s memory protection
hardware.
Then cpp is used to combine linker script fragments from the target’s architecture/SoC, the kernel tree,
optionally the partition output if memory protection is enabled, and any other fragments selected during
the configuration process, into a linker.cmd file. The compiled archives are then linked with ld as specified
in the linker.cmd.
Unfixed size binary
The unfixed-size intermediate binary is produced when User Mode is enabled or devicetree is in use.
Its section sizes are not yet fixed, so it can be consumed by post-processing steps that will
affect the size of the final binary.
Figure: Build Stage III — Intermediate binary.
Intermediate binaries post-processing The binaries from the previous stage are incomplete, with
empty and/or placeholder sections that must be filled in by, essentially, reflection.
To complete the build procedure the following scripts are executed on the intermediate binaries to pro-
duce the missing pieces needed for the final binary.
When User Mode is enabled:
Partition alignment
The gen_app_partitions.py script scans the unfixed size binary and generates an app shared memory
aligned linker script snippet where the partitions are sorted in descending order.
Device dependencies
The gen_handles.py script scans the unfixed size binary to determine relationships between devices
that were recorded from devicetree data, and replaces the encoded relationships with values that
are optimized to locate the devices actually present in the application.
Device handles
Interrupt tables
When no intermediate binary post-processing is required, the first intermediate binary is used
directly as the final binary.
Final binary The binary from the previous stage is incomplete, with empty and/or placeholder sections
that must be filled in by, essentially, reflection.
The link from the previous stage is repeated, this time with the missing pieces populated.
Post processing Finally, if necessary, the completed kernel is converted from ELF to the format expected
by the loader and/or flash tool required by the target. This is accomplished in a straightforward manner
with objcopy.
The following is a detailed description of the scripts used during the build process.
scripts/build/gen_syscalls.py
scripts/build/gen_handles.py
For example the sensor might have a first-pass handle defined by its devicetree ordinal 52, with the I2C
driver having ordinal 24 and the GPIO controller ordinal 14. The runtime ordinal is the index of the
corresponding device in the static devicetree array, which might be 6, 5, and 3, respectively.
The output is a C source file that provides alternative definitions for the array contents referenced from
the immutable device objects. In the final link these definitions supersede the ones in the driver-specific
object file.
scripts/build/gen_kobject_list.py
scripts/build/gen_offset_header.py
This script scans a specified object file and generates a header file that defines macros for the offsets
of structure members it finds (particularly symbols ending with _OFFSET or _SIZEOF), primarily
intended for use in assembly code.
scripts/build/parse_syscalls.py
Script to scan Zephyr include directories and emit system call and subsystem metadata
System calls require a great deal of boilerplate code in order to implement completely. This script is the
first step in the build system’s process of auto-generating this code by doing a text scan of directories
containing C or header files, and building up a database of system calls and their function call prototypes.
This information is emitted to a generated JSON file for further processing.
This script also scans for struct definitions such as __subsystem and __net_socket, emitting a JSON
dictionary mapping tags to all the struct declarations found that were tagged with them.
If the output JSON file already exists, its contents are checked against what information this script would
have outputted; if the result is that the file would be unchanged, it is not modified to prevent unnecessary
incremental builds.
arch/x86/gen_idt.py
arch/x86/gen_gdt.py
scripts/build/gen_relocate_app.py
This script relocates the .text, .rodata, .data, and .bss sections of the required files and places them
in the required memory region. The memory region and file are given to this script in the form of a
string.
Example of such a string would be:
SRAM2:COPY:/home/xyz/zephyr/samples/hello_world/src/main.c,\
SRAM1:COPY:/home/xyz/zephyr/samples/hello_world/src/main2.c, \
FLASH2:NOCOPY:/home/xyz/zephyr/samples/hello_world/src/main3.c
One can also specify the program header for a given memory region:
SRAM2:phdr0:COPY:/home/xyz/zephyr/samples/hello_world/src/main.c
To invoke this script:
scripts/build/process_gperf.py
scripts/build/gen_app_partitions.py
The output is a linker script fragment containing the definition of the app shared memory section, which
is further divided, for each partition found, into data and BSS for each partition.
scripts/build/check_init_priorities.py
5.2 Devicetree
A devicetree is a hierarchical data structure primarily used to describe hardware. Zephyr uses devicetree
in two main ways:
• to describe hardware to the Device Driver Model
• to provide that hardware’s initial configuration
This page links to a high level guide on devicetree as well as reference material.
The pages in this section are a high-level guide to using devicetree for Zephyr development.
Introduction to devicetree
Tip: This is a conceptual overview of devicetree and how Zephyr uses it. For step-by-step guides and
examples, see Devicetree HOWTOs.
The following pages introduce general devicetree concepts and how they apply to Zephyr.
Scope and purpose A devicetree is primarily a hierarchical data structure that describes hardware. The
Devicetree specification defines its source and binary representations.
Zephyr uses devicetree to describe:
• the hardware available on its boards
• that hardware’s initial configuration
As such, devicetree is both a hardware description language and a configuration language for Zephyr. See
Devicetree versus Kconfig for some comparisons between devicetree and Zephyr’s other main configuration
language, Kconfig.
There are two types of devicetree input files: devicetree sources and devicetree bindings. The sources
contain the devicetree itself. The bindings describe its contents, including data types. The build system
uses devicetree sources and bindings to produce a generated C header. The generated header’s contents
are abstracted by the devicetree.h API, which you can use to get information from your devicetree.
Here is a simplified view of the process:
All Zephyr and application source code files can include and use devicetree.h. This includes device
drivers, applications, tests, the kernel, etc.
The API itself is based on C macros. The macro names all start with DT_. In general, if you see a macro
that starts with DT_ in a Zephyr source file, it’s probably a devicetree.h macro. The generated C header
contains macros that start with DT_ as well; you might see those in compiler error messages. You can
always tell a generated macro from a non-generated one: generated macros have some lowercase letters,
while the devicetree.h macro names are all uppercase.
Syntax and structure As the name indicates, a devicetree is a tree. The human-readable text format
for this tree is called DTS (for devicetree source), and is defined in the Devicetree specification.
This page’s purpose is to introduce devicetree in a more gradual way than the specification. However,
you may still need to refer to the specification to understand some detailed cases.
Contents
• Example
• Nodes
• Properties
• Devicetrees reflect hardware
• Properties in practice
• Unit addresses
• Important properties
• Writing property values
• Aliases and chosen nodes
/dts-v1/;
/ {
a-node {
subnode_nodelabel: a-sub-node {
foo = <3>;
};
};
};
The /dts-v1/; line means the file’s contents are in version 1 of the DTS syntax, which has replaced a
now-obsolete “version 0”.
Nodes Like any tree data structure, a devicetree has a hierarchy of nodes. The above tree has three
nodes:
1. A root node: /
2. A node named a-node, which is a child of the root node
3. A node named a-sub-node, which is a child of a-node
Nodes can be assigned node labels, which are unique shorthands that refer to the labeled node. Above,
a-sub-node has the node label subnode_nodelabel. A node can have zero, one, or multiple node labels.
You can use node labels to refer to the node elsewhere in the devicetree.
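For instance, a later devicetree fragment or overlay can extend the labeled node through its label. A sketch reusing the label above (the property value is illustrative):

```dts
&subnode_nodelabel {
        foo = <4>;
};
```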
Devicetree nodes have paths identifying their locations in the tree. Like Unix file system paths, devicetree
paths are strings separated by slashes (/), and the root node’s path is a single slash: /. Otherwise, each
node’s path is formed by concatenating the node’s ancestors’ names with the node’s own name, separated
by slashes. For example, the full path to a-sub-node is /a-node/a-sub-node.
Properties Devicetree nodes can also have properties. Properties are name/value pairs. Property values
can be any sequence of bytes. In some cases, the values are an array of what are called cells. A cell is just
a 32-bit unsigned integer.
Node a-sub-node has a property named foo, whose value is a cell with value 3. The size and type of
foo‘s value are implied by the enclosing angle brackets (< and >) in the DTS.
See Writing property values below for more example property values.
Devicetrees reflect hardware In practice, devicetree nodes usually correspond to some hardware, and
the node hierarchy reflects the hardware’s physical layout. For example, let’s consider a board with three
I2C peripherals connected to an I2C bus controller on an SoC, like this:
Nodes corresponding to the I2C bus controller and each I2C peripheral would be present in the device-
tree. Reflecting the hardware layout, the I2C peripheral nodes would be children of the bus controller
node. Similar conventions exist for representing other types of hardware.
The DTS would look something like this:
/dts-v1/;
/ {
soc {
i2c-bus-controller {
i2c-peripheral-1 {
};
i2c-peripheral-2 {
};
i2c-peripheral-3 {
};
};
};
};
Properties in practice In practice, properties usually describe or configure the hardware the node
represents. For example, an I2C peripheral’s node has a property whose value is the peripheral’s address
on the bus.
Here’s a tree representing the same example, but with real-world node names and properties you might
see when working with I2C devices.
Fig. 2: I2C devicetree example with real-world names and properties. Node names are at the top of each
node with a gray background. Properties are shown as “name=value” lines.
/dts-v1/;

/ {
        soc {
                i2c@40003000 {
                        apds9960@39 {
                                compatible = "avago,apds9960";
                                reg = <0x39>;
                        };

                        ti_hdc@43 {
                                compatible = "ti,hdc", "ti,hdc1010";
                                reg = <0x43>;
                        };

                        mma8652fc@1d {
                                compatible = "nxp,fxos8700", "nxp,mma8652fc";
                                reg = <0x1d>;
                        };
                };
        };
};
Unit addresses In addition to showing more real-world names and properties, the above example
introduces a new devicetree concept: unit addresses. Unit addresses are the parts of node names after
an “at” sign (@), like 40003000 in i2c@40003000, or 39 in apds9960@39. Unit addresses are optional: the
soc node does not have one.
In devicetree, unit addresses give a node’s address in the address space of its parent node. Here are some
example unit addresses for different types of hardware.
Memory-mapped peripherals
The peripheral’s register map base address. For example, the node named i2c@40003000 represents
an I2C controller whose register map base address is 0x40003000.
I2C peripherals
The peripheral’s address on the I2C bus. For example, the child node apds9960@39 of the I2C
controller in the previous section has I2C address 0x39.
SPI peripherals
An index representing the peripheral’s chip select line number. (If there is no chip select line, 0 is
used.)
Memory
The physical start address. For example, a node named memory@2000000 represents RAM starting
at physical address 0x2000000.
Memory-mapped flash
Like RAM, the physical start address. For example, a node named flash@8000000 represents a
flash device whose physical start address is 0x8000000.
Fixed flash partitions
This applies when the devicetree is used to store a flash partition table. The unit address is the
partition’s start offset within the flash memory. For example, take this flash device and its partitions:
flash@8000000 {
        /* ... */
        partitions {
                partition@0 { /* ... */ };
                partition@20000 { /* ... */ };
                /* ... */
        };
};
The node named partition@0 has offset 0 from the start of its flash device, so its base address is
0x8000000. Similarly, the base address of the node named partition@20000 is 0x8020000.
Important properties The devicetree specification defines several standard properties. Some of the
most important ones are:
compatible
The name of the hardware device the node represents.
The recommended format is "vendor,device", like "avago,apds9960", or a sequence of these,
like "ti,hdc", "ti,hdc1010". The vendor part is an abbreviated name of the vendor. The file
dts/bindings/vendor-prefixes.txt contains a list of commonly accepted vendor names. The device
part is usually taken from the datasheet.
It is also sometimes a value like gpio-keys, mmio-sram, or fixed-clock when the hardware’s
behavior is generic.
The build system uses the compatible property to find the right bindings for the node. Device drivers
use devicetree.h to find nodes with relevant compatibles, in order to determine the available
hardware to manage.
The compatible property can have multiple values. Additional values are useful when the device
is a specific instance of a more general family, to allow the system to match from most- to
least-specific device drivers.
Within Zephyr’s bindings syntax, this property has type string-array.
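For instance, in this sketch (compatibles invented for illustration), a device that is a specific variant of a generic family lists both, most specific first:

```dts
/* Hypothetical compatibles. If no binding matches "vnd,chip-v2",
 * the build system falls back to matching "vnd,chip". */
sensor@0 {
        compatible = "vnd,chip-v2", "vnd,chip";
};
```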
reg
Information used to address the device. The value is specific to the device (i.e. it differs
depending on the compatible property).
The reg property is a sequence of (address, length) pairs. Each pair is called a “register block”.
Values are conventionally written in hex.
Here are some common patterns:
• Devices accessed via memory-mapped I/O registers (like i2c@40003000): address is usually
the base address of the I/O register space, and length is the number of bytes occupied by the
registers.
• I2C devices (like apds9960@39 and its siblings): address is a slave address on the I2C bus.
There is no length value.
• SPI devices: address is a chip select line number; there is no length.
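Putting these patterns together, here is a sketch (the controller's register block size is invented for illustration) showing reg for a memory-mapped controller and an I2C device on its bus:

```dts
i2c@40003000 {
        /* one register block: base address and size in bytes
         * (the 0x1000 size is an assumption for this sketch) */
        reg = <0x40003000 0x1000>;

        apds9960@39 {
                /* I2C slave address only; no length cell */
                reg = <0x39>;
        };
};
```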
You may notice some similarities between the reg property and common unit addresses described
above. This is not a coincidence. The reg property can be seen as a more detailed view of the
addressable resources within a device than its unit address.
status
A string which describes whether the node is enabled.
The devicetree specification allows this property to have values "okay", "disabled", "reserved",
"fail", and "fail-sss". Only the values "okay" and "disabled" are currently relevant to
Zephyr; use of other values currently results in undefined behavior.
A node is considered enabled if its status property is either "okay" or not defined (i.e. does
not exist in the devicetree source). Nodes with status "disabled" are explicitly disabled. (For
backwards compatibility, the value "ok" is treated the same as "okay", but this usage is
deprecated.) Devicetree nodes which correspond to physical devices must be enabled for the
corresponding struct device in the Zephyr driver model to be allocated and initialized.
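For example, an application overlay often flips a node's status. This sketch assumes a board devicetree that gives some UART the node label uart1:

```dts
/* Enable a peripheral that the board devicetree leaves disabled.
 * The node label 'uart1' is an assumption for illustration. */
&uart1 {
        status = "okay";
};
```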
interrupts
Information about interrupts generated by the device, encoded as an array of one or more
interrupt specifiers. Each interrupt specifier has some number of cells. See section 2.4,
Interrupts and Interrupt Mapping, in the Devicetree Specification release v0.3 for more details.
Zephyr’s devicetree bindings language lets you give a name to each cell in an interrupt specifier.
Note: Earlier versions of Zephyr made frequent use of the label property, which is distinct from the
standard node label. Use of the label property in new devicetree bindings, as well as use of the DT_LABEL
macro in new code, are actively discouraged. Label properties continue to persist for historical reasons in
some existing bindings and overlays, but should not be used in new bindings or device implementations.
Writing property values This section describes how to write property values in DTS format. The
property types in the table below are described in detail in Devicetree bindings.
Some specifics are skipped in the interest of keeping things simple; if you’re curious about details, see
the devicetree specification.
• Property values refer to other nodes in the devicetree by their phandles. You can write a phandle
using &foo, where foo is a node label. Here is an example devicetree fragment:
foo: device@0 { };
device@1 {
sibling = <&foo 1 2>;
};
The sibling property of node device@1 contains three cells, in this order:
1. The device@0 node’s phandle, which is written here as &foo since the device@0 node has a
node label foo
2. The value 1
3. The value 2
In the devicetree, a phandle value is a cell – which again is just a 32-bit unsigned int. However,
the Zephyr devicetree API generally exposes these values as node identifiers. Node identifiers are
covered in more detail in Devicetree access from C/C++.
• Array and similar type property values can be split into several <> blocks, like this:

foo = <1 2>, <3 4>;

This is recommended for readability when the value can be logically grouped into blocks
of sub-values.
Aliases and chosen nodes There are two additional ways beyond node labels to refer to a particular
node without specifying its entire path: by alias, or by chosen node.
Here is an example devicetree which uses both:
/dts-v1/;
/ {
chosen {
zephyr,console = &uart0;
};
aliases {
my-uart = &uart0;
};
soc {
uart0: serial@12340000 {
...
};
};
};
The /aliases and /chosen nodes do not refer to an actual hardware device. Their purpose is to specify
other nodes in the devicetree.
Above, my-uart is an alias for the node with path /soc/serial@12340000. Using its node label uart0,
the same node is set as the value of the zephyr,console property of the /chosen node.
Zephyr sample applications sometimes use aliases to allow overriding the particular hardware device
used by the application in a generic way. For example, blinky-sample uses this to abstract the LED to
blink via the led0 alias.
The /chosen node’s properties are used to configure system- or subsystem-wide values. See Chosen nodes
for more information.
Input and output files This section describes the input and output files shown in the figure in Scope
and purpose in more detail.
[Figure: devicetree build flow. <BOARD>.dts plus any .overlay files are preprocessed into the
intermediate zephyr.dts.pre, which the devicetree scripts combine with binding .yaml files to
produce the final merged zephyr.dts and the generated C header.]
The input files include:
• boards/<ARCH>/<BOARD>/<BOARD>.dts
• dts/common/skeleton.dtsi
• dts/<ARCH>/.../<SOC>.dtsi
• dts/bindings/.../binding.yaml
Generally speaking, every supported board has a BOARD.dts file describing its hardware. For example,
the reel_board has boards/arm/reel_board/reel_board.dts.
BOARD.dts includes one or more .dtsi files. These .dtsi files describe the CPU or system-on-chip
Zephyr runs on, perhaps by including other .dtsi files. They can also describe other common hardware
features shared by multiple boards. In addition to these includes, BOARD.dts also describes the board’s
specific hardware.
The dts/common directory contains skeleton.dtsi, a minimal include file for defining a complete de-
vicetree. Architecture-specific subdirectories (dts/<ARCH>) contain .dtsi files for CPUs or SoCs which
extend skeleton.dtsi.
The C preprocessor is run on all devicetree files to expand macro references, and includes are generally
done with #include <filename> directives, even though DTS has a /include/ "<filename>" syntax.
BOARD.dts can be extended or modified using overlays. Overlays are also DTS files; the .overlay
extension is just a convention which makes their purpose clear. Overlays adapt the base devicetree
for different purposes:
• Zephyr applications can use overlays to enable a peripheral that is disabled by default, select a
sensor on the board for an application specific purpose, etc. Along with Configuration System
(Kconfig), this makes it possible to reconfigure the kernel and device drivers without modifying
source code.
• Overlays are also used when defining Shields.
The build system automatically picks up .overlay files stored in certain locations. It is also possible
to explicitly list the overlays to include, via the DTC_OVERLAY_FILE CMake variable. See Set devicetree
overlays for details.
The build system combines BOARD.dts and any .overlay files by concatenating them, with the overlays
put last. This relies on DTS syntax which allows merging overlapping definitions of nodes in the
devicetree. See Example: FRDM-K64F and Hexiwear K64 for an example of how this works (in the
context of .dtsi files, but the principle is the same for overlays). Putting the contents of the
.overlay files last allows them to override BOARD.dts.
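As an illustration of this merge (all node names and values invented), here is a base definition and an overlay fragment placed after it:

```dts
/* In BOARD.dts: */
uart0: serial@1000 {
        status = "disabled";
        current-speed = <115200>;
};

/* In an .overlay file, concatenated after BOARD.dts, so these
 * settings override the base definitions of the same node: */
&uart0 {
        status = "okay";
        current-speed = <9600>;
};
```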
Devicetree bindings (which are YAML files) are essentially glue. They describe the contents of devicetree
sources, includes, and overlays in a way that allows the build system to generate C macros usable by
device drivers and applications. The dts/bindings directory contains bindings.
Scripts and tools The following libraries and scripts, located in scripts/dts/, create output files from
input files. Their sources have extensive documentation.
dtlib.py
A low-level DTS parsing library.
edtlib.py
A library layered on top of dtlib that uses bindings to interpret properties and give a higher-level
view of the devicetree. Uses dtlib to do the DTS parsing.
gen_defines.py
A script that uses edtlib to generate C preprocessor macros from the devicetree and bindings.
In addition to these, the standard dtc (devicetree compiler) tool is run on the final devicetree if it is
installed on your system. This is just to catch errors or warnings; its output is unused. Boards may
need to pass dtc additional flags, e.g. for warning suppression. Board directories can contain a file
named pre_dt_board.cmake which configures these extra flags.
Output files The following files are created in the application’s build directory.
Warning: Don’t include the header files directly. Devicetree access from C/C++ explains what to do
instead.
<build>/zephyr/zephyr.dts.pre
The preprocessed DTS source. This is an intermediate output file, which is input to gen_defines.
py and used to create zephyr.dts and devicetree_generated.h.
<build>/zephyr/include/generated/devicetree_generated.h
The generated macros and additional comments describing the devicetree. Included by
devicetree.h.
<build>/zephyr/zephyr.dts
The final merged devicetree. This file is output by gen_defines.py. It is useful for debugging any
issues. If the devicetree compiler dtc is installed, it is also run on this file, to catch any additional
warnings or errors.
Design goals
Zephyr’s use of devicetree has evolved significantly over time, and further changes are expected. The
following are the general design goals, along with specific examples about how they impact Zephyr’s
source code, and areas where more work remains to be done.
Single source for hardware information Zephyr’s built-in device drivers and sample applications shall
obtain configurable hardware descriptions from devicetree.
Examples
• New device drivers shall use devicetree APIs to determine which devices to create.
• In-tree sample applications shall use aliases to determine which of multiple possible generic devices
of a given type will be used in the current build. For example, the blinky-sample uses this to
determine the LED to blink.
• Boot-time pin muxing and pin control for new SoCs shall be accomplished via a devicetree-based
pinctrl driver.
Source compatibility with other operating systems Zephyr’s devicetree tooling is based on a generic
layer which is interoperable with other devicetree users, such as the Linux kernel.
Zephyr’s binding language semantics can support Zephyr-specific attributes, but shall not express Zephyr-
specific relationships.
Examples
• Zephyr’s devicetree source parser, dtlib.py, is source-compatible with other tools like dtc in both
directions: dtlib.py can parse dtc output, and dtc can parse dtlib.py output.
• Zephyr’s “extended dtlib” library, edtlib.py, shall not include Zephyr-specific features. Its purpose
is to provide a higher-level view of the devicetree for common elements like interrupts and buses.
Only the high-level gen_defines.py script, which is built on top of edtlib.py, contains Zephyr-
specific knowledge and features.
Devicetree bindings
A devicetree on its own is only half the story for describing hardware, as it is a relatively unstructured
format. Devicetree bindings provide the other half.
A devicetree binding declares requirements on the contents of nodes, and provides semantic information
about the contents of valid nodes. Zephyr devicetree bindings are YAML files in a custom format (Zephyr
does not use the dt-schema tools used by the Linux kernel).
These pages introduce bindings, describe what they do, note where they are found, and explain their
data format.
Note: See the Bindings index for reference information on bindings built in to Zephyr.
Here is an example devicetree node:

bar-device {
        compatible = "foo-company,bar-device";
        num-foos = <3>;
};

And here is a minimal binding file that matches it:

compatible: "foo-company,bar-device"

properties:
  num-foos:
    type: int
    required: true
The build system matches the bar-device node to its YAML binding because the node’s compatible
property matches the binding’s compatible: line.
What the build system does with bindings The build system uses bindings both to validate devicetree
nodes and to convert the devicetree’s contents into the generated devicetree_generated.h header file.
For example, the build system would use the above binding to check that the required num-foos property
is present in the bar-device node, and that its value, <3>, has the correct type.
The build system will then generate a macro for the bar-device node’s num-foos property, which will
expand to the integer literal 3. This macro lets you get the value of the property in C code using the API
which is discussed later in this guide in Devicetree access from C/C++.
For another example, the following node would cause a build error, because it has no num-foos property,
and this property is marked required in the binding:
bad-node {
compatible = "foo-company,bar-device";
};
Other ways nodes are matched to bindings If a node has more than one string in its compatible
property, the build system looks for compatible bindings in the listed order and uses the first match.
Take this node as an example:
baz-device {
compatible = "foo-company,baz-device", "generic-baz-device";
};
The baz-device node would get matched to a binding with a compatible: "generic-baz-device"
line if the build system can’t find a binding with a compatible: "foo-company,baz-device" line.
Nodes without compatible properties can be matched to bindings associated with their parent nodes.
These are called “child bindings”. If a node describes hardware on a bus, like I2C or SPI, then the bus
type is also taken into account when matching nodes to bindings. (See On-bus for details).
See The /zephyr,user node for information about a special node that doesn’t require any binding.
Where bindings are located Binding file names usually match their compatible: lines. For example,
the above example binding would be named foo-company,bar-device.yaml by convention.
The build system looks for bindings in dts/bindings subdirectories of the following places:
• the zephyr repository
• your application source directory
• your board directory
• any shield directories
• any directories manually included in the DTS_ROOT CMake variable
• any module that defines a dts_root in its Build settings
The build system will consider any YAML file in any of these, including in any subdirectories, when
matching nodes to bindings. A file is considered YAML if its name ends with .yaml or .yml.
Warning: The binding files must be located somewhere inside the dts/bindings subdirectory of the
above places.
For example, if my-app is your application directory, then you must place application-specific
bindings inside my-app/dts/bindings. So my-app/dts/bindings/serial/my-company,my-serial-port.yaml
would be found, but my-app/my-company,my-serial-port.yaml would be ignored.
Devicetree bindings syntax This page documents the syntax of Zephyr’s bindings format. Zephyr
bindings files are YAML files. A simple example was given in the introduction page.
Top level keys The top level of a bindings file maps keys to values. The top-level keys look like this:
# A high level description of the device the binding applies to:
description: |
This is the Vendomatic company's foo-device.
# You can include definitions from other bindings using this syntax:
include: other.yaml
properties:
  # Requirements for and descriptions of the properties that this
  # binding's nodes need to satisfy go here.

child-binding:
  # You can constrain the children of the nodes matching this binding
  # using this key.

foo-cells:
  # "Specifier" cell names for the 'foo' domain go here; example 'foo'
  # values are 'gpio', 'pwm', and 'dma'. See below for more information.
Description A free-form description of node hardware goes here. You can put links to datasheets or
example nodes or properties as well.
Compatible This key is used to match nodes to this binding as described in Introduction to Devicetree
Bindings. It should look like this in a binding file:

compatible: "manufacturer,device"

This devicetree node would then match the binding:

device {
        compatible = "manufacturer,device";
};
Assuming no binding has compatible: "manufacturer,device-v2", it would also match this node:
device-2 {
compatible = "manufacturer,device-v2", "manufacturer,device";
};
Each node’s compatible property is tried in order. The first matching binding is used. The on-bus: key
can be used to refine the search.
If more than one binding for a compatible is found, an error is raised.
The manufacturer prefix identifies the device vendor. See dts/bindings/vendor-prefixes.txt for a list of
accepted vendor prefixes. The device part is usually from the datasheet.
Some bindings apply to a generic class of devices which do not have a specific vendor. In these cases,
there is no vendor prefix. One example is the gpio-leds compatible which is commonly used to describe
board LEDs connected to GPIOs.
Properties The properties: key describes properties that nodes which match the binding contain.
For example, a binding for a UART peripheral might look something like this:
compatible: "manufacturer,serial"

properties:
  reg:
    type: array
    description: UART peripheral MMIO register space
    required: true

  current-speed:
    type: int
    description: current baud rate
    required: true
In this example, a node with compatible "manufacturer,serial" must contain a property named
current-speed. The property’s value must be a single integer. Similarly, the node must contain a
reg property.
The build system uses bindings to generate C macros for devicetree properties that appear in DTS files.
You can read more about how to get property values in source code from these macros in Devicetree
access from C/C++. Generally speaking, the build system only generates macros for properties listed in
the properties: key for the matching binding. Properties not mentioned in the binding are generally
ignored by the build system.
The one exception is that the build system will always generate macros for standard properties, like reg,
whose meaning is defined by the devicetree specification. This happens regardless of whether the node
has a matching binding or not.
Property entry syntax Property entries in properties: are written in this syntax:
<property name>:
  required: <true | false>
  type: <string | int | boolean | array | uint8-array | string-array |
         phandle | phandles | phandle-array | path | compound>
  deprecated: <true | false>
  default: <default>
  description: <description of the property>
  enum:
    - <item1>
    - <item2>
    ...
    - <itemN>
  const: <string | int | array | uint8-array | string-array>
  specifier-space: <space-name>
Example property definitions Here are some example property definitions:

properties:
  # Describes a property like 'current-speed = <115200>;'. We pretend that
  # it's obligatory for the example node and set 'required: true'.
  current-speed:
    type: int
    required: true
  int-with-default:
    type: int
    default: 123
    description: Value for int register, default is power-up configuration.

  array-with-default:
    type: array
    default: [1, 2, 3] # Same as 'array-with-default = <1 2 3>'

  string-with-default:
    type: string
    default: "foo"

  string-array-with-default:
    type: string-array
    default: ["foo", "bar"] # Same as 'string-array-with-default = "foo", "bar"'

  uint8-array-with-default:
    type: uint8-array
    default: [0x12, 0x34] # Same as 'uint8-array-with-default = [12 34]'
required Adding required: true to a property definition will fail the build if a node matches the
binding, but does not contain the property.
The default setting is required: false; that is, properties are optional by default. Using required:
false is therefore redundant and strongly discouraged.
type The type of a property constrains its values. The available types are string, int, boolean,
array, uint8-array, string-array, phandle, phandles, phandle-array, path, and compound. See Writing
property values for more details about writing values of each type in a DTS file. See Phandles for
more information about the phandle* type properties.
deprecated A property with deprecated: true indicates to both the user and the tooling that the
property is meant to be phased out.
The tooling will report a warning if the devicetree includes the property that is flagged as deprecated.
(This warning is upgraded to an error in the Test Runner (Twister) for upstream pull requests.)
The default setting is deprecated: false. Using deprecated: false is therefore redundant and
strongly discouraged.
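A minimal sketch (the property name is invented) of flagging a binding property as deprecated:

```yaml
properties:
  # Devicetrees that still set 'old-speed' will build with a warning,
  # steering users toward whatever property replaces it.
  old-speed:
    type: int
    deprecated: true
```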
default The optional default: setting gives a value that will be used if the property is missing from
the devicetree node.
For example, with this binding fragment:
properties:
  foo:
    type: int
    default: 3
If property foo is missing in a matching node, then the output will be as if foo = <3>; had appeared in
the DTS (except YAML data types are used for the default value).
Note that combining default: with required: true will raise an error.
For rules related to default in upstream Zephyr bindings, see Rules for default values.
See Example property definitions for examples. Putting default: on any property type besides those
used in Example property definitions will raise an error.
enum The enum: line is followed by a list of values the property may contain. If a property value
in DTS is not in the enum: list in the binding, an error is raised. See Example property definitions for
examples.
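For instance, a sketch (with an invented property name) constraining the permitted values with enum::

```yaml
properties:
  sample-rate-hz:
    type: int
    # A devicetree value outside this list raises a build error.
    enum:
      - 8000
      - 16000
      - 48000
```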
const This specifies a constant value the property must take. It is mainly useful for constraining the
values of common properties for a particular piece of hardware.
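For example, a binding might pin down a standard cell-count property. This is a sketch; the constant value depends on the hardware being described:

```yaml
properties:
  "#address-cells":
    type: int
    # Nodes matching this binding must set exactly this value.
    const: 1
```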
specifier-space This property, if present, manually sets the specifier space associated with a property
with type phandle-array.
Normally, the specifier space is encoded implicitly in the property name. A property named foos with
type phandle-array implicitly has specifier space foo. As a special case, *-gpios properties have
specifier space “gpio”, so that foo-gpios will have specifier space “gpio” rather than “foo-gpio”.
You can use specifier-space to manually provide a space if using this convention would result in an
awkward or unconventional name.
For example:
compatible: ...

properties:
  bar:
    type: phandle-array
    specifier-space: my-custom-space
controller1: custom-controller@1000 {
#my-custom-space-cells = <2>;
};
controller2: custom-controller@2000 {
#my-custom-space-cells = <1>;
};
my-node {
bar = <&controller1 10 20>, <&controller2 30>;
};
Generally speaking, you should reserve this feature for cases where the implicit specifier space naming
convention doesn’t work. One appropriate example is an mboxes property with specifier space “mbox”,
not “mboxe”. You can write this property as follows:
properties:
  mboxes:
    type: phandle-array
    specifier-space: mbox
Child-binding child-binding can be used when a node has children that all share the same properties.
Each child gets the contents of child-binding as its binding, though an explicit compatible = ... on
the child node takes precedence, if a binding is found for it.
Consider a binding for a PWM LED node like this one, where the child nodes are required to have a pwms
property:
pwmleds {
compatible = "pwm-leds";
red_pwm_led {
pwms = <&pwm3 4 15625000>;
};
green_pwm_led {
pwms = <&pwm3 0 15625000>;
};
/* ... */
};
compatible: "pwm-leds"

child-binding:
  description: LED that uses PWM

  properties:
    pwms:
      type: phandle-array
      required: true
child-binding also works recursively. For example, this binding:
compatible: "foo"

child-binding:
  child-binding:
    properties:
      my-property:
        type: int
        required: true
This nested binding applies to the grandchild node in the following devicetree:
parent {
compatible = "foo";
child {
grandchild {
my-property = <123>;
                };
        };
};
Bus If the node is a bus controller, use bus: in the binding to say what type of bus. For example, a
binding for a SPI peripheral on an SoC would look like this:
compatible: "manufacturer,spi-peripheral"
bus: spi
# ...
The presence of this key in the binding informs the build system that the children of any node matching
this binding appear on this type of bus.
This in turn influences the way on-bus: is used to match bindings for the child nodes.
For a single bus supporting multiple protocols, e.g. I3C and I2C, the bus: in the binding can have a list
as value:
compatible: "manufacturer,i3c-controller"
bus: [i3c, i2c]
# ...
On-bus If the node appears as a device on a bus, use on-bus: in the binding to say what type of bus.
For example, a binding for an external SPI memory chip should include this line:
on-bus: spi
And a binding for an I2C based temperature sensor should include this line:
on-bus: i2c
When looking for a binding for a node, the build system checks if the binding for the parent node
contains bus: <bus type>. If it does, then only bindings with a matching on-bus: <bus type> and
bindings without an explicit on-bus are considered. Bindings with an explicit on-bus: <bus type> are
searched for first, before bindings without an explicit on-bus. The search repeats for each item in the
node’s compatible property, in order.
This feature allows the same device to have different bindings depending on what bus it appears on. For
example, consider a sensor device with compatible manufacturer,sensor which can be used via either
I2C or SPI.
The sensor node may therefore appear in the devicetree as a child node of either an SPI or an I2C
controller, like this:
spi-bus@0 {
/* ... some compatible with 'bus: spi', etc. ... */
sensor@0 {
compatible = "manufacturer,sensor";
reg = <0>;
/* ... */
};
};
i2c-bus@0 {
        /* ... some compatible with 'bus: i2c', etc. ... */
sensor@79 {
compatible = "manufacturer,sensor";
reg = <79>;
/* ... */
};
};
You can write two separate binding files which match these individual sensor nodes, even though they
have the same compatible:
# manufacturer,sensor-spi.yaml, which matches sensor@0 on the SPI bus:
compatible: "manufacturer,sensor"
on-bus: spi
# manufacturer,sensor-i2c.yaml, which matches sensor@79 on the I2C bus:
compatible: "manufacturer,sensor"
on-bus: i2c

properties:
  use-clock-stretching:
    type: boolean
Only sensor@79 can have a use-clock-stretching property. The bus-sensitive logic ignores
manufacturer,sensor-i2c.yaml when searching for a binding for sensor@0.
Specifier cell names (*-cells) This section documents how to name the cells in a specifier within a
binding. These concepts are discussed in detail later in this guide in phandle-array properties.
Consider a binding for a node whose phandle may appear in a phandle-array property, like the PWM
controllers pwm1 and pwm2 in this example:
pwm1: pwm@deadbeef {
compatible = "foo,pwm";
#pwm-cells = <2>;
};
pwm2: pwm@deadbeef1234 {
        compatible = "bar,pwm";
        #pwm-cells = <1>;
};
my-node {
pwms = <&pwm1 1 2000>, <&pwm2 3000>;
};
The bindings for compatible "foo,pwm" and "bar,pwm" must give a name to the cells that appear in a
PWM specifier using pwm-cells:, like this:
# foo,pwm.yaml
compatible: "foo,pwm"
...
pwm-cells:
  - channel
  - period
# bar,pwm.yaml
compatible: "bar,pwm"
...
pwm-cells:
  - period
A *-names (e.g. pwm-names) property can appear on the node as well, giving a name to each entry.
This allows the cells in the specifiers to be accessed by name, e.g. using APIs like
DT_PWMS_CHANNEL_BY_NAME.
If the specifier is empty (e.g. #clock-cells = <0>), then *-cells can either be omitted (recommended)
or set to an empty array. Note that an empty array is specified as e.g. clock-cells: [] in YAML.
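A sketch of a fixed-rate clock binding (the compatible is invented) whose specifier is empty:

```yaml
compatible: "vnd,fixed-clock"

# With '#clock-cells = <0>;' in the DTS, specifiers carry no cells, so
# the cell-name list can be omitted entirely, or written explicitly as:
clock-cells: []
```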
Include Bindings can include other files, which can be used to share common property definitions
between bindings. Use the include: key for this. Its value is either a string or a list.
In the simplest case, you can include another file by giving its name as a string, like this:
include: foo.yaml
If any file named foo.yaml is found (see Where bindings are located for the search process), it will be
included into this binding.
Included files are merged into bindings with a simple recursive dictionary merge. The build system will
check that the resulting merged binding is well-formed. It is allowed to include at any level, including
child-binding, like this:
child-binding:
  # bar.yaml will be merged with content at this level
  include: bar.yaml
It is an error if a key appears with a different value in a binding and in a file it includes, with
one exception: a binding can have required: true for a property definition for which the included
file has required: false. The required: true takes precedence, allowing bindings to strengthen
requirements from included files.
Note that weakening requirements by having required: false where the included file has required:
true is an error. This is meant to keep the organization clean.
The file base.yaml contains definitions for many common properties. When writing a new binding, it
is a good idea to check if base.yaml already defines some of the needed properties, and include it if it
does.
Note that you can make a property defined in base.yaml obligatory like this, taking reg as an example:
reg:
required: true
This relies on the dictionary merge to fill in the other keys for reg, like type.
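Putting this together, a sketch of a binding that pulls in base.yaml and strengthens reg:

```yaml
include: base.yaml

properties:
  # base.yaml defines reg's type and description; the dictionary merge
  # fills those in, and this entry only makes the property obligatory.
  reg:
    required: true
```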
To include multiple files, you can use a list of strings:
include:
- foo.yaml
- bar.yaml
This includes the files foo.yaml and bar.yaml. (You can write this list in a single line of YAML as
include: [foo.yaml, bar.yaml].)
When including multiple files, any overlapping required keys on properties in the included files are
ORed together. This makes sure that a required: true is always respected.
In some cases, you may want to include some property definitions from a file, but not all of them. In this
case, include: should be a list, and you can filter out just the definitions you want by putting a mapping
in the list, like this:
include:
  - name: foo.yaml
    property-allowlist:
      - i-want-this-one
      - and-this-one
  - name: bar.yaml
    property-blocklist:
      - do-not-include-this-one
      - or-this-one
Each map element must have a name key which is the filename to include, and may have
property-allowlist and property-blocklist keys that filter which properties are included.
You cannot have a single map element with both property-allowlist and property-blocklist keys. A
map element with neither property-allowlist nor property-blocklist is valid; no additional filtering
is done.
You can freely intermix strings and mappings in a single include: list:
include:
  - foo.yaml
  - name: bar.yaml
    property-blocklist:
      - do-not-include-this-one
      - or-this-one
You can also filter the properties of an included file’s child binding by putting the
property-allowlist or property-blocklist under a child-binding key, like this:
include:
  - name: bar.yaml
    child-binding:
      property-allowlist:
        - child-prop-to-allow
Nexus nodes and maps All phandle-array type properties support mapping through *-map properties,
e.g. gpio-map, as defined by the Devicetree specification.
This is used, for example, to define connector nodes for common breakout headers, such as the
arduino_header nodes that are conventionally defined in the devicetrees for boards with
Arduino-compatible expansion headers.
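A heavily simplified sketch (labels, pin numbers, and cell values invented) of what such a nexus node can look like; see the Devicetree specification for the full *-map semantics:

```dts
arduino_header: connector {
        compatible = "arduino-header-r3";
        #gpio-cells = <2>;
        /* Map connector pin 0 to pin 4 on gpio0, and connector
         * pin 1 to pin 5, so boards can route the shared header
         * to their own SoC GPIO controllers. */
        gpio-map = <0 0 &gpio0 4 0>,
                   <1 0 &gpio0 5 0>;
};
```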
Rules for upstream bindings This section includes general rules for writing bindings that you want
to submit to the upstream Zephyr Project. (You don’t need to follow these rules for bindings you don’t
intend to contribute to the Zephyr Project, but it’s a good idea.)
Decisions made by the Zephyr devicetree maintainer override the contents of this section. If that happens,
though, please let them know so they can update this page, or you can send a patch yourself.
Contents
• General rules
– File names
– Recommendations are requirements
– Descriptions
– Naming conventions
• Rules for vendor prefixes
• Rules for default values
• The zephyr, prefix
Always check for existing bindings Zephyr aims for devicetree Source compatibility with other oper-
ating systems. Therefore, if there is an existing binding for your device in an authoritative location, you
should try to replicate its properties when writing a Zephyr binding, and you must justify any Zephyr-
specific divergences.
In particular, this rule applies if:
• There is an existing binding in the mainline Linux kernel. See Documentation/devicetree/
bindings in Linus’s tree for existing bindings and the Linux devicetree documentation for more
information.
• Your hardware vendor provides an official binding outside of the Linux kernel.
General rules
File names Bindings which match a compatible must have file names based on the compatible.
• For example, a binding for compatible vnd,foo must be named vnd,foo.yaml.
• If the binding is bus-specific, you can append the bus to the file name; for example, if the binding
YAML has on-bus: bar, you may name the file vnd,foo-bar.yaml.
Recommendations are requirements All recommendations in the default section are requirements when submitting the binding.
In particular, if you use the default: feature, you must justify the value in the property’s description.
Descriptions There are only two acceptable ways to write a property’s description: string.
If your description is short, it’s fine to use this style:
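The original short-style example was lost in rendering; a representative one-liner, with a made-up property, would look like this:

```yaml
description: polarity of the interrupt pin
```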
If your description is long or spans multiple lines, you must use this style:
description: |
My very long string
goes here.
Look at all these lines!
This | style prevents YAML parsers from removing the newlines in multi-line descriptions. This in turn
makes these long strings display properly in the Bindings index.
Naming conventions Do not use uppercase letters (A through Z) or underscores (_) in property names.
Use lowercase letters (a through z) instead of uppercase. Use dashes (-) instead of underscores. (The
one exception to this rule is if you are replicating a well-established binding from somewhere like Linux.)
Rules for vendor prefixes The following general rules apply to vendor prefixes in compatible properties.
• If your device is manufactured by a specific vendor, then its compatible should have a vendor prefix.
If your binding describes hardware with a well known vendor from the list in dts/bindings/vendor-
prefixes.txt, you must use that vendor prefix.
• If your device is not manufactured by a specific hardware vendor, do not invent a vendor prefix.
Vendor prefixes are not mandatory parts of compatible properties, and compatibles should not
include them unless they refer to an actual vendor. There are some exceptions to this rule, but the
practice is strongly discouraged.
• Do not submit additions to Zephyr’s dts/bindings/vendor-prefixes.txt file unless you also
include users of the new prefix. This means at least a binding and a devicetree using the vendor
prefix, and should ideally include a device driver handling that compatible.
For custom bindings, you can add a custom dts/bindings/vendor-prefixes.txt file to any directory in your DTS_ROOT. The devicetree tooling will respect these prefixes, and will not generate warnings or errors if you use them in your own bindings or devicetrees.
• We sometimes synchronize Zephyr’s vendor-prefixes.txt file with the Linux kernel’s equivalent file;
this process is exempt from the previous rule.
• If your binding is describing an abstract class of hardware with Zephyr specific drivers handling
the nodes, it’s usually best to use zephyr as the vendor prefix. See Zephyr-specific binding (zephyr)
for examples.
Rules for default values In any case where default: is used in a devicetree binding, the
description: for that property must explain why the value was selected and any conditions that would
make it necessary to provide a different value. Additionally, if changing one property would require
changing another to create a consistent configuration, then those properties should be made required.
There is no need to document the default value itself; this is already present in the Bindings index output.
There is a risk in using default: when the value in the binding may be incorrect for a particular board or
hardware configuration. For example, defaulting the capacity of the connected power cell in a charging
IC binding is likely to be incorrect. For such properties it’s better to make the property required: true,
forcing the user to make an explicit choice.
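A binding following this advice might look like the following sketch; the property name and wording are hypothetical:

```yaml
properties:
  battery-capacity:
    type: int
    required: true
    description: |
      Capacity of the connected power cell in mAh. There is no
      meaningful default; the value depends entirely on the battery
      shipped with the board.
```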
Driver developers should use their best judgment as to whether a value can be safely defaulted. Candidates for default values include:
• delays that would be different only under unusual conditions (such as intervening hardware)
• configuration for devices that have a standard initial configuration (such as a USB audio headset)
• defaults which match the vendor-specified power-on reset value (as long as they are independent
from other properties)
Examples of how to write descriptions according to these rules:
properties:
  cs-interval:
    type: int
    default: 0
    description: |
      Minimum interval between chip select deassertion and assertion.
      The default corresponds to the reset value of the register field.
In contrast, this description would be flagged in review, because it doesn’t justify the default:
properties:
  # Description doesn't mention anything about the default
  foo:
    type: int
    default: 1
    description: number of foos
The zephyr, prefix You must add this prefix to property names in the following cases:
• Zephyr-specific extensions to bindings we share with upstream Linux. One example is
the zephyr,vref-mv ADC channel property which is common to ADC controllers defined in
dts/bindings/adc/adc-controller.yaml. This channel binding is partially shared with an analogous
Linux binding, and Zephyr-specific extensions are marked as such with the prefix.
• Configuration values that are specific to a Zephyr device driver. One example is the zephyr,lazy-load property in the ti,bq274xx binding. Though devicetree in general is a hardware description and configuration language, it is Zephyr’s only mechanism for configuring driver behavior for an individual struct device. Therefore, as a compromise, we do allow some software configuration in Zephyr’s devicetree bindings, as long as it uses this prefix to show that it is Zephyr-specific.
You may use the zephyr, prefix when naming a devicetree compatible that is specific to Zephyr. One
example is zephyr,ipc-openamp-static-vrings. In this case, it’s permitted but not required to add the
zephyr, prefix to properties defined in the binding.
This guide describes Zephyr’s <zephyr/devicetree.h> API for reading the devicetree from C source
files. It assumes you’re familiar with the concepts in Introduction to devicetree and Devicetree bindings.
See Devicetree Reference for reference material.
A note for Linux developers Linux developers familiar with devicetree should be warned that the API
described here differs significantly from how devicetree is used on Linux.
Instead of generating a C header with all the devicetree data which is then abstracted behind a macro
API, the Linux kernel would instead read the devicetree data structure in its binary form. The binary
representation is parsed at runtime, for example to load and initialize device drivers.
Zephyr does not work this way because the size of the devicetree binary and associated handling code
would be too large to fit comfortably on the relatively constrained devices Zephyr supports.
Node identifiers To get information about a particular devicetree node, you need a node identifier for
it. This is a just a C macro that refers to the node.
These are the main ways to get a node identifier:
By path
Use DT_PATH() along with the node’s full path in the devicetree, starting from the root node. This
is mostly useful if you happen to know the exact node you’re looking for.
By node label
Use DT_NODELABEL() to get a node identifier from a node label. Node labels are often provided by
SoC .dtsi files to give nodes names that match the SoC datasheet, like i2c1, spi2, etc.
By alias
Use DT_ALIAS() to get a node identifier for a property of the special /aliases node. This is
sometimes done by applications (like blinky, which uses the led0 alias) that need to refer to some
device of a particular type (“the board’s user LED”) but don’t care which one is used.
By instance number
This is done primarily by device drivers, as instance numbers are a way to refer to individual nodes
based on a matching compatible. Get these with DT_INST() , but be careful doing so. See below.
By chosen node
Use DT_CHOSEN() to get a node identifier for /chosen node properties.
By parent/child
Use DT_PARENT() and DT_CHILD() to get a node identifier for a parent or child node, starting from
a node identifier you already have.
Two node identifiers which refer to the same node are identical and can be used interchangeably.
Here’s a DTS fragment for some imaginary hardware we’ll return to throughout this file for examples:
/dts-v1/;

/ {
    aliases {
        sensor-controller = &i2c1;
    };

    soc {
        i2c1: i2c@40002000 {
            compatible = "vnd,soc-i2c";
            label = "I2C_1";
            reg = <0x40002000 0x1000>;
            status = "okay";
            clock-frequency = <100000>;
        };
    };
};
Here are a few ways to get node identifiers for the i2c@40002000 node:
• DT_PATH(soc, i2c_40002000)
• DT_NODELABEL(i2c1)
• DT_ALIAS(sensor_controller)
• DT_INST(x, vnd_soc_i2c) for some unknown number x. See the DT_INST() documentation for
details.
Important: Non-alphanumeric characters like dash (-) and the at sign (@) in devicetree names are
converted to underscores (_). The names in a DTS are also converted to lowercase.
Node identifiers are not values There is no way to store one in a variable or pass one to a function at run time; a node identifier exists only at compile time, as a macro token.
Property access The right API to use to read property values depends on the node and property.
• Checking properties and values
• Simple properties
• reg properties
• interrupts properties
• phandle properties
Checking properties and values You can use DT_NODE_HAS_PROP() to check if a node has a property. For the example devicetree above, DT_NODE_HAS_PROP(DT_NODELABEL(i2c1), clock_frequency) expands to 1, while DT_NODE_HAS_PROP(DT_NODELABEL(i2c1), not_a_property) expands to 0.
Simple properties Use DT_PROP(node_id, property) to read basic integer, boolean, string, numeric
array, and string array properties.
For example, to read the clock-frequency property’s value in the example above, you would write DT_PROP(DT_NODELABEL(i2c1), clock_frequency), which expands to 100000.
Important: The DTS property clock-frequency is spelled clock_frequency in C. That is, properties
also need special characters converted to underscores. Their names are also forced to lowercase.
Properties with string and boolean types work the exact same way: the DT_PROP() macro expands to a string literal in the case of strings, and to the number 0 or 1 in the case of booleans.
Note: Don’t use DT_NODE_HAS_PROP() for boolean properties. Use DT_PROP() instead, as shown above: it expands to either 0 or 1 depending on whether the property is present or absent.
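Since the real DT_PROP() needs a full Zephyr build, here is a self-contained plain-C sketch of how this expansion works. The DT_N_... macros, property names, and values below are invented stand-ins for the generated header, not Zephyr’s actual output:

```c
#include <string.h>

/*
 * Invented stand-ins for devicetree_generated.h entries describing
 * the i2c@40002000 example node (values are illustrative only).
 */
#define DT_N_S_soc_S_i2c_40002000_P_clock_frequency 100000
#define DT_N_S_soc_S_i2c_40002000_P_wakeup_source   1      /* boolean: present */
#define DT_N_S_soc_S_i2c_40002000_P_status          "okay" /* string */

/* Two-level paste so macro arguments are expanded before pasting. */
#define TOY_CAT_(a, b) a##b
#define TOY_CAT(a, b)  TOY_CAT_(a, b)

/* Toy DT_PROP(): glue the node identifier, _P_, and property name. */
#define TOY_DT_PROP(node_id, prop) TOY_CAT(node_id, TOY_CAT(_P_, prop))

/* A node identifier is just a token naming the node: */
#define MY_I2C DT_N_S_soc_S_i2c_40002000

int clock_frequency(void) { return TOY_DT_PROP(MY_I2C, clock_frequency); }
int wakeup_source(void)   { return TOY_DT_PROP(MY_I2C, wakeup_source); }
int status_is_okay(void)  { return strcmp(TOY_DT_PROP(MY_I2C, status), "okay") == 0; }
```

The macros collapse to constants at compile time, which is also why a node identifier cannot be stored in a variable.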
Properties with type array, uint8-array, and string-array work similarly, except DT_PROP() expands
to an array initializer in these cases. Here is an example devicetree fragment:
foo: foo@1234 {
    a = <1000 2000 3000>; /* array */
    b = [aa bb cc dd]; /* uint8-array */
    c = "bar", "baz"; /* string-array */
};
You can use DT_PROP_LEN() to get logical array lengths in number of elements.
DT_PROP_LEN() cannot be used with the special reg or interrupts properties. These have alternative
macros which are described next.
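As a rough, self-contained illustration (plain C, with invented macro names mimicking the generated header, not Zephyr’s actual output), an array property becomes a C initializer and its logical length a constant:

```c
/*
 * Invented stand-ins: what DT_PROP() and DT_PROP_LEN() would resolve
 * to for property 'a' of the foo@1234 example node above.
 */
#define TOY_FOO_P_a     {1000, 2000, 3000}
#define TOY_FOO_P_a_LEN 3

static const int a_vals[] = TOY_FOO_P_a; /* expands to an array initializer */

int a_len(void)
{
    return TOY_FOO_P_a_LEN; /* logical length in number of elements */
}

int a_elem(int i)
{
    return a_vals[i];
}
```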
interrupts properties Use DT_IRQ_BY_IDX(node_id, idx, val) to read a value from a node’s interrupts property. Here, idx is the logical index into the interrupts array, i.e. it is the index of an individual interrupt specifier in the property. The val argument is the name of a cell within the interrupt specifier. To use this macro, check the bindings file for the node you are interested in to find the val names.
Most Zephyr devicetree bindings have a cell named irq, which is the interrupt number. You can use
DT_IRQN() as a convenient way to get a processed view of this value.
Warning: Here, “processed” reflects Zephyr’s devicetree Scripts and tools, which change the irq
number in zephyr.dts to handle hardware constraints on some SoCs and in accordance with Zephyr’s
multilevel interrupt numbering.
This is currently not very well documented, and you’ll need to read the scripts’ source code and
existing drivers for more details if you are writing a device driver.
phandle properties
Note: See Phandles for a detailed guide to phandles.
Property values can refer to other nodes using the &another-node phandle syntax introduced in Writing
property values. Properties which contain phandles have type phandle, phandles, or phandle-array in
their bindings. We’ll call these “phandle properties” for short.
You can convert a phandle to a node identifier using DT_PHANDLE() , DT_PHANDLE_BY_IDX() , or
DT_PHANDLE_BY_NAME() , depending on the type of property you are working with.
One common use case for phandle properties is referring to other hardware in the tree. In this case,
you usually want to convert the devicetree-level phandle to a Zephyr driver-level struct device. See Get a
struct device from a devicetree node for ways to do that.
Another common use case is accessing specifier values in a phandle array. The general purpose APIs for this are DT_PHA_BY_IDX() and DT_PHA(). There are also hardware-specific shorthands like DT_GPIO_CTLR_BY_IDX(), DT_GPIO_CTLR(), DT_GPIO_PIN_BY_IDX(), DT_GPIO_PIN(), DT_GPIO_FLAGS_BY_IDX(), and DT_GPIO_FLAGS().
See DT_PHA_HAS_CELL_AT_IDX() and DT_PROP_HAS_IDX() for ways to check if a specifier value is
present in a phandle property.
Other property-related APIs include:
• DT_ENUM_IDX(): for properties whose values are among a fixed list of choices
• Fixed flash partitions: APIs for managing fixed flash partitions. Also see Flash map, which wraps
this in a more user-friendly API.
Device driver conveniences Special purpose macros are available for writing device drivers, which
usually rely on instance identifiers.
To use these, you must define DT_DRV_COMPAT to the compat value your driver implements support for.
This compat value is what you would pass to DT_INST() .
If you do that, you can access the properties of individual instances of your compatible with less typing,
like this:
#include <zephyr/devicetree.h>

/*
 * This is the same thing as
 * DT_PROP(DT_INST(0, my_driver_compat), clock_frequency)
 */
DT_INST_PROP(0, clock_frequency)
Hardware specific APIs Convenience macros built on top of the above APIs are also defined to help
readability for hardware specific code. See Hardware specific APIs for details.
Generated macros While the zephyr/devicetree.h API is not generated, it does rely on a generated
C header which is put into every application build directory: devicetree_generated.h. This file contains
macros with devicetree data.
These macros have tricky naming conventions which the devicetree.h API abstracts away. They should be considered an implementation detail, but it’s useful to understand them since they frequently appear in compiler error messages.
This section contains an Augmented Backus-Naur Form grammar for these generated macros, with examples and more details in comments. See RFC 7405 (which extends RFC 5234) for a syntax specification.
; --------------------------------------------------------------------
; dt-macro: the top level nonterminal for a devicetree macro
;
; A dt-macro starts with uppercase "DT_", and is one of:
;
; - a <node-macro>, generated for a particular node
; - some <other-macro>, a catch-all for other types of macros
dt-macro = node-macro / other-macro
; --------------------------------------------------------------------
; pinctrl-macro: a macro related to the pinctrl properties in a node
;
; These are a bit of a special case because they kind of form an array,
; but the array indexes correspond to pinctrl-DIGIT properties in a node.
;
; So they're related to a node, but not just one property within the node.
;
; The following examples assume something like this:
;
; foo {
; pinctrl-0 = <&bar>;
; pinctrl-1 = <&baz>;
; pinctrl-names = "default", "sleep";
; };
; --------------------------------------------------------------------
; gpiohogs-macro: a macro related to GPIO hog nodes
;
; The following examples assume something like this:
;
; gpio1: gpio@... {
; compatible = "vnd,gpio";
; #gpio-cells = <2>;
;
; node-1 {
; gpio-hog;
; gpios = <0x0 0x10>, <0x1 0x20>;
; output-high;
; };
;
; node-2 {
; gpio-hog;
; gpios = <0x2 0x30>;
; output-low;
; };
; };
;
; Bindings fragment for the vnd,gpio compatible:
;
; gpio-cells:
; - pin
; - flags
; --------------------------------------------------------------------
; property-macro: a macro related to a node property
;
; These combine a node identifier with a "lowercase-and-underscores form"
; property name. The value expands to something related to the property's
; value.
;
; The optional prop-suf suffix is when there's some specialized
; subvalue that deserves its own macro, like the macros for an array
; property's individual elements
;
; The "plain vanilla" macro for a property's value, with no prop-suf,
; looks like this:
;
; DT_N_<node path>_P_<property name>
;
; Components:
;
; --------------------------------------------------------------------
; path-id: a node's path-based macro identifier
;
; This is in "lowercase-and-underscores" form. I.e. it is
; the node's devicetree path converted to a C token by changing:
;
; - each slash (/) to _S_
; - all letters to lowercase
; - non-alphanumeric characters to underscores
;
; For example, the leaf node "bar-BAZ" in this devicetree:
;
; / {
; foo@123 {
; bar-BAZ {};
; };
; };
;
; has path-id "_S_foo_123_S_bar_baz".
path-id = 1*( %s"_S_" dt-name )
; ----------------------------------------------------------------------
; prop-id: a property identifier
;
; A property name converted to a C token by changing:
;
; - all letters to lowercase
; - non-alphanumeric characters to underscores
;
; Example node:
;
; chosen {
; zephyr,console = &uart1;
; WHY,AM_I_SHOUTING = "unclear";
; };
;
; The 'zephyr,console' property has prop-id 'zephyr_console'.
; 'WHY,AM_I_SHOUTING' has prop-id 'why_am_i_shouting'.
prop-id = dt-name
; ----------------------------------------------------------------------
; prop-suf: a property-specific macro suffix
;
; Extra macros are generated for properties:
;
; - that are special to the specification ("reg", "interrupts", etc.)
; - with array types (uint8-array, phandle-array, etc.)
; - with "enum:" in their bindings
; - that have zephyr device API specific macros for phandle-arrays
; - related to phandle specifier names ("foo-names")
;
; --------------------------------------------------------------------
; other-macro: grab bag for everything that isn't a node-macro.
; --------------------------------------------------------------------
; alternate-id: another way to specify a node besides a path-id
;
; Example devicetree:
;
; --------------------------------------------------------------------
; miscellaneous helper definitions
Phandles
The devicetree concept of a phandle is very similar to pointers in C. You can use phandles to refer to
nodes in devicetree similarly to the way you can use pointers to refer to structures in C.
Contents
• Getting phandles
• Using phandles
– One node: phandle type
– Zero or more nodes: phandles type
– Zero or more nodes with metadata: phandle-array type
• phandle-array properties
– High level description
– Example phandle-arrays: GPIOs
• Specifier spaces
– High level description
– Example specifier space: gpio
• Associating properties with specifier spaces
– High level description
– Special case: GPIO
– Manually specifying a space
• Naming the cells in a specifier
• See also
Getting phandles The usual way to get a phandle for a devicetree node is from one of its node labels.
For example, with this devicetree:
/ {
    lbl_a: node-1 {};
    lbl_b: lbl_c: node-2 {};
};
you can write a phandle for:
• /node-1 as &lbl_a
• /node-2 as either &lbl_b or &lbl_c
Notice how the &nodelabel devicetree syntax is similar to the “address of” C syntax.
Using phandles
Note: “Type” in this section refers to one of the type names documented in Properties in the devicetree
bindings documentation.
One node: phandle type You can use phandles to refer to node-b from node-a, where node-b is
related to node-a in some way.
One common example is when node-a represents some hardware that generates an interrupt, and
node-b represents the interrupt controller that receives the asserted interrupt. In this case, you could
write:
node_b: node-b {
    interrupt-controller;
};

node-a {
    interrupt-parent = <&node_b>;
};
This uses the standard interrupt-parent property defined in the devicetree specification to capture the
relationship between the two nodes.
These properties have type phandle.
Zero or more nodes: phandles type You can use phandles to make an array of references to other
nodes.
One common example occurs in pin control. Pin control properties like pinctrl-0, pinctrl-1 etc.
may contain multiple phandles, each of which “points” to a node containing information related to pin
configuration for that hardware peripheral. Here’s an example of six phandles in a single property:
Zero or more nodes with metadata: phandle-array type You can use phandles to refer to and configure one or more resources that are “owned” by some other node.
This is the most complex case. There are examples and more details in the next section.
These properties have type phandle-array.
phandle-array properties These properties are commonly used to specify a resource that is owned by
another node along with additional metadata about the resource.
High level description Usually, properties with this type are written like phandle-array-prop in this
example:
node {
    phandle-array-prop = <&foo 1 2>, <&bar 3>, <&baz 4 5>;
};
That is, the property’s value is written as a comma-separated sequence of “groups”, where each “group”
is written inside of angle brackets (< ... >). Each “group” starts with a phandle (&foo, &bar, &baz).
The values that follow the phandle in each “group” are called specifiers. There are three specifiers in the
above example:
1. 1 2
2. 3
3. 4 5
The phandle in each “group” is used to “point” to the hardware that controls the resource you are
interested in. The specifier describes the resource itself, along with any additional necessary metadata.
The rest of this section describes a common example. Subsequent sections document more rules about
how to use phandle-array properties in practice.
Example phandle-arrays: GPIOs Perhaps the most common use case for phandle-array properties
is specifying one or more GPIOs on your SoC that another chip on your board connects to. For that
reason, we’ll focus on that use case here. However, there are many other use cases that are handled in
devicetree with phandle-array properties.
For example, consider an external chip with an interrupt pin that is connected to a GPIO on your SoC.
You will typically need to provide that GPIO’s information (GPIO controller and pin number) to the
device driver for that chip. You usually also need to provide other metadata about the GPIO, like whether
it is active low or high, what kind of internal pull resistor within the SoC should be enabled in order to
communicate with the device, etc., to the driver.
In the devicetree, there will be a node that represents the GPIO controller that controls a group of pins.
This reflects the way GPIO IP blocks are usually developed in hardware. Therefore, there is no single
node in the devicetree that represents a GPIO pin, and you can’t use a single phandle to represent it.
Instead, you would use a phandle-array property, like this:
my-external-ic {
    irq-gpios = <&gpioX pin flags>;
};
In this example, irq-gpios is a phandle-array property with just one “group” in its value. &gpioX is
the phandle for the GPIO controller node that controls the pin. pin is the pin number (0, 1, 2, . . . ).
flags is a bit mask describing pin metadata (for example (GPIO_ACTIVE_LOW | GPIO_PULL_UP)); see
include/zephyr/dt-bindings/gpio/gpio.h for more details.
The device driver handling the my-external-ic node can then use the irq-gpios property’s value to set
up interrupt handling for the chip as it is used on your board. This lets you configure the device driver
in devicetree, without changing the driver’s source code.
Such properties can contain multiple values as well:
my-other-external-ic {
    handshake-gpios = <&gpioX pinX flagsX>, <&gpioY pinY flagsY>;
};
Specifier spaces Specifier spaces are a way to allow nodes to describe how you should use them in
phandle-array properties.
We’ll start with an abstract, high level description of how specifier spaces work in DTS files, before
moving on to a concrete example and providing references to further reading for how this all works in
practice using DTS files and bindings files.
node {
    phandle-array-prop = <&foo 1 2>, <&bar 3>;
};
The cells that follow each phandle are called a specifier. In this example, there are two specifiers:
1. 1 2: two cells
2. 3: one cell
Every phandle-array property has an associated specifier space. This sounds complex, but it’s really just a
way to assign a meaning to the cells that follow each phandle in a hardware specific way. Every specifier
space has a unique name. There are a few “standard” names for commonly used hardware, but you can
create your own as well.
Devicetree nodes encode the number of cells that must appear in a specifier, by name, using the #SPACE_NAME-cells property. For example, let’s assume that phandle-array-prop’s specifier space is named baz. Then we would need the foo and bar nodes to have the following #baz-cells properties:
foo: node@1000 {
    #baz-cells = <2>;
};

bar: node@2000 {
    #baz-cells = <1>;
};
Without the #baz-cells property, the devicetree tooling would not be able to validate the number of
cells in each specifier in phandle-array-prop.
This flexibility allows you to write down an array of hardware resources in a single devicetree property,
even though the amount of metadata you need to describe each resource might be different for different
nodes.
A single node can also have different numbers of cells in different specifier spaces. For example, we
might have:
foo: node@1000 {
    #baz-cells = <2>;
    #bob-cells = <1>;
};

node {
    phandle-array-prop = <&foo 1 2>, <&bar 3>;
    phandle-array-prop-2 = <&foo 4>;
};
This flexibility allows you to have a node that manages multiple different kinds of resources at the same
time. The node describes the amount of metadata needed to describe each kind of resource (how many
cells are needed in each case) using different #SPACE_NAME-cells properties.
Example specifier space: gpio From the above example, you’re already familiar with how one specifier
space works: in the “gpio” space, specifiers almost always have two cells:
1. a pin number
2. a bit mask of flags related to the pin
Therefore, almost all GPIO controller nodes you will see in practice will look like this:
gpioX: gpio-controller@deadbeef {
    gpio-controller;
    #gpio-cells = <2>;
};
High level description In general, a phandle-array property named foos implicitly has specifier space
foo. For example:
properties:
  dmas:
    type: phandle-array
  pwms:
    type: phandle-array
The dmas property’s specifier space is “dma”. The pwms property’s specifier space is “pwm”.
Special case: GPIO *-gpios properties are special-cased so that e.g. foo-gpios resolves to
#gpio-cells rather than #foo-gpio-cells.
Manually specifying a space You can manually specify the specifier space for any phandle-array
property. See specifier-space.
Naming the cells in a specifier You should name the cells in each specifier space your hardware
supports when writing bindings. For details on how to do this, see Specifier cell names (*-cells).
This allows C code to query information about and retrieve the values of cells in a specifier by name
using devicetree APIs like these:
• DT_PHA_BY_IDX
• DT_PHA_BY_NAME
This feature and these macros are used internally by numerous hardware-specific APIs. Here are a few
examples:
• DT_GPIO_PIN_BY_IDX
• DT_PWMS_CHANNEL_BY_IDX
• DT_DMAS_CELL_BY_NAME
• DT_IO_CHANNELS_INPUT_BY_IDX
• DT_CLOCKS_CELL_BY_NAME
See also
• Writing property values: how to write phandles in devicetree properties
• Properties: how to write bindings for properties with phandle types (phandle, phandles,
phandle-array)
• specifier-space: how to manually specify a phandle-array property’s specifier space
Zephyr’s devicetree scripts handle the /zephyr,user node as a special case: you can put essentially
arbitrary properties inside it and retrieve their values without having to write a binding. It is meant as a
convenient container when only a few simple properties are needed.
Note: This node is meant for sample code and user applications. It should not be used in the upstream
Zephyr source code for device drivers, subsystems, etc.
Simple values You can store numeric or array values in /zephyr,user if you want them to be configurable at build time via devicetree.
For example, with this devicetree overlay:
/ {
    zephyr,user {
        boolean;
        bytes = [81 82 83];
        number = <23>;
        numbers = <1>, <2>, <3>;
        string = "text";
        strings = "a", "b", "c";
    };
};
You can get the above property values in C/C++ code like this:
#define ZEPHYR_USER_NODE DT_PATH(zephyr_user)

DT_PROP(ZEPHYR_USER_NODE, boolean) // 1
DT_PROP(ZEPHYR_USER_NODE, bytes)   // {0x81, 0x82, 0x83}
DT_PROP(ZEPHYR_USER_NODE, number)  // 23
DT_PROP(ZEPHYR_USER_NODE, numbers) // {1, 2, 3}
DT_PROP(ZEPHYR_USER_NODE, string)  // "text"
DT_PROP(ZEPHYR_USER_NODE, strings) // {"a", "b", "c"}
Devices You can store phandles in /zephyr,user if you want to be able to reconfigure which devices
your application uses in simple cases using devicetree overlays.
For example, with this devicetree overlay:
/ {
    zephyr,user {
        handle = <&gpio0>;
        handles = <&gpio0>, <&gpio1>;
    };
};
You can convert the phandles in the handle and handles properties to device pointers like this:
#define ZEPHYR_USER_NODE DT_PATH(zephyr_user)

/*
 * Same thing as:
 *
 * ... my_dev = DEVICE_DT_GET(DT_NODELABEL(gpio0));
 */
const struct device *my_device =
    DEVICE_DT_GET(DT_PROP(ZEPHYR_USER_NODE, handle));

/* Expands each element of 'handles' to a device pointer plus a comma. */
#define PHANDLE_TO_DEVICE(node_id, prop, idx) \
    DEVICE_DT_GET(DT_PHANDLE_BY_IDX(node_id, prop, idx)),

/*
 * Same thing as:
 *
 * ... *my_devices[] = {
 *     DEVICE_DT_GET(DT_NODELABEL(gpio0)),
 *     DEVICE_DT_GET(DT_NODELABEL(gpio1)),
 * };
 */
const struct device *my_devices[] = {
    DT_FOREACH_PROP_ELEM(ZEPHYR_USER_NODE, handles, PHANDLE_TO_DEVICE)
};
GPIOs The /zephyr,user node is a convenient place to store application-specific GPIOs that you want
to be able to reconfigure with a devicetree overlay.
For example, with this devicetree overlay:
#include <zephyr/dt-bindings/gpio/gpio.h>
/ {
    zephyr,user {
        signal-gpios = <&gpio0 1 GPIO_ACTIVE_HIGH>;
    };
};
You can convert the pin defined in signal-gpios to a struct gpio_dt_spec in your source code, then
use it like this:
#include <zephyr/drivers/gpio.h>
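The rest of this example was truncated. Continuing from the include above, it would look roughly like the following sketch, based on Zephyr’s standard GPIO driver API (GPIO_DT_SPEC_GET, gpio_pin_configure_dt, gpio_pin_set_dt); verify the details against your Zephyr version:

```c
#define ZEPHYR_USER_NODE DT_PATH(zephyr_user)

/* Pack the controller, pin number, and flags into one struct. */
static const struct gpio_dt_spec signal =
    GPIO_DT_SPEC_GET(ZEPHYR_USER_NODE, signal_gpios);

/* At run time, configure the pin, then drive it to its active level: */
gpio_pin_configure_dt(&signal, GPIO_OUTPUT_INACTIVE);
gpio_pin_set_dt(&signal, 1);
```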
Devicetree HOWTOs
This page has step-by-step advice for getting things done with devicetree.
Get your devicetree and generated header A board’s devicetree (BOARD.dts) pulls in common node definitions via #include preprocessor directives. This at least includes the SoC’s .dtsi. One way to figure out the devicetree’s contents is by opening these files, e.g. by looking in dts/<ARCH>/<vendor>/<soc>.dtsi, but this can be time consuming.
If you just want to see the “final” devicetree for your board, build an application and open the zephyr.dts file in the build directory.
Tip: You can build hello_world to see the “base” devicetree for your board without any additional
changes from overlay files.
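For example (the board name here is only an illustration; substitute your own):

```
west build -b reel_board samples/hello_world
less build/zephyr/zephyr.dts
```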
Get a struct device from a devicetree node When writing Zephyr applications, you’ll often want to
get a driver-level struct device corresponding to a devicetree node.
For example, with this devicetree fragment, you might want the struct device for serial@40002000:
/ {
    soc {
        serial0: serial@40002000 {
            status = "okay";
            current-speed = <115200>;
            /* ... */
        };
    };

    aliases {
        my-serial = &serial0;
    };

    chosen {
        zephyr,console = &serial0;
    };
};
Start by making a node identifier for the device you are interested in. There are different ways to do this;
pick whichever one works best for your requirements. Here are some examples:
/* Option 1: by node label */
#define MY_SERIAL DT_NODELABEL(serial0)

/* Option 2: by alias */
#define MY_SERIAL DT_ALIAS(my_serial)

/* Option 3: by chosen node */
#define MY_SERIAL DT_CHOSEN(zephyr_console)

/* Option 4: by path */
#define MY_SERIAL DT_PATH(soc, serial_40002000)
Once you have a node identifier there are two ways to proceed. One way to get a device is to use
DEVICE_DT_GET() :
const struct device *const uart_dev = DEVICE_DT_GET(MY_SERIAL);

if (!device_is_ready(uart_dev)) {
    /* Not ready, do not use */
    return -ENODEV;
}
You can then use uart_dev with Universal Asynchronous Receiver-Transmitter (UART) API functions like
uart_configure() . Similar code will work for other device types; just make sure you use the correct
API for the device.
If you’re having trouble, see Troubleshooting devicetree. The first thing to check is that the node has
status = "okay", like this:
#if DT_NODE_HAS_STATUS(MY_SERIAL, okay)
const struct device *const uart_dev = DEVICE_DT_GET(MY_SERIAL);
#else
#error "Node is disabled"
#endif
If you see the #error output, make sure to enable the node in your devicetree. In some situations your
code will compile but fail to link, with an undefined reference to the device. This likely means there’s
a Kconfig issue preventing the device driver from being built, resulting in a reference that does not
exist. If your code compiles and links successfully, the last thing to check is whether the device is
ready, like this:
if (!device_is_ready(uart_dev)) {
printk("Device not ready\n");
}
If you find that the device is not ready, it likely means that the device’s initialization function failed.
Enabling logging or debugging driver code may help in such situations. Note that you can also use
device_get_binding() to obtain a reference at runtime. If it returns NULL, it can either mean that the
device’s driver failed to initialize or that the device does not exist.
Find a devicetree binding Devicetree bindings are YAML files which declare what you can do with the
nodes they describe, so it’s critical to be able to find them for the nodes you are using.
If you don’t have them already, Get your devicetree and generated header. To find a node’s binding, open
the generated header file, which starts with a list of nodes in a block comment:
/*
* [...]
* Nodes in dependency order (ordinal and path):
* 0 /
* 1 /aliases
* 2 /chosen
* 3 /flash@0
* 4 /memory@20000000
* (etc.)
* [...]
*/
Make note of the path to the node you want to find, like /flash@0. Search for the node’s output in the
file, which starts with something like this if the node has a matching binding:
/*
* Devicetree node:
* /flash@0
*
* Binding (compatible = soc-nv-flash):
* $ZEPHYR_BASE/dts/bindings/mtd/soc-nv-flash.yaml
* [...]
*/
Set devicetree overlays Devicetree overlays are explained in Introduction to devicetree. The CMake
variable DTC_OVERLAY_FILE contains a space- or semicolon-separated list of overlay files to use. If
DTC_OVERLAY_FILE specifies multiple files, they are included in that order by the C preprocessor.
A file in a Zephyr module can be referred to by escaping the Zephyr module dir variable like
\${ZEPHYR_<module>_MODULE_DIR}/<path-to>/dts.overlay when setting the DTC_OVERLAY_FILE
variable.
You can set DTC_OVERLAY_FILE to contain exactly the files you want to use, for example by setting the
variable on the west build command line.
If you don’t set DTC_OVERLAY_FILE, the build system will follow these steps, looking for files in your
application configuration directory to use as devicetree overlays:
1. If the file boards/<BOARD>.overlay exists, it will be used.
2. If the current board has multiple revisions and boards/<BOARD>_<revision>.overlay exists, it will
be used. This file will be used in addition to boards/<BOARD>.overlay if both exist.
3. If one or more files have been found in the previous steps, the build system stops looking and just
uses those files.
4. Otherwise, if <BOARD>.overlay exists, it will be used, and the build system will stop looking for
more files.
5. Otherwise, if app.overlay exists, it will be used.
Extra devicetree overlays may be provided using EXTRA_DTC_OVERLAY_FILE which will still allow the
build system to automatically use devicetree overlays described in the above steps.
The build system appends overlays specified in EXTRA_DTC_OVERLAY_FILE to the overlays in
DTC_OVERLAY_FILE when processing devicetree overlays. This means that changes made via
EXTRA_DTC_OVERLAY_FILE have higher precedence than those made via DTC_OVERLAY_FILE.
All configuration files will be taken from the application’s configuration directory except for files with an
absolute path that are given with the DTC_OVERLAY_FILE or EXTRA_DTC_OVERLAY_FILE argument.
See Application Configuration Directory on how the application configuration directory is defined.
Using Shields will also add devicetree overlay files.
The DTC_OVERLAY_FILE value is stored in the CMake cache and used in successive builds.
The build system prints all the devicetree overlays it finds during the configuration phase.
Use devicetree overlays See Set devicetree overlays for how to add an overlay to the build.
Overlays can override node property values in multiple ways. For example, if your BOARD.dts contains
this node:
/ {
soc {
serial0: serial@40002000 {
status = "okay";
current-speed = <115200>;
/* ... */
};
};
};
/* Option 1: override by node label */
&serial0 {
	current-speed = <9600>;
};

/* Option 2: override by path */
&{/soc/serial@40002000} {
	current-speed = <9600>;
};

We’ll use the &serial0 style for the rest of these examples.
You can add aliases to your devicetree using overlays: an alias is just a property of the /aliases node.
For example:
/ {
aliases {
my-serial = &serial0;
};
};
To delete a property (this is also how to set a boolean property to false if it’s true in BOARD.dts):
&serial0 {
/delete-property/ some-unwanted-property;
};
You can add subnodes using overlays. For example, to configure a SPI or I2C child device on an existing
bus node, do something like this:
/* SPI device example */
&spi1 {
my_spi_device: temp-sensor@0 {
compatible = "...";
label = "TEMP_SENSOR_0";
/* reg is the chip select number, if needed;
* If present, it must match the node's unit address. */
reg = <0>;
	};
};
Write device drivers using devicetree APIs “Devicetree-aware” device drivers should create a struct
device for each status = "okay" devicetree node with a particular compatible (or related set of com-
patibles) supported by the driver.
Writing a devicetree-aware driver begins by defining a devicetree binding for the devices supported by
the driver. Use existing bindings from similar drivers as a starting point. A skeletal binding to get
started needs nothing more than a description and a compatible, for example:

description: My device

compatible: "vnd,my-device"

See Find a devicetree binding for more advice on locating existing bindings.
After writing your binding, your driver C file can then use the devicetree API to find status = "okay"
nodes with the desired compatible, and instantiate a struct device for each one. There are two options
for instantiating each struct device: using instance numbers, and using node labels.
In either case:
• Each struct device’s name should be set to its devicetree node’s label property. This allows the
driver’s users to Get a struct device from a devicetree node in the usual way.
• Each device’s initial configuration should use values from devicetree properties whenever practical.
This allows users to configure the driver using devicetree overlays.
Examples for how to do this follow. They assume you’ve already implemented the device-specific config-
uration and data structures and API functions, like this:
/* my_driver.c */
#include <zephyr/drivers/some_api.h>
Option 1: create devices using instance numbers Use this option, which uses Instance-based APIs, if
possible. However, they only work when devicetree nodes for your driver’s compatible are all equivalent,
and you do not need to be able to distinguish between them.
To use instance-based APIs, begin by defining DT_DRV_COMPAT to the lowercase-and-underscores version
of the compatible that the device driver supports. For example, if your driver’s compatible is "vnd,
my-device" in devicetree, you would define DT_DRV_COMPAT to vnd_my_device in your driver C file:
/*
* Put this near the top of the file. After the includes is a good place.
* (Note that you can therefore run "git grep DT_DRV_COMPAT drivers" in
* the zephyr Git repository to look for example drivers using this style).
*/
#define DT_DRV_COMPAT vnd_my_device
Important: As shown, the DT_DRV_COMPAT macro should have neither quotes nor special characters.
Remove quotes and convert special characters to underscores when creating DT_DRV_COMPAT from the
compatible property.
Finally, define an instantiation macro, which creates each struct device using instance numbers. Do
this after defining my_api_funcs.
/*
* This instantiation macro is named "CREATE_MY_DEVICE".
* Its "inst" argument is an arbitrary instance number.
*
* Put this near the end of the file, e.g. after defining "my_api_funcs".
*/
#define CREATE_MY_DEVICE(inst) \
static struct my_dev_data my_data_##inst = { \
/* initialize RAM values as needed, e.g.: */ \
.freq = DT_INST_PROP(inst, clock_frequency), \
}; \
static const struct my_dev_cfg my_cfg_##inst = { \
/* initialize ROM values as needed. */ \
	}; \
	DEVICE_DT_INST_DEFINE(inst, /* ... */);
Notice the use of APIs like DT_INST_PROP() and DEVICE_DT_INST_DEFINE() to access devicetree node
data. These APIs retrieve data from the devicetree for instance number inst of the node with compatible
determined by DT_DRV_COMPAT.
Finally, pass the instantiation macro to DT_INST_FOREACH_STATUS_OKAY():

DT_INST_FOREACH_STATUS_OKAY(CREATE_MY_DEVICE)
DT_INST_FOREACH_STATUS_OKAY expands to code which calls CREATE_MY_DEVICE once for each enabled
node with the compatible determined by DT_DRV_COMPAT. It does not append a semicolon to the end
of the expansion of CREATE_MY_DEVICE, so the macro’s expansion must end in a semicolon or function
definition to support multiple devices.
Option 2: create devices using node labels Some device drivers cannot use instance numbers. One
example is an SoC peripheral driver which relies on vendor HAL APIs specialized for individual IP blocks
to implement Zephyr driver callbacks. Cases like this should use DT_NODELABEL() to refer to individual
nodes in the devicetree representing the supported peripherals on the SoC. The devicetree.h Generic APIs
can then be used to access node data.
For this to work, your SoC’s dtsi file must define node labels like mydevice0, mydevice1, etc. appro-
priately for the IP blocks your driver supports. The resulting devicetree usually looks something like
this:
/ {
soc {
mydevice0: dev@0 {
compatible = "vnd,my-device";
};
mydevice1: dev@1 {
compatible = "vnd,my-device";
};
};
};
The driver can use the mydevice0 and mydevice1 node labels in the devicetree to operate on specific
device nodes:
/*
* This is a convenience macro for creating a node identifier for
* the relevant devices. An example use is MYDEV(0) to refer to
* the node with label "mydevice0".
*/
#define MYDEV(idx) DT_NODELABEL(mydevice ## idx)
/*
* Define your instantiation macro; "idx" is a number like 0 for mydevice0
* or 1 for mydevice1. It uses MYDEV() to create the node label from the
 * index.
 */
Notice the use of APIs like DT_PROP() and DEVICE_DT_DEFINE() to access devicetree node data.
Finally, manually detect each enabled devicetree node and use CREATE_MY_DEVICE to instantiate each
struct device:
#if DT_NODE_HAS_STATUS(DT_NODELABEL(mydevice0), okay)
CREATE_MY_DEVICE(0)
#endif

#if DT_NODE_HAS_STATUS(DT_NODELABEL(mydevice1), okay)
CREATE_MY_DEVICE(1)
#endif
Since this style does not use DT_INST_FOREACH_STATUS_OKAY(), the driver author is responsible for call-
ing CREATE_MY_DEVICE() for every possible node, e.g. using knowledge about the peripherals available
on supported SoCs.
Device drivers that depend on other devices At times, one struct device depends on another
struct device and requires a pointer to it. For example, a sensor device might need a pointer to
its SPI bus controller device. Some advice:
• Write your devicetree binding in a way that permits use of Hardware specific APIs from devicetree.h
if possible.
• In particular, for bus devices, your driver’s binding should include a file like
dts/bindings/spi/spi-device.yaml, which provides common definitions for devices addressable via a
specific bus. This
enables use of APIs like DT_BUS() to obtain a node identifier for the bus node. You can then Get a
struct device from a devicetree node for the bus in the usual way.
Search existing bindings and device drivers for examples.
Applications that depend on board-specific devices One way to allow application code to run unmodified
on multiple boards is by supporting a devicetree alias to specify the hardware-specific portions, as is
done in the blinky-sample. The application can then be configured in BOARD.dts files or via devicetree
overlays.
Troubleshooting devicetree
Here are some tips for fixing misbehaving devicetree related code.
See Devicetree HOWTOs for other “HOWTO” style information.
Try again with a pristine build directory See Pristine Builds for examples, or just delete the build
directory completely and retry. This is general advice which is especially applicable to debugging
devicetree issues, because the devicetree outputs are created during the CMake configuration phase and
are not always regenerated when one of their inputs changes.
Make sure <devicetree.h> is included Unlike Kconfig symbols, the devicetree.h header must be
included explicitly.
Many Zephyr header files rely on information from devicetree, so including some other API may transi-
tively include devicetree.h, but that’s not guaranteed.
Undefined reference to __device_dts_ord_<N> This linker error means that some code used
DEVICE_DT_GET(NODE_ID), where NODE_ID is a valid node identifier, but no device driver has allocated a
struct device for this devicetree node. You thus get a linker error, because you’re asking for a
pointer to a device that isn’t defined.
To fix it, you need to make sure that:
1. The node is enabled: the node must have status = "okay";.
(Recall that a missing status property means the same thing as status = "okay"; see Important
properties for more information about status.)
2. A device driver responsible for allocating the struct device is enabled. That is, the Kconfig option
which makes the build system compile the driver sources into your application needs to be set to y.
(See Setting Kconfig configuration values for more information on setting Kconfig options.)
Below, <build> means your build directory.
Making sure the node is enabled:
To find the devicetree node you need to check, use the number <N> from the linker error.
Look for this number in the list of nodes at the top of <build>/zephyr/include/generated/
devicetree_generated.h. For example, if <N> is 15, and your devicetree_generated.h file looks
like this, the node you are interested in is /soc/i2c@deadbeef:
/*
* Generated by gen_defines.py
*
* DTS input file:
* <build>/zephyr/zephyr.dts.pre
*
* Directories with bindings:
* $ZEPHYR_BASE/dts/bindings
*
* Node dependency ordering (ordinal and path):
* 0 /
* 1 /aliases
[...]
* 15 /soc/i2c@deadbeef
[...]
Now look for this node in <build>/zephyr/zephyr.dts, which is the final devicetree for your application
build. (See Get your devicetree and generated header for information and examples.)
If the node has status = "disabled"; in zephyr.dts, then you need to enable it by setting status =
"okay";, probably by using a devicetree overlay. For example, if zephyr.dts looks like this:
i2c0: i2c@deadbeef {
status = "disabled";
};
Then you should put this into your devicetree overlay and Try again with a pristine build directory:
&i2c0 {
status = "okay";
};
Make sure that you see status = "okay"; in zephyr.dts after you rebuild.
Making sure the device driver is enabled:
The first step is to figure out which device driver is responsible for handling your devicetree node and
allocating devices for it. To do this, you need to start with the compatible property in your devicetree
node, and find the driver that allocates struct device instances for that compatible.
If you’re not familiar with how devices are allocated from devicetree nodes based on compatible prop-
erties, the ZDS 2021 talk A deep dive into the Zephyr 2.5 device model may be a useful place to start,
along with the Device Driver Model pages. See Important properties and the Devicetree specification for
more information about compatible.
There is currently no documentation for what device drivers exist and which devicetree compatibles they
are associated with. You will have to figure this out by reading the source code:
• Look in drivers for the appropriate subdirectory that corresponds to the API your device implements
• Look inside that directory for relevant files until you figure out what the driver is, or realize there
is no such driver.
Often, but not always, you can find the driver by looking for a file that sets the DT_DRV_COMPAT macro
to match your node’s compatible property, except lowercased and with special characters converted to
underscores. For example, if your node’s compatible is vnd,foo-device, look for a file with this line:

#define DT_DRV_COMPAT vnd_foo_device
Important: This does not always work since not all drivers use DT_DRV_COMPAT.
If you find a driver, you next need to make sure the Kconfig option that compiles it is enabled. (If you
don’t find a driver, and you are sure the compatible property is correct, then you need to write a driver.
Writing drivers is outside the scope of this documentation page.)
Continuing the above example, if your devicetree node looks like this now:
i2c0: i2c@deadbeef {
compatible = "nordic,nrf-twim";
status = "okay";
};
Then you would look inside of drivers/i2c for the driver file that handles the compatible nordic,
nrf-twim. In this case, that is drivers/i2c/i2c_nrfx_twim.c. Notice how even in cases where
DT_DRV_COMPAT is not set, you can use information like driver file names as clues.
Once you know the driver you want to enable, you need to make sure its Kconfig option is set to y. You
can figure out which Kconfig option is needed by looking at the CMakeLists.txt file in the driver’s
subdirectory. Continuing the above example, drivers/i2c/CMakeLists.txt has a line that looks like this:

zephyr_library_sources_ifdef(CONFIG_NRFX_TWIM i2c_nrfx_twim.c)

In general, such lines have the form zephyr_library_sources_ifdef(CONFIG_FOO foo.c), and you then need
to set:

CONFIG_FOO=y

where CONFIG_FOO is the option that CMakeLists.txt uses to decide whether or not to compile the driver
(CONFIG_NRFX_TWIM in this example).
However, there may be other problems in your way, such as unmet Kconfig dependencies that you also
have to enable before you can enable your driver.
Consult the Kconfig file that defines CONFIG_FOO (for your value of FOO) for more information.
Make sure you’re using the right names In C code, devicetree names must be lowercased, with special
characters like @ and - converted to underscores, since those characters are not valid in C identifiers:

/*
 * foo.c: lowercase-and-underscores names
 */

/* Don't do this: @ should be _, and - should be _ */
#define MY_CLOCK_FREQ DT_PROP(DT_PATH(soc, i2c@1234000), clock-frequency)

/* Do this instead: */
#define MY_CLOCK_FREQ DT_PROP(DT_PATH(soc, i2c_1234000), clock_frequency)

In devicetree source files, on the other hand, use the original names:

/*
 * foo.overlay: DTS names with special characters, etc.
 */
Look at the preprocessor output To save preprocessor output files, enable the
CONFIG_COMPILER_SAVE_TEMPS option when building, e.g. when building hello_world with west.
This will create a preprocessor output file named foo.c.i in the build directory for each source file
foo.c.
You can then search for the file in the build directory to see what your devicetree macros expanded to.
For example, on macOS and Linux, you can locate main.c.i using find.
It’s usually easiest to run a style formatter on the results before opening them. For example, to use
clang-format to reformat the file in place:
clang-format -i build/CMakeFiles/app.dir/src/main.c.i
You can then open the file in your favorite editor to view the final C results after preprocessing.
Do not track macro expansion Compiler messages for devicetree errors can sometimes be very long.
This typically happens when the compiler prints a message for every step of a complex macro expansion
that has several intermediate expansion steps.
To prevent the compiler from doing this, you can disable the
CONFIG_COMPILER_TRACK_MACRO_EXPANSION option. This typically reduces the output to one mes-
sage per error.
For example, you can build hello_world with west with this option disabled.
Validate properties If you’re getting a compile error reading a node property, check your node
identifier and property name. Try checking the node by adding this to the file and recompiling:
#if !DT_NODE_EXISTS(DT_NODELABEL(my_serial))
#error "whoops"
#endif
If you see the “whoops” error message when you rebuild, the node identifier isn’t referring to a valid
node. Get your devicetree and generated header and debug from there.
Some hints for what to check next if you don’t see the “whoops” error message:
• did you Make sure you’re using the right names?
• does the property exist?
• does the node have a matching binding?
Check for missing bindings See Devicetree bindings for information about bindings, and Bindings index
for information on bindings built into Zephyr.
If the build fails to Find a devicetree binding for a node, then either the node’s compatible property is
not defined, or its value has no matching binding. If the property is set, check for typos in its name. In
a devicetree source file, compatible should look like "vnd,some-device" – Make sure you’re using the
right names.
If your binding file is not under zephyr/dts, you may need to set DTS_ROOT; see Where bindings are
located.
Errors with DT_INST_() APIs If you’re using an API like DT_INST_PROP() , you must define
DT_DRV_COMPAT to the lowercase-and-underscores version of the compatible you are interested in. See
Option 1: create devices using instance numbers.
Devicetree versus Kconfig Along with devicetree, Zephyr also uses the Kconfig language to configure the
source code. Whether to use devicetree or Kconfig for a particular purpose can sometimes be confusing.
This section should help you decide which one to use.
In short:
• Use devicetree to describe hardware and its boot-time configuration. Examples include periph-
erals on a board, boot-time clock frequencies, interrupt lines, etc.
• Use Kconfig to configure software support to build into the final image. Examples include whether
to add networking support, which drivers are needed by the application, etc.
In other words, devicetree mainly deals with hardware, and Kconfig with software.
For example, consider a board containing an SoC with two UART (serial port) instances.
• The fact that the board has this UART hardware is described with two UART nodes in the device-
tree. These provide the UART type (via the compatible property) and certain settings such as the
address range of the hardware peripheral registers in memory (via the reg property).
• Additionally, the UART boot-time configuration is also described with devicetree. This could
include configuration such as the RX IRQ line’s priority and the UART baud rate. These may be
modifiable at runtime, but their boot-time configuration is described in devicetree.
• Whether or not to include software support for UART in the build is controlled via Kconfig. Ap-
plications which do not need to use the UARTs can remove the driver source code from the build
using Kconfig, even though the board’s devicetree still includes UART nodes.
As another example, consider a device with a 2.4GHz, multi-protocol radio supporting both the Bluetooth
Low Energy and 802.15.4 wireless technologies.
• Devicetree should be used to describe the presence of the radio hardware, what driver or drivers
it’s compatible with, etc.
• Boot-time configuration for the radio, such as TX power in dBm, should also be specified using
devicetree.
• Kconfig should determine which software features should be built for the radio, such as selecting
a BLE or 802.15.4 protocol stack.
As another example, Kconfig options that formerly enabled a particular instance of a driver (that is itself
enabled by Kconfig) have been removed. The devices are selected individually using devicetree’s status
keyword on the corresponding hardware instance.
There are exceptions to these rules:
• Because Kconfig is unable to flexibly control some instance-specific driver configuration parame-
ters, such as the size of an internal buffer, these options may be defined in devicetree. However,
to make clear that they are specific to Zephyr drivers and not hardware description or configura-
tion these properties should be prefixed with zephyr,, e.g. zephyr,random-mac-address in the
common Ethernet devicetree properties.
• Devicetree’s chosen keyword, which allows the user to select a specific instance of a hardware
device to be used for a particular purpose. An example of this is selecting a particular UART for
use as the system’s console.
These pages contain reference material for Zephyr’s devicetree APIs and built-in bindings.
For the platform-independent details, see the Devicetree specification.
Devicetree API
This is a reference page for the <zephyr/devicetree.h> API. The API is macro based. Use of these
macros has no impact on scheduling. They can be used from any calling context and at file scope.
Some of these – the ones beginning with DT_INST_ – require a special macro named DT_DRV_COMPAT to
be defined before they can be used; these are discussed individually below. These macros are generally
meant for use within device drivers, though they can be used outside of drivers with appropriate care.
Contents
• Generic APIs
– Node identifiers and helpers
– Property access
– ranges property
– reg property
– interrupts property
– For-each macros
– Existence checks
– Inter-node dependencies
– Bus helpers
• Instance-based APIs
• Hardware specific APIs
– CAN
– Clocks
– DMA
– Fixed flash partitions
– GPIO
– IO channels
– MBOX
– Pinctrl (pin control)
– PWM
– Reset Controller
– SPI
• Chosen nodes
• Zephyr-specific chosen nodes
Generic APIs The APIs in this section can be used anywhere and do not require DT_DRV_COMPAT to be
defined.
Node identifiers and helpers A node identifier is a way to refer to a devicetree node at C preprocessor
time. While node identifiers are not C values, you can use them to access devicetree data in C rvalue
form using, for example, the Property access API.
The root node / has node identifier DT_ROOT. You can create node identifiers for other devicetree nodes
using DT_PATH() , DT_NODELABEL() , DT_ALIAS() , and DT_INST() .
There are also DT_PARENT() and DT_CHILD() macros which can be used to create node identifiers for a
given node’s parent node or a particular child node, respectively.
The following macros create or operate on node identifiers.
group devicetree-generic-id
Defines
DT_INVALID_NODE
Name for an invalid node identifier.
This supports cases where factored macros can be invoked from paths where devicetree data
may or may not be available. It is a preprocessor identifier that does not match any valid
devicetree node identifier.
DT_ROOT
Node identifier for the root node in the devicetree.
DT_PATH(...)
Get a node identifier for a devicetree path.
The arguments to this macro are the names of non-root nodes in the tree required to reach
the desired node, starting from the root. Non-alphanumeric characters in each name must be
converted to underscores to form valid C tokens, and letters must be lowercased.
Example devicetree fragment:
/ {
soc {
serial1: serial@40001000 {
status = "okay";
current-speed = <115200>;
...
};
	};
};
You can use DT_PATH(soc, serial_40001000) to get a node identifier for the
serial@40001000 node. Node labels like serial1 cannot be used as DT_PATH() arguments;
use DT_NODELABEL() for those instead.
Example usage with DT_PROP() to get the current-speed property:

DT_PROP(DT_PATH(soc, serial_40001000), current_speed) // 115200

In this DT_PATH() call:
• the first argument corresponds to a child node of the root (soc above)
• a second argument corresponds to a child of the first argument (serial_40001000 above,
from the node name serial@40001000 after lowercasing and changing @ to _)
• and so on for deeper nodes in the desired node’s path
Note: This macro returns a node identifier from path components. To get a path string from
a node identifier, use DT_NODE_PATH() instead.
Parameters
• ... – lowercase-and-underscores node names along the node’s path, with each
name given as a separate argument
Returns
node identifier for the node with that path
DT_NODELABEL(label)
Get a node identifier for a node label.
Convert non-alphanumeric characters in the node label to underscores to form valid C tokens,
and lowercase all letters. Note that node labels are not the same thing as label properties.
Example devicetree fragment:
serial1: serial@40001000 {
label = "UART_0";
status = "okay";
current-speed = <115200>;
...
};
cpu@0 {
L2_0: l2-cache {
cache-level = <2>;
...
};
};
DT_PROP(DT_NODELABEL(l2_0), cache_level) // 2
Notice how L2_0 in the devicetree is lowercased to l2_0 in the DT_NODELABEL() argument.
Parameters
• label – lowercase-and-underscores node label name
Returns
node identifier for the node with that label
DT_ALIAS(alias)
Get a node identifier from /aliases.
This macro’s argument is a property of the /aliases node. It returns a node identifier for
the node which is aliased. Convert non-alphanumeric characters in the alias property to
underscores to form valid C tokens, and lowercase all letters.
Example devicetree fragment:
/ {
aliases {
my-serial = &serial1;
};
soc {
serial1: serial@40001000 {
status = "okay";
current-speed = <115200>;
...
};
};
};
You can use DT_ALIAS(my_serial) to get a node identifier for the serial@40001000 node.
Notice how my-serial in the devicetree becomes my_serial in the DT_ALIAS() argument.
Example usage with DT_PROP() to get the current-speed property:

DT_PROP(DT_ALIAS(my_serial), current_speed) // 115200
Parameters
• alias – lowercase-and-underscores alias name.
Returns
node identifier for the node with that alias
DT_INST(inst, compat)
Get a node identifier for an instance of a compatible.
All nodes with a particular compatible property value are assigned instance numbers, which
are zero-based indexes specific to that compatible. You can get a node identifier for these
nodes by passing DT_INST() an instance number, inst, along with the lowercase-and-
underscores version of the compatible, compat.
Instance numbers have the following properties:
• instance numbers in no way reflect any numbering scheme that might exist in SoC
documentation, node labels or unit addresses, or properties of the /aliases node (use
DT_NODELABEL() or DT_ALIAS() for those)
• there is no general guarantee that the same node will have the same instance number
between builds, even if you are building the same application again in the same build
directory
Example devicetree fragment:
serial1: serial@40001000 {
compatible = "vnd,soc-serial";
status = "disabled";
current-speed = <9600>;
...
};
serial2: serial@40002000 {
compatible = "vnd,soc-serial";
status = "okay";
current-speed = <57600>;
...
};
serial3: serial@40003000 {
compatible = "vnd,soc-serial";
current-speed = <115200>;
...
};
Assuming no other nodes in the devicetree have compatible "vnd,soc-serial", that compat-
ible has nodes with instance numbers 0, 1, and 2.
The nodes serial@40002000 and serial@40003000 are both enabled, so their instance num-
bers are 0 and 1, but no guarantees are made regarding which node has which instance
number.
Since serial@40001000 is the only disabled node, it has instance number 2, since disabled
nodes are assigned the largest instance numbers. Therefore:
// Could be 57600 or 115200. There is no way to be sure:
// either serial@40002000 or serial@40003000 could
// have instance number 0, so this could be the current-speed
// property of either of those nodes.
DT_PROP(DT_INST(0, vnd_soc_serial), current_speed)

DT_PARENT(node_id)
Get a node identifier for a node’s parent.
Example devicetree fragment:
parent: parent-node {
child: child-node {
...
};
};
The following are equivalent ways to get the same node identifier:
DT_NODELABEL(parent)
DT_PARENT(DT_NODELABEL(child))
Parameters
• node_id – node identifier
Returns
a node identifier for the node’s parent
DT_GPARENT(node_id)
Get a node identifier for a grandparent node.
Example devicetree fragment:
gparent: grandparent-node {
parent: parent-node {
child: child-node { ... };
};
};
The following are equivalent ways to get the same node identifier:
DT_GPARENT(DT_NODELABEL(child))
DT_PARENT(DT_PARENT(DT_NODELABEL(child)))
Parameters
• node_id – node identifier
Returns
a node identifier for the node’s parent’s parent
DT_CHILD(node_id, child)
Get a node identifier for a child node.
Example devicetree fragment:
/ {
soc-label: soc {
serial1: serial@40001000 {
status = "okay";
current-speed = <115200>;
...
};
};
};
Example usage with DT_PROP() to get the status of the serial@40001000 node:

DT_PROP(DT_CHILD(DT_NODELABEL(soc_label), serial_40001000), status) // "okay"
Node labels like serial1 cannot be used as the child argument to this macro. Use
DT_NODELABEL() for that instead.
You can also use DT_FOREACH_CHILD() to iterate over node identifiers for all of a node’s
children.
Parameters
• node_id – node identifier
• child – lowercase-and-underscores child node name
Returns
node identifier for the node with the name referred to by ‘child’
DT_COMPAT_GET_ANY_STATUS_OKAY(compat)
Get a node identifier for a status okay node with a compatible.
Use this if you want to get an arbitrary enabled node with a given compatible, and you do
not care which one you get. If any enabled nodes with the given compatible exist, a node
identifier for one of them is returned. Otherwise, DT_INVALID_NODE is returned.
Example devicetree fragment:
node-a {
compatible = "vnd,device";
status = "okay";
};
node-b {
compatible = "vnd,device";
status = "okay";
};
node-c {
compatible = "vnd,device";
status = "disabled";
};
Example usage:
DT_COMPAT_GET_ANY_STATUS_OKAY(vnd_device)
This expands to a node identifier for either node-a or node-b. It will not expand to a node
identifier for node-c, because that node does not have status okay.
Parameters
• compat – lowercase-and-underscores compatible, without quotes
Returns
node identifier for a node with that compatible, or DT_INVALID_NODE
DT_NODE_PATH(node_id)
Get a devicetree node’s full path as a string literal.
This returns the path to a node from a node identifier. To get a node identifier from path
components instead, use DT_PATH().
Example devicetree fragment:
/ {
soc {
node: my-node@12345678 { ... };
};
};
Example usage:
DT_NODE_PATH(DT_NODELABEL(node)) // "/soc/my-node@12345678"
DT_NODE_PATH(DT_PATH(soc)) // "/soc"
DT_NODE_PATH(DT_ROOT) // "/"
Parameters
• node_id – node identifier
Returns
the node’s full path in the devicetree
DT_NODE_FULL_NAME(node_id)
Get a devicetree node’s name with unit-address as a string literal.
This returns the node name and unit-address from a node identifier.
Example devicetree fragment:
/ {
soc {
node: my-node@12345678 { ... };
};
};
Example usage:
DT_NODE_FULL_NAME(DT_NODELABEL(node)) // "my-node@12345678"
Parameters
• node_id – node identifier
Returns
the node’s name with unit-address as a string in the devicetree
DT_NODE_CHILD_IDX(node_id)
Get a devicetree node’s index into its parent’s list of children.
Indexes are zero-based.
It is an error to use this macro with the root node.
Example devicetree fragment:
parent {
c1: child-1 {};
c2: child-2 {};
};
Example usage:
DT_NODE_CHILD_IDX(DT_NODELABEL(c1)) // 0
DT_NODE_CHILD_IDX(DT_NODELABEL(c2)) // 1
Parameters
• node_id – node identifier
Returns
the node’s index in its parent node’s list of children
DT_SAME_NODE(node_id1, node_id2)
Do node_id1 and node_id2 refer to the same node?
Both node_id1 and node_id2 must be node identifiers for nodes that exist in the devicetree
(if unsure, you can check with DT_NODE_EXISTS()).
The expansion evaluates to 0 or 1, but may not be a literal integer 0 or 1.
Parameters
• node_id1 – first node identifier
• node_id2 – second node identifier
Returns
an expression that evaluates to 1 if the node identifiers refer to the same node,
and evaluates to 0 otherwise
Property access The following general-purpose macros can be used to access node properties. There
are special-purpose APIs for accessing the ranges property, reg property and interrupts property.
Property values can be read using these macros even if the node is disabled, as long as it has a matching
binding.
group devicetree-generic-prop
Defines
DT_PROP(node_id, prop)
Get a devicetree property value.
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
Returns
a representation of the property’s value
DT_PROP_LEN(node_id, prop)
Get a property’s logical length.
Here, “length” is the number of elements, which may differ from the property’s size in bytes.
For properties whose bindings have the following types, this macro expands to:
• for types array, string-array, and uint8-array, this expands to the number of elements in
the array
• for type phandles, this expands to the number of phandles
• for type phandle-array, this expands to the number of phandle and specifier blocks in the
property
• for type phandle, this expands to 1 (so that a phandle can be treated as a degenerate case
of phandles with length 1)
• for type string, this expands to 1 (so that a string can be treated as a degenerate case of
string-array with length 1)
These properties are handled as special cases:
• reg: use DT_NUM_REGS(node_id) instead
• interrupts: use DT_NUM_IRQS(node_id) instead
Parameters
• node_id – node identifier
• prop – a lowercase-and-underscores property with a logical length
Returns
the property’s length
DT_PROP_LEN_OR(node_id, prop, default_value)
Like DT_PROP_LEN(), but with a fallback to default_value.
If the property is defined (as determined by DT_NODE_HAS_PROP()), this expands to
DT_PROP_LEN(node_id, prop). The default_value parameter is not expanded in this case.
Otherwise, this expands to default_value.
Parameters
• node_id – node identifier
• prop – a lowercase-and-underscores property with a logical length
• default_value – a fallback value to expand to
Returns
the property’s length or the given default value
DT_PROP_HAS_IDX(node_id, prop, idx)
Is index idx valid for an array type property?
If this returns 1, then DT_PROP_BY_IDX(node_id, prop, idx) or DT_PHA_BY_IDX(node_id, prop,
idx, . . . ) are valid at index idx. If it returns 0, it is an error to use those macros with that
index.
Parameters
• node_id – node identifier
• prop – a lowercase-and-underscores property with a logical length
• idx – index to check
Returns
An expression which evaluates to 1 if idx is a valid index into the given property, and 0 otherwise.
DT_PROP_HAS_NAME(node_id, prop, name)
Does a prop-names type property contain a given name?
Example devicetree fragment:
nx: node-x {
foos = <&bar xx yy>, <&baz xx zz>;
foo-names = "event", "error";
status = "okay";
};
Example usage:
DT_PROP_HAS_NAME(DT_NODELABEL(nx), foos, event) // 1
DT_PROP_HAS_NAME(DT_NODELABEL(nx), foos, failure) // 0
Parameters
• node_id – node identifier
• prop – a lowercase-and-underscores prop-names type property
• name – a lowercase-and-underscores name to check
Returns
An expression which evaluates to 1 if “name” is an available name into the given
property, and 0 otherwise.
DT_PROP_BY_IDX(node_id, prop, idx)
Get the value at index idx in an array type property.
For properties whose bindings have the following types, this macro expands to:
• for types array, string-array, uint8-array, and phandles, this expands to the idx-th array element as an integer, string literal, integer, and node identifier respectively
• for type phandle, idx must be 0 and the expansion is a node identifier (this treats phandle like a phandles of length 1)
• for type string, idx must be 0 and the expansion is the entire string (this treats string like string-array of length 1)
These properties are handled as special cases:
• reg: use DT_REG_ADDR_BY_IDX() or DT_REG_SIZE_BY_IDX() instead
• interrupts: use DT_IRQ_BY_IDX() instead
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
• idx – the index to get
Returns
a representation of the idx-th element of the property
DT_LABEL(node_id)
Get a node’s label property.
Deprecated:
Use DT_PROP(node_id, label) instead.
This is a convenience for the Zephyr device API, which uses label properties as device_get_binding() arguments.
Parameters
• node_id – node identifier
Returns
node’s label property value
DT_ENUM_IDX(node_id, prop)
Get a property value’s index into its enumeration values.
The return values start at zero.
Example devicetree fragment:
usb1: usb@12340000 {
maximum-speed = "full-speed";
};
usb2: usb@12341000 {
maximum-speed = "super-speed";
};
properties:
maximum-speed:
type: string
enum:
- "low-speed"
- "full-speed"
- "high-speed"
- "super-speed"
Example usage:
DT_ENUM_IDX(DT_NODELABEL(usb1), maximum_speed) // 1
DT_ENUM_IDX(DT_NODELABEL(usb2), maximum_speed) // 3
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
Returns
zero-based index of the property’s value in its enum: list
DT_STRING_TOKEN(node_id, prop)
Get a string property’s value as a token.
This removes the quotes from the string and converts non-alphanumeric characters to underscores, which can be useful, for example, when using the value to name a C identifier.
Example devicetree fragment:
n1: node-1 {
prop = "foo";
};
n2: node-2 {
prop = "FOO";
};
n3: node-3 {
prop = "123 foo";
};
properties:
prop:
type: string
Example usage:
DT_STRING_TOKEN(DT_NODELABEL(n1), prop) // foo
DT_STRING_TOKEN(DT_NODELABEL(n2), prop) // FOO
DT_STRING_TOKEN(DT_NODELABEL(n3), prop) // 123_foo
Notice how:
• Unlike C identifiers, the property values may begin with a number. It’s the user’s respon-
sibility not to use such values as the name of a C identifier.
• The uppercased "FOO" in the DTS remains FOO as a token. It is not converted to foo.
• The whitespace in the DTS "123 foo" string is converted to 123_foo as a token.
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
Returns
the value of prop as a token, i.e. without any quotes and with special characters
converted to underscores
DT_STRING_UPPER_TOKEN(node_id, prop)
Like DT_STRING_TOKEN(), but uppercased.
Example devicetree fragment:
n1: node-1 {
prop = "foo";
};
n2: node-2 {
prop = "FOO";
};
n3: node-3 {
prop = "123 foo";
};
properties:
prop:
type: string
Example usage:
DT_STRING_UPPER_TOKEN(DT_NODELABEL(n1), prop) // FOO
DT_STRING_UPPER_TOKEN(DT_NODELABEL(n2), prop) // FOO
DT_STRING_UPPER_TOKEN(DT_NODELABEL(n3), prop) // 123_FOO
Notice how:
• Unlike C identifiers, the property values may begin with a number. It’s the user’s respon-
sibility not to use such values as the name of a C identifier.
• The lowercased "foo" in the DTS becomes FOO as a token, i.e. it is uppercased.
• The whitespace in the DTS "123 foo" string is converted to 123_FOO as a token, i.e. it is
uppercased and whitespace becomes an underscore.
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
Returns
the value of prop as an uppercased token, i.e. without any quotes and with
special characters converted to underscores
DT_STRING_UNQUOTED(node_id, prop)
Get a string property’s value as an unquoted sequence of tokens.
This removes the quotes from the string without any further conversion; it can be useful, for example, for defining floating-point values as string properties.
Example devicetree fragment:
n1: node-1 {
prop = "12.7";
};
n2: node-2 {
prop = "0.5";
};
n3: node-3 {
prop = "A B C";
};
properties:
prop:
type: string
Example usage:
DT_STRING_UNQUOTED(DT_NODELABEL(n1), prop) // 12.7
DT_STRING_UNQUOTED(DT_NODELABEL(n2), prop) // 0.5
DT_STRING_UNQUOTED(DT_NODELABEL(n3), prop) // A B C
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
Returns
the property’s value as a sequence of tokens, with no quotes
DT_STRING_TOKEN_BY_IDX(node_id, prop, idx)
Like DT_STRING_TOKEN(), but for indexed elements of string-array properties.
Example devicetree fragment:
n1: node-1 {
prop = "f1", "F2";
};
n2: node-2 {
prop = "123 foo", "456 FOO";
};
properties:
prop:
type: string-array
Example usage:
DT_STRING_TOKEN_BY_IDX(DT_NODELABEL(n1), prop, 0) // f1
DT_STRING_TOKEN_BY_IDX(DT_NODELABEL(n1), prop, 1) // F2
DT_STRING_TOKEN_BY_IDX(DT_NODELABEL(n2), prop, 0) // 123_foo
DT_STRING_TOKEN_BY_IDX(DT_NODELABEL(n2), prop, 1) // 456_FOO
DT_STRING_UPPER_TOKEN_BY_IDX(node_id, prop, idx)
Like DT_STRING_UPPER_TOKEN(), but for indexed elements of string-array properties.
Example devicetree fragment:
n1: node-1 {
prop = "f1", "F2";
};
n2: node-2 {
prop = "123 foo", "456 FOO";
};
properties:
prop:
type: string-array
Example usage:
DT_STRING_UPPER_TOKEN_BY_IDX(DT_NODELABEL(n1), prop, 0) // F1
DT_STRING_UPPER_TOKEN_BY_IDX(DT_NODELABEL(n1), prop, 1) // F2
DT_STRING_UPPER_TOKEN_BY_IDX(DT_NODELABEL(n2), prop, 0) // 123_FOO
DT_STRING_UPPER_TOKEN_BY_IDX(DT_NODELABEL(n2), prop, 1) // 456_FOO
DT_STRING_UNQUOTED_BY_IDX(node_id, prop, idx)
Like DT_STRING_UNQUOTED(), but for indexed elements of string-array properties.
Example devicetree fragment:
n1: node-1 {
prop = "12.7", "34.1";
};
n2: node-2 {
prop = "A B", "C D";
};
properties:
prop:
type: string-array
Example usage:
DT_STRING_UNQUOTED_BY_IDX(DT_NODELABEL(n1), prop, 0) // 12.7
DT_STRING_UNQUOTED_BY_IDX(DT_NODELABEL(n1), prop, 1) // 34.1
DT_STRING_UNQUOTED_BY_IDX(DT_NODELABEL(n2), prop, 0) // A B
DT_STRING_UNQUOTED_BY_IDX(DT_NODELABEL(n2), prop, 1) // C D
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
• idx – the index to get
Returns
the property’s value as a sequence of tokens, with no quotes
DT_PROP_BY_PHANDLE_IDX(node_id, phs, idx, prop)
Get a property value from a phandle in a property.
This is a shorthand for DT_PROP(DT_PHANDLE_BY_IDX(node_id, phs, idx), prop). That is, prop is a property of the phandle’s node, not a property of node_id.
Example devicetree fragment:
n1: node-1 {
foo = <&n2 &n3>;
};
n2: node-2 {
bar = <42>;
};
n3: node-3 {
baz = <43>;
};
Example usage:
#define N1 DT_NODELABEL(n1)
DT_PROP_BY_PHANDLE_IDX(N1, foo, 0, bar) // 42
DT_PROP_BY_PHANDLE_IDX(N1, foo, 1, baz) // 43
Parameters
• node_id – node identifier
• phs – lowercase-and-underscores property with type phandle, phandles, or
phandle-array
• idx – logical index into phs, which must be zero if phs has type phandle
• prop – lowercase-and-underscores property of the phandle’s node
Returns
the property’s value
DT_PROP_BY_PHANDLE(node_id, ph, prop)
Get a property value from a phandle’s node.
This is equivalent to DT_PROP_BY_PHANDLE_IDX(node_id, ph, 0, prop).
Parameters
• node_id – node identifier
• ph – lowercase-and-underscores property of node_id with type phandle
• prop – lowercase-and-underscores property of the phandle’s node
Returns
the property’s value
DT_PHA_BY_IDX(node_id, pha, idx, cell)
Get a phandle-array specifier cell value at an index.
It might help to read the argument order as being similar to node->phandle_array[index].
cell. That is, the cell value is in the pha property of node_id, inside the specifier at index
idx.
Example devicetree fragment:
gpio0: gpio@abcd1234 {
#gpio-cells = <2>;
};
gpio1: gpio@1234abcd {
#gpio-cells = <2>;
};
led: led_0 {
gpios = <&gpio0 17 0x1>, <&gpio1 5 0x3>;
};
Bindings fragment for the gpio0 and gpio1 nodes:
gpio-cells:
- pin
- flags
• index 0 has specifier <17 0x1>, so its pin cell is 17, and its flags cell is 0x1
• index 1 has specifier <5 0x3>, so pin is 5 and flags is 0x3
Example usage:
DT_PHA_BY_IDX(DT_NODELABEL(led), gpios, 0, pin) // 17
DT_PHA_BY_IDX(DT_NODELABEL(led), gpios, 1, flags) // 0x3
Parameters
• node_id – node identifier
• pha – lowercase-and-underscores property with type phandle-array
• idx – logical index into pha
• cell – lowercase-and-underscores cell name within the specifier at index idx
Returns
the cell’s value
DT_PHA_BY_NAME(node_id, pha, name, cell)
Get a phandle array’s specifier cell value by name.
Example devicetree fragment:
n: node {
io-channels = <&adc1 10>, <&adc2 20>;
io-channel-names = "SENSOR", "BANDGAP";
};
Bindings fragment for the adc1 and adc2 nodes:
io-channel-cells:
- input
Example usage:
DT_PHA_BY_NAME(DT_NODELABEL(n), io_channels, sensor, input) // 10
DT_PHA_BY_NAME(DT_NODELABEL(n), io_channels, bandgap, input) // 20
Parameters
• node_id – node identifier
• pha – lowercase-and-underscores property with type phandle-array
• name – lowercase-and-underscores name of a specifier in pha
• cell – lowercase-and-underscores cell name in the named specifier
Returns
the cell’s value
DT_PHANDLE_BY_NAME(node_id, pha, name)
Get a node identifier for a phandle by name.
Example devicetree fragment:
adc1: adc@abcd1234 {
foobar = "ADC_1";
};
adc2: adc@1234abcd {
foobar = "ADC_2";
};
n: node {
io-channels = <&adc1 10>, <&adc2 20>;
io-channel-names = "SENSOR", "BANDGAP";
};
Example usage:
DT_PROP(DT_PHANDLE_BY_NAME(DT_NODELABEL(n), io_channels, sensor), foobar) // "ADC_1"
DT_PROP(DT_PHANDLE_BY_NAME(DT_NODELABEL(n), io_channels, bandgap), foobar) // "ADC_2"
Notice how devicetree properties and names are lowercased, and non-alphanumeric characters are converted to underscores.
Parameters
• node_id – node identifier
• pha – lowercase-and-underscores property with type phandle-array
• name – lowercase-and-underscores name of an element in pha
Returns
a node identifier for the node with that phandle
DT_PHANDLE_BY_IDX(node_id, prop, idx)
Get a node identifier for a phandle in a property.
When a node’s value at a logical index contains a phandle, this macro returns a node identifier
for the node with that phandle.
Therefore, if prop has type phandle, idx must be zero. (A phandle type is treated as phandles with a fixed length of 1.)
Example devicetree fragment:
n1: node-1 {
foo = <&n2 &n3>;
};
n2: node-2 { ... };
n3: node-3 { ... };
Example usage:
#define N1 DT_NODELABEL(n1)
DT_PHANDLE_BY_IDX(N1, foo, 0) // node identifier for node-2
DT_PHANDLE_BY_IDX(N1, foo, 1) // node identifier for node-3
ranges property Use these APIs instead of Property access to access the ranges property. Because this
property’s semantics are defined by the devicetree specification, these macros can be used even for nodes
without matching bindings. However, they take on special semantics when the node’s binding indicates
it is a PCIe bus node, as defined in the PCI Bus Binding to: IEEE Std 1275-1994 Standard for Boot
(Initialization Configuration) Firmware.
group devicetree-ranges-prop
Defines
DT_NUM_RANGES(node_id)
Get the number of range blocks in the ranges property.
Use this instead of DT_PROP_LEN(node_id, ranges).
Example devicetree fragment:
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 1>;
#address-cells = <3>;
#size-cells = <2>;
other: other@1 {
reg = <1 1>;
Example usage:
DT_NUM_RANGES(DT_NODELABEL(pcie0)) // 3
DT_NUM_RANGES(DT_NODELABEL(other)) // 2
Parameters
• node_id – node identifier
Returns
Number of range blocks in the node’s ranges property.
DT_RANGES_HAS_IDX(node_id, idx)
Is idx a valid range block index?
If this returns 1, then DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(node_id,
idx), DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(node_id, idx)
or DT_RANGES_LENGTH_BY_IDX(node_id, idx) are valid. For
DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(node_id, idx) the return value of
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(node_id, idx) will indicate valid-
ity. If it returns 0, it is an error to use those macros with index idx, including
DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(node_id, idx).
Example devicetree fragment:
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 1>;
#address-cells = <3>;
#size-cells = <2>;
other: other@1 {
reg = <1 1>;
Example usage:
DT_RANGES_HAS_IDX(DT_NODELABEL(pcie0), 0) // 1
DT_RANGES_HAS_IDX(DT_NODELABEL(pcie0), 1) // 1
DT_RANGES_HAS_IDX(DT_NODELABEL(pcie0), 2) // 1
DT_RANGES_HAS_IDX(DT_NODELABEL(pcie0), 3) // 0
DT_RANGES_HAS_IDX(DT_NODELABEL(other), 0) // 1
DT_RANGES_HAS_IDX(DT_NODELABEL(other), 1) // 1
DT_RANGES_HAS_IDX(DT_NODELABEL(other), 2) // 0
DT_RANGES_HAS_IDX(DT_NODELABEL(other), 3) // 0
Parameters
• node_id – node identifier
• idx – index to check
Returns
1 if idx is a valid range block index, 0 otherwise.
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(node_id, idx)
Does a ranges property have child bus flags at index?
If this returns 1, then DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(node_id, idx) is valid. If it
returns 0, it is an error to use this macro with index idx. This macro only returns 1 for PCIe
buses (i.e. nodes whose bindings specify they are “pcie” bus nodes.)
Example devicetree fragment:
parent {
#address-cells = <2>;
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 0 1>;
#address-cells = <3>;
#size-cells = <2>;
other: other@1 {
reg = <0 1 1>;
Example usage:
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(pcie0), 0) // 1
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(pcie0), 1) // 1
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(pcie0), 2) // 1
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(pcie0), 3) // 0
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(other), 0) // 0
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(other), 1) // 0
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(other), 2) // 0
DT_RANGES_HAS_CHILD_BUS_FLAGS_AT_IDX(DT_NODELABEL(other), 3) // 0
Parameters
• node_id – node identifier
• idx – logical index into the ranges array
Returns
1 if idx is a valid child bus flags index, 0 otherwise.
DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(node_id, idx)
Get the ranges property child bus flags at index.
When the node is a PCIe bus, the Child Bus Address has an extra cell used to store some flags; this macro extracts that cell as the Child Bus Flags field.
Example devicetree fragments:
parent {
#address-cells = <2>;
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 0 1>;
#address-cells = <3>;
#size-cells = <2>;
Example usage:
DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(DT_NODELABEL(pcie0), 0) // 0x1000000
DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(DT_NODELABEL(pcie0), 1) // 0x2000000
DT_RANGES_CHILD_BUS_FLAGS_BY_IDX(DT_NODELABEL(pcie0), 2) // 0x3000000
Parameters
• node_id – node identifier
• idx – logical index into the ranges array
Returns
range child bus flags field at idx
DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(node_id, idx)
Get the ranges property child bus address at index.
When the node is a PCIe bus, the Child Bus Address has an extra cell used to store some flags; that cell is therefore not part of the address returned by this macro.
Example devicetree fragment:
parent {
#address-cells = <2>;
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 0 1>;
#address-cells = <3>;
#size-cells = <2>;
other: other@1 {
reg = <0 1 1>;
Example usage:
DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(DT_NODELABEL(pcie0), 0) // 0
DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(DT_NODELABEL(pcie0), 1) // 0x10000000
DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(DT_NODELABEL(pcie0), 2) // 0x8000000000
DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(DT_NODELABEL(other), 0) // 0
DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(DT_NODELABEL(other), 1) // 0x10000000
Parameters
• node_id – node identifier
• idx – logical index into the ranges array
Returns
range child bus address field at idx
DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(node_id, idx)
Get the ranges property parent bus address at index.
Similarly to DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(), this properly accounts for child
bus flags cells when the node is a PCIe bus.
Example devicetree fragment:
parent {
#address-cells = <2>;
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 0 1>;
#address-cells = <3>;
#size-cells = <2>;
other: other@1 {
reg = <0 1 1>;
Example usage:
DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(DT_NODELABEL(pcie0), 0) // 0x3eff0000
DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(DT_NODELABEL(pcie0), 1) // 0x10000000
DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(DT_NODELABEL(pcie0), 2) // 0x8000000000
DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(DT_NODELABEL(other), 0) // 0x3eff0000
DT_RANGES_PARENT_BUS_ADDRESS_BY_IDX(DT_NODELABEL(other), 1) // 0x10000000
Parameters
• node_id – node identifier
• idx – logical index into the ranges array
Returns
range parent bus address field at idx
DT_RANGES_LENGTH_BY_IDX(node_id, idx)
Get the ranges property length at index.
Similarly to DT_RANGES_CHILD_BUS_ADDRESS_BY_IDX(), this properly accounts for child
bus flags cells when the node is a PCIe bus.
Example devicetree fragment:
parent {
#address-cells = <2>;
pcie0: pcie@0 {
compatible = "intel,pcie";
reg = <0 0 1>;
#address-cells = <3>;
#size-cells = <2>;
other: other@1 {
reg = <0 1 1>;
Example usage:
DT_RANGES_LENGTH_BY_IDX(DT_NODELABEL(pcie0), 0) // 0x10000
DT_RANGES_LENGTH_BY_IDX(DT_NODELABEL(pcie0), 1) // 0x2eff0000
DT_RANGES_LENGTH_BY_IDX(DT_NODELABEL(pcie0), 2) // 0x8000000000
DT_RANGES_LENGTH_BY_IDX(DT_NODELABEL(other), 0) // 0x10000
DT_RANGES_LENGTH_BY_IDX(DT_NODELABEL(other), 1) // 0x2eff0000
Parameters
• node_id – node identifier
• idx – logical index into the ranges array
Returns
range length field at idx
DT_FOREACH_RANGE(node_id, fn)
Invokes fn for each entry of the ranges property of node_id.
The macro fn must take two parameters: node_id, the node identifier of the node with the ranges property, and idx, the index of the range block.
Example devicetree fragment:
n: node@0 {
reg = <0 0 1>;
Example usage:
Parameters
• node_id – node identifier
• fn – macro to invoke
reg property Use these APIs instead of Property access to access the reg property. Because this prop-
erty’s semantics are defined by the devicetree specification, these macros can be used even for nodes
without matching bindings.
group devicetree-reg-prop
Defines
DT_NUM_REGS(node_id)
Get the number of register blocks in the reg property.
Use this instead of DT_PROP_LEN(node_id, reg).
Parameters
• node_id – node identifier
Returns
Number of register blocks in the node’s “reg” property.
DT_REG_HAS_IDX(node_id, idx)
Is idx a valid register block index?
If this returns 1, then DT_REG_ADDR_BY_IDX(node_id, idx) or DT_REG_SIZE_BY_IDX(node_id,
idx) are valid. If it returns 0, it is an error to use those macros with index idx.
Parameters
• node_id – node identifier
• idx – index to check
Returns
1 if idx is a valid register block index, 0 otherwise.
DT_REG_ADDR_BY_IDX(node_id, idx)
Get the base address of the register block at index idx.
Parameters
• node_id – node identifier
• idx – index of the register whose address to return
Returns
address of the idx-th register block
DT_REG_SIZE_BY_IDX(node_id, idx)
Get the size of the register block at index idx.
This is the size of an individual register block, not the total number of register blocks in the
property; use DT_NUM_REGS() for that.
Parameters
• node_id – node identifier
• idx – index of the register whose size to return
Returns
size of the idx-th register block
DT_REG_ADDR(node_id)
Get a node’s (only) register block address.
Equivalent to DT_REG_ADDR_BY_IDX(node_id, 0).
Parameters
• node_id – node identifier
Returns
node’s register block address
DT_REG_SIZE(node_id)
Get a node’s (only) register block size.
Equivalent to DT_REG_SIZE_BY_IDX(node_id, 0).
Parameters
• node_id – node identifier
Returns
node’s only register block’s size
DT_REG_ADDR_BY_NAME(node_id, name)
Get a register block’s base address by name.
Parameters
• node_id – node identifier
• name – lowercase-and-underscores register specifier name
Returns
address of the register block specified by name
DT_REG_SIZE_BY_NAME(node_id, name)
Get a register block’s size by name.
Parameters
• node_id – node identifier
• name – lowercase-and-underscores register specifier name
Returns
size of the register block specified by name
interrupts property Use these APIs instead of Property access to access the interrupts property.
Because this property’s semantics are defined by the devicetree specification, some of these macros can
be used even for nodes without matching bindings. This does not apply to macros which take cell names
as arguments.
group devicetree-interrupts-prop
Defines
DT_NUM_IRQS(node_id)
Get the number of interrupt sources for the node.
Use this instead of DT_PROP_LEN(node_id, interrupts).
Parameters
• node_id – node identifier
Returns
Number of interrupt specifiers in the node’s “interrupts” property.
DT_IRQ_HAS_IDX(node_id, idx)
Is idx a valid interrupt index?
If this returns 1, then DT_IRQ_BY_IDX(node_id, idx) is valid. If it returns 0, it is an error to
use that macro with this index.
Parameters
• node_id – node identifier
• idx – index to check
Returns
1 if idx is a valid index into the node’s interrupts property, 0 otherwise.
DT_IRQ_BY_IDX(node_id, idx, cell)
Get a value within an interrupt specifier at an index.
It might help to read the argument order as node->interrupts[index].cell.
Example devicetree fragment:
my-serial: serial@abcd1234 {
interrupts = < 33 0 >, < 34 1 >;
};
Assuming the node’s interrupt domain has “#interrupt-cells = <2>;” and the individual cells in each interrupt specifier are named “irq” and “priority” by the node’s binding, here are some examples:
DT_IRQ_BY_IDX(DT_NODELABEL(my_serial), 0, irq) // 33
DT_IRQ_BY_IDX(DT_NODELABEL(my_serial), 0, priority) // 0
DT_IRQ_BY_IDX(DT_NODELABEL(my_serial), 1, irq) // 34
DT_IRQ_BY_IDX(DT_NODELABEL(my_serial), 1, priority) // 1
Parameters
• node_id – node identifier
• idx – logical index into the interrupt specifier array
• cell – cell name specifier
Returns
the named value at the specifier given by the index
For-each macros The following “generic” for-each macros can be used to iterate over devicetree nodes and property elements; for example, DT_FOREACH_CHILD() iterates over the children of a devicetree node.
There are special-purpose for-each macros, like DT_INST_FOREACH_STATUS_OKAY() , but these require
DT_DRV_COMPAT to be defined before use.
group devicetree-generic-foreach
Defines
DT_FOREACH_NODE(fn)
Invokes fn for every node in the tree.
The macro fn must take one parameter, which will be a node identifier. The macro is expanded
once for each node in the tree. The order that nodes are visited in is not specified.
Parameters
• fn – macro to invoke
DT_FOREACH_STATUS_OKAY_NODE(fn)
Invokes fn for every status okay node in the tree.
The macro fn must take one parameter, which will be a node identifier. The macro is expanded
once for each node in the tree with status okay (as usual, a missing status property is treated
as status okay). The order that nodes are visited in is not specified.
Parameters
• fn – macro to invoke
DT_FOREACH_CHILD(node_id, fn)
Invokes fn for each child of node_id.
The macro fn must take one parameter, which will be the node identifier of a child node of
node_id.
The children will be iterated over in the same order as they appear in the final devicetree.
Example devicetree fragment:
n: node {
child-1 {
foobar = "foo";
};
child-2 {
foobar = "bar";
};
};
Example usage:
#define FOOBAR_AND_COMMA(node_id) DT_PROP(node_id, foobar),
const char *child_foobars[] = {
	DT_FOREACH_CHILD(DT_NODELABEL(n), FOOBAR_AND_COMMA)
};
// child_foobars is now: { "foo", "bar" }
Parameters
• node_id – node identifier
• fn – macro to invoke
DT_FOREACH_CHILD_SEP(node_id, fn, sep)
Invokes fn for each child of node_id with a separator.
The macro fn must take one parameter, which will be the node identifier of a child node of node_id.
Example devicetree fragment:
n: node {
child-1 {
...
};
child-2 {
...
};
};
Example usage:
const char *child_names[] = {
	DT_FOREACH_CHILD_SEP(DT_NODELABEL(n), DT_NODE_FULL_NAME, (,))
};
// child_names is now: { "child-1", "child-2" }
Parameters
• node_id – node identifier
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
DT_FOREACH_CHILD_VARGS(node_id, fn, ...)
Invokes fn for each child of node_id with multiple arguments.
The macro fn takes multiple arguments. The first should be the node identifier for the child node; the remaining are taken from the macro’s variable arguments.
See also:
DT_FOREACH_CHILD
Parameters
• node_id – node identifier
• fn – macro to invoke
• ... – variable number of arguments to pass to fn
DT_FOREACH_CHILD_SEP_VARGS(node_id, fn, sep, ...)
Invokes fn for each child of node_id with separator and multiple arguments.
See also:
DT_FOREACH_CHILD_VARGS
Parameters
• node_id – node identifier
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – variable number of arguments to pass to fn
DT_FOREACH_CHILD_STATUS_OKAY(node_id, fn)
Call fn on the child nodes with status okay
The macro fn should take one argument, which is the node identifier for the child node.
As usual, both a missing status and an ok status are treated as okay.
The children will be iterated over in the same order as they appear in the final devicetree.
Parameters
• node_id – node identifier
• fn – macro to invoke
DT_FOREACH_CHILD_STATUS_OKAY_SEP(node_id, fn, sep)
Call fn on the child nodes with status okay with separator.
The macro fn should take one argument, which is the node identifier for the child node.
As usual, both a missing status and an ok status are treated as okay.
See also:
DT_FOREACH_CHILD_STATUS_OKAY
Parameters
• node_id – node identifier
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
DT_FOREACH_CHILD_STATUS_OKAY_VARGS(node_id, fn, ...)
Call fn on the child nodes with status okay with multiple arguments.
As usual, both a missing status and an ok status are treated as okay.
See also:
DT_FOREACH_CHILD_STATUS_OKAY
Parameters
• node_id – node identifier
• fn – macro to invoke
• ... – variable number of arguments to pass to fn
DT_FOREACH_CHILD_STATUS_OKAY_SEP_VARGS(node_id, fn, sep, ...)
Call fn on the child nodes with status okay with separator and multiple arguments.
See also:
DT_FOREACH_CHILD_SEP_STATUS_OKAY
Parameters
• node_id – node identifier
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – variable number of arguments to pass to fn
DT_FOREACH_PROP_ELEM(node_id, prop, fn)
Invokes fn for each element in the value of property prop.
The macro fn must take three parameters: node_id, the node identifier; prop, the property name; and idx, the logical index into prop. The macro is expanded once for each element, in order of increasing idx.
Example devicetree fragment:
n: node {
my-ints = <1 2 3>;
};
Example usage:
#define TIMES_TWO(node_id, prop, idx) \
	(2 * DT_PROP_BY_IDX(node_id, prop, idx)),
int array[] = {
DT_FOREACH_PROP_ELEM(DT_NODELABEL(n), my_ints, TIMES_TWO)
};
This expands to:
int array[] = {
(2 * 1), (2 * 2), (2 * 3),
};
See also:
DT_PROP_LEN
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
• fn – macro to invoke
DT_FOREACH_PROP_ELEM_SEP(node_id, prop, fn, sep)
Invokes fn for each element in the value of property prop with a separator.
Example devicetree fragment:
n: node {
my-gpios = <&gpioa 0 GPIO_ACTIVE_HIGH>,
<&gpiob 1 GPIO_ACTIVE_HIGH>;
};
Example usage:
struct gpio_dt_spec specs[] = {
	DT_FOREACH_PROP_ELEM_SEP(DT_NODELABEL(n), my_gpios, GPIO_DT_SPEC_GET_BY_IDX, (,))
};
The prop parameter has the same restrictions as the same parameter given to
DT_FOREACH_PROP_ELEM().
See also:
DT_FOREACH_PROP_ELEM
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
DT_FOREACH_PROP_ELEM_VARGS(node_id, prop, fn, ...)
Invokes fn for each element in the value of property prop with multiple arguments.
See also:
DT_FOREACH_PROP_ELEM
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
• fn – macro to invoke
• ... – variable number of arguments to pass to fn
DT_FOREACH_PROP_ELEM_SEP_VARGS(node_id, prop, fn, sep, ...)
Invokes fn for each element in the value of property prop with multiple arguments and a separator.
See also:
DT_FOREACH_PROP_ELEM_VARGS
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – variable number of arguments to pass to fn
DT_FOREACH_STATUS_OKAY(compat, fn)
Invokes fn for each status okay node of a compatible.
This macro expands to:
fn(node_id_1) fn(node_id_2) ... fn(node_id_n)
where each node_id_<i> is a node identifier for some node with compatible compat and status okay. Whitespace is added between expansions as shown above.
Example devicetree fragment:
/ {
a {
compatible = "foo";
status = "okay";
};
b {
compatible = "foo";
status = "disabled";
};
c {
compatible = "foo";
};
};
Example usage:
DT_FOREACH_STATUS_OKAY(foo, DT_NODE_PATH)
This expands to one of the following:
"/a" "/c"
"/c" "/a"
“One of the following” is because no guarantees are made about the order that node identifiers
are passed to fn in the expansion.
(The /c string literal is present because a missing status property is always treated as if the
status were set to okay.)
Note also that fn is responsible for adding commas, semicolons, or other terminators as
needed.
Parameters
• compat – lowercase-and-underscores devicetree compatible
• fn – Macro to call for each enabled node. Must accept a node_id as its only
parameter.
DT_FOREACH_STATUS_OKAY_VARGS(compat, fn, ...)
Invokes fn for each status okay node of a compatible with multiple arguments.
This is like DT_FOREACH_STATUS_OKAY() except you can also pass additional arguments to
fn.
Example devicetree fragment:
/ {
a {
compatible = "foo";
val = <3>;
};
b {
compatible = "foo";
val = <4>;
};
};
Example usage:
#define MY_FN(node_id, operator) DT_PROP(node_id, val) operator
x = DT_FOREACH_STATUS_OKAY_VARGS(foo, MY_FN, +) 0;
This expands to one of the following:
x = 3 + 4 + 0;
x = 4 + 3 + 0;
Existence checks This section documents miscellaneous macros that can be used to test if a node
exists, how many nodes of a certain type exist, whether a node has certain properties, etc. Some macros
used for special purposes (such as DT_IRQ_HAS_IDX() and all macros which require DT_DRV_COMPAT) are
documented elsewhere on this page.
group devicetree-generic-exist
Defines
DT_NODE_EXISTS(node_id)
Does a node identifier refer to a node?
Tests whether a node identifier refers to a node which exists, i.e. is defined in the devicetree.
It doesn’t matter whether or not the node has a matching binding, or what the node’s status
value is. This is purely a check of whether the node exists at all.
Parameters
• node_id – a node identifier
Returns
1 if the node identifier refers to a node, 0 otherwise.
DT_NODE_HAS_STATUS(node_id, status)
Does a node identifier refer to a node with a status?
Example uses:
Parameters
• node_id – a node identifier
• status – a status as one of the tokens okay or disabled, not a string
Returns
1 if the node has the given status, 0 otherwise.
DT_HAS_COMPAT_STATUS_OKAY(compat)
Does the devicetree have a status okay node with a compatible?
Test for whether the devicetree has any nodes with status okay and the given compatible. That
is, this returns 1 if and only if there is at least one node_id for which both of these expressions
return 1:
DT_NODE_HAS_STATUS(node_id, okay)
DT_NODE_HAS_COMPAT(node_id, compat)
Parameters
• compat – lowercase-and-underscores compatible, without quotes
Returns
1 if both of these expressions return 1 for at least one node, 0 otherwise
DT_NODE_HAS_COMPAT(node_id, compat)
Does a devicetree node have a compatible?
Example devicetree fragment:
n: node {
compatible = "vnd,specific-device", "generic-device";
};
Example usages which evaluate to 1:
DT_NODE_HAS_COMPAT(DT_NODELABEL(n), vnd_specific_device)
DT_NODE_HAS_COMPAT(DT_NODELABEL(n), generic_device)
This macro only uses the value of the compatible property. Whether or not a particular com-
patible has a matching binding has no effect on its value, nor does the node’s status.
Parameters
• node_id – node identifier
• compat – lowercase-and-underscores compatible, without quotes
Returns
1 if the node’s compatible property contains compat, 0 otherwise.
DT_NODE_HAS_COMPAT_STATUS(node_id, compat, status)
Does a devicetree node have a compatible and status?
This is equivalent to:
DT_NODE_HAS_COMPAT(node_id, compat) && DT_NODE_HAS_STATUS(node_id, status)
Parameters
• node_id – node identifier
• compat – lowercase-and-underscores compatible, without quotes
• status – okay or disabled as a token, not a string
DT_NODE_HAS_PROP(node_id, prop)
Does a devicetree node have a property?
Tests whether a devicetree node has a property defined.
This tests whether the property is defined at all, not whether a boolean property is true or
false. To get a boolean property’s truth value, use DT_PROP(node_id, prop) instead.
Parameters
• node_id – node identifier
• prop – lowercase-and-underscores property name
Returns
1 if the node has the property, 0 otherwise.
DT_PHA_HAS_CELL_AT_IDX(node_id, pha, idx, cell)
Does a phandle array have a named cell specifier at an index?
If this returns 1, then the phandle-array property pha has a cell named cell at index idx, and
therefore DT_PHA_BY_IDX(node_id,pha, idx, cell) is valid. If it returns 0, it’s an error to use
DT_PHA_BY_IDX() with the same arguments.
Parameters
• node_id – node identifier
• pha – lowercase-and-underscores property with type phandle-array
• idx – index to check within pha
• cell – lowercase-and-underscores cell name whose existence to check at index
idx
Returns
1 if the named cell exists in the specifier at index idx, 0 otherwise.
DT_PHA_HAS_CELL(node_id, pha, cell)
Equivalent to DT_PHA_HAS_CELL_AT_IDX(node_id, pha, 0, cell)
Parameters
• node_id – node identifier
• pha – lowercase-and-underscores property with type phandle-array
• cell – lowercase-and-underscores cell name whose existence to check at index
0
Returns
1 if the named cell exists in the specifier at index 0, 0 otherwise.
Inter-node dependencies The devicetree.h API has some support for tracking dependencies between
nodes. Dependency tracking relies on a binary “depends on” relation between devicetree nodes, which
is defined as the transitive closure of the following “directly depends on” relation:
• every non-root node directly depends on its parent node
• a node directly depends on any nodes its properties refer to by phandle
• a node directly depends on its interrupt-parent if it has an interrupts property
A dependency ordering of a devicetree is a list of its nodes, where each node n appears earlier in the list
than any nodes that depend on n. A node’s dependency ordinal is then its zero-based index in that list.
Thus, for two distinct devicetree nodes n1 and n2 with dependency ordinals d1 and d2, we have:
• d1 != d2
• if n1 depends on n2, then d1 > d2
• d1 > d2 does not necessarily imply that n1 depends on n2
The Zephyr build system chooses a dependency ordering of the final devicetree and assigns a dependency
ordinal to each node. Dependency related information can be accessed using the following macros. The
exact dependency ordering chosen is an implementation detail, but cyclic dependencies are detected and
cause errors, so it’s safe to assume there are none when using these macros.
There are instance number-based conveniences as well; see DT_INST_DEP_ORD() and subsequent docu-
mentation.
group devicetree-dep-ord
Defines
DT_DEP_ORD(node_id)
Get a node’s dependency ordinal.
Parameters
• node_id – Node identifier
Returns
the node’s dependency ordinal as an integer literal
DT_REQUIRES_DEP_ORDS(node_id)
Get a list of dependency ordinals of a node’s direct dependencies.
There is a comma after each ordinal in the expansion, including the last one.
The one case DT_REQUIRES_DEP_ORDS() expands to nothing is when given the root node
identifier DT_ROOT as argument. The root has no direct dependencies; every other node at
least depends on its parent.
Parameters
• node_id – Node identifier
Returns
a list of dependency ordinals, with each ordinal followed by a comma (,), or an
empty expansion
DT_SUPPORTS_DEP_ORDS(node_id)
Get a list of dependency ordinals of what depends directly on a node.
There is a comma after each ordinal in the expansion, including the last one.
Parameters
• node_id – Node identifier
Returns
a list of dependency ordinals, with each ordinal followed by a comma (,), or an
empty expansion
DT_INST_DEP_ORD(inst)
Get a DT_DRV_COMPAT instance’s dependency ordinal.
Equivalent to DT_DEP_ORD(DT_DRV_INST(inst)).
Parameters
• inst – instance number
Returns
The instance’s dependency ordinal
DT_INST_REQUIRES_DEP_ORDS(inst)
Get a list of dependency ordinals of a DT_DRV_COMPAT instance’s direct dependencies.
Equivalent to DT_REQUIRES_DEP_ORDS(DT_DRV_INST(inst)).
Parameters
• inst – instance number
Returns
a list of dependency ordinals for the nodes the instance depends on directly
DT_INST_SUPPORTS_DEP_ORDS(inst)
Get a list of dependency ordinals of what depends directly on a DT_DRV_COMPAT instance.
Equivalent to DT_SUPPORTS_DEP_ORDS(DT_DRV_INST(inst)).
Parameters
• inst – instance number
Returns
a list of node identifiers for the nodes that depend directly on the instance
Bus helpers Zephyr’s devicetree bindings language supports a bus: key which allows bindings to
declare that nodes with a given compatible describe system buses. In this case, child nodes are considered
to be on a bus of the given type, and the following APIs may be used.
group devicetree-generic-bus
Defines
DT_BUS(node_id)
Node’s bus controller.
Get the node identifier of the node’s bus controller. This can be used with DT_PROP() to get
properties of the bus controller.
It is an error to use this with nodes which do not have bus controllers.
Example devicetree fragment:
i2c@deadbeef {
status = "okay";
clock-frequency = < 100000 >;
i2c_device: accelerometer@12 {
...
};
};
Example usage:
DT_PROP(DT_BUS(DT_NODELABEL(i2c_device)), clock_frequency) // 100000
Parameters
• node_id – node identifier
Returns
a node identifier for the node’s bus controller
DT_BUS_LABEL(node_id)
Node’s bus controller’s label property.
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_BUS(node)) .
Parameters
• node_id – node identifier
Returns
the label property of the node's bus controller, DT_BUS(node)
DT_ON_BUS(node_id, bus)
Is a node on a bus of a given type?
Example devicetree overlay:
&i2c0 {
temp: temperature-sensor@76 {
compatible = "vnd,some-sensor";
reg = <0x76>;
};
};
Example usage, assuming i2c0 is an I2C bus controller node, and therefore temp is on an I2C
bus:
DT_ON_BUS(DT_NODELABEL(temp), i2c) // 1
DT_ON_BUS(DT_NODELABEL(temp), spi) // 0
Parameters
• node_id – node identifier
• bus – lowercase-and-underscores bus type as a C token (i.e. without quotes)
Returns
1 if the node is on a bus of the given type, 0 otherwise
Instance-based APIs These are recommended for use within device drivers. To use them, define
DT_DRV_COMPAT to the lowercase-and-underscores compatible the device driver implements support for.
Here is an example devicetree fragment:
serial@40001000 {
compatible = "vnd,serial";
status = "okay";
current-speed = <115200>;
};
Example usage, assuming serial@40001000 is the only enabled node with compatible vnd,serial:
#define DT_DRV_COMPAT vnd_serial
DT_DRV_INST(0) // node identifier for serial@40001000
DT_INST_PROP(0, current_speed) // 115200
Warning: Be careful making assumptions about instance numbers. See DT_INST() for the API
guarantees.
As shown above, the DT_INST_* APIs are conveniences for addressing nodes by instance num-
ber. They are almost all defined in terms of one of the Generic APIs. The equivalent generic
API can be found by removing INST_ from the macro name. For example, DT_INST_PROP(inst,
prop) is equivalent to DT_PROP(DT_DRV_INST(inst), prop). Similarly, DT_INST_REG_ADDR(inst)
is equivalent to DT_REG_ADDR(DT_DRV_INST(inst)), and so on. There are some exceptions:
DT_ANY_INST_ON_BUS_STATUS_OKAY() and DT_INST_FOREACH_STATUS_OKAY() are special-purpose
helpers without straightforward generic equivalents.
Since DT_DRV_INST() requires DT_DRV_COMPAT to be defined, it’s an error to use any of these without
that macro defined.
Note that there are also helpers available for specific hardware; these are documented in Hardware
specific APIs.
group devicetree-inst
Defines
DT_DRV_INST(inst)
Node identifier for an instance of a DT_DRV_COMPAT compatible.
Parameters
• inst – instance number
Returns
a node identifier for the node with DT_DRV_COMPAT compatible and instance num-
ber inst
DT_INST_PARENT(inst)
Get a DT_DRV_COMPAT parent’s node identifier.
See also:
DT_PARENT
Parameters
• inst – instance number
Returns
a node identifier for the instance’s parent
DT_INST_GPARENT(inst)
Get a DT_DRV_COMPAT grandparent’s node identifier.
See also:
DT_GPARENT
Parameters
• inst – instance number
Returns
a node identifier for the instance’s grandparent
DT_INST_CHILD(inst, child)
Get a node identifier for a child node of DT_DRV_INST(inst)
See also:
DT_CHILD
Parameters
• inst – instance number
• child – lowercase-and-underscores child node name
Returns
node identifier for the node with the name referred to by ‘child’
DT_INST_FOREACH_CHILD(inst, fn)
Call fn on all child nodes of DT_DRV_INST(inst).
The macro fn should take one argument, which is the node identifier for the child node.
The children will be iterated over in the same order as they appear in the final devicetree.
See also:
DT_FOREACH_CHILD
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
DT_INST_FOREACH_CHILD_SEP(inst, fn, sep)
Call fn on all child nodes of DT_DRV_INST(inst) with a separator.
The macro fn should take one argument, which is the node identifier for the child node.
See also:
DT_FOREACH_CHILD_SEP
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
DT_INST_FOREACH_CHILD_VARGS(inst, fn, ...)
Call fn on all child nodes of DT_DRV_INST(inst) with multiple arguments.
See also:
DT_FOREACH_CHILD
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
• ... – variable number of arguments to pass to fn
DT_INST_FOREACH_CHILD_SEP_VARGS(inst, fn, sep, ...)
Call fn on all child nodes of DT_DRV_INST(inst) with a separator and multiple arguments.
See also:
DT_FOREACH_CHILD_SEP_VARGS
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – variable number of arguments to pass to fn
DT_INST_FOREACH_CHILD_STATUS_OKAY(inst, fn)
Call fn on all child nodes of DT_DRV_INST(inst) with status okay.
The macro fn should take one argument, which is the node identifier for the child node.
See also:
DT_FOREACH_CHILD_STATUS_OKAY
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
DT_INST_FOREACH_CHILD_STATUS_OKAY_SEP(inst, fn, sep)
Call fn on all child nodes of DT_DRV_INST(inst) with status okay, with a separator.
See also:
DT_FOREACH_CHILD_STATUS_OKAY_SEP
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
DT_INST_FOREACH_CHILD_STATUS_OKAY_VARGS(inst, fn, ...)
Call fn on all child nodes of DT_DRV_INST(inst) with status okay, with multiple arguments.
See also:
DT_FOREACH_CHILD_STATUS_OKAY_VARGS
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
• ... – variable number of arguments to pass to fn
DT_INST_FOREACH_CHILD_STATUS_OKAY_SEP_VARGS(inst, fn, sep, ...)
Call fn on all child nodes of DT_DRV_INST(inst) with status okay, with a separator and multiple
arguments.
See also:
DT_FOREACH_CHILD_STATUS_OKAY_SEP_VARGS
Parameters
• inst – instance number
• fn – macro to invoke on each child node identifier
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – variable number of arguments to pass to fn
DT_INST_ENUM_IDX(inst, prop)
Get a DT_DRV_COMPAT value’s index into its enumeration values.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
zero-based index of the property’s value in its enum: list
DT_INST_ENUM_IDX_OR(inst, prop, default_idx_value)
Like DT_INST_ENUM_IDX(), but with a fallback to a default enum index.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• default_idx_value – a fallback index value to expand to
Returns
zero-based index of the property’s value in its enum if present, default_idx_value
otherwise
DT_INST_ENUM_HAS_VALUE(inst, prop, value)
Does a DT_DRV_COMPAT enumeration property have a given value?
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• value – lowercase-and-underscores enumeration value
Returns
1 if the node property has the value value, 0 otherwise.
DT_INST_PROP(inst, prop)
Get a DT_DRV_COMPAT instance property.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
a representation of the property’s value
DT_INST_PROP_LEN(inst, prop)
Get a DT_DRV_COMPAT property length.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
logical length of the property
DT_INST_PROP_HAS_IDX(inst, prop, idx)
Is index idx valid for an array type property on a DT_DRV_COMPAT instance?
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• idx – index to check
Returns
1 if idx is a valid index into the given property, 0 otherwise
DT_INST_LABEL(inst)
Get a DT_DRV_COMPAT instance's label property.
Deprecated:
Use DT_INST_PROP(inst, label)
Parameters
• inst – instance number
Returns
instance's label property value
DT_INST_STRING_TOKEN(inst, prop)
Get a DT_DRV_COMPAT instance’s string property’s value as a token.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
the value of prop as a token, i.e. without any quotes and with special characters
converted to underscores
DT_INST_STRING_UPPER_TOKEN(inst, prop)
Like DT_INST_STRING_TOKEN(), but uppercased.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
the value of prop as an uppercased token, i.e. without any quotes and with
special characters converted to underscores
DT_INST_STRING_UNQUOTED(inst, prop)
Get a DT_DRV_COMPAT instance’s string property’s value as an unquoted sequence of tokens.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
the value of prop as a sequence of tokens, with no quotes
DT_INST_STRING_TOKEN_BY_IDX(inst, prop, idx)
Get an element out of string-array property as a token.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• idx – the index to get
Returns
the element in prop at index idx as a token
DT_INST_STRING_UPPER_TOKEN_BY_IDX(inst, prop, idx)
Like DT_INST_STRING_TOKEN_BY_IDX(), but uppercased.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• idx – the index to get
Returns
the element in prop at index idx as an uppercased token
DT_INST_PHA_BY_NAME_OR(inst, pha, name, cell, default_value)
Like DT_INST_PHA_BY_NAME(), but with a fallback to default_value.
Parameters
• inst – instance number
• pha – lowercase-and-underscores property with type phandle-array
• name – lowercase-and-underscores name of an element in pha
• cell – lowercase-and-underscores cell name
• default_value – a fallback value to expand to
Returns
DT_INST_PHA_BY_NAME(inst, pha, name, cell) or default_value
DT_INST_PHANDLE_BY_NAME(inst, pha, name)
Get a DT_DRV_COMPAT instance’s phandle node identifier from a phandle array by name.
Parameters
• inst – instance number
• pha – lowercase-and-underscores property with type phandle-array
• name – lowercase-and-underscores name of an element in pha
Returns
node identifier for the phandle at the element named “name”
DT_INST_PHANDLE_BY_IDX(inst, prop, idx)
Get a DT_DRV_COMPAT instance’s node identifier for a phandle in a property.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name in inst with type phandle,
phandles or phandle-array
• idx – index into prop
Returns
a node identifier for the phandle at index idx in prop
DT_INST_PHANDLE(inst, prop)
Get a DT_DRV_COMPAT instance’s node identifier for a phandle property’s value.
Parameters
• inst – instance number
• prop – lowercase-and-underscores property of inst with type phandle
Returns
a node identifier for the node pointed to by prop
DT_INST_REG_HAS_IDX(inst, idx)
Is idx a valid register block index on a DT_DRV_COMPAT instance?
Parameters
• inst – instance number
• idx – index to check
Returns
1 if idx is a valid register block index, 0 otherwise.
DT_INST_REG_ADDR_BY_IDX(inst, idx)
Get a DT_DRV_COMPAT instance’s idx-th register block’s address.
Parameters
• inst – instance number
• idx – index of the register whose address to return
Returns
address of the instance’s idx-th register block
DT_INST_REG_SIZE_BY_IDX(inst, idx)
Get a DT_DRV_COMPAT instance’s idx-th register block’s size.
Parameters
• inst – instance number
• idx – index of the register whose size to return
Returns
size of the instance’s idx-th register block
DT_INST_REG_ADDR_BY_NAME(inst, name)
Get a DT_DRV_COMPAT’s register block address by name.
Parameters
• inst – instance number
• name – lowercase-and-underscores register specifier name
Returns
address of the register block with the given name
DT_INST_REG_SIZE_BY_NAME(inst, name)
Get a DT_DRV_COMPAT’s register block size by name.
Parameters
• inst – instance number
• name – lowercase-and-underscores register specifier name
Returns
size of the register block with the given name
DT_INST_REG_ADDR(inst)
Get a DT_DRV_COMPAT’s (only) register block address.
Parameters
• inst – instance number
Returns
instance’s register block address
DT_INST_REG_SIZE(inst)
Get a DT_DRV_COMPAT’s (only) register block size.
Parameters
• inst – instance number
Returns
instance’s register block size
DT_INST_IRQ_BY_IDX(inst, idx, cell)
Get a DT_DRV_COMPAT interrupt specifier value at an index.
Parameters
• inst – instance number
• idx – logical index into the interrupt specifier array
• cell – cell name specifier
Returns
the named value at the specifier given by the index
DT_INST_BUS(inst)
Node identifier for the bus controller of a DT_DRV_COMPAT instance.
Parameters
• inst – instance number
Returns
a node identifier for the instance's bus controller
DT_INST_BUS_LABEL(inst)
Get a DT_DRV_COMPAT instance's bus controller's label property.
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_INST_BUS(inst)) .
Parameters
• inst – instance number
Returns
the label property of the instance’s bus controller
DT_INST_ON_BUS(inst, bus)
Test if a DT_DRV_COMPAT’s bus type is a given type.
Parameters
• inst – instance number
• bus – a binding's bus type as a C token, lowercased and without quotes
Returns
1 if the instance is on a bus of the given type, 0 otherwise
DT_ANY_INST_ON_BUS_STATUS_OKAY(bus)
Test if any DT_DRV_COMPAT node with status okay is on a bus of the given type.
Example devicetree overlay:
&i2c0 {
temp: temperature-sensor@76 {
compatible = "vnd,some-sensor";
reg = <0x76>;
};
};
Example usage, assuming i2c0 is an I2C bus controller node, and therefore temp is on an I2C
bus:
DT_ANY_INST_ON_BUS_STATUS_OKAY(i2c) // 1
Parameters
• bus – a binding’s bus type as a C token, lowercased and without quotes
Returns
1 if any enabled node with that compatible is on that bus type, 0 otherwise
DT_ANY_INST_HAS_PROP_STATUS_OKAY(prop)
Check if any DT_DRV_COMPAT node with status okay has a given property.
&i2c0 {
sensor0: sensor@0 {
compatible = "vnd,some-sensor";
status = "okay";
reg = <0>;
foo = <1>;
bar = <2>;
};
sensor1: sensor@1 {
compatible = "vnd,some-sensor";
status = "okay";
reg = <1>;
foo = <2>;
};
sensor2: sensor@2 {
compatible = "vnd,some-sensor";
status = "disabled";
reg = <2>;
baz = <1>;
};
};
Example usage:
DT_ANY_INST_HAS_PROP_STATUS_OKAY(foo) // 1
DT_ANY_INST_HAS_PROP_STATUS_OKAY(bar) // 1
DT_ANY_INST_HAS_PROP_STATUS_OKAY(baz) // 0
Parameters
• prop – lowercase-and-underscores property name
Returns
1 if any enabled node with the given property exists, 0 otherwise
DT_INST_FOREACH_STATUS_OKAY(fn)
Call fn on all nodes with compatible DT_DRV_COMPAT and status okay
This macro calls fn(inst) on each inst number that refers to a node with status okay.
Whitespace is added between invocations.
a {
compatible = "vnd,device";
status = "okay";
foobar = "DEV_A";
};
b {
compatible = "vnd,device";
status = "okay";
foobar = "DEV_B";
};
c {
compatible = "vnd,device";
status = "disabled";
foobar = "DEV_C";
};
Example usage:
#define MY_FN(inst) DT_INST_PROP(inst, foobar),
DT_INST_FOREACH_STATUS_OKAY(MY_FN)
This expands to:
MY_FN(0) MY_FN(1)
and from there, to either this:
"DEV_A", "DEV_B",
or this:
"DEV_B", "DEV_A",
No guarantees are made about the order that a and b appear in the expansion.
Note that fn is responsible for adding commas, semicolons, or other separators or terminators.
Device drivers should use this macro whenever possible to instantiate a struct device for each
enabled node in the devicetree of the driver’s compatible DT_DRV_COMPAT.
Parameters
• fn – Macro to call for each enabled node. Must accept an instance number as
its only parameter.
DT_INST_FOREACH_STATUS_OKAY_VARGS(fn, ...)
Call fn on all nodes with compatible DT_DRV_COMPAT and status okay with multiple arguments.
See also:
DT_INST_FOREACH_STATUS_OKAY
Parameters
• fn – Macro to call for each enabled node. Must accept an instance number as
its first parameter.
• ... – variable number of arguments to pass to fn
DT_INST_FOREACH_PROP_ELEM_VARGS(inst, prop, fn, ...)
Invoke fn on all elements of property prop for a DT_DRV_COMPAT instance, with multiple
arguments.
See also:
DT_INST_FOREACH_PROP_ELEM
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• fn – macro to invoke
• ... – variable number of arguments to pass to fn
DT_INST_FOREACH_PROP_ELEM_SEP_VARGS(inst, prop, fn, sep, ...)
Invoke fn on all elements of property prop for a DT_DRV_COMPAT instance, with a separator
and multiple arguments.
See also:
DT_INST_FOREACH_PROP_ELEM
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
• fn – macro to invoke
• sep – Separator (e.g. comma or semicolon). Must be in parentheses; this is
required to enable providing a comma as separator.
• ... – variable number of arguments to pass to fn
DT_INST_NODE_HAS_PROP(inst, prop)
Does a DT_DRV_COMPAT instance have a property?
Parameters
• inst – instance number
• prop – lowercase-and-underscores property name
Returns
1 if the instance has the property, 0 otherwise.
DT_INST_PHA_HAS_CELL_AT_IDX(inst, pha, idx, cell)
Does a phandle array have a named cell specifier at an index for a DT_DRV_COMPAT instance?
Parameters
• inst – instance number
• pha – lowercase-and-underscores property with type phandle-array
• idx – index to check
• cell – named cell value whose existence to check
Returns
1 if the named cell exists in the specifier at index idx, 0 otherwise.
DT_INST_PHA_HAS_CELL(inst, pha, cell)
Does a phandle array have a named cell specifier at index 0 for a DT_DRV_COMPAT instance?
Parameters
• inst – instance number
• pha – lowercase-and-underscores property with type phandle-array
• cell – named cell value whose existence to check
Returns
1 if the named cell exists in the specifier at index 0, 0 otherwise.
DT_INST_IRQ_HAS_IDX(inst, idx)
Is an index valid for an interrupt property on a DT_DRV_COMPAT instance?
Parameters
• inst – instance number
• idx – logical index into the interrupt specifier array
Returns
1 if the idx is valid for the interrupt property, 0 otherwise.
DT_INST_IRQ_HAS_CELL_AT_IDX(inst, idx, cell)
Does a DT_DRV_COMPAT instance have an interrupt named cell specifier?
Parameters
• inst – instance number
• idx – index to check
• cell – named cell value whose existence to check
Returns
1 if the named cell exists in the interrupt specifier at index idx, 0 otherwise.
DT_INST_IRQ_HAS_CELL(inst, cell)
Does a DT_DRV_COMPAT instance have an interrupt value?
Parameters
• inst – instance number
• cell – named cell value whose existence to check
Returns
1 if the named cell exists in the interrupt specifier at index 0, 0 otherwise.
DT_INST_IRQ_HAS_NAME(inst, name)
Does a DT_DRV_COMPAT instance have an interrupt specifier with a given name?
Parameters
• inst – instance number
• name – lowercase-and-underscores interrupt specifier name
Returns
1 if name is a valid named specifier, 0 otherwise
Hardware specific APIs The following APIs can also be used by including <devicetree.h>; no addi-
tional include is needed.
CAN These conveniences may be used for nodes which describe CAN controllers/transceivers, and
properties related to them.
group devicetree-can
Defines
DT_CAN_TRANSCEIVER_MAX_BITRATE(node_id, max)
Get the maximum transceiver bitrate for a CAN controller.
The bitrate will be limited to the maximum bitrate supported by the CAN controller. If no
CAN transceiver is present in the devicetree, the maximum bitrate will be that of the CAN
controller.
Example devicetree fragment:
transceiver0: can-phy0 {
compatible = "vnd,can-transceiver";
max-bitrate = <1000000>;
#phy-cells = <0>;
};
can0: can@... {
compatible = "vnd,can-controller";
phys = <&transceiver0>;
};
can1: can@... {
compatible = "vnd,can-controller";
can-transceiver {
max-bitrate = <2000000>;
};
};
Example usage:
DT_CAN_TRANSCEIVER_MAX_BITRATE(DT_NODELABEL(can0), 5000000) // 1000000
DT_CAN_TRANSCEIVER_MAX_BITRATE(DT_NODELABEL(can1), 5000000) // 2000000
DT_CAN_TRANSCEIVER_MAX_BITRATE(DT_NODELABEL(can1), 1000000) // 1000000
Parameters
• node_id – node identifier
• max – maximum bitrate supported by the CAN controller
Returns
the maximum bitrate supported by the CAN controller/transceiver combination
DT_INST_CAN_TRANSCEIVER_MAX_BITRATE(inst, max)
Get the maximum transceiver bitrate for a DT_DRV_COMPAT CAN controller.
See also:
DT_CAN_TRANSCEIVER_MAX_BITRATE()
Parameters
• inst – DT_DRV_COMPAT instance number
• max – maximum bitrate supported by the CAN controller
Returns
the maximum bitrate supported by the CAN controller/transceiver combination
Clocks These conveniences may be used for nodes which describe clock sources, and properties related
to them.
group devicetree-clocks
Defines
DT_CLOCKS_HAS_IDX(node_id, idx)
Test if a node has a clocks phandle-array property at a given index.
This expands to 1 if the given index is a valid clocks property phandle-array index. Otherwise,
it expands to 0.
Example devicetree fragment:
n1: node-1 {
clocks = <...>, <...>;
};
n2: node-2 {
clocks = <...>;
};
Example usage:
DT_CLOCKS_HAS_IDX(DT_NODELABEL(n1), 0) // 1
DT_CLOCKS_HAS_IDX(DT_NODELABEL(n1), 1) // 1
DT_CLOCKS_HAS_IDX(DT_NODELABEL(n1), 2) // 0
DT_CLOCKS_HAS_IDX(DT_NODELABEL(n2), 1) // 0
Parameters
• node_id – node identifier; may or may not have any clocks property
• idx – index of a clocks property phandle-array whose existence to check
Returns
1 if the index exists, 0 otherwise
DT_CLOCKS_HAS_NAME(node_id, name)
Test if a node's clock-names array property holds a given name.
This expands to 1 if the name is available as a clock-names array property cell. Otherwise, it
expands to 0.
Example devicetree fragment:
n1: node-1 {
clocks = <...>, <...>;
clock-names = "alpha", "beta";
};
n2: node-2 {
clocks = <...>;
clock-names = "alpha";
};
Example usage:
DT_CLOCKS_HAS_NAME(DT_NODELABEL(n1), alpha) // 1
DT_CLOCKS_HAS_NAME(DT_NODELABEL(n1), beta) // 1
DT_CLOCKS_HAS_NAME(DT_NODELABEL(n2), beta) // 0
Parameters
• node_id – node identifier; may or may not have any clock-names property.
• name – lowercase-and-underscores clock-names cell value name to check
Returns
1 if the clock name exists, 0 otherwise
DT_NUM_CLOCKS(node_id)
Get the number of elements in a clocks property.
Example devicetree fragment:
n1: node-1 {
clocks = <&foo>, <&bar>;
};
n2: node-2 {
clocks = <&foo>;
};
Example usage:
DT_NUM_CLOCKS(DT_NODELABEL(n1)) // 2
DT_NUM_CLOCKS(DT_NODELABEL(n2)) // 1
Parameters
• node_id – node identifier with a clocks property
Returns
number of elements in the property
DT_CLOCKS_CTLR_BY_IDX(node_id, idx)
Get the node identifier for the controller phandle from a “clocks” phandle-array property at
an index.
Example devicetree fragment:
n: node {
clocks = <&clk1 10 20>, <&clk2 30 40>;
};
Example usage:
DT_CLOCKS_CTLR_BY_IDX(DT_NODELABEL(n), 0) // DT_NODELABEL(clk1)
DT_CLOCKS_CTLR_BY_IDX(DT_NODELABEL(n), 1) // DT_NODELABEL(clk2)
See also:
DT_PHANDLE_BY_IDX()
Parameters
• node_id – node identifier
• idx – logical index into “clocks”
Returns
the node identifier for the clock controller referenced at index “idx”
DT_CLOCKS_CTLR(node_id)
Equivalent to DT_CLOCKS_CTLR_BY_IDX(node_id, 0)
See also:
DT_CLOCKS_CTLR_BY_IDX()
Parameters
• node_id – node identifier
Returns
a node identifier for the clocks controller at index 0 in “clocks”
DT_CLOCKS_CTLR_BY_NAME(node_id, name)
Get the node identifier for the controller phandle from a clocks phandle-array property by
name.
Example devicetree fragment:
n: node {
clocks = <&clk1 10 20>, <&clk2 30 40>;
clock-names = "alpha", "beta";
};
Example usage:
DT_CLOCKS_CTLR_BY_NAME(DT_NODELABEL(n), beta) // DT_NODELABEL(clk2)
See also:
DT_PHANDLE_BY_NAME()
Parameters
• node_id – node identifier
• name – lowercase-and-underscores name of a clocks element as defined by the
node’s clock-names property
Returns
the node identifier for the clock controller referenced by name
DT_CLOCKS_CELL_BY_IDX(node_id, idx, cell)
Get a clock specifier's cell value at an index.
Example devicetree fragment:
clk1: clock-controller@... {
compatible = "vnd,clock";
#clock-cells = < 2 >;
};
n: node {
clocks = < &clk1 10 20 >, < &clk1 30 40 >;
};
Bindings fragment for the vnd,clock compatible:
clock-cells:
- bus
- bits
Example usage:
DT_CLOCKS_CELL_BY_IDX(DT_NODELABEL(n), 0, bus) // 10
DT_CLOCKS_CELL_BY_IDX(DT_NODELABEL(n), 1, bits) // 40
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier for a node with a clocks property
• idx – logical index into clocks property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index "idx"
DT_CLOCKS_CELL_BY_NAME(node_id, name, cell)
Get a clock specifier's cell value by name.
Example devicetree fragment:
clk1: clock-controller@... {
compatible = "vnd,clock";
#clock-cells = < 2 >;
};
n: node {
clocks = < &clk1 10 20 >, < &clk1 30 40 >;
clock-names = "alpha", "beta";
};
Bindings fragment for the vnd,clock compatible:
clock-cells:
- bus
- bits
Example usage:
DT_CLOCKS_CELL_BY_NAME(DT_NODELABEL(n), beta, bits) // 40
See also:
DT_PHA_BY_NAME()
Parameters
• node_id – node identifier for a node with a clocks property
• name – lowercase-and-underscores name of a clocks element as defined by the
node’s clock-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_CLOCKS_CELL(node_id, cell)
Equivalent to DT_CLOCKS_CELL_BY_IDX(node_id, 0, cell)
See also:
DT_CLOCKS_CELL_BY_IDX()
Parameters
• node_id – node identifier for a node with a clocks property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index 0
DT_INST_CLOCKS_HAS_IDX(inst, idx)
Equivalent to DT_CLOCKS_HAS_IDX(DT_DRV_INST(inst), idx)
Parameters
• inst – DT_DRV_COMPAT instance number; may or may not have any clocks
property
• idx – index of a clocks property phandle-array whose existence to check
Returns
1 if the index exists, 0 otherwise
DT_INST_CLOCKS_HAS_NAME(inst, name)
Equivalent to DT_CLOCKS_HAS_NAME(DT_DRV_INST(inst), name)
Parameters
• inst – DT_DRV_COMPAT instance number; may or may not have any clock-
names property.
• name – lowercase-and-underscores clock-names cell value name to check
Returns
1 if the clock name exists, 0 otherwise
DT_INST_NUM_CLOCKS(inst)
Equivalent to DT_NUM_CLOCKS(DT_DRV_INST(inst))
Parameters
• inst – instance number
Returns
number of elements in the clocks property
DT_INST_CLOCKS_CTLR_BY_IDX(inst, idx)
Get the node identifier for the controller phandle from a “clocks” phandle-array property at
an index.
See also:
DT_CLOCKS_CTLR_BY_IDX()
Parameters
• inst – instance number
• idx – logical index into “clocks”
Returns
the node identifier for the clock controller referenced at index “idx”
DT_INST_CLOCKS_CTLR(inst)
Equivalent to DT_INST_CLOCKS_CTLR_BY_IDX(inst, 0)
See also:
DT_CLOCKS_CTLR()
Parameters
• inst – instance number
Returns
a node identifier for the clock controller at index 0 in the instance's clocks
property
DT_INST_CLOCKS_CTLR_BY_NAME(inst, name)
Get the node identifier for the controller phandle from a clocks phandle-array property by
name.
See also:
DT_CLOCKS_CTLR_BY_NAME()
Parameters
• inst – instance number
• name – lowercase-and-underscores name of a clocks element as defined by the
node’s clock-names property
Returns
the node identifier for the clock controller referenced by the named element
DT_INST_CLOCKS_CELL_BY_IDX(inst, idx, cell)
Get a DT_DRV_COMPAT instance's clock specifier's cell value at an index.
See also:
DT_CLOCKS_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into clocks property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index “idx”
DT_INST_CLOCKS_CELL_BY_NAME(inst, name, cell)
Get a DT_DRV_COMPAT instance's clock specifier's cell value by name.
See also:
DT_CLOCKS_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a clocks element as defined by the
node’s clock-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_INST_CLOCKS_CELL(inst, cell)
Equivalent to DT_INST_CLOCKS_CELL_BY_IDX(inst, 0, cell)
Parameters
• inst – DT_DRV_COMPAT instance number
• cell – lowercase-and-underscores cell name
Returns
the value of the cell inside the specifier at index 0
DMA These conveniences may be used for nodes which describe direct memory access controllers or
channels, and properties related to them.
group devicetree-dmas
Defines
DT_DMAS_CTLR_BY_IDX(node_id, idx)
Get the node identifier for the DMA controller from a dmas property at an index.
Example devicetree fragment:
n: node {
dmas = <&dma1 1 2 0x400 0x3>,
<&dma2 6 3 0x404 0x5>;
};
Example usage:
DT_DMAS_CTLR_BY_IDX(DT_NODELABEL(n), 0) // DT_NODELABEL(dma1)
DT_DMAS_CTLR_BY_IDX(DT_NODELABEL(n), 1) // DT_NODELABEL(dma2)
See also:
DT_PROP_BY_PHANDLE_IDX()
Parameters
• node_id – node identifier for a node with a dmas property
• idx – logical index into dmas property
Returns
the node identifier for the DMA controller referenced at index “idx”
DT_DMAS_CTLR_BY_NAME(node_id, name)
Get the node identifier for the DMA controller from a dmas property by name.
Example devicetree fragment:
n: node {
dmas = <&dma1 1 2 0x400 0x3>,
<&dma2 6 3 0x404 0x5>;
dma-names = "tx", "rx";
};
Example usage:
DT_DMAS_CTLR_BY_NAME(DT_NODELABEL(n), tx) // DT_NODELABEL(dma1)
DT_DMAS_CTLR_BY_NAME(DT_NODELABEL(n), rx) // DT_NODELABEL(dma2)
See also:
DT_PHANDLE_BY_NAME()
Parameters
• node_id – node identifier for a node with a dmas property
• name – lowercase-and-underscores name of a dmas element as defined by the
node’s dma-names property
Returns
the node identifier for the DMA controller in the named element
DT_DMAS_CTLR(node_id)
Equivalent to DT_DMAS_CTLR_BY_IDX(node_id, 0)
See also:
DT_DMAS_CTLR_BY_IDX()
Parameters
• node_id – node identifier for a node with a dmas property
Returns
the node identifier for the DMA controller at index 0 in the node’s “dmas” prop-
erty
DT_INST_DMAS_CTLR_BY_IDX(inst, idx)
Get the node identifier for the DMA controller from a DT_DRV_COMPAT instance’s dmas prop-
erty at an index.
See also:
DT_DMAS_CTLR_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into dmas property
Returns
the node identifier for the DMA controller referenced at index “idx”
DT_INST_DMAS_CTLR_BY_NAME(inst, name)
Get the node identifier for the DMA controller from a DT_DRV_COMPAT instance’s dmas prop-
erty by name.
See also:
DT_DMAS_CTLR_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a dmas element as defined by the
node’s dma-names property
Returns
the node identifier for the DMA controller in the named element
DT_INST_DMAS_CTLR(inst)
Equivalent to DT_INST_DMAS_CTLR_BY_IDX(inst, 0)
See also:
DT_DMAS_CTLR_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the node identifier for the DMA controller at index 0 in the instance’s “dmas”
property
DT_DMAS_CELL_BY_IDX(node_id, idx, cell)
Get a DMA specifier's cell value at an index.
Example devicetree fragment:
dma1: dma@... {
compatible = "vnd,dma";
#dma-cells = <2>;
};
dma2: dma@... {
compatible = "vnd,dma";
#dma-cells = <2>;
};
n: node {
dmas = <&dma1 1 0x400>,
<&dma2 6 0x404>;
};
Bindings fragment for the vnd,dma compatible:
dma-cells:
- channel
- config
Example usage:
DT_DMAS_CELL_BY_IDX(DT_NODELABEL(n), 0, channel) // 1
DT_DMAS_CELL_BY_IDX(DT_NODELABEL(n), 1, channel) // 6
DT_DMAS_CELL_BY_IDX(DT_NODELABEL(n), 0, config) // 0x400
DT_DMAS_CELL_BY_IDX(DT_NODELABEL(n), 1, config) // 0x404
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier for a node with a dmas property
• idx – logical index into dmas property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index “idx”
DT_INST_DMAS_CELL_BY_IDX(inst, idx, cell)
Get a DT_DRV_COMPAT instance's DMA specifier's cell value at an index.
See also:
DT_DMAS_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into dmas property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index “idx”
DT_DMAS_CELL_BY_NAME(node_id, name, cell)
Get a DMA specifier's cell value by name.
Example devicetree fragment:
DT_DMAS_CELL_BY_NAME(node_id, name, cell)
Get a DMA specifier’s cell value by name.
Example devicetree fragment:
dma1: dma@... {
compatible = "vnd,dma";
#dma-cells = <2>;
};
dma2: dma@... {
compatible = "vnd,dma";
#dma-cells = <2>;
};
n: node {
dmas = <&dma1 1 0x400>,
<&dma2 6 0x404>;
dma-names = "tx", "rx";
};
dma-cells:
- channel
- config
Example usage:
DT_DMAS_CELL_BY_NAME(DT_NODELABEL(n), tx, channel) // 1
DT_DMAS_CELL_BY_NAME(DT_NODELABEL(n), rx, channel) // 6
DT_DMAS_CELL_BY_NAME(DT_NODELABEL(n), tx, config) // 0x400
DT_DMAS_CELL_BY_NAME(DT_NODELABEL(n), rx, config) // 0x404
See also:
DT_PHA_BY_NAME()
Parameters
• node_id – node identifier for a node with a dmas property
• name – lowercase-and-underscores name of a dmas element as defined by the
node’s dma-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_INST_DMAS_CELL_BY_NAME(inst, name, cell)
Get a DT_DRV_COMPAT instance’s DMA specifier’s cell value by name.
See also:
DT_DMAS_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a dmas element as defined by the
node’s dma-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_DMAS_HAS_IDX(node_id, idx)
Is index “idx” valid for a dmas property?
Parameters
• node_id – node identifier for a node with a dmas property
• idx – logical index into dmas property
Returns
1 if the “dmas” property has index “idx”, 0 otherwise
DT_INST_DMAS_HAS_IDX(inst, idx)
Is index “idx” valid for a DT_DRV_COMPAT instance’s dmas property?
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into dmas property
Returns
1 if the “dmas” property has index “idx”, 0 otherwise
Fixed flash partitions These conveniences may be used for the special-purpose fixed-partitions
compatible used to encode information about flash memory partitions in the devicetree. See
fixed-partition for more details.
group devicetree-fixed-partition
Defines
DT_NODE_BY_FIXED_PARTITION_LABEL(label)
Get a node identifier for a fixed partition with a given label property.
Example devicetree fragment:
flash@... {
partitions {
compatible = "fixed-partitions";
boot_partition: partition@0 {
label = "mcuboot";
};
slot0_partition: partition@c000 {
label = "image-0";
};
...
};
};
Example usage:
DT_NODE_BY_FIXED_PARTITION_LABEL(mcuboot) // node identifier for boot_partition
DT_NODE_BY_FIXED_PARTITION_LABEL(image_0) // node identifier for slot0_partition
Parameters
• label – lowercase-and-underscores label property value
Returns
a node identifier for the partition with that label property
DT_HAS_FIXED_PARTITION_LABEL(label)
Test if a fixed partition with a given label property exists.
Parameters
• label – lowercase-and-underscores label property value
Returns
1 if any “fixed-partitions” child node has the given label, 0 otherwise.
DT_FIXED_PARTITION_EXISTS(node_id)
Test if fixed-partition compatible node exists.
Parameters
• node_id – DTS node to test
Returns
1 if node exists and is fixed-partition compatible, 0 otherwise.
DT_FIXED_PARTITION_ID(node_id)
Get a numeric identifier for a fixed partition.
Parameters
• node_id – node identifier for a fixed-partitions child node
Returns
the partition’s ID, a unique zero-based index number
DT_MTD_FROM_FIXED_PARTITION(node_id)
Get the node identifier of the flash device for a partition.
Parameters
• node_id – node identifier for a fixed-partitions child node
Returns
the node identifier of the memory technology device that contains the fixed-
partitions node.
GPIO These conveniences may be used for nodes which describe GPIO controllers/pins, and properties
related to them.
group devicetree-gpio
Defines
DT_GPIO_CTLR_BY_IDX(node_id, gpio_pha, idx)
Get the node identifier for the controller phandle from a gpio phandle-array property at an index.
Example devicetree fragment:
gpio1: gpio@... { };
gpio2: gpio@... { };
n: node {
gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 30 GPIO_ACTIVE_HIGH>;
};
Example usage:
DT_GPIO_CTLR_BY_IDX(DT_NODELABEL(n), gpios, 0) // DT_NODELABEL(gpio1)
DT_GPIO_CTLR_BY_IDX(DT_NODELABEL(n), gpios, 1) // DT_NODELABEL(gpio2)
See also:
DT_PHANDLE_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the node identifier for the gpio controller referenced at index “idx”
DT_GPIO_CTLR(node_id, gpio_pha)
Equivalent to DT_GPIO_CTLR_BY_IDX(node_id, gpio_pha, 0)
See also:
DT_GPIO_CTLR_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
Returns
a node identifier for the gpio controller at index 0 in “gpio_pha”
DT_GPIO_LABEL_BY_IDX(node_id, gpio_pha, idx)
Get a label property from a gpio phandle-array property at an index.
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_GPIO_CTLR_BY_IDX(node, gpio_pha, idx)) .
It’s an error if the GPIO controller node referenced by the phandle in node_id’s “gpio_pha”
property at index “idx” has no label property.
Example devicetree fragment:
gpio1: gpio@... {
label = "GPIO_1";
};
gpio2: gpio@... {
label = "GPIO_2";
};
n: node {
gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 30 GPIO_ACTIVE_HIGH>;
};
Example usage:
DT_GPIO_LABEL_BY_IDX(DT_NODELABEL(n), gpios, 1) // "GPIO_2"
See also:
DT_PHANDLE_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the label property of the node referenced at index “idx”
DT_GPIO_LABEL(node_id, gpio_pha)
Equivalent to DT_GPIO_LABEL_BY_IDX(node_id, gpio_pha, 0)
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_GPIO_CTLR(node, gpio_pha)) .
See also:
DT_GPIO_LABEL_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
Returns
the label property of the node referenced at index 0
DT_GPIO_PIN_BY_IDX(node_id, gpio_pha, idx)
Get a GPIO specifier’s pin cell at an index.
This macro only works for GPIO specifiers with cells named “pin”. Refer to the node’s binding to check if necessary.
Example devicetree fragment:
gpio1: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
};
gpio2: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
};
n: node {
gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 30 GPIO_ACTIVE_HIGH>;
};
Example usage:
DT_GPIO_PIN_BY_IDX(DT_NODELABEL(n), gpios, 0) // 10
DT_GPIO_PIN_BY_IDX(DT_NODELABEL(n), gpios, 1) // 30
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the pin cell value at index “idx”
DT_GPIO_PIN(node_id, gpio_pha)
Equivalent to DT_GPIO_PIN_BY_IDX(node_id, gpio_pha, 0)
See also:
DT_GPIO_PIN_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
Returns
the pin cell value at index 0
DT_GPIO_FLAGS_BY_IDX(node_id, gpio_pha, idx)
Get a GPIO specifier’s flags cell at an index.
This macro expects GPIO specifiers with cells named “flags”. If there is no “flags” cell in the GPIO specifier, zero is returned. Refer to the node’s binding to check specifier cell names if necessary.
Example devicetree fragment:
gpio1: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
};
gpio2: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
};
n: node {
gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 30 GPIO_ACTIVE_HIGH>;
};
Example usage:
DT_GPIO_FLAGS_BY_IDX(DT_NODELABEL(n), gpios, 0) // GPIO_ACTIVE_LOW
DT_GPIO_FLAGS_BY_IDX(DT_NODELABEL(n), gpios, 1) // GPIO_ACTIVE_HIGH
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the flags cell value at index “idx”, or zero if there is none
DT_GPIO_FLAGS(node_id, gpio_pha)
Equivalent to DT_GPIO_FLAGS_BY_IDX(node_id, gpio_pha, 0)
See also:
DT_GPIO_FLAGS_BY_IDX()
Parameters
• node_id – node identifier
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-array”
Returns
the flags cell value at index 0, or zero if there is none
DT_NUM_GPIO_HOGS(node_id)
Get the number of GPIO hogs in a node.
This expands to the number of hogged GPIOs, or zero if there are none.
Example devicetree fragment:
gpio1: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
n1: node-1 {
gpio-hog;
gpios = <0 GPIO_ACTIVE_HIGH>, <1 GPIO_ACTIVE_LOW>;
output-high;
};
n2: node-2 {
gpio-hog;
gpios = <3 GPIO_ACTIVE_HIGH>;
output-low;
};
};
gpio-cells:
- pin
- flags
Example usage:
DT_NUM_GPIO_HOGS(DT_NODELABEL(n1)) // 2
DT_NUM_GPIO_HOGS(DT_NODELABEL(n2)) // 1
Parameters
• node_id – node identifier; may or may not be a GPIO hog node.
Returns
number of hogged GPIOs in the node
DT_GPIO_HOG_PIN_BY_IDX(node_id, idx)
Get a GPIO hog specifier’s pin cell at an index.
This macro only works for GPIO specifiers with cells named “pin”. Refer to the node’s binding
to check if necessary.
Example devicetree fragment:
gpio1: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
n1: node-1 {
gpio-hog;
gpios = <0 GPIO_ACTIVE_HIGH>, <1 GPIO_ACTIVE_LOW>;
output-high;
};
n2: node-2 {
gpio-hog;
gpios = <3 GPIO_ACTIVE_HIGH>;
output-low;
};
};
gpio-cells:
- pin
- flags
Example usage:
DT_GPIO_HOG_PIN_BY_IDX(DT_NODELABEL(n1), 0) // 0
DT_GPIO_HOG_PIN_BY_IDX(DT_NODELABEL(n1), 1) // 1
DT_GPIO_HOG_PIN_BY_IDX(DT_NODELABEL(n2), 0) // 3
Parameters
• node_id – node identifier
• idx – logical index into “gpios”
Returns
the pin cell value at index “idx”
DT_GPIO_HOG_FLAGS_BY_IDX(node_id, idx)
Get a GPIO hog specifier’s flags cell at an index.
This macro expects GPIO specifiers with cells named “flags”. If there is no “flags” cell in the
GPIO specifier, zero is returned. Refer to the node’s binding to check specifier cell names if
necessary.
Example devicetree fragment:
gpio1: gpio@... {
compatible = "vnd,gpio";
#gpio-cells = <2>;
n1: node-1 {
gpio-hog;
gpios = <0 GPIO_ACTIVE_HIGH>, <1 GPIO_ACTIVE_LOW>;
output-high;
};
n2: node-2 {
gpio-hog;
gpios = <3 GPIO_ACTIVE_HIGH>;
output-low;
};
};
gpio-cells:
- pin
- flags
Example usage:
DT_GPIO_HOG_FLAGS_BY_IDX(DT_NODELABEL(n1), 0) // GPIO_ACTIVE_HIGH
DT_GPIO_HOG_FLAGS_BY_IDX(DT_NODELABEL(n1), 1) // GPIO_ACTIVE_LOW
DT_GPIO_HOG_FLAGS_BY_IDX(DT_NODELABEL(n2), 0) // GPIO_ACTIVE_HIGH
Parameters
• node_id – node identifier
• idx – logical index into “gpios”
Returns
the flags cell value at index “idx”, or zero if there is none
DT_INST_GPIO_LABEL_BY_IDX(inst, gpio_pha, idx)
Get a label property from a DT_DRV_COMPAT instance’s GPIO property at an index.
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_INST_GPIO_CTLR_BY_IDX(node, gpio_pha, idx)) .
Parameters
• inst – DT_DRV_COMPAT instance number
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the label property of the node referenced at index “idx”
DT_INST_GPIO_LABEL(inst, gpio_pha)
Equivalent to DT_INST_GPIO_LABEL_BY_IDX(inst, gpio_pha, 0)
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_INST_GPIO_CTLR(node, gpio_pha)) .
Parameters
• inst – DT_DRV_COMPAT instance number
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
Returns
the label property of the node referenced at index 0
DT_INST_GPIO_PIN_BY_IDX(inst, gpio_pha, idx)
Get a DT_DRV_COMPAT instance’s GPIO specifier’s pin cell value at an index.
See also:
DT_GPIO_PIN_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the pin cell value at index “idx”
DT_INST_GPIO_PIN(inst, gpio_pha)
Equivalent to DT_INST_GPIO_PIN_BY_IDX(inst, gpio_pha, 0)
See also:
DT_INST_GPIO_PIN_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
Returns
the pin cell value at index 0
DT_INST_GPIO_FLAGS_BY_IDX(inst, gpio_pha, idx)
Get a DT_DRV_COMPAT instance’s GPIO specifier’s flags cell value at an index.
See also:
DT_GPIO_FLAGS_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
• idx – logical index into “gpio_pha”
Returns
the flags cell value at index “idx”, or zero if there is none
DT_INST_GPIO_FLAGS(inst, gpio_pha)
Equivalent to DT_INST_GPIO_FLAGS_BY_IDX(inst, gpio_pha, 0)
See also:
DT_INST_GPIO_FLAGS_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• gpio_pha – lowercase-and-underscores GPIO property with type “phandle-
array”
Returns
the flags cell value at index 0, or zero if there is none
IO channels These are commonly used by device drivers which need to use IO channels (e.g. ADC or
DAC channels) for conversion.
group devicetree-io-channels
Defines
DT_IO_CHANNELS_CTLR_BY_IDX(node_id, idx)
Get the node identifier for the node referenced by an io-channels property at an index.
Example devicetree fragment:
adc1: adc@... { ... };
adc2: adc@... { ... };
n: node {
io-channels = <&adc1 10>, <&adc2 20>;
};
Example usage:
DT_IO_CHANNELS_CTLR_BY_IDX(DT_NODELABEL(n), 0) // DT_NODELABEL(adc1)
DT_IO_CHANNELS_CTLR_BY_IDX(DT_NODELABEL(n), 1) // DT_NODELABEL(adc2)
See also:
DT_PROP_BY_PHANDLE_IDX()
Parameters
• node_id – node identifier for a node with an io-channels property
• idx – logical index into io-channels property
Returns
the node identifier for the node referenced at index “idx”
DT_IO_CHANNELS_CTLR_BY_NAME(node_id, name)
Get the node identifier for the node referenced by an io-channels property by name.
Example devicetree fragment:
adc1: adc@... { ... };
adc2: adc@... { ... };
n: node {
io-channels = <&adc1 10>, <&adc2 20>;
io-channel-names = "SENSOR", "BANDGAP";
};
Example usage:
DT_IO_CHANNELS_CTLR_BY_NAME(DT_NODELABEL(n), sensor) // DT_NODELABEL(adc1)
DT_IO_CHANNELS_CTLR_BY_NAME(DT_NODELABEL(n), bandgap) // DT_NODELABEL(adc2)
See also:
DT_PHANDLE_BY_NAME()
Parameters
• node_id – node identifier for a node with an io-channels property
• name – lowercase-and-underscores name of an io-channels element as defined
by the node’s io-channel-names property
Returns
the node identifier for the node referenced at the named element
DT_IO_CHANNELS_CTLR(node_id)
Equivalent to DT_IO_CHANNELS_CTLR_BY_IDX(node_id, 0)
See also:
DT_IO_CHANNELS_CTLR_BY_IDX()
Parameters
• node_id – node identifier for a node with an io-channels property
Returns
the node identifier for the node referenced at index 0 in the node’s “io-channels”
property
DT_INST_IO_CHANNELS_CTLR_BY_IDX(inst, idx)
Get the node identifier from a DT_DRV_COMPAT instance’s io-channels property at an index.
See also:
DT_IO_CHANNELS_CTLR_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into io-channels property
Returns
the node identifier for the node referenced at index “idx”
DT_INST_IO_CHANNELS_CTLR_BY_NAME(inst, name)
Get the node identifier from a DT_DRV_COMPAT instance’s io-channels property by name.
See also:
DT_IO_CHANNELS_CTLR_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of an io-channels element as defined by the instance’s io-channel-names property
Returns
the node identifier for the node referenced at the named element
DT_INST_IO_CHANNELS_CTLR(inst)
Equivalent to DT_INST_IO_CHANNELS_CTLR_BY_IDX(inst, 0)
See also:
DT_IO_CHANNELS_CTLR_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the node identifier for the node referenced at index 0 in the node’s “io-channels”
property
DT_IO_CHANNELS_INPUT_BY_IDX(node_id, idx)
Get an io-channels specifier input cell at an index.
This macro only works for io-channels specifiers with cells named “input”. Refer to the node’s
binding to check if necessary.
Example devicetree fragment:
adc1: adc@... {
compatible = "vnd,adc";
#io-channel-cells = <1>;
};
adc2: adc@... {
compatible = "vnd,adc";
#io-channel-cells = <1>;
};
n: node {
io-channels = <&adc1 10>, <&adc2 20>;
};
Example usage:
DT_IO_CHANNELS_INPUT_BY_IDX(DT_NODELABEL(n), 0) // 10
DT_IO_CHANNELS_INPUT_BY_IDX(DT_NODELABEL(n), 1) // 20
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier for a node with an io-channels property
• idx – logical index into io-channels property
Returns
the input cell in the specifier at index “idx”
DT_IO_CHANNELS_INPUT_BY_NAME(node_id, name)
Get an io-channels specifier input cell by name.
This macro only works for io-channels specifiers with cells named “input”. Refer to the node’s
binding to check if necessary.
Example devicetree fragment:
adc1: adc@... {
compatible = "vnd,adc";
#io-channel-cells = <1>;
};
adc2: adc@... {
compatible = "vnd,adc";
#io-channel-cells = <1>;
};
n: node {
io-channels = <&adc1 10>, <&adc2 20>;
io-channel-names = "SENSOR", "BANDGAP";
};
Example usage:
DT_IO_CHANNELS_INPUT_BY_NAME(DT_NODELABEL(n), sensor) // 10
DT_IO_CHANNELS_INPUT_BY_NAME(DT_NODELABEL(n), bandgap) // 20
See also:
DT_PHA_BY_NAME()
Parameters
• node_id – node identifier for a node with an io-channels property
• name – lowercase-and-underscores name of an io-channels element as defined
by the node’s io-channel-names property
Returns
the input cell in the specifier at the named element
DT_IO_CHANNELS_INPUT(node_id)
Equivalent to DT_IO_CHANNELS_INPUT_BY_IDX(node_id, 0)
See also:
DT_IO_CHANNELS_INPUT_BY_IDX()
Parameters
• node_id – node identifier for a node with an io-channels property
Returns
the input cell in the specifier at index 0
DT_INST_IO_CHANNELS_INPUT_BY_IDX(inst, idx)
Get an input cell from the “DT_DRV_INST(inst)” io-channels property at an index.
See also:
DT_IO_CHANNELS_INPUT_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into io-channels property
Returns
the input cell in the specifier at index “idx”
DT_INST_IO_CHANNELS_INPUT_BY_NAME(inst, name)
Get an input cell from the “DT_DRV_INST(inst)” io-channels property by name.
See also:
DT_IO_CHANNELS_INPUT_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of an io-channels element as defined
by the instance’s io-channel-names property
Returns
the input cell in the specifier at the named element
DT_INST_IO_CHANNELS_INPUT(inst)
Equivalent to DT_INST_IO_CHANNELS_INPUT_BY_IDX(inst, 0)
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the input cell in the specifier at index 0
MBOX These conveniences may be used for nodes which describe MBOX controllers/users, and properties related to them.
group devicetree-mbox
Defines
DT_MBOX_CTLR_BY_NAME(node_id, name)
Get the node identifier for the MBOX controller from a mboxes property by name.
Example devicetree fragment:
n: node {
mboxes = <&mbox1 8>,
<&mbox1 9>;
mbox-names = "tx", "rx";
};
Example usage:
DT_MBOX_CTLR_BY_NAME(DT_NODELABEL(n), tx) // DT_NODELABEL(mbox1)
DT_MBOX_CTLR_BY_NAME(DT_NODELABEL(n), rx) // DT_NODELABEL(mbox1)
See also:
DT_PHANDLE_BY_NAME()
Parameters
• node_id – node identifier for a node with a mboxes property
• name – lowercase-and-underscores name of a mboxes element as defined by the
node’s mbox-names property
Returns
the node identifier for the MBOX controller in the named element
DT_MBOX_CHANNEL_BY_NAME(node_id, name)
Get a MBOX channel value by name.
Example devicetree fragment:
mbox1: mbox@... {
#mbox-cells = <1>;
};
n: node {
mboxes = <&mbox1 1>,
<&mbox1 6>;
mbox-names = "tx", "rx";
};
mbox-cells:
- channel
Example usage:
DT_MBOX_CHANNEL_BY_NAME(DT_NODELABEL(n), tx) // 1
DT_MBOX_CHANNEL_BY_NAME(DT_NODELABEL(n), rx) // 6
See also:
DT_PHA_BY_NAME_OR()
Parameters
• node_id – node identifier for a node with a mboxes property
• name – lowercase-and-underscores name of a mboxes element as defined by the
node’s mbox-names property
Returns
the channel value in the specifier at the named element or 0 if no channels are
supported
Pinctrl (pin control) These are used to access pin control properties by name or index.
Devicetree nodes may have properties which specify pin control (sometimes known as pin mux) settings.
These are expressed using pinctrl-<index> properties within the node, where the <index> values are
contiguous integers starting from 0. These may also be named using the pinctrl-names property.
Here is an example:
node {
...
pinctrl-0 = <&foo &bar ...>;
pinctrl-1 = <&baz ...>;
pinctrl-names = "default", "sleep";
};
Above, pinctrl-0 has name "default", and pinctrl-1 has name "sleep". The pinctrl-<index>
property values contain phandles. The &foo, &bar, etc. phandles within the properties point to nodes
whose contents vary by platform, and which describe a pin configuration for the node.
group devicetree-pinctrl
Defines
DT_PINCTRL_BY_IDX(node_id, pc_idx, idx)
Get a node identifier for a phandle in a pinctrl property by index.
Example devicetree fragment:
n: node {
pinctrl-0 = <&foo &bar>;
pinctrl-1 = <&baz &blub>;
};
Example usage:
DT_PINCTRL_BY_IDX(DT_NODELABEL(n), 0, 1) // DT_NODELABEL(bar)
DT_PINCTRL_BY_IDX(DT_NODELABEL(n), 1, 0) // DT_NODELABEL(baz)
Parameters
• node_id – node with a pinctrl-‘pc_idx’ property
• pc_idx – index of the pinctrl property itself
• idx – index into the value of the pinctrl property
Returns
node identifier for the phandle at index ‘idx’ in the pinctrl property with index ‘pc_idx’
DT_PINCTRL_0(node_id, idx)
Get a node identifier from a pinctrl-0 property.
This is equivalent to:
DT_PINCTRL_BY_IDX(node_id, 0, idx)
DT_PINCTRL_BY_NAME(node_id, name, idx)
Get a node identifier for a phandle in a pinctrl property by name.
Example devicetree fragment:
n: node {
pinctrl-0 = <&foo &bar>;
pinctrl-1 = <&baz &blub>;
pinctrl-names = "default", "sleep";
};
Example usage:
DT_PINCTRL_BY_NAME(DT_NODELABEL(n), default, 1) // DT_NODELABEL(bar)
DT_PINCTRL_BY_NAME(DT_NODELABEL(n), sleep, 0) // DT_NODELABEL(baz)
Parameters
• node_id – node with a named pinctrl property
• name – lowercase-and-underscores pinctrl property name
• idx – index into the value of the named pinctrl property
Returns
node identifier for the phandle at that index in the pinctrl property
DT_PINCTRL_NAME_TO_IDX(node_id, name)
Convert a pinctrl name to its corresponding index.
Example devicetree fragment:
n: node {
pinctrl-0 = <&foo &bar>;
pinctrl-1 = <&baz &blub>;
pinctrl-names = "default", "sleep";
};
Example usage:
DT_PINCTRL_NAME_TO_IDX(DT_NODELABEL(n), default) // 0
DT_PINCTRL_NAME_TO_IDX(DT_NODELABEL(n), sleep) // 1
Parameters
• node_id – node identifier with a named pinctrl property
• name – lowercase-and-underscores name of the pinctrl whose index to get
Returns
integer literal for the index of the pinctrl property with that name
DT_PINCTRL_IDX_TO_NAME_TOKEN(node_id, pc_idx)
Convert a pinctrl property index to its name as a token.
This allows you to get a pinctrl property’s name, and “remove the quotes” from it.
DT_PINCTRL_IDX_TO_NAME_TOKEN() can only be used if the node has a pinctrl-‘pc_idx’
property and a pinctrl-names property element for that index. It is an error to use it in other
circumstances.
Example devicetree fragment:
n: node {
pinctrl-0 = <...>;
pinctrl-1 = <...>;
pinctrl-names = "default", "f.o.o2";
};
Example usage:
DT_PINCTRL_IDX_TO_NAME_TOKEN(DT_NODELABEL(n), 0) // default
DT_PINCTRL_IDX_TO_NAME_TOKEN(DT_NODELABEL(n), 1) // f_o_o2
The same caveats and restrictions that apply to DT_STRING_TOKEN()’s return value also apply
here.
Parameters
• node_id – node identifier
• pc_idx – index of a pinctrl property in that node
Returns
name of the pinctrl property, as a token, without any quotes and with non-
alphanumeric characters converted to underscores
DT_PINCTRL_IDX_TO_NAME_UPPER_TOKEN(node_id, pc_idx)
Like DT_PINCTRL_IDX_TO_NAME_TOKEN(), but with an uppercased result.
This does a similar conversion to DT_PINCTRL_IDX_TO_NAME_TOKEN(node_id, pc_idx). The only
difference is that alphabetical characters in the result are uppercased.
Example devicetree fragment:
n: node {
pinctrl-0 = <...>;
pinctrl-1 = <...>;
pinctrl-names = "default", "f.o.o2";
};
Example usage:
DT_PINCTRL_IDX_TO_NAME_UPPER_TOKEN(DT_NODELABEL(n), 0) // DEFAULT
DT_PINCTRL_IDX_TO_NAME_UPPER_TOKEN(DT_NODELABEL(n), 1) // F_O_O2
The same caveats and restrictions that apply to DT_STRING_UPPER_TOKEN()’s return value
also apply here.
DT_NUM_PINCTRLS_BY_IDX(node_id, pc_idx)
Get the number of phandles in a pinctrl property.
Example devicetree fragment:
n1: node-1 {
pinctrl-0 = <&foo &bar>;
};
n2: node-2 {
pinctrl-0 = <&baz>;
};
Example usage:
DT_NUM_PINCTRLS_BY_IDX(DT_NODELABEL(n1), 0) // 2
DT_NUM_PINCTRLS_BY_IDX(DT_NODELABEL(n2), 0) // 1
Parameters
• node_id – node identifier with a pinctrl property
• pc_idx – index of the pinctrl property itself
Returns
number of phandles in the property with that index
DT_NUM_PINCTRLS_BY_NAME(node_id, name)
Like DT_NUM_PINCTRLS_BY_IDX(), but by name instead.
Example devicetree fragment:
n: node {
pinctrl-0 = <&foo &bar>;
pinctrl-1 = <&baz>;
pinctrl-names = "default", "sleep";
};
Example usage:
DT_NUM_PINCTRLS_BY_NAME(DT_NODELABEL(n), default) // 2
DT_NUM_PINCTRLS_BY_NAME(DT_NODELABEL(n), sleep) // 1
Parameters
• node_id – node identifier with a pinctrl property
• name – lowercase-and-underscores name of the pinctrl property
Returns
number of phandles in the property with that name
DT_NUM_PINCTRL_STATES(node_id)
Get the number of pinctrl properties in a node.
This expands to 0 if there are no pinctrl-i properties. Otherwise, it expands to the number of
such properties.
Example devicetree fragment:
n1: node-1 {
pinctrl-0 = <...>;
pinctrl-1 = <...>;
};
n2: node-2 {
};
Example usage:
DT_NUM_PINCTRL_STATES(DT_NODELABEL(n1)) // 2
DT_NUM_PINCTRL_STATES(DT_NODELABEL(n2)) // 0
Parameters
• node_id – node identifier; may or may not have any pinctrl properties
Returns
number of pinctrl properties in the node
DT_PINCTRL_HAS_IDX(node_id, pc_idx)
Test if a node has a pinctrl property with an index.
This expands to 1 if the pinctrl-‘pc_idx’ property exists. Otherwise, it expands to 0.
Example devicetree fragment:
n1: node-1 {
pinctrl-0 = <...>;
pinctrl-1 = <...>;
};
n2: node-2 {
};
Example usage:
DT_PINCTRL_HAS_IDX(DT_NODELABEL(n1), 0) // 1
DT_PINCTRL_HAS_IDX(DT_NODELABEL(n1), 1) // 1
DT_PINCTRL_HAS_IDX(DT_NODELABEL(n1), 2) // 0
DT_PINCTRL_HAS_IDX(DT_NODELABEL(n2), 0) // 0
Parameters
• node_id – node identifier; may or may not have any pinctrl properties
• pc_idx – index of a pinctrl property whose existence to check
Returns
1 if the property exists, 0 otherwise
DT_PINCTRL_HAS_NAME(node_id, name)
Test if a node has a pinctrl property with a name.
This expands to 1 if the named pinctrl property exists. Otherwise, it expands to 0.
Example devicetree fragment:
n1: node-1 {
pinctrl-0 = <...>;
pinctrl-names = "default";
};
n2: node-2 {
};
Example usage:
DT_PINCTRL_HAS_NAME(DT_NODELABEL(n1), default) // 1
DT_PINCTRL_HAS_NAME(DT_NODELABEL(n1), sleep) // 0
DT_PINCTRL_HAS_NAME(DT_NODELABEL(n2), default) // 0
Parameters
• node_id – node identifier; may or may not have any pinctrl properties
• name – lowercase-and-underscores pinctrl property name to check
Returns
1 if the property exists, 0 otherwise
DT_INST_PINCTRL_0(inst, idx)
Get a node identifier from a pinctrl-0 property for a DT_DRV_COMPAT instance.
This is equivalent to:
DT_PINCTRL_BY_IDX(DT_DRV_INST(inst), 0, idx)
DT_INST_PINCTRL_NAME_TO_IDX(inst, name)
Convert a pinctrl name to its corresponding index for a DT_DRV_COMPAT instance.
This is equivalent to DT_PINCTRL_NAME_TO_IDX(DT_DRV_INST(inst),name).
Parameters
• inst – instance number
• name – lowercase-and-underscores name of the pinctrl whose index to get
Returns
integer literal for the index of the pinctrl property with that name
DT_INST_PINCTRL_IDX_TO_NAME_TOKEN(inst, pc_idx)
Convert a pinctrl index to its name as a token.
This is equivalent to DT_PINCTRL_IDX_TO_NAME_TOKEN(DT_DRV_INST(inst), pc_idx).
Parameters
• inst – instance number
• pc_idx – index of the pinctrl property itself
Returns
name of the pin control property as a token
DT_INST_PINCTRL_IDX_TO_NAME_UPPER_TOKEN(inst, pc_idx)
Convert a pinctrl index to its name as an uppercased token.
This is equivalent to DT_PINCTRL_IDX_TO_NAME_UPPER_TOKEN(DT_DRV_INST(inst), pc_idx).
Parameters
• inst – instance number
• pc_idx – index of the pinctrl property itself
Returns
name of the pin control property as an uppercase token
DT_INST_NUM_PINCTRLS_BY_IDX(inst, pc_idx)
Get the number of phandles in a pinctrl property for a DT_DRV_COMPAT instance.
This is equivalent to DT_NUM_PINCTRLS_BY_IDX(DT_DRV_INST(inst),pc_idx).
Parameters
• inst – instance number
• pc_idx – index of the pinctrl property itself
Returns
number of phandles in the property with that index
DT_INST_NUM_PINCTRLS_BY_NAME(inst, name)
Like DT_INST_NUM_PINCTRLS_BY_IDX(), but by name instead.
This is equivalent to DT_NUM_PINCTRLS_BY_NAME(DT_DRV_INST(inst),name).
Parameters
• inst – instance number
• name – lowercase-and-underscores name of the pinctrl property
Returns
number of phandles in the property with that name
DT_INST_NUM_PINCTRL_STATES(inst)
Get the number of pinctrl properties in a DT_DRV_COMPAT instance.
This is equivalent to DT_NUM_PINCTRL_STATES(DT_DRV_INST(inst)).
Parameters
• inst – instance number
Returns
number of pinctrl properties in the instance
DT_INST_PINCTRL_HAS_IDX(inst, pc_idx)
Test if a DT_DRV_COMPAT instance has a pinctrl property with an index.
This is equivalent to DT_PINCTRL_HAS_IDX(DT_DRV_INST(inst), pc_idx).
Parameters
• inst – instance number
• pc_idx – index of a pinctrl property whose existence to check
Returns
1 if the property exists, 0 otherwise
DT_INST_PINCTRL_HAS_NAME(inst, name)
Test if a DT_DRV_COMPAT instance has a pinctrl property with a name.
This is equivalent to DT_PINCTRL_HAS_NAME(DT_DRV_INST(inst), name).
Parameters
• inst – instance number
• name – lowercase-and-underscores pinctrl property name to check
Returns
1 if the property exists, 0 otherwise
PWM These conveniences may be used for nodes which describe PWM controllers and properties related to them.
group devicetree-pwms
Defines
DT_PWMS_CTLR_BY_IDX(node_id, idx)
Get the node identifier for the PWM controller from a pwms property at an index.
Example devicetree fragment:
n: node {
pwms = <&pwm1 1 PWM_POLARITY_NORMAL>,
<&pwm2 3 PWM_POLARITY_INVERTED>;
};
Example usage:
DT_PWMS_CTLR_BY_IDX(DT_NODELABEL(n), 0) // DT_NODELABEL(pwm1)
DT_PWMS_CTLR_BY_IDX(DT_NODELABEL(n), 1) // DT_NODELABEL(pwm2)
See also:
DT_PROP_BY_PHANDLE_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
• idx – logical index into pwms property
Returns
the node identifier for the PWM controller referenced at index “idx”
DT_PWMS_CTLR_BY_NAME(node_id, name)
Get the node identifier for the PWM controller from a pwms property by name.
Example devicetree fragment:
pwm2: pwm-controller@... { ... };
n: node {
pwms = <&pwm1 1 PWM_POLARITY_NORMAL>,
<&pwm2 3 PWM_POLARITY_INVERTED>;
pwm-names = "alpha", "beta";
};
Example usage:
DT_PWMS_CTLR_BY_NAME(DT_NODELABEL(n), beta) // DT_NODELABEL(pwm2)
See also:
DT_PHANDLE_BY_NAME()
Parameters
• node_id – node identifier for a node with a pwms property
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the node identifier for the PWM controller in the named element
DT_PWMS_CTLR(node_id)
Equivalent to DT_PWMS_CTLR_BY_IDX(node_id, 0)
See also:
DT_PWMS_CTLR_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
Returns
the node identifier for the PWM controller at index 0 in the node’s “pwms” property
DT_PWMS_CELL_BY_IDX(node_id, idx, cell)
Get a PWM specifier’s cell value at an index.
Example devicetree fragment:
pwm1: pwm-controller@... {
compatible = "vnd,pwm";
#pwm-cells = <3>;
};
pwm2: pwm-controller@... {
compatible = "vnd,pwm";
#pwm-cells = <3>;
};
n: node {
pwms = <&pwm1 1 200000 PWM_POLARITY_NORMAL>,
<&pwm2 3 100000 PWM_POLARITY_INVERTED>;
};
pwm-cells:
- channel
- period
- flags
Example usage:
DT_PWMS_CELL_BY_IDX(DT_NODELABEL(n), 0, channel) // 1
DT_PWMS_CELL_BY_IDX(DT_NODELABEL(n), 1, channel) // 3
DT_PWMS_CELL_BY_IDX(DT_NODELABEL(n), 0, period) // 200000
DT_PWMS_CELL_BY_IDX(DT_NODELABEL(n), 1, period) // 100000
DT_PWMS_CELL_BY_IDX(DT_NODELABEL(n), 0, flags) // PWM_POLARITY_NORMAL
DT_PWMS_CELL_BY_IDX(DT_NODELABEL(n), 1, flags) // PWM_POLARITY_INVERTED
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
• idx – logical index into pwms property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index “idx”
DT_PWMS_CELL_BY_NAME(node_id, name, cell)
Get a PWM specifier’s cell value by name.
Example devicetree fragment:
pwm1: pwm-controller@... {
compatible = "vnd,pwm";
#pwm-cells = <3>;
};
n: node {
pwms = <&pwm1 1 200000 PWM_POLARITY_NORMAL>,
<&pwm2 3 100000 PWM_POLARITY_INVERTED>;
pwm-names = "alpha", "beta";
};
pwm-cells:
- channel
- period
- flags
Example usage:
DT_PWMS_CELL_BY_NAME(DT_NODELABEL(n), alpha, channel) // 1
DT_PWMS_CELL_BY_NAME(DT_NODELABEL(n), beta, channel) // 3
DT_PWMS_CELL_BY_NAME(DT_NODELABEL(n), alpha, period) // 200000
DT_PWMS_CELL_BY_NAME(DT_NODELABEL(n), beta, flags) // PWM_POLARITY_INVERTED
See also:
DT_PHA_BY_NAME()
Parameters
• node_id – node identifier for a node with a pwms property
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_PWMS_CELL(node_id, cell)
Equivalent to DT_PWMS_CELL_BY_IDX(node_id, 0, cell)
See also:
DT_PWMS_CELL_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index 0
DT_PWMS_CHANNEL_BY_IDX(node_id, idx)
Get a PWM specifier’s channel cell value at an index.
This macro only works for PWM specifiers with cells named “channel”. Refer to the node’s
binding to check if necessary.
This is equivalent to DT_PWMS_CELL_BY_IDX(node_id, idx, channel).
See also:
DT_PWMS_CELL_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
• idx – logical index into pwms property
Returns
the channel cell value at index “idx”
DT_PWMS_CHANNEL_BY_NAME(node_id, name)
Get a PWM specifier’s channel cell value by name.
This macro only works for PWM specifiers with cells named “channel”. Refer to the node’s
binding to check if necessary.
This is equivalent to DT_PWMS_CELL_BY_NAME(node_id, name, channel).
See also:
DT_PWMS_CELL_BY_NAME()
Parameters
• node_id – node identifier for a node with a pwms property
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the channel cell value in the specifier at the named element
DT_PWMS_CHANNEL(node_id)
Equivalent to DT_PWMS_CHANNEL_BY_IDX(node_id, 0)
See also:
DT_PWMS_CHANNEL_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
Returns
the channel cell value at index 0
DT_PWMS_PERIOD_BY_IDX(node_id, idx)
Get PWM specifier’s period cell value at an index.
This macro only works for PWM specifiers with cells named “period”. Refer to the node’s
binding to check if necessary.
This is equivalent to DT_PWMS_CELL_BY_IDX(node_id, idx, period).
See also:
DT_PWMS_CELL_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
• idx – logical index into pwms property
Returns
the period cell value at index “idx”
DT_PWMS_PERIOD_BY_NAME(node_id, name)
Get a PWM specifier’s period cell value by name.
This macro only works for PWM specifiers with cells named “period”. Refer to the node’s
binding to check if necessary.
This is equivalent to DT_PWMS_CELL_BY_NAME(node_id, name, period).
See also:
DT_PWMS_CELL_BY_NAME()
Parameters
• node_id – node identifier for a node with a pwms property
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the period cell value in the specifier at the named element
DT_PWMS_PERIOD(node_id)
Equivalent to DT_PWMS_PERIOD_BY_IDX(node_id, 0)
See also:
DT_PWMS_PERIOD_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
Returns
the period cell value at index 0
DT_PWMS_FLAGS_BY_IDX(node_id, idx)
Get a PWM specifier’s flags cell value at an index.
This macro expects PWM specifiers with cells named “flags”. If there is no “flags” cell in the
PWM specifier, zero is returned. Refer to the node’s binding to check specifier cell names if
necessary.
This is equivalent to DT_PWMS_CELL_BY_IDX(node_id, idx, flags).
See also:
DT_PWMS_CELL_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
• idx – logical index into pwms property
Returns
the flags cell value at index “idx”, or zero if there is none
DT_PWMS_FLAGS_BY_NAME(node_id, name)
Get a PWM specifier’s flags cell value by name.
This macro expects PWM specifiers with cells named “flags”. If there is no “flags” cell in the
PWM specifier, zero is returned. Refer to the node’s binding to check specifier cell names if
necessary.
This is equivalent to DT_PWMS_CELL_BY_NAME(node_id, name, flags) if there is a flags cell,
but expands to zero if there is none.
See also:
DT_PWMS_CELL_BY_NAME()
Parameters
• node_id – node identifier for a node with a pwms property
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the flags cell value in the specifier at the named element, or zero if there is none
DT_PWMS_FLAGS(node_id)
Equivalent to DT_PWMS_FLAGS_BY_IDX(node_id, 0)
See also:
DT_PWMS_FLAGS_BY_IDX()
Parameters
• node_id – node identifier for a node with a pwms property
Returns
the flags cell value at index 0, or zero if there is none
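As a combined illustration, the channel, period, and flags accessors can be read from one specifier together. The following sketch assumes a hypothetical vnd,pwm controller whose binding names its specifier cells "channel", "period", and "flags"; the compatible, node labels, and cell values are illustrative, not from a real board.

```c
/* Hypothetical devicetree fragment (vnd,pwm and the node names are
 * made up for illustration):
 *
 *     pwm1: pwm@... {
 *         compatible = "vnd,pwm";
 *         #pwm-cells = <3>;
 *     };
 *
 *     n: node {
 *         pwms = <&pwm1 1 200000 PWM_POLARITY_NORMAL>;
 *     };
 */
#define MY_PWM_NODE DT_NODELABEL(n)

uint32_t channel = DT_PWMS_CHANNEL(MY_PWM_NODE); /* 1 */
uint32_t period  = DT_PWMS_PERIOD(MY_PWM_NODE);  /* 200000 */
uint32_t flags   = DT_PWMS_FLAGS(MY_PWM_NODE);   /* PWM_POLARITY_NORMAL */
```

These macros expand at build time; no runtime devicetree lookup is involved.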
DT_INST_PWMS_CTLR_BY_IDX(inst, idx)
Get the node identifier for the PWM controller from a DT_DRV_COMPAT instance’s pwms
property at an index.
See also:
DT_PWMS_CTLR_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into pwms property
Returns
the node identifier for the PWM controller referenced at index “idx”
DT_INST_PWMS_CTLR_BY_NAME(inst, name)
Get the node identifier for the PWM controller from a DT_DRV_COMPAT instance’s pwms
property by name.
See also:
DT_PWMS_CTLR_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the node identifier for the PWM controller in the named element
DT_INST_PWMS_CTLR(inst)
Equivalent to DT_INST_PWMS_CTLR_BY_IDX(inst, 0)
See also:
DT_PWMS_CTLR_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the node identifier for the PWM controller at index 0 in the instance’s “pwms”
property
Returns
DT_INST_PWMS_CELL_BY_IDX(inst, idx, cell)
Get a DT_DRV_COMPAT instance’s PWM specifier’s cell value at an index.
See also:
DT_PWMS_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into pwms property
• cell – lowercase-and-underscores cell name
the cell value at index “idx”
DT_INST_PWMS_CELL_BY_NAME(inst, name, cell)
Get a DT_DRV_COMPAT instance’s PWM specifier’s cell value by name.
See also:
DT_PWMS_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_INST_PWMS_CELL(inst, cell)
Equivalent to DT_INST_PWMS_CELL_BY_IDX(inst, 0, cell)
Parameters
• inst – DT_DRV_COMPAT instance number
• cell – lowercase-and-underscores cell name
Returns
the cell value at index 0
DT_INST_PWMS_CHANNEL_BY_IDX(inst, idx)
Equivalent to DT_INST_PWMS_CELL_BY_IDX(inst, idx, channel)
See also:
DT_INST_PWMS_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into pwms property
Returns
the channel cell value at index “idx”
DT_INST_PWMS_CHANNEL_BY_NAME(inst, name)
Equivalent to DT_INST_PWMS_CELL_BY_NAME(inst, name, channel)
See also:
DT_INST_PWMS_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the channel cell value in the specifier at the named element
DT_INST_PWMS_CHANNEL(inst)
Equivalent to DT_INST_PWMS_CHANNEL_BY_IDX(inst, 0)
See also:
DT_INST_PWMS_CHANNEL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the channel cell value at index 0
DT_INST_PWMS_PERIOD_BY_IDX(inst, idx)
Equivalent to DT_INST_PWMS_CELL_BY_IDX(inst, idx, period)
See also:
DT_INST_PWMS_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into pwms property
Returns
the period cell value at index “idx”
DT_INST_PWMS_PERIOD_BY_NAME(inst, name)
Equivalent to DT_INST_PWMS_CELL_BY_NAME(inst, name, period)
See also:
DT_INST_PWMS_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the period cell value in the specifier at the named element
DT_INST_PWMS_PERIOD(inst)
Equivalent to DT_INST_PWMS_PERIOD_BY_IDX(inst, 0)
See also:
DT_INST_PWMS_PERIOD_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the period cell value at index 0
DT_INST_PWMS_FLAGS_BY_IDX(inst, idx)
Equivalent to DT_INST_PWMS_CELL_BY_IDX(inst, idx, flags)
See also:
DT_INST_PWMS_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into pwms property
Returns
the flags cell value at index “idx”, or zero if there is none
DT_INST_PWMS_FLAGS_BY_NAME(inst, name)
Equivalent to DT_INST_PWMS_CELL_BY_NAME(inst, name, flags)
See also:
DT_INST_PWMS_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a pwms element as defined by the
node’s pwm-names property
Returns
the flags cell value in the specifier at the named element, or zero if there is none
DT_INST_PWMS_FLAGS(inst)
Equivalent to DT_INST_PWMS_FLAGS_BY_IDX(inst, 0)
See also:
DT_INST_PWMS_FLAGS_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the flags cell value at index 0, or zero if there is none
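The DT_INST_* variants above are intended for driver code where DT_DRV_COMPAT is defined. A minimal sketch, using a hypothetical compatible:

```c
/* DT_DRV_COMPAT must be defined before any DT_INST_* macro is used;
 * "vnd_pwm_consumer" is a hypothetical compatible for illustration. */
#define DT_DRV_COMPAT vnd_pwm_consumer

/* Resolved at build time from instance 0's pwms property. */
static const uint32_t pwm_channel = DT_INST_PWMS_CHANNEL(0);
static const uint32_t pwm_period  = DT_INST_PWMS_PERIOD(0);
static const uint32_t pwm_flags   = DT_INST_PWMS_FLAGS(0);
```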
Reset Controller These conveniences may be used for nodes which describe reset controllers and properties related to them.
group devicetree-reset-controller
Defines
DT_RESET_CTLR_BY_IDX(node_id, idx)
Get the node identifier for the controller phandle from a “resets” phandle-array property at an
index.
Example devicetree fragment:
n: node {
resets = <&reset1 10>, <&reset2 20>;
};
Example usage:
DT_RESET_CTLR_BY_IDX(DT_NODELABEL(n), 0) // node identifier for reset1
DT_RESET_CTLR_BY_IDX(DT_NODELABEL(n), 1) // node identifier for reset2
See also:
DT_PHANDLE_BY_IDX()
Parameters
• node_id – node identifier
• idx – logical index into “resets”
Returns
the node identifier for the reset controller referenced at index “idx”
DT_RESET_CTLR(node_id)
Equivalent to DT_RESET_CTLR_BY_IDX(node_id, 0)
See also:
DT_RESET_CTLR_BY_IDX()
Parameters
• node_id – node identifier
Returns
a node identifier for the reset controller at index 0 in “resets”
DT_RESET_CTLR_BY_NAME(node_id, name)
Get the node identifier for the controller phandle from a resets phandle-array property by
name.
Example devicetree fragment:
n: node {
resets = <&reset1 10>, <&reset2 20>;
reset-names = "alpha", "beta";
};
Example usage:
DT_RESET_CTLR_BY_NAME(DT_NODELABEL(n), alpha) // node identifier for reset1
DT_RESET_CTLR_BY_NAME(DT_NODELABEL(n), beta) // node identifier for reset2
See also:
DT_PHANDLE_BY_NAME()
Parameters
• node_id – node identifier
• name – lowercase-and-underscores name of a resets element as defined by the
node’s reset-names property
Returns
the node identifier for the reset controller referenced by name
DT_RESET_CELL_BY_IDX(node_id, idx, cell)
Get a Reset Controller specifier’s cell value at an index.
Example devicetree fragment:
reset: reset-controller@... {
compatible = "vnd,reset";
#reset-cells = <1>;
};
n: node {
resets = <&reset 10>;
};
Bindings fragment for the vnd,reset compatible:
reset-cells:
- id
Example usage:
DT_RESET_CELL_BY_IDX(DT_NODELABEL(n), 0, id) // 10
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier for a node with a resets property
• idx – logical index into resets property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index “idx”
DT_RESET_CELL_BY_NAME(node_id, name, cell)
Get a Reset Controller specifier’s cell value by name.
Example devicetree fragment:
reset: reset-controller@... {
compatible = "vnd,reset";
#reset-cells = <1>;
};
n: node {
resets = <&reset 10>;
reset-names = "alpha";
};
Bindings fragment for the vnd,reset compatible:
reset-cells:
- id
Example usage:
DT_RESET_CELL_BY_NAME(DT_NODELABEL(n), alpha, id) // 10
See also:
DT_PHA_BY_NAME()
Parameters
• node_id – node identifier for a node with a resets property
• name – lowercase-and-underscores name of a resets element as defined by the
node’s reset-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_RESET_CELL(node_id, cell)
Equivalent to DT_RESET_CELL_BY_IDX(node_id, 0, cell)
See also:
DT_RESET_CELL_BY_IDX()
Parameters
• node_id – node identifier for a node with a resets property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index 0
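Putting the controller and cell accessors together fully describes one reset line. This sketch reuses the vnd,reset example fragment above; obtaining the controller's struct device with DEVICE_DT_GET is one common pattern, shown here as an illustration rather than the only option:

```c
/* Node label "n" refers to the vnd,reset example fragment above. */
const struct device *reset_dev =
        DEVICE_DT_GET(DT_RESET_CTLR(DT_NODELABEL(n)));
uint32_t reset_line = DT_RESET_CELL(DT_NODELABEL(n), id); /* 10 */
```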
DT_INST_RESET_CTLR_BY_IDX(inst, idx)
Get the node identifier for the controller phandle from a “resets” phandle-array property at an
index.
See also:
DT_RESET_CTLR_BY_IDX()
Parameters
• inst – instance number
• idx – logical index into “resets”
Returns
the node identifier for the reset controller referenced at index “idx”
DT_INST_RESET_CTLR(inst)
Equivalent to DT_INST_RESET_CTLR_BY_IDX(inst, 0)
See also:
DT_RESET_CTLR()
Parameters
• inst – instance number
Returns
a node identifier for the reset controller at index 0 in “resets”
DT_INST_RESET_CTLR_BY_NAME(inst, name)
Get the node identifier for the controller phandle from a resets phandle-array property by
name.
See also:
DT_RESET_CTLR_BY_NAME()
Parameters
• inst – instance number
• name – lowercase-and-underscores name of a resets element as defined by the
node’s reset-names property
Returns
the node identifier for the reset controller referenced by the named element
DT_INST_RESET_CELL_BY_IDX(inst, idx, cell)
Get a DT_DRV_COMPAT instance’s Reset Controller specifier’s cell value at an index.
See also:
DT_RESET_CELL_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into resets property
• cell – lowercase-and-underscores cell name
Returns
the cell value at index “idx”
DT_INST_RESET_CELL_BY_NAME(inst, name, cell)
Get a DT_DRV_COMPAT instance’s Reset Controller specifier’s cell value by name.
See also:
DT_RESET_CELL_BY_NAME()
Parameters
• inst – DT_DRV_COMPAT instance number
• name – lowercase-and-underscores name of a resets element as defined by the
node’s reset-names property
• cell – lowercase-and-underscores cell name
Returns
the cell value in the specifier at the named element
DT_INST_RESET_CELL(inst, cell)
Equivalent to DT_INST_RESET_CELL_BY_IDX(inst, 0, cell)
Parameters
• inst – DT_DRV_COMPAT instance number
• cell – lowercase-and-underscores cell name
Returns
the value of the cell inside the specifier at index 0
DT_RESET_ID_BY_IDX(node_id, idx)
Get a Reset Controller specifier’s id cell at an index.
This macro only works for Reset Controller specifiers with cells named “id”. Refer to the node’s
binding to check if necessary.
Example devicetree fragment:
reset: reset-controller@... {
compatible = "vnd,reset";
#reset-cells = <1>;
};
n: node {
resets = <&reset 10>;
};
Bindings fragment for the vnd,reset compatible:
reset-cells:
- id
Example usage:
DT_RESET_ID_BY_IDX(DT_NODELABEL(n), 0) // 10
See also:
DT_PHA_BY_IDX()
Parameters
• node_id – node identifier
• idx – logical index into “resets”
Returns
the id cell value at index “idx”
DT_RESET_ID(node_id)
Equivalent to DT_RESET_ID_BY_IDX(node_id, 0)
See also:
DT_RESET_ID_BY_IDX()
Parameters
• node_id – node identifier
Returns
the id cell value at index 0
DT_INST_RESET_ID_BY_IDX(inst, idx)
Get a DT_DRV_COMPAT instance’s Reset Controller specifier’s id cell value at an index.
See also:
DT_RESET_ID_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
• idx – logical index into “resets”
Returns
the id cell value at index “idx”
DT_INST_RESET_ID(inst)
Equivalent to DT_INST_RESET_ID_BY_IDX(inst, 0)
See also:
DT_INST_RESET_ID_BY_IDX()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
the id cell value at index 0
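In driver code, the instance forms collapse to a single line once DT_DRV_COMPAT is set. A sketch with a hypothetical compatible:

```c
/* "vnd_reset_consumer" is a hypothetical compatible for illustration. */
#define DT_DRV_COMPAT vnd_reset_consumer

/* id cell of instance 0's first "resets" entry, resolved at build time. */
static const uint32_t reset_id = DT_INST_RESET_ID(0);
```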
SPI These conveniences may be used for nodes which describe either SPI controllers or devices, depending on the case.
group devicetree-spi
Defines
DT_SPI_HAS_CS_GPIOS(spi)
Does a SPI controller node have chip select GPIOs configured?
SPI bus controllers use the “cs-gpios” property for configuring chip select GPIOs. Its value is a
phandle-array which specifies the chip select lines.
Example devicetree fragment:
spi1: spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 20 GPIO_ACTIVE_LOW>;
};
spi2: spi@... {
compatible = "vnd,spi";
};
Example usage:
DT_SPI_HAS_CS_GPIOS(DT_NODELABEL(spi1)) // 1
DT_SPI_HAS_CS_GPIOS(DT_NODELABEL(spi2)) // 0
Parameters
• spi – a SPI bus controller node identifier
Returns
1 if “spi” has a cs-gpios property, 0 otherwise
DT_SPI_NUM_CS_GPIOS(spi)
Number of chip select GPIOs in a SPI controller’s cs-gpios property.
Example devicetree fragment:
spi1: spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 20 GPIO_ACTIVE_LOW>;
};
spi2: spi@... {
compatible = "vnd,spi";
};
Example usage:
DT_SPI_NUM_CS_GPIOS(DT_NODELABEL(spi1)) // 2
DT_SPI_NUM_CS_GPIOS(DT_NODELABEL(spi2)) // 0
Parameters
• spi – a SPI bus controller node identifier
Returns
Logical length of spi’s cs-gpios property, or 0 if “spi” doesn’t have a cs-gpios property
DT_SPI_DEV_HAS_CS_GPIOS(spi_dev)
Does a SPI device have a chip select line configured?
Example devicetree fragment:
spi1: spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 20 GPIO_ACTIVE_LOW>;
a: spi-dev-a@0 {
reg = <0>;
};
b: spi-dev-b@1 {
reg = <1>;
};
};
spi2: spi@... {
compatible = "vnd,spi";
c: spi-dev-c@0 {
reg = <0>;
};
};
Example usage:
DT_SPI_DEV_HAS_CS_GPIOS(DT_NODELABEL(a)) // 1
DT_SPI_DEV_HAS_CS_GPIOS(DT_NODELABEL(b)) // 1
DT_SPI_DEV_HAS_CS_GPIOS(DT_NODELABEL(c)) // 0
Parameters
• spi_dev – a SPI device node identifier
Returns
1 if spi_dev’s bus node DT_BUS(spi_dev) has a chip select pin at index
DT_REG_ADDR(spi_dev), 0 otherwise
DT_SPI_DEV_CS_GPIOS_CTLR(spi_dev)
Get a SPI device’s chip select GPIO controller’s node identifier.
Example devicetree fragment:
spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 20 GPIO_ACTIVE_LOW>;
a: spi-dev-a@0 {
reg = <0>;
};
b: spi-dev-b@1 {
reg = <1>;
};
};
Example usage:
DT_SPI_DEV_CS_GPIOS_CTLR(DT_NODELABEL(a)) // DT_NODELABEL(gpio1)
DT_SPI_DEV_CS_GPIOS_CTLR(DT_NODELABEL(b)) // DT_NODELABEL(gpio2)
Parameters
• spi_dev – a SPI device node identifier
Returns
node identifier for spi_dev’s chip select GPIO controller
DT_SPI_DEV_CS_GPIOS_LABEL(spi_dev)
Get a SPI device’s chip select GPIO controller’s label property.
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_SPI_DEV_CS_GPIOS_CTLR(node)).
Example devicetree fragment:
gpio1: gpio@... {
label = "GPIO_1";
};
gpio2: gpio@... {
label = "GPIO_2";
};
spi1: spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 20 GPIO_ACTIVE_LOW>;
a: spi-dev-a@0 {
reg = <0>;
};
b: spi-dev-b@1 {
reg = <1>;
};
};
Example usage:
DT_SPI_DEV_CS_GPIOS_LABEL(DT_NODELABEL(a)) // "GPIO_1"
DT_SPI_DEV_CS_GPIOS_LABEL(DT_NODELABEL(b)) // "GPIO_2"
Parameters
• spi_dev – a SPI device node identifier
Returns
label property of spi_dev’s chip select GPIO controller
DT_SPI_DEV_CS_GPIOS_PIN(spi_dev)
Get a SPI device’s chip select GPIO pin number.
It’s an error if the GPIO specifier for spi_dev’s entry in its bus node’s cs-gpios property has no
pin cell.
Example devicetree fragment:
spi1: spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>,
<&gpio2 20 GPIO_ACTIVE_LOW>;
a: spi-dev-a@0 {
reg = <0>;
};
b: spi-dev-b@1 {
reg = <1>;
};
};
Example usage:
DT_SPI_DEV_CS_GPIOS_PIN(DT_NODELABEL(a)) // 10
DT_SPI_DEV_CS_GPIOS_PIN(DT_NODELABEL(b)) // 20
Parameters
• spi_dev – a SPI device node identifier
Returns
pin number of spi_dev’s chip select GPIO
DT_SPI_DEV_CS_GPIOS_FLAGS(spi_dev)
Get a SPI device’s chip select GPIO flags.
Example devicetree fragment:
spi1: spi@... {
compatible = "vnd,spi";
cs-gpios = <&gpio1 10 GPIO_ACTIVE_LOW>;
a: spi-dev-a@0 {
reg = <0>;
};
};
Example usage:
DT_SPI_DEV_CS_GPIOS_FLAGS(DT_NODELABEL(a)) // GPIO_ACTIVE_LOW
If the GPIO specifier for spi_dev’s entry in its bus node’s cs-gpios property has no flags cell,
this expands to zero.
Parameters
• spi_dev – a SPI device node identifier
Returns
flags value of spi_dev’s chip select GPIO specifier, or zero if there is none
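The three per-device accessors are often combined into a single GPIO descriptor. A sketch, guarded so it still compiles when the device has no chip select entry (node label "a" refers to the example fragments above; the descriptor layout uses Zephyr's struct gpio_dt_spec):

```c
#if DT_SPI_DEV_HAS_CS_GPIOS(DT_NODELABEL(a))
/* Controller device, pin number, and flags for spi-dev-a's CS line. */
static const struct gpio_dt_spec cs = {
        .port = DEVICE_DT_GET(DT_SPI_DEV_CS_GPIOS_CTLR(DT_NODELABEL(a))),
        .pin = DT_SPI_DEV_CS_GPIOS_PIN(DT_NODELABEL(a)),
        .dt_flags = DT_SPI_DEV_CS_GPIOS_FLAGS(DT_NODELABEL(a)),
};
#endif
```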
DT_INST_SPI_DEV_HAS_CS_GPIOS(inst)
Equivalent to DT_SPI_DEV_HAS_CS_GPIOS(DT_DRV_INST(inst)).
See also:
DT_SPI_DEV_HAS_CS_GPIOS()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
1 if the instance’s bus has a CS pin at index DT_INST_REG_ADDR(inst), 0 otherwise
DT_INST_SPI_DEV_CS_GPIOS_CTLR(inst)
Get the GPIO controller node identifier for a SPI device instance. This is equivalent to
DT_SPI_DEV_CS_GPIOS_CTLR(DT_DRV_INST(inst)).
See also:
DT_SPI_DEV_CS_GPIOS_CTLR()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
node identifier for instance’s chip select GPIO controller
DT_INST_SPI_DEV_CS_GPIOS_LABEL(inst)
Get the GPIO controller name for a SPI device instance. This is equivalent to
DT_SPI_DEV_CS_GPIOS_LABEL(DT_DRV_INST(inst)).
Deprecated:
If used to obtain a device instance with device_get_binding, consider using
DEVICE_DT_GET(DT_INST_SPI_DEV_CS_GPIOS_CTLR(node)).
See also:
DT_SPI_DEV_CS_GPIOS_LABEL()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
label property of the instance’s chip select GPIO controller
DT_INST_SPI_DEV_CS_GPIOS_PIN(inst)
Equivalent to DT_SPI_DEV_CS_GPIOS_PIN(DT_DRV_INST(inst)).
See also:
DT_SPI_DEV_CS_GPIOS_PIN()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
pin number of the instance’s chip select GPIO
DT_INST_SPI_DEV_CS_GPIOS_FLAGS(inst)
Equivalent to DT_SPI_DEV_CS_GPIOS_FLAGS(DT_DRV_INST(inst)).
See also:
DT_SPI_DEV_CS_GPIOS_FLAGS()
Parameters
• inst – DT_DRV_COMPAT instance number
Returns
flags value of the instance’s chip select GPIO specifier, or zero if there is none
Chosen nodes The special /chosen node contains properties whose values describe system-wide set-
tings. The DT_CHOSEN() macro can be used to get a node identifier for a chosen node.
group devicetree-generic-chosen
Defines
DT_CHOSEN(prop)
Get a node identifier for a /chosen node property.
This is only valid to call if DT_HAS_CHOSEN(prop) is 1.
Parameters
• prop – lowercase-and-underscores property name for the /chosen node
Returns
a node identifier for the chosen node property
DT_HAS_CHOSEN(prop)
Test if the devicetree has a /chosen node.
Parameters
• prop – lowercase-and-underscores devicetree property
Returns
1 if the chosen property exists and refers to a node, 0 otherwise
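A typical use is resolving a chosen device at build time, guarded with DT_HAS_CHOSEN so the code still compiles when the property is absent. A sketch using the standard zephyr,console chosen property:

```c
/* A property name containing a comma is written with an underscore in
 * the macro argument: zephyr,console becomes zephyr_console. */
#if DT_HAS_CHOSEN(zephyr_console)
static const struct device *console_dev =
        DEVICE_DT_GET(DT_CHOSEN(zephyr_console));
#endif
```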
Zephyr-specific chosen nodes The following table documents some commonly used Zephyr-specific
chosen nodes.
Sometimes, a chosen node’s label property will be used to set the default value of a Kconfig option which
in turn configures a hardware-specific device. This is usually for backwards compatibility in cases when
the Kconfig option predates devicetree support in Zephyr. In other cases, there is no Kconfig option, and
the devicetree node is used directly in the source code to select a device.
Bindings index
This page documents the available devicetree bindings. See Devicetree bindings for an introduction to the
Zephyr bindings file format.
Vendor index This section contains an index of hardware vendors. Click on a vendor’s name to go to
the list of bindings for that vendor.
• Generic or vendor-independent
• Altera Corp. (altr)
• AMS AG (ams)
• Analog Devices, Inc. (adi)
Bindings by vendor This section contains available bindings, grouped by vendor. Within each group,
bindings are listed by the “compatible” property they apply to, like this:
Vendor name (vendor prefix)
• <compatible-A>
• <compatible-B> (on <bus-name> bus)
• <compatible-C>
• ...
The text “(on <bus-name> bus)” appears when bindings may behave differently depending on the bus
the node appears on. For example, this applies to some sensor device nodes, which may appear as
children of either I2C or SPI bus nodes.
Generic or vendor-independent
• dtbinding_adafruit_feather_header
• dtbinding_arduino_header_r3
• dtbinding_arduino_mkr_header
• dtbinding_arduino_nano_header_r3
• dtbinding_atmel_xplained_header
• dtbinding_atmel_xplained_pro_header
• dtbinding_can_transceiver_gpio
• dtbinding_ethernet_phy
• dtbinding_fixed_clock
• dtbinding_fixed_factor_clock
• dtbinding_fixed_partitions
• dtbinding_generic_fem_two_ctrl_pins
• dtbinding_gpio_i2c
• dtbinding_gpio_keys
• dtbinding_gpio_leds
• dtbinding_gpio_radio_coex
• dtbinding_grove_header
• dtbinding_lm75
• dtbinding_lm77
• dtbinding_mikro_bus
• dtbinding_mmio_sram
• dtbinding_neorv32_cpu
• dtbinding_neorv32_gpio
• dtbinding_neorv32_machine_timer
• dtbinding_neorv32_trng
• dtbinding_neorv32_uart
• dtbinding_niosv_machine_timer
• dtbinding_nordic_thingy53_edge_connector
• dtbinding_ns16550
• dtbinding_ntc_thermistor_generic
• dtbinding_nvme_controller
• dtbinding_particle_gen3_header
• dtbinding_pci_host_ecam_generic
• dtbinding_power_domain
• dtbinding_power_domain_gpio
• dtbinding_pwm_leds
• dtbinding_raspberrypi_40pins_header
• dtbinding_regulator_fixed
• dtbinding_sample_controller
• dtbinding_shared_irq
• dtbinding_soc_nv_flash
• dtbinding_st_morpho_header
• dtbinding_syscon
• dtbinding_vnd_gpio_enable_disable_interrupt
• dtbinding_usb_audio
• dtbinding_usb_audio_hp
• dtbinding_usb_audio_hs
• dtbinding_usb_audio_mic
• dtbinding_usb_c_connector
• dtbinding_usb_nop_xceiv
• dtbinding_usb_ulpi_phy
• dtbinding_vexriscv_intc0
• dtbinding_voltage_divider
AMS AG (ams)
• dtbinding_ams_as5600
• dtbinding_ams_as6212
• dtbinding_ams_ccs811
• dtbinding_ams_ens210
• dtbinding_ams_iaqcore
• dtbinding_ams_tcs3400
• dtbinding_ams_tmd2620
Arduino (arduino)
• dtbinding_arduino_uno_adc
ARM Ltd. (arm)
• dtbinding_arm_cryptocell_312
• dtbinding_arm_dma_pl330
• dtbinding_arm_dtcm
• dtbinding_arm_ethos_u
• dtbinding_arm_gic
• dtbinding_arm_gic_v3_its
• dtbinding_arm_itcm
• dtbinding_arm_mhu
• dtbinding_arm_mps2_fpgaio_gpio
• dtbinding_arm_mps3_fpgaio_gpio
• dtbinding_arm_pl011
• dtbinding_arm_pl022
• dtbinding_arm_psci_0.2
• dtbinding_arm_sbsa_uart
• dtbinding_arm_scc
• dtbinding_arm_v6m_nvic
• dtbinding_arm_v7m_nvic
• dtbinding_arm_v8.1m_nvic
• dtbinding_arm_v8m_nvic
• dtbinding_arm_versatile_i2c
Atmel Corporation (atmel)
• dtbinding_atmel_sam_adc
• dtbinding_atmel_sam_afec
• dtbinding_atmel_sam_can
• dtbinding_atmel_sam_dac
• dtbinding_atmel_sam_flash_controller
• dtbinding_atmel_sam_gmac
• dtbinding_atmel_sam_gpio
• dtbinding_atmel_sam_i2c_twi
• dtbinding_atmel_sam_i2c_twihs
• dtbinding_atmel_sam_i2c_twim
• dtbinding_atmel_sam_mdio
• dtbinding_atmel_sam_pinctrl
• dtbinding_atmel_sam_pmc
• dtbinding_atmel_sam_pwm
• dtbinding_atmel_sam_rstc
• dtbinding_atmel_sam_smc
• dtbinding_atmel_sam_spi
• dtbinding_atmel_sam_ssc
• dtbinding_atmel_sam_tc
• dtbinding_atmel_sam_tc_qdec
• dtbinding_atmel_sam_trng
• dtbinding_atmel_sam_uart
• dtbinding_atmel_sam_usart
• dtbinding_atmel_sam_usbc
• dtbinding_atmel_sam_usbhs
• dtbinding_atmel_sam_watchdog
• dtbinding_atmel_sam_xdmac
• dtbinding_atmel_sam0_adc
• dtbinding_atmel_sam0_can
• dtbinding_atmel_sam0_dac
• dtbinding_atmel_sam0_dmac
• dtbinding_atmel_sam0_eic
• dtbinding_atmel_sam0_gmac
• dtbinding_atmel_sam0_gpio
• dtbinding_atmel_sam0_i2c
• dtbinding_atmel_sam0_id
• dtbinding_atmel_sam0_nvmctrl
• dtbinding_atmel_sam0_pinctrl
• dtbinding_atmel_sam0_pinmux
• dtbinding_atmel_sam0_rtc
• dtbinding_atmel_sam0_sercom
• dtbinding_atmel_sam0_spi
• dtbinding_atmel_sam0_tc32
• dtbinding_atmel_sam0_tcc_pwm
• dtbinding_atmel_sam0_uart
• dtbinding_atmel_sam0_usb
• dtbinding_atmel_sam0_watchdog
• dtbinding_atmel_sam4l_flashcalw_controller
• dtbinding_atmel_sam4l_gpio
• dtbinding_atmel_sam4l_uid
• dtbinding_atmel_samc2x_gclk
• dtbinding_atmel_samc2x_mclk
• dtbinding_atmel_samd2x_gclk
• dtbinding_atmel_samd2x_pm
• dtbinding_atmel_samd5x_gclk
• dtbinding_atmel_samd5x_mclk
• dtbinding_atmel_saml2x_gclk
• dtbinding_atmel_saml2x_mclk
• dtbinding_atmel_winc1500
Bosch Sensortec GmbH (bosch)
• dtbinding_bosch_bmm150_spi
• dtbinding_bosch_bmp388_i2c
• dtbinding_bosch_bmp388_spi
EPCOS AG (epcos)
• dtbinding_epcos_b57861s0103a039
Gaisler (gaisler)
• dtbinding_gaisler_apbuart
• dtbinding_gaisler_gptimer
• dtbinding_gaisler_irqmp
• dtbinding_gaisler_leon3
Honeywell (honeywell)
• dtbinding_honeywell_hmc5883l
• dtbinding_honeywell_mpr
• dtbinding_honeywell_sm351lt
Hynitron (hynitron)
• dtbinding_hynitron_cst816s
Intel Corporation (intel)
• dtbinding_intel_adsp_host_ipc
• dtbinding_intel_adsp_idc
• dtbinding_intel_adsp_imr
• dtbinding_intel_adsp_mailbox
• dtbinding_intel_adsp_mem_window
• dtbinding_intel_adsp_mtl_tlb
• dtbinding_intel_adsp_power_domain
• dtbinding_intel_adsp_sha
• dtbinding_intel_adsp_shim_clkctl
• dtbinding_intel_adsp_timer
• dtbinding_intel_adsp_tlb
• dtbinding_intel_adsp_watchdog
• dtbinding_intel_agilex_clock
• dtbinding_intel_agilex_socfpga_sip_smc
• dtbinding_intel_alh_dai
• dtbinding_intel_apollo_lake
• dtbinding_intel_atom
• dtbinding_intel_cavs_i2s
• dtbinding_intel_cavs_intc
• dtbinding_intel_dai_dmic
• dtbinding_intel_e1000
• dtbinding_intel_elkhart_lake
• dtbinding_intel_gna
• dtbinding_intel_gpio
• dtbinding_intel_hda_dai
• dtbinding_intel_hpet
• dtbinding_intel_ibecc
• dtbinding_intel_ioapic
• dtbinding_intel_lakemont
• dtbinding_intel_lpss
• dtbinding_intel_multiboot_framebuffer
• dtbinding_intel_niosv
• dtbinding_intel_pch_smbus
• dtbinding_intel_pcie
• dtbinding_intel_penwell_spi
• dtbinding_intel_raptor_lake
• dtbinding_intel_ssp_dai
• dtbinding_intel_ssp_sspbase
• dtbinding_intel_tco_wdt
• dtbinding_intel_vt_d
• dtbinding_intel_x86
Intersil (isil)
• dtbinding_isil_isl29035
ITE Tech. Inc. (ite)
• dtbinding_ite_it8xxx2_sspi
• dtbinding_ite_it8xxx2_tach
• dtbinding_ite_it8xxx2_timer
• dtbinding_ite_it8xxx2_uart
• dtbinding_ite_it8xxx2_usbpd
• dtbinding_ite_it8xxx2_vcmp
• dtbinding_ite_it8xxx2_watchdog
• dtbinding_ite_it8xxx2_wuc
• dtbinding_ite_it8xxx2_wuc_map
• dtbinding_ite_riscv_ite
Kvaser (kvaser)
• dtbinding_kvaser_pcican
LiteX SoC builder (litex)
• dtbinding_litex_uart0
Microchip Technology Inc. (microchip)
• dtbinding_microchip_xec_kscan
• dtbinding_microchip_xec_pcr
• dtbinding_microchip_xec_peci
• dtbinding_microchip_xec_pinctrl
• dtbinding_microchip_xec_ps2
• dtbinding_microchip_xec_pwm
• dtbinding_microchip_xec_pwmbbled
• dtbinding_microchip_xec_qmspi
• dtbinding_microchip_xec_qmspi_ldma
• dtbinding_microchip_xec_rtos_timer
• dtbinding_microchip_xec_symcr
• dtbinding_microchip_xec_tach
• dtbinding_microchip_xec_timer
• dtbinding_microchip_xec_uart
• dtbinding_microchip_xec_watchdog
Nordic Semiconductor (nordic)
• dtbinding_nordic_nrf_clock
• dtbinding_nordic_nrf_comp
• dtbinding_nordic_nrf_ctrlapperi
• dtbinding_nordic_nrf_dcnf
• dtbinding_nordic_nrf_dppic
• dtbinding_nordic_nrf_ecb
• dtbinding_nordic_nrf_egu
• dtbinding_nordic_nrf_ficr
• dtbinding_nordic_nrf_gpio
• dtbinding_nordic_nrf_gpio_forwarder
• dtbinding_nordic_nrf_gpiote
• dtbinding_nordic_nrf_gpregret
• dtbinding_nordic_nrf_i2s
• dtbinding_nordic_nrf_ieee802154
• dtbinding_nordic_nrf_ipc
• dtbinding_nordic_nrf_kmu
• dtbinding_nordic_nrf_led_matrix
• dtbinding_nordic_nrf_lpcomp
• dtbinding_nordic_nrf_mpu
• dtbinding_nordic_nrf_mutex
• dtbinding_nordic_nrf_mwu
• dtbinding_nordic_nrf_nfct
• dtbinding_nordic_nrf_oscillators
• dtbinding_nordic_nrf_pdm
• dtbinding_nordic_nrf_pinctrl
• dtbinding_nordic_nrf_power
• dtbinding_nordic_nrf_ppi
• dtbinding_nordic_nrf_pwm
• dtbinding_nordic_nrf_qdec
• dtbinding_nordic_nrf_qspi
• dtbinding_nordic_nrf_radio
• dtbinding_nordic_nrf_regulators
• dtbinding_nordic_nrf_reset
• dtbinding_nordic_nrf_rng
• dtbinding_nordic_nrf_rtc
• dtbinding_nordic_nrf_saadc
• dtbinding_nordic_nrf_spi
• dtbinding_nordic_nrf_spim
• dtbinding_nordic_nrf_spis
• dtbinding_nordic_nrf_spu
• dtbinding_nordic_nrf_sw_pwm
• dtbinding_nordic_nrf_swi
• dtbinding_nordic_nrf_temp
• dtbinding_nordic_nrf_timer
• dtbinding_nordic_nrf_twi
• dtbinding_nordic_nrf_twim
• dtbinding_nordic_nrf_twis
• dtbinding_nordic_nrf_uart
• dtbinding_nordic_nrf_uarte
• dtbinding_nordic_nrf_uicr
• dtbinding_nordic_nrf_usbd
• dtbinding_nordic_nrf_usbreg
• dtbinding_nordic_nrf_vmc
• dtbinding_nordic_nrf_wdt
• dtbinding_nordic_nrf21540_fem
• dtbinding_nordic_nrf21540_fem_spi
• dtbinding_nordic_nrf51_flash_controller
• dtbinding_nordic_nrf52_flash_controller
• dtbinding_nordic_nrf53_flash_controller
• dtbinding_nordic_nrf91_flash_controller
• dtbinding_nordic_qspi_nor
Nuvoton Technology Corporation (nuvoton)
• dtbinding_nuvoton_npcx_espi
• dtbinding_nuvoton_npcx_espi_vw_conf
• dtbinding_nuvoton_npcx_gpio
• dtbinding_nuvoton_npcx_host_sub
• dtbinding_nuvoton_npcx_host_uart
• dtbinding_nuvoton_npcx_i2c_ctrl
• dtbinding_nuvoton_npcx_i2c_port
• dtbinding_nuvoton_npcx_itim_timer
• dtbinding_nuvoton_npcx_kbd
• dtbinding_nuvoton_npcx_leakage_io
• dtbinding_nuvoton_npcx_lvolctrl_conf
• dtbinding_nuvoton_npcx_miwu
• dtbinding_nuvoton_npcx_miwu_int_map
• dtbinding_nuvoton_npcx_miwu_wui_map
• dtbinding_nuvoton_npcx_pcc
• dtbinding_nuvoton_npcx_peci
• dtbinding_nuvoton_npcx_pinctrl
• dtbinding_nuvoton_npcx_pinctrl_conf
• dtbinding_nuvoton_npcx_pinctrl_def
• dtbinding_nuvoton_npcx_power_psl
• dtbinding_nuvoton_npcx_ps2_channel
• dtbinding_nuvoton_npcx_ps2_ctrl
• dtbinding_nuvoton_npcx_pwm
• dtbinding_nuvoton_npcx_scfg
• dtbinding_nuvoton_npcx_sha
• dtbinding_nuvoton_npcx_shi
• dtbinding_nuvoton_npcx_soc_id
• dtbinding_nuvoton_npcx_spi_fiu
• dtbinding_nuvoton_npcx_tach
• dtbinding_nuvoton_npcx_uart
• dtbinding_nuvoton_npcx_watchdog
• dtbinding_nuvoton_numicro_gpio
• dtbinding_nuvoton_numicro_pinctrl
• dtbinding_nuvoton_numicro_uart
NXP Semiconductors (nxp)
• dtbinding_nxp_flexcan_fd
• dtbinding_nxp_flexpwm
• dtbinding_nxp_fxas21002_spi
• dtbinding_nxp_fxas21002_i2c
• dtbinding_nxp_fxos8700_spi
• dtbinding_nxp_fxos8700_i2c
• dtbinding_nxp_gpt_hw_timer
• dtbinding_nxp_iap_fmc11
• dtbinding_nxp_iap_fmc54
• dtbinding_nxp_iap_fmc55
• dtbinding_nxp_iap_fmc553
• dtbinding_nxp_imx_anatop
• dtbinding_nxp_imx_caam
• dtbinding_nxp_imx_ccm
• dtbinding_nxp_imx_ccm_rev2
• dtbinding_nxp_imx_csi
• dtbinding_nxp_imx_dtcm
• dtbinding_nxp_imx_elcdif
• dtbinding_nxp_imx_epit
• dtbinding_nxp_imx_flexspi
• dtbinding_nxp_imx_flexspi_aps6408l
• dtbinding_nxp_imx_flexspi_hyperflash
• dtbinding_nxp_imx_flexspi_mx25um51345g
• dtbinding_nxp_imx_flexspi_nor
• dtbinding_nxp_imx_flexspi_s27ks0641
• dtbinding_nxp_imx_gpio
• dtbinding_nxp_imx_gpr
• dtbinding_nxp_imx_gpt
• dtbinding_nxp_imx_iomuxc
• dtbinding_nxp_imx_itcm
• dtbinding_nxp_imx_iuart
• dtbinding_nxp_imx_lpi2c
• dtbinding_nxp_imx_lpspi
• dtbinding_nxp_imx_mipi_dsi
• dtbinding_nxp_imx_mu
• dtbinding_nxp_imx_mu_rev2
• dtbinding_nxp_imx_pwm
• dtbinding_nxp_imx_qtmr
• dtbinding_nxp_imx_semc
• dtbinding_nxp_imx_snvs_rtc
• dtbinding_nxp_imx_tmr
• dtbinding_nxp_imx_uart
• dtbinding_nxp_imx_usdhc
• dtbinding_nxp_imx_wdog
• dtbinding_nxp_imx7d_pinctrl
• dtbinding_nxp_imx8m_pinctrl
• dtbinding_nxp_imx8mp_pinctrl
• dtbinding_nxp_imx93_pinctrl
• dtbinding_nxp_kinetis_acmp
• dtbinding_nxp_kinetis_adc12
• dtbinding_nxp_kinetis_adc16
• dtbinding_nxp_kinetis_dac
• dtbinding_nxp_kinetis_dac32
• dtbinding_nxp_kinetis_dspi
• dtbinding_nxp_kinetis_ethernet
• dtbinding_nxp_kinetis_ftfa
• dtbinding_nxp_kinetis_ftfe
• dtbinding_nxp_kinetis_ftfl
• dtbinding_nxp_kinetis_ftm
• dtbinding_nxp_kinetis_ftm_pwm
• dtbinding_nxp_kinetis_gpio
• dtbinding_nxp_kinetis_i2c
• dtbinding_nxp_kinetis_ke1xf_sim
• dtbinding_nxp_kinetis_lpsci
• dtbinding_nxp_kinetis_lptmr
• dtbinding_nxp_kinetis_lpuart
• dtbinding_nxp_kinetis_mcg
• dtbinding_nxp_kinetis_pcc
• dtbinding_nxp_kinetis_pinctrl
• dtbinding_nxp_kinetis_pinmux
• dtbinding_nxp_kinetis_pit
• dtbinding_nxp_kinetis_ptp
• dtbinding_nxp_kinetis_pwt
• dtbinding_nxp_kinetis_rnga
• dtbinding_nxp_kinetis_rtc
• dtbinding_nxp_kinetis_scg
• dtbinding_nxp_kinetis_sim
• dtbinding_nxp_kinetis_temperature
• dtbinding_nxp_kinetis_tpm
• dtbinding_nxp_kinetis_trng
• dtbinding_nxp_kinetis_uart
• dtbinding_nxp_kinetis_usbd
• dtbinding_nxp_kinetis_wdog
• dtbinding_nxp_kinetis_wdog32
• dtbinding_nxp_kw41z_ieee802154
• dtbinding_nxp_lpc_ctimer
• dtbinding_nxp_lpc_dma
• dtbinding_nxp_lpc_flexcomm
• dtbinding_nxp_lpc_gpio
• dtbinding_nxp_lpc_i2c
• dtbinding_nxp_lpc_i2s
• dtbinding_nxp_lpc_iocon
• dtbinding_nxp_lpc_iocon_pinctrl
• dtbinding_nxp_lpc_iocon_pio
• dtbinding_nxp_lpc_lpadc
• dtbinding_nxp_lpc_mailbox
• dtbinding_nxp_lpc_mcan
• dtbinding_nxp_lpc_rng
• dtbinding_nxp_lpc_rtc
• dtbinding_nxp_lpc_sdif
• dtbinding_nxp_lpc_spi
• dtbinding_nxp_lpc_syscon
• dtbinding_nxp_lpc_uid
• dtbinding_nxp_lpc_usart
• dtbinding_nxp_lpc_wwdt
• dtbinding_nxp_lpc11u6x_eeprom
• dtbinding_nxp_lpc11u6x_gpio
• dtbinding_nxp_lpc11u6x_i2c
• dtbinding_nxp_lpc11u6x_pinctrl
• dtbinding_nxp_lpc11u6x_syscon
• dtbinding_nxp_lpc11u6x_uart
• dtbinding_nxp_mcr20a
• dtbinding_nxp_mcux_12b1msps_sar
• dtbinding_nxp_mcux_edma
• dtbinding_nxp_mcux_i2s
• dtbinding_nxp_mcux_i3c
• dtbinding_nxp_mcux_qdec
• dtbinding_nxp_mcux_rt_pinctrl
• dtbinding_nxp_mcux_rt11xx_pinctrl
• dtbinding_nxp_mcux_usbd
• dtbinding_nxp_mcux_xbar
• dtbinding_nxp_mipi_dsi_2l
• dtbinding_nxp_os_timer
• dtbinding_nxp_pca9420
• dtbinding_nxp_pca95xx
• dtbinding_nxp_pca9633
• dtbinding_nxp_pca9685
• dtbinding_nxp_pcal6408a
• dtbinding_nxp_pcal6416a
• dtbinding_nxp_pcf8523
• dtbinding_nxp_pcf8574
• dtbinding_nxp_pdcfg_power
• dtbinding_nxp_pint
• dtbinding_nxp_rt_iocon_pinctrl
• dtbinding_nxp_s32_canxl
• dtbinding_nxp_s32_gpio
• dtbinding_nxp_s32_linflexd
• dtbinding_nxp_s32_mru
• dtbinding_nxp_s32_netc_emdio
• dtbinding_nxp_s32_netc_psi
• dtbinding_nxp_s32_netc_vsi
• dtbinding_nxp_s32_siul2_eirq
• dtbinding_nxp_s32_spi
• dtbinding_nxp_s32_swt
• dtbinding_nxp_s32_sys_timer
• dtbinding_nxp_s32ze_pinctrl
• dtbinding_nxp_sc18im704
• dtbinding_nxp_sc18im704_gpio
• dtbinding_nxp_sc18im704_i2c
• dtbinding_nxp_sctimer_pwm
• dtbinding_nxp_vf610_adc
open-isa.org (openisa)
• dtbinding_openisa_rv32m1_event_unit
• dtbinding_openisa_rv32m1_ftfe
• dtbinding_openisa_rv32m1_genfsk
• dtbinding_openisa_rv32m1_gpio
• dtbinding_openisa_rv32m1_intmux
• dtbinding_openisa_rv32m1_intmux_ch
• dtbinding_openisa_rv32m1_lpi2c
• dtbinding_openisa_rv32m1_lpspi
• dtbinding_openisa_rv32m1_lptmr
• dtbinding_openisa_rv32m1_lpuart
• dtbinding_openisa_rv32m1_pcc
• dtbinding_openisa_rv32m1_pinctrl
• dtbinding_openisa_rv32m1_pinmux
• dtbinding_openisa_rv32m1_tpm
• dtbinding_openisa_rv32m1_trng
OpenCores.org (opencores)
• dtbinding_opencores_spi_simple
QEMU, a generic and open source machine emulator and virtualizer (qemu)
• dtbinding_qemu_ivshmem
• dtbinding_qemu_nios2_zephyr
• dtbinding_renesas_smartbond_pinctrl
• dtbinding_renesas_smartbond_sdadc
• dtbinding_renesas_smartbond_spi
• dtbinding_renesas_smartbond_sys_clock
• dtbinding_renesas_smartbond_trng
• dtbinding_renesas_smartbond_uart
• dtbinding_renesas_smartbond_usbd
• dtbinding_renesas_smartbond_watchdog
Sensirion AG (sensirion)
• dtbinding_sensirion_sgp40
• dtbinding_sensirion_sht3xd
• dtbinding_sensirion_sht4x
• dtbinding_sensirion_shtcx
Siemens AG (siemens)
• dtbinding_siemens_ivshmem_eth
STMicroelectronics (st)
• dtbinding_st_dsi_lcd_qsh_030
• dtbinding_st_hts221_i2c
• dtbinding_st_hts221_spi
• dtbinding_st_i3g4250d
• dtbinding_st_iis2dh_i2c
• dtbinding_st_iis2dh_spi
• dtbinding_st_iis2dlpc_spi
• dtbinding_st_iis2dlpc_i2c
• dtbinding_st_iis2iclx_spi
• dtbinding_st_iis2iclx_i2c
• dtbinding_st_iis2mdc_spi
• dtbinding_st_iis2mdc_i2c
• dtbinding_st_iis3dhhc_spi
• dtbinding_st_ism330dhcx_i2c
• dtbinding_st_ism330dhcx_spi
• dtbinding_st_lis2dh_i2c
• dtbinding_st_lis2dh_spi
• dtbinding_st_lis2dh12_i2c
• dtbinding_st_lis2ds12_i2c
• dtbinding_st_lis2ds12_spi
• dtbinding_st_lis2dw12_spi
• dtbinding_st_lis2dw12_i2c
• dtbinding_st_lis2mdl_spi
• dtbinding_st_lis2mdl_i2c
• dtbinding_st_lis3dh_i2c
• dtbinding_st_lis3mdl_magn
• dtbinding_st_lps22hb_press
• dtbinding_st_lps22hh_spi
• dtbinding_st_lps22hh_i3c
• dtbinding_st_lps22hh_i2c
• dtbinding_st_lps25hb_press
• dtbinding_st_lsm303agr_accel_i2c
• dtbinding_st_lsm303agr_accel_spi
• dtbinding_st_lsm303dlhc_accel
• dtbinding_st_lsm303dlhc_magn
• dtbinding_st_lsm6ds0
• dtbinding_st_lsm6dsl_spi
• dtbinding_st_lsm6dsl_i2c
• dtbinding_st_lsm6dso_i2c
• dtbinding_st_lsm6dso_spi
• dtbinding_st_lsm6dso16is_i2c
• dtbinding_st_lsm6dso16is_spi
• dtbinding_st_lsm6dso32_i2c
• dtbinding_st_lsm6dso32_spi
• dtbinding_st_lsm6dsv16x_spi
• dtbinding_st_lsm6dsv16x_i2c
• dtbinding_st_lsm9ds0_gyro_i2c
• dtbinding_st_lsm9ds0_mfd_i2c
• dtbinding_st_mpxxdtyy_i2s
• dtbinding_st_stm32_adc
• dtbinding_st_stm32_aes
• dtbinding_st_stm32_backup_sram
• dtbinding_st_stm32_bbram
• dtbinding_st_stm32_bdma
• dtbinding_st_stm32_can
• dtbinding_st_stm32_ccm
• dtbinding_st_stm32_clock_mux
• dtbinding_st_stm32_counter
• dtbinding_st_stm32_cryp
• dtbinding_st_stm32_dac
• dtbinding_st_stm32_dma
• dtbinding_st_stm32_dma_v1
• dtbinding_st_stm32_dma_v2
• dtbinding_st_stm32_dma_v2bis
• dtbinding_st_stm32_dmamux
• dtbinding_st_stm32_eeprom
• dtbinding_st_stm32_ethernet
• dtbinding_st_stm32_exti
• dtbinding_st_stm32_fdcan
• dtbinding_st_stm32_flash_controller
• dtbinding_st_stm32_fmc
• dtbinding_st_stm32_fmc_nor_psram
• dtbinding_st_stm32_fmc_sdram
• dtbinding_st_stm32_gpio
• dtbinding_st_stm32_hse_clock
• dtbinding_st_stm32_hsem_mailbox
• dtbinding_st_stm32_i2c_v1
• dtbinding_st_stm32_i2c_v2
• dtbinding_st_stm32_i2s
• dtbinding_st_stm32_ipcc_mailbox
• dtbinding_st_stm32_lptim
• dtbinding_st_stm32_lpuart
• dtbinding_st_stm32_lse_clock
• dtbinding_st_stm32_ltdc
• dtbinding_st_stm32_mipi_dsi
• dtbinding_st_stm32_msi_clock
• dtbinding_st_stm32_nv_flash
• dtbinding_st_stm32_ospi
• dtbinding_st_stm32_ospi_nor
• dtbinding_st_stm32_otgfs
• dtbinding_st_stm32_otghs
• dtbinding_st_stm32_pinctrl
• dtbinding_st_stm32_pwm
• dtbinding_st_stm32_qdec
• dtbinding_st_stm32_qspi
• dtbinding_st_stm32_qspi_nor
• dtbinding_st_stm32_rcc
• dtbinding_st_stm32_rcc_rctl
• dtbinding_st_stm32_rng
• dtbinding_st_stm32_rtc
• dtbinding_st_stm32_sdmmc
• dtbinding_st_stm32_spi
• dtbinding_st_stm32_spi_fifo
• dtbinding_st_stm32_spi_subghz
• dtbinding_st_stm32_temp
• dtbinding_st_stm32_temp_cal
• dtbinding_st_stm32_timers
• dtbinding_st_stm32_uart
• dtbinding_st_stm32_ucpd
• dtbinding_st_stm32_usart
• dtbinding_st_stm32_usb
• dtbinding_st_stm32_usbphyc
• dtbinding_st_stm32_vbat
• dtbinding_st_stm32_vref
• dtbinding_st_stm32_watchdog
• dtbinding_st_stm32_window_watchdog
• dtbinding_st_stm32c0_hsi_clock
• dtbinding_st_stm32c0_temp_cal
• dtbinding_st_stm32f0_pll_clock
• dtbinding_st_stm32f0_rcc
• dtbinding_st_stm32f1_adc
• dtbinding_st_stm32f1_flash_controller
• dtbinding_st_stm32f1_pinctrl
• dtbinding_st_stm32f1_pll_clock
• dtbinding_st_stm32f100_pll_clock
• dtbinding_st_stm32f105_pll_clock
• dtbinding_st_stm32f105_pll2_clock
• dtbinding_st_stm32f2_flash_controller
• dtbinding_st_stm32f2_pll_clock
• dtbinding_st_stm32f4_adc
• dtbinding_st_stm32f4_flash_controller
• dtbinding_st_stm32f4_fsotg
• dtbinding_st_stm32f4_pll_clock
• dtbinding_st_stm32f4_plli2s_clock
• dtbinding_st_stm32f412_plli2s_clock
• dtbinding_st_stm32f7_flash_controller
• dtbinding_st_stm32f7_pll_clock
• dtbinding_st_stm32g0_exti
• dtbinding_st_stm32g0_flash_controller
• dtbinding_st_stm32g0_hsi_clock
• dtbinding_st_stm32g0_pll_clock
• dtbinding_st_stm32g4_flash_controller
• dtbinding_st_stm32g4_pll_clock
• dtbinding_st_stm32h7_fdcan
• dtbinding_st_stm32h7_flash_controller
• dtbinding_st_stm32h7_fmc
• dtbinding_st_stm32h7_hsi_clock
• dtbinding_st_stm32h7_pll_clock
• dtbinding_st_stm32h7_rcc
• dtbinding_st_stm32h7_spi
• dtbinding_st_stm32l0_msi_clock
• dtbinding_st_stm32l0_pll_clock
• dtbinding_st_stm32l4_flash_controller
• dtbinding_st_stm32l4_pll_clock
• dtbinding_st_stm32l5_flash_controller
• dtbinding_st_stm32mp1_rcc
• dtbinding_st_stm32u5_dma
• dtbinding_st_stm32u5_msi_clock
• dtbinding_st_stm32u5_pll_clock
• dtbinding_st_stm32u5_rcc
• dtbinding_st_stm32wb_flash_controller
• dtbinding_st_stm32wb_pll_clock
• dtbinding_st_stm32wb_rcc
• dtbinding_st_stm32wb_ble_rf
• dtbinding_st_stm32wl_hse_clock
• dtbinding_st_stm32wl_rcc
• dtbinding_st_stm32wl_subghz_radio
• dtbinding_st_stmpe1600
• dtbinding_st_stts751_i2c
• dtbinding_st_vl53l0x
• dtbinding_st_vl53l1x
• dtbinding_telink_b91_uart
• dtbinding_telink_b91_zb
• dtbinding_telink_machine_timer
• dtbinding_ti_dac60508
• dtbinding_ti_dac70508
• dtbinding_ti_dac80508
• dtbinding_ti_fdc2x1x
• dtbinding_ti_hdc
• dtbinding_ti_hdc2010
• dtbinding_ti_hdc2021
• dtbinding_ti_hdc2022
• dtbinding_ti_hdc2080
• dtbinding_ti_hdc20xx
• dtbinding_ti_ina219
• dtbinding_ti_ina230
• dtbinding_ti_ina237
• dtbinding_ti_ina3221
• dtbinding_ti_k3_pinctrl
• dtbinding_ti_lmp90077
• dtbinding_ti_lmp90078
• dtbinding_ti_lmp90079
• dtbinding_ti_lmp90080
• dtbinding_ti_lmp90097
• dtbinding_ti_lmp90098
• dtbinding_ti_lmp90099
• dtbinding_ti_lmp90100
• dtbinding_ti_lmp90xxx_gpio
• dtbinding_ti_lp3943
• dtbinding_ti_lp503x
• dtbinding_ti_lp5562
• dtbinding_ti_msp432p4xx_uart
• dtbinding_ti_opt3001
• dtbinding_ti_sn74hc595
• dtbinding_ti_stellaris_ethernet
• dtbinding_ti_stellaris_flash_controller
• dtbinding_ti_stellaris_gpio
• dtbinding_ti_stellaris_uart
• dtbinding_ti_tca6424a
• dtbinding_ti_tca9538
• dtbinding_ti_tca9546a
• dtbinding_ti_tca9548a
• dtbinding_ti_tlc59108
• dtbinding_ti_tlc5971
• dtbinding_ti_tlv320dac
• dtbinding_ti_tmp007
• dtbinding_ti_tmp108
• dtbinding_ti_tmp112
• dtbinding_ti_tmp116
• dtbinding_ti_tmp116_eeprom
• dtbinding_ti_tps382x
u-blox (u-blox)
• dtbinding_u_blox_sara_r4
Xilinx (xlnx)
• dtbinding_xlnx_fpga
• dtbinding_xlnx_gem
• dtbinding_xlnx_pinctrl_zynq
• dtbinding_xlnx_ps_gpio
• dtbinding_xlnx_ps_gpio_bank
• dtbinding_xlnx_ttcps
• dtbinding_xlnx_xps_gpio_1.00.a
• dtbinding_xlnx_xps_gpio_1.00.a_gpio2
• dtbinding_xlnx_xps_iic_2.00.a
• dtbinding_xlnx_xps_iic_2.1
• dtbinding_xlnx_xps_spi_2.00.a
• dtbinding_xlnx_xps_timebase_wdt_1.00.a
• dtbinding_xlnx_xps_timer_1.00.a
• dtbinding_xlnx_xps_timer_1.00.a_pwm
• dtbinding_xlnx_xps_uartlite_1.00.a
• dtbinding_xlnx_xuartps
• dtbinding_xlnx_zynq_ocm
• dtbinding_zephyr_fake_regulator
• dtbinding_zephyr_flash_disk
• dtbinding_zephyr_fstab
• dtbinding_zephyr_fstab_littlefs
• dtbinding_zephyr_gpio_emul
• dtbinding_zephyr_gpio_emul_sdl
• dtbinding_zephyr_gpio_keys
• dtbinding_zephyr_gsm_ppp
• dtbinding_zephyr_i2c_emul_controller
• dtbinding_zephyr_i2c_target_eeprom
• dtbinding_zephyr_ieee802154_uart_pipe
• dtbinding_zephyr_input_longpress
• dtbinding_zephyr_input_sdl_touch
• dtbinding_zephyr_ipc_icmsg
• dtbinding_zephyr_ipc_icmsg_me_follower
• dtbinding_zephyr_ipc_icmsg_me_initiator
• dtbinding_zephyr_ipc_openamp_static_vrings
• dtbinding_zephyr_kscan_input
• dtbinding_zephyr_memory_region
• dtbinding_zephyr_mmc_disk
• dtbinding_zephyr_modbus_serial
• dtbinding_zephyr_native_posix_counter
• dtbinding_zephyr_native_posix_cpu
• dtbinding_zephyr_native_posix_linux_can
• dtbinding_zephyr_native_posix_rng
• dtbinding_zephyr_native_posix_uart
• dtbinding_zephyr_native_posix_udc
• dtbinding_panel_timing
• dtbinding_zephyr_power_state
• dtbinding_zephyr_psa_crypto_rng
• dtbinding_zephyr_retained_ram
• dtbinding_zephyr_retention
• dtbinding_zephyr_rtc_emul
• dtbinding_zephyr_sdhc_spi_slot
• dtbinding_zephyr_sdl_dc
• dtbinding_zephyr_sdmmc_disk
• dtbinding_zephyr_sim_eeprom
• dtbinding_zephyr_sim_flash
• dtbinding_zephyr_spi_bitbang
• dtbinding_zephyr_spi_emul_controller
• dtbinding_zephyr_uart_emul
• dtbinding_zephyr_udc_skeleton
• dtbinding_zephyr_udc_virtual
• dtbinding_zephyr_uhc_virtual
• dtbinding_zephyr_usb_c_vbus_adc
• dtbinding_zephyr_w1_serial
Unknown vendor
• dtbinding_openthread_config
• dtbinding_swerv_pic
The Zephyr kernel and subsystems can be configured at build time to adapt them for specific application
and platform needs. Configuration is handled through Kconfig, which is the same configuration system
used by the Linux kernel. The goal is to support configuration without having to change any source code.
Configuration options (often called symbols) are defined in Kconfig files, which also specify dependencies between symbols that determine what configurations are valid. Symbols can be grouped into menus and sub-menus to keep the interactive configuration interfaces organized.
The output from Kconfig is a header file autoconf.h with macros that can be tested at build time. Code
for unused features can be compiled out to save space.
The following sections explain how to set Kconfig configuration options, go into detail on how Kconfig is used within the Zephyr project, and offer some tips and best practices for writing Kconfig files.
There are two interactive configuration interfaces available for exploring the available Kconfig options
and making temporary changes: menuconfig and guiconfig. menuconfig is a curses-based interface
that runs in the terminal, while guiconfig is a graphical configuration interface.
Note: The configuration can also be changed by editing zephyr/.config in the application build
directory by hand. Using one of the configuration interfaces is often handier, as they correctly handle
dependencies between configuration symbols.
If you try to enable a symbol with unsatisfied dependencies in zephyr/.config, the assignment will be
ignored and overwritten when re-configuring.
To make a setting permanent, you should set it in a *.conf file, as described in Setting Kconfig configuration values.
Tip: Saving a minimal configuration file (with e.g. D in menuconfig) and inspecting it can be handy
when making settings permanent. The minimal configuration file only lists symbols that differ from their
default value.
To run one of the configuration interfaces, build one of these targets from the application build directory:

ninja menuconfig
ninja guiconfig
Note: If you get an import error for tkinter when trying to run guiconfig, you are missing
required packages. See Install Linux Host Dependencies. The package you need is usually called
something like python3-tk/python3-tkinter.
tkinter is not included by default in many Python installations, despite being part of the standard
library.
Note: If you prefer to work in the guiconfig interface, then it’s a good idea to check any changes
to Kconfig files you make in single-menu mode, which is toggled via a checkbox at the top. Un-
like full-tree mode, single-menu mode will distinguish between symbols defined with config and
symbols defined with menuconfig, showing you what things would look like in the menuconfig
interface.
Note: You can also press Y or N to set a boolean configuration symbol to the corresponding
value.
• Press ? to display information about the currently selected symbol, including its help text.
Press ESC or Q to return from the information display to the menu.
In the guiconfig interface, either click on the image next to the symbol to change its value, or
double-click on the row with the symbol (this only works if the symbol has no children, as double-clicking a symbol with children opens/closes its menu instead).
guiconfig also supports keyboard controls, which are similar to menuconfig.
4. Pressing Q in the menuconfig interface will bring up the save-and-quit dialog (if there are changes
to save):
Press Y to save the kernel configuration options to the default filename (zephyr/.config). You will
typically save to the default filename unless you are experimenting with different configurations.
The guiconfig interface will also prompt for saving the configuration on exit if it has been modified.
Note: The configuration file used during the build is always zephyr/.config. If you have another
saved configuration that you want to build with, copy it to zephyr/.config. Make sure to back up
your original configuration file.
Also note that filenames starting with . are not listed by ls by default on Linux and macOS. Use
the -a flag to see them.
Finding a symbol in the menu tree and navigating to it can be tedious. To jump directly to a symbol, press
the / key (this also works in guiconfig). This brings up the following dialog, where you can search for
symbols by name and jump to them. In guiconfig, you can also change symbol values directly within
the dialog.
If you jump to a symbol that isn’t currently visible (e.g., due to having unsatisfied dependencies), then
show-all mode will be enabled. In show-all mode, all symbols are displayed, including currently invisible
symbols. To turn off show-all mode, press A in menuconfig or Ctrl-A in guiconfig.
Note: Show-all mode can’t be turned off if there are no visible items in the current menu.
To figure out why a symbol you jumped to isn’t visible, inspect its dependencies, either by pressing ?
in menuconfig or in the information pane at the bottom in guiconfig. If you discover that the symbol
depends on another symbol that isn’t enabled, you can jump to that symbol in turn to see if it can be
enabled.
Note: In menuconfig, you can press Ctrl-F to view the help of the currently selected item in the
jump-to dialog without leaving the dialog.
For more information on menuconfig and guiconfig, see the Python docstrings at the top of menuconfig.py and guiconfig.py.
The menuconfig and guiconfig interfaces can be used to test out configurations during application devel-
opment. This page explains how to make settings permanent.
All Kconfig options can be searched in the Kconfig search page.
Note: Before making changes to Kconfig files, it’s a good idea to also go through the Kconfig - Tips and
Best Practices page.
When making Kconfig changes, it’s important to understand the difference between visible and invisible
symbols.
• A visible symbol is a symbol defined with a prompt. Visible symbols show up in the interactive
configuration interfaces (hence visible), and can be set in configuration files.
Here’s an example of a visible symbol:
config FPU
bool "Support floating point operations"
depends on HAS_FPU
• An invisible symbol is a symbol without a prompt. Invisible symbols are not shown in the interactive
configuration interfaces, and users have no direct control over their value. They instead get their
value from defaults or from other symbols.
Here’s an example of an invisible symbol:
config CPU_HAS_FPU
bool
help
This symbol is y if the CPU has a hardware floating point unit.
In this case, CPU_HAS_FPU is enabled through other symbols having select CPU_HAS_FPU.
Visible symbols can be configured by setting them in configuration files. The initial configuration is
produced by merging a *_defconfig file for the board with application settings, usually from prj.conf.
See The Initial Configuration below for more details.
Assignments in configuration files use this syntax:
CONFIG_<symbol name>=<value>
CONFIG_FPU=y
Note: A boolean symbol can also be set to n with a comment formatted like this:

# CONFIG_SOME_SYMBOL is not set

This is the format you will see in the merged configuration in zephyr/.config.
This style is accepted for historical reasons: Kconfig configuration files can be parsed as makefiles (though
Zephyr doesn’t use this). Having n-valued symbols correspond to unset variables simplifies tests in Make.
CONFIG_SOME_STRING="cool value"
CONFIG_SOME_INT=123
Comments use a #:
# This is a comment
Assignments in configuration files are only respected if the dependencies for the symbol are satisfied.
A warning is printed otherwise. To figure out what the dependencies of a symbol are, use one of the
interactive configuration interfaces (you can jump directly to a symbol with /), or look up the symbol in
the Kconfig search page.
The initial configuration for an application comes from merging configuration settings from three
sources:
1. A BOARD-specific configuration file stored in boards/<architecture>/<BOARD>/<BOARD>_defconfig
2. Any CMake cache entries prefixed with CONFIG_
3. The application configuration
The application configuration can come from the sources below (each file is known as a Kconfig fragment; the fragments are merged to produce the final configuration used for a particular build). By default, prj.conf is used.
1. If CONF_FILE is set, the configuration file(s) specified in it are merged and used as the application
configuration. CONF_FILE can be set in various ways:
1. In CMakeLists.txt, before calling find_package(Zephyr)
2. By passing -DCONF_FILE=<conf file(s)>, either directly or via west
When making changes to the default configuration for a board, you might have to configure invisible
symbols. This is done in boards/<architecture>/<BOARD>/Kconfig.defconfig, which is a regular
Kconfig file.
Note: Assignments in .config files have no effect on invisible symbols, so this scheme is not just an
organizational issue.
For example, assume a symbol has this base definition somewhere in the Kconfig tree:

config FOO_WIDTH
int

To give it a board-specific default, add the following to the board's Kconfig.defconfig file:

if BOARD_MY_BOARD

config FOO_WIDTH
default 32

endif
Note: Since the type of the symbol (int) has already been given at the first definition location, it does
not need to be repeated here. Only giving the type once at the “base” definition of the symbol is a good
idea for reasons explained in Common Kconfig shorthands.
default values in Kconfig.defconfig files have priority over default values given on the “base” definition of a symbol. Internally, this is implemented by including the Kconfig.defconfig files first. Kconfig uses the first default with a satisfied condition, where an empty condition corresponds to if y (is always satisfied).
Note that conditions from surrounding top-level ifs are propagated to symbol properties, so the above
default is equivalent to default 32 if BOARD_MY_BOARD.
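To illustrate the first-match rule (the base default of 8 below is hypothetical), including the Kconfig.defconfig file first means the effective definition behaves as if it were written:

```kconfig
config FOO_WIDTH
	int
	default 32 if BOARD_MY_BOARD  # from Kconfig.defconfig, checked first
	default 8                     # hypothetical base default, used otherwise
```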
Warning: When defining a symbol in multiple locations, dependencies are ORed together rather
than ANDed together. It is not possible to make the dependencies of a symbol more restrictive by
defining it in multiple locations.
For example, the direct dependencies of the symbol below become DEP1 || DEP2:
config FOO
...
depends on DEP1
config FOO
...
depends on DEP2
When making changes to Kconfig.defconfig files, always check the symbol’s direct dependencies in
one of the interactive configuration interfaces afterwards. It is often necessary to repeat dependencies
from the base definition of the symbol to avoid weakening a symbol’s dependencies.
Motivation for Kconfig.defconfig files One motivation for this configuration scheme is to avoid making fixed BOARD-specific settings configurable in the interactive configuration interfaces. If all board
configuration were done via <BOARD>_defconfig, all symbols would have to be visible, as values given
in <BOARD>_defconfig have no effect on invisible symbols.
Having fixed settings be user-configurable would clutter up the configuration interfaces and make them
harder to understand, and would make it easier to accidentally create broken configurations.
When dealing with fixed board-specific settings, also consider whether they should be handled via devicetree instead.
Kconfig.defconfig files can also be used to change the default symbol of a choice. Consider this example:

choice FOO
bool "Foo choice"
default B
config A
bool "A"
config B
bool "B"
endchoice
To change the default symbol of FOO to A, you would add the following definition to Kconfig.defconfig:
choice FOO
default A
endchoice
The Kconfig.defconfig method should be used when the dependencies of the choice might not be
satisfied. In that case, you’re setting the default selection whenever the user makes the choice visible.
More Kconfig resources The Kconfig - Tips and Best Practices page has some tips for writing Kconfig
files.
The kconfiglib.py docstring (at the top of the file) goes over in detail how symbol values are calculated.
This page covers some Kconfig best practices and explains some Kconfig behaviors and features that
might be cryptic or that are easily overlooked.
• menuconfig symbols
• Commas in macro arguments
• Checking changes in menuconfig/guiconfig
• Checking changes with scripts/kconfig/lint.py
• Style recommendations and shorthands
– Factoring out common dependencies
– Redundant defaults
– Common Kconfig shorthands
– Prompt strings
– Header comments and other nits
• Lesser-known/used Kconfig features
– The imply statement
– Optional prompts
– Optional choices
– visible if conditions
• Other resources
When deciding whether something belongs in Kconfig, it helps to distinguish between symbols that have
prompts and symbols that don’t.
If a symbol has a prompt (e.g. bool "Enable foo"), then the user can change the symbol’s value
in the menuconfig or guiconfig interface (see Interactive Kconfig interfaces), or by manually editing
configuration files. Conversely, a symbol without a prompt can never be changed directly by the user,
not even by manually editing configuration files.
Only put a prompt on a symbol if it makes sense for the user to change its value.
Symbols without prompts are called hidden or invisible symbols, because they don’t show up in
menuconfig and guiconfig. Symbols that have prompts can also be invisible, when their dependencies are not satisfied.
Symbols without prompts can’t be configured directly by the user (they derive their value from other
symbols), so fewer restrictions apply to them. If some derived setting is easier to calculate in Kconfig than
e.g. during the build, then do it in Kconfig, but keep the distinction between symbols with and without
prompts in mind.
See the optional prompts section for a way to deal with settings that are fixed on some machines and
configurable on other machines.
In Zephyr, Kconfig configuration is done after selecting a target board. In general, it does not make sense
to use Kconfig for a value that corresponds to a fixed machine-specific setting. Usually, such settings
should be handled via devicetree instead.
In particular, avoid adding new Kconfig options of the following types:
Options that specify a device in the system by name For example, if you are writing an I2C device
driver, avoid creating an option named MY_DEVICE_I2C_BUS_NAME for specifying the bus node your device
is controlled by. See Device drivers that depend on other devices for alternatives.
Similarly, if your application depends on a hardware-specific PWM device to control an RGB LED, avoid
creating an option like MY_PWM_DEVICE_NAME. See Applications that depend on board-specific devices for
alternatives.
Options that specify fixed hardware configuration For example, avoid Kconfig options specifying a
GPIO pin.
An alternative applicable to device drivers is to define a GPIO specifier with type phandle-array in the
device binding, and to use the GPIO devicetree API from C. Similar advice applies to other cases where
devicetree.h provides Hardware specific APIs for referring to other nodes in the system. Search the source
code for drivers using these APIs for examples.
An application-specific devicetree binding to identify board specific properties may be appropriate. See
tests/drivers/gpio/gpio_basic_api for an example.
For applications, see blinky-sample for a devicetree-based alternative.
select statements
The select statement is used to force one symbol to y whenever another symbol is y. For example, the
following code forces CONSOLE to y whenever USB_CONSOLE is y:
config CONSOLE
bool "Console support"
...
config USB_CONSOLE
bool "USB console support"
select CONSOLE
This section covers some pitfalls and good uses for select.
select pitfalls select might seem like a generally useful feature at first, but can cause configuration
issues if overused.
For example, say that a new dependency is added to the CONSOLE symbol above, by a developer who is
unaware of the USB_CONSOLE symbol (or simply forgot about it):
config CONSOLE
bool "Console support"
depends on STRING_ROUTINES

config USB_CONSOLE
bool "USB console support"
select CONSOLE

Enabling USB_CONSOLE now forces CONSOLE on even when STRING_ROUTINES is disabled, producing a broken configuration. The fix is to repeat the dependency on the selecting symbol as well:

config USB_CONSOLE
bool "USB console support"
select CONSOLE
depends on STRING_ROUTINES
...
More insidious cases with dependencies inherited from if and menu statements are common.
An alternative attempt to solve the issue might be to turn the depends on into another select:
config CONSOLE
bool "Console support"
select STRING_ROUTINES
...
config USB_CONSOLE
bool "USB console support"
select CONSOLE
In practice, this often amplifies the problem, because any dependencies added to STRING_ROUTINES now
need to be copied to both CONSOLE and USB_CONSOLE.
In general, whenever the dependencies of a symbol are updated, the dependencies of all symbols that
(directly or indirectly) select it have to be updated as well. This is very often overlooked in practice,
even for the simplest case above.
Chains of symbols selecting each other should be avoided in particular, except for simple helper symbols,
as covered below in Using select for helper symbols.
Liberal use of select also tends to make Kconfig files harder to read, both due to the extra dependencies
and due to the non-local nature of select, which hides ways in which a symbol might get enabled.
Alternatives to select For the example in the previous section, a better solution is usually to turn the
select into a depends on:
config CONSOLE
bool "Console support"
...
config USB_CONSOLE
bool "USB console support"
depends on CONSOLE
This makes it impossible to generate an invalid configuration, and means that dependencies only ever
have to be updated in a single spot.
An objection to using depends on here might be that configuration files that enable USB_CONSOLE now
also need to enable CONSOLE:
CONFIG_CONSOLE=y
CONFIG_USB_CONSOLE=y
This comes down to a trade-off, but if enabling CONSOLE is the norm, then a mitigation is to make CONSOLE
default to y:
config CONSOLE
bool "Console support"
default y
Now, a configuration file only needs:

CONFIG_USB_CONSOLE=y
Note that configuration files that do not want CONSOLE enabled now have to explicitly disable it:
CONFIG_CONSOLE=n
Using select for helper symbols A good and safe use of select is for setting “helper” symbols that
capture some condition. Such helper symbols should preferably have no prompt or dependencies.
For example, a helper symbol for indicating that a particular CPU/SoC has an FPU could be defined as
follows:
config CPU_HAS_FPU
bool
help
If y, the CPU has an FPU
...
config SOC_FOO
bool "FOO SoC"
select CPU_HAS_FPU
...
config SOC_BAR
bool "BAR SoC"
select CPU_HAS_FPU
This makes it possible for other symbols to check for FPU support in a generic way, without having to
look for particular architectures:
config FPU
bool "Support floating point operations"
depends on CPU_HAS_FPU
The alternative would be to have dependencies like the following, possibly duplicated in several spots:
config FPU
bool "Support floating point operations"
depends on SOC_FOO || SOC_BAR || ...
Invisible helper symbols can also be useful without select. For example, the following code defines a
helper symbol that has the value y if the machine has some arbitrarily-defined “large” amount of memory:
config LARGE_MEM
def_bool MEM_SIZE >= 64

Here, def_bool is a shorthand for bool plus default, so the definition above is equivalent to:

config LARGE_MEM
bool
default MEM_SIZE >= 64
select recommendations In summary, here are some recommended practices for select:
• Avoid selecting symbols with prompts or dependencies. Prefer depends on. If depends on causes
annoying bloat in configuration files, consider adding a Kconfig default for the most common value.
Rare exceptions might include cases where you’re sure that the dependencies of the selecting and
selected symbol will never drift out of sync, e.g. when dealing with two simple symbols defined
close to one another within the same if.
Common sense applies, but be aware that select often causes issues in practice. depends on is
usually a cleaner and safer solution.
• Select simple helper symbols without prompts and dependencies however much you like. They’re
a great tool for simplifying Kconfig files.
if blocks add dependencies to each item within the if, as if depends on was used.
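To make the equivalence concrete (SOME_SYMBOL is a hypothetical symbol), the two forms below give the symbol the same dependency:

```kconfig
# Form 1: the if block adds the dependency implicitly
if DEP

config SOME_SYMBOL
	bool "Some feature"

endif

# Form 2: the same dependency written out explicitly
config SOME_SYMBOL
	bool "Some feature"
	depends on DEP
```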
A common misunderstanding related to if is to think that the following code conditionally includes the
file Kconfig.other:
if DEP
source "Kconfig.other"
endif
In reality, there are no conditional includes in Kconfig. if has no special meaning around a source.
Note: Conditional includes would be impossible to implement, because if conditions may contain
(either directly or indirectly) forward references to symbols that haven’t been defined yet.
Note that it is redundant to add depends on DEP to a symbol defined in Kconfig.other, because the DEP dependency has already been added by if DEP.
In general, try to avoid adding redundant dependencies. They can make the structure of the Kconfig
files harder to understand, and also make changes more error-prone, since it can be hard to spot that the
same dependency is added twice.
There is a common subtle gotcha related to interdependent configuration symbols with prompts. Consider these symbols:
config FOO
bool "Foo"
config STACK_SIZE
hex "Stack size"
default 0x200 if FOO
default 0x100
Assume that the intention here is to use a larger stack whenever FOO is enabled, and that the configuration
initially has FOO disabled. Also, remember that Zephyr creates an initial configuration in zephyr/.config
in the build directory by merging configuration files (including e.g. prj.conf). This configuration file
exists before menuconfig or guiconfig is run.
When first entering the configuration interface, the value of STACK_SIZE is 0x100, as expected. After
enabling FOO, you might reasonably expect the value of STACK_SIZE to change to 0x200, but it stays as
0x100.
To understand what’s going on, remember that STACK_SIZE has a prompt, meaning it is user-
configurable, and consider that all Kconfig has to go on from the initial configuration is this:
CONFIG_STACK_SIZE=0x100
Since Kconfig can’t know if the 0x100 value came from a default or was typed in by the user, it has to
assume that it came from the user. Since STACK_SIZE is user-configurable, the value from the configura-
tion file is respected, and any symbol defaults are ignored. This is why the value of STACK_SIZE appears
to be “frozen” at 0x100 when toggling FOO.
The right fix depends on what the intention is. Here’s some different scenarios with suggestions:
• If STACK_SIZE can always be derived automatically and does not need to be user-configurable, then
just remove the prompt:
config STACK_SIZE
hex
default 0x200 if FOO
default 0x100
Symbols without prompts ignore any value from the saved configuration.
• If STACK_SIZE should usually be user-configurable, but needs to be set to 0x200 when FOO is
enabled, then disable its prompt when FOO is enabled, as described in optional prompts:
config STACK_SIZE
hex "Stack size" if !FOO
default 0x200 if FOO
default 0x100
• If STACK_SIZE should usually be derived automatically, but needs to be set to a custom value in
rare circumstances, then add another option for making STACK_SIZE user-configurable:
config CUSTOM_STACK_SIZE
bool "Use a custom stack size"
help
Enable this if you need to use a custom stack size. When disabled, a
suitable stack size is calculated automatically.
config STACK_SIZE
hex "Stack size" if CUSTOM_STACK_SIZE
default 0x200 if FOO
default 0x100
As long as CUSTOM_STACK_SIZE is disabled, STACK_SIZE will ignore the value from the saved con-
figuration.
It is a good idea to try out changes in the menuconfig or guiconfig interface, to make sure that things
behave the way you expect. This is especially true when making moderately complex changes like these.
Assignments to hidden (promptless, also called invisible) symbols in configuration files are always ig-
nored. Hidden symbols get their value indirectly from other symbols, via e.g. default and select.
A common source of confusion is opening the output configuration file (zephyr/.config), seeing a
bunch of assignments to hidden symbols, and assuming that those assignments must be respected when
the configuration is read back in by Kconfig. In reality, all assignments to hidden symbols in zephyr/.
config are ignored by Kconfig, like for other configuration files.
To understand why zephyr/.config still includes assignments to hidden symbols, it helps to realize that
zephyr/.config serves two separate purposes:
1. It holds the saved configuration, and
2. it holds configuration output. zephyr/.config is parsed by the CMake files to let them query
configuration settings, for example.
The assignments to hidden symbols in zephyr/.config are just configuration output. Kconfig itself
ignores assignments to hidden symbols when calculating symbol values.
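As a sketch (hypothetical symbols): given the definition below, a line like CONFIG_AUTO_FEATURE=n in a configuration file is ignored, because the symbol has no prompt and takes its value only from its default.

```kconfig
config AUTO_FEATURE
	bool
	# No prompt: assignments in configuration files have no effect.
	# The value comes solely from this default.
	default y if SOME_DRIVER
```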
Note: A minimal configuration, which can be generated from within the menuconfig and guiconfig
interfaces, could be considered closer to just a saved configuration, without the full configuration output.
depends on works not just for bool symbols, but also for string, int, and hex symbols (and for choices).
The Kconfig definitions below will hide the FOO_DEVICE_FREQUENCY symbol and disable any configuration
output for it when FOO_DEVICE is disabled.
config FOO_DEVICE
bool "Foo device"
config FOO_DEVICE_FREQUENCY
int "Foo device frequency"
depends on FOO_DEVICE
In general, it’s a good idea to check that only relevant symbols are ever shown in the
menuconfig/guiconfig interface. Having FOO_DEVICE_FREQUENCY show up when FOO_DEVICE is dis-
abled (and possibly hidden) makes the relationship between the symbols harder to understand, even if
code never looks at FOO_DEVICE_FREQUENCY when FOO_DEVICE is disabled.
menuconfig symbols
If the definition of a symbol FOO is immediately followed by other symbols that depend on FOO, then
those symbols become children of FOO. If FOO is defined with config FOO, then the children are shown
indented relative to FOO. Defining FOO with menuconfig FOO instead puts the children in a separate menu
rooted at FOO.
menuconfig has no effect on evaluation. It’s just a display option.
menuconfig can cut down on the number of menus and make the menu structure easier to navigate. For
example, say you have the following definitions:
menu "Foo subsystem"
config FOO_SUBSYSTEM
bool "Foo subsystem"
if FOO_SUBSYSTEM
config FOO_FEATURE_1
bool "Foo feature 1"
config FOO_FEATURE_2
bool "Foo feature 2"
config FOO_FREQUENCY
int "Foo frequency"
endif # FOO_SUBSYSTEM
endmenu
In this case, it’s probably better to get rid of the menu and turn FOO_SUBSYSTEM into a menuconfig symbol:
menuconfig FOO_SUBSYSTEM
bool "Foo subsystem"
if FOO_SUBSYSTEM
config FOO_FEATURE_1
bool "Foo feature 1"
config FOO_FEATURE_2
bool "Foo feature 2"
config FOO_FREQUENCY
int "Foo frequency"
endif # FOO_SUBSYSTEM
Note that making a symbol without children a menuconfig is meaningless. It should be avoided, because
in the configuration interface it looks identical to a menuconfig symbol whose children are all invisible,
which is confusing.
Commas in macro arguments
Kconfig uses commas to separate macro arguments. This means a construct like this will fail:
config FOO
bool
default y if $(dt_chosen_enabled,"zephyr,bar")
To solve this problem, create a variable with the text and use this variable as argument, as follows:
DT_CHOSEN_ZEPHYR_BAR := zephyr,bar
config FOO
bool
default y if $(dt_chosen_enabled,$(DT_CHOSEN_ZEPHYR_BAR))
When adding new symbols or making other changes to Kconfig files, it is a good idea to look up the
symbols in menuconfig or guiconfig afterwards. To get to a symbol quickly, use the jump-to feature (press
/).
Here are some things to check:
• Are the symbols placed in a good spot? Check that they appear in a menu where they make sense,
close to related symbols.
If one symbol depends on another, then it’s often a good idea to place it right after the symbol it
depends on. It will then be shown indented relative to the symbol it depends on in the menuconfig
interface, and in a separate menu rooted at the symbol in guiconfig. This also works if several
symbols are placed after the symbol they depend on.
• Is it easy to guess what the symbols do from their prompts?
• If many symbols are added, do all combinations of values they can be set to make sense?
For example, if two symbols FOO_SUPPORT and NO_FOO_SUPPORT are added, and both can be enabled
at the same time, then that makes a nonsensical configuration. In this case, it’s probably better to
have a single FOO_SUPPORT symbol.
• Are there any duplicated dependencies?
This can be checked by selecting a symbol and pressing ? to view the symbol information. If
there are duplicated dependencies, then use the Included via ... path shown in the symbol
information to figure out where they come from.
After you make Kconfig changes, you can use the scripts/kconfig/lint.py script to check for some potential
issues, like unused symbols and symbols that are impossible to enable. Use --help to see available
options.
Some checks are necessarily a bit heuristic, so a symbol being flagged by a check does not neces-
sarily mean there’s a problem. If a check returns a false positive e.g. due to token pasting in C
(CONFIG_FOO_##index##_BAR), just ignore it.
When investigating an unknown symbol FOO_BAR, it is a good idea to run git grep FOO_BAR to look for
references. It is also a good idea to search for some components of the symbol name with e.g. git grep
FOO and git grep BAR, as it can help uncover token pasting.
This section gives some style recommendations and explains some common Kconfig shorthands.
Factoring out common dependencies
If a sequence of symbols or choices share a dependency, the dependency can be factored out with an if
block. As an example, consider the following code, where each symbol and the choice carry the same DEP
dependency:
config FOO
bool "Foo"
depends on DEP
config BAR
bool "Bar"
depends on DEP
choice
prompt "Choice"
depends on DEP
config BAZ
bool "Baz"
config QAZ
bool "Qaz"
endchoice
The shared DEP dependency can be factored out like this:
if DEP
config FOO
bool "Foo"
config BAR
bool "Bar"
choice
prompt "Choice"
config BAZ
bool "Baz"
config QAZ
bool "Qaz"
endchoice
endif # DEP
Note: Internally, the second version of the code is transformed into the first.
If a sequence of symbols/choices with a shared dependency are all in the same menu, the dependency
can be put on the menu itself:
menu "Foo features"
depends on DEP

config FOO_FEATURE_1
bool "Foo feature 1"
config FOO_FEATURE_2
bool "Foo feature 2"
endmenu
Redundant defaults
bool symbols implicitly default to n, and string symbols implicitly default to the
empty string. Therefore, default n and default "" are (almost) always redundant.
The recommended style in Zephyr is to skip redundant defaults for bool and string symbols. That
also generates clearer documentation: (Implicitly defaults to n instead of n if <dependencies, possibly
inherited>).
Defaults should always be given for int and hex symbols, however, as they implicitly default to the empty
string. This is partly for compatibility with the C Kconfig tools, and partly because an implicit empty
default is less likely to be what was intended than for the other symbol types.
The one case where default n/default "" is not redundant is when defining a symbol in multiple
locations and wanting to override e.g. a default y from a later definition. This works because defaults
are applied in definition order: the first default whose condition is satisfied wins. A default n on a
later definition therefore does not override an earlier default y.
That is, FOO will be set to n in the first example below, because the default n in the first definition
takes precedence. If the default n was omitted there, FOO would be set to y. In the second example,
FOO is set to y, since the later default n has no effect.
config FOO
bool "foo"
default n
config FOO
bool "foo"
default y
config FOO
bool "foo"
default y
config FOO
bool "foo"
default n
Common Kconfig shorthands
Kconfig has two shorthands that deal with prompts and defaults.
• <type> "prompt" is a shorthand for giving a symbol/choice a type and a prompt at the same time.
These two definitions are equal:
config FOO
bool "foo"
config FOO
bool
prompt "foo"
• def_<type> <value> is a shorthand for giving a type and a value at the same time. These two
definitions are equal:
config FOO
def_bool BAR && BAZ
config FOO
bool
default BAR && BAZ
Using both the <type> "prompt" and the def_<type> <value> shorthand in the same definition is
redundant, since it gives the type twice.
The def_<type> <value> shorthand is generally only useful for symbols without prompts, and some-
what obscure.
Note: For a symbol defined in multiple locations (e.g., in a Kconfig.defconfig file in Zephyr), it is
best to only give the symbol type for the “base” definition of the symbol, and to use default (instead
of def_<type> value) for the remaining definitions. That way, if the base definition of the symbol
is removed, the symbol ends up without a type, which generates a warning that points to the other
definitions. That makes the extra definitions easier to discover and remove.
Prompt strings
For a Kconfig symbol that enables a driver/subsystem FOO, consider having just “Foo”
as the prompt, instead of “Enable Foo support” or the like. It will usually be clear in the context of an
option that can be toggled on/off, and makes things consistent.
Header comments and other nits
A few formatting nits, to help keep things consistent:
• Use a consistent format for header comments at the top of Kconfig files: a brief plain-English
overview of the symbols defined in the file, followed by the copyright notice and an
SPDX-License-Identifier line.
This section lists some more obscure Kconfig behaviors and features that might still come in handy.
The imply statement
The imply statement is similar to select, but respects dependencies and doesn’t
force a value. For example, the following code could be used to enable USB keyboard support by default
on the FOO SoC, while still allowing the user to turn it off:
config SOC_FOO
bool "FOO SoC"
imply USB_KEYBOARD
...
config USB_KEYBOARD
bool "USB keyboard support"
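Since imply respects the user's choice, the implied default can still be turned off in a configuration file, e.g. with this line in a (hypothetical) prj.conf:

```
CONFIG_USB_KEYBOARD=n
```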
Optional prompts
A condition can be put on a symbol’s prompt to make it optionally configurable by
the user. For example, a value MASK that’s hardcoded to 0xFF on some boards and configurable on others
could be expressed as follows:
config MASK
hex "Bitmask" if HAS_CONFIGURABLE_MASK
default 0xFF
The prompt condition above is a shorthand for the following equivalent definition:
config MASK
hex
prompt "Bitmask" if HAS_CONFIGURABLE_MASK
default 0xFF
The HAS_CONFIGURABLE_MASK helper symbol would get selected by boards to indicate that MASK is
configurable. When MASK is not configurable, it is fixed at its 0xFF default; when it is configurable,
0xFF still serves as the initial default.
Optional choices
Defining a choice with the optional keyword allows the whole choice to be toggled
off to select none of the symbols:
choice
prompt "Use legacy protocol"
optional
config LEGACY_PROTOCOL_1
bool "Legacy protocol 1"
config LEGACY_PROTOCOL_2
bool "Legacy protocol 2"
endchoice
In the menuconfig interface, this will be displayed e.g. as [*] Use legacy protocol (Legacy
protocol 1) --->, where the choice can be toggled off to enable neither of the symbols.
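When the choice is toggled off, neither symbol is enabled, and the configuration output typically shows them as unset, along the lines of:

```
# CONFIG_LEGACY_PROTOCOL_1 is not set
# CONFIG_LEGACY_PROTOCOL_2 is not set
```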
visible if conditions
Putting a visible if condition on a menu hides the menu and all the symbols
within it, while still allowing symbol default values to kick in.
As a motivating example, consider the following code:
menu "Foo subsystem"
depends on HAS_CONFIGURABLE_FOO

config FOO_SETTING_1
int "Foo setting 1"
default 1
config FOO_SETTING_2
int "Foo setting 2"
default 2
endmenu
Putting the dependency on the menu is equivalent to adding depends on HAS_CONFIGURABLE_FOO to each
symbol directly:
config FOO_SETTING_1
int "Foo setting 1"
default 1
depends on HAS_CONFIGURABLE_FOO
config FOO_SETTING_2
int "Foo setting 2"
default 2
depends on HAS_CONFIGURABLE_FOO
If we want the symbols to still get their default values even when HAS_CONFIGURABLE_FOO is n, but not
be configurable by the user, then we can use visible if instead:
menu "Foo subsystem"
visible if HAS_CONFIGURABLE_FOO

config FOO_SETTING_1
int "Foo setting 1"
default 1
config FOO_SETTING_2
int "Foo setting 2"
default 2
endmenu
Using visible if here is equivalent to putting a condition on each symbol’s prompt:
config FOO_SETTING_1
int "Foo setting 1" if HAS_CONFIGURABLE_FOO
default 1
config FOO_SETTING_2
int "Foo setting 2" if HAS_CONFIGURABLE_FOO
default 2
Note: See the optional prompts section for the meaning of the conditions on the prompts.
When HAS_CONFIGURABLE_FOO is n, we now get the following configuration output for the symbols, instead
of no output:
...
CONFIG_FOO_SETTING_1=1
CONFIG_FOO_SETTING_2=2
...
Other resources
The Intro to symbol values section in the Kconfiglib docstring goes over how symbol values are calculated
in more detail.
Kconfiglib supports custom Kconfig preprocessor functions written in Python. These functions are defined
in scripts/kconfig/kconfigfunctions.py.
Most of the custom preprocessor functions are used to get devicetree information into Kconfig. For
example, the default value of a Kconfig symbol can be fetched from a devicetree reg property.
Devicetree-related Functions
The functions listed below are used to get devicetree information into Kconfig. See the Python docstrings
in scripts/kconfig/kconfigfunctions.py for detailed documentation.
The *_int version of each function returns the value as a decimal integer, while the *_hex version returns
a hexadecimal value starting with 0x.
$(dt_has_compat,<compatible string>)
$(dt_compat_enabled,<compatible string>)
$(dt_compat_on_bus,<compatible string>,<bus>)
$(dt_chosen_label,<property in /chosen>)
$(dt_chosen_enabled,<property in /chosen>)
$(dt_chosen_path,<property in /chosen>)
$(dt_chosen_has_compat,<property in /chosen>)
$(dt_path_enabled,<node path>)
$(dt_alias_enabled,<node alias>)
$(dt_nodelabel_enabled,<node label>)
$(dt_nodelabel_enabled_with_compat,<node label>,<compatible string>)
$(dt_chosen_reg_addr_int,<property in /chosen>[,<index>,<unit>])
$(dt_chosen_reg_addr_hex,<property in /chosen>[,<index>,<unit>])
$(dt_chosen_reg_size_int,<property in /chosen>[,<index>,<unit>])
$(dt_chosen_reg_size_hex,<property in /chosen>[,<index>,<unit>])
$(dt_node_reg_addr_int,<node path>[,<index>,<unit>])
$(dt_node_reg_addr_hex,<node path>[,<index>,<unit>])
$(dt_node_reg_size_int,<node path>[,<index>,<unit>])
$(dt_node_reg_size_hex,<node path>[,<index>,<unit>])
$(dt_node_bool_prop,<node path>,<prop>)
$(dt_nodelabel_bool_prop,<node label>,<prop>)
$(dt_chosen_bool_prop,<property in /chosen>,<prop>)
$(dt_node_has_prop,<node path>,<prop>)
Example Usage
Assume that the devicetree for some board looks like this:
/ {
soc {
#address-cells = <1>;
#size-cells = <1>;
spi0: spi@10014000 {
compatible = "sifive,spi0";
reg = <0x10014000 0x1000 0x20010000 0x3c0900>;
reg-names = "control", "mem";
...
};
};
};
The second entry in reg in spi@10014000 (<0x20010000 0x3c0900>) corresponds to mem, and has the
address 0x20010000. This address can be inserted into Kconfig as follows:
config FLASH_BASE_ADDRESS
default $(dt_node_reg_addr_hex,/soc/spi@10014000,1)
This expands to the following definition:
config FLASH_BASE_ADDRESS
default 0x20010000
Zephyr uses the Kconfiglib implementation of Kconfig, which includes some Kconfig extensions:
• Environment variables in source statements are expanded directly, meaning no “bounce” symbols
with option env="ENV_VAR" need to be defined.
Note: option env has been removed from the C tools as of Linux 4.18 as well.
The recommended syntax for referencing environment variables is $(FOO) rather than $FOO. This
uses the new Kconfig preprocessor. The $FOO syntax for expanding environment variables is only
supported for backwards compatibility.
• The source statement supports glob patterns and includes each matching file. A pattern is required
to match at least one file.
Consider the following example:
source "foo/bar/*/Kconfig"
If the pattern matches the files foo/bar/baz/Kconfig and foo/bar/qaz/Kconfig, the statement above
is equivalent to these two source statements:
source "foo/bar/baz/Kconfig"
source "foo/bar/qaz/Kconfig"
Note: source and osource are analogous to include and -include in Make.
• An rsource statement is available for including files specified with a relative path. The path is
relative to the directory of the Kconfig file that contains the rsource statement.
As an example, assume that foo/Kconfig is the top-level Kconfig file, and that foo/bar/Kconfig
has the following statements:
source "qaz/Kconfig1"
rsource "qaz/Kconfig2"
Then the first statement includes foo/qaz/Kconfig1, relative to the directory of the top-level Kconfig
file, while the second includes foo/bar/qaz/Kconfig2, relative to the directory of foo/bar/Kconfig.
• An orsource statement is also available, combining osource and rsource. For example, this
statement includes Kconfig1 and Kconfig2 from the current Kconfig file’s directory, if they exist:
orsource "Kconfig[12]"
• def_int, def_hex, and def_string keywords are available, analogous to def_bool. These set the
type and add a default at the same time.
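As a sketch (hypothetical symbol), these two definitions are equivalent:

```kconfig
config BUF_SIZE
	def_int 4096

config BUF_SIZE
	int
	default 4096
```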
Users interested in optimizing their configuration for security should refer to the Zephyr Security Guide’s
section on the Hardening Tool.
5.4 Snippets
Snippets are a way to save build system settings in one place, and then use those settings when you build
any Zephyr application. This lets you save common configuration separately when it applies to multiple
different applications.
Some example use cases for snippets are:
• changing your board’s console backend from a “real” UART to a USB CDC-ACM UART
• enabling frequently-used debugging options
• applying interrelated configuration settings to your “main” CPU and a co-processor core on an AMP
SoC
Tip: See Built-in snippets for a list of snippets that are provided by Zephyr.
Snippets have names. You use snippets by giving their names to the build system.
With cmake
If you are running CMake directly instead of using west build, use the SNIPPET variable. This is a
whitespace- or semicolon-separated list of the snippet names you want to use.
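For example, a direct CMake invocation could look like this (the application path, build directory, and snippet names are illustrative):

```
cmake -B build -S app -DSNIPPET="snippet1;snippet2"
cmake --build build
```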
Overview
This snippet redirects serial console output to a CDC ACM UART. The USB device which
should be used is configured using Devicetree.
zephyr_udc0: usbd@deadbeef {
compatible = "vnd,usb-device";
/* ... */
};
• Basics
• Namespacing
• Where snippets are located
• Processing order
• Devicetree overlays (.overlay)
• .conf files
• Board-specific settings
– By name
– By regular expression
Basics
Snippets are defined in YAML files named snippet.yml. A snippet.yml file looks like this:
name: snippet-name
# ... build system settings go here ...
Build system settings go in other keys in the file as described later on in this page.
You can combine settings whenever they appear under the same keys. For example, you can combine a
snippet-specific devicetree overlay and a .conf file like this:
name: foo
append:
EXTRA_DTC_OVERLAY_FILE: foo.overlay
EXTRA_CONF_FILE: foo.conf
Namespacing
When a snippet defines devicetree node labels, namespace them with the snippet name to avoid
collisions with other snippets. For example, a snippet named foo-bar might define:
chosen {
zephyr,baz = &snippet_foo_bar_dev;
};
snippet_foo_bar_dev: device@12345678 {
/* ... */
};
Where snippets are located
The build system looks for snippets in the directories listed in the SNIPPET_ROOT CMake variable, as
well as in Zephyr modules. For example, a Zephyr module in directory baz can declare a snippet root in
its zephyr/module.yml:
settings:
snippet_root: .
And then any snippet.yml files in baz/snippets will automatically be discovered by the build
system, just as if the path to baz had appeared in SNIPPET_ROOT.
Processing order
Snippets are processed in the order they are listed in the SNIPPET variable, or in the order of the -S
arguments if using west.
To apply bar after foo, list foo first, e.g. with west build -S foo -S bar, or with
-DSNIPPET="foo;bar" when invoking CMake directly.
When multiple snippets set the same configuration, the configuration value set by the last processed
snippet ends up in the final configurations.
For instance, if foo sets CONFIG_FOO=1 and bar sets CONFIG_FOO=2 in the above example, the resulting
final configuration will be CONFIG_FOO=2 because bar is processed after foo.
This principle applies to both Kconfig fragments (.conf files) and devicetree overlays (.overlay files).
Devicetree overlays (.overlay)
This snippet.yml adds foo.overlay to the build:
name: foo
append:
EXTRA_DTC_OVERLAY_FILE: foo.overlay
.conf files
This snippet.yml adds foo.conf to the build:
name: foo
append:
EXTRA_CONF_FILE: foo.conf
Board-specific settings
By name
A snippet can restrict settings to specific boards using a boards key:
name: ...
boards:
bar: # settings for board "bar" go here
append:
EXTRA_DTC_OVERLAY_FILE: bar.overlay
baz: # settings for board "baz" go here
append:
EXTRA_DTC_OVERLAY_FILE: baz.overlay
The above example uses bar.overlay when building for board bar, and baz.overlay when building for
baz.
By regular expression You can enclose the board name in slashes (/) to match the name against a
regular expression in the CMake syntax. The regular expression must match the entire board name.
For example:
name: foo
boards:
/my_vendor_.*/:
append:
EXTRA_DTC_OVERLAY_FILE: my_vendor.overlay
The above example uses devicetree overlay my_vendor.overlay when building for either board
my_vendor_board1 or my_vendor_board2. It would not use the overlay when building for either
another_vendor_board or x_my_vendor_board.
This page documents design goals for the snippets feature. Further information can be found in Issue
#51834.
• extensible: for example, it is possible to add board support for an existing built-in snippet without
modifying the zephyr repository
• composable: it is possible to use multiple snippets at once, for example by passing multiple -S
options to west build
• able to combine multiple types of configuration: snippets make it possible to store multiple
different types of build system settings in one place, and apply them all together
• specializable: for example, it is possible to customize a snippet’s behavior for a particular board,
or board revision
• future-proof and backwards-compatible: arbitrary future changes to the snippets feature will be
possible without breaking backwards compatibility for older snippets
• applicable to purely “software” changes: unlike the shields feature, snippets do not assume the
presence of a “daughterboard”, “shield”, “hat”, or any other type of external assembly which is
connected to the main board
• DRY (don’t repeat yourself): snippets allow you to skip unnecessary repetition; for example, you
can apply the same board-specific configuration to boards foo and bar by specifying /(foo|bar)/
as a regular expression for the settings, which will then apply to both boards
Note: The Application types section introduces the application types used in this page.
The Zephyr CMake package ensures that CMake can automatically select a Zephyr installation to use for
building the application, whether it is a Zephyr repository application, a Zephyr workspace application, or
a Zephyr freestanding application.
When developing a Zephyr-based application, a developer simply needs to write
find_package(Zephyr) at the beginning of the application CMakeLists.txt file.
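A minimal application CMakeLists.txt following this pattern might look like the sketch below (the project name and source file are illustrative):

```cmake
cmake_minimum_required(VERSION 3.20.0)
# Locate a Zephyr installation; ZEPHYR_BASE is used as a hint if set
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(app)
# Add application sources to the 'app' target provided by Zephyr
target_sources(app PRIVATE src/main.c)
```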
To use the Zephyr CMake package it must first be exported to the CMake user package registry. This
means creating a reference to the current Zephyr installation inside the CMake user package registry.
Ubuntu
In Linux, the CMake user package registry is found in:
~/.cmake/packages/Zephyr
macOS
In macOS, the CMake user package registry is found in:
~/.cmake/packages/Zephyr
Windows
In Windows, the CMake user package registry is found in:
HKEY_CURRENT_USER\Software\Kitware\CMake\Packages\Zephyr
The Zephyr CMake package allows CMake to automatically find a Zephyr base. One or more Zephyr
installations must be exported. Exporting multiple Zephyr installations may be useful when developing
or testing Zephyr freestanding applications, Zephyr workspace applications with vendor forks, etc.
When installing Zephyr using west, it is recommended to export Zephyr using west zephyr-export.
The Zephyr CMake package can also be exported to the CMake user package registry manually, using
the following command:
cmake -P <PATH-TO-ZEPHYR>/share/zephyr-package/cmake/zephyr_export.cmake
This will export the current Zephyr to the CMake user package registry.
To also export the Zephyr Unittest CMake package, run the following command in addition:
cmake -P <PATH-TO-ZEPHYR>/share/zephyrunittest-package/cmake/zephyr_export.cmake
The Zephyr CMake package search functionality allows for explicitly specifying a Zephyr base using an
environment variable.
To do this, use the following find_package() syntax:
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
This syntax instructs CMake to first search for Zephyr using the ZEPHYR_BASE environment variable,
and then fall back to the normal search paths.
When the ZEPHYR_BASE environment variable is not used for searching, the Zephyr installation matching
the following criteria will be used:
• A Zephyr repository application will use the Zephyr in which it is located. For example:
<projects>/zephyr-workspace/zephyr
  samples
    hello_world
Here, hello_world is a Zephyr repository application and will use the Zephyr repository it is part of.
• A Zephyr workspace application will use the Zephyr that shares its workspace. For example:
<projects>/zephyr-workspace
  zephyr
  ...
  my_applications
    my_first_app
Here, my_first_app will use the Zephyr in the same workspace.
Note: The root of a Zephyr workspace is identical to west topdir if the workspace was installed using
west.
• Zephyr freestanding application will use the Zephyr registered in the CMake user package registry.
For example:
<projects>/zephyr-workspace-1
  zephyr (Not exported to CMake)
<projects>/zephyr-workspace-2
  zephyr (Exported to CMake)
<home>/app
  CMakeLists.txt
  prj.conf
  src
    main.c
Here, app will use the Zephyr in zephyr-workspace-2, as that is the only installation exported to the
CMake user package registry.
Note: The Zephyr package selected on the first CMake invocation will be used for all subsequent
builds. To change the Zephyr package, for example to test the application using the ZEPHYR_BASE
environment setting, it is necessary to do a pristine build first (See Rebuilding an Application).
When writing an application, it is possible to specify a Zephyr version number x.y.z that must be
used in order to build the application.
Specifying a version is especially useful for a Zephyr freestanding application, as it ensures the application
is built with a minimum Zephyr version.
It also helps CMake to select the correct Zephyr to use for building, when there are multiple Zephyr
installations in the system.
For example:
find_package(Zephyr 2.2.0)
project(app)
will require app to be built with Zephyr 2.2.0 as minimum. CMake will search all exported candidates to
find a Zephyr installation which matches this version criteria.
Thus it is possible to have multiple Zephyr installations and have CMake automatically select between
them based on the version number provided, see CMake package version for details.
For example:
<projects>/zephyr-workspace-2.a
  zephyr (Exported to CMake)
<projects>/zephyr-workspace-2.b
  zephyr (Exported to CMake)
<home>/app
  CMakeLists.txt
  prj.conf
  src
    main.c
In this case, there are two released versions of Zephyr, each installed in its own workspace: 2.a and
2.b, named after the Zephyr version they contain.
To ensure app is built with minimum version 2.a the following find_package syntax may be used:
find_package(Zephyr 2.a)
project(app)
If no Zephyr is found which satisfies the required version, for example if the application specifies
find_package(Zephyr 2.z)
project(app)
then an error similar to the following is printed, listing the candidate packages that were considered:
Could not find a configuration file for package "Zephyr" that is compatible
with requested version "2.z".
<projects>/zephyr-workspace-2.a/zephyr/share/zephyr-package/cmake/ZephyrConfig.
˓→cmake, version: 2.a.0
<projects>/zephyr-workspace-2.b/zephyr/share/zephyr-package/cmake/ZephyrConfig.
˓→cmake, version: 2.b.0
Note: It can also be beneficial to specify a version number for Zephyr repository applications and Zephyr
workspace applications. Specifying a version in those cases ensures the application will only build if the
Zephyr repository or workspace is matching. This can be useful to avoid accidental builds when only
part of a workspace has been updated.
Testing out a new Zephyr version, while at the same time keeping the existing Zephyr in the workspace
untouched is sometimes beneficial.
Or having both an upstream Zephyr, Vendor specific, and a custom Zephyr in same workspace.
For example:
<projects>/zephyr-workspace
  zephyr
  zephyr-vendor
  zephyr-custom
  ...
  my_applications
    my_first_app
in this setup, find_package(Zephyr) has the following order of precedence for selecting which Zephyr
to use:
• The project named zephyr, if one exists in the workspace.
• Otherwise, the first project when the Zephyr candidates are ordered lexicographically, in this case:
– zephyr-custom
– zephyr-vendor
This means that my_first_app will use <projects>/zephyr-workspace/zephyr.
It is possible to specify a Zephyr preference list in the application.
A Zephyr preference list can be specified as:
set(ZEPHYR_PREFER "zephyr-custom" "zephyr-vendor")
find_package(Zephyr)
project(my_first_app)
ZEPHYR_PREFER is a list, allowing for multiple Zephyrs. If a Zephyr is specified in the list but not
found in the system, it is simply ignored and find_package(Zephyr) will continue to the next candidate.
This allows for temporary creation of a new Zephyr release to be tested, without touching current Zephyr.
When testing is done, the zephyr-test folder can simply be removed. Such a CMakeLists.txt could look
as:
set(ZEPHYR_PREFER "zephyr-test")
find_package(Zephyr)
project(my_first_app)
The Zephyr Build Configuration CMake package provides a possibility for a Zephyr based project to
control Zephyr build settings in a generic way.
It is similar to the per-user .zephyrrc file that can be used to set Environment Variables, but it sets CMake
variables instead. It also allows you to automatically share the build configuration among all users
through the project repository. It also allows more advanced use cases, such as loading of additional
CMake boilerplate code.
The Zephyr Build Configuration CMake package will be loaded in the Zephyr boilerplate code after initial
properties and ZEPHYR_BASE have been defined, but before any other CMake code is executed. This allows the Zephyr
Build Configuration CMake package to setup or extend properties such as: DTS_ROOT, BOARD_ROOT,
TOOLCHAIN_ROOT / other toolchain setup, fixed overlays, and any other property that can be controlled.
It also allows inclusion of additional boilerplate code.
To provide a Zephyr Build Configuration CMake package, create ZephyrBuildConfig.cmake and place
it in a Zephyr workspace top-level folder as:
<projects>/zephyr-workspace
  zephyr
  ...
  zephyr application (can be named anything)
    share/zephyrbuild-package/cmake/ZephyrBuildConfig.cmake
The Zephyr Build Configuration CMake package will not search in any CMake default search paths, and
thus cannot be installed in the CMake package registry. There will be no version checking on the Zephyr
Build Configuration package.
A minimal ZephyrBuildConfig.cmake might start like this:
# To ensure final path is absolute and does not contain ../.. in variable.
get_filename_component(APPLICATION_PROJECT_DIR
${CMAKE_CURRENT_LIST_DIR}/../../..
ABSOLUTE
)
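Continuing that sketch, such a file could then register the project as an additional root for boards and devicetree files (whether your project actually provides these roots is, of course, an assumption):

```cmake
# Make boards and DTS files in this project visible to the Zephyr build
list(APPEND BOARD_ROOT ${APPLICATION_PROJECT_DIR})
list(APPEND DTS_ROOT ${APPLICATION_PROJECT_DIR})
```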
The Zephyr Build Configuration CMake package can be located outside a Zephyr workspace, for example
located with a Zephyr freestanding application.
Create the build configuration as described in the previous section, and then refer to
the location of your Zephyr Build Configuration CMake package using the CMake variable
ZephyrBuildConfiguration_ROOT.
1. At the CMake command line, like this:
cmake -DZephyrBuildConfiguration_ROOT=<path-to-build-config> ...
2. By setting the variable in the application CMakeLists.txt file, like this:
set(ZephyrBuildConfiguration_ROOT <path-to-build-config>)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
If you choose this option, make sure to set the variable before calling find_package(Zephyr ...),
as shown above.
3. In a separate CMake script which is pre-loaded to populate the CMake cache, for example a
zephyr-settings.cmake file that sets ZephyrBuildConfiguration_ROOT as a cache variable.
You can tell the build system to use this file by adding -C zephyr-settings.cmake to your CMake
command line. This approach is useful when not using west, as both this setting and Zephyr modules
can be specified using the same file. See Zephyr module Without West.
Sysbuild is a higher-level build system that combines one or more Zephyr build systems, and optionally
additional build systems, into a single hierarchical build system.
For example, you can use sysbuild to build a Zephyr application together with the MCUboot bootloader,
flash them both onto your device, and debug the results.
Sysbuild works by configuring and building at least a Zephyr application and, optionally, as many
additional projects as you want. The additional projects can be either Zephyr applications or other
types of builds you want to run.
Like Zephyr’s build system, sysbuild is written in CMake and uses Kconfig.
5.6.1 Definitions
[Figure: sysbuild build flow. Driven by SB_CONF_FILE and BOARD settings, sysbuild runs the board/sample
CMake and Kconfig systems of each enabled domain (for example the main application and, if enabled,
MCUboot). Each domain produces its own runners.yaml and image files (elf, bin, hex, ...), and sysbuild
combines them into a single domains.yaml.]
The following are some key sysbuild features indicated in this figure:
• You can run sysbuild either with west build or directly via cmake.
• You can use sysbuild to generate application images from each build system, shown above as ELF,
BIN, and HEX files.
• You can configure sysbuild or any of the build systems it manages using various configuration
variables. These variables are namespaced so that sysbuild can direct them to the right build
system. In some cases, such as the BOARD variable, these are shared among multiple build systems.
• Sysbuild itself is also configured using Kconfig. For example, you can instruct sysbuild to build the
MCUboot bootloader, as well as to build and link your main Zephyr application as an MCUboot
child image, using sysbuild’s Kconfig files.
• Sysbuild integrates with west’s Building, Flashing and Debugging commands. It does this by managing
the flash and debug runners, specifically the runners.yaml files that each Zephyr build system will
contain. These are packaged into a global view of how to flash and debug each build system in a
domains.yaml file generated and managed by sysbuild.
• Build names are prefixed with the target name and an underscore; for example, the sysbuild target
is prefixed with sysbuild_, and if MCUboot is enabled as part of sysbuild, its targets are prefixed
with mcuboot_. This also allows running targets like menuconfig with the prefix, for example
ninja sysbuild_menuconfig (if using Ninja) to configure sysbuild, or make mcuboot_menuconfig
(if using Make).
As mentioned above, you can run sysbuild via west build or cmake.
west build
Here is an example. For details, see Sysbuild (multi-domain builds) in the west build documentation.
Tip: To configure west build to use --sysbuild by default from now on, run:
Since sysbuild supports both single- and multi-image builds, this lets you use sysbuild all the time,
without worrying about what type of build you are running.
To turn this off, run this before generating your build system:
To turn this off for just one west build command, run:
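The elided commands can be sketched as follows (the build.sysbuild configuration key and the --no-sysbuild flag are west options; verify against your west version):

```shell
# Use --sysbuild by default from now on:
west config --global build.sysbuild True

# Turn this off (before generating your build system):
west config build.sysbuild False

# Turn this off for just one west build command:
west build --no-sysbuild ...
```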
cmake
Here is an example using CMake and Ninja.
To use sysbuild directly with CMake, you must specify the sysbuild project as the source folder, and give
-DAPP_DIR=<path-to-sample> as an extra CMake argument. APP_DIR is the path to the main Zephyr
application managed by sysbuild.
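A sketch of such an invocation, assuming it is run from the zephyr repository with an example board and sample (board and paths are illustrative):

```shell
cmake -Bbuild -GNinja -DBOARD=reel_board -DAPP_DIR=samples/hello_world share/sysbuild
ninja -Cbuild
```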
When building a single Zephyr application without sysbuild, all CMake cache settings and Kconfig build
options given on the command line as -D<var>=<value> or -DCONFIG_<var>=<value> are handled by
the Zephyr build system.
However, when sysbuild combines multiple Zephyr build systems, there could be Kconfig settings
exclusive to sysbuild (and not used by any of the applications). To handle this, sysbuild has namespaces
for configuration variables. You can use these namespaces to direct settings either to sysbuild itself
or to a specific Zephyr application managed by sysbuild using the information in these sections.
The following example shows how to build hello_world with MCUboot enabled, applying debug
optimizations to both images:
west build
cmake
ninja -Cbuild
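The flattened snippets above correspond to commands along these lines (a sketch assembled from the namespacing rules described below; board and paths are illustrative):

```shell
# west build
west build -b reel_board --sysbuild samples/hello_world -- \
    -DSB_CONFIG_BOOTLOADER_MCUBOOT=y \
    -Dhello_world_CONFIG_DEBUG_OPTIMIZATIONS=y \
    -Dmcuboot_CONFIG_DEBUG_OPTIMIZATIONS=y

# cmake + ninja, run from the zephyr repository
cmake -Bbuild -GNinja -DBOARD=reel_board -DAPP_DIR=samples/hello_world \
    -DSB_CONFIG_BOOTLOADER_MCUBOOT=y \
    -Dhello_world_CONFIG_DEBUG_OPTIMIZATIONS=y \
    -Dmcuboot_CONFIG_DEBUG_OPTIMIZATIONS=y \
    share/sysbuild
ninja -Cbuild
```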
CMake variable settings can be passed to CMake using -D<var>=<value> on the com-
mand line. You can also set Kconfig options via CMake as -DCONFIG_<var>=<value> or
-D<namespace>_CONFIG_<var>=<value>.
Since sysbuild is the entry point for the build system, and sysbuild is written in CMake, all CMake
variables are first processed by sysbuild.
Sysbuild creates a namespace for each domain. The namespace prefix is the domain’s application name.
See Adding Zephyr applications to sysbuild for more information.
To set the variable <var> in the namespace <namespace>, use this syntax:
-D<namespace>_<var>=<value>
For example, to set the CMake variable FOO in the my_sample application build system to the value BAR,
run the following commands:
west build
cmake
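As a sketch (the board and path placeholders are not from the original text):

```shell
# west build
west build -b <board> --sysbuild <path-to-my_sample> -- -Dmy_sample_FOO=BAR

# cmake, run from the zephyr repository
cmake -Bbuild -GNinja -DBOARD=<board> -DAPP_DIR=<path-to-my_sample> \
    -Dmy_sample_FOO=BAR share/sysbuild
```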
Kconfig namespacing
To set the sysbuild Kconfig option <var> to the value <value>, use this syntax:
-DSB_CONFIG_<var>=<value>
In the previous example, SB_CONFIG is the namespace prefix for sysbuild’s Kconfig options.
To set a Zephyr application’s Kconfig option instead, use this syntax:
-D<namespace>_CONFIG_<var>=<value>
In the previous example, <namespace> is the application name discussed above in CMake variable
namespacing.
For example, to set the Kconfig option FOO in the my_sample application build system to the value BAR,
run the following commands:
west build
cmake
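As a sketch (the board and path placeholders are not from the original text):

```shell
# west build
west build -b <board> --sysbuild <path-to-my_sample> -- -Dmy_sample_CONFIG_FOO=BAR

# cmake, run from the zephyr repository
cmake -Bbuild -GNinja -DBOARD=<board> -DAPP_DIR=<path-to-my_sample> \
    -Dmy_sample_CONFIG_FOO=BAR share/sysbuild
```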
Tip: When no <namespace> is used, the Kconfig setting is passed to the main Zephyr application
my_sample.
This means that passing -DCONFIG_<var>=<value> and -Dmy_sample_CONFIG_<var>=<value> are
equivalent.
This allows you to build the same application with or without sysbuild using the same syntax for setting
Kconfig values at CMake time. For example, the following commands will work in the same way:
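For instance, both of the following (sketched with the FOO/BAR placeholders used earlier) set the same option in my_sample:

```shell
west build -b <board> --sysbuild <path-to-my_sample> -- -DCONFIG_FOO=BAR
west build -b <board> --sysbuild <path-to-my_sample> -- -Dmy_sample_CONFIG_FOO=BAR
```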
You can use west debug to debug the main application, whether you are using sysbuild or not. Just
follow the existing west debug guide to debug the main sample.
To debug a different domain (Zephyr application), such as mcuboot, use the --domain argument, as
follows:
cmake
ninja -Cbuild
This builds hello_world and mcuboot for the reel_board, and then flashes both the mcuboot and
hello_world application images to the board.
More detailed information regarding the use of MCUboot with Zephyr can be found in the MCUboot with
Zephyr documentation page on the MCUboot website.
Note: MCUboot’s default configuration will perform a full chip erase when flashed. This can be controlled
through the MCUboot Kconfig option CONFIG_ZEPHYR_TRY_MASS_ERASE. If this option is enabled, then
flashing only MCUboot, for example using west flash --domain mcuboot, may erase the entire flash,
including the main application image.
You can set sysbuild’s Kconfig options for a single application using configuration files. By default,
sysbuild looks for a configuration file named sysbuild.conf in the application top-level directory.
In the following example, there is a sysbuild.conf file that enables building and flashing with MCUboot
whenever sysbuild is used:
<home>/application
├── CMakeLists.txt
├── prj.conf
└── sysbuild.conf
SB_CONFIG_BOOTLOADER_MCUBOOT=y
You can specify which configuration file to use with the -DSB_CONF_FILE=<sysbuild-conf-file> CMake
build setting.
For example, you can create sysbuild-mcuboot.conf and then specify this file when building with
sysbuild, as follows:
west build
cmake
ninja -Cbuild
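A sketch of this flow; the file contents below are real, while the west invocation is illustrative and commented out:

```shell
cat > sysbuild-mcuboot.conf <<'EOF'
SB_CONFIG_BOOTLOADER_MCUBOOT=y
EOF

# Illustrative build invocation:
# west build -b reel_board --sysbuild samples/hello_world -- \
#     -DSB_CONF_FILE=sysbuild-mcuboot.conf

cat sysbuild-mcuboot.conf
```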
Sysbuild creates build targets for each image (including sysbuild itself) for the following modes:
• menuconfig
• hardenconfig
• guiconfig
For the main application, these targets can be run normally without any prefix (just as without
sysbuild). For other images (including sysbuild itself), they are run with the image name and an
underscore as a prefix, e.g. sysbuild_ or mcuboot_, using ninja or make. For details on how to run
image build targets that do not have mapped build targets in sysbuild, see the Dedicated image build
targets section.
Not all image build targets are given equivalent prefixed build targets when sysbuild is used; for
example, build targets like ram_report, rom_report, footprint, puncover and pahole are not exposed.
When using Trusted Firmware, this includes build targets prefixed with tfm_ and bl2_, for example
tfm_rom_report and bl2_ram_report. To run these build targets, provide the image’s build directory
to west/ninja/make along with the name of the build target to execute.
west
Assuming that a project has been configured and built using west with sysbuild and mcuboot enabled,
in the default build folder location, the rom_report build target for mcuboot can be run with:
ninja
Assuming that a project has been configured using cmake and built using ninja, with sysbuild and
mcuboot enabled, the rom_report build target for mcuboot can be run with:
make
Assuming that a project has been configured using cmake and built using make, with sysbuild and
mcuboot enabled, the rom_report build target for mcuboot can be run with:
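A sketch of the three variants, assuming the default build directory build with the MCUboot domain in build/mcuboot (the directory layout is an assumption):

```shell
# west
west build -d build/mcuboot -t rom_report

# ninja
ninja -C build/mcuboot rom_report

# make
make -C build/mcuboot rom_report
```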
You can use the ExternalZephyrProject_Add() function to add Zephyr applications as sysbuild domains.
Call this CMake function from your application’s sysbuild.cmake file, or any other CMake file you
know will run as part of the sysbuild CMake invocation.
To include my_sample as another sysbuild domain, targeting the same board as the main image, use this
example:
ExternalZephyrProject_Add(
APPLICATION my_sample
SOURCE_DIR <path-to>/my_sample
)
This could be useful, for example, if your board requires you to build and flash an SoC-specific bootloader
along with your main application.
In sysbuild and the Zephyr CMake build system, a board may refer to:
• A physical board with a single core SoC.
• A specific core on a physical board with a multi-core SoC, such as nrf5340dk_nrf5340.
• A specific SoC on a physical board with multiple SoCs, such as nrf9160dk_nrf9160 and
nrf9160dk_nrf52840.
If your main application, for example, is built for mps2_an521, and your helper application must target
the mps2_an521_remote board (cpu1), add a CMake function call that is structured as follows:
ExternalZephyrProject_Add(
APPLICATION my_sample
SOURCE_DIR <path-to>/my_sample
BOARD mps2_an521_remote
)
This could be useful, for example, if your main application requires another helper Zephyr application to
be built and flashed alongside it, but the helper runs on another core in your SoC.
You can control whether extra applications are included as sysbuild domains using Kconfig.
If the extra application image is specific to the board or an application, you can create two additional
files: sysbuild.cmake and Kconfig.sysbuild.
For an application, this would look like this:
<home>/application
├── CMakeLists.txt
├── prj.conf
├── Kconfig.sysbuild
└── sysbuild.cmake
if(SB_CONFIG_SECOND_SAMPLE)
ExternalZephyrProject_Add(
APPLICATION second_sample
SOURCE_DIR <path-to>/second_sample
)
endif()
source "sysbuild/Kconfig"
config SECOND_SAMPLE
bool "Second sample"
default y
This will include second_sample by default, while still allowing you to disable it using the Kconfig option
SECOND_SAMPLE.
For more information on setting sysbuild Kconfig options, see Kconfig namespacing.
When adding a Zephyr application to sysbuild, such as MCUboot, the configuration files from the
application (MCUboot) itself will be used.
When integrating multiple applications with each other, it is often necessary to make adjustments
to the configuration of extra images.
Sysbuild gives users the ability to create Kconfig fragments or devicetree overlays that will be used
together with the application’s default configuration. Sysbuild also allows users to change the
application configuration directory in order to give users full control of an image’s configuration.
Zephyr application Kconfig fragment and devicetree overlay In the folder of the main application,
create a Kconfig fragment or a devicetree overlay under a sysbuild folder, where the name of the file
is <image>.conf or <image>.overlay. For example, if your main application includes my_sample, create
a sysbuild/my_sample.conf file or a devicetree overlay sysbuild/my_sample.overlay.
A Kconfig fragment could look like this:
# sysbuild/my_sample.conf
CONFIG_FOO=n
Zephyr application configuration directory In the folder of the main application, create a new folder
under sysbuild/<image>/. This folder will then be used as APPLICATION_CONFIG_DIR when building
<image>. As an example, if your main application includes my_sample then create a
sysbuild/my_sample/ folder and place any configuration files in there as you would normally do:
<home>/application
├── CMakeLists.txt
├── prj.conf
└── sysbuild
    └── my_sample
        ├── prj.conf
        ├── app.overlay
        └── boards
            ├── <board_A>.conf
            ├── <board_A>.overlay
            ├── <board_B>.conf
            └── <board_B>.overlay
All configuration files under the sysbuild/my_sample/ folder will now be used when my_sample is
included in the build, and the default configuration files for my_sample will be ignored.
This gives you full control over how images are configured when integrating them with your application.
You can include non-Zephyr applications in a multi-image build using the standard CMake module
ExternalProject. Please refer to the CMake documentation for usage details.
When using ExternalProject, the non-Zephyr application will be built as part of the sysbuild build
invocation, but west flash or west debug will not be aware of the application. Instead, you must
manually flash and debug the application.
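A minimal sketch of such an ExternalProject usage from a sysbuild.cmake file (the project name and directory are hypothetical):

```cmake
include(ExternalProject)

ExternalProject_Add(
    my_external_app                              # hypothetical name
    SOURCE_DIR ${CMAKE_CURRENT_LIST_DIR}/my_external_app
    INSTALL_COMMAND ""                           # skip the install step
    BUILD_ALWAYS ON                              # rebuild on every invocation
)
```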
Sysbuild can be extended by other modules to give it additional functionality or to include other
configuration or images; an example could be adding support for another bootloader or an external
signing method.
Modules can extend sysbuild by adding custom CMake or Kconfig files, as normal modules do; this causes
the files to be included in each image that is part of a project. Alternatively, there are
sysbuild-specific module extension files which can be used to include CMake and Kconfig files for the
overall sysbuild image itself; this is where, for example, a custom image for a particular board or
SoC can be added.
Connectivity
6.1 Bluetooth
This section contains information regarding the Bluetooth stack of the Zephyr OS. You can use this
information to understand the principles behind the operation of the layers and how they were
implemented.
Zephyr includes a complete Bluetooth Low Energy stack from application to radio hardware, as well as
portions of a Classical Bluetooth (BR/EDR) Host layer.
6.1.1 Overview
• Supported Features
Since its inception, Zephyr has had a strong focus on Bluetooth and, in particular, on Bluetooth Low
Energy (BLE). Through the contributions of several companies and individuals involved in existing open
source implementations of the Bluetooth specification (Linux’s BlueZ) as well as the design and
development of BLE radio hardware, the protocol stack in Zephyr has grown to be mature and
feature-rich, as can be seen in the section below.
Supported Features
Zephyr comes integrated with a feature-rich and highly configurable Bluetooth stack.
• Bluetooth v5.3 compliant
– Highly configurable
* Controller-only (HCI) over UART, SPI, USB and IPC physical transports
* Host-only over UART, SPI, and IPC (shared memory)
* Combined (Host + Controller)
• Bluetooth-SIG qualified
– Controller on Nordic Semiconductor hardware
* Relay, Friend Node, Low-Power Node (LPN) and GATT Proxy features
* Both Provisioning roles and bearers supported (PB-ADV & PB-GATT)
* Foundation Models included
* Highly configurable, fits in devices with as little as 16k of RAM
– IPSP/6LoWPAN for IPv6 connectivity over Bluetooth LE
* SPI
* Local controller support as a virtual HCI driver
– Verified with multiple popular controllers
• LE Audio in Host and Controller
– Isochronous channels
Overview
This page describes the software architecture of Zephyr’s Bluetooth protocol stack.
Note: Zephyr supports mainly Bluetooth Low Energy (BLE), the low-power version of the Bluetooth
specification. Zephyr also has limited support for portions of the BR/EDR Host. Throughout this
architecture document we use BLE interchangeably for Bluetooth except when noted.
BLE Layers There are 3 main layers that together constitute a full Bluetooth Low Energy protocol stack:
• Host: This layer sits right below the application, and is comprised of multiple (non-real-time)
network and transport protocols enabling applications to communicate with peer devices in a standard
and interoperable way.
• Controller: The Controller implements the Link Layer (LE LL), the low-level, real-time protocol
which provides, in conjunction with the Radio Hardware, standard-interoperable over-the-air
communication. The LL schedules packet reception and transmission, guarantees the delivery of data,
and handles all the LL control procedures.
• Radio Hardware: Hardware implements the required analog and digital baseband functional
blocks that permit the Link Layer firmware to send and receive in the 2.4GHz band of the spectrum.
Host Controller Interface The Bluetooth Specification describes the format in which a Host must
communicate with a Controller. This is called the Host Controller Interface (HCI) protocol. HCI can be
implemented over a range of different physical transports like UART, SPI, or USB. This protocol defines
the commands that a Host can send to a Controller and the events that it can expect in return, and also
the format for user and protocol data that needs to go over the air. The HCI ensures that different Host
and Controller implementations can communicate in a standard way making it possible to combine Hosts
and Controllers from different vendors.
Configurations The three separate layers of the protocol and the standardized interface make it
possible to implement the Host and Controller on different platforms. The two following configurations
are commonly used:
• Single-chip configuration: In this configuration, a single microcontroller implements all three
layers and the application itself. This can also be called a system-on-chip (SoC) implementation.
In this case the BLE Host and the BLE Controller communicate directly through function calls
and queues in RAM. The Bluetooth specification does not specify how HCI is implemented in this
single-chip configuration and so how HCI commands, events, and data flows between the two can
be implementation-specific. This configuration is well suited for those applications and designs
that require a small footprint and the lowest possible power consumption, since everything runs
on a single IC.
• Dual-chip configuration: This configuration uses two separate ICs, one running the Application
and the Host, and a second one with the Controller and the Radio Hardware. This is sometimes
also called a connectivity-chip configuration. This configuration allows for a wider variety of
combinations of Hosts when using the Zephyr OS as a Controller. Since HCI ensures interoperability
among Host and Controller implementations, including of course Zephyr’s very own BLE Host and
Controller, users of the Zephyr Controller can choose whichever Host, running on any platform
they prefer. For example, the host can be the Linux BLE Host stack (BlueZ) running on any
processor capable of supporting Linux. The Host processor may of course also run Zephyr and
the Zephyr OS BLE Host. Conversely, combining an IC running the Zephyr Host with an external
Controller that does not run Zephyr is also supported.
Build Types The Zephyr software stack as an RTOS is highly configurable, and in particular, the BLE
subsystem can be configured in multiple ways during the build process to include only the features and
layers that are required to reduce RAM and ROM footprint as well as power consumption. Here’s a short
list of the different BLE-enabled builds that can be produced from the Zephyr project codebase:
• Controller-only build: When built as a BLE Controller, Zephyr includes the Link Layer and a
special application. This application is different depending on the physical transport chosen for
HCI:
– hci_uart
– hci_usb
– hci_spi
This application acts as a bridge between the UART, SPI or USB peripherals and the Controller
subsystem, listening for HCI commands, sending application data and responding with events and
received data. A build of this type sets the following Kconfig option values:
– CONFIG_BT=y
– CONFIG_BT_HCI=y
– CONFIG_BT_HCI_RAW=y
– CONFIG_BT_CTLR=y
– CONFIG_BT_LL_SW_SPLIT=y (if using the open source Link Layer)
• Host-only build: A Zephyr OS Host build will contain the Application and the BLE Host, along
with an HCI driver (UART or SPI) to interface with an external Controller chip. A build of this type
sets the following Kconfig option values:
– CONFIG_BT=y
– CONFIG_BT_HCI=y
– CONFIG_BT_CTLR=n
All of the samples located in samples/bluetooth except for the ones used for Controller-only
builds can be built as Host-only.
• Combined build: This includes the Application, the Host and the Controller, and it is used
exclusively for single-chip (SoC) configurations. A build of this type sets the following Kconfig
option values:
– CONFIG_BT=y
– CONFIG_BT_HCI=y
– CONFIG_BT_CTLR=y
– CONFIG_BT_LL_SW_SPLIT=y (if using the open source Link Layer)
All of the samples located in samples/bluetooth except for the ones used for Controller-only
builds can be built as Combined.
The picture below shows the SoC or single-chip configuration when using a Zephyr combined build (a
build that includes both a BLE Host and a Controller in the same firmware image that is programmed
onto the chip):
When using connectivity or dual-chip configurations, several Host and Controller combinations are
possible, some of which are depicted below:
When using a Zephyr Host (left side of image), two instances of Zephyr OS must be built with different
configurations, yielding two separate images that must be programmed into each of the chips
respectively. The Host build image contains the application, the BLE Host and the selected HCI driver
(UART or SPI), while the Controller build runs either the hci_uart or the hci_spi app to provide an
interface to the BLE Controller.
This configuration is not limited to using a Zephyr OS Host, as the right side of the image shows. One
can indeed take one of the many existing GNU/Linux distributions, most of which include Linux’s own
BLE Host (BlueZ), to connect it via UART or USB to one or more instances of the Zephyr OS Controller
build. BlueZ as a Host supports multiple Controllers simultaneously for applications that require more
than one BLE radio operating at the same time but sharing the same Host stack.
samples/bluetooth/
Sample Bluetooth code. This is a good reference to get started with Bluetooth application development.
tests/bluetooth/
Test applications. These applications are used to verify the functionality of the Bluetooth stack, but
are not necessarily the best source for sample code (see samples/bluetooth instead).
doc/guides/bluetooth/
Extra documentation, such as PICS documents.
Host
The Bluetooth Host implements all the higher-level protocols and profiles, and most importantly, provides
a high-level API for applications. The following diagram depicts the main protocol & profile layers of the
host.
Lowest down in the host stack sits a so-called HCI driver, which is responsible for abstracting away the
details of the HCI transport. It provides a basic API for delivering data from the controller to the host,
and vice-versa.
Perhaps the most important block above the HCI handling is the Generic Access Profile (GAP). GAP
simplifies Bluetooth LE access by defining four distinct roles of BLE usage:
• Connection-oriented roles
– Peripheral (e.g. a smart sensor, often with a limited user interface)
– Central (typically a mobile phone or a PC)
• Connection-less roles
– Broadcaster (sending out BLE advertisements, e.g. a smart beacon)
– Observer (scanning for BLE advertisements)
Each role comes with its own build-time configuration option: CONFIG_BT_PERIPHERAL,
CONFIG_BT_CENTRAL, CONFIG_BT_BROADCASTER & CONFIG_BT_OBSERVER. Of the connection-oriented
roles, central implicitly enables observer role, and peripheral implicitly enables broadcaster role.
Usually the first step when creating an application is to decide which roles are needed and go from
there. Bluetooth mesh is a slightly special case, requiring at least the observer and broadcaster
roles, and possibly also the Peripheral role. This will be described in more detail in a later section.
Peripheral role Most Zephyr-based BLE devices will likely be peripheral-role devices. This means
that they perform connectable advertising and expose one or more GATT services. After registering
services using the bt_gatt_service_register() API, the application will typically start connectable
advertising using the bt_le_adv_start() API.
There are several peripheral sample applications available in the tree, such as
samples/bluetooth/peripheral_hr.
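A minimal sketch of that flow, assuming a GATT service my_svc defined elsewhere (the advertising data and parameters are illustrative defaults, not taken from the peripheral_hr sample):

```c
#include <zephyr/bluetooth/bluetooth.h>
#include <zephyr/bluetooth/gatt.h>

extern struct bt_gatt_service my_svc; /* assumed: defined elsewhere */

static const struct bt_data ad[] = {
	BT_DATA_BYTES(BT_DATA_FLAGS, (BT_LE_AD_GENERAL | BT_LE_AD_NO_BREDR)),
};

int start_peripheral(void)
{
	int err = bt_enable(NULL); /* NULL: enable synchronously */

	if (err) {
		return err;
	}

	err = bt_gatt_service_register(&my_svc);
	if (err) {
		return err;
	}

	/* Connectable advertising with default parameters */
	return bt_le_adv_start(BT_LE_ADV_CONN, ad, ARRAY_SIZE(ad), NULL, 0);
}
```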
Central role Central role may not be as common for Zephyr-based devices as peripheral role, but it is
still a plausible one and equally well supported in Zephyr. Rather than accepting connections from
other devices, a central role device will scan for available peripheral devices and choose one to
connect to. Once connected, a central will typically act as a GATT client, first performing discovery
of available services and then accessing one or more supported services.
To initially discover a device to connect to, the application will likely use the bt_le_scan_start()
API, wait for an appropriate device to be found (using the scan callback), stop scanning using
bt_le_scan_stop(), and then connect to the device using bt_conn_le_create(). If the central wants
to keep automatically reconnecting to the peripheral, it should use the bt_le_set_auto_conn() API.
There are some sample applications for the central role available in the tree, such as
samples/bluetooth/central_hr.
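A sketch of the central flow described above (the connect-on-first-strong-result policy and the RSSI threshold are placeholders, not taken from the central_hr sample):

```c
#include <zephyr/bluetooth/bluetooth.h>
#include <zephyr/bluetooth/conn.h>

static struct bt_conn *default_conn;

static void device_found(const bt_addr_le_t *addr, int8_t rssi,
			 uint8_t type, struct net_buf_simple *ad)
{
	/* Placeholder policy: connect to the first strong, connectable result */
	if (type != BT_GAP_ADV_TYPE_ADV_IND || rssi < -50) {
		return;
	}

	if (bt_le_scan_stop()) {
		return;
	}

	bt_conn_le_create(addr, BT_CONN_LE_CREATE_CONN,
			  BT_LE_CONN_PARAM_DEFAULT, &default_conn);
}

int start_central(void)
{
	int err = bt_enable(NULL);

	if (err) {
		return err;
	}

	return bt_le_scan_start(BT_LE_SCAN_ACTIVE, device_found);
}
```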
Observer role An observer role device will use the bt_le_scan_start() API to scan for devices, but
it will not connect to any of them. Instead it will simply utilize the advertising data of found
devices, optionally combining it with the received signal strength (RSSI).
Broadcaster role A broadcaster role device will use the bt_le_adv_start() API to advertise specific
advertising data, but the type of advertising will be non-connectable, i.e. other devices will not be
able to connect to it.
Connections Connection handling and the related APIs can be found in the Connection Management
section.
Security To achieve a secure relationship between two Bluetooth devices a process called pairing is
used. This process can either be triggered implicitly through the security properties of GATT services,
or explicitly using the bt_conn_set_security() API on a connection object.
To achieve a higher security level, and protect against Man-In-The-Middle (MITM) attacks, it is
recommended to use some out-of-band channel during the pairing. If the devices have a sufficient user
interface this “channel” is the user itself. The capabilities of the device are registered using the
bt_conn_auth_cb_register() API. The bt_conn_auth_cb struct that’s passed to this API has a set
of optional callbacks that can be used during the pairing - if the device lacks some feature the
corresponding callback may be set to NULL. For example, if the device does not have an input method
but does have a display, the passkey_entry and passkey_confirm callbacks would be set to NULL, but the
passkey_display would be set to a callback capable of displaying a passkey to the user.
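A sketch of the display-only case described above (function names are illustrative):

```c
#include <zephyr/sys/printk.h>
#include <zephyr/bluetooth/conn.h>

static void passkey_display(struct bt_conn *conn, unsigned int passkey)
{
	printk("Passkey: %06u\n", passkey);
}

static void auth_cancel(struct bt_conn *conn)
{
	printk("Pairing cancelled\n");
}

static struct bt_conn_auth_cb auth_cb = {
	.passkey_display = passkey_display,
	.passkey_entry = NULL,   /* no input method */
	.passkey_confirm = NULL, /* no yes/no confirmation input */
	.cancel = auth_cancel,
};

void register_auth_callbacks(void)
{
	bt_conn_auth_cb_register(&auth_cb);
}
```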
Depending on the local and remote security requirements & capabilities, there are four possible security
levels that can be reached:
BT_SECURITY_L1
No encryption and no authentication.
BT_SECURITY_L2
Encryption but no authentication (no MITM protection).
BT_SECURITY_L3
Encryption and authentication using the legacy pairing method from Bluetooth 4.0 and
4.1.
BT_SECURITY_L4
Encryption and authentication using the LE Secure Connections feature available since
Bluetooth 4.2.
Note: Mesh has its own security solution through a process called provisioning. It follows a similar
procedure as pairing, but is done using separate mesh-specific APIs.
L2CAP L2CAP stands for the Logical Link Control and Adaptation Protocol. It is a common layer for all
communication over Bluetooth connections; however, an application comes in direct contact with it only
when using it in the so-called Connection-oriented Channels (CoC) mode. More information on this can
be found in the L2CAP API section.
GATT The Generic Attribute Profile is the most common means of communication over LE connections.
A more detailed description of this layer and the API reference can be found in the GATT API reference
section.
Mesh Mesh is a little bit special when it comes to the needed GAP roles. By default, mesh requires
both observer and broadcaster role to be enabled. If the optional GATT Proxy feature is desired, then
peripheral role should also be enabled.
The API reference for mesh can be found in the Mesh API reference section.
LE Audio LE Audio is a set of profiles and services that utilize GATT and Isochronous Channels to
provide audio over Bluetooth Low Energy. The architecture and API references can be found in Bluetooth
Audio Architecture.
Persistent storage The Bluetooth host stack uses the settings subsystem to implement persistent
storage to flash. This requires the presence of a flash driver and a designated “storage” partition on
flash. A typical set of configuration options needed will look something like the following:
CONFIG_BT_SETTINGS=y
CONFIG_FLASH=y
CONFIG_FLASH_PAGE_LAYOUT=y
CONFIG_FLASH_MAP=y
CONFIG_NVS=y
CONFIG_SETTINGS=y
Once enabled, it is the responsibility of the application to call settings_load() after having initialized
Bluetooth (using the bt_enable() API).
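A sketch of that initialization order (the wrapper function name is illustrative):

```c
#include <zephyr/bluetooth/bluetooth.h>
#include <zephyr/settings/settings.h>

int bt_init_with_storage(void)
{
	int err = bt_enable(NULL); /* NULL: enable synchronously */

	if (err) {
		return err;
	}

	/* Restores bonding keys and other persisted Bluetooth state */
	return settings_load();
}
```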
Overview
1. HCI
• Host Controller Interface, Bluetooth standard
• Provides Zephyr Bluetooth HCI Driver
2. HAL
• Hardware Abstraction Layer
• Vendor Specific, and Zephyr Driver usage
3. Ticker
• Soft real time radio/resource scheduling
4. LL_SW
• Software-based Link Layer implementation
• States and roles, control procedures, packet controller
5. Util
• Bare metal memory pool management
• Queues of variable count, lockless usage
• FIFO of fixed count, lockless usage
• Mayfly concept based deferred ISR executions
Architecture
Execution Overview
Architecture Overview
Scheduling
Ticker
Scheduling Variants
Event Handling
Data Flow
Execution Priorities
• Event handle (0, 1) < Event preparation (2, 3) < Event/Rx done (4) < Tx request (5) < Role
management (6) < Host (7).
• LLL is vendor ISR, ULL is Mayfly ISR concept, Host is kernel thread.
LLL Execution
LLL Resume
Mayfly
• Mayfly are multi-instance scalable ISR execution contexts
Legacy Controller
Hardware Requirements
Nordic Semiconductor The Nordic Semiconductor Bluetooth Low Energy Controller implementation
requires the following hardware peripherals.
Timer: NRF_TIMER0 or NRF_TIMER4 (note 1), and NRF_TIMER1 (note 0); 2 or 1 instances; shareable: No
• 2 instances, one each for packet timing and tIFS software switching, respectively
• 7 capture/compare registers (3 mandatory, 1 optional for ISR profiling, 4 for single timer tIFS
switching) on first instance
• 4 capture/compare registers for second instance, if single tIFS timer is not used
[Figure: Bluetooth LE Audio architecture: the Generic Audio Framework (GAF) sits on top of GATT, SMP,
GAP and ISO.]
The Generic Audio Framework (GAF) is considered the middleware of the Bluetooth LE Audio architecture.
The GAF contains the profiles and services that allow higher-layer applications and profiles to set up
streams, change volume, control media and telephony, and more. The GAF builds on GATT, GAP and
isochronous channels (ISO).
GAF uses GAP to connect, advertise and synchronize to other devices. GAF uses GATT to configure
streams, associate streams with content (e.g. media or telephony), control volume and more. GAF
uses ISO for the audio streams themselves, both as unicast (connected) audio streams or broadcast
(unconnected) audio streams.
GAF mandates the use of the LC3 codec, but also supports other codecs.
The top-level profiles TMAP and HAP are not part of the GAF, but rather provide top-level requirements
for how to use the GAF.
GAF has been implemented in Zephyr with the following structure.
Using the Bluetooth Audio Stack To use any of the profiles in the Bluetooth Audio Stack, including
the top-level profiles outside of GAF, CONFIG_BT_AUDIO shall be enabled. This Kconfig option allows
the enabling of the individual profiles inside of the Bluetooth Audio Stack. Each profile can generally
be enabled on its own, but enabling higher-layer profiles (such as CAP, TMAP and HAP) will typically
require enabling some of the lower layer profiles.
It is, however, possible to create a device that uses e.g. only Stream Control (with just the BAP), without
using any of the content control or rendering/capture control profiles, or vice versa. Using the higher
layer profiles will however typically provide a better user experience and better interoperability with
other devices.
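As an illustration of such a Stream Control-only configuration, a unicast client might enable just the following in its prj.conf (a sketch; the exact option names, in particular CONFIG_BT_BAP_UNICAST_CLIENT, should be verified against the Kconfig reference for your Zephyr version):

CONFIG_BT=y
CONFIG_BT_AUDIO=y
CONFIG_BT_BAP_UNICAST_CLIENT=y

Higher-layer profiles such as CAP would then pull in additional lower-layer options via their own Kconfig dependencies.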
Footnotes (to the hardware requirements table above):
[0] CONFIG_BT_CTLR_TIFS_HW=n
[1] CONFIG_BT_CTLR_SW_SWITCH_SINGLE_TIMER=y
[2] When not using pre-defined PPI channels
[3] For software-based tIFS switching
[4] Drivers that use nRFx interfaces
[5] For nRF53x Series
[Figure: GAF implementation structure in Zephyr, e.g. CAP (CAS, cap.h) and CSIP (CSIS, csis.h)]
Common Audio Profile (CAP) The Common Audio Profile introduces restrictions and requirements on the lower layer profiles. The procedures in CAP work on one or more streams for one or more devices. It is thus possible via CAP to set up multiple streams across multiple devices with a single function call.
The figure below shows a complete structure of the procedures in CAP and how they correspond to procedures from the other profiles. The circles with I, A and C show whether the procedure has active involvement or requirements from the CAP Initiator, CAP Acceptor and CAP Commander roles respectively.
The API reference for CAP can be found in Common Audio Profile.
Stream Control (BAP) Stream control is implemented by the Basic Audio Profile. This profile defines
multiple roles:
• Unicast Client
• Unicast Server
• Broadcast Source
• Broadcast Sink
• Scan Delegator (not yet implemented)
• Broadcast assistant (not yet implemented)
Each role can be enabled individually, and it is possible to support more than one role.
The API reference for stream control can be found in Bluetooth Audio.
Rendering and Capture Control Rendering and capture control is implemented by the Volume Control
Profile (VCP) and Microphone Control Profile (MICP).
The VCP implementation supports the following roles
• Volume Control Service (VCS) Server
• Volume Control Service (VCS) Client
The MICP implementation supports the following roles
• Microphone Control Profile (MICP) Microphone Device (server)
• Microphone Control Profile (MICP) Microphone Controller (client)
The API reference for volume control can be found in Bluetooth Volume Control.
The API reference for Microphone Control can be found in Bluetooth Microphone Control.
Content Control Content control is implemented by the Call Control Profile (CCP) and Media Control
Profile (MCP).
CCP is not yet implemented in Zephyr.
The MCP implementation supports the following roles
• Media Control Service (MCS) Server via the Media Proxy module
• Media Control Client (MCC)
The API reference for media control can be found in Bluetooth Media Control.
Coordinated Sets Coordinated Sets is implemented by the Coordinated Sets Identification Profile
(CSIP).
The CSIP implementation supports the following roles
• Coordinated Set Identification Service (CSIS) Set Member
[Figure: CAP procedures and their mapping to the other profiles — BAP (QoS configuration, enabling ASEs, updating unicast metadata, disabling and releasing ASEs, configuring a broadcast source, SyncInfo transfer), VCP (discovery, relative volume up, mute, unmute), MICP (discovery, mute, unmute), CCP (discovery, originate, terminate and join calls) and MCP (discovery, search); the I and A markers indicate CAP Initiator and CAP Acceptor involvement]
Qualification Listings
The Zephyr BLE stack has obtained qualification listings for both the Host and the Controller. See the tables below for a list of qualification listings.
Host qualifications
Zephyr version Link Qualifying Company
2.2.x QDID 151074 Demant A/S
1.14.x QDID 139258 The Linux Foundation
1.13 QDID 119517 Nordic Semiconductor
Mesh qualifications
Zephyr version Link Qualifying Company
1.14.x QDID 139259 The Linux Foundation
Controller qualifications
ICS Features
The ICS features for each supported protocol & profile can be found in the following documents:
Device Configuration
Parameter Name Selected Description
TSPC_GAP_0_1 False BR/EDR (C.1)
TSPC_GAP_0_2 True LE (C.2)
TSPC_GAP_0_3 False BR/EDR/LE (C.3)
Modes
Parameter Name Selected Description
TSPC_GAP_1_1 False Non-discoverable mode (C.1)
TSPC_GAP_1_2 False Limited-discoverable mode (O)
TSPC_GAP_1_3 False General-discoverable mode (O)
TSPC_GAP_1_4 False Non-connectable mode (O)
TSPC_GAP_1_5 False Connectable mode (M)
TSPC_GAP_1_6 False Non-bondable mode (O)
TSPC_GAP_1_7 False Bondable mode (C.2)
TSPC_GAP_1_8 False Non-Synchronizable Mode (C.3)
TSPC_GAP_1_9 False Synchronizable Mode (C.4)
Security Aspects
Parameter Name Selected Description
TSPC_GAP_2_1 False Authentication procedure (C.1)
TSPC_GAP_2_2 False Support of LMP-Authentication (M)
TSPC_GAP_2_3 False Initiate LMP-Authentication (C.5)
TSPC_GAP_2_4 False Security mode 1 (C.2)
TSPC_GAP_2_5 False Security mode 2 (O)
TSPC_GAP_2_6 False Security mode 3 (C.7)
TSPC_GAP_2_7 False Security mode 4 (M)
TSPC_GAP_2_7a False Security mode 4, level 4 (C.9)
TSPC_GAP_2_7b False Security mode 4, level 3 (C.9)
TSPC_GAP_2_7c False Security mode 4, level 2 (C.9)
TSPC_GAP_2_7d False Security mode 4, level 1 (C.9)
TSPC_GAP_2_8 False Support of Authenticated link key (C.6)
TSPC_GAP_2_9 False Support of Unauthenticated link key (C.6)
TSPC_GAP_2_10 False Security Optional (C.6)
TSPC_GAP_2_11 False Secure Connections Only Mode (C.8)
TSPC_GAP_2_12 False 56-bit minimum encryption key size (C.10)
TSPC_GAP_2_13 False 128-bit encryption key size capable (C.11)
Establishment Procedures
LE Roles
Parameter Name Selected Description
TSPC_GAP_5_1 True Broadcaster (C.1)
TSPC_GAP_5_2 True Observer (C.1)
TSPC_GAP_5_3 True Peripheral (C.1)
TSPC_GAP_5_4 True Central (C.1)
BR/EDR/LE Roles
Parameter Name Selected Description
TSPC_GAP_38_1 False Broadcaster (C.1)
TSPC_GAP_38_2 False Observer (C.1)
TSPC_GAP_38_3 False Peripheral (C.1)
TSPC_GAP_38_4 False Central (C.1)
SDP Interoperability
Roles
Parameter Name Selected Description
TSPC_L2CAP_1_1 False Data Channel Initiator (C.3)
TSPC_L2CAP_1_2 False Data Channel Acceptor (C.1)
TSPC_L2CAP_1_3 True LE Master (C.2)
TSPC_L2CAP_1_4 True LE Slave (C.2)
TSPC_L2CAP_1_5 True LE Data Channel Initiator (C.4)
TSPC_L2CAP_1_6 True LE Data Channel Acceptor (C.5)
General Operation
Parameter Name Selected Description
TSPC_L2CAP_2_1 False Support of L2CAP signalling channel (C.16)
TSPC_L2CAP_2_2 False Support of configuration process (C.16)
TSPC_L2CAP_2_3 False Support of connection oriented data channel (C.16)
TSPC_L2CAP_2_4 False Support of command echo request (C.17)
TSPC_L2CAP_2_5 False Support of command echo response (C.16)
TSPC_L2CAP_2_6 False Support of command information request (C.17)
TSPC_L2CAP_2_7 False Support of command information response (C.16)
TSPC_L2CAP_2_8 False Support of a channel group (C.17)
TSPC_L2CAP_2_9 False Support of packet for connectionless channel (C.17)
TSPC_L2CAP_2_10 False Support retransmission mode (C.17)
TSPC_L2CAP_2_11 False Support flow control mode (C.17)
TSPC_L2CAP_2_12 False Enhanced Retransmission Mode (C.11)
TSPC_L2CAP_2_13 False Streaming Mode (O)
TSPC_L2CAP_2_14 False FCS Option (C.1)
TSPC_L2CAP_2_15 False Generate Local Busy Condition (C.2)
TSPC_L2CAP_2_16 False Send Reject (C.2)
TSPC_L2CAP_2_17 False Send Selective Reject (C.2)
TSPC_L2CAP_2_18 False Mandatory use of ERTM (C.3)
TSPC_L2CAP_2_19 False Mandatory use of Streaming Mode (C.4)
TSPC_L2CAP_2_20 False Optional use of ERTM (C.3)
TSPC_L2CAP_2_21 False Optional use of Streaming Mode (C.4)
TSPC_L2CAP_2_22 False Send data using SAR in ERTM (C.5)
TSPC_L2CAP_2_23 False Send data using SAR in Streaming Mode (C.6)
TSPC_L2CAP_2_24 False Actively request Basic Mode for a PSM that supports the use of ERTM or Streaming Mode
TSPC_L2CAP_2_25 False Supports performing L2CAP channel mode configuration fallback from SM to ERTM
TSPC_L2CAP_2_26 False Supports sending more than one unacknowledged I-Frame when operating in ERTM
TSPC_L2CAP_2_27 False Supports sending more than three unacknowledged I-Frames when operating in ERTM
TSPC_L2CAP_2_28 False Supports configuring the peer TxWindow greater than 1. (C.10)
TSPC_L2CAP_2_29 False AMP Support (C.11)
TSPC_L2CAP_2_30 False Fixed Channel Support (C.11)
TSPC_L2CAP_2_31 False AMP Manager Support (C.11)
TSPC_L2CAP_2_32 False ERTM over AMP (C.11)
TSPC_L2CAP_2_33 False Streaming Mode Source over AMP Support (C.12)
Configurable Parameters
Role
Parameter Name Selected Description
TSPC_SM_1_1 True Central Role (Initiator) (C.1)
TSPC_SM_1_2 True Peripheral Role (Responder) (C.2)
Security Properties
Parameter Name Selected Description
TSPC_SM_2_1 True Authenticated MITM protection (O)
TSPC_SM_2_2 True Unauthenticated no MITM protection (C.1)
TSPC_SM_2_3 True No security requirements (M)
TSPC_SM_2_4 True OOB supported (O)
TSPC_SM_2_5 True LE Secure Connections (O)
Pairing Method
Parameter Name Selected Description
TSPC_SM_4_1 True Just Works (O)
TSPC_SM_4_2 True Passkey Entry (C.1)
TSPC_SM_4_3 True Out of Band (C.1)
Security Initiation
Signing Algorithm
Parameter Name Selected Description
TSPC_SM_6_1 True Signing Algorithm - Generation (O)
TSPC_SM_6_2 True Signing Algorithm - Resolving (O)
Key Distribution
Parameter Name Selected Description
TSPC_SM_7_1 True Encryption Key (C.1)
TSPC_SM_7_2 True Identity Key (C.2)
TSPC_SM_7_3 True Signing Key (C.3)
Protocol Version
Parameter Name Selected Description
TSPC_RFCOMM_0_1 False RFCOMM 1.1 with TS 07.10
TSPC_RFCOMM_0_2 True (*) RFCOMM 1.2 with TS 07.10
Supported Procedures
Roles
Parameter Name Selected Description
TSPC_MESH_2_1 True Node (C.1)
TSPC_MESH_2_2 False Provisioner (C.1)
GAP Requirements
Parameter Name Selected Description
TSPC_MESH_16_1 True Broadcaster (C.1)
TSPC_MESH_16_2 True Observer (C.1)
TSPC_MESH_16_3 True Peripheral (C.2)
TSPC_MESH_16_4 True Peripheral – Security Mode 1 (C.2)
TSPC_MESH_16_5 False Central (C.3)
TSPC_MESH_16_6 False Central – Security Mode 1 (C.3)
Provisioner – Bearers
Parameter Name Selected Description
TSPC_MESH_17_1 False Advertising Bearer (C.1)
TSPC_MESH_17_2 False GATT Bearer (C.1)
Provisioner – Provisioning
GAP Requirements
Parameter Name Selected Description
TSPC_MESH_21_1 False Broadcaster (C.1)
TSPC_MESH_21_2 False Observer (C.1)
TSPC_MESH_21_3 False Central (C.2)
TSPC_MESH_21_4 False Central - Security Mode 1 (C.2)
O - optional
Service Version
Parameter Name Selected Description
TSPC_DIS_0_1 True Device Information Service v1.1 (M)
Transport Requirements
Parameter Name Selected Description
TSPC_DIS_1_1 False Service supported over BR/EDR (C.1)
TSPC_DIS_1_2 True Service supported over LE (C.1)
TSPC_DIS_1_3 False Service supported over HS (C.1)
Service Requirements
This page lists and describes tools that can be used to assist during Bluetooth stack or application development, helping to simplify and speed up the development process.
Mobile applications
It is often useful to make use of existing mobile applications to interact with hardware running Zephyr,
to test functionality without having to write any additional code or requiring extra hardware.
The recommended mobile applications for interacting with Zephyr are:
• Android:
– nRF Connect for Android
– nRF Mesh for Android
– LightBlue for Android
• iOS:
– nRF Connect for iOS
– nRF Mesh for iOS
– LightBlue for iOS
The Linux Bluetooth Protocol Stack, BlueZ, comes with a very useful set of tools that can be used to
debug and interact with Zephyr’s BLE Host and Controller. In order to benefit from these tools you will
need to make sure that you are running a recent version of the Linux Kernel and BlueZ:
• Linux Kernel 4.10+
• BlueZ 5.45+
Additionally, some of the BlueZ tools might not be bundled by default by your Linux distribution. If you
need to build BlueZ from scratch to update to a recent version or to obtain all of its tools you can follow
the steps below:
You can then find btattach, btmgmt and btproxy in the tools/ folder and btmon in the monitor/ folder.
You’ll need to enable BlueZ’s experimental features so you can access its most recent BLE functionality.
Do this by editing the file /lib/systemd/system/bluetooth.service and making sure to include the
-E option in the daemon’s execution start line:
ExecStart=/usr/libexec/bluetooth/bluetoothd -E
It’s possible to run Bluetooth applications using either the QEMU emulator or Native POSIX. In either
case, a Bluetooth controller needs to be exported from the host OS (Linux) to the emulator. For this
purpose you will need some tools described in the Using BlueZ with Zephyr section.
Using the Host System Bluetooth Controller The host OS’s Bluetooth controller is connected in the
following manner:
• To the second QEMU serial line using a UNIX socket. This socket gets used with the help of the
QEMU option -serial unix:/tmp/bt-server-bredr. This option gets passed to QEMU through
QEMU_EXTRA_FLAGS automatically whenever an application has enabled Bluetooth support.
• To a serial port in Native POSIX through the use of a command-line option passed to the Native
POSIX executable: --bt-dev=hci0
On the host side, BlueZ allows you to export its Bluetooth controller through a so-called user channel for
QEMU and Native POSIX to use.
Note: You only need to run btproxy when using QEMU. Native POSIX handles the UNIX socket proxying automatically.
If you are using QEMU, in order to make the Controller available you will need one additional step using
btproxy:
1. Make sure that the Bluetooth controller is down
2. Use the btproxy tool to open the listening UNIX socket, type:
sudo tools/btproxy -u -i 0
Listening on /tmp/bt-server-bredr
You might need to replace -i 0 with the index of the Controller you wish to proxy.
If you see Received unknown host packet type 0x00 when running QEMU, then add -z to the
btproxy command line to ignore any null bytes transmitted at startup.
Once the hardware is connected and ready to use, you can then proceed to building and running a
sample:
• Choose one of the Bluetooth sample applications located in samples/bluetooth.
• To run a Bluetooth application in QEMU, type:
Running QEMU now results in a connection with the second serial line to the bt-server-bredr
UNIX socket, letting the application access the Bluetooth controller.
• To run a Bluetooth application in Native POSIX, first build it:
Using a Zephyr-based BLE Controller Depending on which hardware you have available, you can
choose between two transports when building a single-mode, Zephyr-based BLE Controller:
• UART: Use the hci_uart sample and follow the instructions in bluetooth-hci-uart-qemu-posix.
• USB: Use the hci_usb sample and then treat it as a Host System Bluetooth Controller (see previous
section)
HCI Tracing When running the Host on a computer connected to an external Controller, it is very useful
to be able to see the full log of exchanges between the two, in the format of a Host Controller Interface
log. In order to see those logs, you can use the built-in btmon tool from BlueZ:
$ btmon
If you want to test a Zephyr-powered BLE Controller using BlueZ’s Bluetooth Host, you will need a few
tools described in the Using BlueZ with Zephyr section. Once you have installed the tools you can then
use them to interact with your Zephyr-based controller:
You might need to replace --index 0 with the index of the Controller you wish to manage. Additional
information about btmgmt can be found in its manual pages.
Bluetooth applications are developed using the common infrastructure and approach that is described in
the Application Development section of the documentation.
Additional information that is only relevant to Bluetooth applications can be found in this page.
Thread safety
Calling into the Bluetooth API is intended to be thread safe, unless otherwise noted in the documentation
of the API function. The effort to ensure that this is the case for all API calls is an ongoing one, but the
overall goal is formally stated in this paragraph. Bug reports and Pull Requests that move the subsystem
in the direction of such goal are welcome.
Hardware setup
This section describes the options you have when building and debugging Bluetooth applications with
Zephyr. Depending on the hardware that is available to you, the requirements you have and the type of
development you prefer you may pick one or another setup to match your needs.
There are 4 possible hardware setups to use with Zephyr and Bluetooth:
1. Embedded
2. QEMU with an external Controller
3. Native POSIX with an external Controller
4. Simulated nRF52 with BabbleSim
Embedded This setup relies on all software running directly on the embedded platform(s) that the
application is targeting. All the Configurations and Build Types are supported but you might need to build
Zephyr more than once if you are using a dual-chip configuration or if you have multiple cores in your
SoC each running a different build type (e.g., one running the Host, the other the Controller).
To start developing using this setup follow the Getting Started Guide, choose one (or more if you are using a dual-chip solution) of the boards that support Bluetooth, and then run the application.
Embedded HCI tracing When running both Host and Controller in actual Integrated Circuits, you will
only see normal log messages on the console by default, without any way of accessing the HCI traffic
between the Host and the Controller. However, there is a special Bluetooth logging mode that converts
the console to use a binary protocol that interleaves both normal log messages as well as the HCI traffic.
Set the following Kconfig options to enable this protocol before building your application:
CONFIG_BT_DEBUG_MONITOR_UART=y
CONFIG_UART_CONSOLE=n
This setup relies on a “dual-chip” configuration which is comprised of the following devices:
1. A Host-only application running in the QEMU emulator or the native_posix native port of Zephyr
2. A Controller, which can be one of two types:
• A commercially available Controller
• A Controller-only build of Zephyr
Warning: Certain external Controllers are either unable to accept the Host to Controller flow control
parameters that Zephyr sets by default (Qualcomm), or do not transmit any data from the Controller
to the Host (Realtek). If you see a message similar to:
<wrn> bt_hci_core: opcode 0x0c33 status 0x12
when booting your sample of choice (make sure you have enabled CONFIG_LOG in your prj.conf
before running the sample), or if there is no data flowing from the Controller to the Host, then you
need to disable Host to Controller flow control. To do so, set CONFIG_BT_HCI_ACL_FLOW_CONTROL=n
in your prj.conf.
QEMU You can run the Zephyr Host on the QEMU emulator and have it interact with a physical external
Bluetooth Controller. Refer to Running on QEMU and Native POSIX for full instructions on how to build
and run an application in this setup.
Native POSIX
Note: This is currently only available on GNU/Linux
The Native POSIX target builds your Zephyr application with the Zephyr kernel and some minimal HW emulation as a native Linux executable. This executable is a normal Linux program, which can be debugged and instrumented like any other, and it communicates with a physical external Controller.
Refer to Running on QEMU and Native POSIX for full instructions on how to build and run an application
in this setup.
The nrf52_bsim board is a simulated target board which emulates the necessary peripherals of a nRF52 SoC, so you can develop and test BLE applications. This board uses:
• BabbleSim to simulate the nRF52 modem and the radio environment.
• The POSIX arch to emulate the processor.
• Models of the nRF52 HW.
Just like with the native_posix target, the build result is a normal Linux executable. You can find more information on how to run simulations with one or several devices in this board's documentation.
Currently, only Combined builds are possible, as this board does not yet have any models of a UART or USB which could be used for an HCI interface towards another real or simulated device.
Initialization
The Bluetooth subsystem is initialized using the bt_enable() function. The caller should ensure that the function succeeds by checking the return code for errors. If a function pointer is passed to bt_enable(), the initialization happens asynchronously, and the completion is notified through the given function.
A simple Bluetooth beacon application is shown below. The application initializes the Bluetooth Subsys-
tem and enables non-connectable advertising, effectively acting as a Bluetooth Low Energy broadcaster.
/*
 * Set Advertisement data. Based on the Eddystone specification:
 * https://github.com/google/eddystone/blob/master/protocol-specification.md
 * https://github.com/google/eddystone/tree/master/eddystone-url
 */
static const struct bt_data ad[] = {
	BT_DATA_BYTES(BT_DATA_FLAGS, BT_LE_AD_NO_BREDR),
	BT_DATA_BYTES(BT_DATA_UUID16_ALL, 0xaa, 0xfe),
	BT_DATA_BYTES(BT_DATA_SVC_DATA16,
		      0xaa, 0xfe, /* Eddystone UUID */
		      0x10,       /* Eddystone-URL frame type */
		      0x00,       /* Calibrated Tx power at 0m */
		      0x00,       /* URL Scheme Prefix http://www. */
		      'z', 'e', 'p', 'h', 'y', 'r',
		      'p', 'r', 'o', 'j', 'e', 'c', 't',
		      0x08)       /* .org */
};

…

	if (err) {
		printk("Bluetooth init failed (err %d)\n", err);
		return;
	}

	printk("Bluetooth initialized\n");

	/* Start advertising */
	err = bt_le_adv_start(BT_LE_ADV_NCONN_IDENTITY, ad, ARRAY_SIZE(ad),
			      sd, ARRAY_SIZE(sd));
	if (err) {
		printk("Advertising failed to start (err %d)\n", err);
		return;
	}

…

	bt_id_get(&addr, &count);
	bt_addr_le_to_str(&addr, addr_s, sizeof(addr_s));

…

int main(void)
{
	int err;

…
The key APIs employed by the beacon sample are bt_enable() that’s used to initialize Bluetooth and
then bt_le_adv_start() that’s used to start advertising a specific combination of advertising and scan
response data.
Overview
This tutorial shows how to set up the AutoPTS client and server to run both on Windows 10. We use WSL1 with Ubuntu only to build a Zephyr project to an ELF file, because the Zephyr SDK is not available on Windows yet. The tutorial covers only the nrf52840dk board.
Install Python 3
Download and install Python 3. Setup was tested with versions >=3.8. Let the installer add the Python
installation directory to the PATH and disable the path length limitation.
Install Git
Download and install Git. During installation enable option: Enable experimental support for pseudo
consoles. We will use Git Bash as Windows terminal.
Install PTS 8
Install latest PTS from https://www.bluetooth.org. Remember to install drivers from installation direc-
tory “C:/Program Files (x86)/Bluetooth SIG/Bluetooth PTS/PTS Driver/win64/CSRBlueCoreUSB.inf”
Note: Starting with PTS 8.0.1 the Bluetooth Protocol Viewer is no longer included. So to capture
Bluetooth events, you have to download it separately.
Install nrftools
On Windows download latest nrftools (version >= 10.12.1) from site https://www.nordicsemi.com/
Software-and-tools/Development-Tools/nRF-Command-Line-Tools/Download and run default install.
Connect devices
Flash board
In Device Manager find COM port of your nrf board. In my case it is COM3.
cd ~/zephyrproject
Note that west does not accept COM port names, so use /dev/ttyS2 as the COM3 equivalent, etc. (/dev/ttyS + the COM number decremented by one).
cd auto-pts
Install socat.exe
Running AutoPTS
Server and client by default will run on localhost address. Run server:
Note: If the error “ImportError: No module named pywintypes” appears after a fresh setup, uninstall and reinstall the pywin32 module:
Run client:
At the first run, when Windows asks, enable connection through firewall:
Troubleshooting
• “When running actual hardware test mode, I have only BTP TIMEOUTs.”
This is a problem with the connection between the auto-pts client and the board. There are many possible causes. Try:
• Clean your auto-pts and zephyr repos with
Warning: This command will force the irreversible removal of all uncommitted files in the repo.
• If you have set up Windows on virtual machine, check if guest extensions are installed properly or
change USB compatibility mode in VM settings to USB 2.0.
• Check if the firewall is not blocking python.exe or socat.exe.
• Check if the board sends the ready event after restart (hex 00 00 80 ff 00 00). Open a serial connection to the board with e.g. PuTTY using the proper COM port and baud rate. After a board reset you should see some strings in the console.
• Check if socat.exe creates a tunnel to the board. Run in a console:
where /dev/ttyS2 is the COM3 equivalent. Open PuTTY, set the connection type to Raw, the IP to 127.0.0.1 and the port to 65123. After a board reset you should see some strings in the console.
Overview
This tutorial shows how to set up the AutoPTS client on Linux with the AutoPTS server running on a Windows 10 virtual machine. Tested with Ubuntu 20.04 and Linux Mint 20.4.
You must have a Zephyr development environment set up. See Getting Started Guide for details.
Supported methods to test zephyr bluetooth host:
• Testing Zephyr Host Stack on QEMU
• Testing Zephyr Host Stack on native posix
• Testing Zephyr combined (controller + host) build on Real hardware (such as nRF52)
For running with QEMU or native posix, see Running on QEMU and Native POSIX.
Setup Linux
After you extract the archive, you will see two .deb files, e.g.:
• JLink_Linux_V688a_x86_64.deb
• nRF-Command-Line-Tools_10_12_1_Linux-amd64.deb
and README.md. To install the tools, double click on each .deb file or follow instructions from
README.md.
Choose and install your hypervisor, like VMWare Workstation (preferred) or VirtualBox. There may be issues with VirtualBox if your host has fewer than 6 CPUs.
Create a Windows virtual machine instance. Make sure it has at least 2 cores and guest extensions installed. The setup was tested with VirtualBox 6.1.18 and VMWare Workstation 16.1.1 Pro.
Setup static IP
VMWare Workstation On Linux, open the Virtual Network Editor app and create a network:
If you type ‘ifconfig’ in a terminal, you should be able to find your host IP:
VirtualBox Go to:
File -> Host Network Manager
and create network:
Open the virtual machine network settings. Adapter 1 will be the NAT created by default. Add adapter 2:
Install Python 3 Download and install latest Python 3 on Windows. Let the installer add the Python
installation directory to the PATH and disable the path length limitation.
Install Git Download and install Git. During installation enable option: Enable experimental support
for pseudo consoles. We will use Git Bash as Windows terminal.
Install PTS 8 On Windows virtual machine, install latest PTS from https://www.bluetooth.org. Re-
member to install drivers from installation directory “C:/Program Files (x86)/Bluetooth SIG/Bluetooth
PTS/PTS Driver/win64/CSRBlueCoreUSB.inf”
Note: Starting with PTS 8.0.1 the Bluetooth Protocol Viewer is no longer included. So to capture
Bluetooth events, you have to download it separately.
Connect PTS dongle With VirtualBox there should be no problem. Just find dongle in Devices -> USB
and connect.
With VMWare you might need a trick if you cannot find the dongle in VM -> Removable Devices. Type in a Linux terminal:
usb-devices
Note the Vendor and ProdID numbers. Close VMWare Workstation and open the .vmx file of your virtual machine (path similar to /home/codecoup/vmware/Windows 10/Windows 10.vmx) in a text editor. Add the following line anywhere in the file:
usb.autoConnect.device0 = "0x0a12:0x0001"
just replace 0x0a12 with the Vendor number and 0x0001 with the ProdID number you found earlier.
west flash
Install socat, which is used to transfer the BTP data stream from the UART’s tty file:
cd auto-pts
pip3 install --user wheel
pip3 install --user -r autoptsclient_requirements.txt
Autopts server on Windows virtual machine In Git Bash, clone auto-pts project repo:
cd auto-pts
pip3 install --user wheel
pip3 install --user -r autoptsserver_requirements.txt
Running AutoPTS
Server and client by default will run on localhost address. Run server:
python ./autoptsserver.py
Note: If the error “ImportError: No module named pywintypes” appears after a fresh setup, uninstall and reinstall the pywin32 module:
Run client:
At the first run, when Windows asks, enable connection through firewall:
Troubleshooting
• “After running one test, I need to restart my Windows virtual machine to run another, because of a fail verdict from APICOM in the PTS logs.”
This means your virtual machine does not have enough processor cores or memory. Try to add more in the settings. Note that a host with 4 CPUs may not be enough with VirtualBox as the hypervisor. In that case, prefer VMWare Workstation.
• “I cannot start autoptsserver-zephyr.py. I always get the error:”
API Reference
group bt_att
Attribute Protocol (ATT)
Defines
BT_ATT_ERR_SUCCESS
The ATT operation was successful
BT_ATT_ERR_INVALID_HANDLE
The attribute handle given was not valid on the server
BT_ATT_ERR_READ_NOT_PERMITTED
The attribute cannot be read
BT_ATT_ERR_WRITE_NOT_PERMITTED
The attribute cannot be written
BT_ATT_ERR_INVALID_PDU
The attribute PDU was invalid
BT_ATT_ERR_AUTHENTICATION
The attribute requires authentication before it can be read or written
BT_ATT_ERR_NOT_SUPPORTED
The ATT Server does not support the request received from the client
BT_ATT_ERR_INVALID_OFFSET
Offset specified was past the end of the attribute
BT_ATT_ERR_AUTHORIZATION
The attribute requires authorization before it can be read or written
BT_ATT_ERR_PREPARE_QUEUE_FULL
Too many prepare writes have been queued
BT_ATT_ERR_ATTRIBUTE_NOT_FOUND
No attribute found within the given attribute handle range
BT_ATT_ERR_ATTRIBUTE_NOT_LONG
The attribute cannot be read using the ATT_READ_BLOB_REQ PDU
BT_ATT_ERR_ENCRYPTION_KEY_SIZE
The Encryption Key Size used for encrypting this link is too short
BT_ATT_ERR_INVALID_ATTRIBUTE_LEN
The attribute value length is invalid for the operation
BT_ATT_ERR_UNLIKELY
The attribute request that was requested has encountered an error that was unlikely.
The attribute request could therefore not be completed as requested
BT_ATT_ERR_INSUFFICIENT_ENCRYPTION
The attribute requires encryption before it can be read or written
BT_ATT_ERR_UNSUPPORTED_GROUP_TYPE
The attribute type is not a supported grouping attribute.
The attribute type is not a supported grouping attribute as defined by a higher layer specifica-
tion.
BT_ATT_ERR_INSUFFICIENT_RESOURCES
Insufficient Resources to complete the request
BT_ATT_ERR_DB_OUT_OF_SYNC
The server requests the client to rediscover the database
BT_ATT_ERR_VALUE_NOT_ALLOWED
The attribute parameter value was not allowed
BT_ATT_ERR_WRITE_REQ_REJECTED
Write Request Rejected
BT_ATT_ERR_CCC_IMPROPER_CONF
Client Characteristic Configuration Descriptor Improperly Configured
BT_ATT_ERR_PROCEDURE_IN_PROGRESS
Procedure Already in Progress
BT_ATT_ERR_OUT_OF_RANGE
Out of Range
BT_ATT_MAX_ATTRIBUTE_LEN
BT_ATT_FIRST_ATTRIBUTE_HANDLE
BT_ATT_FIRST_ATTTRIBUTE_HANDLE
BT_ATT_LAST_ATTRIBUTE_HANDLE
BT_ATT_LAST_ATTTRIBUTE_HANDLE
Enums
enum bt_att_chan_opt
ATT channel option bit field values.
Values:
Functions
Bluetooth Audio
API Reference
group bt_audio
Bluetooth Audio.
Defines
BT_AUDIO_BROADCAST_ID_SIZE
BT_AUDIO_BROADCAST_ID_MAX
Maximum broadcast ID value
BT_AUDIO_PD_PREF_NONE
Indicates that the server has no preference for the presentation delay
BT_AUDIO_PD_MAX
Maximum presentation delay in microseconds
BT_AUDIO_BROADCAST_CODE_SIZE
BT_AUDIO_CONTEXT_TYPE_ANY
Any known context.
BT_AUDIO_UNICAST_ANNOUNCEMENT_GENERAL
BT_AUDIO_UNICAST_ANNOUNCEMENT_TARGETED
BT_CODEC_DATA(_type, _bytes...)
Helper to declare elements of bt_codec_data arrays.
This macro is mainly for creating an array of struct bt_codec_data elements inside bt_codec
which is then passed to the likes of bt_bap_stream_config or bt_bap_stream_reconfig.
Parameters
• _type – Type of advertising data field
• _bytes – Variable number of single-byte parameters
BT_CODEC(_id, _cid, _vid, _data, _meta)
Helper to declare bt_codec structure.
Parameters
• _id – Codec ID
• _cid – Company ID
• _vid – Vendor ID
• _data – Codec Specific Data in LVT format
• _meta – Codec Specific Metadata in LVT format
BT_AUDIO_LOCATION_ANY
Any known location.
BT_CODEC_QOS(_interval, _framing, _phy, _sdu, _rtn, _latency, _pd)
Helper to declare elements of bt_codec_qos.
Parameters
• _interval – SDU interval (usec)
• _framing – Framing
• _phy – Target PHY
• _sdu – Maximum SDU Size
• _rtn – Retransmission number
• _latency – Maximum Transport Latency (msec)
• _pd – Presentation Delay (usec)
Enums
enum bt_audio_context
Audio Context Type for Generic Audio.
These values are defined by the Generic Audio Assigned Numbers, bluetooth.com
Values:
enumerator BT_AUDIO_CONTEXT_TYPE_PROHIBITED = 0
enum bt_audio_parental_rating
Parental rating defined by the Generic Audio assigned numbers (bluetooth.com).
The numbering scheme is aligned with Annex F of EN 300 707 v1.2.1, which defines parental rating for viewing.
Values:
enum bt_audio_active_state
Audio Active State defined by the Generic Audio assigned numbers (bluetooth.com).
Values:
enum bt_audio_metadata_type
Codec metadata type IDs.
Metadata types defined by the Generic Audio assigned numbers (bluetooth.com).
Values:
If 0, the context type is not a preferred use case for this codec configuration.
enum bt_audio_location
Location values for BT Audio.
These values are defined by the Generic Audio Assigned Numbers, bluetooth.com
Values:
enumerator BT_AUDIO_LOCATION_PROHIBITED = 0
enum bt_audio_dir
Audio Capability type.
Values:
enum [anonymous]
Codec QoS Framing.
Values:
enum [anonymous]
Codec QoS Preferred PHY.
Values:
Functions
struct bt_codec_data
#include <audio.h> Codec configuration structure.
struct bt_codec
#include <audio.h> Codec structure.
Public Members
uint8_t path_id
Data path ID
BT_ISO_DATA_PATH_HCI for HCI path, or any other value for vendor specific ID.
uint8_t id
Codec ID
uint16_t cid
Codec Company ID
uint16_t vid
Codec Company Vendor ID
struct bt_codec_qos
#include <audio.h> Codec QoS structure.
Public Members
uint8_t phy
QoS PHY
uint8_t framing
QoS Framing
uint8_t rtn
QoS Retransmission Number
uint16_t sdu
QoS SDU
uint32_t interval
QoS Frame Interval
uint32_t pd
QoS Presentation Delay in microseconds.
Value range 0 to BT_AUDIO_PD_MAX.
struct bt_codec_qos_pref
#include <audio.h> Audio Stream Quality of Service Preference structure.
Public Members
bool unframed_supported
Unframed PDUs supported.
Unlike the other fields, this is not a preference but whether the codec supports unframed
ISOAL PDUs.
uint8_t phy
Preferred PHY
uint8_t rtn
Preferred Retransmission Number
uint16_t latency
Preferred Transport Latency
uint32_t pd_min
Minimum Presentation Delay in microseconds.
Unlike the other fields, this is not a preference but a minimum requirement.
Value range 0 to BT_AUDIO_PD_MAX, or BT_AUDIO_PD_PREF_NONE to indicate no preference.
uint32_t pd_max
Maximum Presentation Delay.
Unlike the other fields, this is not a preference but a maximum requirement.
Value range 0 to BT_AUDIO_PD_MAX, or BT_AUDIO_PD_PREF_NONE to indicate no preference.
uint32_t pref_pd_min
Preferred minimum Presentation Delay.
Value range 0 to BT_AUDIO_PD_MAX.
uint32_t pref_pd_max
Preferred maximum Presentation Delay.
Value range 0 to BT_AUDIO_PD_MAX.
group bt_audio_codec_cfg
Audio codec Config APIs.
Functions to parse codec config data when formatted as LTV wrapped into bt_codec.
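The LTV (length-type-value) layout that these parsing helpers operate on can be sketched in plain C. The walker below is illustrative only (ltv_find is not a Zephyr API); it assumes each element starts with a length octet counting the type octet plus the value octets:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Minimal LTV walker: each element is [len][type][value...], where len
 * counts the type octet plus the value octets. Returns a pointer to the
 * value of the first element matching `type` (and sets *value_len), or
 * NULL if not found or malformed. Illustrative sketch, not Zephyr code. */
static const uint8_t *ltv_find(const uint8_t *buf, size_t buf_len,
                               uint8_t type, uint8_t *value_len)
{
    size_t i = 0;

    while (i + 1 < buf_len) {
        uint8_t len = buf[i];

        if (len == 0 || i + 1 + len > buf_len) {
            return NULL; /* malformed element */
        }
        if (buf[i + 1] == type) {
            *value_len = len - 1;
            return &buf[i + 2];
        }
        i += 1 + len; /* skip the length octet plus the element */
    }
    return NULL;
}
```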
Enums
enum bt_audio_codec_parse_err
Codec parser error codes for Codec config parsing APIs.
Values:
enumerator BT_AUDIO_CODEC_PARSE_ERR_SUCCESS = 0
The requested type is not present in the data set.
enumerator BT_AUDIO_CODEC_PARSE_ERR_TYPE_NOT_FOUND = -1
The requested type is not present in the data set.
enumerator BT_AUDIO_CODEC_PARSE_ERR_INVALID_VALUE_FOUND = -2
The value found is invalid.
enumerator BT_AUDIO_CODEC_PARSE_ERR_INVALID_PARAM = -3
The parameters specified to the function call are not valid.
Functions
The Bluetooth specifications are not clear about this value: they do not state that the codec shall use this SDU size only. A codec like LC3 supports variable bit rate (per SDU), so an encoder may be allowed to reduce the frame size below this value. It is therefore recommended to use the received SDU size divided by blocks_per_sdu rather than relying on this octets_per_sdu value being fixed.
Parameters
• codec – The codec configuration to extract data from.
Returns
Frame length in octets if value is found else a negative value of type
bt_audio_codec_parse_err.
int bt_codec_cfg_get_frame_blocks_per_sdu(const struct bt_codec *codec, bool
fallback_to_default)
Extract the number of audio frame blocks in each SDU from the BT codec config.
The overall SDU size will be octets_per_frame * frame_blocks_per_sdu * number-of-channels.
If this value is not present a default value of 1 shall be used.
A frame block is one or more frames that represent data for the same period of time but for different channels. If the stream has two audio channels and this value is two, there will be four frames in the SDU.
Parameters
• codec – The codec configuration to extract data from.
• fallback_to_default – If true this function will return the default value of 1
if the type is not found. In this case the function will only fail if a NULL pointer
is provided.
Returns
The count of codec frames in each SDU if value is found else a negative value of
type bt_audio_codec_parse_err - unless when fallback_to_default is true then
the value 1 is returned if frames per sdu is not found.
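The SDU sizing rule quoted above (octets_per_frame * frame_blocks_per_sdu * number-of-channels) can be spelled out as a small sketch; sdu_size is an illustrative helper, not a Zephyr function:

```c
#include <stdint.h>
#include <assert.h>

/* Overall SDU size per the rule above:
 * octets_per_frame * frame_blocks_per_sdu * number_of_channels.
 * Illustrative helper only, not part of the Zephyr API. */
static uint32_t sdu_size(uint32_t octets_per_frame,
                         uint32_t frame_blocks_per_sdu,
                         uint32_t num_channels)
{
    return octets_per_frame * frame_blocks_per_sdu * num_channels;
}
```

For example, with two audio channels and two frame blocks there are four frames per SDU, so a 40-octet frame yields a 160-octet SDU.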
bool bt_codec_get_val(const struct bt_codec *codec, uint8_t type, const struct bt_codec_data
**data)
Lookup a specific value based on type.
Depending on context, bt_codec will be either codec capabilities, codec configuration, or metadata.
Typical types used are bt_codec_capability_type, bt_codec_config_type, and bt_audio_metadata_type.
Parameters
• codec – The codec data to search in.
• type – The type id to look for
• data – Pointer to the data-pointer to update when item is found
Returns
True if the type is found, false otherwise.
API Reference
group bt_bap
Bluetooth Basic Audio Profile (BAP)
Defines
BT_BAP_SCAN_DELEGATOR_MAX_METADATA_LEN
BT_BAP_SCAN_DELEGATOR_MAX_SUBGROUPS
BT_BAP_BASE_MIN_SIZE
The minimum size of a Broadcast Audio Source Endpoint (BASE): 2 octets UUID, 3 octets presentation delay, 1 octet number of subgroups (minimum 1), 1 octet number of BIS (minimum 1), 5 octets codec_id, 1 octet codec configuration length (may be 0), 1 octet metadata length (may be 0), 1 octet BIS index, 1 octet BIS-specific codec configuration length (may be 0)
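As a sanity check, the mandatory fields listed above sum to 16 octets, which should match the value of this macro (assuming the listing is complete). Sketched as a plain C constant:

```c
#include <assert.h>

/* Sum of the mandatory BASE fields listed above:
 * UUID (2) + presentation delay (3) + num subgroups (1) + num BIS (1)
 * + codec_id (5) + codec cfg len (1) + metadata len (1) + BIS index (1)
 * + BIS-specific codec cfg len (1). Illustrative constant only. */
enum { BASE_MIN_SIZE_SKETCH = 2 + 3 + 1 + 1 + 5 + 1 + 1 + 1 + 1 };
```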
BT_BAP_BASE_BIS_DATA_MIN_SIZE
The minimum size of a bt_bap_base_bis_data
BT_BAP_PA_INTERVAL_UNKNOWN
Value indicating that the periodic advertising interval is unknown
BT_BAP_BIS_SYNC_NO_PREF
Broadcast Assistant no BIS sync preference.
Value indicating that the Broadcast Assistant has no preference as to which BIS the Scan Delegator syncs to
BROADCAST_SNK_STREAM_CNT
BROADCAST_SNK_SUBGROUP_CNT
BT_BAP_ASCS_RSP(c, r)
Macro used to initialise the object storing values of ASE Control Point notification.
Parameters
• c – Response Code field
• r – Reason field - bt_bap_ascs_reason or bt_audio_metadata_type (see notes in
bt_bap_ascs_rsp).
Typedefs
Enums
enum bt_bap_pa_state
Periodic advertising state reported by the Scan Delegator
Values:
enum bt_bap_big_enc_state
Broadcast Isochronous Group encryption state reported by the Scan Delegator
Values:
enum bt_bap_bass_att_err
Broadcast Audio Scan Service (BASS) specific ATT error codes
Values:
enum bt_bap_ep_state
Endpoint states
Values:
enum bt_bap_ascs_rsp_code
Response Status Code.
These are sent by the server to the client when a stream operation is requested.
Values:
enum bt_bap_ascs_reason
Response Reasons.
These are used if the bt_bap_ascs_rsp_code value is BT_BAP_ASCS_RSP_CODE_CONF_UNSUPPORTED,
BT_BAP_ASCS_RSP_CODE_CONF_REJECTED or BT_BAP_ASCS_RSP_CODE_CONF_INVALID.
Values:
enum bt_bap_scan_delegator_iter
Values:
enumerator BT_BAP_SCAN_DELEGATOR_ITER_STOP = 0
enumerator BT_BAP_SCAN_DELEGATOR_ITER_CONTINUE
Functions
Returns
0 in case of success or negative value in case of error.
int bt_bap_stream_metadata(struct bt_bap_stream *stream, struct bt_codec_data *meta, size_t
meta_count)
Change Audio Stream Metadata.
This procedure is used by a unicast client or unicast server to change the metadata of a stream.
Parameters
• stream – Stream object
• meta – Metadata entries
• meta_count – Number of metadata entries
Returns
0 in case of success or negative value in case of error.
int bt_bap_stream_disable(struct bt_bap_stream *stream)
Disable Audio Stream.
This procedure is used by a unicast client or unicast server to disable a stream.
This shall only be called for unicast streams, as broadcast streams will always be enabled once
created.
Parameters
• stream – Stream object
Returns
0 in case of success or negative value in case of error.
int bt_bap_stream_start(struct bt_bap_stream *stream)
Start Audio Stream.
This procedure is used by a unicast client or unicast server to make a stream start streaming.
For the unicast client, this will connect the CIS for the stream before sending the start command.
For the unicast server, this will put a BT_AUDIO_DIR_SINK stream into the streaming state if the CIS is connected (initialized by the unicast client). If the CIS is not connected yet, the stream will go into the streaming state as soon as the CIS is connected. BT_AUDIO_DIR_SOURCE streams will go into the streaming state when the unicast client sends the Receiver Start Ready operation, which will trigger the bt_bap_unicast_server_cb::start() callback.
This shall only be called for unicast streams.
Broadcast sinks will always be started once synchronized, and broadcast source streams shall
be started with bt_bap_broadcast_source_start().
Parameters
• stream – Stream object
Returns
0 in case of success or negative value in case of error.
int bt_bap_stream_stop(struct bt_bap_stream *stream)
Stop Audio Stream.
This procedure is used by a client to make a stream stop streaming.
This shall only be called for unicast streams. Broadcast sinks cannot be stopped. Broadcast
sources shall be stopped with bt_bap_broadcast_source_stop().
Parameters
• stream – Stream object
Returns
0 in case of success or negative value in case of error.
int bt_bap_stream_release(struct bt_bap_stream *stream)
Release Audio Stream.
This procedure is used by a unicast client or unicast server to release a unicast stream.
Broadcast sink streams cannot be released, but can be deleted by
bt_bap_broadcast_sink_delete(). Broadcast source streams cannot be released, but can
be deleted by bt_bap_broadcast_source_delete().
Parameters
• stream – Stream object
Returns
0 in case of success or negative value in case of error.
int bt_bap_stream_send(struct bt_bap_stream *stream, struct net_buf *buf, uint16_t seq_num,
uint32_t ts)
Send data to Audio stream.
Send data from buffer to the stream.
Note: Data will not be sent to linked streams, since linking is only considered for procedures affecting the state machine.
Parameters
• stream – Stream object.
• buf – Buffer containing data to be sent.
• seq_num – Packet Sequence number. This value shall be incremented for each
call to this function and at least once per SDU interval for a specific channel.
• ts – Timestamp of the SDU in microseconds (us). This value can be used
to transmit multiple SDUs in the same SDU interval in a CIG or BIG. Can be
omitted by using BT_ISO_TIMESTAMP_NONE which will simply enqueue the
ISO SDU in a FIFO manner.
Returns
Bytes sent in case of success or negative value in case of error.
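Because the packet sequence number shall be incremented for each call, senders typically keep a per-stream counter. A minimal sketch of that bookkeeping (the struct and helper are illustrative, not part of the Zephyr API):

```c
#include <stdint.h>
#include <assert.h>

/* Per-stream TX bookkeeping: seq_num is incremented once per sent SDU,
 * as bt_bap_stream_send() requires. Illustrative sketch only. */
struct tx_ctx {
    uint16_t seq_num;
};

static uint16_t tx_next_seq(struct tx_ctx *ctx)
{
    return ctx->seq_num++; /* wraps naturally after 65535 */
}
```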
struct bt_bap_ascs_rsp
#include <bap.h> Structure storing values of fields of ASE Control Point notification.
Public Members
struct bt_bap_scan_delegator_subgroup
#include <bap.h> Struct to hold subgroup specific information for the receive state
Public Members
uint32_t bis_sync
BIS synced bitfield
uint8_t metadata_len
Length of the metadata
uint8_t metadata[0]
The metadata
struct bt_bap_scan_delegator_recv_state
#include <bap.h> Represents the Broadcast Audio Scan Service receive state
Public Members
uint8_t src_id
The source ID
bt_addr_le_t addr
The Bluetooth address
uint8_t adv_sid
The advertising set ID
uint32_t broadcast_id
The 24-bit broadcast ID
uint8_t bad_code[BT_AUDIO_BROADCAST_CODE_SIZE]
The bad broadcast code.
Only valid if encrypt_state is BT_BAP_BIG_ENC_STATE_BCODE_REQ
uint8_t num_subgroups
Number of subgroups
struct bt_bap_scan_delegator_cb
#include <bap.h>
Public Members
Param recv_state
Pointer to the receive state that was updated.
Return
0 in case of success or negative value in case of error.
Param conn
[in] Pointer to the connection of the Broadcast Assistant requesting the sync.
Param recv_state
[in] Pointer to the receive state that is being requested for the sync.
Param bis_sync_req
[in] Array of bitfields of which BIS indexes that is requested to sync for each
subgroup by the Broadcast Assistant.
Return
0 in case of accept, or other value to reject.
struct bt_bap_ep_info
#include <bap.h> Structure holding information of audio stream endpoint
Public Members
uint8_t id
The ID of the endpoint
struct bt_bap_stream
#include <bap.h> Basic Audio Profile stream structure.
A stream represents a stream configuration of a Remote Endpoint and a Local Capability.
Note: Streams are unidirectional but can be paired with other streams to use a bidirectional
connected isochronous stream.
Public Members
void *group
Unicast or Broadcast group - Used internally
void *user_data
Stream user data
struct bt_bap_stream_ops
#include <bap.h> Stream operation.
Public Members
struct bt_bap_scan_delegator_add_src_param
#include <bap.h>
Public Members
uint32_t broadcast_id
The 24-bit broadcast ID
uint8_t num_subgroups
Number of subgroups
struct bt_bap_scan_delegator_mod_src_param
#include <bap.h>
Public Members
uint8_t src_id
Source ID of the receive state
uint32_t broadcast_id
The 24-bit broadcast ID
uint8_t num_subgroups
Number of subgroups
struct bt_bap_broadcast_assistant_cb
#include <bap.h>
Public Members
Param broadcast_id
24-bit broadcast ID.
Param conn
The connection to the peer device.
Param err
Error value. 0 on success, GATT error on fail.
struct bt_bap_broadcast_assistant_add_src_param
#include <bap.h> Parameters for adding a source to a Broadcast Audio Scan Service server
Public Members
bt_addr_le_t addr
Address of the advertiser.
uint8_t adv_sid
SID of the advertising set.
bool pa_sync
Whether to sync to periodic advertisements.
uint32_t broadcast_id
24-bit broadcast ID
uint16_t pa_interval
Periodic advertising interval in milliseconds.
BT_BAP_PA_INTERVAL_UNKNOWN if unknown.
uint8_t num_subgroups
Number of subgroups
struct bt_bap_broadcast_assistant_mod_src_param
#include <bap.h> Parameters for modifying a source
Public Members
uint8_t src_id
Source ID of the receive state.
bool pa_sync
Whether to sync to periodic advertisements.
uint16_t pa_interval
Periodic advertising interval.
BT_BAP_PA_INTERVAL_UNKNOWN if unknown.
uint8_t num_subgroups
Number of subgroups
group bt_bap_unicast_client
Functions
struct bt_bap_unicast_group_stream_param
#include <bap.h> Parameter struct for each stream in the unicast group
Public Members
struct bt_bap_unicast_group_stream_pair_param
#include <bap.h> Parameter struct for the unicast group functions.
Parameter struct for the bt_bap_unicast_group_create() and
bt_bap_unicast_group_add_streams() functions.
Public Members
struct bt_bap_unicast_group_param
#include <bap.h>
Public Members
size_t params_count
The number of parameters in params
uint8_t packing
Unicast Group packing mode.
BT_ISO_PACKING_SEQUENTIAL or BT_ISO_PACKING_INTERLEAVED.
Note: This is a recommendation to the controller, which the controller may ignore.
struct bt_bap_unicast_client_cb
#include <bap.h> Unicast Client callback structure
Public Members
void (*location)(struct bt_conn *conn, enum bt_audio_dir dir, enum bt_audio_location loc)
Remote Unicast Server Audio Locations.
This callback is called whenever the audio locations are read from the server or otherwise notified to the client.
Param conn
Connection to the remote unicast server.
Param dir
Direction of the location.
Param loc
The location bitfield value.
Return
0 in case of success or negative value in case of error.
Param src_ctx
The source context bitfield value.
Return
0 in case of success or negative value in case of error.
void (*pac_record)(struct bt_conn *conn, enum bt_audio_dir dir, const struct bt_codec
*codec)
Remote Published Audio Capability (PAC) record discovered.
Called when a PAC record has been discovered as part of the discovery procedure.
The codec is only valid while in the callback, so the values must be stored by the receiver
if future use is wanted.
If the discovery procedure has completed, both codec and ep are set to NULL.
Param conn
Connection to the remote unicast server.
Param dir
The type of remote endpoints and capabilities discovered.
Param codec
Remote capabilities.
void (*endpoint)(struct bt_conn *conn, enum bt_audio_dir dir, struct bt_bap_ep *ep)
Remote Audio Stream Endpoint (ASE) discovered.
Called when an ASE has been discovered as part of the discovery procedure.
If the discovery procedure has completed, both codec and ep are set to NULL.
Param conn
Connection to the remote unicast server.
Param dir
The type of remote endpoints and capabilities discovered.
Param ep
Remote endpoint.
If the discovery procedure has completed, both codec and ep are set to NULL.
Param conn
Connection to the remote unicast server.
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param dir
The type of remote endpoints and capabilities discovered.
group bt_bap_unicast_server
Typedefs
Functions
struct bt_bap_unicast_server_cb
#include <bap.h> Unicast Server callback structure
Public Members
int (*config)(struct bt_conn *conn, const struct bt_bap_ep *ep, enum bt_audio_dir dir, const
struct bt_codec *codec, struct bt_bap_stream **stream, struct bt_codec_qos_pref *const pref,
struct bt_bap_ascs_rsp *rsp)
Endpoint config request callback.
Config callback is called whenever an endpoint is requested to be configured
Param conn
[in] Connection object.
Param ep
[in] Local Audio Endpoint being configured.
Param dir
[in] Direction of the endpoint.
Param codec
[in] Codec configuration.
Param stream
[out] Pointer to stream that will be configured for the endpoint.
Param pref
[out] Pointer to a QoS preference object that shall be populated with values.
Invalid values will reject the codec configuration request.
Param rsp
[out] Object for the ASE operation response. Only used if the return value is
non-zero.
Return
0 in case of success or negative value in case of error.
int (*reconfig)(struct bt_bap_stream *stream, enum bt_audio_dir dir, const struct bt_codec
*codec, struct bt_codec_qos_pref *const pref, struct bt_bap_ascs_rsp *rsp)
Stream reconfig request callback.
Reconfig callback is called whenever an Audio Stream needs to be reconfigured with
different codec configuration.
Param stream
[in] Stream object being reconfigured.
Param dir
[in] Direction of the endpoint.
Param codec
[in] Codec configuration.
Param pref
[out] Pointer to a QoS preference object that shall be populated with values.
Invalid values will reject the codec configuration request.
Param rsp
[out] Object for the ASE operation response. Only used if the return value is
non-zero.
Return
0 in case of success or negative value in case of error.
non-zero.
Return
0 in case of success or negative value in case of error.
Param rsp
[out] Object for the ASE operation response. Only used if the return value is
non-zero.
Return
0 in case of success or negative value in case of error.
group bt_bap_broadcast
BAP Broadcast APIs.
Functions
struct bt_bap_base_bis_data
#include <bap.h>
struct bt_bap_base_subgroup
#include <bap.h>
Public Members
struct bt_bap_base
#include <bap.h>
Public Members
uint32_t pd
QoS Presentation Delay in microseconds.
Value range 0 to BT_AUDIO_PD_MAX.
group bt_bap_broadcast_sink
BAP Broadcast Sink APIs.
Functions
struct bt_bap_broadcast_sink_cb
#include <bap.h> Broadcast Audio Sink callback structure
Public Members
bt_bap_broadcast_sink_scan_stop().
Typical reasons for this are that the periodic advertising has synchronized (success criteria) or the scan timed out. It may also be called if the periodic advertising failed to synchronize.
Param err
0 in case of success or negative value in case of error.
group bt_bap_broadcast_source
BAP Broadcast Source APIs.
Functions
See table 3.14 in the Basic Audio Profile v1.0.1 for the structure.
Parameters
• source – [in] Pointer to the broadcast source.
• broadcast_id – [out] Pointer to the 3-octet broadcast ID.
Returns
Zero on success or (negative) error code otherwise.
int bt_bap_broadcast_source_get_base(struct bt_bap_broadcast_source *source, struct
net_buf_simple *base_buf)
Get the Broadcast Audio Stream Endpoint of a broadcast source.
This will encode the BASE of a broadcast source into a buffer that can be used for advertisement. The encoded BASE will thus be little-endian. The BASE shall be put into the periodic advertising data (see bt_le_per_adv_set_data()).
See table 3.15 in the Basic Audio Profile v1.0.1 for the structure.
Parameters
• source – Pointer to the broadcast source.
• base_buf – Pointer to a buffer where the BASE will be inserted.
Returns
Zero on success or (negative) error code otherwise.
struct bt_bap_broadcast_source_stream_param
#include <bap.h> Broadcast Source stream parameters
Public Members
struct bt_bap_broadcast_source_subgroup_param
#include <bap.h> Broadcast Source subgroup parameters
Public Members
size_t params_count
The number of parameters in stream_params
struct bt_bap_broadcast_source_create_param
#include <bap.h> Broadcast Source create parameters
Public Members
size_t params_count
The number of parameters in subgroup_params
uint8_t packing
Broadcast Source packing mode.
BT_ISO_PACKING_SEQUENTIAL or BT_ISO_PACKING_INTERLEAVED.
Note: This is a recommendation to the controller, which the controller may ignore.
bool encryption
Whether or not to encrypt the streams.
uint8_t broadcast_code[BT_AUDIO_BROADCAST_CODE_SIZE]
Broadcast code.
If the value is a string or the value is less than 16 octets, the remaining octets shall be 0.
Example: The string “Broadcast Code” shall be [42 72 6F 61 64 63 61 73 74 20 43 6F 64
65 00 00]
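The zero-padding rule above can be sketched with a small helper (set_broadcast_code and the size constant are illustrative, not Zephyr APIs):

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Stands in for BT_AUDIO_BROADCAST_CODE_SIZE in this sketch. */
#define BROADCAST_CODE_SIZE_SKETCH 16

/* Copy a string into the 16-octet broadcast code field, zero-padding
 * the remaining octets as required above. Illustrative helper only. */
static void set_broadcast_code(uint8_t code[BROADCAST_CODE_SIZE_SKETCH],
                               const char *str)
{
    size_t len = strlen(str);

    if (len > BROADCAST_CODE_SIZE_SKETCH) {
        len = BROADCAST_CODE_SIZE_SKETCH;
    }
    memset(code, 0, BROADCAST_CODE_SIZE_SKETCH);
    memcpy(code, str, len);
}
```

Applied to the string "Broadcast Code" (14 octets), this reproduces the padded example above, with the last two octets set to 0x00.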
API Reference
group bt_cap
Common Audio Profile (CAP)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Enums
enum bt_cap_set_type
Type of CAP set
Values:
enumerator BT_CAP_SET_TYPE_AD_HOC
The set is an ad-hoc set
enumerator BT_CAP_SET_TYPE_CSIP
The set is a CSIP Coordinated Set
Functions
Parameters
• param – [in] Parameters to start the audio streams.
• unicast_group – [out] Pointer to the unicast group.
Returns
0 on success or negative error value on failure.
Parameters
• params – Array of update parameters.
• count – The number of entries in params.
Returns
0 on success or negative error value on failure.
Parameters
• unicast_group – The group of unicast devices to stop. The audio streams in
this will be stopped and reset, and the unicast_group will be invalidated.
Returns
0 on success or negative error value on failure.
int bt_cap_initiator_unicast_audio_cancel(void)
Cancel any current Common Audio Profile procedure.
This will stop the current procedure from continuing and make it possible to run a new Common Audio Profile procedure.
It is recommended to do this if any existing procedure takes longer than expected, which could indicate a missing response from the Common Audio Profile Acceptor.
This does not send any requests to any Common Audio Profile Acceptors involved with the
current procedure, and thus notifications from the Common Audio Profile Acceptors may
arrive after this has been called. It is thus recommended to either only use this if a procedure
has stalled, or wait a short while before starting any new Common Audio Profile procedure
after this has been called to avoid getting notifications from the cancelled procedure. The wait
time depends on the connection interval, the number of devices in the previous procedure and
the behavior of the Common Audio Profile Acceptors.
The respective callbacks of the procedure will be called as part of this with the connection
pointer set to 0 and the err value set to -ECANCELED.
Return values
• 0 – on success
• -EALREADY – if no procedure is active
int bt_cap_initiator_broadcast_audio_create(const struct
bt_cap_initiator_broadcast_create_param
*param, struct bt_cap_broadcast_source
**broadcast_source)
Create a Common Audio Profile broadcast source.
Create a new audio broadcast source with one or more audio streams.
Parameters
• param – [in] Parameters to start the audio streams.
• broadcast_source – [out] Pointer to the broadcast source created.
Returns
0 on success or negative error value on failure.
Parameters
• broadcast_source – Pointer to the broadcast source.
• adv – Pointer to an extended advertising set with periodic advertising configured.
Returns
0 on success or negative error value on failure.
Update broadcast audio streams for a Common Audio Profile broadcast source.
Parameters
• broadcast_source – The broadcast source to update.
• meta_count – The number of entries in meta.
• meta – The new metadata. The metadata shall contain a list of CCIDs as well
as a non-0 context bitfield.
Returns
0 on success or negative error value on failure.
Parameters
• broadcast_source – The broadcast source to stop. The audio streams in this
will be stopped and reset.
Returns
0 on success or negative error value on failure.
See table 3.15 in the Basic Audio Profile v1.0.1 for the structure.
Parameters
• broadcast_source – Pointer to the broadcast source.
• base_buf – Pointer to a buffer where the BASE will be inserted.
Returns
0 on success, errno on error.
int bt_cap_initiator_unicast_to_broadcast(const struct bt_cap_unicast_to_broadcast_param
*param, struct bt_cap_broadcast_source
**source)
Hands over the data streams in a unicast group to a broadcast source.
The streams in the unicast group will be stopped and the unicast group will be deleted. This
can only be done for source streams.
Parameters
• param – The parameters for the handover.
• source – The resulting broadcast source.
Returns
0 on success or negative error value on failure.
Parameters
• param – [in] The parameters for the handover.
• unicast_group – [out] The resulting unicast group.
Returns
0 on success or negative error value on failure.
struct bt_cap_initiator_cb
#include <cap.h> Callback structure for CAP procedures
union bt_cap_set_member
#include <cap.h> Represents a Common Audio Set member that is either in a Coordinated Set or an ad-hoc set
Public Members
struct bt_cap_stream
#include <cap.h>
struct bt_cap_unicast_audio_start_stream_param
#include <cap.h>
Public Members
struct bt_cap_unicast_audio_start_param
#include <cap.h>
Public Members
size_t count
The number of parameters in stream_params
struct bt_cap_unicast_audio_update_param
#include <cap.h>
Public Members
size_t meta_count
The number of entries in meta.
struct bt_cap_initiator_broadcast_stream_param
#include <cap.h>
Public Members
size_t data_count
The number of elements in the p data array.
The BIS specific data may be omitted and this set to 0.
struct bt_cap_initiator_broadcast_subgroup_param
#include <cap.h>
Public Members
size_t stream_count
The number of parameters in stream_params
struct bt_cap_initiator_broadcast_create_param
#include <cap.h>
Public Members
size_t subgroup_count
The number of parameters in subgroup_params
uint8_t packing
Broadcast Source packing mode.
BT_ISO_PACKING_SEQUENTIAL or BT_ISO_PACKING_INTERLEAVED.
Note: This is a recommendation to the controller, which the controller may ignore.
bool encryption
Whether or not to encrypt the streams.
uint8_t broadcast_code[BT_AUDIO_BROADCAST_CODE_SIZE]
16-octet broadcast code.
Only valid if encrypt is true.
If the value is a string or the value is less than 16 octets, the remaining octets shall be 0.
Example: The string “Broadcast Code” shall be [42 72 6F 61 64 63 61 73 74 20 43 6F 64
65 00 00]
struct bt_cap_unicast_to_broadcast_param
#include <cap.h>
Public Members
bool encrypt
Whether or not to encrypt the streams.
If set to true, then the broadcast code in broadcast_code will be used to encrypt the
streams.
uint8_t broadcast_code[BT_ISO_BROADCAST_CODE_SIZE]
16-octet broadcast code.
Only valid if encrypt is true.
If the value is a string or the value is less than 16 octets, the remaining octets shall be 0.
struct bt_cap_broadcast_to_unicast_param
#include <cap.h>
Public Members
size_t count
The number of set members in members.
This value shall match the number of streams in the broadcast_source.
Connection Management
The Zephyr Bluetooth stack uses an abstraction called bt_conn to represent connections to other devices.
The internals of this struct are not exposed to the application, but a limited amount of information (such
as the remote address) can be acquired using the bt_conn_get_info() API. Connection objects are
reference counted, and the application is expected to use the bt_conn_ref() API whenever storing a
connection pointer for a longer period of time, since this ensures that the object remains valid (even if the
connection would get disconnected). Similarly the bt_conn_unref() API is to be used when releasing a
reference to a connection.
An application may track connections by registering a bt_conn_cb struct using the
bt_conn_cb_register() or BT_CONN_CB_DEFINE APIs. This struct lets the application define
callbacks for connection & disconnection events, as well as other events related to a connection such as
a change in the security level or the connection parameters. When acting as a central the application
will also get hold of the connection object through the return value of the bt_conn_le_create() API.
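The reference-counting contract described above can be illustrated generically; bt_conn_ref() likewise returns the object so it can be used inline when storing a pointer (the obj type and helpers below are stand-ins, not Zephyr code):

```c
#include <assert.h>

/* Generic ref-count sketch mirroring the bt_conn_ref()/bt_conn_unref()
 * contract: the object remains valid while at least one reference is
 * held, and every stored pointer must hold its own reference. */
struct obj {
    int refs;
};

static struct obj *obj_ref(struct obj *o)
{
    o->refs++;
    return o; /* returned for convenient inline use when storing */
}

static int obj_unref(struct obj *o)
{
    return --o->refs; /* 0 means the object may now be freed */
}
```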
API Reference
group bt_conn
Connection management.
Defines
BT_LE_CONN_PARAM_DEFAULT
Default LE connection parameters: Connection Interval: 30-50 ms, Latency: 0, Timeout: 4 s
BT_CONN_LE_PHY_PARAM_INIT(_pref_tx_phy, _pref_rx_phy)
Initialize PHY parameters
Parameters
• _pref_tx_phy – Bitmask of preferred transmit PHYs.
• _pref_rx_phy – Bitmask of preferred receive PHYs.
BT_CONN_LE_PHY_PARAM(_pref_tx_phy, _pref_rx_phy)
Helper to declare PHY parameters inline
Parameters
• _pref_tx_phy – Bitmask of preferred transmit PHYs.
• _pref_rx_phy – Bitmask of preferred receive PHYs.
BT_CONN_LE_PHY_PARAM_1M
Only LE 1M PHY
BT_CONN_LE_PHY_PARAM_2M
Only LE 2M PHY
BT_CONN_LE_PHY_PARAM_CODED
Only LE Coded PHY.
BT_CONN_LE_PHY_PARAM_ALL
All LE PHYs.
BT_CONN_LE_DATA_LEN_PARAM_INIT(_tx_max_len, _tx_max_time)
Initialize transmit data length parameters
Parameters
• _tx_max_len – Maximum Link Layer transmission payload size in bytes.
• _tx_max_time – Maximum Link Layer transmission payload time in us.
BT_CONN_LE_DATA_LEN_PARAM(_tx_max_len, _tx_max_time)
Helper to declare transmit data length parameters inline
Parameters
BT_LE_DATA_LEN_PARAM_DEFAULT
Default LE data length parameters.
BT_LE_DATA_LEN_PARAM_MAX
Maximum LE data length parameters.
BT_CONN_INTERVAL_TO_MS(interval)
Convert connection interval to milliseconds.
Multiply by 1.25 to get milliseconds.
Note that this conversion may be inaccurate: a value such as 7.5 ms cannot be represented
exactly with integers.
BT_CONN_INTERVAL_TO_US(interval)
Convert connection interval to microseconds.
Multiply by 1250 to get microseconds.
BT_CONN_ROLE_MASTER
Connection role (central or peripheral). Deprecated alias for BT_CONN_ROLE_CENTRAL.
BT_CONN_ROLE_SLAVE
Deprecated alias for BT_CONN_ROLE_PERIPHERAL.
BT_CONN_LE_CREATE_CONN
Default LE create connection parameters. Scan continuously by setting scan interval equal to
scan window.
BT_CONN_LE_CREATE_CONN_AUTO
Default LE create connection parameters when using the filter accept list. Scan window: 30 ms.
Scan interval: 60 ms.
BT_CONN_CB_DEFINE(_name)
Register a callback structure for connection events.
Parameters
• _name – Name of callback structure.
BT_PASSKEY_INVALID
Special passkey value that can be used to disable a previously set fixed passkey.
BT_BR_CONN_PARAM_INIT(role_switch)
Initialize BR/EDR connection parameters.
Parameters
• role_switch – True if role switch is allowed
BT_BR_CONN_PARAM(role_switch)
Helper to declare BR/EDR connection parameters inline
Parameters
• role_switch – True if role switch is allowed
BT_BR_CONN_PARAM_DEFAULT
Default BR/EDR connection parameters: Role switch allowed
Enums
enum [anonymous]
Connection PHY options
Values:
enumerator BT_CONN_LE_PHY_OPT_NONE = 0
Convenience value when no options are specified.
enum [anonymous]
Connection Type
Values:
enum [anonymous]
Values:
enumerator BT_CONN_ROLE_CENTRAL = 0
enumerator BT_CONN_ROLE_PERIPHERAL = 1
enum bt_conn_state
Values:
enumerator BT_CONN_STATE_DISCONNECTED
Channel disconnected
enumerator BT_CONN_STATE_CONNECTING
Channel in connecting state
enumerator BT_CONN_STATE_CONNECTED
Channel connected and ready for upper layer traffic on it
enumerator BT_CONN_STATE_DISCONNECTING
Channel in disconnecting state
enum bt_security_t
Security level.
Values:
enumerator BT_SECURITY_L0
Level 0: Only for BR/EDR special cases, like SDP
enumerator BT_SECURITY_L1
Level 1: No encryption and no authentication.
enumerator BT_SECURITY_L2
Level 2: Encryption and no authentication (no MITM).
enumerator BT_SECURITY_L3
Level 3: Encryption and authentication (MITM).
enumerator BT_SECURITY_L4
Level 4: Authenticated Secure Connections and 128-bit key.
enum bt_security_flag
Security Info Flags.
Values:
enum bt_conn_le_tx_power_phy
Values:
enumerator BT_CONN_LE_TX_POWER_PHY_NONE
Convenience value when no PHY is set.
enumerator BT_CONN_LE_TX_POWER_PHY_1M
LE 1M PHY
enumerator BT_CONN_LE_TX_POWER_PHY_2M
LE 2M PHY
enumerator BT_CONN_LE_TX_POWER_PHY_CODED_S8
LE Coded PHY using S=8 coding.
enumerator BT_CONN_LE_TX_POWER_PHY_CODED_S2
LE Coded PHY using S=2 coding.
enum bt_conn_auth_keypress
Passkey Keypress Notification type.
The numeric values are the same as in the Core specification for Pairing Keypress Notification
PDU.
Values:
enum [anonymous]
Values:
enumerator BT_CONN_LE_OPT_NONE = 0
Convenience value when no options are specified.
enum bt_security_err
Values:
enumerator BT_SECURITY_ERR_SUCCESS
Security procedure successful.
enumerator BT_SECURITY_ERR_AUTH_FAIL
Authentication failed.
enumerator BT_SECURITY_ERR_PIN_OR_KEY_MISSING
PIN or encryption key is missing.
enumerator BT_SECURITY_ERR_OOB_NOT_AVAILABLE
OOB data is not available.
enumerator BT_SECURITY_ERR_AUTH_REQUIREMENT
The requested security level could not be reached.
enumerator BT_SECURITY_ERR_PAIR_NOT_SUPPORTED
Pairing is not supported
enumerator BT_SECURITY_ERR_PAIR_NOT_ALLOWED
Pairing is not allowed.
enumerator BT_SECURITY_ERR_INVALID_PARAM
Invalid parameters.
enumerator BT_SECURITY_ERR_KEY_REJECTED
Distributed Key Rejected
enumerator BT_SECURITY_ERR_UNSPECIFIED
Pairing failed but the exact reason could not be specified.
Functions
Parameters
• conn – Connection object.
Returns
Connection object with incremented reference count, or NULL if the reference
count is zero.
Note: In order to retrieve the remote version (version, manufacturer and subversion)
CONFIG_BT_REMOTE_VERSION must be enabled
Note: The remote information is exchanged directly after the connection has been es-
tablished. The application can be notified about when the remote information is available
through the remote_info_available callback.
Parameters
• conn – Connection object.
• remote_info – Connection remote info object.
Returns
Zero on success or (negative) error code on failure.
Returns
-EBUSY The remote information is not yet available.
• BT_HCI_ERR_REMOTE_POWER_OFF
• BT_HCI_ERR_UNSUPP_REMOTE_FEATURE
• BT_HCI_ERR_PAIRING_NOT_SUPPORTED
• BT_HCI_ERR_UNACCEPT_CONN_PARAM
Parameters
• conn – Connection to disconnect.
• reason – Reason code for the disconnection.
Returns
Zero on success or (negative) error code on failure.
Parameters
• addr – Remote Bluetooth address.
• param – If non-NULL, auto connect is enabled with the given parameters. If
NULL, auto connect is disabled.
Returns
Zero on success or error code otherwise.
This function may return error if the pairing procedure has already been initiated by the local
device or the peer device.
Note: When CONFIG_BT_SMP_SC_ONLY is enabled then the security level will always be level
4.
Parameters
• conn – Connection object.
• sec – Requested security level.
Returns
0 on success or negative error
Note: The OOB data will only be available as long as the connection object associated with
it is valid.
Parameters
• conn – Connection object
• oobd_local – Local OOB data or NULL if not set
See also:
bt_conn_auth_keypress.
Parameters
• conn – Destination for the notification.
• type – What keypress event type to send.
Return values
• 0 – Success
• -EINVAL – Improper use of the API.
• -ENOMEM – Failed to allocate.
• -ENOBUFS – Failed to allocate.
Returns
Valid connection object on success or NULL otherwise.
struct bt_le_conn_param
#include <conn.h> Connection parameters for LE connections
struct bt_conn_le_phy_info
#include <conn.h> Connection PHY information for LE connections
Public Members
uint8_t rx_phy
Connection receive PHY
struct bt_conn_le_phy_param
#include <conn.h> Preferred PHY parameters for LE connections
Public Members
uint8_t pref_tx_phy
Bitmask of preferred transmit PHYs.
uint8_t pref_rx_phy
Bitmask of preferred receive PHYs.
struct bt_conn_le_data_len_info
#include <conn.h> Connection data length information for LE connections
Public Members
uint16_t tx_max_len
Maximum Link Layer transmission payload size in bytes.
uint16_t tx_max_time
Maximum Link Layer transmission payload time in us.
uint16_t rx_max_len
Maximum Link Layer reception payload size in bytes.
uint16_t rx_max_time
Maximum Link Layer reception payload time in us.
struct bt_conn_le_data_len_param
#include <conn.h> Connection data length parameters for LE connections
Public Members
uint16_t tx_max_len
Maximum Link Layer transmission payload size in bytes.
uint16_t tx_max_time
Maximum Link Layer transmission payload time in us.
struct bt_conn_le_info
#include <conn.h> LE Connection Info Structure
Public Members
uint16_t latency
Connection peripheral latency
uint16_t timeout
Connection supervision timeout
struct bt_conn_br_info
#include <conn.h> BR/EDR Connection Info Structure
struct bt_security_info
#include <conn.h> Security Info Structure.
Public Members
bt_security_t level
Security Level.
uint8_t enc_key_size
Encryption Key Size.
struct bt_conn_info
#include <conn.h> Connection Info Structure
Public Members
uint8_t type
Connection Type.
uint8_t role
Connection Role.
uint8_t id
Which local identity the connection was created with
struct bt_conn_le_info le
LE Connection specific Info.
struct bt_conn_br_info br
BR/EDR Connection specific Info.
struct bt_conn_le_remote_info
#include <conn.h> LE Connection Remote Info Structure
Public Members
struct bt_conn_br_remote_info
#include <conn.h> BR/EDR Connection Remote Info structure
Public Members
uint8_t num_pages
Number of pages in the remote feature set.
struct bt_conn_remote_info
#include <conn.h> Connection Remote Info Structure.
Note: The version, manufacturer and subversion fields will only contain valid data if
CONFIG_BT_REMOTE_VERSION is enabled.
Public Members
uint8_t type
Connection Type
uint8_t version
Remote Link Layer version
uint16_t manufacturer
Remote manufacturer identifier
uint16_t subversion
Per-manufacturer unique revision
struct bt_conn_le_remote_info le
LE connection remote info
struct bt_conn_br_remote_info br
BR/EDR connection remote info
struct bt_conn_le_tx_power
#include <conn.h> LE Transmit Power Level Structure
Public Members
uint8_t phy
Input: 1M, 2M, Coded S2 or Coded S8
int8_t current_level
Output: current transmit power level
int8_t max_level
Output: maximum transmit power level
struct bt_conn_le_create_param
#include <conn.h>
Public Members
uint32_t options
Bit-field of create connection options.
uint16_t interval
Scan interval (N * 0.625 ms)
uint16_t window
Scan window (N * 0.625 ms)
uint16_t interval_coded
Scan interval LE Coded PHY (N * 0.625 ms)
Set zero to use same as LE 1M PHY scan interval
uint16_t window_coded
Scan window LE Coded PHY (N * 0.625 ms)
Set zero to use same as LE 1M PHY scan window.
uint16_t timeout
Connection initiation timeout (N * 10 ms)
Set zero to use the default CONFIG_BT_CREATE_CONN_TIMEOUT timeout.
struct bt_conn_le_create_synced_param
#include <conn.h>
Public Members
uint8_t subevent
The subevent where the connection will be initiated.
struct bt_conn_cb
#include <conn.h> Connection callback structure.
This structure is used for tracking the state of a connection. It is registered with the help of
the bt_conn_cb_register() API. It is permissible to register multiple instances of this bt_conn_cb
type, in case different modules of an application are interested in tracking the connection
state. If a callback is not of interest for an instance, it may be set to NULL and will
consequently not be used for that instance.
Public Members
Note: If the connection was established from an advertising set then the advertising set
cannot be restarted directly from this callback. Instead use the connected callback of the
advertising set.
Param conn
New connection object.
Param err
HCI error. Zero for success, non-zero otherwise.
It is recommended for an application to have just one of these callbacks for simplicity.
However, if an application registers multiple, it needs to manage the potentially different
requirements for each callback. Each callback gets the parameters as returned by previous
callbacks, i.e. they are not necessarily the same ones as the remote originally sent.
If the application does not have this callback then the default is to accept the parameters.
Param conn
Connection object.
Param param
Proposed connection parameters.
Return
true to accept the parameters, or false to reject them.
struct bt_conn_oob_info
#include <conn.h> Info Structure for OOB pairing
Public Types
enum [anonymous]
Type of OOB pairing method
Values:
enumerator BT_CONN_OOB_LE_LEGACY
LE legacy pairing
enumerator BT_CONN_OOB_LE_SC
LE SC pairing
Public Members
struct bt_conn_pairing_feat
#include <conn.h> Pairing request and pairing response info structure.
This structure is the same for both smp_pairing_req and smp_pairing_rsp and a subset of the
packet data, except for the initial Code octet. It is documented in Core Spec. Vol. 3, Part H,
3.5.1 and 3.5.2.
Public Members
uint8_t io_capability
IO Capability, Core Spec. Vol 3, Part H, 3.5.1, Table 3.4
uint8_t oob_data_flag
OOB data flag, Core Spec. Vol 3, Part H, 3.5.1, Table 3.5
uint8_t auth_req
AuthReq, Core Spec. Vol 3, Part H, 3.5.1, Fig. 3.3
uint8_t max_enc_key_size
Maximum Encryption Key Size, Core Spec. Vol 3, Part H, 3.5.1
uint8_t init_key_dist
Initiator Key Distribution/Generation, Core Spec. Vol 3, Part H, 3.6.1, Fig. 3.11
uint8_t resp_key_dist
Responder Key Distribution/Generation, Core Spec. Vol 3, Part H 3.6.1, Fig. 3.11
struct bt_conn_auth_cb
#include <conn.h> Authenticated pairing callback structure
Public Members
This callback may be unregistered in which case pairing continues as if the Kconfig flag
was not set.
This callback is not called for BR/EDR Secure Simple Pairing (SSP).
Param conn
Connection where pairing is initiated.
Param feat
Pairing req/resp info.
struct bt_conn_auth_info_cb
#include <conn.h> Authenticated pairing information callback structure
Public Members
sys_snode_t node
Internally used field for list handling
struct bt_br_conn_param
#include <conn.h> Connection parameters for BR/EDR connections
Bluetooth Controller
API Reference
group bt_ctrl
Bluetooth Controller.
Functions
API Reference
group bt_gatt_csip
Coordinated Set Identification Profile (CSIP)
Copyright (c) 2021-2022 Nordic Semiconductor ASA
SPDX-License-Identifier: Apache-2.0
• [Experimental] Users should note that the APIs can change as a part of ongoing development.
Defines
BT_CSIP_SET_COORDINATOR_DISCOVER_TIMER_VALUE
Recommended timer for member discovery
BT_CSIP_SET_COORDINATOR_MAX_CSIS_INSTANCES
BT_CSIP_READ_SIRK_REQ_RSP_ACCEPT
Accept the request to read the SIRK as plaintext
BT_CSIP_READ_SIRK_REQ_RSP_ACCEPT_ENC
Accept the request to read the SIRK, but return encrypted SIRK
BT_CSIP_READ_SIRK_REQ_RSP_REJECT
Reject the request to read the SIRK
BT_CSIP_READ_SIRK_REQ_RSP_OOB_ONLY
SIRK is available only via an OOB procedure
BT_CSIP_SET_SIRK_SIZE
Size of the Set Identification Resolving Key (SIRK)
BT_CSIP_RSI_SIZE
Size of the Resolvable Set Identifier (RSI)
BT_CSIP_ERROR_LOCK_DENIED
Service is already locked
BT_CSIP_ERROR_LOCK_RELEASE_DENIED
Service is not locked
BT_CSIP_ERROR_LOCK_INVAL_VALUE
Invalid lock value
BT_CSIP_ERROR_SIRK_OOB_ONLY
SIRK only available out-of-band
BT_CSIP_ERROR_LOCK_ALREADY_GRANTED
Client is already owner of the lock
BT_CSIP_DATA_RSI(_rsi)
Helper to declare bt_data array including RSI.
This macro is mainly for creating an array of struct bt_data elements which is then passed to
e.g. bt_le_ext_adv_start().
Parameters
• _rsi – Pointer to the RSI value
Typedefs
Param set_info
Pointer to a specific set_info struct.
Param err
Error value. 0 on success, GATT error or errno on failure.
Param locked
Whether the lock is locked or released.
Param member
The locked member if locked is true, otherwise NULL.
Functions
struct bt_csip_set_member_cb
#include <csip.h> Callback structure for the Coordinated Set Identification Service
Public Members
struct bt_csip_set_member_register_param
#include <csip.h> Register structure for Coordinated Set Identification Service
Public Members
uint8_t set_size
Size of the set.
If set to 0, the set size characteristic won’t be initialized.
uint8_t set_sirk[16]
The unique Set Identity Resolving Key (SIRK)
This shall be unique between different sets, and shall be the same for all members of a
given set.
bool lockable
Boolean to set whether the set is lockable by clients.
Setting this to false will disable the lock characteristic.
uint8_t rank
Rank of this device in this set.
If the lockable parameter is set to true, this shall be > 0 and <= set_size. If the
lockable parameter is set to false, this may be set to 0 to disable the rank characteristic.
struct bt_csip_set_coordinator_set_info
#include <csip.h> Information about a specific set
Public Members
uint8_t set_sirk[16]
The 16 octet set Set Identity Resolving Key (SIRK)
The Set SIRK may not be exposed by the server over Bluetooth, and may require an out-
of-band solution.
uint8_t set_size
The size of the set.
Will be 0 if not exposed by the server.
uint8_t rank
The rank of the set on the remote device.
Will be 0 if not exposed by the server.
bool lockable
Whether or not the set can be locked on this device
struct bt_csip_set_coordinator_csis_inst
#include <csip.h> Struct representing a coordinated set instance on a remote device.
The values in this struct will be populated during discovery of sets
(bt_csip_set_coordinator_discover()).
Public Members
void *svc_inst
Internally used pointer value
struct bt_csip_set_coordinator_set_member
#include <csip.h> Struct representing a remote device as a set member
Public Members
struct bt_csip_set_coordinator_cb
#include <csip.h>
Cryptography
API Reference
group bt_crypto
Cryptography.
Functions
Data Buffers
API Reference
group bt_buf
Data buffers.
Defines
BT_BUF_RESERVE
BT_BUF_SIZE(size)
Helper to include reserved HCI data in buffer calculations
BT_BUF_ACL_SIZE(size)
Helper to calculate needed buffer size for HCI ACL packets
BT_BUF_EVT_SIZE(size)
Helper to calculate needed buffer size for HCI Event packets.
BT_BUF_CMD_SIZE(size)
Helper to calculate needed buffer size for HCI Command packets.
BT_BUF_ISO_SIZE(size)
Helper to calculate needed buffer size for HCI ISO packets.
BT_BUF_ACL_RX_SIZE
Data size needed for HCI ACL RX buffers
BT_BUF_EVT_RX_SIZE
Data size needed for HCI Event RX buffers
BT_BUF_ISO_RX_SIZE
BT_BUF_ISO_RX_COUNT
BT_BUF_RX_SIZE
Data size needed for HCI ACL, HCI ISO or Event RX buffers
BT_BUF_RX_COUNT
Buffer count needed for HCI ACL, HCI ISO or Event RX buffers
BT_BUF_CMD_TX_SIZE
Data size needed for HCI Command buffers.
Enums
enum bt_buf_type
Possible types of buffers passed around the Bluetooth stack
Values:
enumerator BT_BUF_CMD
HCI command
enumerator BT_BUF_EVT
HCI event
enumerator BT_BUF_ACL_OUT
Outgoing ACL data
enumerator BT_BUF_ACL_IN
Incoming ACL data
enumerator BT_BUF_ISO_OUT
Outgoing ISO data
enumerator BT_BUF_ISO_IN
Incoming ISO data
enumerator BT_BUF_H4
H:4 data
Functions
struct bt_buf_data
#include <buf.h> This is a base type for bt_buf user data.
API Reference
group bt_gap
Generic Access Profile.
Defines
BT_ID_DEFAULT
Convenience macro for specifying the default identity. This helps make the code more
readable, especially when only one identity is supported.
BT_DATA_SERIALIZED_SIZE(data_len)
Bluetooth data serialized size.
Get the size of a serialized bt_data given its data length.
Size of ‘AD Structure’->’Length’ field, equal to 1. Size of ‘AD Structure’->’Data’->’AD Type’
field, equal to 1. Size of ‘AD Structure’->’Data’->’AD Data’ field, equal to data_len.
See Core Specification Version 5.4 Vol. 3 Part C, 11, Figure 11.1.
BT_LE_ADV_CONN
BT_LE_ADV_CONN_NAME
BT_LE_ADV_CONN_NAME_AD
BT_LE_ADV_CONN_DIR_LOW_DUTY(_peer)
BT_LE_ADV_NCONN
Non-connectable advertising with private address
BT_LE_ADV_NCONN_NAME
Non-connectable advertising with BT_LE_ADV_OPT_USE_NAME
BT_LE_ADV_NCONN_IDENTITY
Non-connectable advertising with BT_LE_ADV_OPT_USE_IDENTITY
BT_LE_EXT_ADV_CONN_NAME
Connectable extended advertising with BT_LE_ADV_OPT_USE_NAME
BT_LE_EXT_ADV_SCAN_NAME
Scannable extended advertising with BT_LE_ADV_OPT_USE_NAME
BT_LE_EXT_ADV_NCONN
Non-connectable extended advertising with private address
BT_LE_EXT_ADV_NCONN_NAME
Non-connectable extended advertising with BT_LE_ADV_OPT_USE_NAME
BT_LE_EXT_ADV_NCONN_IDENTITY
Non-connectable extended advertising with BT_LE_ADV_OPT_USE_IDENTITY
BT_LE_EXT_ADV_CODED_NCONN
Non-connectable extended advertising on coded PHY with private address
BT_LE_EXT_ADV_CODED_NCONN_NAME
Non-connectable extended advertising on coded PHY with BT_LE_ADV_OPT_USE_NAME
BT_LE_EXT_ADV_CODED_NCONN_IDENTITY
Non-connectable extended advertising on coded PHY with BT_LE_ADV_OPT_USE_IDENTITY
BT_LE_EXT_ADV_START_PARAM_INIT(_timeout, _n_evts)
Helper to initialize extended advertising start parameters inline
Parameters
• _timeout – Advertiser timeout
• _n_evts – Number of advertising events
BT_LE_EXT_ADV_START_PARAM(_timeout, _n_evts)
Helper to declare extended advertising start parameters inline
Parameters
• _timeout – Advertiser timeout
• _n_evts – Number of advertising events
BT_LE_EXT_ADV_START_DEFAULT
BT_LE_PER_ADV_DEFAULT
BT_LE_SCAN_OPT_FILTER_WHITELIST
BT_LE_SCAN_ACTIVE
Helper macro to enable active scanning to discover new devices.
BT_LE_SCAN_PASSIVE
Helper macro to enable passive scanning to discover new devices.
This macro should be used if the information required for device identification (e.g., UUID) is
known to be placed in the Advertising Data.
BT_LE_SCAN_CODED_ACTIVE
Helper macro to enable active scanning to discover new devices. Include scanning on Coded
PHY in addition to 1M PHY.
BT_LE_SCAN_CODED_PASSIVE
Helper macro to enable passive scanning to discover new devices. Include scanning on Coded
PHY in addition to 1M PHY.
This macro should be used if the information required for device identification (e.g., UUID) is
known to be placed in the Advertising Data.
Typedefs
Enums
enum [anonymous]
Advertising options
Values:
enumerator BT_LE_ADV_OPT_NONE = 0
Convenience value when no options are specified.
Note: The address used for advertising will not be the same as returned by
bt_le_oob_get_local, instead bt_id_get should be used to get the LE address.
Warning: This will compromise the privacy of the device, so care must be taken when
using this option.
The application can set the device name itself by including the following in the advertising
data:
BT_DATA(BT_DATA_NAME_COMPLETE, name, sizeof(name) - 1)
Note: Enabling this option requires extended advertising support in the peer devices
scanning for advertisement packets.
enum [anonymous]
Periodic Advertising options
Values:
enumerator BT_LE_PER_ADV_OPT_NONE = 0
Convenience value when no options are specified.
enum [anonymous]
Periodic advertising sync options
Values:
enumerator BT_LE_PER_ADV_SYNC_OPT_NONE = 0
Convenience value when no options are specified.
enum [anonymous]
Periodic Advertising Sync Transfer options
Values:
enumerator BT_LE_PER_ADV_SYNC_TRANSFER_OPT_NONE = 0
Convenience value when no options are specified.
enumerator BT_LE_PER_ADV_SYNC_TRANSFER_OPT_REPORTING_INITIALLY_DISABLED =
BIT(4)
Sync to received PAST packets but don’t generate sync reports.
This option must not be set at the same time as
BT_LE_PER_ADV_SYNC_TRANSFER_OPT_FILTER_DUPLICATES.
enum [anonymous]
Values:
enumerator BT_LE_SCAN_OPT_NONE = 0
Convenience value when no options are specified.
enum [anonymous]
Values:
Functions
See also:
CONFIG_BT_DEVICE_NAME_MAX .
Parameters
• name – New name
Returns
Zero on success or (negative) error code otherwise.
Returns
Bluetooth Device Name
uint16_t bt_get_appearance(void)
Get local Bluetooth appearance.
Bluetooth Appearance is a description of the external appearance of a device in terms of an
Appearance Value.
See also:
https://specificationrefs.bluetooth.com/assigned-values/Appearance%20Values.pdf
Returns
Appearance Value of local Bluetooth host.
Parameters
• addrs – Array where to store the configured identities.
• count – Should be initialized to the array size. Once the function returns it will
contain the number of returned identities.
from and will be able to repeat the procedure on every power cycle, i.e. it would be redundant
to also store the information in flash.
Generating random static address or random IRK is not supported when calling this function
before bt_enable().
If the application wants to have the stack randomly generate identities and store them in flash
for later recovery, the way to do it would be to first initialize the stack (using bt_enable), then
call settings_load(), and after that check with bt_id_get() how many identities were recovered.
If an insufficient amount of identities were recovered the app may then call bt_id_create() to
create new ones.
Parameters
• addr – Address to use for the new identity. If NULL or initialized to
BT_ADDR_LE_ANY the stack will generate a new random static address for
the identity and copy it to the given parameter upon return from this function
(in case the parameter was non-NULL).
• irk – Identity Resolving Key (16 bytes) to be used with this identity. If set to all
zeroes or NULL, the stack will generate a random IRK for the identity and copy
it back to the parameter upon return from this function (in case the parameter
was non-NULL). If privacy CONFIG_BT_PRIVACY is not enabled this parameter
must be NULL.
Returns
Identity identifier (>= 0) in case of success, or a negative error code on failure.
int bt_id_reset(uint8_t id, bt_addr_le_t *addr, uint8_t *irk)
Reset/reclaim an identity for reuse.
The semantics of the addr and irk parameters of this function are the same as with
bt_id_create(). The difference is the first parameter, id, which must refer to an existing
identity (if it does not, this function returns an error). When given an existing identity this
function will disconnect any connections created using it, remove any pairing keys or other
data associated with it, and then create a new identity in the same slot, based on the addr and
irk parameters.
Note: the default identity (BT_ID_DEFAULT) cannot be reset, i.e. this API will return an
error if asked to do that.
Parameters
• id – Existing identity identifier.
• addr – Address to use for the new identity. If NULL or initialized to
BT_ADDR_LE_ANY the stack will generate a new static random address for
the identity and copy it to the given parameter upon return from this function
(in case the parameter was non-NULL).
• irk – Identity Resolving Key (16 bytes) to be used with this identity. If set to all
zeroes or NULL, the stack will generate a random IRK for the identity and copy
it back to the parameter upon return from this function (in case the parameter
was non-NULL). If privacy CONFIG_BT_PRIVACY is not enabled this parameter
must be NULL.
Returns
Identity identifier (>= 0) in case of success, or a negative error code on failure.
When given a valid identity this function will disconnect any connections created using it,
remove any pairing keys or other data associated with it, and then flag it as deleted, so that it
cannot be used for any operations. To take the slot the identity was occupying back into use,
the bt_id_reset() API needs to be used.
Note: the default identity (BT_ID_DEFAULT) cannot be deleted, i.e. this API will return an
error if asked to do that.
Parameters
• id – Existing identity identifier.
Returns
0 in case of success, or a negative error code on failure.
Returns
Zero on success or (negative) error code otherwise.
Returns
-ENOMEM No free connection objects available for connectable advertiser.
Returns
-ECONNREFUSED When connectable advertising is requested and the maximum num-
ber of connections has already been established in the controller. This error
code is only guaranteed when using the Zephyr controller; for other controllers the code
returned in this case may be -EIO.
int bt_le_adv_update_data(const struct bt_data *ad, size_t ad_len, const struct bt_data *sd,
size_t sd_len)
Update advertising.
Update advertisement and scan response data.
Parameters
• ad – Data to be used in advertisement packets.
• ad_len – Number of elements in ad
• sd – Data to be used in scan response packets.
• sd_len – Number of elements in sd
Returns
Zero on success or (negative) error code otherwise.
int bt_le_adv_stop(void)
Stop advertising.
Stops ongoing advertising.
Returns
Zero on success or (negative) error code otherwise.
int bt_le_ext_adv_create(const struct bt_le_adv_param *param, const struct bt_le_ext_adv_cb
*cb, struct bt_le_ext_adv **adv)
Create advertising set.
Create a new advertising set and set advertising parameters. Advertising parameters can be
updated with bt_le_ext_adv_update_param.
Parameters
• param – [in] Advertising parameters.
• cb – [in] Callback struct to notify about advertiser activity. Can be NULL. Must
point to valid memory during the lifetime of the advertising set.
• adv – [out] Valid advertising set object on success.
Returns
Zero on success or (negative) error code otherwise.
int bt_le_ext_adv_start(struct bt_le_ext_adv *adv, struct bt_le_ext_adv_start_param *param)
Start advertising with the given advertising set.
If the advertiser is limited by either the timeout or the number of advertising events, the
application will be notified by the advertiser's sent callback once the limit is reached. If the
advertiser is limited by both the timeout and the number of advertising events, then whichever
limit is reached first will stop the advertiser.
Parameters
• adv – Advertising set object.
Note: Not all scanners support extended data length advertising data.
Note: When updating the advertising data while advertising, the advertising data and scan
response data length must be smaller than or equal to what can fit in a single advertising
packet. Otherwise the advertiser must be stopped.
Parameters
• adv – Advertising set object.
• ad – Data to be used in advertisement packets.
• ad_len – Number of elements in ad
• sd – Data to be used in scan response packets.
• sd_len – Number of elements in sd
Returns
Zero on success or (negative) error code otherwise.
Parameters
• adv – Advertising set object.
• param – Advertising parameters.
Returns
Zero on success or (negative) error code otherwise.
int bt_le_per_adv_set_data(const struct bt_le_ext_adv *adv, const struct bt_data *ad, size_t
ad_len)
Set or update the periodic advertising data.
The periodic advertisement data can only be set or updated on an extended advertisement set
which is neither scannable, connectable nor anonymous.
Parameters
• adv – Advertising set object.
• ad – Advertising data.
• ad_len – Advertising data length.
Returns
Zero on success or (negative) error code otherwise.
int bt_le_per_adv_set_subevent_data(const struct bt_le_ext_adv *adv, uint8_t num_subevents,
const struct bt_le_per_adv_subevent_data_params
*params)
Set the periodic advertising with response subevent data.
Set the data for one or more subevents of a Periodic Advertising with Responses Advertiser in
reply data request.
Parameters
• adv – The extended advertiser the PAwR train belongs to.
• num_subevents – The number of subevents to set data for.
• params – Subevent parameters.
Pre
There are num_subevents elements in params.
Pre
The controller has requested data for the subevents in params.
Returns
Zero on success or (negative) error code otherwise.
int bt_le_per_adv_start(struct bt_le_ext_adv *adv)
Starts periodic advertising.
Enabling the periodic advertising can be done independently of extended advertising, but both
periodic advertising and extended advertising shall be enabled before any periodic advertising
data is sent. The periodic advertising and extended advertising can be enabled in any order.
Once periodic advertising has been enabled, it will continue advertising until
bt_le_per_adv_stop() is called, or until the advertising set is deleted by
bt_le_ext_adv_delete(). Calling bt_le_ext_adv_stop() will not stop the periodic advertis-
ing.
Parameters
• adv – Advertising set object.
Returns
Zero on success or (negative) error code otherwise.
int bt_le_per_adv_stop(struct bt_le_ext_adv *adv)
Stops periodic advertising.
Disabling the periodic advertising can be done independently of extended advertising. Dis-
abling periodic advertising will not disable extended advertising.
Parameters
Note: The LE scanner by default does not use the Identity Address of the local device when
CONFIG_BT_PRIVACY is disabled. This is to prevent the active scanner from disclosing the
identity information when requesting additional information from advertisers. In order to
enable directed advertiser reports then CONFIG_BT_SCAN_WITH_IDENTITY must be enabled.
Parameters
• param – Scan parameters.
• cb – Callback to notify scan results. May be NULL if callback registration
through bt_le_scan_cb_register is preferred.
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
int bt_le_scan_stop(void)
Stop (LE) scanning.
Stops ongoing LE scanning.
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
void bt_le_scan_cb_register(struct bt_le_scan_cb *cb)
Register scanner packet callbacks.
Adds the callback structure to the list of callback structures that monitors scanner activity.
This callback will be called for all scanner activity, regardless of what API was used to start
the scanner.
Parameters
• cb – Callback struct. Must point to memory that remains valid.
Note: The filter accept list cannot be modified when an LE role is using the filter accept
list, i.e. an advertiser or scanner using a filter accept list, or automatic connection
establishment to devices using a filter accept list.
Parameters
• addr – Bluetooth LE identity address.
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
Note: The filter accept list cannot be modified while an LE role is using it, i.e. while an advertiser or scanner is using the filter accept list, or while devices are being connected automatically using the filter accept list.
Parameters
• addr – Bluetooth LE identity address.
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
int bt_le_filter_accept_list_clear(void)
Clear filter accept list.
Clear all devices from the filter accept list.
Note: The filter accept list cannot be modified while an LE role is using it, i.e. while an advertiser or scanner is using the filter accept list, or while devices are being connected automatically using the filter accept list.
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
Warning: This helper function consumes ad while parsing it. The user should make a copy if the original data is to be used afterwards.
Parameters
• ad – Advertising data as given to the bt_le_scan_cb_t callback.
• func – Callback function which will be called for each element that’s found in
the data. The callback should return true to continue parsing, or false to stop
parsing.
• user_data – User data to be passed to the callback.
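The ad buffer handed to this helper is a sequence of length-type-value elements. As a self-contained sketch of the walk the parser performs (stand-in type and function names, not the actual Zephyr struct bt_data / parsing API), assuming each element begins with a length byte that counts the type byte plus the value bytes:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for struct bt_data: one decoded EIR/AD element. */
struct ad_element {
    uint8_t type;        /* EIR/AD type, e.g. 0x01 for Flags */
    uint8_t data_len;    /* Value length, excluding the type byte */
    const uint8_t *data; /* Pointer into the advertising data */
};

/* Walk advertising data as length-type-value elements, calling func for
 * each one. Parsing stops when func returns false, when a zero-length
 * element (early terminator) is seen, or when an element would overrun
 * the buffer. */
static void ad_parse(const uint8_t *ad, size_t len,
                     bool (*func)(const struct ad_element *el, void *user_data),
                     void *user_data)
{
    while (len > 1) {
        uint8_t field_len = ad[0]; /* type byte + value bytes */

        if (field_len == 0 || (size_t)field_len + 1 > len) {
            return; /* early terminator or malformed element */
        }

        struct ad_element el = {
            .type = ad[1],
            .data_len = field_len - 1,
            .data = &ad[2],
        };

        if (!func(&el, user_data)) {
            return; /* callback asked to stop */
        }

        ad += field_len + 1;
        len -= field_len + 1;
    }
}

struct find_ctx {
    uint8_t type;
    bool found;
    struct ad_element el;
};

static bool find_cb(const struct ad_element *el, void *user_data)
{
    struct find_ctx *ctx = user_data;

    if (el->type == ctx->type) {
        ctx->el = *el;
        ctx->found = true;
        return false; /* stop parsing */
    }
    return true; /* continue parsing */
}

/* Find the first element of a given AD type; returns 0 on success. */
static int ad_find(const uint8_t *ad, size_t len, uint8_t type,
                   struct ad_element *out)
{
    struct find_ctx ctx = { .type = type, .found = false };

    ad_parse(ad, len, find_cb, &ctx);
    if (!ctx.found) {
        return -1;
    }
    *out = ctx.el;
    return 0;
}
```

Because the callback decides whether parsing continues, a lookup like ad_find can bail out as soon as the element of interest has been seen.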
Note: If privacy is enabled, the RPA cannot be refreshed in the following cases:
• A connection creation is in progress; wait for the connected callback.
In addition, when extended advertising (CONFIG_BT_EXT_ADV) is not enabled or not supported by the controller:
• An advertiser is enabled using a Random Static Identity Address for a different local identity.
• The local identity conflicts with the local identity used by other roles.
Parameters
• id – [in] Local identity, in most cases BT_ID_DEFAULT.
• oob – [out] LE OOB information
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
Note: When generating OOB information for multiple advertising sets, all OOB information needs to be generated at the same time.
Note: If privacy is enabled, the RPA cannot be refreshed in the following case:
• A connection creation is in progress; wait for the connected callback.
Parameters
• adv – [in] The advertising set object
• oob – [out] LE OOB information
Returns
Zero on success or error code otherwise, positive in case of protocol error or
negative (POSIX) in case of stack internal error.
void bt_foreach_bond(uint8_t id, void (*func)(const struct bt_bond_info *info, void *user_data),
void *user_data)
Iterate through all existing bonds.
Parameters
• id – Local identity (mostly just BT_ID_DEFAULT).
• func – Function to call for each bond.
• user_data – Data to pass to the callback function.
int bt_configure_data_path(uint8_t dir, uint8_t id, uint8_t vs_config_len, const uint8_t
*vs_config)
Configure vendor data path.
Request the Controller to configure the data transport path in a given direction between the
Controller and the Host.
Parameters
• dir – Direction to be configured, BT_HCI_DATAPATH_DIR_HOST_TO_CTLR or
BT_HCI_DATAPATH_DIR_CTLR_TO_HOST
• id – Vendor specific logical transport channel ID, range
[BT_HCI_DATAPATH_ID_VS..BT_HCI_DATAPATH_ID_VS_END]
• vs_config_len – Length of additional vendor specific configuration data
• vs_config – Pointer to additional vendor specific configuration data
Returns
0 in case of success or negative value in case of error.
int bt_le_per_adv_sync_subevent(struct bt_le_per_adv_sync *per_adv_sync, struct
bt_le_per_adv_sync_subevent_params *params)
Synchronize with a subset of subevents.
Until this command is issued, the subevent(s) the controller is synchronized to are unspecified.
Parameters
• per_adv_sync – The periodic advertising sync object.
• params – Parameters.
Returns
0 in case of success or negative value in case of error.
int bt_le_per_adv_set_response_data(struct bt_le_per_adv_sync *per_adv_sync, const struct
bt_le_per_adv_response_params *params, const struct
net_buf_simple *data)
Set the data for a response slot in a specific subevent of the PAwR.
This function is called by the application to set the response data. The data for a response slot
shall be transmitted only once.
Parameters
• per_adv_sync – The periodic advertising sync object.
• params – Parameters.
• data – The response data to send.
Returns
Zero on success or (negative) error code otherwise.
struct bt_le_ext_adv_sent_info
#include <bluetooth.h>
Public Members
uint8_t num_sent
The number of advertising events completed.
struct bt_le_ext_adv_connected_info
#include <bluetooth.h>
Public Members
struct bt_le_ext_adv_scanned_info
#include <bluetooth.h>
Public Members
bt_addr_le_t *addr
Active scanner LE address and type
struct bt_le_per_adv_data_request
#include <bluetooth.h>
Public Members
uint8_t start
The first subevent that data can be set for
uint8_t count
The number of subevents that data can be set for
struct bt_le_per_adv_response_info
#include <bluetooth.h>
Public Members
uint8_t subevent
The subevent the response was received in
uint8_t tx_status
Status of the subevent indication.
0 if subevent indication was transmitted. 1 if subevent indication was not transmitted.
All other values RFU.
int8_t tx_power
The TX power of the response in dBm
int8_t rssi
The RSSI of the response in dBm
uint8_t cte_type
The Constant Tone Extension (CTE) of the advertisement (bt_df_cte_type)
uint8_t response_slot
The slot the response was received in
struct bt_le_ext_adv_cb
#include <bluetooth.h>
Public Members
struct bt_data
#include <bluetooth.h> Bluetooth data.
Description of different data types that can be encoded into advertising data. Used to form
arrays that are passed to the bt_le_adv_start() function.
struct bt_le_adv_param
#include <bluetooth.h> LE Advertising Parameters.
Public Members
uint8_t id
Local identity.
uint8_t sid
Advertising Set Identifier, valid range 0x00 - 0x0f.
uint8_t secondary_max_skip
Secondary channel maximum skip count.
Maximum advertising events the advertiser can skip before it must send advertising data
on the secondary advertising channel.
uint32_t options
Bit-field of advertising options
uint32_t interval_min
Minimum Advertising Interval (N * 0.625 milliseconds).
The Minimum Advertising Interval shall be less than or equal to the Maximum Advertising Interval. The Minimum and Maximum Advertising Intervals should not be the same value, as stated in Bluetooth Core Spec 5.2, section 7.8.5. Range: 0x0020 to 0x4000.
uint32_t interval_max
Maximum Advertising Interval (N * 0.625 milliseconds).
The Minimum Advertising Interval shall be less than or equal to the Maximum Advertising Interval. The Minimum and Maximum Advertising Intervals should not be the same value, as stated in Bluetooth Core Spec 5.2, section 7.8.5. Range: 0x0020 to 0x4000.
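Since both interval fields are expressed in 0.625 ms units, a small stand-alone helper (hypothetical names, not part of the Zephyr API) makes the unit conversion and range check explicit; converting to microseconds (N * 625) keeps the arithmetic exact:

```c
#include <stdint.h>

/* Convert an advertising interval in 0.625 ms units to microseconds.
 * The valid range 0x0020..0x4000 corresponds to 20 ms..10.24 s. */
static inline uint32_t adv_interval_to_us(uint16_t interval)
{
    return (uint32_t)interval * 625U;
}

/* Check that an interval value is inside the documented range. */
static inline int adv_interval_valid(uint16_t interval)
{
    return interval >= 0x0020 && interval <= 0x4000;
}
```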
struct bt_le_per_adv_param
#include <bluetooth.h>
Public Members
uint16_t interval_min
Minimum Periodic Advertising Interval (N * 1.25 ms)
Shall be greater than or equal to BT_GAP_PER_ADV_MIN_INTERVAL and less than or equal to interval_max.
uint16_t interval_max
Maximum Periodic Advertising Interval (N * 1.25 ms)
Shall be less than or equal to BT_GAP_PER_ADV_MAX_INTERVAL and greater than or equal to interval_min.
uint32_t options
Bit-field of periodic advertising options
struct bt_le_ext_adv_start_param
#include <bluetooth.h>
Public Members
uint16_t timeout
Advertiser timeout (N * 10 ms).
Application will be notified by the advertiser sent callback. Set to zero for no timeout.
When using high duty cycle directed connectable advertising, this parameter must be set to a non-zero value less than or equal to BT_GAP_ADV_HIGH_DUTY_CYCLE_MAX_TIMEOUT.
If privacy (CONFIG_BT_PRIVACY) is enabled, the timeout must be less than CONFIG_BT_RPA_TIMEOUT .
uint8_t num_events
Number of advertising events.
Application will be notified by the advertiser sent callback. Set to zero for no limit.
struct bt_le_ext_adv_info
#include <bluetooth.h> Advertising set info structure.
Public Members
int8_t tx_power
Currently selected Transmit Power (dBm).
struct bt_le_per_adv_subevent_data_params
#include <bluetooth.h>
Public Members
uint8_t subevent
The subevent to set data for
uint8_t response_slot_start
The first response slot to listen to
uint8_t response_slot_count
The number of response slots to listen to
struct bt_le_per_adv_sync_synced_info
#include <bluetooth.h>
Public Members
uint8_t sid
Advertiser SID
uint16_t interval
Periodic advertising interval (N * 1.25 ms)
uint8_t phy
Advertiser PHY
bool recv_enabled
True if receiving periodic advertisements, false otherwise.
uint16_t service_data
Service Data provided by the peer when sync is transferred.
Will always be 0 when the sync is locally created.
struct bt_le_per_adv_sync_term_info
#include <bluetooth.h>
Public Members
uint8_t sid
Advertiser SID
uint8_t reason
Cause of periodic advertising termination
struct bt_le_per_adv_sync_recv_info
#include <bluetooth.h>
Public Members
uint8_t sid
Advertiser SID
int8_t tx_power
The TX power of the advertisement.
int8_t rssi
The RSSI of the advertisement excluding any CTE.
uint8_t cte_type
The Constant Tone Extension (CTE) of the advertisement (bt_df_cte_type)
struct bt_le_per_adv_sync_state_info
#include <bluetooth.h>
Public Members
bool recv_enabled
True if receiving periodic advertisements, false otherwise.
struct bt_le_per_adv_sync_cb
#include <bluetooth.h>
Public Members
struct bt_le_per_adv_sync_param
#include <bluetooth.h>
Public Members
bt_addr_le_t addr
Periodic Advertiser Address.
Only valid if not using the periodic advertising list
(BT_LE_PER_ADV_SYNC_OPT_USE_PER_ADV_LIST)
uint8_t sid
Advertiser SID.
Only valid if not using the periodic advertising list
(BT_LE_PER_ADV_SYNC_OPT_USE_PER_ADV_LIST)
uint32_t options
Bit-field of periodic advertising sync options.
uint16_t skip
Maximum event skip.
Maximum number of periodic advertising events that can be skipped after a successful
receive. Range: 0x0000 to 0x01F3
uint16_t timeout
Synchronization timeout (N * 10 ms)
Synchronization timeout for the periodic advertising sync. Range 0x000A to 0x4000 (100
ms to 163840 ms)
struct bt_le_per_adv_sync_info
#include <bluetooth.h> Advertising set info structure.
Public Members
bt_addr_le_t addr
Periodic Advertiser Address
uint8_t sid
Advertiser SID
uint16_t interval
Periodic advertising interval (N * 1.25 ms)
uint8_t phy
Advertiser PHY
struct bt_le_per_adv_sync_transfer_param
#include <bluetooth.h>
Public Members
uint16_t skip
Maximum event skip.
The number of periodic advertising packets that can be skipped after a successful receive.
uint16_t timeout
Synchronization timeout (N * 10 ms)
Synchronization timeout for the periodic advertising sync. Range 0x000A to 0x4000 (100
ms to 163840 ms)
uint32_t options
Periodic Advertising Sync Transfer options
struct bt_le_scan_param
#include <bluetooth.h> LE scan parameters
Public Members
uint8_t type
Scan type (BT_LE_SCAN_TYPE_ACTIVE or BT_LE_SCAN_TYPE_PASSIVE)
uint32_t options
Bit-field of scanning options.
uint16_t interval
Scan interval (N * 0.625 ms)
uint16_t window
Scan window (N * 0.625 ms)
uint16_t timeout
Scan timeout (N * 10 ms)
Application will be notified by the scan timeout callback. Set to zero to disable the timeout.
uint16_t interval_coded
Scan interval for the LE Coded PHY (N * 0.625 ms)
Set to zero to use the same value as the LE 1M PHY scan interval.
uint16_t window_coded
Scan window for the LE Coded PHY (N * 0.625 ms)
Set to zero to use the same value as the LE 1M PHY scan window.
struct bt_le_scan_recv_info
#include <bluetooth.h> LE advertisement and scan response packet information
Public Members
uint8_t sid
Advertising Set Identifier.
int8_t rssi
Strength of advertiser signal.
int8_t tx_power
Transmit power of the advertiser.
uint8_t adv_type
Advertising packet type.
Uses the BT_GAP_ADV_TYPE_* value.
May indicate that this is a scan response if the type is BT_GAP_ADV_TYPE_SCAN_RSP.
uint16_t adv_props
Advertising packet properties bitfield.
Uses the BT_GAP_ADV_PROP_* values. May indicate that this is a scan response if the
value contains the BT_GAP_ADV_PROP_SCAN_RESPONSE bit.
uint16_t interval
Periodic advertising interval.
If 0 there is no periodic advertising.
uint8_t primary_phy
Primary advertising channel PHY.
uint8_t secondary_phy
Secondary advertising channel PHY.
struct bt_le_scan_cb
#include <bluetooth.h> Listener context for (LE) scanning.
Public Members
void (*timeout)(void)
The scanner has stopped scanning after scan timeout.
struct bt_le_oob_sc_data
#include <bluetooth.h> LE Secure Connections pairing Out of Band data.
Public Members
uint8_t r[16]
Random Number.
uint8_t c[16]
Confirm Value.
struct bt_le_oob
#include <bluetooth.h> LE Out of Band information.
Public Members
bt_addr_le_t addr
LE address. If privacy is enabled this is a Resolvable Private Address.
struct bt_br_discovery_result
#include <bluetooth.h> BR/EDR discovery result structure.
Public Members
bt_addr_t addr
Remote device address
int8_t rssi
RSSI from inquiry
uint8_t cod[3]
Class of Device
uint8_t eir[240]
Extended Inquiry Response
struct bt_br_discovery_param
#include <bluetooth.h> BR/EDR discovery parameters
Public Members
uint8_t length
Maximum length of the discovery in units of 1.28 seconds. Valid range is 0x01 - 0x30.
bool limited
True if limited discovery procedure is to be used.
struct bt_br_oob
#include <bluetooth.h>
Public Members
bt_addr_t addr
BR/EDR address.
struct bt_bond_info
#include <bluetooth.h> Information about a bond with a remote device.
Public Members
bt_addr_le_t addr
Address of the remote device.
struct bt_le_per_adv_sync_subevent_params
#include <bluetooth.h>
Public Members
uint16_t properties
Periodic Advertising Properties.
Bit 6 is Include TxPower; all other bits are RFU.
uint8_t num_subevents
Number of subevents to sync to
uint8_t *subevents
The subevent(s) to synchronize with.
The array must have num_subevents elements.
struct bt_le_per_adv_response_params
#include <bluetooth.h>
Public Members
uint16_t request_event
The periodic event counter of the request the response is sent to. See bt_le_per_adv_sync_recv_info.
Note: The response can be sent up to one periodic interval after the request was received.
uint8_t request_subevent
The subevent counter of the request the response is sent to. See bt_le_per_adv_sync_recv_info.
uint8_t response_subevent
The subevent the response shall be sent in
uint8_t response_slot
The response slot the response shall be sent in
group bt_addr
Bluetooth device address definitions and utilities.
Defines
BT_ADDR_LE_PUBLIC
BT_ADDR_LE_RANDOM
BT_ADDR_LE_PUBLIC_ID
BT_ADDR_LE_RANDOM_ID
BT_ADDR_LE_UNRESOLVED
BT_ADDR_LE_ANONYMOUS
BT_ADDR_SIZE
Length in bytes of a standard Bluetooth address
BT_ADDR_LE_SIZE
Length in bytes of an LE Bluetooth address. The type is not packed, so do not use sizeof().
BT_ADDR_ANY
Bluetooth device “any” address, not a valid address
BT_ADDR_NONE
Bluetooth device “none” address, not a valid address
BT_ADDR_LE_ANY
Bluetooth LE device “any” address, not a valid address
BT_ADDR_LE_NONE
Bluetooth LE device “none” address, not a valid address
BT_ADDR_IS_RPA(a)
Check if a Bluetooth LE random address is resolvable private address.
BT_ADDR_IS_NRPA(a)
Check if a Bluetooth LE random address is a non-resolvable private address.
BT_ADDR_IS_STATIC(a)
Check if a Bluetooth LE random address is a static address.
BT_ADDR_SET_RPA(a)
Set a Bluetooth LE random address as a resolvable private address.
BT_ADDR_SET_NRPA(a)
Set a Bluetooth LE random address as a non-resolvable private address.
BT_ADDR_SET_STATIC(a)
Set a Bluetooth LE random address as a static address.
BT_ADDR_STR_LEN
Recommended length of user string buffer for Bluetooth address.
The recommended length guarantees that the output of the address conversion will not lose information about the address being processed.
BT_ADDR_LE_STR_LEN
Recommended length of user string buffer for Bluetooth LE address.
The recommended length guarantees that the output of the address conversion will not lose information about the address being processed.
Functions
See also:
bt_addr_le_eq
Parameters
• a – First Bluetooth LE device address to compare
• b – Second Bluetooth LE device address to compare
Returns
negative value if a < b, 0 if a == b, else positive
Return values
• 0 – Success. The parsed address is stored in addr.
• -EINVAL – Invalid address string. str is not a well-formed Bluetooth address.
int bt_addr_le_from_str(const char *str, const char *type, bt_addr_le_t *addr)
Convert LE Bluetooth address from string to binary.
Parameters
• str – [in] The string representation of an LE Bluetooth address.
• type – [in] The string representation of the LE Bluetooth address type.
• addr – [out] Address of buffer to store the LE Bluetooth address
Returns
Zero on success or (negative) error code otherwise.
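To illustrate the string-to-binary conversion, the following self-contained sketch parses the colon-separated "XX:XX:XX:XX:XX:XX" address portion the way the helper above is documented to. The little-endian storage order (leftmost printed octet landing in val[5]) reflects Zephyr's bt_addr_t byte-order convention and is an assumption of this sketch; the type-string handling of bt_addr_le_from_str is omitted:

```c
#include <stdint.h>
#include <string.h>

/* Decode one hex digit; returns -1 for a non-hex character. */
static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Parse "XX:XX:XX:XX:XX:XX" into a 6-byte array, storing the leftmost
 * printed octet in val[5] (little-endian, as Zephyr stores addresses).
 * Returns 0 on success, -1 on a malformed string. */
static int addr_from_str(const char *str, uint8_t val[6])
{
    if (strlen(str) != 17) {
        return -1;
    }

    for (int i = 0; i < 6; i++) {
        int hi = hexval(str[i * 3]);
        int lo = hexval(str[i * 3 + 1]);

        if (hi < 0 || lo < 0) {
            return -1;
        }
        if (i < 5 && str[i * 3 + 2] != ':') {
            return -1;
        }
        val[5 - i] = (uint8_t)((hi << 4) | lo);
    }
    return 0;
}
```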
Variables
struct bt_addr_t
#include <addr.h> Bluetooth Device Address
struct bt_addr_le_t
#include <addr.h> Bluetooth LE Device Address
group bt_gap_defines
Bluetooth Generic Access Profile defines and Assigned Numbers.
Defines
BT_COMP_ID_LF
Company Identifiers (see Bluetooth Assigned Numbers)
BT_DATA_FLAGS
EIR/AD data type definitions
BT_DATA_UUID16_SOME
BT_DATA_UUID16_ALL
BT_DATA_UUID32_SOME
BT_DATA_UUID32_ALL
BT_DATA_UUID128_SOME
BT_DATA_UUID128_ALL
BT_DATA_NAME_SHORTENED
BT_DATA_NAME_COMPLETE
BT_DATA_TX_POWER
BT_DATA_SM_TK_VALUE
BT_DATA_SM_OOB_FLAGS
BT_DATA_PERIPHERAL_INT_RANGE
BT_DATA_SOLICIT16
BT_DATA_SOLICIT128
BT_DATA_SVC_DATA16
BT_DATA_PUB_TARGET_ADDR
BT_DATA_RAND_TARGET_ADDR
BT_DATA_GAP_APPEARANCE
BT_DATA_ADV_INT
BT_DATA_LE_BT_DEVICE_ADDRESS
BT_DATA_LE_ROLE
BT_DATA_SIMPLE_PAIRING_HASH
BT_DATA_SIMPLE_PAIRING_RAND
BT_DATA_SOLICIT32
BT_DATA_SVC_DATA32
BT_DATA_SVC_DATA128
BT_DATA_LE_SC_CONFIRM_VALUE
BT_DATA_LE_SC_RANDOM_VALUE
BT_DATA_URI
BT_DATA_INDOOR_POS
BT_DATA_TRANS_DISCOVER_DATA
BT_DATA_LE_SUPPORTED_FEATURES
BT_DATA_CHANNEL_MAP_UPDATE_IND
BT_DATA_MESH_PROV
BT_DATA_MESH_MESSAGE
BT_DATA_MESH_BEACON
BT_DATA_BIG_INFO
BT_DATA_BROADCAST_CODE
BT_DATA_CSIS_RSI
BT_DATA_ADV_INT_LONG
BT_DATA_BROADCAST_NAME
BT_DATA_ENCRYPTED_AD_DATA
BT_DATA_3D_INFO
BT_DATA_MANUFACTURER_DATA
BT_LE_AD_LIMITED
BT_LE_AD_GENERAL
BT_LE_AD_NO_BREDR
BT_APPEARANCE_UNKNOWN
BT_APPEARANCE_GENERIC_PHONE
BT_APPEARANCE_GENERIC_COMPUTER
BT_APPEARANCE_COMPUTER_DESKTOP_WORKSTATION
BT_APPEARANCE_COMPUTER_SERVER_CLASS
BT_APPEARANCE_COMPUTER_LAPTOP
BT_APPEARANCE_COMPUTER_HANDHELD_PCPDA
BT_APPEARANCE_COMPUTER_PALMSIZE_PCPDA
BT_APPEARANCE_COMPUTER_WEARABLE_COMPUTER
BT_APPEARANCE_COMPUTER_TABLET
BT_APPEARANCE_COMPUTER_DOCKING_STATION
BT_APPEARANCE_COMPUTER_ALL_IN_ONE
BT_APPEARANCE_COMPUTER_BLADE_SERVER
BT_APPEARANCE_COMPUTER_CONVERTIBLE
BT_APPEARANCE_COMPUTER_DETACHABLE
BT_APPEARANCE_COMPUTER_IOT_GATEWAY
BT_APPEARANCE_COMPUTER_MINI_PC
BT_APPEARANCE_COMPUTER_STICK_PC
BT_APPEARANCE_GENERIC_WATCH
BT_APPEARANCE_SPORTS_WATCH
BT_APPEARANCE_SMARTWATCH
BT_APPEARANCE_GENERIC_CLOCK
BT_APPEARANCE_GENERIC_DISPLAY
BT_APPEARANCE_GENERIC_REMOTE
BT_APPEARANCE_GENERIC_EYEGLASSES
BT_APPEARANCE_GENERIC_TAG
BT_APPEARANCE_GENERIC_KEYRING
BT_APPEARANCE_GENERIC_MEDIA_PLAYER
BT_APPEARANCE_GENERIC_BARCODE_SCANNER
BT_APPEARANCE_GENERIC_THERMOMETER
BT_APPEARANCE_THERMOMETER_EAR
BT_APPEARANCE_GENERIC_HEART_RATE
BT_APPEARANCE_HEART_RATE_BELT
BT_APPEARANCE_GENERIC_BLOOD_PRESSURE
BT_APPEARANCE_BLOOD_PRESSURE_ARM
BT_APPEARANCE_BLOOD_PRESSURE_WRIST
BT_APPEARANCE_GENERIC_HID
BT_APPEARANCE_HID_KEYBOARD
BT_APPEARANCE_HID_MOUSE
BT_APPEARANCE_HID_JOYSTICK
BT_APPEARANCE_HID_GAMEPAD
BT_APPEARANCE_HID_DIGITIZER_TABLET
BT_APPEARANCE_HID_CARD_READER
BT_APPEARANCE_HID_DIGITAL_PEN
BT_APPEARANCE_HID_BARCODE_SCANNER
BT_APPEARANCE_HID_TOUCHPAD
BT_APPEARANCE_HID_PRESENTATION_REMOTE
BT_APPEARANCE_GENERIC_GLUCOSE
BT_APPEARANCE_GENERIC_WALKING
BT_APPEARANCE_WALKING_IN_SHOE
BT_APPEARANCE_WALKING_ON_SHOE
BT_APPEARANCE_WALKING_ON_HIP
BT_APPEARANCE_GENERIC_CYCLING
BT_APPEARANCE_CYCLING_COMPUTER
BT_APPEARANCE_CYCLING_SPEED
BT_APPEARANCE_CYCLING_CADENCE
BT_APPEARANCE_CYCLING_POWER
BT_APPEARANCE_CYCLING_SPEED_CADENCE
BT_APPEARANCE_GENERIC_CONTROL_DEVICE
BT_APPEARANCE_CONTROL_SWITCH
BT_APPEARANCE_CONTROL_MULTI_SWITCH
BT_APPEARANCE_CONTROL_BUTTON
BT_APPEARANCE_CONTROL_SLIDER
BT_APPEARANCE_CONTROL_ROTARY_SWITCH
BT_APPEARANCE_CONTROL_TOUCH_PANEL
BT_APPEARANCE_CONTROL_SINGLE_SWITCH
BT_APPEARANCE_CONTROL_DOUBLE_SWITCH
BT_APPEARANCE_CONTROL_TRIPLE_SWITCH
BT_APPEARANCE_CONTROL_BATTERY_SWITCH
BT_APPEARANCE_CONTROL_ENERGY_HARVESTING_SWITCH
BT_APPEARANCE_CONTROL_PUSH_BUTTON
BT_APPEARANCE_GENERIC_NETWORK_DEVICE
BT_APPEARANCE_NETWORK_ACCESS_POINT
BT_APPEARANCE_NETWORK_MESH_DEVICE
BT_APPEARANCE_NETWORK_MESH_PROXY
BT_APPEARANCE_GENERIC_SENSOR
BT_APPEARANCE_SENSOR_MOTION
BT_APPEARANCE_SENSOR_AIR_QUALITY
BT_APPEARANCE_SENSOR_TEMPERATURE
BT_APPEARANCE_SENSOR_HUMIDITY
BT_APPEARANCE_SENSOR_LEAK
BT_APPEARANCE_SENSOR_SMOKE
BT_APPEARANCE_SENSOR_OCCUPANCY
BT_APPEARANCE_SENSOR_CONTACT
BT_APPEARANCE_SENSOR_CARBON_MONOXIDE
BT_APPEARANCE_SENSOR_CARBON_DIOXIDE
BT_APPEARANCE_SENSOR_AMBIENT_LIGHT
BT_APPEARANCE_SENSOR_ENERGY
BT_APPEARANCE_SENSOR_COLOR_LIGHT
BT_APPEARANCE_SENSOR_RAIN
BT_APPEARANCE_SENSOR_FIRE
BT_APPEARANCE_SENSOR_WIND
BT_APPEARANCE_SENSOR_PROXIMITY
BT_APPEARANCE_SENSOR_MULTI
BT_APPEARANCE_SENSOR_FLUSH_MOUNTED
BT_APPEARANCE_SENSOR_CEILING_MOUNTED
BT_APPEARANCE_SENSOR_WALL_MOUNTED
BT_APPEARANCE_MULTISENSOR
BT_APPEARANCE_SENSOR_ENERGY_METER
BT_APPEARANCE_SENSOR_FLAME_DETECTOR
BT_APPEARANCE_SENSOR_VEHICLE_TIRE_PRESSURE
BT_APPEARANCE_GENERIC_LIGHT_FIXTURES
BT_APPEARANCE_LIGHT_FIXTURES_WALL
BT_APPEARANCE_LIGHT_FIXTURES_CEILING
BT_APPEARANCE_LIGHT_FIXTURES_FLOOR
BT_APPEARANCE_LIGHT_FIXTURES_CABINET
BT_APPEARANCE_LIGHT_FIXTURES_DESK
BT_APPEARANCE_LIGHT_FIXTURES_TROFFER
BT_APPEARANCE_LIGHT_FIXTURES_PENDANT
BT_APPEARANCE_LIGHT_FIXTURES_IN_GROUND
BT_APPEARANCE_LIGHT_FIXTURES_FLOOD
BT_APPEARANCE_LIGHT_FIXTURES_UNDERWATER
BT_APPEARANCE_LIGHT_FIXTURES_BOLLARD_WITH
BT_APPEARANCE_LIGHT_FIXTURES_PATHWAY
BT_APPEARANCE_LIGHT_FIXTURES_GARDEN
BT_APPEARANCE_LIGHT_FIXTURES_POLE_TOP
BT_APPEARANCE_SPOT_LIGHT
BT_APPEARANCE_LIGHT_FIXTURES_LINEAR
BT_APPEARANCE_LIGHT_FIXTURES_STREET
BT_APPEARANCE_LIGHT_FIXTURES_SHELVES
BT_APPEARANCE_LIGHT_FIXTURES_BAY
BT_APPEARANCE_LIGHT_FIXTURES_EMERGENCY_EXIT
BT_APPEARANCE_LIGHT_FIXTURES_CONTROLLER
BT_APPEARANCE_LIGHT_FIXTURES_DRIVER
BT_APPEARANCE_LIGHT_FIXTURES_BULB
BT_APPEARANCE_LIGHT_FIXTURES_LOW_BAY
BT_APPEARANCE_LIGHT_FIXTURES_HIGH_BAY
BT_APPEARANCE_GENERIC_FAN
BT_APPEARANCE_FAN_CEILING
BT_APPEARANCE_FAN_AXIAL
BT_APPEARANCE_FAN_EXHAUST
BT_APPEARANCE_FAN_PEDESTAL
BT_APPEARANCE_FAN_DESK
BT_APPEARANCE_FAN_WALL
BT_APPEARANCE_GENERIC_HVAC
BT_APPEARANCE_HVAC_THERMOSTAT
BT_APPEARANCE_HVAC_HUMIDIFIER
BT_APPEARANCE_HVAC_DEHUMIDIFIER
BT_APPEARANCE_HVAC_HEATER
BT_APPEARANCE_HVAC_RADIATOR
BT_APPEARANCE_HVAC_BOILER
BT_APPEARANCE_HVAC_HEAT_PUMP
BT_APPEARANCE_HVAC_INFRARED_HEATER
BT_APPEARANCE_HVAC_RADIANT_PANEL_HEATER
BT_APPEARANCE_HVAC_FAN_HEATER
BT_APPEARANCE_HVAC_AIR_CURTAIN
BT_APPEARANCE_GENERIC_AIR_CONDITIONING
BT_APPEARANCE_GENERIC_HUMIDIFIER
BT_APPEARANCE_GENERIC_HEATING
BT_APPEARANCE_HEATING_RADIATOR
BT_APPEARANCE_HEATING_BOILER
BT_APPEARANCE_HEATING_HEAT_PUMP
BT_APPEARANCE_HEATING_INFRARED_HEATER
BT_APPEARANCE_HEATING_RADIANT_PANEL_HEATER
BT_APPEARANCE_HEATING_FAN_HEATER
BT_APPEARANCE_HEATING_AIR_CURTAIN
BT_APPEARANCE_GENERIC_ACCESS_CONTROL
BT_APPEARANCE_CONTROL_ACCESS_DOOR
BT_APPEARANCE_CONTROL_GARAGE_DOOR
BT_APPEARANCE_CONTROL_EMERGENCY_EXIT_DOOR
BT_APPEARANCE_CONTROL_ACCESS_LOCK
BT_APPEARANCE_CONTROL_ELEVATOR
BT_APPEARANCE_CONTROL_WINDOW
BT_APPEARANCE_CONTROL_ENTRANCE_GATE
BT_APPEARANCE_CONTROL_DOOR_LOCK
BT_APPEARANCE_CONTROL_LOCKER
BT_APPEARANCE_GENERIC_MOTORIZED_DEVICE
BT_APPEARANCE_MOTORIZED_GATE
BT_APPEARANCE_MOTORIZED_AWNING
BT_APPEARANCE_MOTORIZED_BLINDS_OR_SHADES
BT_APPEARANCE_MOTORIZED_CURTAINS
BT_APPEARANCE_MOTORIZED_SCREEN
BT_APPEARANCE_GENERIC_POWER_DEVICE
BT_APPEARANCE_POWER_OUTLET
BT_APPEARANCE_POWER_STRIP
BT_APPEARANCE_POWER_PLUG
BT_APPEARANCE_POWER_SUPPLY
BT_APPEARANCE_POWER_LED_DRIVER
BT_APPEARANCE_POWER_FLUORESCENT_LAMP_GEAR
BT_APPEARANCE_POWER_HID_LAMP_GEAR
BT_APPEARANCE_POWER_CHARGE_CASE
BT_APPEARANCE_POWER_POWER_BANK
BT_APPEARANCE_GENERIC_LIGHT_SOURCE
BT_APPEARANCE_LIGHT_SOURCE_INCANDESCENT_BULB
BT_APPEARANCE_LIGHT_SOURCE_LED_LAMP
BT_APPEARANCE_LIGHT_SOURCE_HID_LAMP
BT_APPEARANCE_LIGHT_SOURCE_FLUORESCENT_LAMP
BT_APPEARANCE_LIGHT_SOURCE_LED_ARRAY
BT_APPEARANCE_LIGHT_SOURCE_MULTICOLOR_LED_ARRAY
BT_APPEARANCE_LIGHT_SOURCE_LOW_VOLTAGE_HALOGEN
BT_APPEARANCE_LIGHT_SOURCE_OLED
BT_APPEARANCE_GENERIC_WINDOW_COVERING
BT_APPEARANCE_WINDOW_SHADES
BT_APPEARANCE_WINDOW_BLINDS
BT_APPEARANCE_WINDOW_AWNING
BT_APPEARANCE_WINDOW_CURTAIN
BT_APPEARANCE_WINDOW_EXTERIOR_SHUTTER
BT_APPEARANCE_WINDOW_EXTERIOR_SCREEN
BT_APPEARANCE_GENERIC_AUDIO_SINK
BT_APPEARANCE_AUDIO_SINK_STANDALONE_SPEAKER
BT_APPEARANCE_AUDIO_SINK_SOUNDBAR
BT_APPEARANCE_AUDIO_SINK_BOOKSHELF_SPEAKER
BT_APPEARANCE_AUDIO_SINK_STANDMOUNTED_SPEAKER
BT_APPEARANCE_AUDIO_SINK_SPEAKERPHONE
BT_APPEARANCE_GENERIC_AUDIO_SOURCE
BT_APPEARANCE_AUDIO_SOURCE_MICROPHONE
BT_APPEARANCE_AUDIO_SOURCE_ALARM
BT_APPEARANCE_AUDIO_SOURCE_BELL
BT_APPEARANCE_AUDIO_SOURCE_HORN
BT_APPEARANCE_AUDIO_SOURCE_BROADCASTING_DEVICE
BT_APPEARANCE_AUDIO_SOURCE_SERVICE_DESK
BT_APPEARANCE_AUDIO_SOURCE_KIOSK
BT_APPEARANCE_AUDIO_SOURCE_BROADCASTING_ROOM
BT_APPEARANCE_AUDIO_SOURCE_AUDITORIUM
BT_APPEARANCE_GENERIC_MOTORIZED_VEHICLE
BT_APPEARANCE_VEHICLE_CAR
BT_APPEARANCE_VEHICLE_LARGE_GOODS
BT_APPEARANCE_VEHICLE_TWO_WHEELED
BT_APPEARANCE_VEHICLE_MOTORBIKE
BT_APPEARANCE_VEHICLE_SCOOTER
BT_APPEARANCE_VEHICLE_MOPED
BT_APPEARANCE_VEHICLE_THREE_WHEELED
BT_APPEARANCE_VEHICLE_LIGHT
BT_APPEARANCE_VEHICLE_QUAD_BIKE
BT_APPEARANCE_VEHICLE_MINIBUS
BT_APPEARANCE_VEHICLE_BUS
BT_APPEARANCE_VEHICLE_TROLLEY
BT_APPEARANCE_VEHICLE_AGRICULTURAL
BT_APPEARANCE_VEHICLE_CAMPER_OR_CARAVAN
BT_APPEARANCE_VEHICLE_RECREATIONAL
BT_APPEARANCE_GENERIC_DOMESTIC_APPLIANCE
BT_APPEARANCE_APPLIANCE_REFRIGERATOR
BT_APPEARANCE_APPLIANCE_FREEZER
BT_APPEARANCE_APPLIANCE_OVEN
BT_APPEARANCE_APPLIANCE_MICROWAVE
BT_APPEARANCE_APPLIANCE_TOASTER
BT_APPEARANCE_APPLIANCE_WASHING_MACHINE
BT_APPEARANCE_APPLIANCE_DRYER
BT_APPEARANCE_APPLIANCE_COFFEE_MAKER
BT_APPEARANCE_APPLIANCE_CLOTHES_IRON
BT_APPEARANCE_APPLIANCE_CURLING_IRON
BT_APPEARANCE_APPLIANCE_HAIR_DRYER
BT_APPEARANCE_APPLIANCE_VACUUM_CLEANER
BT_APPEARANCE_APPLIANCE_ROBOTIC_VACUUM_CLEANER
BT_APPEARANCE_APPLIANCE_RICE_COOKER
BT_APPEARANCE_APPLIANCE_CLOTHES_STEAMER
BT_APPEARANCE_GENERIC_WEARABLE_AUDIO_DEVICE
BT_APPEARANCE_WEARABLE_AUDIO_DEVICE_EARBUD
BT_APPEARANCE_WEARABLE_AUDIO_DEVICE_HEADSET
BT_APPEARANCE_WEARABLE_AUDIO_DEVICE_HEADPHONES
BT_APPEARANCE_WEARABLE_AUDIO_DEVICE_NECK_BAND
BT_APPEARANCE_GENERIC_AIRCRAFT
BT_APPEARANCE_AIRCRAFT_LIGHT
BT_APPEARANCE_AIRCRAFT_MICROLIGHT
BT_APPEARANCE_AIRCRAFT_PARAGLIDER
BT_APPEARANCE_AIRCRAFT_LARGE_PASSENGER
BT_APPEARANCE_GENERIC_AV_EQUIPMENT
BT_APPEARANCE_AV_EQUIPMENT_AMPLIFIER
BT_APPEARANCE_AV_EQUIPMENT_RECEIVER
BT_APPEARANCE_AV_EQUIPMENT_RADIO
BT_APPEARANCE_AV_EQUIPMENT_TUNER
BT_APPEARANCE_AV_EQUIPMENT_TURNTABLE
BT_APPEARANCE_AV_EQUIPMENT_CD_PLAYER
BT_APPEARANCE_AV_EQUIPMENT_DVD_PLAYER
BT_APPEARANCE_AV_EQUIPMENT_BLURAY_PLAYER
BT_APPEARANCE_AV_EQUIPMENT_OPTICAL_DISC_PLAYER
BT_APPEARANCE_AV_EQUIPMENT_SET_TOP_BOX
BT_APPEARANCE_GENERIC_DISPLAY_EQUIPMENT
BT_APPEARANCE_DISPLAY_EQUIPMENT_TELEVISION
BT_APPEARANCE_DISPLAY_EQUIPMENT_MONITOR
BT_APPEARANCE_DISPLAY_EQUIPMENT_PROJECTOR
BT_APPEARANCE_GENERIC_HEARING_AID
BT_APPEARANCE_HEARING_AID_IN_EAR
BT_APPEARANCE_HEARING_AID_BEHIND_EAR
BT_APPEARANCE_HEARING_AID_COCHLEAR_IMPLANT
BT_APPEARANCE_GENERIC_GAMING
BT_APPEARANCE_HOME_VIDEO_GAME_CONSOLE
BT_APPEARANCE_PORTABLE_HANDHELD_CONSOLE
BT_APPEARANCE_GENERIC_SIGNAGE
BT_APPEARANCE_SIGNAGE_DIGITAL
BT_APPEARANCE_SIGNAGE_ELECTRONIC_LABEL
BT_APPEARANCE_GENERIC_PULSE_OXIMETER
BT_APPEARANCE_PULSE_OXIMETER_FINGERTIP
BT_APPEARANCE_PULSE_OXIMETER_WRIST
BT_APPEARANCE_GENERIC_WEIGHT_SCALE
BT_APPEARANCE_GENERIC_PERSONAL_MOBILITY_DEVICE
BT_APPEARANCE_MOBILITY_POWERED_WHEELCHAIR
BT_APPEARANCE_MOBILITY_SCOOTER
BT_APPEARANCE_CONTINUOUS_GLUCOSE_MONITOR
BT_APPEARANCE_GENERIC_INSULIN_PUMP
BT_APPEARANCE_INSULIN_PUMP_DURABLE
BT_APPEARANCE_INSULIN_PUMP_PATCH
BT_APPEARANCE_INSULIN_PEN
BT_APPEARANCE_GENERIC_MEDICATION_DELIVERY
BT_APPEARANCE_GENERIC_SPIROMETER
BT_APPEARANCE_SPIROMETER_HANDHELD
BT_APPEARANCE_GENERIC_OUTDOOR_SPORTS
BT_APPEARANCE_OUTDOOR_SPORTS_LOCATION
BT_APPEARANCE_OUTDOOR_SPORTS_LOCATION_AND_NAV
BT_APPEARANCE_OUTDOOR_SPORTS_LOCATION_POD
BT_APPEARANCE_OUTDOOR_SPORTS_LOCATION_POD_AND_NAV
BT_GAP_SCAN_FAST_INTERVAL
BT_GAP_SCAN_FAST_WINDOW
BT_GAP_SCAN_SLOW_INTERVAL_1
BT_GAP_SCAN_SLOW_WINDOW_1
BT_GAP_SCAN_SLOW_INTERVAL_2
BT_GAP_SCAN_SLOW_WINDOW_2
BT_GAP_ADV_FAST_INT_MIN_1
BT_GAP_ADV_FAST_INT_MAX_1
BT_GAP_ADV_FAST_INT_MIN_2
BT_GAP_ADV_FAST_INT_MAX_2
BT_GAP_ADV_SLOW_INT_MIN
BT_GAP_ADV_SLOW_INT_MAX
BT_GAP_PER_ADV_FAST_INT_MIN_1
BT_GAP_PER_ADV_FAST_INT_MAX_1
BT_GAP_PER_ADV_FAST_INT_MIN_2
BT_GAP_PER_ADV_FAST_INT_MAX_2
BT_GAP_PER_ADV_SLOW_INT_MIN
BT_GAP_PER_ADV_SLOW_INT_MAX
BT_GAP_INIT_CONN_INT_MIN
BT_GAP_INIT_CONN_INT_MAX
BT_GAP_ADV_MAX_ADV_DATA_LEN
Maximum advertising data length.
BT_GAP_ADV_MAX_EXT_ADV_DATA_LEN
Maximum extended advertising data length.
Note: The maximum advertising data length that can be sent by an extended advertiser is
defined by the controller.
BT_GAP_TX_POWER_INVALID
BT_GAP_RSSI_INVALID
BT_GAP_SID_INVALID
BT_GAP_NO_TIMEOUT
BT_GAP_ADV_HIGH_DUTY_CYCLE_MAX_TIMEOUT
BT_GAP_DATA_LEN_DEFAULT
BT_GAP_DATA_LEN_MAX
BT_GAP_DATA_TIME_DEFAULT
BT_GAP_DATA_TIME_MAX
BT_GAP_SID_MAX
BT_GAP_PER_ADV_MAX_SKIP
BT_GAP_PER_ADV_MIN_TIMEOUT
BT_GAP_PER_ADV_MAX_TIMEOUT
BT_GAP_PER_ADV_MIN_INTERVAL
Minimum Periodic Advertising Interval (N * 1.25 ms)
BT_GAP_PER_ADV_MAX_INTERVAL
Maximum Periodic Advertising Interval (N * 1.25 ms)
BT_GAP_PER_ADV_INTERVAL_TO_MS(interval)
Convert a periodic advertising interval (N * 1.25 ms) to milliseconds.
The factor 5 / 4 represents the 1.25 ms unit.
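The 1.25 ms unit makes the conversion exact in integer arithmetic whenever the interval is a multiple of 4 units; otherwise the division truncates. A stand-alone equivalent of the macro (hypothetical name) is:

```c
#include <stdint.h>

/* Same arithmetic as the interval-to-ms conversion above: each unit is
 * 1.25 ms, i.e. 5/4 ms, so multiply by 5 and divide by 4. The division
 * truncates for intervals that are not a multiple of 4 units. */
#define PER_ADV_INTERVAL_TO_MS(interval) ((uint32_t)(interval) * 5U / 4U)
```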
BT_LE_SUPP_FEAT_40_ENCODE(w64)
Encode the 40 least significant bits of the 64-bit LE Supported Features value into array values in little-endian format.
Helper macro to encode the 40 least significant bits of the 64-bit LE Supported Features value into advertising data. The number of bits encoded corresponds to the number of LE Supported Features defined by the BT 5.3 Core Specification.
Example of how to encode 0x000000DFF00DF00D into advertising data:
BT_DATA_BYTES(BT_DATA_LE_SUPPORTED_FEATURES, BT_LE_SUPP_FEAT_40_ENCODE(0x000000DFF00DF00D))
Parameters
• w64 – LE Supported Features value (64-bits)
Returns
The comma separated values for LE Supported Features value that may be used
directly as an argument for BT_DATA_BYTES.
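The little-endian byte layout the macro produces can be illustrated with a stand-alone function (a hypothetical name, not the Zephyr macro itself) that extracts the 5 least significant bytes of the 64-bit value:

```c
#include <stdint.h>

/* Write the 5 least significant bytes of a 64-bit features value into
 * out[] in little-endian order, i.e. out[0] is the least significant
 * byte — the same byte layout used in the advertising payload. */
static void supp_feat_40_encode(uint64_t w64, uint8_t out[5])
{
    for (int i = 0; i < 5; i++) {
        out[i] = (uint8_t)(w64 >> (8 * i));
    }
}
```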
BT_LE_SUPP_FEAT_32_ENCODE(w64)
Encode the 4 least significant bytes of the 64-bit LE Supported Features value into a 4-byte array in little-endian format.
Helper macro to encode the 64-bit LE Supported Features value into advertising data. The macro encodes the 4 least significant bytes; the other 4 bytes are not encoded.
Example of how to encode 0x000000DFF00DF00D into advertising data:
BT_DATA_BYTES(BT_DATA_LE_SUPPORTED_FEATURES, BT_LE_SUPP_FEAT_32_ENCODE(0x000000DFF00DF00D))
Parameters
• w64 – LE Supported Features value (64-bits)
Returns
The comma separated values for LE Supported Features value that may be used
directly as an argument for BT_DATA_BYTES.
BT_LE_SUPP_FEAT_24_ENCODE(w64)
Encode the 3 least significant bytes of the 64-bit LE Supported Features value into a 3-byte array in little-endian format.
Helper macro to encode the 64-bit LE Supported Features value into advertising data. The macro encodes the 3 least significant bytes; the other 5 bytes are not encoded.
Example of how to encode 0x000000DFF00DF00D into advertising data:
BT_DATA_BYTES(BT_DATA_LE_SUPPORTED_FEATURES, BT_LE_SUPP_FEAT_24_ENCODE(0x000000DFF00DF00D))
Parameters
• w64 – LE Supported Features value (64-bits)
Returns
The comma separated values for LE Supported Features value that may be used
directly as an argument for BT_DATA_BYTES.
BT_LE_SUPP_FEAT_16_ENCODE(w64)
Encode 2 least significant bytes of 64-bit LE Supported Features into 2 bytes long array of
values in little-endian format.
Helper macro to encode 64-bit LE Supported Features value into advertising data. The macro
encodes the 2 least significant bytes into advertising data. The other 6 bytes are not encoded.
Example of how to encode the 0x000000DFF00DF00D into advertising data.
BT_DATA_BYTES(BT_DATA_LE_SUPPORTED_FEATURES, BT_LE_SUPP_FEAT_16_ENCODE(0x000000DFF00DF00D))
Parameters
• w64 – LE Supported Features value (64-bits)
Returns
The comma separated values for LE Supported Features value that may be used
directly as an argument for BT_DATA_BYTES.
BT_LE_SUPP_FEAT_8_ENCODE(w64)
Encode the least significant byte of 64-bit LE Supported Features into single byte long array.
Helper macro to encode 64-bit LE Supported Features value into advertising data. The macro
encodes the least significant byte into advertising data. Other 7 bytes are not encoded.
Example of how to encode the 0x000000DFF00DF00D into advertising data.
BT_DATA_BYTES(BT_DATA_LE_SUPPORTED_FEATURES, BT_LE_SUPP_FEAT_8_ENCODE(0x000000DFF00DF00D))
Parameters
• w64 – LE Supported Features value (64-bits)
Returns
The value of least significant byte of LE Supported Features value that may be
used directly as an argument for BT_DATA_BYTES.
BT_LE_SUPP_FEAT_VALIDATE(w64)
Validate that an LE Supported Features value does not use bits that are reserved for future
use.
Helper macro to check that bits 40-63 of w64 are zero. The macro is compliant with the BT 5.3 Core
Specification, where only bits 0-39 have assigned values. An invalid value results in a build-time
error.
Enums
enum [anonymous]
LE PHY types
Values:
enumerator BT_GAP_LE_PHY_NONE = 0
Convenience macro for when no PHY is set.
enum [anonymous]
Advertising PDU types
Values:
enum [anonymous]
Advertising PDU properties
Values:
enum [anonymous]
Constant Tone Extension (CTE) types
Values:
enum [anonymous]
Peripheral sleep clock accuracy (SCA) in ppm (parts per million)
Values:
enumerator BT_GAP_SCA_UNKNOWN = 0
enumerator BT_GAP_SCA_251_500 = 0
enumerator BT_GAP_SCA_151_250 = 1
enumerator BT_GAP_SCA_101_150 = 2
enumerator BT_GAP_SCA_76_100 = 3
enumerator BT_GAP_SCA_51_75 = 4
enumerator BT_GAP_SCA_31_50 = 5
enumerator BT_GAP_SCA_21_30 = 6
enumerator BT_GAP_SCA_0_20 = 7
The GATT layer manages the service database, providing APIs for service registration and attribute
declaration.
Services can be registered using the bt_gatt_service_register() API, which takes the bt_gatt_service
struct that provides the list of attributes the service contains. The helper macro BT_GATT_SERVICE() can
be used to declare a service.
Attributes can be declared using the bt_gatt_attr struct or using one of the helper macros:
BT_GATT_PRIMARY_SERVICE
Declares a Primary Service.
BT_GATT_SECONDARY_SERVICE
Declares a Secondary Service.
BT_GATT_INCLUDE_SERVICE
Declares an Include Service.
BT_GATT_CHARACTERISTIC
Declares a Characteristic.
BT_GATT_DESCRIPTOR
Declares a Descriptor.
BT_GATT_ATTRIBUTE
Declares an Attribute.
BT_GATT_CCC
Declares a Client Characteristic Configuration.
BT_GATT_CEP
Declares a Characteristic Extended Properties.
BT_GATT_CUD
Declares a Characteristic User Description.
Each attribute contains a UUID, which describes its type, a read callback, a write callback, and a set
of permissions. Both read and write callbacks can be set to NULL if the attribute permissions don't allow
the respective operations.
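As an illustration, the declaration macros above can be combined into a statically defined service. This is a minimal sketch only; the 16-bit UUIDs (0xFFF0, 0xFFF1), the service name, and the backing value are hypothetical placeholders, not assigned values from any specification.

```c
#include <zephyr/bluetooth/gatt.h>
#include <zephyr/bluetooth/uuid.h>

/* Hypothetical value backing the characteristic. */
static uint8_t sensor_value;

static ssize_t read_sensor(struct bt_conn *conn, const struct bt_gatt_attr *attr,
			   void *buf, uint16_t len, uint16_t offset)
{
	/* Encode the attribute user_data into the response buffer. */
	return bt_gatt_attr_read(conn, attr, buf, len, offset,
				 attr->user_data, sizeof(sensor_value));
}

/* Statically define and register the service at build time. */
BT_GATT_SERVICE_DEFINE(sensor_svc,
	BT_GATT_PRIMARY_SERVICE(BT_UUID_DECLARE_16(0xFFF0)),
	BT_GATT_CHARACTERISTIC(BT_UUID_DECLARE_16(0xFFF1),
			       BT_GATT_CHRC_READ | BT_GATT_CHRC_NOTIFY,
			       BT_GATT_PERM_READ,
			       read_sensor, NULL, &sensor_value),
	/* CCC descriptor so clients can enable notifications. */
	BT_GATT_CCC(NULL, BT_GATT_PERM_READ | BT_GATT_PERM_WRITE),
);
```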
Note: 32-bit UUIDs are not supported in GATT. All 32-bit UUIDs shall be converted to 128-bit UUIDs
when the UUID is contained in an ATT PDU.
Note: Attribute read and write callbacks are called directly from RX Thread thus it is not recommended
to block for long periods of time in them.
Attribute value changes can be notified using the bt_gatt_notify() API; alternatively, there is
bt_gatt_notify_cb(), where it is possible to pass a callback to be called when it is necessary to
know the exact instant when the data has been transmitted over the air. Indications are supported
by the bt_gatt_indicate() API.
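For instance, a value change could be pushed to all subscribed peers as sketched below. The service name sensor_svc and the attribute index are hypothetical; attrs[1] is assumed to be the Characteristic Declaration of a service defined with BT_GATT_SERVICE_DEFINE.

```c
/* Sketch: notify all peers that enabled notifications via the CCC. */
void sensor_value_changed(uint8_t new_value)
{
	/* conn == NULL broadcasts to every connection with notifications
	 * enabled; sensor_svc is an assumed statically defined service. */
	int err = bt_gatt_notify(NULL, &sensor_svc.attrs[1],
				 &new_value, sizeof(new_value));
	if (err) {
		printk("Notify failed (err %d)\n", err);
	}
}
```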
Client procedures can be enabled with the configuration option CONFIG_BT_GATT_CLIENT.
Discover procedures can be initiated using the bt_gatt_discover() API, which takes the
bt_gatt_discover_params struct describing the type of discovery. The parameters also serve as a
filter: when the uuid field is set, only matching attributes are discovered, whereas setting it to
NULL allows all attributes to be discovered.
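A discovery of all primary services might look like the following sketch; the handle range and the callback behavior are illustrative.

```c
static struct bt_gatt_discover_params discover_params;

static uint8_t discover_func(struct bt_conn *conn, const struct bt_gatt_attr *attr,
			     struct bt_gatt_discover_params *params)
{
	if (attr == NULL) {
		/* Discovery complete. */
		return BT_GATT_ITER_STOP;
	}
	printk("Found attribute, handle %u\n", attr->handle);
	return BT_GATT_ITER_CONTINUE;
}

static void discover_primary(struct bt_conn *conn)
{
	discover_params.uuid = NULL;		/* NULL: discover all services */
	discover_params.func = discover_func;
	discover_params.start_handle = 0x0001;
	discover_params.end_handle = 0xffff;
	discover_params.type = BT_GATT_DISCOVER_PRIMARY;

	int err = bt_gatt_discover(conn, &discover_params);
	if (err) {
		printk("Discover failed (err %d)\n", err);
	}
}
```

Note that discover_params must remain valid until the procedure completes, which is why it is declared static here.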
Read procedures are supported by the bt_gatt_read() API, which takes the bt_gatt_read_params struct
as parameters. One or more attributes can be set in the parameters, though setting multiple handles
requires the option CONFIG_BT_GATT_READ_MULTIPLE.
Write procedures are supported by the bt_gatt_write() API, which takes the bt_gatt_write_params
struct as parameters. If the write operation doesn't require a response, the
bt_gatt_write_without_response() or bt_gatt_write_without_response_cb() APIs can be used, with the
latter working similarly to bt_gatt_notify_cb().
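A command-style write without a response could be issued as in this sketch; the handle 0x0010 is a hypothetical attribute handle that would normally be obtained from discovery.

```c
/* Sketch: write data to a remote attribute without waiting for a response. */
static void send_command(struct bt_conn *conn)
{
	static const uint8_t cmd[] = { 0x01, 0x02 };

	/* The last argument selects a signed write; false means a plain
	 * ATT Write Without Response. */
	int err = bt_gatt_write_without_response(conn, 0x0010, cmd,
						 sizeof(cmd), false);
	if (err) {
		printk("Write without response failed (err %d)\n", err);
	}
}
```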
Subscriptions to notifications and indications can be initiated using the bt_gatt_subscribe() API,
which takes bt_gatt_subscribe_params as parameters. Multiple subscriptions to the same attribute are
supported, so multiple notify callbacks may be triggered for the same attribute. Subscriptions can be
removed using the bt_gatt_unsubscribe() API.
Note: When a subscription is removed, the notify callback is called with the data set to NULL.
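Subscribing to notifications could be sketched as follows; the value and CCC handles are hypothetical and would in practice be obtained from discovery.

```c
static struct bt_gatt_subscribe_params subscribe_params;

static uint8_t notify_func(struct bt_conn *conn,
			   struct bt_gatt_subscribe_params *params,
			   const void *data, uint16_t length)
{
	if (data == NULL) {
		/* NULL data: the subscription has been removed. */
		return BT_GATT_ITER_STOP;
	}
	printk("Notification, %u bytes\n", length);
	return BT_GATT_ITER_CONTINUE;
}

static void subscribe_to_value(struct bt_conn *conn)
{
	subscribe_params.notify = notify_func;
	subscribe_params.value = BT_GATT_CCC_NOTIFY;
	subscribe_params.value_handle = 0x0011;	/* hypothetical handle */
	subscribe_params.ccc_handle = 0x0012;	/* hypothetical handle */

	int err = bt_gatt_subscribe(conn, &subscribe_params);
	if (err && err != -EALREADY) {
		printk("Subscribe failed (err %d)\n", err);
	}
}
```

Because notifications are asynchronous, subscribe_params must remain valid for as long as the subscription is active.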
API Reference
group bt_gatt
Generic Attribute Profile (GATT)
Defines
BT_GATT_ERR(_att_err)
Construct error return value for attribute read and write callbacks.
Parameters
• _att_err – ATT error code
Returns
Appropriate error code for the attribute callbacks.
BT_GATT_CHRC_BROADCAST
Characteristic broadcast property.
Characteristic Properties Bit field values
If set, permits broadcasts of the Characteristic Value using Server Characteristic Configuration
Descriptor.
BT_GATT_CHRC_READ
Characteristic read property.
If set, permits reads of the Characteristic Value.
BT_GATT_CHRC_WRITE_WITHOUT_RESP
Characteristic write without response property.
If set, permits write of the Characteristic Value without response.
BT_GATT_CHRC_WRITE
Characteristic write with response property.
If set, permits write of the Characteristic Value with response.
BT_GATT_CHRC_NOTIFY
Characteristic notify property.
If set, permits notifications of a Characteristic Value without acknowledgment.
BT_GATT_CHRC_INDICATE
Characteristic indicate property.
If set, permits indications of a Characteristic Value with acknowledgment.
BT_GATT_CHRC_AUTH
Characteristic Authenticated Signed Writes property.
If set, permits signed writes to the Characteristic Value.
BT_GATT_CHRC_EXT_PROP
Characteristic Extended Properties property.
If set, additional characteristic properties are defined in the Characteristic Extended Properties
Descriptor.
BT_GATT_CEP_RELIABLE_WRITE
Characteristic Extended Properties Bit field values
BT_GATT_CEP_WRITABLE_AUX
BT_GATT_CCC_NOTIFY
Client Characteristic Configuration Notification.
Client Characteristic Configuration Values
If set, changes to Characteristic Value shall be notified.
BT_GATT_CCC_INDICATE
Client Characteristic Configuration Indication.
If set, changes to Characteristic Value shall be indicated.
BT_GATT_SCC_BROADCAST
Server Characteristic Configuration Broadcast.
Server Characteristic Configuration Values
If set, the characteristic value shall be broadcast in the advertising data when the server is
advertising.
Typedefs
Param attr
The attribute that’s being written
Param buf
Buffer with the data to write
Param len
Number of bytes in the buffer
Param offset
Offset to start writing from
Param flags
Flags (BT_GATT_WRITE_FLAG_*)
Return
Number of bytes written, or in case of an error BT_GATT_ERR() with a specific
BT_ATT_ERR_* error code.
Enums
enum bt_gatt_perm
GATT attribute permission bit field values
Values:
enumerator BT_GATT_PERM_NONE = 0
No operations supported, e.g. for notify-only
enum [anonymous]
GATT attribute write flags
Values:
struct bt_gatt_attr
#include <gatt.h> GATT Attribute structure.
Public Members
bt_gatt_attr_write_func_t write
Attribute write callback
void *user_data
Attribute user data
uint16_t handle
Attribute handle
uint16_t perm
Attribute permissions.
Will be 0 if returned from bt_gatt_discover().
struct bt_gatt_service_static
#include <gatt.h> GATT Service structure.
Public Members
size_t attr_count
Service Attribute count
struct bt_gatt_service
#include <gatt.h> GATT Service structure.
Public Members
size_t attr_count
Service Attribute count
struct bt_gatt_service_val
#include <gatt.h> Service Attribute Value.
Public Members
uint16_t end_handle
Service end handle.
struct bt_gatt_include
#include <gatt.h> Include Attribute Value.
Public Members
uint16_t start_handle
Service start handle.
uint16_t end_handle
Service end handle.
struct bt_gatt_cb
#include <gatt.h> GATT callback structure.
Public Members
struct bt_gatt_chrc
#include <gatt.h> Characteristic Attribute Value.
Public Members
uint16_t value_handle
Characteristic Value handle.
uint8_t properties
Characteristic properties.
struct bt_gatt_cep
#include <gatt.h> Characteristic Extended Properties Attribute Value.
Public Members
uint16_t properties
Characteristic Extended properties
struct bt_gatt_ccc
#include <gatt.h> Client Characteristic Configuration Attribute Value
Public Members
uint16_t flags
Client Characteristic Configuration flags
struct bt_gatt_scc
#include <gatt.h> Server Characteristic Configuration Attribute Value
Public Members
uint16_t flags
Server Characteristic Configuration flags
struct bt_gatt_cpf
#include <gatt.h> GATT Characteristic Presentation Format Attribute Value.
Public Members
uint8_t format
Format of the value of the characteristic
int8_t exponent
Exponent field to determine how the value of this characteristic is further formatted
uint16_t unit
Unit of the characteristic
uint8_t name_space
Name space of the description
uint16_t description
Description of the characteristic as defined in a higher layer profile
GATT Server
group bt_gatt_server
Defines
BT_GATT_SERVICE_DEFINE(_name, ...)
Statically define and register a service.
Helper macro to statically define and register a service.
Parameters
• _name – Service name.
BT_GATT_SERVICE_INSTANCE_DEFINE(_name, _instances, _instance_num, _attrs_def)
Statically define service structure array.
Helper macro to statically define service structure array. Each element of the array is linked
to the service attribute array which is also defined in this scope using _attrs_def macro.
Parameters
• _name – Name of service structure array.
• _instances – Array of instances to pass as user context to the attribute call-
backs.
• _instance_num – Number of elements in instance array.
• _attrs_def – Macro provided by the user that defines attribute array for the
service. This macro should accept single parameter which is the instance con-
text.
BT_GATT_SERVICE(_attrs)
Service Structure Declaration Macro.
Helper macro to declare a service structure.
Parameters
• _attrs – Service attributes.
BT_GATT_PRIMARY_SERVICE(_service)
Primary Service Declaration Macro.
Helper macro to declare a primary service attribute.
Parameters
• _service – Service attribute value.
BT_GATT_SECONDARY_SERVICE(_service)
Secondary Service Declaration Macro.
Helper macro to declare a secondary service attribute.
Note: A secondary service is only intended to be included from a primary service or another
secondary service or other higher layer specification.
Parameters
• _service – Service attribute value.
BT_GATT_INCLUDE_SERVICE(_service_incl)
Include Service Declaration Macro.
Helper macro to declare database internal include service attribute.
Parameters
BT_GATT_CCC_MAX
BT_GATT_CUD(_value, _perm)
Characteristic User Description Descriptor Declaration Macro.
Helper macro to declare a CUD attribute.
Parameters
• _value – User description NULL-terminated C string.
• _perm – Descriptor attribute access permissions, a bitmap of bt_gatt_perm val-
ues.
BT_GATT_CPF(_value)
Characteristic Presentation Format Descriptor Declaration Macro.
Helper macro to declare a CPF attribute.
Parameters
• _value – Pointer to a struct bt_gatt_cpf .
BT_GATT_DESCRIPTOR(_uuid, _perm, _read, _write, _user_data)
Descriptor Declaration Macro.
Helper macro to declare a descriptor attribute.
Parameters
• _uuid – Descriptor attribute uuid.
• _perm – Descriptor attribute access permissions, a bitmap of bt_gatt_perm val-
ues.
• _read – Descriptor attribute read callback (bt_gatt_attr_read_func_t).
• _write – Descriptor attribute write callback (bt_gatt_attr_write_func_t).
• _user_data – Descriptor attribute user data.
BT_GATT_ATTRIBUTE(_uuid, _perm, _read, _write, _user_data)
Attribute Declaration Macro.
Helper macro to declare an attribute.
Parameters
• _uuid – Attribute uuid.
• _perm – Attribute access permissions, a bitmap of bt_gatt_perm values.
• _read – Attribute read callback (bt_gatt_attr_read_func_t).
• _write – Attribute write callback (bt_gatt_attr_write_func_t).
• _user_data – Attribute user data.
Typedefs
Param user_data
Data given.
Return
BT_GATT_ITER_CONTINUE if should continue to the next attribute.
Return
BT_GATT_ITER_STOP to stop.
Enums
enum [anonymous]
Values:
enumerator BT_GATT_ITER_STOP = 0
enumerator BT_GATT_ITER_CONTINUE
Functions
When using CONFIG_BT_SETTINGS then all services that should have bond configuration
loaded, i.e. CCC values, must be registered before calling settings_load.
When using CONFIG_BT_GATT_CACHING and CONFIG_BT_SETTINGS, all services that should
be included in the GATT Database Hash calculation should be added before calling set-
tings_load. Any service registered after settings_load will trigger a new database hash cal-
culation, and the new hash will be stored.
There are two situations where this function can be called: either before bt_init() has been
called, or after settings_load() has been called. Registering a service in the middle is not
supported and will return an error.
Parameters
• svc – Service containing the available attributes
Returns
0 in case of success or negative value in case of error.
Returns
-EAGAIN if bt_init() has been called but settings_load() hasn’t yet.
int bt_gatt_service_unregister(struct bt_gatt_service *svc)
Unregister GATT service.
Parameters
• svc – Service to be unregistered.
Returns
0 in case of success or negative value in case of error.
bool bt_gatt_service_is_registered(const struct bt_gatt_service *svc)
Check if GATT service is registered.
Parameters
• svc – Service to be checked.
Returns
true if the service is registered, false if it is not.
void bt_gatt_foreach_attr_type(uint16_t start_handle, uint16_t end_handle, const struct
bt_uuid *uuid, const void *attr_data, uint16_t num_matches,
bt_gatt_attr_func_t func, void *user_data)
Attribute iterator by type.
Iterate attributes in the given range matching given UUID and/or data.
Parameters
• start_handle – Start handle.
• end_handle – End handle.
• uuid – UUID to match, passing NULL skips UUID matching.
• attr_data – Attribute data to match, passing NULL skips data matching.
• num_matches – Number matches, passing 0 makes it unlimited.
• func – Callback function.
• user_data – Data to pass to the callback.
static inline void bt_gatt_foreach_attr(uint16_t start_handle, uint16_t end_handle,
bt_gatt_attr_func_t func, void *user_data)
Attribute iterator.
Iterate attributes in the given range.
Parameters
• start_handle – Start handle.
• end_handle – End handle.
• func – Callback function.
• user_data – Data to pass to the callback.
struct bt_gatt_attr *bt_gatt_attr_next(const struct bt_gatt_attr *attr)
Iterate to the next attribute.
Iterate to the next attribute following a given attribute.
Parameters
• attr – Current Attribute.
Returns
The next attribute or NULL if it cannot be found.
struct bt_gatt_attr *bt_gatt_find_by_uuid(const struct bt_gatt_attr *attr, uint16_t attr_count,
const struct bt_uuid *uuid)
Find Attribute by UUID.
Find the attribute with the matching UUID. To limit the search to a service, set attr to the
service attributes and attr_count to the service attribute count.
Parameters
• attr – Pointer to an attribute that serves as the starting point for the search of
a match for the UUID. Passing NULL will search the entire range.
• attr_count – The number of attributes from the starting point to search for a
match for the UUID. Set to 0 to search until the end.
• uuid – UUID to match.
uint16_t bt_gatt_attr_get_handle(const struct bt_gatt_attr *attr)
Get Attribute handle.
Parameters
• attr – Attribute object.
Returns
Handle of the corresponding attribute or zero if the attribute could not be found.
uint16_t bt_gatt_attr_value_handle(const struct bt_gatt_attr *attr)
Get the handle of the characteristic value descriptor.
Parameters
• attr – A Characteristic Attribute.
Returns
the handle of the corresponding Characteristic Value. The value will be zero (the
invalid handle) if attr was not a characteristic attribute.
ssize_t bt_gatt_attr_read(struct bt_conn *conn, const struct bt_gatt_attr *attr, void *buf,
uint16_t buf_len, uint16_t offset, const void *value, uint16_t
value_len)
Parameters
• conn – Connection object.
• attr – Attribute to read.
• buf – Buffer to store the value read.
• len – Buffer length.
• offset – Start offset.
Returns
number of bytes read in case of success or negative values in case of error.
Parameters
• conn – Connection object.
• attr – Attribute to read.
• buf – Buffer to store the value read.
• len – Buffer length.
ssize_t bt_gatt_attr_read_chrc(struct bt_conn *conn, const struct bt_gatt_attr *attr, void *buf,
uint16_t len, uint16_t offset)
Read Characteristic Attribute helper.
Read characteristic attribute value from local database storing the result into buffer after
encoding it.
Parameters
• conn – Connection object.
• attr – Attribute to read.
• buf – Buffer to store the value read.
• len – Buffer length.
• offset – Start offset.
Returns
number of bytes read in case of success or negative values in case of error.
ssize_t bt_gatt_attr_read_ccc(struct bt_conn *conn, const struct bt_gatt_attr *attr, void *buf,
uint16_t len, uint16_t offset)
Read Client Characteristic Configuration Attribute helper.
Read CCC attribute value from local database storing the result into buffer after encoding it.
Parameters
• conn – Connection object.
• attr – Attribute to read.
• buf – Buffer to store the value read.
• len – Buffer length.
• offset – Start offset.
Returns
number of bytes read in case of success or negative values in case of error.
ssize_t bt_gatt_attr_write_ccc(struct bt_conn *conn, const struct bt_gatt_attr *attr, const void
*buf, uint16_t len, uint16_t offset, uint8_t flags)
Write Client Characteristic Configuration Attribute helper.
Write value in the buffer into CCC attribute.
Parameters
ssize_t bt_gatt_attr_read_cep(struct bt_conn *conn, const struct bt_gatt_attr *attr, void *buf,
uint16_t len, uint16_t offset)
Read Characteristic Extended Properties Attribute helper.
Read CEP attribute value from local database storing the result into buffer after encoding it.
Parameters
• conn – Connection object
• attr – Attribute to read
• buf – Buffer to store the value read
• len – Buffer length
• offset – Start offset
Returns
number of bytes read in case of success or negative values in case of error.
ssize_t bt_gatt_attr_read_cud(struct bt_conn *conn, const struct bt_gatt_attr *attr, void *buf,
uint16_t len, uint16_t offset)
Read Characteristic User Description Descriptor Attribute helper.
Read CUD attribute value from local database storing the result into buffer after encoding it.
Note: Only use this with attributes which user_data is a NULL-terminated C string.
Parameters
• conn – Connection object
• attr – Attribute to read
• buf – Buffer to store the value read
• len – Buffer length
• offset – Start offset
Returns
number of bytes read in case of success or negative values in case of error.
ssize_t bt_gatt_attr_read_cpf(struct bt_conn *conn, const struct bt_gatt_attr *attr, void *buf,
uint16_t len, uint16_t offset)
Read Characteristic Presentation format Descriptor Attribute helper.
Read CPF attribute value from local database storing the result into buffer after encoding it.
Parameters
• conn – Connection object
• attr – Attribute to read
• buf – Buffer to store the value read
• len – Buffer length
• offset – Start offset
Returns
number of bytes read in case of success or negative values in case of error.
This API has an additional limitation: unlike bt_gatt_notify and bt_gatt_notify_cb, it only accepts
valid attribute references, not UUIDs.
Parameters
• conn – Target client. Notifying all connected clients by passing NULL is not yet
supported, please use bt_gatt_notify instead.
• num_params – Element count of params array. Has to be greater than 1.
• params – Array of notification parameters. It is okay to free this after calling
this function.
Return values
• 0 – Success. The PDU is queued for sending.
• -EINVAL –
– One of the attribute handles is invalid.
– Only one parameter was passed. This API expects 2 or more.
– Not all func were equal or not all user_data were equal.
– One of the characteristics is not notifiable.
– A UUID was passed in one of the parameters.
• -ERANGE –
– The notifications cannot all fit in a single
ATT_MULTIPLE_HANDLE_VALUE_NTF.
– They exceed the MTU of all open ATT bearers.
• -EPERM – The connection has a lower security level than required by one of the
attributes.
• -EOPNOTSUPP – The peer hasn’t yet communicated that it supports this PDU
type.
static inline int bt_gatt_notify(struct bt_conn *conn, const struct bt_gatt_attr *attr, const void
*data, uint16_t len)
Notify attribute value change.
Send a notification of an attribute value change. If conn is NULL, notify all peers that have
notifications enabled via the CCC; otherwise, send a direct notification to the given connection only.
The attribute object in the parameters can be either the so-called Characteristic Declaration, which is
usually declared with BT_GATT_CHARACTERISTIC followed by BT_GATT_CCC, or the Char-
acteristic Value Declaration, which is automatically created after the Characteristic Declaration
when using BT_GATT_CHARACTERISTIC.
Parameters
• conn – Connection object.
• attr – Characteristic or Characteristic Value attribute.
• data – Pointer to Attribute data.
• len – Attribute value length.
Returns
0 in case of success or negative value in case of error.
static inline int bt_gatt_notify_uuid(struct bt_conn *conn, const struct bt_uuid *uuid, const
struct bt_gatt_attr *attr, const void *data, uint16_t len)
Note: This procedure is asynchronous, therefore the parameters need to remain valid while
it is active. The procedure is active until the destroy callback is run.
Parameters
• conn – Connection object.
• params – Indicate parameters.
Returns
0 in case of success or negative value in case of error.
struct bt_gatt_ccc_cfg
#include <gatt.h> GATT CCC configuration entry.
Public Members
uint8_t id
Local identity, BT_ID_DEFAULT in most cases.
bt_addr_le_t peer
Remote peer address.
bool link_encrypted
Separate storage for encrypted and unencrypted context. This indicates that the link was
encrypted when the CCC was written.
uint16_t value
Configuration value.
struct bt_gatt_notify_params
#include <gatt.h>
Public Members
uint16_t len
Notification Value length
bt_gatt_complete_func_t func
Notification Value callback
void *user_data
Notification Value callback user data
struct bt_gatt_indicate_params
#include <gatt.h> GATT Indicate Value parameters.
Public Members
bt_gatt_indicate_func_t func
Indicate Value callback
bt_gatt_indicate_params_destroy_t destroy
Indicate operation complete callback
uint16_t len
Indicate Value length
GATT Client
group bt_gatt_client
Typedefs
If the discovery procedure has completed, this callback will be called with attr set to NULL. This
will not happen if the procedure was stopped by returning BT_GATT_ITER_STOP.
The attribute object, as well as its UUID and value objects, are temporary and must be copied
in order to cache the information. Only the following fields of the attribute contain valid
information:
• uuid UUID representing the type of attribute.
• handle Handle in the remote database.
• user_data The value of the attribute, if the discovery type maps to an ATT operation that
provides this information. NULL otherwise. See below.
The effective type of attr->user_data is determined by params. Note that the fields
params->type and params->uuid are left unchanged by the discovery procedure.
params->type                       params->uuid        Type of attr->user_data
BT_GATT_DISCOVER_PRIMARY           any                 bt_gatt_service_val
BT_GATT_DISCOVER_SECONDARY         any                 bt_gatt_service_val
BT_GATT_DISCOVER_INCLUDE           any                 bt_gatt_include
BT_GATT_DISCOVER_CHARACTERISTIC    any                 bt_gatt_chrc
BT_GATT_DISCOVER_STD_CHAR_DESC     BT_UUID_GATT_CEP    bt_gatt_cep
BT_GATT_DISCOVER_STD_CHAR_DESC     BT_UUID_GATT_CCC    bt_gatt_ccc
BT_GATT_DISCOVER_STD_CHAR_DESC     BT_UUID_GATT_SCC    bt_gatt_scc
BT_GATT_DISCOVER_STD_CHAR_DESC     BT_UUID_GATT_CPF    bt_gatt_cpf
BT_GATT_DISCOVER_DESCRIPTOR        any                 NULL
BT_GATT_DISCOVER_ATTRIBUTE         any                 NULL
Also consider if using read-by-type instead of discovery is more convenient. See bt_gatt_read
with bt_gatt_read_params::handle_count set to 0.
Param conn
Connection object.
Param attr
Attribute found, or NULL if not found.
Param params
Discovery parameters given.
Return
BT_GATT_ITER_CONTINUE to continue discovery procedure.
Return
BT_GATT_ITER_STOP to stop discovery procedure.
When reading using by_uuid, params->start_handle is the attribute handle for this data
item.
Param conn
Connection object.
Param err
ATT error code.
Param params
Read parameters used.
Param data
Attribute value data. NULL means read has completed.
Param length
Attribute value length.
Return
BT_GATT_ITER_CONTINUE if should continue to the next attribute.
Return
BT_GATT_ITER_STOP to stop.
Param conn
Connection object.
Param err
ATT error code.
Param params
Subscription parameters used.
Enums
enum [anonymous]
GATT Discover types
Values:
enumerator BT_GATT_DISCOVER_PRIMARY
Discover Primary Services.
enumerator BT_GATT_DISCOVER_SECONDARY
Discover Secondary Services.
enumerator BT_GATT_DISCOVER_INCLUDE
Discover Included Services.
enumerator BT_GATT_DISCOVER_CHARACTERISTIC
Discover Characteristic Values.
enumerator BT_GATT_DISCOVER_DESCRIPTOR
Discover Descriptors.
enumerator BT_GATT_DISCOVER_ATTRIBUTE
Discover Attributes.
enumerator BT_GATT_DISCOVER_STD_CHAR_DESC
Discover standard characteristic descriptor values.
enum [anonymous]
Subscription flags
Values:
enumerator BT_GATT_SUBSCRIBE_FLAG_VOLATILE
Persistence flag.
enumerator BT_GATT_SUBSCRIBE_FLAG_NO_RESUB
No resubscribe flag.
enumerator BT_GATT_SUBSCRIBE_FLAG_WRITE_PENDING
Write pending flag.
enumerator BT_GATT_SUBSCRIBE_FLAG_SENT
Sent flag.
enumerator BT_GATT_SUBSCRIBE_NUM_FLAGS
Functions
The Response comes in callback params->func. The callback is run from the context specified
by ‘config BT_RECV_CONTEXT’. params must remain valid until start of callback.
This function will block while the ATT request queue is full, except when called from the BT
RX thread, as this would cause a deadlock.
Parameters
• conn – Connection object.
• params – Exchange MTU parameters.
Return values
• 0 – Successfully queued request. Will call params->func on resolution.
• -ENOMEM – ATT request queue is full and blocking would cause deadlock. Al-
low a pending request to resolve before retrying, or call this function out-
side the BT RX thread to get blocking behavior. Queue size is controlled by
CONFIG_BT_L2CAP_TX_BUF_COUNT .
• -EALREADY – The MTU exchange procedure has been already performed.
Parameters
• conn – Connection object.
• params – Discover parameters.
Return values
• 0 – Successfully queued request. Will call params->func on resolution.
• -ENOMEM – ATT request queue is full and blocking would cause deadlock. Al-
low a pending request to resolve before retrying, or call this function out-
side the BT RX thread to get blocking behavior. Queue size is controlled by
CONFIG_BT_L2CAP_TX_BUF_COUNT .
int bt_gatt_read(struct bt_conn *conn, struct bt_gatt_read_params *params)
Read Attribute Value by handle.
This procedure reads the attribute value and returns it to the callback.
When reading attributes by UUID, the callback can be called multiple times depending on
how many instances of the given UUID exist, with start_handle being updated for each
instance.
If an instance contains a long value that cannot be read entirely, the caller will need to
read the remaining data separately using the handle and offset.
The Response comes in callback params->func. The callback is run from the context specified
by ‘config BT_RECV_CONTEXT’. params must remain valid until start of callback.
This function will block while the ATT request queue is full, except when called from the BT
RX thread, as this would cause a deadlock.
Parameters
• conn – Connection object.
• params – Read parameters.
Return values
• 0 – Successfully queued request. Will call params->func on resolution.
• -ENOMEM – ATT request queue is full and blocking would cause deadlock. Al-
low a pending request to resolve before retrying, or call this function out-
side the BT RX thread to get blocking behavior. Queue size is controlled by
CONFIG_BT_L2CAP_TX_BUF_COUNT .
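As a sketch of Read Using Characteristic UUID (handle_count set to 0), assuming the standard Device Name characteristic as the target; the handle range is illustrative.

```c
static struct bt_gatt_read_params read_params;

static uint8_t read_func(struct bt_conn *conn, uint8_t err,
			 struct bt_gatt_read_params *params,
			 const void *data, uint16_t length)
{
	if (data == NULL) {
		/* NULL data: the read has completed. */
		return BT_GATT_ITER_STOP;
	}
	/* For by_uuid reads, start_handle is updated to the handle of
	 * the current data item. */
	printk("Read %u bytes at handle %u\n", length,
	       params->by_uuid.start_handle);
	return BT_GATT_ITER_CONTINUE;
}

static void read_device_name(struct bt_conn *conn)
{
	read_params.func = read_func;
	read_params.handle_count = 0;	/* 0 selects by_uuid */
	read_params.by_uuid.start_handle = 0x0001;
	read_params.by_uuid.end_handle = 0xffff;
	read_params.by_uuid.uuid = BT_UUID_GAP_DEVICE_NAME;

	int err = bt_gatt_read(conn, &read_params);
	if (err) {
		printk("Read failed (err %d)\n", err);
	}
}
```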
int bt_gatt_write(struct bt_conn *conn, struct bt_gatt_write_params *params)
Write Attribute Value by handle.
The Response comes in callback params->func. The callback is run from the context specified
by ‘config BT_RECV_CONTEXT’. params must remain valid until start of callback.
This function will block while the ATT request queue is full, except when called from Bluetooth
event context. When called from Bluetooth context, this function will instead return -ENOMEM
if it would block, to avoid a deadlock.
Parameters
• conn – Connection object.
• params – Write parameters.
Return values
• 0 – Successfully queued request. Will call params->func on resolution.
• -ENOMEM – ATT request queue is full and blocking would cause deadlock. Al-
low a pending request to resolve before retrying, or call this function outside
This function will block while the ATT request queue is full, except when called from the BT
RX thread, as this would cause a deadlock.
Note: Using a callback also disables the internal flow control that would otherwise prevent
sending multiple commands without waiting for their transmissions to complete, so if flow control
is required, the caller shall not submit more data until the callback is called.
Parameters
• conn – Connection object.
• handle – Attribute handle.
• data – Data to be written.
• length – Data length.
• sign – Whether to sign data
• func – Transmission complete callback.
• user_data – User data to be passed back to callback.
Return values
• 0 – Successfully queued request.
• -ENOMEM – ATT request queue is full and blocking would cause deadlock. Al-
low a pending request to resolve before retrying, or call this function out-
side the BT RX thread to get blocking behavior. Queue size is controlled by
CONFIG_BT_L2CAP_TX_BUF_COUNT .
This function will block while the ATT request queue is full, except when called from the BT
RX thread, as this would cause a deadlock.
Note: Notifications are asynchronous therefore the parameters need to remain valid while
subscribed.
Parameters
• conn – Connection object.
• params – Subscribe parameters.
Return values
• 0 – Successfully queued request. Will call params->write on resolution.
• -ENOMEM – ATT request queue is full and blocking would cause deadlock. Al-
low a pending request to resolve before retrying, or call this function out-
side the BT RX thread to get blocking behavior. Queue size is controlled by
CONFIG_BT_L2CAP_TX_BUF_COUNT .
Note: Notifications are asynchronous therefore the parameters need to remain valid while
subscribed.
Parameters
• id – Local identity (in most cases BT_ID_DEFAULT).
Parameters
• conn – The connection the request was issued on.
• params – The address params used in the request function call.
struct bt_gatt_exchange_params
#include <gatt.h> GATT Exchange MTU parameters.
Public Members
struct bt_gatt_discover_params
#include <gatt.h> GATT Discover Attributes parameters.
Public Members
bt_gatt_discover_func_t func
Discover attribute callback
uint16_t attr_handle
Include service attribute declaration handle
uint16_t start_handle
Included service start handle
Discover start handle
uint16_t end_handle
Included service end handle
Discover end handle
uint8_t type
Discover type
struct bt_gatt_read_params
#include <gatt.h> GATT Read parameters.
Public Members
bt_gatt_read_func_t func
Read attribute callback.
size_t handle_count
If equal to 1, single.handle and single.offset are used. If greater than 1, multiple.handles
are used. If equal to 0, by_uuid is used for Read Using Characteristic UUID.
uint16_t handle
Attribute handle.
uint16_t offset
Attribute data offset.
uint16_t *handles
Attribute handles to read with Read Multiple Characteristic Values.
bool variable
If true use Read Multiple Variable Length Characteristic Values procedure. The values of
the set of attributes may be of variable or unknown length. If false use Read Multiple
Characteristic Values procedure. The values of the set of attributes must be of a known
fixed length, with the exception of the last value that can have a variable length.
uint16_t start_handle
First requested handle number.
uint16_t end_handle
Last requested handle number.
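A sketch of a single-handle read using the fields above (handle_count == 1 selects single.handle and single.offset); the handle is assumed to come from discovery:

```c
#include <zephyr/bluetooth/conn.h>
#include <zephyr/bluetooth/gatt.h>

static uint8_t read_cb(struct bt_conn *conn, uint8_t err,
		       struct bt_gatt_read_params *params,
		       const void *data, uint16_t length)
{
	if (!data) {
		/* NULL data marks the end of the read. */
		return BT_GATT_ITER_STOP;
	}

	/* Consume the attribute value here. */
	return BT_GATT_ITER_CONTINUE;
}

/* Must remain valid until read_cb reports completion. */
static struct bt_gatt_read_params read_params;

static int read_single(struct bt_conn *conn, uint16_t handle)
{
	read_params.func = read_cb;
	read_params.handle_count = 1;       /* use single.handle/offset */
	read_params.single.handle = handle; /* from discovery */
	read_params.single.offset = 0;

	return bt_gatt_read(conn, &read_params);
}
```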
struct bt_gatt_write_params
#include <gatt.h> GATT Write parameters.
Public Members
bt_gatt_write_func_t func
Response callback
uint16_t handle
Attribute handle
uint16_t offset
Attribute data offset
uint16_t length
Length of the data
struct bt_gatt_subscribe_params
#include <gatt.h> GATT Subscribe parameters.
Public Members
bt_gatt_notify_func_t notify
Notification value callback
bt_gatt_subscribe_func_t subscribe
Subscribe CCC write request response callback. If given, called with the subscription
parameters given when subscribing.
bt_gatt_write_func_t write
Deprecated:
uint16_t value_handle
Subscribe value handle
uint16_t ccc_handle
Subscribe CCC handle
uint16_t value
Subscribe value
bt_security_t min_security
Minimum required security for received notification. Notifications and indications re-
ceived over a connection with a lower security level are silently discarded.
atomic_t flags[ATOMIC_BITMAP_SIZE(BT_GATT_SUBSCRIBE_NUM_FLAGS)]
Subscription flags
HCI Drivers
API Reference
group bt_hci_driver
HCI drivers.
Defines
IS_BT_QUIRK_NO_AUTO_DLE(bt_dev)
BT_HCI_EVT_FLAG_RECV_PRIO
BT_HCI_EVT_FLAG_RECV
Enums
enum [anonymous]
Values:
enum bt_hci_driver_bus
Possible values for the ‘bus’ member of the bt_hci_driver struct
Values:
enumerator BT_HCI_DRIVER_BUS_VIRTUAL = 0
enumerator BT_HCI_DRIVER_BUS_USB = 1
enumerator BT_HCI_DRIVER_BUS_PCCARD = 2
enumerator BT_HCI_DRIVER_BUS_UART = 3
enumerator BT_HCI_DRIVER_BUS_RS232 = 4
enumerator BT_HCI_DRIVER_BUS_PCI = 5
enumerator BT_HCI_DRIVER_BUS_SDIO = 6
enumerator BT_HCI_DRIVER_BUS_SPI = 7
enumerator BT_HCI_DRIVER_BUS_I2C = 8
enumerator BT_HCI_DRIVER_BUS_IPM = 9
Functions
Note: A weak version of this function is included in the H4 driver, so defining it is optional
per board.
Parameters
• dev – The device structure for the bus connecting to the IC
Returns
0 on success, negative error value on failure
This function allocates a new buffer for an HCI event. It is given the event code and the total
length of the parameters. Upon successful return the buffer is ready to have the parameters
encoded into it.
Parameters
• evt – Event OpCode.
• len – Length of event parameters.
Returns
Newly allocated buffer.
struct net_buf *bt_hci_cmd_complete_create(uint16_t op, uint8_t plen)
Allocate an HCI Command Complete event buffer.
This function allocates a new buffer for HCI Command Complete event. It is given the OpCode
(encoded e.g. using the BT_OP macro) and the total length of the parameters. Upon successful
return the buffer is ready to have the parameters encoded into it.
Parameters
• op – Command OpCode.
• plen – Length of command parameters.
Returns
Newly allocated buffer.
struct net_buf *bt_hci_cmd_status_create(uint16_t op, uint8_t status)
Allocate an HCI Command Status event buffer.
This function allocates a new buffer for HCI Command Status event. It is given the OpCode
(encoded e.g. using the BT_OP macro) and the status code. Upon successful return the buffer
is ready to have the parameters encoded into it.
Parameters
• op – Command OpCode.
• status – Status code.
Returns
Newly allocated buffer.
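A driver-side sketch of how such a buffer might be filled and handed to the host; the opcode and the single status parameter are illustrative:

```c
#include <errno.h>
#include <zephyr/bluetooth/buf.h>
#include <zephyr/drivers/bluetooth/hci_driver.h>

/* Synthesize a Command Complete event (e.g. for a vendor command
 * handled inside the driver) and pass it up to the host.
 */
static int send_cmd_complete(uint16_t opcode, uint8_t status)
{
	struct net_buf *buf;

	buf = bt_hci_cmd_complete_create(opcode, sizeof(status));
	if (!buf) {
		return -ENOMEM;
	}

	net_buf_add_u8(buf, status); /* encode the event parameters */

	return bt_recv(buf);
}
```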
struct bt_hci_driver
#include <hci_driver.h> Abstraction which represents the HCI transport to the controller.
This struct is used to represent the HCI transport to the Bluetooth controller.
Public Members
uint32_t quirks
Specific controller quirks. These are set by the HCI driver and acted upon by the host.
They can either be statically set at buildtime, or set at runtime before the HCI driver’s
open() callback returns.
int (*open)(void)
Open the HCI transport.
Opens the HCI transport for operation. This function must not return until the transport
is ready for operation, meaning it is safe to start calling the send() handler.
If the driver uses its own RX thread, i.e. CONFIG_BT_RECV_BLOCKING is set, then this
function is expected to start that thread.
Return
0 on success or negative error number on failure.
int (*close)(void)
Close the HCI transport.
Closes the HCI transport. This function must not return until the transport is closed.
If the driver uses its own RX thread, i.e. CONFIG_BT_RECV_BLOCKING is set, then this
function is expected to abort that thread.
Return
0 on success or negative error number on failure.
Param buf
Buffer containing data to be sent to the controller.
Return
0 on success or negative error number on failure.
int (*setup)(void)
HCI vendor-specific setup.
Executes vendor-specific commands sequence to initialize BT Controller before BT Host
executes Reset sequence.
Return
0 on success or negative error number on failure.
Overview The HCI RAW channel API is intended to expose the HCI interface to a remote entity. The local
Bluetooth controller is owned by the remote entity and the host Bluetooth stack is not used. The RAW API
provides direct access to the packets that are sent and received by the Bluetooth HCI driver.
API Reference
group hci_raw
HCI RAW channel.
Defines
BT_HCI_ERR_EXT_HANDLED
Enums
enum [anonymous]
Values:
While in this mode the buffers are passed as is between the stack
and the driver.
While in this mode H:4 headers will be added to the buffers
according to the buffer type when coming from the stack, and will be
removed and used to set the buffer type when coming from the driver.
Functions
uint8_t bt_hci_raw_get_mode(void)
Get Bluetooth RAW channel mode.
Get access mode of Bluetooth RAW channel.
Returns
Access mode.
void bt_hci_raw_cmd_ext_register(struct bt_hci_raw_cmd_ext *cmds, size_t size)
Register Bluetooth RAW command extension table.
Register the Bluetooth RAW channel command extension table; opcodes in this table are
intercepted and sent to the handler function.
Parameters
• cmds – Pointer to the command extension table.
• size – Size of the command extension table.
int bt_enable_raw(struct k_fifo *rx_queue)
Enable Bluetooth RAW channel:
Enable Bluetooth RAW HCI channel.
Parameters
• rx_queue – netbuf queue where HCI packets received from the Bluetooth con-
troller are to be queued. The queue is defined in the caller while the available
buffers pools are handled in the stack.
Returns
Zero on success or (negative) error code otherwise.
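A minimal sketch of bringing up the RAW channel: the caller defines the RX queue, then drains packets from it. Forwarding to the remote entity (e.g. over USB or UART) is left out:

```c
#include <zephyr/bluetooth/buf.h>
#include <zephyr/bluetooth/hci_raw.h>
#include <zephyr/kernel.h>

/* Queue filled by the stack with HCI packets from the controller. */
K_FIFO_DEFINE(rx_queue);

static int run_raw_channel(void)
{
	int err = bt_enable_raw(&rx_queue);

	if (err) {
		return err;
	}

	for (;;) {
		struct net_buf *buf = net_buf_get(&rx_queue, K_FOREVER);

		/* Forward buf to the remote entity here, then release it. */
		net_buf_unref(buf);
	}
}
```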
struct bt_hci_raw_cmd_ext
#include <hci_raw.h>
Public Members
uint16_t op
Opcode of the command
size_t min_len
Minimal length of the command
API Reference
group bt_hfp
Hands Free Profile (HFP)
Defines
HFP_HF_CMD_OK
HFP_HF_CMD_ERROR
HFP_HF_CMD_CME_ERROR
HFP_HF_CMD_UNKNOWN_ERROR
Enums
enum bt_hfp_hf_at_cmd
Values:
enumerator BT_HFP_HF_ATA
enumerator BT_HFP_HF_AT_CHUP
Functions
struct bt_hfp_hf_cmd_complete
#include <hfp_hf.h> HFP HF Command completion field.
struct bt_hfp_hf_cb
#include <hfp_hf.h> HFP profile application callback.
Public Members
Param conn
Connection object.
Param value
call held indicator value received from the AG.
The L2CAP layer enables connection-oriented channels, which can be enabled with the configuration
option CONFIG_BT_L2CAP_DYNAMIC_CHANNEL. These channels support segmentation and reassembly trans-
parently; they also support credit-based flow control, making them suitable for data streams.
Channel instances are represented by the bt_l2cap_chan struct, which contains the callbacks in the
bt_l2cap_chan_ops struct to inform when the channel has been connected or disconnected, or when the
encryption has changed. In addition, it contains the recv callback, which is called whenever
incoming data has been received. Data received this way can be marked as processed by returning 0,
or by using the bt_l2cap_chan_recv_complete() API if processing is asynchronous.
Note: The recv callback is called directly from the RX thread, thus it is not recommended to block for long
periods of time.
For sending data the bt_l2cap_chan_send() API can be used, noting that it may block if no credits are
available, resuming as soon as more credits are available.
Servers can be registered using the bt_l2cap_server_register() API, passing the bt_l2cap_server struct,
which informs what psm it should listen to, the required security level sec_level, and the callback
accept, which is called to authorize incoming connection requests and allocate channel instances.
Client channels can be initiated with the bt_l2cap_chan_connect() API and can be disconnected
with the bt_l2cap_chan_disconnect() API. Note that the latter can also disconnect channel instances
created by servers.
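The server registration described above can be sketched as follows; the channel object, ops, and PSM choice are illustrative (the accept callback signature matches the Zephyr 3.4 API):

```c
#include <zephyr/bluetooth/l2cap.h>

/* Transport-dedicated channel object for LE. */
static struct bt_l2cap_le_chan le_chan;

static int recv_cb(struct bt_l2cap_chan *chan, struct net_buf *buf)
{
	/* Called from the RX thread: process quickly, return 0 when done. */
	return 0;
}

static const struct bt_l2cap_chan_ops chan_ops = {
	.recv = recv_cb,
};

static int accept_cb(struct bt_conn *conn, struct bt_l2cap_chan **chan)
{
	/* Authorize the request and hand out a free channel instance. */
	le_chan.chan.ops = &chan_ops;
	*chan = &le_chan.chan;

	return 0;
}

static struct bt_l2cap_server server = {
	.psm = 0, /* 0: auto-allocate a dynamic PSM */
	.sec_level = BT_SECURITY_L1,
	.accept = accept_cb,
};

static int register_server(void)
{
	return bt_l2cap_server_register(&server);
}
```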
API Reference
group bt_l2cap
L2CAP.
Defines
BT_L2CAP_HDR_SIZE
L2CAP PDU header size, used for buffer size calculations
BT_L2CAP_TX_MTU
Maximum Transmission Unit (MTU) for an outgoing L2CAP PDU.
BT_L2CAP_RX_MTU
Maximum Transmission Unit (MTU) for an incoming L2CAP PDU.
BT_L2CAP_BUF_SIZE(mtu)
Helper to calculate needed buffer size for L2CAP PDUs. Useful for creating buffer pools.
Parameters
• mtu – Needed L2CAP PDU MTU.
Returns
Needed buffer size to match the requested L2CAP PDU MTU.
BT_L2CAP_SDU_HDR_SIZE
L2CAP SDU header size, used for buffer size calculations
BT_L2CAP_SDU_TX_MTU
Maximum Transmission Unit for an unsegmented outgoing L2CAP SDU.
The Maximum Transmission Unit for an outgoing L2CAP SDU when sent without segmenta-
tion, i.e. a single L2CAP SDU will fit inside a single L2CAP PDU.
The MTU for outgoing L2CAP SDUs with segmentation is defined by the size of the application
buffer pool.
BT_L2CAP_SDU_RX_MTU
Maximum Transmission Unit for an unsegmented incoming L2CAP SDU.
The Maximum Transmission Unit for an incoming L2CAP SDU when sent without segmenta-
tion, i.e. a single L2CAP SDU will fit inside a single L2CAP PDU.
The MTU for incoming L2CAP SDUs with segmentation is defined by the size of the application
buffer pool. The application will have to define an alloc_buf callback for the channel in order
to support receiving segmented L2CAP SDUs.
BT_L2CAP_SDU_BUF_SIZE(mtu)
Helper to calculate needed buffer size for L2CAP SDUs. Useful for creating buffer pools.
Parameters
• mtu – Required BT_L2CAP_*_SDU.
Returns
Needed buffer size to match the requested L2CAP SDU MTU.
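For example, a TX buffer pool sized for unsegmented SDUs might be defined with this helper; the pool count is illustrative:

```c
#include <zephyr/bluetooth/l2cap.h>
#include <zephyr/net/buf.h>

/* Pool for outgoing unsegmented L2CAP SDUs on a CoC channel.
 * Each buffer holds one SDU of up to BT_L2CAP_SDU_TX_MTU bytes
 * plus the headroom the stack needs.
 */
NET_BUF_POOL_DEFINE(sdu_tx_pool, 8,
		    BT_L2CAP_SDU_BUF_SIZE(BT_L2CAP_SDU_TX_MTU),
		    CONFIG_BT_CONN_TX_USER_DATA_SIZE, NULL);
```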
BT_L2CAP_LE_CHAN(_ch)
Helper macro to get the container object of type bt_l2cap_le_chan from the address of its
chan member.
Parameters
• _ch – Address of object of bt_l2cap_chan type
Returns
Address of the in-memory bt_l2cap_le_chan object containing the given
bt_l2cap_chan object.
BT_L2CAP_CHAN_SEND_RESERVE
Headroom needed for outgoing L2CAP PDUs.
BT_L2CAP_SDU_CHAN_SEND_RESERVE
Headroom needed for outgoing L2CAP SDUs.
Typedefs
Enums
enum bt_l2cap_chan_state
Life-span states of L2CAP CoC channel.
Used only by internal APIs dealing with setting channel to proper state depending on opera-
tional context.
A channel enters the BT_L2CAP_CONNECTING state upon bt_l2cap_chan_connect,
bt_l2cap_ecred_chan_connect or upon returning from bt_l2cap_server::accept.
When a channel leaves the BT_L2CAP_CONNECTING state, bt_l2cap_chan_ops::connected is
called.
Values:
enumerator BT_L2CAP_DISCONNECTED
Channel disconnected
enumerator BT_L2CAP_CONNECTING
Channel in connecting state
enumerator BT_L2CAP_CONFIG
Channel in config state, BR/EDR specific
enumerator BT_L2CAP_CONNECTED
Channel ready for upper layer traffic on it
enumerator BT_L2CAP_DISCONNECTING
Channel in disconnecting state
enum bt_l2cap_chan_status
Status of L2CAP channel.
Values:
enumerator BT_L2CAP_STATUS_OUT
Channel output status
enumerator BT_L2CAP_STATUS_SHUTDOWN
Channel shutdown status.
enumerator BT_L2CAP_STATUS_ENCRYPT_PENDING
Channel encryption pending status.
enumerator BT_L2CAP_NUM_STATUS
Functions
Returns
0 in case of success or negative value in case of error.
int bt_l2cap_chan_connect(struct bt_conn *conn, struct bt_l2cap_chan *chan, uint16_t psm)
Connect L2CAP channel.
Connect an L2CAP channel by PSM; once the connection is completed the channel connected() call-
back will be called. If the connection is rejected the disconnected() callback is called instead.
The channel object passed (by its address) as the second parameter should not be instantiated
in the application as a standalone object. Instead, the application should create a transport-dedicated
L2CAP object, i.e. of type bt_l2cap_le_chan for LE and/or of type bt_l2cap_br_chan for BR/EDR,
and then pass to this API the address of the bt_l2cap_chan member
of that transport-dedicated object.
Parameters
• conn – Connection object.
• chan – Channel object.
• psm – Channel PSM to connect to.
Returns
0 in case of success or negative value in case of error.
int bt_l2cap_chan_disconnect(struct bt_l2cap_chan *chan)
Disconnect L2CAP channel.
Disconnect an L2CAP channel; if the connection is pending it will be canceled and as a result
the channel disconnected() callback is called. Regarding the input parameter, see the
description of the bt_l2cap_chan_connect() API above for details.
Parameters
• chan – Channel object.
Returns
0 in case of success or negative value in case of error.
int bt_l2cap_chan_send(struct bt_l2cap_chan *chan, struct net_buf *buf)
Send data to L2CAP channel.
Send data from the buffer to the channel. If credits are not available, buf will be queued and sent
as and when credits are received from the peer. Regarding the first input parameter, see the
description of the bt_l2cap_chan_connect() API above for details.
When sending L2CAP data over a BR/EDR connection the application is sending L2CAP
PDUs. The application is required to have reserved BT_L2CAP_CHAN_SEND_RESERVE bytes
in the buffer before sending. The application should use the BT_L2CAP_BUF_SIZE() helper to
correctly size the buffers for the outgoing buffer pool.
When sending L2CAP data over an LE connection the application is sending L2CAP SDUs.
The application can optionally reserve BT_L2CAP_SDU_CHAN_SEND_RESERVE bytes in the
buffer before sending. By reserving bytes in the buffer the stack can use this buffer as a
segment directly; otherwise it will have to allocate a new segment for the first segment. If the
application is reserving the bytes it should use the BT_L2CAP_BUF_SIZE() helper to correctly
size the buffers for the outgoing buffer pool. When segmenting an L2CAP SDU into
L2CAP PDUs the stack will first attempt to allocate buffers from the original buffer pool of the
L2CAP SDU before using the stack's own buffer pool.
Note: Buffer ownership is transferred to the stack in case of success, in case of an error the
caller retains the ownership of the buffer.
Returns
Bytes sent in case of success or negative value in case of error.
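A sketch of sending an SDU over an LE channel with the optional headroom reserved, releasing the buffer on error since ownership is only transferred on success; the pool size is illustrative:

```c
#include <zephyr/bluetooth/l2cap.h>
#include <zephyr/net/buf.h>

NET_BUF_POOL_DEFINE(tx_pool, 4,
		    BT_L2CAP_SDU_BUF_SIZE(BT_L2CAP_SDU_TX_MTU),
		    CONFIG_BT_CONN_TX_USER_DATA_SIZE, NULL);

static int send_sdu(struct bt_l2cap_chan *chan,
		    const void *data, uint16_t len)
{
	struct net_buf *buf = net_buf_alloc(&tx_pool, K_FOREVER);
	int ret;

	/* Reserving the headroom lets the stack use this buffer as the
	 * first segment directly instead of allocating a new one.
	 */
	net_buf_reserve(buf, BT_L2CAP_SDU_CHAN_SEND_RESERVE);
	net_buf_add_mem(buf, data, len);

	ret = bt_l2cap_chan_send(chan, buf);
	if (ret < 0) {
		/* On error the caller retains ownership of the buffer. */
		net_buf_unref(buf);
	}

	return ret;
}
```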
struct bt_l2cap_chan
#include <l2cap.h> L2CAP Channel structure.
Public Members
struct bt_l2cap_le_endpoint
#include <l2cap.h> LE L2CAP Endpoint structure.
Public Members
uint16_t cid
Endpoint Channel Identifier (CID)
uint16_t mtu
Endpoint Maximum Transmission Unit
uint16_t mps
Endpoint Maximum PDU payload Size
uint16_t init_credits
Endpoint initial credits
atomic_t credits
Endpoint credits
struct bt_l2cap_le_chan
#include <l2cap.h> LE L2CAP Channel structure.
Public Members
struct bt_l2cap_le_endpoint rx
Channel Receiving Endpoint.
If the application has set an alloc_buf channel callback for the channel to support re-
ceiving segmented L2CAP SDUs, the application should initialize the MTU of the Re-
ceiving Endpoint. Otherwise the MTU of the receiving endpoint will be initialized to
BT_L2CAP_SDU_RX_MTU by the stack.
This is the source of the MTU, MPS and credit values when
sending L2CAP_LE_CREDIT_BASED_CONNECTION_REQ/RSP and
L2CAP_CONFIGURATION_REQ.
uint16_t pending_rx_mtu
Pending RX MTU on ECFC reconfigure, used internally by stack
struct bt_l2cap_le_endpoint tx
Channel Transmission Endpoint.
This is an image of the remote’s rx.
The MTU and MPS is controlled by the remote
by L2CAP_LE_CREDIT_BASED_CONNECTION_REQ/RSP or
L2CAP_CONFIGURATION_REQ.
struct bt_l2cap_br_endpoint
#include <l2cap.h> BREDR L2CAP Endpoint structure.
Public Members
uint16_t cid
Endpoint Channel Identifier (CID)
uint16_t mtu
Endpoint Maximum Transmission Unit
struct bt_l2cap_br_chan
#include <l2cap.h> BREDR L2CAP Channel structure.
Public Members
struct bt_l2cap_br_endpoint rx
Channel Receiving Endpoint
struct bt_l2cap_br_endpoint tx
Channel Transmission Endpoint
uint16_t psm
Remote PSM to be connected
uint8_t ident
Helps match request context during CoC
struct bt_l2cap_chan_ops
#include <l2cap.h> L2CAP Channel operations structure.
Public Members
Param chan
The channel which has made encryption status changed.
Param status
HCI status of performed security procedure caused by channel security require-
ments. The value is populated by HCI layer and set to 0 when success and to
non-zero (reference to HCI Error Codes) when security/authentication failed.
Note: With this alternative API, the application is responsible for setting the RX MTU
and MPS. The MPS must not exceed BT_L2CAP_RX_MTU.
Param chan
The receiving channel.
Param sdu_len
Byte length of the SDU this segment is part of.
Param seg_offset
The byte offset of this segment in the SDU.
Param seg
The segment payload.
struct bt_l2cap_server
#include <l2cap.h> L2CAP Server structure.
Public Members
uint16_t psm
Server PSM.
Possible values:
• 0 – A dynamic value will be auto-allocated when bt_l2cap_server_register() is called.
• 0x0001-0x007f – Standard, Bluetooth SIG-assigned fixed values.
bt_security_t sec_level
Required minimum security level
Bluetooth Media
API Reference
group bt_mcs
Media Control Service (MCS)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Definitions and types related to the Media Control Service and Media Control Profile specifications.
Defines
BT_MCS_ERR_LONG_VAL_CHANGED
BT_MCS_PLAYBACK_SPEED_MIN
Playback speeds.
All values from -128 to 127 allowed, only some defined
BT_MCS_PLAYBACK_SPEED_QUARTER
BT_MCS_PLAYBACK_SPEED_HALF
BT_MCS_PLAYBACK_SPEED_UNITY
BT_MCS_PLAYBACK_SPEED_DOUBLE
BT_MCS_PLAYBACK_SPEED_MAX
BT_MCS_SEEKING_SPEED_FACTOR_MAX
Seeking speed.
The allowed values for seeking speed are the range -64 to -4 (endpoints included), the value
0, and the range 4 to 64 (endpoints included).
BT_MCS_SEEKING_SPEED_FACTOR_MIN
BT_MCS_SEEKING_SPEED_FACTOR_ZERO
BT_MCS_PLAYING_ORDER_SINGLE_ONCE
Playing orders
BT_MCS_PLAYING_ORDER_SINGLE_REPEAT
BT_MCS_PLAYING_ORDER_INORDER_ONCE
BT_MCS_PLAYING_ORDER_INORDER_REPEAT
BT_MCS_PLAYING_ORDER_OLDEST_ONCE
BT_MCS_PLAYING_ORDER_OLDEST_REPEAT
BT_MCS_PLAYING_ORDER_NEWEST_ONCE
BT_MCS_PLAYING_ORDER_NEWEST_REPEAT
BT_MCS_PLAYING_ORDER_SHUFFLE_ONCE
BT_MCS_PLAYING_ORDER_SHUFFLE_REPEAT
BT_MCS_PLAYING_ORDERS_SUPPORTED_SINGLE_ONCE
Playing orders supported.
A bitmap, in the same order as the playing orders above. Note that playing order 1 corre-
sponds to bit 0, and so on.
BT_MCS_PLAYING_ORDERS_SUPPORTED_SINGLE_REPEAT
BT_MCS_PLAYING_ORDERS_SUPPORTED_INORDER_ONCE
BT_MCS_PLAYING_ORDERS_SUPPORTED_INORDER_REPEAT
BT_MCS_PLAYING_ORDERS_SUPPORTED_OLDEST_ONCE
BT_MCS_PLAYING_ORDERS_SUPPORTED_OLDEST_REPEAT
BT_MCS_PLAYING_ORDERS_SUPPORTED_NEWEST_ONCE
BT_MCS_PLAYING_ORDERS_SUPPORTED_NEWEST_REPEAT
BT_MCS_PLAYING_ORDERS_SUPPORTED_SHUFFLE_ONCE
BT_MCS_PLAYING_ORDERS_SUPPORTED_SHUFFLE_REPEAT
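The mapping stated above (playing order N corresponds to bit N - 1 in the supported-orders bitmap) can be expressed as a small helper; the function name is illustrative, not part of the Zephyr API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Playing order values start at 1, so order N maps to bit N - 1
 * in the "playing orders supported" bitmap.
 */
static bool playing_order_supported(uint16_t supported_bitmap, uint8_t order)
{
	return (supported_bitmap & (1U << (order - 1))) != 0;
}
```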
BT_MCS_MEDIA_STATE_INACTIVE
Media states
BT_MCS_MEDIA_STATE_PLAYING
BT_MCS_MEDIA_STATE_PAUSED
BT_MCS_MEDIA_STATE_SEEKING
BT_MCS_MEDIA_STATE_LAST
BT_MCS_OPC_PLAY
Media control point opcodes
BT_MCS_OPC_PAUSE
BT_MCS_OPC_FAST_REWIND
BT_MCS_OPC_FAST_FORWARD
BT_MCS_OPC_STOP
BT_MCS_OPC_MOVE_RELATIVE
BT_MCS_OPC_PREV_SEGMENT
BT_MCS_OPC_NEXT_SEGMENT
BT_MCS_OPC_FIRST_SEGMENT
BT_MCS_OPC_LAST_SEGMENT
BT_MCS_OPC_GOTO_SEGMENT
BT_MCS_OPC_PREV_TRACK
BT_MCS_OPC_NEXT_TRACK
BT_MCS_OPC_FIRST_TRACK
BT_MCS_OPC_LAST_TRACK
BT_MCS_OPC_GOTO_TRACK
BT_MCS_OPC_PREV_GROUP
BT_MCS_OPC_NEXT_GROUP
BT_MCS_OPC_FIRST_GROUP
BT_MCS_OPC_LAST_GROUP
BT_MCS_OPC_GOTO_GROUP
BT_MCS_OPCODES_SUPPORTED_LEN
Media control point supported opcodes length
BT_MCS_OPC_SUP_PLAY
Media control point supported opcodes values
BT_MCS_OPC_SUP_PAUSE
BT_MCS_OPC_SUP_FAST_REWIND
BT_MCS_OPC_SUP_FAST_FORWARD
BT_MCS_OPC_SUP_STOP
BT_MCS_OPC_SUP_MOVE_RELATIVE
BT_MCS_OPC_SUP_PREV_SEGMENT
BT_MCS_OPC_SUP_NEXT_SEGMENT
BT_MCS_OPC_SUP_FIRST_SEGMENT
BT_MCS_OPC_SUP_LAST_SEGMENT
BT_MCS_OPC_SUP_GOTO_SEGMENT
BT_MCS_OPC_SUP_PREV_TRACK
BT_MCS_OPC_SUP_NEXT_TRACK
BT_MCS_OPC_SUP_FIRST_TRACK
BT_MCS_OPC_SUP_LAST_TRACK
BT_MCS_OPC_SUP_GOTO_TRACK
BT_MCS_OPC_SUP_PREV_GROUP
BT_MCS_OPC_SUP_NEXT_GROUP
BT_MCS_OPC_SUP_FIRST_GROUP
BT_MCS_OPC_SUP_LAST_GROUP
BT_MCS_OPC_SUP_GOTO_GROUP
BT_MCS_OPC_NTF_SUCCESS
Media control point notification result codes
BT_MCS_OPC_NTF_NOT_SUPPORTED
BT_MCS_OPC_NTF_PLAYER_INACTIVE
BT_MCS_OPC_NTF_CANNOT_BE_COMPLETED
BT_MCS_SEARCH_TYPE_TRACK_NAME
Search control point type values
BT_MCS_SEARCH_TYPE_ARTIST_NAME
BT_MCS_SEARCH_TYPE_ALBUM_NAME
BT_MCS_SEARCH_TYPE_GROUP_NAME
BT_MCS_SEARCH_TYPE_EARLIEST_YEAR
BT_MCS_SEARCH_TYPE_LATEST_YEAR
BT_MCS_SEARCH_TYPE_GENRE
BT_MCS_SEARCH_TYPE_ONLY_TRACKS
BT_MCS_SEARCH_TYPE_ONLY_GROUPS
SEARCH_LEN_MIN
Search control point values
SEARCH_SCI_LEN_MIN
SEARCH_LEN_MAX
SEARCH_PARAM_MAX
BT_MCS_SCP_NTF_SUCCESS
Search notification result codes
BT_MCS_SCP_NTF_FAILURE
BT_MCS_GROUP_OBJECT_TRACK_TYPE
BT_MCS_GROUP_OBJECT_GROUP_TYPE
Media Proxy
group bt_media_proxy
Media proxy module.
The media proxy module is the connection point between media players and media controllers.
A media player has (access to) media content and knows how to navigate and play this content. A
media controller reads or gets information from a player and controls the player by setting player
parameters and giving the player commands.
The media proxy module allows media player implementations to make themselves available to
media controllers. And it allows controllers to access, and get updates from, any player.
The media proxy module allows both local and remote control of local player instances: A media
controller may be a local application, or it may be a Media Control Service relaying requests from
a remote Media Control Client. There may be either local or remote control, or both, or even
multiple instances of each.
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Defines
MEDIA_PROXY_PLAYBACK_SPEED_MIN
Playback speed parameters.
All values from -128 to 127 allowed, only some defined
MEDIA_PROXY_PLAYBACK_SPEED_QUARTER
MEDIA_PROXY_PLAYBACK_SPEED_HALF
MEDIA_PROXY_PLAYBACK_SPEED_UNITY
MEDIA_PROXY_PLAYBACK_SPEED_DOUBLE
MEDIA_PROXY_PLAYBACK_SPEED_MAX
MEDIA_PROXY_SEEKING_SPEED_FACTOR_MAX
Seeking speed factors.
The allowed values for seeking speed are the range -64 to -4 (endpoints included), the value
0, and the range 4 to 64 (endpoints included).
MEDIA_PROXY_SEEKING_SPEED_FACTOR_MIN
MEDIA_PROXY_SEEKING_SPEED_FACTOR_ZERO
MEDIA_PROXY_PLAYING_ORDER_SINGLE_ONCE
Playing orders.
MEDIA_PROXY_PLAYING_ORDER_SINGLE_REPEAT
MEDIA_PROXY_PLAYING_ORDER_INORDER_ONCE
MEDIA_PROXY_PLAYING_ORDER_INORDER_REPEAT
MEDIA_PROXY_PLAYING_ORDER_OLDEST_ONCE
MEDIA_PROXY_PLAYING_ORDER_OLDEST_REPEAT
MEDIA_PROXY_PLAYING_ORDER_NEWEST_ONCE
MEDIA_PROXY_PLAYING_ORDER_NEWEST_REPEAT
MEDIA_PROXY_PLAYING_ORDER_SHUFFLE_ONCE
MEDIA_PROXY_PLAYING_ORDER_SHUFFLE_REPEAT
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_SINGLE_ONCE
Playing orders supported.
A bitmap, in the same order as the playing orders above. Note that playing order 1 corre-
sponds to bit 0, and so on.
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_SINGLE_REPEAT
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_INORDER_ONCE
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_INORDER_REPEAT
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_OLDEST_ONCE
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_OLDEST_REPEAT
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_NEWEST_ONCE
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_NEWEST_REPEAT
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_SHUFFLE_ONCE
MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_SHUFFLE_REPEAT
MEDIA_PROXY_STATE_INACTIVE
Media player states.
MEDIA_PROXY_STATE_PLAYING
MEDIA_PROXY_STATE_PAUSED
MEDIA_PROXY_STATE_SEEKING
MEDIA_PROXY_STATE_LAST
MEDIA_PROXY_OP_PLAY
Media player command opcodes.
MEDIA_PROXY_OP_PAUSE
MEDIA_PROXY_OP_FAST_REWIND
MEDIA_PROXY_OP_FAST_FORWARD
MEDIA_PROXY_OP_STOP
MEDIA_PROXY_OP_MOVE_RELATIVE
MEDIA_PROXY_OP_PREV_SEGMENT
MEDIA_PROXY_OP_NEXT_SEGMENT
MEDIA_PROXY_OP_FIRST_SEGMENT
MEDIA_PROXY_OP_LAST_SEGMENT
MEDIA_PROXY_OP_GOTO_SEGMENT
MEDIA_PROXY_OP_PREV_TRACK
MEDIA_PROXY_OP_NEXT_TRACK
MEDIA_PROXY_OP_FIRST_TRACK
MEDIA_PROXY_OP_LAST_TRACK
MEDIA_PROXY_OP_GOTO_TRACK
MEDIA_PROXY_OP_PREV_GROUP
MEDIA_PROXY_OP_NEXT_GROUP
MEDIA_PROXY_OP_FIRST_GROUP
MEDIA_PROXY_OP_LAST_GROUP
MEDIA_PROXY_OP_GOTO_GROUP
MEDIA_PROXY_OPCODES_SUPPORTED_LEN
Media player supported opcodes length.
MEDIA_PROXY_OP_SUP_PLAY
Media player supported command opcodes.
MEDIA_PROXY_OP_SUP_PAUSE
MEDIA_PROXY_OP_SUP_FAST_REWIND
MEDIA_PROXY_OP_SUP_FAST_FORWARD
MEDIA_PROXY_OP_SUP_STOP
MEDIA_PROXY_OP_SUP_MOVE_RELATIVE
MEDIA_PROXY_OP_SUP_PREV_SEGMENT
MEDIA_PROXY_OP_SUP_NEXT_SEGMENT
MEDIA_PROXY_OP_SUP_FIRST_SEGMENT
MEDIA_PROXY_OP_SUP_LAST_SEGMENT
MEDIA_PROXY_OP_SUP_GOTO_SEGMENT
MEDIA_PROXY_OP_SUP_PREV_TRACK
MEDIA_PROXY_OP_SUP_NEXT_TRACK
MEDIA_PROXY_OP_SUP_FIRST_TRACK
MEDIA_PROXY_OP_SUP_LAST_TRACK
MEDIA_PROXY_OP_SUP_GOTO_TRACK
MEDIA_PROXY_OP_SUP_PREV_GROUP
MEDIA_PROXY_OP_SUP_NEXT_GROUP
MEDIA_PROXY_OP_SUP_FIRST_GROUP
MEDIA_PROXY_OP_SUP_LAST_GROUP
MEDIA_PROXY_OP_SUP_GOTO_GROUP
MEDIA_PROXY_CMD_SUCCESS
Media player command result codes.
MEDIA_PROXY_CMD_NOT_SUPPORTED
MEDIA_PROXY_CMD_PLAYER_INACTIVE
MEDIA_PROXY_CMD_CANNOT_BE_COMPLETED
MEDIA_PROXY_SEARCH_TYPE_TRACK_NAME
Search operation type values.
MEDIA_PROXY_SEARCH_TYPE_ARTIST_NAME
MEDIA_PROXY_SEARCH_TYPE_ALBUM_NAME
MEDIA_PROXY_SEARCH_TYPE_GROUP_NAME
MEDIA_PROXY_SEARCH_TYPE_EARLIEST_YEAR
MEDIA_PROXY_SEARCH_TYPE_LATEST_YEAR
MEDIA_PROXY_SEARCH_TYPE_GENRE
MEDIA_PROXY_SEARCH_TYPE_ONLY_TRACKS
MEDIA_PROXY_SEARCH_TYPE_ONLY_GROUPS
MEDIA_PROXY_SEARCH_SUCCESS
Search operation result codes.
MEDIA_PROXY_SEARCH_FAILURE
MEDIA_PROXY_GROUP_OBJECT_TRACK_TYPE
MEDIA_PROXY_GROUP_OBJECT_GROUP_TYPE
Functions
Returns
0 if success, errno on failure.
int media_proxy_ctrl_get_icon_url(struct media_player *player)
Read Icon URL.
Get a URL to the media player’s icon.
Parameters
• player – Media player instance pointer
int media_proxy_ctrl_get_track_title(struct media_player *player)
Read Track Title.
Parameters
• player – Media player instance pointer
Returns
0 if success, errno on failure.
int media_proxy_ctrl_get_track_duration(struct media_player *player)
Read Track Duration.
The duration of a track is measured in hundredths of a second.
Parameters
• player – Media player instance pointer
Returns
0 if success, errno on failure.
int media_proxy_ctrl_get_track_position(struct media_player *player)
Read Track Position.
The position of the player (the playing position) is measured in hundredths of a second from
the beginning of the track
Parameters
• player – Media player instance pointer
Returns
0 if success, errno on failure.
int media_proxy_ctrl_set_track_position(struct media_player *player, int32_t position)
Set Track Position.
Set the playing position of the media player in the current track. The position is given in
hundredths of a second, from the beginning of the track for positive values, and
(backwards) from the end of the track for negative values.
Parameters
• player – Media player instance pointer
• position – The track position to set
Returns
0 if success, errno on failure.
int media_proxy_ctrl_get_playback_speed(struct media_player *player)
Get Playback Speed.
The playback speed parameter is related to the actual playback speed as follows: actual play-
back speed = 2^(speed_parameter/64)
A speed parameter of 0 corresponds to unity speed playback (i.e. playback at “normal” speed).
A speed parameter of -128 corresponds to playback at one fourth of normal speed, 127 corre-
sponds to playback at almost four times the normal speed.
Parameters
• player – Media player instance pointer
Returns
0 if success, errno on failure.
int media_proxy_ctrl_set_playback_speed(struct media_player *player, int8_t speed)
Set Playback Speed.
See the get_playback_speed() function for an explanation of the playback speed parameter.
Note that the media player may not support all possible values of the playback speed parame-
ter. If the value given is not supported, and is higher than the current value, the player should
set the playback speed to the next higher supported value. (And correspondingly to the next
lower supported value for given values lower than the current value.)
Parameters
• player – Media player instance pointer
• speed – The playback speed parameter to set
Returns
0 if success, errno on failure.
int media_proxy_ctrl_get_seeking_speed(struct media_player *player)
Get Seeking Speed.
The seeking speed gives the speed with which the player is seeking. It is a factor, relative
to real-time playback speed - a factor four means seeking happens at four times the real-
time playback speed. Positive values are for forward seeking, negative values for backwards
seeking.
The seeking speed is not settable - a non-zero seeking speed is the result of "fast rewind" or
"fast forward" commands.
Parameters
• player – Media player instance pointer
Returns
0 if success, errno on failure.
int media_proxy_ctrl_get_track_segments_id(struct media_player *player)
Read Current Track Segments Object ID.
Get an ID (48 bit) that can be used to retrieve the Current Track Segments Object from an
Object Transfer Service
See the Media Control Service spec v1.0 sections 3.10 and 4.2 for a description of the Track
Segments Object.
Requires Object Transfer Service
Parameters
• player – Media player instance pointer
Returns
0 if success, errno on failure.
Parameters
• url – The URL of the player’s icon
void media_proxy_pl_track_changed_cb(void)
Track changed callback.
To be called when the player’s current track is changed
void media_proxy_pl_track_title_cb(char *title)
Track title callback.
To be called when the player’s current track is changed
Parameters
• title – The title of the track
void media_proxy_pl_track_duration_cb(int32_t duration)
Track duration callback.
To be called when the current track’s duration is changed (e.g. due to a track change)
The track duration is given in hundredths of a second.
Parameters
• duration – The track duration
void media_proxy_pl_track_position_cb(int32_t position)
Track position callback.
To be called when the media player’s position in the track is changed, or when the player is
paused or similar.
Exception: This callback should not be called when the position changes during regular play-
back, i.e. while the player is playing and playback happens at a constant speed.
The track position is given in hundredths of a second from the start of the track.
Parameters
• position – The media player’s position in the track
void media_proxy_pl_playback_speed_cb(int8_t speed)
Playback speed callback.
To be called when the playback speed is changed.
Parameters
• speed – The playback speed parameter
void media_proxy_pl_seeking_speed_cb(int8_t speed)
Seeking speed callback.
To be called when the seeking speed is changed.
Parameters
• speed – The seeking speed factor
void media_proxy_pl_current_track_id_cb(uint64_t id)
Current track object ID callback.
To be called when the ID of the current track is changed (e.g. due to a track change).
Parameters
• id – The ID of the current track object in the OTS
Parameters
• result_code – The result (success or failure) of the search
void media_proxy_pl_search_results_id_cb(uint64_t id)
Search Results object ID callback.
To be called when the ID of the search results is changed (typically as the result of a new
successful search).
Parameters
• id – The ID of the search results object in the OTS
struct mpl_cmd
#include <media_proxy.h> Media player command.
struct mpl_cmd_ntf
#include <media_proxy.h> Media command notification.
struct mpl_sci
#include <media_proxy.h> Search control item.
Public Members
uint8_t len
Length of type and parameter
uint8_t type
MEDIA_PROXY_SEARCH_TYPE_<...>
char param[62]
Search parameter
struct mpl_search
#include <media_proxy.h> Search.
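A search is a concatenation of search control items, each laid out as &lt;length&gt;&lt;type&gt;&lt;parameter&gt;, with the length octet covering the type octet plus the parameter. A self-contained sketch of appending one item (the buffer struct and function here are illustrative stand-ins, not the Zephyr API):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in buffer; in Zephyr the assembled search is carried in
 * struct mpl_search. */
struct search_buf {
    uint8_t len;
    uint8_t buf[64];
};

/* Append one search control item: <len><type><param>. The type value
 * would come from the MEDIA_PROXY_SEARCH_TYPE_<...> defines. */
static bool search_add_item(struct search_buf *s, uint8_t type,
                            const char *param)
{
    size_t plen = strlen(param);

    if ((size_t)s->len + 2U + plen > sizeof(s->buf)) {
        return false; /* item would not fit */
    }

    s->buf[s->len++] = (uint8_t)(1U + plen); /* length: type + param */
    s->buf[s->len++] = type;
    memcpy(&s->buf[s->len], param, plen);
    s->len += (uint8_t)plen;

    return true;
}
```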
struct media_proxy_ctrl_cbs
#include <media_proxy.h> Callbacks to a controller, from the media proxy.
Given by a controller when registering
Public Members
Param title
The title of the current track
value.
Param id
The ID (48 bits) attempted to write
Called when the Current Group Object ID is written. See also
media_proxy_ctrl_set_current_group_id()
Param player
Media player instance pointer
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param id
The ID (48 bits) attempted to write
value.
Param state
The media player state
void (*command_send)(struct media_player *player, int err, const struct mpl_cmd *cmd)
Command send callback.
Called when a command has been sent. See also media_proxy_ctrl_send_command()
Param player
Media player instance pointer
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param cmd
The command sent
void (*command_recv)(struct media_player *player, int err, const struct mpl_cmd_ntf *result)
Command result receive callback.
Called when a command result has been received. See also
media_proxy_ctrl_send_command()
Param player
Media player instance pointer
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param result
The result received
void (*search_send)(struct media_player *player, int err, const struct mpl_search *search)
Search send callback.
Called when a search has been sent. See also media_proxy_ctrl_send_search()
Param player
Media player instance pointer
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param search
The search sent
Called when a search result code has been received. See also
media_proxy_ctrl_send_search()
The search result code tells whether the search was successful or not. For a successful
search, the actual results of the search (i.e. what was found as a result of the search) can
be accessed using the Search Results Object ID. The Search Results Object ID has a
separate callback - search_results_id_recv().
Param player
Media player instance pointer
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param result_code
Search result code
struct media_proxy_pl_calls
#include <media_proxy.h> Available calls in a player that the media proxy can call.
Given by a player when registering.
Public Members
uint64_t (*get_icon_id)(void)
Read Icon Object ID.
Get an ID (48 bit) that can be used to retrieve the Icon Object from an Object Transfer
Service
See the Media Control Service spec v1.0 sections 3.2 and 4.1 for a description of the Icon
Object.
Return
The Icon Object ID
int32_t (*get_track_duration)(void)
Read Track Duration.
The duration of a track is measured in hundredths of a second.
Return
The duration of the current track
int32_t (*get_track_position)(void)
Read Track Position.
The position of the player (the playing position) is measured in hundredths of a second
from the beginning of the track
Return
The position of the player in the current track
int8_t (*get_playback_speed)(void)
Get Playback Speed.
The playback speed parameter is related to the actual playback speed as follows: actual
playback speed = 2^(speed_parameter/64)
A speed parameter of 0 corresponds to unity speed playback (i.e. playback at “normal”
speed). A speed parameter of -128 corresponds to playback at one fourth of normal speed,
while 127 corresponds to playback at almost four times the normal speed.
Return
The playback speed parameter
void (*set_playback_speed)(int8_t speed)
Set Playback Speed.
See the get_playback_speed() function for an explanation of the playback speed parameter.
Note that the media player may not support all possible values of the playback speed
parameter. If the value given is not supported, and is higher than the current value, the
player should set the playback speed to the next higher supported value. (And
correspondingly to the next lower supported value for given values lower than the current
value.)
Param speed
The playback speed parameter to set
int8_t (*get_seeking_speed)(void)
Get Seeking Speed.
The seeking speed gives the speed with which the player is seeking. It is a factor, rela-
tive to real-time playback speed - a factor four means seeking happens at four times the
real-time playback speed. Positive values are for forward seeking, negative values for
backwards seeking.
The seeking speed is not settable - a non-zero seeking speed is the result of “fast rewind”
or “fast forward” commands.
Return
The seeking speed factor
uint64_t (*get_track_segments_id)(void)
Read Current Track Segments Object ID.
Get an ID (48 bit) that can be used to retrieve the Current Track Segments Object from
an Object Transfer Service
See the Media Control Service spec v1.0 sections 3.10 and 4.2 for a description of the
Track Segments Object.
Return
The Current Track Segments Object ID
uint64_t (*get_current_track_id)(void)
Read Current Track Object ID.
Get an ID (48 bit) that can be used to retrieve the Current Track Object from an Object
Transfer Service
See the Media Control Service spec v1.0 sections 3.11 and 4.3 for a description of the
Current Track Object.
Return
The Current Track Object ID
uint64_t (*get_next_track_id)(void)
Read Next Track Object ID.
Get an ID (48 bit) that can be used to retrieve the Next Track Object from an Object
Transfer Service
Return
The Next Track Object ID
uint64_t (*get_parent_group_id)(void)
Read Parent Group Object ID.
Get an ID (48 bit) that can be used to retrieve the Parent Group Object from an Object
Transfer Service
The parent group is the parent of the current group.
See the Media Control Service spec v1.0 sections 3.14 and 4.4 for a description of the
Parent Group Object.
Return
The Parent Group Object ID
uint64_t (*get_current_group_id)(void)
Read Current Group Object ID.
Get an ID (48 bit) that can be used to retrieve the Current Group Object from an Object
Transfer Service
See the Media Control Service spec v1.0 sections 3.14 and 4.4 for a description of the
Current Group Object.
Return
The Current Group Object ID
uint8_t (*get_playing_order)(void)
Read Playing Order.
Return
The media player’s current playing order
uint16_t (*get_playing_orders_supported)(void)
Read Playing Orders Supported.
Read a bitmap containing the media player’s supported playing orders. See the ME-
DIA_PROXY_PLAYING_ORDERS_SUPPORTED_* defines.
Return
The media player’s supported playing orders
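Testing a bit in the returned bitmap is a plain mask operation; the define below is an illustrative stand-in, since the real bit positions come from the MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_* macros:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in; the real bit positions are given by the
 * MEDIA_PROXY_PLAYING_ORDERS_SUPPORTED_* defines in media_proxy.h. */
#define PLAYING_ORDER_SUPPORTED_SHUFFLE (1U << 7)

/* True if the given playing order bit is set in the supported bitmap. */
static bool playing_order_supported(uint16_t orders, uint16_t order_bit)
{
    return (orders & order_bit) != 0U;
}
```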
uint8_t (*get_media_state)(void)
Read Media State.
Read the media player’s state See the MEDIA_PROXY_MEDIA_STATE_* defines.
Return
The media player’s state
uint32_t (*get_commands_supported)(void)
Read Commands Supported.
Read a bitmap containing the media player’s supported command opcodes. See the ME-
DIA_PROXY_OP_SUP_* defines.
Return
The media player’s supported command opcodes
uint64_t (*get_search_results_id)(void)
Read Search Results Object ID.
Get an ID (48 bit) that can be used to retrieve the Search Results Object from an Object
Transfer Service
The search results object is a group object. The search results object only exists if a
successful search operation has been done.
Return
The Search Results Object ID
uint8_t (*get_content_ctrl_id)(void)
Read Content Control ID.
The content control ID identifies a content control service on a device, and links it to the
corresponding audio stream.
Return
The content control ID for the media player
group bt_gatt_mcc
Bluetooth Media Control Client (MCC) interface.
Updated to the Media Control Profile specification revision 1.0
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Typedefs
typedef void (*bt_mcc_read_player_name_cb)(struct bt_conn *conn, int err, const char *name)
Callback function for bt_mcc_read_player_name()
Called when the player name is read or notified
Param conn
The connection that was used to initialise the media control client
Param err
Error value. 0 on success, GATT error or errno on fail
Param name
Player name
typedef void (*bt_mcc_read_icon_url_cb)(struct bt_conn *conn, int err, const char *icon_url)
Callback function for bt_mcc_read_icon_url()
Called when the icon URL is read
Param conn
The connection that was used to initialise the media control client
Param err
Error value. 0 on success, GATT error or errno on fail
Param icon_url
The URL of the Icon
typedef void (*bt_mcc_read_track_title_cb)(struct bt_conn *conn, int err, const char *title)
Callback function for bt_mcc_read_track_title()
Called when the track title is read or notified
Param conn
The connection that was used to initialise the media control client
Param err
Error value. 0 on success, GATT error or errno on fail
Param title
The title of the track
Param err
Error value. 0 on success, GATT error or errno on fail
Param orders
The playing orders supported (bitmap)
typedef void (*bt_mcc_send_cmd_cb)(struct bt_conn *conn, int err, const struct mpl_cmd *cmd)
Callback function for bt_mcc_send_cmd()
Called when a command is sent, i.e. when the media control point is set
Param conn
The connection that was used to initialise the media control client
Param err
Error value. 0 on success, GATT error or errno on fail
Param cmd
The command sent
typedef void (*bt_mcc_cmd_ntf_cb)(struct bt_conn *conn, int err, const struct mpl_cmd_ntf *ntf)
Callback function for command notifications.
Called when the media control point is notified
Notifications for commands (i.e. for writes to the media control point) use a different param-
eter structure than what is used for sending commands (writing to the media control point)
Param conn
The connection that was used to initialise the media control client
Param err
Error value. 0 on success, GATT error or errno on fail
Param ntf
The command notification
Functions
Returns
0 if success, errno on failure.
int bt_mcc_read_playing_order(struct bt_conn *conn)
Read Playing Order.
Parameters
• conn – Connection to the peer device
Returns
0 if success, errno on failure.
int bt_mcc_set_playing_order(struct bt_conn *conn, uint8_t order)
Set Playing Order.
Parameters
• conn – Connection to the peer device
• order – Playing order
Returns
0 if success, errno on failure.
int bt_mcc_read_playing_orders_supported(struct bt_conn *conn)
Read Playing Orders Supported.
Parameters
• conn – Connection to the peer device
Returns
0 if success, errno on failure.
int bt_mcc_read_media_state(struct bt_conn *conn)
Read Media State.
Parameters
• conn – Connection to the peer device
Returns
0 if success, errno on failure.
int bt_mcc_send_cmd(struct bt_conn *conn, const struct mpl_cmd *cmd)
Send a command.
Write a command (e.g. “play”, “pause”) to the server’s media control point.
Parameters
• conn – Connection to the peer device
• cmd – The command to send
Returns
0 if success, errno on failure.
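The command structure carries an opcode, a flag indicating whether a parameter is used, and the parameter itself. A self-contained sketch of filling one in (the struct definition is a stand-in mirroring mpl_cmd, and the opcode value is an assumption; real opcodes come from the MEDIA_PROXY_OP_* defines in media_proxy.h):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in mirroring struct mpl_cmd from media_proxy.h. */
struct mpl_cmd {
    uint8_t opcode;   /* MEDIA_PROXY_OP_* opcode */
    bool use_param;   /* whether param is valid for this opcode */
    int32_t param;    /* opcode-specific parameter */
};

#define OP_MOVE_RELATIVE 0x10 /* assumed opcode value */

/* Build a "move relative" command; the offset is in hundredths of a
 * second, so 500 moves the track position 5 seconds forward. */
static struct mpl_cmd make_move_relative(int32_t offset)
{
    struct mpl_cmd cmd = {
        .opcode = OP_MOVE_RELATIVE,
        .use_param = true,
        .param = offset,
    };
    return cmd;
}
```

The populated structure would then be passed to bt_mcc_send_cmd(); simple commands such as “play” or “pause” carry no parameter and leave use_param false.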
int bt_mcc_read_opcodes_supported(struct bt_conn *conn)
Read Opcodes Supported.
Parameters
• conn – Connection to the peer device
Returns
0 if success, errno on failure.
struct bt_mcc_cb
#include <mcc.h> Media control client callbacks.
The Bluetooth mesh profile adds secure wireless multi-hop communication for Bluetooth Low Energy.
This module implements the Bluetooth Mesh Profile Specification v1.0.1.
Read more about Bluetooth mesh on the Bluetooth SIG Website.
Core The core provides functionality for managing the general Bluetooth mesh state.
Low Power Node The Low Power Node (LPN) role allows battery powered devices to participate in a
mesh network as a leaf node. An LPN interacts with the mesh network through a Friend node, which
is responsible for relaying any messages directed to the LPN. The LPN saves power by keeping its radio
turned off, and only wakes up to either send messages or poll the Friend node for any incoming messages.
The radio control and polling are managed automatically by the mesh stack, but the LPN API allows
the application to trigger the polling at any time through bt_mesh_lpn_poll() . The LPN operation
parameters, including poll interval, poll event timing and Friend requirements, are controlled through the
CONFIG_BT_MESH_LOW_POWER option and related configuration options.
When using the LPN feature with logging, it is strongly recommended to only use the
CONFIG_LOG_MODE_DEFERRED option. Log modes other than deferred may cause unintended delays
during processing of log messages. This in turn will affect scheduling of the receive delay and receive
window. The same limitation applies for the CONFIG_BT_MESH_FRIEND option.
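As a sketch, a minimal prj.conf fragment enabling the LPN role could look as follows; the timeout value is illustrative, and the full option set lives under CONFIG_BT_MESH_LOW_POWER:

```kconfig
CONFIG_BT_MESH=y
CONFIG_BT_MESH_LOW_POWER=y
# PollTimeout in units of 100 milliseconds (illustrative value)
CONFIG_BT_MESH_LPN_POLL_TIMEOUT=300
# Deferred logging, as recommended above
CONFIG_LOG_MODE_DEFERRED=y
```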
Replay Protection List The Replay Protection List (RPL) is used to hold recently received sequence
numbers from elements within the mesh network to perform protection against replay attacks.
To keep a node protected against replay attacks after reboot, it needs to store the entire RPL in
persistent storage before it is powered off. Depending on the amount of traffic in a mesh network,
storing recently seen sequence numbers can cause significant flash wear. To mitigate this,
CONFIG_BT_MESH_RPL_STORE_TIMEOUT can be used. This option postpones storing of RPL entries in the
persistent storage.
This option, however, doesn’t completely solve the issue, as the node may get powered off before the
timer to store the RPL has fired. To ensure that messages cannot be replayed, the node can initiate
storage of the pending RPL entry (or entries) at any time (or sufficiently early before power loss) by
calling bt_mesh_rpl_pending_store() . It is up to the node to decide which RPL entries are to be stored
in this case.
Setting CONFIG_BT_MESH_RPL_STORE_TIMEOUT to -1 switches the timer off completely, which can
significantly reduce flash wear. This moves the responsibility for storing the RPL to the user
application and requires that sufficient backup power is available from the time this API is called until
all RPL entries are written to flash.
Finding the right balance between CONFIG_BT_MESH_RPL_STORE_TIMEOUT and calling
bt_mesh_rpl_pending_store() can reduce both the risk of replay attacks and flash wear.
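As a sketch, a node with reliable power-loss detection could rely entirely on application-triggered storage; the configuration below is illustrative:

```kconfig
# Disable the RPL store timer; the application calls
# bt_mesh_rpl_pending_store() when a power-loss warning is raised.
CONFIG_BT_MESH_RPL_STORE_TIMEOUT=-1
```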
Persistent storage The mesh stack uses the Settings Subsystem for storing the device configuration
persistently. When the stack configuration changes and the change needs to be stored persistently, the
stack schedules a work item. The delay between scheduling the work item and submitting it to the
workqueue is defined by the CONFIG_BT_MESH_STORE_TIMEOUT option. Once storing of data is scheduled,
it cannot be rescheduled until the work item is processed. Exceptions are made in certain cases as
described below.
When IV index, Sequence Number or CDB configuration have to be stored, the work item is submitted
to the workqueue without the delay. If the work item was previously scheduled, it will be rescheduled
without the delay.
The Replay Protection List uses the same work item to store RPL entries. If storing of RPL
entries is requested and no other configuration is pending to be stored, the delay is set to
CONFIG_BT_MESH_RPL_STORE_TIMEOUT. If other stack configuration has to be stored, the delay
defined by CONFIG_BT_MESH_STORE_TIMEOUT is less than CONFIG_BT_MESH_RPL_STORE_TIMEOUT,
and the work item was scheduled by the Replay Protection List, then the work item will be
rescheduled.
When the work item is running, the stack will store all pending configuration, including the RPL entries.
Work item execution context The CONFIG_BT_MESH_SETTINGS_WORKQ option configures the context
from which the work item is executed. This option is enabled by default, and results in the stack using a
dedicated cooperative thread to process the work item. This allows the stack to process other incoming
and outgoing messages, as well as other work items submitted to the system workqueue, while the stack
configuration is being stored.
When this option is disabled, the work item is submitted to the system workqueue. This means that
the system workqueue is blocked for the time it takes to store the stack’s configuration. It is not recom-
mended to disable this option as this will make the device non-responsive for a noticeable amount of
time.
API reference
group bt_mesh
Bluetooth mesh.
Defines
BT_MESH_NET_PRIMARY
BT_MESH_FEAT_RELAY
Relay feature
BT_MESH_FEAT_PROXY
GATT Proxy feature
BT_MESH_FEAT_FRIEND
Friend feature
BT_MESH_FEAT_LOW_POWER
Low Power Node feature
BT_MESH_FEAT_SUPPORTED
BT_MESH_LPN_CB_DEFINE(_name)
Register a callback structure for Friendship events.
Parameters
• _name – Name of callback structure.
BT_MESH_FRIEND_CB_DEFINE(_name)
Register a callback structure for Friendship events.
Registers a callback structure that will be called whenever Friendship gets established or ter-
minated.
Parameters
• _name – Name of callback structure.
Functions
Note: When flash is used as the persistent storage, calling this API too frequently may wear
it out.
Parameters
struct bt_mesh_lpn_cb
#include <main.h> Low Power Node callback functions.
Public Members
struct bt_mesh_friend_cb
#include <main.h> Friend Node callback functions.
Public Members
Param net_idx
NetKeyIndex used during friendship establishment.
Param lpn_addr
Low Power Node address.
Param recv_delay
Receive Delay in units of 1 millisecond.
Param polltimeout
PollTimeout in units of 1 millisecond.
Access layer The access layer is the application’s interface to the Bluetooth mesh network. The access
layer provides mechanisms for compartmentalizing the node behavior into elements and models, which
are implemented by the application.
Mesh models The functionality of a mesh node is represented by models. A model implements a single
behavior the node supports, like being a light, a sensor or a thermostat. The mesh models are grouped
into elements. Each element is assigned its own unicast address, and may only contain one of each type of
model. Conventionally, each element represents a single aspect of the mesh node behavior. For instance,
a node that contains a sensor, two lights and a power outlet would spread this functionality across four
elements, with each element instantiating all the models required for a single aspect of the supported
behavior.
The node’s element and model structure is specified in the node composition data, which is passed to
bt_mesh_init() during initialization. The Bluetooth SIG have defined a set of foundation models (see
Mesh models) and a set of models for implementing common behavior in the Bluetooth Mesh Model
Specification. All models not specified by the Bluetooth SIG are vendor models, and must be tied to a
Company ID.
Mesh models have several parameters that can be configured either through initialization of the mesh
stack or with the Configuration Server:
Opcode list The opcode list contains all message opcodes the model can receive, as well as the min-
imum acceptable payload length and the callback to pass them to. Models can support any number of
opcodes, but each opcode can only be listed by one model in each element.
The full opcode list must be passed to the model structure in the composition data, and cannot be
changed at runtime. The end of the opcode list is determined by the special BT_MESH_MODEL_OP_END
entry. This entry must always be present in the opcode list, unless the list is empty. In that case,
BT_MESH_MODEL_NO_OPS should be used in place of a proper opcode list definition.
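The opcode list described above is typically declared as a static array; in the sketch below the handler names and the choice of Generic OnOff opcodes are illustrative, while the types and macros come from the mesh access API:

```c
/* Handler prototypes matching bt_mesh_model_op's handler signature;
 * the names are illustrative. */
static int gen_onoff_get(struct bt_mesh_model *model,
                         struct bt_mesh_msg_ctx *ctx,
                         struct net_buf_simple *buf);
static int gen_onoff_set(struct bt_mesh_model *model,
                         struct bt_mesh_msg_ctx *ctx,
                         struct net_buf_simple *buf);

/* Each entry pairs an opcode with its acceptable payload length and
 * handler; the list is terminated by BT_MESH_MODEL_OP_END. */
static const struct bt_mesh_model_op gen_onoff_srv_op[] = {
    /* Generic OnOff Get: no payload */
    { BT_MESH_MODEL_OP_2(0x82, 0x01), BT_MESH_LEN_EXACT(0), gen_onoff_get },
    /* Generic OnOff Set: at least 2 octets (OnOff + TID) */
    { BT_MESH_MODEL_OP_2(0x82, 0x02), BT_MESH_LEN_MIN(2),   gen_onoff_set },
    BT_MESH_MODEL_OP_END,
};
```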
AppKey list The AppKey list contains all the application keys the model can receive messages on. Only
messages encrypted with application keys in the AppKey list will be passed to the model.
The maximum number of supported application keys each model can hold is configured with the
CONFIG_BT_MESH_MODEL_KEY_COUNT configuration option. The contents of the AppKey list are managed
by the Configuration Server.
Subscription list A model will process all messages addressed to the unicast address of its element
(given that the utilized application key is present in the AppKey list). Additionally, the model will process
packets addressed to any group or virtual address in its subscription list. This allows nodes to address
multiple nodes throughout the mesh network with a single message.
The maximum number of addresses the Subscription list of each model can hold is configured
with the CONFIG_BT_MESH_MODEL_GROUP_COUNT configuration option. The contents of the subscription
list are managed by the Configuration Server.
Extended models The Bluetooth mesh specification allows the mesh models to extend each other.
When a model extends another, it inherits that model’s functionality, and extension can be used to con-
struct complex models out of simple ones, leveraging the existing model functionality to avoid defining
new opcodes. Models may extend any number of models, from any element. When one model extends
another in the same element, the two models will share subscription lists. The mesh stack implements
this by merging the subscription lists of the two models into one, combining the number of subscriptions
the models can have in total. Models may extend models that extend others, creating an “extension
tree”. All models in an extension tree share a single subscription list per element it spans.
Model extensions are done by calling bt_mesh_model_extend() during initialization. A model can only
be extended by one other model, and extensions cannot be circular. Note that binding of node states and
other relationships between the models must be defined by the model implementations.
The model extension concept adds some overhead in the access layer packet processing, and must be
explicitly enabled with CONFIG_BT_MESH_MODEL_EXTENSIONS to have any effect.
Model data storage Mesh models may have data associated with each model instance that needs to
be stored persistently. The access API provides a mechanism for storing this data, leveraging the internal
model instance encoding scheme. Models can store one user defined data entry per instance by calling
bt_mesh_model_data_store() . To be able to read out the data the next time the device reboots, the
model’s bt_mesh_model_cb.settings_set callback must be populated. This callback gets called when
model specific data is found in the persistent storage. The model can retrieve the data by calling the
read_cb passed as a parameter to the callback. See the Settings module documentation for details.
When model data changes frequently, storing it on every change may lead to increased
wear of flash. To reduce the wear, the model can postpone storing of data by calling
bt_mesh_model_data_store_schedule() . The stack will schedule a work item with delay defined by
the CONFIG_BT_MESH_STORE_TIMEOUT option. When the work item is running, the stack will call the
bt_mesh_model_cb.pending_store callback for every model that has requested storing of data. The
model can then call bt_mesh_model_data_store() to store the data.
If CONFIG_BT_MESH_SETTINGS_WORKQ is enabled, the bt_mesh_model_cb.pending_store callback is
called from a dedicated thread. This allows the stack to process other incoming and outgo-
ing messages while model data is being stored. It is recommended to use this option and the
bt_mesh_model_data_store_schedule() function when a large amount of data needs to be stored.
API reference
group bt_mesh_access
Access layer.
Defines
BT_MESH_ADDR_UNASSIGNED
BT_MESH_ADDR_ALL_NODES
BT_MESH_ADDR_RELAYS
BT_MESH_ADDR_FRIENDS
BT_MESH_ADDR_PROXIES
BT_MESH_ADDR_DFW_NODES
BT_MESH_ADDR_IP_NODES
BT_MESH_ADDR_IP_BR_ROUTERS
BT_MESH_KEY_UNUSED
BT_MESH_KEY_ANY
BT_MESH_KEY_DEV
BT_MESH_KEY_DEV_LOCAL
BT_MESH_KEY_DEV_REMOTE
BT_MESH_KEY_DEV_ANY
BT_MESH_ADDR_IS_UNICAST(addr)
BT_MESH_ADDR_IS_GROUP(addr)
BT_MESH_ADDR_IS_FIXED_GROUP(addr)
BT_MESH_ADDR_IS_VIRTUAL(addr)
BT_MESH_ADDR_IS_RFU(addr)
BT_MESH_IS_DEV_KEY(key)
BT_MESH_APP_SEG_SDU_MAX
Maximum size of an access message segment (in octets).
BT_MESH_APP_UNSEG_SDU_MAX
Maximum payload size of an unsegmented access message (in octets).
BT_MESH_RX_SEG_MAX
Maximum number of segments supported for incoming messages.
BT_MESH_TX_SEG_MAX
Maximum number of segments supported for outgoing messages.
BT_MESH_TX_SDU_MAX
Maximum possible payload size of an outgoing access message (in octets).
BT_MESH_RX_SDU_MAX
Maximum possible payload size of an incoming access message (in octets).
BT_MESH_ELEM(_loc, _mods, _vnd_mods)
Helper to define a mesh element within an array.
In case the element has no SIG or Vendor models the helper macro BT_MESH_MODEL_NONE
can be given instead.
Parameters
• _loc – Location Descriptor.
• _mods – Array of models.
• _vnd_mods – Array of vendor models.
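As a sketch, a single-element node with no vendor models could declare its element array like this (root_models is assumed to be a model array defined elsewhere in the application):

```c
/* One element at location descriptor 0, holding the SIG models in
 * root_models and no vendor models. */
static struct bt_mesh_elem elements[] = {
    BT_MESH_ELEM(0, root_models, BT_MESH_MODEL_NONE),
};
```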
BT_MESH_MODEL_ID_CFG_SRV
BT_MESH_MODEL_ID_CFG_CLI
BT_MESH_MODEL_ID_HEALTH_SRV
BT_MESH_MODEL_ID_HEALTH_CLI
BT_MESH_MODEL_ID_REMOTE_PROV_SRV
BT_MESH_MODEL_ID_REMOTE_PROV_CLI
BT_MESH_MODEL_ID_PRIV_BEACON_SRV
BT_MESH_MODEL_ID_PRIV_BEACON_CLI
BT_MESH_MODEL_ID_SAR_CFG_SRV
BT_MESH_MODEL_ID_SAR_CFG_CLI
BT_MESH_MODEL_ID_OP_AGG_SRV
BT_MESH_MODEL_ID_OP_AGG_CLI
BT_MESH_MODEL_ID_LARGE_COMP_DATA_SRV
BT_MESH_MODEL_ID_LARGE_COMP_DATA_CLI
BT_MESH_MODEL_ID_SOL_PDU_RPL_SRV
BT_MESH_MODEL_ID_SOL_PDU_RPL_CLI
BT_MESH_MODEL_ID_ON_DEMAND_PROXY_SRV
BT_MESH_MODEL_ID_ON_DEMAND_PROXY_CLI
BT_MESH_MODEL_ID_GEN_ONOFF_SRV
BT_MESH_MODEL_ID_GEN_ONOFF_CLI
BT_MESH_MODEL_ID_GEN_LEVEL_SRV
BT_MESH_MODEL_ID_GEN_LEVEL_CLI
BT_MESH_MODEL_ID_GEN_DEF_TRANS_TIME_SRV
BT_MESH_MODEL_ID_GEN_DEF_TRANS_TIME_CLI
BT_MESH_MODEL_ID_GEN_POWER_ONOFF_SRV
BT_MESH_MODEL_ID_GEN_POWER_ONOFF_SETUP_SRV
BT_MESH_MODEL_ID_GEN_POWER_ONOFF_CLI
BT_MESH_MODEL_ID_GEN_POWER_LEVEL_SRV
BT_MESH_MODEL_ID_GEN_POWER_LEVEL_SETUP_SRV
BT_MESH_MODEL_ID_GEN_POWER_LEVEL_CLI
BT_MESH_MODEL_ID_GEN_BATTERY_SRV
BT_MESH_MODEL_ID_GEN_BATTERY_CLI
BT_MESH_MODEL_ID_GEN_LOCATION_SRV
BT_MESH_MODEL_ID_GEN_LOCATION_SETUPSRV
BT_MESH_MODEL_ID_GEN_LOCATION_CLI
BT_MESH_MODEL_ID_GEN_ADMIN_PROP_SRV
BT_MESH_MODEL_ID_GEN_MANUFACTURER_PROP_SRV
BT_MESH_MODEL_ID_GEN_USER_PROP_SRV
BT_MESH_MODEL_ID_GEN_CLIENT_PROP_SRV
BT_MESH_MODEL_ID_GEN_PROP_CLI
BT_MESH_MODEL_ID_SENSOR_SRV
BT_MESH_MODEL_ID_SENSOR_SETUP_SRV
BT_MESH_MODEL_ID_SENSOR_CLI
BT_MESH_MODEL_ID_TIME_SRV
BT_MESH_MODEL_ID_TIME_SETUP_SRV
BT_MESH_MODEL_ID_TIME_CLI
BT_MESH_MODEL_ID_SCENE_SRV
BT_MESH_MODEL_ID_SCENE_SETUP_SRV
BT_MESH_MODEL_ID_SCENE_CLI
BT_MESH_MODEL_ID_SCHEDULER_SRV
BT_MESH_MODEL_ID_SCHEDULER_SETUP_SRV
BT_MESH_MODEL_ID_SCHEDULER_CLI
BT_MESH_MODEL_ID_LIGHT_LIGHTNESS_SRV
BT_MESH_MODEL_ID_LIGHT_LIGHTNESS_SETUP_SRV
BT_MESH_MODEL_ID_LIGHT_LIGHTNESS_CLI
BT_MESH_MODEL_ID_LIGHT_CTL_SRV
BT_MESH_MODEL_ID_LIGHT_CTL_SETUP_SRV
BT_MESH_MODEL_ID_LIGHT_CTL_CLI
BT_MESH_MODEL_ID_LIGHT_CTL_TEMP_SRV
BT_MESH_MODEL_ID_LIGHT_HSL_SRV
BT_MESH_MODEL_ID_LIGHT_HSL_SETUP_SRV
BT_MESH_MODEL_ID_LIGHT_HSL_CLI
BT_MESH_MODEL_ID_LIGHT_HSL_HUE_SRV
BT_MESH_MODEL_ID_LIGHT_HSL_SAT_SRV
BT_MESH_MODEL_ID_LIGHT_XYL_SRV
BT_MESH_MODEL_ID_LIGHT_XYL_SETUP_SRV
BT_MESH_MODEL_ID_LIGHT_XYL_CLI
BT_MESH_MODEL_ID_LIGHT_LC_SRV
BT_MESH_MODEL_ID_LIGHT_LC_SETUPSRV
BT_MESH_MODEL_ID_LIGHT_LC_CLI
BT_MESH_MODEL_ID_BLOB_SRV
BT_MESH_MODEL_ID_BLOB_CLI
BT_MESH_MODEL_ID_DFU_SRV
BT_MESH_MODEL_ID_DFU_CLI
BT_MESH_MODEL_ID_DFD_SRV
BT_MESH_MODEL_ID_DFD_CLI
BT_MESH_MODEL_OP_1(b0)
BT_MESH_MODEL_OP_2(b0, b1)
BT_MESH_MODEL_OP_3(b0, cid)
BT_MESH_LEN_EXACT(len)
Macro for encoding exact message length for fixed-length messages.
BT_MESH_LEN_MIN(len)
Macro for encoding minimum message length for variable-length messages.
BT_MESH_MODEL_OP_END
End of the opcode list. Must always be present.
BT_MESH_MODEL_NO_OPS
Helper to define an empty opcode list.
This macro uses the compound literal feature of the C99 standard and thus is available only
from C, not C++.
BT_MESH_MODEL_NONE
Helper to define an empty model array.
This macro uses the compound literal feature of the C99 standard and thus is available only
from C, not C++.
BT_MESH_MODEL_CNT_CB(_id, _op, _pub, _user_data, _keys, _grps, _cb)
Composition data SIG model entry with callback functions with specific number of keys &
groups.
This macro uses the compound literal feature of the C99 standard and thus is available only
from C, not C++.
Parameters
• _id – Model ID.
• _op – Array of model opcode handlers.
• _pub – Model publish parameters.
• _user_data – User data for the model.
• _keys – Number of keys that can be bound to the model. Shall not exceed
CONFIG_BT_MESH_MODEL_KEY_COUNT .
• _grps – Number of addresses that the model can be subscribed to. Shall not
exceed CONFIG_BT_MESH_MODEL_GROUP_COUNT .
• _cb – Callback structure, or NULL to keep no callbacks.
BT_MESH_PUB_MSG_NUM(pub)
Get message number within one publication interval.
Meant to be used inside bt_mesh_model_pub::update.
Parameters
• pub – Model publication context.
Returns
message number starting from 1.
BT_MESH_MODEL_PUB_DEFINE(_name, _update, _msg_len)
Define a model publication context.
Parameters
• _name – Variable name given to the context.
• _update – Optional message update callback (may be NULL).
• _msg_len – Length of the publication message.
BT_MESH_MODELS_METADATA_ENTRY(_len, _id, _data)
Initialize a Models Metadata entry structure in a list.
Parameters
• _len – Length of the metadata entry.
• _id – ID of the Models Metadata entry.
• _data – Pointer to a contiguous memory that contains the metadata.
BT_MESH_MODELS_METADATA_NONE
Helper to define an empty Models metadata array
BT_MESH_MODELS_METADATA_END
End of the Models Metadata list. Must always be present.
BT_MESH_TTL_DEFAULT
Special TTL value to request using configured default TTL
BT_MESH_TTL_MAX
Maximum allowed TTL value
Functions
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_model_publish(struct bt_mesh_model *model)
Send a model publication message.
Before calling this function, the user needs to ensure that the model publication message
(bt_mesh_model_pub::msg) contains a valid message to be sent. Note that this API is only to be
used for non-periodic publishing. For periodic publishing the app only needs to make sure that
bt_mesh_model_pub::msg contains a valid message whenever the bt_mesh_model_pub::update
callback is called.
Parameters
• model – Mesh (client) Model that’s publishing the message.
Returns
0 on success, or (negative) error code on failure.
static inline bool bt_mesh_model_pub_is_retransmission(const struct bt_mesh_model *model)
Check if a message is being retransmitted.
Meant to be used inside the bt_mesh_model_pub::update callback.
Parameters
• model – Mesh Model that supports publication.
Returns
true if this is a retransmission, false if this is a first publication.
struct bt_mesh_elem *bt_mesh_model_elem(struct bt_mesh_model *mod)
Get the element that a model belongs to.
Parameters
• mod – Mesh model.
Returns
Pointer to the element that the given model belongs to.
struct bt_mesh_model *bt_mesh_model_find(const struct bt_mesh_elem *elem, uint16_t id)
Find a SIG model.
Parameters
• elem – Element to search for the model in.
• id – Model ID of the model.
Returns
A pointer to the Mesh model matching the given parameters, or NULL if no SIG
model with the given ID exists in the given element.
struct bt_mesh_model *bt_mesh_model_find_vnd(const struct bt_mesh_elem *elem, uint16_t
company, uint16_t id)
Find a vendor model.
Parameters
• elem – Element to search for the model in.
• company – Company ID of the model.
• id – Model ID of the model.
Returns
A pointer to the Mesh model matching the given parameters, or NULL if no vendor
model with the given ID exists in the given element.
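As a sketch of the lookup helpers above (the vendor company/model IDs are hypothetical, and the snippet assumes a Zephyr build environment):

```c
#include <zephyr/bluetooth/mesh.h>

/* Given an element, look up the SIG Health Server model and a
 * hypothetical vendor model (company 0x1234, model 0x0001). */
static void find_models(const struct bt_mesh_elem *elem)
{
    struct bt_mesh_model *health =
        bt_mesh_model_find(elem, BT_MESH_MODEL_ID_HEALTH_SRV);
    struct bt_mesh_model *vnd =
        bt_mesh_model_find_vnd(elem, 0x1234, 0x0001);

    if (health) {
        /* The owning element can be recovered from any model. */
        const struct bt_mesh_elem *owner = bt_mesh_model_elem(health);
        __ASSERT_NO_MSG(owner == elem);
    }
    (void)vnd;
}
```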
struct bt_mesh_elem
#include <access.h> Abstraction that describes a Mesh Element
Public Members
uint16_t addr
Unicast Address. Set at runtime during provisioning.
struct bt_mesh_model_op
#include <access.h> Model opcode handler.
Public Members
struct bt_mesh_model_pub
#include <access.h> Model publication context.
The context should primarily be created using the BT_MESH_MODEL_PUB_DEFINE macro.
Public Members
uint16_t addr
Publish Address.
uint16_t key
Publish AppKey Index.
uint16_t cred
Friendship Credentials Flag.
uint16_t send_rel
Force reliable sending (segment acks)
uint16_t fast_period
Use FastPeriodDivisor
uint16_t retr_update
Call update callback on every retransmission.
uint8_t ttl
Publish Time to Live.
uint8_t retransmit
Retransmit Count & Interval Steps.
uint8_t period
Publish Period.
uint8_t period_div
Divisor for the Period.
uint8_t count
Transmissions left.
uint32_t period_start
Start of the current period.
int (*update)(struct bt_mesh_model *mod)
Callback used to update the publish message before it is sent.
If the callback returns non-zero, the publication is skipped and will resume on the next
periodic publishing interval.
When bt_mesh_model_pub::retr_update is set to 1, the callback will be called on every
retransmission.
Param mod
The Model the Publication Context belongs to.
Return
Zero on success or (negative) error code otherwise.
struct bt_mesh_models_metadata_entry
#include <access.h> Models Metadata Entry struct
The struct should primarily be created using the BT_MESH_MODELS_METADATA_ENTRY
macro.
struct bt_mesh_model_cb
#include <access.h> Model callback functions.
Public Members
int (*const settings_set)(struct bt_mesh_model *model, const char *name, size_t len_rd,
settings_read_cb read_cb, void *cb_arg)
Set value handler of user data tied to the model.
See also:
settings_handler::h_set
Param model
Model to set the persistent data of.
Param name
Name/key of the settings item.
Param len_rd
The size of the data found in the backend.
Param read_cb
Function provided to read the data from the backend.
Param cb_arg
Arguments for the read function provided by the backend.
Return
0 on success, error otherwise.
Note: If the model stores any persistent data, this needs to be erased manually.
Param model
Model this callback belongs to.
struct bt_mesh_mod_id_vnd
#include <access.h> Vendor model ID
Public Members
uint16_t company
Vendor’s company ID
uint16_t id
Model ID
struct bt_mesh_model
#include <access.h> Abstraction that describes a Mesh Model instance
Public Members
const uint16_t id
SIG model ID
void *user_data
Model-specific user data
struct bt_mesh_send_cb
#include <access.h> Callback structure for monitoring model message sending
Public Members
struct bt_mesh_comp
#include <access.h> Node Composition
Public Members
uint16_t cid
Company ID
uint16_t pid
Product ID
uint16_t vid
Version ID
size_t elem_count
The number of elements in this device.
Mesh models
Foundation models The Bluetooth mesh specification defines foundation models that can be used by
network administrators to configure and diagnose mesh nodes.
Configuration Server The Configuration Server model is a foundation model defined by the Bluetooth
mesh specification. The Configuration Server model controls most parameters of the mesh node. It does
not have an API of its own, but relies on a Configuration Client to control it.
Note: The bt_mesh_cfg_srv structure has been deprecated. The initial values of the Relay, Beacon,
Friend, Network transmit and Relay retransmit states should be set through Kconfig, and the
Heartbeat feature should be controlled through the Heartbeat API.
The Configuration Server model is mandatory on all Bluetooth mesh nodes, and should be instantiated
in the first element.
API reference
group bt_mesh_cfg_srv
Configuration Server Model.
Defines
BT_MESH_MODEL_CFG_SRV
Generic Configuration Server model composition data entry.
Configuration Client The Configuration Client model is a foundation model defined by the Bluetooth
mesh specification. It provides functionality for configuring most parameters of a mesh node, including
encryption keys, model configuration and feature enabling.
The Configuration Client model communicates with a Configuration Server model using the device key
of the target node. The Configuration Client model may communicate with servers on other nodes or
self-configure through the local Configuration Server model.
All configuration functions in the Configuration Client API have net_idx and addr as their first
parameters. These should be set to the network index and primary unicast address that the target
node was provisioned with.
The Configuration Client model is optional, but should be instantiated on the first element if it is present
in the composition data.
API reference
group bt_mesh_cfg_cli
Configuration Client Model.
Defines
BT_MESH_MODEL_CFG_CLI(cli_data)
Generic Configuration Client model composition data entry.
Parameters
• cli_data – Pointer to a Configuration Client Model instance.
BT_MESH_PUB_PERIOD_100MS(steps)
Helper macro to encode model publication period in units of 100ms.
Parameters
• steps – Number of 100ms steps.
Returns
Encoded value that can be assigned to bt_mesh_cfg_cli_mod_pub.period
BT_MESH_PUB_PERIOD_SEC(steps)
Helper macro to encode model publication period in units of 1 second.
Parameters
• steps – Number of 1 second steps.
Returns
Encoded value that can be assigned to bt_mesh_cfg_cli_mod_pub.period
BT_MESH_PUB_PERIOD_10SEC(steps)
Helper macro to encode model publication period in units of 10 seconds.
Parameters
• steps – Number of 10 second steps.
Returns
Encoded value that can be assigned to bt_mesh_cfg_cli_mod_pub.period
BT_MESH_PUB_PERIOD_10MIN(steps)
Helper macro to encode model publication period in units of 10 minutes.
Parameters
• steps – Number of 10 minute steps.
Returns
Encoded value that can be assigned to bt_mesh_cfg_cli_mod_pub.period
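The encoding these helpers are assumed to produce follows the mesh publish period layout: a 6-bit step count in the low bits and a 2-bit resolution in the top bits (0 = 100 ms, 1 = 1 s, 2 = 10 s, 3 = 10 min). The following self-contained illustration mirrors that layout; it is not the Zephyr source.

```c
#include <stdint.h>

/* Illustrative re-implementation of the publish period encoding:
 * low 6 bits hold the step count, top 2 bits the resolution. */
enum pub_period_res {
    PUB_RES_100MS = 0,
    PUB_RES_1SEC  = 1,
    PUB_RES_10SEC = 2,
    PUB_RES_10MIN = 3,
};

static uint8_t pub_period_encode(enum pub_period_res res, uint8_t steps)
{
    return (uint8_t)(((uint8_t)res << 6) | (steps & 0x3f));
}

/* Decoded period in milliseconds, for sanity checking the encoding. */
static uint32_t pub_period_ms(uint8_t encoded)
{
    static const uint32_t unit_ms[] = { 100, 1000, 10000, 600000 };

    return unit_ms[encoded >> 6] * (uint32_t)(encoded & 0x3f);
}
```

For example, `pub_period_encode(PUB_RES_10SEC, 3)` yields a value that decodes back to 30 seconds, matching BT_MESH_PUB_PERIOD_10SEC(3).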
Functions
int bt_mesh_cfg_cli_gatt_proxy_get(uint16_t net_idx, uint16_t addr, uint8_t *status)
Get the target node’s Proxy feature state.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• status – Status response parameter. Returns one of
BT_MESH_GATT_PROXY_DISABLED, BT_MESH_GATT_PROXY_ENABLED
or BT_MESH_GATT_PROXY_NOT_SUPPORTED on success.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_gatt_proxy_set(uint16_t net_idx, uint16_t addr, uint8_t val, uint8_t
*status)
Set the target node’s Proxy feature state.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• val – New Proxy feature state. Must be one of
BT_MESH_GATT_PROXY_DISABLED or BT_MESH_GATT_PROXY_ENABLED.
• status – Status response parameter. Returns one of
BT_MESH_GATT_PROXY_DISABLED, BT_MESH_GATT_PROXY_ENABLED
or BT_MESH_GATT_PROXY_NOT_SUPPORTED on success.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_net_transmit_get(uint16_t net_idx, uint16_t addr, uint8_t *transmit)
Get the target node’s network_transmit state.
This method can be used asynchronously by setting transmit as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• transmit – Network transmit response parameter. Returns the encoded
network transmission parameters on success. Decoded with
BT_MESH_TRANSMIT_COUNT and BT_MESH_TRANSMIT_INT.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_net_transmit_set(uint16_t net_idx, uint16_t addr, uint8_t val, uint8_t
*transmit)
Set the target node’s network transmit parameters.
This method can be used asynchronously by setting transmit as NULL. This way the method
will not wait for response and will return immediately after sending the command.
See also:
BT_MESH_TRANSMIT.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• val – New encoded network transmit parameters.
• transmit – Network transmit response parameter. Returns the encoded
network transmission parameters on success. Decoded with
BT_MESH_TRANSMIT_COUNT and BT_MESH_TRANSMIT_INT.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_relay_set(uint16_t net_idx, uint16_t addr, uint8_t new_relay, uint8_t
new_transmit, uint8_t *status, uint8_t *transmit)
Set the target node’s Relay feature state.
This method can be used asynchronously by setting status and transmit as NULL. This way
the method will not wait for response and will return immediately after sending the command.
See also:
BT_MESH_TRANSMIT.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• new_relay – New relay state. Must be one of BT_MESH_RELAY_DISABLED or
BT_MESH_RELAY_ENABLED.
• new_transmit – New encoded relay transmit parameters.
• status – Status response parameter. Returns one of
BT_MESH_RELAY_DISABLED, BT_MESH_RELAY_ENABLED or
BT_MESH_RELAY_NOT_SUPPORTED on success.
• transmit – Transmit response parameter. Returns the encoded relay transmission
parameters on success. Decoded with BT_MESH_TRANSMIT_COUNT and
BT_MESH_TRANSMIT_INT.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_app_bind(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_app_idx, uint16_t mod_id, uint8_t *status)
Bind an application to a SIG model on the target node.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_app_idx – Application index to bind.
• mod_id – Model ID.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
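The synchronous and asynchronous calling styles described above can be sketched as follows. This is an illustrative fragment, not upstream code: the network index, addresses and AppKey index are placeholders for a node provisioned elsewhere, and it only compiles in a Zephyr application.

```c
#include <errno.h>
#include <zephyr/bluetooth/mesh.h>

static int bind_onoff_server(void)
{
    uint8_t status;
    int err;

    /* Synchronous: blocks until the Config Status response arrives. */
    err = bt_mesh_cfg_cli_mod_app_bind(0 /* net_idx */, 0x0001 /* addr */,
                                       0x0001 /* elem_addr */,
                                       0 /* mod_app_idx */,
                                       BT_MESH_MODEL_ID_GEN_ONOFF_SRV,
                                       &status);
    if (err || status) {
        return err ? err : -EIO;
    }

    /* Asynchronous: passing NULL for status returns right after the
     * command is sent, without waiting for the response. */
    return bt_mesh_cfg_cli_mod_app_bind(0, 0x0001, 0x0001, 0,
                                        BT_MESH_MODEL_ID_GEN_ONOFF_SRV,
                                        NULL);
}
```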
int bt_mesh_cfg_cli_mod_app_unbind(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_app_idx, uint16_t mod_id, uint8_t
*status)
Unbind an application from a SIG model on the target node.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_app_idx – Application index to unbind.
• mod_id – Model ID.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_app_bind_vnd(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_app_idx, uint16_t mod_id, uint16_t cid,
uint8_t *status)
Bind an application to a vendor model on the target node.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_app_idx – Application index to bind.
• mod_id – Model ID.
• cid – Company ID of the model.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_app_get_vnd(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_id, uint16_t cid, uint8_t *status, uint16_t
*apps, size_t *app_cnt)
Get a list of all applications bound to a vendor model on the target node.
This method can be used asynchronously by setting status and ( apps or app_cnt ) as NULL.
This way the method will not wait for response and will return immediately after sending the
command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_id – Model ID.
• cid – Company ID of the model.
• status – Status response parameter.
• apps – App index list response parameter. Will be filled with all the returned
application key indexes it can fill.
• app_cnt – App index list length. Should be set to the capacity of the apps list
when calling. Will return the number of returned application key indexes upon
success.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_pub_get(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_id, struct bt_mesh_cfg_cli_mod_pub *pub,
uint8_t *status)
Get publish parameters for a SIG model on the target node.
This method can be used asynchronously by setting status and pub as NULL. This way the
method will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_id – Model ID.
• pub – Publication parameter return buffer.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_pub_get_vnd(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_id, uint16_t cid, struct
bt_mesh_cfg_cli_mod_pub *pub, uint8_t *status)
Get publish parameters for a vendor model on the target node.
This method can be used asynchronously by setting status and pub as NULL. This way the
method will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_id – Model ID.
• cid – Company ID of the model.
• pub – Publication parameter return buffer.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_sub_del_vnd(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t sub_addr, uint16_t mod_id, uint16_t cid,
uint8_t *status)
Delete a group address in a vendor model’s subscription list.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• sub_addr – Group address to remove from the subscription list.
• mod_id – Model ID.
• cid – Company ID of the model.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_sub_overwrite(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t sub_addr, uint16_t mod_id, uint8_t *status)
Overwrite all addresses in a SIG model’s subscription list with a group address.
Deletes all subscriptions in the model’s subscription list, and adds a single group address
instead.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• sub_addr – Group address to add to the subscription list.
• mod_id – Model ID.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_sub_overwrite_vnd(uint16_t net_idx, uint16_t addr, uint16_t
elem_addr, uint16_t sub_addr, uint16_t mod_id,
uint16_t cid, uint8_t *status)
Overwrite all addresses in a vendor model’s subscription list with a group address.
Deletes all subscriptions in the model’s subscription list, and adds a single group address
instead.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• sub_addr – Group address to add to the subscription list.
• mod_id – Model ID.
• cid – Company ID of the model.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_mod_sub_get_vnd(uint16_t net_idx, uint16_t addr, uint16_t elem_addr,
uint16_t mod_id, uint16_t cid, uint8_t *status, uint16_t
*subs, size_t *sub_cnt)
Get the subscription list of a vendor model on the target node.
This method can be used asynchronously by setting status and ( subs or sub_cnt ) as NULL.
This way the method will not wait for response and will return immediately after sending the
command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• elem_addr – Element address the model is in.
• mod_id – Model ID.
• cid – Company ID of the model.
• status – Status response parameter.
• subs – Subscription list response parameter. Will be filled with all the returned
subscriptions it can fill.
• sub_cnt – Subscription list element count. Should be set to the capacity of the
subs list when calling. Will return the number of returned subscriptions upon
success.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_hb_sub_set(uint16_t net_idx, uint16_t addr, struct bt_mesh_cfg_cli_hb_sub
*sub, uint8_t *status)
Set the target node’s Heartbeat subscription parameters.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
sub shall not be NULL.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• sub – New Heartbeat subscription parameters.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
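A hedged usage sketch of the Heartbeat subscription setter above (addresses and the network index are illustrative; the period field uses the logarithmic encoding described for bt_mesh_cfg_cli_hb_sub below):

```c
#include <zephyr/bluetooth/mesh.h>

/* Subscribe to Heartbeats from node 0x0001 for (1 << (7 - 1)) = 64 s. */
static int listen_for_heartbeats(void)
{
    struct bt_mesh_cfg_cli_hb_sub sub = {
        .src = 0x0001,   /* publishing node */
        .dst = 0x0002,   /* address to receive Heartbeats on */
        .period = 0x07,  /* logarithmic: decodes to 64 seconds */
    };
    uint8_t status;

    return bt_mesh_cfg_cli_hb_sub_set(0 /* net_idx */, 0x0002 /* addr */,
                                      &sub, &status);
}
```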
int bt_mesh_cfg_cli_hb_sub_get(uint16_t net_idx, uint16_t addr, struct bt_mesh_cfg_cli_hb_sub
*sub, uint8_t *status)
Get the target node’s Heartbeat subscription parameters.
This method can be used asynchronously by setting status and sub as NULL. This way the
method will not wait for response and will return immediately after sending the command.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• sub – Heartbeat subscription parameter return buffer.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_cfg_cli_hb_pub_set(uint16_t net_idx, uint16_t addr, const struct
bt_mesh_cfg_cli_hb_pub *pub, uint8_t *status)
Set the target node’s Heartbeat publication parameters.
This method can be used asynchronously by setting status as NULL. This way the method
will not wait for response and will return immediately after sending the command.
pub shall not be NULL.
Note: The target node must already have received the specified network key.
Parameters
• net_idx – Network index to encrypt with.
• addr – Target node address.
• pub – New Heartbeat publication parameters.
• status – Status response parameter.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_comp_p0_get(struct bt_mesh_comp_p0 *comp, struct net_buf_simple *buf)
Pull a parsed Composition Data page 0 representation out of a composition data buffer, for
example one defined as:
NET_BUF_SIMPLE_DEFINE(buf, BT_MESH_RX_SDU_MAX);
struct bt_mesh_comp_p0 comp;
Parameters
• buf – Network buffer containing composition data.
• comp – Composition data structure to fill.
Returns
0 on success, or (negative) error code on failure.
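The snippet above can be extended into a full round trip. This is a hedged sketch: the exact signature of bt_mesh_cfg_cli_comp_data_get should be checked against cfg_cli.h (assumed here to be (net_idx, addr, page, &page_rsp, buf)), and it only compiles in a Zephyr application.

```c
#include <zephyr/bluetooth/mesh.h>
#include <zephyr/sys/printk.h>

/* Read Composition Data page 0 from a node and print the header. */
static int dump_composition(uint16_t net_idx, uint16_t addr)
{
    NET_BUF_SIMPLE_DEFINE(buf, BT_MESH_RX_SDU_MAX);
    struct bt_mesh_comp_p0 comp;
    uint8_t page;
    int err;

    err = bt_mesh_cfg_cli_comp_data_get(net_idx, addr, 0, &page, &buf);
    if (err) {
        return err;
    }

    err = bt_mesh_comp_p0_get(&comp, &buf);
    if (err) {
        return err;
    }

    printk("CID 0x%04x PID 0x%04x VID 0x%04x, CRPL %u\n",
           comp.cid, comp.pid, comp.vid, comp.crpl);
    return 0;
}
```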
struct bt_mesh_cfg_cli_cb
#include <cfg_cli.h> Mesh Configuration Client Status messages callback
Public Members
Param status
Status Code for requesting message.
Param net_idx
The index of the NetKey.
Param identity
The node identity state.
struct bt_mesh_cfg_cli
#include <cfg_cli.h> Mesh Configuration Client Model Context
Public Members
struct bt_mesh_cfg_cli_mod_pub
#include <cfg_cli.h> Model publication configuration parameters.
Public Members
uint16_t addr
Publication destination address.
uint16_t app_idx
Application index to publish with.
bool cred_flag
Friendship credential flag.
uint8_t ttl
Time To Live to publish with.
uint8_t period
Encoded publish period.
See also:
BT_MESH_PUB_PERIOD_100MS, BT_MESH_PUB_PERIOD_SEC,
BT_MESH_PUB_PERIOD_10SEC, BT_MESH_PUB_PERIOD_10MIN
uint8_t transmit
Encoded transmit parameters.
See also:
BT_MESH_TRANSMIT
struct bt_mesh_cfg_cli_hb_sub
#include <cfg_cli.h> Heartbeat subscription configuration parameters.
Public Members
uint16_t src
Source address to receive Heartbeat messages from.
uint16_t dst
Destination address to receive Heartbeat messages on.
uint8_t period
Logarithmic subscription period to keep listening for. The decoded subscription period is
(1 << (period - 1)) seconds, or 0 seconds if period is 0.
uint8_t count
Logarithmic Heartbeat subscription receive count. The decoded Heartbeat count is (1 <<
(count - 1)) if count is between 1 and 0xfe, 0 if count is 0 and 0xffff if count is 0xff.
Ignored in Heartbeat subscription set.
uint8_t min
Minimum hops in received messages, i.e. the shortest registered path from the publishing
node to the subscribing node. A Heartbeat received from an immediate neighbor has hop
count = 1.
Ignored in Heartbeat subscription set.
uint8_t max
Maximum hops in received messages, i.e. the longest registered path from the publishing
node to the subscribing node. A Heartbeat received from an immediate neighbor has hop
count = 1.
Ignored in Heartbeat subscription set.
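The logarithmic decodings described for the period and count fields above can be written out directly. This self-contained illustration follows the rules stated in the documentation; the saturation of out-of-range count logs is a defensive choice of this sketch, not part of the specification.

```c
#include <stdint.h>

/* period decodes to (1 << (period - 1)) seconds; 0 stays 0. */
static uint32_t hb_sub_period_sec(uint8_t period)
{
    return period ? (1u << (period - 1)) : 0;
}

/* count decodes to (1 << (count - 1)); 0 stays 0, and 0xff is the
 * "0xffff" sentinel. Logs that would overflow 32 bits are saturated
 * here purely as a defensive measure. */
static uint32_t hb_sub_count(uint8_t count)
{
    if (count == 0) {
        return 0;
    }
    if (count == 0xff) {
        return 0xffff;
    }
    return (count <= 32) ? (1u << (count - 1)) : UINT32_MAX;
}
```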
struct bt_mesh_cfg_cli_hb_pub
#include <cfg_cli.h> Heartbeat publication configuration parameters.
Public Members
uint16_t dst
Heartbeat destination address.
uint8_t count
Logarithmic Heartbeat count. Decoded as (1 << (count - 1)) if count is between 1 and
0x11, 0 if count is 0, or “indefinitely” if count is 0xff.
When used in Heartbeat publication set, this parameter denotes the number of Heartbeat
messages to send.
When returned from Heartbeat publication get, this parameter denotes the number of
Heartbeat messages remaining to be sent.
uint8_t period
Logarithmic Heartbeat publication transmit interval in seconds. Decoded as (1 << (period
- 1)) if period is between 1 and 0x11. If period is 0, Heartbeat publication is disabled.
uint8_t ttl
Publication message Time To Live value.
uint16_t feat
Bitmap of features that trigger Heartbeat publications. Legal values are
BT_MESH_FEAT_RELAY, BT_MESH_FEAT_PROXY, BT_MESH_FEAT_FRIEND and
BT_MESH_FEAT_LOW_POWER.
uint16_t net_idx
Network index to publish with.
struct bt_mesh_comp_p0
#include <cfg_cli.h> Parsed Composition data page 0 representation.
Should be pulled from the return buffer passed to bt_mesh_cfg_cli_comp_data_get using
bt_mesh_comp_p0_get.
Public Members
uint16_t cid
Company ID
uint16_t pid
Product ID
uint16_t vid
Version ID
uint16_t crpl
Replay protection list size
uint16_t feat
Supported features, see BT_MESH_FEAT_SUPPORTED.
struct bt_mesh_comp_p0_elem
#include <cfg_cli.h> Composition data page 0 element representation
Public Members
uint16_t loc
Element location
size_t nsig
The number of SIG models in this element
size_t nvnd
The number of vendor models in this element
struct bt_mesh_comp_p1_elem
#include <cfg_cli.h>
Public Members
size_t nsig
The number of SIG models in this element
size_t nvnd
The number of vendor models in this element
struct bt_mesh_comp_p1_model_item
#include <cfg_cli.h> Composition data page 1 model item representation
Public Members
bool cor_present
Corresponding_Group_ID field indicator
bool format
Determines the format of Extended Model Item
uint8_t ext_item_cnt
Number of items in Extended Model Items
uint8_t cor_id
Buffer containing Extended Model Items. If cor_present is set to 1 it starts with
Corresponding_Group_ID.
struct bt_mesh_comp_p1_item_short
#include <cfg_cli.h> Extended Model Item in short representation
Public Members
uint8_t elem_offset
Element address modifier
uint8_t mod_item_idx
Model Index
struct bt_mesh_comp_p1_item_long
#include <cfg_cli.h> Extended Model Item in long representation
Public Members
uint8_t elem_offset
Element address modifier
uint8_t mod_item_idx
Model Index
struct bt_mesh_comp_p1_ext_item
#include <cfg_cli.h> Extended Model Item
Public Members
Health Server The Health Server model provides attention callbacks and node diagnostics for Health
Client models. It is primarily used to report faults in the mesh node and map the mesh nodes to their
physical location.
Faults The Health Server model may report a list of faults that have occurred in the device’s lifetime.
Typically, the faults are events or conditions that may alter the behavior of the node, like power outages
or faulty peripherals. Faults are split into warnings and errors. Warnings indicate conditions that are
close to the limits of what the node is designed to withstand, but not necessarily damaging to the device.
Errors indicate conditions that are outside of the node’s design limits, and may have caused invalid
behavior or permanent damage to the device.
Fault values 0x01 to 0x7f are reserved for the Bluetooth mesh specification, and the full list of
specification-defined faults is available in Health faults. Fault values 0x80 to 0xff are vendor
specific. The list of faults is always reported with a company ID to help interpret the vendor
specific faults.
Attention state The attention state is used to make the device call attention to itself through some
physical behavior like blinking, playing a sound or vibrating. The attention state may be used during
provisioning to let the user know which device they’re provisioning, as well as through the Health models
at runtime.
The attention state is always assigned a timeout in the range of one to 255 seconds when
enabled. The Health Server API provides two callbacks for the application to run its attention
calling behavior: bt_mesh_health_srv_cb.attn_on is called at the beginning of the attention
period, and bt_mesh_health_srv_cb.attn_off is called at the end.
The remaining time for the attention period may be queried through bt_mesh_health_srv.attn_timer .
API reference
group bt_mesh_health_srv
Health Server Model.
Defines
BT_MESH_HEALTH_PUB_DEFINE(_name, _max_faults)
A helper to define a health publication context
Parameters
• _name – Name given to the publication context variable.
• _max_faults – Maximum number of faults the element can have.
BT_MESH_MODEL_HEALTH_SRV(srv, pub)
Define a new health server model. Note that this API needs to be repeated for each element
that the application wants to have a health server model on. Each instance also needs a unique
bt_mesh_health_srv and bt_mesh_model_pub context.
Parameters
• srv – Pointer to a unique struct bt_mesh_health_srv.
• pub – Pointer to a unique struct bt_mesh_model_pub.
Returns
New mesh model instance.
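Putting the macros above together, a Health Server instance with attention callbacks might look like the following composition sketch. The printk bodies stand in for real attention behavior (blinking an LED, for example), and the callback field names are assumed from bt_mesh_health_srv_cb; verify them against health_srv.h.

```c
#include <zephyr/bluetooth/mesh.h>
#include <zephyr/sys/printk.h>

static void attn_on(struct bt_mesh_model *mod)
{
    printk("Attention on: start blinking\n");
}

static void attn_off(struct bt_mesh_model *mod)
{
    printk("Attention off\n");
}

static const struct bt_mesh_health_srv_cb health_cb = {
    .attn_on = attn_on,
    .attn_off = attn_off,
};

static struct bt_mesh_health_srv health_srv = {
    .cb = &health_cb,
};

/* Publication context sized for zero faults. */
BT_MESH_HEALTH_PUB_DEFINE(health_pub, 0);

/* Composition data entry for the element. */
static struct bt_mesh_model models[] = {
    BT_MESH_MODEL_HEALTH_SRV(&health_srv, &health_pub),
};
```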
BT_MESH_HEALTH_TEST_INFO_METADATA_ID
Health Test Information Metadata ID.
BT_MESH_HEALTH_TEST_INFO_METADATA(tests)
BT_MESH_HEALTH_TEST_INFO(cid, tests...)
Define a Health Test Info Metadata array.
Parameters
• cid – Company ID of the Health Test suite.
• tests – A comma separated list of tests.
Returns
A comma separated list of values that make up the Health Test Info Metadata.
Functions
struct bt_mesh_health_srv_cb
#include <health_srv.h> Callback function for the Health Server model
Public Members
Param model
Health Server model instance to get faults of.
Param company_id
Company ID to get faults for.
Param test_id
Test ID response buffer.
Param faults
Array to fill with registered faults.
Param fault_count
The number of faults the fault array can fit. Should be updated to reflect the
number of faults copied into the array.
Return
0 on success, or (negative) error code otherwise.
struct bt_mesh_health_srv
#include <health_srv.h> Mesh Health Server Model Context
Public Members
group bt_mesh_health_faults
List of specification defined Health fault values.
Defines
BT_MESH_HEALTH_FAULT_NO_FAULT
No fault has occurred.
BT_MESH_HEALTH_FAULT_BATTERY_LOW_WARNING
BT_MESH_HEALTH_FAULT_BATTERY_LOW_ERROR
BT_MESH_HEALTH_FAULT_SUPPLY_VOLTAGE_TOO_LOW_WARNING
BT_MESH_HEALTH_FAULT_SUPPLY_VOLTAGE_TOO_LOW_ERROR
BT_MESH_HEALTH_FAULT_SUPPLY_VOLTAGE_TOO_HIGH_WARNING
BT_MESH_HEALTH_FAULT_SUPPLY_VOLTAGE_TOO_HIGH_ERROR
BT_MESH_HEALTH_FAULT_POWER_SUPPLY_INTERRUPTED_WARNING
BT_MESH_HEALTH_FAULT_POWER_SUPPLY_INTERRUPTED_ERROR
BT_MESH_HEALTH_FAULT_NO_LOAD_WARNING
BT_MESH_HEALTH_FAULT_NO_LOAD_ERROR
BT_MESH_HEALTH_FAULT_OVERLOAD_WARNING
BT_MESH_HEALTH_FAULT_OVERLOAD_ERROR
BT_MESH_HEALTH_FAULT_OVERHEAT_WARNING
BT_MESH_HEALTH_FAULT_OVERHEAT_ERROR
BT_MESH_HEALTH_FAULT_CONDENSATION_WARNING
BT_MESH_HEALTH_FAULT_CONDENSATION_ERROR
BT_MESH_HEALTH_FAULT_VIBRATION_WARNING
BT_MESH_HEALTH_FAULT_VIBRATION_ERROR
BT_MESH_HEALTH_FAULT_CONFIGURATION_WARNING
BT_MESH_HEALTH_FAULT_CONFIGURATION_ERROR
BT_MESH_HEALTH_FAULT_ELEMENT_NOT_CALIBRATED_WARNING
BT_MESH_HEALTH_FAULT_ELEMENT_NOT_CALIBRATED_ERROR
BT_MESH_HEALTH_FAULT_MEMORY_WARNING
BT_MESH_HEALTH_FAULT_MEMORY_ERROR
BT_MESH_HEALTH_FAULT_SELF_TEST_WARNING
BT_MESH_HEALTH_FAULT_SELF_TEST_ERROR
BT_MESH_HEALTH_FAULT_INPUT_TOO_LOW_WARNING
BT_MESH_HEALTH_FAULT_INPUT_TOO_LOW_ERROR
BT_MESH_HEALTH_FAULT_INPUT_TOO_HIGH_WARNING
BT_MESH_HEALTH_FAULT_INPUT_TOO_HIGH_ERROR
BT_MESH_HEALTH_FAULT_INPUT_NO_CHANGE_WARNING
BT_MESH_HEALTH_FAULT_INPUT_NO_CHANGE_ERROR
BT_MESH_HEALTH_FAULT_ACTUATOR_BLOCKED_WARNING
BT_MESH_HEALTH_FAULT_ACTUATOR_BLOCKED_ERROR
BT_MESH_HEALTH_FAULT_HOUSING_OPENED_WARNING
BT_MESH_HEALTH_FAULT_HOUSING_OPENED_ERROR
BT_MESH_HEALTH_FAULT_TAMPER_WARNING
BT_MESH_HEALTH_FAULT_TAMPER_ERROR
BT_MESH_HEALTH_FAULT_DEVICE_MOVED_WARNING
BT_MESH_HEALTH_FAULT_DEVICE_MOVED_ERROR
BT_MESH_HEALTH_FAULT_DEVICE_DROPPED_WARNING
BT_MESH_HEALTH_FAULT_DEVICE_DROPPED_ERROR
BT_MESH_HEALTH_FAULT_OVERFLOW_WARNING
BT_MESH_HEALTH_FAULT_OVERFLOW_ERROR
BT_MESH_HEALTH_FAULT_EMPTY_WARNING
BT_MESH_HEALTH_FAULT_EMPTY_ERROR
BT_MESH_HEALTH_FAULT_INTERNAL_BUS_WARNING
BT_MESH_HEALTH_FAULT_INTERNAL_BUS_ERROR
BT_MESH_HEALTH_FAULT_MECHANISM_JAMMED_WARNING
BT_MESH_HEALTH_FAULT_MECHANISM_JAMMED_ERROR
BT_MESH_HEALTH_FAULT_VENDOR_SPECIFIC_START
Start of the vendor specific fault values.
All values below this are reserved for the Bluetooth Specification.
Health Client The Health Client model interacts with a Health Server model to read out diagnostics
and control the node’s attention state.
All message passing functions in the Health Client API have cli as their first parameter. This is a pointer
to the client model instance to be used in this function call. The second parameter is the ctx or message
context. Message context contains netkey index, appkey index and unicast address that the target node
uses.
The Health Client model is optional, and may be instantiated in any element. However, if a Health Client
model is instantiated in an element other than the first, an instance must also be present in the first
element.
See Health faults for a list of specification defined fault values.
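A hedged sketch of reading registered faults with the Health Client, following the cli/ctx convention described above (the company ID and buffer size are illustrative; the fault-get function name is assumed from health_cli.h):

```c
#include <zephyr/bluetooth/mesh.h>

/* Read the registered faults for company ID 0x1234 from the client's
 * configured publish target. */
static int read_faults(struct bt_mesh_health_cli *cli)
{
    uint8_t faults[8];
    size_t fault_count = sizeof(faults);
    uint8_t test_id;
    int err;

    /* ctx == NULL uses the configured publish parameters. */
    err = bt_mesh_health_cli_fault_get(cli, NULL, 0x1234,
                                       &test_id, faults, &fault_count);
    if (!err) {
        /* fault_count now holds the number of returned faults. */
    }
    return err;
}
```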
API reference
group bt_mesh_health_cli
Health Client Model.
Defines
BT_MESH_MODEL_HEALTH_CLI(cli_data)
Generic Health Client model composition data entry.
Parameters
• cli_data – Pointer to a Health Client Model instance.
Functions
See also:
Health faults
Parameters
• cli – Client model to send on.
• ctx – Message context, or NULL to use the configured publish parameters.
• cid – Company ID to get the registered faults of.
• test_id – Test ID response buffer.
• faults – Fault array response buffer.
• fault_count – Fault count response buffer.
Returns
0 on success, or (negative) error code on failure.
See also:
Health faults
Parameters
• cli – Client model to send on.
• ctx – Message context, or NULL to use the configured publish parameters.
• cid – Company ID to clear the registered faults for.
• test_id – Test ID response buffer.
• faults – Fault array response buffer.
• fault_count – Fault count response buffer.
Returns
0 on success, or (negative) error code on failure.
See also:
Health faults
Parameters
• cli – Client model to send on.
• ctx – Message context, or NULL to use the configured publish parameters.
• cid – Company ID to clear the registered faults for.
Returns
0 on success, or (negative) error code on failure.
struct bt_mesh_health_cli
#include <health_cli.h> Health Client Model Context
Public Members
See also:
Health faults
Param cli
Health client that received the status message.
Param addr
Address of the sender.
Param test_id
Identifier of a most recently performed test.
Param cid
Company Identifier of the node.
Param faults
Array of faults.
Param fault_count
Number of faults in the fault array.
See also:
Health faults
Param cli
Health client that received the status message.
Param addr
Address of the sender.
Param test_id
Identifier of a most recently performed test.
Param cid
Company Identifier of the node.
Param faults
Array of faults.
Param fault_count
Number of faults in the fault array.
Large Composition Data Client The Large Composition Data Client model is a foundation model
defined by the Bluetooth mesh specification. The model is optional, and is enabled through the
CONFIG_BT_MESH_LARGE_COMP_DATA_CLI option.
The Large Composition Data Client model was introduced in the Bluetooth Mesh Protocol Specification
version 1.1, and supports the functionality of reading pages of Composition Data that do not fit in a
Config Composition Data Status message and reading the metadata of the model instances on a node
that supports the Large Composition Data Server model.
The Large Composition Data Client model communicates with a Large Composition Data Server model
using the device key of the node containing the target Large Composition Data Server model instance.
If present, the Large Composition Data Client model must only be instantiated on the primary element.
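Since the model must live on the primary element, its composition data entry sits in the first element's model list. A minimal sketch using only the documented entry macros (callback wiring is omitted; see large_comp_data_cli.h for bt_mesh_large_comp_data_cli_cb):

```c
#include <zephyr/bluetooth/mesh.h>

static struct bt_mesh_large_comp_data_cli lcd_cli;

/* Model list of the primary element. */
static struct bt_mesh_model primary_models[] = {
    BT_MESH_MODEL_CFG_SRV,
    BT_MESH_MODEL_LARGE_COMP_DATA_CLI(&lcd_cli),
};
```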
API reference
group bt_mesh_large_comp_data_cli
Defines
BT_MESH_MODEL_LARGE_COMP_DATA_CLI(cli_data)
Large Composition Data Client model Composition Data entry.
Parameters
• cli_data – Pointer to a Large Composition Data Client model instance.
Functions
struct bt_mesh_large_comp_data_rsp
#include <large_comp_data_cli.h> Large Composition Data response.
Public Members
uint8_t page
Page number.
uint16_t offset
Offset within the page.
uint16_t total_size
Total size of the page.
struct bt_mesh_large_comp_data_cli_cb
#include <large_comp_data_cli.h> Large Composition Data Status messages callbacks
Public Members
Param cli
Large Composition Data Client context.
Param addr
Address of the sender.
Param rsp
Response received from the server.
struct bt_mesh_large_comp_data_cli
#include <large_comp_data_cli.h> Large Composition Data Client model context
Public Members
Large Composition Data Server The Large Composition Data Server model is a foundation model
defined by the Bluetooth mesh specification. The model is optional, and is enabled through the
CONFIG_BT_MESH_LARGE_COMP_DATA_SRV option.
The Large Composition Data Server model was introduced in the Bluetooth Mesh Protocol Specification
version 1.1, and is used to support the functionality of exposing pages of Composition Data that do not
fit in a Config Composition Data Status message and to expose metadata of the model instances.
The Large Composition Data Server does not have an API of its own and relies on a Large Composition
Data Client to control it. The model only accepts messages encrypted with the node’s device key.
If present, the Large Composition Data Server model must only be instantiated on the primary element.
Models metadata The Large Composition Data Server model allows each model to carry a list of model-specific metadata that can be read by the Large Composition Data Client model. The metadata list can be associated with the bt_mesh_model through the bt_mesh_model.metadata field. The
metadata list consists of one or more entries defined by the bt_mesh_models_metadata_entry struc-
ture. Each entry contains the length and ID of the metadata, and a pointer to the raw data. Entries can
be created using the BT_MESH_MODELS_METADATA_ENTRY macro. The BT_MESH_MODELS_METADATA_END
macro marks the end of the metadata list and must always be present. If the model has no metadata, the
helper macro BT_MESH_MODELS_METADATA_NONE can be used instead.
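As a sketch of the concept, a metadata list might be declared as follows. The metadata ID 0x000A and the raw contents are purely illustrative, and the exact macro argument order should be verified against access.h:
/* Illustrative model-specific metadata; the ID and contents are examples. */
static const uint8_t example_meta_raw[] = { 0x01, 0x02 };

static const struct bt_mesh_models_metadata_entry example_metadata[] = {
    BT_MESH_MODELS_METADATA_ENTRY(sizeof(example_meta_raw), 0x000A, example_meta_raw),
    BT_MESH_MODELS_METADATA_END,
};
The resulting array can then be assigned to the model's bt_mesh_model.metadata field.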
API reference
group bt_mesh_large_comp_data_srv
Defines
BT_MESH_MODEL_LARGE_COMP_DATA_SRV
Large Composition Data Server model composition data entry.
On-Demand Private Proxy Client The On-Demand Private Proxy Client model is a foundation
model defined by the Bluetooth mesh specification. The model is optional, and is enabled with the
CONFIG_BT_MESH_OD_PRIV_PROXY_CLI option.
The On-Demand Private Proxy Client model was introduced in the Bluetooth Mesh Protocol Specification
version 1.1, and is used to set and retrieve the On-Demand Private GATT Proxy state. The state defines
how long a node will advertise Mesh Proxy Service with Private Network Identity type after it receives a
Solicitation PDU.
The On-Demand Private Proxy Client model communicates with an On-Demand Private Proxy Server
model using the device key of the node containing the target On-Demand Private Proxy Server model
instance.
Configurations The On-Demand Private Proxy Client model behavior can be configured
with the transmission timeout option CONFIG_BT_MESH_OD_PRIV_PROXY_CLI_TIMEOUT. The
CONFIG_BT_MESH_OD_PRIV_PROXY_CLI_TIMEOUT option controls how long the Client waits for a
state response message to arrive, in milliseconds. This value can be changed at runtime
using bt_mesh_od_priv_proxy_cli_timeout_set().
API reference
group bt_mesh_od_priv_proxy_cli
Defines
BT_MESH_MODEL_OD_PRIV_PROXY_CLI(cli_data)
On-Demand Private Proxy Client model composition data entry.
Functions
struct bt_mesh_od_priv_proxy_cli
#include <od_priv_proxy_cli.h> On-Demand Private Proxy Client Model Context
Public Members
On-Demand Private Proxy Server The On-Demand Private Proxy Server model is a foundation model
defined by the Bluetooth mesh specification. It is enabled with the CONFIG_BT_MESH_OD_PRIV_PROXY_SRV
option.
The On-Demand Private Proxy Server model was introduced in the Bluetooth Mesh Protocol Specification
version 1.1, and supports the configuration of advertising with Private Network Identity type of a node
that is a recipient of Solicitation PDUs by managing its On-Demand Private GATT Proxy state.
When enabled, the Solicitation PDU RPL Configuration Server is also enabled. The On-Demand Private
Proxy Server is dependent on the Private Beacon Server to be present on the node.
The On-Demand Private Proxy Server does not have an API of its own, and relies on an On-Demand
Private Proxy Client to control it. The On-Demand Private Proxy Server model only accepts messages encrypted
with the node’s device key.
If present, the On-Demand Private Proxy Server model must be instantiated on the primary element.
API reference
group bt_mesh_od_priv_proxy_srv
Defines
BT_MESH_MODEL_OD_PRIV_PROXY_SRV
On-Demand Private Proxy Server model composition data entry.
Opcodes Aggregator Client The Opcodes Aggregator Client model is a foundation model defined by
the Bluetooth mesh specification. It is an optional model, enabled with the CONFIG_BT_MESH_OP_AGG_CLI
option.
The Opcodes Aggregator Client model is introduced in the Bluetooth Mesh Profile Specification version
1.1, and is used to support the functionality of dispatching a sequence of access layer messages to nodes
supporting the Opcodes Aggregator Server model.
The Opcodes Aggregator Client model communicates with an Opcodes Aggregator Server model using
the device key of the target node or the application keys configured by the Configuration Client.
The Opcodes Aggregator Client model must only be instantiated on the primary element, and it is im-
plicitly bound to the device key on initialization.
The Opcodes Aggregator Client model should be bound to the same application keys as the client
models that are used to produce the sequence of messages.
For a client model's messages to be aggregated, the client model should provide an asynchronous
API, for example through callbacks.
API reference
group bt_mesh_op_agg_cli
Defines
BT_MESH_MODEL_OP_AGG_CLI
Opcodes Aggregator Client model composition data entry.
Functions
Opcodes Aggregator Server The Opcodes Aggregator Server model is a foundation model defined by
the Bluetooth mesh specification. It is an optional model, enabled with the CONFIG_BT_MESH_OP_AGG_SRV
option.
The Opcodes Aggregator Server model is introduced in the Bluetooth Mesh Profile Specification version
1.1, and is used to support the functionality of processing a sequence of access layer messages.
The Opcodes Aggregator Server model accepts messages encrypted with the node’s device key or the
application keys.
The Opcodes Aggregator Server model can only be instantiated on the node’s primary element.
The targeted server models should be bound to the same application key that is used to encrypt the
sequence of access layer messages sent to the Opcodes Aggregator Server.
The Opcodes Aggregator Server handles aggregated messages and dispatches them to the respective
models and their message handlers. The current implementation assumes that responses are sent from
the same execution context as the received message, and does not allow sending a postponed response,
for example from a work queue.
API reference
group bt_mesh_op_agg_srv
Defines
BT_MESH_MODEL_OP_AGG_SRV
Opcodes Aggregator Server model composition data entry.
Note: The Opcodes Aggregator Server handles aggregated messages and dispatches them
to the respective models and their message handlers. The current implementation assumes that
responses are sent from the same execution context as the received message, and does not
allow sending a postponed response, for example from a work queue.
Private Beacon Server The Private Beacon Server model is a foundation model defined by the Blue-
tooth mesh specification. It is enabled with the CONFIG_BT_MESH_PRIV_BEACON_SRV option.
The Private Beacon Server model is introduced in the Bluetooth Mesh Profile Specification version 1.1,
and controls the mesh node’s Private Beacon state, Private GATT Proxy state and Private Node Identity
state.
The Private Beacons feature adds privacy to the different Bluetooth mesh beacons by periodically ran-
domizing the beacon input data. This protects the mesh node from being tracked by devices outside
the mesh network, and hides the network’s IV index, IV update and the Key Refresh state. The Private
Beacon Server must be instantiated for the device to support sending of the private beacons, but the
node will process received private beacons without it.
The Private Beacon Server does not have an API of its own, but relies on a Private Beacon Client to control
it. The Private Beacon Server model only accepts messages encrypted with the node’s device key.
The application can configure the initial parameters of the Private Beacon Server model through the
bt_mesh_priv_beacon_srv instance passed to BT_MESH_MODEL_PRIV_BEACON_SRV . Note that if the mesh
node stored changes to this configuration in the settings subsystem, the initial values may be overwritten
upon loading.
The Private Beacon Server model is optional, and can only be instantiated in the node’s primary element.
API reference
group bt_mesh_priv_beacon_srv
Defines
BT_MESH_MODEL_PRIV_BEACON_SRV
Private Beacon Server model composition data entry.
Private Beacon Client The Private Beacon Client model is a foundation model defined by the Bluetooth
mesh specification. It is enabled with the CONFIG_BT_MESH_PRIV_BEACON_CLI option.
The Private Beacon Client model is introduced in the Bluetooth Mesh Profile Specification version 1.1,
and provides functionality for configuring the Private Beacon Server models.
The Private Beacons feature adds privacy to the different Bluetooth mesh beacons by periodically ran-
domizing the beacon input data. This protects the mesh node from being tracked by devices outside the
mesh network, and hides the network’s IV index, IV update and the Key Refresh state.
The Private Beacon Client model communicates with a Private Beacon Server model using the device key
of the target node. The Private Beacon Client model may communicate with servers on other nodes or
self-configure through the local Private Beacon Server model.
All configuration functions in the Private Beacon Client API have net_idx and addr as their first param-
eters. These should be set to the network index and the primary unicast address the target node was
provisioned with.
The Private Beacon Client model is optional, and can be instantiated on any element.
API reference
group bt_mesh_priv_beacon_cli
Defines
BT_MESH_MODEL_PRIV_BEACON_CLI(cli_data)
Private Beacon Client model composition data entry.
Parameters
• cli_data – Pointer to a Bluetooth Mesh Private Beacon Client instance.
Functions
struct bt_mesh_priv_beacon
#include <priv_beacon_cli.h> Private Beacon
Public Members
uint8_t enabled
Private beacon is enabled
uint8_t rand_interval
Random refresh interval (in 10 second steps), or 0 to keep current value.
struct bt_mesh_priv_node_id
#include <priv_beacon_cli.h> Private Node Identity
Public Members
uint16_t net_idx
Index of the NetKey.
uint8_t state
Private Node Identity state
uint8_t status
Response status code.
struct bt_mesh_priv_beacon_cli_cb
#include <priv_beacon_cli.h> Private Beacon Client Status messages callbacks
Public Members
struct bt_mesh_priv_beacon_cli
#include <priv_beacon_cli.h> Mesh Private Beacon Client model
Public Members
Remote Provisioning Client The Remote Provisioning Client model is a foundation model defined by
the Bluetooth mesh specification. It is enabled with the CONFIG_BT_MESH_RPR_CLI option.
The Remote Provisioning Client model is introduced in the Bluetooth Mesh Protocol Specification version
1.1. This model provides functionality to remotely provision devices into a mesh network, and perform
Node Provisioning Protocol Interface procedures by interacting with mesh nodes that support the Remote
Provisioning Server model.
The Remote Provisioning Client model communicates with a Remote Provisioning Server model using
the device key of the node containing the target Remote Provisioning Server model instance.
If present, the Remote Provisioning Client model must be instantiated on the primary element.
Scanning The scanning procedure is used to scan for unprovisioned devices located near the
Remote Provisioning Server. The Remote Provisioning Client starts a scan procedure by using the
bt_mesh_rpr_scan_start() call:
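A pseudo code sketch of such a call is shown below; the server address 0x0004 and the empty report handler are illustrative:
static void scan_report(struct bt_mesh_rpr_cli *cli,
                        const struct bt_mesh_rpr_node *srv,
                        struct bt_mesh_rpr_unprov *unprov,
                        struct net_buf_simple *adv_data)
{
    /* Handle incoming scan reports here. */
}

struct bt_mesh_rpr_cli rpr_cli = {
    .scan_report = scan_report,
};

const struct bt_mesh_rpr_node srv = {
    .addr = 0x0004, /* illustrative server address */
    .net_idx = 0,
    .ttl = BT_MESH_TTL_DEFAULT,
};

struct bt_mesh_rpr_scan_status status;

int err = bt_mesh_rpr_scan_start(&rpr_cli, &srv, NULL /* uuid */,
                                 10 /* timeout, seconds */,
                                 3 /* max_devs */, &status);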
The above example shows pseudo code for starting a scan procedure on the target Remote Provisioning
Server node. This procedure starts a ten-second, multiple-device scan, where the generated
scan report will contain a maximum of three unprovisioned devices. If the UUID argument was
specified, the same procedure would only scan for the device with the corresponding UUID. After the
procedure completes, the server sends the scan report, which is handled in the client's
bt_mesh_rpr_cli.scan_report callback.
Additionally, the Remote Provisioning Client model also supports extended scanning with the
bt_mesh_rpr_scan_start_ext() call. Extended scanning supplements regular scanning by allowing
the Remote Provisioning Server to report additional data for a specific device. The Remote Provisioning
Server will use active scanning to request a scan response from the unprovisioned device if it is supported
by the unprovisioned device.
Provisioning The Remote Provisioning Client starts a provisioning procedure by using the
bt_mesh_provision_remote() call:
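A pseudo code sketch of such a call; the server address and the UUID buffer contents are illustrative:
uint8_t uuid[16] = { /* UUID of the device to provision */ };

const struct bt_mesh_rpr_node srv = {
    .addr = 0x0004, /* illustrative server address */
    .net_idx = 0,
    .ttl = BT_MESH_TTL_DEFAULT,
};

int err = bt_mesh_provision_remote(&rpr_cli, &srv, uuid,
                                   0 /* net_idx */, 0x0006 /* addr */);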
The above example shows pseudo code for remotely provisioning a device through a Remote Provisioning
Server node. This procedure will attempt to provision the device with the corresponding UUID, and
assign the address 0x0006 to its primary element using the network key located at index zero.
Note: During the remote provisioning, the same bt_mesh_prov callbacks are triggered as for ordinary
provisioning. See section Provisioning for further details.
Re-provisioning In addition to scanning and provisioning functionality, the Remote Provisioning Client
also provides means to reconfigure node addresses, device keys and Composition Data on devices that
support the Remote Provisioning Server model. This is provided through the Node Provisioning Protocol
Interface (NPPI) which supports the following three procedures:
• Device Key Refresh procedure: Used to change the device key of the Target node without a need to
reconfigure the node.
• Node Address Refresh procedure: Used to change the node’s device key and unicast address.
• Node Composition Refresh procedure: Used to change the device key of the node, and to add or
delete models or features of the node.
The three NPPI procedures can be initiated with the bt_mesh_reprovision_remote() call:
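A pseudo code sketch of such a call; the address values match the description that follows, and error handling is omitted:
struct bt_mesh_rpr_node srv = {
    .addr = 0x0006, /* current unicast address of the Target */
    .net_idx = 0,
    .ttl = BT_MESH_TTL_DEFAULT,
};

bool composition_changed = false;
uint16_t new_addr = 0x0009;

int err = bt_mesh_reprovision_remote(&rpr_cli, &srv, new_addr,
                                     composition_changed);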
The above example shows pseudo code for triggering a Node Address Refresh procedure on the Target
node. The specific procedure is not chosen directly, but rather through the other parameters that are
inputted. In the example we can see that the current unicast address of the Target is 0x0006, while
the new address is set to 0x0009. If the two addresses were the same, and the composition_changed
flag was set to true, this code would instead trigger a Node Composition Refresh procedure. If the two
addresses were the same, and the composition_changed flag was set to false, this code would trigger a
Device Key Refresh procedure.
API reference
group bt_mesh_rpr_cli
Defines
BT_MESH_RPR_SCAN_MAX_DEVS_ANY
Special value for the max_devs parameter of bt_mesh_rpr_scan_start.
Tells the Remote Provisioning Server not to put restrictions on the max number of devices
reported to the Client.
BT_MESH_MODEL_RPR_CLI(_cli)
Remote Provisioning Client model composition data entry.
Parameters
• _cli – Pointer to a Remote Provisioning Client model instance.
Functions
Use the uuid parameter to scan for a specific device, or leave it as NULL to report all unprovi-
sioned devices.
The Server will ignore duplicates, and report up to max_devs number of devices. Requesting
a max_devs number that’s higher than the Server’s capability will result in an error.
Parameters
• cli – Remote Provisioning Client.
• srv – Remote Provisioning Server.
• uuid – Device UUID to scan for, or NULL to report all devices.
• timeout – Scan timeout in seconds. Must be at least 1 second.
• max_devs – Max number of devices to report, or 0 to report as many as possible.
• status – Scan status response buffer.
Returns
0 on success, or (negative) error code otherwise.
int bt_mesh_rpr_scan_start_ext(struct bt_mesh_rpr_cli *cli, const struct bt_mesh_rpr_node
*srv, const uint8_t uuid[16], uint8_t timeout, const uint8_t
*ad_types, size_t ad_count)
Start extended scanning for unprovisioned devices.
Extended scanning supplements regular unprovisioned scanning, by allowing the Server to
report additional data for a specific device. The Remote Provisioning Server will use active
scanning to request a scan response from the unprovisioned device, if supported. If no UUID
is provided, the Server will report a scan on its own OOB information and advertising data.
Use the ad_types array to specify which AD types to include in the scan report. Some AD types
invoke special behavior:
• BT_DATA_NAME_COMPLETE Will report both the complete and the shortened name.
• BT_DATA_URI If the unprovisioned beacon contains a URI hash, the Server will extend
the scanning to include packets other than the scan response, to look for URIs matching
the URI hash. Only matching URIs will be reported.
The following AD types should not be used:
• BT_DATA_NAME_SHORTENED
• BT_DATA_UUID16_SOME
• BT_DATA_UUID32_SOME
• BT_DATA_UUID128_SOME
Additionally, each AD type should only occur once.
Parameters
• cli – Remote Provisioning Client.
• srv – Remote Provisioning Server.
• uuid – Device UUID to start extended scanning for, or NULL to scan the remote
server.
• timeout – Scan timeout in seconds. Valid values range from
BT_MESH_RPR_EXT_SCAN_TIME_MIN to BT_MESH_RPR_EXT_SCAN_TIME_MAX.
Ignored if UUID is NULL.
• ad_types – List of AD types to include in the scan report. Must contain 1 to
CONFIG_BT_MESH_RPR_AD_TYPES_MAX entries.
• ad_count – Number of AD types in ad_types.
Returns
0 on success, or (negative) error code otherwise.
int bt_mesh_rpr_scan_stop(struct bt_mesh_rpr_cli *cli, const struct bt_mesh_rpr_node *srv,
struct bt_mesh_rpr_scan_status *status)
Stop any ongoing scanning on the Remote Provisioning Server.
Parameters
• cli – Remote Provisioning Client.
• srv – Remote Provisioning Server.
• status – Scan status response buffer.
Returns
0 on success, or (negative) error code otherwise.
int bt_mesh_rpr_link_get(struct bt_mesh_rpr_cli *cli, const struct bt_mesh_rpr_node *srv, struct
bt_mesh_rpr_link *rsp)
Get the current link status of the Remote Provisioning Server.
Parameters
• cli – Remote Provisioning Client.
• srv – Remote Provisioning Server.
• rsp – Link status response buffer.
Returns
0 on success, or (negative) error code otherwise.
int bt_mesh_rpr_link_close(struct bt_mesh_rpr_cli *cli, const struct bt_mesh_rpr_node *srv,
struct bt_mesh_rpr_link *rsp)
Close any open link on the Remote Provisioning Server.
Parameters
• cli – Remote Provisioning Client.
• srv – Remote Provisioning Server.
• rsp – Link status response buffer.
Returns
0 on success, or (negative) error code otherwise.
int32_t bt_mesh_rpr_cli_timeout_get(void)
Get the current transmission timeout value.
Returns
The configured transmission timeout in milliseconds.
void bt_mesh_rpr_cli_timeout_set(int32_t timeout)
Set the transmission timeout value.
The transmission timeout controls the amount of time the Remote Provisioning Client models
will wait for a response from the Server.
Parameters
• timeout – The new transmission timeout.
struct bt_mesh_rpr_scan_status
#include <rpr_cli.h> Scan status response
Public Members
uint8_t max_devs
Max number of devices to report in current scan.
uint8_t timeout
Seconds remaining of the scan.
struct bt_mesh_rpr_caps
#include <rpr_cli.h> Remote Provisioning Server scanning capabilities
Public Members
uint8_t max_devs
Max number of scannable devices
bool active_scan
Supports active scan
struct bt_mesh_rpr_cli
#include <rpr_cli.h> Remote Provisioning Client model instance.
Public Members
Remote Provisioning Server The Remote Provisioning Server model is a foundation model defined by
the Bluetooth mesh specification. It is enabled with the CONFIG_BT_MESH_RPR_SRV option.
The Remote Provisioning Server model is introduced in the Bluetooth Mesh Protocol Specification version
1.1, and is used to support the functionality of remotely provisioning devices into a mesh network.
The Remote Provisioning Server does not have an API of its own, but relies on a Remote Provisioning
Client to control it. The Remote Provisioning Server model only accepts messages encrypted with the
node’s device key.
If present, the Remote Provisioning Server model must be instantiated on the primary element.
Note that after refreshing the device key, node address or Composition Data through a Node Provision-
ing Protocol Interface (NPPI) procedure, the bt_mesh_prov.reprovisioned callback is triggered. See
section Remote Provisioning Client for further details.
API reference
group bt_mesh_rpr_srv
Defines
BT_MESH_MODEL_RPR_SRV
Remote Provisioning Server model composition data entry.
Solicitation PDU RPL Configuration Client The Solicitation PDU RPL Configuration Client model is
a foundation model defined by the Bluetooth mesh specification. The model is optional, and is enabled
through the CONFIG_BT_MESH_SOL_PDU_RPL_CLI option.
The Solicitation PDU RPL Configuration Client model was introduced in the Bluetooth Mesh Protocol
Specification version 1.1, and supports the functionality of removing addresses from the solicitation
replay protection list (SRPL) of a node that supports the Solicitation PDU RPL Configuration Server model.
The Solicitation PDU RPL Configuration Client model communicates with a Solicitation PDU RPL Con-
figuration Server model using the application keys configured by the Configuration Client.
If present, the Solicitation PDU RPL Configuration Client model must be instantiated on the primary
element.
Configurations The Solicitation PDU RPL Configuration Client model behavior can be con-
figured with the transmission timeout option CONFIG_BT_MESH_SOL_PDU_RPL_CLI_TIMEOUT. The
CONFIG_BT_MESH_SOL_PDU_RPL_CLI_TIMEOUT controls how long the Solicitation PDU RPL Configura-
tion Client waits for a response message to arrive in milliseconds. This value can be changed at runtime
using bt_mesh_sol_pdu_rpl_cli_timeout_set() .
API reference
group bt_mesh_sol_pdu_rpl_cli
Defines
BT_MESH_MODEL_SOL_PDU_RPL_CLI(cli_data)
Solicitation PDU RPL Client model composition data entry.
Functions
struct bt_mesh_sol_pdu_rpl_cli
#include <sol_pdu_rpl_cli.h> Solicitation PDU RPL Client Model Context
Public Members
Param cli
Solicitation PDU RPL client that received the status message.
Param addr
Address of the sender.
Param range_start
Range start value.
Param range_length
Range length value.
Solicitation PDU RPL Configuration Server The Solicitation PDU RPL Configuration Server model is
a foundation model defined by the Bluetooth mesh specification. The model is enabled if the node has
the On-Demand Private Proxy Server enabled.
The Solicitation PDU RPL Configuration Server model was introduced in the Bluetooth Mesh Protocol
Specification version 1.1, and manages the Solicitation Replay Protection List (SRPL) saved on the de-
vice. The SRPL is used to reject Solicitation PDUs that are already processed by a node. When a valid
Solicitation PDU message is successfully processed by a node, the SSRC field and SSEQ field of the
message are stored in the node’s SRPL.
The Solicitation PDU RPL Configuration Server does not have an API of its own, and relies on a Solici-
tation PDU RPL Configuration Client to control it. The model only accepts messages encrypted with an
application key as configured by the Configuration Client.
If present, the Solicitation PDU RPL Configuration Server model must be instantiated on the primary
element.
Configurations For the Solicitation PDU RPL Configuration Server model, the
CONFIG_BT_MESH_PROXY_SRPL_SIZE option can be configured to set the size of the SRPL.
API reference
group bt_mesh_sol_pdu_rpl_srv
Defines
BT_MESH_MODEL_SOL_PDU_RPL_SRV
Solicitation PDU RPL Server model composition data entry.
Model specification models In addition to the foundation models defined in the Bluetooth mesh speci-
fication, the Bluetooth Mesh Model Specification defines several models, some of which are implemented
in Zephyr:
BLOB Transfer models The Binary Large Object (BLOB) Transfer models provide functionality for
sending large binary objects from a single source to many Target nodes over the Bluetooth mesh network.
It is the underlying transport method for the Device Firmware Update (DFU), but may be used for other
object transfer purposes.
The BLOB Transfer models support transfers of continuous binary objects of up to 4 GB (2^32 bytes).
The BLOB transfer protocol has built-in recovery procedures for packet losses, and sets up checkpoints to
ensure that all targets have received all the data before moving on. Data transfer order is not guaranteed.
BLOB transfers are constrained by the transfer speed and reliability of the underlying mesh network.
Under ideal conditions, the BLOBs can be transferred at a rate of up to 1 kbps, allowing a 100 kB BLOB
to be transferred in 10-15 minutes. However, network conditions, transfer capabilities and other limiting
factors can easily degrade the data rate by several orders of magnitude. Tuning the parameters of the
transfer according to the application and network configuration, as well as scheduling it to periods with
low network traffic, will offer significant improvements on the speed and reliability of the protocol.
However, achieving transfer rates close to the ideal rate is unlikely in actual deployments.
There are two BLOB Transfer models:
BLOB Transfer Server The Binary Large Object (BLOB) Transfer Server model implements reliable
receiving of large binary objects. It serves as the backend of the Firmware Update Server, but can also be
used for receiving other binary images.
BLOBs As described in BLOB Transfer models, the binary objects transferred by the BLOB Transfer
models are divided into blocks, which are divided into chunks. As the transfer is controlled by the BLOB
Transfer Client model, the BLOB Transfer Server must allow blocks to come in any order. The chunks
within a block may also come in any order, but all chunks in a block must be received before the next
block is started.
The BLOB Transfer Server keeps track of the received blocks and chunks, and will process each block
and chunk only once. The BLOB Transfer Server also ensures that any missing chunks are resent by the
BLOB Transfer Client.
Usage The BLOB Transfer Server is instantiated on an element with a set of event handler callbacks:
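A sketch of such an instantiation; the callback assignments are left empty for brevity, and the placement in the element's model list is illustrative:
static const struct bt_mesh_blob_srv_cb blob_srv_cb = {
    /* Assign event handler callbacks here (start, end, suspended,
     * resume, recover, ...). All callbacks are optional.
     */
};

static struct bt_mesh_blob_srv blob_srv = {
    .cb = &blob_srv_cb,
};

static struct bt_mesh_model models[] = {
    BT_MESH_MODEL_BLOB_SRV(&blob_srv),
};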
A BLOB Transfer Server is capable of receiving a single BLOB transfer at a time. Before the BLOB
Transfer Server can receive a transfer, it must be prepared by the user. The transfer ID must be passed to
the BLOB Transfer Server through the bt_mesh_blob_srv_recv() function before the transfer is started
by the BLOB Transfer Client. The ID must be shared between the BLOB Transfer Client and the BLOB
Transfer Server through some higher level procedure, like a vendor specific transfer management model.
Once the transfer has been set up on the BLOB Transfer Server, it’s ready for receiving the BLOB. The
application is notified of the transfer progress through the event handler callbacks, and the BLOB data is
sent to the BLOB stream.
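A sketch of preparing the Server for a transfer; the BLOB ID value is illustrative, and blob_stream stands for whichever bt_mesh_blob_io instance the application provides:
/* 64-bit BLOB ID agreed with the Client through a higher level procedure. */
uint64_t blob_id = 0x1122334455667788;

int err = bt_mesh_blob_srv_recv(&blob_srv, blob_id, &blob_stream,
                                BT_MESH_TTL_DEFAULT,
                                0 /* extra timeout_base */);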
The interaction between the BLOB Transfer Server, BLOB stream and application is shown below:
Transfer suspension The BLOB Transfer Server keeps a running timer during the transfer, that is reset
on every received message. If the BLOB Transfer Client does not send a message before the transfer timer
expires, the transfer is suspended by the BLOB Transfer Server.
The BLOB Transfer Server notifies the user of the suspension by calling the suspended callback. If the
BLOB Transfer Server is in the middle of receiving a block, this block is discarded.
The BLOB Transfer Client may resume a suspended transfer by starting a new block transfer. The BLOB
Transfer Server notifies the user by calling the resume callback.
Transfer recovery The state of the BLOB transfer is stored persistently. If a reboot occurs, the BLOB
Transfer Server will attempt to recover the transfer. When the Bluetooth mesh subsystem is started (for
instance by calling bt_mesh_init() ), the BLOB Transfer Server will check for aborted transfers, and
call the recover callback if there is any. In the recover callback, the user must provide a BLOB stream
to use for the rest of the transfer. If the recover callback doesn’t return successfully or does not provide
a BLOB stream, the transfer is abandoned. If no recover callback is implemented, transfers are always
abandoned after a reboot.
After a transfer is successfully recovered, the BLOB Transfer Server enters the suspended state. It will
stay suspended until the BLOB Transfer Client resumes the transfer, or the user cancels it.
Note: The BLOB Transfer Client sending the transfer must support transfer recovery for the transfer to
complete. If the BLOB Transfer Client has already given up the transfer, the BLOB Transfer Server will
stay suspended until the application calls bt_mesh_blob_srv_cancel() .
API reference
group bt_mesh_blob_srv
Defines
BT_MESH_BLOB_BLOCKS_MAX
Max number of blocks in a single transfer.
BT_MESH_MODEL_BLOB_SRV(_srv)
BLOB Transfer Server model composition data entry.
Parameters
• _srv – Pointer to a Bluetooth Mesh BLOB Transfer Server model API instance.
Functions
struct bt_mesh_blob_srv_cb
#include <blob_srv.h> BLOB Transfer Server model event handlers.
All callbacks are optional.
Public Members
Note: The transfer may end before it’s started if the start parameters are invalid.
Param srv
BLOB Transfer Server instance.
Param id
BLOB ID of the cancelled transfer.
Param success
Whether the transfer was successful.
Note: The BLOB Transfer Server does not run a timer in the suspended state, and it’s
up to the application to determine whether the transfer should be permanently cancelled.
Without interaction, the transfer will be suspended indefinitely, and the BLOB Transfer
Server will not accept any new transfers.
Param srv
BLOB Transfer Server instance.
struct bt_mesh_blob_srv
#include <blob_srv.h> BLOB Transfer Server instance.
Public Members
struct bt_mesh_blob_srv_state
#include <blob_srv.h>
BLOB Transfer Client The Binary Large Object (BLOB) Transfer Client is the sender of the BLOB trans-
fer. It supports sending BLOBs of any size to any number of Target nodes, in both Push BLOB Transfer
Mode and Pull BLOB Transfer Mode.
Usage
Initialization The BLOB Transfer Client is instantiated on an element with a set of event handler call-
backs:
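A sketch of such an instantiation; the callback assignments are left empty for brevity, and the placement in the element's model list is illustrative:
static const struct bt_mesh_blob_cli_cb blob_cli_cb = {
    /* Assign event handler callbacks here (caps, lost_target,
     * suspended, end, ...).
     */
};

static struct bt_mesh_blob_cli blob_cli = {
    .cb = &blob_cli_cb,
};

static struct bt_mesh_model models[] = {
    BT_MESH_MODEL_BLOB_CLI(&blob_cli),
};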
Transfer context Both the transfer capabilities retrieval procedure and the BLOB transfer use an
instance of a bt_mesh_blob_cli_inputs to determine how to perform the transfer. The BLOB Transfer
Client Inputs structure must at least be initialized with a list of targets, an application key and a time to
live (TTL) value before it is used in a procedure:
static struct bt_mesh_blob_cli_inputs inputs = {
    .app_idx = APP_KEY_IDX, /* illustrative application key index */
    .ttl = BT_MESH_TTL_DEFAULT,
};
sys_slist_init(&inputs.targets);
sys_slist_append(&inputs.targets, &targets[0].n);
sys_slist_append(&inputs.targets, &targets[1].n);
sys_slist_append(&inputs.targets, &targets[2].n);
Note that all BLOB Transfer Servers in the transfer must be bound to the chosen application key.
Group address The application may additionally specify a group address in the context structure. If the
group is not BT_MESH_ADDR_UNASSIGNED , the messages in the transfer will be sent to the group address,
instead of being sent individually to each Target node. The mesh network manager must ensure that all Target nodes holding the BLOB Transfer Server model are subscribed to this group address.
Using group addresses for transferring the BLOBs can generally increase the transfer speed, as the BLOB
Transfer Client sends each message to all Target nodes at the same time. However, sending large, seg-
mented messages to group addresses in Bluetooth mesh is generally less reliable than sending them to
unicast addresses, as there is no transport layer acknowledgment mechanism for groups. This can lead
to longer recovery periods at the end of each block, and increases the risk of losing Target nodes. Using
group addresses for BLOB transfers will generally only pay off if the list of Target nodes is extensive, and
the effectiveness of each addressing strategy will vary heavily between different deployments and the
size of the chunks.
Transfer timeout If a Target node fails to respond to an acknowledged message within the BLOB
Transfer Client’s time limit, the Target node is dropped from the transfer. The application can reduce the
chances of this by giving the BLOB Transfer Client extra time through the context structure. The extra
time may be set in 10-second increments, up to 182 hours, in addition to the base time of 20 seconds.
The wait time scales automatically with the transfer TTL.
Note that the BLOB Transfer Client only moves forward with the transfer in the following cases:
• All Target nodes have responded.
• A node has been removed from the list of Target nodes.
• The BLOB Transfer Client times out.
Increasing the wait time will increase this delay.
BLOB transfer capabilities retrieval It is generally recommended to retrieve BLOB transfer capabil-
ities before starting a transfer. The procedure populates the transfer capabilities from all Target nodes
with the most liberal set of parameters that allows all Target nodes to participate in the transfer. Any
Target nodes that fail to respond, or respond with incompatible transfer parameters, will be dropped.
Target nodes are prioritized according to their order in the list of Target nodes. If a Target node is found to
be incompatible with any of the previous Target nodes, for instance by reporting a non-overlapping block
size range, it will be dropped. Lost Target nodes will be reported through the lost_target callback.
The end of the procedure is signalled through the caps callback, and the resulting capabilities can be
used to determine the block and chunk sizes required for the BLOB transfer.
BLOB transfer The BLOB transfer is started by calling the bt_mesh_blob_cli_send() function, which (in
addition to the aforementioned transfer inputs) requires a set of transfer parameters and a BLOB stream
instance. The transfer parameters include the 64-bit BLOB ID, the BLOB size, the transfer mode, the
block size in logarithmic representation and the chunk size. The BLOB ID is application defined, but
must match the BLOB ID the BLOB Transfer Servers have been started with.
The transfer runs until it either completes successfully for at least one Target node, or it is cancelled. The
end of the transfer is communicated to the application through the end callback. Lost Target nodes will
be reported through the lost_target callback.
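A sketch of starting a transfer (the `blob_cli`, `inputs` and `blob_stream` variables are assumed to be set up as described above, with `blob_stream` being a struct bt_mesh_blob_io instance; the ID and parameter values are placeholders):

```c
struct bt_mesh_blob_xfer xfer = {
	/* Application defined, but must match the BLOB ID the
	 * BLOB Transfer Servers have been started with:
	 */
	.id = 0x1122334455667788,
	.size = blob_size,
	.mode = BT_MESH_BLOB_XFER_MODE_PUSH,
	.block_size_log = block_size_log,
	.chunk_size = chunk_size,
};

int err = bt_mesh_blob_cli_send(&blob_cli, &inputs, &xfer, &blob_stream);
```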
API reference
group bt_mesh_blob_cli
Defines
BT_MESH_MODEL_BLOB_CLI(_cli)
BLOB Transfer Client model Composition Data entry.
Parameters
• _cli – Pointer to a Bluetooth Mesh BLOB Transfer Client model API instance.
Enums
enum bt_mesh_blob_cli_state
BLOB Transfer Client state.
Values:
enumerator BT_MESH_BLOB_CLI_STATE_NONE
No transfer is active.
enumerator BT_MESH_BLOB_CLI_STATE_CAPS_GET
Retrieving transfer capabilities.
enumerator BT_MESH_BLOB_CLI_STATE_START
Sending transfer start.
enumerator BT_MESH_BLOB_CLI_STATE_BLOCK_START
Sending block start.
enumerator BT_MESH_BLOB_CLI_STATE_BLOCK_SEND
Sending block chunks.
enumerator BT_MESH_BLOB_CLI_STATE_BLOCK_CHECK
Checking block status.
enumerator BT_MESH_BLOB_CLI_STATE_XFER_CHECK
Checking transfer status.
enumerator BT_MESH_BLOB_CLI_STATE_CANCEL
Cancelling transfer.
enumerator BT_MESH_BLOB_CLI_STATE_SUSPENDED
Transfer is suspended.
enumerator BT_MESH_BLOB_CLI_STATE_XFER_PROGRESS_GET
Checking transfer progress.
Functions
struct bt_mesh_blob_target_pull
#include <blob_cli.h> Target node’s Pull mode (Pull BLOB Transfer Mode) context used while
sending chunks to the Target node.
Public Members
int64_t block_report_timestamp
Timestamp when the Block Report Timeout Timer expires for this Target node.
struct bt_mesh_blob_target
#include <blob_cli.h> BLOB Transfer Client Target node.
Public Members
sys_snode_t n
Linked list node
uint16_t addr
Target node address.
uint8_t status
BLOB transfer status, see bt_mesh_blob_status.
struct bt_mesh_blob_xfer_info
#include <blob_cli.h> BLOB transfer information.
If phase is BT_MESH_BLOB_XFER_PHASE_INACTIVE, the fields below phase are not initial-
ized. If phase is BT_MESH_BLOB_XFER_PHASE_WAITING_FOR_START, the fields below id
are not initialized.
Public Members
uint64_t id
BLOB ID.
uint32_t size
BLOB size in octets.
uint8_t block_size_log
Logarithmic representation of the block size.
uint16_t mtu_size
MTU size in octets.
struct bt_mesh_blob_cli_inputs
#include <blob_cli.h> BLOB Transfer Client transfer inputs.
Public Members
sys_slist_t targets
Linked list of Target nodes. Each node should point to bt_mesh_blob_target::n.
uint16_t app_idx
AppKey index to send with.
uint16_t group
Group address destination for the BLOB transfer, or BT_MESH_ADDR_UNASSIGNED to
send every message to each Target node individually.
uint8_t ttl
Time to live value of BLOB transfer messages.
uint16_t timeout_base
Additional response time for the Target nodes, in 10-second increments.
The extra time can be used to give the Target nodes more time to respond to messages
from the Client. The actual timeout will be calculated according to the following formula:
client timeout = 20 seconds + 10 seconds * timeout_base + 100 ms * TTL
If a Target node fails to respond to a message from the Client within the configured
transfer timeout, the Target node is dropped.
struct bt_mesh_blob_cli_caps
#include <blob_cli.h> Transfer capabilities of a Target node.
Public Members
size_t max_size
Max BLOB size.
uint8_t min_block_size_log
Logarithmic representation of the minimum block size.
uint8_t max_block_size_log
Logarithmic representation of the maximum block size.
uint16_t max_chunks
Max number of chunks per block.
uint16_t max_chunk_size
Max chunk size.
uint16_t mtu_size
Max MTU size.
struct bt_mesh_blob_cli_cb
#include <blob_cli.h> Event handler callbacks for the BLOB Transfer Client model.
All handlers are optional.
Public Members
void (*end)(struct bt_mesh_blob_cli *cli, const struct bt_mesh_blob_xfer *xfer, bool success)
Transfer end callback.
Called when the transfer ends.
Param cli
BLOB Transfer Client instance.
Param xfer
Completed transfer.
Param success
Status of the transfer. Is true if at least one Target node received the whole
transfer.
struct bt_mesh_blob_cli
#include <blob_cli.h> BLOB Transfer Client model instance.
Public Members
Concepts The BLOB transfer protocol introduces several new concepts to implement the BLOB transfer.
BLOBs BLOBs are binary objects up to 4 GB in size, that can contain any data the application would
like to transfer through the mesh network. The BLOBs are continuous data objects, divided into blocks
and chunks to make the transfers reliable and easy to process. No limitations are put on the contents or
structure of the BLOB, and applications are free to define any encoding or compression they’d like on the
data itself.
The BLOB transfer protocol does not provide any built-in integrity checks, encryption or authentication
of the BLOB data. However, the underlying encryption of the Bluetooth mesh protocol provides data
integrity checks and protects the contents of the BLOB from third parties using network and application
level encryption.
Blocks The binary objects are divided into blocks, typically from a few hundred to several thousand
bytes in size. Each block is transmitted separately, and the BLOB Transfer Client ensures that all BLOB
Transfer Servers have received the full block before moving on to the next. The block size is determined
by the transfer’s block_size_log parameter, and is the same for all blocks in the transfer except the last,
which may be smaller. For a BLOB stored in flash memory, the block size is typically a multiple of the
flash page size of the Target devices.
Chunks Each block is divided into chunks. A chunk is the smallest data unit in the BLOB transfer, and
must fit inside a single Bluetooth mesh access message excluding the opcode (379 bytes or less). The
mechanism for transferring chunks depends on the transfer mode.
When operating in Push BLOB Transfer Mode, the chunks are sent as unacknowledged packets from the
BLOB Transfer Client to all targeted BLOB Transfer Servers. Once all chunks in a block have been sent,
the BLOB Transfer Client asks each BLOB Transfer Server if they’re missing any chunks, and resends
them. This is repeated until all BLOB Transfer Servers have received all chunks, or the BLOB Transfer
Client gives up.
When operating in Pull BLOB Transfer Mode, the BLOB Transfer Server will request a small number
of chunks from the BLOB Transfer Client at a time, and wait for the BLOB Transfer Client to send them
before requesting more chunks. This repeats until all chunks have been transferred, or the BLOB Transfer
Server gives up.
Read more about the transfer modes in the Transfer modes section.
BLOB streams In the BLOB Transfer models’ APIs, the BLOB data handling is separated from the high-
level transfer handling. This split allows reuse of different BLOB storage and transfer strategies for
different applications. While the high level transfer is controlled directly by the application, the BLOB
data itself is accessed through a BLOB stream.
The BLOB stream is comparable to a standard library file stream. Through opening, closing, reading and
writing, the BLOB Transfer model gets full access to the BLOB data, whether it’s kept in flash, RAM, or on
a peripheral. The BLOB stream is opened with an access mode (read or write) before it’s used, and the
BLOB Transfer models will move around inside the BLOB’s data in blocks and chunks, using the BLOB
stream as an interface.
Interaction Before the BLOB is read or written, the stream is opened by calling its open callback. When
used with a BLOB Transfer Server, the BLOB stream is always opened in write mode, and when used with
a BLOB Transfer Client, it’s always opened in read mode.
For each block in the BLOB, the BLOB Transfer model starts by calling block_start . Then, depending
on the access mode, the BLOB stream’s wr or rd callback is called repeatedly to move data to or from the
BLOB. When the model is done processing the block, it calls block_end . When the transfer is complete,
the BLOB stream is closed by calling close .
Implementations The application may implement their own BLOB stream, or use the implementations
provided by Zephyr:
BLOB Flash The BLOB Flash Readers and Writers implement BLOB reading from and writing to flash partitions defined in the flash map.
BLOB Flash Reader The BLOB Flash Reader interacts with the BLOB Transfer Client to read BLOB
data directly from flash. It must be initialized by calling bt_mesh_blob_flash_rd_init() before being
passed to the BLOB Transfer Client. Each BLOB Flash Reader only supports one transfer at a time.
BLOB Flash Writer The BLOB Flash Writer interacts with the BLOB Transfer Server to write BLOB data directly to flash. It must be initialized by calling bt_mesh_blob_flash_wr_init() before being passed to the BLOB Transfer Server. Each BLOB Flash Writer only supports one transfer at a time, and requires a block size that is a multiple of the flash page size. If a transfer is started with a block size lower than the flash page size, the transfer will be rejected.
The BLOB Flash Writer copies chunk data into a buffer to accommodate chunks that are unaligned with
the flash write block size. The buffer data is padded with 0xff if either the start or length of the chunk
is unaligned.
API Reference
group bt_mesh_blob_io_flash
Functions
struct bt_mesh_blob_io_flash
#include <blob_io_flash.h> BLOB flash stream.
Public Members
uint8_t area_id
Flash area ID to write the BLOB to.
off_t offset
Offset into the flash area to place the BLOB at (in bytes).
Transfer capabilities Each BLOB Transfer Server may have different transfer capabilities. The transfer
capabilities of each device are controlled through the following configuration options:
• CONFIG_BT_MESH_BLOB_SIZE_MAX
• CONFIG_BT_MESH_BLOB_BLOCK_SIZE_MIN
• CONFIG_BT_MESH_BLOB_BLOCK_SIZE_MAX
• CONFIG_BT_MESH_BLOB_CHUNK_COUNT_MAX
The CONFIG_BT_MESH_BLOB_CHUNK_COUNT_MAX option is also used by the BLOB Transfer Client and af-
fects memory consumption by the BLOB Transfer Client model structure.
To ensure that the transfer can be received by as many servers as possible, the BLOB Transfer Client can
retrieve the capabilities of each BLOB Transfer Server before starting the transfer. The client will transfer
the BLOB with the highest possible block and chunk size.
Transfer modes BLOBs can be transferred using two transfer modes, Push BLOB Transfer Mode and
Pull BLOB Transfer Mode. In most cases, the transfer should be conducted in Push BLOB Transfer Mode.
In Push BLOB Transfer Mode, the send rate is controlled by the BLOB Transfer Client, which will push
all the chunks of each block without any high level flow control. Push BLOB Transfer Mode supports any
number of Target nodes, and should be the default transfer mode.
In Pull BLOB Transfer Mode, the BLOB Transfer Server will “pull” the chunks from the BLOB Transfer
Client at its own rate. Pull BLOB Transfer Mode can be conducted with multiple Target nodes, and is
intended for transferring BLOBs to Target nodes acting as Low Power Nodes. When operating in Pull BLOB
Transfer Mode, the BLOB Transfer Server will request chunks from the BLOB Transfer Client in small
batches, and wait for them all to arrive before requesting more chunks. This process is repeated until the
BLOB Transfer Server has received all chunks in a block. Then, the BLOB Transfer Client starts the next
block, and the BLOB Transfer Server requests all chunks of that block.
Transfer timeout The timeout of the BLOB transfer is based on a Timeout Base value. Both client and
server use the same Timeout Base value, but they calculate timeout differently.
The BLOB Transfer Server uses the following formula to calculate the BLOB transfer timeout:
10 * (Timeout Base + 1) seconds
while the BLOB Transfer Client calculates its timeout as:
(10000 * (Timeout Base + 2)) + (100 * TTL) milliseconds
API reference This section contains types and defines common to the BLOB Transfer models.
group bt_mesh_blob
Defines
CONFIG_BT_MESH_BLOB_CHUNK_COUNT_MAX
Enums
enum bt_mesh_blob_xfer_mode
BLOB transfer mode.
Values:
enumerator BT_MESH_BLOB_XFER_MODE_NONE
No valid transfer mode.
enumerator BT_MESH_BLOB_XFER_MODE_PUSH
Push mode (Push BLOB Transfer Mode).
enumerator BT_MESH_BLOB_XFER_MODE_PULL
Pull mode (Pull BLOB Transfer Mode).
enumerator BT_MESH_BLOB_XFER_MODE_ALL
Both modes are valid.
enum bt_mesh_blob_xfer_phase
Transfer phase.
Values:
enumerator BT_MESH_BLOB_XFER_PHASE_INACTIVE
The BLOB Transfer Server is awaiting configuration.
enumerator BT_MESH_BLOB_XFER_PHASE_WAITING_FOR_START
The BLOB Transfer Server is ready to receive a BLOB transfer.
enumerator BT_MESH_BLOB_XFER_PHASE_WAITING_FOR_BLOCK
The BLOB Transfer Server is waiting for the next block of data.
enumerator BT_MESH_BLOB_XFER_PHASE_WAITING_FOR_CHUNK
The BLOB Transfer Server is waiting for the next chunk of data.
enumerator BT_MESH_BLOB_XFER_PHASE_COMPLETE
The BLOB was transferred successfully.
enumerator BT_MESH_BLOB_XFER_PHASE_SUSPENDED
The BLOB transfer is paused.
enum bt_mesh_blob_status
BLOB model status codes.
Values:
enumerator BT_MESH_BLOB_SUCCESS
The message was processed successfully.
enumerator BT_MESH_BLOB_ERR_INVALID_BLOCK_NUM
The Block Number field value is not within the range of blocks being transferred.
enumerator BT_MESH_BLOB_ERR_INVALID_BLOCK_SIZE
The block size is smaller than the size indicated by the Min Block Size Log state or is
larger than the size indicated by the Max Block Size Log state.
enumerator BT_MESH_BLOB_ERR_INVALID_CHUNK_SIZE
The chunk size exceeds the size indicated by the Max Chunk Size state, or the number of
chunks exceeds the number specified by the Max Total Chunks state.
enumerator BT_MESH_BLOB_ERR_WRONG_PHASE
The operation cannot be performed while the server is in the current phase.
enumerator BT_MESH_BLOB_ERR_INVALID_PARAM
A parameter value in the message cannot be accepted.
enumerator BT_MESH_BLOB_ERR_WRONG_BLOB_ID
The message contains a BLOB ID value that is not expected.
enumerator BT_MESH_BLOB_ERR_BLOB_TOO_LARGE
There is not enough space available in memory to receive the BLOB.
enumerator BT_MESH_BLOB_ERR_UNSUPPORTED_MODE
The transfer mode is not supported by the BLOB Transfer Server model.
enumerator BT_MESH_BLOB_ERR_INTERNAL
An internal error occurred on the node.
enumerator BT_MESH_BLOB_ERR_INFO_UNAVAILABLE
The requested information cannot be provided while the server is in the current phase.
enum bt_mesh_blob_io_mode
BLOB stream interaction mode.
Values:
enumerator BT_MESH_BLOB_READ
Read data from the stream.
enumerator BT_MESH_BLOB_WRITE
Write data to the stream.
struct bt_mesh_blob_block
#include <blob.h> BLOB transfer data block.
Public Members
size_t size
Block size in bytes
off_t offset
Offset in bytes from the start of the BLOB.
uint16_t number
Block number
uint16_t chunk_count
Number of chunks in block.
struct bt_mesh_blob_chunk
#include <blob.h> BLOB data chunk.
Public Members
off_t offset
Offset of the chunk data from the start of the block.
size_t size
Chunk data size.
uint8_t *data
Chunk data.
struct bt_mesh_blob_xfer
#include <blob.h> BLOB transfer.
Public Members
uint64_t id
BLOB ID.
size_t size
Total BLOB size in bytes.
uint16_t chunk_size
Base chunk size. May be smaller for the last chunk.
struct bt_mesh_blob_io
#include <blob.h> BLOB stream.
Public Members
int (*open)(const struct bt_mesh_blob_io *io, const struct bt_mesh_blob_xfer *xfer, enum
bt_mesh_blob_io_mode mode)
Open callback.
Called when the reader is opened for reading.
Param io
BLOB stream.
Param xfer
BLOB transfer.
Param mode
Direction of the stream (read/write).
Return
0 on success, or (negative) error code otherwise.
int (*wr)(const struct bt_mesh_blob_io *io, const struct bt_mesh_blob_xfer *xfer, const struct
bt_mesh_blob_block *block, const struct bt_mesh_blob_chunk *chunk)
Chunk data write callback.
Used by the BLOB Transfer Server on incoming data.
Each block is divided into chunks of data. This callback is called when a new chunk of
data is received. Chunks may be received in any order within their block.
If the callback returns successfully, this chunk will be marked as received, and will not be
received again unless the block is restarted due to a transfer suspension. If the callback
returns a non-zero value, the chunk remains unreceived, and the BLOB Transfer Client
will attempt to resend it later.
Note that the Client will only perform a limited number of attempts at delivering a chunk
before dropping a Target node from the transfer. The number of retries performed by the
Client is implementation specific.
Param io
BLOB stream.
Param xfer
BLOB transfer.
Param block
Block the chunk is part of.
Param chunk
Received chunk.
Return
0 on success, or (negative) error code otherwise.
int (*rd)(const struct bt_mesh_blob_io *io, const struct bt_mesh_blob_xfer *xfer, const struct
bt_mesh_blob_block *block, const struct bt_mesh_blob_chunk *chunk)
Chunk data read callback.
Used by the BLOB Transfer Client to fetch outgoing data.
The Client calls the chunk data request callback to populate a chunk message going out
to the Target nodes. The data request callback may be called out of order and multiple
times for each offset, and cannot be used as an indication of progress.
Returning a non-zero status code on the chunk data request callback results in termination
of the transfer.
Param io
BLOB stream.
Param xfer
BLOB transfer.
Param block
Block the chunk is part of.
Param chunk
Chunk to get the data of. The buffer pointed to by the data member should be
filled by the callback.
Return
0 on success, or (negative) error code otherwise.
Device Firmware Update (DFU) Bluetooth mesh supports the distribution of firmware images across
a mesh network. The Bluetooth mesh DFU subsystem implements the Firmware update section of the
Bluetooth Mesh Model Specification v1.1.
Bluetooth mesh DFU implements a distribution mechanism for firmware images, and does not put any
restrictions on the size, format or usage of the images. The primary design goal of the subsystem is to
provide the qualifiable parts of the Bluetooth mesh DFU specification, and leave the usage, firmware
validation and deployment to the application.
The DFU specification is implemented in the Zephyr Bluetooth mesh DFU subsystem as three separate models:
• Firmware Update Server
• Firmware Update Client
• Firmware Distribution Server
Firmware Update Server The Firmware Update Server model implements the Target node functional-
ity of the Device Firmware Update (DFU) subsystem. It extends the BLOB Transfer Server, which it uses to
receive the firmware image binary from the Distributor node.
Together with the extended BLOB Transfer Server model, the Firmware Update Server model implements
all the required functionality for receiving firmware updates over the mesh network, but does not provide
any functionality for storing, applying or verifying the images.
Firmware images The Firmware Update Server holds a list of all the updatable firmware im-
ages on the device. The full list shall be passed to the server through the _imgs parameter in
BT_MESH_DFU_SRV_INIT , and must be populated before the Bluetooth mesh subsystem is started. Each
firmware image in the image list must be independently updatable, and should have its own firmware
ID.
For instance, a device with an upgradable bootloader, an application and a peripheral chip with firmware
update capabilities could have three entries in the firmware image list, each with their own separate
firmware ID.
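A sketch of such an image list for the three-image example above (the firmware ID contents are application defined; the names and values here are placeholders):

```c
static const uint8_t bootloader_fwid[] = { 0x01, 0x00 };
static const uint8_t app_fwid[] = { 0x02, 0x00 };
static const uint8_t peripheral_fwid[] = { 0x03, 0x00 };

/* One independently updatable image per entry, each with its own
 * firmware ID. Passed to the server through the _imgs parameter in
 * BT_MESH_DFU_SRV_INIT.
 */
static struct bt_mesh_dfu_img dfu_imgs[] = {
	{ .fwid = bootloader_fwid, .fwid_len = sizeof(bootloader_fwid) },
	{ .fwid = app_fwid, .fwid_len = sizeof(app_fwid) },
	{ .fwid = peripheral_fwid, .fwid_len = sizeof(peripheral_fwid) },
};
```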
Receiving transfers The Firmware Update Server model uses a BLOB Transfer Server model on the
same element to transfer the binary image. The interaction between the Firmware Update Server, BLOB
Transfer Server and application is described below:
[Sequence diagram: the Distributor starts the update; the Firmware Update Server sets up the BLOB Transfer Server and the BLOB transfer starts; the application verifies the image and reports it as verified; the Distributor checks the transfer status, then applies the image; finally, it gets the image list to confirm the new firmware ID.]
Transfer check The transfer check is an optional pre-transfer check the application can perform on
incoming firmware image metadata. The Firmware Update Server performs the transfer check by calling
the check callback.
The result of the transfer check is a pass/fail status return and the expected bt_mesh_dfu_effect . The
DFU effect return parameter will be communicated back to the Distributor, and should indicate what
effect the firmware update will have on the mesh state of the device. If the transfer will cause the device
to change its Composition Data or become unprovisioned, this should be communicated through the
effect parameter of the metadata check.
Start The Start procedure prepares the application for the incoming transfer. It’ll contain information
about which image is being updated, as well as the update metadata.
The Firmware Update Server start callback must return a pointer to the BLOB Writer the BLOB Transfer
Server will send the BLOB to.
BLOB transfer After the setup stage, the Firmware Update Server prepares the BLOB Transfer Server
for the incoming transfer. The entire firmware image is transferred to the BLOB Transfer Server, which
passes the image to its assigned BLOB Writer.
At the end of the BLOB transfer, the Firmware Update Server calls its end callback.
Image verification After the BLOB transfer has finished, the application should verify the image in any
way it can to ensure that it is ready to be applied. Once the image has been verified, the application
calls bt_mesh_dfu_srv_verified() .
If the image can’t be verified, the application calls bt_mesh_dfu_srv_rejected() .
Applying the image Finally, if the image was verified, the Distributor may instruct the Firmware Up-
date Server to apply the transfer. This is communicated to the application through the apply callback.
The application should swap the image and start running with the new firmware. The firmware image
table should be updated to reflect the new firmware ID of the updated image.
When the transfer applies to the mesh application itself, the device might have to reboot as part of
the swap. This restart can be performed from inside the apply callback, or done asynchronously. After
booting up with the new firmware, the firmware image table should be updated before the Bluetooth
mesh subsystem is started.
The Distributor will read out the firmware image table to confirm that the transfer was successfully
applied. If the metadata check indicated that the device would become unprovisioned, the Target node
is not required to respond to this check.
API reference
group bt_mesh_dfu_srv
API for the Bluetooth mesh Firmware Update Server model.
Defines
Functions
struct bt_mesh_dfu_srv_cb
#include <dfu_srv.h> Firmware Update Server event callbacks.
Public Members
void (*end)(struct bt_mesh_dfu_srv *srv, const struct bt_mesh_dfu_img *img, bool success)
int (*recover)(struct bt_mesh_dfu_srv *srv, const struct bt_mesh_dfu_img *img, const struct
bt_mesh_blob_io **io)
Transfer recovery callback.
If the device reboots in the middle of a transfer, the Firmware Update Server calls this
function when the Bluetooth mesh subsystem is started.
This callback is optional, but transfers will not be recovered after a reboot without it.
Param srv
Firmware Update Server instance.
Param img
DFU image being updated.
Param io
BLOB stream return parameter. Must be set to a valid BLOB stream by the
callback.
Return
0 on success, or (negative) error code to abandon the transfer.
struct bt_mesh_dfu_srv
#include <dfu_srv.h> Firmware Update Server instance.
Should be initialized with BT_MESH_DFU_SRV_INIT.
Public Members
size_t img_count
Number of updatable images.
Firmware Update Client The Firmware Update Client is responsible for distributing firmware updates
through the mesh network. The Firmware Update Client uses the BLOB Transfer Client as a transport for
its transfers.
API reference
group bt_mesh_dfu_cli
API for the Bluetooth mesh Firmware Update Client model.
Defines
BT_MESH_DFU_CLI_INIT(_handlers)
Initialization parameters for the Firmware Update Client model.
See also:
bt_mesh_dfu_cli_cb.
Parameters
• _handlers – Handler callback structure.
BT_MESH_MODEL_DFU_CLI(_cli)
Firmware Update Client model Composition Data entry.
Parameters
• _cli – Pointer to a Firmware Update Client model instance.
Typedefs
Param idx
Image index.
Param total
Total number of images on the Target node.
Param img
Image information for the given image index.
Param cb_data
Callback data.
Retval BT_MESH_DFU_ITER_STOP
Stop iterating through the image list and return from bt_mesh_dfu_cli_imgs_get.
Retval BT_MESH_DFU_ITER_CONTINUE
Continue iterating through the image list if any images remain.
Functions
Note: The BLOB Transfer Client transfer inputs targets list must point to a list of
bt_mesh_dfu_target nodes.
Parameters
• cli – Firmware Update Client model instance.
• inputs – BLOB Transfer Client transfer inputs.
• io – BLOB stream to read BLOB from.
• xfer – Firmware Update Client transfer parameters.
Returns
0 on success, or (negative) error code otherwise.
The DFU image list request can be used to determine which image indexes the Target node uses for its different firmware images.
Waits for a response until the procedure timeout expires.
Parameters
• cli – Firmware Update Client model instance.
• ctx – Message context.
• cb – Callback to call for each image index.
• cb_data – Callback data to pass to cb.
• max_count – Max number of images to return.
Returns
0 on success, or (negative) error code otherwise.
int bt_mesh_dfu_cli_metadata_check(struct bt_mesh_dfu_cli *cli, struct bt_mesh_msg_ctx *ctx,
uint8_t img_idx, const struct bt_mesh_dfu_slot *slot,
struct bt_mesh_dfu_metadata_status *rsp)
Perform a metadata check for the given DFU image slot.
The metadata check procedure allows the Firmware Update Client to check if a Target node
will accept a transfer of this DFU image slot, and what the effect would be.
Waits for a response until the procedure timeout expires.
Parameters
• cli – Firmware Update Client model instance.
• ctx – Message context.
• img_idx – Target node’s image index to check.
• slot – DFU image slot to check for.
• rsp – Metadata status response buffer.
Returns
0 on success, or (negative) error code otherwise.
int bt_mesh_dfu_cli_status_get(struct bt_mesh_dfu_cli *cli, struct bt_mesh_msg_ctx *ctx, struct
bt_mesh_dfu_target_status *rsp)
Get the status of a Target node.
Parameters
• cli – Firmware Update Client model instance.
• ctx – Message context.
• rsp – Response data buffer.
Returns
0 on success, or (negative) error code otherwise.
int32_t bt_mesh_dfu_cli_timeout_get(void)
Get the current procedure timeout value.
Returns
The configured procedure timeout.
void bt_mesh_dfu_cli_timeout_set(int32_t timeout)
Set the procedure timeout value.
Parameters
• timeout – The new procedure timeout.
struct bt_mesh_dfu_target
#include <dfu_cli.h> DFU Target node.
Public Members
uint8_t img_idx
Image index on the Target node
uint8_t effect
Expected DFU effect, see bt_mesh_dfu_effect.
uint8_t status
Current DFU status, see bt_mesh_dfu_status.
uint8_t phase
Current DFU phase, see bt_mesh_dfu_phase.
struct bt_mesh_dfu_metadata_status
#include <dfu_cli.h> Metadata status response.
Public Members
uint8_t idx
Image index.
struct bt_mesh_dfu_target_status
#include <dfu_cli.h> DFU Target node status parameters.
Public Members
uint64_t blob_id
BLOB ID used in the transfer.
uint8_t img_idx
Image index to transfer.
uint8_t ttl
TTL used in the transfer.
uint16_t timeout_base
Additional response time for the Target nodes, in 10-second increments.
The extra time can be used to give the Target nodes more time to respond to messages
from the Client. The actual timeout will be calculated according to the following formula:
client timeout = 20 seconds + 10 seconds * timeout_base + 100 ms * TTL
If a Target node fails to respond to a message from the Client within the configured
transfer timeout, the Target node is dropped.
struct bt_mesh_dfu_cli_cb
#include <dfu_cli.h> Firmware Update Client event callbacks.
Public Members
struct bt_mesh_dfu_cli
#include <dfu_cli.h> Firmware Update Client model instance.
Should be initialized with BT_MESH_DFU_CLI_INIT.
Public Members
struct bt_mesh_dfu_cli_xfer_blob_params
#include <dfu_cli.h> BLOB parameters for Firmware Update Client transfer:
Public Members
uint16_t chunk_size
Base chunk size. May be smaller for the last chunk.
struct bt_mesh_dfu_cli_xfer
#include <dfu_cli.h> Firmware Update Client transfer parameters:
Public Members
uint64_t blob_id
BLOB ID to use for this transfer, or 0 to set it randomly.
Firmware Distribution Server The Firmware Distribution Server model implements the Distributor
role for the Device Firmware Update (DFU) subsystem. It extends the BLOB Transfer Server, which it uses
to receive the firmware image binary from the Initiator node. It also instantiates a Firmware Update
Client, which it uses to distribute firmware updates throughout the mesh network.
Note: Currently, the Firmware Distribution Server supports out-of-band (OOB) retrieval of firmware
images over SMP service only.
The Firmware Distribution Server does not have an API of its own, but relies on a Firmware Distribution
Client model on a different device to give it information and trigger image distribution and upload.
Firmware slots The Firmware Distribution Server is capable of storing multiple firmware images for
distribution. Each slot contains a separate firmware image with metadata, and can be distributed to
other mesh nodes in the network in any order. The contents, format and size of the firmware images
are vendor specific, and may contain data from other vendors. The application should never attempt to
execute or modify them.
The slots are managed remotely by a Firmware Distribution Client, which can both upload new slots and
delete old ones. The application is notified of changes to the slots through the Firmware Distribution
Server’s callbacks (bt_mesh_fd_srv_cb). While the metadata for each firmware slot is stored internally,
the application must provide BLOB streams for reading and writing the firmware image.
API reference
group bt_mesh_dfd_srv
API for the Firmware Distribution Server model.
Defines
CONFIG_BT_MESH_DFD_SRV_TARGETS_MAX
BT_MESH_DFD_SRV_INIT(_cb)
Initialization parameters for the Firmware Distribution Server model.
BT_MESH_MODEL_DFD_SRV(_srv)
Firmware Distribution Server model Composition Data entry.
Parameters
• _srv – Pointer to a Firmware Distribution Server model instance.
struct bt_mesh_dfd_srv_cb
#include <dfd_srv.h> Firmware Distribution Server callbacks:
Public Members
int (*recv)(struct bt_mesh_dfd_srv *srv, const struct bt_mesh_dfu_slot *slot, const struct
bt_mesh_blob_io **io)
Slot receive callback.
Called at the start of an upload procedure. The callback must fill io with a pointer to a
writable BLOB stream for the Firmware Distribution Server to write the firmware image
to.
Param srv
Firmware Distribution Server model instance.
Param slot
DFU image slot being received.
Param io
BLOB stream response pointer.
Return
0 on success, or (negative) error code otherwise.
int (*send)(struct bt_mesh_dfd_srv *srv, const struct bt_mesh_dfu_slot *slot, const struct
bt_mesh_blob_io **io)
Slot send callback.
Called at the start of a distribution procedure. The callback must fill io with a pointer to
a readable BLOB stream for the Firmware Distribution Server to read the firmware image
from.
Param srv
Firmware Distribution Server model instance.
Param slot
DFU image slot being sent.
Param io
BLOB stream response pointer.
Return
0 on success, or (negative) error code otherwise.
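Both callbacks follow the same contract: fill the io output parameter with a BLOB stream and return 0. A minimal sketch of that pattern, using locally defined stand-ins for the Zephyr types (the real definitions live in dfd_srv.h and blob.h, and a real application would typically back the stream with flash):

```c
#include <stddef.h>

/* Simplified stand-ins for the Zephyr types; illustration only. */
struct bt_mesh_blob_io { int placeholder; };
struct bt_mesh_dfd_srv;
struct bt_mesh_dfu_slot { size_t size; };

/* One stream object shared by upload (write) and distribution (read).
 * A real application would back this with flash storage. */
static struct bt_mesh_blob_io dfu_io;

/* Called at the start of an upload: hand out a writable stream. */
static int dfd_recv(struct bt_mesh_dfd_srv *srv,
                    const struct bt_mesh_dfu_slot *slot,
                    const struct bt_mesh_blob_io **io)
{
    (void)srv;
    (void)slot;
    *io = &dfu_io;
    return 0;
}

/* Called at the start of a distribution: hand out a readable stream. */
static int dfd_send(struct bt_mesh_dfd_srv *srv,
                    const struct bt_mesh_dfu_slot *slot,
                    const struct bt_mesh_blob_io **io)
{
    (void)srv;
    (void)slot;
    *io = &dfu_io;
    return 0;
}
```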
struct bt_mesh_dfd_srv
#include <dfd_srv.h> Firmware Distribution Server instance.
Overview
DFU roles The Bluetooth mesh DFU subsystem defines three different roles the mesh nodes have to
assume in the distribution of firmware images:
Target node
The Target node is the receiver and user of the transferred firmware images. All its functionality is
implemented by the Firmware Update Server model. A transfer may be targeting any number of
Target nodes, and they will all be updated concurrently.
Distributor
The Distributor role serves two purposes in the DFU process. First, it acts as the Target node
in the Upload Firmware procedure; then, it distributes the uploaded image to other Target nodes
as the Distributor. The Distributor does not select the parameters of the transfer, but relies on an
Initiator to give it a list of Target nodes and transfer parameters. The Distributor functionality
is implemented in two models, Firmware Distribution Server and Firmware Update Client. The
Firmware Distribution Server is responsible for communicating with the Initiator, and the Firmware
Update Client is responsible for distributing the image to the Target nodes.
Initiator
The Initiator role is typically implemented by the same device that implements the Bluetooth mesh
Provisioner and Configurator roles. The Initiator needs a full overview of the potential Target nodes
and their firmware, and will control (and initiate) all firmware updates. The Initiator role is not
implemented in the Zephyr Bluetooth mesh DFU subsystem.
Fig. 10: DFU roles and the associated Bluetooth mesh models
Bluetooth mesh applications may combine the DFU roles in any way they’d like, and even take on mul-
tiple instances of the same role by instantiating the models on separate elements. For instance, the
Distributor and Initiator role can be combined by instantiating the Firmware Update Client on the Initia-
tor node and calling its API directly.
It’s also possible to combine the Initiator and Distributor devices into a single device, and replace the
Firmware Distribution Server model with a proprietary mechanism that will access the Firmware Update
Client model directly, e.g. over a serial protocol.
Note: All DFU models instantiate one or more BLOB Transfer models, and may need to be spread over
multiple elements for certain role combinations.
Stages The Bluetooth mesh DFU process is designed to act in three stages:
Upload stage
First, the image is uploaded to a Distributor in a mesh network by an external entity, such as a
phone or gateway (the Initiator). During the Upload stage, the Initiator transfers the firmware
image and all its metadata to the Distributor node inside the mesh network. The Distributor stores
the firmware image and its metadata persistently, and awaits further instructions from the Initiator.
The time required to complete the upload process depends on the size of the image. After the
upload completes, the Initiator can disconnect from the network during the much more time-
consuming Distribution stage. Once the firmware has been uploaded to the Distributor, the Initiator
may trigger the Distribution stage at any time.
Firmware Capability Check stage (optional)
Before starting the Distribution stage, the Initiator may optionally check if Target nodes can accept
the new firmware. Nodes that do not respond, or respond that they can’t receive the new firmware,
are excluded from the firmware distribution process.
Distribution stage
Before the firmware image can be distributed, the Initiator transfers the list of Target nodes and
their designated firmware image index to the Distributor. Next, it tells the Distributor to start
the firmware distribution process, which runs in the background while the Initiator and the mesh
network perform other duties. Once the firmware image has been transferred to the Target nodes,
the Distributor may ask them to apply the firmware image immediately and report back with their
status and new firmware IDs.
Firmware images All updatable parts of a mesh node’s firmware should be represented as a firmware
image. Each Target node holds a list of firmware images, each of which should be independently updat-
able and identifiable.
Firmware images are represented as a BLOB (the firmware itself) with the following additional informa-
tion attached to it:
Firmware ID
The firmware ID is used to identify a firmware image. The Initiator node may ask the Target
nodes for a list of its current firmware IDs to determine whether a newer version of the firmware
is available. The format of the firmware ID is vendor specific, but generally, it should include
enough information for an Initiator node with knowledge of the format to determine the type of
image as well as its version. The firmware ID is optional, and its max length is determined by
CONFIG_BT_MESH_DFU_FWID_MAXLEN.
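Since the firmware ID format is vendor specific, the layout below is purely hypothetical: a 16-bit company identifier followed by version fields, packed little-endian. It only illustrates how an Initiator that knows the format could recover the image type and version:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical vendor-specific firmware ID layout. Real products
 * define their own format; only the maximum length is bounded, by
 * CONFIG_BT_MESH_DFU_FWID_MAXLEN. */
struct vnd_fwid {
    uint16_t company_id; /* Identifies the image type/vendor. */
    uint8_t major;
    uint8_t minor;
    uint16_t revision;
};

/* Pack the ID into a 6-byte little-endian buffer. */
static size_t vnd_fwid_encode(const struct vnd_fwid *id, uint8_t buf[6])
{
    buf[0] = id->company_id & 0xff;
    buf[1] = id->company_id >> 8;
    buf[2] = id->major;
    buf[3] = id->minor;
    buf[4] = id->revision & 0xff;
    buf[5] = id->revision >> 8;
    return 6;
}
```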
Firmware metadata
The firmware metadata is used by the Target node to determine whether it should accept an in-
coming firmware update, and what the effect of the update would be. The metadata format is
vendor specific, and should contain all information the Target node needs to verify the image, as
well as any preparation the Target node has to make before the image is applied. Typical metadata
information can be image signatures, changes to the node’s Composition Data and the format of
the BLOB. The Target node may perform a metadata check before accepting incoming transfers
to determine whether the transfer should be started. The firmware metadata can be discarded by
the Target node after the metadata check, as other nodes will never request the metadata from
the Target node. The firmware metadata is optional, and its maximum length is determined by
CONFIG_BT_MESH_DFU_METADATA_MAXLEN.
The Bluetooth mesh DFU subsystem in Zephyr provides its own metadata format
(bt_mesh_dfu_metadata ) together with a set of related functions that can be used by an end prod-
uct. The support for it is enabled using the CONFIG_BT_MESH_DFU_METADATA option. The format of
the metadata is presented in the table below.
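As an illustration of the fields involved (not the normative table layout, which is defined by dfu_metadata.h and may use different field widths), a little-endian serialization of the documented version, size, hash and element-count fields could look like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Mirrors the documented bt_mesh_dfu_metadata_fw_ver fields. */
struct fw_ver {
    uint8_t major, minor;
    uint16_t revision;
    uint32_t build_num;
};

/* Illustrative packing only: version (8 B), firmware size (4 B),
 * Composition Data hash (4 B) and element count (2 B), little-endian.
 * The authoritative on-air format is the Zephyr metadata table. */
static size_t metadata_pack(const struct fw_ver *ver, uint32_t fw_size,
                            uint32_t comp_hash, uint16_t elems, uint8_t *buf)
{
    size_t n = 0;

    buf[n++] = ver->major;
    buf[n++] = ver->minor;
    buf[n++] = ver->revision & 0xff;
    buf[n++] = ver->revision >> 8;
    for (int i = 0; i < 4; i++) {
        buf[n++] = (ver->build_num >> (8 * i)) & 0xff;
    }
    for (int i = 0; i < 4; i++) {
        buf[n++] = (fw_size >> (8 * i)) & 0xff;
    }
    for (int i = 0; i < 4; i++) {
        buf[n++] = (comp_hash >> (8 * i)) & 0xff;
    }
    buf[n++] = elems & 0xff;
    buf[n++] = elems >> 8;
    return n; /* 18 bytes, before any optional user data. */
}
```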
Firmware URI
The firmware URI gives the Initiator information about where firmware updates for the image can
be found. The URI points to an online resource the Initiator can interact with to get new versions
of the firmware. This allows Initiators to perform updates for any node in the mesh network by
interacting with the web server pointed to in the URI. The URI must point to a resource using
the http or https schemes, and the targeted web server must behave according to the Firmware
Check Over HTTPS procedure defined by the specification. The firmware URI is optional, and its
max length is determined by CONFIG_BT_MESH_DFU_URI_MAXLEN.
Firmware effect A new image may have a Composition Data Page 0 that differs from the one currently
allocated on a Target node. This may affect the provisioning data of the node and how the Distributor
finalizes the DFU. Depending on the availability of the Remote Provisioning Server model in the old and
new image, the device may either boot up unprovisioned after applying the new firmware, or require
re-provisioning. The complete list of available options is defined in bt_mesh_dfu_effect :
BT_MESH_DFU_EFFECT_NONE
The device stays provisioned after the new firmware is programmed. This effect is chosen if the
composition data of the new firmware doesn’t change.
BT_MESH_DFU_EFFECT_COMP_CHANGE_NO_RPR
This effect is chosen when the composition data changes and the device doesn’t support remote
provisioning. The new composition data takes effect only after the device is re-provisioned.
BT_MESH_DFU_EFFECT_COMP_CHANGE
This effect is chosen when the composition data changes and the device supports remote
provisioning. In this case, the device stays provisioned, and the new composition data takes
effect after re-provisioning using the Remote Provisioning models.
BT_MESH_DFU_EFFECT_UNPROV
This effect is chosen if the composition data in the new firmware changes, the device doesn’t
support remote provisioning, and the new composition data takes effect after applying the
firmware.
When the Target node receives the Firmware Update Firmware Metadata Check message, the Firmware
Update Server model calls the bt_mesh_dfu_srv_cb.check callback. The application can then process
the metadata and provide the effect value.
DFU procedures The DFU protocol is implemented as a set of procedures that must be performed in a
certain order.
The Initiator controls the Upload stage of the DFU protocol, and all Distributor side handling of the
upload subprocedures is implemented in the Firmware Distribution Server.
The Distribution stage is controlled by the Distributor, as implemented by the Firmware Update Client.
The Target node implements all handling of these procedures in the Firmware Update Server, and notifies
the application through a set of callbacks.
Fig. 11: DFU stages and procedures as seen from the Distributor (Upload stage: uploading the firmware
from the Initiator; Distribution stage: populating the Distributor’s receivers list, applying the firmware
image, recovering from a failed distribution)
Uploading the firmware The Upload Firmware procedure uses the BLOB Transfer models to transfer
the firmware image from the Initiator to the Distributor. The Upload Firmware procedure works in two
steps:
1. The Initiator generates a BLOB ID, and sends it to the Distributor’s Firmware Distribution Server
along with the firmware information and other input parameters of the BLOB transfer. The
Firmware Distribution Server stores the information, and prepares its BLOB Transfer Server for
the incoming transfer before it responds with a status message to the Initiator.
2. The Initiator’s BLOB Transfer Client model transfers the firmware image to the Distributor’s BLOB
Transfer Server, which stores the image in a predetermined flash partition.
When the BLOB transfer finishes, the firmware image is ready for distribution. The Initiator may upload
several firmware images to the Distributor, and ask it to distribute them in any order or at any time.
Additional procedures are available for querying and deleting firmware images from the Distributor.
The following Distributor capabilities related to firmware images can be configured using Kconfig
options:
• CONFIG_BT_MESH_DFU_SLOT_CNT: Amount of image slots available on the device.
• CONFIG_BT_MESH_DFD_SRV_SLOT_MAX_SIZE: Maximum allowed size for each image.
• CONFIG_BT_MESH_DFD_SRV_SLOT_SPACE: Available space for all images.
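Taken together, these options bound the Distributor’s image storage. A hypothetical prj.conf fragment for a device holding up to two images of 128 KiB each might look like:

```
# Hypothetical values; tune to the target's flash layout.
CONFIG_BT_MESH_DFU_SLOT_CNT=2
CONFIG_BT_MESH_DFD_SRV_SLOT_MAX_SIZE=131072
CONFIG_BT_MESH_DFD_SRV_SLOT_SPACE=262144
```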
Populating the Distributor’s receivers list Before the Distributor can start distributing the firmware
image, it needs a list of Target nodes to send the image to. The Initiator gets the full list of Target
nodes either by querying the potential targets directly, or through some external authority. The Initiator
uses this information to populate the Distributor’s receivers list with the address and relevant firmware
image index of each Target node. The Initiator may send one or more Firmware Distribution Receivers
Add messages to build the Distributor’s receivers list, and a Firmware Distribution Receivers Delete All
message to clear it.
The maximum number of receivers that can be added to the Distributor is configured through the
CONFIG_BT_MESH_DFD_SRV_TARGETS_MAX configuration option.
Initiating the distribution Once the Distributor has stored a firmware image and received a list of
Target nodes, the Initiator may initiate the distribution procedure. The BLOB transfer parameters for the
distribution are passed to the Distributor along with an update policy. The update policy decides whether
the Distributor should request that the firmware is applied on the Target nodes or not. The Distributor
stores the transfer parameters and starts distributing the firmware image to its list of Target nodes.
Firmware distribution The Distributor’s Firmware Update Client model uses its BLOB Transfer Client
model’s broadcast subsystem to communicate with all Target nodes. The firmware distribution is per-
formed with the following steps:
1. The Distributor’s Firmware Update Client model generates a BLOB ID and sends it to each Target
node’s Firmware Update Server model, along with the other BLOB transfer parameters, the Target
node firmware image index and the firmware image metadata. Each Target node performs a meta-
data check and prepares their BLOB Transfer Server model for the transfer, before sending a status
response to the Firmware Update Client, indicating if the firmware update will have any effect on
the Bluetooth mesh state of the node.
2. The Distributor’s BLOB Transfer Client model transfers the firmware image to all Target nodes.
3. Once the BLOB transfer has been received, the Target nodes’ applications verify that the firmware
is valid by performing checks such as signature verification or image checksums against the image
metadata.
4. The Distributor’s Firmware Update Client model queries all Target nodes to ensure that they’ve all
verified the firmware image.
The distribution procedure is considered successful if at least one Target node reports that the image
has been received and verified.
Note: The firmware distribution procedure only fails if all Target nodes are lost. It is up to the Initiator
to request a list of failed Target nodes from the Distributor and initiate additional attempts to update the
lost Target nodes after the current attempt is finished.
Suspending the distribution The Initiator can also request the Distributor to suspend the firmware
distribution. In this case, the Distributor will stop sending any messages to Target nodes. When the
firmware distribution is resumed, the Distributor will continue sending the firmware from the last suc-
cessfully transferred block.
Applying the firmware image If the Initiator requested it, the Distributor can initiate the Apply
Firmware on Target Node procedure on all Target nodes that successfully received and verified the
firmware image. The Apply Firmware on Target Node procedure takes no parameters, and to avoid
ambiguity, it should be performed before a new transfer is initiated. The Apply Firmware on Target Node
procedure consists of the following steps:
1. The Distributor’s Firmware Update Client model instructs all Target nodes that have verified the
firmware image to apply it. The Target nodes’ Firmware Update Server models respond with a
status message before calling their application’s apply callback.
2. The Target node’s application performs any preparations needed before applying the transfer, such
as storing a snapshot of the Composition Data or clearing its configuration.
3. The Target node’s application swaps the current firmware with the new image and updates its
firmware image list with the new firmware ID.
4. The Distributor’s Firmware Update Client model requests the full list of firmware images from each
Target node, and scans through the list to make sure that the new firmware ID has replaced the
old.
Note: During the metadata check in the distribution procedure, the Target node may have reported that
it will become unprovisioned after the firmware image is applied. In this case, the Distributor’s Firmware
Update Client model will send a request for the full firmware image list, and expect no response.
Cancelling the distribution The firmware distribution can be cancelled at any time by the Initiator. In
this case, the Distributor starts the cancelling procedure by sending a cancelling message to all Target
nodes. The Distributor waits for the response from all Target nodes. Once all Target nodes have replied,
or the request has timed out, the distribution procedure is cancelled. After this the distribution procedure
can be started again from the Firmware distribution section.
API reference This section lists the types common to the Device Firmware Update mesh models.
group bt_mesh_dfu
Defines
CONFIG_BT_MESH_DFU_FWID_MAXLEN
CONFIG_BT_MESH_DFU_METADATA_MAXLEN
CONFIG_BT_MESH_DFU_URI_MAXLEN
Enums
enum bt_mesh_dfu_phase
DFU transfer phase.
Values:
enumerator BT_MESH_DFU_PHASE_IDLE
Ready to start a Receive Firmware procedure.
enumerator BT_MESH_DFU_PHASE_TRANSFER_ERR
The Transfer BLOB procedure failed.
enumerator BT_MESH_DFU_PHASE_TRANSFER_ACTIVE
The Receive Firmware procedure is being executed.
enumerator BT_MESH_DFU_PHASE_VERIFY
The Verify Firmware procedure is being executed.
enumerator BT_MESH_DFU_PHASE_VERIFY_OK
The Verify Firmware procedure completed successfully.
enumerator BT_MESH_DFU_PHASE_VERIFY_FAIL
The Verify Firmware procedure failed.
enumerator BT_MESH_DFU_PHASE_APPLYING
The Apply New Firmware procedure is being executed.
enumerator BT_MESH_DFU_PHASE_TRANSFER_CANCELED
Firmware transfer has been canceled.
enumerator BT_MESH_DFU_PHASE_APPLY_SUCCESS
Firmware applying succeeded.
enumerator BT_MESH_DFU_PHASE_APPLY_FAIL
Firmware applying failed.
enumerator BT_MESH_DFU_PHASE_UNKNOWN
The current phase is unknown.
enum bt_mesh_dfu_status
DFU status.
Values:
enumerator BT_MESH_DFU_SUCCESS
The message was processed successfully.
enumerator BT_MESH_DFU_ERR_RESOURCES
Insufficient resources on the node.
enumerator BT_MESH_DFU_ERR_WRONG_PHASE
The operation cannot be performed while the Server is in the current phase.
enumerator BT_MESH_DFU_ERR_INTERNAL
An internal error occurred on the node.
enumerator BT_MESH_DFU_ERR_FW_IDX
The message contains a firmware index value that is not expected.
enumerator BT_MESH_DFU_ERR_METADATA
The metadata check failed.
enumerator BT_MESH_DFU_ERR_TEMPORARILY_UNAVAILABLE
The Server cannot start a firmware update.
enumerator BT_MESH_DFU_ERR_BLOB_XFER_BUSY
Another BLOB transfer is in progress.
enum bt_mesh_dfu_effect
Expected effect of a DFU transfer.
Values:
enumerator BT_MESH_DFU_EFFECT_NONE
No changes to node Composition Data.
enumerator BT_MESH_DFU_EFFECT_COMP_CHANGE_NO_RPR
Node Composition Data changed and the node does not support remote provisioning.
enumerator BT_MESH_DFU_EFFECT_COMP_CHANGE
Node Composition Data changed, and remote provisioning is supported. The node supports
remote provisioning and Composition Data Page 0x80, which contains different Composition
Data than Page 0x0.
enumerator BT_MESH_DFU_EFFECT_UNPROV
Node will be unprovisioned after the update.
enum bt_mesh_dfu_iter
Action for DFU iteration callbacks.
Values:
enumerator BT_MESH_DFU_ITER_STOP
Stop iterating.
enumerator BT_MESH_DFU_ITER_CONTINUE
Continue iterating.
struct bt_mesh_dfu_img
#include <dfu.h> DFU image instance.
Each DFU image represents a single updatable firmware image.
Public Members
size_t fwid_len
Length of the firmware ID.
struct bt_mesh_dfu_slot
#include <dfu.h> DFU image slot for DFU distribution.
Public Members
size_t size
Size of the firmware in bytes.
size_t fwid_len
Length of the firmware ID.
size_t metadata_len
Length of the metadata.
size_t uri_len
Length of the image URI.
uint8_t fwid[0]
Firmware ID.
uint8_t metadata[0]
Metadata.
char uri[0]
Image URI.
group bt_mesh_dfu_metadata
Common types and functions for the Bluetooth mesh DFU metadata.
Enums
enum bt_mesh_dfu_metadata_fw_core_type
Firmware core type.
Values:
Functions
struct bt_mesh_dfu_metadata_fw_ver
#include <dfu_metadata.h> Firmware version.
Public Members
uint8_t major
Firmware major version.
uint8_t minor
Firmware minor version.
uint16_t revision
Firmware revision.
uint32_t build_num
Firmware build number.
struct bt_mesh_dfu_metadata
#include <dfu_metadata.h> Firmware metadata.
Public Members
uint32_t fw_size
New firmware size.
uint32_t comp_hash
Hash of incoming Composition Data.
uint16_t elems
New number of node elements.
uint8_t *user_data
Application-specific data for new firmware. This field is optional.
uint32_t user_data_len
Length of the application-specific field.
Message The Bluetooth mesh message API provides a set of structures, macros and functions used for
preparing message buffers and managing message and acknowledged message contexts.
API reference
group bt_mesh_msg
Message.
Defines
BT_MESH_MIC_SHORT
Length of a short Mesh MIC.
BT_MESH_MIC_LONG
Length of a long Mesh MIC.
BT_MESH_MODEL_OP_LEN(_op)
Helper to determine the length of an opcode.
Parameters
• _op – Opcode.
BT_MESH_MODEL_BUF_LEN(_op, _payload_len)
Helper for model message buffer length.
Returns the length of a Mesh model message buffer, including the opcode length and a short
MIC.
Parameters
• _op – Opcode of the message.
• _payload_len – Length of the model payload.
BT_MESH_MODEL_BUF_LEN_LONG_MIC(_op, _payload_len)
Helper for model message buffer length.
Returns the length of a Mesh model message buffer, including the opcode length and a long
MIC.
Parameters
• _op – Opcode of the message.
• _payload_len – Length of the model payload.
BT_MESH_MODEL_BUF_DEFINE(_buf, _op, _payload_len)
Define a Mesh model message buffer using NET_BUF_SIMPLE_DEFINE.
Parameters
• _buf – Buffer name.
• _op – Opcode of the message.
• _payload_len – Length of the model message payload.
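The arithmetic behind these helpers can be sketched in plain C. The sketch assumes a 4-byte short MIC, an 8-byte long MIC, and the standard 1/2/3-octet opcode ranges; the authoritative definitions are the macros in msg.h:

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed Mesh MIC lengths (BT_MESH_MIC_SHORT / BT_MESH_MIC_LONG). */
#define MIC_SHORT 4
#define MIC_LONG  8

/* Opcodes occupy 1, 2 or 3 octets depending on their numeric range. */
static size_t model_op_len(uint32_t op)
{
    return op <= 0xff ? 1 : (op <= 0xffff ? 2 : 3);
}

/* Buffer length with a short MIC (cf. BT_MESH_MODEL_BUF_LEN). */
static size_t model_buf_len(uint32_t op, size_t payload_len)
{
    return model_op_len(op) + payload_len + MIC_SHORT;
}

/* Buffer length with a long MIC (cf. BT_MESH_MODEL_BUF_LEN_LONG_MIC). */
static size_t model_buf_len_long_mic(uint32_t op, size_t payload_len)
{
    return model_op_len(op) + payload_len + MIC_LONG;
}
```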
Functions
void bt_mesh_msg_ack_ctx_reset(struct bt_mesh_msg_ack_ctx *ack)
Reset the acknowledged message context.
Parameters
• ack – Acknowledged message context to be reset.
void bt_mesh_msg_ack_ctx_clear(struct bt_mesh_msg_ack_ctx *ack)
Clear parameters of an acknowledged message context.
This function clears the opcode, remote address and user data set by
bt_mesh_msg_ack_ctx_prepare.
Parameters
• ack – Acknowledged message context to be cleared.
int bt_mesh_msg_ack_ctx_prepare(struct bt_mesh_msg_ack_ctx *ack, uint32_t op, uint16_t dst,
void *user_data)
Prepare an acknowledged message context for the incoming message to wait.
This function sets the opcode, remote address of the incoming message and stores the user
data. Use this function before calling bt_mesh_msg_ack_ctx_wait.
Parameters
• ack – Acknowledged message context to prepare.
• op – The message OpCode.
• dst – Destination address of the message.
• user_data – User data for the acknowledged message context.
Returns
0 on success, or (negative) error code on failure.
static inline bool bt_mesh_msg_ack_ctx_busy(struct bt_mesh_msg_ack_ctx *ack)
Check if the acknowledged message context is initialized with an opcode.
Parameters
• ack – Acknowledged message context.
Returns
true if the acknowledged message context is initialized with an opcode, false
otherwise.
int bt_mesh_msg_ack_ctx_wait(struct bt_mesh_msg_ack_ctx *ack, k_timeout_t timeout)
Wait for a message acknowledge.
This function blocks execution until bt_mesh_msg_ack_ctx_rx is called or the timeout expires.
Parameters
• ack – Acknowledged message context of the message to wait for.
• timeout – Wait timeout.
Returns
0 on success, or (negative) error code on failure.
static inline void bt_mesh_msg_ack_ctx_rx(struct bt_mesh_msg_ack_ctx *ack)
Mark a message as acknowledged.
This function unblocks a pending call to bt_mesh_msg_ack_ctx_wait.
Parameters
• ack – Context of a message to be acknowledged.
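The prepare/busy/clear lifecycle above can be modeled with a simplified, single-threaded version of the context (the real bt_mesh_msg_ack_ctx also carries a semaphore that bt_mesh_msg_ack_ctx_wait() blocks on until bt_mesh_msg_ack_ctx_rx() releases it, and it reports busy with a real error code rather than the -1 used here):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified model: op != 0 means an exchange is pending. */
struct ack_ctx {
    uint32_t op;      /* Opcode we're waiting for. */
    uint16_t dst;     /* Node expected to respond. */
    void *user_data;
};

/* Record what response we are waiting for; fails if already busy. */
static int ack_ctx_prepare(struct ack_ctx *ack, uint32_t op, uint16_t dst,
                           void *user_data)
{
    if (ack->op != 0) {
        return -1; /* A previous exchange is still pending. */
    }
    ack->op = op;
    ack->dst = dst;
    ack->user_data = user_data;
    return 0;
}

/* True while an exchange is pending (cf. bt_mesh_msg_ack_ctx_busy). */
static int ack_ctx_busy(const struct ack_ctx *ack)
{
    return ack->op != 0;
}

/* Forget the pending exchange (cf. bt_mesh_msg_ack_ctx_clear). */
static void ack_ctx_clear(struct ack_ctx *ack)
{
    ack->op = 0;
    ack->dst = 0;
    ack->user_data = NULL;
}
```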
struct bt_mesh_msg_ctx
#include <msg.h> Message sending context.
Public Members
uint16_t net_idx
NetKey Index of the subnet to send the message on.
uint16_t app_idx
AppKey Index to encrypt the message with.
uint16_t addr
Remote address.
uint16_t recv_dst
Destination address of a received message. Not used for sending.
int8_t recv_rssi
RSSI of received packet. Not used for sending.
uint8_t recv_ttl
Received TTL value. Not used for sending.
bool send_rel
Force sending reliably by using segment acknowledgment.
uint8_t send_ttl
TTL, or BT_MESH_TTL_DEFAULT for default TTL.
struct bt_mesh_msg_ack_ctx
#include <msg.h> Acknowledged message context for tracking the status of model messages
pending a response.
Public Members
uint32_t op
Opcode we’re waiting for.
uint16_t dst
Address of the node that should respond.
void *user_data
User specific parameter.
Provisioning Provisioning is the process of adding devices to a mesh network. It requires two devices
operating in the following roles:
• The provisioner represents the network owner, and is responsible for adding new nodes to the mesh
network.
• The provisionee is the device that gets added to the network through the Provisioning process.
Before the provisioning process starts, the provisionee is an unprovisioned device.
The Provisioning module in the Zephyr Bluetooth mesh stack supports both the Advertising and GATT
Provisioning bearers for the provisionee role, as well as the Advertising Provisioning bearer for the
provisioner role.
The Provisioning process All Bluetooth mesh nodes must be provisioned before they can participate
in a Bluetooth mesh network. The Provisioning API provides all the functionality necessary for a device
to become a provisioned mesh node. Provisioning is a five-step process, involving the following steps:
• Beaconing
• Invitation
• Public key exchange
• Authentication
• Provisioning data transfer
Beaconing To start the provisioning process, the unprovisioned device must first start broadcasting
the Unprovisioned Beacon. This makes it visible to nearby provisioners, which can initiate the provi-
sioning. To indicate that the device needs to be provisioned, call bt_mesh_prov_enable() . The device
starts broadcasting the Unprovisioned Beacon with the device UUID and the OOB information field, as
specified in the prov parameter passed to bt_mesh_init() . Additionally, a Uniform Resource Identifier
(URI) may be specified, which can point the provisioner to the location of some Out Of Band information,
such as the device’s public key or an authentication value database. The URI is advertised in a separate
beacon, with a URI hash included in the unprovisioned beacon, to tie the two together.
Uniform Resource Identifier The Uniform Resource Identifier shall follow the format specified in the
Bluetooth Core Specification Supplement. The URI must start with a URI scheme, encoded as a single
UTF-8 code point, or the special none scheme, encoded as 0x01. The available schemes are listed on the
Bluetooth website.
Examples of encoded URIs:
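For instance, a known scheme prefix is replaced by its single code point. A small sketch of this encoding, with the scheme table truncated to two entries (0x16 and 0x17 are the code points the Core Specification Supplement assigns to http: and https:; 0x01 denotes the empty scheme):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Encode a URI by replacing a recognized scheme prefix with its
 * assigned code point. Returns the encoded length, or 0 if the
 * output buffer is too small. Truncated scheme table for brevity. */
static size_t uri_encode(const char *uri, uint8_t *out, size_t out_len)
{
    static const struct {
        const char *scheme;
        uint8_t code;
    } map[] = {
        { "http:", 0x16 },
        { "https:", 0x17 },
    };
    uint8_t code = 0x01; /* Empty/none scheme. */
    const char *rest = uri;

    for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
        size_t n = strlen(map[i].scheme);

        if (strncmp(uri, map[i].scheme, n) == 0) {
            code = map[i].code;
            rest = uri + n;
            break;
        }
    }

    size_t rest_len = strlen(rest);

    if (out_len < rest_len + 1) {
        return 0;
    }
    out[0] = code;
    memcpy(&out[1], rest, rest_len);
    return rest_len + 1;
}
```

So "https://example.com" becomes the byte 0x17 followed by the text "//example.com".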
Provisioning invitation The provisioner initiates the Provisioning process by sending a Provisioning
invitation. The invitation prompts the provisionee to call attention to itself using the Health Server
Attention state, if available.
The unprovisioned device automatically responds to the invitation by presenting a list of its capabilities,
including the supported Out of Band Authentication methods and algorithms.
Public key exchange Before the provisioning process can begin, the provisioner and the unprovisioned
device exchange public keys, either in-band or Out of Band (OOB).
In-band public key exchange is part of the provisioning process, and is always supported by the
unprovisioned device and the provisioner.
If the application wants to support public key exchange via OOB, it needs to provide public and private
keys to the mesh stack. The unprovisioned device will reflect this in its capabilities. The provisioner ob-
tains the public key via any available OOB mechanism (e.g. the device may advertise a packet containing
the public key or it can be encoded in a QR code printed on the device packaging). Note that even if
the unprovisioned device has specified the public key for the Out of Band exchange, the provisioner may
choose to exchange the public key in-band if it can’t retrieve the public key via an OOB mechanism. In this
case, a new key pair will be generated by the mesh stack for each Provisioning process.
To enable support of OOB public key on the unprovisioned device side,
CONFIG_BT_MESH_PROV_OOB_PUBLIC_KEY needs to be enabled. The application must provide public
and private keys before the Provisioning process is started by initializing pointers to bt_mesh_prov.
public_key_be and bt_mesh_prov.private_key_be . The keys need to be provided in big-endian
byte order.
To provide the device’s public key obtained via OOB, call bt_mesh_prov_remote_pub_key_set() on the
provisioner side.
Authentication After the initial exchange, the provisioner selects an Out of Band (OOB) Authentication
method. This allows the user to confirm that the device the provisioner connected to is actually the device
they intended, and not a malicious third party.
The Provisioning API supports the following authentication methods for the provisionee:
• Static OOB: An authentication value is assigned to the device in production, which the provisioner
can query in some application specific way.
• Input OOB: The user inputs the authentication value. The available input actions are listed in
bt_mesh_input_action_t .
• Output OOB: Show the user the authentication value. The available output actions are listed in
bt_mesh_output_action_t .
The application must provide callbacks for the supported authentication methods in bt_mesh_prov ,
as well as enabling the supported actions in bt_mesh_prov.output_actions and bt_mesh_prov.
input_actions .
When an Output OOB action is selected, the authentication value should be presented to the user when
the output callback is called, and remain until the bt_mesh_prov.input_complete or bt_mesh_prov.
complete callback is called. If the action is blink, beep or vibrate, the sequence should be repeated
after a delay of three seconds or more.
When an Input OOB action is selected, the user should be prompted when the application receives the
bt_mesh_prov.input callback. The user response should be fed back to the Provisioning API through
bt_mesh_input_string() or bt_mesh_input_number() . If no user response is recorded within 60
seconds, the Provisioning process is aborted.
If the provisionee wants to mandate OOB authentication, the
BT_MESH_ECDH_P256_HMAC_SHA256_AES_CCM algorithm must be used.
Data transfer After the device has been successfully authenticated, the provisioner transfers the Provi-
sioning data:
• Unicast address
• A network key
• IV index
• Network flags
– Key refresh
– IV update
Additionally, a device key is generated for the node. All this data is stored by the mesh stack, and the
provisioning bt_mesh_prov.complete callback gets called.
Provisioning security Depending on the choice of public key exchange mechanism and authentication
method, the provisioning process can be secure or insecure.
On May 24th 2021, ANSSI disclosed a set of vulnerabilities in the Bluetooth mesh provisioning protocol
that showed how the low entropy provided by the Blink, Vibrate, Push, Twist and Input/Output numeric
OOB methods could be exploited in impersonation and MITM attacks. In response, the Bluetooth
SIG reclassified these OOB methods as insecure in the Mesh Profile specification erratum 16350, as
the AuthValue may be brute-forced in real time. To ensure secure provisioning, applications should use a
static OOB value and OOB public key transfer.
API reference
group bt_mesh_prov
Provisioning.
Enums
enum [anonymous]
Available authentication algorithms.
Values:
enumerator BT_MESH_PROV_AUTH_CMAC_AES128_AES_CCM
enumerator BT_MESH_PROV_AUTH_HMAC_SHA256_AES_CCM
enum [anonymous]
OOB Type field values.
Values:
enum bt_mesh_output_action_t
Available Provisioning output authentication actions.
Values:
enumerator BT_MESH_NO_OUTPUT = 0
enum bt_mesh_input_action_t
Available Provisioning input authentication actions.
Values:
enumerator BT_MESH_NO_INPUT = 0
enum bt_mesh_prov_bearer_t
Available Provisioning bearers.
Values:
enum bt_mesh_prov_oob_info_t
Out of Band information location.
Values:
Functions
Warning: Not using any authentication exposes the mesh network to impersonation
attacks, where attackers can pretend to be the unprovisioned device to gain access to the
network. Authentication is strongly encouraged.
Returns
Zero on success or (negative) error code otherwise.
Note: Changing the unicast addresses of the target node requires changes to all nodes that
publish directly to any of the target node’s models.
Parameters
• cli – Remote Provisioning Client Model to provision on
• srv – Remote Provisioning Server to reprovision
• addr – Address to assign to remote device. If addr is 0, the lowest available
address will be chosen.
• comp_change – The target node has indicated that its composition data has
changed. Note that the target node will reject the update if this isn’t true.
Returns
Zero on success or (negative) error code otherwise.
bool bt_mesh_is_provisioned(void)
Check if the local node has been provisioned.
This API can be used to check if the local node has been provisioned or not. It can e.g. be
helpful to determine if there was a stored network in flash, i.e. if the network was restored
after calling settings_load().
Returns
True if the node is provisioned. False otherwise.
struct bt_mesh_dev_capabilities
#include <main.h> Device Capabilities.
Public Members
uint8_t elem_count
Number of elements supported by the device
uint16_t algorithms
Supported algorithms and other capabilities
uint8_t pub_key_type
Supported public key types
uint8_t oob_type
Supported OOB Types
bt_mesh_output_action_t output_actions
Supported Output OOB Actions
bt_mesh_input_action_t input_actions
Supported Input OOB Actions
uint8_t output_size
Maximum size of Output OOB supported
uint8_t input_size
Maximum size in octets of Input OOB supported
struct bt_mesh_prov
#include <main.h> Provisioning properties & capabilities.
Public Members
bt_mesh_prov_oob_info_t oob_info
Out of Band information field.
uint8_t static_val_len
Static OOB value length
uint8_t output_size
Maximum size of Output OOB supported
uint16_t output_actions
Supported Output OOB Actions
uint8_t input_size
Maximum size of Input OOB supported
uint16_t input_actions
Supported Input OOB Actions
Input is requested.
This callback notifies the application that it should request input from the user using the
given action. The requested input will either be a string or a number, and the application
needs to call the bt_mesh_input_string() or bt_mesh_input_number()
functions once the data has been acquired from the user.
Param act
Action for inputting data.
Param num
Maximum size of the inputted data.
Return
Zero on success or negative error code otherwise
void (*input_complete)(void)
The other device finished their OOB input.
This callback notifies the application that it should stop displaying its output OOB value,
as the other party finished their OOB input.
void (*reset)(void)
Node has been reset.
This callback notifies the application that the local node has been reset and needs to be
provisioned again. The node will not automatically advertise as unprovisioned, rather the
bt_mesh_prov_enable() API needs to be called to enable unprovisioned advertising on one
or more provisioning bearers.
Proxy The Proxy feature allows legacy devices like phones to access the Bluetooth mesh network
through GATT. The Proxy feature is only compiled in if the CONFIG_BT_MESH_GATT_PROXY option is set.
The Proxy feature state is controlled by the Configuration Server, and the initial value can be set with
bt_mesh_cfg_srv.gatt_proxy.
Nodes with the Proxy feature enabled can advertise with Network Identity and Node Identity, which is
controlled by the Configuration Client.
The GATT Proxy state indicates if the Proxy feature is supported.
Private Proxy A node supporting the Proxy feature and the Private Beacon Server model can advertise
with Private Network Identity and Private Node Identity types, which is controlled by the Private Beacon
Client. By advertising with this set of identification types, the node allows the legacy device to connect
to the network over GATT while maintaining the privacy of the network.
The Private GATT Proxy state indicates whether the Private Proxy functionality is supported.
Proxy Solicitation In the case where both GATT Proxy and Private GATT Proxy states are disabled
on a node, a legacy device cannot connect to it. A node supporting the On-Demand Private Proxy
Server may however be solicited to advertise connectable advertising events without enabling the Pri-
vate GATT Proxy state. To solicit the node, the legacy device can send a Solicitation PDU by calling the
bt_mesh_proxy_solicit() function. To enable this feature, the client must be compiled with the
CONFIG_BT_MESH_PROXY_SOLICITATION option set.
Solicitation PDUs are non-mesh, non-connectable, undirected advertising messages containing Proxy
Solicitation UUID, encrypted with the network key of the subnet that the legacy device wants to connect
to. The PDU contains the source address of the legacy device and a sequence number. The sequence
number is maintained by the legacy device and is incremented for every new Solicitation PDU sent.
Each node supporting the Solicitation PDU reception holds its own Solicitation Replay Protection List
(SRPL). The SRPL protects the solicitation mechanism from replay attacks by storing solicitation sequence number (SSEQ) and solicitation source (SSRC) pairs of valid Solicitation PDUs processed by the
node. The delay between updating the SRPL and storing the change to the persistent storage is defined
by CONFIG_BT_MESH_RPL_STORE_TIMEOUT.
The Solicitation PDU RPL Configuration models, Solicitation PDU RPL Configuration Client and Solicita-
tion PDU RPL Configuration Server, provide the functionality of saving and clearing SRPL entries. A node
that supports the Solicitation PDU RPL Configuration Client model can clear a section of the SRPL on
the target by calling the bt_mesh_sol_pdu_rpl_clear() function. Communication between the Solic-
itation PDU RPL Configuration Client and Server is encrypted using the application key, therefore, the
Solicitation PDU RPL Configuration Client can be instantiated on any device in the network.
When the node receives the Solicitation PDU and successfully authenticates it, it will start advertising
connectable advertisements with the Private Network Identity type. The duration of the advertisement
can be configured by the On-Demand Private Proxy Client model.
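On the legacy-device (Proxy Client) side, sending a Solicitation PDU reduces to a single call. The subnet index below is an illustrative choice (0, the primary subnet); bt_mesh_proxy_solicit() is the API function named above.

```c
#include <zephyr/bluetooth/mesh.h>
#include <zephyr/sys/printk.h>

/* Solicit nodes on the primary subnet (network index 0) to start
 * advertising connectable Private Network Identity advertisements.
 * Requires CONFIG_BT_MESH_PROXY_SOLICITATION=y on this device.
 */
static void solicit_primary_subnet(void)
{
	int err = bt_mesh_proxy_solicit(0);

	if (err) {
		printk("Solicitation failed (err %d)\n", err);
	}
}
```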
API reference
group bt_mesh_proxy
Proxy.
Defines
BT_MESH_PROXY_CB_DEFINE(_name)
Register a callback structure for Proxy events.
Registers a structure with callback functions that gets called on various Proxy events.
Parameters
• _name – Name of callback structure.
Functions
int bt_mesh_proxy_identity_enable(void)
Enable advertising with Node Identity.
This API requires that GATT Proxy support has been enabled. Once called each subnet will
start advertising using Node Identity for the next 60 seconds.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_proxy_private_identity_enable(void)
Enable advertising with Private Node Identity.
This API requires that GATT Proxy support has been enabled. Once called each subnet will
start advertising using Private Node Identity for the next 60 seconds.
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_proxy_connect(uint16_t net_idx)
Allow Proxy Client to auto connect to a network.
This API allows a proxy client to auto-connect to a given network.
Parameters
• net_idx – Network Key Index
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_proxy_disconnect(uint16_t net_idx)
Disallow Proxy Client to auto connect to a network.
This API disallows a proxy client from auto-connecting to a given network.
Parameters
• net_idx – Network Key Index
Returns
0 on success, or (negative) error code on failure.
int bt_mesh_proxy_solicit(uint16_t net_idx)
Schedule advertising of Solicitation PDUs on the Proxy Client.
Once called, the Proxy Client will schedule advertising Solicitation PDUs for the amount of
time defined by adv_int * (CONFIG_BT_MESH_SOL_ADV_XMIT + 1), where adv_int is 20 ms for
Bluetooth v5.0 or higher, or 100 ms otherwise.
If the number of advertised Solicitation PDUs reaches 0xFFFFFF, the advertisements will no
longer be started until the node is reprovisioned.
Parameters
• net_idx – Network Key Index
Returns
0 on success, or (negative) error code on failure.
struct bt_mesh_proxy_cb
#include <proxy.h> Callbacks for the Proxy feature.
Should be instantiated with BT_MESH_PROXY_CB_DEFINE.
Public Members
Heartbeat The Heartbeat feature provides functionality for monitoring Bluetooth mesh nodes and determining the distance between nodes.
The Heartbeat feature is configured through the Configuration Server model.
Heartbeat messages Heartbeat messages are sent as transport control packets through the network,
and are only encrypted with a network key. Heartbeat messages contain the original Time To Live (TTL)
value used to send the message and a bitfield of the active features on the node. Through this, a receiving
node can determine how many relays the message had to go through to arrive at the receiver, and what
features the node supports.
Available Heartbeat feature flags:
• BT_MESH_FEAT_RELAY
• BT_MESH_FEAT_PROXY
• BT_MESH_FEAT_FRIEND
• BT_MESH_FEAT_LOW_POWER
Heartbeat publication Heartbeat publication is controlled through the Configuration models, and can
be triggered in two ways:
Periodic publication
The node publishes a new Heartbeat message at regular intervals. The publication can be configured to stop after a certain number of messages, or continue indefinitely.
Triggered publication
The node publishes a new Heartbeat message every time a feature changes. The set of features
that can trigger the publication is configurable.
The two publication types can be combined.
Heartbeat subscription A node can be configured to subscribe to Heartbeat messages from one node
at a time. To receive a Heartbeat message, both the source and destination must match the configured
subscription parameters.
Heartbeat subscription is always time limited, and throughout the subscription period, the node keeps
track of the number of received Heartbeats as well as the minimum and maximum received hop count.
All Heartbeats received with the configured subscription parameters are passed to the
bt_mesh_hb_cb::recv event handler.
When the Heartbeat subscription period ends, the bt_mesh_hb_cb::sub_end callback gets called.
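Both event handlers can be registered with the BT_MESH_HB_CB_DEFINE macro described in the API reference below. A minimal sketch (handler names are illustrative; the bt_mesh_hb_sub fields are documented below):

```c
#include <zephyr/bluetooth/mesh.h>
#include <zephyr/sys/printk.h>

/* Called for every Heartbeat matching the subscription parameters. */
static void hb_recv(const struct bt_mesh_hb_sub *sub, uint8_t hops,
		    uint16_t feat)
{
	printk("Heartbeat from 0x%04x: %u hops, features 0x%04x\n",
	       sub->src, hops, feat);
}

/* Called when the subscription period ends. */
static void hb_sub_end(const struct bt_mesh_hb_sub *sub)
{
	printk("Subscription ended: %u heartbeats, hops %u-%u\n",
	       sub->count, sub->min_hops, sub->max_hops);
}

BT_MESH_HB_CB_DEFINE(hb_cb) = {
	.recv = hb_recv,
	.sub_end = hb_sub_end,
};
```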
API reference
group bt_mesh_heartbeat
Heartbeat.
Defines
BT_MESH_HB_CB_DEFINE(_name)
Register a callback structure for Heartbeat events.
Registers a callback structure that will be called whenever Heartbeat events occur
Parameters
• _name – Name of callback structure.
Functions
struct bt_mesh_hb_pub
#include <heartbeat.h> Heartbeat Publication parameters
Public Members
uint16_t dst
Destination address.
uint16_t count
Remaining publish count.
uint8_t ttl
Time To Live value.
uint16_t feat
Bitmap of features that trigger a Heartbeat publication if they change. Legal values are BT_MESH_FEAT_RELAY, BT_MESH_FEAT_PROXY, BT_MESH_FEAT_FRIEND and
BT_MESH_FEAT_LOW_POWER.
uint16_t net_idx
Network index used for publishing.
uint32_t period
Publication period in seconds.
struct bt_mesh_hb_sub
#include <heartbeat.h> Heartbeat Subscription parameters.
Public Members
uint32_t period
Subscription period in seconds.
uint32_t remaining
Remaining subscription time in seconds.
uint16_t src
Source address to receive Heartbeats from.
uint16_t dst
Destination address to receive Heartbeats on.
uint16_t count
The number of received Heartbeat messages so far.
uint8_t min_hops
Minimum hops in received messages, i.e. the shortest registered path from the publishing
node to the subscribing node. A Heartbeat received from an immediate neighbor has hop
count = 1.
uint8_t max_hops
Maximum hops in received messages, i.e. the longest registered path from the publishing
node to the subscribing node. A Heartbeat received from an immediate neighbor has hop
count = 1.
struct bt_mesh_hb_cb
#include <heartbeat.h> Heartbeat callback structure
Public Members
Runtime Configuration The runtime configuration API allows applications to change their runtime
configuration directly, without going through the Configuration models.
Bluetooth mesh nodes should generally be configured by a central network configurator device with a
Configuration Client model. Each mesh node instantiates a Configuration Server model that the Configuration Client can communicate with to change the node configuration. In some cases, the mesh node
can’t rely on the Configuration Client to detect or determine local constraints, such as low battery power
or changes in topology. For these scenarios, this API can be used to change the configuration locally.
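As a sketch of such a local reconfiguration (the low-battery scenario and function names here are illustrative; bt_mesh_relay_set(), bt_mesh_friend_set() and BT_MESH_TRANSMIT are from the API below):

```c
#include <zephyr/bluetooth/mesh.h>

/* On low battery, disable the Relay and Friend features locally
 * instead of waiting for a Configuration Client to do it.
 * Requires CONFIG_BT_MESH_RELAY and CONFIG_BT_MESH_FRIEND.
 */
static void enter_low_power_config(void)
{
	/* Disable relaying; keep 2 retransmissions at a 20 ms
	 * interval encoded with BT_MESH_TRANSMIT.
	 */
	(void)bt_mesh_relay_set(BT_MESH_FEATURE_DISABLED,
				BT_MESH_TRANSMIT(2, 20));

	/* Terminates any active friendships immediately. */
	(void)bt_mesh_friend_set(BT_MESH_FEATURE_DISABLED);
}
```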
API reference
group bt_mesh_cfg
Runtime Configuration.
Defines
BT_MESH_KR_NORMAL
BT_MESH_KR_PHASE_1
BT_MESH_KR_PHASE_2
BT_MESH_KR_PHASE_3
BT_MESH_RELAY_DISABLED
BT_MESH_RELAY_ENABLED
BT_MESH_RELAY_NOT_SUPPORTED
BT_MESH_BEACON_DISABLED
BT_MESH_BEACON_ENABLED
BT_MESH_PRIV_BEACON_DISABLED
BT_MESH_PRIV_BEACON_ENABLED
BT_MESH_GATT_PROXY_DISABLED
BT_MESH_GATT_PROXY_ENABLED
BT_MESH_GATT_PROXY_NOT_SUPPORTED
BT_MESH_PRIV_GATT_PROXY_DISABLED
BT_MESH_PRIV_GATT_PROXY_ENABLED
BT_MESH_PRIV_GATT_PROXY_NOT_SUPPORTED
BT_MESH_FRIEND_DISABLED
BT_MESH_FRIEND_ENABLED
BT_MESH_FRIEND_NOT_SUPPORTED
BT_MESH_NODE_IDENTITY_STOPPED
BT_MESH_NODE_IDENTITY_RUNNING
BT_MESH_NODE_IDENTITY_NOT_SUPPORTED
Enums
enum bt_mesh_feat_state
Bluetooth mesh feature states
Values:
enumerator BT_MESH_FEATURE_DISABLED
Feature is supported, but disabled.
enumerator BT_MESH_FEATURE_ENABLED
Feature is supported and enabled.
enumerator BT_MESH_FEATURE_NOT_SUPPORTED
Feature is not supported, and cannot be enabled.
Functions
Return values
• 0 – Successfully changed the Mesh Private beacon feature state.
• -ENOTSUP – The Mesh Private beacon feature is not supported.
• -EINVAL – Invalid parameter.
• -EALREADY – Already in the given state.
enum bt_mesh_feat_state bt_mesh_priv_beacon_get(void)
Get the current Mesh Private beacon state.
Returns
The Mesh Private beacon feature state.
void bt_mesh_priv_beacon_update_interval_set(uint8_t interval)
Set the current Mesh Private beacon update interval.
The Mesh Private beacon’s randomization value is updated regularly to maintain the node’s
privacy. The update interval controls how often the beacon is updated, in 10-second increments.
Parameters
• interval – Private beacon update interval in 10 second steps, or 0 to update
on every beacon transmission.
uint8_t bt_mesh_priv_beacon_update_interval_get(void)
Get the current Mesh Private beacon update interval.
The Mesh Private beacon’s randomization value is updated regularly to maintain the node’s
privacy. The update interval controls how often the beacon is updated, in 10-second increments.
Returns
The Private beacon update interval in 10 second steps, or 0 if the beacon is updated every time it’s transmitted.
int bt_mesh_default_ttl_set(uint8_t default_ttl)
Set the default TTL value.
The default TTL value is used when no explicit TTL value is set. Models will use the default
TTL value when bt_mesh_msg_ctx::send_ttl is BT_MESH_TTL_DEFAULT.
Parameters
• default_ttl – The new default TTL value. Valid values are 0x00 and 0x02 to
BT_MESH_TTL_MAX.
Return values
• 0 – Successfully set the default TTL value.
• -EINVAL – Invalid TTL value.
uint8_t bt_mesh_default_ttl_get(void)
Get the current default TTL value.
Returns
The current default TTL value.
int bt_mesh_od_priv_proxy_get(void)
Get the current Mesh On-Demand Private Proxy state.
Return values
• 0 – or positive value represents On-Demand Private Proxy feature state
• -ENOTSUP – The On-Demand Private Proxy feature is not supported.
See also:
BT_MESH_TRANSMIT
Parameters
• xmit – New Network Transmit parameters. Use BT_MESH_TRANSMIT for encoding.
uint8_t bt_mesh_net_transmit_get(void)
Get the current Network Transmit parameters.
The BT_MESH_TRANSMIT_COUNT and BT_MESH_TRANSMIT_INT macros can be used to decode the Network Transmit parameters.
Returns
The current Network Transmit parameters.
int bt_mesh_relay_set(enum bt_mesh_feat_state relay, uint8_t xmit)
Configure the Relay feature.
Enable or disable the Relay feature, and configure the parameters to transmit relayed messages with.
Support for the Relay feature must be enabled through the CONFIG_BT_MESH_RELAY configuration option.
See also:
BT_MESH_TRANSMIT
Parameters
• relay – New Relay feature state. Must be one of
BT_MESH_FEATURE_ENABLED and BT_MESH_FEATURE_DISABLED.
Note: The GATT Proxy feature only controls a Proxy node’s ability to relay messages to
the mesh network. A node that supports GATT Proxy will still advertise Connectable Proxy
beacons, even if the feature is disabled. The Proxy feature can only be fully disabled through
compile time configuration.
Parameters
• gatt_proxy – New GATT Proxy state. Must be one of
BT_MESH_FEATURE_ENABLED and BT_MESH_FEATURE_DISABLED.
Return values
• 0 – Successfully changed the GATT Proxy feature state.
• -ENOTSUP – The GATT Proxy feature is not supported.
• -EINVAL – Invalid parameter.
• -EALREADY – Already in the given state.
Parameters
• priv_gatt_proxy – New Private GATT Proxy state. Must be one of
BT_MESH_FEATURE_ENABLED and BT_MESH_FEATURE_DISABLED.
Return values
• 0 – Successfully changed the Private GATT Proxy feature state.
• -ENOTSUP – The Private GATT Proxy feature is not supported.
• -EINVAL – Invalid parameter.
• -EALREADY – Already in the given state.
enum bt_mesh_feat_state bt_mesh_priv_gatt_proxy_get(void)
Get the current Private GATT Proxy state.
Returns
The Private GATT Proxy feature state.
int bt_mesh_friend_set(enum bt_mesh_feat_state friendship)
Enable or disable the Friend feature.
Any active friendships will be terminated immediately if the Friend feature is disabled.
Support for the Friend feature must be enabled through the CONFIG_BT_MESH_FRIEND configuration option.
Parameters
• friendship – New Friend feature state. Must be one of
BT_MESH_FEATURE_ENABLED and BT_MESH_FEATURE_DISABLED.
Return values
• 0 – Successfully changed the Friend feature state.
• -ENOTSUP – The Friend feature is not supported.
• -EINVAL – Invalid parameter.
• -EALREADY – Already in the given state.
enum bt_mesh_feat_state bt_mesh_friend_get(void)
Get the current Friend state.
Returns
The Friend feature state.
Bluetooth Mesh Shell The Bluetooth mesh shell subsystem provides a set of Bluetooth mesh shell
commands for the Shell module. It allows for testing and exploring the Bluetooth mesh API through an
interactive interface, without having to write an application.
The Bluetooth mesh shell interface provides access to most Bluetooth mesh features, including provisioning, configuration, and message sending.
Prerequisites The Bluetooth mesh shell subsystem depends on the application to create the composition data and do the mesh initialization.
Application The Bluetooth mesh shell subsystem is most easily used through the Bluetooth mesh shell
application under tests/bluetooth/mesh_shell. See Shell for information on how to connect and
interact with the Bluetooth mesh shell application.
Basic usage The Bluetooth mesh shell subsystem adds a single mesh command, which holds a set of
sub-commands. Every time the device boots up, make sure to call mesh init before any of the other
Bluetooth mesh shell commands can be called:
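From the shell prompt, this looks like:

```shell
uart:~$ mesh init
```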
This is done to ensure that all available logs are printed to the shell output.
Provisioning The mesh node must be provisioned to become part of the network. This is only necessary
the first time the device boots up, as the device will remember its provisioning data between reboots.
The simplest way to provision the device is through self-provisioning. To do this, provision the
device with the default network key and address 0x0001 by executing:
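A sketch of the command (the argument order, mesh prov local &lt;NetKeyIdx&gt; &lt;Addr&gt;, is assumed here):

```shell
uart:~$ mesh prov local 0 0x0001
```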
Since all mesh nodes use the same values for the default network key, this can be done on multiple
devices, as long as they’re assigned non-overlapping unicast addresses. Alternatively, to provision the
device into an existing network, the unprovisioned beacon can be enabled with mesh prov pb-adv on or
mesh prov pb-gatt on. The beacons can be picked up by an external provisioner, which can provision
the node into its network.
Once the mesh node is part of a network, its transmission parameters can be controlled by the general
configuration commands:
• To set the destination address, call mesh target dst <Addr>.
• To set the network key index, call mesh target net <NetKeyIdx>.
• To set the application key index, call mesh target app <AppKeyIdx>.
By default, the transmission parameters are set to send messages to the provisioned address and network
key.
Configuration By setting the destination address to the local unicast address (0x0001 in the mesh prov
local command above), we can perform self-configuration through any of the Models commands.
A good first step is to read out the node’s own composition data:
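This can be done with the mesh models cfg get-comp command described under the Configuration Client commands:

```shell
uart:~$ mesh models cfg get-comp
```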
This prints a list of the composition data of the node, including a list of its model IDs.
Next, since the device has no application keys by default, it’s a good idea to add one:
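A sketch using the Configuration Client's appkey subcommand (the exact syntax, mesh models cfg appkey add &lt;NetKeyIdx&gt; &lt;AppKeyIdx&gt;, is assumed here):

```shell
uart:~$ mesh models cfg appkey add 0 0
```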
Message sending With an application key added (see above), the mesh node’s transmission parameters
are all valid, and the Bluetooth mesh shell can send raw mesh messages through the network.
For example, to send a Generic OnOff Set message, call:
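One way to do this, assuming the mesh test net-send command takes the raw hex payload (opcode 0x8202, followed by the little-endian fields OnOff = 0x01 and TID = 0x00):

```shell
uart:~$ mesh test net-send 82020100
```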
Note: All multibyte fields in model messages are in little-endian format, except the opcode.
The message will be sent to the current destination address, using the current network and application
key indexes. As the destination address points to the local unicast address by default, the device will
only send packets to itself. To change the destination address to the All Nodes broadcast address, call:
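Using the mesh target dst command from the general configuration commands above:

```shell
uart:~$ mesh target dst 0xffff
```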
With the destination address set to 0xffff, any other mesh nodes in the network with the configured
network and application keys will receive and process the messages we send.
Note: To change the configuration of the device, the destination address must be set back to the local
unicast address before issuing any configuration commands.
Sending raw mesh packets is a good way to test model message handler implementations during development, as it can be done without having to implement the sending model. By default, only the reception
of the model messages can be tested this way, as the Bluetooth mesh shell only includes the foundation
models. To receive a packet in the mesh node, you have to add a model with a valid opcode handler list
to the composition data in subsys/bluetooth/mesh/shell.c, and print the incoming message to the
shell in the handler callback.
Parameter formats The Bluetooth mesh shell commands are parsed with a variety of formats:
Commands The Bluetooth mesh shell implements a large set of commands. Some of the commands
accept parameters, which are mentioned in brackets after the command name. For example, mesh lpn
set <value: off, on>. Mandatory parameters are marked with angle brackets (e.g. <NetKeyIdx>),
and optional parameters are marked with square brackets (e.g. [DstAddr]).
The Bluetooth mesh shell commands are divided into the following groups:
• General configuration
• Target
• Low Power Node
• Testing
• Provisioning
• Proxy
• Models
• Configuration database
Note: Some commands depend on specific features being enabled in the compile time configuration
of the application. Not all features are enabled by default. The list of available Bluetooth mesh shell
commands can be shown in the shell by calling mesh without any arguments.
General configuration
mesh init
Initialize the mesh shell. This command must be run before any other mesh command.
mesh reset-local
Reset the local mesh node to its initial unprovisioned state. This command will also clear the
Configuration Database (CDB) if present.
Target The target commands enable the user to monitor and set the target destination address, network index and application index for the shell. These parameters are used by several commands, like
provisioning, Configuration Client, etc.
Testing
Warning: Clearing the replay protection list breaks the security mechanisms of the mesh node,
making it susceptible to message replay attacks. This should never be performed in a real deployment.
• UUID: If present, new 128-bit UUID value. Providing a hex-string shorter than
16 bytes will populate the N most significant bytes of the array and zero-pad the
rest. If omitted, the current UUID will be printed. To enable this command, the
BT_MESH_SHELL_PROV_CTX_INSTANCE option must be enabled.
Proxy The Proxy Server module is an optional mesh subsystem that can be enabled through the
CONFIG_BT_MESH_GATT_PROXY configuration option.
Models
Configuration Client The Configuration Client model is an optional mesh subsystem that can be enabled through the CONFIG_BT_MESH_CFG_CLI configuration option. This is implemented as a separate
module (mesh models cfg) inside the mesh models subcommand list. This module will work on any
instance of the Configuration Client model if the mentioned shell configuration option is enabled, and
as long as the Configuration Client model is present in the model composition of the application. This
shell module can be used for configuring itself and other nodes in the mesh network.
The Configuration Client uses general message parameters set by mesh target dst and mesh target
net to target specific nodes. When the Bluetooth mesh shell node is provisioned, given that the
BT_MESH_SHELL_PROV_CTX_INSTANCE option is enabled with the shell provisioning context initialized,
the Configuration Client model targets itself by default. Similarly, when another node has been provisioned by the Bluetooth mesh shell, the Configuration Client model targets the new node. In most
common use-cases, the Configuration Client depends on the provisioning features and the Configuration Database to be fully functional. The Configuration Client always sends messages using the device
key bound to the destination address, so it will only be able to configure itself and the mesh nodes
it provisioned. The following steps are an example of how you can set up a device to start using the
Configuration Client commands:
• Initialize the client node (mesh init).
• Create the CDB (mesh cdb create).
• Provision the local device (mesh prov local).
• The shell module should now target itself.
• Monitor the composition data of the local node (mesh models cfg get-comp).
• Configure the local node as desired with the Configuration Client commands.
• Provision other devices (mesh prov beacon-listen) (mesh prov remote-adv) (mesh prov
remote-gatt).
• The shell module should now target the newly added node.
• Monitor the newly provisioned nodes and their addresses (mesh cdb show).
• Monitor the composition data of the target device (mesh models cfg get-comp).
• Configure the node as desired with the Configuration Client commands.
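Condensed into a shell session, the local-node part of these steps might look like the following (the mesh prov local arguments are assumed; the other commands are as named above):

```shell
uart:~$ mesh init
uart:~$ mesh cdb create
uart:~$ mesh prov local 0 0x0001
uart:~$ mesh models cfg get-comp
uart:~$ mesh cdb show
```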
mesh models cfg model pub <Addr> <MID> [CID] [<PubAddr> <AppKeyIdx> <Cred(off, on)>
<TTL> <PerRes> <PerSteps> <Count> <Int(ms)>]
Get or set the publication parameters of a model. If all publication parameters are included,
they become the new publication parameters of the model. If all publication parameters are
omitted, print the current publication parameters of the model.
• Addr: Address of the element the model is on.
• MID: The model ID of the model to get or set the publication parameters of.
• CID: If present, determines the Company ID of the model. If omitted, the model is a
Bluetooth SIG defined model.
Publication parameters:
• PubAddr: The destination address to publish to.
• AppKeyIdx: The application key index to publish with.
• Cred: Whether to publish with Friendship credentials when acting as a Low Power Node.
• TTL: TTL value to publish with (0x00 to 0x7f).
• PerRes: Resolution of the publication period steps:
– 0x00: The Step Resolution is 100 milliseconds
– 0x01: The Step Resolution is 1 second
– 0x02: The Step Resolution is 10 seconds
– 0x03: The Step Resolution is 10 minutes
• PerSteps: Number of publication period steps, or 0 to disable periodic publication.
• Count: Number of retransmissions for each published message (0 to 7).
• Int: The interval between each retransmission, in milliseconds. Must be a multiple of 50.
mesh models cfg model pub-va <Addr> <UUID(1-16 hex)> <AppKeyIdx> <Cred(off, on)> <TTL>
<PerRes> <PerSteps> <Count> <Int(ms)> <MID> [CID]
Set the publication parameters of a model.
• Addr: Address of the element the model is on.
• MID: The model ID of the model to set the publication parameters of.
• CID: If present, determines the Company ID of the model. If omitted, the model is a
Bluetooth SIG defined model.
Publication parameters:
• UUID: The destination virtual address to publish to. Providing a hex-string shorter than
16 bytes will populate the N most significant bytes of the array and zero-pad the rest.
• AppKeyIdx: The application key index to publish with.
• Cred: Whether to publish with Friendship credentials when acting as a Low Power Node.
• TTL: TTL value to publish with (0x00 to 0x7f).
• PerRes: Resolution of the publication period steps:
– 0x00: The Step Resolution is 100 milliseconds
– 0x01: The Step Resolution is 1 second
– 0x02: The Step Resolution is 10 seconds
– 0x03: The Step Resolution is 10 minutes
• PerSteps: Number of publication period steps, or 0 to disable periodic publication.
• Count: Number of retransmissions for each published message (0 to 7).
• Int: The interval between each retransmission, in milliseconds. Must be a multiple of 50.
mesh models cfg model sub-add-va <ElemAddr> <LabelUUID(1-16 hex)> <MID> [CID]
Subscribe the model to a virtual address. Models only receive messages sent to their unicast
address or a group or virtual address they subscribe to. Models may subscribe to multiple
group and virtual addresses.
• ElemAddr: Address of the element the model is on.
• LabelUUID: 128-bit label UUID of the virtual address to subscribe to. Providing a hex-
string shorter than 16 bytes will populate the N most significant bytes of the array and
zero-pad the rest.
• MID: The model ID of the model to add the subscription to.
• CID: If present, determines the Company ID of the model. If omitted, the model is a
Bluetooth SIG defined model.
mesh models cfg model sub-del-va <ElemAddr> <LabelUUID(1-16 hex)> <MID> [CID]
Unsubscribe a model from a virtual address.
• ElemAddr: Address of the element the model is on.
• LabelUUID: 128-bit label UUID of the virtual address to remove the subscription of.
Providing a hex-string shorter than 16 bytes will populate the N most significant bytes
of the array and zero-pad the rest.
• MID: The model ID of the model to remove the subscription from.
• CID: If present, determines the Company ID of the model. If omitted, the model is a
Bluetooth SIG defined model.
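The hex-string padding rule for label UUIDs described above can be illustrated with a small standalone sketch. The parse_label_uuid helper is hypothetical, written here only to show the documented behavior; it is not part of the mesh shell:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static int hex_nibble(char c)
{
	if (c >= '0' && c <= '9') return c - '0';
	if (c >= 'a' && c <= 'f') return c - 'a' + 10;
	if (c >= 'A' && c <= 'F') return c - 'A' + 10;
	return -1;
}

/* Hypothetical illustration of the documented rule: a hex string shorter
 * than 16 bytes fills the N most significant bytes of the 128-bit label
 * UUID, and the remaining bytes are zero-padded.
 */
static int parse_label_uuid(const char *hex, uint8_t uuid[16])
{
	size_t len = strlen(hex);
	size_t n = len / 2;

	if (n == 0 || n > 16 || (len % 2) != 0) {
		return -1;
	}
	memset(uuid, 0, 16); /* zero-pad the tail */
	for (size_t i = 0; i < n; i++) {
		int hi = hex_nibble(hex[2 * i]);
		int lo = hex_nibble(hex[2 * i + 1]);

		if (hi < 0 || lo < 0) {
			return -1;
		}
		uuid[i] = (uint8_t)((hi << 4) | lo); /* fill MSBs first */
	}
	return 0;
}
```

For example, a 3-byte input such as "c0ffee" populates the first three bytes of the array and leaves the remaining 13 bytes zero.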
mesh models cfg model sub-ow-va <ElemAddr> <LabelUUID(1-16 hex)> <MID> [CID]
Overwrite all model subscriptions with a single new virtual address. Models only receive
messages sent to their unicast address or a group or virtual address they subscribe to. Models
may subscribe to multiple group and virtual addresses.
mesh models cfg hb-pub [<Dst> <Count> <Per> <TTL> <Features> <NetKeyIdx>]
Get or set the Heartbeat publication parameters. Sets the Heartbeat publication parameters if
the arguments are present, or prints the current parameters if called with no arguments.
• Dst: Destination address to publish Heartbeat messages to.
• Count: Logarithmic representation of the number of Heartbeat messages to publish
periodically:
– 0: Heartbeat messages are not published periodically.
– 1 to 17: The node will periodically publish 2^(count - 1) Heartbeat messages.
– 255: Heartbeat messages will be published periodically indefinitely.
• Per: Logarithmic representation of the Heartbeat publication period:
– 0: Heartbeat messages are not published periodically.
– 1 to 17: The node will publish Heartbeat messages every 2^(period - 1) seconds.
• TTL: The TTL value to publish Heartbeat messages with (0x00 to 0x7f).
• Features: Bitfield of features that should trigger a Heartbeat publication when
changed:
– Bit 0: Relay feature.
– Bit 1: Proxy feature.
– Bit 2: Friend feature.
– Bit 3: Low Power feature.
• NetKeyIdx: Index of the network key to publish Heartbeat messages with.
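The logarithmic Count and Per fields above decode as powers of two. A minimal standalone sketch of the decoding (the helper names are illustrative, not Zephyr APIs):

```c
#include <stdint.h>

/* Decode the logarithmic Heartbeat Count field: 0 disables periodic
 * publication, 1..17 means 2^(count - 1) messages, and 255 means
 * indefinitely (represented here as UINT32_MAX).
 */
static uint32_t hb_pub_count_decode(uint8_t count_log)
{
	if (count_log == 0) {
		return 0;
	}
	if (count_log == 0xff) {
		return UINT32_MAX;
	}
	return 1u << (count_log - 1);
}

/* Decode the logarithmic Heartbeat publication period, in seconds:
 * 0 disables periodic publication, 1..17 means 2^(period - 1) seconds.
 */
static uint32_t hb_pub_period_decode(uint8_t period_log)
{
	if (period_log == 0) {
		return 0;
	}
	return 1u << (period_log - 1);
}
```

So a Count of 5 publishes 16 messages, and a Per of 3 publishes every 4 seconds.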
Health Client The Health Client model is an optional mesh subsystem that can be enabled through the
CONFIG_BT_MESH_HEALTH_CLI configuration option. This is implemented as a separate module (mesh
models health) inside the mesh models subcommand list. This module will work on any instance of
the Health Client model if the mentioned shell configuration option is enabled, and as long as one or
more Health Client models are present in the model composition of the application. This shell module
can be used to trigger interaction between Health Clients and Servers on devices in a Mesh network.
By default, the module will choose the first Health Client instance in the model composition when
using the Health Client commands. To choose a specific Health Client instance the user can utilize the
commands mesh models health instance set and mesh models health instance get-all.
The Health Client may use the general message parameters set by mesh target dst, mesh target net
and mesh target app to target specific nodes. If the shell target destination address is set to zero, the
targeted Health Client will attempt to publish messages using its configured publication parameters.
Binary Large Object (BLOB) Transfer Client model The BLOB Transfer Client can be
added to the mesh shell by enabling the CONFIG_BT_MESH_BLOB_CLI option, and disabling the
CONFIG_BT_MESH_DFU_CLI option.
mesh models blob cli tx <Id> <Size> <BlockSizeLog> <ChunkSize> [<Group> [<Mode(push,
pull)>]]
Perform a BLOB transfer to Target nodes. The BLOB Transfer Client will send a dummy BLOB
to all Target nodes, then post a message when the transfer is completed. Note that all Target
nodes must first be configured to receive the transfer using the mesh models blob srv rx
command.
• Id: 64-bit BLOB transfer ID.
• Size: Size of the BLOB in bytes.
• BlockSizeLog: Logarithmic representation of the BLOB’s block size. The final block size
will be 1 << BlockSizeLog bytes.
• ChunkSize: Chunk size in bytes.
• Group: Optional group address to use when communicating with Target nodes. If omit-
ted or set to 0, the BLOB Transfer Client will address each Target node individually.
• Mode: BLOB transfer mode to use. Must be either push (Push BLOB Transfer Mode) or
pull (Pull BLOB Transfer Mode). If omitted, push will be used by default.
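The relationship between BlockSizeLog and ChunkSize can be sketched in a few lines of standalone C (helper names are illustrative, not Zephyr APIs):

```c
#include <stddef.h>
#include <stdint.h>

/* The block size is 1 << BlockSizeLog bytes, as documented above. */
static size_t blob_block_size(uint8_t block_size_log)
{
	return (size_t)1 << block_size_log;
}

/* Number of chunks needed to cover one full block, rounding up when the
 * chunk size does not divide the block size evenly.
 */
static size_t blob_chunks_per_block(uint8_t block_size_log, size_t chunk_size)
{
	size_t block = blob_block_size(block_size_log);

	return (block + chunk_size - 1) / chunk_size;
}
```

For instance, BlockSizeLog 12 yields 4096-byte blocks, which a 256-byte chunk size splits into 16 chunks per block.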
BLOB Transfer Server model The BLOB Transfer Server can be added to the mesh shell by enabling the
CONFIG_BT_MESH_BLOB_SRV option. The BLOB Transfer Server model is capable of receiving any BLOB
data, but the implementation in the mesh shell will discard the incoming data.
Firmware Update Client model The Firmware Update Client model can be added to the mesh
shell by enabling configuration options CONFIG_BT_MESH_BLOB_CLI and CONFIG_BT_MESH_DFU_CLI. The
Firmware Update Client demonstrates the firmware update Distributor role by transferring a dummy
firmware update to a set of Target nodes.
Firmware Update Server model The Firmware Update Server model can be added to the mesh
shell by enabling configuration options CONFIG_BT_MESH_BLOB_SRV and CONFIG_BT_MESH_DFU_SRV. The
Firmware Update Server demonstrates the firmware update Target role by accepting any firmware up-
date. The mesh shell Firmware Update Server will discard the incoming firmware data, but otherwise
behave as a proper firmware update Target node.
Firmware Distribution Server model The Firmware Distribution Server model commands can be
added to the mesh shell by enabling the CONFIG_BT_MESH_DFD_SRV configuration option. The shell com-
mands for this model mirror the messages sent to the server by a Firmware Distribution Client model.
To use these commands, a Firmware Distribution Server must be instantiated by the application.
DFU metadata The DFU metadata commands allow generating metadata that can be used by a
Target node to check the firmware before accepting it. The commands are enabled through the
CONFIG_BT_MESH_DFU_METADATA configuration option.
mesh models dfu metadata comp-add <CID> <ProductID> <VendorID> <Crpl> <Features>
Create a header of the Composition Data Page 0.
• CID: Company identifier assigned by Bluetooth SIG.
• ProductID: Vendor-assigned product identifier.
• VendorID: Vendor-assigned version identifier.
• Crpl: The size of the replay protection list.
• Features: Features supported by the node in bit field format:
– 0: Relay.
– 1: Proxy.
– 2: Friend.
– 3: Low Power.
mesh models dfu metadata encode <Major> <Minor> <Rev> <BuildNum> <Size> <CoreType>
<Hash> <Elems> [<UserData>]
Encode metadata for the DFU.
• Major: Major version of the firmware.
• Minor: Minor version of the firmware.
• Rev: Revision number of the firmware.
• BuildNum: Build number.
• Size: Size of the signed bin file.
• CoreType: New firmware core type in bit field format:
– 0: Application core.
– 1: Network core.
– 2: Application-specific BLOB.
• Hash: Hash of the composition data generated using mesh models dfu metadata
comp-hash-get command.
• Elems: Number of elements in the new firmware.
• UserData: User data supplied with the metadata.
Segmentation and Reassembly (SAR) Configuration Client The SAR Configuration Client is an optional mesh model that can be enabled through the CONFIG_BT_MESH_SAR_CFG_CLI configuration option.
The SAR Configuration Client model is used to support the functionality of configuring the behavior of
the lower transport layer of a node that supports the SAR Configuration Server model.
Private Beacon Client The Private Beacon Client model is an optional mesh subsystem that can be
enabled through the CONFIG_BT_MESH_PRIV_BEACON_CLI configuration option.
Opcodes Aggregator Client The Opcodes Aggregator Client is an optional Bluetooth mesh model that
can be enabled through the CONFIG_BT_MESH_OP_AGG_CLI configuration option. The Opcodes Aggregator Client model is used to support the functionality of dispatching a sequence of access layer messages
to nodes supporting the Opcodes Aggregator Server model.
Remote Provisioning Client The Remote Provisioning Client is an optional Bluetooth mesh model
enabled through the CONFIG_BT_MESH_RPR_CLI configuration option. The Remote Provisioning Client
model provides support for remote provisioning of devices into a mesh network by using the Remote
Provisioning Server model.
This shell module can be used to trigger interaction between Remote Provisioning Clients and Remote
Provisioning Servers on devices in a mesh network.
• UUID: Device UUID to scan for. Providing a hex-string shorter than 16 bytes will populate
the N most significant bytes of the array and zero-pad the rest. If omitted, all devices
will be reported.
Configuration database The Configuration database is an optional mesh subsystem that can be en-
abled through the CONFIG_BT_MESH_CDB configuration option. The Configuration database is only avail-
able on provisioner devices, and allows them to store all information about the mesh network. To avoid
conflicts, there should only be one mesh node in the network with the Configuration database enabled.
This node is the Configurator, and is responsible for adding new nodes to the network and configuring
them.
mesh cdb node-add <UUID(1-16 hex)> <Addr> <ElemCnt> <NetKeyIdx> [DevKey(1-16 hex)]
Manually add a mesh node to the configuration database. Note that devices provisioned with
mesh provision and mesh provision-adv will be added automatically if the Configuration
Database is enabled and created.
• UUID: 128-bit hexadecimal UUID of the node. Providing a hex-string shorter than 16
bytes will populate the N most significant bytes of the array and zero-pad the rest.
• Addr: Unicast address of the node, or 0 to automatically choose the lowest available
address.
• ElemCnt: Number of elements on the node.
• NetKeyIdx: The network key the node was provisioned with.
• DevKey: Optional 128-bit device key value for the device. Providing a hex-string shorter
than 16 bytes will populate the N most significant bytes of the array and zero-pad the
rest. If omitted, a random value will be generated.
On-Demand Private GATT Proxy Client The On-Demand Private GATT Proxy Client model is an op-
tional mesh subsystem that can be enabled through the CONFIG_BT_MESH_OD_PRIV_PROXY_CLI configu-
ration option.
Solicitation PDU RPL Client The Solicitation PDU RPL Client model is an optional mesh subsystem
that can be enabled through the CONFIG_BT_MESH_SOL_PDU_RPL_CLI configuration option.
API Reference
group bt_gatt_micp
Microphone Control Profile (MICP)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Defines
BT_MICP_MIC_DEV_AICS_CNT
BT_MICP_ERR_MUTE_DISABLED
Application error codes
BT_MICP_ERR_VAL_OUT_OF_RANGE
BT_MICP_MUTE_UNMUTED
Microphone Control Profile mute states
BT_MICP_MUTE_MUTED
BT_MICP_MUTE_DISABLED
Functions
Parameters
• conn – The connection to initialize the profile for.
• mic_ctlr – [out] Valid remote instance object on success.
Returns
0 on success, GATT error value on fail.
struct bt_micp_mic_dev_register_param
#include <micp.h> Register parameters structure for Microphone Control Service.
Public Members
struct bt_micp_included
#include <micp.h> Microphone Control Profile included services.
Used to represent the Microphone Control Profile included service instances, for either a
Microphone Controller or a Microphone Device. The instance pointers either represent local
service instances, or remote service instances.
Public Members
uint8_t aics_cnt
Number of Audio Input Control Service instances
struct bt_micp_mic_dev_cb
#include <micp.h>
Public Members
struct bt_micp_mic_ctlr_cb
#include <micp.h>
Public Members
API Reference
group bt_rfcomm
RFCOMM.
Typedefs
Enums
enum [anonymous]
Values:
enumerator BT_RFCOMM_CHAN_HFP_HF = 1
enumerator BT_RFCOMM_CHAN_HFP_AG
enumerator BT_RFCOMM_CHAN_HSP_AG
enumerator BT_RFCOMM_CHAN_HSP_HS
enumerator BT_RFCOMM_CHAN_SPP
enum bt_rfcomm_role
Role of RFCOMM session and dlc. Used only by internal APIs.
Values:
enumerator BT_RFCOMM_ROLE_ACCEPTOR
enumerator BT_RFCOMM_ROLE_INITIATOR
Functions
struct bt_rfcomm_dlc_ops
#include <rfcomm.h> RFCOMM DLC operations structure.
Public Members
Param buf
Buffer containing incoming data.
struct bt_rfcomm_dlc
#include <rfcomm.h> RFCOMM DLC structure.
struct bt_rfcomm_server
#include <rfcomm.h>
Public Members
uint8_t channel
Server Channel
Battery Service
group bt_bas
Battery Service (BAS)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Functions
uint8_t bt_bas_get_battery_level(void)
Read battery level value.
Read the characteristic value of the battery level.
Returns
The battery level in percent.
int bt_bas_set_battery_level(uint8_t level)
Update battery level value.
Update the characteristic value of the battery level. This will send a GATT notification to all
current subscribers.
Parameters
• level – The battery level in percent.
Returns
Zero in case of success and error code in case of error.
group bt_hrs
Heart Rate Service (HRS)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Functions
group bt_ias
Immediate Alert Service (IAS)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Defines
BT_IAS_CB_DEFINE(_name)
Register a callback structure for immediate alert events.
Parameters
• _name – Name of callback structure.
Enums
enum bt_ias_alert_lvl
Values:
enumerator BT_IAS_ALERT_LVL_NO_ALERT
No alerting should be done on device
enumerator BT_IAS_ALERT_LVL_MILD_ALERT
Device shall alert
enumerator BT_IAS_ALERT_LVL_HIGH_ALERT
Device should alert in strongest possible way
Functions
int bt_ias_local_alert_stop(void)
Method for stopping alert locally.
Returns
Zero in case of success and error code in case of error.
int bt_ias_client_alert_write(struct bt_conn *conn, enum bt_ias_alert_lvl)
Set alert level.
Parameters
• conn – Bluetooth connection object
• bt_ias_alert_lvl – Level of alert to write
Returns
Zero in case of success and error code in case of error.
int bt_ias_discover(struct bt_conn *conn)
Discover Immediate Alert Service.
Parameters
• conn – Bluetooth connection object
Returns
Zero in case of success and error code in case of error.
int bt_ias_client_cb_register(const struct bt_ias_client_cb *cb)
Register Immediate Alert Client callbacks.
Parameters
• cb – The callback structure
Returns
Zero in case of success and error code in case of error.
struct bt_ias_cb
#include <ias.h> Immediate Alert Service callback structure.
Public Members
void (*no_alert)(void)
Callback function to stop alert.
This callback is called when peer commands to disable alert.
void (*mild_alert)(void)
Callback function for alert level value.
This callback is called when peer commands to alert.
void (*high_alert)(void)
Callback function for alert level value.
This callback is called when peer commands to alert in the strongest possible way.
struct bt_ias_client_cb
#include <ias.h>
Public Members
group bt_ots
Object Transfer Service (OTS)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Defines
BT_OTS_OBJ_ID_SIZE
Size of OTS object ID (in bytes).
BT_OTS_OBJ_ID_MIN
Minimum allowed value for object ID (except ID for directory listing)
BT_OTS_OBJ_ID_MAX
Maximum allowed value for object ID (except ID for directory listing)
OTS_OBJ_ID_DIR_LIST
ID of the Directory Listing Object.
BT_OTS_OBJ_ID_MASK
Mask for OTS object IDs, preserving the 48 bits.
BT_OTS_OBJ_ID_STR_LEN
Length of OTS object ID string (in bytes).
BT_OTS_OBJ_SET_PROP_DELETE(prop)
Set BT_OTS_OBJ_PROP_DELETE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_EXECUTE(prop)
Set BT_OTS_OBJ_PROP_EXECUTE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_READ(prop)
Set BT_OTS_OBJ_PROP_READ property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_WRITE(prop)
Set BT_OTS_OBJ_PROP_WRITE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_APPEND(prop)
Set BT_OTS_OBJ_PROP_APPEND property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_TRUNCATE(prop)
Set BT_OTS_OBJ_PROP_TRUNCATE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_PATCH(prop)
Set BT_OTS_OBJ_PROP_PATCH property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_SET_PROP_MARKED(prop)
Set BT_OTS_OBJ_PROP_MARKED property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_DELETE(prop)
Get BT_OTS_OBJ_PROP_DELETE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_EXECUTE(prop)
Get BT_OTS_OBJ_PROP_EXECUTE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_READ(prop)
Get BT_OTS_OBJ_PROP_READ property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_WRITE(prop)
Get BT_OTS_OBJ_PROP_WRITE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_APPEND(prop)
Get BT_OTS_OBJ_PROP_APPEND property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_TRUNCATE(prop)
Get BT_OTS_OBJ_PROP_TRUNCATE property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_PATCH(prop)
Get BT_OTS_OBJ_PROP_PATCH property.
Parameters
• prop – Object properties.
BT_OTS_OBJ_GET_PROP_MARKED(prop)
Get BT_OTS_OBJ_PROP_MARKED property.
Parameters
• prop – Object properties.
BT_OTS_OACP_SET_FEAT_CREATE(feat)
Set BT_OTS_OACP_FEAT_CREATE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_DELETE(feat)
Set BT_OTS_OACP_FEAT_DELETE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_CHECKSUM(feat)
Set BT_OTS_OACP_FEAT_CHECKSUM feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_EXECUTE(feat)
Set BT_OTS_OACP_FEAT_EXECUTE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_READ(feat)
Set BT_OTS_OACP_FEAT_READ feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_WRITE(feat)
Set BT_OTS_OACP_FEAT_WRITE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_APPEND(feat)
Set BT_OTS_OACP_FEAT_APPEND feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_TRUNCATE(feat)
Set BT_OTS_OACP_FEAT_TRUNCATE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_PATCH(feat)
Set BT_OTS_OACP_FEAT_PATCH feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_SET_FEAT_ABORT(feat)
Set BT_OTS_OACP_FEAT_ABORT feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_CREATE(feat)
Get BT_OTS_OACP_FEAT_CREATE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_DELETE(feat)
Get BT_OTS_OACP_FEAT_DELETE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_CHECKSUM(feat)
Get BT_OTS_OACP_FEAT_CHECKSUM feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_EXECUTE(feat)
Get BT_OTS_OACP_FEAT_EXECUTE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_READ(feat)
Get BT_OTS_OACP_FEAT_READ feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_WRITE(feat)
Get BT_OTS_OACP_FEAT_WRITE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_APPEND(feat)
Get BT_OTS_OACP_FEAT_APPEND feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_TRUNCATE(feat)
Get BT_OTS_OACP_FEAT_TRUNCATE feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_PATCH(feat)
Get BT_OTS_OACP_FEAT_PATCH feature.
Parameters
• feat – OTS features.
BT_OTS_OACP_GET_FEAT_ABORT(feat)
Get BT_OTS_OACP_FEAT_ABORT feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_SET_FEAT_GO_TO(feat)
Set BT_OTS_OLCP_FEAT_GO_TO feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_SET_FEAT_ORDER(feat)
Set BT_OTS_OLCP_FEAT_ORDER feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_SET_FEAT_NUM_REQ(feat)
Set BT_OTS_OLCP_FEAT_NUM_REQ feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_SET_FEAT_CLEAR(feat)
Set BT_OTS_OLCP_FEAT_CLEAR feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_GET_FEAT_GO_TO(feat)
Get BT_OTS_OLCP_FEAT_GO_TO feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_GET_FEAT_ORDER(feat)
Get BT_OTS_OLCP_FEAT_ORDER feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_GET_FEAT_NUM_REQ(feat)
Get BT_OTS_OLCP_FEAT_NUM_REQ feature.
Parameters
• feat – OTS features.
BT_OTS_OLCP_GET_FEAT_CLEAR(feat)
Get BT_OTS_OLCP_FEAT_CLEAR feature.
Parameters
• feat – OTS features.
BT_OTS_DATE_TIME_FIELD_SIZE
BT_OTS_STOP
BT_OTS_CONTINUE
Typedefs
Enums
enum [anonymous]
Properties of an OTS object.
Values:
enumerator BT_OTS_OBJ_PROP_DELETE = 0
Bit 0 Deletion of this object is permitted
enumerator BT_OTS_OBJ_PROP_EXECUTE = 1
Bit 1 Execution of this object is permitted
enumerator BT_OTS_OBJ_PROP_READ = 2
Bit 2 Reading this object is permitted
enumerator BT_OTS_OBJ_PROP_WRITE = 3
Bit 3 Writing data to this object is permitted
enumerator BT_OTS_OBJ_PROP_APPEND = 4
Bit 4 Appending data to this object is permitted.
enumerator BT_OTS_OBJ_PROP_TRUNCATE = 5
Bit 5 Truncation of this object is permitted
enumerator BT_OTS_OBJ_PROP_PATCH = 6
Bit 6 Patching this object is permitted.
enumerator BT_OTS_OBJ_PROP_MARKED = 7
Bit 7 This object is a marked object
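Conceptually, the SET/GET property macros listed earlier manipulate single bits of a props bitfield at the positions given by this enum. A standalone equivalent (these helpers mirror what the BT_OTS_OBJ_SET_PROP_x and BT_OTS_OBJ_GET_PROP_x macros do; they are not the Zephyr definitions) would be:

```c
#include <stdint.h>

/* Bit positions taken from the OTS object property enum above. */
enum {
	OBJ_PROP_DELETE = 0,
	OBJ_PROP_EXECUTE = 1,
	OBJ_PROP_READ = 2,
	OBJ_PROP_WRITE = 3,
};

/* Set a single property bit in the bitfield. */
static void prop_set(uint32_t *props, unsigned int bit)
{
	*props |= (uint32_t)1 << bit;
}

/* Test whether a single property bit is set. */
static int prop_get(uint32_t props, unsigned int bit)
{
	return (props >> bit) & 1u;
}
```

An object that is readable and writable would thus carry props == (1 << 2) | (1 << 3).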
enum [anonymous]
Object Action Control Point Feature bits.
Values:
enumerator BT_OTS_OACP_FEAT_CREATE = 0
Bit 0 OACP Create Op Code Supported
enumerator BT_OTS_OACP_FEAT_DELETE = 1
Bit 1 OACP Delete Op Code Supported
enumerator BT_OTS_OACP_FEAT_CHECKSUM = 2
Bit 2 OACP Calculate Checksum Op Code Supported
enumerator BT_OTS_OACP_FEAT_EXECUTE = 3
Bit 3 OACP Execute Op Code Supported
enumerator BT_OTS_OACP_FEAT_READ = 4
Bit 4 OACP Read Op Code Supported
enumerator BT_OTS_OACP_FEAT_WRITE = 5
Bit 5 OACP Write Op Code Supported
enumerator BT_OTS_OACP_FEAT_APPEND = 6
Bit 6 Appending Additional Data to Objects Supported
enumerator BT_OTS_OACP_FEAT_TRUNCATE = 7
Bit 7 Truncation of Objects Supported
enumerator BT_OTS_OACP_FEAT_PATCH = 8
Bit 8 Patching of Objects Supported
enumerator BT_OTS_OACP_FEAT_ABORT = 9
Bit 9 OACP Abort Op Code Supported
enum bt_ots_oacp_write_op_mode
Values:
enumerator BT_OTS_OACP_WRITE_OP_MODE_NONE = 0
enum [anonymous]
Object List Control Point Feature bits.
Values:
enumerator BT_OTS_OLCP_FEAT_GO_TO = 0
Bit 0 OLCP Go To Op Code Supported
enumerator BT_OTS_OLCP_FEAT_ORDER = 1
Bit 1 OLCP Order Op Code Supported
enumerator BT_OTS_OLCP_FEAT_NUM_REQ = 2
Bit 2 OLCP Request Number of Objects Op Code Supported
enumerator BT_OTS_OLCP_FEAT_CLEAR = 3
Bit 3 OLCP Clear Marking Op Code Supported
enum [anonymous]
Object metadata request bit field values.
Values:
Functions
Returns
int 0 if success, ERRNO on failure.
int bt_ots_client_write_object_data(struct bt_ots_client *otc_inst, struct bt_conn *conn,
const void *buf, size_t len, off_t offset, enum
bt_ots_oacp_write_op_mode mode)
Write the data of the current selected object.
This will trigger an OACP write operation for the current object with a specified offset, and
then transfer the content via the L2CAP CoC.
The length of the data written to object is returned in the obj_data_written() callback.
Parameters
• otc_inst – Pointer to the OTC instance.
• conn – Pointer to the connection object.
• buf – Pointer to the data buffer to be written.
• len – Size of data.
• offset – Offset to write, usually 0.
• mode – Mode Parameter for OACP Write Op Code. See
bt_ots_oacp_write_op_mode.
Returns
int 0 if success, ERRNO on failure.
int bt_ots_client_get_object_checksum(struct bt_ots_client *otc_inst, struct bt_conn *conn,
off_t offset, size_t len)
Get the checksum of the current selected object.
This will trigger an OACP calculate checksum operation for the current object with a specified
offset and length.
The checksum is returned in the OACP indication and the obj_checksum_calculated() callback.
Parameters
• otc_inst – Pointer to the OTC instance.
• conn – Pointer to the connection object.
• offset – Offset to calculate, usually 0.
• len – Length of data to calculate the checksum for. May be less than the current
object’s size, but shall not be larger.
Returns
int 0 if success, ERRNO on failure.
int bt_ots_client_decode_dirlisting(uint8_t *data, uint16_t length,
bt_ots_client_dirlisting_cb cb)
Decode Directory Listing object into object metadata.
If the Directory Listing object contains multiple objects, then the callback will be called for
each of them.
Parameters
• data – The data received for the directory listing object.
• length – Length of the data.
• cb – The callback that will be called for each object.
struct bt_ots_obj_type
#include <ots.h> Type of an OTS object.
struct bt_ots_obj_size
#include <ots.h> Descriptor for OTS Object Size parameter.
Public Members
uint32_t cur
Current Size.
uint32_t alloc
Allocated Size.
struct bt_ots_feat
#include <ots.h> Features of the OTS.
struct bt_ots_date_time
#include <ots.h> Date and Time structure.
struct bt_ots_obj_metadata
#include <ots.h> Metadata of an OTS object.
Used by the server as a descriptor for OTS object initialization. Used by the client to present
object metadata to the application.
Public Members
uint32_t props
Object Properties.
struct bt_ots_obj_add_param
#include <ots.h> Descriptor for OTS object addition.
Public Members
uint32_t size
Object size to allocate.
struct bt_ots_obj_created_desc
#include <ots.h> Descriptor for OTS created object.
Descriptor for OTS object created by the application. This descriptor is returned by
bt_ots_cb::obj_created callback which contains further documentation on distinguishing be-
tween server and client object creation.
Public Members
char *name
Object name.
The object name as a NULL terminated string.
When the server creates a new object, the name length shall be > 0 and <=
BT_OTS_OBJ_MAX_NAME_LEN. When the client creates a new object, the name shall
be an empty string.
uint32_t props
Object properties.
struct bt_ots_cb
#include <ots.h> OTS callback structure.
Public Members
int (*obj_created)(struct bt_ots *ots, struct bt_conn *conn, uint64_t id, const struct
bt_ots_obj_add_param *add_param, struct bt_ots_obj_created_desc *created_desc)
Object created callback.
This callback is called whenever a new object is created. Application can reject this
request by returning an error when it does not have necessary resources to hold this
new object. This callback is also triggered when the server creates a new object with
bt_ots_obj_add() API.
Param ots
OTS instance.
Param conn
The connection that is requesting object creation or NULL if object is created by
bt_ots_obj_add().
Param id
Object ID.
Param add_param
Object creation requested parameters.
Param created_desc
Created object descriptor that shall be filled by the receiver of this callback.
Return
0 in case of success or negative value in case of error.
Return
-ENOTSUP if object type is not supported
Return
-ENOMEM if no available space for new object.
Return
-EINVAL if an invalid parameter is provided
Return
other negative values are treated as a generic operation failure
Retval
When an error is indicated by using a negative value, the object delete procedure is
aborted and a corresponding failed status is returned to the client.
Return
0 in case of success.
Return
-EBUSY if the object is locked. This is generally not expected to be returned by
the application as the OTS layer tracks object accesses. An object locked status
is returned to the client.
Return
Other negative values in case of error. A generic operation failed status is re-
turned to the client.
ssize_t (*obj_read)(struct bt_ots *ots, struct bt_conn *conn, uint64_t id, void **data, size_t
len, off_t offset)
Object read callback.
This callback is called multiple times during the Object read operation. The OTS module
will keep requesting successive Object fragments from the application until the read operation
is completed. The end of the read operation is indicated by a NULL data parameter.
Param ots
OTS instance.
Param conn
The connection that read object.
Param id
Object ID.
Param data
In: NULL once the read operation is completed. Out: Next chunk of data to be
sent.
Param len
Remaining length requested by the client.
Param offset
Object data offset.
Return
Data length to be sent via the data parameter. This value shall be smaller than
or equal to the len parameter.
Return
Negative value in case of an error.
ssize_t (*obj_write)(struct bt_ots *ots, struct bt_conn *conn, uint64_t id, const void *data,
size_t len, off_t offset, size_t rem)
Object write callback.
This callback is called multiple times during the Object write operation. OTS module will
keep providing successive Object fragments to the application until the write operation is
completed. The offset and length of each write fragment is validated by the OTS module
to be within the allocated size of the object. The remaining length indicates data length
remaining to be written and will decrease each write iteration until it reaches 0 in the last
write fragment.
Param ots
OTS instance.
Param conn
The connection that wrote object.
Param id
Object ID.
Param data
Next chunk of data to be written.
Param len
Length of the current chunk of data in the buffer.
Param offset
Object data offset.
Param rem
Remaining length in the write operation.
Return
Number of bytes written in case of success, if the number of bytes written does
not match len, -EIO is returned to the L2CAP layer.
Return
A negative value in case of an error.
Return
-EINPROGRESS has a special meaning and is unsupported at the moment. It
should not be returned.
void (*obj_name_written)(struct bt_ots *ots, struct bt_conn *conn, uint64_t id, const char
*cur_name, const char *new_name)
Object name written callback.
This callback is called when the object name is written. This is a notification to the
application that the object name will be updated by the OTS service implementation.
Param ots
OTS instance.
Param conn
The connection that wrote object name.
Param id
Object ID.
Param cur_name
Current object name.
Param new_name
New object name.
int (*obj_cal_checksum)(struct bt_ots *ots, struct bt_conn *conn, uint64_t id, off_t offset,
size_t len, void **data)
Object Calculate checksum callback.
This callback is called when the OACP Calculate Checksum procedure is performed. Because
object data is opaque to OTS, the application is the only party that knows where the data
is, and should return a pointer to the actual object data.
Param ots
[in] OTS instance.
Param conn
[in] The connection that wrote object.
Param id
[in] Object ID.
Param offset
[in] Offset of the first octet of the object contents from which the checksum is calculated.
Param len
struct bt_ots_init
#include <ots.h> Descriptor for OTS initialization.
struct bt_ots_client
#include <ots.h> OTS client instance.
struct bt_ots_client_cb
#include <ots.h> OTS client callback structure
Public Members
API Reference
group bt_sdp
Service Discovery Protocol (SDP)
Defines
BT_SDP_SDP_SERVER_SVCLASS
BT_SDP_BROWSE_GRP_DESC_SVCLASS
BT_SDP_PUBLIC_BROWSE_GROUP
BT_SDP_SERIAL_PORT_SVCLASS
BT_SDP_LAN_ACCESS_SVCLASS
BT_SDP_DIALUP_NET_SVCLASS
BT_SDP_IRMC_SYNC_SVCLASS
BT_SDP_OBEX_OBJPUSH_SVCLASS
BT_SDP_OBEX_FILETRANS_SVCLASS
BT_SDP_IRMC_SYNC_CMD_SVCLASS
BT_SDP_HEADSET_SVCLASS
BT_SDP_CORDLESS_TELEPHONY_SVCLASS
BT_SDP_AUDIO_SOURCE_SVCLASS
BT_SDP_AUDIO_SINK_SVCLASS
BT_SDP_AV_REMOTE_TARGET_SVCLASS
BT_SDP_ADVANCED_AUDIO_SVCLASS
BT_SDP_AV_REMOTE_SVCLASS
BT_SDP_AV_REMOTE_CONTROLLER_SVCLASS
BT_SDP_INTERCOM_SVCLASS
BT_SDP_FAX_SVCLASS
BT_SDP_HEADSET_AGW_SVCLASS
BT_SDP_WAP_SVCLASS
BT_SDP_WAP_CLIENT_SVCLASS
BT_SDP_PANU_SVCLASS
BT_SDP_NAP_SVCLASS
BT_SDP_GN_SVCLASS
BT_SDP_DIRECT_PRINTING_SVCLASS
BT_SDP_REFERENCE_PRINTING_SVCLASS
BT_SDP_IMAGING_SVCLASS
BT_SDP_IMAGING_RESPONDER_SVCLASS
BT_SDP_IMAGING_ARCHIVE_SVCLASS
BT_SDP_IMAGING_REFOBJS_SVCLASS
BT_SDP_HANDSFREE_SVCLASS
BT_SDP_HANDSFREE_AGW_SVCLASS
BT_SDP_DIRECT_PRT_REFOBJS_SVCLASS
BT_SDP_REFLECTED_UI_SVCLASS
BT_SDP_BASIC_PRINTING_SVCLASS
BT_SDP_PRINTING_STATUS_SVCLASS
BT_SDP_HID_SVCLASS
BT_SDP_HCR_SVCLASS
BT_SDP_HCR_PRINT_SVCLASS
BT_SDP_HCR_SCAN_SVCLASS
BT_SDP_CIP_SVCLASS
BT_SDP_VIDEO_CONF_GW_SVCLASS
BT_SDP_UDI_MT_SVCLASS
BT_SDP_UDI_TA_SVCLASS
BT_SDP_AV_SVCLASS
BT_SDP_SAP_SVCLASS
BT_SDP_PBAP_PCE_SVCLASS
BT_SDP_PBAP_PSE_SVCLASS
BT_SDP_PBAP_SVCLASS
BT_SDP_MAP_MSE_SVCLASS
BT_SDP_MAP_MCE_SVCLASS
BT_SDP_MAP_SVCLASS
BT_SDP_GNSS_SVCLASS
BT_SDP_GNSS_SERVER_SVCLASS
BT_SDP_MPS_SC_SVCLASS
BT_SDP_MPS_SVCLASS
BT_SDP_PNP_INFO_SVCLASS
BT_SDP_GENERIC_NETWORKING_SVCLASS
BT_SDP_GENERIC_FILETRANS_SVCLASS
BT_SDP_GENERIC_AUDIO_SVCLASS
BT_SDP_GENERIC_TELEPHONY_SVCLASS
BT_SDP_UPNP_SVCLASS
BT_SDP_UPNP_IP_SVCLASS
BT_SDP_UPNP_PAN_SVCLASS
BT_SDP_UPNP_LAP_SVCLASS
BT_SDP_UPNP_L2CAP_SVCLASS
BT_SDP_VIDEO_SOURCE_SVCLASS
BT_SDP_VIDEO_SINK_SVCLASS
BT_SDP_VIDEO_DISTRIBUTION_SVCLASS
BT_SDP_HDP_SVCLASS
BT_SDP_HDP_SOURCE_SVCLASS
BT_SDP_HDP_SINK_SVCLASS
BT_SDP_GENERIC_ACCESS_SVCLASS
BT_SDP_GENERIC_ATTRIB_SVCLASS
BT_SDP_APPLE_AGENT_SVCLASS
BT_SDP_SERVER_RECORD_HANDLE
BT_SDP_ATTR_RECORD_HANDLE
BT_SDP_ATTR_SVCLASS_ID_LIST
BT_SDP_ATTR_RECORD_STATE
BT_SDP_ATTR_SERVICE_ID
BT_SDP_ATTR_PROTO_DESC_LIST
BT_SDP_ATTR_BROWSE_GRP_LIST
BT_SDP_ATTR_LANG_BASE_ATTR_ID_LIST
BT_SDP_ATTR_SVCINFO_TTL
BT_SDP_ATTR_SERVICE_AVAILABILITY
BT_SDP_ATTR_PROFILE_DESC_LIST
BT_SDP_ATTR_DOC_URL
BT_SDP_ATTR_CLNT_EXEC_URL
BT_SDP_ATTR_ICON_URL
BT_SDP_ATTR_ADD_PROTO_DESC_LIST
BT_SDP_ATTR_GROUP_ID
BT_SDP_ATTR_IP_SUBNET
BT_SDP_ATTR_VERSION_NUM_LIST
BT_SDP_ATTR_SUPPORTED_FEATURES_LIST
BT_SDP_ATTR_GOEP_L2CAP_PSM
BT_SDP_ATTR_SVCDB_STATE
BT_SDP_ATTR_MPSD_SCENARIOS
BT_SDP_ATTR_MPMD_SCENARIOS
BT_SDP_ATTR_MPS_DEPENDENCIES
BT_SDP_ATTR_SERVICE_VERSION
BT_SDP_ATTR_EXTERNAL_NETWORK
BT_SDP_ATTR_SUPPORTED_DATA_STORES_LIST
BT_SDP_ATTR_DATA_EXCHANGE_SPEC
BT_SDP_ATTR_NETWORK
BT_SDP_ATTR_FAX_CLASS1_SUPPORT
BT_SDP_ATTR_REMOTE_AUDIO_VOLUME_CONTROL
BT_SDP_ATTR_MCAP_SUPPORTED_PROCEDURES
BT_SDP_ATTR_FAX_CLASS20_SUPPORT
BT_SDP_ATTR_SUPPORTED_FORMATS_LIST
BT_SDP_ATTR_FAX_CLASS2_SUPPORT
BT_SDP_ATTR_AUDIO_FEEDBACK_SUPPORT
BT_SDP_ATTR_NETWORK_ADDRESS
BT_SDP_ATTR_WAP_GATEWAY
BT_SDP_ATTR_HOMEPAGE_URL
BT_SDP_ATTR_WAP_STACK_TYPE
BT_SDP_ATTR_SECURITY_DESC
BT_SDP_ATTR_NET_ACCESS_TYPE
BT_SDP_ATTR_MAX_NET_ACCESSRATE
BT_SDP_ATTR_IP4_SUBNET
BT_SDP_ATTR_IP6_SUBNET
BT_SDP_ATTR_SUPPORTED_CAPABILITIES
BT_SDP_ATTR_SUPPORTED_FEATURES
BT_SDP_ATTR_SUPPORTED_FUNCTIONS
BT_SDP_ATTR_TOTAL_IMAGING_DATA_CAPACITY
BT_SDP_ATTR_SUPPORTED_REPOSITORIES
BT_SDP_ATTR_MAS_INSTANCE_ID
BT_SDP_ATTR_SUPPORTED_MESSAGE_TYPES
BT_SDP_ATTR_PBAP_SUPPORTED_FEATURES
BT_SDP_ATTR_MAP_SUPPORTED_FEATURES
BT_SDP_ATTR_SPECIFICATION_ID
BT_SDP_ATTR_VENDOR_ID
BT_SDP_ATTR_PRODUCT_ID
BT_SDP_ATTR_VERSION
BT_SDP_ATTR_PRIMARY_RECORD
BT_SDP_ATTR_VENDOR_ID_SOURCE
BT_SDP_ATTR_HID_DEVICE_RELEASE_NUMBER
BT_SDP_ATTR_HID_PARSER_VERSION
BT_SDP_ATTR_HID_DEVICE_SUBCLASS
BT_SDP_ATTR_HID_COUNTRY_CODE
BT_SDP_ATTR_HID_VIRTUAL_CABLE
BT_SDP_ATTR_HID_RECONNECT_INITIATE
BT_SDP_ATTR_HID_DESCRIPTOR_LIST
BT_SDP_ATTR_HID_LANG_ID_BASE_LIST
BT_SDP_ATTR_HID_SDP_DISABLE
BT_SDP_ATTR_HID_BATTERY_POWER
BT_SDP_ATTR_HID_REMOTE_WAKEUP
BT_SDP_ATTR_HID_PROFILE_VERSION
BT_SDP_ATTR_HID_SUPERVISION_TIMEOUT
BT_SDP_ATTR_HID_NORMALLY_CONNECTABLE
BT_SDP_ATTR_HID_BOOT_DEVICE
BT_SDP_PRIMARY_LANG_BASE
BT_SDP_ATTR_SVCNAME_PRIMARY
BT_SDP_ATTR_SVCDESC_PRIMARY
BT_SDP_ATTR_PROVNAME_PRIMARY
BT_SDP_DATA_NIL
BT_SDP_UINT8
BT_SDP_UINT16
BT_SDP_UINT32
BT_SDP_UINT64
BT_SDP_UINT128
BT_SDP_INT8
BT_SDP_INT16
BT_SDP_INT32
BT_SDP_INT64
BT_SDP_INT128
BT_SDP_UUID_UNSPEC
BT_SDP_UUID16
BT_SDP_UUID32
BT_SDP_UUID128
BT_SDP_TEXT_STR_UNSPEC
BT_SDP_TEXT_STR8
BT_SDP_TEXT_STR16
BT_SDP_TEXT_STR32
BT_SDP_BOOL
BT_SDP_SEQ_UNSPEC
BT_SDP_SEQ8
BT_SDP_SEQ16
BT_SDP_SEQ32
BT_SDP_ALT_UNSPEC
BT_SDP_ALT8
BT_SDP_ALT16
BT_SDP_ALT32
BT_SDP_URL_STR_UNSPEC
BT_SDP_URL_STR8
BT_SDP_URL_STR16
BT_SDP_URL_STR32
BT_SDP_TYPE_DESC_MASK
BT_SDP_SIZE_DESC_MASK
BT_SDP_SIZE_INDEX_OFFSET
BT_SDP_ARRAY_8(...)
Declare an array of 8-bit elements in an attribute.
BT_SDP_ARRAY_16(...)
Declare an array of 16-bit elements in an attribute.
BT_SDP_ARRAY_32(...)
Declare an array of 32-bit elements in an attribute.
BT_SDP_TYPE_SIZE(_type)
Declare a fixed-size data element header.
Parameters
• _type – Data element header containing type and size descriptors.
BT_SDP_TYPE_SIZE_VAR(_type, _size)
Declare a variable-size data element header.
Parameters
• _type – Data element header containing type and size descriptors.
• _size – The actual size of the data.
BT_SDP_DATA_ELEM_LIST(...)
Declare a list of data elements.
BT_SDP_NEW_SERVICE
SDP New Service Record Declaration Macro.
Helper macro to declare a new service record. Default attributes: Record Handle, Record
State, Language Base, Root Browse Group
BT_SDP_LIST(_att_id, _type_size, _data_elem_seq)
Generic SDP List Attribute Declaration Macro.
Helper macro to declare a list attribute.
Parameters
• _att_id – List Attribute ID.
• _type_size – SDP type and size descriptor.
• _data_elem_seq – Data element sequence for the list.
BT_SDP_SERVICE_ID(_uuid)
SDP Service ID Attribute Declaration Macro.
Helper macro to declare a service ID attribute.
Parameters
• _uuid – Service ID 16bit UUID.
BT_SDP_SERVICE_NAME(_name)
SDP Name Attribute Declaration Macro.
Helper macro to declare a service name attribute.
Parameters
• _name – Service name as a string (up to 256 chars).
BT_SDP_SUPPORTED_FEATURES(_features)
SDP Supported Features Attribute Declaration Macro.
Helper macro to declare supported features of a profile/protocol.
Parameters
• _features – Feature mask as 16bit unsigned integer.
BT_SDP_RECORD(_attrs)
SDP Service Declaration Macro.
Helper macro to declare a service.
Parameters
• _attrs – List of attributes for the service record.
Typedefs
Enums
enum [anonymous]
Helper enum to be used as return value of bt_sdp_discover_func_t. The value tells the
caller whether to continue with further pending actions or stop them.
Values:
enumerator BT_SDP_DISCOVER_UUID_STOP = 0
enumerator BT_SDP_DISCOVER_UUID_CONTINUE
enum bt_sdp_proto
Protocols to be asked about specific parameters.
Values:
Functions
struct bt_sdp_data_elem
#include <sdp.h> SDP Generic Data Element Value.
struct bt_sdp_attribute
#include <sdp.h> SDP Attribute Value.
struct bt_sdp_record
#include <sdp.h> SDP Service Record Value.
struct bt_sdp_client_result
#include <sdp.h> Generic SDP Client Query Result data holder.
struct bt_sdp_discover_params
#include <sdp.h> Main user structure used in SDP discovery of remote.
Public Members
bt_sdp_discover_func_t func
Discover callback to be called on resolved SDP record
API Reference
group bt_gatt_vcp
Volume Control Profile (VCP)
[Experimental] Users should note that the APIs can change as a part of ongoing development.
Defines
BT_VCP_VOL_REND_VOCS_CNT
BT_VCP_VOL_REND_AICS_CNT
BT_VCP_ERR_INVALID_COUNTER
Volume Control Service Error codes
BT_VCP_ERR_OP_NOT_SUPPORTED
BT_VCP_STATE_UNMUTED
Volume Control Service Mute Values
BT_VCP_STATE_MUTED
Functions
int bt_vcp_vol_rend_get_flags(void)
Get the Volume Control Service flags.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_vol_down(void)
Turn the volume down by one step on the server.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_vol_up(void)
Turn the volume up by one step on the server.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_unmute_vol_down(void)
Turn the volume down and unmute the server.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_unmute_vol_up(void)
Turn the volume up and unmute the server.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_set_vol(uint8_t volume)
Set the volume on the server.
Parameters
• volume – The absolute volume to set.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_unmute(void)
Unmute the server.
Returns
0 if success, errno on failure.
int bt_vcp_vol_rend_mute(void)
Mute the server.
Returns
0 if success, errno on failure.
int bt_vcp_vol_ctlr_cb_register(struct bt_vcp_vol_ctlr_cb *cb)
Registers the callbacks used by the Volume Controller.
Parameters
• cb – The callback structure.
Returns
0 if success, errno on failure.
int bt_vcp_vol_ctlr_discover(struct bt_conn *conn, struct bt_vcp_vol_ctlr **vol_ctlr)
Discover Volume Control Service and included services.
This will start a GATT discovery and set up handles and subscriptions. This shall be
called once before any other actions can be executed for the peer device, and the
bt_vcp_vol_ctlr_cb::discover callback will notify when it is possible to start remote operations.
struct bt_vcp_vol_rend_register_param
#include <vcp.h> Register structure for Volume Control Service
Public Members
uint8_t step
Initial step size (1-255)
uint8_t mute
Initial mute state (0-1)
uint8_t volume
Initial volume level (0-255)
struct bt_vcp_included
#include <vcp.h> Volume Control Service included services.
Used to represent the Volume Control Service included service instances, for either a client
or a server. The instance pointers either represent local server instances, or remote service
instances.
Public Members
uint8_t vocs_cnt
Number of Volume Offset Control Service instances
uint8_t aics_cnt
Number of Audio Input Control Service instances
struct bt_vcp_vol_rend_cb
#include <vcp.h>
Public Members
struct bt_vcp_vol_ctlr_cb
#include <vcp.h>
Public Members
void (*state)(struct bt_vcp_vol_ctlr *vol_ctlr, int err, uint8_t volume, uint8_t mute)
Callback function for Volume Control Profile volume state.
Called when the value is remotely read as the Volume Controller. Called if the value
is changed by either the Volume Renderer or Volume Controller, and notified to the
Volume Controller.
Param vol_ctlr
Volume Controller instance pointer.
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param volume
The volume of the Volume Renderer.
Param mute
The mute setting of the Volume Renderer.
void (*flags)(struct bt_vcp_vol_ctlr *vol_ctlr, int err, uint8_t flags)
Callback function for Volume Control Profile volume flags.
Param vol_ctlr
Volume Controller instance pointer.
Param err
Error value. 0 on success, GATT error on positive value or errno on negative
value.
Param flags
The flags of the Volume Renderer.
API Reference
group bt_uuid
UUIDs.
Defines
BT_UUID_SIZE_16
Size in octets of a 16-bit UUID
BT_UUID_SIZE_32
Size in octets of a 32-bit UUID
BT_UUID_SIZE_128
Size in octets of a 128-bit UUID
BT_UUID_INIT_16(value)
Initialize a 16-bit UUID.
Parameters
• value – 16-bit UUID value in host endianness.
BT_UUID_INIT_32(value)
Initialize a 32-bit UUID.
Parameters
• value – 32-bit UUID value in host endianness.
BT_UUID_INIT_128(value...)
Initialize a 128-bit UUID.
Parameters
• value – 128-bit UUID array values in little-endian format. Can be combined
with BT_UUID_128_ENCODE to initialize a UUID from the readable form of
UUIDs.
BT_UUID_DECLARE_16(value)
Helper to declare a 16-bit UUID inline.
Parameters
• value – 16-bit UUID value in host endianness.
Returns
Pointer to a generic UUID.
BT_UUID_DECLARE_32(value)
Helper to declare a 32-bit UUID inline.
Parameters
• value – 32-bit UUID value in host endianness.
Returns
Pointer to a generic UUID.
BT_UUID_DECLARE_128(value...)
Helper to declare a 128-bit UUID inline.
Parameters
• value – 128-bit UUID array values in little-endian format. Can be combined
with BT_UUID_128_ENCODE to declare a UUID from the readable form of
UUIDs.
Returns
Pointer to a generic UUID.
BT_UUID_16(__u)
Helper macro to access the 16-bit UUID from a generic UUID.
BT_UUID_32(__u)
Helper macro to access the 32-bit UUID from a generic UUID.
BT_UUID_128(__u)
Helper macro to access the 128-bit UUID from a generic UUID.
BT_UUID_128_ENCODE(w32, w1, w2, w3, w48)
Encode 128 bit UUID into array values in little-endian format.
Helper macro to initialize a 128-bit UUID array value from the readable form of
UUIDs, or encode 128-bit UUID values into advertising data. Can be combined with
BT_UUID_DECLARE_128 to declare a 128-bit UUID.
Example of how to declare the UUID 6E400001-B5A3-F393-E0A9-E50E24DCCA9E
BT_UUID_DECLARE_128(
BT_UUID_128_ENCODE(0x6E400001, 0xB5A3, 0xF393, 0xE0A9, 0xE50E24DCCA9E))
BT_DATA_BYTES(BT_DATA_UUID128_ALL,
BT_UUID_128_ENCODE(0x6E400001, 0xB5A3, 0xF393, 0xE0A9, 0xE50E24DCCA9E))
Parameters
• w32 – First part of the UUID (32 bits)
• w1 – Second part of the UUID (16 bits)
• w2 – Third part of the UUID (16 bits)
• w3 – Fourth part of the UUID (16 bits)
• w48 – Fifth part of the UUID (48 bits)
Returns
The comma separated values for UUID 128 initializer that may be used directly
as an argument for BT_UUID_INIT_128 or BT_UUID_DECLARE_128.
BT_UUID_16_ENCODE(w16)
Encode 16-bit UUID into array values in little-endian format.
Helper macro to encode 16-bit UUID values into advertising data.
Example of how to encode the UUID 0x180a into advertising data.
BT_DATA_BYTES(BT_DATA_UUID16_ALL, BT_UUID_16_ENCODE(0x180a))
Parameters
• w16 – UUID value (16-bits)
Returns
The comma separated values for UUID 16 value that may be used directly as an
argument for BT_DATA_BYTES.
BT_UUID_32_ENCODE(w32)
Encode 32-bit UUID into array values in little-endian format.
Helper macro to encode 32-bit UUID values into advertising data.
Example of how to encode the UUID 0x180a01af into advertising data.
BT_DATA_BYTES(BT_DATA_UUID32_ALL, BT_UUID_32_ENCODE(0x180a01af))
Parameters
• w32 – UUID value (32-bits)
Returns
The comma separated values for UUID 32 value that may be used directly as an
argument for BT_DATA_BYTES.
BT_UUID_STR_LEN
Recommended length of user string buffer for Bluetooth UUID.
The recommended length guarantees that the output of UUID conversion will not lose
valuable information about the UUID being processed. If the length of the UUID is known,
the string can be shorter.
BT_UUID_GAP_VAL
Generic Access UUID value.
BT_UUID_GAP
Generic Access.
BT_UUID_GATT_VAL
Generic Attribute UUID value.
BT_UUID_GATT
Generic Attribute.
BT_UUID_IAS_VAL
Immediate Alert Service UUID value.
BT_UUID_IAS
Immediate Alert Service.
BT_UUID_LLS_VAL
Link Loss Service UUID value.
BT_UUID_LLS
Link Loss Service.
BT_UUID_TPS_VAL
Tx Power Service UUID value.
BT_UUID_TPS
Tx Power Service.
BT_UUID_CTS_VAL
Current Time Service UUID value.
BT_UUID_CTS
Current Time Service.
BT_UUID_RTUS_VAL
Reference Time Update Service UUID value.
BT_UUID_RTUS
Reference Time Update Service.
BT_UUID_NDSTS_VAL
Next DST Change Service UUID value.
BT_UUID_NDSTS
Next DST Change Service.
BT_UUID_GS_VAL
Glucose Service UUID value.
BT_UUID_GS
Glucose Service.
BT_UUID_HTS_VAL
Health Thermometer Service UUID value.
BT_UUID_HTS
Health Thermometer Service.
BT_UUID_DIS_VAL
Device Information Service UUID value.
BT_UUID_DIS
Device Information Service.
BT_UUID_NAS_VAL
Network Availability Service UUID value.
BT_UUID_NAS
Network Availability Service.
BT_UUID_WDS_VAL
Watchdog Service UUID value.
BT_UUID_WDS
Watchdog Service.
BT_UUID_HRS_VAL
Heart Rate Service UUID value.
BT_UUID_HRS
Heart Rate Service.
BT_UUID_PAS_VAL
Phone Alert Service UUID value.
BT_UUID_PAS
Phone Alert Service.
BT_UUID_BAS_VAL
Battery Service UUID value.
BT_UUID_BAS
Battery Service.
BT_UUID_BPS_VAL
Blood Pressure Service UUID value.
BT_UUID_BPS
Blood Pressure Service.
BT_UUID_ANS_VAL
Alert Notification Service UUID value.
BT_UUID_ANS
Alert Notification Service.
BT_UUID_HIDS_VAL
HID Service UUID value.
BT_UUID_HIDS
HID Service.
BT_UUID_SPS_VAL
Scan Parameters Service UUID value.
BT_UUID_SPS
Scan Parameters Service.
BT_UUID_RSCS_VAL
Running Speed and Cadence Service UUID value.
BT_UUID_RSCS
Running Speed and Cadence Service.
BT_UUID_AIOS_VAL
Automation IO Service UUID value.
BT_UUID_AIOS
Automation IO Service.
BT_UUID_CSC_VAL
Cycling Speed and Cadence Service UUID value.
BT_UUID_CSC
Cycling Speed and Cadence Service.
BT_UUID_CPS_VAL
Cycling Power Service UUID value.
BT_UUID_CPS
Cycling Power Service.
BT_UUID_LNS_VAL
Location and Navigation Service UUID value.
BT_UUID_LNS
Location and Navigation Service.
BT_UUID_ESS_VAL
Environmental Sensing Service UUID value.
BT_UUID_ESS
Environmental Sensing Service.
BT_UUID_BCS_VAL
Body Composition Service UUID value.
BT_UUID_BCS
Body Composition Service.
BT_UUID_UDS_VAL
User Data Service UUID value.
BT_UUID_UDS
User Data Service.
BT_UUID_WSS_VAL
Weight Scale Service UUID value.
BT_UUID_WSS
Weight Scale Service.
BT_UUID_BMS_VAL
Bond Management Service UUID value.
BT_UUID_BMS
Bond Management Service.
BT_UUID_CGMS_VAL
Continuous Glucose Monitoring Service UUID value.
BT_UUID_CGMS
Continuous Glucose Monitoring Service.
BT_UUID_IPSS_VAL
IP Support Service UUID value.
BT_UUID_IPSS
IP Support Service.
BT_UUID_IPS_VAL
Indoor Positioning Service UUID value.
BT_UUID_IPS
Indoor Positioning Service.
BT_UUID_POS_VAL
Pulse Oximeter Service UUID value.
BT_UUID_POS
Pulse Oximeter Service.
BT_UUID_HPS_VAL
HTTP Proxy Service UUID value.
BT_UUID_HPS
HTTP Proxy Service.
BT_UUID_TDS_VAL
Transport Discovery Service UUID value.
BT_UUID_TDS
Transport Discovery Service.
BT_UUID_OTS_VAL
Object Transfer Service UUID value.
BT_UUID_OTS
Object Transfer Service.
BT_UUID_FMS_VAL
Fitness Machine Service UUID value.
BT_UUID_FMS
Fitness Machine Service.
BT_UUID_MESH_PROV_VAL
Mesh Provisioning Service UUID value.
BT_UUID_MESH_PROV
Mesh Provisioning Service.
BT_UUID_MESH_PROXY_VAL
Mesh Proxy Service UUID value.
BT_UUID_MESH_PROXY
Mesh Proxy Service.
BT_UUID_MESH_PROXY_SOLICITATION_VAL
Proxy Solicitation UUID value.
BT_UUID_RCSRV_VAL
Reconnection Configuration Service UUID value.
BT_UUID_RCSRV
Reconnection Configuration Service.
BT_UUID_IDS_VAL
Insulin Delivery Service UUID value.
BT_UUID_IDS
Insulin Delivery Service.
BT_UUID_BSS_VAL
Binary Sensor Service UUID value.
BT_UUID_BSS
Binary Sensor Service.
BT_UUID_ECS_VAL
Emergency Configuration Service UUID value.
BT_UUID_ECS
Emergency Configuration Service.
BT_UUID_ACLS_VAL
Authorization Control Service UUID value.
BT_UUID_ACLS
Authorization Control Service.
BT_UUID_PAMS_VAL
Physical Activity Monitor Service UUID value.
BT_UUID_PAMS
Physical Activity Monitor Service.
BT_UUID_AICS_VAL
Audio Input Control Service UUID value.
BT_UUID_AICS
Audio Input Control Service.
BT_UUID_VCS_VAL
Volume Control Service UUID value.
BT_UUID_VCS
Volume Control Service.
BT_UUID_VOCS_VAL
Volume Offset Control Service UUID value.
BT_UUID_VOCS
Volume Offset Control Service.
BT_UUID_CSIS_VAL
Coordinated Set Identification Service UUID value.
BT_UUID_CSIS
Coordinated Set Identification Service.
BT_UUID_DTS_VAL
Device Time Service UUID value.
BT_UUID_DTS
Device Time Service.
BT_UUID_MCS_VAL
Media Control Service UUID value.
BT_UUID_MCS
Media Control Service.
BT_UUID_GMCS_VAL
Generic Media Control Service UUID value.
BT_UUID_GMCS
Generic Media Control Service.
BT_UUID_CTES_VAL
Constant Tone Extension Service UUID value.
BT_UUID_CTES
Constant Tone Extension Service.
BT_UUID_TBS_VAL
Telephone Bearer Service UUID value.
BT_UUID_TBS
Telephone Bearer Service.
BT_UUID_GTBS_VAL
Generic Telephone Bearer Service UUID value.
BT_UUID_GTBS
Generic Telephone Bearer Service.
BT_UUID_MICS_VAL
Microphone Control Service UUID value.
BT_UUID_MICS
Microphone Control Service.
BT_UUID_ASCS_VAL
Audio Stream Control Service UUID value.
BT_UUID_ASCS
Audio Stream Control Service.
BT_UUID_BASS_VAL
Broadcast Audio Scan Service UUID value.
BT_UUID_BASS
Broadcast Audio Scan Service.
BT_UUID_PACS_VAL
Published Audio Capabilities Service UUID value.
BT_UUID_PACS
Published Audio Capabilities Service.
BT_UUID_BASIC_AUDIO_VAL
Basic Audio Announcement Service UUID value.
BT_UUID_BASIC_AUDIO
Basic Audio Announcement Service.
BT_UUID_BROADCAST_AUDIO_VAL
Broadcast Audio Announcement Service UUID value.
BT_UUID_BROADCAST_AUDIO
Broadcast Audio Announcement Service.
BT_UUID_CAS_VAL
Common Audio Service UUID value.
BT_UUID_CAS
Common Audio Service.
BT_UUID_HAS_VAL
Hearing Access Service UUID value.
BT_UUID_HAS
Hearing Access Service.
BT_UUID_TMAS_VAL
Telephony and Media Audio Service UUID value.
BT_UUID_TMAS
Telephony and Media Audio Service.
BT_UUID_PBA_VAL
Public Broadcast Announcement Service UUID value.
BT_UUID_PBA
Public Broadcast Announcement Service.
BT_UUID_GATT_PRIMARY_VAL
GATT Primary Service UUID value.
BT_UUID_GATT_PRIMARY
GATT Primary Service.
BT_UUID_GATT_SECONDARY_VAL
GATT Secondary Service UUID value.
BT_UUID_GATT_SECONDARY
GATT Secondary Service.
BT_UUID_GATT_INCLUDE_VAL
GATT Include Service UUID value.
BT_UUID_GATT_INCLUDE
GATT Include Service.
BT_UUID_GATT_CHRC_VAL
GATT Characteristic UUID value.
BT_UUID_GATT_CHRC
GATT Characteristic.
BT_UUID_GATT_CEP_VAL
GATT Characteristic Extended Properties UUID value.
BT_UUID_GATT_CEP
GATT Characteristic Extended Properties.
BT_UUID_GATT_CUD_VAL
GATT Characteristic User Description UUID value.
BT_UUID_GATT_CUD
GATT Characteristic User Description.
BT_UUID_GATT_CCC_VAL
GATT Client Characteristic Configuration UUID value.
BT_UUID_GATT_CCC
GATT Client Characteristic Configuration.
BT_UUID_GATT_SCC_VAL
GATT Server Characteristic Configuration UUID value.
BT_UUID_GATT_SCC
GATT Server Characteristic Configuration.
BT_UUID_GATT_CPF_VAL
GATT Characteristic Presentation Format UUID value.
BT_UUID_GATT_CPF
GATT Characteristic Presentation Format.
BT_UUID_GATT_CAF_VAL
GATT Characteristic Aggregated Format UUID value.
BT_UUID_GATT_CAF
GATT Characteristic Aggregated Format.
BT_UUID_VALID_RANGE_VAL
Valid Range Descriptor UUID value.
BT_UUID_VALID_RANGE
Valid Range Descriptor.
BT_UUID_HIDS_EXT_REPORT_VAL
HID External Report Descriptor UUID value.
BT_UUID_HIDS_EXT_REPORT
HID External Report Descriptor.
BT_UUID_HIDS_REPORT_REF_VAL
HID Report Reference Descriptor UUID value.
BT_UUID_HIDS_REPORT_REF
HID Report Reference Descriptor.
BT_UUID_VAL_TRIGGER_SETTING_VAL
Value Trigger Setting Descriptor UUID value.
BT_UUID_VAL_TRIGGER_SETTING
Value Trigger Setting Descriptor.
BT_UUID_ES_CONFIGURATION_VAL
Environmental Sensing Configuration Descriptor UUID value.
BT_UUID_ES_CONFIGURATION
Environmental Sensing Configuration Descriptor.
BT_UUID_ES_MEASUREMENT_VAL
Environmental Sensing Measurement Descriptor UUID value.
BT_UUID_ES_MEASUREMENT
Environmental Sensing Measurement Descriptor.
BT_UUID_ES_TRIGGER_SETTING_VAL
Environmental Sensing Trigger Setting Descriptor UUID value.
BT_UUID_ES_TRIGGER_SETTING
Environmental Sensing Trigger Setting Descriptor.
BT_UUID_TM_TRIGGER_SETTING_VAL
Time Trigger Setting Descriptor UUID value.
BT_UUID_TM_TRIGGER_SETTING
Time Trigger Setting Descriptor.
BT_UUID_GAP_DEVICE_NAME_VAL
GAP Characteristic Device Name UUID value.
BT_UUID_GAP_DEVICE_NAME
GAP Characteristic Device Name.
BT_UUID_GAP_APPEARANCE_VAL
GAP Characteristic Appearance UUID value.
BT_UUID_GAP_APPEARANCE
GAP Characteristic Appearance.
BT_UUID_GAP_PPF_VAL
GAP Characteristic Peripheral Privacy Flag UUID value.
BT_UUID_GAP_PPF
GAP Characteristic Peripheral Privacy Flag.
BT_UUID_GAP_RA_VAL
GAP Characteristic Reconnection Address UUID value.
BT_UUID_GAP_RA
GAP Characteristic Reconnection Address.
BT_UUID_GAP_PPCP_VAL
GAP Characteristic Peripheral Preferred Connection Parameters UUID value.
BT_UUID_GAP_PPCP
GAP Characteristic Peripheral Preferred Connection Parameters.
BT_UUID_GATT_SC_VAL
GATT Characteristic Service Changed UUID value.
BT_UUID_GATT_SC
GATT Characteristic Service Changed.
BT_UUID_ALERT_LEVEL_VAL
GATT Characteristic Alert Level UUID value.
BT_UUID_ALERT_LEVEL
GATT Characteristic Alert Level.
BT_UUID_TPS_TX_POWER_LEVEL_VAL
TPS Characteristic Tx Power Level UUID value.
BT_UUID_TPS_TX_POWER_LEVEL
TPS Characteristic Tx Power Level.
BT_UUID_GATT_DT_VAL
GATT Characteristic Date Time UUID value.
BT_UUID_GATT_DT
GATT Characteristic Date Time.
BT_UUID_GATT_DW_VAL
GATT Characteristic Day of Week UUID value.
BT_UUID_GATT_DW
GATT Characteristic Day of Week.
BT_UUID_GATT_DDT_VAL
GATT Characteristic Day Date Time UUID value.
BT_UUID_GATT_DDT
GATT Characteristic Day Date Time.
BT_UUID_GATT_ET256_VAL
GATT Characteristic Exact Time 256 UUID value.
BT_UUID_GATT_ET256
GATT Characteristic Exact Time 256.
BT_UUID_GATT_DST_VAL
GATT Characteristic DST Offset UUID value.
BT_UUID_GATT_DST
GATT Characteristic DST Offset.
BT_UUID_GATT_TZ_VAL
GATT Characteristic Time Zone UUID value.
BT_UUID_GATT_TZ
GATT Characteristic Time Zone.
BT_UUID_GATT_LTI_VAL
GATT Characteristic Local Time Information UUID value.
BT_UUID_GATT_LTI
GATT Characteristic Local Time Information.
BT_UUID_GATT_TDST_VAL
GATT Characteristic Time with DST UUID value.
BT_UUID_GATT_TDST
GATT Characteristic Time with DST.
BT_UUID_GATT_TA_VAL
GATT Characteristic Time Accuracy UUID value.
BT_UUID_GATT_TA
GATT Characteristic Time Accuracy.
BT_UUID_GATT_TS_VAL
GATT Characteristic Time Source UUID value.
BT_UUID_GATT_TS
GATT Characteristic Time Source.
BT_UUID_GATT_RTI_VAL
GATT Characteristic Reference Time Information UUID value.
BT_UUID_GATT_RTI
GATT Characteristic Reference Time Information.
BT_UUID_GATT_TUCP_VAL
GATT Characteristic Time Update Control Point UUID value.
BT_UUID_GATT_TUCP
GATT Characteristic Time Update Control Point.
BT_UUID_GATT_TUS_VAL
GATT Characteristic Time Update State UUID value.
BT_UUID_GATT_TUS
GATT Characteristic Time Update State.
BT_UUID_GATT_GM_VAL
GATT Characteristic Glucose Measurement UUID value.
BT_UUID_GATT_GM
GATT Characteristic Glucose Measurement.
BT_UUID_BAS_BATTERY_LEVEL_VAL
BAS Characteristic Battery Level UUID value.
BT_UUID_BAS_BATTERY_LEVEL
BAS Characteristic Battery Level.
BT_UUID_BAS_BATTERY_POWER_STATE_VAL
BAS Characteristic Battery Power State UUID value.
BT_UUID_BAS_BATTERY_POWER_STATE
BAS Characteristic Battery Power State.
BT_UUID_BAS_BATTERY_LEVEL_STATE_VAL
BAS Characteristic Battery Level State UUID value.
BT_UUID_BAS_BATTERY_LEVEL_STATE
BAS Characteristic Battery Level State.
BT_UUID_HTS_MEASUREMENT_VAL
HTS Characteristic Temperature Measurement UUID value.
BT_UUID_HTS_MEASUREMENT
HTS Characteristic Temperature Measurement.
BT_UUID_HTS_TEMP_TYP_VAL
HTS Characteristic Temperature Type UUID value.
BT_UUID_HTS_TEMP_TYP
HTS Characteristic Temperature Type.
BT_UUID_HTS_TEMP_INT_VAL
HTS Characteristic Intermediate Temperature UUID value.
BT_UUID_HTS_TEMP_INT
HTS Characteristic Intermediate Temperature.
BT_UUID_HTS_TEMP_C_VAL
HTS Characteristic Temperature Celsius UUID value.
BT_UUID_HTS_TEMP_C
HTS Characteristic Temperature Celsius.
BT_UUID_HTS_TEMP_F_VAL
HTS Characteristic Temperature Fahrenheit UUID value.
BT_UUID_HTS_TEMP_F
HTS Characteristic Temperature Fahrenheit.
BT_UUID_HTS_INTERVAL_VAL
HTS Characteristic Measurement Interval UUID value.
BT_UUID_HTS_INTERVAL
HTS Characteristic Measurement Interval.
BT_UUID_HIDS_BOOT_KB_IN_REPORT_VAL
HID Characteristic Boot Keyboard Input Report UUID value.
BT_UUID_HIDS_BOOT_KB_IN_REPORT
HID Characteristic Boot Keyboard Input Report.
BT_UUID_DIS_SYSTEM_ID_VAL
DIS Characteristic System ID UUID value.
BT_UUID_DIS_SYSTEM_ID
DIS Characteristic System ID.
BT_UUID_DIS_MODEL_NUMBER_VAL
DIS Characteristic Model Number String UUID value.
BT_UUID_DIS_MODEL_NUMBER
DIS Characteristic Model Number String.
BT_UUID_DIS_SERIAL_NUMBER_VAL
DIS Characteristic Serial Number String UUID value.
BT_UUID_DIS_SERIAL_NUMBER
DIS Characteristic Serial Number String.
BT_UUID_DIS_FIRMWARE_REVISION_VAL
DIS Characteristic Firmware Revision String UUID value.
BT_UUID_DIS_FIRMWARE_REVISION
DIS Characteristic Firmware Revision String.
BT_UUID_DIS_HARDWARE_REVISION_VAL
DIS Characteristic Hardware Revision String UUID value.
BT_UUID_DIS_HARDWARE_REVISION
DIS Characteristic Hardware Revision String.
BT_UUID_DIS_SOFTWARE_REVISION_VAL
DIS Characteristic Software Revision String UUID value.
BT_UUID_DIS_SOFTWARE_REVISION
DIS Characteristic Software Revision String.
BT_UUID_DIS_MANUFACTURER_NAME_VAL
DIS Characteristic Manufacturer Name String UUID Value.
BT_UUID_DIS_MANUFACTURER_NAME
DIS Characteristic Manufacturer Name String.
BT_UUID_GATT_IEEE_RCDL_VAL
GATT Characteristic IEEE Regulatory Certification Data List UUID Value.
BT_UUID_GATT_IEEE_RCDL
GATT Characteristic IEEE Regulatory Certification Data List.
BT_UUID_CTS_CURRENT_TIME_VAL
CTS Characteristic Current Time UUID value.
BT_UUID_CTS_CURRENT_TIME
CTS Characteristic Current Time.
BT_UUID_MAGN_DECLINATION_VAL
Magnetic Declination Characteristic UUID value.
BT_UUID_MAGN_DECLINATION
Magnetic Declination Characteristic.
BT_UUID_GATT_LLAT_VAL
GATT Characteristic Legacy Latitude UUID Value.
BT_UUID_GATT_LLAT
GATT Characteristic Legacy Latitude.
BT_UUID_GATT_LLON_VAL
GATT Characteristic Legacy Longitude UUID Value.
BT_UUID_GATT_LLON
GATT Characteristic Legacy Longitude.
BT_UUID_GATT_POS_2D_VAL
GATT Characteristic Position 2D UUID Value.
BT_UUID_GATT_POS_2D
GATT Characteristic Position 2D.
BT_UUID_GATT_POS_3D_VAL
GATT Characteristic Position 3D UUID Value.
BT_UUID_GATT_POS_3D
GATT Characteristic Position 3D.
BT_UUID_GATT_SR_VAL
GATT Characteristic Scan Refresh UUID Value.
BT_UUID_GATT_SR
GATT Characteristic Scan Refresh.
BT_UUID_HIDS_BOOT_KB_OUT_REPORT_VAL
HID Boot Keyboard Output Report Characteristic UUID value.
BT_UUID_HIDS_BOOT_KB_OUT_REPORT
HID Boot Keyboard Output Report Characteristic.
BT_UUID_HIDS_BOOT_MOUSE_IN_REPORT_VAL
HID Boot Mouse Input Report Characteristic UUID value.
BT_UUID_HIDS_BOOT_MOUSE_IN_REPORT
HID Boot Mouse Input Report Characteristic.
BT_UUID_GATT_GMC_VAL
GATT Characteristic Glucose Measurement Context UUID Value.
BT_UUID_GATT_GMC
GATT Characteristic Glucose Measurement Context.
BT_UUID_GATT_BPM_VAL
GATT Characteristic Blood Pressure Measurement UUID Value.
BT_UUID_GATT_BPM
GATT Characteristic Blood Pressure Measurement.
BT_UUID_GATT_ICP_VAL
GATT Characteristic Intermediate Cuff Pressure UUID Value.
BT_UUID_GATT_ICP
GATT Characteristic Intermediate Cuff Pressure.
BT_UUID_HRS_MEASUREMENT_VAL
HRS Characteristic Heart Rate Measurement UUID value.
BT_UUID_HRS_MEASUREMENT
HRS Characteristic Heart Rate Measurement.
BT_UUID_HRS_BODY_SENSOR_VAL
HRS Characteristic Body Sensor Location UUID value.
BT_UUID_HRS_BODY_SENSOR
HRS Characteristic Body Sensor Location.
BT_UUID_HRS_CONTROL_POINT_VAL
HRS Characteristic Control Point UUID value.
BT_UUID_HRS_CONTROL_POINT
HRS Characteristic Control Point.
BT_UUID_GATT_REM_VAL
GATT Characteristic Removable UUID Value.
BT_UUID_GATT_REM
GATT Characteristic Removable.
BT_UUID_GATT_SRVREQ_VAL
GATT Characteristic Service Required UUID Value.
BT_UUID_GATT_SRVREQ
GATT Characteristic Service Required.
BT_UUID_GATT_SC_TEMP_C_VAL
GATT Characteristic Scientific Temperature in Celsius UUID Value.
BT_UUID_GATT_SC_TEMP_C
GATT Characteristic Scientific Temperature in Celsius.
BT_UUID_GATT_STRING_VAL
GATT Characteristic String UUID Value.
BT_UUID_GATT_STRING
GATT Characteristic String.
BT_UUID_GATT_NETA_VAL
GATT Characteristic Network Availability UUID Value.
BT_UUID_GATT_NETA
GATT Characteristic Network Availability.
BT_UUID_GATT_ALRTS_VAL
GATT Characteristic Alert Status UUID Value.
BT_UUID_GATT_ALRTS
GATT Characteristic Alert Status.
BT_UUID_GATT_RCP_VAL
GATT Characteristic Ringer Control Point UUID Value.
BT_UUID_GATT_RCP
GATT Characteristic Ringer Control Point.
BT_UUID_GATT_RS_VAL
GATT Characteristic Ringer Setting UUID Value.
BT_UUID_GATT_RS
GATT Characteristic Ringer Setting.
BT_UUID_GATT_ALRTCID_MASK_VAL
GATT Characteristic Alert Category ID Bit Mask UUID Value.
BT_UUID_GATT_ALRTCID_MASK
GATT Characteristic Alert Category ID Bit Mask.
BT_UUID_GATT_ALRTCID_VAL
GATT Characteristic Alert Category ID UUID Value.
BT_UUID_GATT_ALRTCID
GATT Characteristic Alert Category ID.
BT_UUID_GATT_ALRTNCP_VAL
GATT Characteristic Alert Notification Control Point UUID Value.
BT_UUID_GATT_ALRTNCP
GATT Characteristic Alert Notification Control Point.
BT_UUID_GATT_UALRTS_VAL
GATT Characteristic Unread Alert Status UUID Value.
BT_UUID_GATT_UALRTS
GATT Characteristic Unread Alert Status.
BT_UUID_GATT_NALRT_VAL
GATT Characteristic New Alert UUID Value.
BT_UUID_GATT_NALRT
GATT Characteristic New Alert.
BT_UUID_GATT_SNALRTC_VAL
GATT Characteristic Supported New Alert Category UUID Value.
BT_UUID_GATT_SNALRTC
GATT Characteristic Supported New Alert Category.
BT_UUID_GATT_SUALRTC_VAL
GATT Characteristic Supported Unread Alert Category UUID Value.
BT_UUID_GATT_SUALRTC
GATT Characteristic Supported Unread Alert Category.
BT_UUID_GATT_BPF_VAL
GATT Characteristic Blood Pressure Feature UUID Value.
BT_UUID_GATT_BPF
GATT Characteristic Blood Pressure Feature.
BT_UUID_HIDS_INFO_VAL
HID Information Characteristic UUID value.
BT_UUID_HIDS_INFO
HID Information Characteristic.
BT_UUID_HIDS_REPORT_MAP_VAL
HID Report Map Characteristic UUID value.
BT_UUID_HIDS_REPORT_MAP
HID Report Map Characteristic.
BT_UUID_HIDS_CTRL_POINT_VAL
HID Control Point Characteristic UUID value.
BT_UUID_HIDS_CTRL_POINT
HID Control Point Characteristic.
BT_UUID_HIDS_REPORT_VAL
HID Report Characteristic UUID value.
BT_UUID_HIDS_REPORT
HID Report Characteristic.
BT_UUID_HIDS_PROTOCOL_MODE_VAL
HID Protocol Mode Characteristic UUID value.
BT_UUID_HIDS_PROTOCOL_MODE
HID Protocol Mode Characteristic.
BT_UUID_GATT_SIW_VAL
GATT Characteristic Scan Interval Window UUID Value.
BT_UUID_GATT_SIW
GATT Characteristic Scan Interval Window.
BT_UUID_DIS_PNP_ID_VAL
DIS Characteristic PnP ID UUID value.
BT_UUID_DIS_PNP_ID
DIS Characteristic PnP ID.
BT_UUID_GATT_GF_VAL
GATT Characteristic Glucose Feature UUID Value.
BT_UUID_GATT_GF
GATT Characteristic Glucose Feature.
BT_UUID_RECORD_ACCESS_CONTROL_POINT_VAL
Record Access Control Point Characteristic UUID value.
BT_UUID_RECORD_ACCESS_CONTROL_POINT
Record Access Control Point.
BT_UUID_RSC_MEASUREMENT_VAL
RSC Measurement Characteristic UUID value.
BT_UUID_RSC_MEASUREMENT
RSC Measurement Characteristic.
BT_UUID_RSC_FEATURE_VAL
RSC Feature Characteristic UUID value.
BT_UUID_RSC_FEATURE
RSC Feature Characteristic.
BT_UUID_SC_CONTROL_POINT_VAL
SC Control Point Characteristic UUID value.
BT_UUID_SC_CONTROL_POINT
SC Control Point Characteristic.
BT_UUID_GATT_DI_VAL
GATT Characteristic Digital Input UUID Value.
BT_UUID_GATT_DI
GATT Characteristic Digital Input.
BT_UUID_GATT_DO_VAL
GATT Characteristic Digital Output UUID Value.
BT_UUID_GATT_DO
GATT Characteristic Digital Output.
BT_UUID_GATT_AI_VAL
GATT Characteristic Analog Input UUID Value.
BT_UUID_GATT_AI
GATT Characteristic Analog Input.
BT_UUID_GATT_AO_VAL
GATT Characteristic Analog Output UUID Value.
BT_UUID_GATT_AO
GATT Characteristic Analog Output.
BT_UUID_GATT_AGGR_VAL
GATT Characteristic Aggregate UUID Value.
BT_UUID_GATT_AGGR
GATT Characteristic Aggregate.
BT_UUID_CSC_MEASUREMENT_VAL
CSC Measurement Characteristic UUID value.
BT_UUID_CSC_MEASUREMENT
CSC Measurement Characteristic.
BT_UUID_CSC_FEATURE_VAL
CSC Feature Characteristic UUID value.
BT_UUID_CSC_FEATURE
CSC Feature Characteristic.
BT_UUID_SENSOR_LOCATION_VAL
Sensor Location Characteristic UUID value.
BT_UUID_SENSOR_LOCATION
Sensor Location Characteristic.
BT_UUID_GATT_PLX_SCM_VAL
GATT Characteristic PLX Spot-Check Measurement UUID Value.
BT_UUID_GATT_PLX_SCM
GATT Characteristic PLX Spot-Check Measurement.
BT_UUID_GATT_PLX_CM_VAL
GATT Characteristic PLX Continuous Measurement UUID Value.
BT_UUID_GATT_PLX_CM
GATT Characteristic PLX Continuous Measurement.
BT_UUID_GATT_PLX_F_VAL
GATT Characteristic PLX Features UUID Value.
BT_UUID_GATT_PLX_F
GATT Characteristic PLX Features.
BT_UUID_GATT_POPE_VAL
GATT Characteristic Pulse Oximetry Pulsatile Event UUID Value.
BT_UUID_GATT_POPE
GATT Characteristic Pulse Oximetry Pulsatile Event.
BT_UUID_GATT_POCP_VAL
GATT Characteristic Pulse Oximetry Control Point UUID Value.
BT_UUID_GATT_POCP
GATT Characteristic Pulse Oximetry Control Point.
BT_UUID_GATT_CPS_CPM_VAL
GATT Characteristic Cycling Power Measurement UUID Value.
BT_UUID_GATT_CPS_CPM
GATT Characteristic Cycling Power Measurement.
BT_UUID_GATT_CPS_CPV_VAL
GATT Characteristic Cycling Power Vector UUID Value.
BT_UUID_GATT_CPS_CPV
GATT Characteristic Cycling Power Vector.
BT_UUID_GATT_CPS_CPF_VAL
GATT Characteristic Cycling Power Feature UUID Value.
BT_UUID_GATT_CPS_CPF
GATT Characteristic Cycling Power Feature.
BT_UUID_GATT_CPS_CPCP_VAL
GATT Characteristic Cycling Power Control Point UUID Value.
BT_UUID_GATT_CPS_CPCP
GATT Characteristic Cycling Power Control Point.
BT_UUID_GATT_LOC_SPD_VAL
GATT Characteristic Location and Speed UUID Value.
BT_UUID_GATT_LOC_SPD
GATT Characteristic Location and Speed.
BT_UUID_GATT_NAV_VAL
GATT Characteristic Navigation UUID Value.
BT_UUID_GATT_NAV
GATT Characteristic Navigation.
BT_UUID_GATT_PQ_VAL
GATT Characteristic Position Quality UUID Value.
BT_UUID_GATT_PQ
GATT Characteristic Position Quality.
BT_UUID_GATT_LNF_VAL
GATT Characteristic LN Feature UUID Value.
BT_UUID_GATT_LNF
GATT Characteristic LN Feature.
BT_UUID_GATT_LNCP_VAL
GATT Characteristic LN Control Point UUID Value.
BT_UUID_GATT_LNCP
GATT Characteristic LN Control Point.
BT_UUID_ELEVATION_VAL
Elevation Characteristic UUID value.
BT_UUID_ELEVATION
Elevation Characteristic.
BT_UUID_PRESSURE_VAL
Pressure Characteristic UUID value.
BT_UUID_PRESSURE
Pressure Characteristic.
BT_UUID_TEMPERATURE_VAL
Temperature Characteristic UUID value.
BT_UUID_TEMPERATURE
Temperature Characteristic.
BT_UUID_HUMIDITY_VAL
Humidity Characteristic UUID value.
BT_UUID_HUMIDITY
Humidity Characteristic.
BT_UUID_TRUE_WIND_SPEED_VAL
True Wind Speed Characteristic UUID value.
BT_UUID_TRUE_WIND_SPEED
True Wind Speed Characteristic.
BT_UUID_TRUE_WIND_DIR_VAL
True Wind Direction Characteristic UUID value.
BT_UUID_TRUE_WIND_DIR
True Wind Direction Characteristic.
BT_UUID_APPARENT_WIND_SPEED_VAL
Apparent Wind Speed Characteristic UUID value.
BT_UUID_APPARENT_WIND_SPEED
Apparent Wind Speed Characteristic.
BT_UUID_APPARENT_WIND_DIR_VAL
Apparent Wind Direction Characteristic UUID value.
BT_UUID_APPARENT_WIND_DIR
Apparent Wind Direction Characteristic.
BT_UUID_GUST_FACTOR_VAL
Gust Factor Characteristic UUID value.
BT_UUID_GUST_FACTOR
Gust Factor Characteristic.
BT_UUID_POLLEN_CONCENTRATION_VAL
Pollen Concentration Characteristic UUID value.
BT_UUID_POLLEN_CONCENTRATION
Pollen Concentration Characteristic.
BT_UUID_UV_INDEX_VAL
UV Index Characteristic UUID value.
BT_UUID_UV_INDEX
UV Index Characteristic.
BT_UUID_IRRADIANCE_VAL
Irradiance Characteristic UUID value.
BT_UUID_IRRADIANCE
Irradiance Characteristic.
BT_UUID_RAINFALL_VAL
Rainfall Characteristic UUID value.
BT_UUID_RAINFALL
Rainfall Characteristic.
BT_UUID_WIND_CHILL_VAL
Wind Chill Characteristic UUID value.
BT_UUID_WIND_CHILL
Wind Chill Characteristic.
BT_UUID_HEAT_INDEX_VAL
Heat Index Characteristic UUID value.
BT_UUID_HEAT_INDEX
Heat Index Characteristic.
BT_UUID_DEW_POINT_VAL
Dew Point Characteristic UUID value.
BT_UUID_DEW_POINT
Dew Point Characteristic.
BT_UUID_GATT_TREND_VAL
GATT Characteristic Trend UUID Value.
BT_UUID_GATT_TREND
GATT Characteristic Trend.
BT_UUID_DESC_VALUE_CHANGED_VAL
Descriptor Value Changed Characteristic UUID value.
BT_UUID_DESC_VALUE_CHANGED
Descriptor Value Changed Characteristic.
BT_UUID_GATT_AEHRLL_VAL
GATT Characteristic Aerobic Heart Rate Lower Limit UUID Value.
BT_UUID_GATT_AEHRLL
GATT Characteristic Aerobic Heart Rate Lower Limit.
BT_UUID_GATT_AETHR_VAL
GATT Characteristic Aerobic Threshold UUID Value.
BT_UUID_GATT_AETHR
GATT Characteristic Aerobic Threshold.
BT_UUID_GATT_AGE_VAL
GATT Characteristic Age UUID Value.
BT_UUID_GATT_AGE
GATT Characteristic Age.
BT_UUID_GATT_ANHRLL_VAL
GATT Characteristic Anaerobic Heart Rate Lower Limit UUID Value.
BT_UUID_GATT_ANHRLL
GATT Characteristic Anaerobic Heart Rate Lower Limit.
BT_UUID_GATT_ANHRUL_VAL
GATT Characteristic Anaerobic Heart Rate Upper Limit UUID Value.
BT_UUID_GATT_ANHRUL
GATT Characteristic Anaerobic Heart Rate Upper Limit.
BT_UUID_GATT_ANTHR_VAL
GATT Characteristic Anaerobic Threshold UUID Value.
BT_UUID_GATT_ANTHR
GATT Characteristic Anaerobic Threshold.
BT_UUID_GATT_AEHRUL_VAL
GATT Characteristic Aerobic Heart Rate Upper Limit UUID Value.
BT_UUID_GATT_AEHRUL
GATT Characteristic Aerobic Heart Rate Upper Limit.
BT_UUID_GATT_DATE_BIRTH_VAL
GATT Characteristic Date of Birth UUID Value.
BT_UUID_GATT_DATE_BIRTH
GATT Characteristic Date of Birth.
BT_UUID_GATT_DATE_THRASS_VAL
GATT Characteristic Date of Threshold Assessment UUID Value.
BT_UUID_GATT_DATE_THRASS
GATT Characteristic Date of Threshold Assessment.
BT_UUID_GATT_EMAIL_VAL
GATT Characteristic Email Address UUID Value.
BT_UUID_GATT_EMAIL
GATT Characteristic Email Address.
BT_UUID_GATT_FBHRLL_VAL
GATT Characteristic Fat Burn Heart Rate Lower Limit UUID Value.
BT_UUID_GATT_FBHRLL
GATT Characteristic Fat Burn Heart Rate Lower Limit.
BT_UUID_GATT_FBHRUL_VAL
GATT Characteristic Fat Burn Heart Rate Upper Limit UUID Value.
BT_UUID_GATT_FBHRUL
GATT Characteristic Fat Burn Heart Rate Upper Limit.
BT_UUID_GATT_FIRST_NAME_VAL
GATT Characteristic First Name UUID Value.
BT_UUID_GATT_FIRST_NAME
GATT Characteristic First Name.
BT_UUID_GATT_5ZHRL_VAL
GATT Characteristic Five Zone Heart Rate Limits UUID Value.
BT_UUID_GATT_5ZHRL
GATT Characteristic Five Zone Heart Rate Limits.
BT_UUID_GATT_GENDER_VAL
GATT Characteristic Gender UUID Value.
BT_UUID_GATT_GENDER
GATT Characteristic Gender.
BT_UUID_GATT_HR_MAX_VAL
GATT Characteristic Heart Rate Max UUID Value.
BT_UUID_GATT_HR_MAX
GATT Characteristic Heart Rate Max.
BT_UUID_GATT_HEIGHT_VAL
GATT Characteristic Height UUID Value.
BT_UUID_GATT_HEIGHT
GATT Characteristic Height.
BT_UUID_GATT_HC_VAL
GATT Characteristic Hip Circumference UUID Value.
BT_UUID_GATT_HC
GATT Characteristic Hip Circumference.
BT_UUID_GATT_LAST_NAME_VAL
GATT Characteristic Last Name UUID Value.
BT_UUID_GATT_LAST_NAME
GATT Characteristic Last Name.
BT_UUID_GATT_MRHR_VAL
GATT Characteristic Maximum Recommended Heart Rate UUID Value.
BT_UUID_GATT_MRHR
GATT Characteristic Maximum Recommended Heart Rate.
BT_UUID_GATT_RHR_VAL
GATT Characteristic Resting Heart Rate UUID Value.
BT_UUID_GATT_RHR
GATT Characteristic Resting Heart Rate.
BT_UUID_GATT_AEANTHR_VAL
GATT Characteristic Sport Type for Aerobic and Anaerobic Thresholds UUID Value.
BT_UUID_GATT_AEANTHR
GATT Characteristic Sport Type for Aerobic and Anaerobic Thresholds.
BT_UUID_GATT_3ZHRL_VAL
GATT Characteristic Three Zone Heart Rate Limits UUID Value.
BT_UUID_GATT_3ZHRL
GATT Characteristic Three Zone Heart Rate Limits.
BT_UUID_GATT_2ZHRL_VAL
GATT Characteristic Two Zone Heart Rate Limits UUID Value.
BT_UUID_GATT_2ZHRL
GATT Characteristic Two Zone Heart Rate Limits.
BT_UUID_GATT_VO2_MAX_VAL
GATT Characteristic VO2 Max UUID Value.
BT_UUID_GATT_VO2_MAX
GATT Characteristic VO2 Max.
BT_UUID_GATT_WC_VAL
GATT Characteristic Waist Circumference UUID Value.
BT_UUID_GATT_WC
GATT Characteristic Waist Circumference.
BT_UUID_GATT_WEIGHT_VAL
GATT Characteristic Weight UUID Value.
BT_UUID_GATT_WEIGHT
GATT Characteristic Weight.
BT_UUID_GATT_DBCHINC_VAL
GATT Characteristic Database Change Increment UUID Value.
BT_UUID_GATT_DBCHINC
GATT Characteristic Database Change Increment.
BT_UUID_GATT_USRIDX_VAL
GATT Characteristic User Index UUID Value.
BT_UUID_GATT_USRIDX
GATT Characteristic User Index.
BT_UUID_GATT_BCF_VAL
GATT Characteristic Body Composition Feature UUID Value.
BT_UUID_GATT_BCF
GATT Characteristic Body Composition Feature.
BT_UUID_GATT_BCM_VAL
GATT Characteristic Body Composition Measurement UUID Value.
BT_UUID_GATT_BCM
GATT Characteristic Body Composition Measurement.
BT_UUID_GATT_WM_VAL
GATT Characteristic Weight Measurement UUID Value.
BT_UUID_GATT_WM
GATT Characteristic Weight Measurement.
BT_UUID_GATT_WSF_VAL
GATT Characteristic Weight Scale Feature UUID Value.
BT_UUID_GATT_WSF
GATT Characteristic Weight Scale Feature.
BT_UUID_GATT_USRCP_VAL
GATT Characteristic User Control Point UUID Value.
BT_UUID_GATT_USRCP
GATT Characteristic User Control Point.
BT_UUID_MAGN_FLUX_DENSITY_2D_VAL
Magnetic Flux Density - 2D Characteristic UUID value.
BT_UUID_MAGN_FLUX_DENSITY_2D
Magnetic Flux Density - 2D Characteristic.
BT_UUID_MAGN_FLUX_DENSITY_3D_VAL
Magnetic Flux Density - 3D Characteristic UUID value.
BT_UUID_MAGN_FLUX_DENSITY_3D
Magnetic Flux Density - 3D Characteristic.
BT_UUID_GATT_LANG_VAL
GATT Characteristic Language UUID Value.
BT_UUID_GATT_LANG
GATT Characteristic Language.
BT_UUID_BAR_PRESSURE_TREND_VAL
Barometric Pressure Trend Characteristic UUID value.
BT_UUID_BAR_PRESSURE_TREND
Barometric Pressure Trend Characteristic.
BT_UUID_BMS_CONTROL_POINT_VAL
Bond Management Control Point UUID value.
BT_UUID_BMS_CONTROL_POINT
Bond Management Control Point.
BT_UUID_BMS_FEATURE_VAL
Bond Management Feature UUID value.
BT_UUID_BMS_FEATURE
Bond Management Feature.
BT_UUID_CENTRAL_ADDR_RES_VAL
Central Address Resolution Characteristic UUID value.
BT_UUID_CENTRAL_ADDR_RES
Central Address Resolution Characteristic.
BT_UUID_CGM_MEASUREMENT_VAL
CGM Measurement Characteristic value.
BT_UUID_CGM_MEASUREMENT
CGM Measurement Characteristic.
BT_UUID_CGM_FEATURE_VAL
CGM Feature Characteristic value.
BT_UUID_CGM_FEATURE
CGM Feature Characteristic.
BT_UUID_CGM_STATUS_VAL
CGM Status Characteristic value.
BT_UUID_CGM_STATUS
CGM Status Characteristic.
BT_UUID_CGM_SESSION_START_TIME_VAL
CGM Session Start Time Characteristic value.
BT_UUID_CGM_SESSION_START_TIME
CGM Session Start Time.
BT_UUID_CGM_SESSION_RUN_TIME_VAL
CGM Session Run Time Characteristic value.
BT_UUID_CGM_SESSION_RUN_TIME
CGM Session Run Time.
BT_UUID_CGM_SPECIFIC_OPS_CONTROL_POINT_VAL
CGM Specific Ops Control Point Characteristic value.
BT_UUID_CGM_SPECIFIC_OPS_CONTROL_POINT
CGM Specific Ops Control Point.
BT_UUID_GATT_IPC_VAL
GATT Characteristic Indoor Positioning Configuration UUID Value.
BT_UUID_GATT_IPC
GATT Characteristic Indoor Positioning Configuration.
BT_UUID_GATT_LAT_VAL
GATT Characteristic Latitude UUID Value.
BT_UUID_GATT_LAT
GATT Characteristic Latitude.
BT_UUID_GATT_LON_VAL
GATT Characteristic Longitude UUID Value.
BT_UUID_GATT_LON
GATT Characteristic Longitude.
BT_UUID_GATT_LNCOORD_VAL
GATT Characteristic Local North Coordinate UUID Value.
BT_UUID_GATT_LNCOORD
GATT Characteristic Local North Coordinate.
BT_UUID_GATT_LECOORD_VAL
GATT Characteristic Local East Coordinate UUID Value.
BT_UUID_GATT_LECOORD
GATT Characteristic Local East Coordinate.
BT_UUID_GATT_FN_VAL
GATT Characteristic Floor Number UUID Value.
BT_UUID_GATT_FN
GATT Characteristic Floor Number.
BT_UUID_GATT_ALT_VAL
GATT Characteristic Altitude UUID Value.
BT_UUID_GATT_ALT
GATT Characteristic Altitude.
BT_UUID_GATT_UNCERTAINTY_VAL
GATT Characteristic Uncertainty UUID Value.
BT_UUID_GATT_UNCERTAINTY
GATT Characteristic Uncertainty.
BT_UUID_GATT_LOC_NAME_VAL
GATT Characteristic Location Name UUID Value.
BT_UUID_GATT_LOC_NAME
GATT Characteristic Location Name.
BT_UUID_URI_VAL
URI UUID value.
BT_UUID_URI
URI.
BT_UUID_HTTP_HEADERS_VAL
HTTP Headers UUID value.
BT_UUID_HTTP_HEADERS
HTTP Headers.
BT_UUID_HTTP_STATUS_CODE_VAL
HTTP Status Code UUID value.
BT_UUID_HTTP_STATUS_CODE
HTTP Status Code.
BT_UUID_HTTP_ENTITY_BODY_VAL
HTTP Entity Body UUID value.
BT_UUID_HTTP_ENTITY_BODY
HTTP Entity Body.
BT_UUID_HTTP_CONTROL_POINT_VAL
HTTP Control Point UUID value.
BT_UUID_HTTP_CONTROL_POINT
HTTP Control Point.
BT_UUID_HTTPS_SECURITY_VAL
HTTPS Security UUID value.
BT_UUID_HTTPS_SECURITY
HTTPS Security.
BT_UUID_GATT_TDS_CP_VAL
GATT Characteristic TDS Control Point UUID Value.
BT_UUID_GATT_TDS_CP
GATT Characteristic TDS Control Point.
BT_UUID_OTS_FEATURE_VAL
OTS Feature Characteristic UUID value.
BT_UUID_OTS_FEATURE
OTS Feature Characteristic.
BT_UUID_OTS_NAME_VAL
OTS Object Name Characteristic UUID value.
BT_UUID_OTS_NAME
OTS Object Name Characteristic.
BT_UUID_OTS_TYPE_VAL
OTS Object Type Characteristic UUID value.
BT_UUID_OTS_TYPE
OTS Object Type Characteristic.
BT_UUID_OTS_SIZE_VAL
OTS Object Size Characteristic UUID value.
BT_UUID_OTS_SIZE
OTS Object Size Characteristic.
BT_UUID_OTS_FIRST_CREATED_VAL
OTS Object First-Created Characteristic UUID value.
BT_UUID_OTS_FIRST_CREATED
OTS Object First-Created Characteristic.
BT_UUID_OTS_LAST_MODIFIED_VAL
OTS Object Last-Modified Characteristic UUID value.
BT_UUID_OTS_LAST_MODIFIED
OTS Object Last-Modified Characteristic.
BT_UUID_OTS_ID_VAL
OTS Object ID Characteristic UUID value.
BT_UUID_OTS_ID
OTS Object ID Characteristic.
BT_UUID_OTS_PROPERTIES_VAL
OTS Object Properties Characteristic UUID value.
BT_UUID_OTS_PROPERTIES
OTS Object Properties Characteristic.
BT_UUID_OTS_ACTION_CP_VAL
OTS Object Action Control Point Characteristic UUID value.
BT_UUID_OTS_ACTION_CP
OTS Object Action Control Point Characteristic.
BT_UUID_OTS_LIST_CP_VAL
OTS Object List Control Point Characteristic UUID value.
BT_UUID_OTS_LIST_CP
OTS Object List Control Point Characteristic.
BT_UUID_OTS_LIST_FILTER_VAL
OTS Object List Filter Characteristic UUID value.
BT_UUID_OTS_LIST_FILTER
OTS Object List Filter Characteristic.
BT_UUID_OTS_CHANGED_VAL
OTS Object Changed Characteristic UUID value.
BT_UUID_OTS_CHANGED
OTS Object Changed Characteristic.
BT_UUID_GATT_RPAO_VAL
GATT Characteristic Resolvable Private Address Only UUID Value.
BT_UUID_GATT_RPAO
GATT Characteristic Resolvable Private Address Only.
BT_UUID_OTS_TYPE_UNSPECIFIED_VAL
OTS Unspecified Object Type UUID value.
BT_UUID_OTS_TYPE_UNSPECIFIED
OTS Unspecified Object Type.
BT_UUID_OTS_DIRECTORY_LISTING_VAL
OTS Directory Listing UUID value.
BT_UUID_OTS_DIRECTORY_LISTING
OTS Directory Listing.
BT_UUID_GATT_FMF_VAL
GATT Characteristic Fitness Machine Feature UUID Value.
BT_UUID_GATT_FMF
GATT Characteristic Fitness Machine Feature.
BT_UUID_GATT_TD_VAL
GATT Characteristic Treadmill Data UUID Value.
BT_UUID_GATT_TD
GATT Characteristic Treadmill Data.
BT_UUID_GATT_CTD_VAL
GATT Characteristic Cross Trainer Data UUID Value.
BT_UUID_GATT_CTD
GATT Characteristic Cross Trainer Data.
BT_UUID_GATT_STPCD_VAL
GATT Characteristic Step Climber Data UUID Value.
BT_UUID_GATT_STPCD
GATT Characteristic Step Climber Data.
BT_UUID_GATT_STRCD_VAL
GATT Characteristic Stair Climber Data UUID Value.
BT_UUID_GATT_STRCD
GATT Characteristic Stair Climber Data.
BT_UUID_GATT_RD_VAL
GATT Characteristic Rower Data UUID Value.
BT_UUID_GATT_RD
GATT Characteristic Rower Data.
BT_UUID_GATT_IBD_VAL
GATT Characteristic Indoor Bike Data UUID Value.
BT_UUID_GATT_IBD
GATT Characteristic Indoor Bike Data.
BT_UUID_GATT_TRSTAT_VAL
GATT Characteristic Training Status UUID Value.
BT_UUID_GATT_TRSTAT
GATT Characteristic Training Status.
BT_UUID_GATT_SSR_VAL
GATT Characteristic Supported Speed Range UUID Value.
BT_UUID_GATT_SSR
GATT Characteristic Supported Speed Range.
BT_UUID_GATT_SIR_VAL
GATT Characteristic Supported Inclination Range UUID Value.
BT_UUID_GATT_SIR
GATT Characteristic Supported Inclination Range.
BT_UUID_GATT_SRLR_VAL
GATT Characteristic Supported Resistance Level Range UUID Value.
BT_UUID_GATT_SRLR
GATT Characteristic Supported Resistance Level Range.
BT_UUID_GATT_SHRR_VAL
GATT Characteristic Supported Heart Rate Range UUID Value.
BT_UUID_GATT_SHRR
GATT Characteristic Supported Heart Rate Range.
BT_UUID_GATT_SPR_VAL
GATT Characteristic Supported Power Range UUID Value.
BT_UUID_GATT_SPR
GATT Characteristic Supported Power Range.
BT_UUID_GATT_FMCP_VAL
GATT Characteristic Fitness Machine Control Point UUID Value.
BT_UUID_GATT_FMCP
GATT Characteristic Fitness Machine Control Point.
BT_UUID_GATT_FMS_VAL
GATT Characteristic Fitness Machine Status UUID Value.
BT_UUID_GATT_FMS
GATT Characteristic Fitness Machine Status.
BT_UUID_MESH_PROV_DATA_IN_VAL
Mesh Provisioning Data In UUID value.
BT_UUID_MESH_PROV_DATA_IN
Mesh Provisioning Data In.
BT_UUID_MESH_PROV_DATA_OUT_VAL
Mesh Provisioning Data Out UUID value.
BT_UUID_MESH_PROV_DATA_OUT
Mesh Provisioning Data Out.
BT_UUID_MESH_PROXY_DATA_IN_VAL
Mesh Proxy Data In UUID value.
BT_UUID_MESH_PROXY_DATA_IN
Mesh Proxy Data In.
BT_UUID_MESH_PROXY_DATA_OUT_VAL
Mesh Proxy Data Out UUID value.
BT_UUID_MESH_PROXY_DATA_OUT
Mesh Proxy Data Out.
BT_UUID_GATT_NNN_VAL
GATT Characteristic New Number Needed UUID Value.
BT_UUID_GATT_NNN
GATT Characteristic New Number Needed.
BT_UUID_GATT_AC_VAL
GATT Characteristic Average Current UUID Value.
BT_UUID_GATT_AC
GATT Characteristic Average Current.
BT_UUID_GATT_AV_VAL
GATT Characteristic Average Voltage UUID Value.
BT_UUID_GATT_AV
GATT Characteristic Average Voltage.
BT_UUID_GATT_BOOLEAN_VAL
GATT Characteristic Boolean UUID Value.
BT_UUID_GATT_BOOLEAN
GATT Characteristic Boolean.
BT_UUID_GATT_CRDFP_VAL
GATT Characteristic Chromatic Distance From Planckian UUID Value.
BT_UUID_GATT_CRDFP
GATT Characteristic Chromatic Distance From Planckian.
BT_UUID_GATT_CRCOORDS_VAL
GATT Characteristic Chromaticity Coordinates UUID Value.
BT_UUID_GATT_CRCOORDS
GATT Characteristic Chromaticity Coordinates.
BT_UUID_GATT_CRCCT_VAL
GATT Characteristic Chromaticity In CCT And Duv Values UUID Value.
BT_UUID_GATT_CRCCT
GATT Characteristic Chromaticity In CCT And Duv Values.
BT_UUID_GATT_CRT_VAL
GATT Characteristic Chromaticity Tolerance UUID Value.
BT_UUID_GATT_CRT
GATT Characteristic Chromaticity Tolerance.
BT_UUID_GATT_CIEIDX_VAL
GATT Characteristic CIE 13.3-1995 Color Rendering Index UUID Value.
BT_UUID_GATT_CIEIDX
GATT Characteristic CIE 13.3-1995 Color Rendering Index.
BT_UUID_GATT_COEFFICIENT_VAL
GATT Characteristic Coefficient UUID Value.
BT_UUID_GATT_COEFFICIENT
GATT Characteristic Coefficient.
BT_UUID_GATT_CCTEMP_VAL
GATT Characteristic Correlated Color Temperature UUID Value.
BT_UUID_GATT_CCTEMP
GATT Characteristic Correlated Color Temperature.
BT_UUID_GATT_COUNT16_VAL
GATT Characteristic Count 16 UUID Value.
BT_UUID_GATT_COUNT16
GATT Characteristic Count 16.
BT_UUID_GATT_COUNT24_VAL
GATT Characteristic Count 24 UUID Value.
BT_UUID_GATT_COUNT24
GATT Characteristic Count 24.
BT_UUID_GATT_CNTRCODE_VAL
GATT Characteristic Country Code UUID Value.
BT_UUID_GATT_CNTRCODE
GATT Characteristic Country Code.
BT_UUID_GATT_DATEUTC_VAL
GATT Characteristic Date UTC UUID Value.
BT_UUID_GATT_DATEUTC
GATT Characteristic Date UTC.
BT_UUID_GATT_EC_VAL
GATT Characteristic Electric Current UUID Value.
BT_UUID_GATT_EC
GATT Characteristic Electric Current.
BT_UUID_GATT_ECR_VAL
GATT Characteristic Electric Current Range UUID Value.
BT_UUID_GATT_ECR
GATT Characteristic Electric Current Range.
BT_UUID_GATT_ECSPEC_VAL
GATT Characteristic Electric Current Specification UUID Value.
BT_UUID_GATT_ECSPEC
GATT Characteristic Electric Current Specification.
BT_UUID_GATT_ECSTAT_VAL
GATT Characteristic Electric Current Statistics UUID Value.
BT_UUID_GATT_ECSTAT
GATT Characteristic Electric Current Statistics.
BT_UUID_GATT_ENERGY_VAL
GATT Characteristic Energy UUID Value.
BT_UUID_GATT_ENERGY
GATT Characteristic Energy.
BT_UUID_GATT_EPOD_VAL
GATT Characteristic Energy In A Period Of Day UUID Value.
BT_UUID_GATT_EPOD
GATT Characteristic Energy In A Period Of Day.
BT_UUID_GATT_EVTSTAT_VAL
GATT Characteristic Event Statistics UUID Value.
BT_UUID_GATT_EVTSTAT
GATT Characteristic Event Statistics.
BT_UUID_GATT_FSTR16_VAL
GATT Characteristic Fixed String 16 UUID Value.
BT_UUID_GATT_FSTR16
GATT Characteristic Fixed String 16.
BT_UUID_GATT_FSTR24_VAL
GATT Characteristic Fixed String 24 UUID Value.
BT_UUID_GATT_FSTR24
GATT Characteristic Fixed String 24.
BT_UUID_GATT_FSTR36_VAL
GATT Characteristic Fixed String 36 UUID Value.
BT_UUID_GATT_FSTR36
GATT Characteristic Fixed String 36.
BT_UUID_GATT_FSTR8_VAL
GATT Characteristic Fixed String 8 UUID Value.
BT_UUID_GATT_FSTR8
GATT Characteristic Fixed String 8.
BT_UUID_GATT_GENLVL_VAL
GATT Characteristic Generic Level UUID Value.
BT_UUID_GATT_GENLVL
GATT Characteristic Generic Level.
BT_UUID_GATT_GTIN_VAL
GATT Characteristic Global Trade Item Number UUID Value.
BT_UUID_GATT_GTIN
GATT Characteristic Global Trade Item Number.
BT_UUID_GATT_ILLUM_VAL
GATT Characteristic Illuminance UUID Value.
BT_UUID_GATT_ILLUM
GATT Characteristic Illuminance.
BT_UUID_GATT_LUMEFF_VAL
GATT Characteristic Luminous Efficacy UUID Value.
BT_UUID_GATT_LUMEFF
GATT Characteristic Luminous Efficacy.
BT_UUID_GATT_LUMNRG_VAL
GATT Characteristic Luminous Energy UUID Value.
BT_UUID_GATT_LUMNRG
GATT Characteristic Luminous Energy.
BT_UUID_GATT_LUMEXP_VAL
GATT Characteristic Luminous Exposure UUID Value.
BT_UUID_GATT_LUMEXP
GATT Characteristic Luminous Exposure.
BT_UUID_GATT_LUMFLX_VAL
GATT Characteristic Luminous Flux UUID Value.
BT_UUID_GATT_LUMFLX
GATT Characteristic Luminous Flux.
BT_UUID_GATT_LUMFLXR_VAL
GATT Characteristic Luminous Flux Range UUID Value.
BT_UUID_GATT_LUMFLXR
GATT Characteristic Luminous Flux Range.
BT_UUID_GATT_LUMINT_VAL
GATT Characteristic Luminous Intensity UUID Value.
BT_UUID_GATT_LUMINT
GATT Characteristic Luminous Intensity.
BT_UUID_GATT_MASSFLOW_VAL
GATT Characteristic Mass Flow UUID Value.
BT_UUID_GATT_MASSFLOW
GATT Characteristic Mass Flow.
BT_UUID_GATT_PERLGHT_VAL
GATT Characteristic Perceived Lightness UUID Value.
BT_UUID_GATT_PERLGHT
GATT Characteristic Perceived Lightness.
BT_UUID_GATT_PER8_VAL
GATT Characteristic Percentage 8 UUID Value.
BT_UUID_GATT_PER8
GATT Characteristic Percentage 8.
BT_UUID_GATT_PWR_VAL
GATT Characteristic Power UUID Value.
BT_UUID_GATT_PWR
GATT Characteristic Power.
BT_UUID_GATT_PWRSPEC_VAL
GATT Characteristic Power Specification UUID Value.
BT_UUID_GATT_PWRSPEC
GATT Characteristic Power Specification.
BT_UUID_GATT_RRICR_VAL
GATT Characteristic Relative Runtime In A Current Range UUID Value.
BT_UUID_GATT_RRICR
GATT Characteristic Relative Runtime In A Current Range.
BT_UUID_GATT_RRIGLR_VAL
GATT Characteristic Relative Runtime In A Generic Level Range UUID Value.
BT_UUID_GATT_RRIGLR
GATT Characteristic Relative Runtime In A Generic Level Range.
BT_UUID_GATT_RVIVR_VAL
GATT Characteristic Relative Value In A Voltage Range UUID Value.
BT_UUID_GATT_RVIVR
GATT Characteristic Relative Value In A Voltage Range.
BT_UUID_GATT_RVIIR_VAL
GATT Characteristic Relative Value In A Illuminance Range UUID Value.
BT_UUID_GATT_RVIIR
GATT Characteristic Relative Value In A Illuminance Range.
BT_UUID_GATT_RVIPOD_VAL
GATT Characteristic Relative Value In A Period Of Day UUID Value.
BT_UUID_GATT_RVIPOD
GATT Characteristic Relative Value In A Period Of Day.
BT_UUID_GATT_RVITR_VAL
GATT Characteristic Relative Value In A Temperature Range UUID Value.
BT_UUID_GATT_RVITR
GATT Characteristic Relative Value In A Temperature Range.
BT_UUID_GATT_TEMP8_VAL
GATT Characteristic Temperature 8 UUID Value.
BT_UUID_GATT_TEMP8
GATT Characteristic Temperature 8.
BT_UUID_GATT_TEMP8_IPOD_VAL
GATT Characteristic Temperature 8 In A Period Of Day UUID Value.
BT_UUID_GATT_TEMP8_IPOD
GATT Characteristic Temperature 8 In A Period Of Day.
BT_UUID_GATT_TEMP8_STAT_VAL
GATT Characteristic Temperature 8 Statistics UUID Value.
BT_UUID_GATT_TEMP8_STAT
GATT Characteristic Temperature 8 Statistics.
BT_UUID_GATT_TEMP_RNG_VAL
GATT Characteristic Temperature Range UUID Value.
BT_UUID_GATT_TEMP_RNG
GATT Characteristic Temperature Range.
BT_UUID_GATT_TEMP_STAT_VAL
GATT Characteristic Temperature Statistics UUID Value.
BT_UUID_GATT_TEMP_STAT
GATT Characteristic Temperature Statistics.
BT_UUID_GATT_TIM_DC8_VAL
GATT Characteristic Time Decihour 8 UUID Value.
BT_UUID_GATT_TIM_DC8
GATT Characteristic Time Decihour 8.
BT_UUID_GATT_TIM_EXP8_VAL
GATT Characteristic Time Exponential 8 UUID Value.
BT_UUID_GATT_TIM_EXP8
GATT Characteristic Time Exponential 8.
BT_UUID_GATT_TIM_H24_VAL
GATT Characteristic Time Hour 24 UUID Value.
BT_UUID_GATT_TIM_H24
GATT Characteristic Time Hour 24.
BT_UUID_GATT_TIM_MS24_VAL
GATT Characteristic Time Millisecond 24 UUID Value.
BT_UUID_GATT_TIM_MS24
GATT Characteristic Time Millisecond 24.
BT_UUID_GATT_TIM_S16_VAL
GATT Characteristic Time Second 16 UUID Value.
BT_UUID_GATT_TIM_S16
GATT Characteristic Time Second 16.
BT_UUID_GATT_TIM_S8_VAL
GATT Characteristic Time Second 8 UUID Value.
BT_UUID_GATT_TIM_S8
GATT Characteristic Time Second 8.
BT_UUID_GATT_V_VAL
GATT Characteristic Voltage UUID Value.
BT_UUID_GATT_V
GATT Characteristic Voltage.
BT_UUID_GATT_V_SPEC_VAL
GATT Characteristic Voltage Specification UUID Value.
BT_UUID_GATT_V_SPEC
GATT Characteristic Voltage Specification.
BT_UUID_GATT_V_STAT_VAL
GATT Characteristic Voltage Statistics UUID Value.
BT_UUID_GATT_V_STAT
GATT Characteristic Voltage Statistics.
BT_UUID_GATT_VOLF_VAL
GATT Characteristic Volume Flow UUID Value.
BT_UUID_GATT_VOLF
GATT Characteristic Volume Flow.
BT_UUID_GATT_CRCOORD_VAL
GATT Characteristic Chromaticity Coordinate (not Coordinates) UUID Value.
BT_UUID_GATT_CRCOORD
GATT Characteristic Chromaticity Coordinate (not Coordinates).
BT_UUID_GATT_RCF_VAL
GATT Characteristic RC Feature UUID Value.
BT_UUID_GATT_RCF
GATT Characteristic RC Feature.
BT_UUID_GATT_RCSET_VAL
GATT Characteristic RC Settings UUID Value.
BT_UUID_GATT_RCSET
GATT Characteristic RC Settings.
BT_UUID_GATT_RCCP_VAL
GATT Characteristic Reconnection Configuration Control Point UUID Value.
BT_UUID_GATT_RCCP
GATT Characteristic Reconnection Configuration Control Point.
BT_UUID_GATT_IDD_SC_VAL
GATT Characteristic IDD Status Changed UUID Value.
BT_UUID_GATT_IDD_SC
GATT Characteristic IDD Status Changed.
BT_UUID_GATT_IDD_S_VAL
GATT Characteristic IDD Status UUID Value.
BT_UUID_GATT_IDD_S
GATT Characteristic IDD Status.
BT_UUID_GATT_IDD_AS_VAL
GATT Characteristic IDD Annunciation Status UUID Value.
BT_UUID_GATT_IDD_AS
GATT Characteristic IDD Annunciation Status.
BT_UUID_GATT_IDD_F_VAL
GATT Characteristic IDD Features UUID Value.
BT_UUID_GATT_IDD_F
GATT Characteristic IDD Features.
BT_UUID_GATT_IDD_SRCP_VAL
GATT Characteristic IDD Status Reader Control Point UUID Value.
BT_UUID_GATT_IDD_SRCP
GATT Characteristic IDD Status Reader Control Point.
BT_UUID_GATT_IDD_CCP_VAL
GATT Characteristic IDD Command Control Point UUID Value.
BT_UUID_GATT_IDD_CCP
GATT Characteristic IDD Command Control Point.
BT_UUID_GATT_IDD_CD_VAL
GATT Characteristic IDD Command Data UUID Value.
BT_UUID_GATT_IDD_CD
GATT Characteristic IDD Command Data.
BT_UUID_GATT_IDD_RACP_VAL
GATT Characteristic IDD Record Access Control Point UUID Value.
BT_UUID_GATT_IDD_RACP
GATT Characteristic IDD Record Access Control Point.
BT_UUID_GATT_IDD_HD_VAL
GATT Characteristic IDD History Data UUID Value.
BT_UUID_GATT_IDD_HD
GATT Characteristic IDD History Data.
BT_UUID_GATT_CLIENT_FEATURES_VAL
GATT Characteristic Client Supported Features UUID value.
BT_UUID_GATT_CLIENT_FEATURES
GATT Characteristic Client Supported Features.
BT_UUID_GATT_DB_HASH_VAL
GATT Characteristic Database Hash UUID value.
BT_UUID_GATT_DB_HASH
GATT Characteristic Database Hash.
BT_UUID_GATT_BSS_CP_VAL
GATT Characteristic BSS Control Point UUID Value.
BT_UUID_GATT_BSS_CP
GATT Characteristic BSS Control Point.
BT_UUID_GATT_BSS_R_VAL
GATT Characteristic BSS Response UUID Value.
BT_UUID_GATT_BSS_R
GATT Characteristic BSS Response.
BT_UUID_GATT_EMG_ID_VAL
GATT Characteristic Emergency ID UUID Value.
BT_UUID_GATT_EMG_ID
GATT Characteristic Emergency ID.
BT_UUID_GATT_EMG_TXT_VAL
GATT Characteristic Emergency Text UUID Value.
BT_UUID_GATT_EMG_TXT
GATT Characteristic Emergency Text.
BT_UUID_GATT_ACS_S_VAL
GATT Characteristic ACS Status UUID Value.
BT_UUID_GATT_ACS_S
GATT Characteristic ACS Status.
BT_UUID_GATT_ACS_DI_VAL
GATT Characteristic ACS Data In UUID Value.
BT_UUID_GATT_ACS_DI
GATT Characteristic ACS Data In.
BT_UUID_GATT_ACS_DON_VAL
GATT Characteristic ACS Data Out Notify UUID Value.
BT_UUID_GATT_ACS_DON
GATT Characteristic ACS Data Out Notify.
BT_UUID_GATT_ACS_DOI_VAL
GATT Characteristic ACS Data Out Indicate UUID Value.
BT_UUID_GATT_ACS_DOI
GATT Characteristic ACS Data Out Indicate.
BT_UUID_GATT_ACS_CP_VAL
GATT Characteristic ACS Control Point UUID Value.
BT_UUID_GATT_ACS_CP
GATT Characteristic ACS Control Point.
BT_UUID_GATT_EBPM_VAL
GATT Characteristic Enhanced Blood Pressure Measurement UUID Value.
BT_UUID_GATT_EBPM
GATT Characteristic Enhanced Blood Pressure Measurement.
BT_UUID_GATT_EICP_VAL
GATT Characteristic Enhanced Intermediate Cuff Pressure UUID Value.
BT_UUID_GATT_EICP
GATT Characteristic Enhanced Intermediate Cuff Pressure.
BT_UUID_GATT_BPR_VAL
GATT Characteristic Blood Pressure Record UUID Value.
BT_UUID_GATT_BPR
GATT Characteristic Blood Pressure Record.
BT_UUID_GATT_RU_VAL
GATT Characteristic Registered User UUID Value.
BT_UUID_GATT_RU
GATT Characteristic Registered User.
BT_UUID_GATT_BR_EDR_HD_VAL
GATT Characteristic BR-EDR Handover Data UUID Value.
BT_UUID_GATT_BR_EDR_HD
GATT Characteristic BR-EDR Handover Data.
BT_UUID_GATT_BT_SIG_D_VAL
GATT Characteristic Bluetooth SIG Data UUID Value.
BT_UUID_GATT_BT_SIG_D
GATT Characteristic Bluetooth SIG Data.
BT_UUID_GATT_SERVER_FEATURES_VAL
GATT Characteristic Server Supported Features UUID value.
BT_UUID_GATT_SERVER_FEATURES
GATT Characteristic Server Supported Features.
BT_UUID_GATT_PHY_AMF_VAL
GATT Characteristic Physical Activity Monitor Features UUID Value.
BT_UUID_GATT_PHY_AMF
GATT Characteristic Physical Activity Monitor Features.
BT_UUID_GATT_GEN_AID_VAL
GATT Characteristic General Activity Instantaneous Data UUID Value.
BT_UUID_GATT_GEN_AID
GATT Characteristic General Activity Instantaneous Data.
BT_UUID_GATT_GEN_ASD_VAL
GATT Characteristic General Activity Summary Data UUID Value.
BT_UUID_GATT_GEN_ASD
GATT Characteristic General Activity Summary Data.
BT_UUID_GATT_CR_AID_VAL
GATT Characteristic CardioRespiratory Activity Instantaneous Data UUID Value.
BT_UUID_GATT_CR_AID
GATT Characteristic CardioRespiratory Activity Instantaneous Data.
BT_UUID_GATT_CR_ASD_VAL
GATT Characteristic CardioRespiratory Activity Summary Data UUID Value.
BT_UUID_GATT_CR_ASD
GATT Characteristic CardioRespiratory Activity Summary Data.
BT_UUID_GATT_SC_ASD_VAL
GATT Characteristic Step Counter Activity Summary Data UUID Value.
BT_UUID_GATT_SC_ASD
GATT Characteristic Step Counter Activity Summary Data.
BT_UUID_GATT_SLP_AID_VAL
GATT Characteristic Sleep Activity Instantaneous Data UUID Value.
BT_UUID_GATT_SLP_AID
GATT Characteristic Sleep Activity Instantaneous Data.
BT_UUID_GATT_SLP_ASD_VAL
GATT Characteristic Sleep Activity Summary Data UUID Value.
BT_UUID_GATT_SLP_ASD
GATT Characteristic Sleep Activity Summary Data.
BT_UUID_GATT_PHY_AMCP_VAL
GATT Characteristic Physical Activity Monitor Control Point UUID Value.
BT_UUID_GATT_PHY_AMCP
GATT Characteristic Physical Activity Monitor Control Point.
BT_UUID_GATT_ACS_VAL
GATT Characteristic Activity Current Session UUID Value.
BT_UUID_GATT_ACS
GATT Characteristic Activity Current Session.
BT_UUID_GATT_PHY_ASDESC_VAL
GATT Characteristic Physical Activity Session Descriptor UUID Value.
BT_UUID_GATT_PHY_ASDESC
GATT Characteristic Physical Activity Session Descriptor.
BT_UUID_GATT_PREF_U_VAL
GATT Characteristic Preferred Units UUID Value.
BT_UUID_GATT_PREF_U
GATT Characteristic Preferred Units.
BT_UUID_GATT_HRES_H_VAL
GATT Characteristic High Resolution Height UUID Value.
BT_UUID_GATT_HRES_H
GATT Characteristic High Resolution Height.
BT_UUID_GATT_MID_NAME_VAL
GATT Characteristic Middle Name UUID Value.
BT_UUID_GATT_MID_NAME
GATT Characteristic Middle Name.
BT_UUID_GATT_STRDLEN_VAL
GATT Characteristic Stride Length UUID Value.
BT_UUID_GATT_STRDLEN
GATT Characteristic Stride Length.
BT_UUID_GATT_HANDEDNESS_VAL
GATT Characteristic Handedness UUID Value.
BT_UUID_GATT_HANDEDNESS
GATT Characteristic Handedness.
BT_UUID_GATT_DEVICE_WP_VAL
GATT Characteristic Device Wearing Position UUID Value.
BT_UUID_GATT_DEVICE_WP
GATT Characteristic Device Wearing Position.
BT_UUID_GATT_4ZHRL_VAL
GATT Characteristic Four Zone Heart Rate Limit UUID Value.
BT_UUID_GATT_4ZHRL
GATT Characteristic Four Zone Heart Rate Limit.
BT_UUID_GATT_HIET_VAL
GATT Characteristic High Intensity Exercise Threshold UUID Value.
BT_UUID_GATT_HIET
GATT Characteristic High Intensity Exercise Threshold.
BT_UUID_GATT_AG_VAL
GATT Characteristic Activity Goal UUID Value.
BT_UUID_GATT_AG
GATT Characteristic Activity Goal.
BT_UUID_GATT_SIN_VAL
GATT Characteristic Sedentary Interval Notification UUID Value.
BT_UUID_GATT_SIN
GATT Characteristic Sedentary Interval Notification.
BT_UUID_GATT_CI_VAL
GATT Characteristic Caloric Intake UUID Value.
BT_UUID_GATT_CI
GATT Characteristic Caloric Intake.
BT_UUID_GATT_TMAPR_VAL
GATT Characteristic TMAP Role UUID Value.
BT_UUID_GATT_TMAPR
GATT Characteristic TMAP Role.
BT_UUID_AICS_STATE_VAL
Audio Input Control Service State value.
BT_UUID_AICS_STATE
Audio Input Control Service State.
BT_UUID_AICS_GAIN_SETTINGS_VAL
Audio Input Control Service Gain Settings Properties value.
BT_UUID_AICS_GAIN_SETTINGS
Audio Input Control Service Gain Settings Properties.
BT_UUID_AICS_INPUT_TYPE_VAL
Audio Input Control Service Input Type value.
BT_UUID_AICS_INPUT_TYPE
Audio Input Control Service Input Type.
BT_UUID_AICS_INPUT_STATUS_VAL
Audio Input Control Service Input Status value.
BT_UUID_AICS_INPUT_STATUS
Audio Input Control Service Input Status.
BT_UUID_AICS_CONTROL_VAL
Audio Input Control Service Control Point value.
BT_UUID_AICS_CONTROL
Audio Input Control Service Control Point.
BT_UUID_AICS_DESCRIPTION_VAL
Audio Input Control Service Input Description value.
BT_UUID_AICS_DESCRIPTION
Audio Input Control Service Input Description.
BT_UUID_VCS_STATE_VAL
Volume Control Setting value.
BT_UUID_VCS_STATE
Volume Control Setting.
BT_UUID_VCS_CONTROL_VAL
Volume Control Control point value.
BT_UUID_VCS_CONTROL
Volume Control Control point.
BT_UUID_VCS_FLAGS_VAL
Volume Control Flags value.
BT_UUID_VCS_FLAGS
Volume Control Flags.
BT_UUID_VOCS_STATE_VAL
Volume Offset State value.
BT_UUID_VOCS_STATE
Volume Offset State.
BT_UUID_VOCS_LOCATION_VAL
Audio Location value.
BT_UUID_VOCS_LOCATION
Audio Location.
BT_UUID_VOCS_CONTROL_VAL
Volume Offset Control Point value.
BT_UUID_VOCS_CONTROL
Volume Offset Control Point.
BT_UUID_VOCS_DESCRIPTION_VAL
Volume Offset Audio Output Description value.
BT_UUID_VOCS_DESCRIPTION
Volume Offset Audio Output Description.
BT_UUID_CSIS_SET_SIRK_VAL
Set Identity Resolving Key value.
BT_UUID_CSIS_SET_SIRK
Set Identity Resolving Key.
BT_UUID_CSIS_SET_SIZE_VAL
Set size value.
BT_UUID_CSIS_SET_SIZE
Set size.
BT_UUID_CSIS_SET_LOCK_VAL
Set lock value.
BT_UUID_CSIS_SET_LOCK
Set lock.
BT_UUID_CSIS_RANK_VAL
Rank value.
BT_UUID_CSIS_RANK
Rank.
BT_UUID_GATT_EDKM_VAL
GATT Characteristic Encrypted Data Key Material UUID Value.
BT_UUID_GATT_EDKM
GATT Characteristic Encrypted Data Key Material.
BT_UUID_GATT_AE32_VAL
GATT Characteristic Apparent Energy 32 UUID Value.
BT_UUID_GATT_AE32
GATT Characteristic Apparent Energy 32.
BT_UUID_GATT_AP_VAL
GATT Characteristic Apparent Power UUID Value.
BT_UUID_GATT_AP
GATT Characteristic Apparent Power.
BT_UUID_GATT_CO2CONC_VAL
GATT Characteristic CO2 Concentration UUID Value.
BT_UUID_GATT_CO2CONC
GATT Characteristic CO2 Concentration.
BT_UUID_GATT_COS_VAL
GATT Characteristic Cosine of the Angle UUID Value.
BT_UUID_GATT_COS
GATT Characteristic Cosine of the Angle.
BT_UUID_GATT_DEVTF_VAL
GATT Characteristic Device Time Feature UUID Value.
BT_UUID_GATT_DEVTF
GATT Characteristic Device Time Feature.
BT_UUID_GATT_DEVTP_VAL
GATT Characteristic Device Time Parameters UUID Value.
BT_UUID_GATT_DEVTP
GATT Characteristic Device Time Parameters.
BT_UUID_GATT_DEVT_VAL
GATT Characteristic Device Time UUID Value.
BT_UUID_GATT_DEVT
GATT Characteristic Device Time.
BT_UUID_GATT_DEVTCP_VAL
GATT Characteristic Device Time Control Point UUID Value.
BT_UUID_GATT_DEVTCP
GATT Characteristic Device Time Control Point.
BT_UUID_GATT_TCLD_VAL
GATT Characteristic Time Change Log Data UUID Value.
BT_UUID_GATT_TCLD
GATT Characteristic Time Change Log Data.
BT_UUID_MCS_PLAYER_NAME_VAL
Media player name value.
BT_UUID_MCS_PLAYER_NAME
Media player name.
BT_UUID_MCS_ICON_OBJ_ID_VAL
Media Icon Object ID value.
BT_UUID_MCS_ICON_OBJ_ID
Media Icon Object ID.
BT_UUID_MCS_ICON_URL_VAL
Media Icon URL value.
BT_UUID_MCS_ICON_URL
Media Icon URL.
BT_UUID_MCS_TRACK_CHANGED_VAL
Track Changed value.
BT_UUID_MCS_TRACK_CHANGED
Track Changed.
BT_UUID_MCS_TRACK_TITLE_VAL
Track Title value.
BT_UUID_MCS_TRACK_TITLE
Track Title.
BT_UUID_MCS_TRACK_DURATION_VAL
Track Duration value.
BT_UUID_MCS_TRACK_DURATION
Track Duration.
BT_UUID_MCS_TRACK_POSITION_VAL
Track Position value.
BT_UUID_MCS_TRACK_POSITION
Track Position.
BT_UUID_MCS_PLAYBACK_SPEED_VAL
Playback Speed value.
BT_UUID_MCS_PLAYBACK_SPEED
Playback Speed.
BT_UUID_MCS_SEEKING_SPEED_VAL
Seeking Speed value.
BT_UUID_MCS_SEEKING_SPEED
Seeking Speed.
BT_UUID_MCS_TRACK_SEGMENTS_OBJ_ID_VAL
Track Segments Object ID value.
BT_UUID_MCS_TRACK_SEGMENTS_OBJ_ID
Track Segments Object ID.
BT_UUID_MCS_CURRENT_TRACK_OBJ_ID_VAL
Current Track Object ID value.
BT_UUID_MCS_CURRENT_TRACK_OBJ_ID
Current Track Object ID.
BT_UUID_MCS_NEXT_TRACK_OBJ_ID_VAL
Next Track Object ID value.
BT_UUID_MCS_NEXT_TRACK_OBJ_ID
Next Track Object ID.
BT_UUID_MCS_PARENT_GROUP_OBJ_ID_VAL
Parent Group Object ID value.
BT_UUID_MCS_PARENT_GROUP_OBJ_ID
Parent Group Object ID.
BT_UUID_MCS_CURRENT_GROUP_OBJ_ID_VAL
Group Object ID value.
BT_UUID_MCS_CURRENT_GROUP_OBJ_ID
Group Object ID.
BT_UUID_MCS_PLAYING_ORDER_VAL
Playing Order value.
BT_UUID_MCS_PLAYING_ORDER
Playing Order.
BT_UUID_MCS_PLAYING_ORDERS_VAL
Playing Orders supported value.
BT_UUID_MCS_PLAYING_ORDERS
Playing Orders supported.
BT_UUID_MCS_MEDIA_STATE_VAL
Media State value.
BT_UUID_MCS_MEDIA_STATE
Media State.
BT_UUID_MCS_MEDIA_CONTROL_POINT_VAL
Media Control Point value.
BT_UUID_MCS_MEDIA_CONTROL_POINT
Media Control Point.
BT_UUID_MCS_MEDIA_CONTROL_OPCODES_VAL
Media control opcodes supported value.
BT_UUID_MCS_MEDIA_CONTROL_OPCODES
Media control opcodes supported.
BT_UUID_MCS_SEARCH_RESULTS_OBJ_ID_VAL
Search result object ID value.
BT_UUID_MCS_SEARCH_RESULTS_OBJ_ID
Search result object ID.
BT_UUID_MCS_SEARCH_CONTROL_POINT_VAL
Search control point value.
BT_UUID_MCS_SEARCH_CONTROL_POINT
Search control point.
BT_UUID_GATT_E32_VAL
GATT Characteristic Energy 32 UUID Value.
BT_UUID_GATT_E32
GATT Characteristic Energy 32.
BT_UUID_OTS_TYPE_MPL_ICON_VAL
Media Player Icon Object Type value.
BT_UUID_OTS_TYPE_MPL_ICON
Media Player Icon Object Type.
BT_UUID_OTS_TYPE_TRACK_SEGMENT_VAL
Track Segments Object Type value.
BT_UUID_OTS_TYPE_TRACK_SEGMENT
Track Segments Object Type.
BT_UUID_OTS_TYPE_TRACK_VAL
Track Object Type value.
BT_UUID_OTS_TYPE_TRACK
Track Object Type.
BT_UUID_OTS_TYPE_GROUP_VAL
Group Object Type value.
BT_UUID_OTS_TYPE_GROUP
Group Object Type.
BT_UUID_GATT_CTEE_VAL
GATT Characteristic Constant Tone Extension Enable UUID Value.
BT_UUID_GATT_CTEE
GATT Characteristic Constant Tone Extension Enable.
BT_UUID_GATT_ACTEML_VAL
GATT Characteristic Advertising Constant Tone Extension Minimum Length UUID Value.
BT_UUID_GATT_ACTEML
GATT Characteristic Advertising Constant Tone Extension Minimum Length.
BT_UUID_GATT_ACTEMTC_VAL
GATT Characteristic Advertising Constant Tone Extension Minimum Transmit Count UUID
Value.
BT_UUID_GATT_ACTEMTC
GATT Characteristic Advertising Constant Tone Extension Minimum Transmit Count.
BT_UUID_GATT_ACTETD_VAL
GATT Characteristic Advertising Constant Tone Extension Transmit Duration UUID Value.
BT_UUID_GATT_ACTETD
GATT Characteristic Advertising Constant Tone Extension Transmit Duration.
BT_UUID_GATT_ACTEI_VAL
GATT Characteristic Advertising Constant Tone Extension Interval UUID Value.
BT_UUID_GATT_ACTEI
GATT Characteristic Advertising Constant Tone Extension Interval.
BT_UUID_GATT_ACTEP_VAL
GATT Characteristic Advertising Constant Tone Extension PHY UUID Value.
BT_UUID_GATT_ACTEP
GATT Characteristic Advertising Constant Tone Extension PHY.
BT_UUID_TBS_PROVIDER_NAME_VAL
Bearer Provider Name value.
BT_UUID_TBS_PROVIDER_NAME
Bearer Provider Name.
BT_UUID_TBS_UCI_VAL
Bearer UCI value.
BT_UUID_TBS_UCI
Bearer UCI.
BT_UUID_TBS_TECHNOLOGY_VAL
Bearer Technology value.
BT_UUID_TBS_TECHNOLOGY
Bearer Technology.
BT_UUID_TBS_URI_LIST_VAL
Bearer URI Prefixes Supported List value.
BT_UUID_TBS_URI_LIST
Bearer URI Prefixes Supported List.
BT_UUID_TBS_SIGNAL_STRENGTH_VAL
Bearer Signal Strength value.
BT_UUID_TBS_SIGNAL_STRENGTH
Bearer Signal Strength.
BT_UUID_TBS_SIGNAL_INTERVAL_VAL
Bearer Signal Strength Reporting Interval value.
BT_UUID_TBS_SIGNAL_INTERVAL
Bearer Signal Strength Reporting Interval.
BT_UUID_TBS_LIST_CURRENT_CALLS_VAL
Bearer List Current Calls value.
BT_UUID_TBS_LIST_CURRENT_CALLS
Bearer List Current Calls.
BT_UUID_CCID_VAL
Content Control ID value.
BT_UUID_CCID
Content Control ID.
BT_UUID_TBS_STATUS_FLAGS_VAL
Status flags value.
BT_UUID_TBS_STATUS_FLAGS
Status flags.
BT_UUID_TBS_INCOMING_URI_VAL
Incoming Call Target Caller ID value.
BT_UUID_TBS_INCOMING_URI
Incoming Call Target Caller ID.
BT_UUID_TBS_CALL_STATE_VAL
Call State value.
BT_UUID_TBS_CALL_STATE
Call State.
BT_UUID_TBS_CALL_CONTROL_POINT_VAL
Call Control Point value.
BT_UUID_TBS_CALL_CONTROL_POINT
Call Control Point.
BT_UUID_TBS_OPTIONAL_OPCODES_VAL
Optional Opcodes value.
BT_UUID_TBS_OPTIONAL_OPCODES
Optional Opcodes.
BT_UUID_TBS_TERMINATE_REASON_VAL
Terminate reason value.
BT_UUID_TBS_TERMINATE_REASON
Terminate reason.
BT_UUID_TBS_INCOMING_CALL_VAL
Incoming Call value.
BT_UUID_TBS_INCOMING_CALL
Incoming Call.
BT_UUID_TBS_FRIENDLY_NAME_VAL
Incoming Call Friendly name value.
BT_UUID_TBS_FRIENDLY_NAME
Incoming Call Friendly name.
BT_UUID_MICS_MUTE_VAL
Microphone Control Service Mute value.
BT_UUID_MICS_MUTE
Microphone Control Service Mute.
BT_UUID_ASCS_ASE_SNK_VAL
Audio Stream Endpoint Sink Characteristic value.
BT_UUID_ASCS_ASE_SNK
Audio Stream Endpoint Sink Characteristic.
BT_UUID_ASCS_ASE_SRC_VAL
Audio Stream Endpoint Source Characteristic value.
BT_UUID_ASCS_ASE_SRC
Audio Stream Endpoint Source Characteristic.
BT_UUID_ASCS_ASE_CP_VAL
Audio Stream Endpoint Control Point Characteristic value.
BT_UUID_ASCS_ASE_CP
Audio Stream Endpoint Control Point Characteristic.
BT_UUID_BASS_CONTROL_POINT_VAL
Broadcast Audio Scan Service Control Point value.
BT_UUID_BASS_CONTROL_POINT
Broadcast Audio Scan Service Control Point.
BT_UUID_BASS_RECV_STATE_VAL
Broadcast Audio Scan Service Receive State value.
BT_UUID_BASS_RECV_STATE
Broadcast Audio Scan Service Receive State.
BT_UUID_PACS_SNK_VAL
Sink PAC Characteristic value.
BT_UUID_PACS_SNK
Sink PAC Characteristic.
BT_UUID_PACS_SNK_LOC_VAL
Sink PAC Locations Characteristic value.
BT_UUID_PACS_SNK_LOC
Sink PAC Locations Characteristic.
BT_UUID_PACS_SRC_VAL
Source PAC Characteristic value.
BT_UUID_PACS_SRC
Source PAC Characteristic.
BT_UUID_PACS_SRC_LOC_VAL
Source PAC Locations Characteristic value.
BT_UUID_PACS_SRC_LOC
Source PAC Locations Characteristic.
BT_UUID_PACS_AVAILABLE_CONTEXT_VAL
Available Audio Contexts Characteristic value.
BT_UUID_PACS_AVAILABLE_CONTEXT
Available Audio Contexts Characteristic.
BT_UUID_PACS_SUPPORTED_CONTEXT_VAL
Supported Audio Context Characteristic value.
BT_UUID_PACS_SUPPORTED_CONTEXT
Supported Audio Context Characteristic.
BT_UUID_GATT_NH4CONC_VAL
GATT Characteristic Ammonia Concentration UUID Value.
BT_UUID_GATT_NH4CONC
GATT Characteristic Ammonia Concentration.
BT_UUID_GATT_COCONC_VAL
GATT Characteristic Carbon Monoxide Concentration UUID Value.
BT_UUID_GATT_COCONC
GATT Characteristic Carbon Monoxide Concentration.
BT_UUID_GATT_CH4CONC_VAL
GATT Characteristic Methane Concentration UUID Value.
BT_UUID_GATT_CH4CONC
GATT Characteristic Methane Concentration.
BT_UUID_GATT_NO2CONC_VAL
GATT Characteristic Nitrogen Dioxide Concentration UUID Value.
BT_UUID_GATT_NO2CONC
GATT Characteristic Nitrogen Dioxide Concentration.
BT_UUID_GATT_NONCH4CONC_VAL
GATT Characteristic Non-Methane Volatile Organic Compounds Concentration UUID Value.
BT_UUID_GATT_NONCH4CONC
GATT Characteristic Non-Methane Volatile Organic Compounds Concentration.
BT_UUID_GATT_O3CONC_VAL
GATT Characteristic Ozone Concentration UUID Value.
BT_UUID_GATT_O3CONC
GATT Characteristic Ozone Concentration.
BT_UUID_GATT_PM1CONC_VAL
GATT Characteristic Particulate Matter - PM1 Concentration UUID Value.
BT_UUID_GATT_PM1CONC
GATT Characteristic Particulate Matter - PM1 Concentration.
BT_UUID_GATT_PM25CONC_VAL
GATT Characteristic Particulate Matter - PM2.5 Concentration UUID Value.
BT_UUID_GATT_PM25CONC
GATT Characteristic Particulate Matter - PM2.5 Concentration.
BT_UUID_GATT_PM10CONC_VAL
GATT Characteristic Particulate Matter - PM10 Concentration UUID Value.
BT_UUID_GATT_PM10CONC
GATT Characteristic Particulate Matter - PM10 Concentration.
BT_UUID_GATT_SO2CONC_VAL
GATT Characteristic Sulfur Dioxide Concentration UUID Value.
BT_UUID_GATT_SO2CONC
GATT Characteristic Sulfur Dioxide Concentration.
BT_UUID_GATT_SF6CONC_VAL
GATT Characteristic Sulfur Hexafluoride Concentration UUID Value.
BT_UUID_GATT_SF6CONC
GATT Characteristic Sulfur Hexafluoride Concentration.
BT_UUID_HAS_HEARING_AID_FEATURES_VAL
Hearing Aid Features Characteristic value.
BT_UUID_HAS_HEARING_AID_FEATURES
Hearing Aid Features Characteristic.
BT_UUID_HAS_PRESET_CONTROL_POINT_VAL
Hearing Aid Preset Control Point Characteristic value.
BT_UUID_HAS_PRESET_CONTROL_POINT
Hearing Aid Preset Control Point Characteristic.
BT_UUID_HAS_ACTIVE_PRESET_INDEX_VAL
Active Preset Index Characteristic value.
BT_UUID_HAS_ACTIVE_PRESET_INDEX
Active Preset Index Characteristic.
BT_UUID_GATT_FSTR64_VAL
GATT Characteristic Fixed String 64 UUID Value.
BT_UUID_GATT_FSTR64
GATT Characteristic Fixed String 64.
BT_UUID_GATT_HITEMP_VAL
GATT Characteristic High Temperature UUID Value.
BT_UUID_GATT_HITEMP
GATT Characteristic High Temperature.
BT_UUID_GATT_HV_VAL
GATT Characteristic High Voltage UUID Value.
BT_UUID_GATT_HV
GATT Characteristic High Voltage.
BT_UUID_GATT_LD_VAL
GATT Characteristic Light Distribution UUID Value.
BT_UUID_GATT_LD
GATT Characteristic Light Distribution.
BT_UUID_GATT_LO_VAL
GATT Characteristic Light Output UUID Value.
BT_UUID_GATT_LO
GATT Characteristic Light Output.
BT_UUID_GATT_LST_VAL
GATT Characteristic Light Source Type UUID Value.
BT_UUID_GATT_LST
GATT Characteristic Light Source Type.
BT_UUID_GATT_NOISE_VAL
GATT Characteristic Noise UUID Value.
BT_UUID_GATT_NOISE
GATT Characteristic Noise.
BT_UUID_GATT_RRCCTR_VAL
GATT Characteristic Relative Runtime in a Correlated Color Temperature Range UUID Value.
BT_UUID_GATT_RRCCTR
GATT Characteristic Relative Runtime in a Correlated Color Temperature Range.
BT_UUID_GATT_TIM_S32_VAL
GATT Characteristic Time Second 32 UUID Value.
BT_UUID_GATT_TIM_S32
GATT Characteristic Time Second 32.
BT_UUID_GATT_VOCCONC_VAL
GATT Characteristic VOC Concentration UUID Value.
BT_UUID_GATT_VOCCONC
GATT Characteristic VOC Concentration.
BT_UUID_GATT_VF_VAL
GATT Characteristic Voltage Frequency UUID Value.
BT_UUID_GATT_VF
GATT Characteristic Voltage Frequency.
BT_UUID_BAS_BATTERY_CRIT_STATUS_VAL
BAS Characteristic Battery Critical Status UUID Value.
BT_UUID_BAS_BATTERY_CRIT_STATUS
BAS Characteristic Battery Critical Status.
BT_UUID_BAS_BATTERY_HEALTH_STATUS_VAL
BAS Characteristic Battery Health Status UUID Value.
BT_UUID_BAS_BATTERY_HEALTH_STATUS
BAS Characteristic Battery Health Status.
BT_UUID_BAS_BATTERY_HEALTH_INF_VAL
BAS Characteristic Battery Health Information UUID Value.
BT_UUID_BAS_BATTERY_HEALTH_INF
BAS Characteristic Battery Health Information.
BT_UUID_BAS_BATTERY_INF_VAL
BAS Characteristic Battery Information UUID Value.
BT_UUID_BAS_BATTERY_INF
BAS Characteristic Battery Information.
BT_UUID_BAS_BATTERY_LEVEL_STATUS_VAL
BAS Characteristic Battery Level Status UUID Value.
BT_UUID_BAS_BATTERY_LEVEL_STATUS
BAS Characteristic Battery Level Status.
BT_UUID_BAS_BATTERY_TIME_STATUS_VAL
BAS Characteristic Battery Time Status UUID Value.
BT_UUID_BAS_BATTERY_TIME_STATUS
BAS Characteristic Battery Time Status.
BT_UUID_GATT_ESD_VAL
GATT Characteristic Estimated Service Date UUID Value.
BT_UUID_GATT_ESD
GATT Characteristic Estimated Service Date.
BT_UUID_BAS_BATTERY_ENERGY_STATUS_VAL
BAS Characteristic Battery Energy Status UUID Value.
BT_UUID_BAS_BATTERY_ENERGY_STATUS
BAS Characteristic Battery Energy Status.
BT_UUID_GATT_SL_VAL
GATT Characteristic LE GATT Security Levels UUID Value.
BT_UUID_GATT_SL
GATT Characteristic LE GATT Security Levels.
BT_UUID_SDP_VAL
BT_UUID_SDP
BT_UUID_UDP_VAL
BT_UUID_UDP
BT_UUID_RFCOMM_VAL
BT_UUID_RFCOMM
BT_UUID_TCP_VAL
BT_UUID_TCP
BT_UUID_TCS_BIN_VAL
BT_UUID_TCS_BIN
BT_UUID_TCS_AT_VAL
BT_UUID_TCS_AT
BT_UUID_ATT_VAL
BT_UUID_ATT
BT_UUID_OBEX_VAL
BT_UUID_OBEX
BT_UUID_IP_VAL
BT_UUID_IP
BT_UUID_FTP_VAL
BT_UUID_FTP
BT_UUID_HTTP_VAL
BT_UUID_HTTP
BT_UUID_WSP_VAL
BT_UUID_WSP
BT_UUID_BNEP_VAL
BT_UUID_BNEP
BT_UUID_UPNP_VAL
BT_UUID_UPNP
BT_UUID_HIDP_VAL
BT_UUID_HIDP
BT_UUID_HCRP_CTRL_VAL
BT_UUID_HCRP_CTRL
BT_UUID_HCRP_DATA_VAL
BT_UUID_HCRP_DATA
BT_UUID_HCRP_NOTE_VAL
BT_UUID_HCRP_NOTE
BT_UUID_AVCTP_VAL
BT_UUID_AVCTP
BT_UUID_AVDTP_VAL
BT_UUID_AVDTP
BT_UUID_CMTP_VAL
BT_UUID_CMTP
BT_UUID_UDI_VAL
BT_UUID_UDI
BT_UUID_MCAP_CTRL_VAL
BT_UUID_MCAP_CTRL
BT_UUID_MCAP_DATA_VAL
BT_UUID_MCAP_DATA
BT_UUID_L2CAP_VAL
BT_UUID_L2CAP
Enums
enum [anonymous]
Bluetooth UUID types.
Values:
enumerator BT_UUID_TYPE_16
UUID type 16-bit.
enumerator BT_UUID_TYPE_32
UUID type 32-bit.
enumerator BT_UUID_TYPE_128
UUID type 128-bit.
Functions
struct bt_uuid
#include <uuid.h> This is a ‘tentative’ type and should be used as a pointer only.
struct bt_uuid_16
#include <uuid.h>
Public Members
uint16_t val
UUID value, 16-bit in host endianness.
struct bt_uuid_32
#include <uuid.h>
Public Members
uint32_t val
UUID value, 32-bit in host endianness.
struct bt_uuid_128
#include <uuid.h>
Public Members
uint8_t val[16]
UUID value, 128-bit in little-endian format.
This document describes how to run Basic Audio Profile functionality which includes:
• Capabilities and Endpoint discovery
• Audio Stream Endpoint procedures
Commands
bap --help
Subcommands:
init
select_broadcast :<stream>
create_broadcast :[preset <preset_name>] [enc <broadcast_code>]
start_broadcast :
stop_broadcast :
delete_broadcast :
broadcast_scan :<on, off>
accept_broadcast :0x<broadcast_id>
sync_broadcast :0x<bis_index> [[[0x<bis_index>] 0x<bis_index>] ...]
stop_broadcast_sink :Stops broadcast sink
term_broadcast_sink :
discover :[dir: sink, source]
config :<direction: sink, source> <index> [loc <loc_bits>] [preset <preset_name>]
uart:~$ bt init
uart:~$ bap init
uart:~$ bt connect <address>
uart:~$ gatt exchange-mtu
uart:~$ bap discover sink
uart:~$ bap connect sink 0
uart:~$ bt init
uart:~$ bap init
uart:~$ bt advertise on
Example Broadcast Sink Scan for and establish a broadcast sink stream:
Init The init command registers local PAC records, which are necessary to configure streams and properly manage the capabilities in use.
Discover PAC(s) and ASE(s) Once connected, the discover command discovers the PAC records and ASE characteristics representing the remote endpoints.
Note: Use command gatt exchange-mtu to make sure the MTU is configured properly.
Select preset The preset command can be used either to print the default preset configuration or to set a different one; note that it does not change any previously configured stream.
Configure Codec The config command attempts to configure a stream for the given direction using a preset codec configuration, which can either be passed directly or, if omitted, taken from the default preset.
uart:~$ bap config <direction: sink, source> <index> [loc <loc_bits>] [preset <preset_name>]
uart:~$ bap stream_qos <interval> [framing] [latency] [pd] [sdu] [phy] [rtn]
uart:~$ bap stream_qos 10
Configure QoS The qos command attempts to configure the stream QoS using the preset configuration; each individual QoS parameter can be overridden with the optional parameters.
Enable The enable command attempts to enable the previously configured stream; if the remote peer accepts, the ISO connection procedure is also initiated.
Start The start command is only necessary when acting as a sink, as it indicates to the source that the stack is ready to start receiving data.
Disable The disable command attempts to disable the previously enabled stream; if the remote peer accepts, the ISO disconnection procedure is also initiated.
Stop The stop command is only necessary when acting as a sink, as it indicates to the source that the stack is ready to stop receiving data.
Release The release command releases the current stream and its configuration.
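Assembled from the steps above, a complete session against a remote sink endpoint might look as follows. The address is a placeholder, and the exact optional arguments accepted by enable may differ:

```console
uart:~$ bt init
uart:~$ bap init
uart:~$ bt connect <address>
uart:~$ gatt exchange-mtu
uart:~$ bap discover sink
uart:~$ bap config sink 0
uart:~$ bap stream_qos 10
uart:~$ bap enable
...
uart:~$ bap disable
uart:~$ bap release
```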
This document describes how to run the BAP Broadcast Assistant functionality. Note that in the examples
below, some lines of debug have been removed to make this shorter and provide a better overview.
The Broadcast Assistant is responsible for offloading scanning from a resource-restricted device, so that scanning does not drain its battery. The Broadcast Assistant shall support scanning for periodic advertisements and may optionally support the periodic advertisements synchronization transfer (PAST) protocol.
The Broadcast Assistant will typically be a phone or laptop. It scans for periodic advertisements and transfers the information to the server.
It is necessary to have BT_DEBUG_BAP_BROADCAST_ASSISTANT enabled for using the Broadcast Assistant
interactively.
When the Bluetooth stack has been initialized (bt init), and a device has been connected, the Broadcast Assistant can discover BASS on the connected device by calling bap_broadcast_assistant discover, which will start a discovery for the BASS UUIDs and store the handles, and subscribe to all notifications.
Example usage
Setup
uart:~$ bt init
uart:~$ bt connect xx:xx:xx:xx:xx:xx public
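With the connection in place, the BASS discovery described above is then started with:

```console
uart:~$ bap_broadcast_assistant discover
```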
Note: The Broadcast Assistant will not actually start scanning for periodic advertisements, as that
feature is still, at the time of writing, not implemented.
This document describes how to run the Scan Delegator functionality. Note that in the examples below, some lines of debug have been removed to make this shorter and provide a better overview.
The Scan Delegator may optionally support the periodic advertisements synchronization transfer (PAST)
protocol.
The Scan Delegator server typically resides on devices that have inputs or outputs.
It is necessary to have BT_DEBUG_BAP_SCAN_DELEGATOR enabled for using the Scan Delegator interactively.
The Scan Delegator can currently only set the sync state of a receive state, but does not actually support
syncing with periodic advertisements yet.
bap_scan_delegator --help
bap_scan_delegator - Bluetooth BAP Scan Delegator shell commands
Subcommands:
init :Initialize the service and register callbacks
synced :Set server scan state <src_id> <pa_synced> <bis_syncs> <enc_state>
Example Usage
Setup
uart:~$ bt init
uart:~$ bt advertise on
Advertising started
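After this setup, the service itself can be initialized and the sync state of a receive state set manually via the synced subcommand; the src_id, pa_synced, bis_syncs and enc_state values below are purely illustrative:

```console
uart:~$ bap_scan_delegator init
uart:~$ bap_scan_delegator synced 0 1 1 0
```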
This document describes how to run the Common Audio Profile functionality.
CAP Acceptor The Acceptor will typically be a resource-constrained device, such as a headset, earbud or hearing aid. The Acceptor can initialize a Coordinated Set Identification Service instance if it is part of a set with one or more other CAP Acceptors.
Using the CAP Acceptor When the Bluetooth stack has been initialized (bt init), the Acceptor can be registered by calling cap_acceptor init, which will register the CAS and CSIS services, as well as register callbacks.
cap_acceptor --help
cap_acceptor - Bluetooth CAP acceptor shell commands
Subcommands:
init :Initialize the service and register callbacks [size <int>]
[rank <int>] [not-lockable] [sirk <data>]
lock :Lock the set
release :Release the set [force]
print_sirk :Print the currently used SIRK
set_sirk_rsp :Set the response used in SIRK requests <accept, accept_enc,
reject, oob>
Besides initializing the CAS and the CSIS, there are also commands to lock and release the CSIS instance,
as well as printing and modifying access to the SIRK of the CSIS.
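For example, an Acceptor acting as member 1 of a two-device set could be brought up as follows; the set size and rank values are illustrative:

```console
uart:~$ bt init
uart:~$ cap_acceptor init size 2 rank 1
uart:~$ bt advertise on
```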
CAP Initiator The Initiator will typically be a resource-rich device, such as a phone or PC. The Initiator can discover the CAP Acceptors' CAS and optional CSIS services. The CSIS service can be read to provide information about other CAP Acceptors in the same Coordinated Set. The Initiator can execute stream control procedures on sets of devices, either ad hoc or Coordinated, and thus provides an easy way to set up multiple streams on multiple devices at once.
Using the CAP Initiator When the Bluetooth stack has been initialized (bt init), the Initiator can
discover CAS and the optionally included CSIS instance by calling (cap_initiator discover).
cap_initiator --help
cap_initiator - Bluetooth CAP initiator shell commands
Subcommands:
discover :Discover CAS
unicast-start :Unicast Start [csip] [sinks <cnt> (default 1)] [sources <cnt>
(default 1)] [conns (<cnt> | all) (default 1)]
unicast-list :Unicast list streams
unicast-update :Unicast Update <all | stream [stream [stream...]]>
unicast-stop :Unicast stop all streams
Before being able to perform any stream operation, the device must also perform the bap discover
operation to discover the ASEs and PAC records. The bap init command also needs to be called.
Both of the above commands should be executed for each device that you want to use in the set. To use multiple devices, simply connect to more of them and then use bt select to choose the device to execute the commands on.
Once all devices have been connected and the respective discovery commands have been called, the
cap_initiator unicast-start command can be used to put one or more streams into the streaming
state.
To stop all the streams that have been started, the cap_initiator unicast-stop command can be used.
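Putting this together for a single connected Acceptor, a minimal sequence might look like this; the address is a placeholder:

```console
uart:~$ bt init
uart:~$ bap init
uart:~$ bt connect <address>
uart:~$ bap discover
uart:~$ cap_initiator discover
uart:~$ cap_initiator unicast-start
uart:~$ cap_initiator unicast-stop
```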
This document describes how to run the call control functionality, both as a client and as a telephone bearer service (TBS) server. Note that in the examples below, some lines of debug have been removed to make this shorter and provide a better overview.
Telephone Bearer Service Client The telephone bearer service client will typically exist on a resource-restricted device, such as headphones, but may also exist on e.g. phones or laptops. The call control client will thus typically also be the advertiser. The client can control the states of calls on a server using the call control point.
It is necessary to have BT_DEBUG_TBS_CLIENT enabled for using the client interactively.
Using the telephone bearer service client When the Bluetooth stack has been initialized (bt init), and a device has been connected, the telephone bearer service client can discover TBS on the connected device by calling tbs_client discover, which will start a discovery for the TBS UUIDs and store the handles, and optionally subscribe to all notifications (the default is to subscribe to all).
Since a server may have multiple TBS instances, most of the tbs_client commands will take an index (starting from 0) as input. Joining calls requires at least two call IDs, and all call indexes shall be on the same TBS instance.
A server may also have a GTBS instance, which is an abstraction layer for all the telephone bearers on
the server. If the server has both GTBS and TBS, the client may subscribe and use either when sending
requests if BT_TBS_CLIENT_GTBS is enabled.
tbs_client --help
tbs_client - Bluetooth TBS_CLIENT shell commands
Subcommands:
discover :Discover TBS [subscribe]
set_signal_reporting_interval :Set the signal reporting interval
[<{instance_index, gtbs}>] <interval>
originate :Originate a call [<{instance_index, gtbs}>]
<uri>
terminate :terminate a call [<{instance_index, gtbs}>]
<id>
accept :Accept a call [<{instance_index, gtbs}>] <id>
hold :Place a call on hold [<{instance_index,
gtbs}>] <id>
retrieve :Retrieve a held call [<{instance_index,
gtbs}>] <id>
read_provider_name :Read the bearer name [<{instance_index,
gtbs}>]
read_bearer_uci :Read the bearer UCI [<{instance_index, gtbs}>]
read_technology :Read the bearer technology [<{instance_index,
gtbs}>]
read_uri_list :Read the bearer's supported URI list
[<{instance_index, gtbs}>]
read_signal_strength :Read the bearer signal strength
[<{instance_index, gtbs}>]
read_signal_interval :Read the bearer signal strength reporting
interval [<{instance_index, gtbs}>]
read_current_calls :Read the current calls [<{instance_index,
gtbs}>]
read_ccid :Read the CCID [<{instance_index, gtbs}>]
read_status_flags :Read the in feature and status value
[<{instance_index, gtbs}>]
read_uri :Read the incoming call target URI
[<{instance_index, gtbs}>]
read_call_state :Read the call state [<{instance_index, gtbs}>]
read_remote_uri :Read the incoming remote URI
[<{instance_index, gtbs}>]
read_friendly_name :Read the friendly name of an incoming call
[<{instance_index, gtbs}>]
In the following examples, notifications from GTBS are ignored unless otherwise specified.
Example usage
Setup
uart:~$ bt init
uart:~$ bt advertise on
Advertising started
Terminate call:
uart:~$ tbs_client terminate 0 5
<dbg> bt_tbs_client.termination_reason_notify_handler: ID 0x05, reason 0x06
<dbg> bt_tbs_client.call_cp_notify_handler: Status: success for the terminate opcode for call 0x05
<dbg> bt_tbs_client.current_calls_notify_handler:
Telephone Bearer Service (TBS) The telephone bearer service is a service that typically resides on devices that can make calls, e.g. (smart)phones and PCs, including calls from apps such as Skype.
It is necessary to have BT_DEBUG_TBS enabled for using the TBS server interactively.
Using the telephone bearer service TBS can be controlled locally, or by a remote device (when in a call). For example, a remote device may initiate a call to the device running the TBS server, or the TBS server may initiate a call to a remote device without a TBS client being involved. The TBS implementation is capable of fully controlling any call.
tbs --help
tbs - Bluetooth TBS shell commands
Subcommands:
init :Initialize TBS
authorize :Authorize the current connection
accept :Accept call <call_index>
terminate :Terminate call <call_index>
hold :Hold call <call_index>
retrieve :Retrieve call <call_index>
originate :Originate call [<instance_index>] <uri>
join :Join calls <id> <id> [<id> [<id> [...]]]
incoming :Simulate incoming remote call [<{instance_index,
gtbs}>] <local_uri> <remote_uri>
<remote_friendly_name>
remote_answer :Simulate remote answer outgoing call <call_index>
remote_retrieve :Simulate remote retrieve <call_index>
remote_terminate :Simulate remote terminate <call_index>
remote_hold :Simulate remote hold <call_index>
set_bearer_provider_name :Set the bearer provider name [<{instance_index,
gtbs}>] <name>
set_bearer_technology :Set the bearer technology [<{instance_index,
gtbs}>] <technology>
set_bearer_signal_strength :Set the bearer signal strength [<{instance_index,
gtbs}>] <strength>
set_status_flags :Set the bearer feature and status value
[<{instance_index, gtbs}>] <feature_and_status>
set_uri_scheme :Set the URI prefix list <bearer_idx> <uri1 [uri2
Example Usage
Setup
uart:~$ bt init
uart:~$ bt connect xx:xx:xx:xx:xx:xx public
This document describes how to run the coordinated set identification functionality, both as a client and
as a server. Note that in the examples below, some lines of debug have been removed to make this shorter
and provide a better overview.
Set Coordinator (Client) The client will typically be a resource-rich device, such as a smartphone or a
laptop. The client is able to lock and release members of a coordinated set. While the coordinated set is
locked, no other clients may lock the set.
To lock a set, the client must connect to each of the set members it wants to lock. This implementation
always tries to connect to all the members of the set at the same time. Thus, if the set size is 3,
BT_MAX_CONN must be at least 3.
If the locks on set members are to persist through disconnects, it is necessary to bond with the set
members. If you need to bond with multiple set members, make sure that BT_MAX_PAIRED is correctly
configured.
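For the three-member example above, a minimal prj.conf sketch might look like this (the exact values depend on any other connections and bonds your application needs):

```cfg
# Allow simultaneous connections to all 3 set members
CONFIG_BT_MAX_CONN=3
# Allow bonding with all 3 set members so locks persist
CONFIG_BT_MAX_PAIRED=3
```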
Using the Set Coordinator When the Bluetooth stack has been initialized (bt init), and a set member
device has been connected, the set coordinator can be initialized by calling csip_set_coordinator
init, which will start a discovery for the CSIS UUIDs and store the handles, and optionally subscribe to
all notifications (the default is to subscribe to all).
Once the client has connected and discovered the handles, it can read the set information, which
is needed to identify other set members. The client can then scan for and connect to the remaining set
members, and once all the members have been connected to, it can lock and release the set.
csip_set_coordinator --help
csip_set_coordinator - Bluetooth CSIP_SET_COORDINATOR shell commands
Subcommands:
init :Initialize CSIP_SET_COORDINATOR
discover :Run discover for CSIS on peer device [member_index]
discover_members :Scan for set members <set_pointer>
lock_set :Lock set
release_set :Release set
lock :Lock specific member [member_index]
release :Release specific member [member_index]
lock_get :Get the lock value of the specific member and instance
[member_index [inst_idx]]
Example usage
Setup
uart:~$ bt init
uart:~$ bt connect xx:xx:xx:xx:xx:xx public
Coordinated Set Member (Server) The server typically resides on devices that are part of a set
consisting of at least two devices, e.g. a pair of earbuds.
Example Usage
Setup
uart:~$ bt init
uart:~$ csip_set_member register
Commands
iso --help
iso - Bluetooth ISO shell commands
Subcommands:
cig_create :[dir=tx,rx,txrx] [interval] [packing] [framing] [latency] [sdu]
[phy] [rtn]
cig_term :Terminate the CIG
connect :Connect ISO Channel
listen :<dir=tx,rx,txrx> [security level]
send :Send to ISO Channel [count]
disconnect :Disconnect ISO Channel
create-big :Create a BIG as a broadcaster [enc <broadcast code>]
broadcast :Broadcast on ISO channels
sync-big :Synchronize to a BIG as a receiver <BIS bitfield> [mse] [timeout]
[enc <broadcast code>]
term-big :Terminate a BIG
This document describes how to run the media control functionality, using the shell, both as a client and
as a server.
The media control server consists of two parts. There is a media player (mpl) that contains the logic to
handle media, and there is a media control service (mcs) that serves as a GATT-based interface to the
player. The media control client consists of one part, the GATT-based client (mcc).
The media control server may include an object transfer service (ots) and the media control client may
include an object transfer client (otc). When these are included, a richer set of functionality is available.
The media control server and client both implement the Generic Media Control Service (only), and do
not use any underlying Media Control Services.
Note that in the examples below, in many cases the debug output has been removed and long outputs
may have been shortened to make the examples shorter and clearer.
Also note that this documentation does not list all shell commands; it just shows examples of some of
them. The set of commands can be explored from the mcc shell and the mpl shell, by typing mcc or mpl
and pressing TAB. A help text for each command can be found with mcc <command> help or mpl
<command> help.
Overview A media player has a name and an icon that allows identification of the player for the user.
The content of the media player is structured into tracks and groups. A media player has a number of
groups. A group contains tracks and other groups. (In this implementation, a group only contains tracks,
not other groups.) Tracks can be divided into segments.
An active player will have a current track. This is the track that is playing now (if the player is playing).
The current track has a title, a duration (given in hundredths of a second) and a position - the current
position of the player within the track.
There is also a current group (the group of the current track), a parent group (the parent group of the
current group) and a next track.
The media player is in a state, which will be one of playing, paused, seeking or inactive. When playing,
playback happens at a given playback speed, and the tracks are played according to the playing order,
which is one of the playing orders supported. Track changes are signalled as notifications of the track
changed characteristic. When seeking (fast forward or fast rewind), the track position is moved according
to the seeking speed.
The supported opcodes tell which operations can be requested of the player by writing to the media control
point. There is also a search control point that allows searching for groups and tracks according to various
criteria, with the result returned in the search results.
Finally, the content control ID is used to associate the media player with an audio stream.
Media Control Client (MCP) The media control client is used to control, and to get information from,
a media control server. Control is done by writing to one of the two control points, or by writing to
other writable characteristics. Getting information is done by reading characteristics, or by configuring
the server to send notifications.
Using the media control client Before use, the media control client must be initialized by the command mcc init.
To achieve a connection to the peer, the bt commands must be used - bt init followed by bt advertise
on (or bt connect if the server is advertising).
When the media control client is connected to a media control server, the client can discover the server’s
Generic Media Control Service, by giving the command mcc discover_mcs. This will store the handles
of the service, and (optionally, but default) subscribe to all notifications.
After discovery, the media control client can read and write characteristics, including the media control
point and the search control point.
Example usage
Setup
uart:~$ bt init
Bluetooth initialized
uart:~$ bt advertise on
Advertising started
Connected: F6:58:DC:27:F3:57 (random)
Reading characteristics - the player name and the track duration as examples:
Note that the value of some characteristics may be truncated due to being too long to fit in the ATT
packet. Increasing the ATT MTU may help:
Using the included object transfer client When object transfer is supported by both the client and
the server, a larger set of characteristics is available. These include object IDs for the various track and
group objects. These IDs can be used to select and download the corresponding objects from the server’s
object transfer service.
Read the object ID of the current group object:
Search The search control point takes as its input a sequence of search control items, each consisting
of length, type (e.g. track name or artist name) and parameter (the track name or artist name to search
for). If the result is successful, the search results are stored in an object in the object transfer service.
The ID of the search results object can be read from the search results object ID characteristic. The
search results object can then be downloaded in the same way as the current group object above. (Note that the search
results object ID is empty until a search has been done.)
This implementation has a working implementation of the search functionality interface and the
server-side search control point parameter parsing, but the actual searching is faked: the same results are
returned no matter what is searched for.
There are two commands for search: one (mcc set_scp_raw) lets you input the search control point
parameter (the sequence of search control items) as a string; the other (mcc set_scp_ioptest) does
preset IOP test searches and takes the round number of the IOP search control point test as a parameter.
Before the search, the search results object ID is empty
Run the search corresponding to the fourth round of the IOP test:
The search control point parameter generated by this command and parameter has one search control
item. The length field (first octet) is 16 (0x10). (The length of the length field itself is not included.) The
type field (second octet) is 0x04 (search for a group name). The parameter (the group name to search
for) is “TSPX_Group_Name”.
After the successful search, the search results object ID has a value:
Media Control Service (MCS) The media control service (mcs) and the associated media player (mpl)
typically reside on devices that can provide access to, and serve, media content, like PCs and smartphones.
As mentioned above, the media player (mpl) has the player logic, while the media control service (mcs)
has the GATT-based interface. This separation is done so that the media player can also be used without
the GATT-based interface.
Using the media control service and the media player The media control service and the media
player are in general controlled remotely, from the media control client.
Before use, the media player must be initialized by the command mpl init.
As for the client, the bt commands are used for connecting - bt init followed by bt connect
<address> <address type> (or bt advertise on if the server is advertising).
Example Usage
Setup
uart:~$ bt init
Bluetooth initialized
Some server commands are available. These commands force notifications of the various characteristics,
for testing that the client receives notifications. The values sent in the notifications caused by these
testing commands are independent of the media player, so they correspond neither to the actual values of
the characteristics nor to the actual state of the media player.
Example: Force (fake value) notification of the track duration:
uart:~$ mpl duration_changed_cb
[00:15:17.491,058] <dbg> bt_mcs.mpl_track_duration_cb: Notifying track duration: 12000
The Bluetooth Shell is an application based on the Shell module. It offers a collection of commands made
to easily interact with the Bluetooth stack.
First you need to build and flash your board with the Bluetooth shell. For how to do that, see the Getting
Started Guide. The Bluetooth shell itself is located in tests/bluetooth/shell/.
When it’s done, connect to the CLI using your favorite serial terminal application. You should see the
following prompt:
uart:~$
Identities
Identities are a Zephyr host concept, allowing a single physical device to behave like multiple logical
Bluetooth devices.
The shell allows the creation of multiple identities, up to a maximum set by the Kconfig symbol
CONFIG_BT_ID_MAX. To create a new identity, use the bt id-create command. You can then use it by
selecting it with its ID: bt id-select <id>. Finally, you can list all the available identities with bt id-show.
Start scanning by using the bt scan on command. Depending on the environment you're in, you may
see a lot of lines printed on the shell. To stop the scan, run bt scan off; the scrolling should stop.
Here is an example of what you can expect:
uart:~$ bt scan on
Bluetooth active scan enabled
[DEVICE]: CB:01:1A:2D:6E:AE (random), AD evt type 0, RSSI -78 C:1 S:1 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: 20:C2:EE:59:85:5B (random), AD evt type 3, RSSI -62 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: E3:72:76:87:2F:E8 (random), AD evt type 3, RSSI -74 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: 1E:19:25:8A:CB:84 (random), AD evt type 3, RSSI -67 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: 26:42:F3:D5:A0:86 (random), AD evt type 3, RSSI -73 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: 0C:61:D1:B9:5D:9E (random), AD evt type 3, RSSI -87 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: 20:C2:EE:59:85:5B (random), AD evt type 3, RSSI -66 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
[DEVICE]: 25:3F:7A:EE:0F:55 (random), AD evt type 3, RSSI -83 C:0 S:0 D:0 SR:0 E:0␣
˓→Prim: LE 1M, Secn: No packets, Interval: 0x0000 (0 us), SID: 0xff
uart:~$ bt scan off
Scan successfully stopped
As you can see, this can lead to a high number of results. To reduce that number and easily find a specific
device, you can enable scan filters. There are four types of filters: by name, by RSSI, by address and by
periodic advertising interval. To apply a filter, use the bt scan-filter-set command followed by the
type of filter. You can add multiple filters by using the commands again.
For example, if you want to look only for devices with the name test shell:
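A minimal sketch of such a name filter, assuming the scan-filter-set subcommand shown below (the exact subcommand name may differ between Zephyr versions):

```shell
uart:~$ bt scan-filter-set name "test shell"
uart:~$ bt scan on
```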
You can use the command bt scan on to create an active scanner, meaning that the scanner will ask the
advertisers for more information by sending a scan request packet. Alternatively, you can create a passive
scanner by using the bt scan passive command, so the scanner will not ask the advertiser for more
information.
Connecting to a device
To connect to a device, you need to know its address and address type, and use the bt connect
command with the address and the type as arguments.
Here is an example:
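Using the same placeholder address style as the setup examples earlier in this document:

```shell
uart:~$ bt connect xx:xx:xx:xx:xx:xx public
```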
You can list the active connections of the shell using the bt connections command. The shell maximum
number of connections is defined by CONFIG_BT_MAX_CONN. You can disconnect from a connection with
the bt disconnect <address: XX:XX:XX:XX:XX:XX> <type: (public|random)> command.
Note: If you were scanning just before, you can connect to the last scanned device by simply running
the bt connect command.
Alternatively, you can use the bt connect-name <name> command to automatically enable scanning
with a name filter and connect to the first match.
Advertising
Begin advertising by using the bt advertise on command. This will use the default parameters and
advertise a resolvable private address with the name of the device. You can choose to use the identity
address instead by running the bt advertise on identity command. To stop advertising use the bt
advertise off command.
To enable more advanced features of advertising, you should create an advertiser using the bt
adv-create command. Parameters for the advertiser can be passed either at the creation of it or by
using the bt adv-param command. To begin advertising with this newly created advertiser, use the bt
adv-start command, and then the bt adv-stop command to stop advertising.
When using a custom advertiser, you can choose whether it will be connectable or scannable. This leads to
four options: conn-scan, conn-nscan, nconn-scan and nconn-nscan. One of these parameters is mandatory
when creating an advertiser or updating its parameters.
For example, if you want to create a connectable and scannable advertiser and start it:
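A minimal sequence, using the bt adv-create and bt adv-start commands introduced above (the printed advertiser address is instance-specific):

```shell
uart:~$ bt adv-create conn-scan
uart:~$ bt adv-start
Advertiser[0] 0x200022f0 set started
```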
You may notice that with this, the custom advertiser does not advertise the device name; you need to
enable it. Continuing from the previous example:
uart:~$ bt adv-stop
Advertiser set stopped
uart:~$ bt adv-param conn-scan name
uart:~$ bt adv-start
Advertiser[0] 0x200022f0 set started
You should now see the name of the device in the advertising data. You can also set the advertising data
manually by using the bt adv-data command. The following example shows how to set the advertiser
name with it:
The data must be formatted according to the Bluetooth Core Specification (see version 5.3, vol. 3, part
C, 11). In this example, the first octet is the size of the data (the data and one octet for the data type),
the second one is the type of data, 0x09 is the Complete Local Name and the remaining data are the
name in ASCII. So, on the other device you should see the name Bluetooth-Shell.
When advertising, if other devices use an active scanner, you may receive scan request packets. To
visualize those packets, you can add scan-reports to the parameters of your advertiser.
Directed Advertising It is possible to use directed advertising on the shell if you want to reconnect
to a device. The following example demonstrates how to create a directed advertiser with the address
specified right after the parameter directed. The low parameter indicates that we want to use the low
duty cycle mode, and the dir-rpa parameter is required if the remote device is privacy-enabled and
supports address resolution of the target address in directed advertisement.
After that, you can start the advertiser and then the target device will be able to reconnect.
Extended Advertising Let's now have a look at some extended advertising features. To enable extended advertising, use the ext-adv parameter.
It's possible to create a list of allowed addresses that can be used to connect to those addresses automatically. Here is how to do it:
The shell will then connect to the first available device. In the example, if both devices are advertising at
the same time, we will connect to the first address added to the list.
The Filter Accept List can also be used for scanning or advertising by using the option fal. For example,
if we want to scan for a bunch of selected addresses, we can set up a Filter Accept List:
You should see only those three addresses reported by the scanner.
Enabling security
When connected to a device, you can enable multiple levels of security, here is the list for Bluetooth LE:
• 1 No encryption and no authentication;
• 2 Encryption and no authentication;
• 3 Encryption and authentication;
• 4 Bluetooth LE Secure Connection.
To enable security, use the bt security <level> command. For levels requiring authentication (level
3 and above), you must first set the authentication method. To do so, you can use the bt auth all
command. After that, when you set the security level, you will be asked to confirm the passkey on
both devices. On the shell side, do it with the command bt auth-passkey-confirm.
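Putting the steps above together, a minimal sequence on each device might look like this (run bt auth-passkey-confirm on both sides when prompted):

```shell
uart:~$ bt auth all
uart:~$ bt security 3
```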
Pairing Enabling authentication requires the devices to be bondable. By default the shell is bondable.
You can make the shell not bondable using bt bondable off. You can list all the devices you are paired
with using the command bt bonds.
The maximum number of paired devices is set using CONFIG_BT_MAX_PAIRED. You can remove a paired
device using bt clear <address: XX:XX:XX:XX:XX:XX> <type: (public|random)> or remove all
paired devices with the command bt clear all.
GATT
The following examples assume that you have two devices already connected.
To perform service discovery on the client side, use the gatt discover command. This should print all
the services that are available on the GATT server.
On the server side, you can register pre-defined test services using the gatt register command. When
done, you should see the newly added services on the client side when running the discovery command.
You can now subscribe to those new services on the client side. Here is an example on how to subscribe
to the test service:
The server can now notify the client with the command gatt notify.
Another option available through the GATT command is initiating the MTU exchange. To do it, use the
gatt exchange-mtu command. To update the shell maximum MTU, you need to update Kconfig symbols
in the configuration file of the shell. For more details, see bluetooth_mtu_update_sample.
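As a sketch, an application configuration raising the ATT MTU might set the following symbols (the values are illustrative; see the MTU update sample for the exact symbols your setup needs):

```cfg
# Larger L2CAP TX MTU so the ATT MTU can be negotiated up
CONFIG_BT_L2CAP_TX_MTU=247
# Larger ACL RX buffers to carry the bigger packets
CONFIG_BT_BUF_ACL_RX_SIZE=251
```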
L2CAP
The l2cap command exposes parts of the L2CAP API. The following example shows how to register a LE
PSM, connect to it from another device and send 3 packets of 14 octets each.
The example assumes that the two devices are already connected.
On device A, register the LE PSM:
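A hypothetical transcript for the scenario just described; the PSM value 80 is arbitrary, and the argument order of the send subcommand (packet count, then packet size) is an assumption that may differ between Zephyr versions:

```shell
uart:~$ l2cap register 80
```

Then, on device B, connect to the PSM and send the packets:

```shell
uart:~$ l2cap connect 80
uart:~$ l2cap send 3 14
```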
Logging
You can configure the logging level per module at runtime. This depends on the maximum logging level
that is compiled in. To configure, use the log command. Here are some examples:
• List the available modules and their current logging level
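A couple of hedged examples, assuming the standard Zephyr log shell subcommands (log status, log enable, log disable); the module name bt_hci_core is used here purely for illustration:

```shell
uart:~$ log status
uart:~$ log enable dbg bt_hci_core
uart:~$ log disable bt_hci_core
```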
6.2 Networking
The networking section contains information regarding the network stack of the Zephyr kernel. Use
the information to understand the principles behind the operation of the stacks and how they were
implemented.
6.2.1 Overview
• Supported Features
• Source Tree Layout
Supported Features
The networking IP stack is modular and highly configurable via build-time configuration options. You
can minimize system memory consumption by enabling only those network features required by your
application. Almost all features can be disabled if not needed.
• IPv6 The support for IPv6 is enabled by default. Various IPv6 sub-options can be enabled or
disabled depending on networking needs.
– The developer can set the number of unicast and multicast IPv6 addresses that are active at the
same time.
– The IPv6 address for the device can be set either statically or dynamically using SLAAC (State-
less Address Auto Configuration) (RFC 4862).
– The system also supports multiple IPv6 prefixes and the maximum IPv6 prefix count can be
configured at build time.
– The IPv6 neighbor cache can be disabled if not needed, and its size can be configured at build
time.
– The IPv6 neighbor discovery support (RFC 4861) is enabled by default.
– Multicast Listener Discovery v2 support (RFC 3810) is enabled by default.
– IPv6 header compression (6lo) is available for IPv6 connectivity for Bluetooth IPSP (RFC
7668) and IEEE 802.15.4 networks (RFC 4944).
• IPv4 Legacy IPv4 is supported by the networking stack. It cannot be used with IEEE 802.15.4
or Bluetooth IPSP, as those network technologies support only IPv6. IPv4 can be used in Ethernet-based
networks. By default IPv4 support is disabled.
• Bluetooth
• Ethernet
• SLIP (IP over serial line). Used for testing with QEMU. It provides an Ethernet interface to the host system
(like Linux), so test applications can run on the Linux host and exchange network data with the Zephyr OS
device.
This page describes how to get information about network packet processing statistics inside the network
stack.
The network stack contains infrastructure to measure how long network packet processing takes, in
either the sending or the receiving path. Two Kconfig options control this. For the transmit (TX)
path the option is called CONFIG_NET_PKT_TXTIME_STATS, and for the receive (RX) path the option is called
CONFIG_NET_PKT_RXTIME_STATS. Note that for TX, statistics are collected for all kinds of network packets. For
RX, statistics are collected only for UDP, TCP or raw packet types.
After enabling these options, the net stats network shell command will show this information:
Note: The values above and below are from an emulated qemu_x86 board with UDP traffic.
The TX time tells how long it took for a network packet to go from its creation to when it was sent to the
network. The RX time tells the time from its creation to when it was passed to the application. The
values are in microseconds. The statistics are collected per traffic class if more than one
transmit or receive queue is defined in the system. These are controlled by the CONFIG_NET_TC_TX_COUNT and
CONFIG_NET_TC_RX_COUNT options.
The numbers inside the brackets indicate how many microseconds it took for a network
packet to go from the previous state to the next.
In the TX example above, the values are averages over 18902 packets and contain this information:
• The packet was created by the application, so the time is 0.
• The packet is about to be placed in the transmit queue. The time it took from network packet creation to
this state is 22 microseconds in this example.
• The correct TX thread is invoked, and the packet is read from the transmit queue. It took 15
microseconds from the previous state.
• The network packet was just sent and the network stack is about to free it. It
took 23 microseconds from the previous state.
• In total it took on average 60 microseconds to get the network packet sent. The value 63 conveys
the same information but is calculated differently, so it differs slightly because of rounding
errors.
In the RX example above, the values are averages over 18892 packets and contain this information:
• The packet was created by the network device driver, so the time is 0.
• The packet is about to be placed in the receive queue. The time it took from network packet creation to
this state is 9 microseconds in this example.
• The correct RX thread is invoked, and the packet is read from the receive queue. It took 6
microseconds from the previous state.
• The network packet is then processed and placed in the correct socket queue. It took 11 microseconds
from the previous state.
• The last value tells how long it took from there to the application. Here the value is 13 microseconds.
• In total it took on average 39 microseconds to get the network packet delivered to the application. The
value 42 conveys the same information but is calculated differently, so it differs slightly because of rounding
errors.
The Zephyr network stack is a native network stack specifically designed for Zephyr OS. It consists of
layers, each meant to provide certain services to other layers. Network stack functionality is highly
configurable via Kconfig options.
[Figure: Zephyr network stack layers, top to bottom: Network Application; Application Protocols (CoAP, LWM2M, MQTT, ...); Socket API; Network Protocols (UDP, TCP, non-IP sockets, ICMPv6, ICMPv4); L2 Network Technologies (IPv6 header compression, Ethernet, 802.15.4, other technologies); network device drivers.]
• Network Application. The network application can either use the provided application-level protocol libraries or access the BSD socket API directly to create a network connection, send or receive
data, and close a connection. The application can also use the network management API to configure the network and set related parameters such as network link options, start a scan (when
applicable), listen to network configuration events, etc. The network interface API can be used to set an
IP address on a network interface, take the network interface down, etc.
• Network Protocols. This provides implementations for various protocols such as
– Application-level network protocols like CoAP, LWM2M, and MQTT. See application protocols
chapter for information about them.
– Core network protocols like IPv6, IPv4, UDP, TCP, ICMPv4, and ICMPv6. You access these
protocols by using the BSD socket API.
• Network Interface Abstraction. This provides functionality that is common to all network
interfaces, such as taking a network interface down, etc. There can be multiple network interfaces
in the system. See the network interface overview for more details.
• L2 Network Technologies. This provides a common API for sending and receiving data to and
from an actual network device. See L2 overview for more details. These network technologies
include Ethernet, IEEE 802.15.4, Bluetooth, CANBUS, etc. Some of these technologies support IPv6
header compression (6Lo), see RFC 6282 for details. For example ARP for IPv4 is done by the
Ethernet component.
• Network Device Drivers. The actual low-level device drivers handle the physical sending or re-
ceiving of network packets.
An application typically consists of one or more threads that execute the application logic. When using
the BSD socket API, the following things will happen.
Receiving UDP
[Figure: Receiving a UDP packet through the stack. Recoverable steps: (1) the packet is received from the network by the device driver; (2) it is placed in the RX queue; (6) the UDP headers are parsed and stripped, and the packet is added to the socket queue; (7) the packet is retrieved from the socket queue and its data is copied into application buffers; (8) the recv call returns to the application.]
Sending UDP
[Figure: Sending a UDP packet through the stack. Recoverable steps: (1) the application calls send; (2) a network packet structure is created, the user data is copied into it, and the packet is marshalled to kernel space; protocol headers are added in front of the data; (7) the packet is placed in the TX FIFO; (8) the data is physically sent to the network.]
Applications should use the BSD socket API defined in include/zephyr/net/socket.h to create a connection, send or receive data, and close a connection. The same API can be used when working with UDP
or TCP data. See BSD socket API for more details.
See the sockets-echo-server-sample and sockets-echo-client-sample applications for how to create a simple
server or client BSD socket based application.
The legacy connectivity API in include/zephyr/net/net_context.h should not be used by applications.
• Prerequisites
• Basic Setup
– Step 1 - Create Ethernet interface
– Step 2 - Start app in native_posix board
– Step 3 - Connect to console (optional)
This page describes how to set up a virtual network between a (Linux) host and a Zephyr application
running in a native_posix board.
In this example, the sockets-echo-server-sample sample application from the Zephyr source distribution
is run in native_posix board. The Zephyr native_posix board instance is connected to a Linux host using
a tuntap device which is modeled in Linux as an Ethernet network interface.
Prerequisites On the Linux Host, fetch the Zephyr net-tools project, which is located in a separate
Git repository:
Basic Setup For the steps below, you will need three terminal windows:
• Terminal #1 is a terminal window with net-tools being the current directory (cd net-tools)
• Terminal #2 is your usual Zephyr development terminal, with the Zephyr environment initialized.
• Terminal #3 is the console to the running Zephyr native_posix instance (optional).
Step 1 - Create Ethernet interface Before starting native_posix with network emulation, a network
interface should be created.
In terminal #1, type:
./net-setup.sh
You can tweak the behavior of the net-setup.sh script. See various options by running net-setup.sh like
this:
./net-setup.sh --help
Step 2 - Start app in native_posix board Build and start the echo_server sample application.
In terminal #2, type:
Step 3 - Connect to console (optional) The console window should be launched automatically when
the Zephyr instance is started but if it does not show up, you can manually connect to the console. The
native_posix board will print a string like this when it starts:
screen /dev/pts/5
• Prerequisites
• Basic Setup
This page describes how to set up a virtual network between a (Linux) host and a Zephyr application
running in QEMU.
In this example, the sockets-echo-server-sample sample application from the Zephyr source distribution
is run in QEMU. The Zephyr instance is connected to a Linux host using a tuntap device which is modeled
in Linux as an Ethernet network interface.
Prerequisites On the Linux Host, fetch the Zephyr net-tools project, which is located in a separate
Git repository:
Basic Setup For the steps below, you will need two terminal windows:
• Terminal #1 is a terminal window with net-tools being the current directory (cd net-tools)
• Terminal #2 is your usual Zephyr development terminal, with the Zephyr environment initialized.
When configuring the Zephyr instance, you must select the correct Ethernet driver for QEMU connectivity:
• For qemu_x86, select the Intel(R) PRO/1000 Gigabit Ethernet driver. The driver is called e1000 in the Zephyr source tree.
• For qemu_cortex_m3, select the TI Stellaris MCU family ethernet driver. The driver is called stellaris in the Zephyr source tree.
• For mps2_an385, select the SMSC911x/9220 Ethernet driver. The driver is called smsc911x in the Zephyr source tree.
Step 1 - Create Ethernet interface Before starting QEMU with network connectivity, a network inter-
face should be created in the host system.
In terminal #1, type:
./net-setup.sh
You can tweak the behavior of the net-setup.sh script. See various options by running net-setup.sh
like this:
./net-setup.sh --help
Step 2 - Start app in QEMU board Build and start the sockets-echo-server-sample sample application.
In this example, the qemu_x86 board is used.
In terminal #2, type:
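The build commands are missing here; a typical sequence for the qemu_x86 board, assuming the sample path samples/net/sockets/echo_server:

```sh
# Build the echo_server sample for qemu_x86 and launch it in QEMU
cd $ZEPHYR_BASE
west build -b qemu_x86 samples/net/sockets/echo_server
west build -t run
```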
• Prerequisites
• Basic Setup
– Step 1 - Create helper socket
– Step 2 - Start TAP device routing daemon
– Step 3 - Start app in QEMU
– Step 4 - Run apps on host
– Step 5 - Stop supporting daemons
• Setting up Zephyr and NAT/masquerading on host to access Internet
• Network connection between two QEMU VMs
– Terminal #1:
– Terminal #2:
• Running multiple QEMU VMs of the same sample
– Terminal #1:
– Terminal #2:
This page describes how to set up a virtual network between a (Linux) host and a Zephyr application
running in a QEMU virtual machine (built for Zephyr targets such as qemu_x86 and qemu_cortex_m3).
In this example, the sockets-echo-server-sample sample application from the Zephyr source distribution
is run in QEMU. The QEMU instance is connected to a Linux host using a serial port, and SLIP is used to
transfer data between the Zephyr application and Linux (over a chain of virtual connections).
Prerequisites On the Linux Host, fetch the Zephyr net-tools project, which is located in a separate
Git repository:
Note: If you get an error about AX_CHECK_COMPILE_FLAG, install the autoconf-archive package on Debian/Ubuntu.
Basic Setup For the steps below, you will need at least four terminal windows:
• Terminal #1 is your usual Zephyr development terminal, with the Zephyr environment initialized.
• Terminals #2, #3, and #4 are terminal windows with net-tools as the current directory (cd net-tools)
Step 1 - Create helper socket Before starting QEMU with network emulation, a Unix socket for the
emulation should be created.
In terminal #2, type:
./loop-socat.sh
Step 2 - Start TAP device routing daemon In terminal #3, type:
sudo ./loop-slip-tap.sh
For applications requiring DNS, you may need to restart the host’s DNS server at this point, as described
in Setting up Zephyr and NAT/masquerading on host to access Internet.
Step 3 - Start app in QEMU Build and start the echo_server sample application.
In terminal #1, type:
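The commands did not survive extraction; a plausible sequence, assuming the echo_server sample at samples/net/sockets/echo_server:

```sh
# Build echo_server for qemu_x86 and start it; QEMU connects to the
# Unix socket created in Step 1
cd $ZEPHYR_BASE
west build -b qemu_x86 samples/net/sockets/echo_server
west build -t run
```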
If you see an error from QEMU about unix:/tmp/slip.sock, it means you missed Step 1 above.
Step 4 - Run apps on host Now in terminal #4, you can run various tools to communicate with the
application running in QEMU.
You can start with pings:
ping 192.0.2.1
ping6 2001:db8::1
You can use the netcat (“nc”) utility, connecting using UDP:
If echo_server is compiled with TCP support (now enabled by default for the echo_server sample, CONFIG_NET_TCP=y):
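The netcat invocations were lost in this copy; hedged examples, assuming the sample's default port 4242 and the 192.0.2.1 address used throughout this page:

```sh
# UDP echo test against the Zephyr instance
echo foobar | nc -u -w1 192.0.2.1 4242
# TCP echo test (requires CONFIG_NET_TCP=y in the sample)
echo foobar | nc -q2 192.0.2.1 4242
```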
You can also use the telnet command to achieve the above.
Step 5 - Stop supporting daemons When you are finished with network testing using QEMU, you
should stop any daemons or helpers started in the initial steps, to avoid possible networking or routing
problems such as address conflicts in local network interfaces. For example, stop them if you switch from
testing networking with QEMU to using real hardware, or to return your host laptop to normal Wi-Fi use.
To stop the daemons, press Ctrl+C in the corresponding terminal windows (you need to stop both
loop-slip-tap.sh and loop-socat.sh).
Exit QEMU by pressing CTRL+A x.
Setting up Zephyr and NAT/masquerading on host to access Internet To access the internet from a Zephyr application, some additional setup on the host may be required. This setup is common to applications running in QEMU and on real hardware, assuming that a development board is connected to the development host. If a board is connected to a dedicated router, this setup should not be needed.
To access the internet from a Zephyr application using IPv4, a gateway should be set via DHCP
or configured manually. For applications using the “Settings” facility (with the config option
CONFIG_NET_CONFIG_SETTINGS enabled), set the CONFIG_NET_CONFIG_MY_IPV4_GW option to the IP address of the gateway. For apps not using the “Settings” facility, set up the gateway by calling net_if_ipv4_set_gw() at runtime.
To access the internet from a custom application running in QEMU, NAT (masquerading) should be set
up for QEMU’s source address. Assuming 192.0.2.1 is used, the following command should be run as
root:
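The iptables command is missing from this copy; a common form of the masquerading rule for the 192.0.2.1 source address (an assumption based on standard iptables usage, not a preserved command):

```sh
# Masquerade traffic originating from the QEMU/Zephyr source address
iptables -t nat -A POSTROUTING -j MASQUERADE -s 192.0.2.1
```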
Additionally, IPv4 forwarding should be enabled on the host, and you may need to check that other
firewall (iptables) rules don’t interfere with masquerading. To enable IPv4 forwarding the following
command should be run as root:
sysctl -w net.ipv4.ip_forward=1
Some applications may also require a DNS server. A number of Zephyr-provided samples assume by default that the DNS server is available on the host (IP 192.0.2.2), which, in modern Linux distributions, usually runs at least a DNS proxy. When running with QEMU, it may be necessary to restart the host's DNS so it can serve requests on the newly created TAP interface. For example, on Debian-based systems:
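The restart command itself did not survive; assuming dnsmasq provides the local DNS proxy on the host:

```sh
# Restart the host DNS proxy so it starts serving the new TAP interface
sudo systemctl restart dnsmasq
```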
An alternative to relying on the host’s DNS server is to use one in the network. For example, 8.8.8.8 is a
publicly available DNS server. You can configure it using CONFIG_DNS_SERVER1 option.
Network connection between two QEMU VMs Unlike the VM-to-Host setup described above, VM-to-
VM setup is automatic. For sample applications that support this mode (such as the echo_server and
echo_client samples), you will need two terminal windows, set up for Zephyr development.
Terminal #1:
west build -b qemu_x86 samples/net/sockets/echo_server
This will start QEMU, waiting for a connection from a client QEMU.
Terminal #2:
west build -b qemu_x86 samples/net/sockets/echo_client
This will start a second QEMU instance, where you should see logging of data sent and received in both.
Running multiple QEMU VMs of the same sample If you find yourself wanting to run multiple
instances of the same Zephyr sample application, which do not need to talk to each other, use the
QEMU_INSTANCE argument.
Start socat and tunslip6 manually (instead of using the loop-xxx.sh scripts) for as many instances as
you want. Use the following as a guide, replacing MAIN or OTHER.
Terminal #1:
socat PTY,link=/tmp/slip.devMAIN UNIX-LISTEN:/tmp/slip.sockMAIN
$ZEPHYR_BASE/../net-tools/tunslip6 -t tapMAIN -T -s /tmp/slip.devMAIN \
2001:db8::1/64
# Now run Zephyr
make -Cbuild run QEMU_INSTANCE=MAIN
Terminal #2:
socat PTY,link=/tmp/slip.devOTHER UNIX-LISTEN:/tmp/slip.sockOTHER
$ZEPHYR_BASE/../net-tools/tunslip6 -t tapOTHER -T -s /tmp/slip.devOTHER \
2001:db8::1/64
make -Cbuild run QEMU_INSTANCE=OTHER
• Basic Setup
– Choosing IP addresses
– Setting IPv4 address and routing
– Setting IPv6 address and routing
• Testing connection
This page describes how to set up networking between a Linux host and a Zephyr application running on a USB-supported device.
The board is connected to the Linux host using a USB cable and provides an Ethernet interface to the host. The sockets-echo-server-sample application from the Zephyr source distribution is run on a supported board.
Basic Setup To communicate with the Zephyr application over a newly created Ethernet interface, we
need to assign IP addresses and set up a routing table for the Linux host. After plugging a USB cable
from the board to the Linux host, the cdc_ether driver registers a new Ethernet device with a provided
MAC address.
You can check that the network device is created and the MAC address assigned by running dmesg on the Linux host.
Choosing IP addresses To establish a network connection to the board, we need to choose an IP address for the interface on the Linux host.
It makes sense to choose addresses in the same subnet used by the Zephyr application. IP addresses are usually set in the project configuration files and may also be checked from the shell with the following commands.
Connect a serial console program (such as PuTTY) to the board, and enter this command in the Zephyr shell:
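The exact command did not survive in this copy; the net-shell command that lists network interfaces together with their assigned addresses is, assuming a current Zephyr net-shell build:

```
net iface
```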
This command shows that one IPv4 address and two IPv6 addresses have been assigned to the board.
We can use either IPv4 or IPv6 for network connection depending on the board network configuration.
The next step is to assign IP addresses to the new Linux host interface. In the following steps, enx00005e005301 is the name of the interface on this Linux system.
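The address and routing commands themselves are missing from this copy; a sketch for both IPv4 and IPv6, assuming the 192.0.2.0/24 and 2001:db8::/64 addressing shown elsewhere on this page (substitute your own interface name):

```sh
# IPv4 address and route for the USB Ethernet interface
ip address add dev enx00005e005301 192.0.2.2
ip route add 192.0.2.0/24 dev enx00005e005301
# IPv6 address and route
ip -6 address add 2001:db8::2/64 dev enx00005e005301
ip -6 route add 2001:db8::/64 dev enx00005e005301
```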
Testing connection From the host we can test the connection by pinging Zephyr IP address of the
board with:
$ ping 192.0.2.1
PING 192.0.2.1 (192.0.2.1) 56(84) bytes of data.
64 bytes from 192.0.2.1: icmp_seq=1 ttl=64 time=2.30 ms
64 bytes from 192.0.2.1: icmp_seq=2 ttl=64 time=1.43 ms
64 bytes from 192.0.2.1: icmp_seq=3 ttl=64 time=2.45 ms
...
• Introduction
• Using SLIRP with Zephyr
• Limitations
This page is intended to serve as a starting point for anyone interested in using QEMU SLIRP with Zephyr.
Introduction SLIRP is a network backend which provides a complete TCP/IP stack within QEMU and uses that stack to implement a virtual NAT'd network. As there are no dependencies on the host, SLIRP is simple to set up.
By default, QEMU uses the 10.0.2.X/24 network and runs a gateway at 10.0.2.2. All traffic intended
for the host network has to travel through this gateway, which will filter out packets based on the QEMU
command line parameters. This gateway also functions as a DHCP server for all guest operating systems, allowing them to be automatically assigned an IP address starting from 10.0.2.15.
More details about User Networking can be obtained from here: https://wiki.qemu.org/Documentation/
Networking#User_Networking_.28SLIRP.29
Using SLIRP with Zephyr In order to use SLIRP with Zephyr, the user has to set the Kconfig option to
enable User Networking.
CONFIG_NET_QEMU_USER=y
Once this configuration option is enabled, all QEMU launches will use SLIRP. In the default configuration,
Zephyr only enables User Networking, and does not pass any arguments to it. This means that the Guest
will only be able to communicate to the QEMU gateway, and any data intended for the host machine will
be dropped by QEMU.
In general, QEMU User Networking can take a number of arguments, including:
• Information about host/guest port forwarding. This must be provided to create a communication channel between the guest and host.
• Information about the network to use. This may be valuable if the user does not want to use the default 10.0.2.X network.
• An instruction to start the DHCP server at a user-defined IP address.
• An ID and other information.
As this information varies with every use case, it is difficult to come up with good defaults that work for all. Therefore, the Zephyr implementation offloads this to the user, who is expected to provide arguments based on requirements. For this, there is a Kconfig string which can be populated by the user.
CONFIG_NET_QEMU_USER_EXTRA_ARGS="net=192.168.0.0/24,hostfwd=tcp::8080-:8080"
This option is appended as-is to the QEMU command line. Therefore, any problems with this command line will be reported by QEMU only. Here is what this particular example does:
• Make QEMU use the 192.168.0.0/24 network instead of the default.
• Enable forwarding of any TCP data received from port 8080 of host to port 8080 of guest, and vice
versa.
Limitations If the user does not have any specific networking requirements other than the ability to access a web page from the guest, user networking (SLIRP) is a good choice. However, it has several limitations:
• There is a lot of overhead so the performance is poor.
• The guest is not directly accessible from the host or the external network.
• In general, ICMP traffic does not work (so you cannot use ping within a guest).
• As port mappings need to be defined before launching QEMU, clients which use dynamically generated ports cannot communicate with the external network.
• There is a bug in the SLIRP implementation which filters out all IPv6 packets from the guest. See
https://bugs.launchpad.net/qemu/+bug/1724590 for details. Therefore, IPv6 will not work with
User Networking.
• Prerequisites
• Basic Setup
– Step 1 - Create configuration files
– Step 2 - Create Ethernet interfaces
– Step 3 - Setup network bridging
This page describes how to set up a virtual network between multiple Zephyr instances. The Zephyr
instances could be running inside QEMU or could be native_posix board processes. The Linux host can
be used to route network traffic between these systems.
Prerequisites On the Linux Host, fetch the Zephyr net-tools project, which is located in a separate
Git repository:
Basic Setup For the steps below, you will need five terminal windows:
• Terminals #1 and #2 are terminal windows with net-tools as the current directory (cd net-tools)
• Terminal #3 is where you set up bridging on the Linux host
• Terminals #4 and #5 are your usual Zephyr development terminals, with the Zephyr environment initialized.
As there are multiple ways to set up the Zephyr network, the example below uses the qemu_x86 board with the e1000 Ethernet controller and the native_posix board to simplify the setup instructions. You can use other QEMU boards and drivers if needed; see Networking with QEMU Ethernet for details. You can also use two or more native_posix Zephyr instances and connect them together.
Step 1 - Create configuration files Before starting QEMU with network connectivity, a network interface for each Zephyr instance should be created in the host system. The default setup for creating a network interface cannot be used here, as that is for connecting one Zephyr instance to the Linux host.
For Zephyr instance #1, create a file called zephyr1.conf in the net-tools project, or in some other suitable directory.
For Zephyr instance #2, create a file called zephyr2.conf in the net-tools project, or in some other suitable directory.
Step 2 - Create Ethernet interfaces The following net-setup.sh commands should be typed in the net-tools directory (cd net-tools).
In terminal #1, type:
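The net-setup.sh invocations are missing here; a plausible form, assuming -c selects the configuration file (as used elsewhere in these pages) and -i names the interface to create (check ./net-setup.sh --help for the exact option names):

```sh
# Terminal #1: create the first interface from zephyr1.conf
./net-setup.sh -c zephyr1.conf -i zeth.1
# Terminal #2: create the second interface from zephyr2.conf
./net-setup.sh -c zephyr2.conf -i zeth.2
```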
Step 4 - Start Zephyr instances In this example we start sockets-echo-server-sample and sockets-echo-
client-sample applications. You can use other applications too as needed.
In terminal #4, if you are using QEMU, type this:
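The build commands are missing from this copy; a hedged sketch, assuming the echo_server sample and an overlay file that enables the e1000 driver (the overlay-e1000.conf name is an assumption, not a preserved value):

```sh
# Build echo_server for qemu_x86 with the e1000 driver enabled, then run it
cd $ZEPHYR_BASE
west build -b qemu_x86 samples/net/sockets/echo_server -- \
    -DEXTRA_CONF_FILE=overlay-e1000.conf
west build -t run
```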
Also, if you have a firewall enabled on your host, you need to allow traffic between the zeth.1, zeth.2 and zeth-br interfaces.
• Basic Setup
– Step 1 - Compile and start echo-server
– Step 2 - Compile and start echo-client
This page describes how to set up a virtual network between two QEMUs that are connected together via UART and run the IEEE 802.15.4 link layer between them. Note that this only works on a Linux host.
Basic Setup For the steps below, you will need two terminal windows:
• Terminal #1 is a terminal window with the echo-server Zephyr sample application.
• Terminal #2 is a terminal window with the echo-client Zephyr sample application.
If you want to capture the transferred network data, you must compile the monitor_15_4 program in the net-tools directory.
Open a terminal window and type:
cd $ZEPHYR_BASE/../net-tools
make monitor_15_4
If you want to capture the network traffic between the two QEMUs, type:
Note that make must be used for the server target if the packet capture option is set on the command line. The build/server/capture.pcap file will contain the transferred data.
You should see data passed between the two QEMUs. Exit QEMU by pressing CTRL+A x.
• Introduction
• Using Arm FVP User Mode Networking with Zephyr
• Limitations
This page is intended to serve as a starting point for anyone interested in using Arm FVP user mode
networking with Zephyr.
Introduction User mode networking emulates a built-in IP router and DHCP server, and routes TCP and
UDP traffic between the guest and host. It uses the user mode socket layer of the host to communicate
with other hosts. This allows the use of a significant number of IP network services without requiring
administrative privileges, or the installation of a separate driver on the host on which the model is
running.
By default, Arm FVP uses the 172.20.51.0/24 network and runs a gateway at 172.20.51.254. This gateway also functions as a DHCP server for the guest operating system, allowing it to be automatically assigned the IP address 172.20.51.1.
More details about Arm FVP user mode networking can be obtained from here: https://developer.arm.
com/documentation/100964/latest/Introduction-to-Fast-Models/User-mode-networking
Using Arm FVP User Mode Networking with Zephyr Arm FVP user mode networking can be enabled in any application and does not need any configuration on the host system. This feature has been enabled in the DHCPv4 client sample. See Sample DHCPv4 client application.
Limitations
• You can use TCP and UDP over IP, but not ICMP (ping).
• User mode networking does not support forwarding UDP ports on the host to the model.
• You can only use DHCP within the private network.
• You can only make inward connections by mapping TCP ports on the host to the model. This is
common to all implementations that provide host connectivity using NAT.
• Operations that require privileged source ports, for example NFS in its default configuration, do
not work.
• Host Configuration
• Zephyr Configuration
• Wireshark Configuration
It is useful to be able to monitor network traffic, especially when debugging a connectivity issue or when developing new protocol support in Zephyr. This page describes how to set up a way to capture network traffic so that the user is able to use Wireshark or a similar tool on a remote host to see the network packets sent or received by a Zephyr device.
See also the net-capture-sample sample application from the Zephyr source distribution for configuration
options that need to be enabled.
Host Configuration
The instructions here describe how to set up a Linux host to capture Zephyr network RX and TX traffic. Similar instructions should also work on other operating systems. On the Linux Host, fetch the Zephyr net-tools project, which is located in a separate Git repository:
The net-tools project provides a configuration file to set up an IP-to-IP tunnel interface so that we can transfer monitoring data from Zephyr to the host.
In terminal #1, type:
./net-setup.sh -c zeth-tunnel.conf
Zephyr will send captured network packets to one of these interfaces. The actual interface will depend
on how the capturing is configured. You can then use Wireshark to monitor the proper network interface.
After the tunneling interfaces have been created, you can use, for example, the net-capture.py script from the net-tools project to print or save the captured network packets. The net-capture.py script provides a UDP listener; it can print the captured data to the screen and optionally save the data to a pcap file. The script describes itself as follows:
Listen captured network data from Zephyr and save it optionally to pcap file.
./net-capture.py \
-i | --interface <network interface>
Instead of the net-capture.py script, you can, for example, use netcat to provide a UDP listener so that the host will not send a port-unreachable message to Zephyr:
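The listener command was lost here; a minimal example, assuming the peer address 2001:db8:200::2 and the default UDP port 4242 described elsewhere on this page:

```sh
# Listen for the captured packets so the host does not reply with
# ICMP port unreachable
nc -l -u 2001:db8:200::2 4242
```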
The IP address above is the inner tunnel endpoint; it can be changed and depends on how Zephyr is configured. Zephyr will send UDP packets containing the captured network packets to the configured IP tunnel, so we need a listener like this to terminate the network connection.
Zephyr Configuration
In this example, we use the native_posix board. You can also use any other board that supports networking.
In terminal #3, type:
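The build command is missing in this copy; a typical invocation, assuming the capture sample lives at samples/net/capture (the path referenced below for its prj.conf):

```sh
# Build the network capture sample for the native_posix board
cd $ZEPHYR_BASE
west build -b native_posix samples/net/capture
```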
To see the Zephyr console and shell, start the Zephyr instance like this:
build/zephyr/zephyr.exe -attach_uart
Any other application can be used too; just make sure that suitable configuration options are enabled (see the samples/net/capture/prj.conf file for examples).
The network capture can be configured automatically if needed, but currently the capture sample application does not do that. The user has to use net-shell to set up and enable the monitoring.
The network packet monitoring needs to be set up first. The net-shell has a net capture setup command for doing that. The command syntax is:
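The syntax block itself is missing from this copy; based on the parameter description that follows, it has roughly this shape (placeholder names taken from the surrounding text, and the concrete values match the addresses discussed below):

```
net capture setup <remote-ip-addr> <local-addr> <peer-addr>
net capture setup 192.0.2.2 2001:db8:200::1 2001:db8:200::2
```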
This command will create the tunneling interface. The 192.0.2.2 address is the remote host where the tunnel is terminated; this address is used to select the local network interface to which the tunneling interface is attached. The 2001:db8:200::1 address sets the local IP address for the tunnel, and 2001:db8:200::2 is the peer IP address where the captured network packets are sent. The port numbers for the UDP packets can be given in the setup command, like this for an IPv6-over-IPv4 tunnel:
If the port number is omitted, then 4242 UDP port is used as a default.
The current monitoring configuration can be checked like this:
which will print the current configuration. As we have not yet enabled monitoring, the Capture iface
is not set.
Then we need to enable the network packet monitoring like this:
The 2 tells which network interface's traffic we want to capture. In this example, 2 is the native_posix board's Ethernet interface. Note that in this example we send the captured traffic to the same interface that we are monitoring. The monitoring system avoids capturing already-captured network traffic, as that would lead to recursion. You can use the net iface command to see what network interfaces are available. Note that you cannot capture traffic from the tunnel interface, as that would cause a recursion loop. The captured network traffic can be sent to some other network interface if configured so: just set the <remote-ip-addr> option properly in net capture setup so that the IP tunnel is attached to the desired network interface. The capture status can be checked again like this:
After enabling the monitoring, the system will send captured (either received or sent) network packets
to the tunnel interface for further processing.
The monitoring can be disabled like this:
which will turn the currently running monitoring off. The monitoring setup can be cleared like this:
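Putting the net-shell steps above together, a sketch of a typical session (the command spellings are assumed from the descriptions on this page, not preserved output):

```
net capture setup 192.0.2.2 2001:db8:200::1 2001:db8:200::2
net capture              (show the current configuration and status)
net capture enable 2     (start capturing traffic from interface 2)
net capture disable      (stop the currently running monitoring)
```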
It is not necessary to use net-shell for configuring the monitoring. The network capture API functions
can be called by the application if needed.
Wireshark Configuration
The Wireshark tool can be used to monitor the captured network traffic in a useful way.
You can monitor either the tunnel interfaces or the zeth interface. In order to see the actual captured data inside a UDP packet, see the Wireshark decapsulate UDP document for instructions.
Network APIs
BSD Sockets
• Overview
• Secure Sockets
– TLS credentials subsystem
– Secure Socket Creation
– Secure Sockets options
• Socket offloading
– Offloaded socket creation
– Dealing with multiple offloaded interfaces
• API Reference
– BSD Sockets
– TLS Credentials
Overview Zephyr offers an implementation of a subset of the BSD Sockets API (a part of the POSIX standard). This API allows you to reuse existing programming experience and port existing simple networking applications to Zephyr.
Here are the key requirements and concepts which governed BSD Sockets compatible API implementa-
tion for Zephyr:
• Has minimal overhead, similar to the requirement for other Zephyr subsystems.
• Is namespaced by default, to avoid name conflicts with well-known names like close(),
which may be part of libc or other POSIX compatibility libraries. If enabled by
CONFIG_NET_SOCKETS_POSIX_NAMES, it will also expose native POSIX names.
BSD Sockets compatible API is enabled using CONFIG_NET_SOCKETS config option and implements the
following operations: socket(), close(), recv(), recvfrom(), send(), sendto(), connect(), bind(),
listen(), accept(), fcntl() (to set non-blocking mode), getsockopt(), setsockopt(), poll(),
select(), getaddrinfo(), getnameinfo().
Based on the namespacing requirements above, these operations are by default exposed as func-
tions with zsock_ prefix, e.g. zsock_socket() and zsock_close() . If the config option
CONFIG_NET_SOCKETS_POSIX_NAMES is defined, all the functions will be also exposed as aliases with-
out the prefix. This includes the functions like close() and fcntl() (which may conflict with functions
in libc or other libraries, for example, with the filesystem libraries).
Another consequence of the design requirements above is that the Zephyr API aggressively employs the short-read/short-write property of the POSIX API whenever possible (to minimize complexity and overhead). POSIX allows calls like recv() and send() to actually process (receive or send) less data
than requested by the user (on SOCK_STREAM type sockets). For example, a call recv(sock, 1000, 0)
may return 100, meaning that only 100 bytes were read (short read), and the application needs to retry
call(s) to receive the remaining 900 bytes.
The BSD Sockets API uses file descriptors to represent sockets. File descriptors are small integers, consec-
utively assigned from zero, shared among sockets, files, special devices (like stdin/stdout), etc. Internally,
there is a table mapping file descriptors to internal object pointers. The file descriptor table is used by
the BSD Sockets API even if the rest of the POSIX subsystem (filesystem, stdin/stdout) is not enabled.
Secure Sockets Zephyr provides an extension of the standard POSIX socket API, allowing you to create and configure sockets with TLS protocol types, facilitating secure communication. Secure functions for the implementation are provided by the mbedTLS library. The secure sockets implementation allows use of both TLS and DTLS protocols with standard socket calls. See the net_ip_protocol_secure type for supported secure protocol versions.
To enable secure sockets, set the CONFIG_NET_SOCKETS_SOCKOPT_TLS option. To enable DTLS support,
use CONFIG_NET_SOCKETS_ENABLE_DTLS option.
TLS credentials subsystem TLS credentials must be registered in the system before they can be used
with secure sockets. See tls_credential_add() for more information.
When a specific TLS credential is registered in the system, it is assigned a numeric value of type sec_tag_t, called a tag. This value can be used later to reference the credential during secure socket configuration with socket options.
The following TLS credential types can be registered in the system:
• TLS_CREDENTIAL_CA_CERTIFICATE
• TLS_CREDENTIAL_SERVER_CERTIFICATE
• TLS_CREDENTIAL_PRIVATE_KEY
• TLS_CREDENTIAL_PSK
• TLS_CREDENTIAL_PSK_ID
An example registration of CA certificate (provided in ca_certificate array) looks like this:
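The code block for this example did not survive in this copy of the page; a minimal sketch, assuming CA_CERTIFICATE_TAG is an application-chosen sec_tag_t value and ca_certificate is the DER-encoded certificate array mentioned above:

```c
#include <zephyr/net/tls_credentials.h>

/* CA_CERTIFICATE_TAG and ca_certificate are application-defined;
 * tls_credential_add() is the Zephyr TLS credentials API.
 */
ret = tls_credential_add(CA_CERTIFICATE_TAG,
                         TLS_CREDENTIAL_CA_CERTIFICATE,
                         ca_certificate, sizeof(ca_certificate));
if (ret < 0) {
        /* Credential registration failed */
}
```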
By default certificates in DER format are supported. PEM support can be enabled in mbedTLS settings.
Secure Socket Creation A secure socket can be created by specifying a secure protocol type, for instance:
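The snippet itself is missing here; a minimal sketch, using IPPROTO_TLS_1_2 as one of the net_ip_protocol_secure protocol values:

```c
/* Create a TCP socket that performs TLS 1.2 on top of the connection */
sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TLS_1_2);
```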
Once created, it can be configured with socket options. For instance, the CA certificate and hostname
can be set:
sec_tag_t sec_tag_opt[] = {
CA_CERTIFICATE_TAG,
};
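The setsockopt() calls that originally followed this array were lost; a sketch, assuming TLS_PEER_HOSTNAME is an application-defined hostname string macro (an assumption, not part of the Zephyr API):

```c
/* Select the registered credentials and set the hostname to verify */
setsockopt(sock, SOL_TLS, TLS_SEC_TAG_LIST,
           sec_tag_opt, sizeof(sec_tag_opt));
setsockopt(sock, SOL_TLS, TLS_HOSTNAME,
           TLS_PEER_HOSTNAME, sizeof(TLS_PEER_HOSTNAME));
```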
Once configured, the socket can be used just like a regular TCP socket.
Several samples in Zephyr use secure sockets for communication. For a sample use, see e.g. the echo-server sample application or the HTTP GET sample application.
Secure Sockets options Secure sockets offer the following options for socket management:
group secure_sockets_options
Defines
TLS_SEC_TAG_LIST
Socket option to select TLS credentials to use. It accepts and returns an array of sec_tag_t that indicate which TLS credentials should be used with a specific socket.
TLS_HOSTNAME
Write-only socket option to set hostname. It accepts a string containing the hostname (may
be NULL to disable hostname verification). By default, hostname check is enforced for TLS
clients.
TLS_CIPHERSUITE_LIST
Socket option to select ciphersuites to use. It accepts and returns an array of integers with
IANA assigned ciphersuite identifiers. If not set, socket will allow all ciphersuites available in
the system (mbedTLS default behavior).
TLS_CIPHERSUITE_USED
Read-only socket option to read a ciphersuite chosen during TLS handshake. It returns an
integer containing an IANA assigned ciphersuite identifier of chosen ciphersuite.
TLS_PEER_VERIFY
Write-only socket option to set peer verification level for TLS connection. This option accepts
an integer with a peer verification level, compatible with mbedTLS values:
• 0 - none
• 1 - optional
• 2 - required
If not set, socket will use mbedTLS defaults (none for servers, required for clients).
TLS_DTLS_ROLE
Write-only socket option to set role for DTLS connection. This option is irrelevant for TLS
connections, as for them role is selected based on connect()/listen() usage. By default, DTLS
will assume client role. This option accepts an integer with a TLS role, compatible with
mbedTLS values:
• 0 - client
• 1 - server
TLS_ALPN_LIST
Socket option for setting the supported Application Layer Protocols. It accepts and returns
a const char array of NULL terminated strings representing the supported application layer
protocols listed during the TLS handshake.
TLS_DTLS_HANDSHAKE_TIMEOUT_MIN
Socket option to set the DTLS handshake timeout. The timeout starts at min, and upon retransmission the timeout is doubled until max is reached. The min and max arguments are separate options. The time unit is ms.
TLS_DTLS_HANDSHAKE_TIMEOUT_MAX
TLS_CERT_NOCOPY
Socket option for preventing certificates from being copied to the mbedTLS heap if possible.
The option is only effective for DER certificates and is ignored for PEM certificates.
TLS_NATIVE
TLS socket option to use with offloading. The option instructs the network stack only to
offload underlying TCP/UDP communication. The TLS/DTLS operation is handled by a native
TLS/DTLS socket implementation from Zephyr.
Note that this option is only applicable if the socket dispatcher is used (CONFIG_NET_SOCKETS_OFFLOAD_DISPATCHER is enabled). In such a case, it should be the
first socket option set on a newly created socket. After that, the application may use
SO_BINDTODEVICE to choose the dedicated network interface for the underlying TCP/UDP
socket.
TLS_SESSION_CACHE
Socket option to control TLS session caching on a socket. Accepted values:
• 0 - Disabled.
• 1 - Enabled.
TLS_SESSION_CACHE_PURGE
Write-only socket option to purge session cache immediately. This option accepts any value.
Socket offloading Zephyr allows you to register custom socket implementations (called offloaded sockets). This allows seamless integration for devices which provide an external IP stack and expose a socket-like API.
Socket offloading can be enabled with CONFIG_NET_SOCKETS_OFFLOAD option. A network driver that
wants to register a new socket implementation should use NET_SOCKET_OFFLOAD_REGISTER macro. The
macro accepts the following parameters:
• socket_name - an arbitrary name for the socket implementation.
• prio - the socket implementation priority; the higher the priority, the earlier a particular implementation is processed when creating a new socket. Lower numeric values indicate higher priority.
• _family - the socket family implemented by the offloaded socket. AF_UNSPEC indicates any family.
• _is_supported - a filtering function, used to verify whether a particular socket family, type and protocol are supported by the offloaded socket implementation.
• _handler - a function compatible with the socket() API, used to create an offloaded socket.
Every offloaded socket implementation should also implement a set of socket APIs, specified in
socket_op_vtable struct.
The function registered for socket creation should allocate a new file descriptor using the z_reserve_fd()
function. Any additional actions specific to the creation of a particular offloaded socket implementation
should take place after the file descriptor is allocated. As a final step, if the offloaded socket was created
successfully, the file descriptor should be finalized with the z_finalize_fd() function. The finalize function
allows registering a socket_op_vtable structure implementing socket APIs for an offloaded socket, along
with an optional socket context data pointer.
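Putting the registration and creation steps together, a driver-side sketch could look as follows. All `my_*` names are hypothetical driver code, not part of the Zephyr API; CONFIG_NET_SOCKETS_OFFLOAD_PRIORITY is used here as the priority value:
```c
/* Sketch of an offloaded socket registration. All `my_*` names are
 * hypothetical driver code, not part of the Zephyr API. */
static const struct socket_op_vtable my_socket_op_vtable; /* filled with the
                                                           * driver's socket
                                                           * API functions */

static bool my_socket_is_supported(int family, int type, int proto)
{
        /* This example driver only handles UDP and TCP over IPv4. */
        return family == AF_INET &&
               (type == SOCK_DGRAM || type == SOCK_STREAM);
}

static int my_socket_create(int family, int type, int proto)
{
        int fd = z_reserve_fd();

        if (fd < 0) {
                return -1;
        }

        /* ... driver-specific socket setup goes here ... */

        z_finalize_fd(fd, NULL /* optional context pointer */,
                      (const struct fd_op_vtable *)&my_socket_op_vtable);

        return fd;
}

NET_SOCKET_OFFLOAD_REGISTER(my_offload, CONFIG_NET_SOCKETS_OFFLOAD_PRIORITY,
                            AF_INET, my_socket_is_supported, my_socket_create);
```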
Finally, when an offloaded network interface is initialized, it should indicate that the interface is
offloaded with the net_if_socket_offload_set() function. The function registers the function used to
create an offloaded socket (the same as the one provided in NET_SOCKET_OFFLOAD_REGISTER) at the
network interface.
Offloaded socket creation When the application creates a new socket with the socket() function, the net-
work stack iterates over all registered socket implementations (native and offloaded). Higher prior-
ity socket implementations are processed first. For each registered socket implementation, the address
family is verified, and if it matches (or the socket was registered as AF_UNSPEC), the corresponding
_is_supported function is called to verify the remaining socket parameters. The first implementation
that fulfills the socket requirements (i.e. _is_supported returns true) will create a new socket with its
_handler function.
The above shows the importance of socket priority. If multiple socket implementations support
the same combination of socket family/type/protocol, the first implementation processed by the system will create
the socket. Therefore, it's important to give the highest priority to the implementation that should be the
system default.
The socket priority for native socket implementations is configured with Kconfig. Use
CONFIG_NET_SOCKETS_TLS_PRIORITY to set the priority for native TLS sockets, and
CONFIG_NET_SOCKETS_PRIORITY_DEFAULT to set the priority for the remaining native sockets.
Dealing with multiple offloaded interfaces As the socket() function does not allow specifying which
network interface a socket should use, it's not possible to choose a specific implementation when
multiple offloaded socket implementations supporting the same type of sockets are available. The same
problem arises when both native and offloaded sockets are available in the system.
To address this problem, a special socket implementation (called the socket dispatcher) was introduced. The
sole purpose of this module is to postpone socket creation until the first operation on the socket
is performed. This makes it possible to use the SO_BINDTODEVICE socket option to bind a socket to a
particular network interface (and thus to an offloaded socket implementation). The socket dispatcher can be
enabled with the CONFIG_NET_SOCKETS_OFFLOAD_DISPATCHER Kconfig option.
When enabled, the application can specify the network interface to use with the setsockopt() function:
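A minimal sketch of this pattern follows; the interface name "net0" is an arbitrary example:
```c
/* Sketch: bind a socket to a specific network interface before the
 * first socket operation. The interface name is an arbitrary example. */
struct ifreq ifreq = { .ifr_name = "net0" };

int sock = zsock_socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

if (zsock_setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE,
                     &ifreq, sizeof(ifreq)) < 0) {
        /* no interface with that name, or option not supported */
}
```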
Similarly, if TLS is supported by both native and offloaded sockets, the TLS_NATIVE socket option can be
used to indicate that a native TLS socket should be created. The underlying socket can then be bound to
a particular network interface:
int tls_native = 1;
setsockopt(sock, SOL_TLS, TLS_NATIVE, &tls_native, sizeof(tls_native));
If the SO_BINDTODEVICE socket option is not used on a socket, the socket will be dispatched according
to the default priority and filtering rules on the first socket API call.
API Reference
BSD Sockets
group bsd_sockets
BSD Sockets compatible API.
Defines
ZSOCK_POLLIN
zsock_poll: Poll for readability
ZSOCK_POLLPRI
zsock_poll: Compatibility value, ignored
ZSOCK_POLLOUT
zsock_poll: Poll for writability
ZSOCK_POLLERR
zsock_poll: Poll results in error condition (output value only)
ZSOCK_POLLHUP
zsock_poll: Poll detected closed connection (output value only)
ZSOCK_POLLNVAL
zsock_poll: Invalid socket (output value only)
ZSOCK_MSG_PEEK
zsock_recv: Read data without removing it from socket input queue
ZSOCK_MSG_TRUNC
zsock_recv: return the real length of the datagram, even when it was longer than the passed
buffer
ZSOCK_MSG_DONTWAIT
zsock_recv/zsock_send: Override operation to non-blocking
ZSOCK_MSG_WAITALL
zsock_recv: block until the full amount of data can be returned
ZSOCK_SHUT_RD
zsock_shutdown: Shut down for reading
ZSOCK_SHUT_WR
zsock_shutdown: Shut down for writing
ZSOCK_SHUT_RDWR
zsock_shutdown: Shut down for both reading and writing
SOL_TLS
Protocol level for TLS. Here, the same socket protocol level for TLS as in Linux was used.
TLS_PEER_VERIFY_NONE
Peer verification disabled.
TLS_PEER_VERIFY_OPTIONAL
Peer verification optional.
TLS_PEER_VERIFY_REQUIRED
Peer verification required.
TLS_DTLS_ROLE_CLIENT
Client role in a DTLS session.
TLS_DTLS_ROLE_SERVER
Server role in a DTLS session.
TLS_CERT_NOCOPY_NONE
Cert duplicated in heap
TLS_CERT_NOCOPY_OPTIONAL
Cert not copied in heap if DER
TLS_SESSION_CACHE_DISABLED
Disable TLS session caching.
TLS_SESSION_CACHE_ENABLED
Enable TLS session caching.
AI_PASSIVE
Address for bind() (vs for connect())
AI_CANONNAME
Fill in ai_canonname
AI_NUMERICHOST
Assume host address is in numeric notation, don’t DNS lookup
AI_V4MAPPED
May return IPv4 mapped address for IPv6
AI_ALL
May return both native IPv6 and mapped IPv4 address for IPv6
AI_ADDRCONFIG
IPv4/IPv6 support depends on local system config
AI_NUMERICSERV
Assume service (port) is numeric
NI_NUMERICHOST
zsock_getnameinfo(): Resolve to numeric address.
NI_NUMERICSERV
zsock_getnameinfo(): Resolve to numeric port number.
NI_NOFQDN
zsock_getnameinfo(): Return only hostname instead of FQDN
NI_NAMEREQD
zsock_getnameinfo(): Dummy option for compatibility
NI_DGRAM
zsock_getnameinfo(): Dummy option for compatibility
NI_MAXHOST
zsock_getnameinfo(): Max supported hostname length
pollfd
fcntl
addrinfo
POLLIN
POSIX wrapper for ZSOCK_POLLIN
POLLOUT
POSIX wrapper for ZSOCK_POLLOUT
POLLERR
POSIX wrapper for ZSOCK_POLLERR
POLLHUP
POSIX wrapper for ZSOCK_POLLHUP
POLLNVAL
POSIX wrapper for ZSOCK_POLLNVAL
MSG_PEEK
POSIX wrapper for ZSOCK_MSG_PEEK
MSG_TRUNC
POSIX wrapper for ZSOCK_MSG_TRUNC
MSG_DONTWAIT
POSIX wrapper for ZSOCK_MSG_DONTWAIT
MSG_WAITALL
POSIX wrapper for ZSOCK_MSG_WAITALL
SHUT_RD
POSIX wrapper for ZSOCK_SHUT_RD
SHUT_WR
POSIX wrapper for ZSOCK_SHUT_WR
SHUT_RDWR
POSIX wrapper for ZSOCK_SHUT_RDWR
EAI_BADFLAGS
POSIX wrapper for DNS_EAI_BADFLAGS
EAI_NONAME
POSIX wrapper for DNS_EAI_NONAME
EAI_AGAIN
POSIX wrapper for DNS_EAI_AGAIN
EAI_FAIL
POSIX wrapper for DNS_EAI_FAIL
EAI_NODATA
POSIX wrapper for DNS_EAI_NODATA
EAI_MEMORY
POSIX wrapper for DNS_EAI_MEMORY
EAI_SYSTEM
POSIX wrapper for DNS_EAI_SYSTEM
EAI_SERVICE
POSIX wrapper for DNS_EAI_SERVICE
EAI_SOCKTYPE
POSIX wrapper for DNS_EAI_SOCKTYPE
EAI_FAMILY
POSIX wrapper for DNS_EAI_FAMILY
IFNAMSIZ
SOL_SOCKET
sockopt: Socket-level option
SO_DEBUG
sockopt: Recording debugging information (ignored, for compatibility)
SO_REUSEADDR
sockopt: address reuse (ignored, for compatibility)
SO_TYPE
sockopt: Type of the socket
SO_ERROR
sockopt: Async error (ignored, for compatibility)
SO_DONTROUTE
sockopt: Bypass normal routing and send directly to host (ignored, for compatibility)
SO_BROADCAST
sockopt: Transmission of broadcast messages is supported (ignored, for compatibility)
SO_SNDBUF
sockopt: Size of socket send buffer (ignored, for compatibility)
SO_RCVBUF
sockopt: Size of socket recv buffer
SO_KEEPALIVE
sockopt: Enable sending keep-alive messages on connections (ignored, for compatibility)
SO_OOBINLINE
sockopt: Place out-of-band data into receive stream (ignored, for compatibility)
SO_LINGER
sockopt: Socket lingers on close (ignored, for compatibility)
SO_REUSEPORT
sockopt: Allow multiple sockets to reuse a single port (ignored, for compatibility)
SO_RCVLOWAT
sockopt: Receive low watermark (ignored, for compatibility)
SO_SNDLOWAT
sockopt: Send low watermark (ignored, for compatibility)
SO_RCVTIMEO
sockopt: Receive timeout. Applies to receive functions like recv(), but not to connect()
SO_SNDTIMEO
sockopt: Send timeout
SO_BINDTODEVICE
sockopt: Bind a socket to an interface
SO_ACCEPTCONN
sockopt: Socket accepts incoming connections (ignored, for compatibility)
SO_TIMESTAMPING
sockopt: Timestamp TX packets
SO_PROTOCOL
sockopt: Protocol used with the socket
SO_DOMAIN
sockopt: Domain used with SOCKET (ignored, for compatibility)
TCP_NODELAY
sockopt: Disable TCP buffering (ignored, for compatibility)
IP_TOS
sockopt: Set or receive the Type-Of-Service value for an outgoing packet.
IPV6_V6ONLY
sockopt: Don’t support IPv4 access (ignored, for compatibility)
IPV6_TCLASS
sockopt: Set or receive the traffic class value for an outgoing packet.
SO_PRIORITY
sockopt: Socket priority
SO_TXTIME
sockopt: Socket TX time (when the data should be sent)
SCM_TXTIME
SO_SOCKS5
sockopt: Enable SOCKS5 for Socket
SOMAXCONN
listen: The maximum backlog queue length (ignored, for compatibility)
ZSOCK_FD_SETSIZE
Number of file descriptors which can be added to zsock_fd_set
fd_set
FD_SETSIZE
zsock_timeval
Typedefs
Functions
static inline ssize_t zsock_send(int sock, const void *buf, size_t len, int flags)
Send data to a connected peer.
See POSIX.1-2017 article for normative description. This function is also exposed as send()
if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
ssize_t zsock_sendmsg(int sock, const struct msghdr *msg, int flags)
Send data to an arbitrary network address.
See POSIX.1-2017 article for normative description. This function is also exposed as
sendmsg() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
ssize_t zsock_recvfrom(int sock, void *buf, size_t max_len, int flags, struct sockaddr *src_addr,
socklen_t *addrlen)
Receive data from an arbitrary network address.
See POSIX.1-2017 article for normative description. This function is also exposed as
recvfrom() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
static inline ssize_t zsock_recv(int sock, void *buf, size_t max_len, int flags)
Receive data from a connected peer.
See POSIX.1-2017 article for normative description. This function is also exposed as recv()
if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
int zsock_fcntl(int sock, int cmd, int flags)
Control blocking/non-blocking mode of a socket.
This function only allows configuring a socket for blocking or non-blocking
operation (O_NONBLOCK). This function is also exposed as fcntl() if
CONFIG_NET_SOCKETS_POSIX_NAMES is defined (in which case it may conflict with the generic
POSIX fcntl() function).
int zsock_poll(struct zsock_pollfd *fds, int nfds, int timeout)
Efficiently poll multiple sockets for events.
See POSIX.1-2017 article for normative description. This function is also exposed as poll()
if CONFIG_NET_SOCKETS_POSIX_NAMES is defined (in which case it may conflict with generic
POSIX poll() function).
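For example, waiting up to one second for incoming data on a single socket might look like this sketch (sock is an assumed, previously created socket):
```c
/* Sketch: wait up to 1000 ms for incoming data on `sock`. */
struct zsock_pollfd fds[] = {
        { .fd = sock, .events = ZSOCK_POLLIN },
};

int ret = zsock_poll(fds, ARRAY_SIZE(fds), 1000);

if (ret > 0 && (fds[0].revents & ZSOCK_POLLIN)) {
        /* data is ready; call zsock_recv() */
} else if (ret == 0) {
        /* timeout expired with no events */
}
```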
int zsock_getsockopt(int sock, int level, int optname, void *optval, socklen_t *optlen)
Get various socket options.
See POSIX.1-2017 article for normative description. In Zephyr this function supports a subset
of socket options described by POSIX, but also some additional options available in Linux
(some options are dummy and provided to ease porting of existing code). This function is
also exposed as getsockopt() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
int zsock_setsockopt(int sock, int level, int optname, const void *optval, socklen_t optlen)
Set various socket options.
See POSIX.1-2017 article for normative description. In Zephyr this function supports a subset
of socket options described by POSIX, but also some additional options available in Linux
(some options are dummy and provided to ease porting of existing code). This function is
also exposed as setsockopt() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
int zsock_getpeername(int sock, struct sockaddr *addr, socklen_t *addrlen)
Get peer name.
See POSIX.1-2017 article for normative description. This function is also exposed as
getpeername() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
static inline int connect(int sock, const struct sockaddr *addr, socklen_t addrlen)
POSIX wrapper for zsock_connect
static inline int listen(int sock, int backlog)
POSIX wrapper for zsock_listen
static inline int accept(int sock, struct sockaddr *addr, socklen_t *addrlen)
POSIX wrapper for zsock_accept
static inline ssize_t send(int sock, const void *buf, size_t len, int flags)
POSIX wrapper for zsock_send
static inline ssize_t recv(int sock, void *buf, size_t max_len, int flags)
POSIX wrapper for zsock_recv
static inline int zsock_fcntl_wrapper(int sock, int cmd, ...)
static inline ssize_t sendto(int sock, const void *buf, size_t len, int flags, const struct sockaddr
*dest_addr, socklen_t addrlen)
POSIX wrapper for zsock_sendto
static inline ssize_t sendmsg(int sock, const struct msghdr *message, int flags)
POSIX wrapper for zsock_sendmsg
static inline ssize_t recvfrom(int sock, void *buf, size_t max_len, int flags, struct sockaddr
*src_addr, socklen_t *addrlen)
POSIX wrapper for zsock_recvfrom
static inline int poll(struct zsock_pollfd *fds, int nfds, int timeout)
POSIX wrapper for zsock_poll
static inline int getsockopt(int sock, int level, int optname, void *optval, socklen_t *optlen)
POSIX wrapper for zsock_getsockopt
static inline int setsockopt(int sock, int level, int optname, const void *optval, socklen_t optlen)
POSIX wrapper for zsock_setsockopt
static inline int getpeername(int sock, struct sockaddr *addr, socklen_t *addrlen)
POSIX wrapper for zsock_getpeername
static inline int getsockname(int sock, struct sockaddr *addr, socklen_t *addrlen)
POSIX wrapper for zsock_getsockname
static inline int getaddrinfo(const char *host, const char *service, const struct zsock_addrinfo
*hints, struct zsock_addrinfo **res)
POSIX wrapper for zsock_getaddrinfo
static inline void freeaddrinfo(struct zsock_addrinfo *ai)
POSIX wrapper for zsock_freeaddrinfo
static inline const char *gai_strerror(int errcode)
POSIX wrapper for zsock_gai_strerror
static inline int getnameinfo(const struct sockaddr *addr, socklen_t addrlen, char *host, socklen_t
hostlen, char *serv, socklen_t servlen, int flags)
POSIX wrapper for zsock_getnameinfo
static inline int gethostname(char *buf, size_t len)
POSIX wrapper for zsock_gethostname
static inline int inet_pton(sa_family_t family, const char *src, void *dst)
POSIX wrapper for zsock_inet_pton
static inline char *inet_ntop(sa_family_t family, const void *src, char *dst, size_t size)
POSIX wrapper for zsock_inet_ntop
int zsock_select(int nfds, zsock_fd_set *readfds, zsock_fd_set *writefds, zsock_fd_set *exceptfds,
struct zsock_timeval *timeout)
Legacy function to poll multiple sockets for events.
See POSIX.1-2017 article for normative description. This function is provided to ease porting
of existing code and is not recommended for new code due to its inefficiency; use zsock_poll()
instead. In Zephyr this function works only with sockets, not arbitrary file descriptors. This
function is also exposed as select() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined (in
which case it may conflict with the generic POSIX select() function).
void ZSOCK_FD_ZERO(zsock_fd_set *set)
Initialize (clear) fd_set.
See POSIX.1-2017 article for normative description. This function is also exposed as
FD_ZERO() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
int ZSOCK_FD_ISSET(int fd, zsock_fd_set *set)
Check whether socket is a member of fd_set.
See POSIX.1-2017 article for normative description. This function is also exposed as
FD_ISSET() if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
void ZSOCK_FD_CLR(int fd, zsock_fd_set *set)
Remove socket from fd_set.
See POSIX.1-2017 article for normative description. This function is also exposed as FD_CLR()
if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
void ZSOCK_FD_SET(int fd, zsock_fd_set *set)
Add socket to fd_set.
See POSIX.1-2017 article for normative description. This function is also exposed as FD_SET()
if CONFIG_NET_SOCKETS_POSIX_NAMES is defined.
static inline int select(int nfds, zsock_fd_set *readfds, zsock_fd_set *writefds, zsock_fd_set
*exceptfds, struct timeval *timeout)
struct zsock_pollfd
#include <socket.h>
struct zsock_addrinfo
#include <socket.h>
struct ifreq
#include <socket.h> Interface description structure
struct zsock_fd_set
#include <socket_select.h>
TLS Credentials
group tls_credentials
TLS credentials management.
Typedefs
Enums
enum tls_credential_type
TLS credential types
Values:
enumerator TLS_CREDENTIAL_NONE
Unspecified credential.
enumerator TLS_CREDENTIAL_CA_CERTIFICATE
A trusted CA certificate. Use this to authenticate remote servers. Used with certificate-
based ciphersuites.
enumerator TLS_CREDENTIAL_SERVER_CERTIFICATE
A public server certificate. Use this to register your own server certificate. Should be
registered together with a corresponding private key. Used with certificate-based cipher-
suites.
enumerator TLS_CREDENTIAL_PRIVATE_KEY
Private key. Should be registered together with a corresponding public certificate. Used
with certificate-based ciphersuites.
enumerator TLS_CREDENTIAL_PSK
Pre-shared key. Should be registered together with a corresponding PSK identity. Used
with PSK-based ciphersuites.
enumerator TLS_CREDENTIAL_PSK_ID
Pre-shared key identity. Should be registered together with a corresponding PSK. Used
with PSK-based ciphersuites.
Functions
int tls_credential_add(sec_tag_t tag, enum tls_credential_type type, const void *cred, size_t
credlen)
Add a TLS credential.
This function adds a TLS credential, that can be used by TLS/DTLS for authentication.
Parameters
• tag – A security tag that credential will be referenced with.
• type – A TLS/DTLS credential type.
• cred – A TLS/DTLS credential.
• credlen – A TLS/DTLS credential length.
Return values
• 0 – TLS credential successfully added.
• -EACCES – Access to the TLS credential subsystem was denied.
• -ENOMEM – Not enough memory to add new TLS credential.
• -EEXIST – TLS credential of specific tag and type already exists.
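For illustration, registering a CA certificate under an application-chosen security tag might look like the following sketch (the tag value and the ca_cert buffer are application-defined assumptions):
```c
/* Sketch: register a CA certificate for later use by TLS sockets.
 * The tag value and the ca_cert buffer are application-defined. */
#define APP_CA_TAG 1

static const unsigned char ca_cert[] = { /* DER or PEM certificate data */ };

int ret = tls_credential_add(APP_CA_TAG, TLS_CREDENTIAL_CA_CERTIFICATE,
                             ca_cert, sizeof(ca_cert));
if (ret == -EEXIST) {
        /* a credential with this tag and type is already registered */
}
```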
int tls_credential_get(sec_tag_t tag, enum tls_credential_type type, void *cred, size_t *credlen)
Get a TLS credential.
This function retrieves an already registered TLS credential, referenced by the tag security tag, of the given type.
Parameters
• tag – A security tag of requested credential.
• type – A TLS/DTLS credential type of requested credential.
• cred – A buffer for TLS/DTLS credential.
• credlen – A buffer size on input. TLS/DTLS credential length on output.
Return values
• 0 – TLS credential successfully obtained.
• -EACCES – Access to the TLS credential subsystem was denied.
• -ENOENT – Requested TLS credential was not found.
• -EFBIG – Requested TLS credential does not fit in the buffer provided.
int tls_credential_delete(sec_tag_t tag, enum tls_credential_type type)
Delete a TLS credential.
This function removes a TLS credential, referenced by the tag security tag, of the given type.
Parameters
• tag – A security tag corresponding to removed credential.
• type – A TLS/DTLS credential type of removed credential.
Return values
• 0 – TLS credential successfully deleted.
• -EACCES – Access to the TLS credential subsystem was denied.
• -ENOENT – Requested TLS credential was not found.
• Overview
• API Reference
Overview Miscellaneous defines and helper functions for IP addresses and IP protocols.
API Reference
group ip_4_6
IPv4/IPv6 primitives and helpers.
Defines
PF_UNSPEC
Unspecified protocol family.
PF_INET
IP protocol family version 4.
PF_INET6
IP protocol family version 6.
PF_PACKET
Packet family.
PF_CAN
Controller Area Network.
PF_NET_MGMT
Network management info.
PF_LOCAL
Inter-process communication
PF_UNIX
Inter-process communication
AF_UNSPEC
Unspecified address family.
AF_INET
IP protocol family version 4.
AF_INET6
IP protocol family version 6.
AF_PACKET
Packet family.
AF_CAN
Controller Area Network.
AF_NET_MGMT
Network management info.
AF_LOCAL
Inter-process communication
AF_UNIX
Inter-process communication
ntohs(x)
Convert 16-bit value from network to host byte order.
Parameters
• x – The network byte order value to convert.
Returns
Host byte order value.
ntohl(x)
Convert 32-bit value from network to host byte order.
Parameters
• x – The network byte order value to convert.
Returns
Host byte order value.
ntohll(x)
Convert 64-bit value from network to host byte order.
Parameters
• x – The network byte order value to convert.
Returns
Host byte order value.
htons(x)
Convert 16-bit value from host to network byte order.
Parameters
• x – The host byte order value to convert.
Returns
Network byte order value.
htonl(x)
Convert 32-bit value from host to network byte order.
Parameters
• x – The host byte order value to convert.
Returns
Network byte order value.
htonll(x)
Convert 64-bit value from host to network byte order.
Parameters
• x – The host byte order value to convert.
Returns
Network byte order value.
NET_IPV6_ADDR_SIZE
NET_IPV4_ADDR_SIZE
ALIGN_H(x)
ALIGN_D(x)
CMSG_FIRSTHDR(msghdr)
CMSG_NXTHDR(msghdr, cmsg)
CMSG_DATA(cmsg)
CMSG_SPACE(length)
CMSG_LEN(length)
INET_ADDRSTRLEN
Max length of the IPv4 address as a string. Defined by POSIX.
INET6_ADDRSTRLEN
Max length of the IPv6 address as a string. Takes into account possible mapped IPv4 addresses.
NET_MAX_PRIORITIES
net_ipaddr_copy(dest, src)
Copy an IPv4 or IPv6 address.
Parameters
• dest – Destination IP address.
• src – Source IP address.
Returns
Destination address.
Typedefs
Enums
enum net_ip_protocol
Protocol numbers from IANA/BSD
Values:
enumerator IPPROTO_IP = 0
IP protocol (pseudo-val for setsockopt())
enumerator IPPROTO_ICMP = 1
ICMP protocol
enumerator IPPROTO_IGMP = 2
IGMP protocol
enumerator IPPROTO_IPIP = 4
IPIP tunnels
enumerator IPPROTO_TCP = 6
TCP protocol
enumerator IPPROTO_UDP = 17
UDP protocol
enumerator IPPROTO_IPV6 = 41
IPv6 protocol
enumerator IPPROTO_ICMPV6 = 58
ICMPv6 protocol
enum net_ip_protocol_secure
Protocol numbers for TLS protocols
Values:
enum net_sock_type
Socket type
Values:
enumerator SOCK_STREAM = 1
Stream socket type
enumerator SOCK_DGRAM
Datagram socket type
enumerator SOCK_RAW
RAW socket type
enum net_ip_mtu
Values:
enum net_priority
Network packet priority settings described in IEEE 802.1Q Annex I.1
Values:
enumerator NET_PRIORITY_BK = 1
Background (lowest)
enumerator NET_PRIORITY_BE = 0
Best effort (default)
enumerator NET_PRIORITY_EE = 2
Excellent effort
enumerator NET_PRIORITY_CA = 3
Critical applications (highest)
enumerator NET_PRIORITY_VI = 4
Video, < 100 ms latency and jitter
enumerator NET_PRIORITY_VO = 5
Voice, < 10 ms latency and jitter
enumerator NET_PRIORITY_IC = 6
Internetwork control
enumerator NET_PRIORITY_NC = 7
Network control
enum net_addr_state
What is the current state of the network address
Values:
enumerator NET_ADDR_ANY_STATE = -1
Default (invalid) address type
enumerator NET_ADDR_TENTATIVE = 0
Tentative address
enumerator NET_ADDR_PREFERRED
Preferred address
enumerator NET_ADDR_DEPRECATED
Deprecated address
enum net_addr_type
How the network address is assigned to network interface
Values:
enumerator NET_ADDR_ANY = 0
Default value. This is not a valid value.
enumerator NET_ADDR_AUTOCONF
Auto configured address
enumerator NET_ADDR_DHCP
Address is from DHCP
enumerator NET_ADDR_MANUAL
Manually set address
enumerator NET_ADDR_OVERRIDABLE
Manually set address which is overridable by DHCP
Functions
Returns
True if address is a loopback address, False otherwise.
static inline bool net_ipv6_is_addr_mcast(const struct in6_addr *addr)
Check if the IPv6 address is a multicast address.
Parameters
• addr – IPv6 address
Returns
True if address is multicast address, False otherwise.
struct net_if_addr *net_if_ipv6_addr_lookup(const struct in6_addr *addr, struct net_if **iface)
Returns
True if the address is unspecified, false otherwise.
static inline bool net_ipv4_is_addr_mcast(const struct in_addr *addr)
Check if the IPv4 address is a multicast address.
Parameters
• addr – IPv4 address
Returns
True if address is multicast address, False otherwise.
static inline bool net_ipv4_is_ll_addr(const struct in_addr *addr)
Check if the given IPv4 address is a link local address.
Parameters
• addr – A valid pointer on an IPv4 address
Returns
True if it is, false otherwise.
static inline void net_ipv4_addr_copy_raw(uint8_t *dest, const uint8_t *src)
Copy an IPv4 address raw buffer.
Parameters
• dest – Destination IP address.
• src – Source IP address.
static inline void net_ipv6_addr_copy_raw(uint8_t *dest, const uint8_t *src)
Copy an IPv6 address raw buffer.
Parameters
• dest – Destination IP address.
• src – Source IP address.
static inline bool net_ipv4_addr_cmp(const struct in_addr *addr1, const struct in_addr *addr2)
Compare two IPv4 addresses.
Parameters
• addr1 – Pointer to IPv4 address.
• addr2 – Pointer to IPv4 address.
Returns
True if the addresses are the same, false otherwise.
static inline bool net_ipv4_addr_cmp_raw(const uint8_t *addr1, const uint8_t *addr2)
Compare two raw IPv4 address buffers.
Parameters
• addr1 – Pointer to IPv4 address buffer.
• addr2 – Pointer to IPv4 address buffer.
Returns
True if the addresses are the same, false otherwise.
static inline bool net_ipv6_addr_cmp(const struct in6_addr *addr1, const struct in6_addr
*addr2)
Compare two IPv6 addresses.
Parameters
static inline bool net_ipv4_addr_mask_cmp(struct net_if *iface, const struct in_addr *addr)
Check if the given address belongs to same subnet that has been configured for the interface.
Parameters
• iface – A valid pointer on an interface
• addr – IPv4 address
Returns
True if address is in same subnet, false otherwise.
static inline bool net_ipv4_is_addr_bcast(struct net_if *iface, const struct in_addr *addr)
Check if the given IPv4 address is a broadcast address.
Parameters
• iface – Interface to use. Must be a valid pointer to an interface.
• addr – IPv4 address
Returns
True if address is a broadcast address, false otherwise.
struct net_if_addr *net_if_ipv4_addr_lookup(const struct in_addr *addr, struct net_if **iface)
Returns
True if both addresses have same multicast scope, false otherwise.
static inline bool net_ipv6_is_addr_mcast_global(const struct in6_addr *addr)
Check if the IPv6 address is a global multicast address (FFxE::/16).
Parameters
• addr – IPv6 address.
Returns
True if the address is global multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_iface(const struct in6_addr *addr)
Check if the IPv6 address is an interface scope multicast address (FFx1::).
Parameters
• addr – IPv6 address.
Returns
True if the address is an interface scope multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_link(const struct in6_addr *addr)
Check if the IPv6 address is a link local scope multicast address (FFx2::).
Parameters
• addr – IPv6 address.
Returns
True if the address is a link local scope multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_mesh(const struct in6_addr *addr)
Check if the IPv6 address is a mesh-local scope multicast address (FFx3::).
Parameters
• addr – IPv6 address.
Returns
True if the address is a mesh-local scope multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_site(const struct in6_addr *addr)
Check if the IPv6 address is a site scope multicast address (FFx5::).
Parameters
• addr – IPv6 address.
Returns
True if the address is a site scope multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_org(const struct in6_addr *addr)
Check if the IPv6 address is an organization scope multicast address (FFx8::).
Parameters
• addr – IPv6 address.
Returns
True if the address is an organization scope multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_group(const struct in6_addr *addr, const struct
in6_addr *group)
Check if the IPv6 address belongs to certain multicast group.
Parameters
• addr – IPv6 address.
• group – Group id IPv6 address, the values must be in network byte order
Returns
True if the IPv6 multicast address belongs to given multicast group, false other-
wise.
static inline bool net_ipv6_is_addr_mcast_all_nodes_group(const struct in6_addr *addr)
Check if the IPv6 address belongs to the all nodes multicast group.
Parameters
• addr – IPv6 address
Returns
True if the IPv6 multicast address belongs to the all nodes multicast group, false
otherwise
static inline bool net_ipv6_is_addr_mcast_iface_all_nodes(const struct in6_addr *addr)
Check if the IPv6 address is an interface scope all nodes multicast address (FF01::1).
Parameters
• addr – IPv6 address.
Returns
True if the address is an interface scope all nodes multicast address, false otherwise.
static inline bool net_ipv6_is_addr_mcast_link_all_nodes(const struct in6_addr *addr)
Check if the IPv6 address is a link local scope all nodes multicast address (FF02::1).
Parameters
• addr – IPv6 address.
Returns
True if the address is a link local scope all nodes multicast address, false other-
wise.
static inline void net_ipv6_addr_create_solicited_node(const struct in6_addr *src, struct
in6_addr *dst)
Create solicited node IPv6 multicast address FF02:0:0:0:0:1:FFXX:XXXX defined in RFC 3513.
Parameters
• src – IPv6 address.
• dst – IPv6 address.
static inline void net_ipv6_addr_create(struct in6_addr *addr, uint16_t addr0, uint16_t addr1,
uint16_t addr2, uint16_t addr3, uint16_t addr4,
uint16_t addr5, uint16_t addr6, uint16_t addr7)
Construct an IPv6 address from eight 16-bit words.
Parameters
• addr – IPv6 address
• addr0 – 16-bit word which is part of the address
• addr1 – 16-bit word which is part of the address
• addr2 – 16-bit word which is part of the address
• addr3 – 16-bit word which is part of the address
• addr4 – 16-bit word which is part of the address
• addr5 – 16-bit word which is part of the address
• addr6 – 16-bit word which is part of the address
• addr7 – 16-bit word which is part of the address
int net_addr_pton(sa_family_t family, const char *src, void *dst)
Convert IP address from string form to internal representation.
Note: This function doesn’t do precise error checking; do not use it for untrusted strings.
Parameters
• family – IP address family (AF_INET or AF_INET6)
• src – IP address in a null terminated string
• dst – Pointer to struct in_addr if family is AF_INET or pointer to struct in6_addr
if family is AF_INET6
Returns
0 if ok, < 0 if error
char *net_addr_ntop(sa_family_t family, const void *src, char *dst, size_t size)
Convert IP address to string form.
Parameters
• family – IP address family (AF_INET or AF_INET6)
• src – Pointer to struct in_addr if family is AF_INET or pointer to struct in6_addr
if family is AF_INET6
• dst – Buffer for IP address as a null terminated string
• size – Number of bytes available in the buffer
Returns
dst pointer if ok, NULL if error
struct in6_addr
#include <net_ip.h> IPv6 address struct
struct in_addr
#include <net_ip.h> IPv4 address struct
struct sockaddr_in6
#include <net_ip.h> Socket address struct for IPv6.
struct sockaddr_in6_ptr
#include <net_ip.h>
struct sockaddr_in
#include <net_ip.h> Socket address struct for IPv4.
struct sockaddr_in_ptr
#include <net_ip.h>
struct sockaddr_ll
#include <net_ip.h> Socket address struct for packet socket.
struct sockaddr_ll_ptr
#include <net_ip.h>
struct sockaddr_can_ptr
#include <net_ip.h>
struct iovec
#include <net_ip.h>
struct msghdr
#include <net_ip.h>
struct cmsghdr
#include <net_ip.h>
struct sockaddr
#include <net_ip.h> Generic sockaddr struct. Must be cast to proper type.
struct net_tuple
#include <net_ip.h> IPv6/IPv4 network connection tuple
Public Members
uint16_t remote_port
UDP/TCP remote port
uint16_t local_port
UDP/TCP local port
DNS Resolve
• Overview
• Sample usage
• API Reference
Overview The DNS resolver implements a basic DNS resolver according to IETF RFC 1035 (Domain
Names: Implementation and Specification). Supported DNS answers are IPv4/IPv6 addresses and CNAME.
If a CNAME is received, the DNS resolver will create another DNS query. The number of additional
queries is controlled by the CONFIG_DNS_RESOLVER_ADDITIONAL_QUERIES Kconfig variable.
The multicast DNS (mDNS) client resolver support can be enabled by setting CONFIG_MDNS_RESOLVER
Kconfig option. See IETF RFC6762 for more details about mDNS.
The link-local multicast name resolution (LLMNR) client resolver support can be enabled by setting the
CONFIG_LLMNR_RESOLVER Kconfig variable. See IETF RFC4795 for more details about LLMNR.
For more information about DNS configuration variables, see: subsys/net/lib/dns/Kconfig. The DNS
resolver API can be found at include/zephyr/net/dns_resolve.h.
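As a usage sketch, an application typically reaches the resolver indirectly through the socket API; the hostname and service below are placeholder values:
```c
/* Sketch: resolve a hostname via the socket API, which uses the DNS
 * resolver underneath. Hostname and service are placeholder values. */
struct zsock_addrinfo hints = {
        .ai_family = AF_INET,
        .ai_socktype = SOCK_DGRAM,
};
struct zsock_addrinfo *res = NULL;

int ret = zsock_getaddrinfo("example.com", "4242", &hints, &res);
if (ret == 0) {
        /* iterate the res->ai_next chain, use res->ai_addr / res->ai_addrlen */
        zsock_freeaddrinfo(res);
}
```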
API Reference
group dns_resolve
DNS resolving library.
Defines
DNS_MAX_NAME_SIZE
Max size of the resolved name.
Typedefs
Enums
enum dns_query_type
DNS query type enum
Values:
enumerator DNS_QUERY_TYPE_A = 1
IPv4 query
enumerator DNS_QUERY_TYPE_AAAA = 28
IPv6 query
enum dns_resolve_status
Status values for the callback.
Values:
enumerator DNS_EAI_BADFLAGS = -1
Invalid value for ai_flags field
enumerator DNS_EAI_NONAME = -2
NAME or SERVICE is unknown
enumerator DNS_EAI_AGAIN = -3
Temporary failure in name resolution
enumerator DNS_EAI_FAIL = -4
Non-recoverable failure in name resolution
enumerator DNS_EAI_NODATA = -5
No address associated with NAME
enumerator DNS_EAI_FAMILY = -6
ai_family not supported
enumerator DNS_EAI_SOCKTYPE = -7
ai_socktype not supported
enumerator DNS_EAI_SERVICE = -8
SRV not supported for ai_socktype
enumerator DNS_EAI_ADDRFAMILY = -9
Address family for NAME not supported
enum dns_resolve_context_state
Values:
enumerator DNS_RESOLVE_CONTEXT_ACTIVE
enumerator DNS_RESOLVE_CONTEXT_DEACTIVATING
enumerator DNS_RESOLVE_CONTEXT_INACTIVE
Functions
Returns
0 if ok, <0 if error.
int dns_resolve_close(struct dns_resolve_context *ctx)
Close DNS resolving context.
This releases the DNS resolving context and marks the context unusable. The caller must call
dns_resolve_init() again to make the context usable.
Parameters
• ctx – DNS context
Returns
0 if ok, <0 if error.
int dns_resolve_reconfigure(struct dns_resolve_context *ctx, const char *servers_str[], const
struct sockaddr *servers_sa[])
Reconfigure DNS resolving context.
Reconfigures DNS context with new server list.
Parameters
• ctx – DNS context
• servers_str – DNS server addresses using textual strings. The array is NULL
terminated. The port number can be given in the string. Syntax for the
server addresses with or without port numbers: IPv4 : 10.0.9.1 IPv4 + port :
10.0.9.1:5353 IPv6 : 2001:db8::22:42 IPv6 + port : [2001:db8::22:42]:5353
• servers_sa – DNS server addresses as struct sockaddr. The array is NULL
terminated. Port numbers are optional in struct sockaddr, the default will be
used if set to 0.
Returns
0 if ok, <0 if error.
int dns_resolve_cancel(struct dns_resolve_context *ctx, uint16_t dns_id)
Cancel a pending DNS query.
This releases DNS resources used by a pending query.
Parameters
• ctx – DNS context
• dns_id – DNS id of the pending query
Returns
0 if ok, <0 if error.
int dns_resolve_cancel_with_name(struct dns_resolve_context *ctx, uint16_t dns_id, const char
*query_name, enum dns_query_type query_type)
Cancel a pending DNS query using id, name and type.
This releases DNS resources used by a pending query.
Parameters
• ctx – DNS context
• dns_id – DNS id of the pending query
• query_name – Name of the resource we are trying to query (hostname)
• query_type – Type of the query (A or AAAA)
Returns
0 if ok, <0 if error.
struct dns_addrinfo
#include <dns_resolve.h> Address info struct that is passed to the callback that receives all the results.
struct dns_resolve_context
#include <dns_resolve.h> DNS resolve context structure.
Public Members
uint8_t is_mdns
Is this server mDNS one
uint8_t is_llmnr
Is this server LLMNR one
k_timeout_t buf_timeout
This timeout is also used when a buffer is required from the buffer pools.
struct dns_pending_query
#include <dns_resolve.h> Result callbacks. We have multiple callbacks here so that it is
possible to do multiple queries at the same time.
Contents of this structure can be inspected and changed only when the lock is held.
Public Members
dns_resolve_cb_t cb
Result callback.
A null value indicates the slot is not in use.
void *user_data
User data
k_timeout_t timeout
TX timeout
uint16_t id
DNS id of this query
uint16_t query_hash
Hash of the DNS name + query type we are querying. This hash is calculated so that we
can match the response we are receiving. It is needed mainly for mDNS, which sets the
DNS id to 0, meaning the id alone cannot be used to find the correct pending query.
Network Management
• Overview
• Requesting a defined procedure
• Listening to network events
• Defining a network management procedure
• Signaling a network event
• API Reference
Overview The Network Management APIs allow applications, as well as network layer code itself, to
call defined network routines at any level in the IP stack, or receive notifications on relevant network
events. For example, by using these APIs, application code can request a scan be done on a Wi-Fi- or
Bluetooth-based network interface, or request notification if a network interface IP address changes.
The Network Management API implementation is designed to save memory by eliminating code at
build time for management routines that are not used. Distinct and statically defined APIs for
network management procedures are not used. Instead, defined procedure handlers are registered by
using a NET_MGMT_REGISTER_REQUEST_HANDLER macro. Procedure requests are done through a single
net_mgmt() API that invokes the registered handler for the corresponding request.
The current implementation is experimental and may change and improve in future releases.
Requesting a defined procedure All network management requests are of the form
net_mgmt(mgmt_request, ...). The mgmt_request parameter is a bit mask that tells which
stack layer is targeted, if a net_if object is implied, and the specific management procedure being
requested. The available procedure requests depend on what has been implemented in the stack.
To avoid extra cost, all net_mgmt() calls are direct. Though this may change in a future release, it will
not affect the users of this function.
Listening to network events You can receive notifications on network events by registering a callback
function and specifying a set of events used to filter when your callback is invoked. The callback
must be unique for a given layer and code pair, whereas the command part of the event set is a mask
that can match several events.
Two functions are available, net_mgmt_add_event_callback() for registering the callback
function, and net_mgmt_del_event_callback() for unregistering a callback. A helper function,
net_mgmt_init_event_callback(), can be used to ease the initialization of the callback structure.
When an event occurs that matches a callback’s event set, the associated callback function is invoked
with the actual event code. This makes it possible for different events to be handled by the same callback
function, if desired.
Warning: Event set filtering allows false positives for events that have the same layer and layer
code. A callback handler function must check the event code (passed as an argument) against the
specific network events it will handle, regardless of how many events were in the set passed to
net_mgmt_init_event_callback() .
Note that in order to receive events from multiple layers, one must have multiple listeners registered,
one for each layer being listened to. The callback handler function can be shared between different
layer events.
An example follows.
/*
* Set of events to handle.
* See e.g. include/net/net_event.h for some NET_EVENT_xxx values.
*/
#define EVENT_IFACE_SET (NET_EVENT_IF_xxx | NET_EVENT_IF_yyy)
#define EVENT_IPV4_SET (NET_EVENT_IPV4_xxx | NET_EVENT_IPV4_yyy)
void register_cb(void)
{
net_mgmt_init_event_callback(&iface_callback, callback_handler,
EVENT_IFACE_SET);
net_mgmt_init_event_callback(&ipv4_callback, callback_handler,
EVENT_IPV4_SET);
net_mgmt_add_event_callback(&iface_callback);
net_mgmt_add_event_callback(&ipv4_callback);
}
See include/zephyr/net/net_event.h for available generic core events that can be listened to.
Defining a network management procedure You can provide additional management procedures
specific to your stack implementation by defining a handler and registering it with an associated
mgmt_request code.
Management request codes are defined in relevant places depending on the targeted layer or, if L2 is
the layer, on the technology as well. For instance, all IP layer management request codes can be
found in the include/zephyr/net/net_event.h header file, but for an L2 technology such as Ethernet
they are found in include/zephyr/net/ethernet.h
You define your handler modeled on this signature:
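The signature itself did not survive extraction here; modeled on the net_mgmt request handler type, a registered procedure is expected to look roughly like the following sketch. MY_NET_REQUEST and the handler body are hypothetical; see include/zephyr/net/net_mgmt.h for the authoritative typedef.

```c
#include <zephyr/net/net_mgmt.h>

/* Hypothetical handler for a custom management request code
 * (MY_NET_REQUEST is assumed to be defined elsewhere). */
static int my_mgmt_handler(uint32_t mgmt_request, struct net_if *iface,
			   void *data, size_t len)
{
	/* Perform the procedure, using data/len as request-specific input. */
	return 0;
}

/* Associate the handler with the request code so net_mgmt() can find it. */
NET_MGMT_REGISTER_REQUEST_HANDLER(MY_NET_REQUEST, my_mgmt_handler);
```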
Signaling a network event You can signal a specific network event using the
net_mgmt_event_notify() function and provide the network event code. See
include/zephyr/net/net_mgmt.h for details. As with management request codes, event codes can also be
found in the mgmt headers of specific L2 technologies; for example, include/zephyr/net/ieee802154_mgmt.h
is the right place if 802.15.4 is the L2 technology whose events one wants to listen to.
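A minimal notification might look like the following sketch. The event code used here (NET_EVENT_IPV4_ADDR_ADD) is one of the generic core events; net_mgmt_event_notify_with_info() can be used instead when the event carries additional data.

```c
#include <zephyr/net/net_mgmt.h>
#include <zephyr/net/net_event.h>

/* Sketch: tell registered listeners that an IPv4 address
 * was added on the given interface. */
void signal_addr_added(struct net_if *iface)
{
	net_mgmt_event_notify(NET_EVENT_IPV4_ADDR_ADD, iface);
}
```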
API Reference
group net_mgmt
Network Management.
Defines
NET_MGMT_DEFINE_REQUEST_HANDLER(_mgmt_request)
NET_MGMT_REGISTER_REQUEST_HANDLER(_mgmt_request, _func)
Typedefs
Functions
Parameters
• iface – a pointer to a valid network interface whose events to listen to
• mgmt_event_mask – A mask of relevant events to wait on. The events listened
to should be relevant to iface events and thus have the bit
NET_MGMT_IFACE_BIT set.
• raised_event – a pointer to a uint32_t that receives the event from the mask
that was actually raised. Can be NULL if the caller is not interested in that information.
• info – a valid pointer if the user wants the information the event might carry
along. NULL otherwise.
• info_length – tells how long the info memory area is. Only valid if info is
not NULL.
• timeout – A timeout delay. K_FOREVER can be used to wait indefinitely.
Returns
0 on success, a negative error code otherwise. -ETIMEDOUT is returned specifically
if the timeout kicks in instead of an actual event.
void net_mgmt_event_init(void)
Used by the core of the network stack to initialize the network event processing.
struct net_mgmt_event_callback
#include <net_mgmt.h> Network Management event callback structure. Used to register a
callback into the network management event part, in order to let the owner of this struct
get network event notifications based on the given event mask.
Public Members
sys_snode_t node
Meant to be used internally, to insert the callback into a list. So nobody should mess with
it.
net_mgmt_event_handler_t handler
Actual callback function being used to notify the owner
uint32_t event_mask
A mask of network events on which the above handler should be called in case those
events come. Note that only the command part is treated as a mask, matching one to
several commands; the layer and layer code must be an exact match. This means
that in order to receive events from multiple layers, one must have multiple listeners
registered, one for each layer being listened to. Such a mask can be modified whenever
necessary by the owner, and thus will affect whether the handler is called.
uint32_t raised_event
Internal placeholder set when a synchronous event wait is successfully unlocked on an event.
Network Statistics
• Overview
• API Reference
API Reference
group net_stats
Network statistics library.
Defines
NET_TC_TX_STATS_COUNT
NET_TC_RX_STATS_COUNT
Typedefs
struct net_stats_bytes
#include <net_stats.h> Number of bytes sent and received.
Public Members
net_stats_t sent
Number of bytes sent
net_stats_t received
Number of bytes received
struct net_stats_pkts
#include <net_stats.h> Number of network packets sent and received.
Public Members
net_stats_t tx
Number of packets sent
net_stats_t rx
Number of packets received
struct net_stats_ip
#include <net_stats.h> IP layer statistics.
Public Members
net_stats_t recv
Number of received packets at the IP layer.
net_stats_t sent
Number of sent packets at the IP layer.
net_stats_t forwarded
Number of forwarded packets at the IP layer.
net_stats_t drop
Number of dropped packets at the IP layer.
struct net_stats_ip_errors
#include <net_stats.h> IP layer error statistics.
Public Members
net_stats_t vhlerr
Number of packets dropped due to wrong IP version or header length.
net_stats_t hblenerr
Number of packets dropped due to wrong IP length, high byte.
net_stats_t lblenerr
Number of packets dropped due to wrong IP length, low byte.
net_stats_t fragerr
Number of packets dropped because they were IP fragments.
net_stats_t chkerr
Number of packets dropped due to IP checksum errors.
net_stats_t protoerr
Number of packets dropped because they were neither ICMP, UDP nor TCP.
struct net_stats_icmp
#include <net_stats.h> ICMP statistics.
Public Members
net_stats_t recv
Number of received ICMP packets.
net_stats_t sent
Number of sent ICMP packets.
net_stats_t drop
Number of dropped ICMP packets.
net_stats_t typeerr
Number of ICMP packets with a wrong type.
net_stats_t chkerr
Number of ICMP packets with a bad checksum.
struct net_stats_tcp
#include <net_stats.h> TCP statistics.
Public Members
net_stats_t resent
Amount of retransmitted data.
net_stats_t drop
Number of dropped packets at the TCP layer.
net_stats_t recv
Number of received TCP segments.
net_stats_t sent
Number of sent TCP segments.
net_stats_t seg_drop
Number of dropped TCP segments.
net_stats_t chkerr
Number of TCP segments with a bad checksum.
net_stats_t ackerr
Number of received TCP segments with a bad ACK number.
net_stats_t rsterr
Number of received bad TCP RST (reset) segments.
net_stats_t rst
Number of received TCP RST (reset) segments.
net_stats_t rexmit
Number of retransmitted TCP segments.
net_stats_t conndrop
Number of dropped connection attempts because too few connections were available.
net_stats_t connrst
Number of connection attempts for closed ports, triggering a RST.
struct net_stats_udp
#include <net_stats.h> UDP statistics.
Public Members
net_stats_t drop
Number of dropped UDP segments.
net_stats_t recv
Number of received UDP segments.
net_stats_t sent
Number of sent UDP segments.
net_stats_t chkerr
Number of UDP segments with a bad checksum.
struct net_stats_ipv6_nd
#include <net_stats.h> IPv6 neighbor discovery statistics.
struct net_stats_ipv6_mld
#include <net_stats.h> IPv6 multicast listener daemon statistics.
Public Members
net_stats_t recv
Number of received IPv6 MLD queries
net_stats_t sent
Number of sent IPv6 MLD reports
net_stats_t drop
Number of dropped IPv6 MLD packets
struct net_stats_ipv4_igmp
#include <net_stats.h> IPv4 IGMP daemon statistics.
Public Members
net_stats_t recv
Number of received IPv4 IGMP queries
net_stats_t sent
Number of sent IPv4 IGMP reports
net_stats_t drop
Number of dropped IPv4 IGMP packets
struct net_stats_tx_time
#include <net_stats.h> Network packet transfer times for calculating average TX time.
struct net_stats_rx_time
#include <net_stats.h> Network packet receive times for calculating average RX time.
struct net_stats_tc
#include <net_stats.h> Traffic class statistics.
struct net_stats_pm
#include <net_stats.h> Power management statistics.
struct net_stats
#include <net_stats.h> All network statistics in one struct.
Public Members
net_stats_t processing_error
Count of malformed packets or packets we do not have a handler for
struct net_stats_eth_errors
#include <net_stats.h> Ethernet error statistics.
struct net_stats_eth_flow
#include <net_stats.h> Ethernet flow control statistics.
struct net_stats_eth_csum
#include <net_stats.h> Ethernet checksum statistics.
struct net_stats_eth_hw_timestamp
#include <net_stats.h> Ethernet hardware timestamp statistics.
struct net_stats_eth
#include <net_stats.h> All Ethernet specific statistics.
struct net_stats_ppp
#include <net_stats.h> All PPP specific statistics.
Public Members
net_stats_t drop
Number of received and dropped PPP frames.
net_stats_t chkerr
Number of received PPP frames with a bad checksum.
struct net_stats_sta_mgmt
#include <net_stats.h> All Wi-Fi management statistics.
Public Members
net_stats_t beacons_rx
Number of received beacons
net_stats_t beacons_miss
Number of missed beacons
struct net_stats_wifi
#include <net_stats.h> All Wi-Fi specific statistics.
Network Timeout
• Overview
• Use
• API Reference
Overview Zephyr’s network infrastructure mostly uses the millisecond-resolution uptime clock to track
timeouts, with both deadlines and durations measured with 32-bit unsigned values. The 32-bit value
rolls over at 49 days 17 hours 2 minutes 47.296 seconds.
Timeout processing is often affected by latency, so that the time at which the timeout is checked may
be some time after it should have expired. Handling this correctly without arbitrary expectations of
maximum latency requires that the maximum delay that can be directly represented be a 31-bit
non-negative number (INT32_MAX), which overflows at 24 days 20 hours 31 minutes 23.648 seconds.
Most network timeouts are shorter than the delay rollover, but a few protocols allow for delays that
are represented as unsigned 32-bit values counting seconds, which corresponds to a 42-bit millisecond
count.
The net_timeout API provides a generic timeout mechanism to correctly track the remaining time for
these extended-duration timeouts.
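The signed-difference deadline check that motivates the INT32_MAX cap can be illustrated with plain C arithmetic. This is a standalone sketch, not part of the net_timeout API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Deadline check tolerant of 32-bit uptime wraparound: the unsigned
 * difference "now - deadline" is reinterpreted as signed, so expiry is
 * detected correctly as long as both the programmed delay and the
 * observation latency stay below INT32_MAX milliseconds. */
bool deadline_passed(uint32_t now_ms, uint32_t deadline_ms)
{
	return (int32_t)(now_ms - deadline_ms) >= 0;
}
```

For example, deadline_passed(10, 0xFFFFFFF0u) is true: the clock wrapped through zero 26 ms after the deadline, and the signed difference still comes out non-negative.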
API Reference
group net_timeout
Network long timeout primitives and helpers.
Defines
NET_TIMEOUT_MAX_VALUE
Divisor used to support ms resolution timeouts.
Because delays are processed in work queues which are not invoked synchronously with clock
changes we need to be able to detect timeouts after they occur, which requires comparing
“deadline” to “now” with enough “slop” to handle any observable latency due to “now” ad-
vancing past “deadline”.
The simplest solution is to use the native conversion of the well-defined 32-bit unsigned
difference to a 32-bit signed difference, which caps the maximum delay at INT32_MAX. This is
compatible with the standard mechanism for detecting completion of deadlines that do not
overflow their representation.
Functions
Parameters
• timeout – a pointer to the timeout state, initialized by net_timeout_set()
and maintained by net_timeout_evaluate().
• now – the full-precision value of k_uptime_get() relative to which the deadline
will be calculated.
Returns
the value of k_uptime_get() at which the timeout will expire.
Note: This function rounds the remaining time down, i.e. if the timeout will occur in 3500
milliseconds the value 3 will be returned.
Parameters
• timeout – a pointer to the timeout state
• now – the time relative to which the estimate of remaining time should be
calculated. This should be a recently captured value from k_uptime_get_32().
Return values
• 0 – if the timeout has completed.
• positive – the remaining duration of the timeout, in seconds.
struct net_timeout
#include <net_timeout.h> Generic struct for handling network timeouts.
Except for the linking node, all access to state from these objects must go through the defined
API.
Public Members
sys_snode_t node
Used to link multiple timeouts that share a common timer infrastructure.
For example, a set of related timers may use a single delayed work structure, which is
always scheduled at the shortest time to a timeout event.
Networking Context The net_context API is not meant for application use. Applications should use
the BSD Sockets API instead.
Promiscuous Mode
• Overview
• Sample usage
• API Reference
Overview Promiscuous mode is a mode for a network interface controller that causes it to pass all
traffic it receives to the application, rather than only the frames that the controller is specifically
programmed to receive. This mode is normally used for packet sniffing, for example to diagnose network
connectivity issues by showing an application all the data being transferred over the network. (See the
Wikipedia article on promiscuous mode for more information.)
The network promiscuous APIs are used to enable and disable this mode, and to wait for and receive
network data. Not all network technologies or network device drivers support promiscuous mode.
Sample usage First the promiscuous mode needs to be turned ON by the application like this:
ret = net_promisc_mode_on(iface);
if (ret < 0) {
	if (ret == -EALREADY) {
		printf("Promiscuous mode already enabled\n");
	} else {
		printf("Cannot enable promiscuous mode for "
		       "interface %p (%d)\n", iface, ret);
	}
}
If there is no error, then the application can start to wait for network data:
while (true) {
	pkt = net_promisc_mode_wait_data(K_FOREVER);
	if (pkt) {
		print_info(pkt);
		net_pkt_unref(pkt);
	}
}
Finally the promiscuous mode can be turned OFF by the application like this:
ret = net_promisc_mode_off(iface);
if (ret < 0) {
	if (ret == -EALREADY) {
		printf("Promiscuous mode already disabled\n");
	} else {
		printf("Cannot disable promiscuous mode for "
		       "interface %p (%d)\n", iface, ret);
	}
}
API Reference
group promiscuous
Promiscuous mode support.
Functions
Returns
0 if ok, <0 if error
static inline int net_promisc_mode_off(struct net_if *iface)
Disable promiscuous mode for a given network interface.
Parameters
• iface – Network interface
Returns
0 if ok, <0 if error
• Overview
• API Reference
Overview The SNTP library implements IETF RFC4330 (Simple Network Time Protocol v4).
SNTP provides a way to synchronize clocks in computer networks.
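A one-shot query using the convenience API might look like the following sketch. The server hostname and the 4-second timeout are illustrative values; see include/zephyr/net/sntp.h for the full context-based API.

```c
#include <zephyr/net/sntp.h>

/* Sketch: query an SNTP server once and store the result in *ts. */
int get_network_time(struct sntp_time *ts)
{
	return sntp_simple("time.example.com", 4000, ts);
}
```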
API Reference
group sntp
Simple Network Time Protocol API.
Functions
struct sntp_ctx
#include <sntp.h> SNTP context
Public Members
uint32_t expected_orig_ts
Timestamp when the request was sent from client to server. This is used to check whether the
originate timestamp in the server reply matches the one in the client request.
struct sntp_time
#include <sntp.h> Time as returned by SNTP API, fractional seconds since 1 Jan 1970
• Overview
• SOCKS5 API
• SOCKS5 Proxy Usage in MQTT
Overview The SOCKS library implements SOCKS5 support, which allows Zephyr to connect to peer
devices via a network proxy.
See this SOCKS5 Wikipedia article for a detailed overview of how SOCKS5 works.
For more information about the protocol itself, see IETF RFC1928 SOCKS Protocol Version 5.
SOCKS5 API SOCKS5 support is enabled by the CONFIG_SOCKS Kconfig variable. An application wanting
to use SOCKS5 must set the SOCKS5 proxy host address by calling setsockopt() like this:
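The original snippet was not preserved here; a minimal sketch, assuming an IPv4 proxy at an illustrative address and the conventional SOCKS port 1080, might look like this:

```c
#include <zephyr/net/socket.h>

/* Sketch: point an existing socket at a SOCKS5 proxy before connect().
 * The proxy address and port values are illustrative. */
static int set_proxy(int sock)
{
	struct sockaddr_in proxy = {
		.sin_family = AF_INET,
		.sin_port = htons(1080),
	};

	inet_pton(AF_INET, "192.0.2.1", &proxy.sin_addr);

	return setsockopt(sock, SOL_SOCKET, SO_SOCKS5,
			  &proxy, sizeof(proxy));
}
```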
SOCKS5 Proxy Usage in MQTT For the MQTT client, there is the mqtt_client_set_proxy() API that the
application can call to set up a SOCKS5 proxy. See the mqtt-publisher-sample for a usage example.
• Overview
• API Reference
Overview The Trickle timer library implements IETF RFC6206 (Trickle Algorithm).
The Trickle algorithm allows nodes in a lossy shared medium (e.g., low-power and lossy networks) to
exchange information in a highly robust, energy efficient, simple, and scalable manner.
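Setting up and starting a Trickle timer might look like the following sketch. The interval parameters (Imin = 64 ms, 6 doublings, redundancy constant k = 3) and the callback body are illustrative; see include/zephyr/net/trickle.h for the authoritative signatures.

```c
#include <zephyr/net/trickle.h>

static struct net_trickle trickle;

/* Hypothetical callback: fires when the Trickle timer expires. */
static void trickle_cb(struct net_trickle *t, bool do_suppress,
		       void *user_data)
{
	if (!do_suppress) {
		/* Redundancy threshold not reached: transmit our state. */
	}
}

void start_trickle(void)
{
	net_trickle_create(&trickle, 64, 6, 3);
	net_trickle_start(&trickle, trickle_cb, NULL);
}
```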
API Reference
group trickle
Trickle algorithm library.
Typedefs
Functions
struct net_trickle
#include <trickle.h> The variable names are taken directly from RFC 6206 when applicable.
Note that the struct members should not be accessed directly but only via the Trickle API.
Public Members
uint32_t Imin
Min interval size in ms
uint8_t Imax
Max number of doublings
uint8_t k
Redundancy constant
uint32_t I
Current interval size
uint32_t Istart
Start of the interval in ms
uint8_t c
Consistency counter
uint32_t Imax_abs
Max interval size in ms (not doublings)
net_trickle_cb_t cb
Callback to be called when timer expires
• Overview
• Websocket Transport
• API Reference
Overview The Websocket client library allows Zephyr to connect to a Websocket server. The Websocket
client API can be used directly by an application to establish a Websocket connection to a server, or it
can be used as a transport for other network protocols like MQTT.
See this Websocket Wikipedia article for a detailed overview of how Websocket works.
For more information about the protocol itself, see IETF RFC6455 The WebSocket Protocol.
Websocket Transport The Websocket API allows it to be used as a transport for other high-level
protocols like MQTT. The Zephyr MQTT client library can be configured to use Websocket transport by
enabling the CONFIG_MQTT_LIB_WEBSOCKET and CONFIG_WEBSOCKET_CLIENT Kconfig options.
First a socket needs to be created and connected to the Websocket server:
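The original snippet was not preserved here; a minimal sketch, assuming http_sock is a TCP socket already connected to the server (the host, URL, buffer size, and 3-second timeout are illustrative), might look like this:

```c
#include <zephyr/net/websocket.h>

static uint8_t temp_buf[512];

/* Sketch: upgrade an already-connected TCP socket to a Websocket.
 * On success a Websocket socket descriptor is returned. */
int ws_open(int http_sock)
{
	struct websocket_request req = {
		.host = "server.example.com",
		.url = "/ws",
		.tmp_buf = temp_buf,
		.tmp_buf_len = sizeof(temp_buf),
	};

	return websocket_connect(http_sock, &req, 3000, NULL);
}
```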
The Websocket socket can then be used to send or receive data, and the Websocket client API will
encapsulate the sent or received data in Websocket packet payloads. Either the websocket_xxx()
API or normal BSD socket API functions can be used to send and receive application data.
If normal BSD socket functions are used, then currently only TEXT data is supported. In order to send
BINARY data, websocket_send_msg() must be used.
When done, the Websocket transport socket must be closed.
ret = close(ws_sock);
or
ret = websocket_disconnect(ws_sock);
API Reference
group websocket
Websocket API.
Defines
WEBSOCKET_FLAG_FINAL
Final frame. (These message type values are returned in websocket_recv_msg().)
WEBSOCKET_FLAG_TEXT
Textual data
WEBSOCKET_FLAG_BINARY
Binary data
WEBSOCKET_FLAG_CLOSE
Closing connection
WEBSOCKET_FLAG_PING
Ping message
WEBSOCKET_FLAG_PONG
Pong message
Typedefs
Enums
enum websocket_opcode
Values:
Functions
struct websocket_request
#include <websocket.h> Websocket client connection request. This contains all the data that
is needed when doing a Websocket connection request.
Public Members
http_header_cb_t optional_headers_cb
User supplied callback function to call when optional headers need to be sent. This can
be NULL, in which case the optional_headers field in http_request is used. The idea of
this optional_headers callback is to allow the user to send more HTTP header data than is
practical to store in allocated memory.
websocket_connect_cb_t cb
User supplied callback function to call when a connection is established.
uint8_t *tmp_buf
User supplied buffer where HTTP connection data is stored
size_t tmp_buf_len
Length of the user supplied temp buffer
• Overview
• Sample usage
• API Reference
Overview The net_capture API allows the user to monitor network traffic on one of the Zephyr
network interfaces and send that traffic to an external system for analysis. The monitoring can be set up
either manually using net-shell or automatically by using the net_capture API.
Sample usage See Network capture sample application and Monitor Network Traffic for details.
API Reference
group net_capture
Network packet capture support functions.
Functions
Returns
True if enabled, False if network capture is disabled.
static inline int net_capture_disable(const struct device *dev)
Disable network packet capturing support.
Parameters
• dev – Network capture device
Returns
0 if ok, <0 if network packet capture disable failed
static inline int net_capture_send(const struct device *dev, struct net_if *iface, struct net_pkt
*pkt)
Send captured packet.
Parameters
• dev – Network capture device
• iface – Network interface the packet is being sent on
• pkt – The network packet that is sent
Returns
0 if ok, <0 if network packet capture send failed
Network Buffer
• Overview
• Creating buffers
• Common Operations
• Reference Counting
• API Reference
Overview Network buffers are a core concept of how the networking stack (as well as the Bluetooth
stack) passes data around. The API for them is defined in include/zephyr/net/buf.h.
Creating buffers Network buffers are created by first defining a pool of them:
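The original snippet was not preserved here; a minimal pool definition, with illustrative name and sizes, might look like this:

```c
#include <zephyr/net/buf.h>

/* Sketch: a pool of 10 buffers, each with 128 bytes of data room and
 * no per-buffer user data; no destroy callback. */
NET_BUF_POOL_DEFINE(my_pool, 10, 128, 0, NULL);
```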
The pool is a static variable, so if it needs to be exported to another module, a separate pointer is
needed.
Once the pool has been defined, buffers can be allocated from it with:
There is no explicit initialization function for the pool or its buffers, rather this is done implicitly as
net_buf_alloc() gets called.
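A minimal allocation might look like the following sketch, assuming a pool named my_pool has been defined with NET_BUF_POOL_DEFINE (the pool name, sizes, and timeout are illustrative):

```c
#include <zephyr/net/buf.h>

NET_BUF_POOL_DEFINE(my_pool, 10, 128, 0, NULL);

void alloc_example(void)
{
	/* Block for up to 100 ms waiting for a free buffer;
	 * K_NO_WAIT or K_FOREVER can be used instead. */
	struct net_buf *buf = net_buf_alloc(&my_pool, K_MSEC(100));

	if (!buf) {
		return; /* Pool exhausted within the timeout. */
	}

	/* ... use the buffer ... */

	net_buf_unref(buf);
}
```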
If there is a need to reserve space in the buffer for protocol headers to be prepended later, it’s possible to
reserve this headroom with:
net_buf_reserve(buf, headroom);
In addition to actual protocol data and generic parsing context, network buffers may also contain
protocol-specific context, known as user data. Both the maximum data and user data capacity of the
buffers is compile-time defined when declaring the buffer pool.
The buffers have native support for being passed through k_fifo kernel objects. This is a very practical
feature when the buffers need to be passed from one thread to another. However, since a net_buf may
have a fragment chain attached to it, instead of using the k_fifo_put() and k_fifo_get() APIs, special
net_buf_put() and net_buf_get() APIs must be used when passing buffers through FIFOs. These APIs
ensure that the buffer chains stay intact. The same applies for passing buffers through a singly linked list,
in which case the net_buf_slist_put() and net_buf_slist_get() functions must be used instead of
sys_slist_append() and sys_slist_get() .
Common Operations The network buffer API provides some useful helpers for encoding and
decoding data in the buffers. To fully understand these helpers it's good to understand the basic names
of operations used with them:
Add
Add data to the end of the buffer. Modifies the data length value while leaving the actual data
pointer intact. Requires that there is enough tailroom in the buffer. Some examples of APIs for
adding data:
Remove
Remove data from the end of the buffer. Modifies the data length value while leaving the actual
data pointer intact. Some examples of APIs for removing data:
Push
Prepend data to the beginning of the buffer. Modifies both the data length value as well as the data
pointer. Requires that there is enough headroom in the buffer. Some examples of APIs for pushing
data:
Pull
Remove data from the beginning of the buffer. Modifies both the data length value as well as the
data pointer. Some examples of APIs for pulling data:
The Add and Push operations are used when encoding data into the buffer, whereas the Remove and Pull
operations are used when decoding data from a buffer.
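The four operations can be sketched together as follows. The helper names used here (net_buf_add_le16, net_buf_push_u8, net_buf_pull_u8, net_buf_remove_le16) are assumed from recent Zephyr versions; buf is assumed to be allocated with enough headroom and tailroom.

```c
#include <zephyr/net/buf.h>

/* Sketch: encode then decode a byte header and a 16-bit value. */
void codec_example(struct net_buf *buf)
{
	/* Encode: Add appends to the tail, Push prepends to the head. */
	net_buf_add_le16(buf, 0x1234);
	net_buf_push_u8(buf, 0x01);

	/* Decode: Pull consumes from the head, Remove from the tail. */
	uint8_t hdr = net_buf_pull_u8(buf);      /* the pushed 0x01 */
	uint16_t val = net_buf_remove_le16(buf); /* the added 0x1234 */

	ARG_UNUSED(hdr);
	ARG_UNUSED(val);
}
```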
Reference Counting Each network buffer is reference counted. The buffer is initially acquired from a
free buffers pool by calling net_buf_alloc() , resulting in a buffer with reference count 1. The reference
count can be incremented with net_buf_ref() or decremented with net_buf_unref() . When the
count drops to zero the buffer is automatically placed back to the free buffers pool.
API Reference
group net_buf
Network buffer library.
Defines
NET_BUF_SIMPLE_DEFINE(_name, _size)
Define a net_buf_simple stack variable.
This is a helper macro which is used to define a net_buf_simple object on the stack.
Parameters
• _name – Name of the net_buf_simple object.
• _size – Maximum data storage for the buffer.
NET_BUF_SIMPLE_DEFINE_STATIC(_name, _size)
Define a static net_buf_simple variable.
This is a helper macro which is used to define a static net_buf_simple object.
Parameters
• _name – Name of the net_buf_simple object.
• _size – Maximum data storage for the buffer.
NET_BUF_SIMPLE(_size)
Define a net_buf_simple stack variable and get a pointer to it.
This is a helper macro which is used to define a net_buf_simple object on the stack and then get
a pointer to it as follows:
struct net_buf_simple *my_buf = NET_BUF_SIMPLE(10);
After creating the object it needs to be initialized by calling net_buf_simple_init().
Parameters
• _size – Maximum data storage for the buffer.
Returns
Pointer to stack-allocated net_buf_simple object.
NET_BUF_EXTERNAL_DATA
Flag indicating that the buffer's associated data pointer points to externally allocated memory.
Therefore, once the reference count drops to zero, the pointed-to data will not be deallocated. This
flag never needs to be explicitly set or unset by the net_buf API user; such a net_buf is exclusively
instantiated via the net_buf_alloc_with_data() function. The reference count mechanism behaves
the same way, however: a reference count reaching 0 frees the net_buf, but not the data pointer in
it.
If provided with a custom destroy callback, this callback is responsible for eventually calling
net_buf_destroy() to complete the process of returning the buffer to the pool.
Parameters
• _name – Name of the pool variable.
• _count – Number of buffers in the pool.
• _data_size – Total amount of memory available for data payloads.
• _ud_size – User data space to reserve per buffer.
• _destroy – Optional destroy callback when buffer is freed.
NET_BUF_POOL_DEFINE(_name, _count, _size, _ud_size, _destroy)
Define a new pool for buffers.
Defines a net_buf_pool struct and the necessary memory storage (array of structs) for the
needed amount of buffers. After this, the buffers can be accessed from the pool through
net_buf_alloc(). The pool is defined as a static variable, so if it needs to be exported
outside the current module this needs to happen with the help of a separate pointer rather than
an extern declaration.
If provided with a custom destroy callback this callback is responsible for eventually calling
net_buf_destroy() to complete the process of returning the buffer to the pool.
Parameters
• _name – Name of the pool variable.
• _count – Number of buffers in the pool.
• _size – Maximum data size for each buffer.
• _ud_size – Amount of user data space to reserve.
• _destroy – Optional destroy callback when buffer is freed.
Typedefs
Functions
void net_buf_simple_init(struct net_buf_simple *buf, size_t reserve_head)
Initialize a net_buf_simple object.
This needs to be called after creating a net_buf_simple object using the NET_BUF_SIMPLE
macro.
Parameters
• buf – Buffer to initialize.
• reserve_head – Headroom to reserve.
void net_buf_simple_init_with_data(struct net_buf_simple *buf, void *data, size_t size)
Initialize a net_buf_simple object with data.
Initializes the buffer object with externally provided data.
Parameters
• buf – Buffer to initialize.
• data – External data pointer
• size – Amount of data the pointed data buffer is able to fit.
static inline void net_buf_simple_reset(struct net_buf_simple *buf)
Reset buffer.
Reset buffer data so it can be reused for other purposes.
Parameters
• buf – Buffer to reset.
void net_buf_simple_clone(const struct net_buf_simple *original, struct net_buf_simple *clone)
Clone buffer state, using the same data buffer.
Initializes a buffer to point to the same data as an existing buffer. Allows operations on the
same data without altering the length and offset of the original.
Parameters
• original – Buffer to clone.
• clone – The new clone.
void *net_buf_simple_add(struct net_buf_simple *buf, size_t len)
Prepare data to be added at the end of the buffer.
Increments the data length of a buffer to account for more data at the end.
Parameters
• buf – Buffer to update.
• len – Number of bytes to increment the length with.
Returns
The original tail of the buffer.
void *net_buf_simple_add_mem(struct net_buf_simple *buf, const void *mem, size_t len)
Copy given number of bytes from memory to the end of the buffer.
Increments the data length of the buffer to account for more data at the end.
Parameters
• buf – Buffer to update.
• mem – Location of data to be added.
• len – Length of data to be added
Returns
The original tail of the buffer.
Returns
New end of the buffer data.
uint8_t net_buf_simple_remove_u8(struct net_buf_simple *buf)
Remove an 8-bit value from the end of the buffer.
Same idea as with net_buf_simple_remove_mem(), but a helper for operating on 8-bit values.
Parameters
• buf – A valid pointer on a buffer.
Returns
The 8-bit removed value
uint16_t net_buf_simple_remove_le16(struct net_buf_simple *buf)
Remove and convert 16 bits from the end of the buffer.
Same idea as with net_buf_simple_remove_mem(), but a helper for operating on 16-bit little
endian data.
Parameters
• buf – A valid pointer on a buffer.
Returns
16-bit value converted from little endian to host endian.
uint16_t net_buf_simple_remove_be16(struct net_buf_simple *buf)
Remove and convert 16 bits from the end of the buffer.
Same idea as with net_buf_simple_remove_mem(), but a helper for operating on 16-bit big
endian data.
Parameters
• buf – A valid pointer on a buffer.
Returns
16-bit value converted from big endian to host endian.
uint32_t net_buf_simple_remove_le24(struct net_buf_simple *buf)
Remove and convert 24 bits from the end of the buffer.
Same idea as with net_buf_simple_remove_mem(), but a helper for operating on 24-bit little
endian data.
Parameters
• buf – A valid pointer on a buffer.
Returns
24-bit value converted from little endian to host endian.
uint32_t net_buf_simple_remove_be24(struct net_buf_simple *buf)
Remove and convert 24 bits from the end of the buffer.
Same idea as with net_buf_simple_remove_mem(), but a helper for operating on 24-bit big
endian data.
Parameters
• buf – A valid pointer on a buffer.
Returns
24-bit value converted from big endian to host endian.
Parameters
• buf – A valid pointer on a buffer.
Returns
64-bit value converted from big endian to host endian.
void *net_buf_simple_push(struct net_buf_simple *buf, size_t len)
Prepare data to be added to the start of the buffer.
Modifies the data pointer and buffer length to account for more data in the beginning of the
buffer.
Parameters
• buf – Buffer to update.
• len – Number of bytes to add to the beginning.
Returns
The new beginning of the buffer data.
void *net_buf_simple_push_mem(struct net_buf_simple *buf, const void *mem, size_t len)
Copy given number of bytes from memory to the start of the buffer.
Modifies the data pointer and buffer length to account for more data in the beginning of the
buffer.
Parameters
• buf – Buffer to update.
• mem – Location of data to be added.
• len – Length of data to be added.
Returns
The new beginning of the buffer data.
void net_buf_simple_push_le16(struct net_buf_simple *buf, uint16_t val)
Push 16-bit value to the beginning of the buffer.
Adds 16-bit value in little endian format to the beginning of the buffer.
Parameters
• buf – Buffer to update.
• val – 16-bit value to be pushed to the buffer.
void net_buf_simple_push_be16(struct net_buf_simple *buf, uint16_t val)
Push 16-bit value to the beginning of the buffer.
Adds 16-bit value in big endian format to the beginning of the buffer.
Parameters
• buf – Buffer to update.
• val – 16-bit value to be pushed to the buffer.
void net_buf_simple_push_u8(struct net_buf_simple *buf, uint8_t val)
Push 8-bit value to the beginning of the buffer.
Adds an 8-bit value to the beginning of the buffer.
Parameters
• buf – Buffer to update.
• val – 8-bit value to be pushed to the buffer.
Returns
48-bit value converted from little endian to host endian.
uint64_t net_buf_simple_pull_be48(struct net_buf_simple *buf)
Remove and convert 48 bits from the beginning of the buffer.
Same idea as with net_buf_simple_pull(), but a helper for operating on 48-bit big endian data.
Parameters
• buf – A valid pointer on a buffer.
Returns
48-bit value converted from big endian to host endian.
uint64_t net_buf_simple_pull_le64(struct net_buf_simple *buf)
Remove and convert 64 bits from the beginning of the buffer.
Same idea as with net_buf_simple_pull(), but a helper for operating on 64-bit little endian
data.
Parameters
• buf – A valid pointer on a buffer.
Returns
64-bit value converted from little endian to host endian.
uint64_t net_buf_simple_pull_be64(struct net_buf_simple *buf)
Remove and convert 64 bits from the beginning of the buffer.
Same idea as with net_buf_simple_pull(), but a helper for operating on 64-bit big endian data.
Parameters
• buf – A valid pointer on a buffer.
Returns
64-bit value converted from big endian to host endian.
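All the endian-converting remove/pull helpers described above follow the same decode pattern: consume bytes from one end of the buffer and shift them into a host-order integer. A self-contained sketch (toy_* names are illustrative, not the real net_buf_simple internals):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy cursor: consume bytes from the front of a byte stream. */
struct toy_cursor {
    const uint8_t *data;
    size_t len;
};

/* Little endian: least significant byte comes first on the wire. */
static uint16_t toy_pull_le16(struct toy_cursor *c)
{
    uint16_t v = (uint16_t)c->data[0] | ((uint16_t)c->data[1] << 8);

    c->data += 2;
    c->len -= 2;
    return v;
}

/* Big endian: most significant byte comes first on the wire. */
static uint16_t toy_pull_be16(struct toy_cursor *c)
{
    uint16_t v = ((uint16_t)c->data[0] << 8) | (uint16_t)c->data[1];

    c->data += 2;
    c->len -= 2;
    return v;
}
```

The 24-, 48-, and 64-bit variants extend the same shifting scheme to more bytes.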
static inline uint8_t *net_buf_simple_tail(struct net_buf_simple *buf)
Get the tail pointer for a buffer.
Get a pointer to the end of the data in a buffer.
Parameters
• buf – Buffer.
Returns
Tail pointer for the buffer.
size_t net_buf_simple_headroom(struct net_buf_simple *buf)
Check buffer headroom.
Check how much free space there is in the beginning of the buffer.
Parameters
• buf – A valid pointer on a buffer
Returns
Number of bytes available in the beginning of the buffer.
size_t net_buf_simple_tailroom(struct net_buf_simple *buf)
Check buffer tailroom.
Check how much free space there is at the end of the buffer.
Parameters
• buf – A valid pointer on a buffer
Returns
Number of bytes available at the end of the buffer.
uint16_t net_buf_simple_max_len(struct net_buf_simple *buf)
Check the maximum net_buf_simple::len value.
This value depends on the number of bytes reserved as headroom.
Parameters
• buf – A valid pointer on a buffer
Returns
Number of bytes usable behind the net_buf_simple::data pointer.
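The headroom, tailroom, and maximum-length values above are simple arithmetic over the buffer layout; the following self-contained sketch (toy_* names are illustrative) shows the relationships:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy layout mirroring net_buf_simple; buf_start corresponds to the
 * internal storage pointer. */
struct toy_buf {
    uint8_t *buf_start; /* start of the backing storage */
    uint8_t *data;      /* current start of valid data */
    uint16_t len;       /* bytes of valid data */
    uint16_t size;      /* capacity of the backing storage */
};

/* Free space before the data pointer (reserved headroom). */
static size_t toy_headroom(const struct toy_buf *b)
{
    return (size_t)(b->data - b->buf_start);
}

/* Free space after the valid data. */
static size_t toy_tailroom(const struct toy_buf *b)
{
    return (size_t)b->size - toy_headroom(b) - b->len;
}

/* Maximum possible len, given the headroom currently reserved. */
static uint16_t toy_max_len(const struct toy_buf *b)
{
    return (uint16_t)(b->size - toy_headroom(b));
}
```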
static inline void net_buf_simple_save(struct net_buf_simple *buf, struct net_buf_simple_state
*state)
Save the parsing state of a buffer.
Saves the parsing state of a buffer so it can be restored later.
Parameters
• buf – Buffer from which the state should be saved.
• state – Storage for the state.
static inline void net_buf_simple_restore(struct net_buf_simple *buf, struct
net_buf_simple_state *state)
Restore the parsing state of a buffer.
Restores the parsing state of a buffer from a state previously stored by net_buf_simple_save().
Parameters
• buf – Buffer to which the state should be restored.
• state – Stored state.
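The save/restore pair can be modeled by storing the data offset and length, which is all that is needed to rebuild the data pointer later; a sketch with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Toy buffer: buf_start is the backing storage, data/len track parsing. */
struct toy_buf {
    uint8_t *buf_start;
    uint8_t *data;
    uint16_t len;
};

/* Mirrors net_buf_simple_state: offset from storage start, plus length. */
struct toy_state {
    uint16_t offset;
    uint16_t len;
};

static void toy_save(const struct toy_buf *b, struct toy_state *s)
{
    s->offset = (uint16_t)(b->data - b->buf_start);
    s->len = b->len;
}

static void toy_restore(struct toy_buf *b, const struct toy_state *s)
{
    b->data = b->buf_start + s->offset;
    b->len = s->len;
}
```

Storing an offset rather than a raw pointer keeps the state valid even if it is copied around independently of the buffer header.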
struct net_buf_pool *net_buf_pool_get(int id)
Looks up a pool based on its ID.
Parameters
• id – Pool ID (e.g. from buf->pool_id).
Returns
Pointer to pool.
int net_buf_id(struct net_buf *buf)
Get a zero-based index for a buffer.
This function will translate a buffer into a zero-based index, based on its placement in its
buffer pool. This can be useful if you want to associate an external array of meta-data contexts
with the buffers of a pool.
Parameters
• buf – Network buffer.
Returns
Zero-based index for the buffer.
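The zero-based index idea boils down to pointer arithmetic against the pool's buffer array; a minimal sketch with an external metadata array (all names are illustrative):

```c
#include <assert.h>

/* Toy buffer and a toy pool of four of them. */
struct toy_net_buf {
    unsigned char ref;
};

#define TOY_POOL_COUNT 4

static struct toy_net_buf toy_pool[TOY_POOL_COUNT];

/* External per-buffer metadata, indexed in lockstep with the pool. */
static int toy_meta[TOY_POOL_COUNT];

/* Mirrors net_buf_id(): the buffer's position in its pool array
 * is its zero-based index. */
static int toy_buf_id(const struct toy_net_buf *buf)
{
    return (int)(buf - toy_pool);
}
```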
struct net_buf *net_buf_alloc_fixed(struct net_buf_pool *pool, k_timeout_t timeout)
Allocate a new fixed buffer from a pool.
Parameters
• pool – Which pool to allocate the buffer from.
• timeout – Affects the action taken should the pool be empty. If K_NO_WAIT,
then return immediately. If K_FOREVER, then wait as long as necessary. Other-
wise, wait until the specified timeout. Note that some types of data allocators
do not support blocking (such as the HEAP type). In this case it’s still possible
for net_buf_alloc() to fail (return NULL) even if it was given K_FOREVER.
Returns
New buffer or NULL if out of buffers.
static inline struct net_buf *net_buf_alloc(struct net_buf_pool *pool, k_timeout_t timeout)
Allocate a new buffer from a pool.
Parameters
• pool – Which pool to allocate the buffer from.
• timeout – Affects the action taken should the pool be empty. If K_NO_WAIT,
then return immediately. If K_FOREVER, then wait as long as necessary. Other-
wise, wait until the specified timeout. Note that some types of data allocators
do not support blocking (such as the HEAP type). In this case it’s still possible
for net_buf_alloc() to fail (return NULL) even if it was given K_FOREVER.
Returns
New buffer or NULL if out of buffers.
struct net_buf *net_buf_alloc_len(struct net_buf_pool *pool, size_t size, k_timeout_t timeout)
Allocate a new variable length buffer from a pool.
Parameters
• pool – Which pool to allocate the buffer from.
• size – Amount of data the buffer must be able to fit.
• timeout – Affects the action taken should the pool be empty. If K_NO_WAIT,
then return immediately. If K_FOREVER, then wait as long as necessary. Other-
wise, wait until the specified timeout. Note that some types of data allocators
do not support blocking (such as the HEAP type). In this case it’s still possible
for net_buf_alloc() to fail (return NULL) even if it was given K_FOREVER.
Returns
New buffer or NULL if out of buffers.
struct net_buf *net_buf_alloc_with_data(struct net_buf_pool *pool, void *data, size_t size,
k_timeout_t timeout)
Allocate a new buffer from a pool but with external data pointer.
Allocate a new buffer from a pool, where the data pointer comes from the user and not from
the pool.
Parameters
• pool – Which pool to allocate the buffer from.
• data – External data pointer
• size – Amount of data the pointed data buffer is able to fit.
• timeout – Affects the action taken should the pool be empty. If K_NO_WAIT,
then return immediately. If K_FOREVER, then wait as long as necessary. Other-
wise, wait until the specified timeout. Note that some types of data allocators
do not support blocking (such as the HEAP type). In this case it’s still possible
for net_buf_alloc() to fail (return NULL) even if it was given K_FOREVER.
Returns
New buffer or NULL if out of buffers.
struct net_buf_simple
#include <buf.h> Simple network buffer representation.
This is a simpler variant of the net_buf object (in fact net_buf uses net_buf_simple internally).
It doesn’t provide any kind of reference counting, user data, dynamic allocation, or in general
the ability to pass through kernel objects such as FIFOs.
The main use of this is for scenarios where the meta-data of the normal net_buf isn’t needed
and causes too much overhead. This could be e.g. when the buffer only needs to be allocated
on the stack or when the access to and lifetime of the buffer is well controlled and constrained.
Public Members
uint8_t *data
Pointer to the start of data in the buffer.
uint16_t len
Length of the data behind the data pointer.
To determine the max length, use net_buf_simple_max_len(), not size!
uint16_t size
Amount of data that net_buf_simple::__buf can store.
struct net_buf_simple_state
#include <buf.h> Parsing state of a buffer.
This is used for temporarily storing the parsing state of a buffer while giving control of the
parsing to a routine which we don’t control.
Public Members
uint16_t offset
Offset of the data pointer from the beginning of the storage
uint16_t len
Length of data
struct net_buf
#include <buf.h> Network buffer representation.
This struct is used to represent network buffers. Such buffers are normally defined through
the NET_BUF_POOL_*_DEFINE() APIs and allocated using the net_buf_alloc() API.
Public Members
sys_snode_t node
Allow placing the buffer into sys_slist_t
uint8_t ref
Reference count.
uint8_t flags
Bit-field of buffer flags.
uint8_t pool_id
Where the buffer should go when freed up.
uint8_t *data
Pointer to the start of data in the buffer.
uint16_t len
Length of the data behind the data pointer.
uint16_t size
Amount of data that this buffer can store.
uint8_t user_data[]
System metadata for this buffer.
struct net_buf_data_cb
#include <buf.h>
struct net_buf_data_alloc
#include <buf.h>
struct net_buf_pool
#include <buf.h> Network buffer pool representation.
This struct is used to represent a pool of network buffers.
Public Members
uint16_t uninit_count
Number of uninitialized buffers
struct net_buf_pool_fixed
#include <buf.h>
Packet Management
• Overview
– Architectural notes
• Memory management
– Allocation
– Buffer allocation
– Deallocation
• Operations
– Read and Write access
– Data access
• API Reference
Overview Network packets are the main data the networking stack manipulates. Such data is repre-
sented through the net_pkt structure, which provides the means to hold the packet, to write to and
read from it, as well as the metadata the core needs to hold important information. Such an object is
called net_pkt in this document.
The data structure and the whole API around it are defined in include/zephyr/net/net_pkt.h.
Architectural notes There are two network packets flows within the stack, TX for the transmission
path, and RX for the reception one. In both paths, each net_pkt is written and read from the beginning
to the end, or more specifically from the headers to the payload.
Memory management
Allocation All net_pkt objects come from a pre-defined pool of struct net_pkt. Such a pool is defined via
NET_PKT_SLAB_DEFINE(name, count)
Note, however, that one will rarely have to use it, as the core already provides two pools, one for the TX
path and one for the RX path.
Allocating a raw net_pkt can be done through:
pkt = net_pkt_alloc(timeout);
However, by its nature, a raw net_pkt is useless without a buffer, and it also needs various metadata
to become relevant. At a minimum, it requires the network interface it is meant to be sent through,
or through which it was received. As this is a very common operation, a helper exists:
net_pkt_alloc_on_iface().
A more complete allocator, net_pkt_alloc_with_buffer(), exists, where both the net_pkt and its
buffer can be allocated at once.
Buffer allocation The net_pkt object does not define its own buffer, but instead uses an existing object
for this: net_buf . (See Network Buffer for more information). However, it mostly hides the usage of
such a buffer because net_pkt brings network awareness to buffer allocation and, as we will see later, its
operation too.
To allocate a buffer, a net_pkt needs to have at least its network interface set. This works if the family of
the packet is unknown at the time of buffer allocation; the payload size and the protocol are then given
to the buffer allocator. For example, requesting 800 bytes of payload for an IPv4/UDP packet will
successfully allocate 800 + 20 + 8 bytes of buffer for the new net_pkt, where the additional 20 + 8 bytes
account for the IPv4 and UDP headers. If the pool provides fixed-size buffers of 1500 bytes, the same
request will successfully allocate 1500 bytes, of which 20 + 8 bytes (IPv4 + UDP headers) will not be
used for the payload. On the receiving side, when the family and protocol are not known, a protocol
value of 0 can be passed.
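The header-aware size accounting described above can be sketched as a toy calculation (the function and constants are illustrative, not the Zephyr allocator):

```c
#include <assert.h>
#include <stddef.h>

/* IPv4 and UDP header lengths, as used in the 800 + 20 + 8 example. */
#define TOY_IPV4_HDR_LEN 20u
#define TOY_UDP_HDR_LEN  8u

/* The requested payload size is topped up with the protocol header
 * lengths the packet will need. */
static size_t toy_needed_buffer(size_t payload, size_t ip_hdr, size_t proto_hdr)
{
    return payload + ip_hdr + proto_hdr;
}
```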
Deallocation Each net_pkt is reference counted. At allocation, the reference count is set to 1. It can be
incremented with net_pkt_ref() or decremented with net_pkt_unref(). When the count drops to
zero, the buffer is also unreferenced and the net_pkt is automatically placed back into the free
net_pkt slab.
If the net_pkt’s buffer is needed even after the net_pkt’s deallocation, one will need to take one more
reference on the whole net_buf chain before the last net_pkt_unref() call. See Network Buffer for more
information.
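The reference counting life cycle can be sketched as follows; the toy_* names and the released flag are illustrative, not the real net_pkt internals:

```c
#include <assert.h>
#include <stdint.h>

/* Toy packet: a reference count plus a flag standing in for "returned
 * to the slab". */
struct toy_pkt {
    uint8_t ref;
    int released;
};

/* Allocation sets the reference count to 1. */
static void toy_pkt_alloc_init(struct toy_pkt *p)
{
    p->ref = 1;
    p->released = 0;
}

/* Mirrors net_pkt_ref(): one more user of the packet. */
static struct toy_pkt *toy_pkt_ref(struct toy_pkt *p)
{
    p->ref++;
    return p;
}

/* Mirrors net_pkt_unref(): the drop to zero releases the packet.
 * The real code also unreferences the net_buf chain here. */
static void toy_pkt_unref(struct toy_pkt *p)
{
    if (p->ref > 0 && --p->ref == 0) {
        p->released = 1;
    }
}
```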
Operations There are two ways to access the net_pkt buffer, explained in the following sections: basic
read/write access and data access, the latter being the preferred way.
Read and Write access As said earlier, though net_pkt uses net_buf for its buffer, it provides its own API
to access it. Indeed, a network packet might be scattered over a chain of net_buf objects, and the
functions provided by net_buf are too limited for such a case. Instead, net_pkt provides functions which
hide all the complexity of potentially non-contiguous access.
Data movement in the buffer is made through a cursor maintained within each net_pkt. All read/write
operations affect this cursor. Note as well that the read and write functions are strict about their length
parameter: if they cannot read or write the given length, they fail. The length is not interpreted as an
upper limit; it is the exact amount of data that must be read or written.
As there are two paths, TX and RX, there are two access modes: write and overwrite. This might sound
a bit unusual, but is in fact simple and provides flexibility.
In write mode, whatever is written to the buffer affects the length of the actual data present in the buffer.
The buffer length should not be confused with the buffer size, which is a limit no mode can pass. In
overwrite mode, whatever is written must happen on valid data, and it does not affect the buffer
length. By default, a newly allocated net_pkt is in write mode, and its cursor points to the beginning of
its buffer.
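The difference between the two modes can be sketched with a toy cursor model (names and sizes are illustrative, not the real net_pkt code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy packet buffer with a cursor and a mode flag. */
struct toy_pkt {
    uint8_t buf[64];
    size_t len;     /* valid data in the buffer */
    size_t cursor;  /* current read/write position */
    int overwrite;  /* 0 = write mode, 1 = overwrite mode */
};

static int toy_write(struct toy_pkt *p, const void *src, size_t n)
{
    /* In overwrite mode, writes must land on already-valid data;
     * in write mode, only the buffer size limits them. */
    size_t limit = p->overwrite ? p->len : sizeof(p->buf);

    if (p->cursor + n > limit) {
        return -1; /* strict length: all or nothing */
    }
    memcpy(p->buf + p->cursor, src, n);
    p->cursor += n;
    if (!p->overwrite && p->cursor > p->len) {
        p->len = p->cursor; /* write mode grows the data length */
    }
    return 0;
}
```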
Let’s see now, step by step, the functions and how they behave depending on the mode.
When freshly allocated with a buffer of 500 bytes, a net_pkt has 0 length, which means no valid data is
in its buffer. One could verify this by:
len = net_pkt_get_len(pkt);
After writing 8 bytes of data with net_pkt_write(), the buffer length is 8 bytes. There are also various
helpers to write a byte, or a big endian uint16_t or uint32_t:
net_pkt_write_u8(pkt, &foo);
net_pkt_write_be16(pkt, &ba);
net_pkt_write_be32(pkt, &bar);
Logically, net_pkt’s length is now 15. But if we try to read at this point, it will fail: there is nothing to
read at the cursor’s current position. It is possible, while in write mode, to read what has already been
written by resetting the cursor of the net_pkt. For instance:
net_pkt_cursor_init(pkt);
net_pkt_read(pkt, data, 15);
This will reset the cursor of the pkt to the beginning of the buffer and then let you read the 15 bytes
actually present. The cursor then points at the end of the data once more.
To set a large area with the same byte, a memset function is provided:
net_pkt_memset(pkt, 0, 5);
To read back existing data from the start, switch the packet to overwrite mode and reset its cursor:
net_pkt_set_overwrite(pkt, true);
net_pkt_cursor_init(pkt);
Now the same operators can be used, but they will be limited to the existing data in the buffer, i.e. 20 bytes.
If it is necessary to know how much space is available in the net_pkt, call:
net_pkt_available_buffer(pkt);
net_pkt_available_payload_buffer(pkt, proto);
If you want to place the cursor at a known position use the function net_pkt_skip() . For example, to
go after the IP header, use:
net_pkt_cursor_init(pkt);
net_pkt_skip(pkt, net_pkt_ip_header_len(pkt));
Data access Though the API shown previously is rather simple, it always involves copying things to
and from the net_pkt buffer. On many occasions, it is more relevant to access the information stored in
the buffer contiguously, especially with network packets, which embed headers.
These headers are, most of the time, a known, fixed set of bytes. It is then more natural to have a
structure representing a certain type of header. In addition, if it is known that the header appears
in a contiguous area of the buffer, it is far more efficient to cast the actual position in the buffer to
the header type. Whether reading or writing the fields of such a header, accessing it directly saves
memory.
Net pkt comes with a dedicated API for this, built on top of the previously described API. It is able to
handle both contiguous and non-contiguous access transparently.
There are two macros used to define a data access descriptor: NET_PKT_DATA_ACCESS_DEFINE
when it is not possible to tell if the data will be in a contiguous area, and
NET_PKT_DATA_ACCESS_CONTIGUOUS_DEFINE when it is guaranteed the data is in a contiguous
area.
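The difference between the two descriptor kinds comes down to whether the header can be used in place or must be gathered first; a toy sketch (the header type and function names are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy two-byte header; real headers are larger but the idea is the same. */
struct toy_hdr {
    uint8_t ver;
    uint8_t proto;
};

/* Contiguous fast path: no copy, the buffer position *is* the header. */
static const struct toy_hdr *toy_access_contiguous(const uint8_t *pos)
{
    return (const struct toy_hdr *)pos;
}

/* Non-contiguous path: the header bytes are split across two fragments
 * and must be gathered into the descriptor's scratch storage first. */
static const struct toy_hdr *toy_access_gather(const uint8_t *frag_a, size_t len_a,
                                               const uint8_t *frag_b,
                                               struct toy_hdr *scratch)
{
    memcpy(scratch, frag_a, len_a);
    memcpy((uint8_t *)scratch + len_a, frag_b, sizeof(*scratch) - len_a);
    return scratch;
}
```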
Let’s take the example of IP and UDP. Both IPv4 and IPv6 headers are always found at the beginning of
the packet and are small enough to fit in a net_buf of 128 bytes (for instance, though 64 bytes could be
chosen).
It would be the same for struct net_ipv4_hdr. A UDP header, on the other hand, is likely not to be in
a contiguous area (in IPv6, for instance), so NET_PKT_DATA_ACCESS_DEFINE is used for it instead.
At this point, the cursor of the net_pkt points at the beginning of the requested data. On the RX path,
these headers will be read but not modified, so to proceed further the cursor needs to advance past the
data. There is a function dedicated to this:
net_pkt_acknowledge_data(pkt, &ipv4_access);
On the TX path, however, the header fields have been modified. In such a case:
net_pkt_set_data(pkt, &ipv4_access);
If the data is in a contiguous area, it will advance the cursor accordingly. If not, it will write the data and
the cursor will be updated. Note that net_pkt_set_data() could be used in the RX path as well, but it
is slightly faster to use net_pkt_acknowledge_data(), as the latter does not care about contiguity at all;
it just advances the cursor via net_pkt_skip() directly.
API Reference
group net_pkt
Network packet management library.
Defines
NET_PKT_SLAB_DEFINE(name, count)
Create a net_pkt slab.
A net_pkt slab is used to store meta-information about network packets. It must be coupled
with a data fragment pool (NET_PKT_DATA_POOL_DEFINE) used to store the actual
packet data. The macro can be used by an application to define additional custom per-context
TX packet slabs (see net_context_setup_pools()).
Parameters
• name – Name of the slab.
• count – Number of net_pkt in this slab.
NET_PKT_TX_SLAB_DEFINE(name, count)
NET_PKT_DATA_POOL_DEFINE(name, count)
Create a data fragment net_buf pool.
A net_buf pool is used to store actual data for network packets. It must be coupled with
a net_pkt slab (NET_PKT_SLAB_DEFINE) used to store the packet meta-information.
The macro can be used by an application to define additional custom per-context TX packet
pools (see net_context_setup_pools()).
Parameters
• name – Name of the pool.
• count – Number of net_buf in this pool.
net_pkt_print_frags(pkt)
Print fragment list and the fragment sizes.
Only available if debugging is activated.
Parameters
• pkt – Network pkt.
NET_PKT_DATA_ACCESS_DEFINE(_name, _type)
NET_PKT_DATA_ACCESS_CONTIGUOUS_DEFINE(_name, _type)
Functions
• timeout – Affects the action taken should the net buf pool be empty. If
K_NO_WAIT, then return immediately. If K_FOREVER, then wait as long as
necessary. Otherwise, wait up to the specified time.
Returns
Network buffer if successful, NULL otherwise.
void net_pkt_unref(struct net_pkt *pkt)
Place packet back into the available packets slab.
Releases the packet for other use. This needs to be called by the application after it has finished
with the packet.
Parameters
• pkt – Network packet to release.
struct net_pkt *net_pkt_ref(struct net_pkt *pkt)
Increase the packet ref count.
Marks the packet as still being in use.
Parameters
• pkt – Network packet to ref.
Returns
Network packet if successful, NULL otherwise.
struct net_buf *net_pkt_frag_ref(struct net_buf *frag)
Increase the packet fragment ref count.
Marks the fragment as still being in use.
Parameters
• frag – Network fragment to ref.
Returns
A pointer to the referenced network fragment.
void net_pkt_frag_unref(struct net_buf *frag)
Decrease the packet fragment ref count.
Parameters
• frag – Network fragment to unref.
struct net_buf *net_pkt_frag_del(struct net_pkt *pkt, struct net_buf *parent, struct net_buf
*frag)
Delete existing fragment from a packet.
Parameters
• pkt – Network packet the fragment belongs to.
• parent – Parent fragment of frag, or NULL if none.
• frag – Fragment to delete.
Returns
Pointer to the following fragment, or NULL if it had no further fragments.
void net_pkt_frag_add(struct net_pkt *pkt, struct net_buf *frag)
Add a fragment to a packet at the end of its fragment list.
Parameters
• pkt – Network packet where to add the fragment.
• frag – Fragment to add.
struct net_pkt *net_pkt_alloc(k_timeout_t timeout)
Get a network packet.
Note that no data buffer is allocated.
Parameters
• timeout – Maximum time to wait for an allocation.
Returns
a pointer to a newly allocated net_pkt on success, NULL otherwise.
struct net_pkt *net_pkt_alloc_on_iface(struct net_if *iface, k_timeout_t timeout)
Allocate a network packet for a specific network interface.
Parameters
• iface – The network interface the packet is supposed to go through.
• timeout – Maximum time to wait for an allocation.
Returns
a pointer to a newly allocated net_pkt on success, NULL otherwise.
struct net_pkt *net_pkt_rx_alloc_on_iface(struct net_if *iface, k_timeout_t timeout)
Allocate an RX network packet for a specific network interface.
Parameters
• iface – The network interface the packet is supposed to come through.
• timeout – Maximum time to wait for an allocation.
Returns
a pointer to a newly allocated net_pkt on success, NULL otherwise.
size_t net_pkt_available_buffer(struct net_pkt *pkt)
Get available buffer space from a pkt.
Note: Reserved bytes (headroom) in any of the fragments are not considered to be available.
Parameters
• pkt – The net_pkt which buffer availability should be evaluated
Returns
the amount of buffer available
size_t net_pkt_available_payload_buffer(struct net_pkt *pkt, enum net_ip_protocol proto)
Get available buffer space for payload from a pkt.
Unlike net_pkt_available_buffer(), this takes into account the headers space.
Note: Reserved bytes (headroom) in any of the fragments are not considered to be available.
Parameters
• pkt – The net_pkt which payload buffer availability should be evaluated
• proto – The IP protocol type (can be 0 for none).
Returns
the amount of buffer available for payload
struct net_pkt_cursor
#include <net_pkt.h>
Public Members
uint8_t *pos
Current position in the data buffer of the net_buf
struct net_pkt
#include <net_pkt.h> Network packet.
Note that if you add new fields into net_pkt, remember to update net_pkt_clone() function.
Public Members
intptr_t fifo
The fifo is used by RX/TX threads and by socket layer. The net_pkt is queued via fifo to
the processing thread.
struct net_pkt_data_access
#include <net_pkt.h>
Networking Technologies
Ethernet
• Overview
• API Reference
• Overview
• API Reference
Overview Virtual LAN (VLAN) is a partitioned and isolated computer network at the data link layer
(OSI layer 2). For Ethernet networks, this refers to IEEE 802.1Q.
In Zephyr, each individual VLAN is modeled as a virtual network interface. This means that there is an
Ethernet network interface that corresponds to a real physical Ethernet port in the system, and a virtual
network interface is created for each VLAN; this virtual network interface connects to the real network
interface. This is similar to how Linux implements VLANs: eth0 is the real network interface and
vlan0 is a virtual network interface that runs on top of eth0.
VLAN support must be enabled at compile time by setting option CONFIG_NET_VLAN and
CONFIG_NET_VLAN_COUNT to reflect how many network interfaces there will be in the system. For
example, if there is one network interface without VLAN support, and two with VLAN support, the
CONFIG_NET_VLAN_COUNT option should be set to 3.
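For the example above (one plain interface plus two VLAN-enabled ones), the corresponding prj.conf fragment could look like this (values are illustrative):

```
CONFIG_NET_VLAN=y
CONFIG_NET_VLAN_COUNT=3
```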
Even if VLAN is enabled in a prj.conf file, the VLAN needs to be activated at runtime by the application.
The VLAN API provides a net_eth_vlan_enable() function to do that. The application needs to give
the network interface and desired VLAN tag as a parameter to that function. The VLAN tagging for a
given network interface can be disabled by a net_eth_vlan_disable() function. The application needs
to configure the VLAN network interface itself, such as setting the IP address, etc.
See also the VLAN sample application for API usage example. The source code for that sample application
can be found at samples/net/vlan.
The net-shell module contains net vlan add and net vlan del commands that can be used to enable or
disable VLAN tags for a given network interface.
See the IEEE 802.1Q spec for more information about ethernet VLANs.
API Reference
group vlan_api
VLAN definitions and helpers.
Defines
NET_VLAN_TAG_UNSPEC
Unspecified VLAN tag value
Functions
• Overview
• API Reference
Overview The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by
network devices for advertising their identity, capabilities, and neighbors on a wired Ethernet network.
For more information, see this LLDP Wikipedia article.
API Reference
group lldp
LLDP definitions and helpers.
Defines
net_lldp_set_lldpdu(iface)
Set LLDP protocol data unit (LLDPDU) for the network interface.
Parameters
• iface – Network interface
Returns
< 0 on error, or the index in the LLDP array if iface is found there
net_lldp_unset_lldpdu(iface)
Unset LLDP protocol data unit (LLDPDU) for the network interface.
Parameters
• iface – Network interface
Typedefs
Enums
enum net_lldp_tlv_type
TLV Types. Please refer to table 8-1 from IEEE 802.1AB standard.
Values:
enumerator LLDP_TLV_END_LLDPDU = 0
End Of LLDPDU (optional)
enumerator LLDP_TLV_CHASSIS_ID = 1
Chassis ID (mandatory)
enumerator LLDP_TLV_PORT_ID = 2
Port ID (mandatory)
enumerator LLDP_TLV_TTL = 3
Time To Live (mandatory)
enumerator LLDP_TLV_PORT_DESC = 4
Port Description (optional)
enumerator LLDP_TLV_SYSTEM_NAME = 5
System Name (optional)
enumerator LLDP_TLV_SYSTEM_DESC = 6
System Description (optional)
enumerator LLDP_TLV_SYSTEM_CAPABILITIES = 7
System Capability (optional)
enumerator LLDP_TLV_MANAGEMENT_ADDR = 8
Management Address (optional)
Functions
struct net_lldp_chassis_tlv
#include <lldp.h> Chassis ID TLV, see chapter 8.5.2 in IEEE 802.1AB
Public Members
uint16_t type_length
7 bits for type, 9 bits for length
uint8_t subtype
ID subtype
uint8_t value[NET_LLDP_CHASSIS_ID_VALUE_LEN]
Chassis ID value
struct net_lldp_port_tlv
#include <lldp.h> Port ID TLV, see chapter 8.5.3 in IEEE 802.1AB
Public Members
uint16_t type_length
7 bits for type, 9 bits for length
uint8_t subtype
ID subtype
uint8_t value[NET_LLDP_PORT_ID_VALUE_LEN]
Port ID value
struct net_lldp_time_to_live_tlv
#include <lldp.h> Time To Live TLV, see chapter 8.5.4 in IEEE 802.1AB
Public Members
uint16_t type_length
7 bits for type, 9 bits for length
uint16_t ttl
Time To Live (TTL) value
struct net_lldpdu
#include <lldp.h> LLDP Data Unit (LLDPDU) shall contain the following ordered TLVs as
stated in “8.2 LLDPDU format” from the IEEE 802.1AB
Public Members
IEEE 802.1Qav
Enabling 802.1Qav To enable the 802.1Qav shaper, the Ethernet device driver must declare that it sup-
ports credit-based shaping: the Ethernet driver’s capability function must return the ETHERNET_QAV value
for this purpose. Typically, the priority queues capability (ETHERNET_PRIORITY_QUEUES) also needs to
be supported.
Configuring 802.1Qav The application can configure the credit-based shaper like this:
#include <zephyr/net/net_if.h>
#include <zephyr/net/ethernet.h>
#include <zephyr/net/ethernet_mgmt.h>
struct ethernet_req_params params;
int ret;
memset(&params, 0, sizeof(params));
params.qav_param.queue_id = queue_id;
params.qav_param.enabled = enable;
params.qav_param.type = ETHERNET_QAV_PARAM_TYPE_STATUS;
ret = net_mgmt(NET_REQUEST_ETHERNET_SET_QAV_PARAM,
iface, &params,
sizeof(struct ethernet_req_params));
if (ret) {
LOG_ERR("Cannot set Qav status for queue %d for interface %p",
queue_id, iface);
}
memset(&params, 0, sizeof(params));
params.qav_param.queue_id = queue_id;
params.qav_param.delta_bandwidth = bandwidth;
params.qav_param.type = ETHERNET_QAV_PARAM_TYPE_DELTA_BANDWIDTH;
ret = net_mgmt(NET_REQUEST_ETHERNET_SET_QAV_PARAM,
iface, &params,
sizeof(struct ethernet_req_params));
if (ret) {
LOG_ERR("Cannot set Qav delta bandwidth %u for "
"queue %d for interface %p",
bandwidth, queue_id, iface);
}
memset(&params, 0, sizeof(params));
params.qav_param.queue_id = queue_id;
params.qav_param.idle_slope = idle_slope;
params.qav_param.type = ETHERNET_QAV_PARAM_TYPE_IDLE_SLOPE;
ret = net_mgmt(NET_REQUEST_ETHERNET_SET_QAV_PARAM,
iface, &params,
sizeof(struct ethernet_req_params));
if (ret) {
LOG_ERR("Cannot set Qav idle slope %u for "
"queue %d for interface %p",
idle_slope, queue_id, iface);
}
Overview Ethernet is a networking technology commonly used in local area networks (LAN). For more
information, see this Ethernet Wikipedia article.
Zephyr supports the following Ethernet features:
• 10, 100 and 1000 Mbit/sec links
• Auto negotiation
• Half/full duplex
• Promiscuous mode
• TX and RX checksum offloading
• MAC address filtering
• Virtual LANs
• Priority queues
API Reference
group ethernet
Ethernet support functions.
Defines
ETH_NET_DEVICE_DT_INST_DEFINE(inst, ...)
Like ETH_NET_DEVICE_DT_DEFINE for an instance of a DT_DRV_COMPAT compatible.
Parameters
• inst – instance number. This is replaced by DT_DRV_COMPAT(inst) in the call
to ETH_NET_DEVICE_DT_DEFINE.
• ... – other parameters as expected by ETH_NET_DEVICE_DT_DEFINE.
Enums
enum ethernet_hw_caps
Ethernet hardware capabilities
Values:
enum ethernet_flags
Values:
enumerator ETH_CARRIER_UP
Functions
struct ethernet_qav_param
#include <ethernet.h>
Public Members
int queue_id
ID of the priority queue to use
bool enabled
True if Qav is enabled for queue
struct ethernet_qbv_param
#include <ethernet.h>
Public Members
int port_id
Port id
bool enabled
True if Qbv is enabled for the port
bool gate_status[NET_TC_TX_COUNT]
True = open, False = closed
uint32_t time_interval
Time interval ticks (nanoseconds)
uint16_t row
Gate control list row
uint32_t gate_control_list_len
Number of entries in gate control list
uint32_t extension_time
Extension time (nanoseconds)
struct ethernet_qbu_param
#include <ethernet.h>
Public Members
int port_id
Port id
uint32_t hold_advance
Hold advance (nanoseconds)
uint32_t release_advance
Release advance (nanoseconds)
bool enabled
True if Qbu is enabled for the port
bool link_partner_status
Link partner status (from Qbr)
uint8_t additional_fragment_size
Additional fragment size (from Qbr). The minimum non-final fragment size is (additional_fragment_size + 1) * 64 octets
struct ethernet_filter
#include <ethernet.h>
Public Members
bool set
Set (true) or unset (false) the filter
struct ethernet_txtime_param
#include <ethernet.h>
Public Members
int queue_id
Queue number for configuring TXTIME
bool enable_txtime
Enable or disable TXTIME per queue
struct ethernet_api
#include <ethernet.h>
Public Members
int (*set_config)(const struct device *dev, enum ethernet_config_type type, const struct
ethernet_config *config)
Set specific hardware configuration
struct ethernet_context
#include <ethernet.h> Ethernet L2 context that is needed for VLAN
Public Members
atomic_t flags
Flags representing ethernet state, which are accessed from multiple threads.
bool is_net_carrier_up
Is network carrier up
bool is_init
Is this context already initialized
group ethernet_mii
Ethernet MII (media independent interface) functions.
Defines
MII_BMCR
Basic Mode Control Register
MII_BMSR
Basic Mode Status Register
MII_PHYID1R
PHY ID 1 Register
MII_PHYID2R
PHY ID 2 Register
MII_ANAR
Auto-Negotiation Advertisement Register
MII_ANLPAR
Auto-Negotiation Link Partner Ability Reg
MII_ANER
Auto-Negotiation Expansion Register
MII_ANNPTR
Auto-Negotiation Next Page Transmit Register
MII_ANLPRNPR
Auto-Negotiation Link Partner Received Next Page Reg
MII_1KTCR
1000BASE-T Control Register
MII_1KSTSR
1000BASE-T Status Register
MII_MMD_ACR
MMD Access Control Register
MII_MMD_AADR
MMD Access Address Data Register
MII_ESTAT
Extended Status Register
MII_BMCR_RESET
PHY reset
MII_BMCR_LOOPBACK
enable loopback mode
MII_BMCR_SPEED_LSB
10=1000Mbps 01=100Mbps; 00=10Mbps
MII_BMCR_AUTONEG_ENABLE
Auto-Negotiation enable
MII_BMCR_POWER_DOWN
power down mode
MII_BMCR_ISOLATE
isolate electrically PHY from MII
MII_BMCR_AUTONEG_RESTART
restart auto-negotiation
MII_BMCR_DUPLEX_MODE
full duplex mode
MII_BMCR_SPEED_MSB
10=1000Mbps 01=100Mbps; 00=10Mbps
MII_BMCR_SPEED_MASK
Link Speed Field
MII_BMCR_SPEED_10
select speed 10 Mb/s
MII_BMCR_SPEED_100
select speed 100 Mb/s
MII_BMCR_SPEED_1000
select speed 1000 Mb/s
MII_BMSR_100BASE_T4
100BASE-T4 capable
MII_BMSR_100BASE_X_FULL
100BASE-X full duplex capable
MII_BMSR_100BASE_X_HALF
100BASE-X half duplex capable
MII_BMSR_10_FULL
10 Mb/s full duplex capable
MII_BMSR_10_HALF
10 Mb/s half duplex capable
MII_BMSR_100BASE_T2_FULL
100BASE-T2 full duplex capable
MII_BMSR_100BASE_T2_HALF
100BASE-T2 half duplex capable
MII_BMSR_EXTEND_STATUS
extend status information in reg 15
MII_BMSR_MF_PREAMB_SUPPR
PHY accepts management frames with preamble suppressed
MII_BMSR_AUTONEG_COMPLETE
Auto-negotiation process completed
MII_BMSR_REMOTE_FAULT
remote fault detected
MII_BMSR_AUTONEG_ABILITY
PHY is able to perform Auto-Negotiation
MII_BMSR_LINK_STATUS
link is up
MII_BMSR_JABBER_DETECT
jabber condition detected
MII_BMSR_EXTEND_CAPAB
extended register capabilities
MII_ADVERTISE_NEXT_PAGE
next page
MII_ADVERTISE_LPACK
link partner acknowledge response
MII_ADVERTISE_REMOTE_FAULT
remote fault
MII_ADVERTISE_ASYM_PAUSE
try for asymmetric pause
MII_ADVERTISE_PAUSE
try for pause
MII_ADVERTISE_100BASE_T4
try for 100BASE-T4 support
MII_ADVERTISE_100_FULL
try for 100BASE-X full duplex support
MII_ADVERTISE_100_HALF
try for 100BASE-X support
MII_ADVERTISE_10_FULL
try for 10 Mb/s full duplex support
MII_ADVERTISE_10_HALF
try for 10 Mb/s half duplex support
MII_ADVERTISE_SEL_MASK
Selector Field
MII_ADVERTISE_SEL_IEEE_802_3
MII_ADVERTISE_1000_FULL
try for 1000BASE-T full duplex support
MII_ADVERTISE_1000_HALF
try for 1000BASE-T half duplex support
MII_ADVERTISE_ALL
MII_ESTAT_1000BASE_X_FULL
1000BASE-X full-duplex capable
MII_ESTAT_1000BASE_X_HALF
1000BASE-X half-duplex capable
MII_ESTAT_1000BASE_T_FULL
1000BASE-T full-duplex capable
MII_ESTAT_1000BASE_T_HALF
1000BASE-T half-duplex capable
IEEE 802.15.4
• Overview
• API Reference
– IEEE 802.15.4
– IEEE 802.15.4 Management
Overview IEEE 802.15.4 is a technical standard which defines the operation of low-rate wireless personal area networks (LR-WPANs). For a more detailed overview of this standard, see this IEEE 802.15.4 Wikipedia article. Also, see the IEEE GET Program for creating an IEEE account and downloading the specification.
Zephyr supports IEEE 802.15.4 with Thread and 6LoWPAN. The Thread implementation is based on
OpenThread. The IPv6 header compression in 6LoWPAN is shared with the Bluetooth IPSP (IP support
profile).
API Reference
IEEE 802.15.4
group ieee802154
IEEE 802.15.4 library.
Defines
IEEE802154_MAX_PHY_PACKET_SIZE
IEEE802154_FCS_LENGTH
IEEE802154_MTU
IEEE802154_SHORT_ADDR_LENGTH
IEEE802154_EXT_ADDR_LENGTH
IEEE802154_MAX_ADDR_LENGTH
IEEE802154_NO_CHANNEL
IEEE802154_BROADCAST_ADDRESS
IEEE802154_NO_SHORT_ADDRESS_ASSIGNED
IEEE802154_BROADCAST_PAN_ID
IEEE802154_SHORT_ADDRESS_NOT_ASSOCIATED
IEEE802154_L2_CTX_TYPE
IEEE802154_AR_FLAG_SET
Typedefs
Enums
enum ieee802154_channel
IEEE 802.15.4 Channel assignments.
Channel numbering for 868 MHz, 915 MHz, and 2450 MHz bands (channel page zero).
enumerator IEEE802154_SUB_GHZ_CHANNEL_MIN = 0
enumerator IEEE802154_SUB_GHZ_CHANNEL_MAX = 10
enumerator IEEE802154_2_4_GHZ_CHANNEL_MIN = 11
enumerator IEEE802154_2_4_GHZ_CHANNEL_MAX = 26
enum ieee802154_hw_caps
Values:
enum ieee802154_filter_type
Values:
enumerator IEEE802154_FILTER_TYPE_IEEE_ADDR
enumerator IEEE802154_FILTER_TYPE_SHORT_ADDR
enumerator IEEE802154_FILTER_TYPE_PAN_ID
enumerator IEEE802154_FILTER_TYPE_SRC_IEEE_ADDR
enumerator IEEE802154_FILTER_TYPE_SRC_SHORT_ADDR
enum ieee802154_event
Values:
enumerator IEEE802154_EVENT_TX_STARTED
enumerator IEEE802154_EVENT_RX_FAILED
enumerator IEEE802154_EVENT_SLEEP
enum ieee802154_rx_fail_reason
Values:
enumerator IEEE802154_RX_FAIL_NOT_RECEIVED
enumerator IEEE802154_RX_FAIL_INVALID_FCS
enumerator IEEE802154_RX_FAIL_ADDR_FILTERED
enumerator IEEE802154_RX_FAIL_OTHER
enum ieee802154_tx_mode
IEEE802.15.4 Transmission mode.
Values:
enumerator IEEE802154_TX_MODE_DIRECT
Transmit packet immediately, no CCA.
enumerator IEEE802154_TX_MODE_CCA
Perform CCA before packet transmission.
enumerator IEEE802154_TX_MODE_CSMA_CA
Perform full CSMA CA procedure before packet transmission.
enumerator IEEE802154_TX_MODE_TXTIME
Transmit packet in the future, at specified time, no CCA.
enumerator IEEE802154_TX_MODE_TXTIME_CCA
Transmit packet in the future, perform CCA before transmission.
enum ieee802154_fpb_mode
IEEE802.15.4 Frame Pending Bit table address matching mode.
Values:
enumerator IEEE802154_FPB_ADDR_MATCH_THREAD
The pending bit shall be set only for addresses found in the list.
enumerator IEEE802154_FPB_ADDR_MATCH_ZIGBEE
The pending bit shall be cleared for short addresses found in the list.
enum ieee802154_config_type
IEEE802.15.4 driver configuration types.
Values:
enumerator IEEE802154_CONFIG_AUTO_ACK_FPB
Indicates how the radio driver should set the Frame Pending bit in ACK responses for Data Requests. If enabled, the radio driver should determine whether to set the bit based on the information provided with the IEEE802154_CONFIG_ACK_FPB config and the FPB address matching mode specified. Otherwise, the Frame Pending bit should be set to 1 (see IEEE Std 802.15.4-2006, 7.2.2.3.1).
enumerator IEEE802154_CONFIG_ACK_FPB
Indicates whether to set ACK Frame Pending bit for specific address or not. Disabling
the Frame Pending bit with no address provided (NULL pointer) should disable it for all
enabled addresses.
enumerator IEEE802154_CONFIG_PAN_COORDINATOR
Indicates whether the device is a PAN coordinator.
enumerator IEEE802154_CONFIG_PROMISCUOUS
Enable/disable promiscuous mode.
enumerator IEEE802154_CONFIG_EVENT_HANDLER
Specifies new radio event handler. Specifying NULL as a handler will disable radio events
notification.
enumerator IEEE802154_CONFIG_MAC_KEYS
Updates MAC keys and key index for radios supporting transmit security.
enumerator IEEE802154_CONFIG_FRAME_COUNTER
Sets the current MAC frame counter value for radios supporting transmit security.
enumerator IEEE802154_CONFIG_FRAME_COUNTER_IF_LARGER
Sets the current MAC frame counter value if the provided value is greater than the current
one.
enumerator IEEE802154_CONFIG_RX_SLOT
Configure a radio reception slot. This can be used for any scheduled reception, e.g. Zigbee GP device, CSL, TSCH, etc.
enumerator IEEE802154_CONFIG_CSL_PERIOD
Configure CSL receiver (Endpoint) period
In order to configure a CSL receiver, the upper layer should combine several configuration options in the following way:
1. Use IEEE802154_CONFIG_ENH_ACK_HEADER_IE once to inform the radio driver of the peer addresses for which CSL IEs should be injected.
2. Use IEEE802154_CONFIG_CSL_RX_TIME periodically so that the driver can calculate the CSL Phase to the nearest CSL window to inject in the CSL IEs for both transmitted data and ack frames.
3. Use IEEE802154_CONFIG_CSL_PERIOD on each value change to update the current CSL period value, which will be injected in the CSL IEs together with the CSL Phase based on IEEE802154_CONFIG_CSL_RX_TIME.
4. Use IEEE802154_CONFIG_RX_SLOT periodically to schedule the immediate receive window early enough before the expected window start time, taking into account possible clock drift.
A timing diagram in the source header illustrates the usage of these four options over time.
enumerator IEEE802154_CONFIG_CSL_RX_TIME
Configure the next CSL receive window center, in units of microseconds, based on the
radio time.
enumerator IEEE802154_CONFIG_ENH_ACK_HEADER_IE
Indicates whether to inject IE into ENH ACK Frame for specific address or not. Disabling
the ENH ACK with no address provided (NULL pointer) should disable it for all enabled
addresses.
Functions
struct ieee802154_security_ctx
#include <ieee802154.h>
struct ieee802154_context
#include <ieee802154.h>
struct ieee802154_filter
#include <ieee802154_radio.h>
struct ieee802154_key
#include <ieee802154_radio.h>
struct ieee802154_config
#include <ieee802154_radio.h> IEEE802.15.4 driver configuration data.
Public Members
bool pan_coordinator
IEEE802154_CONFIG_PAN_COORDINATOR
bool promiscuous
IEEE802154_CONFIG_PROMISCUOUS
ieee802154_event_cb_t event_handler
IEEE802154_CONFIG_EVENT_HANDLER
uint32_t frame_counter
IEEE802154_CONFIG_FRAME_COUNTER
uint32_t csl_period
IEEE802154_CONFIG_CSL_PERIOD
uint32_t csl_rx_time
IEEE802154_CONFIG_CSL_RX_TIME
struct ieee802154_radio_api
#include <ieee802154_radio.h> IEEE 802.15.4 radio interface API.
Public Members
int (*filter)(const struct device *dev, bool set, enum ieee802154_filter_type type, const
struct ieee802154_filter *filter)
Set/Unset filters (for IEEE802154_HW_FILTER )
int (*tx)(const struct device *dev, enum ieee802154_tx_mode mode, struct net_pkt *pkt,
struct net_buf *frag)
Transmit a packet fragment
int (*configure)(const struct device *dev, enum ieee802154_config_type type, const struct
ieee802154_config *config)
group ieee802154_mgmt
IEEE 802.15.4 net management library.
Defines
NET_REQUEST_IEEE802154_SET_ACK
NET_REQUEST_IEEE802154_UNSET_ACK
NET_REQUEST_IEEE802154_PASSIVE_SCAN
NET_REQUEST_IEEE802154_ACTIVE_SCAN
NET_REQUEST_IEEE802154_CANCEL_SCAN
NET_REQUEST_IEEE802154_ASSOCIATE
NET_REQUEST_IEEE802154_DISASSOCIATE
NET_REQUEST_IEEE802154_SET_CHANNEL
NET_REQUEST_IEEE802154_GET_CHANNEL
NET_REQUEST_IEEE802154_SET_PAN_ID
NET_REQUEST_IEEE802154_GET_PAN_ID
NET_REQUEST_IEEE802154_SET_EXT_ADDR
NET_REQUEST_IEEE802154_GET_EXT_ADDR
NET_REQUEST_IEEE802154_SET_SHORT_ADDR
NET_REQUEST_IEEE802154_GET_SHORT_ADDR
NET_REQUEST_IEEE802154_GET_TX_POWER
NET_REQUEST_IEEE802154_SET_TX_POWER
NET_EVENT_IEEE802154_SCAN_RESULT
IEEE802154_IS_CHAN_SCANNED(_channel_set, _chan)
IEEE802154_IS_CHAN_UNSCANNED(_channel_set, _chan)
IEEE802154_ALL_CHANNELS
Enums
enum net_request_ieee802154_cmd
Values:
enumerator NET_REQUEST_IEEE802154_CMD_SET_ACK = 1
enumerator NET_REQUEST_IEEE802154_CMD_UNSET_ACK
enumerator NET_REQUEST_IEEE802154_CMD_PASSIVE_SCAN
enumerator NET_REQUEST_IEEE802154_CMD_ACTIVE_SCAN
enumerator NET_REQUEST_IEEE802154_CMD_CANCEL_SCAN
enumerator NET_REQUEST_IEEE802154_CMD_ASSOCIATE
enumerator NET_REQUEST_IEEE802154_CMD_DISASSOCIATE
enumerator NET_REQUEST_IEEE802154_CMD_SET_CHANNEL
enumerator NET_REQUEST_IEEE802154_CMD_GET_CHANNEL
enumerator NET_REQUEST_IEEE802154_CMD_SET_PAN_ID
enumerator NET_REQUEST_IEEE802154_CMD_GET_PAN_ID
enumerator NET_REQUEST_IEEE802154_CMD_SET_EXT_ADDR
enumerator NET_REQUEST_IEEE802154_CMD_GET_EXT_ADDR
enumerator NET_REQUEST_IEEE802154_CMD_SET_SHORT_ADDR
enumerator NET_REQUEST_IEEE802154_CMD_GET_SHORT_ADDR
enumerator NET_REQUEST_IEEE802154_CMD_GET_TX_POWER
enumerator NET_REQUEST_IEEE802154_CMD_SET_TX_POWER
enumerator NET_REQUEST_IEEE802154_CMD_SET_SECURITY_SETTINGS
enumerator NET_REQUEST_IEEE802154_CMD_GET_SECURITY_SETTINGS
enum net_event_ieee802154_cmd
Values:
enumerator NET_EVENT_IEEE802154_CMD_SCAN_RESULT = 1
struct ieee802154_req_params
#include <ieee802154_mgmt.h> Scanning parameters.
Used both to request a scan and to get its results; see section 8.2.11.2
Public Members
uint32_t channel_set
The set of channels to scan; use the macros above to manage it
uint32_t duration
Duration of scan, per-channel, in milliseconds
uint16_t channel
Current channel in use as a result
uint16_t pan_id
Current pan_id in use as a result
uint8_t len
length of address
uint8_t lqi
Link quality information, between 0 and 255
struct ieee802154_security_params
#include <ieee802154_mgmt.h> Security parameters.
Used to setup the link-layer security settings, see tables 9-9 and 9-10 in section 9.5.
Thread protocol
• Overview
• Internet connectivity
• Sample usage
Overview Thread is a low-power mesh networking technology, designed specifically for home automation applications. It is an IPv6-based standard which uses 6LoWPAN technology over the IEEE 802.15.4 protocol. IP connectivity lets you easily connect a Thread mesh network to the internet with a Thread Border Router.
The Thread specification provides a high level of network security. Mesh networks built with Thread are
secure - only authenticated devices can join the network and all communications within the mesh are
encrypted. More information about the Thread protocol can be found on the Thread Group website.
Zephyr integrates an open source Thread protocol implementation called OpenThread, documented on
the OpenThread website.
Internet connectivity A Thread Border Router is required to connect a mesh network to the internet. An open source implementation of a Thread Border Router is provided by the OpenThread community. See
OpenThread Border Router guide for instructions on how to set up a Border Router.
Sample usage You can try using OpenThread with the Zephyr Echo server and Echo client samples,
which provide out-of-the-box configuration for OpenThread. To enable OpenThread support in these
samples, build them with overlay-ot.conf overlay config file. See sockets-echo-server-sample and
sockets-echo-client-sample for details.
• Overview
• Testing
Overview Point-to-Point Protocol (PPP) is a data link layer (layer 2) communications protocol used to
establish a direct connection between two nodes. PPP is used over many types of serial links since IP
packets cannot be transmitted over a modem line on their own, without some data link protocol.
In Zephyr, each individual PPP link is modelled as a network interface. This is similar to how Linux
implements PPP.
PPP support must be enabled at compile time by setting the CONFIG_NET_PPP and CONFIG_NET_L2_PPP options. The PPP support in Zephyr 2.0 is still experimental, and the implementation supports only a limited set of PPP protocols.
Testing See the net-tools README file for more details on how to test the Zephyr PPP against pppd
running in Linux.
Protocols
CoAP
• Overview
• Sample Usage
– CoAP Server
– CoAP Client
• Testing
– libcoap
– TTCN3
• API Reference
Overview The Constrained Application Protocol (CoAP) is a specialized web transfer protocol for use
with constrained nodes and constrained (e.g., low-power, lossy) networks. It provides a convenient API
for RESTful Web services that support CoAP’s features. For more information about the protocol itself,
see IETF RFC7252 The Constrained Application Protocol.
Zephyr provides a CoAP library which supports client and server roles. The library is configurable as per
user needs. The Zephyr CoAP library is implemented using plain buffers. Users of the API create sockets
for communication and pass the buffer to the library for parsing and other purposes. The library itself
doesn’t create any sockets for users.
On top of CoAP, Zephyr has support for LWM2M “Lightweight Machine 2 Machine” protocol, a simple,
low-cost remote management and service enablement mechanism. See Lightweight M2M (LWM2M) for
more information.
Supported RFCs:
• RFC7252: The Constrained Application Protocol (CoAP)
• RFC6690: Constrained RESTful Environments (CoRE) Link Format
• RFC7959: Block-Wise Transfers in the Constrained Application Protocol (CoAP)
• RFC7641: Observing Resources in the Constrained Application Protocol (CoAP)
Note: Not all parts of these RFCs are supported. Features are supported based on Zephyr requirements.
Sample Usage
CoAP Server To create a CoAP server, resources for the server need to be defined. The .well-known/
core resource should be added before all other resources that should be included in the responses of the
.well-known/core resource.
An application reads data from the socket and passes the buffer to the CoAP library to parse the message. If the CoAP message is valid, the library uses the buffer, along with the resources defined above, to call the correct callback function to handle the CoAP request from the client. It is the callback function’s responsibility to either reply or act according to the CoAP request.
If CONFIG_COAP_URI_WILDCARD is enabled, the server may accept multiple resources using MQTT-like wildcards:
• the plus symbol represents a single-level wild card in the path;
• the hash symbol represents the multi-level wild card in the path.
It accepts /led/0/set, led/1234/set, led/any/set, /button/door/1, /test/+1, but returns -ENOENT for /led/1, /test/21, /test/1.
This option is enabled by default. Disable it to avoid unexpected behaviour with resource paths like ‘/some_resource/+/#’.
CoAP Client If the CoAP client knows about resources in the CoAP server, the client can start preparing CoAP requests and waiting for responses. If the client doesn’t know about resources in the CoAP server, it can request them through the .well-known/core CoAP message.
/* Append options */
coap_packet_append_option(&request, COAP_OPTION_URI_PATH,
path, strlen(path));
/* Append payload */
coap_packet_append_payload(&request, (uint8_t *)payload,
sizeof(payload) - 1);
libcoap libcoap implements a lightweight application protocol for devices that are resource-constrained, such as by computing power, RF range, memory, bandwidth, or network packet sizes. Sources can be found in the libcoap repository. libcoap has a script (examples/etsi_coaptest.sh) to test coap-server functionality in Zephyr.
See the net-tools project for more details.
The coap-server-sample sample can be built and executed on QEMU as described in Networking with
QEMU.
Use this command on the host to run the libcoap implementation of the ETSI test cases:
TTCN3 Eclipse has TTCN3 based tests to run against CoAP implementations.
Install eclipse-titan and set symbolic links for titan tools
cd /usr/share/titan
export TTCN3_DIR=/usr/share/titan
cd titan.misc
After the build is complete, the coap-server-sample sample can be built and executed on QEMU as described in Networking with QEMU.
Change the client (test suite) and server (Zephyr coap-server sample) addresses in coap.cfg file as per
your setup.
Execute the test cases with the following command.
Verdict statistics: 0 none (0.00 %), 10 pass (100.00 %), 0 inconc (0.00 %), 0 fail (0.00 %), 0 error (0.00 %).
Test execution summary: 10 test cases were executed. Overall verdict: pass
API Reference
group coap
COAP library.
Defines
COAP_REQUEST_MASK
COAP_VERSION_1
coap_make_response_code(class, det)
COAP_CODE_EMPTY
COAP_TOKEN_MAX_LEN
GET_BLOCK_NUM(v)
GET_BLOCK_SIZE(v)
GET_MORE(v)
COAP_WELL_KNOWN_CORE_PATH
This resource should be added before all other resources that should be included in the responses of the .well-known/core resource.
Typedefs
typedef int (*coap_reply_t)(const struct coap_packet *response, struct coap_reply *reply, const
struct sockaddr *from)
Helper function to be called when a response matches a pending request.
Enums
enum coap_option_num
Set of CoAP packet options we are aware of.
Users may add options other than these to their packets, provided they know how to format
them correctly. The only restriction is that all options must be added to a packet in numeric
order.
Refer to RFC 7252, section 12.2 for more information.
Values:
enumerator COAP_OPTION_IF_MATCH = 1
enumerator COAP_OPTION_URI_HOST = 3
enumerator COAP_OPTION_ETAG = 4
enumerator COAP_OPTION_IF_NONE_MATCH = 5
enumerator COAP_OPTION_OBSERVE = 6
enumerator COAP_OPTION_URI_PORT = 7
enumerator COAP_OPTION_LOCATION_PATH = 8
enumerator COAP_OPTION_URI_PATH = 11
enumerator COAP_OPTION_CONTENT_FORMAT = 12
enumerator COAP_OPTION_MAX_AGE = 14
enumerator COAP_OPTION_URI_QUERY = 15
enumerator COAP_OPTION_ACCEPT = 17
enumerator COAP_OPTION_LOCATION_QUERY = 20
enumerator COAP_OPTION_BLOCK2 = 23
enumerator COAP_OPTION_BLOCK1 = 27
enumerator COAP_OPTION_SIZE2 = 28
enumerator COAP_OPTION_PROXY_URI = 35
enumerator COAP_OPTION_PROXY_SCHEME = 39
enumerator COAP_OPTION_SIZE1 = 60
enum coap_method
Available request methods.
To be used when creating a request or a response.
Values:
enumerator COAP_METHOD_GET = 1
enumerator COAP_METHOD_POST = 2
enumerator COAP_METHOD_PUT = 3
enumerator COAP_METHOD_DELETE = 4
enumerator COAP_METHOD_FETCH = 5
enumerator COAP_METHOD_PATCH = 6
enumerator COAP_METHOD_IPATCH = 7
enum coap_msgtype
CoAP packets may be of one of these types.
Values:
enumerator COAP_TYPE_CON = 0
Confirmable message.
The packet is a request or response the destination end-point must acknowledge.
enumerator COAP_TYPE_NON_CON = 1
Non-confirmable message.
The packet is a request or response that doesn’t require acknowledgements.
enumerator COAP_TYPE_ACK = 2
Acknowledge.
Response to a confirmable message.
enumerator COAP_TYPE_RESET = 3
Reset.
Rejecting a packet for any reason is done by sending a message of this type.
enum coap_response_code
Set of response codes available for a response packet.
To be used when creating a response.
Values:
enum coap_content_format
Set of Content-Format option values for CoAP.
To be used when encoding or decoding a Content-Format option.
Values:
enumerator COAP_CONTENT_FORMAT_TEXT_PLAIN = 0
enumerator COAP_CONTENT_FORMAT_APP_LINK_FORMAT = 40
enumerator COAP_CONTENT_FORMAT_APP_XML = 41
enumerator COAP_CONTENT_FORMAT_APP_OCTET_STREAM = 42
enumerator COAP_CONTENT_FORMAT_APP_EXI = 47
enumerator COAP_CONTENT_FORMAT_APP_JSON = 50
enumerator COAP_CONTENT_FORMAT_APP_JSON_PATCH_JSON = 51
enumerator COAP_CONTENT_FORMAT_APP_MERGE_PATCH_JSON = 52
enumerator COAP_CONTENT_FORMAT_APP_CBOR = 60
enum coap_block_size
Represents the size of each block that will be transferred using block-wise transfers
[RFC7959]:
Each entry maps directly to the value that is used on the wire.
https://tools.ietf.org/html/rfc7959
Values:
enumerator COAP_BLOCK_16
enumerator COAP_BLOCK_32
enumerator COAP_BLOCK_64
enumerator COAP_BLOCK_128
enumerator COAP_BLOCK_256
enumerator COAP_BLOCK_512
enumerator COAP_BLOCK_1024
Functions
int coap_ack_init(struct coap_packet *cpkt, const struct coap_packet *req, uint8_t *data,
uint16_t max_len, uint8_t code)
Create a new CoAP Acknowledgment message for given request.
This function works like coap_packet_init, filling CoAP header type, CoAP header token, and
CoAP header message id fields according to acknowledgment rules.
Parameters
• cpkt – New packet to be initialized using the storage from data.
• req – CoAP request packet that is being acknowledged
• data – Data that will contain a CoAP packet information
• max_len – Maximum allowable length of data
• code – CoAP header code
Returns
0 in case of success or negative in case of error.
uint8_t *coap_next_token(void)
Returns a randomly generated array of 8 bytes, that can be used as a message’s token.
Returns
an 8-byte pseudo-random token.
uint16_t coap_next_id(void)
Helper to generate message ids.
Returns
a new message id
int coap_find_options(const struct coap_packet *cpkt, uint16_t code, struct coap_option
*options, uint16_t veclen)
Return the values associated with the option of value code.
Parameters
• cpkt – CoAP packet representation
• code – Option number to look for
• options – Array of coap_option where to store the value of the options found
• veclen – Number of elements in the options array
Returns
The number of options found in packet matching code, negative on error.
int coap_packet_append_option(struct coap_packet *cpkt, uint16_t code, const uint8_t *value,
uint16_t len)
Appends an option to the packet.
Note: options must be added in numeric order of their codes; otherwise an error will be returned.
TODO: Add support for placing options according to their delta value.
Parameters
• cpkt – Packet to be updated
• code – Option code to add to the packet, see coap_option_num
• value – Pointer to the value of the option, will be copied to the packet
• len – Size of the data to be added
Returns
0 in case of success or negative in case of error.
unsigned int coap_option_value_to_int(const struct coap_option *option)
Converts an option to its integer representation.
Assumes that the number is encoded in the network byte order in the option.
Parameters
• option – Pointer to the option value, retrieved by coap_find_options()
Returns
The integer representation of the option
int coap_append_option_int(struct coap_packet *cpkt, uint16_t code, unsigned int val)
Appends an integer value option to the packet.
Options must be added in numeric order of their codes, and the least number of bytes will be used to encode the value.
Parameters
• cpkt – Packet to be updated
• code – Option code to add to the packet, see coap_option_num
• val – Integer value to be added
Returns
0 in case of success or negative in case of error.
int coap_packet_append_payload_marker(struct coap_packet *cpkt)
Append payload marker to CoAP packet.
Parameters
• cpkt – Packet to append the payload marker (0xFF)
Returns
0 in case of success or negative in case of error.
int coap_packet_append_payload(struct coap_packet *cpkt, const uint8_t *payload, uint16_t
payload_len)
Append payload to CoAP packet.
Parameters
• cpkt – Packet to append the payload
• payload – CoAP packet payload
• payload_len – CoAP packet payload len
Returns
0 in case of success or negative in case of error.
int coap_handle_request(struct coap_packet *cpkt, struct coap_resource *resources, struct
coap_option *options, uint8_t opt_num, struct sockaddr *addr,
socklen_t addr_len)
When a request is received, call the appropriate methods of the matching resources.
Parameters
• cpkt – Packet received
• resources – Array of known resources
• options – Parsed options from coap_packet_parse()
• opt_num – Number of options
• addr – Peer address
• addr_len – Peer address length
Returns
0 in case of success or negative in case of error.
struct coap_pending *coap_pending_received(const struct coap_packet *response, struct
coap_pending *pendings, size_t len)
After a response is received, returns whether any matching pending request exists. The user has
to clear all pending retransmissions related to that response by calling coap_pending_clear().
Parameters
• response – The received response
• pendings – Pointer to the array of coap_pending structures
• len – Size of the array of coap_pending structures
Returns
pointer to the associated coap_pending structure, NULL in case none could be
found.
struct coap_reply *coap_response_received(const struct coap_packet *response, const struct
sockaddr *from, struct coap_reply *replies, size_t
len)
After a response is received, call coap_reply_t handler registered in coap_reply structure.
Parameters
• response – A response received
• from – Address from which the response was received
• replies – Pointer to the array of coap_reply structures
• len – Size of the array of coap_reply structures
Returns
Pointer to the reply matching the packet received, NULL if none could be found.
struct coap_pending *coap_pending_next_to_expire(struct coap_pending *pendings, size_t len)
Returns the next pending transmission about to expire; pending->timeout informs how many milliseconds remain until the next expiration.
Parameters
• pendings – Pointer to the array of coap_pending structures
• len – Size of the array of coap_pending structures
Returns
The next coap_pending to expire, NULL if none is about to expire.
bool coap_pending_cycle(struct coap_pending *pending)
After a request is sent, the user may want to cycle the pending retransmission so that the timeout is
updated.
Parameters
• pending – Pending representation to have its timeout updated
Returns
false if this is the last retransmission.
void coap_pending_clear(struct coap_pending *pending)
Cancels the pending retransmission, so it becomes available again.
Parameters
• pending – Pending representation to be canceled
struct coap_resource
#include <coap.h> Description of CoAP resource.
CoAP servers often want to register resources, so that clients can act on them, by fetching
their state or requesting updates to them.
Public Members
coap_method_t get
Which function to be called for each CoAP method
struct coap_observer
#include <coap.h> Represents a remote device that is observing a local resource.
struct coap_packet
#include <coap.h> Representation of a CoAP Packet.
struct coap_option
#include <coap.h>
struct coap_pending
#include <coap.h> Represents a request awaiting an acknowledgment (ACK).
struct coap_reply
#include <coap.h> Represents the handler for the reply of a request, it is also used when
observing resources.
struct coap_block_context
#include <coap.h> Represents the current state of a block-wise transaction.
struct coap_core_metadata
#include <coap_link_format.h> In case you want to add attributes to the resources included
in the ‘well-known/core’ “virtual” resource, the ‘user_data’ field should point to a valid
coap_core_metadata structure.
CoAP client
• Overview
• Sample Usage
• API Reference
Overview The CoAP client library allows applications to send CoAP requests and parse CoAP responses.
The application is notified about the response via a callback that is provided to the API in the request.
The CoAP client handles the communication over sockets. As the CoAP client does not create the socket
it uses, the application is responsible for creating it. Plain UDP or DTLS sockets are supported.
Sample Usage The following is an example of a CoAP client initialization and request sending:
coap_client_init(&client, NULL);
req.method = COAP_METHOD_GET;
req.confirmable = true;
req.path = "test";
req.fmt = COAP_CONTENT_FORMAT_TEXT_PLAIN;
req.cb = response_cb;
req.payload = NULL;
req.len = 0;
/* Sock is a file descriptor referencing a socket, address is the sockaddr
 * struct for the destination address of the request
 */
ret = coap_client_req(&client, sock, &address, &req, -1);
Before any requests can be sent, the CoAP client needs to be initialized. After initialization, the appli-
cation can send a CoAP request and wait for the response. Currently only one request can be sent for a
single CoAP client at a time. There can be multiple CoAP clients.
The callback provided in the request will be called in the following cases:
• There is a response for the request
• The request failed for some reason
The callback contains a last_block flag, which indicates whether more data is expected in the response;
if more data is coming, the current response is part of a blockwise transfer. When last_block is set to
true, the response is finished and the client is ready for the next request after returning from the callback.
If the server responds to the request, the library provides the response to the application through the
response callback registered in the request structure. As the response can be a blockwise transfer and
the client calls the callback once per block, the application should be prepared to process all of the blocks
in order to handle the complete response.
The following is an example of a very simple response handling function:
void response_cb(int16_t code, size_t offset, const uint8_t *payload, size_t len,
		 bool last_block, void *user_data)
{
	if (code >= 0) {
		LOG_INF("CoAP response from server %d", code);
		if (last_block) {
			LOG_INF("Last packet received");
		}
	} else {
		LOG_ERR("Error in sending request %d", code);
	}
}
API Reference
group coap_client
CoAP client API.
Defines
MAX_COAP_MSG_LEN
Typedefs
coap_client_response_cb_t
Callback for responses to CoAP client requests. In a blockwise transfer, the callback is
called sequentially with an increasing payload offset and only partial content in the buffer
pointed to by the payload parameter.
Param result_code
Result code of the response; negative if there was a failure in sending,
a coap_response_code value otherwise.
Param offset
Payload offset from the beginning of a blockwise transfer.
Param payload
Buffer containing the payload from the response. NULL for empty payload.
Param len
Size of the payload.
Param last_block
Indicates the last block of the response.
Param user_data
User provided context.
Functions
struct coap_client_request
#include <coap_client.h> Representation of a CoAP client request.
Public Members
bool confirmable
CoAP Confirmable/Non-confirmable message
uint8_t *payload
User allocated buffer for send request
size_t len
Length of the payload
coap_client_response_cb_t cb
Callback when response received
uint8_t num_options
Number of extra options
void *user_data
User provided context
struct coap_client_option
#include <coap_client.h> Representation of extra options for the CoAP client request.
HTTP client
• Overview
• Sample Usage
• API Reference
Overview The HTTP client library allows you to send HTTP requests and parse HTTP responses. The
library communicates over the sockets API but it does not create sockets on its own.
The application is responsible for creating a socket and passing it to the library. Therefore, depending
on the application’s needs, the library can communicate over either a plain TCP socket (HTTP) or a TLS
socket (HTTPS).
Sample Usage The API of the HTTP client library has a single function.
The following is an example of a request structure created correctly:
req.method = HTTP_GET;
req.url = "/";
req.host = "localhost";
req.protocol = "HTTP/1.1";
req.response = response_cb;
req.recv_buf = recv_buf;
req.recv_buf_len = sizeof(recv_buf);
If the server responds to the request, the library provides the response to the application through the
response callback registered in the request structure. As the library can provide the response in chunks,
the application must be able to process these.
Together with the structure containing the response data, the callback function also provides information
about whether the library expects to receive more data.
The following is an example of a very simple response handling function:
See HTTP client sample application for more information about the library usage.
API Reference
group http_client
HTTP client API.
Defines
HTTP_CRLF
HTTP_STATUS_STR_SIZE
Typedefs
Enums
enum http_final_call
Values:
enumerator HTTP_DATA_MORE = 0
enumerator HTTP_DATA_FINAL = 1
Functions
int http_client_req(int sock, struct http_request *req, int32_t timeout, void *user_data)
Do an HTTP request. The callback is called when data is received from the HTTP server. The
caller must have created a connection to the server before calling this function, so the connect()
call must have been made successfully for the socket.
Parameters
• sock – Socket id of the connection.
• req – HTTP request information
• timeout – Max timeout to wait for the data. The timeout value cannot be 0 as
there would be no time to receive the data. The timeout value is in millisec-
onds.
• user_data – User specified data that is passed to the callback.
Returns
<0 if error, >=0 amount of data sent to the server
struct http_response
#include <client.h> HTTP response from the server.
Public Members
http_response_cb_t cb
User provided HTTP response callback which is called when a response is received to a
sent HTTP request.
uint8_t *body_frag_start
Start of the body fragment contained in recv_buf; recv_buf points to the beginning of the
receive buffer, while body_frag_start points inside that buffer to where the body fragment
begins.
size_t body_frag_len
Length of the body fragment contained in the recv_buf
uint8_t *recv_buf
Where the response is stored, this is to be provided by the user.
size_t recv_buf_len
Response buffer maximum length
size_t data_len
Length of the data in the result buf. If the value is larger than recv_buf_len, then it means
that the data is truncated and could not be fully copied into recv_buf. This can only
happen if the user did not set the response callback. If the callback is set, then the HTTP
client API will call response callback many times so that all the data is delivered to the
user. Will be zero in the event of a null response.
size_t content_length
HTTP Content-Length field value. Will be set to zero in the event of a null response.
size_t processed
Amount of data given to the response callback so far, including the current data given to
the callback. This should be equal to the content_length field once the entire body has
been received. Will be zero if a null response is given.
uint16_t http_status_code
Numeric HTTP status code which corresponds to the textual description. Set to zero if
null response is given. Otherwise, will be a 3-digit integer code if valid HTTP response is
given.
struct http_client_internal_data
#include <client.h> HTTP client internal data that the application should not touch
Public Members
void *user_data
User data
int sock
HTTP socket
struct http_request
#include <client.h> HTTP client request. This contains all the data that is needed when doing
an HTTP request.
Public Members
http_response_cb_t response
User supplied callback function to call when response is received.
uint8_t *recv_buf
User supplied buffer where received data is stored
size_t recv_buf_len
Length of the user supplied receive buffer
http_payload_cb_t payload_cb
User supplied callback function to call when the payload needs to be sent. This can be NULL,
in which case the payload field in http_request is used. The idea of this payload callback
is to allow the user to send more data than is practical to store in allocated memory.
size_t payload_len
Payload length is used to calculate Content-Length. Set to 0 for chunked transfers.
http_header_cb_t optional_headers_cb
User supplied callback function to call when optional headers need to be sent. This can
be NULL, in which case the optional_headers field in http_request is used. The idea of
this optional_headers callback is to allow the user to send more HTTP header data than is
practical to store in allocated memory.
Lightweight M2M (LWM2M)
• Overview
• Example LwM2M object and resources: Device
• Sample usage
• Using LwM2M library with DTLS
• Multi-thread usage
• Support for time series data
– Enabling and configuring
– Read and Write operations
– Limitations
• LwM2M engine and application events
• LwM2M shell
• API Reference
Overview Lightweight Machine to Machine (LwM2M) is an application layer protocol designed with
device management, data reporting and device actuation in mind. Based on CoAP/UDP, LwM2M is a
standard defined by the Open Mobile Alliance and suitable for constrained devices by its use of CoAP
packet-size optimization and a simple, stateless flow that supports a REST API.
One of the key differences between LwM2M and CoAP is that an LwM2M client initiates the connection
to an LwM2M server. The server can then use the REST API to manage various interfaces with the client.
LwM2M uses a simple resource model with the core set of objects and resources defined in the specifica-
tion.
Resource definitions
* R=Read, W=Write, E=Execute
The server could query the Manufacturer resource for Device object instance 0 (the default and only
instance) by sending a READ 3/0/0 operation to the client.
The full list of registered objects and resource IDs can be found in the LwM2M registry.
Zephyr’s LwM2M library lives in subsys/net/lib/lwm2m, with a client sample in
samples/net/lwm2m_client. For more information about the provided sample see: lwm2m-client-sample
The sample can be configured to use normal unsecured network sockets or sockets secured via DTLS.
The Zephyr LwM2M library implements the following items:
• engine to process networking events and core functions
• RD client which performs BOOTSTRAP and REGISTRATION functions
• SenML CBOR, SenML JSON, CBOR, TLV, JSON, and plain text formatting functions
• LwM2M Technical Specification Enabler objects such as Security, Server, Device, Firmware Update,
etc.
• Extended IPSO objects such as Light Control, Temperature Sensor, and Timer
By default, the library implements LwM2M specification 1.0.2 and can be set to LwM2M specification
1.1.1 with a Kconfig option.
For more information about LwM2M visit OMA Specworks LwM2M.
Sample usage To use the LwM2M library, start by creating an LwM2M client context lwm2m_ctx struc-
ture:
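In the client sample this is simply a statically allocated structure; a minimal sketch (the variable name client matches the later snippets):

```c
static struct lwm2m_ctx client;
```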
The LwM2M RD client can send events back to the sample. To receive those events, setup a callback
function:
void rd_client_event(struct lwm2m_ctx *client, enum lwm2m_rd_client_event client_event)
{
	switch (client_event) {
	case LWM2M_RD_CLIENT_EVENT_NONE:
		/* do nothing */
		break;

	case LWM2M_RD_CLIENT_EVENT_BOOTSTRAP_REG_FAILURE:
		LOG_DBG("Bootstrap registration failure!");
		break;

	case LWM2M_RD_CLIENT_EVENT_BOOTSTRAP_REG_COMPLETE:
		LOG_DBG("Bootstrap registration complete");
		break;

	case LWM2M_RD_CLIENT_EVENT_BOOTSTRAP_TRANSFER_COMPLETE:
		LOG_DBG("Bootstrap transfer complete");
		break;

	case LWM2M_RD_CLIENT_EVENT_REGISTRATION_FAILURE:
		LOG_DBG("Registration failure!");
		break;

	case LWM2M_RD_CLIENT_EVENT_REGISTRATION_COMPLETE:
		LOG_DBG("Registration complete");
		break;

	case LWM2M_RD_CLIENT_EVENT_REG_TIMEOUT:
		LOG_DBG("Registration timeout!");
		break;

	case LWM2M_RD_CLIENT_EVENT_REG_UPDATE_COMPLETE:
		LOG_DBG("Registration update complete");
		break;

	case LWM2M_RD_CLIENT_EVENT_DEREGISTER_FAILURE:
		LOG_DBG("Deregister failure!");
		break;

	case LWM2M_RD_CLIENT_EVENT_DISCONNECT:
		LOG_DBG("Disconnected");
		break;
	}
}
Next, we assign Security resource values to let the client know where and how to connect, and set
the Manufacturer and Reboot resources in the Device object with some data and the callback we
defined above:
/*
 * Server URL of default Security object = 0/0/0
 * Use leshan.eclipse.org server IP (5.39.83.206) for connection
 */
lwm2m_set_string(&LWM2M_OBJ(0, 0, 0), "coap://5.39.83.206");

/*
 * Security Mode of default Security object = 0/0/2
 * 3 = NoSec mode (no security, beware!)
 */
lwm2m_set_u8(&LWM2M_OBJ(0, 0, 2), 3);

/*
 * Manufacturer resource of Device object = 3/0/0
 * We use the lwm2m_set_res_data() function to set a pointer to the
 * CLIENT_MANUFACTURER string.
 * Note the LWM2M_RES_DATA_FLAG_RO flag which stops the engine from
 * trying to assign a new value to the buffer.
 */
lwm2m_set_res_data(&LWM2M_OBJ(3, 0, 0), CLIENT_MANUFACTURER,
		   sizeof(CLIENT_MANUFACTURER),
		   LWM2M_RES_DATA_FLAG_RO);
Lastly, we start the LwM2M RD client (which in turn starts the LwM2M engine). The second parameter
of lwm2m_rd_client_start() is the client endpoint name. This is important as it needs to be unique
per LwM2M server:
(void)memset(&client, 0x0, sizeof(client));
lwm2m_rd_client_start(&client, "unique-endpoint-name", 0, rd_client_event);
Using LwM2M library with DTLS The Zephyr LwM2M library can be used with DTLS transport for
secure communication by selecting CONFIG_LWM2M_DTLS_SUPPORT. In the client initialization we need to
create a PSK and identity. These need to match the security information loaded onto the LwM2M server.
Normally, the endpoint name is used to lookup the related security information:
/* "000102030405060708090a0b0c0d0e0f" */
static unsigned char client_psk[] = {
	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
};
Next we alter the Security object resources to include DTLS security information. The server URL
should begin with coaps:// to indicate security is required. Assign a 0 value (Pre-shared Key mode) to
the Security Mode resource. Lastly, set the client identity and PSK resources.
Before calling lwm2m_rd_client_start(), assign the tls_tag number where the LwM2M library should
store the DTLS information prior to connection (normally a value of 1 is fine here).
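Putting these steps together, a sketch of the DTLS-related setup (the coaps:// URL, the identity variable names, and the tag value are placeholder assumptions; 0/0/3 and 0/0/5 are the standard Identity and Secret Key resources of the Security object):

```c
/* Server URL must use coaps:// when DTLS is required */
lwm2m_set_string(&LWM2M_OBJ(0, 0, 0), "coaps://my-lwm2m-server:5684");

/* Security Mode of default Security object: 0 = Pre-Shared Key */
lwm2m_set_u8(&LWM2M_OBJ(0, 0, 2), 0);

/* PSK identity and key; these must match the server's records */
lwm2m_set_opaque(&LWM2M_OBJ(0, 0, 3),
		 client_identity, strlen(client_identity));
lwm2m_set_opaque(&LWM2M_OBJ(0, 0, 5),
		 client_psk, sizeof(client_psk));

/* Credential slot where the DTLS information is stored */
client.tls_tag = 1;
```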
Multi-thread usage Writing a value to a resource can be done using functions like lwm2m_set_u8.
When writing to multiple resources, the function lwm2m_registry_lock will ensure that the client halts
until all writing operations are finished:
lwm2m_registry_lock();
lwm2m_set_u32(&LWM2M_OBJ(1, 0, 1), 60);
lwm2m_set_u8(&LWM2M_OBJ(5, 0, 3), 0);
lwm2m_set_f64(&LWM2M_OBJ(3303, 0, 5700), value);
lwm2m_registry_unlock();
This is especially useful if the server is composite-observing the resources being written to. Locking will
then ensure that the client only updates and sends notifications to the server after all operations are
done, resulting in fewer messages in general.
Support for time series data LwM2M version 1.1 adds support for the SenML CBOR and SenML JSON
data formats, which add support for time series data. Time series formats can be used for READ,
NOTIFY and SEND operations. When the data cache is enabled for a resource, each write creates a
timestamped entry in the cache, and the cache content is then returned as the content of a READ,
NOTIFY or SEND operation for that resource.
Data cache is only supported for resources with a fixed data size.
Supported resource types:
• Signed and unsigned 8-64-bit integers
• Float
• Boolean
By default, the LwM2M engine has room for four resources with the cache enabled. The limit can be
increased by changing CONFIG_LWM2M_MAX_CACHED_RESOURCES; this affects the engine’s static memory usage.
The data cache depends on one of the SenML data formats, CONFIG_LWM2M_RW_SENML_CBOR_SUPPORT or
CONFIG_LWM2M_RW_SENML_JSON_SUPPORT, and needs CONFIG_POSIX_CLOCK, so that it can request timestamps
from the system, and CONFIG_RING_BUFFER for the ring buffer.
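Assuming SenML CBOR is the chosen format, the resulting prj.conf fragment might look as follows (the cache size value is an example):

```
CONFIG_LWM2M_RW_SENML_CBOR_SUPPORT=y
CONFIG_POSIX_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_LWM2M_MAX_CACHED_RESOURCES=10
```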
Read and Write operations The full content of the data cache is written into the payload when any
READ, SEND or NOTIFY operation internally reads the content of a given resource. This has the side
effect that any read callbacks registered for that resource are ignored while the cache is enabled. Data
is written into the cache when any of the lwm2m_set_* functions is called. To filter the data entering
the cache, the application may register a validation callback using lwm2m_register_validate_callback() .
Limitations The cache size should be set small enough that its content fits into normal packet sizes.
When the cache is full, new values are dropped.
LwM2M engine and application events The Zephyr LwM2M engine defines events that can be sent
back to the application through callback functions. The engine state machine shows when the events
are spawned. Events depicted in the diagram are listed in the table. The events are prefixed with
LWM2M_RD_CLIENT_EVENT_.
LwM2M shell For testing the client, it is possible to enable Zephyr’s shell and LwM2M-specific com-
mands which support changing the state of the client. The supported operations are reading, writing
and executing resources; client start, stop, pause and resume are also available. The feature is enabled
by selecting CONFIG_LWM2M_SHELL. The shell is meant for testing, so production systems should not enable it.
One scenario for using the shell is executing client-side actions over UART when server-side tests
require them, as not all tests are able to trigger the required actions from the server side.
uart:~$ lwm2m
lwm2m - LwM2M commands
Subcommands:
exec :Execute a resource
exec PATH
API Reference
group lwm2m_api
LwM2M high-level API.
LwM2M high-level interface is defined in this header.
Defines
LWM2M_OBJECT_SECURITY_ID
LwM2M Objects managed by OMA for LwM2M tech specification. Objects in this range have
IDs from 0 to 1023.
LWM2M_OBJECT_SERVER_ID
LWM2M_OBJECT_ACCESS_CONTROL_ID
LWM2M_OBJECT_DEVICE_ID
LWM2M_OBJECT_CONNECTIVITY_MONITORING_ID
LWM2M_OBJECT_FIRMWARE_ID
LWM2M_OBJECT_LOCATION_ID
LWM2M_OBJECT_CONNECTIVITY_STATISTICS_ID
LWM2M_OBJECT_SOFTWARE_MANAGEMENT_ID
LWM2M_OBJECT_PORTFOLIO_ID
LWM2M_OBJECT_BINARYAPPDATACONTAINER_ID
LWM2M_OBJECT_EVENT_LOG_ID
LWM2M_OBJECT_GATEWAY_ID
IPSO_OBJECT_GENERIC_SENSOR_ID
LwM2M Objects produced by 3rd party Standards Development Organizations. Objects in
this range have IDs from 2048 to 10240 Refer to the OMA LightweightM2M (LwM2M)
Object and Resource Registry: http://www.openmobilealliance.org/wp/OMNA/LwM2M/
LwM2MRegistry.html.
IPSO_OBJECT_TEMP_SENSOR_ID
IPSO_OBJECT_HUMIDITY_SENSOR_ID
IPSO_OBJECT_LIGHT_CONTROL_ID
IPSO_OBJECT_ACCELEROMETER_ID
IPSO_OBJECT_VOLTAGE_SENSOR_ID
IPSO_OBJECT_CURRENT_SENSOR_ID
IPSO_OBJECT_PRESSURE_ID
IPSO_OBJECT_BUZZER_ID
IPSO_OBJECT_TIMER_ID
IPSO_OBJECT_ONOFF_SWITCH_ID
IPSO_OBJECT_PUSH_BUTTON_ID
UCIFI_OBJECT_BATTERY_ID
IPSO_OBJECT_FILLING_LEVEL_SENSOR_ID
LWM2M_DEVICE_PWR_SRC_TYPE_DC_POWER
Power source types used for the “Available Power Sources” resource of the LwM2M Device
object.
LWM2M_DEVICE_PWR_SRC_TYPE_BAT_INT
LWM2M_DEVICE_PWR_SRC_TYPE_BAT_EXT
LWM2M_DEVICE_PWR_SRC_TYPE_UNUSED
LWM2M_DEVICE_PWR_SRC_TYPE_PWR_OVER_ETH
LWM2M_DEVICE_PWR_SRC_TYPE_USB
LWM2M_DEVICE_PWR_SRC_TYPE_AC_POWER
LWM2M_DEVICE_PWR_SRC_TYPE_SOLAR
LWM2M_DEVICE_PWR_SRC_TYPE_MAX
LWM2M_DEVICE_ERROR_NONE
Error codes used for the “Error Code” resource of the LwM2M Device object. An LwM2M
client can register one of the following error codes via the lwm2m_device_add_err() function.
LWM2M_DEVICE_ERROR_LOW_POWER
LWM2M_DEVICE_ERROR_EXT_POWER_SUPPLY_OFF
LWM2M_DEVICE_ERROR_GPS_FAILURE
LWM2M_DEVICE_ERROR_LOW_SIGNAL_STRENGTH
LWM2M_DEVICE_ERROR_OUT_OF_MEMORY
LWM2M_DEVICE_ERROR_SMS_FAILURE
LWM2M_DEVICE_ERROR_NETWORK_FAILURE
LWM2M_DEVICE_ERROR_PERIPHERAL_FAILURE
LWM2M_DEVICE_BATTERY_STATUS_NORMAL
Battery status codes used for the “Battery Status” resource (3/0/20) of the LwM2M Device
object. As the battery status changes, an LwM2M client can set one of the following codes via:
lwm2m_engine_set_u8("3/0/20", [battery status])
LWM2M_DEVICE_BATTERY_STATUS_CHARGING
LWM2M_DEVICE_BATTERY_STATUS_CHARGE_COMP
LWM2M_DEVICE_BATTERY_STATUS_DAMAGED
LWM2M_DEVICE_BATTERY_STATUS_LOW
LWM2M_DEVICE_BATTERY_STATUS_NOT_INST
LWM2M_DEVICE_BATTERY_STATUS_UNKNOWN
STATE_IDLE
LWM2M Firmware Update object states.
An LwM2M client or the LwM2M Firmware Update object uses the following codes to represent
the LwM2M Firmware Update state (5/0/3).
STATE_DOWNLOADING
STATE_DOWNLOADED
STATE_UPDATING
RESULT_DEFAULT
LWM2M Firmware Update object result codes.
After processing a firmware update, the client sets the result to one of the following codes
via lwm2m_engine_set_u8("5/0/5", [result code])
RESULT_SUCCESS
RESULT_NO_STORAGE
RESULT_OUT_OF_MEM
RESULT_CONNECTION_LOST
RESULT_INTEGRITY_FAILED
RESULT_UNSUP_FW
RESULT_INVALID_URI
RESULT_UPDATE_FAILED
RESULT_UNSUP_PROTO
LWM2M_OBJLNK_MAX_ID
Maximum value for ObjLnk resource fields.
LWM2M_RES_DATA_READ_ONLY
Resource read-only value bit.
LWM2M_RES_DATA_FLAG_RO
Resource read-only flag.
LWM2M_HAS_RES_FLAG(res, f)
Read resource flags helper macro.
LWM2M_RD_CLIENT_EVENT_REG_UPDATE_FAILURE
Define for old event name keeping backward compatibility.
LWM2M_RD_CLIENT_FLAG_BOOTSTRAP
Run bootstrap procedure in current session.
LWM2M_MAX_PATH_STR_SIZE
LwM2M path maximum length.
Typedefs
Param ctx
[in] LwM2M context generating the event
Param event
[in] LwM2M RD client event code
Return
Callback returns a negative error code (errno.h) indicating reason of failure or 0
for success.
Enums
enum lwm2m_observe_event
Observe callback events.
Values:
enumerator LWM2M_OBSERVE_EVENT_OBSERVER_ADDED
enumerator LWM2M_OBSERVE_EVENT_OBSERVER_REMOVED
enumerator LWM2M_OBSERVE_EVENT_NOTIFY_ACK
enumerator LWM2M_OBSERVE_EVENT_NOTIFY_TIMEOUT
enum lwm2m_rd_client_event
LwM2M RD client events.
LwM2M client events are passed back to the event_cb function in lwm2m_rd_client_start()
Values:
enumerator LWM2M_RD_CLIENT_EVENT_NONE
enumerator LWM2M_RD_CLIENT_EVENT_BOOTSTRAP_REG_FAILURE
enumerator LWM2M_RD_CLIENT_EVENT_BOOTSTRAP_REG_COMPLETE
enumerator LWM2M_RD_CLIENT_EVENT_BOOTSTRAP_TRANSFER_COMPLETE
enumerator LWM2M_RD_CLIENT_EVENT_REGISTRATION_FAILURE
enumerator LWM2M_RD_CLIENT_EVENT_REGISTRATION_COMPLETE
enumerator LWM2M_RD_CLIENT_EVENT_REG_TIMEOUT
enumerator LWM2M_RD_CLIENT_EVENT_REG_UPDATE_COMPLETE
enumerator LWM2M_RD_CLIENT_EVENT_DEREGISTER_FAILURE
enumerator LWM2M_RD_CLIENT_EVENT_DISCONNECT
enumerator LWM2M_RD_CLIENT_EVENT_QUEUE_MODE_RX_OFF
enumerator LWM2M_RD_CLIENT_EVENT_ENGINE_SUSPENDED
enumerator LWM2M_RD_CLIENT_EVENT_NETWORK_ERROR
enumerator LWM2M_RD_CLIENT_EVENT_REG_UPDATE
enum lwm2m_send_status
LwM2M send status.
LwM2M send status values are passed back to the lwm2m_send_cb_t callback function provided to
lwm2m_send_cb()
Values:
enumerator LWM2M_SEND_STATUS_SUCCESS
enumerator LWM2M_SEND_STATUS_FAILURE
enumerator LWM2M_SEND_STATUS_TIMEOUT
Functions
int lwm2m_engine_update_observer_min_period(struct lwm2m_ctx *client_ctx, const char *pathstr, uint32_t period_s)
Change an observer’s pmin value.
Deprecated:
Use lwm2m_update_observer_min_period() instead.
LwM2M clients use this function to modify the pmin attribute for an observation be-
ing made. Example to update the pmin of a temperature sensor value being observed:
lwm2m_engine_update_observer_min_period(client_ctx, "3303/0/5700", 5);
Parameters
• client_ctx – [in] LwM2M context
• pathstr – [in] LwM2M path string “obj/obj-inst/res”
• period_s – [in] Value of pmin to be given (in seconds).
Returns
0 for success or negative in case of error.
int lwm2m_update_observer_min_period(struct lwm2m_ctx *client_ctx, const struct
lwm2m_obj_path *path, uint32_t period_s)
Change an observer’s pmin value.
LwM2M clients use this function to modify the pmin attribute for an observation be-
ing made. Example to update the pmin of a temperature sensor value being observed:
lwm2m_update_observer_min_period(client_ctx, &LWM2M_OBJ(3303, 0, 5700), 5);
Parameters
• client_ctx – [in] LwM2M context
• path – [in] LwM2M path as a struct
• period_s – [in] Value of pmin to be given (in seconds).
Returns
0 for success or negative in case of error.
int lwm2m_engine_update_observer_max_period(struct lwm2m_ctx *client_ctx, const char
*pathstr, uint32_t period_s)
Change an observer’s pmax value.
Deprecated:
Use lwm2m_update_observer_max_period() instead.
LwM2M clients use this function to modify the pmax attribute for an observation be-
ing made. Example to update the pmax of a temperature sensor value being observed:
lwm2m_engine_update_observer_max_period(client_ctx, "3303/0/5700", 5);
Parameters
• client_ctx – [in] LwM2M context
• pathstr – [in] LwM2M path string “obj/obj-inst/res”
• period_s – [in] Value of pmax to be given (in seconds).
Returns
0 for success or negative in case of error.
int lwm2m_update_observer_max_period(struct lwm2m_ctx *client_ctx, const struct
lwm2m_obj_path *path, uint32_t period_s)
Change an observer’s pmax value.
LwM2M clients use this function to modify the pmax attribute for an observation be-
ing made. Example to update the pmax of a temperature sensor value being observed:
lwm2m_update_observer_max_period(client_ctx, &LWM2M_OBJ(3303, 0, 5700), 5);
Parameters
• client_ctx – [in] LwM2M context
• path – [in] LwM2M path as a struct
• period_s – [in] Value of pmax to be given (in seconds).
Returns
0 for success or negative in case of error.
int lwm2m_engine_create_obj_inst(const char *pathstr)
Create an LwM2M object instance.
Deprecated:
Use lwm2m_create_object_inst() instead.
LwM2M clients use this function to create non-default LwM2M object instances. Example to
create the first temperature sensor object instance: lwm2m_engine_create_obj_inst("3303/0");
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst”
Returns
0 for success or negative in case of error.
int lwm2m_create_object_inst(const struct lwm2m_obj_path *path)
Create an LwM2M object instance.
LwM2M clients use this function to create non-default LwM2M object instances. Example to
create the first temperature sensor object instance: lwm2m_create_object_inst(&LWM2M_OBJ(3303, 0));
Parameters
• path – [in] LwM2M path as a struct
Returns
0 for success or negative in case of error.
int lwm2m_engine_delete_obj_inst(const char *pathstr)
Delete an LwM2M object instance.
Deprecated:
Use lwm2m_delete_object_inst() instead.
LwM2M clients use this function to delete LwM2M objects.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst”
Returns
0 for success or negative in case of error.
int lwm2m_delete_object_inst(const struct lwm2m_obj_path *path)
Delete an LwM2M object instance.
LwM2M clients use this function to delete LwM2M objects.
Parameters
• path – [in] LwM2M path as a struct
Returns
0 for success or negative in case of error.
void lwm2m_registry_lock(void)
Locks the registry for this thread.
Use this function before writing to multiple resources. This halts the lwm2m main thread
until all the write-operations are finished.
void lwm2m_registry_unlock(void)
Unlocks the registry previously locked by lwm2m_registry_lock().
int lwm2m_engine_set_opaque(const char *pathstr, const char *data_ptr, uint16_t data_len)
Set resource (instance) value (opaque buffer)
Deprecated:
Use lwm2m_set_opaque() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• data_ptr – [in] Data buffer
• data_len – [in] Length of buffer
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_string(const char *pathstr, const char *data_ptr)
Set resource (instance) value (string)
Deprecated:
Use lwm2m_set_string() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• data_ptr – [in] NULL terminated char buffer
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_u8(const char *pathstr, uint8_t value)
Set resource (instance) value (u8)
Deprecated:
Use lwm2m_set_u8() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• value – [in] u8 value
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_u16(const char *pathstr, uint16_t value)
Set resource (instance) value (u16)
Deprecated:
Use lwm2m_set_u16() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• value – [in] u16 value
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_u32(const char *pathstr, uint32_t value)
Set resource (instance) value (u32)
Deprecated:
Use lwm2m_set_u32() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• value – [in] u32 value
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_u64(const char *pathstr, uint64_t value)
Set resource (instance) value (u64)
Deprecated:
Use lwm2m_set_u64() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• value – [in] u64 value
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_s8(const char *pathstr, int8_t value)
Set resource (instance) value (s8)
Deprecated:
Use lwm2m_set_s8() instead.
Parameters
• pathstr – [in] LwM2M path string “obj/obj-inst/res(/res-inst)”
• value – [in] s8 value
Returns
0 for success or negative in case of error.
int lwm2m_engine_set_s16(const char *pathstr, int16_t value)
Set resource (instance) value (s16)
Deprecated:
Use lwm2m_set_s16() instead.
Parameters
• pathstr – [in] LwM2M