KICS 1.5.1
Scanned paths: .
Platforms: Dockerfile, Common, Kubernetes
Start time: 22:06:01, Apr 23 2022
End time: 22:06:06, Apr 23 2022

Vulnerabilities:

29 HIGH
142 MEDIUM
91 LOW
3 INFO
265 TOTAL

Missing User Instruction

Platform: Dockerfile Category: Build Process
A user should be specified in the Dockerfile, otherwise the image will run as root.
https://docs.docker.com/engine/reference/builder/#user
Results (15)
File: infrastructure/cache-store/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM redis:6-alpine
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/hunger-check/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM ubuntu:18.04
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/internal-api/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM node:alpine
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/k8s-goat-home/Dockerfile Line 18
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
17
18 FROM nginx:alpine
19 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
File: infrastructure/info-app/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM python:alpine
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/hidden-in-layers/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM alpine:latest
2
3 LABEL MAINTAINER "Madhu Akula" INFO="Kubernetes Goat"
File: infrastructure/users-repos/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM python:alpine
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/health-check/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM golang:buster
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/helm-tiller/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM debian:stable
2 LABEL MAINTAINER "Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/k8s-goat-home/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM alpine as build
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/build-code/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM alpine:latest
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/poor-registry/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM registry:2
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/system-monitor/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM ubuntu:18.04
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/batch-check/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM alpine:latest
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/metadata-db/Dockerfile Line 1
Expected: The 'Dockerfile' contains the 'USER' instruction Found: The 'Dockerfile' does not contain any 'USER' instruction
1 FROM golang:alpine
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
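Remediation sketch for these findings: create a dedicated unprivileged account in the image and switch to it with `USER` (the user and group names below are illustrative, not taken from the scanned Dockerfiles):

```dockerfile
FROM redis:6-alpine
LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"

# Create an unprivileged system user and group (names are illustrative)
RUN addgroup -S app && adduser -S -G app app

# All subsequent instructions and the container process run as this user
USER app
```

With `USER` set, the container no longer runs as root by default, which also satisfies the related Kubernetes `runAsNonRoot` checks.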

Passwords And Secrets - Generic API Key

Platform: Common Category: Secret Management
Query to find passwords and secrets in infrastructure code.
https://kics.io/
Results (2)
File: scenarios/hunger-check/deployment.yaml Line 53
Expected: Hardcoded secret key should not appear in source Found: ' k8swebhookapikey: azhzLWdvYXQtZGZjZjYzMDUzOTU1M2VjZjk1ODZmZGZkYTE5NjhmZWM=' contains a secret
52 data:
53 k8swebhookapikey: azhzLWdvYXQtZGZjZjYzMDUzOTU1M2VjZjk1ODZmZGZkYTE5NjhmZWM=
54 ---
File: scenarios/hunger-check/deployment.yaml Line 44
Expected: Hardcoded secret key should not appear in source Found: ' k8svaultapikey: azhzLWdvYXQtODUwNTc4NDZhODA0NmEyNWIzNWYzOGYzYTI2NDlkY2U=' contains a secret
43 data:
44 k8svaultapikey: azhzLWdvYXQtODUwNTc4NDZhODA0NmEyNWIzNWYzOGYzYTI2NDlkY2U=
45 ---
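One possible remediation, sketched below: keep the value out of source control and reference a Secret created out-of-band (via `kubectl create secret generic` or an external secret manager). The Secret name and env var name here are hypothetical:

```yaml
# The Secret "goat-vault-credentials" is created out-of-band, not committed
apiVersion: v1
kind: Pod
metadata:
  name: hunger-check
spec:
  containers:
    - name: hunger-check
      image: madhuakula/k8s-goat-hunger-check
      env:
        - name: K8S_VAULT_API_KEY        # hypothetical variable name
          valueFrom:
            secretKeyRef:
              name: goat-vault-credentials   # hypothetical Secret name
              key: k8svaultapikey
```

Note that base64 is an encoding, not encryption; a base64 value committed to a repository is effectively plaintext.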

Privilege Escalation Allowed

Platform: Kubernetes Category: Insecure Configurations
Containers should not run with allowPrivilegeEscalation, in order to prevent them from gaining more privileges than their parent process.
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Results (4)
File: scenarios/docker-bench-security/deployment.yaml Line 44
Expected: spec.template.spec.containers[docker-bench].securityContext.allowPrivilegeEscalation is set Found: spec.template.spec.containers[docker-bench].securityContext.allowPrivilegeEscalation is undefined
43 memory: 80Mi
44 securityContext:
45 privileged: true
File: scenarios/health-check/deployment.yaml Line 24
Expected: spec.template.spec.containers[health-check].securityContext.allowPrivilegeEscalation is set Found: spec.template.spec.containers[health-check].securityContext.allowPrivilegeEscalation is undefined
23 # Custom Stuff
24 securityContext:
25 privileged: true
File: scenarios/health-check/deployment-kind.yaml Line 24
Expected: spec.template.spec.containers[health-check].securityContext.allowPrivilegeEscalation is set Found: spec.template.spec.containers[health-check].securityContext.allowPrivilegeEscalation is undefined
23 # Custom Stuff
24 securityContext:
25 privileged: true
File: scenarios/system-monitor/deployment.yaml Line 38
Expected: spec.template.spec.containers[system-monitor].securityContext.allowPrivilegeEscalation is false Found: spec.template.spec.containers[system-monitor].securityContext.allowPrivilegeEscalation is true
37 securityContext:
38 allowPrivilegeEscalation: true
39 privileged: true
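A minimal container-level fix, sketched for the `system-monitor` case (the same shape applies to the other three findings, assuming the workloads do not actually need privileged access):

```yaml
containers:
  - name: system-monitor
    image: madhuakula/k8s-goat-system-monitor
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false   # set explicitly; do not leave undefined
```

Setting `allowPrivilegeEscalation: false` prevents `setuid` binaries and file capabilities from granting the process more privileges than its parent.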

Shared Host IPC Namespace

Platform: Kubernetes Category: Insecure Configurations
Container should not share the host IPC namespace.
https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Results (2)
File: scenarios/docker-bench-security/deployment.yaml Line 28
Expected: 'spec.template.spec.hostIPC' is false or undefined Found: 'spec.template.spec.hostIPC' is true
27 hostPID: true
28 hostIPC: true
29 hostNetwork: true
File: scenarios/system-monitor/deployment.yaml Line 24
Expected: 'spec.template.spec.hostIPC' is false or undefined Found: 'spec.template.spec.hostIPC' is true
23 hostPID: true
24 hostIPC: true
25 hostNetwork: true

Shared Host Network Namespace

Platform: Kubernetes Category: Insecure Configurations
Container should not share the host network namespace.
https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Results (2)
File: scenarios/docker-bench-security/deployment.yaml Line 29
Expected: 'spec.template.spec.hostNetwork' is false or undefined Found: 'spec.template.spec.hostNetwork' is true
28 hostIPC: true
29 hostNetwork: true
30 securityContext:
File: scenarios/system-monitor/deployment.yaml Line 25
Expected: 'spec.template.spec.hostNetwork' is false or undefined Found: 'spec.template.spec.hostNetwork' is true
24 hostIPC: true
25 hostNetwork: true
26 volumes:

Shared Host PID Namespace

Platform: Kubernetes Category: Insecure Configurations
Container should not share the host process ID namespace.
https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Results (4)
File: scenarios/kube-bench-security/node-job.yaml Line 9
Expected: 'spec.template.spec.hostPID' is false or undefined Found: 'spec.template.spec.hostPID' is true
8 spec:
9 hostPID: true
10 containers:
File: scenarios/system-monitor/deployment.yaml Line 23
Expected: 'spec.template.spec.hostPID' is false or undefined Found: 'spec.template.spec.hostPID' is true
22 spec:
23 hostPID: true
24 hostIPC: true
File: scenarios/docker-bench-security/deployment.yaml Line 27
Expected: 'spec.template.spec.hostPID' is false or undefined Found: 'spec.template.spec.hostPID' is true
26 spec:
27 hostPID: true
28 hostIPC: true
File: scenarios/kube-bench-security/master-job.yaml Line 9
Expected: 'spec.template.spec.hostPID' is false or undefined Found: 'spec.template.spec.hostPID' is true
8 spec:
9 hostPID: true
10 nodeSelector:
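The hostPID, hostIPC, and hostNetwork findings above share one remediation pattern: these fields default to false, so simply omit them (or set them explicitly, as sketched below) unless the workload genuinely needs host-level visibility:

```yaml
spec:
  # All three default to false; stating them explicitly documents intent
  hostPID: false
  hostIPC: false
  hostNetwork: false
  containers:
    - name: system-monitor
      image: madhuakula/k8s-goat-system-monitor
```

The bench-security jobs here intentionally need host access for auditing; for such cases, restricting them to a dedicated namespace with Pod Security admission controls is a common mitigation.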

Apt Get Install Pin Version Not Defined

Platform: Dockerfile Category: Supply-Chain
When installing a package, its version should be pinned.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Results (4)
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: Package 'wget' has version defined Found: Package 'wget' does not have version defined
3
4 RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: Package 'htop' has version defined Found: Package 'htop' does not have version defined
3
4 RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: Package 'libcap2-bin' has version defined Found: Package 'libcap2-bin' does not have version defined
3
4 RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: Package 'curl' has version defined Found: Package 'curl' does not have version defined
3
4 RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
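A pinned-version sketch for this RUN instruction. The version strings below are illustrative placeholders, not the actual versions published for ubuntu:18.04; `apt-cache madison <pkg>` shows the real candidates:

```dockerfile
# Pin each package to an exact version (versions shown are illustrative)
RUN apt-get update && apt-get install -y \
        htop=2.1.0-3 \
        libcap2-bin=1:2.25-1.2 \
        curl=7.58.0-2ubuntu3 \
        wget=1.19.4-1ubuntu2 \
    && rm -rf /var/lib/apt/lists/*
```

Pinning makes builds reproducible, at the cost of having to bump versions deliberately when the repository rotates old packages out.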

CPU Limits Not Set

Platform: Kubernetes Category: Resource Management
CPU limits should be set because if the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests.
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Results (6)
File: scenarios/kube-bench-security/master-job.yaml Line 17
Expected: spec.template.spec.containers.name=kube-bench has resources defined Found: spec.template.spec.containers.name=kube-bench doesn't have resources defined
16 containers:
17 - name: kube-bench
18 image: aquasec/kube-bench:latest
File: scenarios/cache-store/deployment.yaml Line 36
Expected: spec.template.spec.containers.name=cache-store has resources defined Found: spec.template.spec.containers.name=cache-store doesn't have resources defined
35 containers:
36 - name: cache-store
37 image: madhuakula/k8s-goat-cache-store
File: scenarios/batch-check/job.yaml Line 11
Expected: spec.template.spec.containers.name=batch-check has resources defined Found: spec.template.spec.containers.name=batch-check doesn't have resources defined
10 containers:
11 - name: batch-check
12 image: madhuakula/k8s-goat-batch-check
File: scenarios/kube-bench-security/node-job.yaml Line 11
Expected: spec.template.spec.containers.name=kube-bench has resources defined Found: spec.template.spec.containers.name=kube-bench doesn't have resources defined
10 containers:
11 - name: kube-bench
12 image: aquasec/kube-bench:latest
File: scenarios/hidden-in-layers/deployment.yaml Line 11
Expected: spec.template.spec.containers.name=hidden-in-layers has resources defined Found: spec.template.spec.containers.name=hidden-in-layers doesn't have resources defined
10 containers:
11 - name: hidden-in-layers
12 image: madhuakula/k8s-goat-hidden-in-layers
File: scenarios/hunger-check/deployment.yaml Line 71
Expected: spec.template.spec.containers.name=hunger-check has resources defined Found: spec.template.spec.containers.name=hunger-check doesn't have resources defined
70 containers:
71 - name: hunger-check
72 image: madhuakula/k8s-goat-hunger-check
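A sketch of the missing `resources` block for one of the flagged containers (the values are illustrative; appropriate limits depend on the workload):

```yaml
containers:
  - name: cache-store
    image: madhuakula/k8s-goat-cache-store
    resources:
      limits:
        cpu: "100m"       # illustrative values, tune per workload
        memory: "128Mi"
```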

CPU Requests Not Set

Platform: Kubernetes Category: Resource Management
CPU requests should be set to ensure the sum of the resource requests of the scheduled Containers is less than the capacity of the node.
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Results (12)
File: scenarios/batch-check/job.yaml Line 11
Expected: spec.template.spec.containers.name=batch-check does have resources defined Found: spec.template.spec.containers.name=batch-check doesn't have resources defined
10 containers:
11 - name: batch-check
12 image: madhuakula/k8s-goat-batch-check
File: scenarios/cache-store/deployment.yaml Line 36
Expected: spec.template.spec.containers.name=cache-store does have resources defined Found: spec.template.spec.containers.name=cache-store doesn't have resources defined
35 containers:
36 - name: cache-store
37 image: madhuakula/k8s-goat-cache-store
File: scenarios/kube-bench-security/master-job.yaml Line 17
Expected: spec.template.spec.containers.name=kube-bench does have resources defined Found: spec.template.spec.containers.name=kube-bench doesn't have resources defined
16 containers:
17 - name: kube-bench
18 image: aquasec/kube-bench:latest
File: scenarios/kubernetes-goat-home/deployment.yaml Line 17
Expected: spec.template.spec.containers.name=kubernetes-goat-home.resources does have requests defined Found: spec.template.spec.containers.name=kubernetes-goat-home.resources doesn't have requests defined
16 image: madhuakula/k8s-goat-home
17 resources:
18 limits:
File: scenarios/health-check/deployment.yaml Line 17
Expected: spec.template.spec.containers.name=health-check.resources does have requests defined Found: spec.template.spec.containers.name=health-check.resources doesn't have requests defined
16 image: madhuakula/k8s-goat-health-check
17 resources:
18 limits:
File: scenarios/hunger-check/deployment.yaml Line 71
Expected: spec.template.spec.containers.name=hunger-check does have resources defined Found: spec.template.spec.containers.name=hunger-check doesn't have resources defined
70 containers:
71 - name: hunger-check
72 image: madhuakula/k8s-goat-hunger-check
File: scenarios/kube-bench-security/node-job.yaml Line 11
Expected: spec.template.spec.containers.name=kube-bench does have resources defined Found: spec.template.spec.containers.name=kube-bench doesn't have resources defined
10 containers:
11 - name: kube-bench
12 image: aquasec/kube-bench:latest
File: scenarios/health-check/deployment-kind.yaml Line 17
Expected: spec.template.spec.containers.name=health-check.resources does have requests defined Found: spec.template.spec.containers.name=health-check.resources doesn't have requests defined
16 image: madhuakula/k8s-goat-health-check
17 resources:
18 limits:
File: scenarios/hidden-in-layers/deployment.yaml Line 11
Expected: spec.template.spec.containers.name=hidden-in-layers does have resources defined Found: spec.template.spec.containers.name=hidden-in-layers doesn't have resources defined
10 containers:
11 - name: hidden-in-layers
12 image: madhuakula/k8s-goat-hidden-in-layers
File: scenarios/build-code/deployment.yaml Line 17
Expected: spec.template.spec.containers.name=build-code.resources does have requests defined Found: spec.template.spec.containers.name=build-code.resources doesn't have requests defined
16 image: madhuakula/k8s-goat-build-code
17 resources:
18 limits:
File: scenarios/poor-registry/deployment.yaml Line 17
Expected: spec.template.spec.containers.name=poor-registry.resources does have requests defined Found: spec.template.spec.containers.name=poor-registry.resources doesn't have requests defined
16 image: madhuakula/k8s-goat-poor-registry
17 resources:
18 limits:
File: scenarios/system-monitor/deployment.yaml Line 33
Expected: spec.template.spec.containers.name=system-monitor.resources does have requests defined Found: spec.template.spec.containers.name=system-monitor.resources doesn't have requests defined
32 image: madhuakula/k8s-goat-system-monitor
33 resources:
34 limits:
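Besides adding `requests` to each container spec, a namespace-wide LimitRange can inject defaults for containers that omit them, which would clear most of the findings above at once. A sketch, with illustrative values:

```yaml
# Containers created in this namespace without explicit CPU requests/limits
# receive these defaults at admission time
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu       # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "50m"        # default request injected when none is set
      default:
        cpu: "200m"       # default limit injected when none is set
```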

Container Running With Low UID

Platform: Kubernetes Category: Best Practices
Check if containers are running with a low UID, which might cause conflicts with the host's user table.
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Results (5)
File: scenarios/system-monitor/deployment.yaml Line 37
Expected: spec.template.spec.containers.securityContext.runAsUser should be defined Found: spec.template.spec.containers.securityContext.runAsUser is undefined
36 cpu: "20m"
37 securityContext:
38 allowPrivilegeEscalation: true
File: scenarios/docker-bench-security/deployment.yaml Line 31
Expected: spec.template.spec.securityContext.runAsUser should not be a low UID Found: spec.template.spec.securityContext.runAsUser is a low UID
30 securityContext:
31 runAsUser: 0
32 containers:
File: scenarios/docker-bench-security/deployment.yaml Line 44
Expected: spec.template.spec.containers.securityContext.runAsUser should be defined Found: spec.template.spec.containers.securityContext.runAsUser is undefined
43 memory: 80Mi
44 securityContext:
45 privileged: true
File: scenarios/health-check/deployment-kind.yaml Line 24
Expected: spec.template.spec.containers.securityContext.runAsUser should be defined Found: spec.template.spec.containers.securityContext.runAsUser is undefined
23 # Custom Stuff
24 securityContext:
25 privileged: true
File: scenarios/health-check/deployment.yaml Line 24
Expected: spec.template.spec.containers.securityContext.runAsUser should be defined Found: spec.template.spec.containers.securityContext.runAsUser is undefined
23 # Custom Stuff
24 securityContext:
25 privileged: true
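A container-level sketch addressing both variants of this finding (undefined `runAsUser` and `runAsUser: 0`); the UID is illustrative, chosen high enough to avoid colliding with host system accounts:

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 10001    # illustrative; a high UID avoids host user-table clashes
```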

Containers With Added Capabilities

Platform: Kubernetes Category: Insecure Configurations
Results (1)
File: scenarios/docker-bench-security/deployment.yaml Line 47
Expected: spec.template.spec.containers.name={{docker-bench}} does not have added capability Found: spec.template.spec.containers.name={{docker-bench}} has added capability
46 capabilities:
47 add: ["AUDIT_CONTROL"]
48 volumeMounts:
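The usual pattern, sketched below, is to drop all capabilities and add back only what the workload demonstrably needs (assuming `AUDIT_CONTROL` is not actually required here):

```yaml
securityContext:
  capabilities:
    drop: ["ALL"]
    # add back only what is strictly required, e.g.:
    # add: ["NET_BIND_SERVICE"]
```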

Image Version Using 'latest'

Platform: Dockerfile Category: Supply-Chain
When building images, always tag them with useful tags which codify version information, intended destination (prod or test, for instance), stability, or other information that is useful when deploying the application in different environments. Do not rely on the automatically-created latest tag.
https://docs.docker.com/develop/dev-best-practices/
Results (3)
File: infrastructure/batch-check/Dockerfile Line 1
Expected: FROM alpine:'version' where 'version' is not 'latest' Found: 'FROM alpine:latest'
1 FROM alpine:latest
2 LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/hidden-in-layers/Dockerfile Line 1
Expected: FROM alpine:'version' where 'version' is not 'latest' Found: 'FROM alpine:latest'
1 FROM alpine:latest
2
3 LABEL MAINTAINER "Madhu Akula" INFO="Kubernetes Goat"
File: infrastructure/build-code/Dockerfile Line 1
Expected: FROM alpine:'version' where 'version' is not 'latest' Found: 'FROM alpine:latest'
1 FROM alpine:latest
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
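The fix is to pin the base image to a specific tag, or better, to an immutable digest. The tag below is illustrative, and the digest is a placeholder:

```dockerfile
# Pin to a concrete release tag (3.15 is illustrative)
FROM alpine:3.15
LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"

# Immutable alternative: pin by digest
# FROM alpine@sha256:<digest>
```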

Liveness Probe Is Not Defined

Platform: Kubernetes Category: Availability
Results (11)
File: scenarios/internal-proxy/deployment.yaml Line 17
Expected: metadata.name={{internal-proxy-deployment}}.spec.containers.name={{internal-api}}.livenessProbe is defined Found: metadata.name={{internal-proxy-deployment}}.spec.containers.name={{internal-api}}.livenessProbe is undefined
16 containers:
17 - name: internal-api
18 image: madhuakula/k8s-goat-internal-api
File: scenarios/kubernetes-goat-home/deployment.yaml Line 15
Expected: metadata.name={{kubernetes-goat-home-deployment}}.spec.containers.name={{kubernetes-goat-home}}.livenessProbe is defined Found: metadata.name={{kubernetes-goat-home-deployment}}.spec.containers.name={{kubernetes-goat-home}}.livenessProbe is undefined
14 containers:
15 - name: kubernetes-goat-home
16 image: madhuakula/k8s-goat-home
File: scenarios/health-check/deployment-kind.yaml Line 15
Expected: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.livenessProbe is defined Found: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.livenessProbe is undefined
14 containers:
15 - name: health-check
16 image: madhuakula/k8s-goat-health-check
File: scenarios/build-code/deployment.yaml Line 15
Expected: metadata.name={{build-code-deployment}}.spec.containers.name={{build-code}}.livenessProbe is defined Found: metadata.name={{build-code-deployment}}.spec.containers.name={{build-code}}.livenessProbe is undefined
14 containers:
15 - name: build-code
16 image: madhuakula/k8s-goat-build-code
File: scenarios/internal-proxy/deployment.yaml Line 28
Expected: metadata.name={{internal-proxy-deployment}}.spec.containers.name={{info-app}}.livenessProbe is defined Found: metadata.name={{internal-proxy-deployment}}.spec.containers.name={{info-app}}.livenessProbe is undefined
27 - containerPort: 3000
28 - name: info-app
29 image: madhuakula/k8s-goat-info-app
File: scenarios/health-check/deployment.yaml Line 15
Expected: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.livenessProbe is defined Found: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.livenessProbe is undefined
14 containers:
15 - name: health-check
16 image: madhuakula/k8s-goat-health-check
File: scenarios/hunger-check/deployment.yaml Line 71
Expected: metadata.name={{hunger-check-deployment}}.spec.containers.name={{hunger-check}}.livenessProbe is defined Found: metadata.name={{hunger-check-deployment}}.spec.containers.name={{hunger-check}}.livenessProbe is undefined
70 containers:
71 - name: hunger-check
72 image: madhuakula/k8s-goat-hunger-check
File: scenarios/cache-store/deployment.yaml Line 36
Expected: metadata.name={{cache-store-deployment}}.spec.containers.name={{cache-store}}.livenessProbe is defined Found: metadata.name={{cache-store-deployment}}.spec.containers.name={{cache-store}}.livenessProbe is undefined
35 containers:
36 - name: cache-store
37 image: madhuakula/k8s-goat-cache-store
File: scenarios/system-monitor/deployment.yaml Line 31
Expected: metadata.name={{system-monitor-deployment}}.spec.containers.name={{system-monitor}}.livenessProbe is defined Found: metadata.name={{system-monitor-deployment}}.spec.containers.name={{system-monitor}}.livenessProbe is undefined
30 containers:
31 - name: system-monitor
32 image: madhuakula/k8s-goat-system-monitor
File: scenarios/docker-bench-security/deployment.yaml Line 33
Expected: metadata.name={{docker-bench-security}}.spec.containers.name={{docker-bench}}.livenessProbe is defined Found: metadata.name={{docker-bench-security}}.spec.containers.name={{docker-bench}}.livenessProbe is undefined
32 containers:
33 - name: docker-bench
34 image: madhuakula/hacker-container
File: scenarios/poor-registry/deployment.yaml Line 15
Expected: metadata.name={{poor-registry-deployment}}.spec.containers.name={{poor-registry}}.livenessProbe is defined Found: metadata.name={{poor-registry-deployment}}.spec.containers.name={{poor-registry}}.livenessProbe is undefined
14 containers:
15 - name: poor-registry
16 image: madhuakula/k8s-goat-poor-registry
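A liveness probe sketch for one of the flagged containers; the endpoint, port, and timings are illustrative and would need to match what each service actually exposes:

```yaml
containers:
  - name: poor-registry
    image: madhuakula/k8s-goat-poor-registry
    livenessProbe:
      httpGet:
        path: /             # illustrative endpoint
        port: 5000          # illustrative port
      initialDelaySeconds: 10
      periodSeconds: 15
```

Without a liveness probe, the kubelet cannot detect a hung process and restart the container.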

NPM Install Command Without Pinned Version

Platform: Dockerfile Category: Supply-Chain
Check that packages installed by npm pin a specific version.
https://docs.docker.com/engine/reference/builder/#run
Results (1)
File: infrastructure/internal-api/Dockerfile Line 8
Expected: 'RUN npm install && apk add --no-cache curl' uses npm install with a pinned version Found: 'RUN npm install && apk add --no-cache curl' does not use npm install with a pinned version
7
8 RUN npm install \
9 && apk add --no-cache curl
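A sketch of a pinned, reproducible install, assuming the project keeps a `package-lock.json` (if it does not, one would be generated first with `npm install --package-lock-only`):

```dockerfile
# Copy the manifest and lockfile, then install exactly what the lock pins
COPY package.json package-lock.json ./
RUN npm ci \
    && apk add --no-cache curl
```

`npm ci` fails if the lockfile and manifest disagree, which is the behavior this supply-chain check is after.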

Non Kube System Pod With Host Mount

Platform: Kubernetes Category: Access Control
A non-kube-system workload should not have a hostPath mounted.
https://kubernetes.io/docs/concepts/storage/volumes/
Results (17)
File: scenarios/docker-bench-security/deployment.yaml Line 91
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/var/run/docker.sock' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/var/run/docker.sock' mounted
90 hostPath:
91 path: /var/run/docker.sock
92 type: Socket
File: scenarios/health-check/deployment.yaml Line 32
Expected: Resource name 'health-check-deployment' of kind 'Deployment' in a non kube-system namespace 'default' should not have hostPath '/var/run/docker.sock' mounted Found: Resource name 'health-check-deployment' of kind 'Deployment' in a non kube-system namespace 'default' has a hostPath '/var/run/docker.sock' mounted
31 hostPath:
32 path: /var/run/docker.sock
33 type: Socket
File: scenarios/health-check/deployment-kind.yaml Line 32
Expected: Resource name 'health-check-deployment' of kind 'Deployment' in a non kube-system namespace 'default' should not have hostPath '/var/run/docker.sock' mounted Found: Resource name 'health-check-deployment' of kind 'Deployment' in a non kube-system namespace 'default' has a hostPath '/var/run/docker.sock' mounted
31 hostPath:
32 path: /var/run/docker.sock
33 ---
File: scenarios/kube-bench-security/master-job.yaml Line 36
Expected: Resource name 'kube-bench-master' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/var/lib/etcd' mounted Found: Resource name 'kube-bench-master' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/var/lib/etcd' mounted
35 hostPath:
36 path: "/var/lib/etcd"
37 - name: etc-kubernetes
File: scenarios/docker-bench-security/deployment.yaml Line 76
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/usr/lib/systemd' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/usr/lib/systemd' mounted
75 hostPath:
76 path: /usr/lib/systemd
77 - name: etc-vol
File: scenarios/system-monitor/deployment.yaml Line 29
Expected: Resource name 'system-monitor-deployment' of kind 'Deployment' in a non kube-system namespace 'default' should not have hostPath '/' mounted Found: Resource name 'system-monitor-deployment' of kind 'Deployment' in a non kube-system namespace 'default' has a hostPath '/' mounted
28 hostPath:
29 path: /
30 containers:
File: scenarios/kube-bench-security/master-job.yaml Line 42
Expected: Resource name 'kube-bench-master' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/usr/bin' mounted Found: Resource name 'kube-bench-master' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/usr/bin' mounted
41 hostPath:
42 path: "/usr/bin"
43
File: scenarios/docker-bench-security/deployment.yaml Line 88
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/usr/bin/runc' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/usr/bin/runc' mounted
87 hostPath:
88 path: /usr/bin/runc
89 - name: docker-sock-volume
File: scenarios/docker-bench-security/deployment.yaml Line 85
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/usr/bin/containerd' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/usr/bin/containerd' mounted
84 hostPath:
85 path: /usr/bin/containerd
86 - name: usr-bin-runc-vol
File: scenarios/docker-bench-security/deployment.yaml Line 79
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/etc' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/etc' mounted
78 hostPath:
79 path: /etc
80 - name: lib-systemd-system-vol
File: scenarios/kube-bench-security/node-job.yaml Line 37
Expected: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/etc/systemd' mounted Found: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/etc/systemd' mounted
36 hostPath:
37 path: "/etc/systemd"
38 - name: etc-kubernetes
File: scenarios/kube-bench-security/node-job.yaml Line 43
Expected: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/usr/bin' mounted Found: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/usr/bin' mounted
42 hostPath:
43 path: "/usr/bin"
44
File: scenarios/kube-bench-security/master-job.yaml Line 39
Expected: Resource name 'kube-bench-master' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/etc/kubernetes' mounted Found: Resource name 'kube-bench-master' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/etc/kubernetes' mounted
38 hostPath:
39 path: "/etc/kubernetes"
40 - name: usr-bin
File: scenarios/docker-bench-security/deployment.yaml Line 82
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/lib/systemd/system' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/lib/systemd/system' mounted
81 hostPath:
82 path: /lib/systemd/system
83 - name: usr-bin-contained-vol
File: scenarios/docker-bench-security/deployment.yaml Line 73
Expected: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' should not have hostPath '/var/lib' mounted Found: Resource name 'docker-bench-security' of kind 'DaemonSet' in a non kube-system namespace 'default' has a hostPath '/var/lib' mounted
72 hostPath:
73 path: /var/lib
74 - name: usr-lib-systemd-vol
File: scenarios/kube-bench-security/node-job.yaml Line 40
Expected: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/etc/kubernetes' mounted Found: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/etc/kubernetes' mounted
39 hostPath:
40 path: "/etc/kubernetes"
41 - name: usr-bin
File: scenarios/kube-bench-security/node-job.yaml Line 34
Expected: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' should not have hostPath '/var/lib/kubelet' mounted Found: Resource name 'kube-bench-node' of kind 'Job' in a non kube-system namespace 'default' has a hostPath '/var/lib/kubelet' mounted
33 hostPath:
34 path: "/var/lib/kubelet"
35 - name: etc-systemd
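Where the workload only needs scratch or pod-local storage, a pod-scoped volume removes the host mount entirely; the sketch below swaps a hostPath for an emptyDir (mount path illustrative). The bench-security jobs here do need host paths by design; for those, mounting `readOnly: true` limits the blast radius:

```yaml
spec:
  containers:
    - name: system-monitor
      image: madhuakula/k8s-goat-system-monitor
      volumeMounts:
        - name: scratch
          mountPath: /data      # illustrative mount point
  volumes:
    - name: scratch
      emptyDir: {}              # pod-scoped storage instead of the host filesystem
```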

Pip install Keeping Cached Packages

Platform: Dockerfile Category: Supply-Chain
When installing packages with pip, the '--no-cache-dir' flag should be set to make Docker images smaller.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Results (2)
File: infrastructure/build-code/Dockerfile Line 6
Expected: The '--no-cache-dir' flag is set when running 'pip/pip3 install' Found: The '--no-cache-dir' flag isn't set when running 'pip/pip3 install'
5
6 RUN apk --no-cache add git py3-pip \
7 && pip install truffleHog \
File: infrastructure/info-app/Dockerfile Line 6
Expected: The '--no-cache-dir' flag is set when running 'pip/pip3 install' Found: The '--no-cache-dir' flag isn't set when running 'pip/pip3 install'
5
6 RUN pip install flask
7
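
A minimal remediation sketch for the info-app finding above (the flask package name comes from the flagged line):

```dockerfile
# --no-cache-dir prevents pip from persisting its download cache into the image layer
RUN pip install --no-cache-dir flask
```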

RUN Instruction Using 'cd' Instead of WORKDIR

Platform: Dockerfile Category: Build Process
Use WORKDIR instead of proliferating instructions like RUN cd … && do-something, which are hard to read, troubleshoot, and maintain. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#workdir
Results (2)
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: Using WORKDIR to change directory Found: RUN apt-get update && apt-get install -y htop libcap2-bin curl wget && cd /tmp; wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz && tar -xvzf gotty_linux_amd64.tar.gz; mv gotty /usr/local/bin/gotty'
3
4RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
File: infrastructure/hunger-check/Dockerfile Line 4
Expected: Using WORKDIR to change directory Found: RUN apt update && apt install stress-ng curl wget -y && cd /tmp; wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz && tar -xvzf gotty_linux_amd64.tar.gz; mv gotty /usr/local/bin/gotty'
3
4RUN apt update && apt install stress-ng curl wget -y \
5 && cd /tmp; wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz \
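
One way to rewrite the flagged instruction without 'cd' (the paths and URL are taken from the finding; the split into WORKDIR plus RUN is a sketch, not the project's actual fix):

```dockerfile
# WORKDIR replaces 'cd /tmp' and persists for subsequent instructions
WORKDIR /tmp
RUN wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz \
    && tar -xvzf gotty_linux_amd64.tar.gz \
    && mv gotty /usr/local/bin/gotty
```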

Resource With Allow Privilege Escalation

Platform: Kubernetes Category: Best Practices
Minimize the admission of privileged resources. https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Results (1)
File: scenarios/system-monitor/deployment.yaml Line 38
Expected: spec.template.spec.containers.securityContext.allowPrivilegeEscalation = false Found: spec.template.spec.containers.securityContext.allowPrivilegeEscalation = true
37 securityContext:
38 allowPrivilegeEscalation: true
39 privileged: true
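
A hardened container securityContext inverts both flagged values (this scenario is intentionally vulnerable, so the fragment below is the generic fix rather than the project's intent):

```yaml
securityContext:
  allowPrivilegeEscalation: false
  privileged: false
```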

Run Using apt

Platform: Dockerfile Category: Supply-Chain
apt is discouraged by Linux distributions as an unattended tool because its interface may change between versions. Prefer the more stable apt-get and apt-cache. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run
Results (3)
File: infrastructure/helm-tiller/Dockerfile Line 9
Expected: RUN instructions should not use the 'apt' program Found: RUN instruction is invoking the 'apt' program
8
9RUN apt update && apt install curl wget ca-certificates bash telnet -y \
10 && curl -LO https://get.helm.sh/helm-v${HELMV2_VERSION}-linux-amd64.tar.gz \
File: infrastructure/hunger-check/Dockerfile Line 4
Expected: RUN instructions should not use the 'apt' program Found: RUN instruction is invoking the 'apt' program
3
4RUN apt update && apt install stress-ng curl wget -y \
5 && cd /tmp; wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz \
File: infrastructure/health-check/Dockerfile Line 12
Expected: RUN instructions should not use the 'apt' program Found: RUN instruction is invoking the 'apt' program
11
12RUN apt update && apt install curl wget iputils-ping -y
13RUN go build -o /
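
The same install expressed with apt-get, whose command-line interface is stable across releases (the package list is taken from the health-check finding; the list cleanup at the end is a common addition, not part of the original):

```dockerfile
# apt-get is script-safe; -y answers prompts, and removing the index keeps the layer small
RUN apt-get update && apt-get install -y curl wget iputils-ping \
    && rm -rf /var/lib/apt/lists/*
```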

Seccomp Profile Is Not Configured

Platform: Kubernetes Category: Insecure Configurations
Checks whether any resource fails to configure the Seccomp default profile properly. https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
Results (12)
File: scenarios/internal-proxy/deployment.yaml Line 12
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
11 template:
12 metadata:
13 labels:
File: scenarios/hidden-in-layers/deployment.yaml Line 7
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
6 template:
7 metadata:
8 name: hidden-in-layers
File: scenarios/health-check/deployment-kind.yaml Line 10
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
9 template:
10 metadata:
11 labels:
File: scenarios/docker-bench-security/deployment.yaml Line 23
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
22 template:
23 metadata:
24 labels:
File: scenarios/cache-store/deployment.yaml Line 31
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
30 template:
31 metadata:
32 labels:
File: scenarios/health-check/deployment.yaml Line 10
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
9 template:
10 metadata:
11 labels:
File: scenarios/hunger-check/deployment.yaml Line 65
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
64 template:
65 metadata:
66 labels:
File: scenarios/kubernetes-goat-home/deployment.yaml Line 10
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
9 template:
10 metadata:
11 labels:
File: scenarios/batch-check/job.yaml Line 7
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
6 template:
7 metadata:
8 name: batch-check-job
File: scenarios/build-code/deployment.yaml Line 10
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
9 template:
10 metadata:
11 labels:
File: scenarios/system-monitor/deployment.yaml Line 19
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
18 template:
19 metadata:
20 labels:
File: scenarios/poor-registry/deployment.yaml Line 10
Expected: 'spec.template.metadata.annotations' is set Found: 'spec.template.metadata.annotations' is undefined
9 template:
10 metadata:
11 labels:
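
The query looks for a pod-template annotation, but on Kubernetes 1.19+ the equivalent (and now preferred) fix is the securityContext field; a sketch applicable to any of the flagged templates:

```yaml
spec:
  template:
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
```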

Service Account Token Automount Not Disabled

Platform: Kubernetes Category: Insecure Defaults
Results (14)
File: scenarios/hidden-in-layers/deployment.yaml Line 9
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
8 name: hidden-in-layers
9 spec:
10 containers:
File: scenarios/build-code/deployment.yaml Line 13
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
12 app: build-code
13 spec:
14 containers:
File: scenarios/docker-bench-security/deployment.yaml Line 26
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
25 name: docker-bench
26 spec:
27 hostPID: true
File: scenarios/hunger-check/deployment.yaml Line 68
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
67 app: hunger-check
68 spec:
69 serviceAccountName: big-monolith-sa
File: scenarios/kube-bench-security/master-job.yaml Line 8
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
7 template:
8 spec:
9 hostPID: true
File: scenarios/kubernetes-goat-home/deployment.yaml Line 13
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
12 app: kubernetes-goat-home
13 spec:
14 containers:
File: scenarios/internal-proxy/deployment.yaml Line 15
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
14 app: internal-proxy
15 spec:
16 containers:
File: scenarios/cache-store/deployment.yaml Line 34
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
33 app: cache-store
34 spec:
35 containers:
File: scenarios/health-check/deployment.yaml Line 13
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
12 app: health-check
13 spec:
14 containers:
File: scenarios/system-monitor/deployment.yaml Line 22
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
21 app: system-monitor
22 spec:
23 hostPID: true
File: scenarios/kube-bench-security/node-job.yaml Line 8
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
7 template:
8 spec:
9 hostPID: true
File: scenarios/health-check/deployment-kind.yaml Line 13
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
12 app: health-check
13 spec:
14 containers:
File: scenarios/poor-registry/deployment.yaml Line 13
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
12 app: poor-registry
13 spec:
14 containers:
File: scenarios/batch-check/job.yaml Line 9
Expected: 'spec.template.spec.automountServiceAccountToken' is false Found: 'spec.template.spec.automountServiceAccountToken' is undefined
8 name: batch-check-job
9 spec:
10 containers:
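
The generic fix is a single field in the pod template spec, shown here in isolation (workloads that genuinely need API access, such as the kube-bench jobs, would instead use a dedicated, narrowly scoped service account):

```yaml
spec:
  template:
    spec:
      automountServiceAccountToken: false
```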

Unpinned Package Version in Apk Add

Platform: Dockerfile Category: Supply-Chain
Package version pinning reduces the range of versions that can be installed, reducing the chances of failure due to unanticipated changes. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Results (5)
File: infrastructure/k8s-goat-home/Dockerfile Line 7
Expected: RUN instruction with 'apk add <package>' should use package pinning form 'apk add <package>=<version>' Found: RUN instruction set -x && apk add --update wget git ca-certificates imagemagick && wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${HUGO_BINARY} && tar xzf ${HUGO_BINARY} && mv hugo /usr/bin does not use package pinning form
6ENV HUGO_BINARY hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
7RUN set -x && \
8 apk add --update wget git ca-certificates imagemagick && \
File: infrastructure/build-code/Dockerfile Line 6
Expected: RUN instruction with 'apk add <package>' should use package pinning form 'apk add <package>=<version>' Found: RUN instruction apk --no-cache add git py3-pip && pip install truffleHog && tar -xvzf app.tar.gz -C / does not use package pinning form
5
6RUN apk --no-cache add git py3-pip \
7 && pip install truffleHog \
File: infrastructure/metadata-db/Dockerfile Line 11
Expected: RUN instruction with 'apk add <package>' should use package pinning form 'apk add <package>=<version>' Found: RUN instruction apk add --no-cache curl ca-certificates does not use package pinning form
10
11RUN apk add --no-cache curl ca-certificates
12RUN go build -o /
File: infrastructure/internal-api/Dockerfile Line 8
Expected: RUN instruction with 'apk add <package>' should use package pinning form 'apk add <package>=<version>' Found: RUN instruction npm install && apk add --no-cache curl does not use package pinning form
7
8RUN npm install \
9 && apk add --no-cache curl
File: infrastructure/batch-check/Dockerfile Line 1
Expected: RUN instruction with 'apk add <package>' should use package pinning form 'apk add <package>=<version>' Found: RUN instruction apk add --no-cache htop curl ca-certificates && echo "curl -sSL https://madhuakula.com/kubernetes-goat/k8s-goat-a5e0a28fa75bf429123943abedb065d1 && echo 'id' | sh " > /usr/bin/system-startup && chmod +x /usr/bin/system-startup && rm -rf /tmp/* does not use package pinning form
1FROM alpine:latest
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
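
Pinned form of an apk install (package names from the metadata-db finding; the version strings are placeholders, since valid versions depend on the Alpine release in use):

```dockerfile
# Pinning with <package>=<version> makes builds reproducible
RUN apk add --no-cache curl=8.5.0-r0 ca-certificates=20240226-r0
```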

Unpinned Package Version in Pip Install

Platform: Dockerfile Category: Supply-Chain
Package version pinning reduces the range of versions that can be installed, reducing the chances of failure due to unanticipated changes. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Results (2)
File: infrastructure/info-app/Dockerfile Line 6
Expected: RUN instruction with 'pip/pip3 install <package>' should use package pinning form 'pip/pip3 install <package>=<version>' Found: RUN instruction pip install flask does not use package pinning form
5
6RUN pip install flask
7
File: infrastructure/build-code/Dockerfile Line 6
Expected: RUN instruction with 'pip/pip3 install <package>' should use package pinning form 'pip/pip3 install <package>=<version>' Found: RUN instruction apk --no-cache add git py3-pip && pip install truffleHog && tar -xvzf app.tar.gz -C / does not use package pinning form
5
6RUN apk --no-cache add git py3-pip \
7 && pip install truffleHog \
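
Pinned form of the info-app install (the version shown is illustrative, not a tested value):

```dockerfile
RUN pip install --no-cache-dir flask==2.0.3
```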

Update Instruction Alone

Platform: Dockerfile Category: Build Process
Instruction 'RUN <package-manager> update' should always be followed by '<package-manager> install' in the same RUN statement. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run
Results (1)
File: infrastructure/k8s-goat-home/Dockerfile Line 7
Expected: Instruction 'RUN <package-manager> update' is followed by 'RUN <package-manager> install' Found: Instruction 'RUN <package-manager> update' isn't followed by 'RUN <package-manager> install in the same 'RUN' statement
6ENV HUGO_BINARY hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
7RUN set -x && \
8 apk add --update wget git ca-certificates imagemagick && \
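
Shape of a compliant instruction: the index refresh and the install share one RUN, so a cached update layer can never be reused against a newer package list (apt-get shown for illustration; the flagged file uses apk):

```dockerfile
RUN apt-get update && apt-get install -y wget git ca-certificates
```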

Using Unrecommended Namespace

Platform: Kubernetes Category: Insecure Configurations
Namespaces like 'default', 'kube-system' or 'kube-public' should not be used. https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
Results (23)
File: scenarios/health-check/deployment-kind.yaml Line 37
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
36metadata:
37 name: health-check-service
38spec:
File: scenarios/health-check/deployment.yaml Line 38
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
37metadata:
38 name: health-check-service
39spec:
File: scenarios/kubernetes-goat-home/deployment.yaml Line 27
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
26metadata:
27 name: kubernetes-goat-home-service
28spec:
File: scenarios/build-code/deployment.yaml Line 27
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
26metadata:
27 name: build-code-service
28spec:
File: scenarios/kubernetes-goat-home/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: kubernetes-goat-home-deployment
5spec:
File: scenarios/insecure-rbac/setup.yaml Line 5
Expected: 'metadata.namespace' is not set to default, kube-system or kube-public Found: 'metadata.namespace' is set to kube-system
4 name: superadmin
5 namespace: kube-system
6---
File: scenarios/internal-proxy/deployment.yaml Line 43
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
42metadata:
43 name: internal-proxy-api-service
44spec:
File: scenarios/kube-bench-security/master-job.yaml Line 5
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
4metadata:
5 name: kube-bench-master
6spec:
File: scenarios/kube-bench-security/node-job.yaml Line 5
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
4metadata:
5 name: kube-bench-node
6spec:
File: scenarios/metadata-db/templates/service.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: {{ include "metadata-db.fullname" . }}
5 labels:
File: scenarios/poor-registry/deployment.yaml Line 27
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
26metadata:
27 name: poor-registry-service
28spec:
File: scenarios/poor-registry/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: poor-registry-deployment
5spec:
File: scenarios/hidden-in-layers/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: hidden-in-layers
5spec:
File: scenarios/system-monitor/deployment.yaml Line 55
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
54metadata:
55 name: system-monitor-service
56spec:
File: scenarios/health-check/deployment-kind.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: health-check-deployment
5spec:
File: scenarios/docker-bench-security/deployment.yaml Line 15
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
14metadata:
15 name: docker-bench-security
16 labels:
File: scenarios/internal-proxy/deployment.yaml Line 55
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
54metadata:
55 name: internal-proxy-info-app-service
56spec:
File: scenarios/system-monitor/deployment.yaml Line 13
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
12metadata:
13 name: system-monitor-deployment
14spec:
File: scenarios/build-code/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: build-code-deployment
5spec:
File: scenarios/health-check/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: health-check-deployment
5spec:
File: scenarios/batch-check/job.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: batch-check-job
5spec:
File: scenarios/system-monitor/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: goatvault
5type: Opaque
File: scenarios/internal-proxy/deployment.yaml Line 4
Expected: metadata.namespace is defined and not null Found: metadata.namespace is undefined or null
3metadata:
4 name: internal-proxy-deployment
5 labels:
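
The fix is an explicit, non-default namespace on every resource (the 'k8s-goat' name below is an assumption for illustration):

```yaml
metadata:
  name: health-check-deployment
  namespace: k8s-goat
```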

Workload Mounting With Sensitive OS Directory

Platform: Kubernetes Category: Insecure Configurations
The workload is mounting a volume with a sensitive OS directory. https://kubernetes.io/docs/concepts/policy/pod-security-policy/
Results (17)
File: scenarios/kube-bench-security/master-job.yaml Line 42
Expected: Workload name 'kube-bench-master' of kind 'Job' should not mount a host sensitive OS directory '/usr/bin' with hostPath Found: Workload name 'kube-bench-master' of kind 'Job' is mounting a host sensitive OS directory '/usr/bin' with hostPath
41 hostPath:
42 path: "/usr/bin"
43
File: scenarios/system-monitor/deployment.yaml Line 29
Expected: Workload name 'system-monitor-deployment' of kind 'Deployment' should not mount a host sensitive OS directory '/' with hostPath Found: Workload name 'system-monitor-deployment' of kind 'Deployment' is mounting a host sensitive OS directory '/' with hostPath
28 hostPath:
29 path: /
30 containers:
File: scenarios/docker-bench-security/deployment.yaml Line 85
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/usr/bin/containerd' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/usr/bin/containerd' with hostPath
84 hostPath:
85 path: /usr/bin/containerd
86 - name: usr-bin-runc-vol
File: scenarios/health-check/deployment.yaml Line 32
Expected: Workload name 'health-check-deployment' of kind 'Deployment' should not mount a host sensitive OS directory '/var/run/docker.sock' with hostPath Found: Workload name 'health-check-deployment' of kind 'Deployment' is mounting a host sensitive OS directory '/var/run/docker.sock' with hostPath
31 hostPath:
32 path: /var/run/docker.sock
33 type: Socket
File: scenarios/docker-bench-security/deployment.yaml Line 82
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/lib/systemd/system' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/lib/systemd/system' with hostPath
81 hostPath:
82 path: /lib/systemd/system
83 - name: usr-bin-contained-vol
File: scenarios/kube-bench-security/node-job.yaml Line 43
Expected: Workload name 'kube-bench-node' of kind 'Job' should not mount a host sensitive OS directory '/usr/bin' with hostPath Found: Workload name 'kube-bench-node' of kind 'Job' is mounting a host sensitive OS directory '/usr/bin' with hostPath
42 hostPath:
43 path: "/usr/bin"
44
File: scenarios/kube-bench-security/node-job.yaml Line 34
Expected: Workload name 'kube-bench-node' of kind 'Job' should not mount a host sensitive OS directory '/var/lib/kubelet' with hostPath Found: Workload name 'kube-bench-node' of kind 'Job' is mounting a host sensitive OS directory '/var/lib/kubelet' with hostPath
33 hostPath:
34 path: "/var/lib/kubelet"
35 - name: etc-systemd
File: scenarios/docker-bench-security/deployment.yaml Line 73
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/var/lib' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/var/lib' with hostPath
72 hostPath:
73 path: /var/lib
74 - name: usr-lib-systemd-vol
File: scenarios/kube-bench-security/master-job.yaml Line 36
Expected: Workload name 'kube-bench-master' of kind 'Job' should not mount a host sensitive OS directory '/var/lib/etcd' with hostPath Found: Workload name 'kube-bench-master' of kind 'Job' is mounting a host sensitive OS directory '/var/lib/etcd' with hostPath
35 hostPath:
36 path: "/var/lib/etcd"
37 - name: etc-kubernetes
File: scenarios/kube-bench-security/node-job.yaml Line 37
Expected: Workload name 'kube-bench-node' of kind 'Job' should not mount a host sensitive OS directory '/etc/systemd' with hostPath Found: Workload name 'kube-bench-node' of kind 'Job' is mounting a host sensitive OS directory '/etc/systemd' with hostPath
36 hostPath:
37 path: "/etc/systemd"
38 - name: etc-kubernetes
File: scenarios/kube-bench-security/node-job.yaml Line 40
Expected: Workload name 'kube-bench-node' of kind 'Job' should not mount a host sensitive OS directory '/etc/kubernetes' with hostPath Found: Workload name 'kube-bench-node' of kind 'Job' is mounting a host sensitive OS directory '/etc/kubernetes' with hostPath
39 hostPath:
40 path: "/etc/kubernetes"
41 - name: usr-bin
File: scenarios/docker-bench-security/deployment.yaml Line 76
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/usr/lib/systemd' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/usr/lib/systemd' with hostPath
75 hostPath:
76 path: /usr/lib/systemd
77 - name: etc-vol
File: scenarios/docker-bench-security/deployment.yaml Line 79
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/etc' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/etc' with hostPath
78 hostPath:
79 path: /etc
80 - name: lib-systemd-system-vol
File: scenarios/health-check/deployment-kind.yaml Line 32
Expected: Workload name 'health-check-deployment' of kind 'Deployment' should not mount a host sensitive OS directory '/var/run/docker.sock' with hostPath Found: Workload name 'health-check-deployment' of kind 'Deployment' is mounting a host sensitive OS directory '/var/run/docker.sock' with hostPath
31 hostPath:
32 path: /var/run/docker.sock
33---
File: scenarios/docker-bench-security/deployment.yaml Line 88
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/usr/bin/runc' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/usr/bin/runc' with hostPath
87 hostPath:
88 path: /usr/bin/runc
89 - name: docker-sock-volume
File: scenarios/kube-bench-security/master-job.yaml Line 39
Expected: Workload name 'kube-bench-master' of kind 'Job' should not mount a host sensitive OS directory '/etc/kubernetes' with hostPath Found: Workload name 'kube-bench-master' of kind 'Job' is mounting a host sensitive OS directory '/etc/kubernetes' with hostPath
38 hostPath:
39 path: "/etc/kubernetes"
40 - name: usr-bin
File: scenarios/docker-bench-security/deployment.yaml Line 91
Expected: Workload name 'docker-bench-security' of kind 'DaemonSet' should not mount a host sensitive OS directory '/var/run/docker.sock' with hostPath Found: Workload name 'docker-bench-security' of kind 'DaemonSet' is mounting a host sensitive OS directory '/var/run/docker.sock' with hostPath
90 hostPath:
91 path: /var/run/docker.sock
92 type: Socket
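
Where a host mount is genuinely required, as with these audit tools, the exposure can at least be narrowed with read-only mounts (a pod-spec fragment; the names follow the kube-bench finding):

```yaml
containers:
  - name: kube-bench
    volumeMounts:
      - name: etc-kubernetes
        mountPath: /etc/kubernetes
        readOnly: true
volumes:
  - name: etc-kubernetes
    hostPath:
      path: /etc/kubernetes
      type: Directory
```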

Add Instead of Copy

Platform: Dockerfile Category: Build Process
Use COPY instead of ADD unless extracting a local tar file. https://docs.docker.com/engine/reference/builder/#add
Results (1)
File: infrastructure/hidden-in-layers/Dockerfile Line 5
Expected: 'COPY' secret.txt Found: 'ADD' secret.txt
4
5ADD secret.txt /root/secret.txt
6
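
The direct fix, since secret.txt is a plain file rather than a local tar that needs extraction:

```dockerfile
COPY secret.txt /root/secret.txt
```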

Cluster Admin Rolebinding With Superuser Permissions

Platform: Kubernetes Category: Access Control
Ensure that the cluster-admin role is only used where required (RBAC). https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
Results (1)
File: scenarios/insecure-rbac/setup.yaml Line 14
Expected: Resource name 'superadmin' of kind 'ClusterRoleBinding' isn't binding 'cluster-admin' role with superuser permissions Found: Resource name 'superadmin' of kind 'ClusterRoleBinding' is binding 'cluster-admin' role with superuser permissions
13 kind: ClusterRole
14 name: cluster-admin
15subjects:
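
A least-privilege alternative binds a purpose-built Role instead of cluster-admin (the role name, namespace, and rules below are illustrative, not derived from the scenario):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: k8s-goat
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```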

Docker Daemon Socket is Exposed to Containers

Platform: Kubernetes Category: Access Control
Checks that the Docker daemon socket is not exposed to containers. https://kubernetes.io/docs/concepts/storage/volumes/
Results (3)
File: scenarios/health-check/deployment.yaml Line 32
Expected: spec.volumes[docker-sock-volume].hostPath.path is not '/var/run/docker.sock' Found: spec.volumes[docker-sock-volume].hostPath.path is '/var/run/docker.sock'
31 hostPath:
32 path: /var/run/docker.sock
33 type: Socket
File: scenarios/docker-bench-security/deployment.yaml Line 91
Expected: spec.volumes[docker-sock-volume].hostPath.path is not '/var/run/docker.sock' Found: spec.volumes[docker-sock-volume].hostPath.path is '/var/run/docker.sock'
90 hostPath:
91 path: /var/run/docker.sock
92 type: Socket
File: scenarios/health-check/deployment-kind.yaml Line 32
Expected: spec.volumes[docker-sock-volume].hostPath.path is not '/var/run/docker.sock' Found: spec.volumes[docker-sock-volume].hostPath.path is '/var/run/docker.sock'
31 hostPath:
32 path: /var/run/docker.sock
33---

Healthcheck Instruction Missing

Platform: Dockerfile Category: Insecure Configurations
Ensure that HEALTHCHECK is being used. The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. https://docs.docker.com/engine/reference/builder/#healthcheck
Results (15)
File: infrastructure/k8s-goat-home/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM alpine as build
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/internal-api/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM node:alpine
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/helm-tiller/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM debian:stable
2LABEL MAINTAINER "Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/k8s-goat-home/Dockerfile Line 18
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
17
18FROM nginx:alpine
19LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
File: infrastructure/poor-registry/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM registry:2
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/hidden-in-layers/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM alpine:latest
2
3LABEL MAINTAINER "Madhu Akula" INFO="Kubernetes Goat"
File: infrastructure/users-repos/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM python:alpine
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/system-monitor/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM ubuntu:18.04
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/info-app/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM python:alpine
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/metadata-db/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM golang:alpine
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/batch-check/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM alpine:latest
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/health-check/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM golang:buster
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/cache-store/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM redis:6-alpine
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/build-code/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM alpine:latest
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
File: infrastructure/hunger-check/Dockerfile Line 1
Expected: Dockerfile contains instruction 'HEALTHCHECK' Found: Dockerfile doesn't contain instruction 'HEALTHCHECK'
1FROM ubuntu:18.04
2LABEL MAINTAINER="Madhu Akula" INFO="Kubernetes Goat"
3
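
Shape of a HEALTHCHECK for an HTTP-serving image (the port and path are assumptions; each image would probe its own endpoint):

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
    CMD wget -q --spider http://localhost:8080/ || exit 1
```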

Multiple RUN, ADD, COPY, Instructions Listed

Platform: Dockerfile Category: Best Practices
Multiple commands (RUN, COPY, ADD) should be grouped in order to reduce the number of layers. https://sysdig.com/blog/dockerfile-best-practices/
Results (2)
File: infrastructure/health-check/Dockerfile Line 12
Expected: There isn't any RUN instruction that could be grouped Found: There are RUN instructions that could be grouped
11
12RUN apt update && apt install curl wget iputils-ping -y
13RUN go build -o /
File: infrastructure/metadata-db/Dockerfile Line 11
Expected: There isn't any RUN instruction that could be grouped Found: There are RUN instructions that could be grouped
10
11RUN apk add --no-cache curl ca-certificates
12RUN go build -o /
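
The two metadata-db instructions collapse into one layer:

```dockerfile
RUN apk add --no-cache curl ca-certificates \
    && go build -o /
```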

No Drop Capabilities for Containers

Platform: Kubernetes Category: Best Practices
Checks that containers drop Linux capabilities in their security context. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
Results (15)
File: scenarios/cache-store/deployment.yaml Line 36
Expected: metadata.name={{cache-store-deployment}}.spec.containers.name=cache-store.securityContext is set Found: metadata.name={{cache-store-deployment}}.spec.containers.name=cache-store.securityContext is undefined
35 containers:
36 - name: cache-store
37 image: madhuakula/k8s-goat-cache-store
File: scenarios/hunger-check/deployment.yaml Line 71
Expected: metadata.name={{hunger-check-deployment}}.spec.containers.name=hunger-check.securityContext is set Found: metadata.name={{hunger-check-deployment}}.spec.containers.name=hunger-check.securityContext is undefined
70 containers:
71 - name: hunger-check
72 image: madhuakula/k8s-goat-hunger-check
File: scenarios/system-monitor/deployment.yaml Line 37
Expected: metadata.name={{system-monitor-deployment}}.spec.containers.name={{system-monitor}}.securityContext.capabilities is set Found: metadata.name={{system-monitor-deployment}}.spec.containers.name={{system-monitor}}.securityContext.capabilities is undefined
36 cpu: "20m"
37 securityContext:
38 allowPrivilegeEscalation: true
File: scenarios/build-code/deployment.yaml Line 15
Expected: metadata.name={{build-code-deployment}}.spec.containers.name=build-code.securityContext is set Found: metadata.name={{build-code-deployment}}.spec.containers.name=build-code.securityContext is undefined
14 containers:
15 - name: build-code
16 image: madhuakula/k8s-goat-build-code
File: scenarios/internal-proxy/deployment.yaml Line 28
Expected: metadata.name={{internal-proxy-deployment}}.spec.containers.name=info-app.securityContext is set Found: metadata.name={{internal-proxy-deployment}}.spec.containers.name=info-app.securityContext is undefined
27 - containerPort: 3000
28 - name: info-app
29 image: madhuakula/k8s-goat-info-app
File: scenarios/kubernetes-goat-home/deployment.yaml Line 15
Expected: metadata.name={{kubernetes-goat-home-deployment}}.spec.containers.name=kubernetes-goat-home.securityContext is set Found: metadata.name={{kubernetes-goat-home-deployment}}.spec.containers.name=kubernetes-goat-home.securityContext is undefined
14 containers:
15 - name: kubernetes-goat-home
16 image: madhuakula/k8s-goat-home
File: scenarios/internal-proxy/deployment.yaml Line 17
Expected: metadata.name={{internal-proxy-deployment}}.spec.containers.name=internal-api.securityContext is set Found: metadata.name={{internal-proxy-deployment}}.spec.containers.name=internal-api.securityContext is undefined
16 containers:
17 - name: internal-api
18 image: madhuakula/k8s-goat-internal-api
File: scenarios/poor-registry/deployment.yaml Line 15
Expected: metadata.name={{poor-registry-deployment}}.spec.containers.name=poor-registry.securityContext is set Found: metadata.name={{poor-registry-deployment}}.spec.containers.name=poor-registry.securityContext is undefined
14 containers:
15 - name: poor-registry
16 image: madhuakula/k8s-goat-poor-registry
File: scenarios/batch-check/job.yaml Line 11
Expected: metadata.name={{batch-check-job}}.spec.containers.name=batch-check.securityContext is set Found: metadata.name={{batch-check-job}}.spec.containers.name=batch-check.securityContext is undefined
10 containers:
11 - name: batch-check
12 image: madhuakula/k8s-goat-batch-check
File: scenarios/kube-bench-security/node-job.yaml Line 11
Expected: metadata.name={{kube-bench-node}}.spec.containers.name=kube-bench.securityContext is set Found: metadata.name={{kube-bench-node}}.spec.containers.name=kube-bench.securityContext is undefined
10 containers:
11 - name: kube-bench
12 image: aquasec/kube-bench:latest
File: scenarios/hidden-in-layers/deployment.yaml Line 11
Expected: metadata.name={{hidden-in-layers}}.spec.containers.name=hidden-in-layers.securityContext is set Found: metadata.name={{hidden-in-layers}}.spec.containers.name=hidden-in-layers.securityContext is undefined
10 containers:
11 - name: hidden-in-layers
12 image: madhuakula/k8s-goat-hidden-in-layers
File: scenarios/health-check/deployment-kind.yaml Line 24
Expected: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.securityContext.capabilities is set Found: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.securityContext.capabilities is undefined
23 # Custom Stuff
24 securityContext:
25 privileged: true
File: scenarios/docker-bench-security/deployment.yaml Line 46
Expected: spec.containers[docker-bench].securityContext.capabilities.drop is Defined Found: spec.containers[docker-bench].securityContext.capabilities.drop is not Defined
45 privileged: true
46 capabilities:
47 add: ["AUDIT_CONTROL"]
File: scenarios/kube-bench-security/master-job.yaml Line 17
Expected: metadata.name={{kube-bench-master}}.spec.containers.name=kube-bench.securityContext is set Found: metadata.name={{kube-bench-master}}.spec.containers.name=kube-bench.securityContext is undefined
16 containers:
17 - name: kube-bench
18 image: aquasec/kube-bench:latest
File: scenarios/health-check/deployment.yaml Line 24
Expected: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.securityContext.capabilities is set Found: metadata.name={{health-check-deployment}}.spec.containers.name={{health-check}}.securityContext.capabilities is undefined
23 # Custom Stuff
24 securityContext:
25 privileged: true
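
Dropping capabilities for one of the flagged containers could look like the sketch below (note that Kubernetes Goat is intentionally vulnerable, so hardening a scenario may defeat its purpose):

```yaml
      containers:
        - name: cache-store
          image: madhuakula/k8s-goat-cache-store
          securityContext:
            capabilities:
              drop:
                - ALL   # drop everything, then add back only what is required
```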

Permissive Access to Create Pods

Platform: Kubernetes Category: Access Control
The permission to create pods in a cluster should be restricted because it allows privilege escalation. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping
Results (1)
File: infrastructure/helm-tiller/pwnchart/templates/clusterrole.yaml Line 8
Expected: metadata.name=all-your-base.rules.verbs should not contain a wildcard value when metadata.name=all-your-base.rules.resources contains a wildcard value Found: metadata.name=all-your-base.rules.verbs contains a wildcard value and metadata.name=all-your-base.rules.resources contains a wildcard value
7 resources: ["*"]
8 verbs: ["*"]
9
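
A remediation sketch: replace the wildcard rule with explicitly scoped, read-only permissions (the resource and verb lists below are illustrative):

```yaml
rules:
  - apiGroups: [""]                    # core API group only
    resources: ["pods"]
    verbs: ["get", "list", "watch"]    # no create/update/delete
```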

Pod or Container Without LimitRange

Platform: Kubernetes Category: Insecure Configurations
Pod or Container should have a 'LimitRange' associated. https://kubernetes.io/docs/concepts/policy/limit-range/
Results (14)
File: scenarios/hidden-in-layers/deployment.yaml Line 4
Expected: metadata.name={{hidden-in-layers}} has a 'LimitRange' associated Found: metadata.name={{hidden-in-layers}} does not have a 'LimitRange' associated
3metadata:
4 name: hidden-in-layers
5spec:
File: scenarios/poor-registry/deployment.yaml Line 4
Expected: metadata.name={{poor-registry-deployment}} has a 'LimitRange' associated Found: metadata.name={{poor-registry-deployment}} does not have a 'LimitRange' associated
3metadata:
4 name: poor-registry-deployment
5spec:
File: scenarios/kube-bench-security/node-job.yaml Line 5
Expected: metadata.name={{kube-bench-node}} has a 'LimitRange' associated Found: metadata.name={{kube-bench-node}} does not have a 'LimitRange' associated
4metadata:
5 name: kube-bench-node
6spec:
File: scenarios/health-check/deployment.yaml Line 4
Expected: metadata.name={{health-check-deployment}} has a 'LimitRange' associated Found: metadata.name={{health-check-deployment}} does not have a 'LimitRange' associated
3metadata:
4 name: health-check-deployment
5spec:
File: scenarios/health-check/deployment-kind.yaml Line 4
Expected: metadata.name={{health-check-deployment}} has a 'LimitRange' associated Found: metadata.name={{health-check-deployment}} does not have a 'LimitRange' associated
3metadata:
4 name: health-check-deployment
5spec:
File: scenarios/internal-proxy/deployment.yaml Line 4
Expected: metadata.name={{internal-proxy-deployment}} has a 'LimitRange' associated Found: metadata.name={{internal-proxy-deployment}} does not have a 'LimitRange' associated
3metadata:
4 name: internal-proxy-deployment
5 labels:
File: scenarios/build-code/deployment.yaml Line 4
Expected: metadata.name={{build-code-deployment}} has a 'LimitRange' associated Found: metadata.name={{build-code-deployment}} does not have a 'LimitRange' associated
3metadata:
4 name: build-code-deployment
5spec:
File: scenarios/system-monitor/deployment.yaml Line 13
Expected: metadata.name={{system-monitor-deployment}} has a 'LimitRange' associated Found: metadata.name={{system-monitor-deployment}} does not have a 'LimitRange' associated
12metadata:
13 name: system-monitor-deployment
14spec:
File: scenarios/kubernetes-goat-home/deployment.yaml Line 4
Expected: metadata.name={{kubernetes-goat-home-deployment}} has a 'LimitRange' associated Found: metadata.name={{kubernetes-goat-home-deployment}} does not have a 'LimitRange' associated
3metadata:
4 name: kubernetes-goat-home-deployment
5spec:
File: scenarios/kube-bench-security/master-job.yaml Line 5
Expected: metadata.name={{kube-bench-master}} has a 'LimitRange' associated Found: metadata.name={{kube-bench-master}} does not have a 'LimitRange' associated
4metadata:
5 name: kube-bench-master
6spec:
File: scenarios/hunger-check/deployment.yaml Line 59
Expected: metadata.name={{hunger-check-deployment}} has a 'LimitRange' associated Found: metadata.name={{hunger-check-deployment}} does not have a 'LimitRange' associated
58 name: hunger-check-deployment
59 namespace: big-monolith
60spec:
File: scenarios/cache-store/deployment.yaml Line 22
Expected: metadata.name={{cache-store-deployment}} has a 'LimitRange' associated Found: metadata.name={{cache-store-deployment}} does not have a 'LimitRange' associated
21metadata:
22 namespace: secure-middleware
23 name: cache-store-deployment
File: scenarios/docker-bench-security/deployment.yaml Line 15
Expected: metadata.name={{docker-bench-security}} has a 'LimitRange' associated Found: metadata.name={{docker-bench-security}} does not have a 'LimitRange' associated
14metadata:
15 name: docker-bench-security
16 labels:
File: scenarios/batch-check/job.yaml Line 4
Expected: metadata.name={{batch-check-job}} has a 'LimitRange' associated Found: metadata.name={{batch-check-job}} does not have a 'LimitRange' associated
3metadata:
4 name: batch-check-job
5spec:
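
A LimitRange is defined per namespace; a sketch for one of the namespaces above (the name and the limit values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits          # hypothetical name
  namespace: secure-middleware
spec:
  limits:
    - type: Container
      default:                  # default limits applied to containers
        cpu: 500m
        memory: 256Mi
      defaultRequest:           # default requests applied to containers
        cpu: 100m
        memory: 128Mi
```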

Pod or Container Without ResourceQuota

Platform: Kubernetes Category: Insecure Configurations
Pod or Container should have a ResourceQuota associated. https://kubernetes.io/docs/concepts/policy/resource-quotas/
Results (14)
File: scenarios/poor-registry/deployment.yaml Line 4
Expected: metadata.name={{poor-registry-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{poor-registry-deployment}} does not have a 'ResourceQuota' associated
3metadata:
4 name: poor-registry-deployment
5spec:
File: scenarios/docker-bench-security/deployment.yaml Line 15
Expected: metadata.name={{docker-bench-security}} has a 'ResourceQuota' associated Found: metadata.name={{docker-bench-security}} does not have a 'ResourceQuota' associated
14metadata:
15 name: docker-bench-security
16 labels:
File: scenarios/batch-check/job.yaml Line 4
Expected: metadata.name={{batch-check-job}} has a 'ResourceQuota' associated Found: metadata.name={{batch-check-job}} does not have a 'ResourceQuota' associated
3metadata:
4 name: batch-check-job
5spec:
File: scenarios/kubernetes-goat-home/deployment.yaml Line 4
Expected: metadata.name={{kubernetes-goat-home-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{kubernetes-goat-home-deployment}} does not have a 'ResourceQuota' associated
3metadata:
4 name: kubernetes-goat-home-deployment
5spec:
File: scenarios/cache-store/deployment.yaml Line 22
Expected: metadata.name={{cache-store-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{cache-store-deployment}} does not have a 'ResourceQuota' associated
21metadata:
22 namespace: secure-middleware
23 name: cache-store-deployment
File: scenarios/health-check/deployment-kind.yaml Line 4
Expected: metadata.name={{health-check-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{health-check-deployment}} does not have a 'ResourceQuota' associated
3metadata:
4 name: health-check-deployment
5spec:
File: scenarios/build-code/deployment.yaml Line 4
Expected: metadata.name={{build-code-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{build-code-deployment}} does not have a 'ResourceQuota' associated
3metadata:
4 name: build-code-deployment
5spec:
File: scenarios/system-monitor/deployment.yaml Line 13
Expected: metadata.name={{system-monitor-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{system-monitor-deployment}} does not have a 'ResourceQuota' associated
12metadata:
13 name: system-monitor-deployment
14spec:
File: scenarios/kube-bench-security/node-job.yaml Line 5
Expected: metadata.name={{kube-bench-node}} has a 'ResourceQuota' associated Found: metadata.name={{kube-bench-node}} does not have a 'ResourceQuota' associated
4metadata:
5 name: kube-bench-node
6spec:
File: scenarios/health-check/deployment.yaml Line 4
Expected: metadata.name={{health-check-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{health-check-deployment}} does not have a 'ResourceQuota' associated
3metadata:
4 name: health-check-deployment
5spec:
File: scenarios/hunger-check/deployment.yaml Line 59
Expected: metadata.name={{hunger-check-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{hunger-check-deployment}} does not have a 'ResourceQuota' associated
58 name: hunger-check-deployment
59 namespace: big-monolith
60spec:
File: scenarios/kube-bench-security/master-job.yaml Line 5
Expected: metadata.name={{kube-bench-master}} has a 'ResourceQuota' associated Found: metadata.name={{kube-bench-master}} does not have a 'ResourceQuota' associated
4metadata:
5 name: kube-bench-master
6spec:
File: scenarios/internal-proxy/deployment.yaml Line 4
Expected: metadata.name={{internal-proxy-deployment}} has a 'ResourceQuota' associated Found: metadata.name={{internal-proxy-deployment}} does not have a 'ResourceQuota' associated
3metadata:
4 name: internal-proxy-deployment
5 labels:
File: scenarios/hidden-in-layers/deployment.yaml Line 4
Expected: metadata.name={{hidden-in-layers}} has a 'ResourceQuota' associated Found: metadata.name={{hidden-in-layers}} does not have a 'ResourceQuota' associated
3metadata:
4 name: hidden-in-layers
5spec:
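
Like a LimitRange, a ResourceQuota is attached to a namespace. A sketch for one of the namespaces above (name and quota values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota           # hypothetical name
  namespace: big-monolith
spec:
  hard:
    requests.cpu: "2"           # total CPU all pods may request
    requests.memory: 2Gi
    limits.cpu: "4"             # total CPU limit across all pods
    limits.memory: 4Gi
```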

Pod or Container Without Security Context

Platform: Kubernetes Category: Insecure Configurations
A security context defines privilege and access control settings for a Pod or Container. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Results (11)
File: scenarios/build-code/deployment.yaml Line 15
Expected: spec.template.spec.containers.name=build-code has a security context Found: spec.template.spec.containers.name=build-code does not have a security context
14 containers:
15 - name: build-code
16 image: madhuakula/k8s-goat-build-code
File: scenarios/hunger-check/deployment.yaml Line 71
Expected: spec.template.spec.containers.name=hunger-check has a security context Found: spec.template.spec.containers.name=hunger-check does not have a security context
70 containers:
71 - name: hunger-check
72 image: madhuakula/k8s-goat-hunger-check
File: scenarios/kube-bench-security/node-job.yaml Line 11
Expected: spec.template.spec.containers.name=kube-bench has a security context Found: spec.template.spec.containers.name=kube-bench does not have a security context
10 containers:
11 - name: kube-bench
12 image: aquasec/kube-bench:latest
File: scenarios/internal-proxy/deployment.yaml Line 17
Expected: spec.template.spec.containers.name=internal-api has a security context Found: spec.template.spec.containers.name=internal-api does not have a security context
16 containers:
17 - name: internal-api
18 image: madhuakula/k8s-goat-internal-api
File: scenarios/internal-proxy/deployment.yaml Line 28
Expected: spec.template.spec.containers.name=info-app has a security context Found: spec.template.spec.containers.name=info-app does not have a security context
27 - containerPort: 3000
28 - name: info-app
29 image: madhuakula/k8s-goat-info-app
File: scenarios/kubernetes-goat-home/deployment.yaml Line 15
Expected: spec.template.spec.containers.name=kubernetes-goat-home has a security context Found: spec.template.spec.containers.name=kubernetes-goat-home does not have a security context
14 containers:
15 - name: kubernetes-goat-home
16 image: madhuakula/k8s-goat-home
File: scenarios/batch-check/job.yaml Line 11
Expected: spec.template.spec.containers.name=batch-check has a security context Found: spec.template.spec.containers.name=batch-check does not have a security context
10 containers:
11 - name: batch-check
12 image: madhuakula/k8s-goat-batch-check
File: scenarios/hidden-in-layers/deployment.yaml Line 11
Expected: spec.template.spec.containers.name=hidden-in-layers has a security context Found: spec.template.spec.containers.name=hidden-in-layers does not have a security context
10 containers:
11 - name: hidden-in-layers
12 image: madhuakula/k8s-goat-hidden-in-layers
File: scenarios/kube-bench-security/master-job.yaml Line 17
Expected: spec.template.spec.containers.name=kube-bench has a security context Found: spec.template.spec.containers.name=kube-bench does not have a security context
16 containers:
17 - name: kube-bench
18 image: aquasec/kube-bench:latest
File: scenarios/poor-registry/deployment.yaml Line 15
Expected: spec.template.spec.containers.name=poor-registry has a security context Found: spec.template.spec.containers.name=poor-registry does not have a security context
14 containers:
15 - name: poor-registry
16 image: madhuakula/k8s-goat-poor-registry
File: scenarios/cache-store/deployment.yaml Line 36
Expected: spec.template.spec.containers.name=cache-store has a security context Found: spec.template.spec.containers.name=cache-store does not have a security context
35 containers:
36 - name: cache-store
37 image: madhuakula/k8s-goat-cache-store
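
A hardened security context for one of the flagged containers might look like this sketch (the UID is an illustrative non-root value, and some settings would break the intentionally vulnerable scenarios):

```yaml
      containers:
        - name: build-code
          image: madhuakula/k8s-goat-build-code
          securityContext:
            runAsNonRoot: true
            runAsUser: 10001                 # illustrative non-root UID
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
```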

RBAC Wildcard In Rule

Platform: Kubernetes Category: Access Control
Kubernetes Roles and ClusterRoles should not use wildcards in rules (objects or actions). https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Results (4)
File: infrastructure/helm-tiller/pwnchart/templates/clusterrole.yaml Line 5
Expected: metadata.name={{all-your-base}}.rules[0].resources shouldn't contain value: '*' Found: metadata.name={{all-your-base}}.rules[0].resources contains value: '*'
4 name: all-your-base
5rules:
6 - apiGroups: ["*"]
File: scenarios/hunger-check/deployment.yaml Line 12
Expected: metadata.name={{secret-reader}}.rules[0].resources shouldn't contain value: '*' Found: metadata.name={{secret-reader}}.rules[0].resources contains value: '*'
11 name: secret-reader
12rules:
13- apiGroups: [""] # "" indicates the core API group
File: infrastructure/helm-tiller/pwnchart/templates/clusterrole.yaml Line 5
Expected: metadata.name={{all-your-base}}.rules[0].verbs shouldn't contain value: '*' Found: metadata.name={{all-your-base}}.rules[0].verbs contains value: '*'
4 name: all-your-base
5rules:
6 - apiGroups: ["*"]
File: infrastructure/helm-tiller/pwnchart/templates/clusterrole.yaml Line 5
Expected: metadata.name={{all-your-base}}.rules[0].apiGroups shouldn't contain value: '*' Found: metadata.name={{all-your-base}}.rules[0].apiGroups contains value: '*'
4 name: all-your-base
5rules:
6 - apiGroups: ["*"]
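
Each wildcard can be replaced by the narrowest set the role actually needs. For a role that only reads secrets, a sketch:

```yaml
rules:
  - apiGroups: [""]              # core API group instead of "*"
    resources: ["secrets"]       # explicit resource instead of "*"
    verbs: ["get", "list"]       # explicit verbs instead of "*"
```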

Secrets As Environment Variables

Platform: Kubernetes Category: Secret Management
Container should not use secrets as environment variables. https://kubernetes.io/docs/concepts/configuration/secret/
Results (1)
File: scenarios/system-monitor/deployment.yaml Line 48
Expected: 'spec.template.spec.containers.name={{system-monitor}}.env.name={{K8S_GOAT_VAULT_KEY}}.valueFrom.secretKeyRef' is undefined Found: 'spec.template.spec.containers.name={{system-monitor}}.env.name={{K8S_GOAT_VAULT_KEY}}.valueFrom.secretKeyRef' is defined
47 valueFrom:
48 secretKeyRef:
49 name: goatvault
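
Instead of injecting the secret as an environment variable, it can be mounted as a volume (the mount path below is illustrative, and the application would need to read the key from the file instead of the environment):

```yaml
      containers:
        - name: system-monitor
          image: madhuakula/k8s-goat-system-monitor
          volumeMounts:
            - name: vault-key
              mountPath: /secrets    # illustrative path
              readOnly: true
      volumes:
        - name: vault-key
          secret:
            secretName: goatvault    # each key becomes a file under /secrets
```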

Service Does Not Target Pod

Platform: Kubernetes Category: Insecure Configurations
Results (8)
File: scenarios/kubernetes-goat-home/deployment.yaml Line 33
Expected: metadata.name={{kubernetes-goat-home-service}}.spec.selector label refers to a Pod label Found: metadata.name={{kubernetes-goat-home-service}}.spec.selector label does not match with any Pod label
32 targetPort: 80
33 selector:
34 app: kubernetes-goat-home
File: scenarios/health-check/deployment-kind.yaml Line 43
Expected: metadata.name={{health-check-service}}.spec.selector label refers to a Pod label Found: metadata.name={{health-check-service}}.spec.selector label does not match with any Pod label
42 targetPort: 80
43 selector:
44 app: health-check
File: scenarios/system-monitor/deployment.yaml Line 61
Expected: metadata.name={{system-monitor-service}}.spec.selector label refers to a Pod label Found: metadata.name={{system-monitor-service}}.spec.selector label does not match with any Pod label
60 targetPort: 8080
61 selector:
62 app: system-monitor
File: scenarios/build-code/deployment.yaml Line 33
Expected: metadata.name={{build-code-service}}.spec.selector label refers to a Pod label Found: metadata.name={{build-code-service}}.spec.selector label does not match with any Pod label
32 targetPort: 3000
33 selector:
34 app: build-code
File: scenarios/hunger-check/deployment.yaml Line 93
Expected: metadata.name={{hunger-check-service}}.spec.selector label refers to a Pod label Found: metadata.name={{hunger-check-service}}.spec.selector label does not match with any Pod label
92 targetPort: 8080
93 selector:
94 app: hunger-check
File: scenarios/poor-registry/deployment.yaml Line 33
Expected: metadata.name={{poor-registry-service}}.spec.selector label refers to a Pod label Found: metadata.name={{poor-registry-service}}.spec.selector label does not match with any Pod label
32 targetPort: 5000
33 selector:
34 app: poor-registry
File: scenarios/health-check/deployment.yaml Line 44
Expected: metadata.name={{health-check-service}}.spec.selector label refers to a Pod label Found: metadata.name={{health-check-service}}.spec.selector label does not match with any Pod label
43 targetPort: 80
44 selector:
45 app: health-check
File: scenarios/metadata-db/templates/service.yaml Line 3
Expected: metadata.name={{{}}}.spec.selector label refers to a Pod label Found: metadata.name={{{}}}.spec.selector label does not match with any Pod label
2kind: Service
3metadata:
4 name: {{ include "metadata-db.fullname" . }}
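
This finding means the Service selector does not match any Pod template label in the scanned manifests (for the Helm-templated metadata-db chart, it may simply be a template-evaluation artifact). A matching pair looks like this sketch, with illustrative label values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: health-check-deployment
spec:
  template:
    metadata:
      labels:
        app: health-check        # must match the Service selector below
---
apiVersion: v1
kind: Service
metadata:
  name: health-check-service
spec:
  selector:
    app: health-check            # refers to the Pod template label above
  ports:
    - port: 80
      targetPort: 80
```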

Service Type is NodePort

Platform: Kubernetes Category: Networking and Firewall
Results (1)
File: scenarios/internal-proxy/deployment.yaml Line 57
Expected: spec.type is not 'NodePort' Found: spec.type is 'NodePort'
56spec:
57 type: NodePort
58 ports:
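
Where a NodePort is not strictly required, the default ClusterIP type (optionally fronted by an Ingress for external access) avoids exposing a port on every node. A sketch:

```yaml
spec:
  type: ClusterIP        # default type; no port opened on the nodes
  ports:
    - port: 3000
      targetPort: 3000
```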

APT-GET Not Avoiding Additional Packages

Platform: Dockerfile Category: Supply-Chain
Checks whether any apt-get install commands omit the '--no-install-recommends' flag, which avoids installing additional packages. https://docs.docker.com/engine/reference/builder/#run
Results (1)
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: 'RUN apt-get update && apt-get install -y htop libcap2-bin curl wget && cd /tmp; wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz && tar -xvzf gotty_linux_amd64.tar.gz; mv gotty /usr/local/bin/gotty' uses '--no-install-recommends' flag to avoid installing additional packages Found: 'RUN apt-get update && apt-get install -y htop libcap2-bin curl wget && cd /tmp; wget https://github.com/yudai/gotty/releases/download/v1.0.1/gotty_linux_amd64.tar.gz && tar -xvzf gotty_linux_amd64.tar.gz; mv gotty /usr/local/bin/gotty' does not use '--no-install-recommends' flag to avoid installing additional packages
3
4RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
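
The flagged install command can carry the flag directly; a sketch based on the package list in the finding:

```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
    htop libcap2-bin curl wget
```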

Apk Add Using Local Cache Path

Platform: Dockerfile Category: Supply-Chain
When installing packages, use the '--no-cache' switch to avoid the need to use '--update' and to remove '/var/cache/apk/*'. https://docs.docker.com/engine/reference/builder/#run
Results (1)
File: infrastructure/k8s-goat-home/Dockerfile Line 7
Expected: 'RUN' does not contain 'apk add' command without '--no-cache' switch Found: 'RUN' contains 'apk add' command without '--no-cache' switch
6ENV HUGO_BINARY hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
7RUN set -x && \
8 apk add --update wget git ca-certificates imagemagick && \
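
With '--no-cache', apk fetches the index on the fly and leaves nothing in '/var/cache/apk/', so neither '--update' nor a cleanup step is needed. A sketch of the corrected line:

```dockerfile
RUN set -x && \
    apk add --no-cache wget git ca-certificates imagemagick
```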

Apt Get Install Lists Were Not Deleted

Platform: Dockerfile Category: Supply-Chain
After running apt-get install, the apt-get lists should be deleted. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Results (1)
File: infrastructure/system-monitor/Dockerfile Line 4
Expected: After using apt-get install, it is needed to delete apt-get lists Found: After using apt-get install, the apt-get lists were not deleted
3
4RUN apt-get update && apt-get install -y htop \
5 libcap2-bin curl wget && \
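
Removing the apt lists in the same RUN instruction keeps the downloaded package index out of the image layer; a sketch for the flagged command:

```dockerfile
RUN apt-get update && \
    apt-get install -y htop libcap2-bin curl wget && \
    rm -rf /var/lib/apt/lists/*   # delete the index in the same layer
```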

KICS is open source and will always remain so. Both the scanning engine and the security queries are open to the software development community.