You have access to multiple clusters from your main terminal through kubectl contexts. Write all context names into /opt/course/1/contexts, one per line.
From the kubeconfig extract the certificate of user restricted@infra-prod and write it decoded to /opt/course/1/cert.
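A minimal sketch of both steps (the jsonpath filter assumes the kubeconfig user entry is literally named restricted@infra-prod; confirm with kubectl config view --raw):

kubectl config get-contexts -o name > /opt/course/1/contexts

# extract the user's client certificate and base64-decode it
kubectl config view --raw \
  -o jsonpath="{.users[?(@.name == 'restricted@infra-prod')].user.client-certificate-data}" \
  | base64 -d > /opt/course/1/cert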
You received a list from the DevSecOps team which performed a security investigation of the k8s cluster1 (workload-prod). The list states the following about the apiserver setup:
# Approach: change the apiserver configuration by setting --kubernetes-service-node-port=0
# (or removing the argument entirely), then delete the kubernetes Service so it gets
# recreated without the NodePort.

# 1. Find the static Pod manifest and set the argument to 0
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    - --kubernetes-service-node-port=0    # delete or set to 0
...

# 2. Wait for the apiserver Pod to come back, then delete the kubernetes Service
kubectl -n kube-system get pod | grep apiserver
kubectl delete svc kubernetes
kubectl get svc
Question 4 | Pod Security Policies
Task weight: 8%
There is Deployment container-host-hacker in Namespace team-red which mounts /run/containerd as a hostPath volume on the Node where it's running. This means that the Pod can access various data about other containers running on the same Node.
You’re asked to forbid this behavior by:
Enabling Admission Plugin PodSecurityPolicy in the apiserver
Creating a PodSecurityPolicy named psp-mount which allows hostPath volumes only for directory /tmp
Creating a ClusterRole named psp-mount which allows use of the new PSP
Creating a RoleBinding named psp-mount in Namespace team-red which binds the new ClusterRole to all ServiceAccounts in Namespace team-red
Restart the Pod of Deployment container-host-hacker afterwards to verify new creation is prevented; see the sketch after the note below.
NOTE: PSPs can affect the whole cluster. Should you encounter issues you can always disable the Admission Plugin again.
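A sketch of the required objects, using standard policy/v1beta1 PSP fields; the admission plugin is enabled by adding PodSecurityPolicy to --enable-admission-plugins in /etc/kubernetes/manifests/kube-apiserver.yaml:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-mount
spec:
  privileged: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
  allowedHostPaths:            # hostPath volumes restricted to /tmp only
  - pathPrefix: "/tmp"

kubectl create clusterrole psp-mount --verb=use \
  --resource=podsecuritypolicies --resource-name=psp-mount
# the RoleBinding's Namespace scope limits the group to SAs in team-red
kubectl -n team-red create rolebinding psp-mount \
  --clusterrole=psp-mount --group=system:serviceaccounts
kubectl -n team-red rollout restart deploy container-host-hacker   # new Pod should now be rejected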
You’re asked to evaluate specific settings of cluster2 against the CIS Benchmark recommendations. Use the tool kube-bench which is already installed on the nodes.
Connect using ssh cluster2-master1 and ssh cluster2-worker1.
On the master node ensure (correct if necessary) that the CIS recommendations are set for:
The --profiling argument of the kube-controller-manager
The ownership of directory /var/lib/etcd
On the worker node ensure (correct if necessary) that the CIS recommendations are set for:
The permissions of the kubelet configuration file /var/lib/kubelet/config.yaml
# Master node configuration
# 1. Check the kube-controller-manager
kube-bench run --targets=master | grep kube-controller -A 3

1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
on the master node and set the below parameter.
--profiling=false

# Add the parameter as recommended:
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --profiling=false    # add
# Run kube-bench again; the check now passes.

# 2. Fix ownership of the etcd data directory
ls -lh /var/lib | grep etcd
drwx------ 3 root root 4.0K Sep 11 20:08 etcd
kube-bench run --targets=master | grep "/var/lib/etcd" -B5
ps -ef | grep etcd

Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd

# Set the ownership:
chown etcd:etcd /var/lib/etcd
# Worker node configuration
# 1. Set permissions on the kubelet config file
kube-bench run --targets=node | grep /var/lib/kubelet/config.yaml -B2

4.1.9 Run the following command (using the config file location identified in the Audit step)

chmod 644 /var/lib/kubelet/config.yaml    # running this command is all that's needed
# 2. Check the kubelet client-ca-file argument; it already passes, so nothing to change
kube-bench run --targets=node | grep client-ca-file
[PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
Question 6 | Verify Platform Binaries
Task weight: 2%
There are four Kubernetes server binaries located at /opt/course/6/binaries. You’re provided with the following verified sha512 values for these:
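The hash values themselves are not reproduced above, but the verification procedure is simple: compute each binary's sha512 and compare it with the provided value. A sketch ($PROVIDED_SHA512 is a placeholder):

cd /opt/course/6/binaries
sha512sum ./*                                      # compute hashes for all four binaries
# grep against a provided value; empty output means the binary does NOT match
sha512sum kube-apiserver | grep $PROVIDED_SHA512
# comparing just the first and last characters of long hashes by eye is a common shortcut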
The Open Policy Agent and Gatekeeper have been installed to, among other things, enforce blacklisting of certain image registries. Alter the existing constraint and/or template to also blacklist images from very-bad-registry.com.
Test it by creating a single Pod using image very-bad-registry.com/image in Namespace default; it shouldn't work.
You can also verify your changes by looking at the existing Deployment untrusted in Namespace default; it uses an image from the new untrusted source. The OPA constraint should throw violation messages for this one.
# 1. Inspect the OPA resources
# List the constraints
kubectl get constraint
NAME                                                            AGE
blacklistimages.constraints.gatekeeper.sh/pod-trusted-images    10m

# Inspect the blacklistimages constraint
kubectl get blacklistimages pod-trusted-images -o yaml | less

# Edit the constraint template and add very-bad-registry.com to the image blacklist
kubectl edit constrainttemplates blacklistimages

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
...
spec:
  crd:
    spec:
      names:
        kind: BlacklistImages
  targets:
    - rego: |
        package k8strustedimages

        images {
          image := input.review.object.spec.containers[_].image
          not startswith(image, "docker-fake.io/")
          not startswith(image, "google-gcr-fake.com/")
          not startswith(image, "very-bad-registry.com/")  # add this line
        }

        violation[{"msg": msg}] {
          not images
          msg := "not trusted image!"
        }
      target: admission.k8s.gatekeeper.sh

# Create a Pod to verify the change is enforced
kubectl run opa-test --image=very-bad-registry.com/image
Error from server ([denied by pod-trusted-images] not trusted image!): admission webhook "validation.gatekeeper.sh" denied the request: [denied by pod-trusted-images] not trusted image!
Question 8 | Secure Kubernetes Dashboard
Task weight: 3%
The Kubernetes Dashboard is installed in Namespace kubernetes-dashboard and is configured to:
Allow users to “skip login”
Allow insecure access (HTTP without authentication)
Allow basic authentication
Allow access from outside the cluster
You are asked to make it more secure by:
Deny users the ability to “skip login”
Deny insecure access, enforce HTTPS (self-signed certificates are ok for now)
Add the --auto-generate-certificates argument
Enforce authentication using a token (with possibility to use RBAC)
# Test
kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP       ...   PORT(S)
dashboard-metrics-scraper   ClusterIP   10.111.171.247   ...   8000/TCP
kubernetes-dashboard        ClusterIP   10.100.118.128   ...   9090/TCP,443/TCP

curl http://192.168.100.11:32520
# The old NodePort is no longer reachable.
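The remediation itself happens on the dashboard Deployment and Service. A sketch, assuming the standard kubernetes-dashboard install (Deployment and Service are both named kubernetes-dashboard; flag names are from the upstream dashboard documentation):

kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard
# in the container args:
#   remove --enable-skip-login                 (denies "skip login")
#   remove --enable-insecure-login and any insecure port flags (enforces HTTPS)
#   add    --auto-generate-certificates        (self-signed certs for HTTPS)
#   add    --authentication-mode=token         (token auth, works with RBAC)

kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
# change type: NodePort to type: ClusterIP so the dashboard is no longer
# reachable from outside the cluster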
Question 9 | AppArmor Profile
Task weight: 3%
Some containers need to run more secure and restricted. There is an existing AppArmor profile located at /opt/course/9/profile for this.
Install the AppArmor profile on Node cluster1-worker1. Connect using ssh cluster1-worker1.
Add label security=apparmor to the Node
Create a Deployment named apparmor in Namespace default with:
One replica of image nginx:1.19.2
NodeSelector for security=apparmor
Single container named c1 with the AppArmor profile enabled
The Pod might not run properly with the profile enabled. Write the logs of the Pod into /opt/course/9/logs so another team can work on getting the application running.
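A sketch of the full flow, assuming the profile inside /opt/course/9/profile is named very-secure (check the profile line in the file for the actual name):

ssh cluster1-worker1
apparmor_parser /opt/course/9/profile      # load the profile into the kernel
aa-status | grep very-secure               # confirm it is loaded
exit

kubectl label node cluster1-worker1 security=apparmor

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apparmor
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apparmor
  template:
    metadata:
      labels:
        app: apparmor
      annotations:
        # pre-1.30 annotation syntax; the profile name after localhost/ is an assumption
        container.apparmor.security.beta.kubernetes.io/c1: localhost/very-secure
    spec:
      nodeSelector:
        security: apparmor
      containers:
      - name: c1
        image: nginx:1.19.2

kubectl logs deploy/apparmor > /opt/course/9/logs   # capture logs for the other team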
Team purple wants to run some of their workloads more securely. Worker node cluster1-worker2 has the container engine containerd already installed, and it's configured to support the runsc/gvisor runtime.
Create a RuntimeClass named gvisor with handler runsc.
Create a Pod that uses the RuntimeClass. The Pod should be in Namespace team-purple, named gvisor-test and of image nginx:1.19.2. Make sure the Pod runs on cluster1-worker2.
Write the dmesg output of the successfully started Pod into /opt/course/10/gvisor-test-dmesg.
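Steps 1 and 2 are not shown above; a sketch (the handler name runsc is given by the task, and nodeName pins the Pod to the prepared node):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-test
  namespace: team-purple
spec:
  runtimeClassName: gvisor
  nodeName: cluster1-worker2     # ensure the Pod runs where runsc is configured
  containers:
  - name: gvisor-test
    image: nginx:1.19.2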
# 3. Write the dmesg output to /opt/course/10/gvisor-test-dmesg
kubectl -n team-purple exec gvisor-test -- dmesg > /opt/course/10/gvisor-test-dmesg

[    0.000000] Starting gVisor...
[    0.417740] Checking naughty and nice process list...
[    0.623721] Waiting for children...
[    0.902192] Gathering forks...
[    1.258087] Committing treasure map to memory...
[    1.653149] Generating random numbers by fair dice roll...
[    1.918386] Creating cloned children...
[    2.137450] Digging up root...
[    2.369841] Forking spaghetti code...
[    2.840216] Rewriting operating system in Javascript...
[    2.956226] Creating bureaucratic processes...
[    3.329981] Ready!
Question 11 | Secrets in ETCD
Task weight: 7%
There is an existing Secret called database-access in Namespace team-green.
Read the complete Secret content directly from ETCD (using etcdctl) and store it into /opt/course/11/etcd-secret-content. Write the plain and decoded Secret’s value of key “pass” into /opt/course/11/database-password.
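A sketch of the etcd read. Run it on the controlplane node that hosts etcd for this cluster; the certificate paths below are kubeadm defaults and should be cross-checked against ps -ef | grep etcd:

ETCDCTL_API=3 etcdctl \
  --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
  --key /etc/kubernetes/pki/apiserver-etcd-client.key \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  get /registry/secrets/team-green/database-access \
  > /opt/course/11/etcd-secret-content

# etcd stores the Secret bytes raw, so the value of key "pass" is readable in the
# dump; copy it (decoded) into /opt/course/11/database-password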
You’re asked to investigate a possible permission escape in Namespace restricted. The context authenticates as user restricted, which has only limited permissions and shouldn't be able to read Secret values.
Try to find the password-key values of the Secrets secret1, secret2 and secret3 in Namespace restricted. Write the decoded plaintext values into files /opt/course/12/secret1, /opt/course/12/secret2 and /opt/course/12/secret3.
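The restricted user cannot read Secrets through the API, so the usual escape routes are Secrets exposed to Pods the user can exec into. A sketch of that investigation (pod names and paths are placeholders):

kubectl -n restricted auth can-i get secrets      # expect: no
kubectl -n restricted get pods
# look for Secrets surfacing as files, env variables or ServiceAccount tokens:
kubectl -n restricted exec <pod> -- mount | grep -i secret
kubectl -n restricted exec <pod> -- env | grep -i pass
kubectl -n restricted exec <pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
# a stolen SA token can then be replayed against the API:
kubectl -n restricted get secret secret3 -o yaml --token <stolen-token>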
There is a metadata service available at http://192.168.100.21:32000 on which Nodes can reach sensitive data, like cloud credentials for initialisation. By default, all Pods in the cluster also have access to this endpoint. The DevSecOps team has asked you to restrict access to this metadata server.
In Namespace metadata-access:
Create a NetworkPolicy named metadata-deny which prevents egress to 192.168.100.21 for all Pods but still allows access to everything else
Create a NetworkPolicy named metadata-allow which allows Pods having label role: metadata-accessor to access endpoint 192.168.100.21
There are existing Pods in the target Namespace with which you can test your policies, but don’t change their labels.
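A sketch of both policies; metadata-deny uses an ipBlock except clause so all other egress keeps working:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-deny
  namespace: metadata-access
spec:
  podSelector: {}                  # all Pods in the Namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 192.168.100.21/32        # everything except the metadata endpoint
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-allow
  namespace: metadata-access
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.100.21/32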
There are Pods in Namespace team-yellow. A security investigation noticed that some processes running in these Pods are using the Syscall kill, which is forbidden by a Team Yellow internal policy.
Find the offending Pod(s) and remove these by reducing the replicas of the parent Deployment to 0.
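A sketch of hunting the syscall on the node (IDs and names are placeholders; first check where the team-yellow Pods are scheduled):

ssh cluster1-worker1                          # node name is an assumption
crictl pods --namespace team-yellow           # map Pods to pod sandbox IDs
crictl ps -p <pod-id>                         # containers of one Pod
crictl inspect <container-id> | grep pid      # host PID of the container process
strace -f -p <pid> 2>&1 | grep kill           # watch for the kill syscall
exit

kubectl -n team-yellow scale deploy <offending-deployment> --replicas 0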
In Namespace team-pink there is an existing Nginx Ingress resource named secure which accepts two paths, /app and /api, which point to different ClusterIP Services.
From your main terminal you can connect to it using for example:
Right now it uses a default generated TLS certificate by the Nginx Ingress Controller.
You’re asked to instead use the key and certificate provided at /opt/course/15/tls.key and /opt/course/15/tls.crt. As it’s a self-signed certificate you need to use curl -k when connecting to it.
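A sketch: wrap the files in a TLS Secret and reference it from the Ingress (the Secret name tls-secret is a choice, and the host shown is a placeholder; reuse whatever host the existing Ingress already declares):

kubectl -n team-pink create secret tls tls-secret \
  --key /opt/course/15/tls.key --cert /opt/course/15/tls.crt

# then add to the Ingress spec of team-pink/secure:
spec:
  tls:
  - hosts:
    - secure-ingress.test      # placeholder host
    secretName: tls-secret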
There is a Deployment image-verify in Namespace team-blue which runs image registry.killer.sh:5000/image-verify:v1. DevSecOps has asked you to improve this image by:
Changing the base image to alpine:3.12
Not installing curl
Updating nginx to use the version constraint >=1.18.0
Running the main process as user myuser
Do not add any new lines to the Dockerfile, just edit existing ones. The file is located at /opt/course/16/image/Dockerfile.
Tag your version as v2. You can build, tag and push using:
cd /opt/course/16/image
podman build -t registry.killer.sh:5000/image-verify:v2 .
podman run registry.killer.sh:5000/image-verify:v2    # to test your changes
podman push registry.killer.sh:5000/image-verify:v2
Make the Deployment use your updated image tag v2.
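After pushing, one way to switch the Deployment over (the container name image-verify is an assumption; confirm it with kubectl -n team-blue get deploy image-verify -o jsonpath='{.spec.template.spec.containers[*].name}'):

kubectl -n team-blue set image deploy/image-verify \
  image-verify=registry.killer.sh:5000/image-verify:v2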
Audit Logging has been enabled in the cluster with an Audit Policy located at /etc/kubernetes/audit/policy.yaml on cluster2-master1.
Change the configuration so that only one backup of the logs is stored.
Alter the Policy in a way that it only stores logs:
From Secret resources, level Metadata
From “system:nodes” userGroups, level RequestResponse
After you altered the Policy make sure to empty the log file so it only contains entries according to your changes, like using truncate -s 0 /etc/kubernetes/audit/logs/audit.log.
NOTE: You can use jq to render json more readable. cat data.json | jq
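A sketch of both changes. Keeping a single backup is an apiserver flag; the filtering is done in the Policy, with a catch-all None rule so nothing else is stored:

# /etc/kubernetes/manifests/kube-apiserver.yaml
    - --audit-log-maxbackup=1        # keep only one backup of the logs

# /etc/kubernetes/audit/policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: RequestResponse
  userGroups: ["system:nodes"]
- level: None                        # drop everything not matched above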
Namespace security contains five Secrets of type Opaque which can be considered highly confidential. The latest Incident-Prevention-Investigation revealed that ServiceAccount p.auster had too broad access to the cluster for some time. This SA should've never had access to any Secrets in that Namespace.
Find out which Secrets in Namespace security this SA did access by looking at the Audit Logs under /opt/course/18/audit.log.
Change the password (to any new string) of only those Secrets that were accessed by this SA.
NOTE: You can use jq to render json more readable. cat data.json | jq
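A sketch of narrowing the log down with grep and jq (the log path matches the task; the final edit target is a placeholder):

cd /opt/course/18
grep "p.auster" audit.log | wc -l                          # all events for the SA
grep "p.auster" audit.log | grep Secret | grep list        # any Secret lists?
grep "p.auster" audit.log | grep Secret | grep get | jq '.objectRef.name'
# change the password field only in the Secrets that show up above:
kubectl -n security edit secret <accessed-secret>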
The Deployment immutable-deployment in Namespace team-purple should run immutable; it's created from file /opt/course/19/immutable-deployment.yaml. Even after a successful break-in, it shouldn't be possible for an attacker to modify the filesystem of the running container.
Modify the Deployment in a way that no processes inside the container can modify the local filesystem; only the /tmp directory should remain writeable. Don't modify the Docker image.
Save the updated YAML under /opt/course/19/immutable-deployment-new.yaml and update the running Deployment.
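A sketch of the relevant part of the new Pod template (container name is illustrative; keep whatever the original file defines):

spec:
  containers:
  - name: busybox                        # placeholder container name
    securityContext:
      readOnlyRootFilesystem: true       # root filesystem becomes immutable
    volumeMounts:
    - name: tmp
      mountPath: /tmp                    # only /tmp stays writeable
  volumes:
  - name: tmp
    emptyDir: {}

kubectl delete -f /opt/course/19/immutable-deployment.yaml
kubectl apply -f /opt/course/19/immutable-deployment-new.yaml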
The Release Engineering Team has shared some YAML manifests and Dockerfiles with you to review. The files are located under /opt/course/22/files.
As a container security expert, you are asked to perform a manual static analysis and find out possible security issues with respect to unwanted credential exposure. Running processes as root is of no concern in this task.
Write the filenames which have issues into /opt/course/22/security-issues.
NOTE: In the Dockerfile and YAML manifests, assume that the referred files, folders, secrets and volume mounts are present. Disregard syntax or logic errors.
# Add MySQL configuration
COPY my.cnf /etc/mysql/conf.d/my.cnf
COPY mysqld_charset.cnf /etc/mysql/conf.d/mysqld_charset.cnf
RUN apt-get update && \
    apt-get -yq install mysql-server-5.6 &&
# Add MySQL scripts
COPY import_sql.sh /import_sql.sh
COPY run.sh /run.sh
# Configure credentials
COPY secret-token .                  # LAYER X
RUN /etc/register.sh ./secret-token  # LAYER Y
RUN rm ./secret-token                # delete secret token again # LAYER Z
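This last snippet is the classic layered-secret issue: every Dockerfile instruction creates its own layer, so secret-token copied in LAYER X remains retrievable from the image even though LAYER Z deletes it. You can confirm with podman (which this section already uses); the image name is a placeholder:

podman build -t layer-test .
podman history layer-test                 # LAYER X still lists the COPY of secret-token
podman save layer-test -o layer-test.tar  # the layer tarballs still contain the file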