The DevOps team would like to get the list of all Namespaces in the cluster. Get the list and save it to /opt/course/1/namespaces.
Answer:
k get ns > /opt/course/1/namespaces
The content should then look like:
# /opt/course/1/namespaces
NAME              STATUS   AGE
default           Active   150m
earth             Active   76m
jupiter           Active   76m
kube-public       Active   150m
kube-system       Active   150m
mars              Active   76m
mercury           Active   76m
moon              Active   76m
neptune           Active   76m
pluto             Active   76m
saturn            Active   76m
shell-intern      Active   76m
sun               Active   76m
venus             Active   76m
Question 2 | Pods
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container.
Your manager would like to run a command manually on occasion to output the status of that exact Pod. Please write a command that does this into /opt/course/2/pod1-status-command.sh. The command should use kubectl.
Answer:
k run -h # help

# check the export on the very top of this document so we can use $do
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml

vim 2.yaml
Change the container name in 2.yaml to pod1-container:
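A sketch of how 2.yaml could look after the change (generated fields trimmed):

# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container   # change
  restartPolicy: Always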
➜ k create -f 2.yaml
pod/pod1 created

➜ k get pod
NAME   READY   STATUS              RESTARTS   AGE
pod1   0/1     ContainerCreating   0          6s

➜ k get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          30s
# /opt/course/2/pod1-status-command.sh
kubectl -n default get pod pod1 -o jsonpath="{.status.phase}"
To test the command:
➜ sh /opt/course/2/pod1-status-command.sh
Running
Question 3 | Job
Team Neptune needs a Job template located at /opt/course/3/job.yaml. This Job should run image busybox:1.31.0 and execute sleep 2 && echo done. It should be in namespace neptune, run a total of 3 times and should execute 2 runs in parallel.
Start the Job and check its history. Each pod created by the Job should have the label id: awesome-job. The job should be named neb-new-job and the container neb-new-job-container.
Answer:
k -n neptune create job -h

# check the export on the very top of this document so we can use $do
k -n neptune create job neb-new-job --image=busybox:1.31.0 $do > /opt/course/3/job.yaml -- sh -c "sleep 2 && echo done"

vim /opt/course/3/job.yaml
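A sketch of /opt/course/3/job.yaml after editing; completions, parallelism, the Pod label and the container name all come from the task:

# /opt/course/3/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: neb-new-job
  namespace: neptune
spec:
  completions: 3        # run a total of 3 times
  parallelism: 2        # execute 2 runs in parallel
  template:
    metadata:
      labels:
        id: awesome-job # label each created Pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 2 && echo done
        image: busybox:1.31.0
        name: neb-new-job-container   # container name from the task
      restartPolicy: Never

Then we start the Job and check its history:

k -f /opt/course/3/job.yaml create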
➜ k -n neptune describe job neb-new-job
...
Events:
  Type    Reason            Age    From            Message
  ----    ------            ----   ----            -------
  Normal  SuccessfulCreate  2m52s  job-controller  Created pod: neb-new-job-jhq2g
  Normal  SuccessfulCreate  2m52s  job-controller  Created pod: neb-new-job-vf6ts
  Normal  SuccessfulCreate  2m42s  job-controller  Created pod: neb-new-job-gm8sz
From the Age column we can see that two Pods ran in parallel and the third one started afterwards, just as required in the task.
Question 4 | Helm Management
Team Mercury asked you to perform some operations using Helm, all in Namespace mercury:
Delete release internal-issue-report-apiv1
Upgrade release internal-issue-report-apiv2 to any newer version of chart bitnami/nginx available
Install a new release internal-issue-report-apache of chart bitnami/apache. The Deployment should have two replicas, set these via Helm-values during install
There seems to be a broken release, stuck in pending-install state. Find it and delete it
Answer:
Helm Chart : Kubernetes YAML template-files combined into a single package, Values allow customisation
Helm Release : Installed instance of a Chart
Helm Values : Allow customising the YAML template-files in a Chart when creating a Release
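1.

First we delete the requested release. A minimal sketch, assuming the release exists in Namespace mercury as stated in the task:

helm -n mercury ls                                     # list releases in Namespace mercury
helm -n mercury uninstall internal-issue-report-apiv1  # delete the release

2.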
Next we need to upgrade a release, for this we could first list the charts of the repo:
➜ helm repo list
NAME     URL
bitnami  https://charts.bitnami.com/bitnami

➜ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈

➜ helm search repo nginx
NAME            CHART VERSION   APP VERSION   DESCRIPTION
bitnami/nginx   9.5.2           1.21.1        Chart for the nginx server ...
Here we see that a newer chart version 9.5.2 is available. But the task only requires us to upgrade to any newer chart version available, so we can simply run:
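A sketch of the upgrade command; the release already uses the bitnami/nginx chart according to the task, so no explicit version is needed to get the newest one:

helm -n mercury upgrade internal-issue-report-apiv2 bitnami/nginx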
INFO: Also check out helm rollback for undoing a helm rollout/upgrade
3.
Now we're asked to install a new release with a customised values setting. For this we first list all possible value settings of the chart, which we can do via:
helm show values bitnami/apache          # will show a long list of all possible value-settings
helm show values bitnami/apache | yq e   # parse yaml and show with colors
It's a huge list, but if we search in it we should find the setting replicaCount: 1 at top level. This means we can run:
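A sketch of the install command, setting the replica count via Helm values (assuming the chart's top-level value is replicaCount as found above):

helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2

4.

Finally we need to find and delete the release stuck in pending-install. Listing with -a also shows releases in pending states; the stuck release name below is a placeholder:

helm -n mercury ls -a                            # shows releases in all states
helm -n mercury uninstall <stuck-release-name>   # delete the broken release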
Thank you Helm for making our lives easier! (Till something breaks)
Question 5 | ServiceAccount, Secret
Team Neptune has its own ServiceAccount named neptune-sa-v2 in Namespace neptune. A coworker needs the token from the Secret that belongs to that ServiceAccount. Write the base64 decoded token to file /opt/course/5/token.
Answer:
Since K8s 1.24, Secrets won’t be created automatically for ServiceAccounts any longer. But it’s still possible to create a Secret manually and attach it to a ServiceAccount by setting the correct annotation on the Secret. This was done for this task.
k -n neptune get sa # get overview
k -n neptune get secrets # shows all secrets of namespace
k -n neptune get secrets -oyaml | grep annotations -A 1 # shows secrets with first annotation
If a Secret belongs to a ServiceAccount, it'll have the annotation kubernetes.io/service-account.name. Here the Secret we're looking for is neptune-secret-1.
➜ k -n neptune get secret neptune-secret-1 -o yaml
apiVersion: v1
data:
...
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltNWFaRmRxWkRKMmFHTnZRM0JxV0haT1IxZzFiM3BJY201SlowaEhOV3hUWmt3elFuRmFhVEZhZDJNaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUp1WlhCMGRXNWxJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbTVsY0hSMWJtVXRjMkV0ZGpJdGRHOXJaVzR0Wm5FNU1tb2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzV1WVcxbElqb2libVZ3ZEhWdVpTMXpZUzEyTWlJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExuVnBaQ0k2SWpZMlltUmpOak0yTFRKbFl6TXROREpoWkMwNE9HRTFMV0ZoWXpGbFpqWmxPVFpsTlNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMmFXTmxZV05qYjNWdWREcHVaWEIwZFc1bE9tNWxjSFIxYm1VdGMyRXRkaklpZlEuVllnYm9NNENUZDBwZENKNzh3alV3bXRhbGgtMnZzS2pBTnlQc2gtNmd1RXdPdFdFcTVGYnc1WkhQdHZBZHJMbFB6cE9IRWJBZTRlVU05NUJSR1diWUlkd2p1Tjk1SjBENFJORmtWVXQ0OHR3b2FrUlY3aC1hUHV3c1FYSGhaWnp5NHlpbUZIRzlVZm1zazVZcjRSVmNHNm4xMzd5LUZIMDhLOHpaaklQQXNLRHFOQlF0eGctbFp2d1ZNaTZ2aUlocnJ6QVFzME1CT1Y4Mk9KWUd5Mm8tV1FWYzBVVWFuQ2Y5NFkzZ1QwWVRpcVF2Y3pZTXM2bno5dXQtWGd3aXRyQlk2VGo5QmdQcHJBOWtfajVxRXhfTFVVWlVwUEFpRU43T3pka0pzSThjdHRoMTBseXBJMUFlRnI0M3Q2QUx5clFvQk0zOWFiRGZxM0Zrc1Itb2NfV013
kind: Secret
...
This shows the base64 encoded token. To get the decoded one we could pipe it manually through base64 -d or we simply do:
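A sketch of one way to do this, extracting the data field with jsonpath and decoding it directly into the requested file:

k -n neptune get secret neptune-secret-1 -o jsonpath="{.data.token}" | base64 -d > /opt/course/5/token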
Question 6 | ReadinessProbe
Create a single Pod named pod6 in Namespace default of image busybox:1.31.0. The Pod should have a readiness-probe executing cat /tmp/ready. It should initially wait 5 and periodically wait 10 seconds. This will set the container ready only if the file /tmp/ready exists.
The Pod should run the command touch /tmp/ready && sleep 1d, which will create the necessary file to be ready and then idles. Create the Pod and confirm it starts.
Answer:
k run pod6 --image=busybox:1.31.0 $do --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml

vim 6.yaml
Search for a readiness-probe example on https://kubernetes.io/docs, then copy and alter the relevant section for the task:
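A sketch of how 6.yaml could look after adding the probe; the probe fields come from the task, other generated fields are trimmed:

# 6.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    readinessProbe:            # add
      exec:                    # add
        command:               # add
        - cat                  # add
        - /tmp/ready           # add
      initialDelaySeconds: 5   # add
      periodSeconds: 10        # add
  restartPolicy: Always

Then create the Pod:

k -f 6.yaml create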
Running k get pod pod6 we should see the Pod being created and finally becoming ready:
➜ k get pod pod6
NAME   READY   STATUS              RESTARTS   AGE
pod6   0/1     ContainerCreating   0          2s

➜ k get pod pod6
NAME   READY   STATUS    RESTARTS   AGE
pod6   0/1     Running   0          7s

➜ k get pod pod6
NAME   READY   STATUS    RESTARTS   AGE
pod6   1/1     Running   0          15s
We see that the Pod is finally ready.
Question 7 | Pods, Namespaces
The board of Team Neptune decided to take over control of one e-commerce webserver from Team Saturn. The administrator who once set up this webserver is not part of the organisation any longer. All information you could get was that the e-commerce system is called my-happy-shop.
Search for the correct Pod in Namespace saturn and move it to Namespace neptune. It doesn't matter if you shut it down and spin it up again; it probably doesn't have any customers anyway.
Answer:
The Pod names don't reveal any information. We assume the Pod we are searching for has a label or annotation with the name my-happy-shop, so we search for it:
k -n saturn describe pod # describe all pods, then manually look for it

# or do some filtering like this
k -n saturn get pod -o yaml | grep my-happy-shop -A10
We see that the webserver we're looking for is webserver-sat-003.
k -n saturn get pod webserver-sat-003 -o yaml > 7_webserver-sat-003.yaml # export
vim 7_webserver-sat-003.yaml
Change the Namespace to neptune, and also remove the status: section, the token volume, the token volumeMount and the nodeName, otherwise the new Pod won't start. The final file could look as clean as this:
# 7_webserver-sat-003.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: this is the server for the E-Commerce System my-happy-shop
  labels:
    id: webserver-sat-003
  name: webserver-sat-003
  namespace: neptune # new namespace here
spec:
  containers:
  - image: nginx:1.16.1-alpine
    imagePullPolicy: IfNotPresent
    name: webserver-sat
  restartPolicy: Always
Then we execute:
k -n neptune create -f 7_webserver-sat-003.yaml
➜ k -n neptune get pod | grep webserver
webserver-sat-003   1/1   Running   0   22s
It seems the server is running in Namespace neptune, so we can do:
k -n saturn delete pod webserver-sat-003 --force --grace-period=0
Let’s confirm only one is running:
➜ k get pod -A | grep webserver-sat-003
neptune   webserver-sat-003   1/1   Running   0   6s
This should list only one pod called webserver-sat-003 in Namespace neptune, status Running.
Question 8 | Deployment, Rollouts
There is an existing Deployment named api-new-c32 in Namespace neptune. A developer made an update to the Deployment, but the updated version never came online. Check the Deployment history and find a revision that works, then rollback to it. Could you tell Team Neptune what the error was so it doesn't happen again?
Answer:
k -n neptune get deploy # overview
k -n neptune rollout -h
k -n neptune rollout history -h
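To check the history and find a failing Pod we could run something like this; the Pod name used in the following commands is taken from the output of the second command:

k -n neptune rollout history deploy api-new-c32   # list the revisions
k -n neptune get pod | grep api-new-c32           # find a Pod that is not coming up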
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i error
   ...
   Error: ImagePullBackOff
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i image
    Image:          ngnix:1.16.3
    Image ID:
      Reason:       ImagePullBackOff
  Warning  Failed   4m28s (x616 over 144m)  kubelet, gke-s3ef67020-28c5-45f7--default-pool-248abd4f-s010  Error: ImagePullBackOff
Someone seems to have added a new image with a spelling mistake in the name, ngnix:1.16.3. That's what we can tell Team Neptune!
Now let’s revert to the previous version:
k -n neptune rollout undo deploy api-new-c32
Does this one work?
➜ k -n neptune get deploy api-new-c32
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
api-new-c32   3/3     3            3           146m
Yes! All up-to-date and available.
A fast way to get an overview of the ReplicaSets of a Deployment and their images is:
k -n neptune get rs -o wide | grep api-new-c32
Question 9 | Pod -> Deployment
In Namespace pluto there is a single Pod named holy-api. It has been working okay for a while now, but Team Pluto needs it to be more reliable.
Convert the Pod into a Deployment named holy-api with 3 replicas and delete the single Pod once done. The raw Pod template file is available at /opt/course/9/holy-api-pod.yaml.
In addition, the new Deployment should set allowPrivilegeEscalation: false and privileged: false for the security context on container level.
Please create the Deployment and save its yaml under /opt/course/9/holy-api-deployment.yaml.
Answer
There are multiple ways to do this; one is to copy a Deployment example from https://kubernetes.io/docs and then merge it with the existing Pod yaml. That's what we will do now:
cp /opt/course/9/holy-api-pod.yaml /opt/course/9/holy-api-deployment.yaml # make a copy!

vim /opt/course/9/holy-api-deployment.yaml
Now copy/use a Deployment example yaml and put the Pod's metadata: and spec: into the Deployment's template: section:
# /opt/course/9/holy-api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holy-api # name stays the same
  namespace: pluto # important
spec:
  replicas: 3 # 3 replicas
  selector:
    matchLabels:
      id: holy-api # set the correct selector
  template: # => from here down its the same as the pods metadata: and spec: sections
    metadata:
      labels:
        id: holy-api
      name: holy-api
    spec:
      containers:
      - env:
        - name: CACHE_KEY_1
          value: b&MTCi0=[T66RXm!jO@
        - name: CACHE_KEY_2
          value: PCAILGej5Ld@Q%{Q1=#
        - name: CACHE_KEY_3
          value: 2qz-]2OJlWDSTn_;RFQ
        image: nginx:1.17.3-alpine
        name: holy-api-container
        securityContext:                  # add
          allowPrivilegeEscalation: false # add
          privileged: false               # add
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
      volumes:
      - emptyDir: {}
        name: cache-volume1
      - emptyDir: {}
        name: cache-volume2
      - emptyDir: {}
        name: cache-volume3
To indent multiple lines using vim you should set the shiftwidth using :set shiftwidth=2. Then mark multiple lines using Shift v and the up/down keys.
To then indent the marked lines press > or < and to repeat the action press .
Next create the new Deployment :
k -f /opt/course/9/holy-api-deployment.yaml create
and confirm it’s running:
➜ k -n pluto get pod | grep holy
NAME                        READY   STATUS    RESTARTS   AGE
holy-api                    1/1     Running   0          19m
holy-api-5dbfdb4569-8qr5x   1/1     Running   0          30s
holy-api-5dbfdb4569-b5clh   1/1     Running   0          30s
holy-api-5dbfdb4569-rj2gz   1/1     Running   0          30s
Finally delete the single Pod :
k -n pluto delete pod holy-api --force --grace-period=0
Question 10 | Service, Logs
Team Pluto needs a new cluster internal Service. Create a ClusterIP Service named project-plt-6cc-svc in Namespace pluto. This Service should expose a single Pod named project-plt-6cc-api of image nginx:1.17.3-alpine, create that Pod as well. The Pod should be identified by label project: plt-6cc-api. The Service should use tcp port redirection of 3333:80.
Finally use for example curl from a temporary nginx:alpine Pod to get the response from the Service. Write the response into /opt/course/10/service_test.html. Also check if the logs of Pod project-plt-6cc-api show the request and write those into /opt/course/10/service_test.log.
Answer
k -n pluto run project-plt-6cc-api --image=nginx:1.17.3-alpine --labels project=plt-6cc-api
This will create the requested Pod . In yaml it would look like this:
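A sketch of the equivalent Pod yaml (generated fields omitted):

apiVersion: v1
kind: Pod
metadata:
  labels:
    project: plt-6cc-api
  name: project-plt-6cc-api
  namespace: pluto
spec:
  containers:
  - image: nginx:1.17.3-alpine
    name: project-plt-6cc-api

Next the Service. One way is to expose the Pod, which generates a ClusterIP Service whose selector matches the Pod's labels (a sketch):

k -n pluto expose pod project-plt-6cc-api --name project-plt-6cc-svc --port 3333 --target-port 80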
We could also use create service but then we would need to change the yaml afterwards:
k -n pluto create service -h # help
k -n pluto create service clusterip -h # help
k -n pluto create service clusterip project-plt-6cc-svc --tcp 3333:80 $do
# now we would need to set the correct selector labels
Check the Service is running:
➜ k -n pluto get pod,svc | grep 6cc
pod/project-plt-6cc-api       1/1         Running         0        9m42s
service/project-plt-6cc-svc   ClusterIP   10.31.241.234   <none>   3333/TCP   2m24s
➜ k -n pluto get ep
NAME                  ENDPOINTS       AGE
project-plt-6cc-svc   10.28.2.32:80   84m
Yes, endpoint there! Finally we check the connection using a temporary Pod :
➜ k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc.pluto:3333
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  32210      0 --:--:-- --:--:-- --:--:-- 32210
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...
Great! Notice that we use Kubernetes Namespace DNS resolving (project-plt-6cc-svc.pluto) here. We could use just the Service name if we also spun up the temporary Pod in Namespace pluto.
And now really finally copy or pipe the html content into /opt/course/10/service_test.html.
# /opt/course/10/service_test.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
...
Also the requested logs:
k -n pluto logs project-plt-6cc-api > /opt/course/10/service_test.log
Question 11 | Container, Docker, Podman
During the last monthly meeting you mentioned your strong expertise in container technology. Now the Build&Release team of department Sun is in need of your insider knowledge. There are files to build a container image located at /opt/course/11/image. The container will run a Golang application which outputs information to stdout. You're asked to perform the following tasks:
NOTE: Make sure to run all commands as user k8s, for docker use sudo docker
Change the Dockerfile. The value of the environment variable SUN_CIPHER_ID should be set to the hardcoded value 5b9c1065-e39d-4a43-a04a-e59bcea3e03f
Build the image using Docker, named registry.killer.sh:5000/sun-cipher, tagged as latest and v1-docker, push these to the registry
Build the image using Podman, named registry.killer.sh:5000/sun-cipher, tagged as v1-podman, push it to the registry
Run a container using Podman, which keeps running in the background, named sun-cipher using image registry.killer.sh:5000/sun-cipher:v1-podman. Run the container from k8s@terminal and not root@terminal
Write the logs your container sun-cipher produced into /opt/course/11/logs. Then write a list of all running Podman containers into /opt/course/11/containers
Answer
Dockerfile : list of commands from which an Image can be built
Image : binary file which includes all data/requirements to be run as a Container
Container : running instance of an Image
Registry : place where we can push/pull Images to/from
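1.

First the Dockerfile change. The exact file content differs, but the relevant line should end up with the hardcoded value from the task:

# /opt/course/11/image/Dockerfile (relevant line)
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f

2.

Next we build the image using Docker with both requested tags and push them to the registry: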
➜ cd /opt/course/11/image

➜ sudo docker build -t registry.killer.sh:5000/sun-cipher:latest -t registry.killer.sh:5000/sun-cipher:v1-docker .
...
Successfully built 409fde3c5bf9
Successfully tagged registry.killer.sh:5000/sun-cipher:latest
Successfully tagged registry.killer.sh:5000/sun-cipher:v1-docker

➜ sudo docker image ls
REPOSITORY                           TAG         IMAGE ID       CREATED          SIZE
registry.killer.sh:5000/sun-cipher   latest      409fde3c5bf9   24 seconds ago   7.76MB
registry.killer.sh:5000/sun-cipher   v1-docker   409fde3c5bf9   24 seconds ago   7.76MB
...

➜ sudo docker push registry.killer.sh:5000/sun-cipher:latest
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Pushed
33e8713114f8: Pushed
latest: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739

➜ sudo docker push registry.killer.sh:5000/sun-cipher:v1-docker
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Layer already exists
33e8713114f8: Layer already exists
v1-docker: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739
There we go, built and pushed.
3.
Next we build the image using Podman. Here it’s only required to create one tag. The usage of Podman is very similar (for most cases even identical) to Docker:
➜ cd /opt/course/11/image

➜ podman build -t registry.killer.sh:5000/sun-cipher:v1-podman .
...
--> 38adc53bd92
Successfully tagged registry.killer.sh:5000/sun-cipher:v1-podman
38adc53bd92881d91981c4b537f4f1b64f8de1de1b32eacc8479883170cee537

➜ podman image ls
REPOSITORY                           TAG         IMAGE ID       CREATED         SIZE
registry.killer.sh:5000/sun-cipher   v1-podman   38adc53bd928   2 minutes ago   8.03 MB
...

➜ podman push registry.killer.sh:5000/sun-cipher:v1-podman
Getting image source signatures
Copying blob 4d0d60db9eb6 done
Copying blob 33e8713114f8 done
Copying config bfa1a225f8 done
Writing manifest to image destination
Storing signatures
Built and pushed using Podman.
4.
We'll create a container from the previously created image, using Podman, which keeps running in the background:
➜ podman run -d --name sun-cipher registry.killer.sh:5000/sun-cipher:v1-podman
f8199cba792f9fd2d1bd4decc9b7a9c0acfb975d95eda35f5f583c9efbf95589
5.
Finally we need to collect some information into files:
➜ podman ps
CONTAINER ID  IMAGE                                          COMMAND  ...
f8199cba792f  registry.killer.sh:5000/sun-cipher:v1-podman   ./app    ...

➜ podman ps > /opt/course/11/containers

➜ podman logs sun-cipher
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 7887
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1847
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4059
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1318
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4425
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2540
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 456
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 3300
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 694
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8511
2077/03/13 06:50:44 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8162
2077/03/13 06:50:54 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 5089

➜ podman logs sun-cipher > /opt/course/11/logs
This is not looking bad at all. Our container skills are back in town!
Question 12 | Storage, PV, PVC, Pod volume
Create a new PersistentVolume named earth-project-earthflower-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace earth named earth-project-earthflower-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment project-earthflower in Namespace earth which mounts that volume at /tmp/project-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
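Answer

The following is a sketch based on the requirements above; copy examples from https://kubernetes.io/docs and adjust them. The file names used here are arbitrary. The PV and PVC could look like this:

# 12_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: earth-project-earthflower-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

# 12_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: earth-project-earthflower-pvc
  namespace: earth
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

After creating both, the PVC should show STATUS Bound. The Deployment then mounts the claim; only the relevant parts are shown and the volume name data is an assumption that matches the Mounts output below:

# 12_dep.yaml (relevant parts of spec.template.spec)
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:
        - name: data
          mountPath: /tmp/project-data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: earth-project-earthflower-pvc

We can confirm the mount inside a Pod of the Deployment: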
➜ k -n earth describe pod project-earthflower-d6887f7c5-pn5wv | grep -A2 Mounts:
    Mounts:
      /tmp/project-data from data (rw)  # there it is
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)
Question 13 | Storage, StorageClass, PVC
Team Moonpie, which has the Namespace moon, needs more storage. Create a new PersistentVolumeClaim named moon-pvc-126 in that namespace. This claim should use a new StorageClass moon-retain with the provisioner set to moon-retainer and the reclaimPolicy set to Retain. The claim should request storage of 3Gi, an accessMode of ReadWriteOnce and should use the new StorageClass.
The provisioner moon-retainer will be created by another team, so it's expected that the PVC will not bind yet. Confirm this by writing the log message from the PVC into file /opt/course/13/pvc-126-reason.
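Answer

First the StorageClass; head to the docs, copy an example and adjust it to the task requirements (the file name is arbitrary):

vim 13_sc.yaml

# 13_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: moon-retain
provisioner: moon-retainer
reclaimPolicy: Retain

k -f 13_sc.yaml create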
Now the same for the PersistentVolumeClaim , head to the docs, copy an example and transform it into:
vim 13_pvc.yaml
# 13_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moon-pvc-126 # name as requested
  namespace: moon # important
spec:
  accessModes:
    - ReadWriteOnce # RWO
  resources:
    requests:
      storage: 3Gi # size
  storageClassName: moon-retain # uses our new storage class
k -f 13_pvc.yaml create
Next we check the status of the PVC :
➜ k -n moon get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
moon-pvc-126   Pending                                      moon-retain    2m57s
➜ k -n moon describe pvc moon-pvc-126
Name:          moon-pvc-126
...
Status:        Pending
...
Events:
...
  waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system administrator
This confirms that the PVC waits for the provisioner moon-retainer to be created. Finally we copy or write the event message into the requested location:
# /opt/course/13/pvc-126-reason
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system administrator
Question 14 | Secret, Secret-Volume, Secret-Env
You need to make changes on an existing Pod in Namespace moon called secret-handler. Create a new Secret secret1 which contains user=test and pass=pwd. The Secret's content should be available in Pod secret-handler as environment variables SECRET1_USER and SECRET1_PASS. The yaml for Pod secret-handler is available at /opt/course/14/secret-handler.yaml.
There is existing yaml for another Secret at /opt/course/14/secret2.yaml, create this Secret and mount it inside the same Pod at /tmp/secret2. Your changes should be saved under /opt/course/14/secret-handler-new.yaml. Both Secrets should only be available in Namespace moon.
Answer
k -n moon get pod # show pods
k -n moon create secret -h # help
k -n moon create secret generic -h # help
k -n moon create secret generic secret1 --from-literal user=test --from-literal pass=pwd
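Next we create the second Secret from the provided yaml and copy the Pod yaml so we can add our changes. A sketch of the relevant additions follows; the volume name and the assumption that the Secret inside /opt/course/14/secret2.yaml is named secret2 are mine:

k -n moon create -f /opt/course/14/secret2.yaml

cp /opt/course/14/secret-handler.yaml /opt/course/14/secret-handler-new.yaml
vim /opt/course/14/secret-handler-new.yaml

# /opt/course/14/secret-handler-new.yaml (relevant additions)
spec:
  containers:
  - name: secret-handler
    ...
    env:                          # add
    - name: SECRET1_USER          # add
      valueFrom:                  # add
        secretKeyRef:             # add
          name: secret1           # add
          key: user               # add
    - name: SECRET1_PASS          # add
      valueFrom:                  # add
        secretKeyRef:             # add
          name: secret1           # add
          key: pass               # add
    volumeMounts:                 # add
    - name: secret2-volume        # add
      mountPath: /tmp/secret2     # add
  volumes:                        # add
  - name: secret2-volume          # add
    secret:                       # add
      secretName: secret2         # add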
There is also the possibility to import all keys from a Secret as env variables at once, though the env variable names will then be the same as in the Secret , which doesn’t work for the requirements here:
containers:
- name: secret-handler
  ...
  envFrom:
  - secretRef: # also works for configMapRef
      name: secret1
Then we apply the changes:
k -f /opt/course/14/secret-handler.yaml delete --force --grace-period=0
k -f /opt/course/14/secret-handler-new.yaml create
Instead of running delete and create we can also use replace:
k -f /opt/course/14/secret-handler-new.yaml replace --force --grace-period=0
It was not requested directly, but you should always confirm it’s working:
➜ k -n moon exec secret-handler -- env | grep SECRET1
SECRET1_USER=test
SECRET1_PASS=pwd

➜ k -n moon exec secret-handler -- find /tmp/secret2
/tmp/secret2
/tmp/secret2/..data
/tmp/secret2/key
/tmp/secret2/..2019_09_11_09_03_08.147048594
/tmp/secret2/..2019_09_11_09_03_08.147048594/key

➜ k -n moon exec secret-handler -- cat /tmp/secret2/key
12345678
Question 15 | ConfigMap, Configmap-Volume
Team Moonpie has an nginx server Deployment called web-moon in Namespace moon. Someone started configuring it but it was never completed. To complete it, please create a ConfigMap called configmap-web-moon-html containing the content of file /opt/course/15/web-moon.html under the data key-name index.html.
The Deployment web-moon is already configured to work with this ConfigMap and serve its content. Test the nginx configuration for example using curl from a temporary nginx:alpine Pod.
➜ k -n moon describe pod web-moon-847496c686-2rzj4
...
  Warning  FailedMount  31s (x7 over 63s)  kubelet, gke-test-default-pool-ce83a51a-p6s4  MountVolume.SetUp failed for volume "html-volume" : configmaps "configmap-web-moon-html" not found
Good so far, now let’s create the missing ConfigMap :
k -n moon create configmap -h # help
k -n moon create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html # important to set the index.html key
This should create a ConfigMap with yaml like:
apiVersion: v1
data:
  index.html: | # notice the key index.html, this will be the filename when mounted
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Web Moon Webpage</title>
    </head>
    <body>
    This is some great content.
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: configmap-web-moon-html
  namespace: moon
After waiting a bit or deleting/recreating (k -n moon rollout restart deploy web-moon) the Pods we should see:
Looking much better. Finally we check if the nginx returns the correct content:
k -n moon get pod -o wide # get pod cluster IPs
Then use one IP to test the configuration:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.44.0.78
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   161  100   161    0     0  80500      0 --:--:-- --:--:-- --:--:--  157k
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Web Moon Webpage</title>
</head>
<body>
This is some great content.
</body>
For debugging or further checks we could find out more about the Pod's volume mounts:
➜ k -n moon describe pod web-moon-c77655cc-dc8v4 | grep -A2 Mounts:
    Mounts:
      /usr/share/nginx/html from html-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rvzcf (ro)
And check the mounted folder content:
➜ k -n moon exec web-moon-c77655cc-dc8v4 find /usr/share/nginx/html
/usr/share/nginx/html
/usr/share/nginx/html/..2019_09_11_10_05_56.336284411
/usr/share/nginx/html/..2019_09_11_10_05_56.336284411/index.html
/usr/share/nginx/html/..data
/usr/share/nginx/html/index.html
Here it was important that the file gets the name index.html and not the original web-moon.html, which is controlled through the ConfigMap data key.
Question 16 | Logging sidecar
The Tech Lead of Mercury2D decided it's time for more logging, to finally fight all these missing data incidents. There is an existing container named cleaner-con in Deployment cleaner in Namespace mercury. This container mounts a volume and writes logs into a file called cleaner.log.
The yaml for the existing Deployment is available at /opt/course/16/cleaner.yaml. Persist your changes at /opt/course/16/cleaner-new.yaml but also make sure the Deployment is running.
Create a sidecar container named logger-con, image busybox:1.31.0 , which mounts the same volume and writes the content of cleaner.log to stdout, you can use the tail -f command for this. This way it can be picked up by kubectl logs.
Check if the logs of the new container reveal something about the missing data incidents.
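Answer

First copy the yaml, then add the sidecar container. This is a sketch: the volume name logs and the mount path /var/log/cleaner are assumptions; reuse whatever volume and path the existing cleaner-con container writes cleaner.log to:

cp /opt/course/16/cleaner.yaml /opt/course/16/cleaner-new.yaml
vim /opt/course/16/cleaner-new.yaml

# /opt/course/16/cleaner-new.yaml (addition under spec.template.spec.containers)
      - name: logger-con                                                 # add
        image: busybox:1.31.0                                            # add
        command: ["sh", "-c", "tail -f /var/log/cleaner/cleaner.log"]    # add, path is an assumption
        volumeMounts:                                                    # add
        - name: logs                                                     # add, same volume as cleaner-con
          mountPath: /var/log/cleaner                                    # add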
Then apply the changes and check the logs of the sidecar:
k -f /opt/course/16/cleaner-new.yaml apply
This will cause a deployment rollout of which we can get more details:
k -n mercury rollout history deploy cleaner
k -n mercury rollout history deploy cleaner --revision 1
k -n mercury rollout history deploy cleaner --revision 2
Check Pod statuses:
➜ k -n mercury get pod
NAME                       READY   STATUS     RESTARTS   AGE
cleaner-86b7758668-9pw6t   2/2     Running    0          6s
cleaner-86b7758668-qgh4v   0/2     Init:0/1   0          1s

➜ k -n mercury get pod
NAME                       READY   STATUS    RESTARTS   AGE
cleaner-86b7758668-9pw6t   2/2     Running   0          14s
cleaner-86b7758668-qgh4v   2/2     Running   0          9s
Finally check the logs of the logging sidecar container:
➜ k -n mercury logs cleaner-576967576c-cqtgx -c logger-con
init
Wed Sep 11 10:45:44 UTC 2099: remove random file
Wed Sep 11 10:45:45 UTC 2099: remove random file
...
Mystery solved, something is removing files at random ;) It’s important to understand how containers can communicate with each other using volumes.
Question 17 | InitContainer
Last lunch you told your coworker from department Mars Inc how amazing InitContainers are. Now he would like to see one in action. There is a Deployment yaml at /opt/course/17/test-init-container.yaml. This Deployment spins up a single Pod of image nginx:1.17.3-alpine and serves files from a mounted volume, which is empty right now.
Create an InitContainer named init-con which also mounts that volume and creates a file index.html with content check this out! in the root of the mounted volume. For this test we ignore that it doesn’t contain valid html.
The InitContainer should be using image busybox:1.31.0. Test your implementation for example using curl from a temporary nginx:alpine Pod.
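Answer

Copy the Deployment yaml into a working file and add the InitContainer. A sketch: the volume name web-content and its mount path are assumptions; reuse the volume that is already defined in /opt/course/17/test-init-container.yaml and mounted by the nginx container:

cp /opt/course/17/test-init-container.yaml 17_test-init-container.yaml
vim 17_test-init-container.yaml

# 17_test-init-container.yaml (addition under spec.template.spec)
      initContainers:                      # add
      - name: init-con                     # add
        image: busybox:1.31.0              # add
        command: ["sh", "-c", "echo 'check this out!' > /tmp/web-content/index.html"]   # add, path is an assumption
        volumeMounts:                      # add
        - name: web-content                # add, same volume the nginx container mounts
          mountPath: /tmp/web-content      # add

k -f 17_test-init-container.yaml create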
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
check this out!
Beautiful.
Question 18 | Service misconfiguration
There seems to be an issue in Namespace mars where the ClusterIP Service manager-api-svc should make the Pods of Deployment manager-api-deployment available inside the cluster.
You can test this with curl manager-api-svc.mars:4444 from a temporary nginx:alpine Pod. Check for the misconfiguration and apply a fix.
Answer
First let’s get an overview:
➜ k -n mars get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/manager-api-deployment-dbcc6657d-bg2hh   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-f5fv4   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-httjv   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-k98xn   1/1     Running   0          98m
pod/test-init-container-5db7c99857-htx6b     1/1     Running   0          2m19s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/manager-api-svc   ClusterIP   10.15.241.159   <none>        4444/TCP   99m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/manager-api-deployment   4/4     4            4           98m
deployment.apps/test-init-container      1/1     1            1           2m19s
...
Everything seems to be running, but we can’t seem to get a connection:
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
If you don't see a command prompt, try pressing enter.
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
curl: (28) Connection timed out after 1000 milliseconds
pod "tmp" deleted
pod mars/tmp terminated (Error)
Ok, let’s try to connect to one pod directly:
k -n mars get pod -o wide # get cluster IP
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.0.1.14
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
The Pods themselves seem to work. Let's investigate the Service a bit:
➜ k -n mars describe service manager-api-svc
Name:              manager-api-svc
Namespace:         mars
Labels:            app=manager-api-svc
...
Endpoints:         <none>
...
Endpoint inspection is also possible using:
k -n mars get ep
No endpoints - No good. We check the Service yaml:
k -n mars edit service manager-api-svc
# k -n mars edit service manager-api-svc
apiVersion: v1
kind: Service
metadata:
...
  labels:
    app: manager-api-svc
  name: manager-api-svc
  namespace: mars
...
spec:
  clusterIP: 10.3.244.121
  ports:
  - name: 4444-80
    port: 4444
    protocol: TCP
    targetPort: 80
  selector:
    #id: manager-api-deployment # wrong selector, needs to point to pod!
    id: manager-api-pod
  sessionAffinity: None
  type: ClusterIP
Though Pods are usually not created without a Deployment or ReplicaSet, Services always select Pods directly. This gives great flexibility because Pods could be created through various customised ways. After saving the new selector we check the Service again for endpoints:
➜ k -n mars get ep
NAME              ENDPOINTS                                            AGE
manager-api-svc   10.0.0.30:80,10.0.1.30:80,10.0.1.31:80 + 1 more...   41m
Endpoints - Good! Now we try connecting again:
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0    99k      0 --:--:-- --:--:-- --:--:--   99k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
And we fixed it. It's good to know how to use Kubernetes DNS resolution from a different Namespace. Not necessary here, but we could spin up the temporary Pod in the default Namespace:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (6) Could not resolve host: manager-api-svc
pod "tmp" deleted
pod default/tmp terminated (Error)

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc.mars:4444
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  68000      0 --:--:-- --:--:-- --:--:-- 68000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Both the short form manager-api-svc.mars and the long form manager-api-svc.mars.svc.cluster.local work.
Question 19 | Service ClusterIP->NodePort
In Namespacejupiter you’ll find an apache Deployment (with one replica) named jupiter-crew-deploy and a ClusterIP Service called jupiter-crew-svc which exposes it. Change this service to a NodePort one to make it available on all nodes on port 30100.
Test the NodePort Service using the internal IP of all available nodes and the port 30100 using curl, you can reach the internal node IPs directly from your main terminal. On which nodes is the Service reachable? On which node is the Pod running?
Answer
First we get an overview:
➜ k -n jupiter get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/jupiter-crew-deploy-8cdf99bc9-klwqt   1/1     Running   0          34m

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/jupiter-crew-svc   ClusterIP   10.100.254.66   <none>        8080/TCP   34m
...
(Optional) Next we check if the ClusterIP Service actually works:
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    45  100    45    0     0   5000      0 --:--:-- --:--:-- --:--:--  5000
<html><body><h1>It works!</h1></body></html>
The Service is working great. Next we change the Service type to NodePort and set the port:
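A sketch of the change made with kubectl edit; only the Service type and the fixed nodePort are touched, everything else stays as generated:

k -n jupiter edit svc jupiter-crew-svc

# k -n jupiter edit svc jupiter-crew-svc (relevant parts)
spec:
  ports:
  - ...
    nodePort: 30100   # add the requested port
    port: 8080
  ...
  type: NodePort      # change from ClusterIP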
➜ k -n jupiter get svc
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
jupiter-crew-svc   NodePort   10.3.245.70   <none>        8080:30100/TCP   3m52s
(Optional) And we confirm that the service is still reachable internally:
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
<html><body><h1>It works!</h1></body></html>
Nice. A NodePort Service kind of lies on top of a ClusterIP one, making the ClusterIP Service reachable on the Node IPs (internal and external). Next we get the internal IPs of all nodes to check the connectivity:
➜ k get nodes -o wide
NAME                     STATUS   ROLES           AGE   VERSION   INTERNAL-IP      ...
cluster1-controlplane1   Ready    control-plane   18h   v1.29.0   192.168.100.11   ...
cluster1-node1           Ready    <none>          18h   v1.29.0   192.168.100.12   ...
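Now we can curl port 30100 on the internal IP of every node directly from the main terminal (IPs taken from the output above); both should return the apache "It works!" page:

curl 192.168.100.11:30100   # controlplane
curl 192.168.100.12:30100   # worker node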
On both, even the controlplane. On which node is the Pod running?
➜ k -n jupiter get pod jupiter-crew-deploy-8cdf99bc9-klwqt -o yaml | grep nodeName
  nodeName: cluster1-node1

➜ k -n jupiter get pod -o wide # or even shorter
In our case on cluster1-node1, but could be any other worker if more available. Here we hopefully gained some insight into how a NodePort Service works. Although the Pod is just running on one specific node, the Service makes it available through port 30100 on the internal and external IP addresses of all nodes. This is at least the common/default behaviour but can depend on cluster configuration.
Question 20 | NetworkPolicy
In Namespace venus you'll find two Deployments named api and frontend. Both Deployments are exposed inside the cluster using Services. Create a NetworkPolicy named np1 which restricts outgoing tcp connections from Deployment frontend and only allows those going to Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP ports 53 for DNS resolution.
Test using: wget www.google.com and wget api:2222 from a Pod of Deployment frontend.
Answer
INFO: For learning NetworkPolicies check out https://editor.cilium.io. But you’re not allowed to use it during the exam.
(Optional) This is not necessary but we could check if the Services are working inside the cluster:
➜ k -n venus run tmp --restart=Never --rm -i --image=busybox -- wget -O- frontend:80
Connecting to frontend:80 (10.3.245.9:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

➜ k -n venus run tmp --restart=Never --rm --image=busybox -i -- wget -O- api:2222
Connecting to api:2222 (10.3.250.233:2222)
<html><body><h1>It works!</h1></body></html>
Then we use any frontend Pod and check if it can reach external names and the api Service:
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- www.google.com
Connecting to www.google.com (216.58.205.227:80)
-                    100% |********************************| 12955   0:00:00 ETA
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head>
...

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
-                    100% |********************************|    45   0:00:00 ETA
...
We see Pods of frontend can reach the api and external names.
vim 20_np1.yaml
Now we head to https://kubernetes.io/docs, search for NetworkPolicy , copy the example code and adjust it to:
# 20_np1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend # label of the pods this policy should be applied on
  policyTypes:
  - Egress # we only want to control egress
  egress:
  - to: # 1st egress rule
    - podSelector: # allow egress only to pods with api label
        matchLabels:
          id: api
  - ports: # 2nd egress rule
    - port: 53 # allow DNS UDP
      protocol: UDP
    - port: 53 # allow DNS TCP
      protocol: TCP
Notice that we specify two egress rules in the yaml above. If we specify multiple egress rules then these are connected using a logical OR. So in the example above we do:
allow outgoing traffic if
  (destination pod has label id:api) OR ((port is 53 UDP) OR (port is 53 TCP))
Let’s have a look at example code which wouldn’t work in our case:
# this example does not work in our case
...
  egress:
  - to: # 1st AND ONLY egress rule
    - podSelector: # allow egress only to pods with api label
        matchLabels:
          id: api
    ports: # STILL THE SAME RULE but just an additional selector
    - port: 53 # allow DNS UDP
      protocol: UDP
    - port: 53 # allow DNS TCP
      protocol: TCP
In the yaml above we only specify one egress rule with two selectors. It can be translated into:
allow outgoing traffic if
  (destination pod has label id:api) AND ((port is 53 UDP) OR (port is 53 TCP))
Apply the correct policy:
k -f 20_np1.yaml create
And try again, external is not working any longer:
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- www.google.de
Connecting to www.google.de:2222 (216.58.207.67:80)
^C

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- -T 5 www.google.de:80
Connecting to www.google.com (172.217.203.104:80)
wget: download timed out
command terminated with exit code 1
The internal connection to api still works as before:
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
-                    100% |********************************|    45   0:00:00 ETA
Question 21 | Requests and Limits, ServiceAccount
Team Neptune needs 3 Pods of image httpd:2.4-alpine, create a Deployment named neptune-10ab for this. The containers should be named neptune-pod-10ab. Each container should have a memory request of 20Mi and a memory limit of 50Mi .
Team Neptune has its own ServiceAccount neptune-sa-v2 under which the Pods should run. The Deployment should be in Namespace neptune.
Answer:
k -n neptune create deployment -h # help
k -n neptune create deploy -h # deploy is short for deployment

# check the export on the very top of this document so we can use $do
k -n neptune create deploy neptune-10ab --image=httpd:2.4-alpine $do > 21.yaml

vim 21.yaml
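A sketch of 21.yaml after editing; the replica count, container name, resources and serviceAccountName come from the task, other generated fields are trimmed:

# 21.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: neptune-10ab
  name: neptune-10ab
  namespace: neptune
spec:
  replicas: 3                             # change
  selector:
    matchLabels:
      app: neptune-10ab
  template:
    metadata:
      labels:
        app: neptune-10ab
    spec:
      serviceAccountName: neptune-sa-v2   # add
      containers:
      - image: httpd:2.4-alpine
        name: neptune-pod-10ab            # change
        resources:                        # add
          requests:                       # add
            memory: 20Mi                  # add
          limits:                         # add
            memory: 50Mi                  # add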
k create -f 21.yaml # namespace already set in yaml
To verify all Pods are running we do:
➜ k -n neptune get pod | grep neptune-10ab
neptune-10ab-7d4b8d45b-4nzj5   1/1     Running   0          57s
neptune-10ab-7d4b8d45b-lzwrf   1/1     Running   0          17s
neptune-10ab-7d4b8d45b-z5hcc   1/1     Running   0          17s
Question 22 | Labels, Annotations
Team Sunny needs to identify some of their Pods in namespace sun. They ask you to add a new label protected: true to all Pods with an existing label type: worker or type: runner. Also add an annotation protected: do not delete this pod to all Pods having the new label protected: true.
Answer:
If we only want to get Pods with certain labels we can run:
k -n sun get pod -l type=runner # only pods with label runner
We can use this label filtering also when using other commands, like setting new labels:
k label -h # help
k -n sun label pod -l type=runner protected=true # run for label runner
k -n sun label pod -l type=worker protected=true # run for label worker
Or we could run:
k -n sun label pod -l "type in (worker,runner)" protected=true
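Finally we add the annotation to all Pods that now carry the new label; the annotation text comes straight from the task:

k -n sun annotate pod -l protected=true protected="do not delete this pod"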