Kubernetes Grimoire
Local deployments
k0s
Install the k0s binary from the AUR or manually:
curl -L https://get.k0s.sh/ > /tmp/k0s-installer.sh
sudo sh /tmp/k0s-installer.sh
sudo bash -c 'k0s completion bash > /etc/bash_completion.d/k0s'
Install a systemd service named k0scontroller that will start a single node acting as both controller and worker. The unit simply runs k0s controller --single.
sudo mkdir /etc/k0s
sudo k0s config create > /etc/k0s/k0s.yaml
nvim /etc/k0s/k0s.yaml # edit as needed
k0s config validate -c /etc/k0s/k0s.yaml # validate edits
sudo k0s install controller --single # add systemd unit
sudo systemctl enable k0scontroller
sudo systemctl start k0scontroller
sudo journalctl -xefu k0scontroller # see progress/failures
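Once the unit is active, a quick sanity check (k0s ships an embedded kubectl and can emit an admin kubeconfig):
sudo k0s status                               # role, version and PID of the running node
sudo k0s kubectl get nodes                    # embedded kubectl, no kubeconfig needed
sudo k0s kubeconfig admin > ~/.kube/config    # optional: let the host kubectl reach the cluster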
For wg.liv.argosware.com, the public IPv4 and IPv6 addresses of eth0 should be removed from k0s.yaml or replaced with the WireGuard address.
AdGuard may block container image downloads because someone hosted a phishing site somewhere under the prod-registry-k8s-io-us-east-1.s3.dualstack.us-east-1.amazonaws.com hierarchy. Check the query log and add an unblock rule if this is the case.
minikube
On Arch, install with pacman -S minikube. If ufw is active to enforce a VPN kill switch, rules such as the following may have to be added:
sudo ufw allow {in from,out to} 192.168.{99,49}.0/24 comment minikube
sudo ufw allow {in from,out to} 172.1[789].0.0/16 comment minikube
The actual network addresses can be guessed from ip addr, on the entries for dockerX and br-* networks.
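A sketch for listing the candidate subnets (interface names vary per host; the network named minikube assumes the docker driver):
ip -brief addr show | grep -E 'docker|br-'
docker network inspect minikube -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}'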
Main minikube commands:
- start --driver=docker: start a kubernetes cluster within a docker container. Omitting --driver will default to docker but fall back to virtualbox if docker is not up, consuming far more RAM
- dashboard [--url]: show a GUI on the browser
- stop: shut down the cluster
- delete: remove the cluster
After starting minikube, kubectl "simply finds" the minikube API endpoint: minikube start writes a minikube context into ~/.kube/config.
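A quick check that the context was picked up:
kubectl config current-context   # prints: minikube
kubectl get nodes                # the single minikube node should be Ready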
kind
Install with
go install sigs.k8s.io/kind@v0.11.1
As with [[#minikube]], UFW rules may need to be added.
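A sketch for deriving those rules from the docker network named kind (created on the first kind create cluster); 172.18.0.0/16 below is only the usual default, substitute whatever the inspect prints:
docker network inspect kind -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
sudo ufw allow in from 172.18.0.0/16 comment kind
sudo ufw allow out to 172.18.0.0/16 comment kind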
Create/destroy a cluster with kind create|delete cluster
To use hostPath volumes, create a kind-config.yaml file anywhere and provide it with --config on the kind create cluster command:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /path/on/host
    containerPath: /host-data
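With that config, /host-data exists inside the kind node and a pod can reach it through a hostPath volume. A minimal sketch (pod name and image are arbitrary):
kind create cluster --config kind-config.yaml

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data         # path inside the container
  volumes:
  - name: data
    hostPath:
      path: /host-data         # path on the kind node, mapped from /path/on/host
EOF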
To avoid abusing the docker public registry, docker images can be pushed from the host into the kind cluster nodes:
kind load docker-image repository/image:tag
If the tag is omitted (or is latest), the image will be fetched from Docker Hub even if the image on the host is a more recent unpushed build. During development cycles it is best to use a dev tag (which does not exist in the public registry) to ensure that the kind loaded image is used.
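A typical build-and-load cycle, sketched with a hypothetical repository/image name and a hypothetical deployment my-app:
docker build -t repository/image:dev .
kind load docker-image repository/image:dev
kubectl set image deployment/my-app my-app=repository/image:dev   # or reference :dev directly in the manifest
kubectl rollout restart deployment/my-app                         # pick up a rebuilt image under the same tag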
Concepts
- Controller: A continuous loop that monitors resources in the kubernetes cluster and acts to maintain a declared goal state.
- Deployment: declarative desired state for Pods and ReplicaSets.
- Job: creates n ≧ 1 pods and monitors their execution until k of them successfully finish (see the sketch after this list)
- Node: A VM or physical machine where kubernetes creates and manages pods/containers
- Pod: 1+ container, minimum deployable unit, share storage/network. A pod provides a specification on how to run the containers.
- ReplicaSet: multiple pods acting as replicas of one another. A ReplicaSet selector specifies which new pods (provided the pod is not created with ownerReference to a Controller) are to be automatically added to the set.
- Service: abstraction for an application running on a set of pods. Purpose is service discovery (avoiding the need to find out the IP where the service is listening). Services can be referred by DNS names, resolved by kubernetes
- Volumes: storage (ephemeral or persistent) that can be mounted into the FS of all containers running in pods. Ex: awsElasticBlockStore, emptyDir, hostPath, local, nfs, …
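For the Job concept above, a minimal sketch where parallelism is n and completions is k (names are arbitrary):
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
spec:
  parallelism: 2      # n: pods running concurrently
  completions: 3      # k: successful finishes required
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: work
        image: busybox
        command: ["sh", "-c", "echo doing work && sleep 5"]
EOF
kubectl get job demo-job    # COMPLETIONS column tracks progress towards k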
Tutorial
sudo pacman -S minikube kubectl
minikube start --driver=docker
minikube dashboard --url # optional
- Create a deployment with a single pod with a single container:
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
- View status with
kubectl get deployments,pods,events,services
- View the kubectl configuration in YAML with
kubectl config view
- Create a service for the pod:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
- On minikube, to get the IP and port of the service:
minikube service hello-node
- Cleanup:
kubectl delete service hello-node && kubectl delete deployment hello-node
minikube stop && minikube delete
FAQ
Persistent volumes with local provider
This does not work on [[#minikube]], use [[#kind]] instead. Configuration of the host -> node directory mapping is done with a config file on cluster creation, see [[#kind|the kind section above]].
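The local provisioner has no dynamic provisioning, so a StorageClass with no-provisioner plus a manually declared PersistentVolume is needed. A minimal sketch, assuming the /host-data node path from the kind config above and the default node name kind-control-plane:
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  local:
    path: /host-data              # path inside the kind node
  nodeAffinity:                   # mandatory for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["kind-control-plane"]
EOF
A PVC requesting storageClassName: local-storage will then bind once a pod that uses the claim is scheduled.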
Exposing services
First, add a ports entry under the container that hosts the service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80
Then create a Service matching the Deployment (or Pod) by its app label:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
The external IP address will be shown by kubectl get services. If the cluster is a minikube cluster, minikube tunnel must be running in another shell and the IP to use is the one shown by minikube tunnel. Without minikube tunnel, the EXTERNAL-IP column from kubectl get services may stay stuck in the pending state.
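The round trip, sketched; the address differs per machine and $EXTERNAL_IP below is a placeholder for whatever kubectl prints:
minikube tunnel                   # shell 1: keep running, may prompt for sudo
kubectl get services my-nginx     # shell 2: EXTERNAL-IP should now be populated
curl http://$EXTERNAL_IP/         # placeholder: substitute the EXTERNAL-IP value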
Connecting a java debugger
Enable a debug server on the java process by adding to the java command line:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5005
Then, expose TCP port 5005 on the pod and expose it externally using a [[#Exposing services|service]]. On IDEA, use a Remote JVM Debug run configuration and connect to the IP address shown in minikube tunnel.
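When an externally reachable Service is overkill, kubectl port-forward is a simpler alternative to the setup above; the IDE then connects to localhost:5005 instead of the tunnel address:
kubectl port-forward pod/$POD_NAME 5005:5005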
Attaching a temporary container to a pod (for debugging)
kubectl debug -it $POD_NAME --image=$NAME:$TAG -- [COMMAND args...]
This will create a new container within the pod, using the given docker image. COMMAND args...
overrides the CMD
property of the chosen image. This requires the cluster to support ephemeral containers, which kind does not.
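A concrete sketch with busybox, targeting a hypothetical container named app so the debug shell shares its process namespace:
kubectl debug -it $POD_NAME --image=busybox --target=app -- sh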
Remote network capture with wireshark
Theoretically, one could run tcpdump with kubectl and pipe its output to wireshark. But wireshark appears to have trouble understanding the format.
A more robust solution is to capture to a file, then simply open the file in wireshark. To capture with tcpdump:
kubectl exec $POD_OR_SVC -- tcpdump -s 0 -i eth0 -w capture.pcap
Then copy the capture file to the host where wireshark can open it:
kubectl cp $POD_NAME:capture.pcap /tmp/capture.pcap
If piping tcpdump, beware that kubectl may bundle stdout and stderr from the container into stdout on the host. Thus stderr should be silenced inside the container. The following should work but YMMV:
rm -f /tmp/cap.fifo && mkfifo /tmp/cap.fifo && kubectl exec -it $POD_NAME -- bash -c "tcpdump -U -s0 -i eth0 -w - 2>/dev/null" 1>/tmp/cap.fifo
sudo wireshark -k -i - </tmp/cap.fifo
If there is no traffic, wireshark may block before showing the UI. In this case, inducing traffic will make the UI show.
tcpdump options:
- -U: flush the output file after each packet is complete instead of only when the output buffer is filled (whose size has no relation to packets)
- -w -: write captured packets to STDOUT
- -s0: set the snapshot length to the default number of bytes per packet (262144)
- -i eth0: interface to listen on
- 'not port 22': expression to avoid recursive capture when tcpdump is being run over SSH
- 'port 80': restrict capture to port 80 traffic
wireshark options:
- -k: start capture immediately
- -i -: read capture data from stdin
On an image derived from Debian, apt-get install tcpdump. The following packages may also be useful:
- iproute2: for ss -tnrp, (t)cp connections with (p)rocesses, (r)esolving hostnames but not ports (n). Add -l for listening ports
- iputils-tracepath: provides tracepath
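Installed from the host in one shot (a sketch; assumes the image has apt sources configured and runs as root):
kubectl exec -it $POD_NAME -- bash -c 'apt-get update && apt-get install -y tcpdump iproute2 iputils-tracepath'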
Open reverse SSH tunnel from pod
Copy credentials and create reverse tunnel from within the pod. Connections to 5005 on the host will be redirected to port 5005 within the pod.
The snippet below assumes kind is being used.
kubectl cp ~/.ssh/id_rsa $POD:/root/.ssh/ \
&& kubectl exec -it $POD -- ssh -R 5005:localhost:5005 172.20.0.1 bash -c "'while true; do echo Mapping host:5005 to container:5005; read i; done'"