class: title, self-paced

Advanced
Kubernetes
.nav[*Self-paced version*] .debug[ ``` ``` These slides have been built from commit: 45770cc [shared/title.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/title.md)] --- class: title, in-person Advanced
Kubernetes
.footnote[ **Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) https://container.training/** ] .debug[[shared/title.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/title.md)] --- ## Introductions ⚠️ This slide should be customized by the tutorial instructor(s). [@alexbuisine]: https://twitter.com/alexbuisine [EphemeraSearch]: https://ephemerasearch.com/ [@jpetazzo]: https://twitter.com/jpetazzo [@jpetazzo@hachyderm.io]: https://hachyderm.io/@jpetazzo [@s0ulshake]: https://twitter.com/s0ulshake [Quantgene]: https://www.quantgene.com/ .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/main/slides/logistics.md)] --- ## Exercises - At the end of each day, there is a series of exercises - To make the most out of the training, please try the exercises! (it will help to practice and memorize the content of the day) - We recommend to take at least one hour to work on the exercises (if you understood the content of the day, it will be much faster) - Each day will start with a quick review of the exercises of the previous day .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/main/slides/logistics.md)] --- ## A brief introduction - This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person, instructor-led workshops and tutorials - Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you! - You can also follow along on your own, at your own pace - We included as much information as possible in these slides - We recommend having a mentor to help you ... - ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ... - ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets .debug[[k8s/intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/intro.md)] --- class: self-paced ## Hands on, you shall practice - Nobody ever became a Jedi by spending their lives reading Wookiepedia - Likewise, it will take more than merely *reading* these slides to make you an expert - These slides include *tons* of demos, exercises, and examples - They assume that you have access to a Kubernetes cluster - If you are attending a workshop or tutorial:
you will be given specific instructions to access your cluster - If you are doing this on your own:
the first chapter will give you various options to get your own cluster .debug[[k8s/intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/intro.md)] --- ## Accessing these slides now - We recommend that you open these slides in your browser: https://container.training/ - This is a public URL, you're welcome to share it with others! - Use arrows to move to next/previous slide (up, down, left, right, page up, page down) - Type a slide number + ENTER to go to that slide - The slide number is also visible in the URL bar (e.g. .../#123 for slide 123) .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## These slides are open source - The sources of these slides are available in a public GitHub repository: https://github.com/jpetazzo/container.training - These slides are written in Markdown - You are welcome to share, re-use, re-mix these slides - Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ... .footnote[👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.] .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## Accessing these slides later - Slides will remain online so you can review them later if needed (let's say we'll keep them online at least 1 year, how about that?) - You can download the slides using this URL: https://container.training/slides.zip (then open the file `kube-adv.yml.html`) - You can also generate a PDF of the slides (by printing them to a file; but be patient with your browser!) .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## These slides are constantly updated - Feel free to check the GitHub repository for updates: https://github.com/jpetazzo/container.training - Look for branches named YYYY-MM-... 
- You can also find specific decks and other resources on: https://container.training/ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- class: extra-details ## Extra details - This slide has a little magnifying glass in the top left corner - This magnifying glass indicates slides that provide extra details - Feel free to skip them if: - you are in a hurry - you are new to this and want to avoid cognitive overload - you want only the most essential information - You can review these slides another time if you want, they'll be waiting for you ☺ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- name: toc-part-1 ## Part 1 - [Pre-requirements](#toc-pre-requirements) - [Kubernetes architecture](#toc-kubernetes-architecture) - [The Kubernetes API](#toc-the-kubernetes-api) - [Other control plane components](#toc-other-control-plane-components) - [Kubernetes Internal APIs](#toc-kubernetes-internal-apis) - [Building our own cluster (easy)](#toc-building-our-own-cluster-easy) .debug[(auto-generated TOC)] --- name: toc-part-2 ## Part 2 - [Building our own cluster (medium)](#toc-building-our-own-cluster-medium) - [Building our own cluster (hard)](#toc-building-our-own-cluster-hard) - [CNI internals](#toc-cni-internals) .debug[(auto-generated TOC)] --- name: toc-part-3 ## Part 3 - [API server availability](#toc-api-server-availability) - [Securing the control plane](#toc-securing-the-control-plane) - [(Extra content)](#toc-extra-content) - [Static pods](#toc-static-pods) - [Upgrading clusters](#toc-upgrading-clusters) .debug[(auto-generated TOC)] --- name: toc-part-4 ## Part 4 - [Kustomize](#toc-kustomize) - [Managing stacks with Helm](#toc-managing-stacks-with-helm) - [Helm chart format](#toc-helm-chart-format) - [Creating a basic chart](#toc-creating-a-basic-chart) - [(Extra content)](#toc-extra-content) - [Creating better Helm charts](#toc-creating-better-helm-charts) - [Charts using other charts](#toc-charts-using-other-charts) - [Helm and invalid values](#toc-helm-and-invalid-values) - [Helm secrets](#toc-helm-secrets) - [YTT](#toc-ytt) .debug[(auto-generated TOC)] --- name: toc-part-5 ## Part 5 - [Extending the Kubernetes API](#toc-extending-the-kubernetes-api) - [Operators](#toc-operators) - [Sealed Secrets](#toc-sealed-secrets) - [Custom Resource Definitions](#toc-custom-resource-definitions) .debug[(auto-generated TOC)] --- name: toc-part-6 ## Part 6 - [Ingress and TLS certificates](#toc-ingress-and-tls-certificates) - [cert-manager](#toc-cert-manager) - [An ElasticSearch Operator](#toc-an-elasticsearch-operator) .debug[(auto-generated TOC)] --- name: toc-part-7 ## Part 7 - [Dynamic Admission Control](#toc-dynamic-admission-control) - [Policy Management with Kyverno](#toc-policy-management-with-kyverno) .debug[(auto-generated TOC)] --- name: toc-part-8 ## Part 8 - [The Aggregation Layer](#toc-the-aggregation-layer) - [Checking Node and Pod resource usage](#toc-checking-node-and-pod-resource-usage) - [Collecting metrics with Prometheus](#toc-collecting-metrics-with-prometheus) - [Prometheus and Grafana](#toc-prometheus-and-grafana) - [Scaling with custom metrics](#toc-scaling-with-custom-metrics) .debug[(auto-generated TOC)] --- name: toc-part-9 ## Part 9 - [Designing an operator](#toc-designing-an-operator) - [Writing a tiny operator](#toc-writing-a-tiny-operator) - [Kubebuilder](#toc-kubebuilder) - [Events](#toc-events) - 
[Finalizers](#toc-finalizers) - [(Extra content)](#toc-extra-content) - [Owners and dependents](#toc-owners-and-dependents) - [API server internals](#toc-api-server-internals) .debug[(auto-generated TOC)] .debug[[shared/toc.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/toc.md)] --- class: pic .interstitial[] --- name: toc-pre-requirements class: title Pre-requirements .nav[ [Previous part](#toc-) | [Back to table of contents](#toc-part-1) | [Next part](#toc-kubernetes-architecture) ] .debug[(automatically generated title slide)] --- # Pre-requirements - Kubernetes concepts (pods, deployments, services, labels, selectors) - Hands-on experience working with containers (building images, running them; doesn't matter how exactly) - Familiarity with the UNIX command-line (navigating directories, editing files, using `kubectl`) .debug[[k8s/prereqs-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prereqs-advanced.md)] --- class: title *Tell me and I forget.*
*Teach me and I remember.*
*Involve me and I learn.* Misattributed to Benjamin Franklin [(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- ## Hands-on sections - There will be *a lot* of examples and demos - We are going to build, ship, and run containers (and sometimes, clusters!) - If you want, you can run all the examples and demos in your environment (but you don't have to; it's up to you!) - All hands-on sections are clearly identified, like the gray rectangle below .lab[ - This is a command that we're gonna run: ```bash echo hello world ``` ] .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person ## Where are we going to run our containers? .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person, pic  .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- ## If you're attending a live training or workshop - Each person gets a private lab environment (depending on the scenario, this will be one VM, one cluster, multiple clusters...) - The instructor will tell you how to connect to your environment - Your lab environments will be available for the duration of the workshop (check with your instructor to know exactly when they'll be shutdown) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- ## Running your own lab environments - If you are following a self-paced course... - Or watching a replay of a recorded course... - ...You will need to set up a local environment for the labs - If you want to deliver your own training or workshop: - deployment scripts are available in the [prepare-labs] directory - you can use them to automatically deploy many lab environments - they support many different infrastructure providers [prepare-labs]: https://github.com/jpetazzo/container.training/tree/main/prepare-labs .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person ## Why don't we run containers locally? - Installing this stuff can be hard on some machines (32 bits CPU or OS... Laptops without administrator access... etc.) - *"The whole team downloaded all these container images from the WiFi!
... and it went great!"* (Literally no-one ever) - All you need is a computer (or even a phone or tablet!), with: - an Internet connection - a web browser - an SSH client .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person ## SSH clients - On Linux, OS X, FreeBSD... you are probably all set - On Windows, get one of these: - [putty](http://www.putty.org/) - Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH) - [Git BASH](https://git-for-windows.github.io/) - [MobaXterm](http://mobaxterm.mobatek.net/) - On Android, [JuiceSSH](https://juicessh.com/) ([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh)) works pretty well - Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your Internet connection tends to lose packets .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person, extra-details ## What is this Mosh thing? *You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!* - Mosh is "the mobile shell" - It is essentially SSH over UDP, with roaming features - It retransmits packets quickly, so it works great even on lossy connections (Like hotel or conference WiFi) - It has intelligent local echo, so it works great even in high-latency connections (Like hotel or conference WiFi) - It supports transparent roaming when your client IP address changes (Like when you hop from hotel to conference WiFi) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person, extra-details ## Using Mosh - To install it: `(apt|yum|brew) install mosh` - It has been pre-installed on the VMs that we are using - To connect to a remote machine: `mosh user@host` (It is going to establish an SSH connection, then hand off to UDP) - It requires UDP ports to be open (By default, it uses a UDP port between 60000 and 61000) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-architecture class: title Kubernetes architecture .nav[ [Previous part](#toc-pre-requirements) | [Back to table of contents](#toc-part-1) | [Next part](#toc-the-kubernetes-api) ] .debug[(automatically generated title slide)] --- # Kubernetes architecture We can arbitrarily split Kubernetes in two parts: - the *nodes*, a set of machines that run our containerized workloads; - the *control plane*, a set of processes implementing the Kubernetes APIs. Kubernetes also relies on underlying infrastructure: - servers, network connectivity (obviously!), - optional components like storage systems, load balancers ... .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## What runs on a node - Our containerized workloads - A container engine like Docker, CRI-O, containerd... (in theory, the choice doesn't matter, as the engine is abstracted by Kubernetes) - kubelet: an agent connecting the node to the cluster (it connects to the API server, registers the node, receives instructions) - kube-proxy: a component used for internal cluster communication (note that this is *not* an overlay network or a CNI plugin!) .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## What's in the control plane - Everything is stored in etcd (it's the only stateful component) - Everyone communicates exclusively through the API server: - we (users) interact with the cluster through the API server - the nodes register and get their instructions through the API server - the other control plane components also register with the API server - API server is the only component that reads/writes from/to etcd .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Communication protocols: API server - The API server exposes a REST API (except for some calls, e.g. 
to attach interactively to a container) - Almost all requests and responses are JSON following a strict format - For performance, the requests and responses can also be done over protobuf (see this [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) for details) - In practice, protobuf is used for all internal communication (between control plane components, and with kubelet) .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Communication protocols: on the nodes The kubelet agent uses a number of special-purpose protocols and interfaces, including: - CRI (Container Runtime Interface) - used for communication with the container engine - abstracts the differences between container engines - based on gRPC+protobuf - [CNI (Container Network Interface)](https://github.com/containernetworking/cni/blob/master/SPEC.md) - used for communication with network plugins - network plugins are implemented as executable programs invoked by kubelet - network plugins provide IPAM - network plugins set up network interfaces in pods .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Control plane location The control plane can run: - in containers, on the same nodes that run other application workloads (default behavior for local clusters like [Minikube](https://github.com/kubernetes/minikube), [kind](https://kind.sigs.k8s.io/)...) - on a dedicated node (default behavior when deploying with kubeadm) - on a dedicated set of nodes ([Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way); [kops](https://github.com/kubernetes/kops); also kubeadm) - outside of the cluster (most managed clusters like AKS, DOK, EKS, GKE, Kapsule, LKE, OKE...) 
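For instance, on a kubeadm-style cluster (where the control plane runs in pods on a dedicated node), we can check where each component runs with something like:

```bash
# List control plane pods and the nodes they're scheduled on
kubectl get pods --namespace kube-system --output wide
```

(On managed clusters, the API server, etcd, controller manager, and scheduler typically won't show up there, since they run outside of the cluster.)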
.debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic .interstitial[] --- name: toc-the-kubernetes-api class: title The Kubernetes API .nav[ [Previous part](#toc-kubernetes-architecture) | [Back to table of contents](#toc-part-1) | [Next part](#toc-other-control-plane-components) ] .debug[(automatically generated title slide)] --- # The Kubernetes API [ *The Kubernetes API server is a "dumb server" which offers storage, versioning, validation, update, and watch semantics on API resources.* ]( https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md#proposal-and-motivation ) ([Clayton Coleman](https://twitter.com/smarterclayton), Kubernetes Architect and Maintainer) What does that mean? .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## The Kubernetes API is declarative - We cannot tell the API, "run a pod" - We can tell the API, "here is the definition for pod X" - The API server will store that definition (in etcd) - *Controllers* will then wake up and create a pod matching the definition .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## The core features of the Kubernetes API - We can create, read, update, and delete objects - We can also *watch* objects (be notified when an object changes, or when an object of a given type is created) - Objects are strongly typed - Types are *validated* and *versioned* - Storage and watch operations are provided by etcd (note: the [k3s](https://k3s.io/) project allows us to use sqlite instead of etcd) .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Let's experiment a bit! - For this section, connect to the first node of the `test` cluster .lab[ - SSH to the first node of the test cluster - Check that the cluster is operational: ```bash kubectl get nodes ``` - All nodes should be `Ready` ] .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Create - Let's create a simple object .lab[ - Create a namespace with the following command: ```bash kubectl create -f- <
(example: this [demo scheduler](https://github.com/kelseyhightower/scheduler) uses the cost of nodes, stored in node annotations) - A pod might stay in `Pending` state for a long time: - if the cluster is full - if the pod has special constraints that can't be met - if the scheduler is not running (!) ??? :EN:- Kubernetes architecture review :FR:- Passage en revue de l'architecture de Kubernetes .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-internal-apis class: title Kubernetes Internal APIs .nav[ [Previous part](#toc-other-control-plane-components) | [Back to table of contents](#toc-part-1) | [Next part](#toc-building-our-own-cluster-easy) ] .debug[(automatically generated title slide)] --- # Kubernetes Internal APIs - Almost every Kubernetes component has some kind of internal API (some components even have multiple APIs on different ports!) - At the very least, these can be used for healthchecks (you *should* leverage this if you are deploying and operating Kubernetes yourself!) - Sometimes, they are used internally by Kubernetes (e.g. when the API server retrieves logs from kubelet) - Let's review some of these APIs! .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## API hunting guide This is how we found and investigated these APIs: - look for open ports on Kubernetes nodes (worker nodes or control plane nodes) - check which process owns that port - probe the port (with `curl` or other tools) - read the source code of that process (in particular when looking for API routes) OK, now let's see the results! .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## etcd - 2379/tcp → etcd clients - should be HTTPS and require mTLS authentication - 2380/tcp → etcd peers - should be HTTPS and require mTLS authentication - 2381/tcp → etcd healthcheck - HTTP without authentication - exposes two API routes: `/health` and `/metrics` .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## kubelet - 10248/tcp → healthcheck - HTTP without authentication - exposes a single API route, `/healthz`, that just returns `ok` - 10250/tcp → internal API - should be HTTPS and require mTLS authentication - used by the API server to obtain logs, `kubectl exec`, etc. .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- class: extra-details ## kubelet API - We can authenticate with e.g. our TLS admin certificate - The following routes should be available: - `/healthz` - `/configz` (serves kubelet configuration) - `/metrics` - `/pods` (returns *desired state*) - `/runningpods` (returns *current state* from the container runtime) - `/logs` (serves files from `/var/log`) - `/containerLogs/
<namespace>/<podname>/<containername>
` (can add e.g. `?tail=10`) - `/run`, `/exec`, `/attach`, `/portForward` - See [kubelet source code](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go) for details! .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- class: extra-details ## Trying the kubelet API The following example should work on a cluster deployed with `kubeadm`. 1. Obtain the key and certificate for the `cluster-admin` user. 2. Log into a node. 3. Copy the key and certificate on the node. 4. Find out the name of the `kube-proxy` pod running on that node. 5. Run the following command, updating the pod name: ```bash curl -d cmd=ls -k --cert admin.crt --key admin.key \ https://localhost:10250/run/kube-system/`kube-proxy-xy123`/kube-proxy ``` ... This should show the content of the root directory in the pod. .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## kube-proxy - 10249/tcp → healthcheck - HTTP, without authentication - exposes a few API routes: `/healthz` (just returns `ok`), `/configz`, `/metrics` - 10256/tcp → another healthcheck - HTTP, without authentication - also exposes a `/healthz` API route (but this one shows a timestamp) .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## kube-controller and kube-scheduler - 10257/tcp → kube-controller - HTTPS, with optional mTLS authentication - `/healthz` doesn't require authentication - ... but `/configz` and `/metrics` do (use e.g. admin key and certificate) - 10259/tcp → kube-scheduler - similar to kube-controller, with the same routes ??? :EN:- Kubernetes internal APIs :FR:- Les APIs internes de Kubernetes .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." 
The following 19 slides show what really happens when we run: ```bash kubectl create deployment web --image=nginx ``` .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[] --- name: toc-building-our-own-cluster-easy class: title Building our own cluster (easy) .nav[ [Previous part](#toc-kubernetes-internal-apis) | [Back to table of contents](#toc-part-1) | [Next part](#toc-building-our-own-cluster-medium) ] .debug[(automatically generated title slide)] --- # Building our own cluster (easy) - Let's build our own cluster! *Perfection is attained not when there is nothing left to add, but when there is nothing left to take away. 
(Antoine de Saint-Exupery)* - Our goal is to build a minimal cluster allowing us to: - create a Deployment (with `kubectl create deployment`) - expose it with a Service - connect to that service - "Minimal" here means: - smaller number of components - smaller number of command-line flags - smaller number of configuration files .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Non-goals - For now, we don't care about security - For now, we don't care about scalability - For now, we don't care about high availability - All we care about is *simplicity* .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Our environment - We will use the machine indicated as `monokube1` - This machine: - runs Ubuntu LTS - has Kubernetes, Docker, and etcd binaries installed - but nothing is running .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## The fine print - We're going to use a *very old* version of Kubernetes (specifically, 1.19) - Why? - It's much easier to set up than recent versions - it's compatible with Docker (no need to set up CNI) - it doesn't require a ServiceAccount keypair - it can be exposed over plain HTTP (insecure but easier) - We'll do that, and later, move to recent versions of Kubernetes! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking our environment - Let's make sure we have everything we need first .lab[ - Log into the `monokube1` machine - Get root: ```bash sudo -i ``` - Check available versions: ```bash etcd -version kube-apiserver --version dockerd --version ``` ] .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## The plan 1. Start API server 2. Interact with it (create Deployment and Service) 3. See what's broken 4. Fix it and go back to step 2 until it works! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Dealing with multiple processes - We are going to start many processes - Depending on what you're comfortable with, you can: - open multiple windows and multiple SSH connections - use a terminal multiplexer like screen or tmux - put processes in the background with `&`
(warning: log output might get confusing to read!) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting API server .lab[ - Try to start the API server: ```bash kube-apiserver # It will fail with "--etcd-servers must be specified" ``` ] Since the API server stores everything in etcd, it cannot start without it. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting etcd .lab[ - Try to start etcd: ```bash etcd ``` ] Success! Note the last line of output: ``` serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged! ``` *Sure, that's discouraged. But thanks for telling us the address!* .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting API server (for real) - Try again, passing the `--etcd-servers` argument - That argument should be a comma-separated list of URLs .lab[ - Start API server: ```bash kube-apiserver --etcd-servers http://127.0.0.1:2379 ``` ] Success! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Interacting with API server - Let's try a few "classic" commands .lab[ - List nodes: ```bash kubectl get nodes ``` - List services: ```bash kubectl get services ``` ] We should get `No resources found.` and the `kubernetes` service, respectively. Note: the API server automatically created the `kubernetes` service entry. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- class: extra-details ## What about `kubeconfig`? - We didn't need to create a `kubeconfig` file - By default, the API server is listening on `localhost:8080` (without requiring authentication) - By default, `kubectl` connects to `localhost:8080` (without providing authentication) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Creating a Deployment - Let's run a web server! .lab[ - Create a Deployment with NGINX: ```bash kubectl create deployment web --image=nginx ``` ] Success? .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking our Deployment status .lab[ - Look at pods, deployments, etc.: ```bash kubectl get all ``` ] Our Deployment is in bad shape: ``` NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/web 0/1 0 0 2m26s ``` And, there is no ReplicaSet, and no Pod. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## What's going on? - We stored the definition of our Deployment in etcd (through the API server) - But there is no *controller* to do the rest of the work - We need to start the *controller manager* .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting the controller manager .lab[ - Try to start the controller manager: ```bash kube-controller-manager ``` ] The final error message is: ``` invalid configuration: no configuration has been provided ``` But the logs include another useful piece of information: ``` Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
``` .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Reminder: everyone talks to API server - The controller manager needs to connect to the API server - It *does not* have a convenient `localhost:8080` default - We can pass the connection information in two ways: - `--master` and a host:port combination (easy) - `--kubeconfig` and a `kubeconfig` file - For simplicity, we'll use the first option .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting the controller manager (for real) .lab[ - Start the controller manager: ```bash kube-controller-manager --master http://localhost:8080 ``` ] Success! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking our Deployment status .lab[ - Check all our resources again: ```bash kubectl get all ``` ] We now have a ReplicaSet. But we still don't have a Pod. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## What's going on? In the controller manager logs, we should see something like this: ``` E0404 15:46:25.753376 22847 replica_set.go:450] Sync "default/web-5bc9bd5b8d" failed with `No API token found for service account "default"`, retry after the token is automatically created and added to the service account ``` - The service account `default` was automatically added to our Deployment (and to its pods) - The service account `default` exists - But it doesn't have an associated token (the token is a secret; creating it requires signature; therefore a CA) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Solving the missing token issue There are many ways to solve that issue. We are going to list a few (to get an idea of what's happening behind the scenes). Of course, we don't need to perform *all* the solutions mentioned here. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Option 1: disable service accounts - Restart the API server with `--disable-admission-plugins=ServiceAccount` - The API server will no longer add a service account automatically - Our pods will be created without a service account .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Option 2: do not mount the (missing) token - Add `automountServiceAccountToken: false` to the Deployment spec *or* - Add `automountServiceAccountToken: false` to the default ServiceAccount - The ReplicaSet controller will no longer create pods referencing the (missing) token .lab[ - Programmatically change the `default` ServiceAccount: ```bash kubectl patch sa default -p "automountServiceAccountToken: false" ``` ] .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Option 3: set up service accounts properly - This is the most complex option! 
- Generate a key pair - Pass the private key to the controller manager (to generate and sign tokens) - Pass the public key to the API server (to verify these tokens) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Continuing without service account token - Once we patch the default service account, the ReplicaSet can create a Pod .lab[ - Check that we now have a pod: ```bash kubectl get all ``` ] Note: we might have to wait a bit for the ReplicaSet controller to retry. If we're impatient, we can restart the controller manager. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## What's next? - Our pod exists, but it is in `Pending` state - Remember, we don't have a node so far (`kubectl get nodes` shows an empty list) - We need to: - start a container engine - start kubelet .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting a container engine - We're going to use Docker (because it's the default option) .lab[ - Start the Docker Engine: ```bash dockerd ``` ] Success! Feel free to check that it actually works with e.g.: ```bash docker run alpine echo hello world ``` .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting kubelet - If we start kubelet without arguments, it *will* start - But it will not join the cluster! - It will start in *standalone* mode - Just like with the controller manager, we need to tell kubelet where the API server is - Alas, kubelet doesn't have a simple `--master` option - We have to use `--kubeconfig` - We need to write a `kubeconfig` file for kubelet .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Writing a kubeconfig file - We can copy/paste a bunch of YAML - Or we can generate the file with `kubectl` .lab[ - Create the file `~/.kube/config` with `kubectl`: ```bash kubectl config \ set-cluster localhost --server http://localhost:8080 kubectl config \ set-context localhost --cluster localhost kubectl config \ use-context localhost ``` ] .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Our `~/.kube/config` file The file that we generated looks like the one below. That one has been slightly simplified (removing extraneous fields), but it is still valid. ```yaml apiVersion: v1 kind: Config current-context: localhost contexts: - name: localhost context: cluster: localhost clusters: - name: localhost cluster: server: http://localhost:8080 ``` .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting kubelet .lab[ - Start kubelet with that kubeconfig file: ```bash kubelet --kubeconfig ~/.kube/config ``` ] If it works: great! If it complains about a "cgroup driver", check the next slide. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Cgroup drivers - Cgroups ("control groups") are a Linux kernel feature - They're used to account and limit resources (e.g.: memory, CPU, block I/O...) 
- There are multiple ways to manipulate cgroups, including: - through a pseudo-filesystem (typically mounted in /sys/fs/cgroup) - through systemd - Kubelet and the container engine need to agree on which method to use .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Setting the cgroup driver - If kubelet refused to start, mentioning a cgroup driver issue, try: ```bash kubelet --kubeconfig ~/.kube/config --cgroup-driver=systemd ``` - That *should* do the trick! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Looking at our 1-node cluster - Let's check that our node registered correctly .lab[ - List the nodes in our cluster: ```bash kubectl get nodes ``` ] Our node should show up. Its name will be its hostname (it should be `monokube1`). .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Are we there yet? - Let's check if our pod is running .lab[ - List all resources: ```bash kubectl get all ``` ] -- Our pod is still `Pending`. 🤔 -- Which is normal: it needs to be *scheduled*. (i.e., something needs to decide which node it should go on.) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Scheduling our pod - Why do we need a scheduling decision, since we have only one node? - The node might be full, unavailable; the pod might have constraints ... - The easiest way to schedule our pod is to start the scheduler (we could also schedule it manually) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting the scheduler - The scheduler also needs to know how to connect to the API server - Just like for controller manager, we can use `--kubeconfig` or `--master` .lab[ - Start the scheduler: ```bash kube-scheduler --master http://localhost:8080 ``` ] - Our pod should now start correctly .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking the status of our pod - Our pod will go through a short `ContainerCreating` phase - Then it will be `Running` .lab[ - Check pod status: ```bash kubectl get pods ``` ] Success! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- class: extra-details ## Scheduling a pod manually - We can schedule a pod in `Pending` state by creating a Binding, e.g.: ```bash kubectl create -f- <
(warning: log output might get confusing to read!) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting API server .lab[ - Try to start the API server: ```bash kube-apiserver # It will complain about permission to /var/run/kubernetes sudo kube-apiserver # Now it will complain about a bunch of missing flags, including: # --etcd-servers # --service-account-issuer # --service-account-signing-key-file ``` ] Just like before, we'll need to start etcd. But we'll also need some TLS keys! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Generating TLS keys - There are many ways to generate TLS keys (and certificates) - A very popular and modern tool to do that is [cfssl] - We're going to use the old-fashioned [openssl] CLI - Feel free to use cfssl or any other tool if you prefer! [cfssl]: https://github.com/cloudflare/cfssl#using-the-command-line-tool [openssl]: https://www.openssl.org/docs/man3.0/man1/ .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## How many keys do we need? At the very least, we need the following two keys: - ServiceAccount key pair - API client key pair, aka "CA key" (technically, we will need a *certificate* for that key pair) But if we wanted to tighten the cluster security, we'd need many more... .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## The other keys These keys are not strictly necessary at this point: - etcd key pair *without that key, communication with etcd will be insecure* - API server endpoint key pair *the API server will generate this one automatically if we don't* - kubelet key pair (used by API server to connect to kubelets) *without that key, commands like kubectl logs/exec will be insecure* .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Would you like some auth with that? If we want to enable authentication and authorization, we also need various API client key pairs signed by the "CA key" mentioned earlier. That would include (non-exhaustive list): - controller manager key pair - scheduler key pair - in most cases: kube-proxy (or equivalent) key pair - in most cases: key pairs for the nodes joining the cluster (these might be generated through TLS bootstrap tokens) - key pairs for users that will interact with the clusters (unless another authentication mechanism like OIDC is used) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Generating our keys and certificates .lab[ - Generate the ServiceAccount key pair: ```bash openssl genrsa -out sa.key 2048 ``` - Generate the CA key pair: ```bash openssl genrsa -out ca.key 2048 ``` - Generate a self-signed certificate for the CA key: ```bash openssl x509 -new -key ca.key -out ca.cert -subj /CN=kubernetes/ ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting etcd - This one is easy! .lab[ - Start etcd: ```bash etcd ``` ] Note: if you want a bit of extra challenge, you can try to generate the etcd key pair and use it. (You will need to pass it to etcd and to the API server.) 
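If you try that extra challenge, here is a rough sketch, reusing the `openssl` approach shown above (note that recent Go-based clients also expect a `subjectAltName` on server certificates):

```bash
# Generate a key for etcd and a certificate signed by our CA
openssl genrsa -out etcd.key 2048
openssl req -new -key etcd.key -subj /CN=localhost/ -out etcd.csr
openssl x509 -req -in etcd.csr -CA ca.cert -CAkey ca.key -CAcreateserial \
        -extfile <(echo "subjectAltName=DNS:localhost,IP:127.0.0.1") -out etcd.cert

# Serve etcd clients over TLS
etcd --listen-client-urls=https://127.0.0.1:2379 \
     --advertise-client-urls=https://127.0.0.1:2379 \
     --cert-file=etcd.cert --key-file=etcd.key
```

Then start the API server with `--etcd-servers=https://127.0.0.1:2379 --etcd-cafile=ca.cert` instead of the plain HTTP endpoint.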
.debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting API server - We need to use the keys and certificate that we just generated .lab[ - Start the API server: ```bash sudo kube-apiserver \ --etcd-servers=http://localhost:2379 \ --service-account-signing-key-file=sa.key \ --service-account-issuer=https://kubernetes \ --service-account-key-file=sa.key \ --client-ca-file=ca.cert ``` ] The API server should now start. But can we really use it? 🤔 .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Trying `kubectl` - Let's try some simple `kubectl` command .lab[ - Try to list Namespaces: ```bash kubectl get namespaces ``` ] We're getting an error message like this one: ``` The connection to the server localhost:8080 was refused - did you specify the right host or port? ``` .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's going on? - Recent versions of Kubernetes don't support unauthenticated API access - The API server doesn't support listening on plain HTTP anymore - `kubectl` still tries to connect to `localhost:8080` by default - But there is nothing listening there - Our API server listens on port 6443, using TLS .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Trying to access the API server - Let's use `curl` first to confirm that everything works correctly (and then we will move to `kubectl`) .lab[ - Try to connect with `curl`: ```bash curl https://localhost:6443 # This will fail because the API server certificate is unknown. ``` - Try again, skipping certificate verification: ```bash curl --insecure https://localhost:6443 ``` ] We should now see an `Unauthorized` Kubernetes API error message. We need to authenticate with our key and certificate. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Authenticating with the API server - For the time being, we can use the CA key and cert directly - In a real world scenario, we would *never* do that! (because we don't want the CA key to be out there in the wild) .lab[ - Try again, skipping cert verification, and using the CA key and cert: ```bash curl --insecure --key ca.key --cert ca.cert https://localhost:6443 ``` ] We should see a list of API routes. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: extra-details ## Doing it right In the future, instead of using the CA key and certificate, we should generate a new key, and a certificate for that key, signed by the CA key. Then we can use that new key and certificate to authenticate. 
Example: ``` ### Generate a key pair openssl genrsa -out user.key ### Extract the public key openssl pkey -in user.key -out user.pub -pubout ### Generate a certificate signed by the CA key openssl x509 -new -key ca.key -force_pubkey user.pub -out user.cert \ -subj /CN=kubernetes-user/ ``` .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Writing a kubeconfig file - We now want to use `kubectl` instead of `curl` - We'll need to write a kubeconfig file for `kubectl` - There are many way to do that; here, we're going to use `kubectl config` - We'll need to: - set the "cluster" (API server endpoint) - set the "credentials" (the key and certficate) - set the "context" (referencing the cluster and credentials) - use that context (make it the default that `kubectl` will use) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Set the cluster The "cluster" section holds the API server endpoint. .lab[ - Set the API server endpoint: ```bash kubectl config set-cluster polykube --server=https://localhost:6443 ``` - Don't verify the API server certificate: ```bash kubectl config set-cluster polykube --insecure-skip-tls-verify ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Set the credentials The "credentials" section can hold a TLS key and certificate, or a token, or configuration information for a plugin (for instance, when using AWS EKS or GCP GKE, they use a plugin). .lab[ - Set the client key and certificate: ```bash kubectl config set-credentials polykube \ --client-key ca.key \ --client-certificate ca.cert ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Set and use the context The "context" section references the "cluster" and "credentials" that we defined earlier. (It can also optionally reference a Namespace.) .lab[ - Set the "context": ```bash kubectl config set-context polykube --cluster polykube --user polykube ``` - Set that context to be the default context: ```bash kubectl config use-context polykube ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Review the kubeconfig file The kubeconfig file should look like this: .small[ ```yaml apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://localhost:6443 name: polykube contexts: - context: cluster: polykube user: polykube name: polykube current-context: polykube kind: Config preferences: {} users: - name: polykube user: client-certificate: /root/ca.cert client-key: /root/ca.key ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Trying the kubeconfig file - We should now be able to access our cluster's API! .lab[ - Try to list Namespaces: ```bash kubectl get namespaces ``` ] This should show the classic `default`, `kube-system`, etc. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: extra-details ## Do we need `--client-ca-file` ? Technically, we didn't need to specify the `--client-ca-file` flag! But without that flag, no client can be authenticated. Which means that we wouldn't be able to issue any API request! 
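For instance, since we did pass `--client-ca-file=ca.cert`, we could switch our kubeconfig to the `user.key` / `user.cert` pair generated in the "Doing it right" example (a sketch; the credential name is arbitrary):

```bash
# Register the user key pair as a new set of credentials
kubectl config set-credentials polykube-user \
        --client-key user.key \
        --client-certificate user.cert

# Point our existing context at these credentials
kubectl config set-context polykube --user polykube-user

# This should still list the Namespaces
kubectl get namespaces
```

(This works here because we haven't enabled an authorization mode, so the API server defaults to allowing everything; with RBAC enabled, that user would also need to be granted permissions.)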
.debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running pods - We can now try to create a Deployment .lab[ - Create a Deployment: ```bash kubectl create deployment blue --image=jpetazzo/color ``` - Check the results: ```bash kubectl get deployments,replicasets,pods ``` ] Our Deployment exists, but not the Replica Set or Pod. We need to run the controller manager. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running the controller manager - Previously, we used the `--master` flag to pass the API server address - Now, we need to authenticate properly - The simplest way at this point is probably to use the same kubeconfig file! .lab[ - Start the controller manager: ```bash kube-controller-manager --kubeconfig .kube/config ``` - Check the results: ```bash kubectl get deployments,replicasets,pods ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's next? - Normally, the last commands showed us a Pod in `Pending` state - We need two things to continue: - the scheduler (to assign the Pod to a Node) - a Node! - We're going to run `kubelet` to register the Node with the cluster .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running `kubelet` - Let's try to run `kubelet` and see what happens! .lab[ - Start `kubelet`: ```bash sudo kubelet ``` ] We should see an error about connecting to `containerd.sock`. We need to run a container engine! (For instance, `containerd`.) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running `containerd` - We need to install and start `containerd` - You could try another engine if you wanted (but there might be complications!) .lab[ - Install `containerd`: ```bash sudo apt-get install containerd ``` - Start `containerd`: ```bash sudo containerd ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: extra-details ## Configuring `containerd` Depending on how we install `containerd`, it might need a bit of extra configuration. Watch for the following symptoms: - `containerd` refuses to start (rare, unless there is an *invalid* configuration) - `containerd` starts but `kubelet` can't connect (could be the case if the configuration disables the CRI socket) - `containerd` starts and things work but Pods keep being killed (may happen if there is a mismatch in the cgroups driver) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting `kubelet` for good - Now that `containerd` is running, `kubelet` should start! .lab[ - Try to start `kubelet`: ```bash sudo kubelet ``` - In another terminal, check if our Node is now visible: ```bash sudo kubectl get nodes ``` ] `kubelet` should now start, but our Node doesn't show up in `kubectl get nodes`! This is because without a kubeconfig file, `kubelet` runs in standalone mode:
it will not connect to a Kubernetes API server, and will only start *static pods*. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Passing the kubeconfig file - Let's start `kubelet` again, with our kubeconfig file .lab[ - Stop `kubelet` (e.g. with `Ctrl-C`) - Restart it with the kubeconfig file: ```bash sudo kubelet --kubeconfig .kube/config ``` - Check our list of Nodes: ```bash kubectl get nodes ``` ] This time, our Node should show up! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Node readiness - However, our Node shows up as `NotReady` - If we wait a few minutes, the `kubelet` logs will tell us why: *we're missing a CNI configuration!* - As a result, the containers can't be connected to the network - `kubelet` detects that and doesn't become `Ready` until this is fixed .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## CNI configuration - We need to provide a CNI configuration - This is a file in `/etc/cni/net.d` (the name of the file doesn't matter; the first file in lexicographic order will be used) - Usually, when installing a "CNI plugin¹", this file gets installed automatically - Here, we are going to write that file manually .footnote[¹Technically, a *pod network*; typically running as a DaemonSet, which will install the file with a `hostPath` volume.] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Our CNI configuration Create the following file in e.g. `/etc/cni/net.d/kube.conf`: ```json { "cniVersion": "0.3.1", "name": "kube", "type": "bridge", "bridge": "cni0", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" } } ``` That's all we need - `kubelet` will detect and validate the file automatically! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Checking our Node again - After a short time (typically about 10 seconds) the Node should be `Ready` .lab[ - Wait until the Node is `Ready`: ```bash kubectl get nodes ``` ] If the Node doesn't show up as `Ready`, check the `kubelet` logs. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's next? 
- At this point, we have a `Pending` Pod and a `Ready` Node - All we need is the scheduler to bind the former to the latter .lab[ - Run the scheduler: ```bash kube-scheduler --kubeconfig .kube/config ``` - Check that the Pod gets assigned to the Node and becomes `Running`: ```bash kubectl get pods ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Check network access - Let's check that we can connect to our Pod, and that the Pod can connect outside .lab[ - Get the Pod's IP address: ```bash kubectl get pods -o wide ``` - Connect to the Pod (make sure to update the IP address): ```bash curl `10.1.1.2` ``` - Check that the Pod has external connectivity too: ```bash kubectl exec `blue-xxxxxxxxxx-yyyyy` -- ping -c3 1.1 ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Expose our Deployment - We can now try to expose the Deployment and connect to the ClusterIP .lab[ - Expose the Deployment: ```bash kubectl expose deployment blue --port=80 ``` - Retrieve the ClusterIP: ```bash kubectl get services ``` - Try to connect to the ClusterIP: ```bash curl `10.0.0.42` ``` ] At this point, it won't work - we need to run `kube-proxy`! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running `kube-proxy` - We need to run `kube-proxy` (also passing it our kubeconfig file) .lab[ - Run `kube-proxy`: ```bash sudo kube-proxy --kubeconfig .kube/config ``` - Try again to connect to the ClusterIP: ```bash curl `10.0.0.42` ``` ] This time, it should work. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's next? - Scale up the Deployment, and check that load balancing works properly - Enable RBAC, and generate individual certificates for each controller (check the [certificate paths][certpath] section in the Kubernetes documentation for a detailed list of all the certificates and keys that are used by the control plane, and which flags are used by which components to configure them!) - Add more nodes to the cluster *Feel free to try these if you want to get additional hands-on experience!* [certpath]: https://kubernetes.io/docs/setup/best-practices/certificates/#certificate-paths ??? 
:EN:- Setting up control plane certificates :EN:- Implementing a basic CNI configuration :FR:- Mettre en place les certificats du plan de contrôle :FR:- Réaliser un configuration CNI basique .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: pic .interstitial[] --- name: toc-building-our-own-cluster-hard class: title Building our own cluster (hard) .nav[ [Previous part](#toc-building-our-own-cluster-medium) | [Back to table of contents](#toc-part-2) | [Next part](#toc-cni-internals) ] .debug[(automatically generated title slide)] --- # Building our own cluster (hard) - This section assumes that you already went through *“Building our own cluster (medium)”* - In that previous section, we built a cluster with a single node - In this new section, we're going to add more nodes to the cluster - Note: we will need the lab environment of that previous section - If you haven't done it yet, you should go through that section first .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Our environment - On `polykube1`, we should have our Kubernetes control plane - We're also assuming that we have the kubeconfig file created earlier (in `~/.kube/config`) - We're going to work on `polykube2` and add it to the cluster - This machine has exactly the same setup as `polykube1` (Ubuntu LTS with CNI, etcd, and Kubernetes binaries installed) - Note that we won't need the etcd binaries here (the control plane will run solely on `polykube1`) .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Checklist We need to: - generate the kubeconfig file for `polykube2` - install a container engine - generate a CNI configuration file - start kubelet .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Generating the kubeconfig file - Ideally, we should generate a key pair and certificate for `polykube2`... - ...and generate a kubeconfig file using these - At the moment, for simplicity, we'll use the same key pair and certificate as earlier - We have a couple of options: - copy the required files (kubeconfig, key pair, certificate) - "flatten" the kubeconfig file (embed the key and certificate within) .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: extra-details ## To flatten or not to flatten? 
- "Flattening" the kubeconfig file can seem easier (because it means we'll only have one file to move around) - But it's easier to rotate the key or renew the certificate when they're in separate files .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Flatten and copy the kubeconfig file - We'll flatten the file and copy it over .lab[ - On `polykube1`, flatten the kubeconfig file: ```bash kubectl config view --flatten > kubeconfig ``` - Then copy it to `polykube2`: ```bash scp kubeconfig polykube2: ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Generate CNI configuration Back on `polykube2`, put the following in `/etc/cni/net.d/kube.conf`: ```json { "cniVersion": "0.3.1", "name": "kube", "type": "bridge", "bridge": "cni0", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": `"10.1.2.0/24"` } } ``` Note how we changed the subnet! .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Install container engine and start `kubelet` .lab[ - Install `containerd`: ```bash sudo apt-get install containerd -y ``` - Start `containerd`: ```bash sudo systemctl start containerd ``` - Start `kubelet`: ```bash sudo kubelet --kubeconfig kubeconfig ``` ] We're getting errors looking like: ``` "Post \"https://localhost:6443/api/v1/nodes\": ... connect: connection refused" ``` .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Updating the kubeconfig file - Our kubeconfig file still references `localhost:6443` - This was fine on `polykube1` (where `kubelet` was connecting to the control plane running locally) - On `polykube2`, we need to change that and put the address of the API server (i.e. the address of `polykube1`) .lab[ - Update the `kubeconfig` file: ```bash sed -i s/localhost:6443/polykube1:6443/ kubeconfig ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Starting `kubelet` - `kubelet` should now start correctly (hopefully!) .lab[ - On `polykube2`, start `kubelet`: ```bash sudo kubelet --kubeconfig kubeconfig ``` - On `polykube1`, check that `polykube2` shows up and is `Ready`: ```bash kubectl get nodes ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Testing connectivity - From `polykube1`, can we connect to Pods running on `polykube2`? 🤔 .lab[ - Scale the test Deployment: ```bash kubectl scale deployment blue --replicas=5 ``` - Get the IP addresses of the Pods: ```bash kubectl get pods -o wide ``` - Pick a Pod on `polykube2` and try to connect to it: ```bash curl `10.1.2.2` ``` ] -- At that point, it doesn't work. .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Refresher on the *pod network* - The *pod network* (or *pod-to-pod network*) has a few responsibilities: - allocating and managing Pod IP addresses - connecting Pods and Nodes - connecting Pods together on a given node - *connecting Pods together across nodes* - That last part is the one that's not functioning in our cluster - It typically requires some combination of routing, tunneling, bridging... 
.debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Connecting networks together - We can add manual routes between our nodes - This requires adding `N x (N-1)` routes (on each node, add a route to every other node) - This will work on home labs where nodes are directly connected (e.g. on an Ethernet switch, or same WiFi network, or a bridge between local VMs) - ...Or on clouds where IP address filtering has been disabled (by default, most cloud providers will discard packets going to unknown IP addresses) - If IP address filtering is enabled, you'll have to use e.g. tunneling or overlay networks .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Important warning - The technique that we are about to use doesn't work everywhere - It only works if: - all the nodes are directly connected to each other (at layer 2) - the underlying network allows the IP addresses of our pods - If we are on physical machines connected by a switch: OK - If we are on virtual machines in a public cloud: NOT OK - on AWS, we need to disable "source and destination checks" on our instances - on OpenStack, we need to disable "port security" on our network ports .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Routing basics - We need to tell *each* node: "The subnet 10.1.N.0/24 is located on node N" (for all values of N) - This is how we add a route on Linux: ```bash ip route add 10.1.N.0/24 via W.X.Y.Z ``` (where `W.X.Y.Z` is the internal IP address of node N) - We can see the internal IP addresses of our nodes with: ```bash kubectl get nodes -o wide ``` .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Adding our route - Let's add a route from `polykube1` to `polykube2` .lab[ - Check the internal address of `polykube2`: ```bash kubectl get node polykube2 -o wide ``` - Now, on `polykube1`, add the route to the Pods running on `polykube2`: ```bash sudo ip route add 10.1.2.0/24 via `A.B.C.D` ``` - Finally, check that we can now connect to a Pod running on `polykube2`: ```bash curl 10.1.2.2 ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## What's next? - The network configuration feels very manual: - we had to generate the CNI configuration file (in `/etc/cni/net.d`) - we had to manually update the nodes' routing tables - Can we automate that? **YES!** - We could install something like [kube-router](https://www.kube-router.io/) (which specifically takes care of the CNI configuration file and populates routing tables) - Or we could also go with e.g. [Cilium](https://cilium.io/) .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: extra-details ## If you want to try Cilium... - Add the `--root-ca-file` flag to the controller manager: - use the certificate automatically generated by the API server
(it should be in `/var/run/kubernetes/apiserver.crt`) - or generate a key pair and certificate for the API server and point to that certificate - without that, you'll get certificate validation errors
(because in our Pods, the `ca.crt` file used to validate the API server will be empty) - Check the Cilium [without kube-proxy][ciliumwithoutkubeproxy] instructions (make sure to pass the API server IP address and port!) - Other pod-to-pod network implementations might also require additional steps [ciliumwithoutkubeproxy]: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/#kubeproxy-free .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: extra-details ## About the API server certificate... - In the previous sections, we've skipped API server certificate verification - To generate a proper certificate, we need to include a `subjectAltName` extension - And make sure that the CA includes the extension in the certificate ```bash openssl genrsa -out apiserver.key 4096 openssl req -new -key apiserver.key -subj /CN=kubernetes/ \ -addext "subjectAltName = DNS:kubernetes.default.svc, \ DNS:kubernetes.default, DNS:kubernetes, \ DNS:localhost, DNS:polykube1" -out apiserver.csr openssl x509 -req -in apiserver.csr -CAkey ca.key -CA ca.cert \ -out apiserver.crt -copy_extensions copy ``` ??? :EN:- Connecting nodes and pods :FR:- Interconnecter les nœuds et les pods .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: pic .interstitial[] --- name: toc-cni-internals class: title CNI internals .nav[ [Previous part](#toc-building-our-own-cluster-hard) | [Back to table of contents](#toc-part-2) | [Next part](#toc-api-server-availability) ] .debug[(automatically generated title slide)] --- # CNI internals - Kubelet looks for a CNI configuration file (by default, in `/etc/cni/net.d`) - Note: if we have multiple files, the first one will be used (in lexicographic order) - If no configuration can be found, kubelet holds off on creating containers (except if they are using `hostNetwork`) - Let's see how exactly plugins are invoked! .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## General principle - A plugin is an executable program - It is invoked by kubelet to set up / tear down networking for a container - It doesn't take any command-line argument - However, it uses environment variables to know what to do, which container, etc. - It reads JSON on stdin, and writes back JSON on stdout - There will generally be multiple plugins invoked in a row (at least IPAM + network setup; possibly more) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Environment variables - `CNI_COMMAND`: `ADD`, `DEL`, `CHECK`, or `VERSION` - `CNI_CONTAINERID`: opaque identifier (container ID of the "sandbox", i.e. the container running the `pause` image) - `CNI_NETNS`: path to network namespace pseudo-file (e.g. `/var/run/netns/cni-0376f625-29b5-7a21-6c45-6a973b3224e5`) - `CNI_IFNAME`: interface name, usually `eth0` - `CNI_PATH`: path(s) with plugin executables (e.g. 
`/opt/cni/bin`) - `CNI_ARGS`: "extra arguments" (see next slide) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## `CNI_ARGS` - Extra key/value pair arguments passed by "the user" - "The user", here, is "kubelet" (or in an abstract way, "Kubernetes") - This is used to pass the pod name and namespace to the CNI plugin - Example: ``` IgnoreUnknown=1 K8S_POD_NAMESPACE=default K8S_POD_NAME=web-96d5df5c8-jcn72 K8S_POD_INFRA_CONTAINER_ID=016493dbff152641d334d9828dab6136c1ff... ``` Note that technically, it's a `;`-separated list, so it really looks like this: ``` CNI_ARGS=IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-96d... ``` .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## JSON in, JSON out - The plugin reads its configuration on stdin - It writes back results in JSON (e.g. allocated address, routes, DNS...) ⚠️ "Plugin configuration" is not always the same as "CNI configuration"! .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Conf vs Conflist - The CNI configuration can be a single plugin configuration - it will then contain a `type` field in the top-most structure - it will be passed "as-is" - It can also be a "conflist", containing a chain of plugins (it will then contain a `plugins` field in the top-most structure) - Plugins are then invoked in order (reverse order for `DEL` action) - In that case, the input of the plugin is not the whole configuration (see details on next slide) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## List of plugins - When invoking a plugin in a list, the JSON input will be: - the configuration of the plugin - augmented with `name` (matching the conf list `name`) - augmented with `prevResult` (which will be the output of the previous plugin) - Conceptually, a plugin (generally the first one) will do the "main setup" - Other plugins can do tuning / refinement (firewalling, traffic shaping...) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Analyzing plugins - Let's see what goes in and out of our CNI plugins! - We will create a fake plugin that: - saves its environment and input - executes the real plugin with the saved input - saves the plugin output - passes the saved output .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Our fake plugin ```bash #!/bin/sh PLUGIN=$(basename $0) cat > /tmp/cni.$$.$PLUGIN.in env | sort > /tmp/cni.$$.$PLUGIN.env echo "PPID=$PPID, $(readlink /proc/$PPID/exe)" > /tmp/cni.$$.$PLUGIN.parent $0.real < /tmp/cni.$$.$PLUGIN.in > /tmp/cni.$$.$PLUGIN.out EXITSTATUS=$? cat /tmp/cni.$$.$PLUGIN.out exit $EXITSTATUS ``` Save this script as `/opt/cni/bin/debug` and make it executable. .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Substituting the fake plugin - For each plugin that we want to instrument: - rename the plugin from e.g. 
`ptp` to `ptp.real` - symlink `ptp` to our `debug` plugin - There is no need to change the CNI configuration or restart kubelet .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Create some pods and look at the results - Create a pod - For each instrumented plugin, there will be files in `/tmp`: `cni.PID.pluginname.in` (JSON input) `cni.PID.pluginname.env` (environment variables) `cni.PID.pluginname.parent` (parent process information) `cni.PID.pluginname.out` (JSON output) ❓️ What is calling our plugins? ??? :EN:- Deep dive into CNI internals :FR:- La Container Network Interface (CNI) en détails .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- class: pic .interstitial[] --- name: toc-api-server-availability class: title API server availability .nav[ [Previous part](#toc-cni-internals) | [Back to table of contents](#toc-part-3) | [Next part](#toc-securing-the-control-plane) ] .debug[(automatically generated title slide)] --- # API server availability - When we set up a node, we need the address of the API server: - for kubelet - for kube-proxy - sometimes for the pod network system (like kube-router) - How do we ensure the availability of that endpoint? (what if the node running the API server goes down?) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 1: external load balancer - Set up an external load balancer - Point kubelet (and other components) to that load balancer - Put the node(s) running the API server behind that load balancer - Update the load balancer if/when an API server node needs to be replaced - On cloud infrastructures, some mechanisms provide automation for this (e.g. on AWS, an Elastic Load Balancer + Auto Scaling Group) - [Example in Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#the-kubernetes-frontend-load-balancer) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 2: local load balancer - Set up a load balancer (like NGINX, HAProxy...) on *each* node - Configure that load balancer to send traffic to the API server node(s) - Point kubelet (and other components) to `localhost` - Update the load balancer configuration when API server nodes are updated .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Updating the local load balancer config - Distribute the updated configuration (push) - Or regularly check for updates (pull) - The latter requires an external, highly available store (it could be an object store, an HTTP server, or even DNS...) - Updates can be facilitated by a DaemonSet (but remember that it can't be used when installing a new node!) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 3: DNS records - Put all the API server nodes behind a round-robin DNS - Point kubelet (and other components) to that name - Update the records when needed - Note: this option is not officially supported (but since kubelet supports reconnection anyway, it *should* work) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 4: .................... 
- Many managed clusters expose a high-availability API endpoint (and you don't have to worry about it) - You can also use HA mechanisms that you're familiar with (e.g. virtual IPs) - Tunnels are also fine (e.g. [k3s](https://k3s.io/) uses a tunnel to allow each node to contact the API server) ??? :EN:- Ensuring API server availability :FR:- Assurer la disponibilité du serveur API .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- class: pic .interstitial[] --- name: toc-securing-the-control-plane class: title Securing the control plane .nav[ [Previous part](#toc-api-server-availability) | [Back to table of contents](#toc-part-3) | [Next part](#toc-extra-content) ] .debug[(automatically generated title slide)] --- # Securing the control plane - Many components accept connections (and requests) from others: - API server - etcd - kubelet - We must secure these connections: - to deny unauthorized requests - to prevent eavesdropping on secrets, tokens, and other sensitive information - Disabling authentication and/or authorization is **strongly discouraged** (but it's possible to do it, e.g. for learning / troubleshooting purposes) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Authentication and authorization - Authentication (checking "who you are") is done with mutual TLS (both the client and the server need to hold a valid certificate) - Authorization (checking "what you can do") is done in different ways - the API server implements a sophisticated permission logic (with RBAC) - some services will defer authorization to the API server (through webhooks) - some services require a certificate signed by a particular CA / sub-CA .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## In practice - We will review the various communication channels in the control plane - We will describe how they are secured - When TLS certificates are used, we will indicate: - which CA signs them - what their subject (CN) should be, when applicable - We will indicate how to configure security (client- and server-side) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## etcd peers - Replication and coordination of etcd happens on a dedicated port (typically port 2380; the default port for normal client connections is 2379) - Authentication uses TLS certificates with a separate sub-CA (otherwise, anyone with a Kubernetes client certificate could access etcd!) - The etcd command line flags involved are: `--peer-client-cert-auth=true` to activate it `--peer-cert-file`, `--peer-key-file`, `--peer-trusted-ca-file` .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## etcd clients - The only¹ thing that connects to etcd is the API server - Authentication uses TLS certificates with a separate sub-CA (for the same reasons as for etcd inter-peer authentication) - The etcd command line flags involved are: `--client-cert-auth=true` to activate it `--trusted-ca-file`, `--cert-file`, `--key-file` - The API server command line flags involved are: `--etcd-cafile`, `--etcd-certfile`, `--etcd-keyfile` .footnote[¹Technically, there is also the etcd healthcheck. Let's ignore it for now.] 
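As an illustration, the two sides might be wired up like this (a sketch; the file paths are hypothetical and depend on how the cluster was installed):

```bash
# etcd: present a serving certificate, and require client certificates
# signed by the CA given in --trusted-ca-file
etcd \
  --cert-file=/etc/etcd/pki/server.crt \
  --key-file=/etc/etcd/pki/server.key \
  --client-cert-auth=true \
  --trusted-ca-file=/etc/etcd/pki/etcd-ca.crt

# API server: verify etcd's serving certificate with that same CA,
# and present a client certificate signed by it
# (all other kube-apiserver flags omitted)
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/etcd/pki/etcd-ca.crt \
  --etcd-certfile=/etc/etcd/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/etcd/pki/apiserver-etcd-client.key
```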
.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## etcd authorization - etcd supports RBAC, but Kubernetes doesn't use it by default (note: etcd RBAC is completely different from Kubernetes RBAC!) - By default, etcd access is "all or nothing" (if you have a valid certificate, you get in) - Be very careful if you use the same root CA for etcd and other things (if etcd trusts the root CA, then anyone with a valid cert gets full etcd access) - For more details, check the following resources: - [etcd documentation on authentication](https://etcd.io/docs/current/op-guide/authentication/) - [PKI The Wrong Way](https://www.youtube.com/watch?v=gcOLDEzsVHI) at KubeCon NA 2020 .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## API server clients - The API server has a sophisticated authentication and authorization system - For connections coming from other components of the control plane: - authentication uses certificates (trusting the certificates' subject or CN) - authorization uses whatever mechanism is enabled (most oftentimes, RBAC) - The relevant API server flags are: `--client-ca-file`, `--tls-cert-file`, `--tls-private-key-file` - Each component connecting to the API server takes a `--kubeconfig` flag (to specify a kubeconfig file containing the CA cert, client key, and client cert) - Yes, that kubeconfig file follows the same format as our `~/.kube/config` file! .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Kubelet and API server - Communication between kubelet and API server can be established both ways - Kubelet → API server: - kubelet registers itself ("hi, I'm node42, do you have work for me?") - connection is kept open and re-established if it breaks - that's how the kubelet knows which pods to start/stop - API server → kubelet: - used to retrieve logs, exec, attach to containers .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Kubelet → API server - Kubelet is started with `--kubeconfig` with API server information - The client certificate of the kubelet will typically have: `CN=system:node:
` and groups `O=system:nodes` - Nothing special on the API server side (it will authenticate like any other client) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## API server → kubelet - Kubelet is started with the flag `--client-ca-file` (typically using the same CA as the API server) - API server will use a dedicated key pair when contacting kubelet (specified with `--kubelet-client-certificate` and `--kubelet-client-key`) - Authorization uses webhooks (enabled with `--authorization-mode=Webhook` on kubelet) - The webhook server is the API server itself (the kubelet sends back a request to the API server to ask, "can this person do that?") .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Scheduler - The scheduler connects to the API server like an ordinary client - The certificate of the scheduler will have `CN=system:kube-scheduler` .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Controller manager - The controller manager is also a normal client to the API server - Its certificate will have `CN=system:kube-controller-manager` - If we use the CSR API, the controller manager needs the CA cert and key (passed with flags `--cluster-signing-cert-file` and `--cluster-signing-key-file`) - We usually want the controller manager to generate tokens for service accounts - These tokens deserve some details (on the next slide!) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- class: extra-details ## How are these permissions set up? - A bunch of roles and bindings are defined as constants in the API server code: [auth/authorizer/rbac/bootstrappolicy/policy.go](https://github.com/kubernetes/kubernetes/blob/release-1.19/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/policy.go#L188) - They are created automatically when the API server starts: [registry/rbac/rest/storage_rbac.go](https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/registry/rbac/rest/storage_rbac.go#L140) - We must use the correct Common Names (`CN`) for the control plane certificates (since the bindings defined above refer to these common names) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Service account tokens - Each time we create a service account, the controller manager generates a token - These tokens are JWT tokens, signed with a particular key - These tokens are used for authentication with the API server (and therefore, the API server needs to be able to verify their integrity) - This uses another keypair: - the private key (used for signature) is passed to the controller manager
(using flags `--service-account-private-key-file` and `--root-ca-file`) - the public key (used for verification) is passed to the API server
(using flag `--service-account-key-file`) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## kube-proxy - kube-proxy is "yet another API server client" - In many clusters, it runs as a DaemonSet - In that case, it will have its own Service Account and associated permissions - It will authenticate using the token of that Service Account .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Webhooks - We mentioned webhooks earlier; how does that really work? - The Kubernetes API has special resource types to check permissions - One of them is SubjectAccessReview - To check if a particular user can do a particular action on a particular resource: - we prepare a SubjectAccessReview object - we send that object to the API server - the API server responds with allow/deny (and optional explanations) - Using webhooks for authorization = sending SAR to authorize each request .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Subject Access Review Here is an example showing how to check if `jean.doe` can `get` some `pods` in `kube-system`:

```bash
kubectl -v9 create -f- <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jean.doe
  resourceAttributes:
    namespace: kube-system
    verb: get
    resource: pods
EOF
```
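In the verbose output, the answer is in the `status` stanza that the API server fills in on the returned object (a sketch with illustrative values; the exact fields depend on the authorizers in use):

```yaml
# Filled in by the API server on the returned SubjectAccessReview (illustrative):
status:
  allowed: false   # the verdict; a `reason` field, when present, explains which rule matched
```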
.debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we're running - It's easy to check the version for the API server .lab[ - Log into node `oldversion1` - Check the version of kubectl and of the API server: ```bash kubectl version ``` ] - In a HA setup with multiple API servers, they can have different versions - Running the command above multiple times can return different values .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Node versions - It's also easy to check the version of kubelet .lab[ - Check node versions (includes kubelet, kernel, container engine): ```bash kubectl get nodes -o wide ``` ] - Different nodes can run different kubelet versions - Different nodes can run different kernel versions - Different nodes can run different container engines .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Control plane versions - If the control plane is self-hosted (running in pods), we can check it .lab[ - Show image versions for all pods in `kube-system` namespace: ```bash kubectl --namespace=kube-system get pods -o json \ | jq -r ' .items[] | [.spec.nodeName, .metadata.name] + (.spec.containers[].image | split(":")) | @tsv ' \ | column -t ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## What version are we running anyway? - When I say, "I'm running Kubernetes 1.28", is that the version of: - kubectl - API server - kubelet - controller manager - something else? .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Other versions that are important - etcd - kube-dns or CoreDNS - CNI plugin(s) - Network controller, network policy controller - Container engine - Linux kernel .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Important questions - Should we upgrade the control plane before or after the kubelets? - Within the control plane, should we upgrade the API server first or last? - How often should we upgrade? - How long are versions maintained? - All the answers are in [the documentation about version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/)! - Let's review the key elements together ... .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Kubernetes uses semantic versioning - Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.28.9: - MAJOR = 1 - MINOR = 28 - PATCH = 9 - It's always possible to mix and match different PATCH releases (e.g. 1.28.9 and 1.28.13 are compatible) - It is recommended to run the latest PATCH release (but it's mandatory only when there is a security advisory) .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Version skew - API server must be more recent than its clients (kubelet and control plane) - ... Which means it must always be upgraded first - All components support a difference of one¹ MINOR version - This allows live upgrades (since we can mix e.g. 
1.28 and 1.29) - It also means that going from 1.28 to 1.30 requires going through 1.29 .footnote[¹Except kubelet, which can be up to two MINOR behind API server, and kubectl, which can be one MINOR ahead or behind API server.] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Release cycle - There is a new PATCH release whenever necessary (every few weeks, or "ASAP" when there is a security vulnerability) - There is a new MINOR release every 3 months (approximately) - At any given time, three MINOR releases are maintained - ... Which means that MINOR releases are maintained for approximately 9 months - We should expect to upgrade at least every 3 months (on average) .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## General guidelines - To update a component, use whatever was used to install it - If it's a distro package, update that distro package - If it's a container or pod, update that container or pod - If you used configuration management, update with that .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Know where your binaries come from - Sometimes, we need to upgrade *quickly* (when a vulnerability is announced and patched) - If we are using an installer, we should: - make sure it's using upstream packages - or make sure that whatever packages it uses are current - make sure we can tell it to pin specific component versions .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## In practice - We are going to update a few cluster components - We will change the kubelet version on one node - We will change the version of the API server - We will work with cluster `oldversion` (nodes `oldversion1`, `oldversion2`, `oldversion3`) .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Updating the API server - This cluster has been deployed with kubeadm - The control plane runs in *static pods* - These pods are started automatically by kubelet (even when kubelet can't contact the API server) - They are defined in YAML files in `/etc/kubernetes/manifests` (this path is set by a kubelet command-line flag) - kubelet automatically updates the pods when the files are changed .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Changing the API server version - We will edit the YAML file to use a different image version .lab[ - Log into node `oldversion1` - Check API server version: ```bash kubectl version ``` - Edit the API server pod manifest: ```bash sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` - Look for the `image:` line, and update it to e.g. `v1.30.1` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we've done - The API server will be briefly unavailable while kubelet restarts it .lab[ - Check the API server version: ```bash kubectl version ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Was that a good idea? 
-- **No!** -- - Remember the guideline we gave earlier: *To update a component, use whatever was used to install it.* - This control plane was deployed with kubeadm - We should use kubeadm to upgrade it! .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Updating the whole control plane - Let's make it right, and use kubeadm to upgrade the entire control plane (note: this is possible only because the cluster was installed with kubeadm) .lab[ - Check what will be upgraded: ```bash sudo kubeadm upgrade plan ``` ] Note 1: kubeadm thinks that our cluster is running 1.30.1.
It is confused by our manual upgrade of the API server! Note 2: kubeadm itself is still version 1.28.X.
It doesn't know how to upgrade to 1.29.X. .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Upgrading kubeadm - First things first: we need to upgrade kubeadm - The Kubernetes package repositories are now split by minor versions (i.e. there is one repository for 1.28, another for 1.29, etc.) - This avoids accidentally upgrading from one minor version to another (e.g. with unattended upgrades or if packages haven't been held/pinned) - We'll need to add the new package repository and unpin packages! .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Installing the new packages - Edit `/etc/apt/sources.list.d/kubernetes.list` (or copy it to e.g. `kubernetes-1.29.list` and edit that) - `apt-get update` - Now edit (or remove) `/etc/apt/preferences.d/kubernetes` - `apt-get install kubeadm` should now upgrade `kubeadm` correctly! 🎉 .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Reverting our manual API server upgrade - First, we should revert our `image:` change (so that kubeadm executes the right migration steps) .lab[ - Edit the API server pod manifest: ```bash sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` - Look for the `image:` line, and restore it to the original value (e.g. `v1.28.9`) - Wait for the control plane to come back up ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Upgrading the cluster with kubeadm - Now we can let kubeadm do its job! .lab[ - Check the upgrade plan: ```bash sudo kubeadm upgrade plan ``` - Perform the upgrade: ```bash sudo kubeadm upgrade apply v1.29.0 ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Updating kubelet - These nodes have been installed using the official Kubernetes packages - We can therefore use `apt` or `apt-get` .lab[ - Log into node `oldversion2` - Update package lists and APT pins like we did before - Then upgrade kubelet ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we've done .lab[ - Log into node `oldversion1` - Check node versions: ```bash kubectl get nodes -o wide ``` - Create a deployment and scale it to make sure that the node still works ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Was that a good idea? -- **Almost!** -- - Yes, kubelet was installed with distribution packages - However, kubeadm took care of configuring kubelet (when doing `kubeadm join ...`) - We were supposed to run a special command *before* upgrading kubelet! 
- That command should be executed on each node - It will download the kubelet configuration generated by kubeadm .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Upgrading kubelet the right way - We need to upgrade kubeadm, upgrade kubelet config, then upgrade kubelet (after upgrading the control plane) .lab[ - Execute the whole upgrade procedure on each node: ```bash for N in 1 2 3; do ssh oldversion$N " sudo sed -i s/1.28/1.29/ /etc/apt/sources.list.d/kubernetes.list && sudo rm /etc/apt/preferences.d/kubernetes && sudo apt update && sudo apt install kubeadm -y && sudo kubeadm upgrade node && sudo apt install kubelet -y" done ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we've done - All our nodes should now be updated to version 1.29 .lab[ - Check nodes versions: ```bash kubectl get nodes -o wide ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## And now, was that a good idea? -- **Almost!** -- - The official recommendation is to *drain* a node before performing node maintenance (migrate all workloads off the node before upgrading it) - How do we do that? - Is it really necessary? - Let's see! .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Draining a node - This can be achieved with the `kubectl drain` command, which will: - *cordon* the node (prevent new pods from being scheduled there) - *evict* all the pods running on the node (delete them gracefully) - the evicted pods will automatically be recreated somewhere else - evictions might be blocked in some cases (Pod Disruption Budgets, `emptyDir` volumes) - Once the node is drained, it can safely be upgraded, restarted... - Once it's ready, it can be put back in commission with `kubectl uncordon` .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Is it necessary? - When upgrading kubelet from one patch-level version to another: - it's *probably fine* - When upgrading system packages: - it's *probably fine* - except [when it's not][datadog-systemd-outage] - When upgrading the kernel: - it's *probably fine* - ...as long as we can tolerate a restart of the containers on the node - ...and that they will be unavailable for a few minutes (during the reboot) [datadog-systemd-outage]: https://www.datadoghq.com/blog/engineering/2023-03-08-deep-dive-into-platform-level-impact/ .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Is it necessary? - When upgrading kubelet from one minor version to another: - it *may or may not be fine* - in some cases (e.g. migrating from Docker to containerd) it *will not* - Here's what [the documentation][node-upgrade-docs] says: *Draining nodes before upgrading kubelet ensures that pods are re-admitted and containers are re-created, which may be necessary to resolve some security issues or other important bugs.* - Do it at your own risk, and if you do, test extensively in staging environments! 
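For reference, the drain / uncordon sequence looks like this (a sketch using one of this cluster's nodes; `--ignore-daemonsets` is usually needed because DaemonSet pods cannot be evicted):

```bash
### Evict the workloads and mark the node unschedulable
kubectl drain oldversion2 --ignore-daemonsets --delete-emptydir-data

### ...upgrade kubelet, reboot, etc., then put the node back in rotation
kubectl uncordon oldversion2
```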
[node-upgrade-docs]: https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/#manual-deployments .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Database operators to the rescue - Moving stateful pods (e.g.: database server) can cause downtime - Database replication can help: - if a node contains database servers, we make sure these servers aren't primaries - if they are primaries, we execute a *switch over* - Some database operators (e.g. [CNPG]) will do that switch over automatically (when they detect that a node has been *cordoned*) [CNPG]: https://cloudnative-pg.io/ .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- class: extra-details ## Skipping versions - This example worked because we went from 1.28 to 1.29 - If you are upgrading from e.g. 1.26, you will have to go through 1.27 first - This means upgrading kubeadm to 1.27.X, then using it to upgrade the cluster - Then upgrading kubeadm to 1.28.X, etc. - **Make sure to read the release notes before upgrading!** ??? :EN:- Best practices for cluster upgrades :EN:- Example: upgrading a kubeadm cluster :FR:- Bonnes pratiques pour la mise à jour des clusters :FR:- Exemple : mettre à jour un cluster kubeadm .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- class: pic .interstitial[] --- name: toc-kustomize class: title Kustomize .nav[ [Previous part](#toc-upgrading-clusters) | [Back to table of contents](#toc-part-4) | [Next part](#toc-managing-stacks-with-helm) ] .debug[(automatically generated title slide)] --- # Kustomize - Kustomize lets us transform Kubernetes resources: *YAML + kustomize → new YAML* - Starting point = valid resource files (i.e. something that we could load with `kubectl apply -f`) - Recipe = a *kustomization* file (describing how to transform the resources) - Result = new resource files (that we can load with `kubectl apply -f`) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Pros and cons - Relatively easy to get started (just get some existing YAML files) - Easy to leverage existing "upstream" YAML files (or other *kustomizations*) - Somewhat integrated with `kubectl` (but only "somewhat" because of version discrepancies) - Less complex than e.g. Helm, but also less powerful - No central index like the Artifact Hub (but is there a need for it?) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Kustomize in a nutshell - Get some valid YAML (our "resources") - Write a *kustomization* (technically, a file named `kustomization.yaml`) - reference our resources - reference other kustomizations - add some *patches* - ... - Use that kustomization either with `kustomize build` or `kubectl apply -k` - Write new kustomizations referencing the first one to handle minor differences .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## A simple kustomization This features a Deployment, Service, and Ingress (in separate files), and a couple of patches (to change the number of replicas and the hostname used in the Ingress). 
```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patchesStrategicMerge: - scale-deployment.yaml - ingress-hostname.yaml resources: - deployment.yaml - service.yaml - ingress.yaml ``` On the next slide, let's see a more complex example ... .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## A more complex Kustomization .small[ ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization commonAnnotations: mood: 😎 commonLabels: add-this-to-all-my-resources: please namePrefix: prod- patchesStrategicMerge: - prod-scaling.yaml - prod-healthchecks.yaml bases: - api/ - frontend/ - db/ - github.com/example/app?ref=tag-or-branch resources: - ingress.yaml - permissions.yaml configMapGenerator: - name: appconfig files: - global.conf - local.conf=prod.conf ``` ] .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Glossary - A *base* is a kustomization that is referred to by other kustomizations - An *overlay* is a kustomization that refers to other kustomizations - A kustomization can be both a base and an overlay at the same time (a kustomization can refer to another, which can refer to a third) - A *patch* describes how to alter an existing resource (e.g. to change the image in a Deployment; or scaling parameters; etc.) - A *variant* is the final outcome of applying bases + overlays (See the [kustomize glossary][glossary] for more definitions!) [glossary]: https://kubectl.docs.kubernetes.io/references/kustomize/glossary/ .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## What Kustomize *cannot* do - By design, there are a number of things that Kustomize won't do - For instance: - using command-line arguments or environment variables to generate a variant - overlays can only *add* resources, not *remove* them - See the full list of [eschewed features](https://kubectl.docs.kubernetes.io/faq/kustomize/eschewedfeatures/) for more details .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Kustomize workflows - The Kustomize documentation proposes two different workflows - *Bespoke configuration* - base and overlays managed by the same team - *Off-the-shelf configuration* (OTS) - base and overlays managed by different teams - base is regularly updated by "upstream" (e.g. a vendor) - our overlays and patches should (hopefully!) 
apply cleanly - we may regularly update the base, or use a remote base .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Remote bases - Kustomize can also use bases that are remote git repositories - Examples: github.com/jpetazzo/kubercoins (remote git repository) github.com/jpetazzo/kubercoins?ref=kustomize (specific tag or branch) - Note that this only works for kustomizations, not individual resources (the specified repository or directory must contain a `kustomization.yaml` file) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- class: extra-details ## Hashicorp go-getter - Some versions of Kustomize support additional forms for remote resources - Examples: https://releases.hello.io/k/1.0.zip (remote archive) https://releases.hello.io/k/1.0.zip//some-subdir (subdirectory in archive) - This relies on [hashicorp/go-getter](https://github.com/hashicorp/go-getter#url-format) - ... But it prevents Kustomize inclusion in `kubectl` - Avoid them! - See [kustomize#3578](https://github.com/kubernetes-sigs/kustomize/issues/3578) for details .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Managing `kustomization.yaml` - There are many ways to manage `kustomization.yaml` files, including: - the `kustomize` CLI - opening the file with our favorite text editor - ~~web wizards like [Replicated Ship](https://www.replicated.com/ship/)~~ (deprecated) - Let's see these in action! .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Working with the `kustomize` CLI General workflow: 1. `kustomize create` to generate an empty `kustomization.yaml` file 2. `kustomize edit add resource` to add Kubernetes YAML files to it 3. `kustomize edit add patch` to add patches to said resources 4. `kustomize edit add ...` or `kustomize edit set ...` (many options!) 5. `kustomize build | kubectl apply -f-` or `kubectl apply -k .` 6. Repeat steps 4-5 as many times as necessary! .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Why work with the CLI? - Editing manually can introduce errors and typos - With the CLI, we don't need to remember the name of all the options and parameters (just add `--help` after any command to see possible options!) - Make sure to install the completion and try e.g. `kustomize edit add [TAB][TAB]` .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## `kustomize create` .lab[ - Change to a new directory: ```bash mkdir ~/kustomcoins cd ~/kustomcoins ``` - Run `kustomize create` with the kustomcoins repository: ```bash kustomize create --resources https://github.com/jpetazzo/kubercoins ``` - Run `kustomize build | kubectl apply -f-` ] .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## `kubectl` integration - Kustomize has been integrated in `kubectl` (since Kubernetes 1.14) - `kubectl kustomize` is an equivalent to `kustomize build` - commands that use `-f` can also use `-k` (`kubectl apply`/`delete`/...) - The `kustomize` tool is still needed if we want to use `create`, `edit`, ... 
- Kubernetes 1.14 to 1.20 use Kustomize 2.0.3 - Kubernetes 1.21 jumps to Kustomize 4.1.2 - Future versions should track Kustomize updates more closely .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- class: extra-details ## Differences between 2.0.3 and later - Kustomize 2.1 / 3.0 deprecates `bases` (they should be listed in `resources`) (this means that "modern" `kustomize edit add resource` won't work with "old" `kubectl apply -k`) - Kustomize 2.1 introduces `replicas` and `envs` - Kustomize 3.1 introduces multipatches - Kustomize 3.2 introduces inline patches in `kustomization.yaml` - Kustomize 3.3 to 3.10 is mostly internal refactoring - Kustomize 4.0 drops go-getter again - Kustomize 4.1 allows patching kind and name .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Adding labels Labels can be added to all resources like this: ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... commonLabels: app.kubernetes.io/name: dockercoins ``` Or with the equivalent CLI command: ```bash kustomize edit add label app.kubernetes.io/name:dockercoins ``` .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Use cases for labels - Example: clean up components that have been removed from the kustomization - Assuming that `commonLabels` have been set as shown on the previous slide: ```bash kubectl apply -k . --prune --selector app.kubernetes.io/name=dockercoins ``` - ... This command removes resources that have been removed from the kustomization - Technically, resources with: - a `kubectl.kubernetes.io/last-applied-configuration` annotation - labels matching the given selector .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Scaling Instead of using a patch, scaling can be done like this: ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... replicas: - name: worker count: 5 ``` or the CLI equivalent: ```bash kustomize edit set replicas worker=5 ``` It will automatically work with Deployments, ReplicaSets, StatefulSets. (For other resource types, fall back to a patch.) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Updating images Instead of using patches, images can be changed like this: ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ...
images: - name: postgres newName: harbor.enix.io/my-postgres - name: dockercoins/worker newTag: v0.2 - name: dockercoins/hasher newName: registry.dockercoins.io/hasher newTag: v0.2 - name: alpine digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3 ``` .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Updating images with the CLI To add an entry in the `images:` section of the kustomization: ```bash kustomize edit set image name=[newName][:newTag][@digest] ``` - `[]` denote optional parameters - `:` and `@` are the delimiters used to indicate a field Examples: ```bash kustomize edit set image dockercoins/worker=ghcr.io/dockercoins/worker kustomize edit set image dockercoins/worker=ghcr.io/dockercoins/worker:v0.2 kustomize edit set image dockercoins/worker=:v0.2 ``` .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Updating images, pros and cons - Very convenient when the same image appears multiple times - Very convenient to define tags (or pin to hashes) outside of the main YAML - Doesn't support wildcard or generic substitutions: - cannot "replace `dockercoins/*` with `ghcr.io/dockercoins/*`" - cannot "tag all `dockercoins/*` with `v0.2`" - Only patches "well-known" image fields (won't work with CRDs referencing images) - Helm can deal with these scenarios, for instance: ```yaml image: {{ .Values.registry }}/worker:{{ .Values.version }} ``` .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Advanced resource patching The example below shows how to: - patch multiple resources with a selector (new in Kustomize 3.1) - use an inline patch instead of a separate patch file (new in Kustomize 3.2) ```yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... patches: - patch: |- - op: replace path: /spec/template/spec/containers/0/image value: alpine target: kind: Deployment labelSelector: "app" ``` (This replaces all images of Deployments matching the `app` selector with `alpine`.) .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Advanced resource patching, pros and cons - Very convenient to patch an arbitrary number of resources - Very convenient to patch any kind of resource, including CRDs - Doesn't support "fine-grained" patching (e.g. image registry or tag) - Once again, Helm can do it: ```yaml image: {{ .Values.registry }}/worker:{{ .Values.version }} ``` .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- ## Differences with Helm - Helm charts generally require more upfront work (while kustomize "bases" are standard Kubernetes YAML) - ... But Helm charts are also more powerful; their templating language can: - conditionally include/exclude resources or blocks within resources - generate values by concatenating, hashing, transforming parameters - generate values or resources by iteration (`{{ range ... }}`) - access the Kubernetes API during template evaluation - [and much more](https://helm.sh/docs/chart_template_guide/) ??? 
:EN:- Packaging and running apps with Kustomize :FR:- *Packaging* d'applications avec Kustomize .debug[[k8s/kustomize.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kustomize.md)] --- class: pic .interstitial[] --- name: toc-managing-stacks-with-helm class: title Managing stacks with Helm .nav[ [Previous part](#toc-kustomize) | [Back to table of contents](#toc-part-4) | [Next part](#toc-helm-chart-format) ] .debug[(automatically generated title slide)] --- # Managing stacks with Helm - Helm is a (kind of!) package manager for Kubernetes - We can use it to: - find existing packages (called "charts") created by other folks - install these packages, configuring them for our particular setup - package our own things (for distribution or for internal use) - manage the lifecycle of these installs (rollback to previous version etc.) - It's a "CNCF graduate project", indicating a certain level of maturity (more on that later) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## From `kubectl run` to YAML - We can create resources with one-line commands (`kubectl run`, `kubectl create deployment`, `kubectl expose`...) - We can also create resources by loading YAML files (with `kubectl apply -f`, `kubectl create -f`...) - There can be multiple resources in a single YAML file (making them convenient to deploy entire stacks) - However, these YAML bundles often need to be customized (e.g.: number of replicas, image version to use, features to enable...) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Beyond YAML - Very often, after putting together our first `app.yaml`, we end up with: - `app-prod.yaml` - `app-staging.yaml` - `app-dev.yaml` - instructions indicating to users "please tweak this and that in the YAML" - That's where using something like [CUE](https://github.com/cue-labs/cue-by-example/tree/main/003_kubernetes_tutorial), [Kustomize](https://kustomize.io/), or [Helm](https://helm.sh/) can help! - Now we can do something like this: ```bash helm install app ... --set this.parameter=that.value ``` .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Other features of Helm - With Helm, we create "charts" - These charts can be used internally or distributed publicly - Public charts can be indexed through the [Artifact Hub](https://artifacthub.io/) - This gives us a way to find and install other folks' charts - Helm also gives us ways to manage the lifecycle of what we install: - keep track of what we have installed - upgrade versions, change parameters, roll back, uninstall - Furthermore, even if it's not "the" standard, it's definitely "a" standard! .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## CNCF graduation status - On April 30th 2020, Helm was the 10th project to *graduate* within the CNCF (alongside Containerd, Prometheus, and Kubernetes itself) - This is an acknowledgement by the CNCF for projects that *demonstrate thriving adoption, an open governance process,
and a strong commitment to community, sustainability, and inclusivity.* - See [CNCF announcement](https://www.cncf.io/announcement/2020/04/30/cloud-native-computing-foundation-announces-helm-graduation/) and [Helm announcement](https://helm.sh/blog/celebrating-helms-cncf-graduation/) - In other words: Helm is here to stay .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Helm concepts - `helm` is a CLI tool - It is used to find, install, upgrade *charts* - A chart is an archive containing templatized YAML bundles - Charts are versioned - Charts can be stored on private or public repositories .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Differences between charts and packages - A package (deb, rpm...) contains binaries, libraries, etc. - A chart contains YAML manifests (the binaries, libraries, etc. are in the images referenced by the chart) - On most distributions, a package can only be installed once (installing another version replaces the installed one) - A chart can be installed multiple times - Each installation is called a *release* - This allows us to install e.g. 10 instances of MongoDB (with potentially different versions and configurations) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- class: extra-details ## Wait a minute ... *But, on my Debian system, I have Python 2 **and** Python 3.
Also, I have multiple versions of the Postgres database engine!* Yes! But they have different package names: - `python2.7`, `python3.8` - `postgresql-10`, `postgresql-11` Good to know: the Postgres package in Debian includes provisions to deploy multiple Postgres servers on the same system, but it's an exception (and it's a lot of work done by the package maintainer, not by the `dpkg` or `apt` tools). .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Helm 2 vs Helm 3 - Helm 3 was released [November 13, 2019](https://helm.sh/blog/helm-3-released/) - Charts remain compatible between Helm 2 and Helm 3 - The CLI is very similar (with minor changes to some commands) - The main difference is that Helm 2 uses `tiller`, a server-side component - Helm 3 doesn't use `tiller` at all, making it simpler (yay!) - If you see references to `tiller` in a tutorial, documentation... that doc is obsolete! .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- class: extra-details ## What was the problem with `tiller`? - With Helm 3: - the `helm` CLI communicates directly with the Kubernetes API - it creates resources (deployments, services...) with our credentials - With Helm 2: - the `helm` CLI communicates with `tiller`, telling `tiller` what to do - `tiller` then communicates with the Kubernetes API, using its own credentials - This indirect model caused significant permissions headaches - It also made it more complicated to embed Helm in other tools .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Installing Helm - If the `helm` CLI is not installed in your environment, install it .lab[ - Check if `helm` is installed: ```bash helm ``` - If it's not installed, run the following command: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ | bash ``` ] (To install Helm 2, replace `get-helm-3` with `get`.) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Charts and repositories - A *repository* (or repo in short) is a collection of charts - It's just a bunch of files (they can be hosted by a static HTTP server, or on a local directory) - We can add "repos" to Helm, giving them a nickname - The nickname is used when referring to charts on that repo (for instance, if we try to install `hello/world`, that means the chart `world` on the repo `hello`; and that repo `hello` might be something like https://blahblah.hello.io/charts/) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## How to find charts - Go to the [Artifact Hub](https://artifacthub.io/packages/search?kind=0) (https://artifacthub.io) - Or use `helm search hub ...` from the CLI - Let's try to find a Helm chart for something called "OWASP Juice Shop"! (it is a famous demo app used in security challenges) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Finding charts from the CLI - We can use `helm search hub
` .lab[ - Look for the OWASP Juice Shop app: ```bash helm search hub owasp juice ``` - Since the URLs are truncated, try with the YAML output: ```bash helm search hub owasp juice -o yaml ``` ] Then go to → https://artifacthub.io/packages/helm/securecodebox/juice-shop .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Finding charts on the web - We can also use the Artifact Hub search feature .lab[ - Go to https://artifacthub.io/ - In the search box on top, enter "owasp juice" - Click on the "juice-shop" result (not "multi-juicer" or "juicy-ctf") ] .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Installing the chart - Click on the "Install" button; it will show instructions .lab[ - First, add the repository for that chart: ```bash helm repo add juice https://charts.securecodebox.io ``` - Then, install the chart: ```bash helm install my-juice-shop juice/juice-shop ``` ] Note: it is also possible to install a chart directly, with `--repo https://...` .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Charts and releases - "Installing a chart" means creating a *release* - In the previous example, the release was named "my-juice-shop" - We can also use `--generate-name` to ask Helm to generate a name for us .lab[ - List the releases: ```bash helm list ``` - Check that we have a `my-juice-shop-...` Pod up and running: ```bash kubectl get pods ``` ] .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Viewing resources of a release - This specific chart labels all its resources with an `app.kubernetes.io/instance` label - We can use a selector to see these resources .lab[ - List all the resources created by this release: ```bash kubectl get all --selector=app.kubernetes.io/instance=my-juice-shop ``` ] Note: this label wasn't added automatically by Helm.
It is defined in that chart. In other words, not all charts will provide this label. .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Configuring a release - By default, `juice/juice-shop` creates a service of type `ClusterIP` - We would like to change that to a `NodePort` - We could use `kubectl edit service my-juice-shop`, but ... ... our changes would get overwritten next time we update that chart! - Instead, we are going to *set a value* - Values are parameters that the chart can use to change its behavior - Values have default values - Each chart is free to define its own values and their defaults .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Checking possible values - We can inspect a chart with `helm show` or `helm inspect` .lab[ - Look at the README for the app: ```bash helm show readme juice/juice-shop ``` - Look at the values and their defaults: ```bash helm show values juice/juice-shop ``` ] The `values` may or may not have useful comments. The `readme` may or may not have (accurate) explanations for the values. (If we're unlucky, there won't be any indication about how to use the values!) .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Setting values - Values can be set when installing a chart, or when upgrading it - We are going to update `my-juice-shop` to change the type of the service .lab[ - Update `my-juice-shop`: ```bash helm upgrade my-juice-shop juice/juice-shop \ --set service.type=NodePort ``` ] Note that we have to specify the chart that we use (`juice/juice-shop`), even if we just want to update some values. We can set multiple values. If we want to set many values, we can use `-f`/`--values` and pass a YAML file with all the values. All unspecified values will take the default values defined in the chart. .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- ## Connecting to the Juice Shop - Let's check the app that we just installed .lab[ - Check the node port allocated to the service: ```bash kubectl get service my-juice-shop PORT=$(kubectl get service my-juice-shop -o jsonpath={..nodePort}) ``` - Connect to it: ```bash curl localhost:$PORT/ ``` ] ??? :EN:- Helm concepts :EN:- Installing software with Helm :EN:- Finding charts on the Artifact Hub :FR:- Fonctionnement général de Helm :FR:- Installer des composants via Helm :FR:- Trouver des *charts* sur *Artifact Hub* :T: Getting started with Helm and its concepts :Q: Which comparison is the most adequate? :A: Helm is a firewall, charts are access lists :A: ✔️Helm is a package manager, charts are packages :A: Helm is an artefact repository, charts are artefacts :A: Helm is a CI/CD platform, charts are CI/CD pipelines :Q: What's required to distribute a Helm chart? :A: A Helm commercial license :A: A Docker registry :A: An account on the Helm Hub :A: ✔️An HTTP server .debug[[k8s/helm-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-intro.md)] --- class: pic .interstitial[] --- name: toc-helm-chart-format class: title Helm chart format .nav[ [Previous part](#toc-managing-stacks-with-helm) | [Back to table of contents](#toc-part-4) | [Next part](#toc-creating-a-basic-chart) ] .debug[(automatically generated title slide)] --- # Helm chart format - What exactly is a chart? - What's in it?
- What would be involved in creating a chart? (we won't create a chart, but we'll see the required steps) .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## What is a chart - A chart is a set of files - Some of these files are mandatory for the chart to be viable (more on that later) - These files are typically packed in a tarball - These tarballs are stored in "repos" (which can be static HTTP servers) - We can install from a repo, from a local tarball, or an unpacked tarball (the latter option is preferred when developing a chart) .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## What's in a chart - A chart must have at least: - a `templates` directory, with YAML manifests for Kubernetes resources - a `values.yaml` file, containing (tunable) parameters for the chart - a `Chart.yaml` file, containing metadata (name, version, description ...) - Let's look at a simple chart for a basic demo app .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Adding the repo - If you haven't done it before, you need to add the repo for that chart .lab[ - Add the repo that holds the chart for the OWASP Juice Shop: ```bash helm repo add juice https://charts.securecodebox.io ``` ] .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Downloading a chart - We can use `helm pull` to download a chart from a repo .lab[ - Download the tarball for `juice/juice-shop`: ```bash helm pull juice/juice-shop ``` (This will create a file named `juice-shop-X.Y.Z.tgz`.) - Or, download + untar `juice/juice-shop`: ```bash helm pull juice/juice-shop --untar ``` (This will create a directory named `juice-shop`.) ] .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Looking at the chart's content - Let's look at the files and directories in the `juice-shop` chart .lab[ - Display the tree structure of the chart we just downloaded: ```bash tree juice-shop ``` ] We see the components mentioned above: `Chart.yaml`, `templates/`, `values.yaml`. .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Templates - The `templates/` directory contains YAML manifests for Kubernetes resources (Deployments, Services, etc.) - These manifests can contain template tags (using the standard Go template library) .lab[ - Look at the template file for the Service resource: ```bash cat juice-shop/templates/service.yaml ``` ] .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Analyzing the template file - Tags are identified by `{{ ... }}` - `{{ template "x.y" }}` expands a [named template](https://helm.sh/docs/chart_template_guide/named_templates/#declaring-and-using-templates-with-define-and-template) (previously defined with `{{ define "x.y" }}...stuff...{{ end }}`) - The `.` in `{{ template "x.y" . 
}}` is the *context* for that named template (so that the named template block can access variables from the local context) - `{{ .Release.xyz }}` refers to [built-in variables](https://helm.sh/docs/chart_template_guide/builtin_objects/) initialized by Helm (indicating the chart name, version, whether we are installing or upgrading ...) - `{{ .Values.xyz }}` refers to tunable/settable [values](https://helm.sh/docs/chart_template_guide/values_files/) (more on that in a minute) .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Values - Each chart comes with a [values file](https://helm.sh/docs/chart_template_guide/values_files/) - It's a YAML file containing a set of default parameters for the chart - The values can be accessed in templates with e.g. `{{ .Values.x.y }}` (corresponding to field `y` in map `x` in the values file) - The values can be set or overridden when installing or upgrading a chart: - with `--set x.y=z` (can be used multiple times to set multiple values) - with `--values some-yaml-file.yaml` (set a bunch of values from a file) - Charts following best practices will have values following specific patterns (e.g. having a `service` map allowing us to set `service.type` etc.) .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Other useful tags - `{{ if x }} y {{ end }}` allows including `y` if `x` evaluates to `true` (can be used for e.g. healthchecks, annotations, or even an entire resource) - `{{ range x }} y {{ end }}` iterates over `x`, evaluating `y` each time (the elements of `x` are assigned to `.` in the range scope) - `{{- x }}`/`{{ x -}}` will remove whitespace on the left/right - The whole [Sprig](http://masterminds.github.io/sprig/) library, with additions: `lower` `upper` `quote` `trim` `default` `b64enc` `b64dec` `sha256sum` `indent` `toYaml` ... .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Pipelines - `{{ quote blah }}` can also be expressed as `{{ blah | quote }}` - With multiple arguments, `{{ x y z }}` can be expressed as `{{ z | x y }}` - Example: `{{ .Values.annotations | toYaml | indent 4 }}` - transforms the map under `annotations` into a YAML string - indents it with 4 spaces (to match the surrounding context) - Pipelines are not specific to Helm, but a feature of Go templates (check the [Go text/template documentation](https://golang.org/pkg/text/template/) for more details and examples) .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## README and NOTES.txt - At the top-level of the chart, it's a good idea to have a README - It will be viewable with e.g. `helm show readme juice/juice-shop` - In the `templates/` directory, we can also have a `NOTES.txt` file - When the template is installed (or upgraded), `NOTES.txt` is processed too (i.e. its `{{ ...
}}` tags are evaluated) - It gets displayed after the install or upgrade - It's a great place to generate messages to tell the user: - how to connect to the release they just deployed - any passwords or other things that we generated for them .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Additional files - We can place arbitrary files in the chart (outside of the `templates/` directory) - They can be accessed in templates with `.Files` - They can be transformed into ConfigMaps or Secrets with `AsConfig` and `AsSecrets` (see [this example](https://helm.sh/docs/chart_template_guide/accessing_files/#configmap-and-secrets-utility-functions) in the Helm docs) .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- ## Hooks and tests - We can define *hooks* in our templates - Hooks are resources annotated with `"helm.sh/hook": NAME-OF-HOOK` - Hook names include `pre-install`, `post-install`, `test`, [and much more](https://helm.sh/docs/topics/charts_hooks/#the-available-hooks) - The resources defined in hooks are loaded at a specific time - Hook execution is *synchronous* (if the resource is a Job or Pod, Helm will wait for its completion) - This can be used for database migrations, backups, notifications, smoke tests ... - Hooks named `test` are executed only when running `helm test RELEASE-NAME` ??? :EN:- Helm charts format :FR:- Le format des *Helm charts* .debug[[k8s/helm-chart-format.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-chart-format.md)] --- class: pic .interstitial[] --- name: toc-creating-a-basic-chart class: title Creating a basic chart .nav[ [Previous part](#toc-helm-chart-format) | [Back to table of contents](#toc-part-4) | [Next part](#toc-extra-content) ] .debug[(automatically generated title slide)] --- # Creating a basic chart - We are going to show a way to create a *very simplified* chart - In a real chart, *lots of things* would be templatized (Resource names, service types, number of replicas...) .lab[ - Create a sample chart: ```bash helm create dockercoins ``` - Move away the sample templates and create an empty template directory: ```bash mv dockercoins/templates dockercoins/default-templates mkdir dockercoins/templates ``` ] .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Adding the manifests of our app - There is a convenient `dockercoins.yaml` in the repo .lab[ - Copy the YAML file to the `templates` subdirectory in the chart: ```bash cp ~/container.training/k8s/dockercoins.yaml dockercoins/templates ``` ] - Note: it is probably easier to have multiple YAML files (rather than a single, big file with all the manifests) - But that works too! .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Testing our Helm chart - Our Helm chart is now ready (as surprising as it might seem!) .lab[ - Let's try to install the chart: ``` helm install helmcoins dockercoins ``` (`helmcoins` is the name of the release; `dockercoins` is the local path of the chart) ] -- - If the application is already deployed, this will fail: ``` Error: rendered manifests contain a resource that already exists.
Unable to continue with install: existing resource conflict: kind: Service, namespace: default, name: hasher ``` .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Switching to another namespace - If there is already a copy of dockercoins in the current namespace: - we can switch with `kubens` or `kubectl config set-context` - we can also tell Helm to use a different namespace .lab[ - Create a new namespace: ```bash kubectl create namespace helmcoins ``` - Deploy our chart in that namespace: ```bash helm install helmcoins dockercoins --namespace=helmcoins ``` ] .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Helm releases are namespaced - Let's try to see the release that we just deployed .lab[ - List Helm releases: ```bash helm list ``` ] Our release doesn't show up! We have to specify its namespace (or switch to that namespace). .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Specifying the namespace - Try again, with the correct namespace .lab[ - List Helm releases in `helmcoins`: ```bash helm list --namespace=helmcoins ``` ] .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Checking our new copy of DockerCoins - We can check the worker logs, or the web UI .lab[ - Retrieve the NodePort number of the web UI: ```bash kubectl get service webui --namespace=helmcoins ``` - Open it in a web browser - Look at the worker logs: ```bash kubectl logs deploy/worker --tail=10 --follow --namespace=helmcoins ``` ] Note: it might take a minute or two for the worker to start. .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Discussion, shortcomings - Helm (and Kubernetes) best practices recommend adding a number of labels and annotations (e.g. `app.kubernetes.io/name`, `helm.sh/chart`, `app.kubernetes.io/instance` ...) - Our basic chart doesn't have any of these - Our basic chart doesn't use any template tag - Does it make sense to use Helm in that case? - *Yes,* because Helm will: - track the resources created by the chart - save successive revisions, allowing us to roll back [Helm docs](https://helm.sh/docs/topics/chart_best_practices/labels/) and [Kubernetes docs](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/) have details about recommended annotations and labels. .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Cleaning up - Let's remove that chart before moving on .lab[ - Delete the release (don't forget to specify the namespace): ```bash helm delete helmcoins --namespace=helmcoins ``` ] .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Tips when writing charts - It is not necessary to `helm install`/`upgrade` to test a chart - If we just want to look at the generated YAML, use `helm template`: ```bash helm template ./my-chart helm template release-name ./my-chart ``` - Of course, we can use `--set` and `--values` too - Note that this won't fully validate the YAML! (e.g.
if there is `apiVersion: klingon` it won't complain) - This can be used when trying things out .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- ## Exploring the templating system Try to put something like this in a file in the `templates` directory: ```yaml hello: {{ .Values.service.port }} comment: {{/* something completely.invalid !!! */}} type: {{ .Values.service | typeOf | printf }} ### print complex value {{ .Values.service | toYaml }} ### indent it indented: {{ .Values.service | toYaml | indent 2 }} ``` Then run `helm template`. The result is not a valid YAML manifest, but this is a great debugging tool! ??? :EN:- Writing a basic Helm chart for the whole app :FR:- Écriture d'un *chart* Helm simplifié .debug[[k8s/helm-create-basic-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-basic-chart.md)] --- # (Extra content) .debug[[kube-adv.yml](https://github.com/jpetazzo/container.training/tree/main/slides/kube-adv.yml)] --- class: pic .interstitial[] --- name: toc-creating-better-helm-charts class: title Creating better Helm charts .nav[ [Previous part](#toc-extra-content) | [Back to table of contents](#toc-part-4) | [Next part](#toc-charts-using-other-charts) ] .debug[(automatically generated title slide)] --- # Creating better Helm charts - We are going to create a chart with the helper `helm create` - This will give us a chart implementing lots of Helm best practices (labels, annotations, structure of the `values.yaml` file ...) - We will use that chart as a generic Helm chart - We will use it to deploy DockerCoins - Each component of DockerCoins will have its own *release* - In other words, we will "install" that Helm chart multiple times (one time per component of DockerCoins) .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Creating a generic chart - Rather than starting from scratch, we will use `helm create` - This will give us a basic chart that we will customize .lab[ - Create a basic chart: ```bash cd ~ helm create helmcoins ``` ] This creates a basic chart in the directory `helmcoins`. .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## What's in the basic chart? - The basic chart will create a Deployment and a Service - Optionally, it will also include an Ingress - If we don't pass any values, it will deploy the `nginx` image - We can override many things in that chart - Let's try to deploy DockerCoins components with that chart! 
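For reference, the `values.yaml` generated by `helm create` exposes (among others) keys like the ones below. This is an approximate sketch (the exact content depends on the Helm version), but it shows why the chart is easy to reuse:

```yaml
replicaCount: 1
image:
  repository: nginx
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
resources: {}
```

(The next slides override the `image` section of these values for each DockerCoins component.)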
.debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Writing `values.yaml` for our components - We need to write one `values.yaml` file for each component (hasher, redis, rng, webui, worker) - We will start with the `values.yaml` of the chart, and remove what we don't need - We will create 5 files: hasher.yaml, redis.yaml, rng.yaml, webui.yaml, worker.yaml - In each file, we want to have: ```yaml image: repository: IMAGE-REPOSITORY-NAME tag: IMAGE-TAG ``` .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Getting started - For component X, we want to use the image dockercoins/X:v0.1 (for instance, for rng, we want to use the image dockercoins/rng:v0.1) - Exception: for redis, we want to use the official image redis:latest .lab[ - Write YAML files for the 5 components, with the following model: ```yaml image: repository: `IMAGE-REPOSITORY-NAME` (e.g. dockercoins/worker) tag: `IMAGE-TAG` (e.g. v0.1) ``` ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Deploying DockerCoins components - For convenience, let's work in a separate namespace .lab[ - Create a new namespace (if it doesn't already exist): ```bash kubectl create namespace helmcoins ``` - Switch to that namespace: ```bash kns helmcoins ``` ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Deploying the chart - To install a chart, we can use the following command: ```bash helm install COMPONENT-NAME CHART-DIRECTORY ``` - We can also use the following command, which is *idempotent*: ```bash helm upgrade COMPONENT-NAME CHART-DIRECTORY --install ``` .lab[ - Install the 5 components of DockerCoins: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade $COMPONENT helmcoins --install --values=$COMPONENT.yaml done ``` ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- class: extra-details ## "Idempotent" - Idempotent = that can be applied multiple times without changing the result (the word is commonly used in maths and computer science) - In this context, this means: - if the action (installing the chart) wasn't done, do it - if the action was already done, don't do anything - Ideally, when such an action fails, it can be retried safely (as opposed to, e.g., installing a new release each time we run it) - Other example: `kubectl apply -f some-file.yaml` .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Checking what we've done - Let's see if DockerCoins is working! .lab[ - Check the logs of the worker: ```bash stern worker ``` - Look at the resources that were created: ```bash kubectl get all ``` ] There are *many* issues to fix! .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Can't pull image - It looks like our images can't be found .lab[ - Use `kubectl describe` on any of the pods in error ] - We're trying to pull `rng:1.16.0` instead of `rng:v0.1`! - Where does that `1.16.0` tag come from? 
.debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Inspecting our template - Let's look at the `templates/` directory (and try to find the one generating the Deployment resource) .lab[ - Show the structure of the `helmcoins` chart that Helm generated: ```bash tree helmcoins ``` - Check the file `helmcoins/templates/deployment.yaml` - Look for the `image:` parameter ] *The image tag references `{{ .Chart.AppVersion }}`. Where does that come from?* .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## The `.Chart` variable - `.Chart` is a map corresponding to the values in `Chart.yaml` - Let's look for `AppVersion` there! .lab[ - Check the file `helmcoins/Chart.yaml` - Look for the `appVersion:` parameter ] (Yes, the case is different between the template and the Chart file.) .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Using the correct tags - If we change `AppVersion` to `v0.1`, it will change for *all* deployments (including redis) - Instead, let's change the *template* to use `{{ .Values.image.tag }}` (to match what we've specified in our values YAML files) .lab[ - Edit `helmcoins/templates/deployment.yaml` - Replace `{{ .Chart.AppVersion }}` with `{{ .Values.image.tag }}` ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Upgrading to use the new template - Technically, we just made a new version of the *chart* - To use the new template, we need to *upgrade* the release to use that chart .lab[ - Upgrade all components: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade $COMPONENT helmcoins done ``` - Check how our pods are doing: ```bash kubectl get pods ``` ] We should see all pods "Running". But ... not all of them are READY. .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Troubleshooting readiness - `hasher`, `rng`, `webui` should show up as `1/1 READY` - But `redis` and `worker` should show up as `0/1 READY` - Why? .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Troubleshooting pods - The easiest way to troubleshoot pods is to look at *events* - We can look at all the events on the cluster (with `kubectl get events`) - Or we can use `kubectl describe` on the objects that have problems (`kubectl describe` will retrieve the events related to the object) .lab[ - Check the events for the redis pods: ```bash kubectl describe pod -l app.kubernetes.io/name=redis ``` ] It's failing both its liveness and readiness probes! .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Healthchecks - The default chart defines healthchecks doing HTTP requests on port 80 - That won't work for redis and worker (redis is not HTTP, and not on port 80; worker doesn't even listen) -- - We could remove or comment out the healthchecks - We could also make them conditional - This sounds more interesting, let's do that! 
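As a preview, here is a sketch of what a value-driven condition around the probes could eventually look like. The `healthchecks.enabled` value is a hypothetical name (it is not part of the generated chart and would also need to be declared in `values.yaml`); the next slides start with a hard-coded `{{ if false }}`, which can later be replaced by a real condition like this one:

```yaml
        {{- if .Values.healthchecks.enabled }}
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
        {{- end }}
```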
.debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Conditionals - We need to enclose the healthcheck block with: `{{ if false }}` at the beginning (we can change the condition later) `{{ end }}` at the end .lab[ - Edit `helmcoins/templates/deployment.yaml` - Add `{{ if false }}` on the line before `livenessProbe` - Add `{{ end }}` after the `readinessProbe` section (see next slide for details) ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- This is what the new YAML should look like (added lines in yellow): ```yaml ports: - name: http containerPort: 80 protocol: TCP `{{ if false }}` livenessProbe: httpGet: path: / port: http readinessProbe: httpGet: path: / port: http `{{ end }}` resources: {{- toYaml .Values.resources | nindent 12 }} ``` .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Testing the new chart - We need to upgrade all the services again to use the new chart .lab[ - Upgrade all components: ```bash for COMPONENT in hasher redis rng webui worker; do helm upgrade $COMPONENT helmcoins done ``` - Check how our pods are doing: ```bash kubectl get pods ``` ] Everything should now be running! .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## What's next? - Is this working now? .lab[ - Let's check the logs of the worker: ```bash stern worker ``` ] This error might look familiar ... The worker can't resolve `redis`. Typically, that error means that the `redis` service doesn't exist. .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Checking services - What about the services created by our chart? .lab[ - Check the list of services: ```bash kubectl get services ``` ] They are named `COMPONENT-helmcoins` instead of just `COMPONENT`. We need to change that! .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Where do the service names come from? - Look at the YAML template used for the services - It should be using `{{ include "helmcoins.fullname" }}` - `include` indicates a *template block* defined somewhere else .lab[ - Find where that `fullname` thing is defined: ```bash grep define.*fullname helmcoins/templates/* ``` ] It should be in `_helpers.tpl`. We can look at the definition, but it's fairly complex ... .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Changing service names - Instead of that `{{ include }}` tag, let's use the name of the release - The name of the release is available as `{{ .Release.Name }}` .lab[ - Edit `helmcoins/templates/service.yaml` - Replace the service name with `{{ .Release.Name }}` - Upgrade all the releases to use the new chart - Confirm that the services now have the right names ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Is it working now? - If we look at the worker logs, it appears that the worker is still stuck - What could be happening? -- - The redis service is not on port 80! 
- Let's see how the port number is set - We need to look at both the *deployment* template and the *service* template .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Service template - In the service template, we have the following section: ```yaml ports: - port: {{ .Values.service.port }} targetPort: http protocol: TCP name: http ``` - `port` is the port on which the service is "listening" (i.e. to which our code needs to connect) - `targetPort` is the port on which the pods are listening - The `name` is not important (it's OK if it's `http` even for non-HTTP traffic) .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Setting the redis port - Let's add a `service.port` value to the redis release .lab[ - Edit `redis.yaml` to add: ```yaml service: port: 6379 ``` - Apply the new values file: ```bash helm upgrade redis helmcoins --values=redis.yaml ``` ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Deployment template - If we look at the deployment template, we see this section: ```yaml ports: - name: http containerPort: 80 protocol: TCP ``` - The container port is hard-coded to 80 - We'll change it to use the port number specified in the values .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Changing the deployment template .lab[ - Edit `helmcoins/templates/deployment.yaml` - The line with `containerPort` should be: ```yaml containerPort: {{ .Values.service.port }} ``` ] .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Apply changes - Re-run the for loop to execute `helm upgrade` one more time - Check the worker logs - This time, it should be working! .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- ## Extra steps - We don't need to create a service for the worker - We can put the whole service block in a conditional (this will require additional changes in other files referencing the service) - We can set the webui to be a NodePort service - We can change the number of workers with `replicaCount` - And much more! ??? :EN:- Writing better Helm charts for app components :FR:- Écriture de *charts* composant par composant .debug[[k8s/helm-create-better-chart.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-create-better-chart.md)] --- class: pic .interstitial[] --- name: toc-charts-using-other-charts class: title Charts using other charts .nav[ [Previous part](#toc-creating-better-helm-charts) | [Back to table of contents](#toc-part-4) | [Next part](#toc-helm-and-invalid-values) ] .debug[(automatically generated title slide)] --- # Charts using other charts - Helm charts can have *dependencies* on other charts - These dependencies will help us to share or reuse components (so that we write and maintain less manifests, less templates, less code!) - As an example, we will use a community chart for Redis - This will help people who write charts, and people who use them - ... And potentially remove a lot of code! 
✌️ .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Redis in DockerCoins - In the DockerCoins demo app, we have 5 components: - 2 internal webservices - 1 worker - 1 public web UI - 1 Redis data store - Every component is running some custom code, except Redis - Every component is using a custom image, except Redis (which is using the official `redis` image) - Could we use a standard chart for Redis? - Yes! Dependencies to the rescue! .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Adding our dependency - First, we will add the dependency to the `Chart.yaml` file - Then, we will ask Helm to download that dependency - We will also *lock* the dependency (lock it to a specific version, to ensure reproducibility) .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Declaring the dependency - First, let's edit `Chart.yaml` .lab[ - In `Chart.yaml`, fill the `dependencies` section: ```yaml dependencies: - name: redis version: 11.0.5 repository: https://charts.bitnami.com/bitnami condition: redis.enabled ``` ] Where do those `repository` and `version` come from? We're assuming here that we did our research, or that our resident Helm expert advised us to use Bitnami's Redis chart. .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Conditions - The `condition` field gives us a way to enable/disable the dependency: ```yaml condition: redis.enabled ``` - Here, we can disable Redis with the Helm flag `--set redis.enabled=false` (or set that value in a `values.yaml` file) - Of course, this is mostly useful for *optional* dependencies (otherwise, the app ends up being broken since it'll miss a component) .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Lock & Load! - After adding the dependency, we ask Helm to pin and download it .lab[ - Ask Helm: ```bash helm dependency update ``` (Or `helm dep up`) ] - This will create `Chart.lock` and fetch the dependency .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## What's `Chart.lock`? - This is a common pattern with dependencies (see also: `Gemfile.lock`, `package-lock.json`, and many others) - This lets us define loose dependencies in `Chart.yaml` (e.g. "version 11.whatever, but below 12") - But have the exact version used in `Chart.lock` - This ensures reproducible deployments - `Chart.lock` can (should!) be added to our source tree - `Chart.lock` can (should!) regularly be updated .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Loose dependencies - Here is an example of a loose version requirement: ```yaml dependencies: - name: redis version: ">=11, <12" repository: https://charts.bitnami.com/bitnami ``` - This makes sure that we have the most recent version in the 11.x train - ...
But without upgrading to version 12.x (because it might be incompatible) .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## `build` vs `update` - Helm actually offers two commands to manage dependencies: `helm dependency build` = fetch dependencies listed in `Chart.lock` `helm dependency update` = update `Chart.lock` (and run `build`) - When the dependency gets updated, we can/should: - `helm dep up` (update `Chart.lock` and fetch new chart) - test! - if everything is fine, `git add Chart.lock` and commit .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Where are my dependencies? - Dependencies are downloaded to the `charts/` subdirectory - When they're downloaded, they stay in compressed format (`.tgz`) - Should we commit them to our code repository? - Pros: - more resilient to internet/mirror failures/decommissioning - Cons: - can add a lot of weight to the repo if charts are big or change often - this can be solved by extra tools like git-lfs .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Dependency tuning - DockerCoins expects the `redis` Service to be named `redis` - Our Redis chart uses a different Service name by default - Service name is `{{ template "redis.fullname" . }}-master` - `redis.fullname` looks like this: ``` {{- define "redis.fullname" -}} {{- if .Values.fullnameOverride -}} {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} {{- else -}} [...] {{- end }} {{- end }} ``` - How do we fix this? .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Setting dependency variables - If we set `fullnameOverride` to `redis`: - the `{{ template ... }}` block will output `redis` - the Service name will be `redis-master` - A parent chart can set values for its dependencies - For example, in the parent's `values.yaml`: ```yaml redis: # Name of the dependency fullnameOverride: redis # Value passed to redis cluster: # Other values passed to redis enabled: false ``` - Users can also set variables with `--set=` or with `--values=` .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- class: extra-details ## Passing templates - We can even pass templates like `{{ include "template.name" }}`, but beware: - they need to be evaluated with the `tpl` function, on the child side - they are evaluated in the context of the child, with no access to parent variables .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Getting rid of the `-master` - Even if we set that `fullnameOverride`, the Service name will be `redis-master` - To remove the `-master` suffix, we need to edit the chart itself - To edit the Redis chart, we need to *embed* it in our own chart - We need to: - decompress the chart - adjust `Chart.yaml` accordingly .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Embedding a dependency .lab[ - Decompress the chart: ```bash cd charts tar zxf redis-*.tgz cd ..
``` - Edit `Chart.yaml` and update the `dependencies` section: ```yaml dependencies: - name: redis version: '*' # No need to constrain the version for local files ``` - Run `helm dep update` ] .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Updating the dependency - Now we can edit the Service name (it should be in `charts/redis/templates/redis-master-svc.yaml`) - Then try to deploy the whole chart! .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- ## Embedding a dependency multiple times - What if we need multiple copies of the same subchart? (for instance, if we need two completely different Redis servers) - We can declare a dependency multiple times, and specify an `alias`: ```yaml dependencies: - name: redis version: '*' alias: querycache - name: redis version: '*' alias: celeryqueue ``` - `.Chart.Name` will be set to the `alias` .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- class: extra-details ## Determining if we're in a subchart - `.Chart.IsRoot` indicates if we're in the top-level chart or in a sub-chart - Useful in charts that are designed to be used standalone or as dependencies - Example: generic chart - when used standalone (`.Chart.IsRoot` is `true`), use `.Release.Name` - when used as a subchart e.g. with multiple aliases, use `.Chart.Name` .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- class: extra-details ## Compatibility with Helm 2 - Chart `apiVersion: v1` is the only version supported by Helm 2 - Chart v1 is also supported by Helm 3 - Use v1 if you want to be compatible with Helm 2 - Instead of `Chart.yaml`, dependencies are defined in `requirements.yaml` (and we should commit `requirements.lock` instead of `Chart.lock`) ??? :EN:- Depending on other charts :EN:- Charts within charts :FR:- Dépendances entre charts :FR:- Un chart peut en cacher un autre .debug[[k8s/helm-dependencies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-dependencies.md)] --- class: pic .interstitial[] --- name: toc-helm-and-invalid-values class: title Helm and invalid values .nav[ [Previous part](#toc-charts-using-other-charts) | [Back to table of contents](#toc-part-4) | [Next part](#toc-helm-secrets) ] .debug[(automatically generated title slide)] --- # Helm and invalid values - A lot of Helm charts let us specify an image tag like this: ```bash helm install ... --set image.tag=v1.0 ``` - What happens if we make a small mistake, like this: ```bash helm install ... --set imagetag=v1.0 ``` - Or even, like this: ```bash helm install ...
--set image=v1.0 ``` 🤔 .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## Making mistakes - In the first case: - we set `imagetag=v1.0` instead of `image.tag=v1.0` - Helm will ignore that value (if it's not used anywhere in templates) - the chart is deployed with the default value instead - In the second case: - we set `image=v1.0` instead of `image.tag=v1.0` - `image` will be a string instead of an object - Helm will *probably* fail when trying to evaluate `image.tag` .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## Preventing mistakes - To prevent the first mistake, we need to tell Helm: *"let me know if any additional (unknown) value was set!"* - To prevent the second mistake, we need to tell Helm: *"`image` should be an object, and `image.tag` should be a string!"* - We can do this with *values schema validation* .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## Helm values schema validation - We can write a spec representing the possible values accepted by the chart - Helm will check the validity of the values before trying to install/upgrade - If it finds problems, it will stop immediately - The spec uses [JSON Schema](https://json-schema.org/): *JSON Schema is a vocabulary that allows you to annotate and validate JSON documents.* - JSON Schema is designed for JSON, but can easily work with YAML too (or any language with `map|dict|associativearray` and `list|array|sequence|tuple`) .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## In practice - We need to put the JSON Schema spec in a file called `values.schema.json` (at the root of our chart; right next to `values.yaml` etc.) - The file is optional - We don't need to register or declare it in `Chart.yaml` or anywhere - Let's write a schema that will verify that ... - `image.repository` is an official image (string without slashes or dots) - `image.pullPolicy` can only be `Always`, `Never`, `IfNotPresent` .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## `values.schema.json` ```json { "$schema": "http://json-schema.org/schema#", "type": "object", "properties": { "image": { "type": "object", "properties": { "repository": { "type": "string", "pattern": "^[a-z0-9-_]+$" }, "pullPolicy": { "type": "string", "pattern": "^(Always|Never|IfNotPresent)$" } } } } } ``` .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## Testing our schema - Let's try to install a couple releases with that schema! .lab[ - Try an invalid `pullPolicy`: ```bash helm install broken --set image.pullPolicy=ShallNotPass ``` - Try an invalid value: ```bash helm install should-break --set ImAgeTAg=toto ``` ] - The first one fails, but the second one still passes ... - Why? 
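By the way, we don't even need a cluster to exercise the schema: `helm lint` and `helm template` validate values against `values.schema.json` too. Here is a minimal sketch, assuming the chart lives in a local directory called `./dockercoins` (adjust the path to wherever your chart actually is):

```bash
# Schema validation also runs for helm lint and helm template,
# so we can experiment without installing anything.
# (./dockercoins is an assumed local chart directory.)
helm lint ./dockercoins --set image.pullPolicy=ShallNotPass
helm template ./dockercoins --set image.pullPolicy=ShallNotPass
```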
.debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## Bailing out on unknown properties - We told Helm what properties (values) were valid - We didn't say what to do about additional (unknown) properties! - We can fix that with `"additionalProperties": false` .lab[ - Edit `values.schema.json` to add `"additionalProperties": false` ```json { "$schema": "http://json-schema.org/schema#", "type": "object", "additionalProperties": false, "properties": { ... ``` ] .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- ## Testing with unknown properties .lab[ - Try to pass an extra property: ```bash helm install should-break --set ImAgeTAg=toto ``` - Try to pass an extra nested property: ```bash helm install does-it-work --set image.hello=world ``` ] The first command should break. The second will not. `"additionalProperties": false` needs to be specified at each level. ??? :EN:- Helm schema validation :FR:- Validation de schema Helm .debug[[k8s/helm-values-schema-validation.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-values-schema-validation.md)] --- class: pic .interstitial[] --- name: toc-helm-secrets class: title Helm secrets .nav[ [Previous part](#toc-helm-and-invalid-values) | [Back to table of contents](#toc-part-4) | [Next part](#toc-ytt) ] .debug[(automatically generated title slide)] --- # Helm secrets - Helm can do *rollbacks*: - to previously installed charts - to previous sets of values - How and where does it store the data needed to do that? - Let's investigate! .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Adding the repo - If you haven't done it before, you need to add the repo for that chart .lab[ - Add the repo that holds the chart for the OWASP Juice Shop: ```bash helm repo add juice https://charts.securecodebox.io ``` ] .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## We need a release - We need to install something with Helm - Let's use the `juice/juice-shop` chart as an example .lab[ - Install a release called `orange` with the chart `juice/juice-shop`: ```bash helm upgrade orange juice/juice-shop --install ``` - Let's upgrade that release, and change a value: ```bash helm upgrade orange juice/juice-shop --set ingress.enabled=true ``` ] .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Release history - Helm stores successive revisions of each release .lab[ - View the history for that release: ```bash helm history orange ``` ] Where does that come from? .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Investigate - Possible options: - local filesystem (no, because history is visible from other machines) - persistent volumes (no, Helm works even without them) - ConfigMaps, Secrets? .lab[ - Look for ConfigMaps and Secrets: ```bash kubectl get configmaps,secrets ``` ] -- We should see a number of secrets with TYPE `helm.sh/release.v1`.
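To cut through the noise, we can filter for just the Helm release data. A quick sketch, assuming a recent Helm 3 (which labels its release secrets with `owner=helm` and gives them the type `helm.sh/release.v1`):

```bash
# List only the secrets backing Helm releases; their names should look
# like sh.helm.release.v1.<release>.v<revision> (e.g. ...orange.v2).
kubectl get secrets --field-selector type=helm.sh/release.v1
# Alternatively, filter by the label that Helm sets on them:
kubectl get secrets --selector owner=helm
```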
.debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Unpacking a secret - Let's find out what is in these Helm secrets .lab[ - Examine the secret corresponding to the second release of `orange`: ```bash kubectl describe secret sh.helm.release.v1.orange.v2 ``` (`v1` is the secret format; `v2` means revision 2 of the `orange` release) ] There is a key named `release`. .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Unpacking the release data - Let's see what's in this `release` thing! .lab[ - Dump the secret: ```bash kubectl get secret sh.helm.release.v1.orange.v2 \ -o go-template='{{ .data.release }}' ``` ] Secrets are encoded in base64. We need to decode that! .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Decoding base64 - We can pipe the output through `base64 -d` or use go-template's `base64decode` .lab[ - Decode the secret: ```bash kubectl get secret sh.helm.release.v1.orange.v2 \ -o go-template='{{ .data.release | base64decode }}' ``` ] -- ... Wait, this *still* looks like base64. What's going on? -- Let's try one more round of decoding! .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Decoding harder - Just add one more base64 decode filter .lab[ - Decode it twice: ```bash kubectl get secret sh.helm.release.v1.orange.v2 \ -o go-template='{{ .data.release | base64decode | base64decode }}' ``` ] -- ... OK, that was *a lot* of binary data. What should we do with it? .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Guessing data type - We could use `file` to figure out the data type .lab[ - Pipe the decoded release through `file -`: ```bash kubectl get secret sh.helm.release.v1.orange.v2 \ -o go-template='{{ .data.release | base64decode | base64decode }}' \ | file - ``` ] -- Gzipped data! It can be decoded with `gunzip -c`. .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Uncompressing the data - Let's uncompress the data and save it to a file .lab[ - Rerun the previous command, but with `| gunzip -c > release-info` : ```bash kubectl get secret sh.helm.release.v1.orange.v2 \ -o go-template='{{ .data.release | base64decode | base64decode }}' \ | gunzip -c > release-info ``` - Look at `release-info`: ```bash cat release-info ``` ] -- It's a bundle of ~~YAML~~ JSON. .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Looking at the JSON If we inspect that JSON (e.g. with `jq keys release-info`), we see: - `chart` (contains the entire chart used for that release) - `config` (contains the values that we've set) - `info` (date of deployment, status messages) - `manifest` (YAML generated from the templates) - `name` (name of the release, so `orange`) - `namespace` (namespace where we deployed the release) - `version` (revision number within that release; starts at 1) The chart is in a structured format, but it's entirely captured in this JSON. 
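To poke at those fields, a few `jq` one-liners come in handy. A small sketch, assuming the `release-info` file extracted on the previous slide (field names as listed above):

```bash
# The values that were set for this revision (our --set ingress.enabled=true)
jq '.config' release-info
# Deployment dates and status messages
jq '.info' release-info
# The rendered manifests (a YAML string embedded in the JSON)
jq -r '.manifest' release-info
```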
.debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- ## Conclusions - Helm stores each release information in a Secret in the namespace of the release - The secret is JSON object (gzipped and encoded in base64) - It contains the manifests generated for that release - ... And everything needed to rebuild these manifests (including the full source of the chart, and the values used) - This allows arbitrary rollbacks, as well as tweaking values even without having access to the source of the chart (or the chart repo) used for deployment ??? :EN:- Deep dive into Helm internals :FR:- Fonctionnement interne de Helm .debug[[k8s/helm-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/helm-secrets.md)] --- class: pic .interstitial[] --- name: toc-ytt class: title YTT .nav[ [Previous part](#toc-helm-secrets) | [Back to table of contents](#toc-part-4) | [Next part](#toc-extending-the-kubernetes-api) ] .debug[(automatically generated title slide)] --- # YTT - YAML Templating Tool - Part of [Carvel] (a set of tools for Kubernetes application building, configuration, and deployment) - Can be used for any YAML (Kubernetes, Compose, CI pipelines...) [Carvel]: https://carvel.dev/ .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Features - Manipulate data structures, not text (≠ Helm) - Deterministic, hermetic execution - Define variables, blocks, functions - Write code in Starlark (dialect of Python) - Define and override values (Helm-style) - Patch resources arbitrarily (Kustomize-style) .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Getting started - Install `ytt` ([binary download][download]) - Start with one (or multiple) Kubernetes YAML files *(without comments; no `#` allowed at this point!)* - `ytt -f one.yaml -f two.yaml | kubectl apply -f-` - `ytt -f. | kubectl apply -f-` [download]: https://github.com/vmware-tanzu/carvel-ytt/releases/latest .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## No comments?!? - Replace `#` with `#!` - `#@` is used by ytt - It's a kind of template tag, for instance: ```yaml #! This is a comment #@ a = 42 #@ b = "*" a: #@ a b: #@ b operation: multiply result: #@ a*b ``` - `#@` at the beginning of a line = instruction - `#@` somewhere else = value .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Building strings - Concatenation: ```yaml #@ repository = "dockercoins" #@ tag = "v0.1" containers: - name: worker image: #@ repository + "/worker:" + tag ``` - Formatting: ```yaml #@ repository = "dockercoins" #@ tag = "v0.1" containers: - name: worker image: #@ "{}/worker:{}".format(repository, tag) ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Defining functions - Reusable functions can be written in Starlark (=Python) - Blocks (`def`, `if`, `for`...) 
must be terminated with `#@ end` - Example: ```yaml #@ def image(component, repository="dockercoins", tag="v0.1"): #@ return "{}/{}:{}".format(repository, component, tag) #@ end containers: - name: worker image: #@ image("worker") - name: hasher image: #@ image("hasher") ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Structured data - Functions can return complex types - Example: defining a common set of labels ```yaml #@ name = "worker" #@ def labels(component): #@ return { #@ "app": component, #@ "container.training/generated-by": "ytt", #@ } #@ end kind: Pod apiVersion: v1 metadata: name: #@ name labels: #@ labels(name) ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## YAML functions - Function body can also be straight YAML: ```yaml #@ name = "worker" #@ def labels(component): app: #@ component container.training/generated-by: ytt #@ end kind: Pod apiVersion: v1 metadata: name: #@ name labels: #@ labels(name) ``` - The return type of the function is then a [YAML fragment][fragment] [fragment]: https://carvel.dev/ytt/docs/v0.41.0/ .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## More YAML functions - We can load library functions: ```yaml #@ load("@ytt:sha256", "sha256") ``` - This is (sort of) equivalent fo `from ytt.sha256 import sha256` - Functions can contain a mix of code and YAML fragment: ```yaml #@ load("@ytt:sha256", "sha256") #@ def annotations(): #@ author = "Jérôme Petazzoni" author: #@ author author_hash: #@ sha256.sum(author)[:8] #@ end annotations: #@ annotations() ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Data values - We can define a *schema* in a separate file: ```yaml #@data/values-schema --- #! there must be a "---" here! repository: dockercoins tag: v0.1 ``` - This defines the data values (=customizable parameters), as well as their *types* and *default values* - Technically, `#@data/values-schema` is an annotation, and it applies to a YAML document; so the following element must be a YAML document - This is conceptually similar to Helm's *values* file
(but with type enforcement as a bonus) .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Using data values - Requires loading `@ytt:data` - Values are then available in `data.values` - Example: ```yaml #@ load("@ytt:data", "data") #@ def image(component): #@ return "{}/{}:{}".format(data.values.repository, component, data.values.tag) #@ end #@ name = "worker" containers: - name: #@ name image: #@ image(name) ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Overriding data values - There are many ways to set and override data values: - plain YAML files - data value overlays - environment variables - command-line flags - Precedence of the different methods is defined in the [docs][data-values-merge-order] [data-values-merge-order]: https://carvel.dev/ytt/docs/v0.41.0/ytt-data-values/#data-values-merge-order .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Values in plain YAML files - Content of `values.yaml`: ```yaml tag: latest ``` - Values get merged with `--data-values-file`: ```bash ytt -f config/ --data-values-file values.yaml ``` - Multiple files can be specified - These files can also be URLs! .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Data value overlay - Content of `values.yaml`: ```yaml #@data/values --- #! must have --- here tag: latest ``` - Values get merged by being specified like "normal" files: ```bash ytt -f config/ -f values.yaml ``` - Multiple files can be specified .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Set a value with a flag - Set a string value: ```bash ytt -f config/ --data-value tag=latest ``` - Set a YAML value (useful to parse it as e.g. integer, boolean...): ```bash ytt -f config/ --data-value-yaml replicas=10 ``` - Read a string value from a file: ```bash ytt -f config/ --data-value-file ca_cert=cert.pem ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Set values from environment variables - Set environment variables with a prefix: ```bash export VAL_tag=latest export VAL_repository=ghcr.io/dockercoins ``` - Use the variables as strings: ```bash ytt -f config/ --data-values-env VAL ``` - Or parse them as YAML: ```bash ytt -f config/ --data-values-env-yaml VAL ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Lines starting with `#@` - This generates an empty document: ```yaml #@ def hello(): hello: world #@ end #@ hello() ``` - Do this instead: ```yaml #@ def hello(): hello: world #@ end --- #@ hello() ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Generating multiple documents, take 1 - This won't work: ```yaml #@ def app(): kind: Deployment apiVersion: apps/v1 --- #! separate from next document kind: Service apiVersion: v1 #@ end --- #@ app() ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Generating multiple documents, take 2 - This won't work either: ```yaml #@ def app(): --- #! the initial separator indicates "this is a Document Set" kind: Deployment apiVersion: apps/v1 --- #! 
separate from next document kind: Service apiVersion: v1 #@ end --- #@ app() ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Generating multiple documents, take 3 - We must use the `template` module: ```yaml #@ load("@ytt:template", "template") #@ def app(): --- #! the initial separator indicates "this is a Document Set" kind: Deployment apiVersion: apps/v1 --- #! separate from next document kind: Service apiVersion: v1 #@ end --- #@ template.replace(app()) ``` - `template.replace(...)` is the only way (?) to replace one element with many .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Libraries - A reusable ytt configuration can be transformed into a library - Put it in a subdirectory named `_ytt_lib/whatever`, then: ```yaml #@ load("@ytt:library", "library") #@ load("@ytt:template", "template") #@ whatever = library.get("whatever") #@ my_values = {"tag": "latest", "registry": "..."} #@ output = whatever.with_data_values(my_values).eval() --- #@ template.replace(output) ``` - The `with_data_values()` step is optional, but useful to "configure" the library - Note the whole combo: ```yaml template.replace(library.get("...").with_data_values(...).eval()) ``` .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Overlays - Powerful, but complex, but powerful! 💥 - Define transformations that are applied after generating the whole document set - General idea: - select YAML nodes to be transformed with an `#@overlay/match` decorator - write a YAML snippet with the modifications to be applied
(a bit like a strategic merge patch) .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Example ```yaml #@ load("@ytt:overlay", "overlay") #@ selector = {"kind": "Deployment", "metadata": {"name": "worker"}} #@overlay/match by=overlay.subset(selector) --- spec: replicas: 10 ``` - By default, `#@overlay/match` must find *exactly* one match (that can be changed by specifying `expects=...`, `missing_ok=True`... see [docs][docs-ytt-overlaymatch]) - By default, the specified fields (here, `spec.replicas`) must exist (that can also be changed by annotating the optional fields) [docs-ytt-overlaymatch]: https://carvel.dev/ytt/docs/v0.41.0/lang-ref-ytt-overlay/#overlaymatch .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Matching using a YAML document ```yaml #@ load("@ytt:overlay", "overlay") #@ def match(): kind: Deployment metadata: name: worker #@ end #@overlay/match by=overlay.subset(match()) --- spec: replicas: 10 ``` - This is equivalent to the subset match of the previous slide - It will find YAML nodes having all the listed fields .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Removing a field ```yaml #@ load("@ytt:overlay", "overlay") #@ def match(): kind: Deployment metadata: name: worker #@ end #@overlay/match by=overlay.subset(match()) --- spec: #@overlay/remove replicas: ``` - This would remove the `replicas:` field from a specific Deployment spec - This could be used e.g. when enabling autoscaling .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Selecting multiple nodes ```yaml #@ load("@ytt:overlay", "overlay") #@ def match(): kind: Deployment #@ end #@overlay/match by=overlay.subset(match()), expects="1+" --- spec: #@overlay/remove replicas: ``` - This would match all Deployments
(assuming that *at least one* exists) - It would remove the `replicas:` field from their spec
(the field must exist!) .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Adding a field ```yaml #@ load("@ytt:overlay", "overlay") #@overlay/match by=overlay.all, expects="1+" --- metadata: #@overlay/match missing_ok=True annotations: #@overlay/match expects=0 rainbow: 🌈 ``` - `#@overlay/match missing_ok=True`
*will match whether our resources already have annotations or not* - `#@overlay/match expects=0`
*will only match if the `rainbow` annotation doesn't exist*
*(to make sure that we don't override/replace an existing annotation)* .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Overlays vs data values - The documentation has a [detailed discussion][data-values-vs-overlays] about this question - In short: - values = for parameters that are exposed to the user - overlays = for arbitrary extra modifications - Values are easier to use (use them when possible!) - Fallback to overlays when values don't expose what you need (keeping in mind that overlays are harder to write/understand/maintain) [data-values-vs-overlays]: https://carvel.dev/ytt/docs/v0.41.0/data-values-vs-overlays/ .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Gotchas - Reminder: put your `#@` at the right place! ```yaml #! This will generate "hello, world!" --- #@ "{}, {}!".format("hello", "world") ``` ```yaml #! But this will generate an empty document --- #@ "{}, {}!".format("hello", "world") ``` - Also, don't use YAML anchors (`*foo` and `&foo`) - They don't mix well with ytt - Remember to use `template.render(...)` when generating multiple nodes (or to update lists or arrays without replacing them entirely) .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- ## Next steps with ytt - Read this documentation page about [injecting secrets][secrets] - Check the [FAQ], it gives some insights about what's possible with ytt - Exercise idea: write an overlay that will find all ConfigMaps mounted in Pods... ...and annotate the Pod with a hash of the ConfigMap [FAQ]: https://carvel.dev/ytt/docs/v0.41.0/faq/ [secrets]: https://carvel.dev/ytt/docs/v0.41.0/injecting-secrets/ ??? :EN:- YTT :FR:- YTT .debug[[k8s/ytt.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ytt.md)] --- class: pic .interstitial[] --- name: toc-extending-the-kubernetes-api class: title Extending the Kubernetes API .nav[ [Previous part](#toc-ytt) | [Back to table of contents](#toc-part-5) | [Next part](#toc-operators) ] .debug[(automatically generated title slide)] --- # Extending the Kubernetes API There are multiple ways to extend the Kubernetes API. We are going to cover: - Controllers - Dynamic Admission Webhooks - Custom Resource Definitions (CRDs) - The Aggregation Layer But first, let's re(re)visit the API server ... .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Revisiting the API server - The Kubernetes API server is a central point of the control plane - Everything connects to the API server: - users (that's us, but also automation like CI/CD) - kubelets - network components (e.g. `kube-proxy`, pod network, NPC) - controllers; lots of controllers .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Some controllers - `kube-controller-manager` runs built-on controllers (watching Deployments, Nodes, ReplicaSets, and much more) - `kube-scheduler` runs the scheduler (it's conceptually not different from another controller) - `cloud-controller-manager` takes care of "cloud stuff" (e.g. provisioning load balancers, persistent volumes...) - Some components mentioned above are also controllers (e.g. 
Network Policy Controller) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## More controllers - Cloud resources can also be managed by additional controllers (e.g. the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller)) - Leveraging Ingress resources requires an Ingress Controller (many options available here; we can even install multiple ones!) - Many add-ons (including CRDs and operators) have controllers as well 🤔 *What's even a controller ?!?* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## What's a controller? According to the [documentation](https://kubernetes.io/docs/concepts/architecture/controller/): *Controllers are **control loops** that
**watch** the state of your cluster,
then make or request changes where needed.* *Each controller tries to move the current cluster state closer to the desired state.* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## What controllers do - Watch resources - Make changes: - purely at the API level (e.g. Deployment, ReplicaSet controllers) - and/or configure resources (e.g. `kube-proxy`) - and/or provision resources (e.g. load balancer controller) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Extending Kubernetes with controllers - Random example: - watch resources like Deployments, Services ... - read annotations to configure monitoring - Technically, this is not extending the API (but it can still be very useful!) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Other ways to extend Kubernetes - Prevent or alter API requests before resources are committed to storage: *Admission Control* - Create new resource types leveraging Kubernetes storage facilities: *Custom Resource Definitions* - Create new resource types with different storage or different semantics: *Aggregation Layer* - Spoiler alert: often, we will combine multiple techniques (and involve controllers as well!) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Admission controllers - Admission controllers can vet or transform API requests - The diagram on the next slide shows the path of an API request (courtesy of Banzai Cloud) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- class: pic  .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Types of admission controllers - *Validating* admission controllers can accept/reject the API call - *Mutating* admission controllers can modify the API request payload - Both types can also trigger additional actions (e.g. 
automatically create a Namespace if it doesn't exist) - There are a number of built-in admission controllers (see [documentation](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do) for a list) - We can also dynamically define and register our own .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- class: extra-details ## Some built-in admission controllers - ServiceAccount: automatically adds a ServiceAccount to Pods that don't explicitly specify one - LimitRanger: applies resource constraints specified by LimitRange objects when Pods are created - NamespaceAutoProvision: automatically creates namespaces when an object is created in a non-existent namespace *Note: #1 and #2 are enabled by default; #3 is not.* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Dynamic Admission Control - We can set up *admission webhooks* to extend the behavior of the API server - The API server will submit incoming API requests to these webhooks - These webhooks can be *validating* or *mutating* - Webhooks can be set up dynamically (without restarting the API server) - To setup a dynamic admission webhook, we create a special resource: a `ValidatingWebhookConfiguration` or a `MutatingWebhookConfiguration` - These resources are created and managed like other resources (i.e. `kubectl create`, `kubectl get`...) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Webhook Configuration - A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains: - the address of the webhook - the authentication information to use with the webhook - a list of rules - The rules indicate for which objects and actions the webhook is triggered (to avoid e.g. triggering webhooks when setting up webhooks) - The webhook server can be hosted in or out of the cluster .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Dynamic Admission Examples - Policy control ([Kyverno](https://kyverno.io/), [Open Policy Agent](https://www.openpolicyagent.org/docs/latest/)) - Sidecar injection (Used by some service meshes) - Type validation (More on this later, in the CRD section) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Kubernetes API types - Almost everything in Kubernetes is materialized by a resource - Resources have a type (or "kind") (similar to strongly typed languages) - We can see existing types with `kubectl api-resources` - We can list resources of a given type with `kubectl get
<type>` .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Creating new types - We can create new types with Custom Resource Definitions (CRDs) - CRDs are created dynamically (without recompiling or restarting the API server) - CRDs themselves are resources: - we can create a new type with `kubectl create` and some YAML - we can see all our custom types with `kubectl get crds` - After we create a CRD, the new type works just like built-in types .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Examples - Representing composite resources (e.g. clusters like databases, message queues ...) - Representing external resources (e.g. virtual machines, object store buckets, domain names ...) - Representing configuration for controllers and operators (e.g. custom Ingress resources, certificate issuers, backups ...) - Alternate representations of other objects; services and service instances (e.g. encrypted secret, git endpoints ...) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## The aggregation layer - We can delegate entire parts of the Kubernetes API to external servers - This is done by creating APIService resources (check them with `kubectl get apiservices`!) - The APIService resource maps a type (kind) and version to an external service - All requests concerning that type are sent (proxied) to the external service - This allows us to have resources like CRDs, but that aren't stored in etcd - Example: `metrics-server` .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Why? - Using a CRD for live metrics would be extremely inefficient (etcd **is not** a metrics store; write performance is way too slow) - Instead, `metrics-server`: - collects metrics from kubelets - stores them in memory - exposes them as PodMetrics and NodeMetrics (in API group metrics.k8s.io) - is registered as an APIService .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Drawbacks - Requires a server - ... that implements a non-trivial API (aka the Kubernetes API semantics) - If we need REST semantics, CRDs are probably way simpler - *Sometimes* synchronizing external state with CRDs might do the trick (unless we want the external state to be our single source of truth) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Documentation - [Custom Resource Definitions: when to use them](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) - [Custom Resource Definitions: how to use them](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) - [Built-in Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) - [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) - [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) ???
:EN:- Overview of Kubernetes API extensions :FR:- Comment étendre l'API Kubernetes .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- class: pic .interstitial[] --- name: toc-operators class: title Operators .nav[ [Previous part](#toc-extending-the-kubernetes-api) | [Back to table of contents](#toc-part-5) | [Next part](#toc-sealed-secrets) ] .debug[(automatically generated title slide)] --- # Operators The Kubernetes documentation describes the [Operator pattern] as follows: *Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop.* Another good definition from [CoreOS](https://coreos.com/blog/introducing-operators.html): *An operator represents **human operational knowledge in software,**
to reliably manage an application.* There are many different use cases spanning different domains; but the general idea is: *Manage some resources (that reside inside or outside the cluster),
using Kubernetes manifests and tooling.* [Operator pattern]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/ .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## Some use cases - Managing external resources ([AWS], [GCP], [KubeVirt]...) - Setting up database replication or distributed systems
(Cassandra, Consul, CouchDB, ElasticSearch, etcd, Kafka, MongoDB, MySQL, PostgreSQL, RabbitMQ, Redis, ZooKeeper...) - Running and configuring CI/CD
([ArgoCD], [Flux]), backups ([Velero]), policies ([Gatekeeper], [Kyverno])... - Automating management of certificates and secrets
([cert-manager]), secrets ([External Secrets Operator], [Sealed Secrets]...) - Configuration of cluster components ([Istio], [Prometheus]) - etc. [ArgoCD]: https://argoproj.github.io/cd/ [AWS]: https://aws-controllers-k8s.github.io/community/docs/community/services/ [cert-manager]: https://cert-manager.io/ [External Secrets Operator]: https://external-secrets.io/ [Flux]: https://fluxcd.io/ [Gatekeeper]: https://open-policy-agent.github.io/gatekeeper/website/docs/ [GCP]: https://github.com/paulczar/gcp-cloud-compute-operator [Istio]: https://istio.io/latest/docs/setup/install/operator/ [KubeVirt]: https://kubevirt.io/ [Kyverno]: https://kyverno.io/ [Prometheus]: https://prometheus-operator.dev/ [Sealed Secrets]: https://github.com/bitnami-labs/sealed-secrets [Velero]: https://velero.io/ .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## What are they made from? - Operators combine two things: - Custom Resource Definitions - controller code watching the corresponding resources and acting upon them - A given operator can define one or multiple CRDs - The controller code (control loop) typically runs within the cluster (running as a Deployment with 1 replica is a common scenario) - But it could also run elsewhere (nothing mandates that the code run on the cluster, as long as it has API access) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## Operators for e.g. replicated databases - Kubernetes gives us Deployments, StatefulSets, Services ... - These mechanisms give us building blocks to deploy applications - They work great for services that are made of *N* identical containers (like stateless ones) - They also work great for some stateful applications like Consul, etcd ... (with the help of highly persistent volumes) - They're not enough for complex services: - where different containers have different roles - where extra steps have to be taken when scaling or replacing containers .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## How operators work - An operator creates one or more CRDs (i.e., it creates new "Kinds" of resources on our cluster) - The operator also runs a *controller* that will watch its resources - Each time we create/update/delete a resource, the controller is notified (we could write our own cheap controller with `kubectl get --watch`) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## Operators are not magic - Look at this ElasticSearch resource definition: [k8s/eck-elasticsearch.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/eck-elasticsearch.yaml) - What should happen if we flip the TLS flag? Twice? - What should happen if we add another group of nodes? - What if we want different images or parameters for the different nodes? *Operators can be very powerful.
But we need to know exactly the scenarios that they can handle.* ??? :EN:- Kubernetes operators :FR:- Les opérateurs .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- class: pic .interstitial[] --- name: toc-sealed-secrets class: title Sealed Secrets .nav[ [Previous part](#toc-operators) | [Back to table of contents](#toc-part-5) | [Next part](#toc-custom-resource-definitions) ] .debug[(automatically generated title slide)] --- # Sealed Secrets - Kubernetes provides the "Secret" resource to store credentials, keys, passwords ... - Secrets can be protected with RBAC (e.g. "you can write secrets, but only the app's service account can read them") - [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) is an operator that lets us store secrets in code repositories - It uses asymetric cryptography: - anyone can *encrypt* a secret - only the cluster can *decrypt* a secret .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Principle - The Sealed Secrets operator uses a *public* and a *private* key - The public key is available publicly (duh!) - We use the public key to encrypt secrets into a SealedSecret resource - the SealedSecret resource can be stored in a code repo (even a public one) - The SealedSecret resource is `kubectl apply`'d to the cluster - The Sealed Secrets controller decrypts the SealedSecret with the private key (this creates a classic Secret resource) - Nobody else can decrypt secrets, since only the controller has the private key .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## In action - We will install the Sealed Secrets operator - We will generate a Secret - We will "seal" that Secret (generate a SealedSecret) - We will load that SealedSecret on the cluster - We will check that we now have a Secret .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Installing the operator - The official installation is done through a single YAML file - There is also a Helm chart if you prefer that (see next slide!) .lab[ - Install the operator: .small[ ```bash kubectl apply -f \ https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.17.5/controller.yaml ``` ] ] Note: it installs into `kube-system` by default. If you change that, you will also need to inform `kubeseal` later on. .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- class: extra-details ## Installing with Helm - The Sealed Secrets controller can be installed like this: ```bash helm install --repo https://bitnami-labs.github.io/sealed-secrets/ \ sealed-secrets-controller sealed-secrets --namespace kube-system ``` - Make sure to install in the `kube-system` Namespace - Make sure that the release is named `sealed-secrets-controller` (or pass a `--controller-name` option to `kubeseal` later) .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Creating a Secret - Let's create a normal (unencrypted) secret .lab[ - Create a Secret with a couple of API tokens: ```bash kubectl create secret generic awskey \ --from-literal=AWS_ACCESS_KEY_ID=AKI... \ --from-literal=AWS_SECRET_ACCESS_KEY=abc123xyz... 
\ --dry-run=client -o yaml > secret-aws.yaml ``` ] - Note the `--dry-run` and `-o yaml` (we're just generating YAML, not sending the secrets to our Kubernetes cluster) - We could also write the YAML from scratch or generate it with other tools .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Creating a Sealed Secret - This is done with the `kubeseal` tool - It will obtain the public key from the cluster .lab[ - Create the Sealed Secret: ```bash kubeseal < secret-aws.yaml > sealed-secret-aws.json ``` ] - The file `sealed-secret-aws.json` can be committed to your public repo (if you prefer YAML output, you can add `-o yaml`) .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Using a Sealed Secret - Now let's `kubectl apply` that Sealed Secret to the cluster - The Sealed Secret controller will "unseal" it for us .lab[ - Check that our Secret doesn't exist (yet): ```bash kubectl get secrets ``` - Load the Sealed Secret into the cluster: ```bash kubectl create -f sealed-secret-aws.json ``` - Check that the secret is now available: ```bash kubectl get secrets ``` ] .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Tweaking secrets - Let's see what happens if we try to rename the Secret (or use it in a different namespace) .lab[ - Delete both the Secret and the SealedSecret - Edit `sealed-secret-aws.json` - Change the name of the secret, or its namespace (both in the SealedSecret metadata and in the Secret template) - `kubectl apply -f` the new JSON file and observe the results 🤔 ] .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Sealed Secrets are *scoped* - A SealedSecret cannot be renamed or moved to another namespace (at least, not by default!) - Otherwise, it would allow evading RBAC rules: - if I can view Secrets in namespace `myapp` but not in namespace `yourapp` - I could take a SealedSecret belonging to namespace `yourapp` - ... and deploy it in `myapp` - ... and view the resulting decrypted Secret! - This can be changed with `--scope namespace-wide` or `--scope cluster-wide` .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Working offline - We can obtain the public key from the server (technically, as a PEM certificate) - Then we can use that public key offline (without contacting the server) - Relevant commands: `kubeseal --fetch-cert > seal.pem` `kubeseal --cert seal.pem < secret.yaml > sealedsecret.json` .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Key rotation - The controller generates new keys every month by default - The keys are kept as TLS Secrets in the `kube-system` namespace (named `sealed-secrets-keyXXXXX`) - When keys are "rotated", old decryption keys are kept (otherwise we can't decrypt previously-generated SealedSecrets) .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Key compromise - If the *sealing* key (obtained with `--fetch-cert`) is compromised: *we don't need to do anything (it's a public key!)* - However, if the *unsealing* key (the TLS secret in `kube-system`) is compromised ...
*we need to:* - rotate the key - rotate the SealedSecrets that were encrypted with that key
(as they are compromised) .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Rotating the key - By default, new keys are generated every 30 days - To force the generation of a new key "right now": - obtain an RFC1123 timestamp with `date -R` - edit Deployment `sealed-secrets-controller` (in `kube-system`) - add `--key-cutoff-time=TIMESTAMP` to the command-line - *Then*, rotate the SealedSecrets that were encrypted with it (generate new Secrets, then encrypt them with the new key) .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Discussion (the good) - The footprint of the operator is rather small: - only one CRD - one Deployment, one Service - a few RBAC-related objects .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Discussion (the less good) - Events could be improved - `no key to decrypt secret` when there is a name/namespace mismatch - no event indicating that a SealedSecret was successfully unsealed - Key rotation could be improved (how to find secrets corresponding to a key?) - If the sealing keys are lost, it's impossible to unseal the SealedSecrets (e.g. cluster reinstall) - ... Which means that we need to back up the sealing keys - ... Which means that we need to be super careful with these backups! .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- ## Other approaches - [Kamus](https://kamus.soluto.io/) ([git](https://github.com/Soluto/kamus)) offers "zero-trust" secrets (the cluster cannot decrypt secrets; only the application can decrypt them) - [Vault](https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar?in=vault/kubernetes) can do ... a lot - dynamic secrets (generated on the fly for a consumer) - certificate management - integration outside of Kubernetes - and much more! ??? 
:EN:- The Sealed Secrets Operator :FR:- L'opérateur *Sealed Secrets* .debug[[k8s/sealed-secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/sealed-secrets.md)] --- class: pic .interstitial[] --- name: toc-custom-resource-definitions class: title Custom Resource Definitions .nav[ [Previous part](#toc-sealed-secrets) | [Back to table of contents](#toc-part-5) | [Next part](#toc-ingress-and-tls-certificates) ] .debug[(automatically generated title slide)] --- # Custom Resource Definitions - CRDs are one of the (many) ways to extend the API - CRDs can be defined dynamically (no need to recompile or reload the API server) - A CRD is defined with a CustomResourceDefinition resource (CustomResourceDefinition is conceptually similar to a *metaclass*) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Creating a CRD - We will create a CRD to represent different recipes of pizzas - We will be able to run `kubectl get pizzas` and it will list the recipes - Creating/deleting recipes won't do anything else (because we won't implement a *controller*) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## A bit of history Things related to Custom Resource Definitions: - Kubernetes 1.??: `apiextensions.k8s.io/v1beta1` introduced - Kubernetes 1.16: `apiextensions.k8s.io/v1` introduced - Kubernetes 1.22: `apiextensions.k8s.io/v1beta1` [removed][changes-in-122] - Kubernetes 1.25: [CEL validation rules available in beta][crd-validation-rules-beta] - Kubernetes 1.28: [validation ratcheting][validation-ratcheting] in [alpha][feature-gates] - Kubernetes 1.29: [CEL validation rules available in GA][cel-validation-rules] - Kubernetes 1.30: [validation ratcheting][validation-ratcheting] in [beta][feature-gates]; enabled by default [crd-validation-rules-beta]: https://kubernetes.io/blog/2022/09/23/crd-validation-rules-beta/ [cel-validation-rules]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules [validation-ratcheting]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/4008-crd-ratcheting [feature-gates]: https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features [changes-in-122]: https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/ .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## First slice of pizza ```yaml apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: pizzas.container.training spec: group: container.training version: v1alpha1 scope: Namespaced names: plural: pizzas singular: pizza kind: Pizza shortNames: - piz ``` .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## The joys of API deprecation - Unfortunately, the CRD manifest on the previous slide is deprecated! 
- It is using `apiextensions.k8s.io/v1beta1`, which is dropped in Kubernetes 1.22 - We need to use `apiextensions.k8s.io/v1`, which is a little bit more complex (a few optional things become mandatory, see [this guide](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#customresourcedefinition-v122) for details) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Second slice of pizza - The next slide will show file [k8s/pizza-2.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/pizza-2.yaml) - Note the `spec.versions` list - we need exactly one version with `storage: true` - we can have multiple versions with `served: true` - `spec.versions[].schema.openAPI3Schema` is required (and must be a valid OpenAPI schema; here it's a trivial one) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ```yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: pizzas.container.training spec: group: container.training scope: Namespaced names: plural: pizzas singular: pizza kind: Pizza shortNames: - piz versions: - name: v1alpha1 served: true storage: true schema: openAPIV3Schema: type: object ``` .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Baking some pizza - Let's create the Custom Resource Definition for our Pizza resource .lab[ - Load the CRD: ```bash kubectl apply -f ~/container.training/k8s/pizza-2.yaml ``` - Confirm that it shows up: ```bash kubectl get crds ``` ] .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Creating custom resources The YAML below defines a resource using the CRD that we just created: ```yaml kind: Pizza apiVersion: container.training/v1alpha1 metadata: name: hawaiian spec: toppings: [ cheese, ham, pineapple ] ``` .lab[ - Try to create a few pizza recipes: ```bash kubectl apply -f ~/container.training/k8s/pizzas.yaml ``` ] .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Type validation - Recent versions of Kubernetes will issue errors about unknown fields - We need to improve our OpenAPI schema (to add e.g. the `spec.toppings` field used by our pizza resources) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Creating a bland pizza - Let's try to create a pizza anyway! .lab[ - Only provide the most basic YAML manifest: ```bash kubectl create -f- <
(e.g. major version downgrades) - checking a key or certificate format or validity - and much more! .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## CRDs in the wild - [gitkube](https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml) - [A redis operator](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml) - [cert-manager](https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml) *How big are these YAML files?* *What's the size (e.g. in lines) of each resource?* .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## CRDs in practice - Production-grade CRDs can be extremely verbose (because of the openAPI schema validation) - This can (and usually will) be managed by a framework .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## (Ab)using the API server - If we need to store something "safely" (as in: in etcd), we can use CRDs - This gives us primitives to read/write/list objects (and optionally validate them) - The Kubernetes API server can run on its own (without the scheduler, controller manager, and kubelets) - By loading CRDs, we can have it manage totally different objects (unrelated to containers, clusters, etc.) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## What's next? - Creating a basic CRD is relatively straightforward - But CRDs generally require a *controller* to do anything useful - The controller will typically *watch* our custom resources (and take action when they are created/updated) - Most serious use-cases will also require *validation web hooks* - When our CRD data format evolves, we'll also need *conversion web hooks* - Doing all that work manually is tedious; use a framework! ??? :EN:- Custom Resource Definitions (CRDs) :FR:- Les CRDs *(Custom Resource Definitions)* .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- class: pic .interstitial[] --- name: toc-ingress-and-tls-certificates class: title Ingress and TLS certificates .nav[ [Previous part](#toc-custom-resource-definitions) | [Back to table of contents](#toc-part-6) | [Next part](#toc-cert-manager) ] .debug[(automatically generated title slide)] --- # Ingress and TLS certificates - Most ingress controllers support TLS connections (in a way that is standard across controllers) - The TLS key and certificate are stored in a Secret - The Secret is then referenced in the Ingress resource: ```yaml spec: tls: - secretName: XXX hosts: - YYY rules: - ZZZ ``` .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Obtaining a certificate - In the next section, we will need a TLS key and certificate - These usually come in [PEM](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail) format: ``` -----BEGIN CERTIFICATE----- MIIDATCCAemg... ... -----END CERTIFICATE----- ``` - We will see how to generate a self-signed certificate (easy, fast, but won't be recognized by web browsers) - We will also see how to obtain a certificate from [Let's Encrypt](https://letsencrypt.org/) (requires the cluster to be reachable through a domain name) .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- class: extra-details ## In production ... 
- A very popular option is to use the [cert-manager](https://cert-manager.io/docs/) operator - It's a flexible, modular approach to automated certificate management - For simplicity, in this section, we will use [certbot](https://certbot.eff.org/) - The method shown here works well for one-time certs, but lacks: - automation - renewal .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Which domain to use - If you're doing this in a training: *the instructor will tell you what to use* - If you're doing this on your own Kubernetes cluster: *you should use a domain that points to your cluster* - More precisely: *you should use a domain that points to your ingress controller* - If you don't have a domain name, you can use [nip.io](https://nip.io/) (if your ingress controller is on 1.2.3.4, you can use `whatever.1.2.3.4.nip.io`) .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Setting `$DOMAIN` - We will use `$DOMAIN` in the following section - Let's set it now .lab[ - Set the `DOMAIN` environment variable: ```bash export DOMAIN=... ``` ] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Choose your own adventure! - We present 3 methods to obtain a certificate - We suggest that you use method 1 (self-signed certificate) - it's the simplest and fastest method - it doesn't rely on other components - You're welcome to try methods 2 and 3 (leveraging certbot) - they're great if you want to understand "how the sausage is made" - they require some hacks (make sure port 80 is available) - they won't be used in production (cert-manager is better) .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Method 1, self-signed certificate - Thanks to `openssl`, generating a self-signed cert is just one command away! .lab[ - Generate a key and certificate: ```bash openssl req \ -newkey rsa -nodes -keyout privkey.pem \ -x509 -days 30 -subj /CN=$DOMAIN/ -out cert.pem ``` ] This will create two files, `privkey.pem` and `cert.pem`. .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Method 2, Let's Encrypt with certbot - `certbot` is an [ACME](https://tools.ietf.org/html/rfc8555) client (Automatic Certificate Management Environment) - We can use it to obtain certificates from Let's Encrypt - It needs to listen to port 80 (to complete the [HTTP-01 challenge](https://letsencrypt.org/docs/challenge-types/)) - If port 80 is already taken by our ingress controller, see method 3 .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- class: extra-details ## HTTP-01 challenge - `certbot` contacts Let's Encrypt, asking for a cert for `$DOMAIN` - Let's Encrypt gives a token to `certbot` - Let's Encrypt then tries to access the following URL: `http://$DOMAIN/.well-known/acme-challenge/
<token>` - That URL needs to be routed to `certbot` - Once Let's Encrypt gets the response from `certbot`, it issues the certificate .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Running certbot - There is a very convenient container image, `certbot/certbot` - Let's use a volume to get easy access to the generated key and certificate .lab[ - Obtain a certificate from Let's Encrypt: ```bash EMAIL=your.address@example.com docker run --rm -p 80:80 -v $PWD/letsencrypt:/etc/letsencrypt \ certbot/certbot certonly \ -m $EMAIL \ --standalone --agree-tos -n \ --domain $DOMAIN \ --test-cert ``` ] This will get us a "staging" certificate. Remove `--test-cert` to obtain a *real* certificate. .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Copying the key and certificate - If everything went fine: - the key and certificate files are in `letsencrypt/live/$DOMAIN` - they are owned by `root` .lab[ - Grant ourselves permissions on these files: ```bash sudo chown -R $USER letsencrypt ``` - Copy the certificate and key to the current directory: ```bash cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem . ``` ] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Method 3, certbot with Ingress - Sometimes, we can't simply listen to port 80: - we might already have an ingress controller there - our nodes might be on an internal network - But we can define an Ingress to route the HTTP-01 challenge to `certbot`! - Our Ingress needs to route all requests to `/.well-known/acme-challenge` to `certbot` - There are at least two ways to do that: - run `certbot` in a Pod (and extract the cert+key when it's done) - run `certbot` in a container on a node (and manually route traffic to it) - We're going to use the second option (mostly because it will give us an excuse to tinker with Endpoints resources!) .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## The plan - We need the following resources: - an Endpoints¹ listing a hard-coded IP address and port
(where our `certbot` container will be listening) - a Service corresponding to that Endpoints - an Ingress sending requests to `/.well-known/acme-challenge/*` to that Service
(we don't even need to include a domain name in it) - Then we need to start `certbot` so that it's listening on the right address+port .footnote[¹Endpoints is always plural, because even a single resource is a list of endpoints.] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Creating resources - We prepared a YAML file to create the three resources - However, the Endpoints needs to be adapted to put the current node's address .lab[ - Edit `~/container.training/k8s/certbot.yaml` (replace `A.B.C.D` with the current node's address) - Create the resources: ```bash kubectl apply -f ~/container.training/k8s/certbot.yaml ``` ] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Obtaining the certificate - Now we can run `certbot`, listening on the port listed in the Endpoints (i.e. 8000) .lab[ - Run `certbot`: ```bash EMAIL=your.address@example.com docker run --rm -p 8000:80 -v $PWD/letsencrypt:/etc/letsencrypt \ certbot/certbot certonly \ -m $EMAIL \ --standalone --agree-tos -n \ --domain $DOMAIN \ --test-cert ``` ] This is using the staging environment. Remove `--test-cert` to get a production certificate. .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Copying the certificate - Just like in the previous method, the certificate is in `letsencrypt/live/$DOMAIN` (and owned by root) .lab[ - Grant ourselves permissions on these files: ```bash sudo chown -R $USER letsencrypt ``` - Copy the certificate and key to the current directory: ```bash cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem . ``` ] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Creating the Secret - We now have two files: - `privkey.pem` (the private key) - `cert.pem` (the certificate) - We can create a Secret to hold them .lab[ - Create the Secret: ```bash kubectl create secret tls $DOMAIN --cert=cert.pem --key=privkey.pem ``` ] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Ingress with TLS - To enable TLS for an Ingress, we need to add a `tls` section to the Ingress: ```yaml spec: tls: - secretName: DOMAIN hosts: - DOMAIN rules: ... 
``` - The list of hosts will be used by the ingress controller (to know which certificate to use with [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication)) - Of course, the name of the secret can be different (here, for clarity and convenience, we set it to match the domain) .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## `kubectl create ingress` - We can also create an Ingress using TLS directly - To do it, add `,tls=secret-name` to an Ingress rule - Example: ```bash kubectl create ingress hello \ --rule=hello.example.com/*=hello:80,tls=hello ``` - The domain will automatically be inferred from the rule .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- class: extra-details ## About the ingress controller - Many ingress controllers can use different "stores" for keys and certificates - Our ingress controller needs to be configured to use secrets (as opposed to, e.g., obtain certificates directly with Let's Encrypt) .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Using the certificate .lab[ - Add the `tls` section to an existing Ingress - If you need to see what the `tls` section should look like, you can: - `kubectl explain ingress.spec.tls` - `kubectl create ingress --dry-run=client -o yaml ...` - check `~/container.training/k8s/ingress.yaml` for inspiration - read the docs - Check that the URL now works over `https` (it might take a minute to be picked up by the ingress controller) ] .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Discussion *To repeat something mentioned earlier ...* - The methods presented here are for *educational purpose only* - In most production scenarios, the certificates will be obtained automatically - A very popular option is to use the [cert-manager](https://cert-manager.io/docs/) operator .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Security - Since TLS certificates are stored in Secrets... - ...It means that our Ingress controller must be able to read Secrets - A vulnerability in the Ingress controller can have dramatic consequences - See [CVE-2021-25742](https://github.com/kubernetes/ingress-nginx/issues/7837) for an example - This can be mitigated by limiting which Secrets the controller can access (RBAC rules can specify resource names) - Downside: each TLS secret must explicitly be listed in RBAC (but that's better than a full cluster compromise, isn't it?) ??? :EN:- Ingress and TLS :FR:- Certificats TLS et *ingress* .debug[[k8s/ingress-tls.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-tls.md)] --- ## Optimizing request flow - With most ingress controllers, requests follow this path: HTTP client → load balancer → NodePort → ingress controller Pod → app Pod - Sometimes, some of these components can be on the same machine (e.g. ingress controller Pod and app Pod) - But they can also be on different machines (each arrow = a potential hop) - This could add some unwanted latency! 
(See following diagrams) .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- class: pic  .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- class: pic  .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## External traffic policy - The Service manifest has a field `spec.externalTrafficPolicy` - Possible values are: - `Cluster` (default) - load balance connections to all pods - `Local` - only send connections to local pods (on the same node) - When the policy is set to `Local`, we avoid one hop: HTTP client → load balancer → NodePort .red[**→**] ingress controller Pod → app Pod (See diagram on next slide) .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- class: pic  .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## What if there is no Pod? - If a connection for a Service arrives on a Node through a NodePort... - ...And that Node doesn't host a Pod matching the selector of that Service... (i.e. there is no local Pod) - ...Then the connection is refused - This can be detected from outside (by the external load balancer) - The external load balancer won't send connections to these nodes (See diagram on next slide) .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- class: pic  .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- class: extra-details ## Internal traffic policy - Since Kubernetes 1.21, there is also `spec.internalTrafficPolicy` - It works similarly but for internal traffic - It's an *alpha* feature (not available by default; needs special steps to be enabled on the control plane) - See the [documentation] for more details [documentation]: https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/ .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## Other ways to save hops - Run the ingress controller as a DaemonSet, using port 80 on the nodes: HTTP client → load balancer → ingress controller on Node port 80 → app Pod - Then simplify further by setting up DNS records pointing to the nodes: HTTP client → ingress controller on Node port 80 → app Pod - Or run a combined load balancer / ingress controller at the edge of the cluster: HTTP client → edge ingress controller → app Pod .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## Source IP address - Obtaining the IP address of the HTTP client (from the app Pod) can be tricky! - We should consider (at least) two steps: - obtaining the IP address of the HTTP client (from the ingress controller) - passing that IP address from the ingress controller to the app Pod - The second step is usually done by injecting an HTTP header (typically `x-forwarded-for`) - Most ingress controllers do that out of the box - But how does the ingress controller obtain the IP address of the HTTP client? 
🤔 .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## Scenario 1, direct connection - If the HTTP client connects directly to the ingress controller: easy! - e.g. when running a combined load balancer / ingress controller - or when running the ingress controller as a DaemonSet directly on port 80 .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## Scenario 2, external load balancer - Most external load balancers running in TCP mode don't expose client addresses (HTTP client connects to load balancer; load balancer connects to ingress controller) - The ingress controller will "see" the IP address of the load balancer (instead of the IP address of the client) - Many external load balancers support the [Proxy Protocol] - This enables the ingress controller to "see" the IP address of the HTTP client - It needs to be enabled on both ends (ingress controller and load balancer) [Proxy Protocol]: https://www.haproxy.com/blog/haproxy/proxy-protocol/ .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- ## Scenario 3, leveraging `externalTrafficPolicy` - In some cases, the external load balancer will preserve the HTTP client address - It is then possible to set `externalTrafficPolicy` to `Local` - The ingress controller will then "see" the HTTP client address - If `externalTrafficPolicy` is set to `Cluster`: - sometimes the client address will be visible - when bouncing the connection to another node, the address might be changed - This is a big "it depends!" - Bottom line: rely on the two other techniques instead? .debug[[k8s/ingress-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/ingress-advanced.md)] --- class: pic .interstitial[] --- name: toc-cert-manager class: title cert-manager .nav[ [Previous part](#toc-ingress-and-tls-certificates) | [Back to table of contents](#toc-part-6) | [Next part](#toc-an-elasticsearch-operator) ] .debug[(automatically generated title slide)] --- # cert-manager - cert-manager¹ facilitates certificate signing through the Kubernetes API: - we create a Certificate object (that's a CRD) - cert-manager creates a private key - it signs that key ... - ... or interacts with a certificate authority to obtain the signature - it stores the resulting key+cert in a Secret resource - These Secret resources can be used in many places (Ingress, mTLS, ...) 
.footnote[.red[¹]Always lower case, words separated with a dash; see the [style guide](https://cert-manager.io/docs/faq/style/).] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## Getting signatures - cert-manager can use multiple *Issuers* (another CRD), including: - self-signed - cert-manager acting as a CA - the [ACME protocol](https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment) (notably used by Let's Encrypt) - [HashiCorp Vault](https://www.vaultproject.io/) - Multiple issuers can be configured simultaneously - Issuers can be available in a single namespace, or in the whole cluster (then we use the *ClusterIssuer* CRD) .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## cert-manager in action - We will install cert-manager - We will create a ClusterIssuer to obtain certificates with Let's Encrypt (this will involve setting up an Ingress Controller) - We will create a Certificate request - cert-manager will honor that request and create a TLS Secret .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## Installing cert-manager - It can be installed with a YAML manifest, or with Helm .lab[ - Let's install the cert-manager Helm chart with this one-liner: ```bash helm install cert-manager cert-manager \ --repo https://charts.jetstack.io \ --create-namespace --namespace cert-manager \ --set installCRDs=true ``` ] - If you prefer to install with a single YAML file, that's fine too! (see [the documentation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests) for instructions) .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## ClusterIssuer manifest ```yaml apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: # Remember to update this if you use this manifest to obtain real certificates :) email: hello@example.com server: https://acme-staging-v02.api.letsencrypt.org/directory # To use the production environment, use the following line instead: #server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: issuer-letsencrypt-staging solvers: - http01: ingress: class: traefik ``` .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## Creating the ClusterIssuer - The manifest shown on the previous slide is in [k8s/cm-clusterissuer.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/cm-clusterissuer.yaml) .lab[ - Create the ClusterIssuer: ```bash kubectl apply -f ~/container.training/k8s/cm-clusterissuer.yaml ``` ] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## Certificate manifest ```yaml apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: xyz.A.B.C.D.nip.io spec: secretName: xyz.A.B.C.D.nip.io dnsNames: - xyz.A.B.C.D.nip.io issuerRef: name: letsencrypt-staging kind: ClusterIssuer ``` - The `name`, `secretName`, and `dnsNames` don't have to match - There can be multiple `dnsNames` - The `issuerRef` must match the ClusterIssuer that we created earlier .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## Creating the Certificate - The manifest shown on the 
previous slide is in [k8s/cm-certificate.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/cm-certificate.yaml) .lab[ - Edit the Certificate to update the domain name (make sure to replace A.B.C.D with the IP address of one of your nodes!) - Create the Certificate: ```bash kubectl apply -f ~/container.training/k8s/cm-certificate.yaml ``` ] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## What's happening? - cert-manager will create: - the secret key - a Pod, a Service, and an Ingress to complete the HTTP challenge - then it waits for the challenge to complete .lab[ - View the resources created by cert-manager: ```bash kubectl get pods,services,ingresses \ --selector=acme.cert-manager.io/http01-solver=true ``` ] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## HTTP challenge - The CA (in this case, Let's Encrypt) will fetch a particular URL: `http://
<domain>/.well-known/acme-challenge/<token>
` .lab[ - Check the *path* of the Ingress in particular: ```bash kubectl describe ingress --selector=acme.cert-manager.io/http01-solver=true ``` ] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## What's missing ? -- An Ingress Controller! 😅 .lab[ - Install an Ingress Controller: ```bash kubectl apply -f ~/container.training/k8s/traefik-v2.yaml ``` - Wait a little bit, and check that we now have a `kubernetes.io/tls` Secret: ```bash kubectl get secrets ``` ] .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- class: extra-details ## Using the secret - For bonus points, try to use the secret in an Ingress! - This is what the manifest would look like: ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: xyz spec: tls: - secretName: xyz.A.B.C.D.nip.io hosts: - xyz.A.B.C.D.nip.io rules: ... ``` .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- class: extra-details ## Automatic TLS Ingress with annotations - It is also possible to annotate Ingress resources for cert-manager - If we annotate an Ingress resource with `cert-manager.io/cluster-issuer=xxx`: - cert-manager will detect that annotation - it will obtain a certificate using the specified ClusterIssuer (`xxx`) - it will store the key and certificate in the specified Secret - Note: the Ingress still needs the `tls` section with `secretName` and `hosts` .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- class: extra-details ## Let's Encrypt and nip.io - Let's Encrypt has [rate limits](https://letsencrypt.org/docs/rate-limits/) per domain (the limits only apply to the production environment, not staging) - There is a limit of 50 certificates per registered domain - If we try to use the production environment, we will probably hit the limit - It's fine to use the staging environment for these experiments (our certs won't validate in a browser, but we can always check the details of the cert to verify that it was issued by Let's Encrypt!) ??? :EN:- Obtaining certificates with cert-manager :FR:- Obtenir des certificats avec cert-manager :T: Obtaining TLS certificates with cert-manager .debug[[k8s/cert-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cert-manager.md)] --- ## CA injector - overview - The Kubernetes API server can invoke various webhooks: - conversion webhooks (registered in CustomResourceDefinitions) - mutation webhooks (registered in MutatingWebhookConfigurations) - validation webhooks (registered in ValidatingWebhookConfiguration) - These webhooks must be served over TLS - These webhooks must use valid TLS certificates .debug[[k8s/cainjector.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cainjector.md)] --- ## Webhook certificates - Option 1: certificate issued by a global CA - doesn't work with internal services
(their CN must be `<service-name>.<namespace>.svc`) - Option 2: certificate issued by private CA + CA certificate in system store - requires access to the API server's certificate store - generally not doable on managed Kubernetes clusters - Option 3: certificate issued by private CA + CA certificate in `caBundle` - pass the CA certificate in `caBundle` field
(in CRD or webhook manifests) - can be managed automatically by cert-manager .debug[[k8s/cainjector.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cainjector.md)] --- ## CA injector - details - Add annotation to *injectable* resources (CustomResourceDefinition, MutatingWebhookConfiguration, ValidatingWebhookConfiguration) - Annotation refers to the thing holding the certificate: - `cert-manager.io/inject-ca-from: <namespace>/<certificate-name>` - `cert-manager.io/inject-ca-from-secret: <namespace>/<secret-name>
` - `cert-manager.io/inject-apiserver-ca: true` (use API server CA) - When injecting from a Secret, the Secret must have a special annotation: `cert-manager.io/allow-direct-injection: "true"` - See [cert-manager documentation] for details [cert-manager documentation]: https://cert-manager.io/docs/concepts/ca-injector/ .debug[[k8s/cainjector.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cainjector.md)] --- class: pic .interstitial[] --- name: toc-an-elasticsearch-operator class: title An ElasticSearch Operator .nav[ [Previous part](#toc-cert-manager) | [Back to table of contents](#toc-part-6) | [Next part](#toc-dynamic-admission-control) ] .debug[(automatically generated title slide)] --- # An ElasticSearch Operator - We will install [Elastic Cloud on Kubernetes](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html), an ElasticSearch operator - This operator requires PersistentVolumes - We will install Rancher's [local path storage provisioner](https://github.com/rancher/local-path-provisioner) to automatically create these - Then, we will create an ElasticSearch resource - The operator will detect that resource and provision the cluster - We will integrate that ElasticSearch cluster with other resources (Kibana, Filebeat, Cerebro ...) .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Installing a Persistent Volume provisioner (This step can be skipped if you already have a dynamic volume provisioner.) - This provisioner creates Persistent Volumes backed by `hostPath` (local directories on our nodes) - It doesn't require anything special ... - ... But losing a node = losing the volumes on that node! .lab[ - Install the local path storage provisioner: ```bash kubectl apply -f ~/container.training/k8s/local-path-storage.yaml ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Making sure we have a default StorageClass - The ElasticSearch operator will create StatefulSets - These StatefulSets will instantiate PersistentVolumeClaims - These PVCs need to be explicitly associated with a StorageClass - Or we need to tag a StorageClass to be used as the default one .lab[ - List StorageClasses: ```bash kubectl get storageclasses ``` ] We should see the `local-path` StorageClass. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Setting a default StorageClass - This is done by adding an annotation to the StorageClass: `storageclass.kubernetes.io/is-default-class: true` .lab[ - Tag the StorageClass so that it's the default one: ```bash kubectl annotate storageclass local-path \ storageclass.kubernetes.io/is-default-class=true ``` - Check the result: ```bash kubectl get storageclasses ``` ] Now, the StorageClass should have `(default)` next to its name. 
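- For reference, this is roughly what the annotation looks like inside the StorageClass manifest itself

  (a minimal sketch of the relevant fields only; the actual `local-path-storage.yaml` has more settings, and the `volumeBindingMode` shown here is an assumption)

```yaml
# Sketch: a StorageClass marked as the cluster's default
# (assumed fields; the real local-path manifest may differ)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```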
.debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Install the ElasticSearch operator - The operator provides: - a few CustomResourceDefinitions - a Namespace for its other resources - a ValidatingWebhookConfiguration for type checking - a StatefulSet for its controller and webhook code - a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions - All these resources are grouped in a convenient YAML file .lab[ - Install the operator: ```bash kubectl apply -f ~/container.training/k8s/eck-operator.yaml ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Check our new custom resources - Let's see which CRDs were created .lab[ - List all CRDs: ```bash kubectl get crds ``` ] This operator supports ElasticSearch, but also Kibana and APM. Cool! .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Create the `eck-demo` namespace - For clarity, we will create everything in a new namespace, `eck-demo` - This namespace is hard-coded in the YAML files that we are going to use - We need to create that namespace .lab[ - Create the `eck-demo` namespace: ```bash kubectl create namespace eck-demo ``` - Switch to that namespace: ```bash kns eck-demo ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- class: extra-details ## Can we use a different namespace? Yes, but then we need to update all the YAML manifests that we are going to apply in the next slides. The `eck-demo` namespace is hard-coded in these YAML manifests. Why? Because when defining a ClusterRoleBinding that references a ServiceAccount, we have to indicate in which namespace the ServiceAccount is located. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Create an ElasticSearch resource - We can now create a resource with `kind: ElasticSearch` - The YAML for that resource will specify all the desired parameters: - how many nodes we want - image to use - add-ons (kibana, cerebro, ...) - whether to use TLS or not - etc. .lab[ - Create our ElasticSearch cluster: ```bash kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Operator in action - Over the next minutes, the operator will create our ES cluster - It will report our cluster status through the CRD .lab[ - Check the logs of the operator: ```bash stern --namespace=elastic-system operator ``` - Watch the status of the cluster through the CRD: ```bash kubectl get es -w ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Connecting to our cluster - It's not easy to use the ElasticSearch API from the shell - But let's check at least if ElasticSearch is up! .lab[ - Get the ClusterIP of our ES instance: ```bash kubectl get services ``` - Issue a request with `curl`: ```bash curl http://`CLUSTERIP`:9200 ``` ] We get an authentication error. Our cluster is protected! 
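- If we don't want to copy the ClusterIP by hand, we can fetch it with a `jsonpath` query

  (a sketch, assuming ECK's default naming for a cluster named `demo`, i.e. a Service called `demo-es-http`; adjust the Service name if yours differs)

```bash
# Assumption: the ElasticSearch cluster is named "demo",
# so its HTTP Service is named "demo-es-http"
CLUSTERIP=$(kubectl get service demo-es-http -o jsonpath='{.spec.clusterIP}')
curl http://$CLUSTERIP:9200
```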
.debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Obtaining the credentials - The operator creates a user named `elastic` - It generates a random password and stores it in a Secret .lab[ - Extract the password: ```bash kubectl get secret demo-es-elastic-user \ -o go-template="{{ .data.elastic | base64decode }} " ``` - Use it to connect to the API: ```bash curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200 ``` ] We should see a JSON payload with the `"You Know, for Search"` tagline. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Sending data to the cluster - Let's send some data to our brand new ElasticSearch cluster! - We'll deploy a filebeat DaemonSet to collect node logs .lab[ - Deploy filebeat: ```bash kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml ``` - Wait until some pods are up: ```bash watch kubectl get pods -l k8s-app=filebeat ``` - Check that a filebeat index was created: ```bash curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200/_cat/indices ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Deploying an instance of Kibana - Kibana can visualize the logs injected by filebeat - The ECK operator can also manage Kibana - Let's give it a try! .lab[ - Deploy a Kibana instance: ```bash kubectl apply -f ~/container.training/k8s/eck-kibana.yaml ``` - Wait for it to be ready: ```bash kubectl get kibana -w ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Connecting to Kibana - Kibana is automatically set up to connect to ElasticSearch (this is arranged by the YAML that we're using) - However, it will ask for authentication - It's using the same user/password as ElasticSearch .lab[ - Get the NodePort allocated to Kibana: ```bash kubectl get services ``` - Connect to it with a web browser - Use the same user/password as before ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Setting up Kibana After the Kibana UI loads, we need to click around a bit .lab[ - Pick "explore on my own" - Click on "Use Elasticsearch data / Connect to your Elasticsearch index" - Enter `filebeat-*` for the index pattern and click "Next step" - Select `@timestamp` as time filter field name - Click on "discover" (the small icon looking like a compass on the left bar) - Play around! ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Scaling up the cluster - At this point, we have only one node - We are going to scale up - But first, we'll deploy Cerebro, a UI for ElasticSearch - This will let us see the state of the cluster, how indexes are sharded, etc. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Deploying Cerebro - Cerebro is stateless, so it's fairly easy to deploy (one Deployment + one Service) - However, it needs the address and credentials for ElasticSearch - We prepared yet another manifest for that! 
.lab[ - Deploy Cerebro: ```bash kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml ``` - Look up the NodePort number and connect to it: ```bash kubectl get services ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Scaling up the cluster - We can see on Cerebro that the cluster is "yellow" (because our index is not replicated) - Let's change that! .lab[ - Edit the ElasticSearch cluster manifest: ```bash kubectl edit es demo ``` - Find the field `count: 1` and change it to 3 - Save and quit ] ??? :EN:- Deploying ElasticSearch with ECK :FR:- Déployer ElasticSearch avec ECK .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- class: pic .interstitial[] --- name: toc-dynamic-admission-control class: title Dynamic Admission Control .nav[ [Previous part](#toc-an-elasticsearch-operator) | [Back to table of contents](#toc-part-7) | [Next part](#toc-policy-management-with-kyverno) ] .debug[(automatically generated title slide)] --- # Dynamic Admission Control - This is one of the many ways to extend the Kubernetes API - High level summary: dynamic admission control relies on webhooks that are ... - dynamic (can be added/removed on the fly) - running inside or outside the cluster - *validating* (yay/nay) or *mutating* (can change objects that are created/updated) - selective (can be configured to apply only to some kinds, some selectors...) - mandatory or optional (should it block operations when webhook is down?) - Used on their own (e.g. policy enforcement) or as part of operators .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Use cases - Defaulting *injecting image pull secrets, sidecars, environment variables...* - Policy enforcement and best practices *prevent: `latest` images, deprecated APIs...* *require: PDBs, resource requests/limits, labels/annotations, local registry...* - Problem mitigation *block nodes with vulnerable kernels, inject log4j mitigations...* - Extended validation for operators .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## You said *dynamic?* - Some admission controllers are built into the API server - They are enabled/disabled through Kubernetes API server configuration (e.g. `--enable-admission-plugins`/`--disable-admission-plugins` flags) - Here, we're talking about *dynamic* admission controllers - They can be added/removed while the API server is running (without touching the configuration files or even having access to them) - This is done through two kinds of cluster-scope resources: ValidatingWebhookConfiguration and MutatingWebhookConfiguration .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## You said *webhooks?* - A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains: - a resource filter
(e.g. "all pods", "deployments in namespace xyz", "everything"...) - an operations filter
(e.g. CREATE, UPDATE, DELETE) - the address of the webhook server - Each time an operation matches the filters, it is sent to the webhook server .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## What gets sent exactly? - The API server will `POST` a JSON object to the webhook - That object will be a Kubernetes API message with `kind` `AdmissionReview` - It will contain a `request` field, with, notably: - `request.uid` (to be used when replying) - `request.object` (the object created/deleted/changed) - `request.oldObject` (when an object is modified) - `request.userInfo` (who was making the request to the API in the first place) (See [the documentation](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#request) for a detailed example showing more fields.) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## How should the webhook respond? - By replying with another `AdmissionReview` in JSON - It should have a `response` field, with, notably: - `response.uid` (matching the `request.uid`) - `response.allowed` (`true`/`false`) - `response.status.message` (optional string; useful when denying requests) - `response.patchType` (when a mutating webhook changes the object; e.g. `json`) - `response.patch` (the patch, encoded in base64) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## What if the webhook *does not* respond? - If "something bad" happens, the API server follows the `failurePolicy` option - this is a per-webhook option (specified in the webhook configuration) - it can be `Fail` (the default) or `Ignore` ("allow all, unmodified") - What's "something bad"? - webhook responds with something invalid - webhook takes more than 10 seconds to respond
(this can be changed with `timeoutSeconds` field in the webhook config) - webhook is down or has invalid certificates
(TLS! It's not just a good idea; for admission control, it's the law!) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## What did you say about TLS? - The webhook configuration can indicate: - either `url` of the webhook server (has to begin with `https://`) - or `service.name` and `service.namespace` of a Service on the cluster - In the latter case, the Service has to accept TLS connections on port 443 - It has to use a certificate with CN `
<service-name>.<namespace>.svc` (**and** a `subjectAltName` extension with `DNS:<service-name>.<namespace>
.svc`) - The certificate needs to be valid (signed by a CA trusted by the API server) ... alternatively, we can pass a `caBundle` in the webhook configuration .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Webhook server inside or outside - "Outside" webhook server is defined with `url` option - convenient for external webhooks (e.g. tamper-resistant audit trail) - also great for initial development (e.g. with ngrok) - requires outbound connectivity (duh) and can become a SPOF - "Inside" webhook server is defined with `service` option - convenient when the webhook needs to be deployed and managed on the cluster - also great for air gapped clusters - development can be harder (but tools like [Tilt](https://tilt.dev) can help) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Developing a simple admission webhook - We're going to register a custom webhook! - First, we'll just dump the `AdmissionRequest` object (using a little Node app) - Then, we'll implement a strict policy on a specific label (using a little Flask app) - Development will happen in local containers, plumbed with ngrok - Then we will deploy to the cluster 🔥 .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Running the webhook locally - We prepared a Docker Compose file to start the whole stack (the Node "echo" app, the Flask app, and one ngrok tunnel for each of them) - We will need an ngrok account for the tunnels (a free account is fine) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- class: extra-details ## What's ngrok? - Ngrok provides secure tunnels to access local services - Example: run `ngrok http 1234` - `ngrok` will display a publicly-available URL (e.g. https://xxxxyyyyzzzz.ngrok.app) - Connections to https://xxxxyyyyzzzz.ngrok.app will terminate at `localhost:1234` - Basic product is free; extra features (vanity domains, end-to-end TLS...) for $$$ - Perfect to develop our webhook! .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- class: extra-details ## Ngrok in production - Ngrok was initially known for its local webhook development features - It now supports production scenarios as well (load balancing, WAF, authentication, circuit-breaking...) - Including some that are very relevant to Kubernetes (e.g. 
[ngrok Ingress Controller](https://github.com/ngrok/kubernetes-ingress-controller) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Ngrok tokens - If you're attending a live training, you might have an ngrok token - Look in `~/ngrok.env` and if that file exists, copy it to the stack: .lab[ ```bash cp ~/ngrok.env ~/container.training/webhooks/admission/.env ``` ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Starting the whole stack .lab[ - Go to the webhook directory: ```bash cd ~/container.training/webhooks/admission ``` - Start the webhook in Docker containers: ```bash docker-compose up ``` ] *Note the URL in `ngrok-echo_1` looking like `url=https://xxxx.ngrok.io`.* .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Update the webhook configuration - We have a webhook configuration in `k8s/webhook-configuration.yaml` - We need to update the configuration with the correct `url` .lab[ - Edit the webhook configuration manifest: ```bash vim k8s/webhook-configuration.yaml ``` - **Uncomment** the `url:` line - **Update** the `.ngrok.io` URL with the URL shown by Compose - Save and quit ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Register the webhook configuration - Just after we register the webhook, it will be called for each matching request (CREATE and UPDATE on Pods in all namespaces) - The `failurePolicy` is `Ignore` (so if the webhook server is down, we can still create pods) .lab[ - Register the webhook: ```bash kubectl apply -f k8s/webhook-configuration.yaml ``` ] It is strongly recommended to tail the logs of the API server while doing that. .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Create a pod - Let's create a pod and try to set a `color` label .lab[ - Create a pod named `chroma`: ```bash kubectl run --restart=Never chroma --image=nginx ``` - Add a label `color` set to `pink`: ```bash kubectl label pod chroma color=pink ``` ] We should see the `AdmissionReview` objects in the Compose logs. Note: the webhook doesn't do anything (other than printing the request payload). 
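- Before switching to the "real" webhook, it helps to picture what that webhook will have to send back

  (a minimal sketch of a "deny" response, using the `AdmissionReview` fields described earlier; shown as YAML for readability, the actual payload is JSON, and the `uid` placeholder must be copied from `request.uid`)

```yaml
# Sketch: minimal AdmissionReview response denying a request
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
response:
  uid: "<copy of request.uid>"   # must echo the uid of the request being reviewed
  allowed: false
  status:
    message: "the label color must be red, green, or blue"
```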
.debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Use the "real" admission webhook - We have a small Flask app implementing a particular policy on pod labels: - if a pod sets a label `color`, it must be `blue`, `green`, `red` - once that `color` label is set, it cannot be removed or changed - That Flask app was started when we did `docker-compose up` earlier - It is exposed through its own ngrok tunnel - We are going to use that webhook instead of the other one (by changing only the `url` field in the ValidatingWebhookConfiguration) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Update the webhook configuration .lab[ - First, check the ngrok URL of the tunnel for the Flask app: ```bash docker-compose logs ngrok-flask ``` - Then, edit the webhook configuration: ```bash kubectl edit validatingwebhookconfiguration admission.container.training ``` - Find the `url:` field with the `.ngrok.io` URL and update it - Save and quit; the new configuration is applied immediately ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Verify the behavior of the webhook - Try to create a few pods and/or change labels on existing pods - What happens if we try to make changes to the earlier pod? (the one that has `label=pink`) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Deploying the webhook on the cluster - Let's see what's needed to self-host the webhook server! - The webhook needs to be reachable through a Service on our cluster - The Service needs to accept TLS connections on port 443 - We need a proper TLS certificate: - with the right `CN` and `subjectAltName` (`
<service-name>.<namespace>
.svc`) - signed by a trusted CA - We can either use a "real" CA, or use the `caBundle` option to specify the CA cert (the latter makes it easy to use self-signed certs) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## In practice - We're going to generate a key pair and a self-signed certificate - We will store them in a Secret - We will run the webhook in a Deployment, exposed with a Service - We will update the webhook configuration to use that Service - The Service will be named `admission`, in Namespace `webhooks` (keep in mind that the ValidatingWebhookConfiguration itself is at cluster scope) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Let's get to work! .lab[ - Make sure we're in the right directory: ```bash cd ~/container.training/webhooks/admission ``` - Create the namespace: ```bash kubectl create namespace webhooks ``` - Switch to the namespace: ```bash kubectl config set-context --current --namespace=webhooks ``` ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Deploying the webhook - *Normally,* we would author an image for this - Since our webhook is just *one* Python source file ... ... we'll store it in a ConfigMap, and install dependencies on the fly .lab[ - Load the webhook source in a ConfigMap: ```bash kubectl create configmap admission --from-file=flask/webhook.py ``` - Create the Deployment and Service: ```bash kubectl apply -f k8s/webhook-server.yaml ``` ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Generating the key pair and certificate - Let's call OpenSSL to the rescue! (of course, there are plenty others options; e.g. `cfssl`) .lab[ - Generate a self-signed certificate: ```bash NAMESPACE=webhooks SERVICE=admission CN=$SERVICE.$NAMESPACE.svc openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem \ -days 30 -subj /CN=$CN -addext subjectAltName=DNS:$CN ``` - Load up the key and cert in a Secret: ```bash kubectl create secret tls admission --cert=cert.pem --key=key.pem ``` ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Update the webhook configuration - Let's reconfigure the webhook to use our Service instead of ngrok .lab[ - Edit the webhook configuration manifest: ```bash vim k8s/webhook-configuration.yaml ``` - Comment out the `url:` line - Uncomment the `service:` section - Save, quit - Update the webhook configuration: ```bash kubectl apply -f k8s/webhook-configuration.yaml ``` ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Add our self-signed cert to the `caBundle` - The API server won't accept our self-signed certificate - We need to add it to the `caBundle` field in the webhook configuration - The `caBundle` will be our `cert.pem` file, encoded in base64 .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- Shell to the rescue! 
.lab[ - Load up our cert and encode it in base64: ```bash CA=$(base64 -w0 < cert.pem) ``` - Define a patch operation to update the `caBundle`: ```bash PATCH='[{ "op": "replace", "path": "/webhooks/0/clientConfig/caBundle", "value":"'$CA'" }]' ``` - Patch the webhook configuration: ```bash kubectl patch validatingwebhookconfiguration \ admission.webhook.container.training \ --type='json' -p="$PATCH" ``` ] .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Try it out! - Keep an eye on the API server logs - Tail the logs of the pod running the webhook server - Create a few pods; we should see requests in the webhook server logs - Check that the label `color` is enforced correctly (it should only allow values of `red`, `green`, `blue`) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- ## Coming soon... - Kubernetes Validating Admission Policies - Integrated with the Kubernetes API server - Lets us define policies using [CEL (Common Expression Language)][cel-spec] - Available in beta in Kubernetes 1.28 - Check this [CNCF Blog Post][cncf-blog-vap] for more details [cncf-blog-vap]: https://www.cncf.io/blog/2023/09/14/policy-management-in-kubernetes-is-changing/ [cel-spec]: https://github.com/google/cel-spec ??? :EN:- Dynamic admission control with webhooks :FR:- Contrôle d'admission dynamique (webhooks) .debug[[k8s/admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/admission.md)] --- class: pic .interstitial[] --- name: toc-policy-management-with-kyverno class: title Policy Management with Kyverno .nav[ [Previous part](#toc-dynamic-admission-control) | [Back to table of contents](#toc-part-7) | [Next part](#toc-the-aggregation-layer) ] .debug[(automatically generated title slide)] --- # Policy Management with Kyverno - The Kubernetes permission management system is very flexible ... - ... But it can't express *everything!* - Examples: - forbid using `:latest` image tag - enforce that each Deployment, Service, etc. has an `owner` label
(except in e.g. `kube-system`) - enforce that each container has at least a `readinessProbe` healthcheck - How can we address that, and express these more complex *policies?* .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Admission control - The Kubernetes API server provides a generic mechanism called *admission control* - Admission controllers will examine each write request, and can: - approve/deny it (for *validating* admission controllers) - additionally *update* the object (for *mutating* admission controllers) - These admission controllers can be: - plug-ins built into the Kubernetes API server
(selectively enabled/disabled by e.g. command-line flags) - webhooks registered dynamically with the Kubernetes API server .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## What's Kyverno? - Policy management solution for Kubernetes - Open source (https://github.com/kyverno/kyverno/) - Compatible with all clusters (doesn't require to reconfigure the control plane, enable feature gates...) - We don't endorse / support it in a particular way, but we think it's cool - It's not the only solution! (see e.g. [Open Policy Agent](https://www.openpolicyagent.org/docs/v0.12.2/kubernetes-admission-control/) or [Validating Admission Policies](https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/)) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## What can Kyverno do? - *Validate* resource manifests (accept/deny depending on whether they conform to our policies) - *Mutate* resources when they get created or updated (to add/remove/change fields on the fly) - *Generate* additional resources when a resource gets created (e.g. when namespace is created, automatically add quotas and limits) - *Audit* existing resources (warn about resources that violate certain policies) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## How does it do it? - Kyverno is implemented as a *controller* or *operator* - It typically runs as a Deployment on our cluster - Policies are defined as *custom resource definitions* - They are implemented with a set of *dynamic admission control webhooks* -- 🤔 -- - Let's unpack that! .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Custom resource definitions - When we install Kyverno, it will register new resource types: - Policy and ClusterPolicy (per-namespace and cluster-scope policies) - PolicyReport and ClusterPolicyReport (used in audit mode) - GenerateRequest (used internally when generating resources asynchronously) - We will be able to do e.g. `kubectl get clusterpolicyreports --all-namespaces` (to see policy violations across all namespaces) - Policies will be defined in YAML and registered/updated with e.g. 
`kubectl apply` .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Dynamic admission control webhooks - When we install Kyverno, it will register a few webhooks for its use (by creating ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources) - All subsequent resource modifications are submitted to these webhooks (creations, updates, deletions) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Controller - When we install Kyverno, it creates a Deployment (and therefore, a Pod) - That Pod runs the server used by the webhooks - It also runs a controller that will: - run checks in the background (and generate PolicyReport objects) - process GenerateRequest objects asynchronously .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Kyverno in action - We're going to install Kyverno on our cluster - Then, we will use it to implement a few policies .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Installing Kyverno The recommended [installation method][install-kyverno] is to use Helm charts. (It's also possible to install with a single YAML manifest.) .lab[ - Install Kyverno: ```bash helm upgrade --install --repo https://kyverno.github.io/kyverno/ \ --namespace kyverno --create-namespace kyverno kyverno ``` ] [install-kyverno]: https://kyverno.io/docs/installation/methods/ .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Kyverno policies in a nutshell - Which resources does it *select?* - can specify resources to *match* and/or *exclude* - can specify *kinds* and/or *selector* and/or users/roles doing the action - Which operation should be done? - validate, mutate, or generate - For validation, whether it should *enforce* or *audit* failures - Operation details (what exactly to validate, mutate, or generate) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Painting pods - As an example, we'll implement a policy regarding "Pod color" - The color of a Pod is the value of the label `color` - Example: `kubectl label pod hello color=yellow` to paint a Pod in yellow - We want to implement the following policies: - color is optional (i.e. the label is not required) - if color is set, it *must* be `red`, `green`, or `blue` - once the color has been set, it cannot be changed - once the color has been set, it cannot be removed .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Immutable primary colors, take 1 - First, we will add a policy to block forbidden colors (i.e. 
only allow `red`, `green`, or `blue`) - One possible approach: - *match* all pods that have a `color` label that is not `red`, `green`, or `blue` - *deny* these pods - We could also *match* all pods, then *deny* with a condition .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- .small[ ```yaml apiVersion: kyverno.io/v1 kind: ClusterPolicy metadata: name: pod-color-policy-1 spec: validationFailureAction: Enforce rules: - name: ensure-pod-color-is-valid match: resources: kinds: - Pod selector: matchExpressions: - key: color operator: Exists - key: color operator: NotIn values: [ red, green, blue ] validate: message: "If it exists, the label color must be red, green, or blue." deny: {} ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Testing without the policy - First, let's create a pod with an "invalid" label (while we still can!) - We will use this later .lab[ - Create a pod: ```bash kubectl run test-color-0 --image=nginx ``` - Apply a color label: ```bash kubectl label pod test-color-0 color=purple ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Load and try the policy .lab[ - Load the policy: ```bash kubectl apply -f ~/container.training/k8s/kyverno-pod-color-1.yaml ``` - Create a pod: ```bash kubectl run test-color-1 --image=nginx ``` - Try to apply a few color labels: ```bash kubectl label pod test-color-1 color=purple kubectl label pod test-color-1 color=red kubectl label pod test-color-1 color- ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Immutable primary colors, take 2 - Next rule: once a `color` label has been added, it cannot be changed (i.e. if `color=red`, we can't change it to `color=blue`) - Our approach: - *match* all pods - add a *precondition* matching pods that have a `color` label
(both in their "before" and "after" states) - *deny* these pods if their `color` label has changed - Again, other approaches are possible! .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- .small[ ```yaml apiVersion: kyverno.io/v1 kind: ClusterPolicy metadata: name: pod-color-policy-2 spec: validationFailureAction: Enforce background: false rules: - name: prevent-color-change match: resources: kinds: - Pod preconditions: - key: "{{ request.operation }}" operator: Equals value: UPDATE - key: "{{ request.oldObject.metadata.labels.color || '' }}" operator: NotEquals value: "" - key: "{{ request.object.metadata.labels.color || '' }}" operator: NotEquals value: "" validate: message: "Once label color has been added, it cannot be changed." deny: conditions: - key: "{{ request.object.metadata.labels.color }}" operator: NotEquals value: "{{ request.oldObject.metadata.labels.color }}" ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Comparing "old" and "new" - The fields of the webhook payload are available through `{{ request }}` - For UPDATE requests, we can access: `{{ request.oldObject }}` → the object as it is right now (before the request) `{{ request.object }}` → the object with the changes made by the request .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Missing labels - We can access the `color` label through `{{ request.object.metadata.labels.color }}` - If we reference a label (or any field) that doesn't exist, the policy fails (with an error similar to `JMESPath query failed: Unknown key ... in path`) - To work around that, [use an OR expression][non-existence-checks]: `{{ request.object.metadata.labels.color || '' }}` - Note that in older versions of Kyverno, this wasn't always necessary (e.g. in *preconditions*, a missing label would evaluate to an empty string) [non-existence-checks]: https://kyverno.io/docs/writing-policies/jmespath/#non-existence-checks .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Load and try the policy .lab[ - Load the policy: ```bash kubectl apply -f ~/container.training/k8s/kyverno-pod-color-2.yaml ``` - Create a pod: ```bash kubectl run test-color-2 --image=nginx ``` - Try to apply a few color labels: ```bash kubectl label pod test-color-2 color=purple kubectl label pod test-color-2 color=red kubectl label pod test-color-2 color=blue --overwrite ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## `background` - What is this `background: false` option, and why do we need it? -- - Admission controllers are only invoked when we change an object - Existing objects are not affected (e.g. if we have a pod with `color=pink` *before* installing our policy) - Kyverno can also run checks in the background, and report violations (we'll see later how they are reported) - `background: false` disables that -- - Alright, but ... *why* do we need it? 
.debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Accessing `AdmissionRequest` context - In this specific policy, we want to prevent an *update* (as opposed to a mere *create* operation) - We want to compare the *old* and *new* version (to check if a specific label was removed) - The `AdmissionRequest` object has `object` and `oldObject` fields (the `AdmissionRequest` object is the thing that gets submitted to the webhook) - We access the `AdmissionRequest` object through `{{ request }}` -- - Alright, but ... what's the link with `background: false`? .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## `{{ request }}` - The `{{ request }}` context is only available when there is an `AdmissionRequest` - When a resource is "at rest", there is no `{{ request }}` (and no old/new) - Therefore, a policy that uses `{{ request }}` cannot validate existing objects (it can only be used when an object is actually created/updated/deleted) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Immutable primary colors, take 3 - Last rule: once a `color` label has been added, it cannot be removed - Our approach is to match all pods that: - *had* a `color` label (in `request.oldObject`) - *don't have* a `color` label (in `request.object`) - And *deny* these pods - Again, other approaches are possible! .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- .small[ ```yaml apiVersion: kyverno.io/v1 kind: ClusterPolicy metadata: name: pod-color-policy-3 spec: validationFailureAction: Enforce background: false rules: - name: prevent-color-change match: resources: kinds: - Pod preconditions: - key: "{{ request.operation }}" operator: Equals value: UPDATE - key: "{{ request.oldObject.metadata.labels.color || '' }}" operator: NotEquals value: "" - key: "{{ request.object.metadata.labels.color || '' }}" operator: Equals value: "" validate: message: "Once label color has been added, it cannot be removed." deny: conditions: ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Load and try the policy .lab[ - Load the policy: ```bash kubectl apply -f ~/container.training/k8s/kyverno-pod-color-3.yaml ``` - Create a pod: ```bash kubectl run test-color-3 --image=nginx ``` - Try to apply a few color labels: ```bash kubectl label pod test-color-3 color=purple kubectl label pod test-color-3 color=red kubectl label pod test-color-3 color- ``` ] .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Background checks - What about the `test-color-0` pod that we created initially? (remember: we did set `color=purple`) - We can see the infringing Pod in a PolicyReport .lab[ - Check that the pod still has an "invalid" color: ```bash kubectl get pods -L color ``` - List PolicyReports: ```bash kubectl get policyreports kubectl get polr ``` ] (Sometimes it takes a little while for the infringement to show up, though.) 
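For reference, here is a rough sketch of what a PolicyReport entry can look like (the exact `apiVersion`, report name, and field layout depend on the Kyverno version; the values below are illustrative, not actual output):

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default        # hypothetical report name
  namespace: default
summary:
  pass: 3
  fail: 1
results:
  # One entry per (policy, rule, resource) combination
  - policy: pod-color-policy-1
    rule: ensure-pod-color-is-valid
    result: fail
    message: "If it exists, the label color must be red, green, or blue."
    resources:
      - apiVersion: v1
        kind: Pod
        name: test-color-0
        namespace: default
```

`kubectl describe polr` gives a similar (but more verbose) view of the same information.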
.debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Generating objects - When we create a Namespace, we also want to automatically create: - a LimitRange (to set default CPU and RAM requests and limits) - a ResourceQuota (to limit the resources used by the namespace) - a NetworkPolicy (to isolate the namespace) - We can do that with a Kyverno policy with a *generate* action (it is mutually exclusive with the *validate* action) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Overview - The *generate* action must specify: - the `kind` of resource to generate - the `name` of the resource to generate - its `namespace`, when applicable - *either* a `data` structure, to be used to populate the resource - *or* a `clone` reference, to copy an existing resource Note: the `apiVersion` field appears to be optional. .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## In practice - We will use the policy [k8s/kyverno-namespace-setup.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/kyverno-namespace-setup.yaml) - We need to generate 3 resources, so we have 3 rules in the policy - Excerpt: ```yaml generate: kind: LimitRange name: default-limitrange namespace: "{{request.object.metadata.name}}" data: spec: limits: ``` - Note that we have to specify the `namespace` (and we infer it from the name of the resource being created, i.e. the Namespace) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Lifecycle - After generated objects have been created, we can change them (Kyverno won't update them) - Except if we use `clone` together with the `synchronize` flag (in that case, Kyverno will watch the cloned resource) - This is convenient for e.g. 
ConfigMaps shared between Namespaces - Objects are generated only at *creation* (not when updating an old object) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- class: extra-details ## Managing `ownerReferences` - By default, the generated object and triggering object have independent lifecycles (deleting the triggering object doesn't affect the generated object) - It is possible to associate the generated object with the triggering object (so that deleting the triggering object also deletes the generated object) - This is done by adding the triggering object information to `ownerReferences` (in the generated object `metadata`) - See [Linking resources with ownerReferences][ownerref] for an example [ownerref]: https://kyverno.io/docs/writing-policies/generate/#linking-trigger-with-downstream .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Asynchronous creation - Kyverno creates resources asynchronously (by creating a GenerateRequest resource first) - This is useful when the resource cannot be created (because of permissions or dependency issues) - Kyverno will periodically loop through the pending GenerateRequests - Once the resource is created, the GenerateRequest is marked as Completed .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Footprint - 8 CRDs - 5 webhooks - 2 Services, 1 Deployment, 2 ConfigMaps - Internal resources (GenerateRequest) "parked" in a Namespace - Kyverno packs a lot of features in a small footprint .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Strengths - Kyverno is very easy to install (it's hard to get easier than a single `kubectl apply -f`) - The setup of the webhooks is fully automated (including certificate generation) - It offers both namespaced and cluster-scope policies - The policy language leverages existing constructs (e.g. `matchExpressions`) .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- ## Caveats - The `{{ request }}` context is powerful, but difficult to validate (Kyverno can't know ahead of time how it will be populated) - Advanced policies (with conditionals) have unique, exotic syntax: ```yaml spec: =(volumes): =(hostPath): path: "!/var/run/docker.sock" ``` - Writing and validating policies can be difficult .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- class: extra-details ## Pods created by controllers - When e.g. a ReplicaSet or DaemonSet creates a pod, it "owns" it (the ReplicaSet or DaemonSet is listed in the Pod's `.metadata.ownerReferences`) - Kyverno treats these Pods differently - If my understanding of the code is correct (big *if*): - it skips validation for "owned" Pods - instead, it validates their controllers - this way, Kyverno can report errors on the controller instead of the pod - This can be a bit confusing when testing policies on such pods! ??? 
:EN:- Policy Management with Kyverno :FR:- Gestion de *policies* avec Kyverno .debug[[k8s/kyverno.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kyverno.md)] --- class: pic .interstitial[] --- name: toc-the-aggregation-layer class: title The Aggregation Layer .nav[ [Previous part](#toc-policy-management-with-kyverno) | [Back to table of contents](#toc-part-8) | [Next part](#toc-checking-node-and-pod-resource-usage) ] .debug[(automatically generated title slide)] --- # The Aggregation Layer - The aggregation layer is a way to extend the Kubernetes API - It is similar to CRDs - it lets us define new resource types - these resources can then be used with `kubectl` and other clients - The implementation is very different - CRDs are handled within the API server - the aggregation layer offloads requests to another process - They are designed for very different use-cases .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## CRDs vs aggregation layer - The Kubernetes API is a REST-ish API with a hierarchical structure - It can be extended with Custom Resource Definitions (CRDs) - Custom resources are managed by the Kubernetes API server - we don't need to write code - the API server does all the heavy lifting - these resources are persisted in Kubernetes' "standard" database
(for most installations, that's `etcd`) - We can also define resources that are *not* managed by the API server (the API server merely proxies the requests to another server) .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Which one is best? - For things that "map" well to objects stored in a traditional database: *probably CRDs* - For things that "exist" only in Kubernetes and don't represent external resources: *probably CRDs* - For things that are read-only, at least from Kubernetes' perspective: *probably aggregation layer* - For things that can't be stored in etcd because of size or access patterns: *probably aggregation layer* .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## How are resources organized? - Let's have a look at the Kubernetes API hierarchical structure - We'll ask `kubectl` to show us the exact requests that it's making .lab[ - Check the URI for a cluster-scope, "core" resource, e.g. a Node: ```bash kubectl -v6 get node node1 ``` - Check the URI for a cluster-scope, "non-core" resource, e.g. a ClusterRole: ```bash kubectl -v6 get clusterrole view ``` ] .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Core vs non-core - This is the structure of the URIs that we just checked: ``` /api/v1/nodes/node1 ↑ ↑ ↑ `version` `kind` `name` /apis/rbac.authorization.k8s.io/v1/clusterroles/view ↑ ↑ ↑ ↑ `group` `version` `kind` `name` ``` - There is no group for "core" resources - Or, we could say that the group, `core`, is implied .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Group-Version-Kind - In the API server, the Group-Version-Kind triple maps to a Go type (look for all the "GVK" occurrences in the source code!) - In the API server URI router, the GVK is parsed "relatively early" (so that the server can know which resource we're talking about) - "Well, actually ..." Things are a bit more complicated, see next slides! .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- class: extra-details ## Namespaced resources - What about namespaced resources? .lab[ - Check the URI for a namespaced, "core" resource, e.g. 
a Service: ```bash kubectl -v6 get service kubernetes --namespace default ``` ] - Here is what namespaced resource URIs look like: ``` /api/v1/namespaces/default/services/kubernetes ↑ ↑ ↑ ↑ `version` `namespace` `kind` `name` /apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy ↑ ↑ ↑ ↑ ↑ `group` `version` `namespace` `kind` `name` ``` .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- class: extra-details ## Subresources - Many resources have *subresources*, for instance: - `/status` (decouples status updates from other updates) - `/scale` (exposes a consistent interface for autoscalers) - `/proxy` (allows access to HTTP resources) - `/portforward` (used by `kubectl port-forward`) - `/logs` (access pod logs) - These are added at the end of the URI .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- class: extra-details ## Accessing a subresource .lab[ - List `kube-proxy` pods: ```bash kubectl get pods --namespace=kube-system --selector=k8s-app=kube-proxy PODNAME=$( kubectl get pods --namespace=kube-system --selector=k8s-app=kube-proxy \ -o json | jq -r .items[0].metadata.name) ``` - Execute a command in a pod, showing the API requests: ```bash kubectl -v6 exec --namespace=kube-system $PODNAME -- echo hello world ``` ] -- The full request looks like: ``` POST https://.../api/v1/namespaces/kube-system/pods/kube-proxy-c7rlw/exec? command=echo&command=hello&command=world&container=kube-proxy&stderr=true&stdout=true ``` .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Listing what's supported on the server - There are at least three useful commands to introspect the API server .lab[ - List resource types, their group, kind, short names, and scope: ```bash kubectl api-resources ``` - List API groups + versions: ```bash kubectl api-versions ``` - List APIServices: ```bash kubectl get apiservices ``` ] -- 🤔 What's the difference between the last two? .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## API registration - `kubectl api-versions` shows all API groups, including `apiregistration.k8s.io` - `kubectl get apiservices` shows the "routing table" for API requests - The latter doesn't show `apiregistration.k8s.io` (APIServices belong to `apiregistration.k8s.io`) - Most API groups are `Local` (handled internally by the API server) - If we're running the `metrics-server`, it should handle `metrics.k8s.io` - This is an API group handled *outside* of the API server - This is the *aggregation layer!* .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Finding resources The following assumes that `metrics-server` is deployed on your cluster. .lab[ - Check that the metrics.k8s.io group is registered with `metrics-server`: ```bash kubectl get apiservices | grep metrics.k8s.io ``` - Check the resource kinds registered in the metrics.k8s.io group: ```bash kubectl api-resources --api-group=metrics.k8s.io ``` ] (If the output of either command is empty, install `metrics-server` first.) 
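To make that "routing table" idea more concrete, here is roughly what the APIService registering the Metrics API looks like (a sketch: the exact Service name, Namespace, and priority values depend on how `metrics-server` was installed):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  # Requests for this group/version are proxied to the Service below
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```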
.debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## `nodes` vs `nodes` - We can have multiple resources with the same name .lab[ - Look for resources named `node`: ```bash kubectl api-resources | grep -w nodes ``` - Compare the output of both commands: ```bash kubectl get nodes kubectl get nodes.metrics.k8s.io ``` ] -- 🤔 What are the second kind of nodes? How can we see what's really in them? .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Node vs NodeMetrics - `nodes.metrics.k8s.io` (aka NodeMetrics) don't have fancy *printer columns* - But we can look at the raw data (with `-o json` or `-o yaml`) .lab[ - Look at NodeMetrics objects with one of these commands: ```bash kubectl get -o yaml nodes.metrics.k8s.io kubectl get -o yaml NodeMetrics ``` ] -- 💡 Alright, these are the live metrics (CPU, RAM) for our nodes. .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## An easier way to consume metrics - We might have seen these metrics before ... With an easier command! -- .lab[ - Display node metrics: ```bash kubectl top nodes ``` - Check which API requests happen behind the scenes: ```bash kubectl top nodes -v6 ``` ] .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Aggregation layer in practice - We can write an API server to handle a subset of the Kubernetes API - Then we can register that server by creating an APIService resource .lab[ - Check the definition used for the `metrics-server`: ```bash kubectl describe apiservices v1beta1.metrics.k8s.io ``` ] - Group priority is used when multiple API groups provide similar kinds (e.g. `nodes` and `nodes.metrics.k8s.io` as seen earlier) .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Authentication flow - We have two Kubernetes API servers: - "aggregator" (the main one; clients connect to it) - "aggregated" (the one providing the extra API; aggregator connects to it) - Aggregator deals with client authentication - Aggregator authenticates with aggregated using mutual TLS - Aggregator passes (/forwards/proxies/...) requests to aggregated - Aggregated performs authorization by calling back aggregator ("can subject X perform action Y on resource Z?") [This doc page](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#authentication-flow) has very nice swim lanes showing that flow. .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- ## Discussion - Aggregation layer is great for metrics (fast-changing, ephemeral data, that would be outrageously bad for etcd) - It *could* be a good fit to expose other REST APIs as a pass-thru (but it's more common to see CRDs instead) ??? 
:EN:- The aggregation layer :FR:- Étendre l'API avec le *aggregation layer* .debug[[k8s/aggregation-layer.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/aggregation-layer.md)] --- class: pic .interstitial[] --- name: toc-checking-node-and-pod-resource-usage class: title Checking Node and Pod resource usage .nav[ [Previous part](#toc-the-aggregation-layer) | [Back to table of contents](#toc-part-8) | [Next part](#toc-collecting-metrics-with-prometheus) ] .debug[(automatically generated title slide)] --- # Checking Node and Pod resource usage - We've installed a few things on our cluster so far - How much CPU and RAM are we using? - We need metrics! .lab[ - Let's try the following command: ```bash kubectl top nodes ``` ] .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Is metrics-server installed? - If we see a list of nodes, with CPU and RAM usage: *great, metrics-server is installed!* - If we see `error: Metrics API not available`: *metrics-server isn't installed, so we'll install it!* .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## The resource metrics pipeline - The `kubectl top` command relies on the Metrics API - The Metrics API is part of the "[resource metrics pipeline]" - The Metrics API isn't served by (built into) the Kubernetes API server - It is made available through the [aggregation layer] - It is usually served by a component called metrics-server - It is optional (Kubernetes can function without it) - It is necessary for some features (like the Horizontal Pod Autoscaler) [resource metrics pipeline]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/ [aggregation layer]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/ .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Other ways to get metrics - We could use a SaaS like Datadog, New Relic... - We could use a self-hosted solution like Prometheus - Or we could use metrics-server - What's special about metrics-server? .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Pros/cons Cons: - no data retention (no historical data, just instant numbers) - only CPU and RAM of nodes and pods (no disk or network usage or I/O...) Pros: - very lightweight - doesn't require storage - used by Kubernetes autoscaling .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Why metrics-server - We may install something fancier later (think: Prometheus with Grafana) - But metrics-server will work in *minutes* - It will barely use resources on our cluster - It's required for autoscaling anyway .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## How metrics-server works - It runs a single Pod - That Pod will fetch metrics from all our Nodes - It will expose them through the Kubernetes API aggregation layer (we won't say much more about that aggregation layer; that's fairly advanced stuff!) 
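As an illustration, here is approximately what one of the objects exposed through that API looks like (a sketch: the node name and numbers are made up; the fields come from the `metrics.k8s.io/v1beta1` API):

```yaml
apiVersion: metrics.k8s.io/v1beta1
kind: NodeMetrics
metadata:
  name: node1             # hypothetical node name
timestamp: "2024-01-01T12:00:00Z"
window: 20s
usage:
  cpu: 231m               # CPU usage, in millicores
  memory: 2356Mi          # RAM usage (working set)
```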
.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Installing metrics-server - In a lot of places, this is done with a little bit of custom YAML (derived from the [official installation instructions](https://github.com/kubernetes-sigs/metrics-server#installation)) - We can also use a Helm chart: ```bash helm upgrade --install metrics-server metrics-server \ --create-namespace --namespace metrics-server \ --repo https://kubernetes-sigs.github.io/metrics-server/ \ --set args={--kubelet-insecure-tls=true} ``` - The `args` flag specified above should be sufficient on most clusters .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- class: extra-details ## Kubelet insecure TLS? - The metrics-server collects metrics by connecting to kubelet - The connection is secured by TLS - This requires a valid certificate - In some cases, the certificate is self-signed - In other cases, it might be valid, but include only the node name (not its IP address, which is used by default by metrics-server) .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Testing metrics-server - After a minute or two, metrics-server should be up - We should now be able to check Nodes resource usage: ```bash kubectl top nodes ``` - And Pods resource usage, too: ```bash kubectl top pods --all-namespaces ``` .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Keep some padding - The RAM usage that we see should correspond more or less to the Resident Set Size - Our pods also need some extra space for buffers, caches... - Do not aim for 100% memory usage! - Some more realistic targets: 50% (for workloads with disk I/O and leveraging caching) 90% (on very big nodes with mostly CPU-bound workloads) 75% (anywhere in between!) .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- ## Other tools - kube-capacity is a great CLI tool to view resources (https://github.com/robscott/kube-capacity) - It can show resource and limits, and compare them with usage - It can show utilization per node, or per pod - kube-resource-report can generate HTML reports (https://codeberg.org/hjacobs/kube-resource-report) ??? 
:EN:- The resource metrics pipeline :EN:- Installing metrics-server :FR:- Le *resource metrics pipeline* :FR:- Installation de metrics-server .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)] --- class: pic .interstitial[] --- name: toc-collecting-metrics-with-prometheus class: title Collecting metrics with Prometheus .nav[ [Previous part](#toc-checking-node-and-pod-resource-usage) | [Back to table of contents](#toc-part-8) | [Next part](#toc-prometheus-and-grafana) ] .debug[(automatically generated title slide)] --- # Collecting metrics with Prometheus - Prometheus is an open-source monitoring system including: - multiple *service discovery* backends to figure out which metrics to collect - a *scraper* to collect these metrics - an efficient *time series database* to store these metrics - a specific query language (PromQL) to query these time series - an *alert manager* to notify us according to metrics values or trends - We are going to use it to collect and query some metrics on our Kubernetes cluster .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Why Prometheus? - We don't endorse Prometheus more or less than any other system - It's relatively well integrated within the cloud-native ecosystem - It can be self-hosted (this is useful for tutorials like this) - It can be used for deployments of varying complexity: - one binary and 10 lines of configuration to get started - all the way to thousands of nodes and millions of metrics .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Exposing metrics to Prometheus - Prometheus obtains metrics and their values by querying *exporters* - An exporter serves metrics over HTTP, in plain text - This is what the *node exporter* looks like: http://demo.robustperception.io:9100/metrics - Prometheus itself exposes its own internal metrics, too: http://demo.robustperception.io:9090/metrics - If you want to expose custom metrics to Prometheus: - serve a text page like these, and you're good to go - libraries are available in various languages to help with quantiles etc. .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## How Prometheus gets these metrics - The *Prometheus server* will *scrape* URLs like these at regular intervals (by default: every minute; can be more/less frequent) - The list of URLs to scrape (the *scrape targets*) is defined in configuration .footnote[Worried about the overhead of parsing a text format?
Check this [comparison](https://github.com/RichiH/OpenMetrics/blob/master/markdown/protobuf_vs_text.md) of the text format with the (now deprecated) protobuf format!] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Defining scrape targets This is maybe the simplest configuration file for Prometheus: ```yaml scrape_configs: - job_name: 'prometheus' static_configs: - targets: ['localhost:9090'] ``` - In this configuration, Prometheus collects its own internal metrics - A typical configuration file will have multiple `scrape_configs` - In this configuration, the list of targets is fixed - A typical configuration file will use dynamic service discovery .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Service discovery This configuration file will leverage existing DNS `A` records: ```yaml scrape_configs: - ... - job_name: 'node' dns_sd_configs: - names: ['api-backends.dc-paris-2.enix.io'] type: 'A' port: 9100 ``` - In this configuration, Prometheus resolves the provided name(s) (here, `api-backends.dc-paris-2.enix.io`) - Each resulting IP address is added as a target on port 9100 .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Dynamic service discovery - In the DNS example, the names are re-resolved at regular intervals - As DNS records are created/updated/removed, scrape targets change as well - Existing data (previously collected metrics) is not deleted - Other service discovery backends work in a similar fashion .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Other service discovery mechanisms - Prometheus can connect to e.g. a cloud API to list instances - Or to the Kubernetes API to list nodes, pods, services ... - Or a service like Consul, Zookeeper, etcd, to list applications - The resulting configurations files are *way more complex* (but don't worry, we won't need to write them ourselves) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Time series database - We could wonder, "why do we need a specialized database?" - One metrics data point = metrics ID + timestamp + value - With a classic SQL or noSQL data store, that's at least 160 bits of data + indexes - Prometheus is way more efficient, without sacrificing performance (it will even be gentler on the I/O subsystem since it needs to write less) - Would you like to know more? Check this video: [Storage in Prometheus 2.0](https://www.youtube.com/watch?v=C4YV-9CrawA) by [Goutham V](https://twitter.com/putadent) at DC17EU .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Checking if Prometheus is installed - Before trying to install Prometheus, let's check if it's already there .lab[ - Look for services with a label `app=prometheus` across all namespaces: ```bash kubectl get services --selector=app=prometheus --all-namespaces ``` ] If we see a `NodePort` service called `prometheus-server`, we're good! (We can then skip to "Connecting to the Prometheus web UI".) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Running Prometheus on our cluster We need to: - Run the Prometheus server in a pod (using e.g. 
a Deployment to ensure that it keeps running) - Expose the Prometheus server web UI (e.g. with a NodePort) - Run the *node exporter* on each node (with a Daemon Set) - Set up a Service Account so that Prometheus can query the Kubernetes API - Configure the Prometheus server (storing the configuration in a Config Map for easy updates) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Helm charts to the rescue - To make our lives easier, we are going to use a Helm chart - The Helm chart will take care of all the steps explained above (including some extra features that we don't need, but won't hurt) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Step 1: install Helm - If we already installed Helm earlier, this command won't break anything .lab[ - Install the Helm CLI: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ | bash ``` ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Step 2: install Prometheus - The following command, just like the previous ones, is idempotent (it won't error out if Prometheus is already installed) .lab[ - Install Prometheus on our cluster: ```bash helm upgrade prometheus --install prometheus \ --repo https://prometheus-community.github.io/helm-charts \ --namespace prometheus --create-namespace \ --set server.service.type=NodePort \ --set server.service.nodePort=30090 \ --set server.persistentVolume.enabled=false \ --set alertmanager.enabled=false ``` ] Curious about all these flags? They're explained in the next slide. .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Explaining all the Helm flags - `helm upgrade prometheus` → upgrade the release named `prometheus`
(a "release" is an instance of an app deployed with Helm) - `--install` → if it doesn't exist, install it (instead of upgrading) - `prometheus` → use the chart named `prometheus` - `--repo ...` → the chart is located on the following repository - `--namespace prometheus` → put it in that specific namespace - `--create-namespace` → create the namespace if it doesn't exist - `--set ...` → here are some *values* to be used when rendering the chart's templates .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Values for the Prometheus chart Helm *values* are parameters to customize our installation. - `server.service.type=NodePort` → expose the Prometheus server with a NodePort - `server.service.nodePort=30090` → set the specific NodePort number to use - `server.persistentVolume.enabled=false` → do not use a PersistentVolumeClaim - `alertmanager.enabled=false` → disable the alert manager entirely .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Connecting to the Prometheus web UI - Let's connect to the web UI and see what we can do .lab[ - Figure out the NodePort that was allocated to the Prometheus server: ```bash kubectl get svc --all-namespaces | grep prometheus-server ``` - With your browser, connect to that port - It should be 30090 if we just installed Prometheus with the Helm chart! ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Querying some metrics - This is easy... if you are familiar with PromQL .lab[ - Click on "Graph", and in "expression", paste the following: ``` sum by (instance) ( irate( container_cpu_usage_seconds_total{ pod=~"worker.*" }[5m] ) ) ``` ] - Click on the blue "Execute" button and on the "Graph" tab just below - We see the cumulated CPU usage of worker pods for each node
(if we just deployed Prometheus, there won't be much data to see, though) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Getting started with PromQL - We can't learn PromQL in just 5 minutes - But we can cover the basics to get an idea of what is possible (and have some keywords and pointers) - We are going to break down the query above (building it one step at a time) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Graphing one metric across all tags This query will show us CPU usage across all containers: ``` container_cpu_usage_seconds_total ``` - The suffix of the metrics name tells us: - the unit (seconds of CPU) - that it's the total used since the container creation - Since it's a "total," it is an increasing quantity (we need to compute the derivative if we want e.g. CPU % over time) - We see that the metrics retrieved have *tags* attached to them .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Selecting metrics with tags This query will show us only metrics for worker containers: ``` container_cpu_usage_seconds_total{pod=~"worker.*"} ``` - The `=~` operator allows regex matching - We select all the pods with a name starting with `worker` (it would be better to use labels to select pods; more on that later) - The result is a smaller set of containers .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Transforming counters in rates This query will show us CPU usage % instead of total seconds used: ``` 100*irate(container_cpu_usage_seconds_total{pod=~"worker.*"}[5m]) ``` - The [`irate`](https://prometheus.io/docs/prometheus/latest/querying/functions/#irate) operator computes the "per-second instant rate of increase" - `rate` is similar but allows decreasing counters and negative values - with `irate`, if a counter goes back to zero, we don't get a negative spike - The `[5m]` tells how far to look back if there is a gap in the data - And we multiply with `100*` to get CPU % usage .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Aggregation operators This query sums the CPU usage per node: ``` sum by (instance) ( irate(container_cpu_usage_seconds_total{pod=~"worker.*"}[5m]) ) ``` - `instance` corresponds to the node on which the container is running - `sum by (instance) (...)` computes the sum for each instance - Note: all the other tags are collapsed (in other words, the resulting graph only shows the `instance` tag) - PromQL supports many more [aggregation operators](https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## What kind of metrics can we collect? - Node metrics (related to physical or virtual machines) - Container metrics (resource usage per container) - Databases, message queues, load balancers, ... (check out this [list of exporters](https://prometheus.io/docs/instrumenting/exporters/)!) - Instrumentation (=deluxe `printf` for our code) - Business metrics (customers served, revenue, ...) 
.debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Node metrics - CPU, RAM, disk usage on the whole node - Total number of processes running, and their states - Number of open files, sockets, and their states - I/O activity (disk, network), per operation or volume - Physical/hardware (when applicable): temperature, fan speed... - ...and much more! .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Container metrics - Similar to node metrics, but not totally identical - RAM breakdown will be different - active vs inactive memory - some memory is *shared* between containers, and specially accounted for - I/O activity is also harder to track - async writes can cause deferred "charges" - some page-ins are also shared between containers For details about container metrics, see:
http://jpetazzo.github.io/2013/10/08/docker-containers-metrics/ .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Application metrics - Arbitrary metrics related to your application and business - System performance: request latency, error rate... - Volume information: number of rows in database, message queue size... - Business data: inventory, items sold, revenue... .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Detecting scrape targets - Prometheus can leverage Kubernetes service discovery (with proper configuration) - Services or pods can be annotated with: - `prometheus.io/scrape: true` to enable scraping - `prometheus.io/port: 9090` to indicate the port number - `prometheus.io/path: /metrics` to indicate the URI (`/metrics` by default) - Prometheus will detect and scrape these (without needing a restart or reload) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Querying labels - What if we want to get metrics for containers belonging to a pod tagged `worker`? - The cAdvisor exporter does not give us Kubernetes labels - Kubernetes labels are exposed through another exporter - We can see Kubernetes labels through the metric `kube_pod_labels` (each pod appears as a time series with a constant value of `1`) - Prometheus *kind of* supports "joins" between time series - But only if the names of the tags match exactly .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## What if the tags don't match? - Older versions of cAdvisor exporter used tag `pod_name` for the name of a pod - The Kubernetes service endpoints exporter uses tag `pod` instead - See [this blog post](https://www.robustperception.io/exposing-the-software-version-to-prometheus) or [this other one](https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/) to see how to perform "joins" - Note that Prometheus cannot "join" time series with different labels (see [Prometheus issue #2204](https://github.com/prometheus/prometheus/issues/2204) for the rationale) - There is a workaround involving relabeling, but it's "not cheap" - see [this comment](https://github.com/prometheus/prometheus/issues/2204#issuecomment-261515520) for an overview - or [this blog post](https://5pi.de/2017/11/09/use-prometheus-vector-matching-to-get-kubernetes-utilization-across-any-pod-label/) for a complete description of the process .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## In practice - Grafana is a beautiful (and useful) frontend to display all kinds of graphs - Not everyone needs to know Prometheus, PromQL, Grafana, etc. - But in a team, it is valuable to have at least one person who knows them - That person can set up queries and dashboards for the rest of the team - It's a little bit like knowing how to optimize SQL queries, Dockerfiles... Don't panic if you don't know these tools! ...But make sure at least one person in your team is on it 💯 ??? 
:EN:- Collecting metrics with Prometheus :FR:- Collecter des métriques avec Prometheus .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: pic .interstitial[] --- name: toc-prometheus-and-grafana class: title Prometheus and Grafana .nav[ [Previous part](#toc-collecting-metrics-with-prometheus) | [Back to table of contents](#toc-part-8) | [Next part](#toc-scaling-with-custom-metrics) ] .debug[(automatically generated title slide)] --- # Prometheus and Grafana - What if we want metrics retention, view graphs, trends? - A very popular combo is Prometheus+Grafana: - Prometheus as the "metrics engine" - Grafana to display comprehensive dashboards - Prometheus also has an alert-manager component to trigger alerts (we won't talk about that one) .debug[[k8s/prometheus-stack.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus-stack.md)] --- ## Installing Prometheus and Grafana - A complete metrics stack needs at least: - the Prometheus server (collects metrics and stores them efficiently) - a collection of *exporters* (exposing metrics to Prometheus) - Grafana - a collection of Grafana dashboards (building them from scratch is tedious) - The Helm chart `kube-prometheus-stack` combines all these elements - ... So we're going to use it to deploy our metrics stack! .debug[[k8s/prometheus-stack.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus-stack.md)] --- ## Installing `kube-prometheus-stack` - Let's install that stack *directly* from its repo (without doing `helm repo add` first) - Otherwise, keep the same naming strategy: ```bash helm upgrade --install kube-prometheus-stack kube-prometheus-stack \ --namespace kube-prometheus-stack --create-namespace \ --repo https://prometheus-community.github.io/helm-charts ``` - This will take a minute... - Then check what was installed: ```bash kubectl get all --namespace kube-prometheus-stack ``` .debug[[k8s/prometheus-stack.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus-stack.md)] --- ## Exposing Grafana - Let's create an Ingress for Grafana ```bash kubectl create ingress --namespace kube-prometheus-stack grafana \ --rule=grafana.`cloudnative.party`/*=kube-prometheus-stack-grafana:80 ``` (as usual, make sure to use *your* domain name above) - Connect to Grafana (remember that the DNS record might take a few minutes to come up) .debug[[k8s/prometheus-stack.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus-stack.md)] --- ## Grafana credentials - What could the login and password be? - Let's look at the Secrets available in the namespace: ```bash kubectl get secrets --namespace kube-prometheus-stack ``` - There is a `kube-prometheus-stack-grafana` that looks promising! - Decode the Secret: ```bash kubectl get secret --namespace kube-prometheus-stack \ kube-prometheus-stack-grafana -o json | jq '.data | map_values(@base64d)' ``` - If you don't have the `jq` tool mentioned above, don't worry... 
-- - The login/password is hardcoded to `admin`/`prom-operator` 😬 .debug[[k8s/prometheus-stack.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus-stack.md)] --- ## Grafana dashboards - Once logged in, click on the "Dashboards" icon on the left (it's the one that looks like four squares) - Then click on the "Manage" entry - Then click on "Kubernetes / Compute Resources / Cluster" - This gives us a breakdown of resource usage by Namespace - Feel free to explore the other dashboards! ??? :EN:- Installing Prometheus and Grafana :FR:- Installer Prometheus et Grafana :T: Observing our cluster with Prometheus and Grafana :Q: What's the relationship between Prometheus and Grafana? :A: Prometheus collects and graphs metrics; Grafana sends alerts :A: ✔️Prometheus collects metrics; Grafana displays them on dashboards :A: Prometheus collects and graphs metrics; Grafana is its configuration interface :A: Grafana collects and graphs metrics; Prometheus sends alerts .debug[[k8s/prometheus-stack.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus-stack.md)] --- class: pic .interstitial[] --- name: toc-scaling-with-custom-metrics class: title Scaling with custom metrics .nav[ [Previous part](#toc-prometheus-and-grafana) | [Back to table of contents](#toc-part-8) | [Next part](#toc-designing-an-operator) ] .debug[(automatically generated title slide)] --- # Scaling with custom metrics - The HorizontalPodAutoscaler v1 can only scale on Pod CPU usage - Sometimes, we need to scale using other metrics: - memory - requests per second - latency - active sessions - items in a work queue - ... - The HorizontalPodAutoscaler v2 can do it! .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Requirements ⚠️ Autoscaling on custom metrics is fairly complex! - We need some metrics system (Prometheus is a popular option, but others are possible too) - We need our metrics (latency, traffic...) to be fed in the system (with Prometheus, this might require a custom exporter) - We need to expose these metrics to Kubernetes (Kubernetes doesn't "speak" the Prometheus API) - Then we can set up autoscaling! .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## The plan - We will deploy the DockerCoins demo app (one of its components has a bottleneck; its latency will increase under load) - We will use Prometheus to collect and store metrics - We will deploy a tiny HTTP latency monitor (a Prometheus *exporter*) - We will deploy the "Prometheus adapter" (mapping Prometheus metrics to Kubernetes-compatible metrics) - We will create an HorizontalPodAutoscaler 🎉 .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Deploying DockerCoins - That's the easy part! 
.lab[ - Create a new namespace and switch to it: ```bash kubectl create namespace customscaling kns customscaling ``` - Deploy DockerCoins, and scale up the `worker` Deployment: ```bash kubectl apply -f ~/container.training/k8s/dockercoins.yaml kubectl scale deployment worker --replicas=10 ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Current state of affairs - The `rng` service is a bottleneck (it cannot handle more than 10 requests/second) - With enough traffic, its latency increases (by about 100ms per `worker` Pod after the 3rd worker) .lab[ - Check the `webui` port and open it in your browser: ```bash kubectl get service webui ``` - Check the `rng` ClusterIP and test it with e.g. `httping`: ```bash kubectl get service rng ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Measuring latency - We will use a tiny custom Prometheus exporter, [httplat](https://github.com/jpetazzo/httplat) - `httplat` exposes Prometheus metrics on port 9080 (by default) - It monitors exactly one URL, that must be passed as a command-line argument .lab[ - Deploy `httplat`: ```bash kubectl create deployment httplat --image=jpetazzo/httplat -- httplat http://rng/ ``` - Expose it: ```bash kubectl expose deployment httplat --port=9080 ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- class: extra-details ## Measuring latency in the real world - We are using this tiny custom exporter for simplicity - A more common method to collect latency is to use a service mesh - A service mesh can usually collect latency for *all* services automatically .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Install Prometheus - We will use the Prometheus community Helm chart (because we can configure it dynamically with annotations) .lab[ - If it's not installed yet on the cluster, install Prometheus: ```bash helm upgrade --install prometheus prometheus \ --repo https://prometheus-community.github.io/helm-charts \ --namespace prometheus --create-namespace \ --set server.service.type=NodePort \ --set server.service.nodePort=30090 \ --set server.persistentVolume.enabled=false \ --set alertmanager.enabled=false ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Configure Prometheus - We can use annotations to tell Prometheus to collect the metrics .lab[ - Tell Prometheus to "scrape" our latency exporter: ```bash kubectl annotate service httplat \ prometheus.io/scrape=true \ prometheus.io/port=9080 \ prometheus.io/path=/metrics ``` ] If you deployed Prometheus differently, you might have to configure it manually. You'll need to instruct it to scrape http://httplat.customscaling.svc:9080/metrics. 
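For instance, with a static, file-based configuration, a minimal scrape job could look like this (a sketch: the job name is arbitrary, and the target assumes the `httplat` Service in the `customscaling` Namespace created above):

```yaml
scrape_configs:
  - job_name: 'httplat'
    # metrics_path defaults to /metrics, so it doesn't need to be set here
    static_configs:
      - targets: ['httplat.customscaling.svc:9080']
```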
.debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Make sure that metrics get collected - Before moving on, confirm that Prometheus has our metrics .lab[ - Connect to Prometheus (if you installed it as instructed above, it is exposed as a NodePort on port 30090) - Check that `httplat` metrics are available - You can try to graph the following PromQL expression: ``` rate(httplat_latency_seconds_sum[2m])/rate(httplat_latency_seconds_count[2m]) ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Troubleshooting - Make sure that the exporter works: - get the ClusterIP of the exporter with `kubectl get svc httplat` - `curl http://<ClusterIP>:9080/metrics` - check that the result includes the `httplat` histogram - Make sure that Prometheus is scraping the exporter: - go to `Status` / `Targets` in Prometheus - make sure that `httplat` shows up in there .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Creating the autoscaling policy - We need custom YAML (we can't use the `kubectl autoscale` command) - It must specify `scaleTargetRef`, the resource to scale - any resource with a `scale` sub-resource will do - this includes Deployment, ReplicaSet, StatefulSet... - It must specify one or more `metrics` to look at - if multiple metrics are given, the autoscaler will "do the math" for each one - it will then keep the largest result .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Details about the `metrics` list - Each item will look like this:
```yaml
- type: <type>
  <type>:
    metric:
      name: <name of the metric>
    <...optional selector (mandatory for External metrics)...>
    target:
      type: <target type>
      <value|averageValue|averageUtilization>: <quantity>
```
- type: <Type>
  <Type>:
    metric:
      name: <metric name>
      <...optional selector (mandatory for External metrics)...>
    target:
      type: <target type>
      <target type>: <target value>
``` `<Type>` can be `Resource`, `Pods`, `Object`, or `External`. `<target type>` can be `Utilization`, `Value`, or `AverageValue`. Let's explain the 4 different `<Type>` values!
.debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## `Resource` Use "classic" metrics served by `metrics-server` (`cpu` and `memory`). ```yaml - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 ``` Compute average *utilization* (usage/requests) across pods. It's also possible to specify `Value` or `AverageValue` instead of `Utilization`. (To scale according to "raw" CPU or memory usage.) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## `Pods` Use custom metrics. These are still "per-Pod" metrics. ```yaml - type: Pods pods: metric: name: packets-per-second target: type: AverageValue averageValue: 1k ``` `type:` *must* be `AverageValue`. (It cannot be `Utilization`, since custom metrics can't be used in Pod `requests`.) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## `Object` Use custom metrics. These metrics are "linked" to any arbitrary resource. (E.g. a Deployment, Service, Ingress, ...) ```yaml - type: Object object: metric: name: requests-per-second describedObject: apiVersion: networking.k8s.io/v1 kind: Ingress name: main-route target: type: AverageValue averageValue: 100 ``` `type:` can be `Value` or `AverageValue` (see next slide for details). .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## `Value` vs `AverageValue` - `Value` - use the value as-is - useful to pace a client or producer - "target a specific total load on a specific endpoint or queue" - `AverageValue` - divide the value by the number of pods - useful to scale a server or consumer - "scale our systems to meet a given SLA/SLO" .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## `External` Use arbitrary metrics. The series to use is specified with a label selector. ```yaml - type: External external: metric: name: queue_messages_ready selector: matchLabels: queue: worker_tasks target: type: AverageValue averageValue: 30 ``` The `selector` will be passed along when querying the metrics API. Its meaning is implementation-dependent. It may or may not correspond to Kubernetes labels. .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## One more thing ... - We can also provide a set of `behavior` options - These indicate: - how much to scale up/down in a single step - a *stabilization window* to avoid hysteresis effects - The default stabilization window is 0 seconds for `scaleUp` and 300 seconds for `scaleDown` (we might want to change that!)
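To illustrate the "how much to scale in a single step" part, here is a sketch of what `behavior` can look like with scaling policies (the numbers are arbitrary, and the manifest on the next slide only sets the stabilization windows):

```yaml
behavior:
  scaleUp:
    stabilizationWindowSeconds: 60
    policies:
      - type: Pods          # add at most 2 Pods ...
        value: 2
        periodSeconds: 60   # ... per 60-second period
  scaleDown:
    stabilizationWindowSeconds: 180
    policies:
      - type: Percent       # remove at most 50% of the Pods per period
        value: 50
        periodSeconds: 60
```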
.debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- Putting it all together in [k8s/hpa-v2-pa-httplat.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/hpa-v2-pa-httplat.yaml): .small[ ```yaml kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2 metadata: name: rng spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: rng minReplicas: 1 maxReplicas: 20 behavior: scaleUp: stabilizationWindowSeconds: 60 scaleDown: stabilizationWindowSeconds: 180 metrics: - type: Object object: describedObject: apiVersion: v1 kind: Service name: httplat metric: name: httplat_latency_seconds target: type: Value value: 0.1 ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Creating the autoscaling policy - We will register the policy - Of course, it won't quite work yet (we're missing the *Prometheus adapter*) .lab[ - Create the HorizontalPodAutoscaler: ```bash kubectl apply -f ~/container.training/k8s/hpa-v2-pa-httplat.yaml ``` - Check the logs of the `controller-manager`: ```bash stern --namespace=kube-system --tail=10 controller-manager ``` ] After a little while we should see messages like this: ``` no custom metrics API (custom.metrics.k8s.io) registered ``` .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## `custom.metrics.k8s.io` - The HorizontalPodAutoscaler will get the metrics *from the Kubernetes API itself* - In our specific case, it will access a resource like this one: .small[ ``` /apis/custom.metrics.k8s.io/v1beta1/namespaces/customscaling/services/httplat/httplat_latency_seconds ``` ] - By default, the Kubernetes API server doesn't implement `custom.metrics.k8s.io` (we can have a look at `kubectl get apiservices`) - We need to: - start an API service implementing this API group - register it with our API server .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## The Prometheus adapter - The Prometheus adapter is an open source project: https://github.com/DirectXMan12/k8s-prometheus-adapter - It's a Kubernetes API service implementing API group `custom.metrics.k8s.io` - It maps the requests it receives to Prometheus metrics - Exactly what we need! .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Deploying the Prometheus adapter - There is ~~an app~~ a Helm chart for that .lab[ - Install the Prometheus adapter: ```bash helm upgrade --install prometheus-adapter prometheus-adapter \ --repo https://prometheus-community.github.io/helm-charts \ --namespace=prometheus-adapter --create-namespace \ --set prometheus.url=http://prometheus-server.prometheus.svc \ --set prometheus.port=80 ``` ] - It comes with some default mappings - But we will need to add `httplat` to these mappings .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Configuring the Prometheus adapter - The Prometheus adapter can be configured/customized through a ConfigMap - We are going to edit that ConfigMap, then restart the adapter - We need to add a rule that will say: - all the metric series named `httplat_latency_seconds_sum` ... - ... belong to *Services* ... - ... the name of the Service and its Namespace are indicated by the `service` and `namespace` Prometheus labels respectively ... - ... 
and the exact value to use should be the following PromQL expression .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## The mapping rule Here is the rule that we need to add to the configuration: ```yaml - seriesQuery: 'httplat_latency_seconds_sum{namespace!="",service!=""}' resources: overrides: namespace: resource: namespace service: resource: service name: matches: "httplat_latency_seconds_sum" as: "httplat_latency_seconds" metricsQuery: | rate(httplat_latency_seconds_sum{<<.LabelMatchers>>}[2m])/rate(httplat_latency_seconds_count{<<.LabelMatchers>>}[2m]) ``` (I built it following the [walkthrough](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config-walkthrough.md) in the Prometheus adapter documentation.) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Editing the adapter's configuration .lab[ - Edit the adapter's ConfigMap: ```bash kubectl edit configmap prometheus-adapter --namespace=prometheus-adapter ``` - Add the new rule in the `rules` section, at the end of the configuration file - Save, quit - Restart the Prometheus adapter: ```bash kubectl rollout restart deployment --namespace=prometheus-adapter prometheus-adapter ``` ] .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Witness the marvel of custom autoscaling (Sort of) - After a short while, the `rng` Deployment will scale up - It should scale up until the latency drops below 100ms (and continue to scale up a little bit more after that) - Then, since the latency will be well below 100ms, it will scale down - ... and back up again, etc. (See pictures on next slides!) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- class: pic .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- class: pic .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## What's going on? - The autoscaler's information is slightly out of date (not by much; probably between 1 and 2 minutes) - That's enough to cause the oscillations that we observe - One possible fix is to tell the autoscaler to wait a bit after each action - It will reduce oscillations, but will also slow down its reaction time (and therefore, how fast it reacts to a peak of traffic) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## What's going on? Take 2 - As soon as the measured latency is *significantly* below our target (100ms) ... the autoscaler tries to scale down - If the latency is measured at 20ms ... the autoscaler will try to *divide the number of pods by five!* - One possible solution: apply a formula to the measured latency, so that values between e.g. 10 and 100ms get very close to 100ms. - Another solution: instead of targeting a specific latency, target a 95th percentile latency or something similar, using a more advanced PromQL expression (and leveraging the fact that we have histograms instead of raw values).
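As a sketch of that last idea: assuming `httplat` exposes a standard Prometheus histogram (i.e. `httplat_latency_seconds_bucket` series), the 95th percentile latency can be computed with `histogram_quantile`:

```
histogram_quantile(0.95,
  sum(rate(httplat_latency_seconds_bucket[2m])) by (le))
```

In the Prometheus adapter rule shown earlier, an expression like this would go in `metricsQuery` (keeping the `<<.LabelMatchers>>` placeholder, and adding `<<.GroupBy>>` to the `by` clause so the adapter can still group results per Service).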
.debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Troubleshooting Check that the adapter registered itself correctly: ```bash kubectl get apiservices | grep metrics ``` Check that the adapter correctly serves metrics: ```bash kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 ``` Check that our `httplat` metrics are available: ```bash kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1\ /namespaces/customscaling/services/httplat/httplat_latency_seconds ``` Also check the logs of the `prometheus-adapter` and the `kube-controller-manager`. .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Useful links - [Horizontal Pod Autoscaler walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in the Kubernetes documentation - [Autoscaling design proposal](https://github.com/kubernetes/community/tree/master/contributors/design-proposals/autoscaling) - [Kubernetes custom metrics API alternative implementations](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md) - [Prometheus adapter configuration walkthrough](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config-walkthrough.md) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- ## Discussion - This system works great if we have a single, centralized metrics system (and the corresponding "adapter" to expose these metrics through the Kubernetes API) - If we have metrics in multiple places, we must aggregate them (good news: Prometheus has exporters for almost everything!) - It is complex and has a steep learning curve - Another approach is [KEDA](https://keda.sh/) ??? :EN:- Autoscaling with custom metrics :FR:- Suivi de charge avancé (HPAv2) .debug[[k8s/hpa-v2.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/hpa-v2.md)] --- class: pic .interstitial[] --- name: toc-designing-an-operator class: title Designing an operator .nav[ [Previous part](#toc-scaling-with-custom-metrics) | [Back to table of contents](#toc-part-9) | [Next part](#toc-writing-a-tiny-operator) ] .debug[(automatically generated title slide)] --- # Designing an operator - Once we understand CRDs and operators, it's tempting to use them everywhere - Yes, we can do (almost) everything with operators ... - ... But *should we?* - Very often, the answer is **“no!”** - Operators are powerful, but significantly more complex than other solutions .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## When should we (not) use operators? - Operators are great if our app needs to react to cluster events (nodes or pods going down, and requiring extensive reconfiguration) - Operators *might* be helpful to encapsulate complexity (manipulate one single custom resource for an entire stack) - Operators are probably overkill if a Helm chart would suffice - That being said, if we really want to write an operator ... Read on! .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## What does it take to write an operator? 
- Writing a quick-and-dirty operator, or a POC/MVP, is easy - Writing a robust operator is hard - We will describe the general idea - We will identify some of the associated challenges - We will list a few tools that can help us .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Top-down vs. bottom-up - Both approaches are possible - Let's see what they entail, and their respective pros and cons .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Top-down approach - Start with high-level design (see next slide) - Pros: - can yield cleaner design that will be more robust - Cons: - must be able to anticipate all the events that might happen - design will be better only to the extent of what we anticipated - hard to anticipate if we don't have production experience .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## High-level design - What are we solving? (e.g.: geographic databases backed by PostGIS with Redis caches) - What are our use-cases, stories? (e.g.: adding/resizing caches and read replicas; load balancing queries) - What kind of outage do we want to address? (e.g.: loss of individual node, pod, volume) - What are our *non-features*, the things we don't want to address? (e.g.: loss of datacenter/zone; differentiating between read and write queries;
cache invalidation; upgrading to newer major versions of Redis, PostGIS, PostgreSQL) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Low-level design - What Custom Resource Definitions do we need? (one, many?) - How will we store configuration information? (part of the CRD spec fields, annotations, other?) - Do we need to store state? If so, where? - state that is small and doesn't change much can be stored via the Kubernetes API
(e.g.: leader information, configuration, credentials) - things that are big and/or change a lot should go elsewhere
(e.g.: metrics, bigger configuration file like GeoIP) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- class: extra-details ## What can we store via the Kubernetes API? - The API server stores most Kubernetes resources in etcd - Etcd is designed for reliability, not for performance - If our storage needs exceed what etcd can offer, we need to use something else: - either directly - or by extending the API server
(for instance by using the aggregation layer, like [metrics server](https://github.com/kubernetes-incubator/metrics-server) does) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Bottom-up approach - Start with existing Kubernetes resources (Deployment, Stateful Set...) - Run the system in production - Add scripts, automation, to facilitate day-to-day operations - Turn the scripts into an operator - Pros: simpler to get started; reflects actual use-cases - Cons: can result in convoluted designs requiring extensive refactor .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## General idea - Our operator will watch its CRDs *and associated resources* - Drawing state diagrams and finite state automata helps a lot - It's OK if some transitions lead to a big catch-all "human intervention" - Over time, we will learn about new failure modes and add to these diagrams - It's OK to start with CRD creation / deletion and prevent any modification (that's the easy POC/MVP we were talking about) - *Presentation* and *validation* will help our users (more on that later) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Challenges - Reacting to infrastructure disruption can seem hard at first - Kubernetes gives us a lot of primitives to help: - Pods and Persistent Volumes will *eventually* recover - Stateful Sets give us easy ways to "add N copies" of a thing - The real challenges come with configuration changes (i.e., what to do when our users update our CRDs) - Keep in mind that [some] of the [largest] cloud [outages] haven't been caused by [natural catastrophes], or even code bugs, but by configuration changes [some]: https://www.datacenterdynamics.com/news/gcp-outage-mainone-leaked-google-cloudflare-ip-addresses-china-telecom/ [largest]: https://aws.amazon.com/message/41926/ [outages]: https://aws.amazon.com/message/65648/ [natural catastrophes]: https://www.datacenterknowledge.com/amazon/aws-says-it-s-never-seen-whole-data-center-go-down .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Configuration changes - It is helpful to analyze and understand how Kubernetes controllers work: - watch resource for modifications - compare desired state (CRD) and current state - issue actions to converge state - Configuration changes will probably require *another* state diagram or FSA - Again, it's OK to have transitions labeled as "unsupported" (i.e. 
reject some modifications because we can't execute them) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Tools - CoreOS / RedHat Operator Framework [GitHub](https://github.com/operator-framework) | [Blog](https://developers.redhat.com/blog/2018/12/18/introduction-to-the-kubernetes-operator-framework/) | [Intro talk](https://www.youtube.com/watch?v=8k_ayO1VRXE) | [Deep dive talk](https://www.youtube.com/watch?v=fu7ecA2rXmc) | [Simple example](https://medium.com/faun/writing-your-first-kubernetes-operator-8f3df4453234) - Kubernetes Operator Pythonic Framework (KOPF) [GitHub](https://github.com/nolar/kopf) | [Docs](https://kopf.readthedocs.io/) | [Step-by-step tutorial](https://kopf.readthedocs.io/en/stable/walkthrough/problem/) - Mesosphere Kubernetes Universal Declarative Operator (KUDO) [GitHub](https://github.com/kudobuilder/kudo) | [Blog](https://mesosphere.com/blog/announcing-maestro-a-declarative-no-code-approach-to-kubernetes-day-2-operators/) | [Docs](https://kudo.dev/) | [Zookeeper example](https://github.com/kudobuilder/frameworks/tree/master/repo/stable/zookeeper) - Kubebuilder (Go, very close to the Kubernetes API codebase) [GitHub](https://github.com/kubernetes-sigs/kubebuilder) | [Book](https://book.kubebuilder.io/) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Validation - By default, a CRD is "free form" (we can put pretty much anything we want in it) - When creating a CRD, we can provide an OpenAPI v3 schema ([Example](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml#L34)) - The API server will then validate resources created/edited with this schema - If we need a stronger validation, we can use a Validating Admission Webhook: - run an [admission webhook server](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#write-an-admission-webhook-server) to receive validation requests - register the webhook by creating a [ValidatingWebhookConfiguration](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-admission-webhooks-on-the-fly) - each time the API server receives a request matching the configuration,
the request is sent to our server for validation .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Presentation - By default, `kubectl get mycustomresource` won't display much information (just the name and age of each resource) - When creating a CRD, we can specify additional columns to print ([Example](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml#L6), [Docs](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#additional-printer-columns)) - By default, `kubectl describe mycustomresource` will also be generic - `kubectl describe` can show events related to our custom resources (for that, we need to create Event resources, and fill the `involvedObject` field) - For scalable resources, we can define a `scale` sub-resource - This will enable the use of `kubectl scale` and other scaling-related operations .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## About scaling - It is possible to use the HPA (Horizontal Pod Autoscaler) with CRDs - But it is not always desirable - The HPA works very well for homogenous, stateless workloads - For other workloads, your mileage may vary - Some systems can scale across multiple dimensions (for instance: increase number of replicas, or number of shards?) - If autoscaling is desired, the operator will have to take complex decisions (example: Zalando's Elasticsearch Operator ([Video](https://www.youtube.com/watch?v=lprE0J0kAq0))) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Versioning - As our operator evolves over time, we may have to change the CRD (add, remove, change fields) - Like every other resource in Kubernetes, [custom resources are versioned](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/ ) - When creating a CRD, we need to specify a *list* of versions - Versions can be marked as `stored` and/or `served` .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Stored version - Exactly one version has to be marked as the `stored` version - As the name implies, it is the one that will be stored in etcd - Resources in storage are never converted automatically (we need to read and re-write them ourselves) - Yes, this means that we can have different versions in etcd at any time - Our code needs to handle all the versions that still exist in storage .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Served versions - By default, the Kubernetes API will serve resources "as-is" (using their stored version) - It will assume that all versions are compatible storage-wise (i.e. 
that the spec and fields are compatible between versions) - We can provide [conversion webhooks](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#webhook-conversion) to "translate" requests (the alternative is to upgrade all stored resources and stop serving old versions) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Operator reliability - Remember that the operator itself must be resilient (e.g.: the node running it can fail) - Our operator must be able to restart and recover gracefully - Do not store state locally (unless we can reconstruct that state when we restart) - As indicated earlier, we can use the Kubernetes API to store data: - in the custom resources themselves - in other resources' annotations .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- ## Beyond CRDs - CRDs cannot use custom storage (e.g. for time series data) - CRDs cannot support arbitrary subresources (like logs or exec for Pods) - CRDs cannot support protobuf (for faster, more efficient communication) - If we need these things, we can use the [aggregation layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) instead - The aggregation layer proxies all requests below a specific path to another server (this is used e.g. by the metrics server) - [This documentation page](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#choosing-a-method-for-adding-custom-resources) compares the features of CRDs and API aggregation ??? :EN:- Guidelines to design our own operators :FR:- Comment concevoir nos propres opérateurs .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-design.md)] --- class: pic .interstitial[] --- name: toc-writing-a-tiny-operator class: title Writing a tiny operator .nav[ [Previous part](#toc-designing-an-operator) | [Back to table of contents](#toc-part-9) | [Next part](#toc-kubebuilder) ] .debug[(automatically generated title slide)] --- # Writing a tiny operator - Let's look at a simple operator - It does have: - a control loop - resource lifecycle management - basic logging - It doesn't have: - CRDs (and therefore, resource versioning, conversion webhooks...) - advanced observability (metrics, Kubernetes Events) .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Use case *When I push code to my source control system, I want that code to be built into a container image, and that image to be deployed in a staging environment. I want each branch/tag/commit (depending on my needs) to be deployed into its specific Kubernetes Namespace.* - The last part requires the CI/CD pipeline to manage Namespaces - ...And permissions in these Namespaces - This requires elevated privileges for the CI/CD pipeline (read: `cluster-admin`) - If the CI/CD pipeline is compromised, this can lead to cluster compromise - This can be a concern if the CI/CD pipeline is part of the repository (which is the default modus operandi with GitHub, GitLab, Bitbucket...) 
.debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Proposed solution - On-demand creation of Namespaces - Creation is triggered by creating a ConfigMap in a dedicated Namespace - Namespaces are set up with basic permissions - Credentials are generated for each Namespace - Credentials only give access to their Namespace - Credentials are exposed back to the dedicated configuration Namespace - Operator implemented as a shell script .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## An operator in shell... Really? - About 150 lines of code (including comments + white space) - Performance doesn't matter - operator work will be a tiny fraction of CI/CD pipeline work - uses *watch* semantics to minimize control plane load - Easy to understand, easy to audit, easy to tweak .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Show me the code! - GitHub repository and documentation: https://github.com/jpetazzo/nsplease - Operator source code: https://github.com/jpetazzo/nsplease/blob/main/nsplease.sh .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Main loop ```bash info "Waiting for ConfigMap events in $REQUESTS_NAMESPACE..." kubectl --namespace $REQUESTS_NAMESPACE get configmaps \ --watch --output-watch-events -o json \ | jq --unbuffered --raw-output '[.type,.object.metadata.name] | @tsv' \ | while read TYPE NAMESPACE; do debug "Got event: $TYPE $NAMESPACE" ``` - `--watch` to avoid actively polling the control plane - `--output-watch-events` so that we can disregard e.g. resource deletion and modification events - `jq` to process JSON easily .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Resource ownership - Check out the `kubectl patch` commands - The created Namespace "owns" the corresponding ConfigMap and Secret - This means that deleting the Namespace will delete the ConfigMap and Secret - We don't need to watch for object deletion to clean up - Clean up will be done automatically even if the operator is not running .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Why no CRD? - It's easier to create a ConfigMap (e.g. `kubectl create configmap --from-literal=` one-liner) - We don't need the features of CRDs (schemas, printer columns, versioning...) - “This CRD could have been a ConfigMap!” (this doesn't mean *all* CRDs could be ConfigMaps, of course) .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- ## Discussion - A lot of simple, yet efficient, logic can be implemented in shell scripts - These can be used to prototype more complex operators - Not all use-cases require CRDs (keep in mind that correct CRDs are *a lot* of work!) - If the algorithms are correct, shell performance won't matter at all (but it will be difficult to keep a resource cache in shell) - Improvement idea: this operator could generate *events* (visible with `kubectl get events` and `kubectl describe`) ???
:EN:- How to write a simple operator with shell scripts :FR:- Comment écrire un opérateur simple en shell script .debug[[k8s/operators-example.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators-example.md)] --- class: pic .interstitial[] --- name: toc-kubebuilder class: title Kubebuilder .nav[ [Previous part](#toc-writing-a-tiny-operator) | [Back to table of contents](#toc-part-9) | [Next part](#toc-events) ] .debug[(automatically generated title slide)] --- # Kubebuilder - Writing a quick and dirty operator is (relatively) easy - Doing it right, however ... -- - We need: - a proper CRD with schema validation - a controller performing a reconciliation loop - management of errors, retries, dependencies between resources - maybe webhooks for admission and/or conversion 😱 .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Frameworks - There are a few frameworks available out there: - [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) ([book](https://book.kubebuilder.io/)): Go-centric, very close to Kubernetes' core types - [operator-framework](https://operatorframework.io/): higher level; also supports Ansible and Helm - [KUDO](https://kudo.dev/): declarative operators written in YAML - [KOPF](https://kopf.readthedocs.io/en/latest/): operators in Python - ... .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Kubebuilder workflow - Kubebuilder will create scaffolding for us (Go stubs for types and controllers) - Then we edit these type and controller files - Kubebuilder generates CRD manifests from our type definitions (and regenerates the manifests whenever we update the types) - It also gives us tools to quickly run the controller against a cluster (not necessarily *on* the cluster) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Our objective - We're going to implement a *useless machine* [basic example](https://www.youtube.com/watch?v=aqAUmgE3WyM) | [playful example](https://www.youtube.com/watch?v=kproPsch7i0) | [advanced example](https://www.youtube.com/watch?v=Nqk_nWAjBus) | [another advanced example](https://www.youtube.com/watch?v=eLtUB8ncEnA) - A machine manifest will look like this: ```yaml kind: Machine apiVersion: useless.container.training/v1alpha1 metadata: name: machine-1 spec: # Our useless operator will change that to "down" switchPosition: up ``` - Each time we change the `switchPosition`, the operator will move it back to `down` (This is inspired by the [uselessoperator](https://github.com/tilt-dev/uselessoperator) written by [V Körbes](https://twitter.com/veekorbes). Highly recommend!💯) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- class: extra-details ## Local vs remote - Building Go code can be a little bit slow on our modest lab VMs - It will typically be *much* faster on a local machine - All the demos and labs in this section will run fine either way!
.debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Preparation - Install Go (on our VMs: `sudo snap install go --classic` or `sudo apk add go`) - Install kubebuilder ([get a release](https://github.com/kubernetes-sigs/kubebuilder/releases/), untar, move the `kubebuilder` binary to the `$PATH`) - Initialize our workspace: ```bash mkdir useless cd useless go mod init container.training/useless kubebuilder init --domain container.training ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Create scaffolding - Create a type and corresponding controller: ```bash kubebuilder create api --group useless --version v1alpha1 --kind Machine ``` - Answer `y` to both questions - Then we need to edit the type that just got created! .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Edit type Edit `api/v1alpha1/machine_types.go`. Add the `switchPosition` field in the `spec` structure: ```go // MachineSpec defines the desired state of Machine type MachineSpec struct { // Position of the switch on the machine, for instance up or down. SwitchPosition string ``json:"switchPosition,omitempty"`` } ``` ⚠️ The backticks above should be simple backticks, not double-backticks. Sorry. .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Go markers We can use Go *marker comments* to give `controller-gen` extra details about how to handle our type, for instance: ```go //+kubebuilder:object:root=true ``` → top-level type exposed through API (as opposed to "member field of another type") ```go //+kubebuilder:subresource:status ``` → automatically generate a `status` subresource (very common with many types) ```go //+kubebuilder:printcolumn:JSONPath=".spec.switchPosition",name=Position,type=string ``` (See [marker syntax](https://book.kubebuilder.io/reference/markers.html), [CRD generation](https://book.kubebuilder.io/reference/markers/crd.html), [CRD validation](https://book.kubebuilder.io/reference/markers/crd-validation.html), [Object/DeepCopy](https://master.book.kubebuilder.io/reference/markers/object.html) ) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Installing the CRD After making these changes, we can run `make install`. This will build the Go code, but also: - generate the CRD manifest - and apply the manifest to the cluster .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Creating a machine Edit `config/samples/useless_v1alpha1_machine.yaml`: ```yaml kind: Machine apiVersion: useless.container.training/v1alpha1 metadata: labels: # ... name: machine-1 spec: # Our useless operator will change that to "down" switchPosition: up ``` ... and apply it to the cluster. .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Designing the controller - Our controller needs to: - notice when a `switchPosition` is not `down` - move it to `down` when that happens - Later, we can add fancy improvements (wait a bit before moving it, etc.) 
.debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Reconciler logic - Kubebuilder will call our *reconciler* when necessary - When necessary = when changes happen ... - on our resource - or resources that it *watches* (related resources) - After "doing stuff", the reconciler can return ... - `ctrl.Result{},nil` = all is good - `ctrl.Result{Requeue...},nil` = all is good, but call us back in a bit - `ctrl.Result{},err` = something's wrong, try again later .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Loading an object Open `internal/controllers/machine_controller.go`. Add that code in the `Reconcile` method, at the `TODO(user)` location: ```go var machine uselessv1alpha1.Machine logger := log.FromContext(ctx) if err := r.Get(ctx, req.NamespacedName, &machine); err != nil { logger.Info("error getting object") return ctrl.Result{}, err } logger.Info( "reconciling", "machine", req.NamespacedName, "switchPosition", machine.Spec.SwitchPosition, ) ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Running the controller Our controller is not done yet, but let's try what we have right now! This will compile the controller and run it: ``` make run ``` Then: - create a machine - change the `switchPosition` - delete the machine -- We get a bunch of errors and go stack traces! 🤔 .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## `IgnoreNotFound` When we are called for object deletion, the object has *already* been deleted. (Unless we're using finalizers, but that's another story.) When we return `err`, the controller will try to access the object ... ... We need to tell it to *not* do that. Don't just return `err`, but instead, wrap it around `client.IgnoreNotFound`: ```go return ctrl.Result{}, client.IgnoreNotFound(err) ``` Update the code, `make run` again, create/change/delete again. -- 🎉 .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Updating the machine Let's try to update the machine like this: ```go if machine.Spec.SwitchPosition != "down" { machine.Spec.SwitchPosition = "down" if err := r.Update(ctx, &machine); err != nil { logger.Info("error updating switch position") return ctrl.Result{}, client.IgnoreNotFound(err) } } ``` Again - update, `make run`, test. .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Spec vs Status - Spec = desired state - Status = observed state - If Status is lost, the controller should be able to reconstruct it (maybe with degraded behavior in the meantime) - Status will almost always be a sub-resource, so that it can be updated separately (and potentially with different permissions) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- class: extra-details ## Spec vs Status (in depth) - The `/status` subresource is handled differently by the API server - Updates to `/status` don't alter the rest of the object - Conversely, updates to the object ignore changes in the status (See [the docs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#status-subresource) for the fine print.) 
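One way to poke at this separation from the command line (a sketch: the `--subresource` flag needs a reasonably recent `kubectl`, and `seenAt` is the status field we will add a few slides later):

```bash
# Read only the status subresource of our machine:
kubectl get machine machine-1 --subresource=status -o yaml

# Write only the status subresource; .spec is left untouched:
kubectl patch machine machine-1 --subresource=status --type=merge \
  -p '{"status":{"seenAt":"2024-01-01T00:00:00Z"}}'
```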
.debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## "Improving" our controller - We want to wait a few seconds before flipping the switch - Let's add the following line of code to the controller: ```go time.Sleep(5 * time.Second) ``` - `make run`, create a few machines, observe what happens -- 💡 Concurrency! .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Controller logic - Our controller shouldn't block (think "event loop") - There is a queue of objects that need to be reconciled - We can ask to be put back on the queue for later processing - When we need to block (wait for something to happen), two options: - ask for a *requeue* ("call me back later") - yield because we know we will be notified by another resource .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## To requeue ... `return ctrl.Result{RequeueAfter: 1 * time.Second}, nil` - That means: "try again in 1 second, and I will check if progress was made" - This *does not* guarantee that we will be called exactly 1 second later: - we might be called before (if other changes happen) - we might be called after (if the controller is busy with other objects) - If we are waiting for another Kubernetes resource to change, there is a better way (explained on next slide) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## ... or not to requeue `return ctrl.Result{}, nil` - That means: "we're done here!" - This is also what we should use if we are waiting for another resource (e.g. a LoadBalancer to be provisioned, a Pod to be ready...) - In that case, we will need to set a *watch* (more on that later) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Keeping track of state - If we simply requeue the object to examine it 1 second later... - ...We'll keep examining/requeuing it forever! - We need to "remember" that we saw it (and when) - Option 1: keep state in controller (e.g. an internal `map`) - Option 2: keep state in the object (typically in its status field) - Tradeoffs: concurrency / failover / control plane overhead... .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## "Improving" our controller, take 2 Let's store in the machine status the moment when we saw it: ```go type MachineStatus struct { // Time at which the machine was noticed by our controller. SeenAt *metav1.Time ``json:"seenAt,omitempty"`` } ``` ⚠️ The backticks above should be simple backticks, not double-backticks. Sorry. Note: `date` fields don't display timestamps in the future. (That's why for this example it's simpler to use `seenAt` rather than `changeAt`.) 
And for better visibility, add this along with the other `printcolumn` comments: ```go //+kubebuilder:printcolumn:JSONPath=".status.seenAt",name=Seen,type=date ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Set `seenAt` Let's add the following block in our reconciler: ```go if machine.Status.SeenAt == nil { now := metav1.Now() machine.Status.SeenAt = &now if err := r.Status().Update(ctx, &machine); err != nil { logger.Info("error updating status.seenAt") return ctrl.Result{}, client.IgnoreNotFound(err) } return ctrl.Result{RequeueAfter: 5 * time.Second}, nil } ``` (If needed, add `metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"` to our imports.) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Use `seenAt` Our switch-position-changing code can now become: ```go if machine.Spec.SwitchPosition != "down" { now := metav1.Now() changeAt := machine.Status.SeenAt.Time.Add(5 * time.Second) if now.Time.After(changeAt) { machine.Spec.SwitchPosition = "down" machine.Status.SeenAt = nil if err := r.Update(ctx, &machine); err != nil { logger.Info("error updating switch position") return ctrl.Result{}, client.IgnoreNotFound(err) } } } ``` `make run`, create a few machines, tweak their switches. .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Owner and dependents - Next, let's see how to have relationships between objects! - We will now have two kinds of objects: machines, and switches - Machines will store the number of switches in their spec - Machines should have *at least* one switch, possibly *multiple ones* - Our controller will automatically create switches if needed (a bit like the ReplicaSet controller automatically creates Pods) - The switches will be tied to their machine through a label (let's pick `machine=name-of-the-machine`) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Switch state - The position of a switch will now be stored in the switch (not in the machine like in the first scenario) - The machine will also expose the combined state of the switches (through its status) - The machine's status will be automatically updated by the controller (each time a switch is added/changed/removed) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Switches and machines ``` [jp@hex ~]$ kubectl get machines NAME SWITCHES POSITIONS machine-cz2vl 3 ddd machine-vf4xk 1 d [jp@hex ~]$ kubectl get switches --show-labels NAME POSITION SEEN LABELS switch-6wmjw down machine=machine-cz2vl switch-b8csg down machine=machine-cz2vl switch-fl8dq down machine=machine-cz2vl switch-rc59l down machine=machine-vf4xk ``` (The field `status.positions` shows the first letter of the `position` of each switch.) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Tasks 1. Create the new resource type (but don't create a controller) 2. Update `machine_types.go` and `switch_types.go` 3. Implement logic to display machine status (status of its switches) 4. Implement logic to automatically create switches 5. Implement logic to flip all switches down immediately 6. 
Then tweak it so that a given machine doesn't flip more than one switch every 5 seconds *See next slides for detailed steps!* .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Creating the new type ```bash kubebuilder create api --group useless --version v1alpha1 --kind Switch ``` Note: this time, only create a new custom resource; not a new controller. .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Updating our types - Move the "switch position" and "seen at" to the new `Switch` type - Update the `Machine` type to have: - `spec.switches` (Go type: `int`, JSON type: `integer`) - `status.positions` of type `string` - Bonus points for adding [CRD Validation](https://book.kubebuilder.io/reference/markers/crd-validation.html) to the numbers of switches! - Then install the new CRDs with `make install` - Create a Machine, and a Switch linked to the Machine (by setting the `machine` label) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Listing switches - Switches are associated to Machines with a label (`kubectl label switch switch-xyz machine=machine-xyz`) - We can retrieve associated switches like this: ```go var switches uselessv1alpha1.SwitchList if err := r.List(ctx, &switches, client.InNamespace(req.Namespace), client.MatchingLabels{"machine": req.Name}, ); err != nil { logger.Error(err, "unable to list switches of the machine") return ctrl.Result{}, client.IgnoreNotFound(err) } logger.Info("Found switches", "switches", switches) ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Updating status - Each time we reconcile a Machine, let's update its status: ```go status := "" for _, sw := range switches.Items { status += string(sw.Spec.Position[0]) } machine.Status.Positions = status if err := r.Status().Update(ctx, &machine); err != nil { ... ``` - Run the controller and check that POSITIONS gets updated - Add more switches linked to the same machine - ...The POSITIONS don't get updated, unless we restart the controller - We'll see later how to fix that! .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Creating objects We can use the `Create` method to create a new object: ```go sw := uselessv1alpha1.Switch{ TypeMeta: metav1.TypeMeta{ APIVersion: uselessv1alpha1.GroupVersion.String(), Kind: "Switch", }, ObjectMeta: metav1.ObjectMeta{ GenerateName: "switch-", Namespace: machine.Namespace, Labels: map[string]string{"machine": machine.Name}, }, Spec: uselessv1alpha1.SwitchSpec{ Position: "down", }, } if err := r.Create(ctx, &sw); err != nil { ... ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Create missing switches - In our reconciler, if a machine doesn't have enough switches, create them! 
- Option 1: directly create the number of missing switches - Option 2: create only one switch (and rely on later requeuing) - Note: option 2 won't quite work yet, since we haven't set up *watches* yet .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Watches - Our controller doesn't react when switches are created/updated/deleted - We need to tell it to watch switches - We also need to tell it how to map a switch to its machine (so that the correct machine gets queued and reconciled when a switch is updated) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Mapping a switch to its machine Define the following helper function: ```go func (r *MachineReconciler) machineOfSwitch(ctx context.Context, obj client.Object) []ctrl.Request { return []ctrl.Request{ ctrl.Request{ NamespacedName: types.NamespacedName{ Name: obj.GetLabels()["machine"], Namespace: obj.GetNamespace(), }, }, } } ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Telling the controller to watch switches Update the `SetupWithManager` method in the controller: ```go // SetupWithManager sets up the controller with the Manager. func (r *MachineReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&uselessv1alpha1.Machine{}). Owns(&uselessv1alpha1.Switch{}). Watches( &uselessv1alpha1.Switch{}, handler.EnqueueRequestsFromMapFunc(r.machineOfSwitch), ). Complete(r) } ``` .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## ...And a few extra imports Import the following packages referenced by the previous code: ```go "sigs.k8s.io/controller-runtime/pkg/handler" "sigs.k8s.io/controller-runtime/pkg/source" "k8s.io/apimachinery/pkg/types" ``` After this, when we update a switch, it should reflect on the machine. (Try to change switch positions and see the machine status update!) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Flipping switches - Now re-add logic to flip switches that are not in "down" position - Re-add logic to wait a few seconds before flipping a switch - Change the logic to toggle one switch per machine every few seconds (i.e. don't change all the switches for a machine; move them one at a time) - Handle "scale down" of a machine (by deleting extraneous switches) - Automatically delete switches when a machine is deleted (ideally, using ownership information) - Test corner cases (e.g. changing a switch label) .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Other possible improvements - Formalize resource ownership (by setting `ownerReferences` in the switches) - This can simplify the watch mechanism a bit - Allow to define a selector (instead of using the hard-coded `machine` label) - And much more! 
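For the ownership improvement, controller-runtime provides a helper that fills in `ownerReferences` for us. A minimal sketch, to be used right before the `r.Create(ctx, &sw)` call from the "Creating objects" slide (it assumes the usual `ctrl` alias for `sigs.k8s.io/controller-runtime` and the reconciler's `Scheme` field from the kubebuilder scaffolding):

```go
// Mark the Machine as the controlling owner of the Switch:
// deleting the Machine will then garbage-collect its Switches,
// and Owns(&uselessv1alpha1.Switch{}) in SetupWithManager will
// requeue the owning Machine whenever one of its Switches changes.
if err := ctrl.SetControllerReference(&machine, &sw, r.Scheme); err != nil {
    return ctrl.Result{}, err
}
if err := r.Create(ctx, &sw); err != nil {
    return ctrl.Result{}, err
}
```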
.debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- ## Acknowledgements - Useless Operator, by [V Körbes](https://twitter.com/veekorbes) [code](https://github.com/tilt-dev/uselessoperator) | [video (EN)](https://www.youtube.com/watch?v=85dKpsFFju4) | [video (PT)](https://www.youtube.com/watch?v=Vt7Eg4wWNDw) - Zero To Operator, by [Solly Ross](https://twitter.com/directxman12) [code](https://pres.metamagical.dev/kubecon-us-2019/code) | [video](https://www.youtube.com/watch?v=KBTXBUVNF2I) | [slides](https://pres.metamagical.dev/kubecon-us-2019/) - The [kubebuilder book](https://book.kubebuilder.io/) ??? :EN:- Implementing an operator with kubebuilder :FR:- Implémenter un opérateur avec kubebuilder .debug[[k8s/kubebuilder.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubebuilder.md)] --- class: pic .interstitial[] --- name: toc-events class: title Events .nav[ [Previous part](#toc-kubebuilder) | [Back to table of contents](#toc-part-9) | [Next part](#toc-finalizers) ] .debug[(automatically generated title slide)] --- # Events - Kubernetes has an internal structured log of *events* - These events are ordinary resources: - we can view them with `kubectl get events` - they can be viewed and created through the Kubernetes API - they are stored in Kubernetes default database (e.g. etcd) - Most components will generate events to let us know what's going on - Events can be *related* to other resources .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Reading events - `kubectl get events` (or `kubectl get ev`) - Can use `--watch` ⚠️ Looks like `tail -f`, but events aren't necessarily sorted! - Can use `--all-namespaces` - Cluster events (e.g. related to nodes) are in the `default` namespace - Viewing all "non-normal" events: ```bash kubectl get ev -A --field-selector=type!=Normal ``` (as of Kubernetes 1.19, `type` can be either `Normal` or `Warning`) .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Reading events (take 2) - When we use `kubectl describe` on an object, `kubectl` retrieves the associated events .lab[ - See the API requests happening when we use `kubectl describe`: ```bash kubectl describe service kubernetes --namespace=default -v6 >/dev/null ``` ] .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Generating events - This is rarely (if ever) done manually (i.e. by crafting some YAML) - But controllers (e.g. operators) need this! - It's not mandatory, but it helps with *operability* (e.g. 
when we `kubectl describe` a CRD, we will see associated events) .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## ⚠️ Work in progress - "Events" can be : - "old-style" events (in core API group, aka `v1`) - "new-style" events (in API group `events.k8s.io`) - See [KEP 383](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/383-new-event-api-ga-graduation/README.md) in particular this [comparison between old and new APIs](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/383-new-event-api-ga-graduation/README.md#comparison-between-old-and-new-apis) .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Experimenting with events - Let's create an event related to a Node, based on [k8s/event-node.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/event-node.yaml) .lab[ - Edit `k8s/event-node.yaml` - Update the `name` and `uid` of the `involvedObject` - Create the event with `kubectl create -f` - Look at the Node with `kubectl describe` ] .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Experimenting with events - Let's create an event related to a Pod, based on [k8s/event-pod.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/event-pod.yaml) .lab[ - Create a pod - Edit `k8s/event-pod.yaml` - Edit the `involvedObject` section (don't forget the `uid`) - Create the event with `kubectl create -f` - Look at the Pod with `kubectl describe` ] .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Generating events in practice - In Go, use an `EventRecorder` provided by the `kubernetes/client-go` library - [EventRecorder interface](https://github.com/kubernetes/client-go/blob/release-1.19/tools/record/event.go#L87) - [kubebuilder book example](https://book-v1.book.kubebuilder.io/beyond_basics/creating_events.html) - It will take care of formatting / aggregating events - To get an idea of what to put in the `reason` field, check [kubelet events]( https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/events/event.go) .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- ## Cluster operator perspective - Events are kept 1 hour by default - This can be changed with the `--event-ttl` flag on the API server - On very busy clusters, events can be kept on a separate etcd cluster - This is done with the `--etcd-servers-overrides` flag on the API server - Example: ``` --etcd-servers-overrides=/events#http://127.0.0.1:12379 ``` ??? :EN:- Consuming and generating cluster events :FR:- Suivre l'activité du cluster avec les *events* .debug[[k8s/events.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/events.md)] --- class: pic .interstitial[] --- name: toc-finalizers class: title Finalizers .nav[ [Previous part](#toc-events) | [Back to table of contents](#toc-part-9) | [Next part](#toc-extra-content) ] .debug[(automatically generated title slide)] --- # Finalizers - Sometimes, we.red[¹] want to prevent a resource from being deleted: - perhaps it's "precious" (holds important data) - perhaps other resources depend on it (and should be deleted first) - perhaps we need to perform some clean up before it's deleted - *Finalizers* are a way to do that! 
.footnote[.red[¹]The "we" in that sentence generally stands for a controller.
(We can also use finalizers directly ourselves, but it's not very common.)] .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Examples - Prevent deletion of a PersistentVolumeClaim which is used by a Pod - Prevent deletion of a PersistentVolume which is bound to a PersistentVolumeClaim - Prevent deletion of a Namespace that still contains objects - When a LoadBalancer Service is deleted, make sure that the corresponding external resource (e.g. NLB, GLB, etc.) gets deleted.red[¹] - When a CRD gets deleted, make sure that all the associated resources get deleted.red[²] .footnote[.red[¹²]Finalizers are not the only solution for these use-cases.] .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## How do they work? - Each resource can have a list of `finalizers` in its `metadata`, e.g.: ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-pvc annotations: ... finalizers: - kubernetes.io/pvc-protection ``` - If we try to delete a resource that has at least one finalizer: - the resource is *not* deleted - instead, its `deletionTimestamp` is set to the current time - we are merely *marking the resource for deletion* .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## What happens next? - The controller that added the finalizer is supposed to: - watch for resources with a `deletionTimestamp` - execute necessary clean-up actions - then remove the finalizer - The resource is deleted once all the finalizers have been removed (there is no timeout, so this could take forever) - Until then, the resource can be used normally (but no further finalizer can be *added* to the resource) .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Finalizers in review Let's review the examples mentioned earlier. For each of them, we'll see if there are other (perhaps better) options. .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Volume finalizer - Kubernetes applies the following finalizers: - `kubernetes.io/pvc-protection` on PersistentVolumeClaims - `kubernetes.io/pv-protection` on PersistentVolumes - This prevents removing them when they are in use - Implementation detail: the finalizer is present *even when the resource is not in use* - When the resource is ~~deleted~~ marked for deletion, the controller will check if the finalizer can be removed (Perhaps to avoid race conditions?) .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Namespace finalizer - Kubernetes applies a finalizer named `kubernetes` - It prevents removing the namespace if it still contains objects - *Can we remove the namespace anyway?* - remove the finalizer - delete the namespace - force deletion - It *seems to work* but, in fact, the objects in the namespace still exist (and they will re-appear if we re-create the namespace) See [this blog post](https://www.openshift.com/blog/the-hidden-dangers-of-terminating-namespaces) for more details about this. .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## LoadBalancer finalizer - Scenario: We run a custom controller to implement provisioning of LoadBalancer Services.
When a Service with type=LoadBalancer is deleted, we want to make sure that the corresponding external resources are properly deleted. - Rationale for using a finalizer: Normally, we would watch and observe the deletion of the Service; but if the Service is deleted while our controller is down, we could "miss" the deletion and forget to clean up the external resource. The finalizer ensures that we will "see" the deletion and clean up the external resource. .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Counterpoint - We could also: - Tag the external resources
(to indicate which Kubernetes Service they correspond to) - Periodically reconcile them against Kubernetes resources - If a Kubernetes resource no longer exists, delete the external resource - This doesn't have to be a *pre-delete* hook (unless we store important information in the Service, e.g. as annotations) .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## CRD finalizer - Scenario: We have a CRD that represents a PostgreSQL cluster. It provisions StatefulSets, Deployments, Services, Secrets, ConfigMaps. When the CRD is deleted, we want to delete all these resources. - Rationale for using a finalizer: Same as previously; we could observe the CRD, but if it is deleted while the controller isn't running, we would miss the deletion, and the other resources would keep running. .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Counterpoint - We could use the same technique as described before (tag the resources with e.g. annotations, to associate them with the CRD) - Even better: we could use `ownerReferences` (this feature is *specifically* designed for that use-case!) .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## CRD finalizer (take two) - Scenario: We have a CRD that represents a PostgreSQL cluster. It provisions StatefulSets, Deployments, Services, Secrets, ConfigMaps. When the CRD is deleted, we want to delete all these resources. We also want to store a final backup of the database. We also want to update final usage metrics (e.g. for billing purposes). - Rationale for using a finalizer: We need to take some actions *before* the resources get deleted, not *after*. .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- ## Wrapping up - Finalizers are a great way to: - prevent deletion of a resource that is still in use - have a "guaranteed" pre-delete hook - They can also be (ab)used for other purposes - Code spelunking exercise: *check where finalizers are used in the Kubernetes code base and why!* ??? 
:EN:- Using "finalizers" to manage resource lifecycle :FR:- Gérer le cycle de vie des ressources avec les *finalizers* .debug[[k8s/finalizers.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/finalizers.md)] --- # (Extra content) .debug[[kube-adv.yml](https://github.com/jpetazzo/container.training/tree/main/slides/kube-adv.yml)] --- class: pic .interstitial[] --- name: toc-owners-and-dependents class: title Owners and dependents .nav[ [Previous part](#toc-extra-content) | [Back to table of contents](#toc-part-9) | [Next part](#toc-api-server-internals) ] .debug[(automatically generated title slide)] --- # Owners and dependents - Some objects are created by other objects (example: pods created by replica sets, themselves created by deployments) - When an *owner* object is deleted, its *dependents* are deleted (this is the default behavior; it can be changed) - We can delete a dependent directly if we want (but generally, the owner will recreate another right away) - An object can have multiple owners .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## Finding out the owners of an object - The owners are recorded in the field `ownerReferences` in the `metadata` block .lab[ - Let's create a deployment running `nginx`: ```bash kubectl create deployment yanginx --image=nginx ``` - Scale it to a few replicas: ```bash kubectl scale deployment yanginx --replicas=3 ``` - Once it's up, check the corresponding pods: ```bash kubectl get pods -l app=yanginx -o yaml | head -n 25 ``` ] These pods are owned by a ReplicaSet named yanginx-xxxxxxxxxx. .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## Listing objects with their owners - This is a good opportunity to try the `custom-columns` output! .lab[ - Show all pods with their owners: ```bash kubectl get pod -o custom-columns=\ NAME:.metadata.name,\ OWNER-KIND:.metadata.ownerReferences[0].kind,\ OWNER-NAME:.metadata.ownerReferences[0].name ``` ] Note: the `custom-columns` option should be one long option (without spaces), so the lines should not be indented (otherwise the indentation will insert spaces). .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## Deletion policy - When deleting an object through the API, three policies are available: - foreground (API call returns after all dependents are deleted) - background (API call returns immediately; dependents are scheduled for deletion) - orphan (the dependents are not deleted) - When deleting an object with `kubectl`, this is selected with `--cascade`: - `--cascade=true` deletes all dependent objects (default) - `--cascade=false` orphans dependent objects .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## What happens when an object is deleted - It is removed from the list of owners of its dependents - If, for one of these dependents, the list of owners becomes empty ... - if the policy is "orphan", the object stays - otherwise, the object is deleted .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## Orphaning pods - We are going to delete the Deployment and Replica Set that we created - ... without deleting the corresponding pods! 
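Note: on recent versions of `kubectl` (1.20+), `--cascade` takes a deletion policy name instead of a boolean; a rough equivalent of the lab below would be:

```bash
# Orphan the dependents instead of deleting them (newer kubectl syntax)
kubectl delete deployment -l app=yanginx --cascade=orphan
kubectl delete replicaset -l app=yanginx --cascade=orphan
```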
.lab[ - Delete the Deployment: ```bash kubectl delete deployment -l app=yanginx --cascade=false ``` - Delete the Replica Set: ```bash kubectl delete replicaset -l app=yanginx --cascade=false ``` - Check that the pods are still here: ```bash kubectl get pods ``` ] .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- class: extra-details ## When and why would we have orphans? - If we remove an owner and explicitly instruct the API to orphan dependents (like on the previous slide) - If we change the labels on a dependent, so that it's not selected anymore (e.g. change the `app: yanginx` in the pods of the previous example) - If a deployment tool that we're using does these things for us - If there is a serious problem within API machinery or other components (i.e. "this should not happen") .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## Finding orphan objects - We're going to output all pods in JSON format - Then we will use `jq` to keep only the ones *without* an owner - And we will display their name .lab[ - List all pods that *do not* have an owner: ```bash kubectl get pod -o json | jq -r " .items[] | select(.metadata.ownerReferences|not) | .metadata.name" ``` ] .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- ## Deleting orphan pods - Now that we can list orphan pods, deleting them is easy .lab[ - Add `| xargs kubectl delete pod` to the previous command: ```bash kubectl get pod -o json | jq -r " .items[] | select(.metadata.ownerReferences|not) | .metadata.name" | xargs kubectl delete pod ``` ] As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers. ??? :EN:- Owners and dependents :FR:- Liens de parenté entre les ressources .debug[[k8s/owners-and-dependents.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/owners-and-dependents.md)] --- class: pic .interstitial[] --- name: toc-api-server-internals class: title API server internals .nav[ [Previous part](#toc-owners-and-dependents) | [Back to table of contents](#toc-part-9) | [Next part](#toc-) ] .debug[(automatically generated title slide)] --- # API server internals - Understanding the internals of the API server is useful.red[¹]: - when extending the Kubernetes API server (CRDs, webhooks...) - when running Kubernetes at scale - Let's dive into a bit of code! .footnote[.red[¹]And by *useful*, we mean *strongly recommended or else...*] .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## The main handler - The API server parses its configuration, and builds a `GenericAPIServer` - ... which contains an `APIServerHandler` ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/server/handler.go#L37)) - ... which contains a couple of `http.Handler` fields - Requests go through: - `FullHandlerChain` (a series of HTTP filters, see next slide) - `Director` (switches the request to `GoRestfulContainer` or `NonGoRestfulMux`) - `GoRestfulContainer` is for "normal" APIs; integrates nicely with OpenAPI - `NonGoRestfulMux` is for everything else (e.g. proxy, delegation)
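A quick way to poke at both sides of the `Director` (a hedged sketch; exact routing details depend on the cluster version):

```bash
# A resource request, served through the GoRestfulContainer (/api, /apis)
kubectl get --raw /api/v1/namespaces/default | head

# A non-resource endpoint like /healthz, typically registered on the NonGoRestfulMux
kubectl get --raw /healthz
```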
.debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## The chain of handlers - API requests go through a complex chain of filters ([src](https://github.com/kubernetes/apiserver/blob/release-1.32/pkg/server/config.go#L1004)) (note when reading that code: requests start at the bottom and go up) - This is where authentication, authorization, and admission happen (as well as a few other things!) - Let's review an arbitrary selection of some of these handlers! *In the following slides, the handlers are in chronological order.* *Note: handlers are nested, so they can act at the beginning and end of a request.* .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## `WithPanicRecovery` - Reminder about Go: there is no exception handling in Go; instead: - functions typically return a `(SomeType, error)` pair - when things go really bad, the code can call `panic()` - `panic()` can be caught with `recover()`
(but this is almost never used like an exception handler!) - The API server code is not supposed to `panic()` - But just in case, we have that handler to prevent (some) crashes .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## `WithRequestInfo` ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/request/requestinfo.go#L163)) - Parses out essential information: API group, version, namespace, resource, subresource, verb ... - Maps HTTP verbs (GET, PUT, ...) to Kubernetes verbs (list, get, watch, ...) .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- class: extra-details ## HTTP verb mapping - POST → create - PUT → update - PATCH → patch - DELETE
→ delete (if a resource name is specified)
→ deletecollection (otherwise) - GET, HEAD
→ get (if a resource name is specified)
→ list (otherwise)
→ watch (if the `?watch=true` option is specified)
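We can see this mapping in action by increasing `kubectl` verbosity (a small sketch; `-v6` logs the HTTP method and URL of each request, and `web` is a placeholder pod name):

```bash
kubectl get pods -v6          # GET on the pods collection → "list"
kubectl get pods --watch -v6  # GET with ?watch=true → "watch"
kubectl get pod web -v6       # GET on a single pod → "get"
```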
.debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## `WithWaitGroup` - When we shut down, tells clients (with in-flight requests) to retry - only for "short" requests - for long-running requests, the client needs to do more - Long-running requests include the `watch` verb and the `proxy` sub-resource (See also `WithTimeoutForNonLongRunningRequests`) .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## AuthN and AuthZ - `WithAuthentication`: the request goes through a *chain* of authenticators ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/filters/authentication.go#L38)) - WithAudit - WithImpersonation: used for e.g. `kubectl ... --as another.user` - WithPriorityAndFairness or WithMaxInFlightLimit (`system:masters` can bypass these) - WithAuthorization .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- ## After all these handlers ... - We get to the `Director` mentioned above - API groups get installed into the `GoRestfulContainer` ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/server/genericapiserver.go#L423)) - REST-ish resources are managed by various handlers (in [this directory](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/)) - These files show us the code path for each type of request .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- class: extra-details ## Request code path - [create.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/create.go): decode to HubGroupVersion; admission; mutating admission; store - [delete.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/delete.go): validating admission only; deletion - [get.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/get.go) (get, list): directly fetches from the REST storage abstraction - [patch.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/patch.go): admission; mutating admission; patch - [update.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/update.go): decode to HubGroupVersion; admission; mutating admission; store - [watch.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/watch.go): similar to get.go, but with watch logic (HubGroupVersion = in-memory, "canonical" version.) ??? :EN:- Kubernetes API server internals :FR:- Fonctionnement interne du serveur API .debug[[k8s/apiserver-deepdive.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apiserver-deepdive.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks!
Questions?  .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)]