class: title, self-paced Kubernetes
for administrators
and operators
.nav[*Self-paced version*] .debug[ ``` ``` These slides have been built from commit: 45770cc [shared/title.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/title.md)] --- class: title, in-person Kubernetes
for administrators
and operators
.footnote[ **Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) https://container.training/** ] .debug[[shared/title.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/title.md)] --- ## Introductions ⚠️ This slide should be customized by the tutorial instructor(s). [@alexbuisine]: https://twitter.com/alexbuisine [EphemeraSearch]: https://ephemerasearch.com/ [@jpetazzo]: https://twitter.com/jpetazzo [@jpetazzo@hachyderm.io]: https://hachyderm.io/@jpetazzo [@s0ulshake]: https://twitter.com/s0ulshake [Quantgene]: https://www.quantgene.com/ .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/main/slides/logistics.md)] --- ## Exercises - At the end of each day, there is a series of exercises - To make the most out of the training, please try the exercises! (it will help you practice and memorize the content of the day) - We recommend taking at least one hour to work on the exercises (if you understood the content of the day, it will be much faster) - Each day will start with a quick review of the exercises of the previous day .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/main/slides/logistics.md)] --- ## A brief introduction - This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person, instructor-led workshops and tutorials - Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you! - You can also follow along on your own, at your own pace - We included as much information as possible in these slides - We recommend having a mentor to help you ... - ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ... - ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets .debug[[k8s/intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/intro.md)] --- class: self-paced ## Hands on, you shall practice - Nobody ever became a Jedi by spending their life reading Wookieepedia - Likewise, it will take more than merely *reading* these slides to make you an expert - These slides include *tons* of demos, exercises, and examples - They assume that you have access to a Kubernetes cluster - If you are attending a workshop or tutorial:
you will be given specific instructions to access your cluster - If you are doing this on your own:
the first chapter will give you various options to get your own cluster .debug[[k8s/intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/intro.md)] --- ## Accessing these slides now - We recommend that you open these slides in your browser: https://container.training/ - This is a public URL, you're welcome to share it with others! - Use arrows to move to next/previous slide (up, down, left, right, page up, page down) - Type a slide number + ENTER to go to that slide - The slide number is also visible in the URL bar (e.g. .../#123 for slide 123) .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## These slides are open source - The sources of these slides are available in a public GitHub repository: https://github.com/jpetazzo/container.training - These slides are written in Markdown - You are welcome to share, re-use, re-mix these slides - Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ... .footnote[👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.] .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## Accessing these slides later - Slides will remain online so you can review them later if needed (let's say we'll keep them online at least 1 year, how about that?) - You can download the slides using this URL: https://container.training/slides.zip (then open the file `kadm-twodays.yml.html`) - You can also generate a PDF of the slides (by printing them to a file; but be patient with your browser!) .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## These slides are constantly updated - Feel free to check the GitHub repository for updates: https://github.com/jpetazzo/container.training - Look for branches named YYYY-MM-... - You can also find specific decks and other resources on: https://container.training/ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- class: extra-details ## Extra details - This slide has a little magnifying glass in the top left corner - This magnifying glass indicates slides that provide extra details - Feel free to skip them if: - you are in a hurry - you are new to this and want to avoid cognitive overload - you want only the most essential information - You can review these slides another time if you want, they'll be waiting for you ☺ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/about-slides.md)] --- ## Chat room - We've set up a chat room that we will monitor during the workshop - Don't hesitate to use it to ask questions, or get help, or share feedback - The chat room will also be available after the workshop - Join the chat room: In person! - Say hi in the chat room! 
.debug[[shared/chat-room-im.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/chat-room-im.md)] --- name: toc-part-1 ## Part 1 - [Pre-requirements](#toc-pre-requirements) - [Kubernetes architecture](#toc-kubernetes-architecture) - [The Kubernetes API](#toc-the-kubernetes-api) - [Other control plane components](#toc-other-control-plane-components) - [Kubernetes Internal APIs](#toc-kubernetes-internal-apis) - [Building our own cluster (easy)](#toc-building-our-own-cluster-easy) .debug[(auto-generated TOC)] --- name: toc-part-2 ## Part 2 - [Building our own cluster (medium)](#toc-building-our-own-cluster-medium) - [Building our own cluster (hard)](#toc-building-our-own-cluster-hard) - [CNI internals](#toc-cni-internals) .debug[(auto-generated TOC)] --- name: toc-part-3 ## Part 3 - [API server availability](#toc-api-server-availability) - [Setting up Kubernetes](#toc-setting-up-kubernetes) - [Deploying a managed cluster](#toc-deploying-a-managed-cluster) - [Kubernetes distributions and installers](#toc-kubernetes-distributions-and-installers) - [Upgrading clusters](#toc-upgrading-clusters) - [Static pods](#toc-static-pods) .debug[(auto-generated TOC)] --- name: toc-part-4 ## Part 4 - [Backing up clusters](#toc-backing-up-clusters) - [The Cloud Controller Manager](#toc-the-cloud-controller-manager) - [Healthchecks](#toc-healthchecks) .debug[(auto-generated TOC)] --- name: toc-part-5 ## Part 5 - [Deploying a sample application](#toc-deploying-a-sample-application) - [Accessing logs from the CLI](#toc-accessing-logs-from-the-cli) - [Centralized logging](#toc-centralized-logging) - [Authentication and authorization](#toc-authentication-and-authorization) - [Generating user certificates](#toc-generating-user-certificates) - [The CSR API](#toc-the-csr-api) .debug[(auto-generated TOC)] --- name: toc-part-6 ## Part 6 - [OpenID Connect](#toc-openid-connect) - [Securing the control plane](#toc-securing-the-control-plane) - [Network policies](#toc-network-policies) - [Restricting Pod Permissions](#toc-restricting-pod-permissions) - [Pod Security Policies](#toc-pod-security-policies) - [Pod Security Admission](#toc-pod-security-admission) .debug[(auto-generated TOC)] --- name: toc-part-7 ## Part 7 - [Resource Limits](#toc-resource-limits) - [Defining min, max, and default resources](#toc-defining-min-max-and-default-resources) - [Namespace quotas](#toc-namespace-quotas) - [Limiting resources in practice](#toc-limiting-resources-in-practice) - [Checking Node and Pod resource usage](#toc-checking-node-and-pod-resource-usage) - [Cluster sizing](#toc-cluster-sizing) - [Disruptions](#toc-disruptions) - [The Horizontal Pod Autoscaler](#toc-the-horizontal-pod-autoscaler) .debug[(auto-generated TOC)] --- name: toc-part-8 ## Part 8 - [Collecting metrics with Prometheus](#toc-collecting-metrics-with-prometheus) - [Extending the Kubernetes API](#toc-extending-the-kubernetes-api) - [Custom Resource Definitions](#toc-custom-resource-definitions) - [Operators](#toc-operators) - [An ElasticSearch Operator](#toc-an-elasticsearch-operator) .debug[(auto-generated TOC)] --- name: toc-part-9 ## Part 9 - [Last words](#toc-last-words) - [Links and resources](#toc-links-and-resources) - [(All content after this slide is bonus material)](#toc-all-content-after-this-slide-is-bonus-material) .debug[(auto-generated TOC)] --- name: toc-part-10 ## Part 10 - [Volumes](#toc-volumes) - [Managing configuration](#toc-managing-configuration) - [Managing secrets](#toc-managing-secrets) - [Stateful 
sets](#toc-stateful-sets) - [Running a Consul cluster](#toc-running-a-consul-cluster) - [PV, PVC, and Storage Classes](#toc-pv-pvc-and-storage-classes) - [OpenEBS ](#toc-openebs-) - [Stateful failover](#toc-stateful-failover) .debug[(auto-generated TOC)] .debug[[shared/toc.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/toc.md)] --- class: pic .interstitial[] --- name: toc-pre-requirements class: title Pre-requirements .nav[ [Previous part](#toc-) | [Back to table of contents](#toc-part-1) | [Next part](#toc-kubernetes-architecture) ] .debug[(automatically generated title slide)] --- # Pre-requirements - Kubernetes concepts (pods, deployments, services, labels, selectors) - Hands-on experience working with containers (building images, running them; doesn't matter how exactly) - Familiarity with the UNIX command-line (navigating directories, editing files, using `kubectl`) .debug[[k8s/prereqs-advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prereqs-advanced.md)] --- class: title *Tell me and I forget.*
*Teach me and I remember.*
*Involve me and I learn.* Misattributed to Benjamin Franklin [(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- ## Hands-on sections - There will be *a lot* of examples and demos - We are going to build, ship, and run containers (and sometimes, clusters!) - If you want, you can run all the examples and demos in your environment (but you don't have to; it's up to you!) - All hands-on sections are clearly identified, like the gray rectangle below .lab[ - This is a command that we're gonna run: ```bash echo hello world ``` ] .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person ## Where are we going to run our containers? .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person, pic  .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- ## If you're attending a live training or workshop - Each person gets a private lab environment (depending on the scenario, this will be one VM, one cluster, multiple clusters...) - The instructor will tell you how to connect to your environment - Your lab environments will be available for the duration of the workshop (check with your instructor to know exactly when they'll be shut down) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- ## Running your own lab environments - If you are following a self-paced course... - Or watching a replay of a recorded course... - ...You will need to set up a local environment for the labs - If you want to deliver your own training or workshop: - deployment scripts are available in the [prepare-labs] directory - you can use them to automatically deploy many lab environments - they support many different infrastructure providers [prepare-labs]: https://github.com/jpetazzo/container.training/tree/main/prepare-labs .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person ## Why don't we run containers locally? - Installing this stuff can be hard on some machines (32-bit CPU or OS... Laptops without administrator access... etc.) - *"The whole team downloaded all these container images from the WiFi!
... and it went great!"* (Literally no-one ever) - All you need is a computer (or even a phone or tablet!), with: - an Internet connection - a web browser - an SSH client .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person ## SSH clients - On Linux, OS X, FreeBSD... you are probably all set - On Windows, get one of these: - [putty](http://www.putty.org/) - Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH) - [Git BASH](https://git-for-windows.github.io/) - [MobaXterm](http://mobaxterm.mobatek.net/) - On Android, [JuiceSSH](https://juicessh.com/) ([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh)) works pretty well - Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your Internet connection tends to lose packets .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person, extra-details ## What is this Mosh thing? *You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!* - Mosh is "the mobile shell" - It is essentially SSH over UDP, with roaming features - It retransmits packets quickly, so it works great even on lossy connections (Like hotel or conference WiFi) - It has intelligent local echo, so it works great even in high-latency connections (Like hotel or conference WiFi) - It supports transparent roaming when your client IP address changes (Like when you hop from hotel to conference WiFi) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: in-person, extra-details ## Using Mosh - To install it: `(apt|yum|brew) install mosh` - It has been pre-installed on the VMs that we are using - To connect to a remote machine: `mosh user@host` (It is going to establish an SSH connection, then hand off to UDP) - It requires UDP ports to be open (By default, it uses a UDP port between 60000 and 61000) .debug[[shared/handson.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/handson.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-architecture class: title Kubernetes architecture .nav[ [Previous part](#toc-pre-requirements) | [Back to table of contents](#toc-part-1) | [Next part](#toc-the-kubernetes-api) ] .debug[(automatically generated title slide)] --- # Kubernetes architecture We can arbitrarily split Kubernetes in two parts: - the *nodes*, a set of machines that run our containerized workloads; - the *control plane*, a set of processes implementing the Kubernetes APIs. Kubernetes also relies on underlying infrastructure: - servers, network connectivity (obviously!), - optional components like storage systems, load balancers ... .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## What runs on a node - Our containerized workloads - A container engine like Docker, CRI-O, containerd... (in theory, the choice doesn't matter, as the engine is abstracted by Kubernetes) - kubelet: an agent connecting the node to the cluster (it connects to the API server, registers the node, receives instructions) - kube-proxy: a component used for internal cluster communication (note that this is *not* an overlay network or a CNI plugin!) .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## What's in the control plane - Everything is stored in etcd (it's the only stateful component) - Everyone communicates exclusively through the API server: - we (users) interact with the cluster through the API server - the nodes register and get their instructions through the API server - the other control plane components also register with the API server - API server is the only component that reads/writes from/to etcd .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Communication protocols: API server - The API server exposes a REST API (except for some calls, e.g. 
to attach interactively to a container) - Almost all requests and responses are JSON following a strict format - For performance, requests and responses can also be encoded with protobuf (see this [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) for details) - In practice, protobuf is used for all internal communication (between control plane components, and with kubelet) .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Communication protocols: on the nodes The kubelet agent uses a number of special-purpose protocols and interfaces, including: - CRI (Container Runtime Interface) - used for communication with the container engine - abstracts the differences between container engines - based on gRPC+protobuf - [CNI (Container Network Interface)](https://github.com/containernetworking/cni/blob/master/SPEC.md) - used for communication with network plugins - network plugins are implemented as executable programs invoked by kubelet - network plugins provide IPAM - network plugins set up network interfaces in pods .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Control plane location The control plane can run: - in containers, on the same nodes that run other application workloads (default behavior for local clusters like [Minikube](https://github.com/kubernetes/minikube), [kind](https://kind.sigs.k8s.io/)...) - on a dedicated node (default behavior when deploying with kubeadm) - on a dedicated set of nodes ([Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way); [kops](https://github.com/kubernetes/kops); also kubeadm) - outside of the cluster (most managed clusters like AKS, DOKS, EKS, GKE, Kapsule, LKE, OKE...) 
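A quick way to tell which layout a given cluster uses is to look for control plane components with `kubectl` (a rough heuristic; the role label below is set by kubeadm and most installers, but its exact name varies across versions):

```bash
# If the control plane runs in containers, its pods show up here
# (on most managed clusters, they won't be visible at all)
kubectl get pods --namespace=kube-system -o wide

# Nodes dedicated to the control plane usually carry a "role" label
# (older clusters may use node-role.kubernetes.io/master instead)
kubectl get nodes --selector=node-role.kubernetes.io/control-plane
```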
.debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic  .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic .interstitial[] --- name: toc-the-kubernetes-api class: title The Kubernetes API .nav[ [Previous part](#toc-kubernetes-architecture) | [Back to table of contents](#toc-part-1) | [Next part](#toc-other-control-plane-components) ] .debug[(automatically generated title slide)] --- # The Kubernetes API [ *The Kubernetes API server is a "dumb server" which offers storage, versioning, validation, update, and watch semantics on API resources.* ]( https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md#proposal-and-motivation ) ([Clayton Coleman](https://twitter.com/smarterclayton), Kubernetes Architect and Maintainer) What does that mean? .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## The Kubernetes API is declarative - We cannot tell the API, "run a pod" - We can tell the API, "here is the definition for pod X" - The API server will store that definition (in etcd) - *Controllers* will then wake up and create a pod matching the definition .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## The core features of the Kubernetes API - We can create, read, update, and delete objects - We can also *watch* objects (be notified when an object changes, or when an object of a given type is created) - Objects are strongly typed - Types are *validated* and *versioned* - Storage and watch operations are provided by etcd (note: the [k3s](https://k3s.io/) project allows us to use sqlite instead of etcd) .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Let's experiment a bit! - For this section, connect to the first node of the `test` cluster .lab[ - SSH to the first node of the test cluster - Check that the cluster is operational: ```bash kubectl get nodes ``` - All nodes should be `Ready` ] .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- ## Create - Let's create a simple object .lab[ - Create a namespace with the following command: ```bash kubectl create -f- <
(example: this [demo scheduler](https://github.com/kelseyhightower/scheduler) uses the cost of nodes, stored in node annotations) - A pod might stay in `Pending` state for a long time: - if the cluster is full - if the pod has special constraints that can't be met - if the scheduler is not running (!) ??? :EN:- Kubernetes architecture review :FR:- Passage en revue de l'architecture de Kubernetes .debug[[k8s/architecture.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/architecture.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-internal-apis class: title Kubernetes Internal APIs .nav[ [Previous part](#toc-other-control-plane-components) | [Back to table of contents](#toc-part-1) | [Next part](#toc-building-our-own-cluster-easy) ] .debug[(automatically generated title slide)] --- # Kubernetes Internal APIs - Almost every Kubernetes component has some kind of internal API (some components even have multiple APIs on different ports!) - At the very least, these can be used for healthchecks (you *should* leverage this if you are deploying and operating Kubernetes yourself!) - Sometimes, they are used internally by Kubernetes (e.g. when the API server retrieves logs from kubelet) - Let's review some of these APIs! .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## API hunting guide This is how we found and investigated these APIs: - look for open ports on Kubernetes nodes (worker nodes or control plane nodes) - check which process owns that port - probe the port (with `curl` or other tools) - read the source code of that process (in particular when looking for API routes) OK, now let's see the results! .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## etcd - 2379/tcp → etcd clients - should be HTTPS and require mTLS authentication - 2380/tcp → etcd peers - should be HTTPS and require mTLS authentication - 2381/tcp → etcd healthcheck - HTTP without authentication - exposes two API routes: `/health` and `/metrics` .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## kubelet - 10248/tcp → healthcheck - HTTP without authentication - exposes a single API route, `/healthz`, that just returns `ok` - 10250/tcp → internal API - should be HTTPS and require mTLS authentication - used by the API server to obtain logs, `kubectl exec`, etc. .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- class: extra-details ## kubelet API - We can authenticate with e.g. our TLS admin certificate - The following routes should be available: - `/healthz` - `/configz` (serves kubelet configuration) - `/metrics` - `/pods` (returns *desired state*) - `/runningpods` (returns *current state* from the container runtime) - `/logs` (serves files from `/var/log`) - `/containerLogs/
<podNamespace>/<podID>/<containerName>
` (can add e.g. `?tail=10`) - `/run`, `/exec`, `/attach`, `/portForward` - See [kubelet source code](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go) for details! .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- class: extra-details ## Trying the kubelet API The following example should work on a cluster deployed with `kubeadm`. 1. Obtain the key and certificate for the `cluster-admin` user. 2. Log into a node. 3. Copy the key and certificate on the node. 4. Find out the name of the `kube-proxy` pod running on that node. 5. Run the following command, updating the pod name: ```bash curl -d cmd=ls -k --cert admin.crt --key admin.key \ https://localhost:10250/run/kube-system/`kube-proxy-xy123`/kube-proxy ``` ... This should show the content of the root directory in the pod. .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## kube-proxy - 10249/tcp → healthcheck - HTTP, without authentication - exposes a few API routes: `/healthz` (just returns `ok`), `/configz`, `/metrics` - 10256/tcp → another healthcheck - HTTP, without authentication - also exposes a `/healthz` API route (but this one shows a timestamp) .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## kube-controller and kube-scheduler - 10257/tcp → kube-controller - HTTPS, with optional mTLS authentication - `/healthz` doesn't require authentication - ... but `/configz` and `/metrics` do (use e.g. admin key and certificate) - 10259/tcp → kube-scheduler - similar to kube-controller, with the same routes ??? :EN:- Kubernetes internal APIs :FR:- Les APIs internes de Kubernetes .debug[[k8s/internal-apis.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/internal-apis.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." 
The following 19 slides show what really happens when we run: ```bash kubectl create deployment web --image=nginx ``` .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic  .debug[[k8s/deploymentslideshow.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[] --- name: toc-building-our-own-cluster-easy class: title Building our own cluster (easy) .nav[ [Previous part](#toc-kubernetes-internal-apis) | [Back to table of contents](#toc-part-1) | [Next part](#toc-building-our-own-cluster-medium) ] .debug[(automatically generated title slide)] --- # Building our own cluster (easy) - Let's build our own cluster! *Perfection is attained not when there is nothing left to add, but when there is nothing left to take away. 
(Antoine de Saint-Exupery)* - Our goal is to build a minimal cluster allowing us to: - create a Deployment (with `kubectl create deployment`) - expose it with a Service - connect to that service - "Minimal" here means: - smaller number of components - smaller number of command-line flags - smaller number of configuration files .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Non-goals - For now, we don't care about security - For now, we don't care about scalability - For now, we don't care about high availability - All we care about is *simplicity* .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Our environment - We will use the machine indicated as `monokube1` - This machine: - runs Ubuntu LTS - has Kubernetes, Docker, and etcd binaries installed - but nothing is running .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## The fine print - We're going to use a *very old* version of Kubernetes (specifically, 1.19) - Why? - It's much easier to set up than recent versions - it's compatible with Docker (no need to set up CNI) - it doesn't require a ServiceAccount keypair - it can be exposed over plain HTTP (insecure but easier) - We'll do that, and later, move to recent versions of Kubernetes! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking our environment - Let's make sure we have everything we need first .lab[ - Log into the `monokube1` machine - Get root: ```bash sudo -i ``` - Check available versions: ```bash etcd -version kube-apiserver --version dockerd --version ``` ] .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## The plan 1. Start API server 2. Interact with it (create Deployment and Service) 3. See what's broken 4. Fix it and go back to step 2 until it works! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Dealing with multiple processes - We are going to start many processes - Depending on what you're comfortable with, you can: - open multiple windows and multiple SSH connections - use a terminal multiplexer like screen or tmux - put processes in the background with `&`
(warning: log output might get confusing to read!) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting API server .lab[ - Try to start the API server: ```bash kube-apiserver # It will fail with "--etcd-servers must be specified" ``` ] Since the API server stores everything in etcd, it cannot start without it. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting etcd .lab[ - Try to start etcd: ```bash etcd ``` ] Success! Note the last line of output: ``` serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged! ``` *Sure, that's discouraged. But thanks for telling us the address!* .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting API server (for real) - Try again, passing the `--etcd-servers` argument - That argument should be a comma-separated list of URLs .lab[ - Start API server: ```bash kube-apiserver --etcd-servers http://127.0.0.1:2379 ``` ] Success! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Interacting with API server - Let's try a few "classic" commands .lab[ - List nodes: ```bash kubectl get nodes ``` - List services: ```bash kubectl get services ``` ] We should get `No resources found.` and the `kubernetes` service, respectively. Note: the API server automatically created the `kubernetes` service entry. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- class: extra-details ## What about `kubeconfig`? - We didn't need to create a `kubeconfig` file - By default, the API server is listening on `localhost:8080` (without requiring authentication) - By default, `kubectl` connects to `localhost:8080` (without providing authentication) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Creating a Deployment - Let's run a web server! .lab[ - Create a Deployment with NGINX: ```bash kubectl create deployment web --image=nginx ``` ] Success? .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking our Deployment status .lab[ - Look at pods, deployments, etc.: ```bash kubectl get all ``` ] Our Deployment is in bad shape: ``` NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/web 0/1 0 0 2m26s ``` And, there is no ReplicaSet, and no Pod. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## What's going on? - We stored the definition of our Deployment in etcd (through the API server) - But there is no *controller* to do the rest of the work - We need to start the *controller manager* .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting the controller manager .lab[ - Try to start the controller manager: ```bash kube-controller-manager ``` ] The final error message is: ``` invalid configuration: no configuration has been provided ``` But the logs include another useful piece of information: ``` Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. 
``` .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Reminder: everyone talks to API server - The controller manager needs to connect to the API server - It *does not* have a convenient `localhost:8080` default - We can pass the connection information in two ways: - `--master` and a host:port combination (easy) - `--kubeconfig` and a `kubeconfig` file - For simplicity, we'll use the first option .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting the controller manager (for real) .lab[ - Start the controller manager: ```bash kube-controller-manager --master http://localhost:8080 ``` ] Success! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking our Deployment status .lab[ - Check all our resources again: ```bash kubectl get all ``` ] We now have a ReplicaSet. But we still don't have a Pod. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## What's going on? In the controller manager logs, we should see something like this: ``` E0404 15:46:25.753376 22847 replica_set.go:450] Sync "default/web-5bc9bd5b8d" failed with `No API token found for service account "default"`, retry after the token is automatically created and added to the service account ``` - The service account `default` was automatically added to our Deployment (and to its pods) - The service account `default` exists - But it doesn't have an associated token (the token is a secret; creating it requires signature; therefore a CA) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Solving the missing token issue There are many ways to solve that issue. We are going to list a few (to get an idea of what's happening behind the scenes). Of course, we don't need to perform *all* the solutions mentioned here. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Option 1: disable service accounts - Restart the API server with `--disable-admission-plugins=ServiceAccount` - The API server will no longer add a service account automatically - Our pods will be created without a service account .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Option 2: do not mount the (missing) token - Add `automountServiceAccountToken: false` to the Deployment spec *or* - Add `automountServiceAccountToken: false` to the default ServiceAccount - The ReplicaSet controller will no longer create pods referencing the (missing) token .lab[ - Programmatically change the `default` ServiceAccount: ```bash kubectl patch sa default -p "automountServiceAccountToken: false" ``` ] .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Option 3: set up service accounts properly - This is the most complex option! 
- Generate a key pair - Pass the private key to the controller manager (to generate and sign tokens) - Pass the public key to the API server (to verify these tokens) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Continuing without service account token - Once we patch the default service account, the ReplicaSet can create a Pod .lab[ - Check that we now have a pod: ```bash kubectl get all ``` ] Note: we might have to wait a bit for the ReplicaSet controller to retry. If we're impatient, we can restart the controller manager. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## What's next? - Our pod exists, but it is in `Pending` state - Remember, we don't have a node so far (`kubectl get nodes` shows an empty list) - We need to: - start a container engine - start kubelet .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting a container engine - We're going to use Docker (because it's the default option) .lab[ - Start the Docker Engine: ```bash dockerd ``` ] Success! Feel free to check that it actually works with e.g.: ```bash docker run alpine echo hello world ``` .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting kubelet - If we start kubelet without arguments, it *will* start - But it will not join the cluster! - It will start in *standalone* mode - Just like with the controller manager, we need to tell kubelet where the API server is - Alas, kubelet doesn't have a simple `--master` option - We have to use `--kubeconfig` - We need to write a `kubeconfig` file for kubelet .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Writing a kubeconfig file - We can copy/paste a bunch of YAML - Or we can generate the file with `kubectl` .lab[ - Create the file `~/.kube/config` with `kubectl`: ```bash kubectl config \ set-cluster localhost --server http://localhost:8080 kubectl config \ set-context localhost --cluster localhost kubectl config \ use-context localhost ``` ] .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Our `~/.kube/config` file The file that we generated looks like the one below. That one has been slightly simplified (removing extraneous fields), but it is still valid. ```yaml apiVersion: v1 kind: Config current-context: localhost contexts: - name: localhost context: cluster: localhost clusters: - name: localhost cluster: server: http://localhost:8080 ``` .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting kubelet .lab[ - Start kubelet with that kubeconfig file: ```bash kubelet --kubeconfig ~/.kube/config ``` ] If it works: great! If it complains about a "cgroup driver", check the next slide. .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Cgroup drivers - Cgroups ("control groups") are a Linux kernel feature - They're used to account and limit resources (e.g.: memory, CPU, block I/O...) 
- There are multiple ways to manipulate cgroups, including: - through a pseudo-filesystem (typically mounted in /sys/fs/cgroup) - through systemd - Kubelet and the container engine need to agree on which method to use .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Setting the cgroup driver - If kubelet refused to start, mentioning a cgroup driver issue, try: ```bash kubelet --kubeconfig ~/.kube/config --cgroup-driver=systemd ``` - That *should* do the trick! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Looking at our 1-node cluster - Let's check that our node registered correctly .lab[ - List the nodes in our cluster: ```bash kubectl get nodes ``` ] Our node should show up. Its name will be its hostname (it should be `monokube1`). .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Are we there yet? - Let's check if our pod is running .lab[ - List all resources: ```bash kubectl get all ``` ] -- Our pod is still `Pending`. 🤔 -- Which is normal: it needs to be *scheduled*. (i.e., something needs to decide which node it should go on.) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Scheduling our pod - Why do we need a scheduling decision, since we have only one node? - The node might be full, unavailable; the pod might have constraints ... - The easiest way to schedule our pod is to start the scheduler (we could also schedule it manually) .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Starting the scheduler - The scheduler also needs to know how to connect to the API server - Just like for controller manager, we can use `--kubeconfig` or `--master` .lab[ - Start the scheduler: ```bash kube-scheduler --master http://localhost:8080 ``` ] - Our pod should now start correctly .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- ## Checking the status of our pod - Our pod will go through a short `ContainerCreating` phase - Then it will be `Running` .lab[ - Check pod status: ```bash kubectl get pods ``` ] Success! .debug[[k8s/dmuc-easy.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-easy.md)] --- class: extra-details ## Scheduling a pod manually - We can schedule a pod in `Pending` state by creating a Binding, e.g.: ```bash kubectl create -f- <
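# A Binding for a Pod in the default Namespace looks roughly like this
# (a sketch — substitute the actual Pod and Node names):
#
#   apiVersion: v1
#   kind: Binding
#   metadata:
#     name: <pod-name>
#   target:
#     apiVersion: v1
#     kind: Node
#     name: <node-name>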
(warning: log output might get confusing to read!) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting API server .lab[ - Try to start the API server: ```bash kube-apiserver # It will complain about permission to /var/run/kubernetes sudo kube-apiserver # Now it will complain about a bunch of missing flags, including: # --etcd-servers # --service-account-issuer # --service-account-signing-key-file ``` ] Just like before, we'll need to start etcd. But we'll also need some TLS keys! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Generating TLS keys - There are many ways to generate TLS keys (and certificates) - A very popular and modern tool to do that is [cfssl] - We're going to use the old-fashioned [openssl] CLI - Feel free to use cfssl or any other tool if you prefer! [cfssl]: https://github.com/cloudflare/cfssl#using-the-command-line-tool [openssl]: https://www.openssl.org/docs/man3.0/man1/ .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## How many keys do we need? At the very least, we need the following two keys: - ServiceAccount key pair - API client key pair, aka "CA key" (technically, we will need a *certificate* for that key pair) But if we wanted to tighten the cluster security, we'd need many more... .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## The other keys These keys are not strictly necessary at this point: - etcd key pair *without that key, communication with etcd will be insecure* - API server endpoint key pair *the API server will generate this one automatically if we don't* - kubelet key pair (used by API server to connect to kubelets) *without that key, commands like kubectl logs/exec will be insecure* .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Would you like some auth with that? If we want to enable authentication and authorization, we also need various API client key pairs signed by the "CA key" mentioned earlier. That would include (non-exhaustive list): - controller manager key pair - scheduler key pair - in most cases: kube-proxy (or equivalent) key pair - in most cases: key pairs for the nodes joining the cluster (these might be generated through TLS bootstrap tokens) - key pairs for users that will interact with the clusters (unless another authentication mechanism like OIDC is used) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Generating our keys and certificates .lab[ - Generate the ServiceAccount key pair: ```bash openssl genrsa -out sa.key 2048 ``` - Generate the CA key pair: ```bash openssl genrsa -out ca.key 2048 ``` - Generate a self-signed certificate for the CA key: ```bash openssl x509 -new -key ca.key -out ca.cert -subj /CN=kubernetes/ ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting etcd - This one is easy! .lab[ - Start etcd: ```bash etcd ``` ] Note: if you want a bit of extra challenge, you can try to generate the etcd key pair and use it. (You will need to pass it to etcd and to the API server.) 
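Here is a rough sketch of that extra challenge (assuming OpenSSL 3.x for the `-addext` option; the certificate needs a SAN matching the address that clients will use):

```bash
# Generate a key pair and a self-signed certificate for etcd
openssl genrsa -out etcd.key 2048
openssl x509 -new -key etcd.key -out etcd.cert -subj /CN=etcd/ \
        -addext subjectAltName=IP:127.0.0.1,DNS:localhost

# Serve the etcd client API over TLS
etcd --cert-file=etcd.cert --key-file=etcd.key \
     --listen-client-urls=https://127.0.0.1:2379 \
     --advertise-client-urls=https://127.0.0.1:2379

# Then, when starting the API server, add:
#   --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=etcd.cert
```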
.debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting API server - We need to use the keys and certificate that we just generated .lab[ - Start the API server: ```bash sudo kube-apiserver \ --etcd-servers=http://localhost:2379 \ --service-account-signing-key-file=sa.key \ --service-account-issuer=https://kubernetes \ --service-account-key-file=sa.key \ --client-ca-file=ca.cert ``` ] The API server should now start. But can we really use it? 🤔 .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Trying `kubectl` - Let's try some simple `kubectl` command .lab[ - Try to list Namespaces: ```bash kubectl get namespaces ``` ] We're getting an error message like this one: ``` The connection to the server localhost:8080 was refused - did you specify the right host or port? ``` .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's going on? - Recent versions of Kubernetes don't support unauthenticated API access - The API server doesn't support listening on plain HTTP anymore - `kubectl` still tries to connect to `localhost:8080` by default - But there is nothing listening there - Our API server listens on port 6443, using TLS .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Trying to access the API server - Let's use `curl` first to confirm that everything works correctly (and then we will move to `kubectl`) .lab[ - Try to connect with `curl`: ```bash curl https://localhost:6443 # This will fail because the API server certificate is unknown. ``` - Try again, skipping certificate verification: ```bash curl --insecure https://localhost:6443 ``` ] We should now see an `Unauthorized` Kubernetes API error message. We need to authenticate with our key and certificate. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Authenticating with the API server - For the time being, we can use the CA key and cert directly - In a real world scenario, we would *never* do that! (because we don't want the CA key to be out there in the wild) .lab[ - Try again, skipping cert verification, and using the CA key and cert: ```bash curl --insecure --key ca.key --cert ca.cert https://localhost:6443 ``` ] We should see a list of API routes. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: extra-details ## Doing it right In the future, instead of using the CA key and certificate, we should generate a new key, and a certificate for that key, signed by the CA key. Then we can use that new key and certificate to authenticate. 
Example: ``` ### Generate a key pair openssl genrsa -out user.key ### Extract the public key openssl pkey -in user.key -out user.pub -pubout ### Generate a certificate signed by the CA key openssl x509 -new -key ca.key -force_pubkey user.pub -out user.cert \ -subj /CN=kubernetes-user/ ``` .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Writing a kubeconfig file - We now want to use `kubectl` instead of `curl` - We'll need to write a kubeconfig file for `kubectl` - There are many ways to do that; here, we're going to use `kubectl config` - We'll need to: - set the "cluster" (API server endpoint) - set the "credentials" (the key and certificate) - set the "context" (referencing the cluster and credentials) - use that context (make it the default that `kubectl` will use) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Set the cluster The "cluster" section holds the API server endpoint. .lab[ - Set the API server endpoint: ```bash kubectl config set-cluster polykube --server=https://localhost:6443 ``` - Don't verify the API server certificate: ```bash kubectl config set-cluster polykube --insecure-skip-tls-verify ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Set the credentials The "credentials" section can hold a TLS key and certificate, or a token, or configuration information for a plugin (for instance, when using AWS EKS or GCP GKE, they use a plugin). .lab[ - Set the client key and certificate: ```bash kubectl config set-credentials polykube \ --client-key ca.key \ --client-certificate ca.cert ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Set and use the context The "context" section references the "cluster" and "credentials" that we defined earlier. (It can also optionally reference a Namespace.) .lab[ - Set the "context": ```bash kubectl config set-context polykube --cluster polykube --user polykube ``` - Set that context to be the default context: ```bash kubectl config use-context polykube ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Review the kubeconfig file The kubeconfig file should look like this: .small[ ```yaml apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://localhost:6443 name: polykube contexts: - context: cluster: polykube user: polykube name: polykube current-context: polykube kind: Config preferences: {} users: - name: polykube user: client-certificate: /root/ca.cert client-key: /root/ca.key ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Trying the kubeconfig file - We should now be able to access our cluster's API! .lab[ - Try to list Namespaces: ```bash kubectl get namespaces ``` ] This should show the classic `default`, `kube-system`, etc. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: extra-details ## Do we need `--client-ca-file` ? Technically, we didn't need to specify the `--client-ca-file` flag! But without that flag, no client can be authenticated. Which means that we wouldn't be able to issue any API request! 
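To make this concrete, compare an anonymous request with an authenticated one (same commands as earlier in this section):

```bash
# No client certificate: the request is anonymous
curl --insecure https://localhost:6443
# ...returns an Unauthorized error message

# Client certificate signed by the CA passed in --client-ca-file
curl --insecure --key ca.key --cert ca.cert https://localhost:6443
# ...returns the list of API routes
# Without --client-ca-file, this request would be rejected too,
# since the API server would have no CA to validate our cert against
```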
.debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running pods - We can now try to create a Deployment .lab[ - Create a Deployment: ```bash kubectl create deployment blue --image=jpetazzo/color ``` - Check the results: ```bash kubectl get deployments,replicasets,pods ``` ] Our Deployment exists, but not the Replica Set or Pod. We need to run the controller manager. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running the controller manager - Previously, we used the `--master` flag to pass the API server address - Now, we need to authenticate properly - The simplest way at this point is probably to use the same kubeconfig file! .lab[ - Start the controller manager: ```bash kube-controller-manager --kubeconfig .kube/config ``` - Check the results: ```bash kubectl get deployments,replicasets,pods ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's next? - Normally, the last commands showed us a Pod in `Pending` state - We need two things to continue: - the scheduler (to assign the Pod to a Node) - a Node! - We're going to run `kubelet` to register the Node with the cluster .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running `kubelet` - Let's try to run `kubelet` and see what happens! .lab[ - Start `kubelet`: ```bash sudo kubelet ``` ] We should see an error about connecting to `containerd.sock`. We need to run a container engine! (For instance, `containerd`.) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running `containerd` - We need to install and start `containerd` - You could try another engine if you wanted (but there might be complications!) .lab[ - Install `containerd`: ```bash sudo apt-get install containerd ``` - Start `containerd`: ```bash sudo containerd ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: extra-details ## Configuring `containerd` Depending on how we install `containerd`, it might need a bit of extra configuration. Watch for the following symptoms: - `containerd` refuses to start (rare, unless there is an *invalid* configuration) - `containerd` starts but `kubelet` can't connect (could be the case if the configuration disables the CRI socket) - `containerd` starts and things work but Pods keep being killed (may happen if there is a mismatch in the cgroups driver) .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Starting `kubelet` for good - Now that `containerd` is running, `kubelet` should start! .lab[ - Try to start `kubelet`: ```bash sudo kubelet ``` - In another terminal, check if our Node is now visible: ```bash sudo kubectl get nodes ``` ] `kubelet` should now start, but our Node doesn't show up in `kubectl get nodes`! This is because without a kubeconfig file, `kubelet` runs in standalone mode:
it will not connect to a Kubernetes API server, and will only start *static pods*. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Passing the kubeconfig file - Let's start `kubelet` again, with our kubeconfig file .lab[ - Stop `kubelet` (e.g. with `Ctrl-C`) - Restart it with the kubeconfig file: ```bash sudo kubelet --kubeconfig .kube/config ``` - Check our list of Nodes: ```bash kubectl get nodes ``` ] This time, our Node should show up! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Node readiness - However, our Node shows up as `NotReady` - If we wait a few minutes, the `kubelet` logs will tell us why: *we're missing a CNI configuration!* - As a result, the containers can't be connected to the network - `kubelet` detects that and doesn't become `Ready` until this is fixed .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## CNI configuration - We need to provide a CNI configuration - This is a file in `/etc/cni/net.d` (the name of the file doesn't matter; the first file in lexicographic order will be used) - Usually, when installing a "CNI plugin¹", this file gets installed automatically - Here, we are going to write that file manually .footnote[¹Technically, a *pod network*; typically running as a DaemonSet, which will install the file with a `hostPath` volume.] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Our CNI configuration Create the following file in e.g. `/etc/cni/net.d/kube.conf`: ```json { "cniVersion": "0.3.1", "name": "kube", "type": "bridge", "bridge": "cni0", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" } } ``` That's all we need - `kubelet` will detect and validate the file automatically! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Checking our Node again - After a short time (typically about 10 seconds) the Node should be `Ready` .lab[ - Wait until the Node is `Ready`: ```bash kubectl get nodes ``` ] If the Node doesn't show up as `Ready`, check the `kubelet` logs. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's next? 
- At this point, we have a `Pending` Pod and a `Ready` Node - All we need is the scheduler to bind the former to the latter .lab[ - Run the scheduler: ```bash kube-scheduler --kubeconfig .kube/config ``` - Check that the Pod gets assigned to the Node and becomes `Running`: ```bash kubectl get pods ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Check network access - Let's check that we can connect to our Pod, and that the Pod can connect outside .lab[ - Get the Pod's IP address: ```bash kubectl get pods -o wide ``` - Connect to the Pod (make sure to update the IP address): ```bash curl `10.1.1.2` ``` - Check that the Pod has external connectivity too: ```bash kubectl exec `blue-xxxxxxxxxx-yyyyy` -- ping -c3 1.1 ``` ] .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Expose our Deployment - We can now try to expose the Deployment and connect to the ClusterIP .lab[ - Expose the Deployment: ```bash kubectl expose deployment blue --port=80 ``` - Retrieve the ClusterIP: ```bash kubectl get services ``` - Try to connect to the ClusterIP: ```bash curl `10.0.0.42` ``` ] At this point, it won't work - we need to run `kube-proxy`! .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## Running `kube-proxy` - We need to run `kube-proxy` (also passing it our kubeconfig file) .lab[ - Run `kube-proxy`: ```bash sudo kube-proxy --kubeconfig .kube/config ``` - Try again to connect to the ClusterIP: ```bash curl `10.0.0.42` ``` ] This time, it should work. .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- ## What's next? - Scale up the Deployment, and check that load balancing works properly - Enable RBAC, and generate individual certificates for each controller (check the [certificate paths][certpath] section in the Kubernetes documentation for a detailed list of all the certificates and keys that are used by the control plane, and which flags are used by which components to configure them!) - Add more nodes to the cluster *Feel free to try these if you want to get additional hands-on experience!* [certpath]: https://kubernetes.io/docs/setup/best-practices/certificates/#certificate-paths ??? 
:EN:- Setting up control plane certificates :EN:- Implementing a basic CNI configuration :FR:- Mettre en place les certificats du plan de contrôle :FR:- Réaliser une configuration CNI basique .debug[[k8s/dmuc-medium.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-medium.md)] --- class: pic .interstitial[] --- name: toc-building-our-own-cluster-hard class: title Building our own cluster (hard) .nav[ [Previous part](#toc-building-our-own-cluster-medium) | [Back to table of contents](#toc-part-2) | [Next part](#toc-cni-internals) ] .debug[(automatically generated title slide)] --- # Building our own cluster (hard) - This section assumes that you already went through *“Building our own cluster (medium)”* - In that previous section, we built a cluster with a single node - In this new section, we're going to add more nodes to the cluster - Note: we will need the lab environment of that previous section - If you haven't done it yet, you should go through that section first .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Our environment - On `polykube1`, we should have our Kubernetes control plane - We're also assuming that we have the kubeconfig file created earlier (in `~/.kube/config`) - We're going to work on `polykube2` and add it to the cluster - This machine has exactly the same setup as `polykube1` (Ubuntu LTS with CNI, etcd, and Kubernetes binaries installed) - Note that we won't need the etcd binaries here (the control plane will run solely on `polykube1`) .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Checklist We need to: - generate the kubeconfig file for `polykube2` - install a container engine - generate a CNI configuration file - start kubelet .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Generating the kubeconfig file - Ideally, we should generate a key pair and certificate for `polykube2`... - ...and generate a kubeconfig file using these - At the moment, for simplicity, we'll use the same key pair and certificate as earlier - We have a couple of options: - copy the required files (kubeconfig, key pair, certificate) - "flatten" the kubeconfig file (embed the key and certificate within) .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: extra-details ## To flatten or not to flatten?
- "Flattening" the kubeconfig file can seem easier (because it means we'll only have one file to move around) - But it's easier to rotate the key or renew the certificate when they're in separate files .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Flatten and copy the kubeconfig file - We'll flatten the file and copy it over .lab[ - On `polykube1`, flatten the kubeconfig file: ```bash kubectl config view --flatten > kubeconfig ``` - Then copy it to `polykube2`: ```bash scp kubeconfig polykube2: ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Generate CNI configuration Back on `polykube2`, put the following in `/etc/cni/net.d/kube.conf`: ```json { "cniVersion": "0.3.1", "name": "kube", "type": "bridge", "bridge": "cni0", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": `"10.1.2.0/24"` } } ``` Note how we changed the subnet! .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Install container engine and start `kubelet` .lab[ - Install `containerd`: ```bash sudo apt-get install containerd -y ``` - Start `containerd`: ```bash sudo systemctl start containerd ``` - Start `kubelet`: ```bash sudo kubelet --kubeconfig kubeconfig ``` ] We're getting errors looking like: ``` "Post \"https://localhost:6443/api/v1/nodes\": ... connect: connection refused" ``` .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Updating the kubeconfig file - Our kubeconfig file still references `localhost:6443` - This was fine on `polykube1` (where `kubelet` was connecting to the control plane running locally) - On `polykube2`, we need to change that and put the address of the API server (i.e. the address of `polykube1`) .lab[ - Update the `kubeconfig` file: ```bash sed -i s/localhost:6443/polykube1:6443/ kubeconfig ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Starting `kubelet` - `kubelet` should now start correctly (hopefully!) .lab[ - On `polykube2`, start `kubelet`: ```bash sudo kubelet --kubeconfig kubeconfig ``` - On `polykube1`, check that `polykube2` shows up and is `Ready`: ```bash kubectl get nodes ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Testing connectivity - From `polykube1`, can we connect to Pods running on `polykube2`? 🤔 .lab[ - Scale the test Deployment: ```bash kubectl scale deployment blue --replicas=5 ``` - Get the IP addresses of the Pods: ```bash kubectl get pods -o wide ``` - Pick a Pod on `polykube2` and try to connect to it: ```bash curl `10.1.2.2` ``` ] -- At that point, it doesn't work. .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Refresher on the *pod network* - The *pod network* (or *pod-to-pod network*) has a few responsibilities: - allocating and managing Pod IP addresses - connecting Pods and Nodes - connecting Pods together on a given node - *connecting Pods together across nodes* - That last part is the one that's not functioning in our cluster - It typically requires some combination of routing, tunneling, bridging... 
.debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Connecting networks together - We can add manual routes between our nodes - This requires adding `N x (N-1)` routes (on each node, add a route to every other node) - This will work on home labs where nodes are directly connected (e.g. on an Ethernet switch, or same WiFi network, or a bridge between local VMs) - ...Or on clouds where IP address filtering has been disabled (by default, most cloud providers will discard packets going to unknown IP addresses) - If IP address filtering is enabled, you'll have to use e.g. tunneling or overlay networks .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Important warning - The technique that we are about to use doesn't work everywhere - It only works if: - all the nodes are directly connected to each other (at layer 2) - the underlying network allows the IP addresses of our pods - If we are on physical machines connected by a switch: OK - If we are on virtual machines in a public cloud: NOT OK - on AWS, we need to disable "source and destination checks" on our instances - on OpenStack, we need to disable "port security" on our network ports .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Routing basics - We need to tell *each* node: "The subnet 10.1.N.0/24 is located on node N" (for all values of N) - This is how we add a route on Linux: ```bash ip route add 10.1.N.0/24 via W.X.Y.Z ``` (where `W.X.Y.Z` is the internal IP address of node N) - We can see the internal IP addresses of our nodes with: ```bash kubectl get nodes -o wide ``` .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## Adding our route - Let's add a route from `polykube1` to `polykube2` .lab[ - Check the internal address of `polykube2`: ```bash kubectl get node polykube2 -o wide ``` - Now, on `polykube1`, add the route to the Pods running on `polykube2`: ```bash sudo ip route add 10.1.2.0/24 via `A.B.C.D` ``` - Finally, check that we can now connect to a Pod running on `polykube2`: ```bash curl 10.1.2.2 ``` ] .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- ## What's next? - The network configuration feels very manual: - we had to generate the CNI configuration file (in `/etc/cni/net.d`) - we had to manually update the nodes' routing tables - Can we automate that? **YES!** - We could install something like [kube-router](https://www.kube-router.io/) (which specifically takes care of the CNI configuration file and populates routing tables) - Or we could also go with e.g. [Cilium](https://cilium.io/) .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: extra-details ## If you want to try Cilium... - Add the `--root-ca-file` flag to the controller manager: - use the certificate automatically generated by the API server
(it should be in `/var/run/kubernetes/apiserver.crt`) - or generate a key pair and certificate for the API server and point to that certificate - without that, you'll get certificate validation errors
(because in our Pods, the `ca.crt` file used to validate the API server will be empty) - Check the Cilium [without kube-proxy][ciliumwithoutkubeproxy] instructions (make sure to pass the API server IP address and port!) - Other pod-to-pod network implementations might also require additional steps [ciliumwithoutkubeproxy]: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/#kubeproxy-free .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: extra-details ## About the API server certificate... - In the previous sections, we've skipped API server certificate verification - To generate a proper certificate, we need to include a `subjectAltName` extension - And make sure that the CA includes the extension in the certificate ```bash openssl genrsa -out apiserver.key 4096 openssl req -new -key apiserver.key -subj /CN=kubernetes/ \ -addext "subjectAltName = DNS:kubernetes.default.svc, \ DNS:kubernetes.default, DNS:kubernetes, \ DNS:localhost, DNS:polykube1" -out apiserver.csr openssl x509 -req -in apiserver.csr -CAkey ca.key -CA ca.cert \ -out apiserver.crt -copy_extensions copy ``` ??? :EN:- Connecting nodes and pods :FR:- Interconnecter les nœuds et les pods .debug[[k8s/dmuc-hard.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/dmuc-hard.md)] --- class: pic .interstitial[] --- name: toc-cni-internals class: title CNI internals .nav[ [Previous part](#toc-building-our-own-cluster-hard) | [Back to table of contents](#toc-part-2) | [Next part](#toc-api-server-availability) ] .debug[(automatically generated title slide)] --- # CNI internals - Kubelet looks for a CNI configuration file (by default, in `/etc/cni/net.d`) - Note: if we have multiple files, the first one will be used (in lexicographic order) - If no configuration can be found, kubelet holds off on creating containers (except if they are using `hostNetwork`) - Let's see how exactly plugins are invoked! .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## General principle - A plugin is an executable program - It is invoked by kubelet to set up / tear down networking for a container - It doesn't take any command-line arguments - However, it uses environment variables to know what to do, which container, etc. - It reads JSON on stdin, and writes back JSON on stdout - There will generally be multiple plugins invoked in a row (at least IPAM + network setup; possibly more) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Environment variables - `CNI_COMMAND`: `ADD`, `DEL`, `CHECK`, or `VERSION` - `CNI_CONTAINERID`: opaque identifier (container ID of the "sandbox", i.e. the container running the `pause` image) - `CNI_NETNS`: path to network namespace pseudo-file (e.g. `/var/run/netns/cni-0376f625-29b5-7a21-6c45-6a973b3224e5`) - `CNI_IFNAME`: interface name, usually `eth0` - `CNI_PATH`: path(s) with plugin executables (e.g. 
`/opt/cni/bin`) - `CNI_ARGS`: "extra arguments" (see next slide) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## `CNI_ARGS` - Extra key/value pair arguments passed by "the user" - "The user", here, is "kubelet" (or in an abstract way, "Kubernetes") - This is used to pass the pod name and namespace to the CNI plugin - Example: ``` IgnoreUnknown=1 K8S_POD_NAMESPACE=default K8S_POD_NAME=web-96d5df5c8-jcn72 K8S_POD_INFRA_CONTAINER_ID=016493dbff152641d334d9828dab6136c1ff... ``` Note that technically, it's a `;`-separated list, so it really looks like this: ``` CNI_ARGS=IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-96d... ``` .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## JSON in, JSON out - The plugin reads its configuration on stdin - It writes back results in JSON (e.g. allocated address, routes, DNS...) ⚠️ "Plugin configuration" is not always the same as "CNI configuration"! .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Conf vs Conflist - The CNI configuration can be a single plugin configuration - it will then contain a `type` field in the top-most structure - it will be passed "as-is" - It can also be a "conflist", containing a chain of plugins (it will then contain a `plugins` field in the top-most structure) - Plugins are then invoked in order (reverse order for `DEL` action) - In that case, the input of the plugin is not the whole configuration (see details on next slide) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## List of plugins - When invoking a plugin in a list, the JSON input will be: - the configuration of the plugin - augmented with `name` (matching the conf list `name`) - augmented with `prevResult` (which will be the output of the previous plugin) - Conceptually, a plugin (generally the first one) will do the "main setup" - Other plugins can do tuning / refinement (firewalling, traffic shaping...) .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Analyzing plugins - Let's see what goes in and out of our CNI plugins! - We will create a fake plugin that: - saves its environment and input - executes the real plugin with the saved input - saves the plugin output - passes the saved output back to the caller .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Our fake plugin

```bash
#!/bin/sh
### Name of the plugin we're impersonating (this script gets symlinked once per plugin)
PLUGIN=$(basename $0)
### Save the JSON configuration received on stdin
cat > /tmp/cni.$$.$PLUGIN.in
### Save the environment (CNI_COMMAND, CNI_NETNS, CNI_ARGS, etc.)
env | sort > /tmp/cni.$$.$PLUGIN.env
### Record which process invoked us
echo "PPID=$PPID, $(readlink /proc/$PPID/exe)" > /tmp/cni.$$.$PLUGIN.parent
### Run the real plugin with the saved input, and save its output
$0.real < /tmp/cni.$$.$PLUGIN.in > /tmp/cni.$$.$PLUGIN.out
EXITSTATUS=$?
### Pass the saved output and exit status back to the caller
cat /tmp/cni.$$.$PLUGIN.out
exit $EXITSTATUS
```

Save this script as `/opt/cni/bin/debug` and make it executable. .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Substituting the fake plugin - For each plugin that we want to instrument: - rename the plugin from e.g. 
`ptp` to `ptp.real` - symlink `ptp` to our `debug` plugin - There is no need to change the CNI configuration or restart kubelet .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- ## Create some pods and look at the results - Create a pod - For each instrumented plugin, there will be files in `/tmp`: `cni.PID.pluginname.in` (JSON input) `cni.PID.pluginname.env` (environment variables) `cni.PID.pluginname.parent` (parent process information) `cni.PID.pluginname.out` (JSON output) ❓️ What is calling our plugins? ??? :EN:- Deep dive into CNI internals :FR:- La Container Network Interface (CNI) en détails .debug[[k8s/cni-internals.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cni-internals.md)] --- class: pic .interstitial[] --- name: toc-api-server-availability class: title API server availability .nav[ [Previous part](#toc-cni-internals) | [Back to table of contents](#toc-part-3) | [Next part](#toc-setting-up-kubernetes) ] .debug[(automatically generated title slide)] --- # API server availability - When we set up a node, we need the address of the API server: - for kubelet - for kube-proxy - sometimes for the pod network system (like kube-router) - How do we ensure the availability of that endpoint? (what if the node running the API server goes down?) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 1: external load balancer - Set up an external load balancer - Point kubelet (and other components) to that load balancer - Put the node(s) running the API server behind that load balancer - Update the load balancer if/when an API server node needs to be replaced - On cloud infrastructures, some mechanisms provide automation for this (e.g. on AWS, an Elastic Load Balancer + Auto Scaling Group) - [Example in Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#the-kubernetes-frontend-load-balancer) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 2: local load balancer - Set up a load balancer (like NGINX, HAProxy...) on *each* node - Configure that load balancer to send traffic to the API server node(s) - Point kubelet (and other components) to `localhost` - Update the load balancer configuration when API server nodes are updated .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Updating the local load balancer config - Distribute the updated configuration (push) - Or regularly check for updates (pull) - The latter requires an external, highly available store (it could be an object store, an HTTP server, or even DNS...) - Updates can be facilitated by a DaemonSet (but remember that it can't be used when installing a new node!) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 3: DNS records - Put all the API server nodes behind a round-robin DNS - Point kubelet (and other components) to that name - Update the records when needed - Note: this option is not officially supported (but since kubelet supports reconnection anyway, it *should* work) .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- ## Option 4: ...................... 
- Many managed clusters expose a high-availability API endpoint (and you don't have to worry about it) - You can also use HA mechanisms that you're familiar with (e.g. virtual IPs) - Tunnels are also fine (e.g. [k3s](https://k3s.io/) uses a tunnel to allow each node to contact the API server) ??? :EN:- Ensuring API server availability :FR:- Assurer la disponibilité du serveur API .debug[[k8s/apilb.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/apilb.md)] --- class: pic .interstitial[] --- name: toc-setting-up-kubernetes class: title Setting up Kubernetes .nav[ [Previous part](#toc-api-server-availability) | [Back to table of contents](#toc-part-3) | [Next part](#toc-deploying-a-managed-cluster) ] .debug[(automatically generated title slide)] --- # Setting up Kubernetes - Kubernetes is made of many components that require careful configuration - Secure operation typically requires TLS certificates and a local CA (certificate authority) - Setting up everything manually is possible, but rarely done (except for learning purposes) - Let's do a quick overview of available options! .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Local development - Are you writing code that will eventually run on Kubernetes? - Then it's a good idea to have a development cluster! - Instead of shipping container images, we can test them on Kubernetes - Extremely useful when authoring or testing Kubernetes-specific objects (ConfigMaps, Secrets, StatefulSets, Jobs, RBAC, etc.) - Extremely convenient to quickly test/check what a particular thing looks like (e.g. what are the fields of a Deployment spec?) .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## One-node clusters - It's perfectly fine to work with a cluster that has only one node - It simplifies a lot of things: - pod networking doesn't even need CNI plugins, overlay networks, etc.
- these clusters can be fully contained (no pun intended) in an easy-to-ship VM or container image - some of the security aspects may be simplified (different threat model) - images can be built directly on the node (we don't need to ship them with a registry) - Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube (some of these also support clusters with multiple nodes) .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Managed clusters ("Turnkey Solutions") - Many cloud providers and hosting providers offer "managed Kubernetes" - The deployment and maintenance of the *control plane* is entirely managed by the provider (ideally, clusters can be spun up automatically through an API, CLI, or web interface) - Given the complexity of Kubernetes, this approach is *strongly recommended* (at least for your first production clusters) - After working for a while with Kubernetes, you will be better equipped to decide: - whether to operate it yourself or use a managed offering - which offering or which distribution works best for you and your needs .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Node management - Most "Turnkey Solutions" offer fully managed control planes (including control plane upgrades, sometimes done automatically) - However, with most providers, we still need to take care of *nodes* (provisioning, upgrading, scaling the nodes) - Example with Amazon EKS ["managed node groups"](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html): *...when bugs or issues are reported [...] you're responsible for deploying these patched AMI versions to your managed node groups.* .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Managed clusters differences - Most providers let you pick which Kubernetes version you want - some providers offer up-to-date versions - others lag significantly (sometimes by 2 or 3 minor versions) - Some providers offer multiple networking or storage options - Others will only support one, tied to their infrastructure (changing that is in theory possible, but might be complex or unsupported) - Some providers let you configure or customize the control plane (generally through Kubernetes "feature gates") .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Choosing a provider - Pricing models differ from one provider to another - nodes are generally charged at their usual price - control plane may be free or incur a small nominal fee - Beyond pricing, there are *huge* differences in features between providers - The "major" providers are not always the best ones! - See [this page](https://kubernetes.io/docs/setup/production-environment/turnkey-solutions/) for a list of available providers .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Kubernetes distributions and installers - If you want to run Kubernetes yourselves, there are many options (free, commercial, proprietary, open source ...) - Some of them are installers, while some are complete platforms - Some of them leverage other well-known deployment tools (like Puppet, Terraform ...) - There are too many options to list them all (check [this page](https://kubernetes.io/partners/#conformance) for an overview!) 
.debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## kubeadm - kubeadm is a tool that is part of Kubernetes and facilitates cluster setup - Many other installers and distributions use it (but not all of them) - It can also be used by itself - Excellent starting point to install Kubernetes on your own machines (virtual, physical, it doesn't matter) - It even supports highly available control planes, or "multi-master" (this is more complex, though, because it introduces the need for an API load balancer) .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## Manual setup - The resources below are mainly for educational purposes! - [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) by Kelsey Hightower *step by step guide to install Kubernetes on GCP, with certificates, HA...* - [Deep Dive into Kubernetes Internals for Builders and Operators](https://www.youtube.com/watch?v=3KtEAa7_duA) *conference talk setting up a simplified Kubernetes cluster - no security or HA* - 🇫🇷[Démystifions les composants internes de Kubernetes](https://www.youtube.com/watch?v=OCMNA0dSAzc) *improved version of the previous one, with certs and recent k8s versions* .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## About our training clusters - How did we set up these Kubernetes clusters that we're using? -- - We used `kubeadm` on freshly installed VM instances running Ubuntu LTS 1. Install Docker 2. Install Kubernetes packages 3. Run `kubeadm init` on the first node (it deploys the control plane on that node) 4. Set up Weave (the overlay network) with a single `kubectl apply` command 5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`) 6. Copy the configuration file generated by `kubeadm init` - Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- ## `kubeadm` "drawbacks" - Doesn't set up Docker or any other container engine (this is by design, to give us choice) - Doesn't set up the overlay network (this is also by design, for the same reasons) - HA control plane requires [some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/) - Note that HA control plane also requires setting up a specific API load balancer (which is beyond the scope of kubeadm) ??? :EN:- Various ways to install Kubernetes :FR:- Survol des techniques d'installation de Kubernetes .debug[[k8s/setup-overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-overview.md)] --- class: pic .interstitial[] --- name: toc-deploying-a-managed-cluster class: title Deploying a managed cluster .nav[ [Previous part](#toc-setting-up-kubernetes) | [Back to table of contents](#toc-part-3) | [Next part](#toc-kubernetes-distributions-and-installers) ] .debug[(automatically generated title slide)] --- # Deploying a managed cluster *"The easiest way to install Kubernetes is to get someone else to do it for you."
([Jérôme Petazzoni](https://twitter.com/jpetazzo))* - Let's see a few options to install managed clusters! - This is not an exhaustive list (the goal is to show the actual steps to get started) - The list is sorted alphabetically - All the options mentioned here require an account with a cloud provider - ... And a credit card .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## AKS (initial setup) - Install the Azure CLI - Login: ```bash az login ``` - Select a [region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=kubernetes-service&regions=all) - Create a "resource group": ```bash az group create --name my-aks-group --location westeurope ``` .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## AKS (create cluster) - Create the cluster: ```bash az aks create --resource-group my-aks-group --name my-aks-cluster ``` - Wait about 5-10 minutes - Add credentials to `kubeconfig`: ```bash az aks get-credentials --resource-group my-aks-group --name my-aks-cluster ``` .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## AKS (cleanup) - Delete the cluster: ```bash az aks delete --resource-group my-aks-group --name my-aks-cluster ``` - Delete the resource group: ```bash az group delete --name my-aks-group ``` - Note: delete actions can take a while too! (5-10 minutes as well) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## AKS (notes) - The cluster has useful components pre-installed, such as the metrics server - There is also a product called [AKS Engine](https://github.com/Azure/aks-engine): - leverages ARM (Azure Resource Manager) templates to deploy Kubernetes - it's "the library used by AKS" - fully customizable - think of it as a "half-managed" Kubernetes option .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Amazon EKS (the old way) - [Read the doc](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html) - Create service roles, VPCs, and a bunch of other oddities - Try to figure out why it doesn't work - Start over, following an [official AWS blog post](https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/) - Try to find the missing Cloud Formation template -- .footnote[(╯°□°)╯︵ ┻━┻] .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Amazon EKS (the new way) - Install `eksctl` - Set the usual environment variables ([AWS_DEFAULT_REGION](https://docs.aws.amazon.com/general/latest/gr/rande.html#eks_region), AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) - Create the cluster: ```bash eksctl create cluster ``` - Cluster can take a long time to be ready (15-20 minutes is typical) - Add cluster add-ons (by default, it doesn't come with metrics-server, logging, etc.) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Amazon EKS (cleanup) - Delete the cluster: ```bash eksctl delete cluster
``` - If you need to find the name of the cluster: ```bash eksctl get clusters ``` .footnote[Note: the AWS documentation has been updated and now includes [eksctl instructions](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).] .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Amazon EKS (notes) - Convenient if you *have to* use AWS - Needs extra steps to be truly production-ready - [Versions tend to be outdated](https://twitter.com/jpetazzo/status/1252948707680686081) - The only officially supported pod network is the [Amazon VPC CNI plugin](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) - integrates tightly with security groups and VPC networking - not suitable for high density clusters (with many small pods on big nodes) - other plugins [should still work](https://docs.aws.amazon.com/eks/latest/userguide/alternate-cni-plugins.html) but will require extra work .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Digital Ocean (initial setup) - Install `doctl` - Generate API token (in web console) - Set up the CLI authentication: ```bash doctl auth init ``` (It will ask you for the API token) - Check the list of regions and pick one: ```bash doctl compute region list ``` (If you don't specify the region later, it will use `nyc1`) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Digital Ocean (create cluster) - Create the cluster: ```bash doctl kubernetes cluster create my-do-cluster [--region xxx1] ``` - Wait 5 minutes - Update `kubeconfig`: ```bash kubectl config use-context do-xxx1-my-do-cluster ``` - The cluster comes with some components (like Cilium) but no metrics server .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Digital Ocean (cleanup) - List clusters (if you forgot the name): ```bash doctl kubernetes cluster list ``` - Delete the cluster: ```bash doctl kubernetes cluster delete my-do-cluster ``` .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## GKE (initial setup) - Install `gcloud` - Login: ```bash gcloud auth login ``` - Create a "project": ```bash gcloud projects create my-gke-project gcloud config set project my-gke-project ``` - Pick a [region](https://cloud.google.com/compute/docs/regions-zones/) (example: `europe-west1`, `us-west1`, ...) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## GKE (create cluster) - Create the cluster: ```bash gcloud container clusters create my-gke-cluster --region us-west1 --num-nodes=2 ``` (without `--num-nodes` you might exhaust your IP address quota!) - The first time you try to create a cluster in a given project, you get an error - you need to enable the Kubernetes Engine API - the error message gives you a link - follow the link and enable the API (and billing)
(it's just a couple of clicks and it's instantaneous) - Cluster should be ready in a couple of minutes .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## GKE (cleanup) - List clusters (if you forgot the name): ```bash gcloud container clusters list ``` - Delete the cluster: ```bash gcloud container clusters delete my-gke-cluster --region us-west1 ``` - Delete the project (optional): ```bash gcloud projects delete my-gke-project ``` .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## GKE (notes) - Well-rounded product overall (it used to be one of the best managed Kubernetes offerings available; now that many other providers entered the game, that title is debatable) - The cluster comes with many add-ons - Versions lag a bit: - latest minor version (e.g. 1.18) tends to be unsupported - previous minor version (e.g. 1.17) supported through alpha channel - previous versions (e.g. 1.14-1.16) supported .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Scaleway (initial setup) - After creating your account, make sure you set a password or get an API key (by default, it uses email "magic links" to sign in) - Install `scw` (you need [CLI v2](https://github.com/scaleway/scaleway-cli/tree/v2#Installation), which is in beta as of May 2020) - Generate the CLI configuration with `scw init` (it will prompt for your API key, or email + password) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Scaleway (create cluster) - Create the cluster: ```bash scw k8s cluster create name=my-kapsule-cluster version=1.18.3 cni=cilium \ default-pool-config.node-type=DEV1-M default-pool-config.size=3 ``` - After less than 5 minutes, cluster state will be `ready` (check cluster status with e.g. `scw k8s cluster list` on a wide terminal) - Add connection information to your `.kube/config` file: ```bash scw k8s kubeconfig install `CLUSTERID` ``` (the cluster ID is shown by `scw k8s cluster list`) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- class: extra-details ## Scaleway (automation) - If you want to obtain the cluster ID programmatically, this will do it: ```bash scw k8s cluster list # or CLUSTERID=$(scw k8s cluster list -o json | \ jq -r '.[] | select(.name=="my-kapsule-cluster") | .id') ``` .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Scaleway (cleanup) - Get cluster ID (e.g. with `scw k8s cluster list`) - Delete the cluster: ```bash scw k8s cluster delete cluster-id=$CLUSTERID ``` - Warning: as of May 2020, load balancers have to be deleted separately!
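To avoid such leftovers, one approach (a hedged sketch, not specific to Scaleway) is to delete all Services of type `LoadBalancer` before deleting the cluster, so that the cloud controller tears down the corresponding load balancers:

```bash
### List Services of type LoadBalancer in all namespaces:
kubectl get services --all-namespaces -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'

### Then delete each of them (namespace and name below are placeholders):
kubectl delete service my-service --namespace my-namespace
```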
.debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## Scaleway (notes) - The `create` command is a bit more complex than with other providers (you must specify the Kubernetes version, CNI plugin, and node type) - To see available versions and CNI plugins, run `scw k8s version list` - As of May 2020, Kapsule supports: - multiple CNI plugins, including: cilium, calico, weave, flannel - Kubernetes versions 1.15 to 1.18 - multiple container runtimes, including: Docker, containerd, CRI-O - To see available node types and their price, check their [pricing page]( https://www.scaleway.com/en/pricing/) .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- ## More options - Alibaba Cloud - [IBM Cloud](https://console.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_install) - [Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/) - OVHcloud [Managed Kubernetes Service](https://www.ovhcloud.com/en/public-cloud/kubernetes/) - ... ??? :EN:- Installing a managed cluster :FR:- Installer un cluster infogéré .debug[[k8s/setup-managed.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-managed.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-distributions-and-installers class: title Kubernetes distributions and installers .nav[ [Previous part](#toc-deploying-a-managed-cluster) | [Back to table of contents](#toc-part-3) | [Next part](#toc-upgrading-clusters) ] .debug[(automatically generated title slide)] --- # Kubernetes distributions and installers - Sometimes, we need to run Kubernetes ourselves (as opposed to "use a managed offering") - Beware: it takes *a lot of work* to set up and maintain Kubernetes - It might be necessary if you have specific security or compliance requirements (e.g. national security for states that don't have a suitable domestic cloud) - There are [countless](https://kubernetes.io/docs/setup/pick-right-solution/) distributions available - We can't review them all - We're just going to explore a few options .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## [kops](https://github.com/kubernetes/kops) - Deploys Kubernetes using cloud infrastructure (supports AWS, GCE, Digital Ocean ...) - Leverages special cloud features when possible (e.g. Auto Scaling Groups ...) 
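For illustration, here is a hedged sketch of a typical kops workflow on AWS (the cluster name, state bucket, and zone below are placeholders):

```bash
### kops keeps cluster specs in a state store (here, a hypothetical S3 bucket):
export KOPS_STATE_STORE=s3://my-kops-state-bucket

### Generate the cluster configuration, then apply it:
kops create cluster --name=cluster.example.com --zones=us-east-1a
kops update cluster --name=cluster.example.com --yes
```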
.debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## kubeadm - Provisions Kubernetes nodes on top of existing machines - `kubeadm init` to provision a single-node control plane - `kubeadm join` to join a node to the cluster - Supports HA control plane [with some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/) .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## [kubespray](https://github.com/kubernetes-incubator/kubespray) - Based on Ansible - Works on bare metal and cloud infrastructure (good for hybrid deployments) - The expert says: ultra flexible; slow; complex .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## RKE (Rancher Kubernetes Engine) - Opinionated installer with low requirements - Requires a set of machines with Docker + SSH access - Supports highly available etcd and control plane - The expert says: fast; maintenance can be tricky .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## Terraform + kubeadm - Sometimes it is necessary to build a custom solution - Example use case: - deploying Kubernetes on OpenStack - ... with highly available control plane - ... and Cloud Controller Manager integration - Solution: Terraform + kubeadm (kubeadm driven by remote-exec) - [GitHub repository](https://github.com/enix/terraform-openstack-kubernetes) - [Blog post (in French)](https://enix.io/fr/blog/deployer-kubernetes-1-13-sur-openstack-grace-a-terraform/) .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## And many more ... - [AKS Engine](https://github.com/Azure/aks-engine) - Docker Enterprise Edition - [Lokomotive](https://github.com/kinvolk/lokomotive), leveraging Terraform and [Flatcar Linux](https://www.flatcar-linux.org/) - Pivotal Container Service (PKS) - [Tarmak](https://github.com/jetstack/tarmak), leveraging Puppet and Terraform - Tectonic by CoreOS (now being integrated into Red Hat OpenShift) - [Typhoon](https://typhoon.psdn.io/), leveraging Terraform - VMware Tanzu Kubernetes Grid (TKG) .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- ## Bottom line - Each distribution / installer has pros and cons - Before picking one, we should sort out our priorities: - cloud, on-premises, hybrid? - integration with existing network/storage architecture or equipment? - are we storing very sensitive data, like finance, health, military? - how many clusters are we deploying (and maintaining): 2, 10, 50? - which team will be responsible for deployment and maintenance?
(do they need training?) - etc. ??? :EN:- Kubernetes distributions and installers :FR:- L'offre Kubernetes "on premises" .debug[[k8s/setup-selfhosted.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/setup-selfhosted.md)] --- class: pic .interstitial[] --- name: toc-upgrading-clusters class: title Upgrading clusters .nav[ [Previous part](#toc-kubernetes-distributions-and-installers) | [Back to table of contents](#toc-part-3) | [Next part](#toc-static-pods) ] .debug[(automatically generated title slide)] --- # Upgrading clusters - It's *recommended* to run consistent versions across a cluster (mostly to have feature parity and latest security updates) - It's not *mandatory* (otherwise, cluster upgrades would be a nightmare!) - Components can be upgraded one at a time without problems .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we're running - It's easy to check the version for the API server .lab[ - Log into node `oldversion1` - Check the version of kubectl and of the API server: ```bash kubectl version ``` ] - In a HA setup with multiple API servers, they can have different versions - Running the command above multiple times can return different values .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Node versions - It's also easy to check the version of kubelet .lab[ - Check node versions (includes kubelet, kernel, container engine): ```bash kubectl get nodes -o wide ``` ] - Different nodes can run different kubelet versions - Different nodes can run different kernel versions - Different nodes can run different container engines .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Control plane versions - If the control plane is self-hosted (running in pods), we can check it .lab[ - Show image versions for all pods in `kube-system` namespace: ```bash kubectl --namespace=kube-system get pods -o json \ | jq -r ' .items[] | [.spec.nodeName, .metadata.name] + (.spec.containers[].image | split(":")) | @tsv ' \ | column -t ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## What version are we running anyway? - When I say, "I'm running Kubernetes 1.28", is that the version of: - kubectl - API server - kubelet - controller manager - something else? .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Other versions that are important - etcd - kube-dns or CoreDNS - CNI plugin(s) - Network controller, network policy controller - Container engine - Linux kernel .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Important questions - Should we upgrade the control plane before or after the kubelets? - Within the control plane, should we upgrade the API server first or last? - How often should we upgrade? - How long are versions maintained? - All the answers are in [the documentation about version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/)! - Let's review the key elements together ... 
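As we do, it can help to check what's actually running; for instance, this hedged one-liner lists the kubelet version reported by each node (these are standard fields of the Node object):

```bash
kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
```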
.debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Kubernetes uses semantic versioning - Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.28.9: - MAJOR = 1 - MINOR = 28 - PATCH = 9 - It's always possible to mix and match different PATCH releases (e.g. 1.28.9 and 1.28.13 are compatible) - It is recommended to run the latest PATCH release (but it's mandatory only when there is a security advisory) .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Version skew - API server must be more recent than its clients (kubelet and control plane) - ... Which means it must always be upgraded first - All components support a difference of one¹ MINOR version - This allows live upgrades (since we can mix e.g. 1.28 and 1.29) - It also means that going from 1.28 to 1.30 requires going through 1.29 .footnote[¹Except kubelet, which can be up to two MINOR behind API server, and kubectl, which can be one MINOR ahead or behind API server.] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Release cycle - There is a new PATCH release whenever necessary (every few weeks, or "ASAP" when there is a security vulnerability) - There is a new MINOR release every 3 months (approximately) - At any given time, three MINOR releases are maintained - ... Which means that MINOR releases are maintained for approximately 9 months - We should expect to upgrade at least every 3 months (on average) .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## General guidelines - To update a component, use whatever was used to install it - If it's a distro package, update that distro package - If it's a container or pod, update that container or pod - If you used configuration management, update with that .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Know where your binaries come from - Sometimes, we need to upgrade *quickly* (when a vulnerability is announced and patched) - If we are using an installer, we should: - make sure it's using upstream packages - or make sure that whatever packages it uses are current - make sure we can tell it to pin specific component versions .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## In practice - We are going to update a few cluster components - We will change the kubelet version on one node - We will change the version of the API server - We will work with cluster `oldversion` (nodes `oldversion1`, `oldversion2`, `oldversion3`) .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Updating the API server - This cluster has been deployed with kubeadm - The control plane runs in *static pods* - These pods are started automatically by kubelet (even when kubelet can't contact the API server) - They are defined in YAML files in `/etc/kubernetes/manifests` (this path is set by a kubelet command-line flag) - kubelet automatically updates the pods when the files are changed .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Changing the API server version - We will edit the YAML file to use a
different image version .lab[ - Log into node `oldversion1` - Check API server version: ```bash kubectl version ``` - Edit the API server pod manifest: ```bash sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` - Look for the `image:` line, and update it to e.g. `v1.30.1` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we've done - The API server will be briefly unavailable while kubelet restarts it .lab[ - Check the API server version: ```bash kubectl version ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Was that a good idea? -- **No!** -- - Remember the guideline we gave earlier: *To update a component, use whatever was used to install it.* - This control plane was deployed with kubeadm - We should use kubeadm to upgrade it! .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Updating the whole control plane - Let's make it right, and use kubeadm to upgrade the entire control plane (note: this is possible only because the cluster was installed with kubeadm) .lab[ - Check what will be upgraded: ```bash sudo kubeadm upgrade plan ``` ] Note 1: kubeadm thinks that our cluster is running 1.30.1.<br/>
It is confused by our manual upgrade of the API server! Note 2: kubeadm itself is still version 1.28.X.<br/>
It doesn't know how to upgrade to 1.29.X. .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Upgrading kubeadm - First things first: we need to upgrade kubeadm - The Kubernetes package repositories are now split by minor versions (i.e. there is one repository for 1.28, another for 1.29, etc.) - This avoids accidentally upgrading from one minor version to another (e.g. with unattended upgrades or if packages haven't been held/pinned) - We'll need to add the new package repository and unpin packages! .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Installing the new packages - Edit `/etc/apt/sources.list.d/kubernetes.list` (or copy it to e.g. `kubernetes-1.29.list` and edit that) - `apt-get update` - Now edit (or remove) `/etc/apt/preferences.d/kubernetes` - `apt-get install kubeadm` should now upgrade `kubeadm` correctly! 🎉 .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Reverting our manual API server upgrade - First, we should revert our `image:` change (so that kubeadm executes the right migration steps) .lab[ - Edit the API server pod manifest: ```bash sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` - Look for the `image:` line, and restore it to the original value (e.g. `v1.28.9`) - Wait for the control plane to come back up ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Upgrading the cluster with kubeadm - Now we can let kubeadm do its job! .lab[ - Check the upgrade plan: ```bash sudo kubeadm upgrade plan ``` - Perform the upgrade: ```bash sudo kubeadm upgrade apply v1.29.0 ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Updating kubelet - These nodes have been installed using the official Kubernetes packages - We can therefore use `apt` or `apt-get` .lab[ - Log into node `oldversion2` - Update package lists and APT pins like we did before - Then upgrade kubelet ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we've done .lab[ - Log into node `oldversion1` - Check node versions: ```bash kubectl get nodes -o wide ``` - Create a deployment and scale it to make sure that the node still works ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Was that a good idea? -- **Almost!** -- - Yes, kubelet was installed with distribution packages - However, kubeadm took care of configuring kubelet (when doing `kubeadm join ...`) - We were supposed to run a special command *before* upgrading kubelet!
- That command should be executed on each node - It will download the kubelet configuration generated by kubeadm .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Upgrading kubelet the right way - We need to upgrade kubeadm, upgrade kubelet config, then upgrade kubelet (after upgrading the control plane) .lab[ - Execute the whole upgrade procedure on each node: ```bash for N in 1 2 3; do ssh oldversion$N " sudo sed -i s/1.28/1.29/ /etc/apt/sources.list.d/kubernetes.list && sudo rm /etc/apt/preferences.d/kubernetes && sudo apt update && sudo apt install kubeadm -y && sudo kubeadm upgrade node && sudo apt install kubelet -y" done ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Checking what we've done - All our nodes should now be updated to version 1.29 .lab[ - Check nodes versions: ```bash kubectl get nodes -o wide ``` ] .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## And now, was that a good idea? -- **Almost!** -- - The official recommendation is to *drain* a node before performing node maintenance (migrate all workloads off the node before upgrading it) - How do we do that? - Is it really necessary? - Let's see! .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Draining a node - This can be achieved with the `kubectl drain` command, which will: - *cordon* the node (prevent new pods from being scheduled there) - *evict* all the pods running on the node (delete them gracefully) - the evicted pods will automatically be recreated somewhere else - evictions might be blocked in some cases (Pod Disruption Budgets, `emptyDir` volumes) - Once the node is drained, it can safely be upgraded, restarted... - Once it's ready, it can be put back in commission with `kubectl uncordon` .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Is it necessary? - When upgrading kubelet from one patch-level version to another: - it's *probably fine* - When upgrading system packages: - it's *probably fine* - except [when it's not][datadog-systemd-outage] - When upgrading the kernel: - it's *probably fine* - ...as long as we can tolerate a restart of the containers on the node - ...and that they will be unavailable for a few minutes (during the reboot) [datadog-systemd-outage]: https://www.datadoghq.com/blog/engineering/2023-03-08-deep-dive-into-platform-level-impact/ .debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)] --- ## Is it necessary? - When upgrading kubelet from one minor version to another: - it *may or may not be fine* - in some cases (e.g. migrating from Docker to containerd) it *will not* - Here's what [the documentation][node-upgrade-docs] says: *Draining nodes before upgrading kubelet ensures that pods are re-admitted and containers are re-created, which may be necessary to resolve some security issues or other important bugs.* - Do it at your own risk, and if you do, test extensively in staging environments! 
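For reference, a drain-then-maintain sequence could look like this (a sketch; the node name is just an example, and the flags shown are the usual ones needed when DaemonSet pods and `emptyDir` volumes are present):

```bash
# Evict all pods from the node (DaemonSet pods can't be evicted, so skip them;
# --delete-emptydir-data acknowledges that emptyDir contents will be lost)
kubectl drain oldversion2 --ignore-daemonsets --delete-emptydir-data

# ... upgrade kubelet, reboot, etc. ...

# Put the node back into rotation
kubectl uncordon oldversion2
```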
[node-upgrade-docs]: https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/#manual-deployments

.debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)]

---

## Database operators to the rescue

- Moving stateful pods (e.g. database servers) can cause downtime

- Database replication can help:

  - if a node contains database servers, we make sure these servers aren't primaries

  - if they are primaries, we execute a *switchover*

- Some database operators (e.g. [CNPG]) will perform that switchover automatically

  (when they detect that a node has been *cordoned*)

[CNPG]: https://cloudnative-pg.io/

.debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)]

---

class: extra-details

## Skipping versions

- This example worked because we went from 1.28 to 1.29

- If you are upgrading from e.g. 1.26, you will have to go through 1.27 first

- This means upgrading kubeadm to 1.27.X, then using it to upgrade the cluster

- Then upgrading kubeadm to 1.28.X, etc.

- **Make sure to read the release notes before upgrading!**

???

:EN:- Best practices for cluster upgrades
:EN:- Example: upgrading a kubeadm cluster
:FR:- Bonnes pratiques pour la mise à jour des clusters
:FR:- Exemple : mettre à jour un cluster kubeadm

.debug[[k8s/cluster-upgrade.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-upgrade.md)]

---

class: pic

.interstitial[]

---

name: toc-static-pods
class: title

Static pods

.nav[
[Previous part](#toc-upgrading-clusters)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-backing-up-clusters)
]

.debug[(automatically generated title slide)]

---

# Static pods

- Hosting the Kubernetes control plane on Kubernetes has advantages:

  - we can use Kubernetes' replication and scaling features for the control plane

  - we can leverage rolling updates to upgrade the control plane

- However, there is a catch:

  - deploying on Kubernetes requires the API to be available

  - the API won't be available until the control plane is deployed

- How can we get out of that chicken-and-egg problem?

.debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)]

---

## A possible approach

- Since each component of the control plane can be replicated...

- We could set up the control plane outside of the cluster

- Then, once the cluster is fully operational, create replicas running on the cluster

- Finally, remove the replicas that are running outside of the cluster

*What could possibly go wrong?*

.debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)]

---

## Sawing off the branch you're sitting on

- What if anything goes wrong?
(During the setup or at a later point) - Worst case scenario, we might need to: - set up a new control plane (outside of the cluster) - restore a backup from the old control plane - move the new control plane to the cluster (again) - This doesn't sound like a great experience .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## Static pods to the rescue - Pods are started by kubelet (an agent running on every node) - To know which pods it should run, the kubelet queries the API server - The kubelet can also get a list of *static pods* from: - a directory containing one (or multiple) *manifests*, and/or - a URL (serving a *manifest*) - These "manifests" are basically YAML definitions (As produced by `kubectl get pod my-little-pod -o yaml`) .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## Static pods are dynamic - Kubelet will periodically reload the manifests - It will start/stop pods accordingly (i.e. it is not necessary to restart the kubelet after updating the manifests) - When connected to the Kubernetes API, the kubelet will create *mirror pods* - Mirror pods are copies of the static pods (so they can be seen with e.g. `kubectl get pods`) .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## Bootstrapping a cluster with static pods - We can run control plane components with these static pods - They can start without requiring access to the API server - Once they are up and running, the API becomes available - These pods are then visible through the API (We cannot upgrade them from the API, though) *This is how kubeadm has initialized our clusters.* .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## Static pods vs normal pods - The API only gives us read-only access to static pods - We can `kubectl delete` a static pod... ...But the kubelet will re-mirror it immediately - Static pods can be selected just like other pods (So they can receive service traffic) - A service can select a mixture of static and other pods .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## From static pods to normal pods - Once the control plane is up and running, it can be used to create normal pods - We can then set up a copy of the control plane in normal pods - Then the static pods can be removed - The scheduler and the controller manager use leader election (Only one is active at a time; removing an instance is seamless) - Each instance of the API server adds itself to the `kubernetes` service - Etcd will typically require more work! .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## From normal pods back to static pods - Alright, but what if the control plane is down and we need to fix it? - We restart it using static pods! 
- This can be done automatically with a “pod checkpointer” - The pod checkpointer automatically generates manifests of running pods - The manifests are used to restart these pods if API contact is lost - This pattern is implemented in [openshift/pod-checkpointer-operator] and [bootkube checkpointer] - Unfortunately, as of 2021, both seem abandoned / unmaintained 😢 [openshift/pod-checkpointer-operator]: https://github.com/openshift/pod-checkpointer-operator [bootkube checkpointer]: https://github.com/kubernetes-retired/bootkube/blob/master/cmd/checkpoint/README.md .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## Where should the control plane run? *Is it better to run the control plane in static pods, or normal pods?* - If I'm a *user* of the cluster: I don't care, it makes no difference to me - What if I'm an *admin*, i.e. the person who installs, upgrades, repairs... the cluster? - If I'm using a managed Kubernetes cluster (AKS, EKS, GKE...) it's not my problem (I'm not the one setting up and managing the control plane) - If I already picked a tool (kubeadm, kops...) to set up my cluster, the tool decides for me - What if I haven't picked a tool yet, or if I'm installing from scratch? - static pods = easier to set up, easier to troubleshoot, less risk of outage - normal pods = easier to upgrade, easier to move (if nodes need to be shut down) .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- ## Static pods in action - On our clusters, the `staticPodPath` is `/etc/kubernetes/manifests` .lab[ - Have a look at this directory: ```bash ls -l /etc/kubernetes/manifests ``` ] We should see YAML files corresponding to the pods of the control plane. .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- class: static-pods-exercise ## Running a static pod - We are going to add a pod manifest to the directory, and kubelet will run it .lab[ - Copy a manifest to the directory: ```bash sudo cp ~/container.training/k8s/just-a-pod.yaml /etc/kubernetes/manifests ``` - Check that it's running: ```bash kubectl get pods ``` ] The output should include a pod named `hello-node1`. .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- class: static-pods-exercise ## Remarks In the manifest, the pod was named `hello`. ```yaml apiVersion: v1 kind: Pod metadata: name: hello namespace: default spec: containers: - name: hello image: nginx ``` The `-node1` suffix was added automatically by kubelet. If we delete the pod (with `kubectl delete`), it will be recreated immediately. To delete the pod, we need to delete (or move) the manifest file. ??? :EN:- Static pods :FR:- Les *static pods* .debug[[k8s/staticpods.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/staticpods.md)] --- class: pic .interstitial[] --- name: toc-backing-up-clusters class: title Backing up clusters .nav[ [Previous part](#toc-static-pods) | [Back to table of contents](#toc-part-4) | [Next part](#toc-the-cloud-controller-manager) ] .debug[(automatically generated title slide)] --- # Backing up clusters - Backups can have multiple purposes: - disaster recovery (servers or storage are destroyed or unreachable) - error recovery (human or process has altered or corrupted data) - cloning environments (for testing, validation...) 
- Let's see the strategies and tools available with Kubernetes! .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Important - Kubernetes helps us with disaster recovery (it gives us replication primitives) - Kubernetes helps us clone / replicate environments (all resources can be described with manifests) - Kubernetes *does not* help us with error recovery - We still need to back up/snapshot our data: - with database backups (mysqldump, pgdump, etc.) - and/or snapshots at the storage layer - and/or traditional full disk backups .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## In a perfect world ... - The deployment of our Kubernetes clusters is automated (recreating a cluster takes less than a minute of human time) - All the resources (Deployments, Services...) on our clusters are under version control (never use `kubectl run`; always apply YAML files coming from a repository) - Stateful components are either: - stored on systems with regular snapshots - backed up regularly to an external, durable storage - outside of Kubernetes .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Kubernetes cluster deployment - If our deployment system isn't fully automated, it should at least be documented - Litmus test: how long does it take to deploy a cluster... - for a senior engineer? - for a new hire? - Does it require external intervention? (e.g. provisioning servers, signing TLS certs...) .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Plan B - Full machine backups of the control plane can help - If the control plane is in pods (or containers), pay attention to storage drivers (if the backup mechanism is not container-aware, the backups can take way more resources than they should, or even be unusable!) - If the previous sentence worries you: **automate the deployment of your clusters!** .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Managing our Kubernetes resources - Ideal scenario: - never create a resource directly on a cluster - push to a code repository - a special branch (`production` or even `master`) gets automatically deployed - Some folks call this "GitOps" (it's the logical evolution of configuration management and infrastructure as code) .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## GitOps in theory - What do we keep in version control? - For very simple scenarios: source code, Dockerfiles, scripts - For real applications: add resources (as YAML files) - For applications deployed multiple times: Helm, Kustomize... (staging and production count as "multiple times") .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## GitOps tooling - Various tools exist (Weave Flux, GitKube...) 
- These tools are still very young - You still need to write YAML for all your resources - There is no tool to: - list *all* resources in a namespace - get resource YAML in a canonical form - diff YAML descriptions with current state .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## GitOps in practice - Start describing your resources with YAML - Leverage a tool like Kustomize or Helm - Make sure that you can easily deploy to a new namespace (or even better: to a new cluster) - When tooling matures, you will be ready .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Plan B - What if we can't describe everything with YAML? - What if we manually create resources and forget to commit them to source control? - What about global resources, that don't live in a namespace? - How can we be sure that we saved *everything*? .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Backing up etcd - All objects are saved in etcd - etcd data should be relatively small (and therefore, quick and easy to back up) - Two options to back up etcd: - snapshot the data directory - use `etcdctl snapshot` .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Making an etcd snapshot - The basic command is simple: ```bash etcdctl snapshot save
``` - But we also need to specify: - the address of the server to back up - the path to the key, certificate, and CA certificate
(if our etcd uses TLS certificates)

  - an environment variable (`ETCDCTL_API=3`) to specify that we want etcdctl v3
(not necessary anymore with recent versions of etcd) .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Snapshotting etcd on kubeadm - Here is a strategy that works on clusters deployed with kubeadm (and maybe others) - We're going to: - identify a node running the control plane - identify the etcd image - execute `etcdctl snapshot` in a *debug container* - transfer the resulting snapshot with another *debug container* .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Finding an etcd node and image These commands let us retrieve the node and image automatically. .lab[ - Get the name of a control plane node: ```bash NODE=$(kubectl get nodes \ --selector=node-role.kubernetes.io/control-plane \ -o jsonpath={.items[0].metadata.name}) ``` - Get the etcd image: ```bash IMAGE=$(kubectl get pods --namespace kube-system etcd-$NODE \ -o jsonpath={..containers[].image}) ``` ] .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Making a snapshot This relies on the fact that in a `node` debug pod: - the host filesystem is mounted in `/host`, - the debug pod is using the host's network. .lab[ - Execute `etcdctl` in a debug pod: ```bash kubectl debug --attach --profile=general \ node/$NODE --image $IMAGE -- \ etcdctl --endpoints=https://[127.0.0.1]:2379 \ --cacert=/host/etc/kubernetes/pki/etcd/ca.crt \ --cert=/host/etc/kubernetes/pki/etcd/healthcheck-client.crt \ --key=/host/etc/kubernetes/pki/etcd/healthcheck-client.key \ snapshot save /host/tmp/snapshot ``` ] .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Transfer the snapshot We're going to use base64 encoding to ensure that the snapshot doesn't get corrupted in transit. .lab[ - Retrieve the snapshot: ```bash kubectl debug --attach --profile=general --quiet \ node/$NODE --image $IMAGE -- \ base64 /host/tmp/snapshot | base64 -d > snapshot ``` ] We can now work with the `snapshot` file in the current directory! .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Restoring an etcd snapshot - ~~Execute exactly the same command, but replacing `save` with `restore`~~ (Believe it or not, doing that will *not* do anything useful!) - The `restore` command does *not* load a snapshot into a running etcd server - The `restore` command creates a new data directory from the snapshot (it's an offline operation; it doesn't interact with an etcd server) - It will create a new data directory in a temporary container (leaving the running etcd node untouched) .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## When using kubeadm 1. Create a new data directory from the snapshot: ```bash sudo rm -rf /var/lib/etcd docker run --rm -v /var/lib:/var/lib -v $PWD:/vol $IMAGE \ etcdctl snapshot restore /vol/snapshot --data-dir=/var/lib/etcd ``` 2. Provision the control plane, using that data directory: ```bash sudo kubeadm init \ --ignore-preflight-errors=DirAvailable--var-lib-etcd ``` 3. 
Rejoin the other nodes .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## The fine print - This only saves etcd state - It **does not** save persistent volumes and local node data - Some critical components (like the pod network) might need to be reset - As a result, our pods might have to be recreated, too - If we have proper liveness checks, this should happen automatically .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Accessing etcd directly - Data in etcd is encoded in a binary format - We can retrieve the data with etcdctl, but it's hard to read - There is a tool to decode that data: `auger` - Check the [use cases][auger-use-cases] for an example of how to retrieve and modify etcd data! [auger-use-cases]: https://github.com/etcd-io/auger?tab=readme-ov-file#use-cases .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## More information about etcd backups - [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot) about etcd backups - [etcd documentation](https://coreos.com/etcd/docs/latest/op-guide/recovery.html#snapshotting-the-keyspace) about snapshots and restore - [A good blog post by elastisys](https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/) explaining how to restore a snapshot - [Another good blog post by consol labs](https://labs.consol.de/kubernetes/2018/05/25/kubeadm-backup.html) on the same topic .debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)] --- ## Don't forget ... - Also back up the TLS information (at the very least: CA key and cert; API server key and cert) - With clusters provisioned by kubeadm, this is in `/etc/kubernetes/pki` - If you don't: - you will still be able to restore etcd state and bring everything back up - you will need to redistribute user certificates .warning[**TLS information is highly sensitive!
Anyone who has it has full access to your cluster!**]

.debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)]

---

## Stateful services

- It's totally fine to keep your production databases outside of Kubernetes

  *Especially if you have only one database server!*

- Feel free to put development and staging databases on Kubernetes

  (as long as they don't hold important data)

- Using Kubernetes for stateful services makes sense if you have *many*

  (because then you can leverage Kubernetes automation)

.debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)]

---

## Snapshotting persistent volumes

- Option 1: snapshot volumes out of band

  (with the API/CLI/GUI of our SAN/cloud/...)

- Option 2: storage system integration

  (e.g. [Portworx](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/) can [create snapshots through annotations](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/snaps-annotations/#taking-periodic-snapshots-on-a-running-pod))

- Option 3: [snapshots through the Kubernetes API](https://kubernetes.io/docs/concepts/storage/volume-snapshots/)

  (generally available since Kubernetes 1.20 for a number of [CSI](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/) volume plugins: GCE, OpenSDS, Ceph, Portworx, etc.)

.debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)]

---

## More backup tools

- [Stash](https://appscode.com/products/stash/)

  backs up Kubernetes persistent volumes

- [ReShifter](https://github.com/mhausenblas/reshifter)

  cluster state management

- ~~Heptio Ark~~ [Velero](https://github.com/heptio/velero)

  full cluster backup

- [kube-backup](https://github.com/pieterlange/kube-backup)

  simple scripts to save resource YAML to a git repository

- [bivac](https://github.com/camptocamp/bivac)

  Backup Interface for Volumes Attached to Containers

???

:EN:- Backing up clusters
:FR:- Politiques de sauvegarde

.debug[[k8s/cluster-backup.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-backup.md)]

---

class: pic

.interstitial[]

---

name: toc-the-cloud-controller-manager
class: title

The Cloud Controller Manager

.nav[
[Previous part](#toc-backing-up-clusters)
|
[Back to table of contents](#toc-part-4)
|
[Next part](#toc-healthchecks)
]

.debug[(automatically generated title slide)]

---

# The Cloud Controller Manager

- Kubernetes has many features that are cloud-specific

  (e.g. providing cloud load balancers when a Service of type LoadBalancer is created)

- These features were initially implemented in the API server and the controller manager

- Since Kubernetes 1.6, these features are available through a separate process: the *Cloud Controller Manager*

- The CCM is optional, but if we run in a cloud, we probably want it!

.debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)]

---

## Cloud Controller Manager duties

- Creating and updating cloud load balancers

- Configuring routing tables in the cloud network (specific to GCE)

- Updating node labels to indicate region, zone, instance type...

- Obtaining node name, internal and external addresses from the cloud metadata service

- Deleting nodes from Kubernetes when they're deleted in the cloud

- Managing *some* volumes (e.g. EBS volumes, AzureDisks...)
(Eventually, volumes will be managed by the Container Storage Interface) .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## In-tree vs. out-of-tree - A number of cloud providers are supported "in-tree" (in the main kubernetes/kubernetes repository on GitHub) - More cloud providers are supported "out-of-tree" (with code in different repositories) - There is an [ongoing effort](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider) to move everything to out-of-tree providers .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## In-tree providers The following providers are actively maintained: - Amazon Web Services - Azure - Google Compute Engine - IBM Cloud - OpenStack - VMware vSphere These ones are less actively maintained: - Apache CloudStack - oVirt - VMware Photon .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## Out-of-tree providers The list includes the following providers: - DigitalOcean - keepalived (not exactly a cloud; provides VIPs for load balancers) - Linode - Oracle Cloud Infrastructure (And possibly others; there is no central registry for these.) .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## Audience questions - What kind of clouds are you using/planning to use? - What kind of details would you like to see in this section? - Would you appreciate details on clouds that you don't / won't use? .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## Cloud Controller Manager in practice - Write a configuration file (typically `/etc/kubernetes/cloud.conf`) - Run the CCM process (on self-hosted clusters, this can be a DaemonSet selecting the control plane nodes) - Start kubelet with `--cloud-provider=external` - When using managed clusters, this is done automatically - There is very little documentation on writing the configuration file (except for OpenStack) .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## Bootstrapping challenges - When a node joins the cluster, it needs to obtain a signed TLS certificate - That certificate must contain the node's addresses - These addresses are provided by the Cloud Controller Manager (at least the external address) - To get these addresses, the node needs to communicate with the control plane - ...Which means joining the cluster (The problem didn't occur when cloud-specific code was running in kubelet: kubelet could obtain the required information directly from the cloud provider's metadata service.) 
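To illustrate the DaemonSet approach mentioned a couple of slides ago ("CCM in practice"), here is a minimal sketch. Everything provider-specific is made up: the image, the `--cloud-provider` value, and the flags vary per cloud, and a real manifest also needs RBAC and credentials.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cloud-controller-manager
  template:
    metadata:
      labels:
        app: cloud-controller-manager
    spec:
      # Run only on control plane nodes, and tolerate their taint
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: cloud-controller-manager
        image: registry.example.com/mycloud-ccm:v1.29.0   # made-up image
        command:
        - /cloud-controller-manager
        - --cloud-provider=mycloud                        # made-up provider name
        - --cloud-config=/etc/kubernetes/cloud.conf
        volumeMounts:
        - name: cloud-config
          mountPath: /etc/kubernetes/cloud.conf
          readOnly: true
      volumes:
      - name: cloud-config
        hostPath:
          path: /etc/kubernetes/cloud.conf
          type: File
```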
.debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- ## More information about CCM - CCM configuration and operation is highly specific to each cloud provider (which is why this section remains very generic) - The Kubernetes documentation has *some* information: - [architecture and diagrams](https://kubernetes.io/docs/concepts/architecture/cloud-controller/) - [configuration](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) (mainly for OpenStack) - [deployment](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/) ??? :EN:- The Cloud Controller Manager :FR:- Le *Cloud Controller Manager* .debug[[k8s/cloud-controller-manager.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cloud-controller-manager.md)] --- class: pic .interstitial[] --- name: toc-healthchecks class: title Healthchecks .nav[ [Previous part](#toc-the-cloud-controller-manager) | [Back to table of contents](#toc-part-4) | [Next part](#toc-deploying-a-sample-application) ] .debug[(automatically generated title slide)] --- # Healthchecks - Healthchecks can improve the reliability of our applications, for instance: - detect when a container has crashed, and restart it automatically - pause a rolling update until the new containers are ready to serve traffic - temporarily remove an overloaded backend from a loadbalancer - There are three kinds of healthchecks, corresponding to different use-cases: `startupProbe`, `readinessProbe`, `livenessProbe` - Healthchecks are optional (in the absence of healthchecks, Kubernetes considers the container to be healthy) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Use-cases in brief 1. *My container takes a long time to boot before being able to serve traffic.* → use a `startupProbe` (but often a `readinessProbe` can also do the job¹) 2. *Sometimes, my container is unavailable or overloaded, and needs to e.g. be taken temporarily out of load balancer rotation.* → use a `readinessProbe` 3. *Sometimes, my container enters a broken state which can only be fixed by a restart.* → use a `livenessProbe` .footnote[¹In fact, we will see that in many cases, a `readinessProbe` is all we need. Stay tuned!] .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Startup probes *My container takes a long time to boot before being able to serve traffic.* - After creating a container, Kubernetes runs its startup probe - The container will be considered "unhealthy" until the probe succeeds - As long as the container is "unhealthy", its Pod...: - is not added to Services' endpoints - is not considered as "available" for rolling update purposes - Readiness and liveness probes are enabled *after* startup probe reports success (if there is no startup probe, readiness and liveness probes are enabled right away) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## When to use a startup probe - For containers that take a long time to start (more than 30 seconds) - Especially if that time can vary a lot (e.g. fast in dev, slow in prod, or the other way around) .footnote[⚠️ Make sure to read the warnings later in this section!] 
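For example, a startup probe for such a slow starter could look like this (a sketch: the endpoint, port, and timings are assumptions to adapt to the actual application):

```yaml
startupProbe:
  httpGet:
    path: /healthz     # assumed health endpoint
    port: 8080         # assumed application port
  periodSeconds: 5
  failureThreshold: 60 # up to 60 × 5s = 5 minutes to start
```

As soon as the probe succeeds once, the readiness and liveness probes (if any) take over.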
.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Readiness probes

*Sometimes, my container "needs a break".*

- Check if the container is ready or not

- If the container is not ready, its Pod is not ready

- If the Pod belongs to a Service, it is removed from its Endpoints

  (it stops receiving new connections, but existing ones are not affected)

- If there is a rolling update in progress, it might pause

  (Kubernetes will try to respect the `maxUnavailable` parameter)

- As soon as the readiness probe succeeds again, everything goes back to normal

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## When to use a readiness probe

- To indicate failure due to an external cause

  - database is down or unreachable

  - mandatory auth or other backend service unavailable

- To indicate temporary failure or unavailability

  - runtime is busy doing garbage collection or (re)loading data

  - application can only service *N* parallel connections

  - new connections will be directed to other Pods

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Liveness probes

*This container is dead; we don't know how to fix it, other than restarting it.*

- Check if the container is dead or alive

- If Kubernetes determines that the container is dead:

  - it terminates the container gracefully

  - it restarts the container (unless the Pod's `restartPolicy` is `Never`)

- With the default parameters, it takes:

  - up to 30 seconds to determine that the container is dead

  - up to 30 seconds to terminate it

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## When to use a liveness probe

- To detect failures that can't be recovered from

  - deadlocks (causing all requests to time out)

  - internal corruption (causing all requests to error)

- Anything where our incident response would be "just restart/reboot it"

.footnote[⚠️ Make sure to read the warnings later in this section!]

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Different types of probes

- Kubernetes supports the following mechanisms:

  - `httpGet` (HTTP GET request)

  - `exec` (arbitrary program execution)

  - `tcpSocket` (check if a TCP port is accepting connections)

  - `grpc` (standard [GRPC Health Checking Protocol][grpc])

- All probes give binary results ("it works" or "it doesn't")

- Let's see the specific details for each of them!
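Only `httpGet` and `exec` get full examples on the next slides, so here are minimal sketches of the other two right away (the port numbers are arbitrary):

```yaml
# tcpSocket: succeeds if the port accepts a TCP connection
livenessProbe:
  tcpSocket:
    port: 6379
# grpc: calls the standard gRPC health checking service on the given port
readinessProbe:
  grpc:
    port: 9000
```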
.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## `httpGet`

- Make an HTTP GET request to the container

- The request will be made by the kubelet

  (doesn't require extra binaries in the container image)

- `port` must be specified

- `path` and extra `httpHeaders` can optionally be specified

- Kubernetes uses the HTTP status code of the response:

  - 200-399 = success

  - anything else = failure

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## `httpGet` example

The following readiness probe checks that the container responds on `/healthz`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: frontend
    image: myregistry.../frontend:v1.0
    readinessProbe:
      httpGet:
        port: 80
        path: /healthz
```

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## `exec`

- Runs an arbitrary program *inside* the container

  (like with `kubectl exec` or `docker exec`)

- The program must be available in the container image

- Kubernetes uses the exit status of the program

  (standard UNIX convention: 0 = success, anything else = failure)

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## `exec` example

When the worker is ready, it should create `/tmp/ready`.
The following probe will give it 5 minutes to do so. ```yaml apiVersion: v1 kind: Pod metadata: name: queueworker spec: containers: - name: worker image: myregistry.../worker:v1.0 startupProbe: exec: command: - test - -f - /tmp/ready failureThreshold: 30 ``` .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- class: extra-details ## `startupProbe` and `failureThreshold` - Note the `failureThreshold: 30` on the previous manifest - This is important when defining a `startupProbe` - Otherwise, if the container fails to come up within 30 seconds... - ...Kubernetes restarts it! - More on this later .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Using shell constructs - If we want to use pipes, conditionals, etc. we should invoke a shell - Example: ```yaml exec: command: - sh - -c - "curl http://localhost:5000/status | jq .ready | grep true" ``` - All these programs (`curl`, `jq`, `grep`) must be available in the container image .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## `tcpSocket` - Kubernetes checks if the indicated TCP port accepts connections - There is no additional check .warning[It's quite possible for a process to be broken, but still accept TCP connections!] .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## `grpc` - Available in beta since Kubernetes 1.24 - Leverages standard [GRPC Health Checking Protocol][grpc] .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Timing and thresholds - Probes are executed at intervals of `periodSeconds` (default: 10) - The timeout for a probe is set with `timeoutSeconds` (default: 1) .warning[If a probe takes longer than that, it is considered as a FAIL] .warning[For liveness probes **and startup probes** this terminates and restarts the container] - A probe is considered successful after `successThreshold` successes (default: 1) - A probe is considered failing after `failureThreshold` failures (default: 3) - All these parameters can be set independently for each probe .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- class: extra-details ## `initialDelaySeconds` - A probe can have an `initialDelaySeconds` parameter (default: 0) - Kubernetes will wait that amount of time before running the probe for the first time - It is generally better to use a `startupProbe` instead (but this parameter did exist before startup probes were implemented) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Be careful when adding healthchecks - It is tempting to just "add all healthchecks" - This can be counter-productive and cause problems: - cascading failures - containers that fail to start when system is under load - wasting resources by restarting big containers - Let's analyze these problems! 
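As a preview of where this discussion will land, here is the kind of deliberately forgiving liveness probe we'll end up recommending (a sketch; the endpoint, port, and values are placeholders to tune):

```yaml
livenessProbe:
  httpGet:
    path: /healthz      # assumed endpoint that only checks the process itself
    port: 8080          # assumed application port
  timeoutSeconds: 3     # default is 1s; allow for slow responses under load
  periodSeconds: 10
  failureThreshold: 6   # a full minute of consecutive failures before restarting
```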
.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Liveness probes gotchas

.warning[**Do not** use liveness probes for problems that can't be fixed by a restart]

- Otherwise we just restart our pods for no reason, creating useless load

.warning[**Do not** depend on other services within a liveness probe]

- Otherwise we can experience cascading failures

  (example: a web server liveness probe that makes a request to a database)

.warning[**Make sure** that liveness probes respond quickly]

- The default probe timeout is 1 second (this can be tuned!)

- If the probe takes longer than that, it will eventually cause a restart

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Startup probes gotchas

- If a `startupProbe` fails, Kubernetes restarts the corresponding container

- In other words: with the default parameters, the container must start within 30 seconds

  (`failureThreshold` × `periodSeconds`)

- This is why we almost always want to adjust the parameters of a `startupProbe`

  (specifically, its `failureThreshold`)

- Sometimes, it's easier/simpler to use a `readinessProbe` instead

  (see next slide for details)

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## When do we need startup probes?

- Only beneficial for containers that need a long time to start

  (more than 30 seconds)

- If there is no liveness probe, it's simpler to just use a readiness probe

  (since we probably want to have a readiness probe anyway)

- In other words, startup probes are useful in one situation:

  *we have a liveness probe, AND the container needs a lot of time to start*

- Don't forget to change the `failureThreshold`

  (otherwise the container will fail to start and be killed)

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

class: extra-details

## `readinessProbe` vs `startupProbe`

- A lot of blog posts / documentation / tutorials recommend readiness probes...

- ...even in scenarios where a startup probe would seem more appropriate!

- This is because startup probes are relatively recent

  (they reached GA status in Kubernetes 1.20)

- When there is no `livenessProbe`, using a `readinessProbe` is simpler:

  - a `startupProbe` generally requires changing the `failureThreshold`

  - a `startupProbe` generally also requires a `readinessProbe`

  - a single `readinessProbe` can fulfill both roles

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Best practices for healthchecks

- Readiness probes are almost always beneficial

  - don't hesitate to add them early!

  - we can even make them *mandatory*

- Be more careful with liveness and startup probes

  - they aren't always necessary

  - they can even cause harm

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Readiness probes

- Almost always beneficial

- Exceptions:

  - web service that doesn't have a dedicated "health" or "ping" route

  - ...and all requests are "expensive" (e.g.
lots of external calls) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Liveness probes - If we're not careful, we end up restarting containers for no reason (which can cause additional load on the cluster, cascading failures, data loss, etc.) - Suggestion: - don't add liveness probes immediately - wait until you have a bit of production experience with that code - then add narrow-scoped healthchecks to detect specific failure modes - Readiness and liveness probes should be different (different check *or* different timeouts *or* different thresholds) .debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)] --- ## Recap of the gotchas - The default timeout is 1 second - if a probe takes longer than 1 second to reply, Kubernetes considers that it fails - this can be changed by setting the `timeoutSeconds` parameter
(or refactoring the probe)

- Liveness probes should not be influenced by the state of external services

- Liveness probes and readiness probes should have different parameters

- For startup probes, remember to increase the `failureThreshold`

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Healthchecks for workers

(In that context, worker = process that doesn't accept connections)

- A relatively easy solution is to use files

- For a startup or readiness probe:

  - worker creates `/tmp/ready` when it's ready

  - probe checks the existence of `/tmp/ready`

- For a liveness probe:

  - worker touches `/tmp/alive` regularly

    (e.g. just before starting to work on a job)

  - probe checks that the timestamp on `/tmp/alive` is recent (as shown in the sketch below)

  - if the timestamp is old, it means that the worker is stuck

- Sometimes it can also make sense to embed a web server in the worker
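Here is what the timestamp check could look like as an exec liveness probe (a sketch: it assumes a shell and a `stat` supporting `-c %Y` are available in the image, and the 60-second freshness threshold is arbitrary):

```yaml
livenessProbe:
  exec:
    command:
    - sh
    - -c
    # fail if /tmp/alive is missing or older than 60 seconds
    - test -f /tmp/alive && test $(( $(date +%s) - $(stat -c %Y /tmp/alive) )) -lt 60
  periodSeconds: 30
```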
[grpc]: https://grpc.github.io/grpc/core/md_doc_health-checking.html

???

:EN:- Using healthchecks to improve availability
:FR:- Utiliser des *healthchecks* pour améliorer la disponibilité

.debug[[k8s/healthchecks.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks.md)]

---

## Adding healthchecks to an app

- Let's add healthchecks to DockerCoins!

- We will examine the questions of the previous slide

- Then we will review each component individually to add healthchecks

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Liveness, readiness, or both?

- To answer that question, we need to see the app run for a while

- Do we get temporary, recoverable glitches?

  → then use readiness

- Or do we get hard lock-ups requiring a restart?

  → then use liveness

- In the case of DockerCoins, we don't know yet!

- Let's pick liveness

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Do we have HTTP endpoints that we can use?

- Each of the 3 web services (hasher, rng, webui) has a trivial route on `/`

- These routes:

  - don't seem to perform anything complex or expensive

  - don't seem to call other services

- Perfect!

  (See next slides for individual details)

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

- [hasher.rb](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/hasher.rb)
  ```ruby
  get '/' do
    "HASHER running on #{Socket.gethostname}\n"
  end
  ```

- [rng.py](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/rng.py)
  ```python
  @app.route("/")
  def index():
      return "RNG running on {}\n".format(hostname)
  ```

- [webui.js](https://github.com/jpetazzo/container.training/blob/master/dockercoins/webui/webui.js)
  ```javascript
  app.get('/', function (req, res) {
    res.redirect('/index.html');
  });
  ```

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Running DockerCoins

- We will run DockerCoins in a new, separate namespace

- We will use a set of YAML manifests and pre-built images

- We will add our new liveness probe to the YAML of the `rng` Deployment

- Then, we will deploy the application

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Creating a new namespace

- This will make sure that we don't collide / conflict with previous labs and exercises

.lab[

- Create the yellow namespace:
  ```bash
  kubectl create namespace yellow
  ```

- Switch to that namespace:
  ```bash
  kns yellow
  ```

]

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Retrieving DockerCoins manifests

- All the manifests that we need are on a convenient repository:

  https://github.com/jpetazzo/kubercoins

.lab[

- Clone that repository:
  ```bash
  cd ~
  git clone https://github.com/jpetazzo/kubercoins
  ```

- Change directory to the repository:
  ```bash
  cd kubercoins
  ```

]
.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## A simple HTTP liveness probe

This is what our liveness probe should look like:

```yaml
containers:
- name: ...
  image: ...
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 30
    periodSeconds: 5
```

This will give the service 30 seconds to start. (Way more than necessary!)
It will run the probe every 5 seconds.
It will use the default timeout (1 second).
It will use the default failure threshold (3 failed attempts = dead).
It will use the default success threshold (1 successful attempt = alive).

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Adding the liveness probe

- Let's add the liveness probe, then deploy DockerCoins

.lab[

- Edit `rng-deployment.yaml` and add the liveness probe
  ```bash
  vim rng-deployment.yaml
  ```

- Load the YAML for all the resources of DockerCoins:
  ```bash
  kubectl apply -f .
  ```

]

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Testing the liveness probe

- The rng service needs 100ms to process a request

  (because it is single-threaded and sleeps 0.1s in each request)

- The probe timeout is set to 1 second

- If we send more than 10 requests per second per backend, it will break

- Let's generate traffic and see what happens!

.lab[

- Get the ClusterIP address of the rng service:
  ```bash
  kubectl get svc rng
  ```

]

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Monitoring the rng service

- Each command below will show us what's happening on a different level

.lab[

- In one window, monitor cluster events:
  ```bash
  kubectl get events -w
  ```

- In another window, monitor the response time of rng
  (replacing `X.X.X.X` with the ClusterIP obtained above):
  ```bash
  httping `X.X.X.X`
  ```

- In another window, monitor pod status:
  ```bash
  kubectl get pods -w
  ```

]

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Generating traffic

- Let's use `ab` to send concurrent requests to rng

.lab[

- In yet another window, generate traffic:
  ```bash
  ab -c 10 -n 1000 http://`X.X.X.X`/1
  ```
- Experiment with higher values of `-c` and see what happens

]

- The `-c` parameter indicates the number of concurrent requests

- The final `/1` is important to generate actual traffic

  (otherwise we would use the ping endpoint, which doesn't sleep 0.1s per request)

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Discussion

- Above a given threshold, the liveness probe starts failing

  (about 10 concurrent requests per backend should be more than enough)

- When the liveness probe fails 3 times in a row, the container is restarted

- During the restart, there is *less* capacity available

- ... Meaning that the other backends are likely to time out as well

- ... Eventually causing all backends to be restarted

- ... And each fresh backend gets restarted, too

- This goes on until the load goes down, or we add capacity

*This wouldn't be a good healthcheck in a real application!*

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Better healthchecks

- We need to make sure that the healthcheck doesn't trip when performance degrades due to external pressure

- Using a readiness check would have fewer effects

  (but it would still be an imperfect solution)

- A possible combination:

  - readiness check with a short timeout / low failure threshold

  - liveness check with a longer timeout / higher failure threshold

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

## Healthchecks for redis

- A liveness probe is enough

  (it's not useful to remove a backend from rotation when it's the only one)

- We could use an exec probe running `redis-cli ping`, for instance:
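This is a sketch; the timing values are arbitrary, and `redis-cli` must be present in the image (it is in the official Redis images):

```yaml
livenessProbe:
  exec:
    command: ["redis-cli", "ping"]
  timeoutSeconds: 5     # leave some slack; the default is 1 second
  failureThreshold: 3
```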
.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

class: extra-details

## Exec probes and zombies

- When using exec probes, we should make sure that we have a *zombie reaper*

  🤔🧐🧟 Wait, what?

- When a process terminates, its parent must call `wait()`/`waitpid()`

  (this is how the parent process retrieves the child's exit status)

- In the meantime, the process is in *zombie* state

  (the process state will show as `Z` in `ps`, `top` ...)

- When a process is killed, its children are *orphaned* and attached to PID 1

- PID 1 has the responsibility of *reaping* these processes when they terminate

- OK, but how does that affect us?

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

class: extra-details

## PID 1 in containers

- On ordinary systems, PID 1 (`/sbin/init`) has logic to reap processes

- In containers, PID 1 is typically our application process

  (e.g. Apache, the JVM, NGINX, Redis ...)

- These *do not* take care of reaping orphans

- If we use exec probes, we need to add a process reaper

- We can add [tini](https://github.com/krallin/tini) to our images

- Or [share the PID namespace between containers of a pod](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/)

  (and have gcr.io/pause take care of the reaping)

- Discussion of this in [Video - 10 Ways to Shoot Yourself in the Foot with Kubernetes, #9 Will Surprise You](https://www.youtube.com/watch?v=QKI-JRs2RIE)

???

:EN:- Adding healthchecks to an app
:FR:- Ajouter des *healthchecks* à une application

.debug[[k8s/healthchecks-more.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/healthchecks-more.md)]

---

class: pic

.interstitial[]

---

name: toc-deploying-a-sample-application
class: title

Deploying a sample application

.nav[
[Previous part](#toc-healthchecks)
|
[Back to table of contents](#toc-part-5)
|
[Next part](#toc-accessing-logs-from-the-cli)
]

.debug[(automatically generated title slide)]

---

# Deploying a sample application

- We will connect to our new Kubernetes cluster

- We will deploy a sample application, "DockerCoins"

- That app features multiple micro-services and a web UI

.debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)]

---

## Connecting to our Kubernetes cluster

- Our cluster has multiple nodes named `node1`, `node2`, etc.

- We will do everything from `node1`

- We have SSH access to the other nodes, but won't need it

  (but we can use it for debugging, troubleshooting, etc.)

.lab[

- Log into `node1`

- Check that all nodes are `Ready`:
  ```bash
  kubectl get nodes
  ```

]

.debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)]

---

## Cloning the repository

- We will need to clone the training repository

- It has the DockerCoins demo app ...

- ... as well as these slides, some scripts, more manifests

.lab[

- Clone the repository on `node1`:
  ```bash
  git clone https://github.com/jpetazzo/container.training
  ```

]

.debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)]

---

## Running the application

Without further ado, let's start this application!

.lab[

- Apply the manifest for dockercoins:
  ```bash
  kubectl apply -f ~/container.training/k8s/dockercoins.yaml
  ```

]

.debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)]

---

## What's this application?

--

- It is a DockerCoin miner! 💰🐳📦🚢

--

- No, you can't buy coffee with DockerCoins

--

- How DockerCoins works:

  - generate a few random bytes

  - hash these bytes

  - increment a counter (to keep track of speed)

  - repeat forever!
-- - DockerCoins is *not* a cryptocurrency (the only common points are "randomness", "hashing", and "coins" in the name) .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## DockerCoins in the microservices era - DockerCoins is made of 5 services: - `rng` = web service generating random bytes - `hasher` = web service computing hash of POSTed data - `worker` = background process calling `rng` and `hasher` - `webui` = web interface to watch progress - `redis` = data store (holds a counter updated by `worker`) - These 5 services are visible in the application's Compose file, [docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## How DockerCoins works - `worker` invokes web service `rng` to generate random bytes - `worker` invokes web service `hasher` to hash these bytes - `worker` does this in an infinite loop - every second, `worker` updates `redis` to indicate how many loops were done - `webui` queries `redis`, and computes and exposes "hashing speed" in our browser *(See diagram on next slide!)* .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- class: pic  .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## Service discovery in container-land How does each service find out the address of the other ones? -- - We do not hard-code IP addresses in the code - We do not hard-code FQDNs in the code, either - We just connect to a service name, and container-magic does the rest (And by container-magic, we mean "a crafty, dynamic, embedded DNS server") .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## Example in `worker/worker.py` ```python redis = Redis("`redis`") def get_random_bytes(): r = requests.get("http://`rng`/32") return r.content def hash_bytes(data): r = requests.post("http://`hasher`/", data=data, headers={"Content-Type": "application/octet-stream"}) ``` (Feel free to check the [full source code][dockercoins-worker-code] of the worker!) [dockercoins-worker-code]: https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17 .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## Show me the code! - You can check the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training - The application is in the [dockercoins]( https://github.com/jpetazzo/container.training/tree/master/dockercoins) subdirectory - The Compose file ([docker-compose.yml]( https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml)) lists all 5 services - `redis` is using an official image from the Docker Hub - `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile - Each service's Dockerfile and source code are in its own directory (`hasher` is in the [hasher](https://github.com/jpetazzo/container.training/blob/master/dockercoins/hasher/) directory, `rng` is in the [rng](https://github.com/jpetazzo/container.training/blob/master/dockercoins/rng/) directory, etc.) .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## Our application at work - We can check the logs of our application's pods .lab[ - Check the logs of the various components: ```bash kubectl logs deploy/worker kubectl logs deploy/hasher ``` ] .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- ## Connecting to the web UI - "Logs are exciting and fun!" (No-one, ever) - The `webui` container exposes a web dashboard; let's view it .lab[ - Check the NodePort allocated to the web UI: ```bash kubectl get svc webui ``` - Open that in a web browser ] A drawing area should show up, and after a few seconds, a blue graph will appear. ??? :EN:- Deploying a sample app with YAML manifests :FR:- Lancer une application de démo avec du YAML .debug[[k8s/kubercoins.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/kubercoins.md)] --- class: pic .interstitial[] --- name: toc-accessing-logs-from-the-cli class: title Accessing logs from the CLI .nav[ [Previous part](#toc-deploying-a-sample-application) | [Back to table of contents](#toc-part-5) | [Next part](#toc-centralized-logging) ] .debug[(automatically generated title slide)] --- # Accessing logs from the CLI - The `kubectl logs` command has limitations: - it cannot stream logs from multiple pods at a time - when showing logs from multiple pods, it mixes them all together - We are going to see how to do it better .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Doing it manually - We *could* (if we were so inclined) write a program or script that would: - take a selector as an argument - enumerate all pods matching that selector (with `kubectl get -l ...`) - fork one `kubectl logs --follow ...` command per container - annotate the logs (the output of each `kubectl logs ...` process) with their origin - preserve ordering by using `kubectl logs --timestamps ...` and merge the output -- - We *could* do it, but thankfully, others did it for us already! .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Stern [Stern](https://github.com/stern/stern) is an open source project originally by [Wercker](http://www.wercker.com/). From the README: *Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.* *The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id).
If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.* Exactly what we need! .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Checking if Stern is installed - Run `stern` (without arguments) to check if it's installed: ``` $ stern Tail multiple pods and containers from Kubernetes Usage: stern pod-query [flags] ``` - If it's missing, let's see how to install it .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Installing Stern - Stern is written in Go - Go programs are usually very easy to install (no dependencies, extra libraries to install, etc) - Binary releases are available [on GitHub][stern-releases] - Stern is also available through most package managers (e.g. on macOS, we can `brew install stern` or `sudo port install stern`) [stern-releases]: https://github.com/stern/stern/releases .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Using Stern - There are two ways to specify the pods whose logs we want to see: - `-l` followed by a selector expression (like with many `kubectl` commands) - with a "pod query," i.e. a regex used to match pod names - These two ways can be combined if necessary .lab[ - View the logs for all the pingpong containers: ```bash stern pingpong ``` ] .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Stern convenient options - The `--tail N` flag shows the last `N` lines for each container (Instead of showing the logs since the creation of the container) - The `-t` / `--timestamps` flag shows timestamps - The `--all-namespaces` flag is self-explanatory .lab[ - View what's up with the `weave` system containers: ```bash stern --tail 1 --timestamps --all-namespaces weave ``` ] .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- ## Using Stern with a selector - When specifying a selector, we can omit the value for a label - This will match all objects having that label (regardless of the value) - Everything created with `kubectl run` has a label `run` - Everything created with `kubectl create deployment` has a label `app` - We can use that property to view the logs of all the pods created with `kubectl create deployment` .lab[ - View the logs for all the things started with `kubectl create deployment`: ```bash stern -l app ``` ] ??? :EN:- Viewing pod logs from the CLI :FR:- Consulter les logs des pods depuis la CLI .debug[[k8s/logs-cli.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-cli.md)] --- class: pic .interstitial[] --- name: toc-centralized-logging class: title Centralized logging .nav[ [Previous part](#toc-accessing-logs-from-the-cli) | [Back to table of contents](#toc-part-5) | [Next part](#toc-authentication-and-authorization) ] .debug[(automatically generated title slide)] --- # Centralized logging - Using `kubectl` or `stern` is simple; but it has drawbacks: - when a node goes down, its logs are not available anymore - we can only dump or stream logs; we want to search/index/count... - We want to send all our logs to a single place - We want to parse them (e.g. 
for HTTP logs) and index them - We want a nice web dashboard -- - We are going to deploy an EFK stack .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## What is EFK? - EFK is three components: - ElasticSearch (to store and index log entries) - Fluentd (to get container logs, process them, and put them in ElasticSearch) - Kibana (to view/search log entries with a nice UI) - The only component that we need to access from outside the cluster will be Kibana .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## Deploying EFK on our cluster - We are going to use a YAML file describing all the required resources .lab[ - Load the YAML file into our cluster: ```bash kubectl apply -f ~/container.training/k8s/efk.yaml ``` ] If we [look at the YAML file](https://github.com/jpetazzo/container.training/blob/master/k8s/efk.yaml), we see that it creates a daemon set, two deployments, two services, and a few roles and role bindings (to give fluentd the required permissions). .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## The itinerary of a log line (before Fluentd) - A container writes a line on stdout or stderr - Both are typically piped to the container engine (Docker or otherwise) - The container engine reads the line, and sends it to a logging driver - The timestamp and stream (stdout or stderr) are added to the log line - With the default configuration for Kubernetes, the line is written to a JSON file (`/var/log/containers/pod-name_namespace_container-id.log`) - That file is read when we invoke `kubectl logs`; we can access it directly too .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## The itinerary of a log line (with Fluentd) - Fluentd runs on each node (thanks to a daemon set) - It bind-mounts `/var/log/containers` from the host (to access these files) - It continuously scans this directory for new files; reads them; parses them - Each log line becomes a JSON object, fully annotated with extra information:
container id, pod name, Kubernetes labels... - These JSON objects are stored in ElasticSearch - ElasticSearch indexes the JSON objects - We can access the logs through Kibana (and perform searches, counts, etc.) .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## Accessing Kibana - Kibana offers a web interface that is relatively straightforward - Let's check it out! .lab[ - Check which `NodePort` was allocated to Kibana: ```bash kubectl get svc kibana ``` - With our web browser, connect to Kibana ] .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## Using Kibana *Note: this is not a Kibana workshop! So this section is deliberately very terse.* - The first time you connect to Kibana, you must "configure an index pattern" - Just use the one that is suggested, `@timestamp`.red[*] - Then click "Discover" (in the top-left corner) - You should see container logs - Advice: in the left column, select a few fields to display, e.g.: `kubernetes.host`, `kubernetes.pod_name`, `stream`, `log` .red[*]If you don't see `@timestamp`, it's probably because no logs exist yet.
Wait a bit, and double-check the logging pipeline! .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- ## Caveat emptor We are using EFK because it is relatively straightforward to deploy on Kubernetes, without having to redeploy or reconfigure our cluster. But it doesn't mean that it will always be the best option for your use-case. If you are running Kubernetes in the cloud, you might consider using the cloud provider's logging infrastructure (if it can be integrated with Kubernetes). The deployment method that we will use here has been simplified: there is only one ElasticSearch node. In a real deployment, you might use a cluster, both for performance and reliability reasons. But this is outside of the scope of this chapter. The YAML file that we used creates all the resources in the `default` namespace, for simplicity. In a real scenario, you will create the resources in the `kube-system` namespace or in a dedicated namespace. ??? :EN:- Centralizing logs :FR:- Centraliser les logs .debug[[k8s/logs-centralized.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/logs-centralized.md)] --- class: pic .interstitial[] --- name: toc-authentication-and-authorization class: title Authentication and authorization .nav[ [Previous part](#toc-centralized-logging) | [Back to table of contents](#toc-part-5) | [Next part](#toc-generating-user-certificates) ] .debug[(automatically generated title slide)] --- # Authentication and authorization - In this section, we will: - define authentication and authorization - explain how they are implemented in Kubernetes - talk about tokens, certificates, service accounts, RBAC ... - But first: why do we need all this? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## The need for fine-grained security - The Kubernetes API should only be available for identified users - we don't want "guest access" (except in very rare scenarios) - we don't want strangers to use our compute resources, delete our apps ... - our keys and passwords should not be exposed to the public - Users will often have different access rights - cluster admin (similar to UNIX "root") can do everything - developer might access specific resources, or a specific namespace - supervision might have read only access to *most* resources .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Example: custom HTTP load balancer - Let's imagine that we have a custom HTTP load balancer for multiple apps - Each app has its own *Deployment* resource - By default, the apps are "sleeping" and scaled to zero - When a request comes in, the corresponding app gets woken up - After some inactivity, the app is scaled down again - This HTTP load balancer needs API access (to scale up/down) - What if *a wild vulnerability appears*? 
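If we wanted to limit the blast radius preemptively, RBAC (covered later in this chapter) lets us grant access to only the `scale` subresource of the Deployments; a minimal sketch (all the names below are made up for illustration):

```bash
# Hypothetical least-privilege setup for that load balancer:
kubectl create role deployment-scaler \
        --verb=get,update,patch --resource=deployments/scale
kubectl create rolebinding lb-can-scale \
        --role=deployment-scaler --serviceaccount=default:http-lb
```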
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Consequences of vulnerability - If the HTTP load balancer has the same API access as we do: *full cluster compromise (easy data leak, cryptojacking...)* - If the HTTP load balancer has `update` permissions on the Deployments: *defacement (easy), MITM / impersonation (medium to hard)* - If the HTTP load balancer only has permission to `scale` the Deployments: *denial-of-service* - All these outcomes are bad, but some are worse than others .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Definitions - Authentication = verifying the identity of a person On a UNIX system, we can authenticate with login+password, SSH keys ... - Authorization = listing what they are allowed to do On a UNIX system, this can include file permissions, sudoer entries ... - Sometimes abbreviated as "authn" and "authz" - In good modular systems, these things are decoupled (so we can e.g. change a password or SSH key without having to reset access rights) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Authentication in Kubernetes - When the API server receives a request, it tries to authenticate it (it examines headers, certificates... anything available) - Many authentication methods are available and can be used simultaneously (we will see them on the next slide) - It's the job of the authentication method to produce: - the user name - the user ID - a list of groups - The API server doesn't interpret these; that'll be the job of *authorizers* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Authentication methods - TLS client certificates (that's the default for clusters provisioned with `kubeadm`) - Bearer tokens (a secret token in the HTTP headers of the request) - [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in an HTTP header; [deprecated since Kubernetes 1.19](https://github.com/kubernetes/kubernetes/pull/89069)) - Authentication proxy (sitting in front of the API and setting trusted headers) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Anonymous requests - If any authentication method *rejects* a request, it's denied (`401 Unauthorized` HTTP code) - If a request is neither rejected nor accepted by anyone, it's anonymous - the user name is `system:anonymous` - the list of groups is `[system:unauthenticated]` - By default, the anonymous user can't do anything (that's what you get if you just `curl` the Kubernetes API) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Authentication with TLS certificates - Enabled in almost all Kubernetes deployments - The user name is indicated by the `CN` in the client certificate - The groups are indicated by the `O` fields in the client certificate - From the point of view of the Kubernetes API, users do not exist (i.e. 
there is no resource with `kind: User`) - The Kubernetes API can be set up to use your custom CA to validate client certs .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Authentication for kubelet - In most clusters, kubelets authenticate using certificates (`O=system:nodes`, `CN=system:node:name-of-the-node`) - The Kubernetes API can act as a CA (by wrapping an X509 CSR into a CertificateSigningRequest resource) - This enables kubelets to renew their own certificates - It can also be used to issue user certificates (but it lacks flexibility; e.g. validity can't be customized) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## User certificates in practice - The Kubernetes API server does not support certificate revocation (see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982)) - As a result, we don't have an easy way to terminate someone's access (if their key is compromised, or they leave the organization) - Issue short-lived certificates if you use them to authenticate users! (short-lived = a few hours) - This can be facilitated by e.g. Vault, cert-manager... .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## What if a certificate is compromised? - Option 1: wait for the certificate to expire (which is why short-lived certs are convenient!) - Option 2: remove access from that certificate's user and groups - if that user was `bob.smith`, create a new user `bob.smith.2` - if Bob was in groups `dev`, create a new group `dev.2` - let's agree that this is not a great solution! - Option 3: re-create a new CA and re-issue all certificates - let's agree that this is an even worse solution! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Authentication with tokens - Tokens are passed as HTTP headers: `Authorization: Bearer and-then-here-comes-the-token` - Tokens can be validated through a number of different methods: - static tokens hard-coded in a file on the API server - [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) (special case to create a cluster or join nodes) - [OpenID Connect tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) (to delegate authentication to compatible OAuth2 providers) - service accounts (these deserve more details, coming right up!) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Service accounts - A service account is a user that exists in the Kubernetes API (it is visible with e.g. `kubectl get serviceaccounts`) - Service accounts can therefore be created / updated dynamically (they don't require hand-editing a file and restarting the API server) - A service account can be associated with a set of secrets (the kind that you can view with `kubectl get secrets`) - Service accounts are generally used to grant permissions to applications, services... (as opposed to humans) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Service account tokens evolution - In Kubernetes 1.21 and above, pods use *bound service account tokens*: - these tokens are *bound* to a specific object (e.g. 
a Pod) - they are automatically invalidated when the object is deleted - these tokens also expire quickly (e.g. 1 hour) and get rotated automatically - In Kubernetes 1.24 and above, unbound tokens aren't created automatically - before 1.24, we would see unbound tokens with `kubectl get secrets` - with 1.24 and above, these tokens can be created with `kubectl create token` - ...or with a Secret with the right [type and annotation][create-token] [create-token]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#create-token .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Checking our authentication method - Let's check our kubeconfig file - Do we have a certificate, a token, or something else? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Inspecting a certificate If we have a certificate, let's use the following command: ```bash kubectl config view \ --raw \ -o json \ | jq -r .users[0].user[\"client-certificate-data\"] \ | openssl base64 -d -A \ | openssl x509 -text \ | grep Subject: ``` This command will show the `CN` and `O` fields for our certificate. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Breaking down the command - `kubectl config view` shows the Kubernetes user configuration - `--raw` includes certificate information (which shows as REDACTED otherwise) - `-o json` outputs the information in JSON format - `| jq ...` extracts the field with the user certificate (in base64) - `| openssl base64 -d -A` decodes the base64 format (now we have a PEM file) - `| openssl x509 -text` parses the certificate and outputs it as plain text - `| grep Subject:` shows us the line that interests us → We are user `kubernetes-admin`, in group `system:masters`. (We will see later how and why this gives us the permissions that we have.) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Inspecting a token If we have a token, let's use the following command: ```bash kubectl config view \ --raw \ -o json \ | jq -r .users[0].user.token \ | base64 -d \ | cut -d. -f2 \ | base64 -d \ | jq . ``` If our token is a JWT / OIDC token, this command will show its content.
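For reference, the decoded payload of a ServiceAccount token looks roughly like this (field names and values vary with the Kubernetes version and the token type; the values below are made up):

```json
{
  "iss": "https://kubernetes.default.svc.cluster.local",
  "sub": "system:serviceaccount:default:default",
  "aud": ["https://kubernetes.default.svc.cluster.local"],
  "exp": 1735689600,
  "kubernetes.io": {
    "namespace": "default",
    "serviceaccount": { "name": "default", "uid": "..." }
  }
}
```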
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Other authentication methods - Other types of tokens - these tokens are typically shorter than JWT or OIDC tokens - it is generally not possible to extract information from them - Plugins - some clusters use external `exec` plugins - these plugins typically use API keys to generate or obtain tokens - example: the AWS EKS authenticator works this way .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Token authentication in practice - We are going to list existing service accounts - Then we will extract the token for a given service account - And we will use that token to authenticate with the API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Listing service accounts .lab[ - The resource name is `serviceaccount` or `sa` for short: ```bash kubectl get sa ``` ] There should be just one service account in the default namespace: `default`. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Finding the secret .lab[ - List the secrets for the `default` service account: ```bash kubectl get sa default -o yaml SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name) ``` ] It should be named `default-token-XXXXX`. When running Kubernetes 1.24 and above, this Secret won't exist.
Instead, create a token with `kubectl create token default`. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Extracting the token - The token is stored in the secret, wrapped with base64 encoding .lab[ - View the secret: ```bash kubectl get secret $SECRET -o yaml ``` - Extract the token and decode it: ```bash TOKEN=$(kubectl get secret $SECRET -o json \ | jq -r .data.token | openssl base64 -d -A) ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Using the token - Let's send a request to the API, without and with the token .lab[ - Find the ClusterIP for the `kubernetes` service: ```bash kubectl get svc kubernetes API=$(kubectl get svc kubernetes -o json | jq -r .spec.clusterIP) ``` - Connect without the token: ```bash curl -k https://$API ``` - Connect with the token: ```bash curl -k -H "Authorization: Bearer $TOKEN" https://$API ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Results - In both cases, we will get a "Forbidden" error - Without authentication, the user is `system:anonymous` - With authentication, it is shown as `system:serviceaccount:default:default` - The API "sees" us as a different user - But neither user has any rights, so we can't do nothin' - Let's change that! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Authorization in Kubernetes - There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules): - [Node Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) (used internally by kubelet; we can ignore it) - [Attribute-based access control](https://kubernetes.io/docs/reference/access-authn-authz/abac/) (powerful but complex and static; ignore it too) - [Webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) (each API request is submitted to an external service for approval) - [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (associates permissions to users dynamically) - The one we want is the last one, generally abbreviated as RBAC .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Role-based access control - RBAC allows us to specify fine-grained permissions - Permissions are expressed as *rules* - A rule is a combination of: - [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete... - resources (as in "API resource," like pods, nodes, services...) - resource names (to specify e.g. one specific pod instead of all pods) - in some cases, [subresources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) (e.g. logs are subresources of pods) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Listing all possible verbs - The Kubernetes API is self-documented - We can ask it which resources, subresources, and verbs exist - One way to do this is to use: - `kubectl get --raw /api/v1` (for core resources with `apiVersion: v1`) - `kubectl get --raw /apis/GROUP/VERSION
` (for other resources) - The JSON response can be formatted with e.g. `jq` for readability .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Examples - List all verbs across all `v1` resources ```bash kubectl get --raw /api/v1 | jq -r .resources[].verbs[] | sort -u ``` - List all resources and subresources in `apps/v1` ```bash kubectl get --raw /apis/apps/v1 | jq -r .resources[].name ``` - List which verbs are available on which resources in `networking.k8s.io` ```bash kubectl get --raw /apis/networking.k8s.io/v1 | \ jq -r '.resources[] | .name + ": " + (.verbs | join(", "))' ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## From rules to roles to rolebindings - A *role* is an API object containing a list of *rules* Example: role "external-load-balancer-configurator" can: - [list, get] resources [endpoints, services, pods] - [update] resources [services] - A *rolebinding* associates a role with a user Example: rolebinding "external-load-balancer-configurator": - associates user "external-load-balancer-configurator" - with role "external-load-balancer-configurator" - Yes, there can be users, roles, and rolebindings with the same name - It's a good idea for 1-1-1 bindings; not so much for 1-N ones .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Cluster-scope permissions - API resources Role and RoleBinding are for objects within a namespace - We can also define API resources ClusterRole and ClusterRoleBinding - These are a superset, allowing us to: - specify actions on cluster-wide objects (like nodes) - operate across all namespaces - We can create Role and RoleBinding resources within a namespace - ClusterRole and ClusterRoleBinding resources are global .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Pods and service accounts - A pod can be associated with a service account - by default, it is associated with the `default` service account - as we saw earlier, this service account has no permissions anyway - The associated token is exposed to the pod's filesystem (in `/var/run/secrets/kubernetes.io/serviceaccount/token`) - Standard Kubernetes tooling (like `kubectl`) will look for it there - So Kubernetes tools running in a pod will automatically use the service account .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## In practice - We are going to run a pod - This pod will use the default service account of its namespace - We will check our API permissions (there shouldn't be any) - Then we will bind a role to the service account - We will check that we were granted the corresponding permissions .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Running a pod - We'll use [Nixery](https://nixery.dev/) to run a pod with `curl` and `kubectl` - Nixery automatically generates images with the requested packages .lab[ - Run our pod: ```bash kubectl run eyepod --rm -ti --restart=Never \ --image nixery.dev/shell/curl/kubectl -- bash ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Checking our permissions - Normally, at this point, we don't have any API permission .lab[ - Check our 
permissions with `kubectl`: ```bash kubectl get pods ``` ] - We should get a message telling us that our service account doesn't have permissions to list "pods" in the current namespace - We can also make requests to the API server directly (use `kubectl -v6` to see the exact request URI!) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Binding a role to the service account - Binding a role = creating a *rolebinding* object - We will call that object `can-view` (but again, we could call it `view` or whatever we like) .lab[ - Create the new role binding: ```bash kubectl create rolebinding can-view \ --clusterrole=view \ --serviceaccount=default:default ``` ] It's important to note a couple of details in these flags... .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Roles vs Cluster Roles - We used `--clusterrole=view` - What would have happened if we had used `--role=view`? - we would have bound the role `view` from the local namespace
(instead of the cluster role `view`) - the command would have worked fine (no error) - but later, our API requests would have been denied - This is a deliberate design decision (we can reference roles that don't exist, and create/update them later) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Users vs Service Accounts - We used `--serviceaccount=default:default` - What would have happened if we had used `--user=default:default`? - we would have bound the role to a user instead of a service account - again, the command would have worked fine (no error) - ...but our API requests would have been denied later - What about the `default:` prefix? - that's the namespace of the service account - yes, it could be inferred from context, but... `kubectl` requires it .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## Checking our new permissions - We should be able to *view* things, but not to *edit* them .lab[ - Check our permissions with `kubectl`: ```bash kubectl get pods ``` - Try to create something: ```bash kubectl create deployment can-i-do-this --image=nginx ``` - Exit the container with `exit` or `^D` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## `kubectl run --serviceaccount` - `kubectl run` also has a `--serviceaccount` flag - ...But it's supposed to be deprecated "soon" (see [kubernetes/kubernetes#99732](https://github.com/kubernetes/kubernetes/pull/99732) for details) - It's possible to specify the service account with an override: ```bash kubectl run my-pod -ti --image=alpine --restart=Never \ --overrides='{ "spec": { "serviceAccountName" : "my-service-account" } }' ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## `kubectl auth` and other CLI tools - The `kubectl auth can-i` command can tell us: - if we can perform an action - if someone else can perform an action - what actions we can perform - There are also other very useful tools to work with RBAC - Let's do a quick review! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## `kubectl auth can-i dothis onthat` - These commands will give us a `yes`/`no` answer: ```bash kubectl auth can-i list nodes kubectl auth can-i create pods kubectl auth can-i get pod/name-of-pod kubectl auth can-i get /url-fragment-of-api-request/ kubectl auth can-i '*' services kubectl auth can-i get coffee kubectl auth can-i drink coffee ``` - The RBAC system is flexible - We can check permissions on resources that don't exist yet (e.g. CRDs) - We can check permissions for arbitrary actions .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## `kubectl auth can-i ... --as someoneelse` - We can check permissions on behalf of other users ```bash kubectl auth can-i list nodes \ --as some-user kubectl auth can-i list nodes \ --as system:serviceaccount:NAMESPACE:NAME-OF-SERVICE-ACCOUNT
``` - We can also use `--as-group` to check permissions for members of a group - `--as` and `--as-group` leverage the *impersonation API* - These flags can be used with many other `kubectl` commands (not just `auth can-i`) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## `kubectl auth can-i --list` - We can list the actions that are available to us: ```bash kubectl auth can-i --list ``` - ... Or to someone else (with `--as SomeOtherUser`) - This is very useful to check users or service accounts for overly broad permissions (or when looking for ways to exploit a security vulnerability!) - To learn more about Kubernetes attacks and threat models around RBAC: 📽️ [Hacking into Kubernetes Security for Beginners](https://www.youtube.com/watch?v=mLsCm9GVIQg) by [V Körbes](https://twitter.com/veekorbes) and [Tabitha Sable](https://twitter.com/TabbySable) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Other useful tools - For auditing purposes, sometimes we want to know who can perform which actions - There are a few tools to help us with that, available as `kubectl` plugins: - `kubectl who-can` / [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) by Aqua Security - `kubectl access-matrix` / [Rakkess (Review Access)](https://github.com/corneliusweig/rakkess) by Cornelius Weig - `kubectl rbac-lookup` / [RBAC Lookup](https://github.com/FairwindsOps/rbac-lookup) by FairwindsOps - `kubectl rbac-tool` / [RBAC Tool](https://github.com/alcideio/rbac-tool) by insightCloudSec - `kubectl` plugins can be installed and managed with `krew` - They can also be installed and executed as standalone programs .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Where does this `view` role come from? - Kubernetes defines a number of ClusterRoles intended to be bound to users - `cluster-admin` can do *everything* (think `root` on UNIX) - `admin` can do *almost everything* (except e.g. changing resource quotas and limits) - `edit` is similar to `admin`, but cannot view or edit permissions - `view` has read-only access to most resources, except permissions and secrets *In many situations, these roles will be all you need.* *You can also customize them!* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Customizing the default roles - If you need to *add* permissions to these default roles (or others),
you can do it through the [ClusterRole Aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) mechanism - This happens by creating a ClusterRole with the following labels: ```yaml metadata: labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" ``` - This ClusterRole's permissions will be added to `admin`/`edit`/`view` respectively .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## When should we use aggregation? - By default, CRDs aren't included in `view` / `edit` / etc. (Kubernetes cannot guess which ones are security-sensitive and which ones are not) - If we edit `view` / `edit` / etc. directly, our edits will conflict (imagine if we have two CRDs and they both provide a custom `view` ClusterRole) - Using aggregated roles lets us enrich the default roles without touching them .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## How aggregation works - The corresponding roles will have an `aggregationRule` like this: ```yaml aggregationRule: clusterRoleSelectors: - matchLabels: rbac.authorization.k8s.io/aggregate-to-view: "true" ``` - We can define our own custom roles with their own aggregation rules .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## Where do our permissions come from? - When interacting with the Kubernetes API, we are using a client certificate - We saw previously that this client certificate contained: `CN=kubernetes-admin` and `O=system:masters` - Let's look for these in existing ClusterRoleBindings: ```bash kubectl get clusterrolebindings -o yaml | grep -e kubernetes-admin -e system:masters ``` (`system:masters` should show up, but not `kubernetes-admin`.) - Where does this match come from? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: extra-details ## The `system:masters` group - If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out! - It is in the `cluster-admin` binding: ```bash kubectl describe clusterrolebinding cluster-admin ``` - This binding associates `system:masters` with the cluster role `cluster-admin` - And `cluster-admin` is, basically, `root`: ```bash kubectl describe clusterrole cluster-admin ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- ## `list` vs. `get` ⚠️ `list` grants read permissions to resources! - It's not possible to give permission to list resources without also reading them - This has implications for e.g. Secrets (if a controller needs to be able to enumerate Secrets, it will be able to read them) ??? :EN:- Authentication and authorization in Kubernetes :EN:- Authentication with tokens and certificates :EN:- Authorization with RBAC (Role-Based Access Control) :EN:- Restricting permissions with Service Accounts :EN:- Working with Roles, Cluster Roles, Role Bindings, etc.
:FR:- Identification et droits d'accès dans Kubernetes :FR:- Mécanismes d'identification par jetons et certificats :FR:- Le modèle RBAC *(Role-Based Access Control)* :FR:- Restreindre les permissions grâce aux *Service Accounts* :FR:- Comprendre les *Roles*, *Cluster Roles*, *Role Bindings*, etc. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/authn-authz.md)] --- class: pic .interstitial[] --- name: toc-generating-user-certificates class: title Generating user certificates .nav[ [Previous part](#toc-authentication-and-authorization) | [Back to table of contents](#toc-part-5) | [Next part](#toc-the-csr-api) ] .debug[(automatically generated title slide)] --- # Generating user certificates - The most popular ways to authenticate users with Kubernetes are: - TLS certificates - JSON Web Tokens (OIDC or ServiceAccount tokens) - We're going to see how to use TLS certificates - We will generate a certificate for a user and give them some permissions - Then we will use that certificate to access the cluster .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Heads up! - The demos in this section require that we have access to our cluster's CA - This is easy if we are using a cluster deployed with `kubeadm` - Otherwise, we may or may not have access to the cluster's CA - We may or may not be able to use the CSR API instead .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Check that we have access to the CA - Make sure that you are logged into the node hosting the control plane (if a cluster has been provisioned for you for a training, it's `node1`) .lab[ - Check that the CA key is here: ```bash sudo ls -l /etc/kubernetes/pki ``` ] The output should include `ca.key` and `ca.crt`. .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## How it works - The API server is configured to accept all certificates signed by a given CA - The certificate contains: - the user name (in the `CN` field) - the groups the user belongs to (as multiple `O` fields) .lab[ - Check which CA is used by the Kubernetes API server: ```bash sudo grep crt /etc/kubernetes/manifests/kube-apiserver.yaml ``` ] This is the flag that we're looking for: ``` --client-ca-file=/etc/kubernetes/pki/ca.crt ``` .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Generating a key and CSR for our user - These operations could be done on a separate machine - We only need to transfer the CSR (Certificate Signing Request) to the CA (we never need to expose the private key) .lab[ - Generate a private key: ```bash openssl genrsa 4096 > user.key ``` - Generate a CSR: ```bash openssl req -new -key user.key -subj /CN=jerome/O=devs/O=ops > user.csr ``` ] .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Generating a signed certificate - This has to be done on the machine holding the CA private key (copy the `user.csr` file if needed) .lab[ - Verify the CSR parameters: ```bash openssl req -in user.csr -text | head ``` - Generate the certificate: ```bash sudo openssl x509 -req \ -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \ -in user.csr -days 1 -set_serial 1234 > user.crt ``` ] If you are using two separate machines, transfer `user.crt` to the other machine.
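Before transferring it, we can double-check that the generated certificate carries the expected identity and validity (a standard openssl invocation, nothing cluster-specific):

```bash
# Should show CN=jerome, O=devs, O=ops, the cluster CA as issuer, and a 1-day validity:
openssl x509 -in user.crt -noout -subject -issuer -dates
```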
.debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Adding the key and certificate to kubeconfig - We have to edit our `.kube/config` file - This can be done relatively easily with `kubectl config` .lab[ - Create a new `user` entry in our `.kube/config` file: ```bash kubectl config set-credentials jerome \ --client-key=user.key --client-certificate=user.crt ``` ] The configuration file now points to our local files. We could also embed the key and certs with the `--embed-certs` option. (So that the kubeconfig file can be used without `user.key` and `user.crt`.) .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Using the new identity - At the moment, we probably use the admin certificate generated by `kubeadm` (with `CN=kubernetes-admin` and `O=system:masters`) - Let's edit our *context* to use our new certificate instead! .lab[ - Edit the context: ```bash kubectl config set-context --current --user=jerome ``` - Try any command: ```bash kubectl get pods ``` ] Access will be denied, but we should see that we were correctly *authenticated* as `jerome`. .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Granting permissions - Let's add some read-only permissions to the `devs` group (for instance) .lab[ - Switch back to our admin identity: ```bash kubectl config set-context --current --user=kubernetes-admin ``` - Grant permissions: ```bash kubectl create clusterrolebinding devs-can-view \ --clusterrole=view --group=devs ``` ] .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- ## Testing the new permissions - As soon as we create the ClusterRoleBinding, all users in the `devs` group get access - Let's verify that we can e.g. list pods! .lab[ - Switch to our user identity again: ```bash kubectl config set-context --current --user=jerome ``` - Test the permissions: ```bash kubectl get pods ``` ] ???
:EN:- Authentication with user certificates :FR:- Identification par certificat TLS .debug[[k8s/user-cert.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/user-cert.md)] --- class: pic .interstitial[] --- name: toc-the-csr-api class: title The CSR API .nav[ [Previous part](#toc-generating-user-certificates) | [Back to table of contents](#toc-part-5) | [Next part](#toc-openid-connect) ] .debug[(automatically generated title slide)] --- # The CSR API - The Kubernetes API exposes CSR resources - We can use these resources to issue TLS certificates - First, we will go through a quick reminder about TLS certificates - Then, we will see how to obtain a certificate for a user - We will use that certificate to authenticate with the cluster - Finally, we will grant some privileges to that user .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Reminder about TLS - TLS (Transport Layer Security) is a protocol providing: - encryption (to prevent eavesdropping) - authentication (using public key cryptography) - When we access an https:// URL, the server authenticates itself (it proves its identity to us; as if it were "showing its ID") - But we can also have mutual TLS authentication (mTLS) (client proves its identity to server; server proves its identity to client) .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Authentication with certificates - To authenticate, someone (client or server) needs: - a *private key* (that remains known only to them) - a *public key* (that they can distribute) - a *certificate* (associating the public key with an identity) - A message encrypted with the private key can only be decrypted with the public key (and vice versa) - If I use someone's public key to encrypt/decrypt their messages,
I can be certain that I am talking to them / they are talking to me - The certificate proves that I have the correct public key for them .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Certificate generation workflow This is what I do if I want to obtain a certificate. 1. Create public and private keys. 2. Create a Certificate Signing Request (CSR). (The CSR contains the identity that I claim and a public key.) 3. Send that CSR to the Certificate Authority (CA). 4. The CA verifies that I can claim the identity in the CSR. 5. The CA generates my certificate and gives it to me. The CA (or anyone else) never needs to know my private key. .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## The CSR API - The Kubernetes API has a CertificateSigningRequest resource type (we can list them with e.g. `kubectl get csr`) - We can create a CSR object (= upload a CSR to the Kubernetes API) - Then, using the Kubernetes API, we can approve/deny the request - If we approve the request, the Kubernetes API generates a certificate - The certificate gets attached to the CSR object and can be retrieved .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Using the CSR API - We will show how to use the CSR API to obtain user certificates - This will be a rather complex demo - ... And yet, we will take a few shortcuts to simplify it (but it will illustrate the general idea) - The demo also won't be automated (we would have to write extra code to make it fully functional) .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Warning - The CSR API isn't really suited to issuing user certificates - It is primarily intended to issue control plane certificates (for instance, dealing with kubelet certificate renewal) - The API was expanded a bit in Kubernetes 1.19 to encompass broader usage - There are still lots of gaps in the spec (e.g. how to specify expiration in a standard way) - ... And there is no other implementation to date (but [cert-manager](https://cert-manager.io/docs/faq/#kubernetes-has-a-builtin-certificatesigningrequest-api-why-not-use-that) might eventually get there!)
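Before diving in, we can ask the API server what its CSR API looks like (plain `kubectl` introspection; the exact output varies with the Kubernetes version):

```bash
# Check that the certificates API is available, and inspect the spec fields:
kubectl api-resources --api-group=certificates.k8s.io
kubectl explain csr.spec
```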
.debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## General idea - We will create a Namespace named "users" - Each user will get a ServiceAccount in that Namespace - That ServiceAccount will give read/write access to *one* CSR object - Users will use that ServiceAccount's token to submit a CSR - We will approve the CSR (or not) - Users can then retrieve their certificate from their CSR object - ...And use that certificate for subsequent interactions .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Resource naming For a user named `jean.doe`, we will have: - ServiceAccount `jean.doe` in Namespace `users` - CertificateSigningRequest `user=jean.doe` - ClusterRole `user=jean.doe` giving read/write access to that CSR - ClusterRoleBinding `user=jean.doe` binding ClusterRole and ServiceAccount .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- class: extra-details ## About resource name constraints - Most Kubernetes identifiers and names are fairly restricted - They generally are DNS-1123 *labels* or *subdomains* (from [RFC 1123](https://tools.ietf.org/html/rfc1123)) - A label is lowercase letters, numbers, dashes; can't start or finish with a dash - A subdomain is one or multiple labels separated by dots - Some resources have more relaxed constraints, and can be "path segment names" (uppercase are allowed, as well as some characters like `#:?!,_`) - This includes RBAC objects (like Roles, RoleBindings...) and CSRs - See the [Identifiers and Names](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/identifiers.md) design document and the [Object Names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#path-segment-names) documentation page for more details .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Creating the user's resources .warning[If you want to use another name than `jean.doe`, update the YAML file!] .lab[ - Create the global namespace for all users: ```bash kubectl create namespace users ``` - Create the ServiceAccount, ClusterRole, ClusterRoleBinding for `jean.doe`: ```bash kubectl apply -f ~/container.training/k8s/user=jean.doe.yaml ``` ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Extracting the user's token - Let's obtain the user's token and give it to them (the token will be their password) .lab[ - List the user's secrets: ```bash kubectl --namespace=users describe serviceaccount jean.doe ``` - Show the user's token: ```bash kubectl --namespace=users describe secret `jean.doe-token-xxxxx` ``` ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Configure `kubectl` to use the token - Let's create a new context that will use that token to access the API .lab[ - Add a new identity to our kubeconfig file: ```bash kubectl config set-credentials token:jean.doe --token=... ``` - Add a new context using that identity: ```bash kubectl config set-context jean.doe --user=token:jean.doe --cluster=`kubernetes` ``` (Make sure to adapt the cluster name if yours is different!) 
- Use that context: ```bash kubectl config use-context jean.doe ``` ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Access the API with the token - Let's check that our access rights are set properly .lab[ - Try to access any resource: ```bash kubectl get pods ``` (This should tell us "Forbidden") - Try to access "our" CertificateSigningRequest: ```bash kubectl get csr user=jean.doe ``` (This should tell us "NotFound") ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Create a key and a CSR - There are many tools to generate TLS keys and CSRs - Let's use OpenSSL; it's not the best one, but it's installed everywhere (many people prefer cfssl, easyrsa, or other tools; that's fine too!) .lab[ - Generate the key and certificate signing request: ```bash openssl req -newkey rsa:2048 -nodes -keyout key.pem \ -new -subj /CN=jean.doe/O=devs/ -out csr.pem ``` ] The command above generates: - a 2048-bit RSA key, without encryption, stored in key.pem - a CSR for the name `jean.doe` in group `devs` .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Inside the Kubernetes CSR object - The Kubernetes CSR object is a thin wrapper around the CSR PEM file - The PEM file needs to be encoded to base64 on a single line (we will use `base64 -w0` for that purpose) - The Kubernetes CSR object also needs to list the right "usages" (these are flags indicating how the certificate can be used) .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Sending the CSR to Kubernetes .lab[ - Generate and create the CSR resource:
```bash
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: user=jean.doe
spec:
  request: $(base64 -w0 < csr.pem)
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
```
] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Approving the CSR - For this step, we need an identity with enough permissions (e.g. our admin identity; the `jean.doe` identity can only view its own CSR) .lab[ - Inspect the CSR: ```bash kubectl describe csr user=jean.doe ``` - Approve it: ```bash kubectl certificate approve user=jean.doe ``` ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Retrieving the certificate - Once the CSR is approved, the signed certificate is attached to the CSR object (in its `status.certificate` field, encoded in base64) .lab[ - Retrieve the certificate and decode it: ```bash kubectl get csr user=jean.doe \ -o jsonpath={.status.certificate} \ | base64 -d >
cert.pem ``` - Inspect the certificate: ```bash openssl x509 -in cert.pem -text -noout ``` ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Using the certificate .lab[ - Add the key and certificate to kubeconfig: ```bash kubectl config set-credentials cert:jean.doe --embed-certs \ --client-certificate=cert.pem --client-key=key.pem ``` - Update the user's context to use the key and cert to authenticate: ```bash kubectl config set-context jean.doe --user cert:jean.doe ``` - Confirm that we are seen as `jean.doe` (but don't have permissions): ```bash kubectl get pods ``` ] .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## What's missing? We have just shown, step by step, a method to issue short-lived certificates for users. To be usable in real environments, we would need to add: - a kubectl helper to automatically generate the CSR and obtain the cert (and transparently renew the cert when needed) - a Kubernetes controller to automatically validate and approve CSRs (checking that the subject and groups are valid) - a way for the users to know the groups to add to their CSR (e.g.: annotations on their ServiceAccount + read access to the ServiceAccount) .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- ## Is this realistic? - Larger organizations typically integrate with their own directory - The general principle, however, is the same: - users have long-term credentials (password, token, ...) - they use these credentials to obtain other, short-lived credentials - This provides enhanced security: - the long-term credentials can use long passphrases, 2FA, HSM... - the short-term credentials are more convenient to use - we get strong security *and* convenience - Systems like Vault also have certificate issuance mechanisms ??? :EN:- Generating user certificates with the CSR API :FR:- Génération de certificats utilisateur avec la CSR API .debug[[k8s/csr-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/csr-api.md)] --- class: pic .interstitial[] --- name: toc-openid-connect class: title OpenID Connect .nav[ [Previous part](#toc-the-csr-api) | [Back to table of contents](#toc-part-6) | [Next part](#toc-securing-the-control-plane) ] .debug[(automatically generated title slide)] --- # OpenID Connect - The Kubernetes API server can perform authentication with OpenID connect - This requires an *OpenID provider* (external authorization server using the OAuth 2.0 protocol) - We can use a third-party provider (e.g. Google) or run our own (e.g. Dex) - We are going to give an overview of the protocol - We will show it in action (in a simplified scenario) .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Workflow overview - We want to access our resources (a Kubernetes cluster) - We authenticate with the OpenID provider - we can do this directly (e.g. 
by going to https://accounts.google.com) - or maybe a kubectl plugin can open a browser page on our behalf - After authenticating us, the OpenID provider gives us: - an *id token* (a short-lived signed JSON Web Token, see next slide) - a *refresh token* (to renew the *id token* when needed) - We can now issue requests to the Kubernetes API with the *id token* - The API server will verify that token's content to authenticate us .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## JSON Web Tokens - A JSON Web Token (JWT) has three parts: - a header specifying algorithms and token type - a payload (indicating who issued the token, for whom, which purposes...) - a signature generated by the issuer (the issuer = the OpenID provider) - Anyone can verify a JWT without contacting the issuer (except to obtain the issuer's public key) - Pro tip: we can inspect a JWT with https://jwt.io/ .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## How the Kubernetes API uses JWT - Server side - enable OIDC authentication - indicate which issuer (provider) should be allowed - indicate which audience (or "client id") should be allowed - optionally, map or prefix user and group names - Client side - obtain JWT as described earlier - pass JWT as authentication token - renew JWT when needed (using the refresh token) .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Demo time! - We will use [Google Accounts](https://accounts.google.com) as our OpenID provider - We will use the [Google OAuth Playground](https://developers.google.com/oauthplayground) as the "audience" or "client id" - We will obtain a JWT through Google Accounts and the OAuth Playground - We will enable OIDC in the Kubernetes API server - We will use the JWT to authenticate .footnote[If you can't or won't use a Google account, you can try to adapt this to another provider.] .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Checking the API server logs - The API server logs will be particularly useful in this section (they will indicate e.g. why a specific token is rejected) - Let's keep an eye on the API server output! 
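The pod name below (`kube-apiserver-node1`) assumes a node called `node1`. If your control plane node is named differently, this is one way to find the right pod (a sketch, assuming a kubeadm-style cluster where the control plane runs as static pods):

```bash
kubectl get pods --namespace=kube-system -l component=kube-apiserver
```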
.lab[ - Tail the logs of the API server: ```bash kubectl logs kube-apiserver-node1 --follow --namespace=kube-system ``` ] .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Authenticate with the OpenID provider - We will use the Google OAuth Playground for convenience - In a real scenario, we would need our own OAuth client instead of the playground (even if we were still using Google as the OpenID provider) .lab[ - Open the Google OAuth Playground: ``` https://developers.google.com/oauthplayground/ ``` - Enter our own custom scope in the text field: ``` https://www.googleapis.com/auth/userinfo.email ``` - Click on "Authorize APIs" and allow the playground to access our email address ] .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Obtain our JSON Web Token - The previous step gave us an "authorization code" - We will use it to obtain tokens .lab[ - Click on "Exchange authorization code for tokens" ] - The JWT is the very long `id_token` that shows up on the right hand side (it is a base64-encoded JSON object, and should therefore start with `eyJ`) .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Using our JSON Web Token - We need to create a context (in kubeconfig) for our token (if we just add the token or use `kubectl --token`, our certificate will still be used) .lab[ - Create a new authentication section in kubeconfig: ```bash kubectl config set-credentials myjwt --token=eyJ... ``` - Try to use it: ```bash kubectl --user=myjwt get nodes ``` ] We should get an `Unauthorized` response, since we haven't enabled OpenID Connect in the API server yet. We should also see `invalid bearer token` in the API server log output. .debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)] --- ## Enabling OpenID Connect - We need to add a few flags to the API server configuration - These two are mandatory: `--oidc-issuer-url` → URL of the OpenID provider `--oidc-client-id` → app requesting the authentication
(in our case, that's the ID for the Google OAuth Playground) - This one is optional: `--oidc-username-claim` → which field should be used as user name
(we will use the user's email address instead of an opaque ID)

- See the [API server documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server) for more details about all available flags

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

## Updating the API server configuration

- The instructions below will work for clusters deployed with kubeadm

  (or where the control plane is deployed in static pods)

- If your cluster is deployed differently, you will need to adapt them

.lab[

- Edit `/etc/kubernetes/manifests/kube-apiserver.yaml`

- Add the following lines to the list of command-line flags:
  ```yaml
  - --oidc-issuer-url=https://accounts.google.com
  - --oidc-client-id=407408718192.apps.googleusercontent.com
  - --oidc-username-claim=email
  ```

]

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

## Restarting the API server

- The kubelet monitors the files in `/etc/kubernetes/manifests`

- When we save the pod manifest, kubelet will restart the corresponding pod

  (using the updated command line flags)

.lab[

- After making the changes described on the previous slide, save the file

- Issue a simple command (like `kubectl version`) until the API server is back up

  (it might take between a few seconds and one minute for the API server to restart)

- Restart the `kubectl logs` command to view the logs of the API server

]

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

## Using our JSON Web Token

- Now that the API server is set up to recognize our token, try again!

.lab[

- Try an API command with our token:
  ```bash
  kubectl --user=myjwt get nodes
  kubectl --user=myjwt get pods
  ```

]

We should see a message like:

```
Error from server (Forbidden): nodes is forbidden: User "jean.doe@gmail.com"
cannot list resource "nodes" in API group "" at the cluster scope
```

→ We were successfully *authenticated*, but not *authorized*.

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

## Authorizing our user

- As an extra step, let's grant read access to our user

- We will use the pre-defined ClusterRole `view`

.lab[

- Create a ClusterRoleBinding allowing us to view resources:
  ```bash
  kubectl create clusterrolebinding i-can-view \
          --user=jean.doe@gmail.com --clusterrole=view
  ```

  (make sure to put *your* Google email address there)

- Confirm that we can now list pods with our token:
  ```bash
  kubectl --user=myjwt get pods
  ```

]

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]
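---

class: extra-details

## Checking our permissions

We can also ask the API server directly what our OIDC identity may or may not do; `kubectl auth can-i` works with the JWT credentials too (a quick sketch):

```bash
kubectl --user=myjwt auth can-i list pods
kubectl --user=myjwt auth can-i create deployments
```

With the `view` binding in place, the first command should answer `yes`, and the second one `no`.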
---

## From demo to production

.warning[This was a very simplified demo! In a real deployment...]

- We wouldn't use the Google OAuth Playground

- We *probably* wouldn't even use Google at all

  (it doesn't seem to provide a way to include groups!)

- Some popular alternatives:

  - [Dex](https://github.com/dexidp/dex), [Keycloak](https://www.keycloak.org/) (self-hosted)

  - [Okta](https://developer.okta.com/docs/how-to/creating-token-with-groups-claim/#step-five-decode-the-jwt-to-verify) (SaaS)

- We would use a helper (like the [kubelogin](https://github.com/int128/kubelogin) plugin) to automatically obtain tokens

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

class: extra-details

## Service Account tokens

- The tokens used by Service Accounts are JWT tokens as well

- They are signed and verified using a special service account key pair

.lab[

- Extract the token of a service account in the current namespace:
  ```bash
  kubectl get secrets -o jsonpath={..token} | base64 -d
  ```

- Copy-paste the token to a verification service like https://jwt.io

- Notice that it says "Invalid Signature"

]

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

class: extra-details

## Verifying Service Account tokens

- JSON Web Tokens embed the URL of the "issuer" (=OpenID provider)

- The issuer provides its public key through a well-known discovery endpoint

  (similar to https://accounts.google.com/.well-known/openid-configuration)

- There is no such endpoint for the Service Account key pair

- But we can provide the public key ourselves for verification

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

class: extra-details

## Verifying a Service Account token

- On clusters provisioned with kubeadm, the Service Account key pair is:

  `/etc/kubernetes/pki/sa.key` (used by the controller manager to generate tokens)

  `/etc/kubernetes/pki/sa.pub` (used by the API server to validate the same tokens)

.lab[

- Display the public key used to sign Service Account tokens:
  ```bash
  sudo cat /etc/kubernetes/pki/sa.pub
  ```

- Copy-paste the key in the "verify signature" area on https://jwt.io

- It should now say "Signature Verified"

]

???

:EN:- Authenticating with OIDC
:FR:- S'identifier avec OIDC

.debug[[k8s/openid-connect.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openid-connect.md)]

---

class: pic

.interstitial[]

---

name: toc-securing-the-control-plane
class: title
Securing the control plane

.nav[
[Previous part](#toc-openid-connect)
|
[Back to table of contents](#toc-part-6)
|
[Next part](#toc-network-policies)
]

.debug[(automatically generated title slide)]

---

# Securing the control plane

- Many components accept connections (and requests) from others:

  - API server

  - etcd

  - kubelet

- We must secure these connections:

  - to deny unauthorized requests

  - to prevent eavesdropping on secrets, tokens, and other sensitive information

- Disabling authentication and/or authorization is **strongly discouraged**

  (but it's possible to do it, e.g.
for learning / troubleshooting purposes) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## Authentication and authorization - Authentication (checking "who you are") is done with mutual TLS (both the client and the server need to hold a valid certificate) - Authorization (checking "what you can do") is done in different ways - the API server implements a sophisticated permission logic (with RBAC) - some services will defer authorization to the API server (through webhooks) - some services require a certificate signed by a particular CA / sub-CA .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## In practice - We will review the various communication channels in the control plane - We will describe how they are secured - When TLS certificates are used, we will indicate: - which CA signs them - what their subject (CN) should be, when applicable - We will indicate how to configure security (client- and server-side) .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## etcd peers - Replication and coordination of etcd happens on a dedicated port (typically port 2380; the default port for normal client connections is 2379) - Authentication uses TLS certificates with a separate sub-CA (otherwise, anyone with a Kubernetes client certificate could access etcd!) - The etcd command line flags involved are: `--peer-client-cert-auth=true` to activate it `--peer-cert-file`, `--peer-key-file`, `--peer-trusted-ca-file` .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## etcd clients - The only¹ thing that connects to etcd is the API server - Authentication uses TLS certificates with a separate sub-CA (for the same reasons as for etcd inter-peer authentication) - The etcd command line flags involved are: `--client-cert-auth=true` to activate it `--trusted-ca-file`, `--cert-file`, `--key-file` - The API server command line flags involved are: `--etcd-cafile`, `--etcd-certfile`, `--etcd-keyfile` .footnote[¹Technically, there is also the etcd healthcheck. Let's ignore it for now.] .debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)] --- ## etcd authorization - etcd supports RBAC, but Kubernetes doesn't use it by default (note: etcd RBAC is completely different from Kubernetes RBAC!) 
- By default, etcd access is "all or nothing"

  (if you have a valid certificate, you get in)

- Be very careful if you use the same root CA for etcd and other things

  (if etcd trusts the root CA, then anyone with a valid cert gets full etcd access)

- For more details, check the following resources:

  - [etcd documentation on authentication](https://etcd.io/docs/current/op-guide/authentication/)

  - [PKI The Wrong Way](https://www.youtube.com/watch?v=gcOLDEzsVHI) at KubeCon NA 2020

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## API server clients

- The API server has a sophisticated authentication and authorization system

- For connections coming from other components of the control plane:

  - authentication uses certificates (trusting the certificates' subject or CN)

  - authorization uses whatever mechanism is enabled (most often, RBAC)

- The relevant API server flags are:

  `--client-ca-file`, `--tls-cert-file`, `--tls-private-key-file`

- Each component connecting to the API server takes a `--kubeconfig` flag

  (to specify a kubeconfig file containing the CA cert, client key, and client cert)

- Yes, that kubeconfig file follows the same format as our `~/.kube/config` file!

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Kubelet and API server

- Communication between kubelet and API server can be established both ways

- Kubelet → API server:

  - kubelet registers itself ("hi, I'm node42, do you have work for me?")

  - connection is kept open and re-established if it breaks

  - that's how the kubelet knows which pods to start/stop

- API server → kubelet:

  - used to retrieve logs, exec, attach to containers

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Kubelet → API server

- Kubelet is started with a `--kubeconfig` flag pointing to the API server

- The client certificate of the kubelet will typically have:

  `CN=system:node:<node-name>` and group `O=system:nodes`

- Nothing special on the API server side

  (it will authenticate like any other client)

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]
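---

class: extra-details

## Peeking at a kubelet certificate

To see such a subject on a real cluster, we can inspect the kubelet's client certificate. This is a sketch for a kubeadm-provisioned node (the path is the kubeadm default, and it assumes the certificate is the first PEM block in that file; adjust as needed on other setups):

```bash
sudo openssl x509 -noout -subject \
     -in /var/lib/kubelet/pki/kubelet-client-current.pem
```

The subject should look like `O=system:nodes, CN=system:node:node1`.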
---

## API server → kubelet

- Kubelet is started with the flag `--client-ca-file`

  (typically using the same CA as the API server)

- API server will use a dedicated key pair when contacting kubelet

  (specified with `--kubelet-client-certificate` and `--kubelet-client-key`)

- Authorization uses webhooks

  (enabled with `--authorization-mode=Webhook` on kubelet)

- The webhook server is the API server itself

  (the kubelet sends back a request to the API server to ask, "can this person do that?")

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Scheduler

- The scheduler connects to the API server like an ordinary client

- The certificate of the scheduler will have `CN=system:kube-scheduler`

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Controller manager

- The controller manager is also a normal client to the API server

- Its certificate will have `CN=system:kube-controller-manager`

- If we use the CSR API, the controller manager needs the CA cert and key

  (passed with flags `--cluster-signing-cert-file` and `--cluster-signing-key-file`)

- We usually want the controller manager to generate tokens for service accounts

- These tokens deserve some details (on the next slide!)

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

class: extra-details

## How are these permissions set up?

- A bunch of roles and bindings are defined as constants in the API server code:

  [auth/authorizer/rbac/bootstrappolicy/policy.go](https://github.com/kubernetes/kubernetes/blob/release-1.19/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/policy.go#L188)

- They are created automatically when the API server starts:

  [registry/rbac/rest/storage_rbac.go](https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/registry/rbac/rest/storage_rbac.go#L140)

- We must use the correct Common Names (`CN`) for the control plane certificates

  (since the bindings defined above refer to these common names)

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Service account tokens

- Each time we create a service account, the controller manager generates a token

- These tokens are JWT tokens, signed with a particular key

- These tokens are used for authentication with the API server

  (and therefore, the API server needs to be able to verify their integrity)

- This uses another keypair:

  - the private key (used for signature) is passed to the controller manager
    (using flags `--service-account-private-key-file` and `--root-ca-file`)

  - the public key (used for verification) is passed to the API server

    (using flag `--service-account-key-file`)
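On a kubeadm cluster, one way to see both halves of that keypair wiring is to grep the static pod manifests (a sketch; paths are the kubeadm defaults):

```bash
sudo grep -e '--service-account' /etc/kubernetes/manifests/kube-controller-manager.yaml
sudo grep -e '--service-account' /etc/kubernetes/manifests/kube-apiserver.yaml
```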
.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## kube-proxy

- kube-proxy is "yet another API server client"

- In many clusters, it runs as a DaemonSet

- In that case, it will have its own Service Account and associated permissions

- It will authenticate using the token of that Service Account

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Webhooks

- We mentioned webhooks earlier; how does that really work?

- The Kubernetes API has special resource types to check permissions

- One of them is SubjectAccessReview

- To check if a particular user can do a particular action on a particular resource:

  - we prepare a SubjectAccessReview object

  - we send that object to the API server

  - the API server responds with allow/deny (and optional explanations)

- Using webhooks for authorization = sending SAR to authorize each request

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

## Subject Access Review

Here is an example showing how to check if `jean.doe` can `get` some `pods` in `kube-system`:

```bash
kubectl -v9 create -f- <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jean.doe
  resourceAttributes:
    namespace: kube-system
    verb: get
    resource: pods
EOF
```

(The `-v9` flag dumps the API requests and responses; the SubjectAccessReview that comes back carries the answer in its `status.allowed` field.)

.debug[[k8s/control-plane-auth.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/control-plane-auth.md)]

---

class: pic

.interstitial[]

---

name: toc-network-policies
class: title
Network Policies

.nav[
[Previous part](#toc-securing-the-control-plane)
|
[Back to table of contents](#toc-part-6)
|
[Next part](#toc-restricting-pod-permissions)
]

.debug[(automatically generated title slide)]

---

# Network Policies

- Namespaces help us to *organize* resources

- Namespaces do not provide isolation

- By default, every pod can contact every other pod

- By default, every service accepts traffic from anyone

- If we want this to be different, we need *network policies*

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## What's a network policy?

A network policy is defined by the following things:

- a *pod selector* indicating which pods it applies to

- a list of *ingress rules* indicating which inbound traffic is allowed

- a list of *egress rules* indicating which outbound traffic is allowed

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

class: extra-details

## Network policies apply to pods

- A network policy restricts the traffic of the pods that it selects;
the restriction cannot be overridden by a network policy selecting another pod

- This prevents an entity managing network policies in namespace A

  (but without permission to do so in namespace B)

  from adding network policies giving them access to namespace B

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## The rationale for network policies

- In network security, it is generally considered better to "deny all, then allow selectively"

  (The other approach, "allow all, then block selectively" makes it too easy to leave holes)

- As soon as one network policy selects a pod, the pod enters this "deny all" logic

- Further network policies can open additional access

- Good network policies should be scoped as precisely as possible

- In particular: make sure that the selector is not too broad

  (Otherwise, you end up affecting pods that were otherwise well secured)

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Our first network policy

This is our game plan:

- run a web server in a pod

- create a network policy to block all access to the web server

- create another network policy to allow access only from specific pods

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Running our test web server

.lab[

- Let's use the `nginx` image:
  ```bash
  kubectl create deployment testweb --image=nginx
  ```

- Find out the IP address of the pod with one of these two commands:
  ```bash
  kubectl get pods -o wide -l app=testweb
  IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
  ```

- Check that we can connect to the server:
  ```bash
  curl $IP
  ```

]

The `curl` command should show us the "Welcome to nginx!" page.

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Adding a very restrictive network policy

- The policy will select pods with the label `app=testweb`

- It will specify an empty list of ingress rules (matching nothing)

.lab[

- Apply the policy in this YAML file:
  ```bash
  kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml
  ```

- Check if we can still access the server:
  ```bash
  curl $IP
  ```

]

The `curl` command should now time out.
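To double-check which policies are now in effect, we can list and inspect them (a quick sketch; `deny-all-for-testweb` is the name used in the YAML shown on the next slide):

```bash
kubectl get networkpolicies
kubectl describe networkpolicy deny-all-for-testweb
```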
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Looking at the network policy

This is the file that we applied:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress: []
```

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Allowing connections only from specific pods

- We want to allow traffic from pods with the label `run=testcurl`

- Reminder: this label is automatically applied when we do `kubectl run testcurl ...`

.lab[

- Apply another policy:
  ```bash
  kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml
  ```

]

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Looking at the network policy

This is the second file that we applied:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl
```

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Testing the network policy

- Let's create pods with, and without, the required label

.lab[

- Try to connect to testweb from a pod with the `run=testcurl` label:
  ```bash
  kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP
  ```

- Try to connect to testweb with a different label:
  ```bash
  kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP
  ```

]

The first command will work (and show the "Welcome to nginx!" page).

The second command will fail and time out after 3 seconds.

(The timeout is obtained with the `-m3` option.)

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## An important warning

- Some network plugins only have partial support for network policies

- For instance, Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313)

  (released in July 2018)

- But it only added support for ipBlock [in version 2.5](https://github.com/weaveworks/weave/pull/3367)

  (released in Nov 2018)

- Unsupported features might be silently ignored

  (Making you believe that you are secure, when you're not)

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Network policies, pods, and services

- Network policies apply to *pods*

- A *service* can select multiple pods

  (And load balance traffic across them)

- It is possible that we can connect to some pods, but not some others

  (Because of how network policies have been defined for these pods)

- In that case, connections to the service will randomly pass or fail

  (Depending on whether the connection was sent to a pod that we have access to or not)

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Network policies and namespaces

- A good strategy is to isolate a namespace, so that:

  - all the pods in the namespace can communicate together

  - other namespaces cannot access the pods

  - external access has to be enabled explicitly

- Let's see what this would look like for the DockerCoins app!
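Once the policies from the next slides are in place, one way to verify the isolation is to connect from a freshly created namespace (a sketch; it assumes `$IP` still points to our `testweb` pod, and relies on the fact that a `podSelector` in an ingress rule only matches pods in the policy's own namespace):

```bash
kubectl create namespace netpol-test
kubectl run testcurl --namespace=netpol-test --rm -i --image=centos -- curl -m3 $IP
kubectl delete namespace netpol-test
```

The `curl` should time out, even though the pod carries the `run=testcurl` label.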
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Network policies for DockerCoins

- We are going to apply two policies

- The first policy will prevent traffic from other namespaces

- The second policy will allow traffic to the `webui` pods

- That's all we need for that app!

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Blocking traffic from other namespaces

This policy selects all pods in the current namespace.

It allows traffic only from pods in the current namespace.

(An empty `podSelector` means "all pods.")

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
```

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Allowing traffic to `webui` pods

This policy selects all pods with label `app=webui`.

It allows traffic from any source.

(An empty `from` field means "all sources.")

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      app: webui
  ingress:
  - from: []
```

.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)]

---

## Applying both network policies

- Both network policies are declared in the file [k8s/netpol-dockercoins.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/netpol-dockercoins.yaml)

.lab[

- Apply the network policies:
  ```bash
  kubectl apply -f ~/container.training/k8s/netpol-dockercoins.yaml
  ```

- Check that we can still access the web UI from outside
(and that the app is still working correctly!) - Check that we can't connect anymore to `rng` or `hasher` through their ClusterIP ] Note: using `kubectl proxy` or `kubectl port-forward` allows us to connect regardless of existing network policies. This allows us to debug and troubleshoot easily, without having to poke holes in our firewall. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)] --- ## Cleaning up our network policies - The network policies that we have installed block all traffic to the default namespace - We should remove them, otherwise further demos and exercises will fail! .lab[ - Remove all network policies: ```bash kubectl delete networkpolicies --all ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)] --- ## Protecting the control plane - Should we add network policies to block unauthorized access to the control plane? (etcd, API server, etc.) -- - At first, it seems like a good idea ... -- - But it *shouldn't* be necessary: - not all network plugins support network policies - the control plane is secured by other methods (mutual TLS, mostly) - the code running in our pods can reasonably expect to contact the API
(and it can do so safely thanks to the API permission model) - If we block access to the control plane, we might disrupt legitimate code - ...Without necessarily improving security .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)] --- ## Tools and resources - [Cilium Network Policy Editor](https://editor.cilium.io/) - [Tufin Network Policy Viewer](https://orca.tufin.io/netpol/) - [`kubectl np-viewer`](https://github.com/runoncloud/kubectl-np-viewer) (kubectl plugin) - Two resources by [Ahmet Alp Balkan](https://ahmet.im/): - a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017 - a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)] --- ## Documentation - As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point - The API documentation has a lot of detail about the format of various objects: - [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#networkpolicy-v1-networking-k8s-io) - [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#networkpolicyspec-v1-networking-k8s-io) - [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.28/#networkpolicyingressrule-v1-networking-k8s-io) - etc. ??? :EN:- Isolating workloads with Network Policies :FR:- Isolation réseau avec les *network policies* .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/netpol.md)] --- class: pic .interstitial[] --- name: toc-restricting-pod-permissions class: title Restricting Pod Permissions .nav[ [Previous part](#toc-network-policies) | [Back to table of contents](#toc-part-6) | [Next part](#toc-pod-security-policies) ] .debug[(automatically generated title slide)] --- # Restricting Pod Permissions - By default, our pods and containers can do *everything* (including taking over the entire cluster) - We are going to show an example of a malicious pod (which will give us root access to the whole cluster) - Then we will explain how to avoid this with admission control (PodSecurityAdmission, PodSecurityPolicy, or external policy engine) .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Setting up a namespace - For simplicity, let's work in a separate namespace - Let's create a new namespace called "green" .lab[ - Create the "green" namespace: ```bash kubectl create namespace green ``` - Change to that namespace: ```bash kns green ``` ] .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Creating a basic Deployment - Just to check that everything works correctly, deploy NGINX .lab[ - Create a Deployment using the official NGINX image: ```bash kubectl create deployment web --image=nginx ``` - Confirm that the Deployment, ReplicaSet, and Pod exist, and that the Pod is running: ```bash kubectl get all ``` ] .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## One example of malicious pods - We will now show an escalation technique in action 
- We will deploy a DaemonSet that adds our SSH key to the root account (on *each* node of the cluster) - The Pods of the DaemonSet will do so by mounting `/root` from the host .lab[ - Check the file `k8s/hacktheplanet.yaml` with a text editor: ```bash vim ~/container.training/k8s/hacktheplanet.yaml ``` - If you would like, change the SSH key (by changing the GitHub user name) ] .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Deploying the malicious pods - Let's deploy our "exploit"! .lab[ - Create the DaemonSet: ```bash kubectl create -f ~/container.training/k8s/hacktheplanet.yaml ``` - Check that the pods are running: ```bash kubectl get pods ``` - Confirm that the SSH key was added to the node's root account: ```bash sudo cat /root/.ssh/authorized_keys ``` ] .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Mitigations - This can be avoided with *admission control* - Admission control = filter for (write) API requests - Admission control can use: - plugins (compiled in API server; enabled/disabled by reconfiguration) - webhooks (registered dynamically) - Admission control has many other uses (enforcing quotas, adding ServiceAccounts automatically, etc.) .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Admission plugins - [PodSecurityPolicy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) (was removed in Kubernetes 1.25) - create PodSecurityPolicy resources - create Role that can `use` a PodSecurityPolicy - create RoleBinding that grants the Role to a user or ServiceAccount - [PodSecurityAdmission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) (alpha since Kubernetes 1.22, stable since 1.25) - use pre-defined policies (privileged, baseline, restricted) - label namespaces to indicate which policies they can use - optionally, define default rules (in the absence of labels) .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Dynamic admission - Leverage ValidatingWebhookConfigurations (to register a validating webhook) - Examples: [Kubewarden](https://www.kubewarden.io/) [Kyverno](https://kyverno.io/policies/pod-security/) [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) - Pros: available today; very flexible and customizable - Cons: performance and reliability of external webhook .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Validating Admission Policies - Alternative to validating admission webhooks - Evaluated in the API server (don't require an external server; don't add network latency) - Written in CEL (Common Expression Language) - alpha in K8S 1.26; beta in K8S 1.28; GA in K8S 1.30 - Can replace validating webhooks at least in simple cases - Can extend Pod Security Admission - Check [the documentation][vapdoc] for examples [vapdoc]: https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/ .debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)] --- ## Acronym salad - PSP = Pod Security Policy **(deprecated)** - an admission plugin called PodSecurityPolicy - a resource named PodSecurityPolicy (`apiVersion: 
policy/v1beta1`)

- PSA = Pod Security Admission

  - an admission plugin called PodSecurity, enforcing PSS

- PSS = Pod Security Standards

  - a set of 3 policies (privileged, baseline, restricted)

???

:EN:- Mechanisms to prevent pod privilege escalation
:FR:- Les mécanismes pour limiter les privilèges des pods

.debug[[k8s/pod-security-intro.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-intro.md)]

---

class: pic

.interstitial[]

---

name: toc-pod-security-policies
class: title
Pod Security Policies

.nav[
[Previous part](#toc-restricting-pod-permissions)
|
[Back to table of contents](#toc-part-6)
|
[Next part](#toc-pod-security-admission)
]

.debug[(automatically generated title slide)]

---

# Pod Security Policies

- "Legacy" policies

  (deprecated since Kubernetes 1.21; removed in 1.25)

- Superseded by Pod Security Standards + Pod Security Admission

  (available in alpha since Kubernetes 1.22; stable since 1.25)

- **Since Kubernetes 1.24 was EOL in July 2023, nobody should use PSPs anymore!**

- This section is here mostly for historical purposes, and can be skipped

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Pod Security Policies in theory

- To use PSPs, we need to activate their specific *admission controller*

- That admission controller will intercept each pod creation attempt

- It will look at:

  - *who/what* is creating the pod

  - which PodSecurityPolicies they can use

  - which PodSecurityPolicies can be used by the Pod's ServiceAccount

- Then it will compare the Pod with each PodSecurityPolicy one by one

- If a PodSecurityPolicy accepts all the parameters of the Pod, it is created

- Otherwise, the Pod creation is denied and it won't even show up in `kubectl get pods`

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Pod Security Policies fine print

- With RBAC, using a PSP corresponds to the verb `use` on the PSP

  (that makes sense, right?)

- If no PSP is defined, no Pod can be created

  (even by cluster admins)

- Pods that are already running are *not* affected

- If we create a Pod directly, it can use a PSP to which *we* have access

- If the Pod is created by e.g. a ReplicaSet or DaemonSet, it's different:

  - the ReplicaSet / DaemonSet controllers don't have access to *our* policies

  - therefore, we need to give access to the PSP to the Pod's ServiceAccount

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Pod Security Policies in practice

- We are going to enable the PodSecurityPolicy admission controller

- At that point, we won't be able to create any more pods (!)
- Then we will create a couple of PodSecurityPolicies - ...And associated ClusterRoles (giving `use` access to the policies) - Then we will create RoleBindings to grant these roles to ServiceAccounts - We will verify that we can't run our "exploit" anymore .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Enabling Pod Security Policies - To enable Pod Security Policies, we need to enable their *admission plugin* - This is done by adding a flag to the API server - On clusters deployed with `kubeadm`, the control plane runs in static pods - These pods are defined in YAML files located in `/etc/kubernetes/manifests` - Kubelet watches this directory - Each time a file is added/removed there, kubelet creates/deletes the corresponding pod - Updating a file causes the pod to be deleted and recreated .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Updating the API server flags - Let's edit the manifest for the API server pod .lab[ - Have a look at the static pods: ```bash ls -l /etc/kubernetes/manifests ``` - Edit the one corresponding to the API server: ```bash sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` ] .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Adding the PSP admission plugin - There should already be a line with `--enable-admission-plugins=...` - Let's add `PodSecurityPolicy` on that line .lab[ - Locate the line with `--enable-admission-plugins=` - Add `PodSecurityPolicy` It should read: `--enable-admission-plugins=NodeRestriction,PodSecurityPolicy` - Save, quit ] .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Waiting for the API server to restart - The kubelet detects that the file was modified - It kills the API server pod, and starts a new one - During that time, the API server is unavailable .lab[ - Wait until the API server is available again ] .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Check that the admission plugin is active - Normally, we can't create any Pod at this point .lab[ - Try to create a Pod directly: ```bash kubectl run testpsp1 --image=nginx --restart=Never ``` - Try to create a Deployment: ```bash kubectl create deployment testpsp2 --image=nginx ``` - Look at existing resources: ```bash kubectl get all ``` ] We can get hints at what's happening by looking at the ReplicaSet and Events. 
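For instance, here is one way to dig out those hints (a sketch; `testpsp2` matches the Deployment created above, and the exact messages will vary):

```bash
kubectl describe replicasets -l app=testpsp2
kubectl get events | grep -i forbidden
```

The ReplicaSet events should show Pod creation being denied by the admission plugin.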
.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Introducing our Pod Security Policies - We will create two policies: - privileged (allows everything) - restricted (blocks some unsafe mechanisms) - For each policy, we also need an associated ClusterRole granting *use* .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Creating our Pod Security Policies - We have a couple of files, each defining a PSP and associated ClusterRole: - k8s/psp-privileged.yaml: policy `privileged`, role `psp:privileged` - k8s/psp-restricted.yaml: policy `restricted`, role `psp:restricted` .lab[ - Create both policies and their associated ClusterRoles: ```bash kubectl create -f ~/container.training/k8s/psp-restricted.yaml kubectl create -f ~/container.training/k8s/psp-privileged.yaml ``` ] - The privileged policy comes from [the Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies) - The restricted policy is inspired by that same documentation page .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## Check that we can create Pods again - We haven't bound the policy to any user yet - But `cluster-admin` can implicitly `use` all policies .lab[ - Check that we can now create a Pod directly: ```bash kubectl run testpsp3 --image=nginx --restart=Never ``` - Create a Deployment as well: ```bash kubectl create deployment testpsp4 --image=nginx ``` - Confirm that the Deployment is *not* creating any Pods: ```bash kubectl get all ``` ] .debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)] --- ## What's going on? 
- We can create Pods directly (thanks to our root-like permissions)

- The Pods corresponding to a Deployment are created by the ReplicaSet controller

- The ReplicaSet controller does *not* have root-like permissions

- We need to either:

  - grant permissions to the ReplicaSet controller

  *or*

  - grant permissions to our Pods' ServiceAccount

- The first option would allow *anyone* to create pods

- The second option will allow us to scope the permissions better

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Binding the restricted policy

- Let's bind the role `psp:restricted` to ServiceAccount `green:default`

  (aka the default ServiceAccount in the green Namespace)

- This will allow Pod creation in the green Namespace

  (because these Pods will be using that ServiceAccount automatically)

.lab[

- Create the following RoleBinding:
  ```bash
  kubectl create rolebinding psp:restricted \
          --clusterrole=psp:restricted \
          --serviceaccount=green:default
  ```

]

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Trying it out

- The Deployments that we created earlier will *eventually* recover

  (the ReplicaSet controller periodically retries creating the missing Pods)

- If we create a new Deployment now, it should work immediately

.lab[

- Create a simple Deployment:
  ```bash
  kubectl create deployment testpsp5 --image=nginx
  ```

- Look at the Pods that have been created:
  ```bash
  kubectl get all
  ```

]

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Trying to hack the cluster

- Let's create the same DaemonSet we used earlier

.lab[

- Create a hostile DaemonSet:
  ```bash
  kubectl create -f ~/container.training/k8s/hacktheplanet.yaml
  ```

- Look at the state of the namespace:
  ```bash
  kubectl get all
  ```

]

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

class: extra-details

## What's in our restricted policy?

- The restricted PSP is similar to the one provided in the docs, but:

  - it allows containers to run as root

  - it doesn't drop capabilities

- Many containers run as root by default, and would require additional tweaks

- Many containers use e.g. `chown`, which requires a specific capability

  (that's the case for the NGINX official image, for instance)

- We still block: hostPath, privileged containers, and much more!

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

class: extra-details

## The case of static pods

- If we list the pods in the `kube-system` namespace, `kube-apiserver` is missing

- However, the API server is obviously running

  (otherwise, `kubectl get pods --namespace=kube-system` wouldn't work)

- The API server Pod is created directly by kubelet

  (without going through the PSP admission plugin)

- Then, kubelet creates a "mirror pod" representing that Pod in etcd

- That "mirror pod" creation goes through the PSP admission plugin

- And it gets blocked!

- This can be fixed by binding `psp:privileged` to group `system:nodes`

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## .warning[Before moving on...]
- Our cluster is currently broken

  (we can't create pods in namespaces kube-system, default, ...)

- We need to either:

  - disable the PSP admission plugin

  - allow use of PSP to relevant users and groups

- For instance, we could:

  - bind `psp:restricted` to the group `system:authenticated`

  - bind `psp:privileged` to the ServiceAccount `kube-system:default`

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

## Fixing the cluster

- Let's disable the PSP admission plugin

.lab[

- Edit the Kubernetes API server static pod manifest

- Remove the PSP admission plugin

- This can be done with this one-liner:
  ```bash
  sudo sed -i s/,PodSecurityPolicy// /etc/kubernetes/manifests/kube-apiserver.yaml
  ```

]

???

:EN:- Preventing privilege escalation with Pod Security Policies
:FR:- Limiter les droits des conteneurs avec les *Pod Security Policies*

.debug[[k8s/pod-security-policies.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-policies.md)]

---

class: pic

.interstitial[]

---

name: toc-pod-security-admission
class: title
Pod Security Admission

.nav[
[Previous part](#toc-pod-security-policies)
|
[Back to table of contents](#toc-part-6)
|
[Next part](#toc-resource-limits)
]

.debug[(automatically generated title slide)]

---

# Pod Security Admission

- "New" policies

  (available in alpha since Kubernetes 1.22, and GA since Kubernetes 1.25)

- Easier to use

  (doesn't require complex interaction between policies and RBAC)

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## PSA in theory

- Leans on PSS (Pod Security Standards)

- Defines three policies:

  - `privileged` (can do everything; for system components)

  - `restricted` (no root user; almost no capabilities)

  - `baseline` (in-between with reasonable defaults)

- Label namespaces to indicate which policies are allowed there

- Also supports setting global defaults

- Supports `enforce`, `audit`, and `warn` modes

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Pod Security Standards

- `privileged`

  - can do everything

- `baseline`

  - disables hostNetwork, hostPID, hostIPC, hostPorts, hostPath volumes

  - limits which SELinux/AppArmor profiles can be used

  - containers can still run as root and use most capabilities

- `restricted`

  - limits volumes to configMap, emptyDir, ephemeral, secret, PVC

  - containers can't run as root, only capability is NET_BIND_SERVICE

  - everything in `baseline` still applies (no privileged pods, hostPath, hostNetwork...)

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

class: extra-details

## Why `baseline` ≠ `restricted` ?

- `baseline` = should work for the vast majority of images

- `restricted` = better, but might break / require adaptation

- Many images run as root by default

- Some images use CAP_CHOWN (to `chown` files)

- Some programs use CAP_NET_RAW (e.g.
`ping`)

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Namespace labels

- Three optional labels can be added to namespaces:

  `pod-security.kubernetes.io/enforce`

  `pod-security.kubernetes.io/audit`

  `pod-security.kubernetes.io/warn`

- The values can be: `baseline`, `restricted`, `privileged`

  (setting it to `privileged` doesn't really do anything)

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## `enforce`, `audit`, `warn`

- `enforce` = prevents creation of non-compliant pods

- `warn` = allow creation but include a warning in the API response

  (will be visible e.g. in `kubectl` output)

- `audit` = allow creation but generate an API audit event

  (will be visible if API auditing has been enabled and configured)

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Blocking privileged pods

- Let's block `privileged` pods everywhere

- And issue warnings and audit events for anything that doesn't meet the `restricted` level

.lab[

- Set up the default policy for all namespaces:
  ```bash
  kubectl label namespaces \
          pod-security.kubernetes.io/enforce=baseline \
          pod-security.kubernetes.io/audit=restricted \
          pod-security.kubernetes.io/warn=restricted \
          --all
  ```

]

Note: warnings will be issued for infringing pods, but they won't be affected yet.

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

class: extra-details

## Check before you apply

- When adding an `enforce` policy, we see warnings

  (for the pods that would infringe that policy)

- It's possible to do a `--dry-run=server` to see these warnings

  (without applying the label)

- It will only show warnings for `enforce` policies

  (not `warn` or `audit`)

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Relaxing `kube-system`

- We have many system components in `kube-system`

- These pods aren't affected yet, but if there is a rolling update or something like that, the new pods won't be able to come up

.lab[

- Let's allow `privileged` pods in `kube-system`:
  ```bash
  kubectl label namespace kube-system \
          pod-security.kubernetes.io/enforce=privileged \
          pod-security.kubernetes.io/audit=privileged \
          pod-security.kubernetes.io/warn=privileged \
          --overwrite
  ```

]

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## What about new namespaces?
- If new namespaces are created, they will get default permissions

- We can change that by using an *admission configuration*

- Step 1: write an "admission configuration file"

- Step 2: make sure that file is readable by the API server

- Step 3: add a flag to the API server to read that file

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Admission Configuration

Let's use [k8s/admission-configuration.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/admission-configuration.yaml):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1alpha1
    kind: PodSecurityConfiguration
    defaults:
      enforce: baseline
      audit: baseline
      warn: baseline
    exemptions:
      usernames:
      - cluster-admin
      namespaces:
      - kube-system
```

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Copy the file to the API server

- We need the file to be available from the API server pod

- For convenience, let's copy it to `/etc/kubernetes/pki`

  (it's definitely not where it *should* be, but that'll do!)

.lab[

- Copy the file:
  ```bash
  sudo cp ~/container.training/k8s/admission-configuration.yaml \
          /etc/kubernetes/pki
  ```

]

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Reconfigure the API server

- We need to add a flag to the API server to use that file

.lab[

- Edit `/etc/kubernetes/manifests/kube-apiserver.yaml`

- In the list of `command` parameters, add:

  `--admission-control-config-file=/etc/kubernetes/pki/admission-configuration.yaml`

- Wait until the API server comes back online

]

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

## Test the new default policy

- Create a new Namespace

- Try to create the "hacktheplanet" DaemonSet in the new namespace

- We get a warning when creating the DaemonSet

- The DaemonSet is created

- But the Pods don't get created

???

:EN:- Preventing privilege escalation with Pod Security Admission
:FR:- Limiter les droits des conteneurs avec *Pod Security Admission*

.debug[[k8s/pod-security-admission.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pod-security-admission.md)]

---

class: pic

.interstitial[]

---

name: toc-resource-limits
class: title
Resource Limits

.nav[
[Previous part](#toc-pod-security-admission)
|
[Back to table of contents](#toc-part-7)
|
[Next part](#toc-defining-min-max-and-default-resources)
]

.debug[(automatically generated title slide)]

---

# Resource Limits

- We can attach resource indications to our pods

  (or rather: to the *containers* in our pods)

- We can specify *limits* and/or *requests*

- We can specify quantities of CPU and/or memory and/or ephemeral storage

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Requests vs limits

- *Requests* are *guaranteed reservations* of resources

- They are used for scheduling purposes

- Kubelet will use cgroups to e.g.
guarantee a minimum amount of CPU time - A container **can** use more than its requested resources - A container using *less* than what it requested should never be killed or throttled - A node **cannot** be overcommitted with requests (the sum of all requests **cannot** be higher than resources available on the node) - A small amount of resources is set aside for system components (this explains why there is a difference between "capacity" and "allocatable") .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Requests vs limits - *Limits* are "hard limits" (a container **cannot** exceed its limits) - They aren't taken into account by the scheduler - A container exceeding its memory limit is killed instantly (by the kernel out-of-memory killer) - A container exceeding its CPU limit is throttled - A container exceeding its disk limit is killed (usually with a small delay, since this is checked periodically by kubelet) - On a given node, the sum of all limits **can** be higher than the node size .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Compressible vs incompressible resources - CPU is a *compressible resource* - it can be preempted immediately without adverse effect - if we have N CPU and need 2N, we run at 50% speed - Memory is an *incompressible resource* - it needs to be swapped out to be reclaimed; and this is costly - if we have N GB RAM and need 2N, we might run at... 0.1% speed! - Disk is also an *incompressible resource* - when the disk is full, writes will fail - applications may or may not crash but persistent apps will be in trouble .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Running low on CPU - Two ways for a container to "run low" on CPU: - it's hitting its CPU limit - all CPUs on the node are at 100% utilization - The app in the container will run slower (compared to running without a limit, or if CPU cycles were available) - No other consequence (but this could affect SLA/SLO for latency-sensitive applications!) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: extra-details ## CPU limits implementation details - A container with a CPU limit will be "rationed" by the kernel - Every `cfs_period_us`, it will receive a CPU quota, like an "allowance" (that interval defaults to 100ms) - Once it has used its quota, it will be stalled until the next period - This can easily result in throttling for bursty workloads (see details on next slide) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: extra-details ## A bursty example - Web service receives one request per minute - Each request takes 1 second of CPU - Average load: 1.66% - Let's say we set a CPU limit of 10% - This means CPU quotas of 10ms every 100ms - Obtaining the quota for 1 second of CPU will take 10 seconds - Observed latency will be 10 seconds (... actually 9.9s) instead of 1 second (real-life scenarios will of course be less extreme, but they do happen!) 
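One way to find out whether a workload is being throttled like this is to check the cgroup statistics from inside the container (a sketch; the first path applies to cgroup v1, the second to cgroup v2):

```bash
cat /sys/fs/cgroup/cpu/cpu.stat 2>/dev/null  # cgroup v1
cat /sys/fs/cgroup/cpu.stat 2>/dev/null      # cgroup v2
# nr_throttled = number of periods during which the quota ran out
# throttled_time / throttled_usec = total time spent throttled
```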
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: extra-details ## Multi-core scheduling details - Each core gets a small share of the container's CPU quota (this avoids locking and contention on the "global" quota for the container) - By default, the kernel distributes that quota to CPUs in 5ms increments (tunable with `kernel.sched_cfs_bandwidth_slice_us`) - If a containerized process (or thread) uses up its local CPU quota: *it gets more from the "global" container quota (if there's some left)* - If it "yields" (e.g. sleeps for I/O) before using its local CPU quota: *the quota is **soon** returned to the "global" container quota, **minus** 1ms* .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: extra-details ## Low quotas on machines with many cores - The local CPU quota is not immediately returned to the global quota - this reduces locking and contention on the global quota - but this can cause starvation when many threads/processes become runnable - That 1ms that "stays" on the local CPU quota is often useful - if the thread/process becomes runnable, it can be scheduled immediately - again, this reduces locking and contention on the global quota - but if the thread/process doesn't become runnable, it is wasted! - this can become a huge problem on machines with many cores .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: extra-details ## CPU limits in a nutshell - Beware if you run small bursty workloads on machines with many cores! ("highly-threaded, user-interactive, non-cpu bound applications") - Check the `nr_throttled` and `throttled_time` metrics in `cpu.stat` - Possible solutions/workarounds: - be generous with the limits - make sure your kernel has the [appropriate patch](https://lkml.org/lkml/2019/5/17/581) - use [static CPU manager policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy) For more details, check [this blog post](https://erickhun.com/posts/kubernetes-faster-services-no-cpu-limits/) or these: ([part 1](https://engineering.indeedblog.com/blog/2019/12/unthrottled-fixing-cpu-limits-in-the-cloud/), [part 2](https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/)). .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Running low on memory - When the kernel runs low on memory, it starts to reclaim used memory - Option 1: free up some buffers and caches (fastest option; might affect performance if cache memory runs very low) - Option 2: swap, i.e. write to disk some memory of one process to give it to another (can have a huge negative impact on performance because disks are slow) - Option 3: terminate a process and reclaim all its memory (OOM or Out Of Memory Killer on Linux) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Memory limits on Kubernetes - Kubernetes *does not support swap* (but it may support it in the future, thanks to [KEP 2400]) - If a container exceeds its memory *limit*, it gets killed immediately - If a node memory usage gets too high, it will *evict* some pods (we say that the node is "under pressure", more on that in a bit!) 
[KEP 2400]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#implementation-history .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Running low on disk - When the kubelet runs low on disk, it starts to reclaim disk space (similarly to what the kernel does, but in different categories) - Option 1: garbage collect dead pods and containers (no consequence, but their logs will be deleted) - Option 2: remove unused images (no consequence, but these images will have to be repulled if we need them later) - Option 3: evict pods and remove them to reclaim their disk usage - Note: this only applies to *ephemeral storage*, not to e.g. Persistent Volumes! .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Ephemeral storage? - This includes: - the *read-write layer* of the container
  (any file creation/modification outside of its volumes)

  - `emptyDir` volumes mounted in the container

  - the container logs stored on the node

- This does not include:

  - the container image

  - other types of volumes (e.g. Persistent Volumes, `hostPath`, or `local` volumes)

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: extra-details

## Disk limit enforcement

- Disk usage is periodically measured by kubelet

  (with something equivalent to `du`)

- There can be a small delay before pod termination when disk limit is exceeded

- It's also possible to enable filesystem *project quotas*

  (e.g. with EXT4 or XFS)

- Remember that container logs are also accounted for!

  (container log rotation/retention is managed by kubelet)

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: extra-details

## `nodefs` and `imagefs`

- `nodefs` is the main filesystem of the node

  (holding, notably, `emptyDir` volumes and container logs)

- Optionally, the container engine can be configured to use an `imagefs`

- `imagefs` will store container images and container writable layers

- When there is a separate `imagefs`, its disk usage is tracked independently

- If `imagefs` usage gets too high, kubelet will remove old images first

  (conversely, if `nodefs` usage gets too high, kubelet won't remove old images)

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: extra-details

## CPU and RAM reservation

- Kubernetes passes resource requests and limits to the container engine

- The container engine applies these requests and limits with specific mechanisms

- Example: on Linux, this is typically done with control groups aka cgroups

- Most systems use cgroups v1, but cgroups v2 are slowly being rolled out

  (e.g. available in Ubuntu 22.04 LTS)

- Cgroups v2 have new, interesting features for memory control:

  - ability to set "minimum" memory amounts (to effectively reserve memory)

  - better control over the amount of swap used by a container

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: extra-details

## What's the deal with swap?

- With cgroups v1, it's not possible to disable swap for a cgroup

  (the closest option is to [reduce "swappiness"](https://unix.stackexchange.com/questions/77939/turning-off-swapping-for-only-one-process-with-cgroups))

- It is possible with cgroups v2

  (see the [kernel docs](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html) and the [fbatx docs](https://facebookmicrosites.github.io/cgroup2/docs/memory-controller.html#using-swap))

- Cgroups v2 aren't widely deployed yet

- The architects of Kubernetes wanted to ensure that Guaranteed pods never swap

- The simplest solution was to disable swap entirely

- Kubelet will refuse to start if it detects that swap is enabled! 
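For reference, the kubelet flag discussed on the following slides also has a configuration-file equivalent; a minimal sketch, assuming kubelet is started with `--config=/path/to/this/file`:

```yaml
# Sketch of a kubelet configuration file allowing kubelet to start
# even when swap is enabled on the node
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
```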
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Alternative point of view

- Swap enables paging¹ of anonymous² memory

- Even when swap is disabled, Linux will still page memory for:

  - executables, libraries

  - mapped files

- Disabling swap *will reduce performance and available resources*

- For a good time, read [kubernetes/kubernetes#53533](https://github.com/kubernetes/kubernetes/issues/53533)

- Also read this [excellent blog post about swap](https://jvns.ca/blog/2017/02/17/mystery-swap/)

¹Paging: reading/writing memory pages from/to disk to reclaim physical memory

²Anonymous memory: memory that is not backed by files or blocks

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Enabling swap anyway

- If you don't care that pods are swapping, you can enable swap

- You will need to add the flag `--fail-swap-on=false` to kubelet

  (remember: it won't otherwise start if it detects that swap is enabled)

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Pod quality of service

Each pod is assigned a QoS class (visible in `status.qosClass`).

- If limits = requests:

  - as long as the container uses less than the limit, it won't be affected

  - if all containers in a pod have *(limits=requests)*, QoS is considered "Guaranteed"

- If requests < limits:

  - as long as the container uses less than the request, it won't be affected

  - otherwise, it might be killed/evicted if the node gets overloaded

  - if at least one container has *(requests<limits)*, QoS is considered "Burstable"

- If a pod doesn't have any requests or limits, QoS is considered "BestEffort"

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Quality of service impact

- When a node is overloaded, BestEffort pods are killed first

- Then, Burstable pods that exceed their requests

- Burstable and Guaranteed pods below their requests are never killed

  (except if their node fails)

- If we only use Guaranteed pods, no pod should ever be killed

  (as long as they stay within their limits)

(Pod QoS is also explained in [this page](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) of the Kubernetes documentation and in [this blog post](https://medium.com/google-cloud/quality-of-service-class-qos-in-kubernetes-bb76a89eb2c6).) 
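For instance, a pod like this one (names are placeholders) would get the Guaranteed QoS class, since every container has limits equal to requests:

```yaml
# Hypothetical pod that would be classified as Guaranteed
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo    # placeholder name
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "250m"        # limits = requests for every resource
        memory: "256Mi"    # → status.qosClass will be "Guaranteed"
```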
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Specifying resources

- Resource requests are expressed at the *container* level

- CPU is expressed in "virtual CPUs"

  (corresponding to the virtual CPUs offered by some cloud providers)

- CPU can be expressed with a decimal value, or even a "milli" suffix

  (so 100m = 0.1)

- Memory and ephemeral disk storage are expressed in bytes

- These can have k, M, G, T, Ki, Mi, Gi, Ti suffixes

  (corresponding to 10^3, 10^6, 10^9, 10^12, 2^10, 2^20, 2^30, 2^40)

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Specifying resources in practice

This is what the spec of a Pod with resources will look like:

```yaml
containers:
- name: blue
  image: jpetazzo/color
  resources:
    limits:
      cpu: "100m"
      ephemeral-storage: 10M
      memory: "100Mi"
    requests:
      cpu: "10m"
      ephemeral-storage: 10M
      memory: "100Mi"
```

This set of resources makes sure that this service won't be killed (as long as it stays below 100 MB of RAM), but allows its CPU usage to be throttled if necessary.

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Default values

- If we specify a limit without a request: the request is set to the limit

- If we specify a request without a limit: there will be no limit

  (which means that the limit will be the size of the node)

- If we don't specify anything: the request is zero and the limit is the size of the node

*Unless there are default values defined for our namespace!*

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## We need to specify resource values

- If we do not set resource values at all:

  - the limit is "the size of the node"

  - the request is zero

- This is generally *not* what we want

  - a container without a limit can use up all the resources of a node

  - if the request is zero, the scheduler can't make a smart placement decision

- This is fine when learning/testing, absolutely not in production!

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## How should we set resources? 
- Option 1: manually, for each container - simple, effective, but tedious - Option 2: automatically, with the [Vertical Pod Autoscaler (VPA)][vpa] - relatively simple, very minimal involvement beyond initial setup - not compatible with HPAv1, can disrupt long-running workloads (see [limitations][vpa-limitations]) - Option 3: semi-automatically, with tools like [Robusta KRR][robusta] - good compromise between manual work and automation - Option 4: by creating LimitRanges in our Namespaces - relatively simple, but "one-size-fits-all" approach might not always work [robusta]: https://github.com/robusta-dev/krr [vpa]: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler [vpa-limitations]: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#known-limitations .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: pic .interstitial[] --- name: toc-defining-min-max-and-default-resources class: title Defining min, max, and default resources .nav[ [Previous part](#toc-resource-limits) | [Back to table of contents](#toc-part-7) | [Next part](#toc-namespace-quotas) ] .debug[(automatically generated title slide)] --- # Defining min, max, and default resources - We can create LimitRange objects to indicate any combination of: - min and/or max resources allowed per pod - default resource *limits* - default resource *requests* - maximal burst ratio (*limit/request*) - LimitRange objects are namespaced - They apply to their namespace only .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## LimitRange example ```yaml apiVersion: v1 kind: LimitRange metadata: name: my-very-detailed-limitrange spec: limits: - type: Container min: cpu: "100m" max: cpu: "2000m" memory: "1Gi" default: cpu: "500m" memory: "250Mi" defaultRequest: cpu: "500m" ``` .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Example explanation The YAML on the previous slide shows an example LimitRange object specifying very detailed limits on CPU usage, and providing defaults on RAM usage. Note the `type: Container` line: in the future, it might also be possible to specify limits per Pod, but it's not [officially documented yet](https://github.com/kubernetes/website/issues/9585). .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## LimitRange details - LimitRange restrictions are enforced only when a Pod is created (they don't apply retroactively) - They don't prevent creation of e.g. an invalid Deployment or DaemonSet (but the pods will not be created as long as the LimitRange is in effect) - If there are multiple LimitRange restrictions, they all apply together (which means that it's possible to specify conflicting LimitRanges,
preventing any Pod from being created) - If a LimitRange specifies a `max` for a resource but no `default`,
that `max` value becomes the `default` limit too .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- class: pic .interstitial[] --- name: toc-namespace-quotas class: title Namespace quotas .nav[ [Previous part](#toc-defining-min-max-and-default-resources) | [Back to table of contents](#toc-part-7) | [Next part](#toc-limiting-resources-in-practice) ] .debug[(automatically generated title slide)] --- # Namespace quotas - We can also set quotas per namespace - Quotas apply to the total usage in a namespace (e.g. total CPU limits of all pods in a given namespace) - Quotas can apply to resource limits and/or requests (like the CPU and memory limits that we saw earlier) - Quotas can also apply to other resources: - "extended" resources (like GPUs) - storage size - number of objects (number of pods, services...) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Creating a quota for a namespace - Quotas are enforced by creating a ResourceQuota object - ResourceQuota objects are namespaced, and apply to their namespace only - We can have multiple ResourceQuota objects in the same namespace - The most restrictive values are used .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Limiting total CPU/memory usage - The following YAML specifies an upper bound for *limits* and *requests*: ```yaml apiVersion: v1 kind: ResourceQuota metadata: name: a-little-bit-of-compute spec: hard: requests.cpu: "10" requests.memory: 10Gi limits.cpu: "20" limits.memory: 20Gi ``` These quotas will apply to the namespace where the ResourceQuota is created. .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Limiting number of objects - The following YAML specifies how many objects of specific types can be created: ```yaml apiVersion: v1 kind: ResourceQuota metadata: name: quota-for-objects spec: hard: pods: 100 services: 10 secrets: 10 configmaps: 10 persistentvolumeclaims: 20 services.nodeports: 0 services.loadbalancers: 0 count/roles.rbac.authorization.k8s.io: 10 ``` (The `count/` syntax allows limiting arbitrary objects, including CRDs.) 
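For instance, the same syntax can count custom resources; a sketch assuming a hypothetical `crontabs.stable.example.com` CRD is installed:

```yaml
# Hypothetical quota limiting the number of objects of a CRD
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-for-crds
spec:
  hard:
    # count/<resource>.<group> works for custom resources too
    count/crontabs.stable.example.com: 5
```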
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## YAML vs CLI

- Quotas can be created with a YAML definition

- ...Or with the `kubectl create quota` command

- Example:
  ```bash
  kubectl create quota my-resource-quota --hard=pods=300,limits.memory=300Gi
  ```

- With both the YAML and the CLI forms, the values are always under the `hard` section

  (there is no `soft` quota)

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Viewing current usage

When a ResourceQuota is created, we can see how much of it is used:

```
kubectl describe resourcequota my-resource-quota
Name:                   my-resource-quota
Namespace:              default
Resource                Used  Hard
--------                ----  ----
pods                    12    100
services                1     5
services.loadbalancers  0     0
services.nodeports      0     0
```

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Advanced quotas and PriorityClass

- Pods can have a *priority*

- The priority is a number from 0 to 1000000000

  (or even higher for system-defined priorities)

- High number = high priority = "more important" Pod

- Pods with a higher priority can *preempt* Pods with lower priority

  (= low priority pods will be *evicted* if needed)

- Useful when mixing workloads in resource-constrained environments

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Setting the priority of a Pod

- Create a PriorityClass

  (or use an existing one)

- When creating the Pod, set the field `spec.priorityClassName`

- If the field is not set:

  - if there is a PriorityClass with `globalDefault`, it is used

  - otherwise, the default priority will be zero

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: extra-details

## PriorityClass and ResourceQuotas

- A ResourceQuota can include a list of *scopes* or a *scope selector*

- In that case, the quota will only apply to the scoped resources

- Example: limit the resources allocated to "high priority" Pods

- In that case, make sure that the quota is created in every Namespace

  (or use *admission configuration* to enforce it)

- See the [resource quotas documentation][quotadocs] for details

[quotadocs]: https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: pic

.interstitial[]

---

name: toc-limiting-resources-in-practice
class: title

Limiting resources in practice

.nav[
[Previous part](#toc-namespace-quotas)
|
[Back to table of contents](#toc-part-7)
|
[Next part](#toc-checking-node-and-pod-resource-usage)
]

.debug[(automatically generated title slide)]

---

# Limiting resources in practice

- We have at least three mechanisms:

  - requests and limits per Pod

  - LimitRange per namespace

  - ResourceQuota per namespace

- Let's see one possible strategy to get started with resource limits

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

## Set a LimitRange

- In each namespace, create a LimitRange object

- Set a small default CPU request and CPU limit

  (e.g. 
"100m") - Set a default memory request and limit depending on your most common workload - for Java, Ruby: start with "1G" - for Go, Python, PHP, Node: start with "250M" - Set upper bounds slightly below your expected node size (80-90% of your node size, with at least a 500M memory buffer) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Set a ResourceQuota - In each namespace, create a ResourceQuota object - Set generous CPU and memory limits (e.g. half the cluster size if the cluster hosts multiple apps) - Set generous objects limits - these limits should not be here to constrain your users - they should catch a runaway process creating many resources - example: a custom controller creating many pods .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Observe, refine, iterate - Observe the resource usage of your pods (we will see how in the next chapter) - Adjust individual pod limits - If you see trends: adjust the LimitRange (rather than adjusting every individual set of pod limits) - Observe the resource usage of your namespaces (with `kubectl describe resourcequota ...`) - Rinse and repeat regularly .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Underutilization - Remember: when assigning a pod to a node, the scheduler looks at *requests* (not at current utilization on the node) - If pods request resources but don't use them, this can lead to underutilization (because the scheduler will consider that the node is full and can't fit new pods) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Viewing a namespace limits and quotas - `kubectl describe namespace` will display resource limits and quotas .lab[ - Try it out: ```bash kubectl describe namespace default ``` - View limits and quotas for *all* namespaces: ```bash kubectl describe namespace ``` ] .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)] --- ## Additional resources - [A Practical Guide to Setting Kubernetes Requests and Limits](http://blog.kubecost.com/blog/requests-and-limits/) - explains what requests and limits are - provides guidelines to set requests and limits - gives PromQL expressions to compute good values
    (our app needs to be running for a while)

- [Kube Resource Report](https://codeberg.org/hjacobs/kube-resource-report)

  - generates web reports on resource usage

- [nsinjector](https://github.com/blakelead/nsinjector)

  - controller to automatically populate a Namespace when it is created

???

:EN:- Setting compute resource limits
:EN:- Defining default policies for resource usage
:EN:- Managing cluster allocation and quotas
:EN:- Resource management in practice

:FR:- Allouer et limiter les ressources des conteneurs
:FR:- Définir des ressources par défaut
:FR:- Gérer les quotas de ressources au niveau du cluster
:FR:- Conseils pratiques

.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/resource-limits.md)]

---

class: pic

.interstitial[]

---

name: toc-checking-node-and-pod-resource-usage
class: title

Checking Node and Pod resource usage

.nav[
[Previous part](#toc-limiting-resources-in-practice)
|
[Back to table of contents](#toc-part-7)
|
[Next part](#toc-cluster-sizing)
]

.debug[(automatically generated title slide)]

---

# Checking Node and Pod resource usage

- We've installed a few things on our cluster so far

- How many resources (CPU, RAM) are we using?

- We need metrics!

.lab[

- Let's try the following command:
  ```bash
  kubectl top nodes
  ```

]

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Is metrics-server installed?

- If we see a list of nodes, with CPU and RAM usage:

  *great, metrics-server is installed!*

- If we see `error: Metrics API not available`:

  *metrics-server isn't installed, so we'll install it!*

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## The resource metrics pipeline

- The `kubectl top` command relies on the Metrics API

- The Metrics API is part of the "[resource metrics pipeline]"

- The Metrics API isn't built into (nor served by) the Kubernetes API server

- It is made available through the [aggregation layer]

- It is usually served by a component called metrics-server

- It is optional (Kubernetes can function without it)

- It is necessary for some features (like the Horizontal Pod Autoscaler)

[resource metrics pipeline]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/
[aggregation layer]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Other ways to get metrics

- We could use a SaaS like Datadog, New Relic...

- We could use a self-hosted solution like Prometheus

- Or we could use metrics-server

- What's special about metrics-server?

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Pros/cons

Cons:

- no data retention (no history data, just instant numbers)

- only CPU and RAM of nodes and pods (no disk or network usage or I/O...) 
Pros:

- very lightweight

- doesn't require storage

- used by Kubernetes autoscaling

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Why metrics-server

- We may install something fancier later

  (think: Prometheus with Grafana)

- But metrics-server will work in *minutes*

- It will barely use resources on our cluster

- It's required for autoscaling anyway

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## How metrics-server works

- It runs a single Pod

- That Pod will fetch metrics from all our Nodes

- It will expose them through the Kubernetes API aggregation layer

  (we won't say much more about that aggregation layer; that's fairly advanced stuff!)

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Installing metrics-server

- In a lot of places, this is done with a little bit of custom YAML

  (derived from the [official installation instructions](https://github.com/kubernetes-sigs/metrics-server#installation))

- We can also use a Helm chart:
  ```bash
  helm upgrade --install metrics-server metrics-server \
    --create-namespace --namespace metrics-server \
    --repo https://kubernetes-sigs.github.io/metrics-server/ \
    --set args={--kubelet-insecure-tls=true}
  ```

- The `args` flag specified above should be sufficient on most clusters

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

class: extra-details

## Kubelet insecure TLS?

- The metrics-server collects metrics by connecting to kubelet

- The connection is secured by TLS

- This requires a valid certificate

- In some cases, the certificate is self-signed

- In other cases, it might be valid, but include only the node name

  (not its IP address, which is used by default by metrics-server)

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Testing metrics-server

- After a minute or two, metrics-server should be up

- We should now be able to check the resource usage of our Nodes:
  ```bash
  kubectl top nodes
  ```

- And the resource usage of our Pods, too:
  ```bash
  kubectl top pods --all-namespaces
  ```

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Keep some padding

- The RAM usage that we see should correspond more or less to the Resident Set Size

- Our pods also need some extra space for buffers, caches...

- Do not aim for 100% memory usage!

- Some more realistic targets:

  50% (for workloads with disk I/O and leveraging caching)

  90% (on very big nodes with mostly CPU-bound workloads)

  75% (anywhere in between!)

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

## Other tools

- kube-capacity is a great CLI tool to view resources

  (https://github.com/robscott/kube-capacity)

- It can show resource requests and limits, and compare them with usage

- It can show utilization per node, or per pod

- kube-resource-report can generate HTML reports

  (https://codeberg.org/hjacobs/kube-resource-report)

??? 
:EN:- The resource metrics pipeline
:EN:- Installing metrics-server

:FR:- Le *resource metrics pipeline*
:FR:- Installation de metrics-server

.debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/metrics-server.md)]

---

class: pic

.interstitial[]

---

name: toc-cluster-sizing
class: title

Cluster sizing

.nav[
[Previous part](#toc-checking-node-and-pod-resource-usage)
|
[Back to table of contents](#toc-part-7)
|
[Next part](#toc-disruptions)
]

.debug[(automatically generated title slide)]

---

# Cluster sizing

- What happens when the cluster gets full?

- How can we scale up the cluster?

- Can we do it automatically?

- What are other methods to address capacity planning?

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## When are we out of resources?

- kubelet monitors node resources:

  - memory

  - node disk usage (typically the root filesystem of the node)

  - image disk usage (where container images and RW layers are stored)

- For each resource, we can provide two thresholds:

  - a hard threshold (if it's met, it provokes immediate action)

  - a soft threshold (provokes action only after a grace period)

- Resource thresholds and grace periods are configurable

  (by passing kubelet command-line flags)

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## What happens then?

- If disk usage is too high:

  - kubelet will try to remove terminated pods

  - then, it will try to *evict* pods

- If memory usage is too high:

  - it will try to evict pods

- The node is marked as "under pressure"

- This temporarily prevents new pods from being scheduled on the node

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## Which pods get evicted?

- kubelet looks at the pods' QoS and PriorityClass

- First, pods with BestEffort QoS are considered

- Then, pods with Burstable QoS exceeding their *requests*

  (but only if the exceeding resource is the one that is low on the node)

- Finally, pods with Guaranteed QoS, and Burstable pods within their requests

- Within each group, pods are sorted by PriorityClass

- If there are pods with the same PriorityClass, they are sorted by usage excess

  (i.e. the pods whose usage exceeds their requests the most are evicted first)

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

class: extra-details

## Eviction of Guaranteed pods

- *Normally*, pods with Guaranteed QoS should not be evicted

- A chunk of resources is reserved for node processes (like kubelet)

- It is expected that these processes won't use more than this reservation

- If they do use more resources anyway, all bets are off!

- If this happens, kubelet must evict Guaranteed pods to preserve node stability

  (or Burstable pods that are still within their requested usage)

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## What happens to evicted pods? 
- The pod is terminated

- It is marked as `Failed` at the API level

- If the pod was created by a controller, the controller will recreate it

- The pod will be recreated on another node, *if there are resources available!*

- For more details about the eviction process, see:

  - [this documentation page](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) about resource pressure and pod eviction,

  - [this other documentation page](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) about pod priority and preemption.

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## What if there are no resources available?

- Sometimes, a pod cannot be scheduled anywhere:

  - all the nodes are under pressure,

  - or the pod requests more resources than are available

- The pod then remains in `Pending` state until the situation improves

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## Cluster scaling

- One way to improve the situation is to add new nodes

- This can be done automatically with the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)

- The autoscaler will automatically scale up:

  - if there are pods that failed to be scheduled

- The autoscaler will automatically scale down:

  - if nodes have a low utilization for an extended period of time

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## Restrictions, gotchas ...

- The Cluster Autoscaler only supports a few cloud infrastructures

  (see the [kubernetes/autoscaler repo][kubernetes-autoscaler-repo] for a list)

- The Cluster Autoscaler cannot scale down nodes that have pods using:

  - local storage

  - affinity/anti-affinity rules preventing them from being rescheduled

  - a restrictive PodDisruptionBudget

[kubernetes-autoscaler-repo]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

## Another way to do capacity planning

- "Running Kubernetes without nodes"

- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or [Kiyot](https://static.elotl.co/docs/latest/kiyot/kiyot.html) can run pods using on-demand resources

  - Virtual Kubelet can leverage e.g. ACI or Fargate to run pods

  - Kiyot runs pods in ad-hoc EC2 instances (1 instance per pod)

- Economic advantage (no wasted capacity)

- Security advantage (stronger isolation between pods)

Check [this blog post](http://jpetazzo.github.io/2019/02/13/running-kubernetes-without-nodes-with-kiyot/) for more details.

???

:EN:- What happens when the cluster is at, or over, capacity
:EN:- Cluster sizing and scaling

:FR:- Ce qui se passe quand il n'y a plus assez de ressources
:FR:- Dimensionner et redimensionner ses clusters

.debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/cluster-sizing.md)]

---

class: pic

.interstitial[]

---

name: toc-disruptions
class: title

Disruptions

.nav[
[Previous part](#toc-cluster-sizing)
|
[Back to table of contents](#toc-part-7)
|
[Next part](#toc-the-horizontal-pod-autoscaler)
]

.debug[(automatically generated title slide)]

---

# Disruptions

In a perfect world... 
- hardware never fails

- software never has bugs

- ...and never needs to be updated

- ...and uses a predictable amount of resources

- ...and these resources are infinite anyways

- network latency and packet loss are zero

- humans never make mistakes

--

😬

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Disruptions

In the real world...

- hardware will fail randomly (without advance notice)

- software has bugs

- ...and we constantly add new features

- ...and will sometimes use more resources than expected

- ...and these resources are limited

- network latency and packet loss are NOT zero

- humans make mistakes (shutting down the wrong machine, the wrong app...)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Disruptions

- In Kubernetes, a "disruption" is something that stops the execution of a Pod

- There are **voluntary** and **involuntary** disruptions

  - voluntary = directly initiated by humans (including by mistake!)

  - involuntary = everything else

- In this section, we're going to see what they are and how to prevent them

  (or at least, mitigate their effects)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Node outage

- Example: hardware failure (server or network), low-level error

  (includes kernel bugs, issues affecting underlying hypervisors or infrastructure...)

- **Involuntary** disruption

  (even if it results from human error!)

- Consequence: all workloads on that node become unresponsive

- Mitigations:

  - scale workloads to at least 2 replicas (or more if quorum is needed)

  - add anti-affinity scheduling constraints (to avoid having all pods on the same node)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Node outage play-by-play

- Node goes down (or is disconnected from the network)

- Its lease (in Namespace `kube-node-lease`) doesn't get renewed

- Controller manager detects that and marks the node as "unreachable"

  (this adds both `NoSchedule` and `NoExecute` taints to the node)

- Eventually, the `NoExecute` taint will evict these pods

- This will trigger creation of replacement pods by owner controllers

  (except for pods with a stable network identity, e.g. in a Stateful Set!)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Node outage notes

- By default, pods will tolerate the `unreachable:NoExecute` taint for 5 minutes

  (toleration automatically added by Admission controller `DefaultTolerationSeconds`)

- Pods of a Stateful Set don't recover automatically:

  - as long as the Pod exists, a replacement Pod can't be created

  - the Pod will exist as long as its Node exists

  - deleting the Node (manually or automatically) will recover the Pod

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Memory/disk pressure

- Example: available memory on a node goes below a specific threshold

  (because a pod is using too much memory and no limit was set)

- **Involuntary** disruption

- Consequence: kubelet starts to *evict* some pods

- Mitigations:

  - set *resource limits* on containers to prevent them from using too many resources

  - set *resource requests* on containers to make sure they don't get evicted
    (as long as they use less than what they requested)

  - make sure that apps don't use more resources than what they've requested

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Memory/disk pressure play-by-play

- Memory leak in an application container, slowly causing very high memory usage

- Overall free memory on the node goes below the *soft* or the *hard* threshold

  (default hard threshold = 100Mi; default soft threshold = none)

- When reaching the *soft* threshold:

  - kubelet waits until the "eviction soft grace period" expires

  - then (if resource usage is still above the threshold) it gracefully evicts pods

- When reaching the *hard* threshold:

  - kubelet immediately and forcefully evicts pods

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Which pods are evicted?

- Kubelet only considers pods that are using *more* than what they requested

  (and only for the resource that is under pressure, e.g. RAM or disk usage)

- First, it sorts pods by *priority¹*

  (as set with the `priorityClassName` in the pod spec)

- Then, by how much their resource usage exceeds their request

  (again, for the resource that is under pressure)

- It evicts pods until enough resources have been freed up

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Soft (graceful) vs hard (forceful) eviction

- Soft eviction = graceful shutdown of the pod

  (honors the pod's `terminationGracePeriodSeconds` timeout)

- Hard eviction = immediate shutdown of the pod

  (kills all containers immediately)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Memory/disk pressure notes

- If resource usage increases *very fast*, kubelet might not catch it fast enough

- For memory: this will trigger the kernel out-of-memory killer

  - containers killed by OOM are automatically restarted (no eviction)

  - eviction might happen at a later point though (if memory usage stays high)

- For disk: there is no "out-of-disk" killer, but writes will fail

  - the `write` system call fails with `errno = ENOSPC` / `No space left on device`

  - eviction typically happens shortly after (when kubelet catches up)

- When workloads rely a lot on disk/memory bursts, using `priorityClasses` might help

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Memory/disk pressure delays

- By default, no soft threshold is defined

- Defining it requires setting both the threshold and the grace period

- Grace periods can be different for the different types of resources

- When a node is under pressure, kubelet places a `NoSchedule` taint

  (to avoid adding more pods while the node is under pressure)

- Once the node is no longer under pressure, kubelet clears the taint

  (after waiting an extra timeout, `evictionPressureTransitionPeriod`, 5 min by default)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Accidental deletion

- Example: developer deletes the wrong Deployment, the wrong Namespace...

- **Voluntary** disruption

  (from Kubernetes' perspective!)

- Consequence: application is down

- Mitigations:

  - only deploy to production systems through e.g. gitops workflows

  - enforce peer review of changes

  - only give users limited (e.g. 
read-only) access to production systems - use canary deployments (might not catch all mistakes though!) .debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)] --- ## Bad code deployment - Example: critical bug introduced, application crashes immediately or is non-functional - **Voluntary** disruption (again, from Kubernetes' perspective!) - Consequence: application is down - Mitigations: - readiness probes can mitigate immediate crashes
(rolling update continues only when enough pods are ready) - delayed crashes will require a rollback
(manual intervention, or automated by a canary system) .debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)] --- ## Node shutdown - Example: scaling down a cluster to save money - **Voluntary** disruption - Consequence: - all workloads running on that node are terminated - this might disrupt workloads that have too many replicas on that node - or workloads that should not be interrupted at all - Mitigations: - terminate workloads one at a time, coordinating with users -- 🤔 .debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)] --- ## Node shutdown - Example: scaling down a cluster to save money - **Voluntary** disruption - Consequence: - all workloads running on that node are terminated - this might disrupt workloads that have too many replicas on that node - or workloads that should not be interrupted at all - Mitigations: - ~~terminate workloads one at a time, coordinating with users~~ - use Pod Disruption Budgets .debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)] --- ## Pod Disruption Budgets - A PDB is a kind of *contract* between: - "admins" = folks maintaining the cluster (e.g. adding/removing/updating nodes) - "users" = folks deploying apps and workloads on the cluster - A PDB expresses something like: *in that particular set of pods, do not "disrupt" more than X at a time* - Examples: - in that set of frontend pods, do not disrupt more than 1 at a time - in that set of worker pods, always have at least 10 ready
    (do not disrupt them if it would bring down the number of ready pods below 10)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## PDB - user side

- Cluster users create a PDB with a manifest like this one:

  ```yaml
  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: my-pdb
  spec:
    #minAvailable: 2
    #minAvailable: 90%
    maxUnavailable: 1
    #maxUnavailable: 10%
    selector:
      matchLabels:
        app: my-app
  ```

- The PDB must indicate either `minAvailable` or `maxUnavailable`

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Rounding logic

- Percentages are rounded **up**

- When specifying `maxUnavailable` as a percentage, this can result in a higher percentage

  (e.g. `maxUnavailable: 50%` with 3 pods can result in 2 pods being unavailable!)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Unmanaged pods

- Specifying `minAvailable: X` works all the time

- Specifying `minAvailable: X%` or `maxUnavailable` requires *managed pods*

  (pods that belong to a controller, e.g. Replica Set, Stateful Set...)

- This is because the PDB controller needs to know the total number of pods

  (given by the `replicas` field, not merely by counting pod objects)

- The PDB controller will try to resolve the controller using the pod selector

- If that fails, the PDB controller will emit warning events

  (visible with `kubectl describe pdb ...`)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Zero

- `maxUnavailable: 0` means "do not disrupt my pods"

- Same thing if `minAvailable` is greater than or equal to the number of pods

- In that case, cluster admins are supposed to get in touch with cluster users

- This will prevent fully automated operation

  (and some cluster admins' automated systems might not honor that request)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## PDB - admin side

- As a cluster admin, we need to follow certain rules

- Only shut down (or restart) a node when no pods are running on that node

  (except system pods belonging to Daemon Sets)

- To remove pods running on a node, we should use the *eviction API*

  (which will check PDB constraints and honor them)

- To prevent new pods from being scheduled on a node, we can use a *taint*

- These operations are streamlined by `kubectl drain`, which will:

  - *cordon* the node (add a `NoSchedule` taint)

  - invoke the *eviction API* to remove pods while respecting their PDBs

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Theory vs practice

- `kubectl drain` won't evict pods using `emptyDir` volumes

  (unless the `--delete-emptydir-data` flag is passed as well)

- Make sure that `emptyDir` volumes don't hold anything important

  (they shouldn't, but... who knows!) 
- Kubernetes lacks a standard way for users to express:

  *this `emptyDir` volume can/cannot be safely deleted*

- If a PDB forbids an eviction, this requires manual coordination

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

class: extra-details

## Unhealthy pod eviction policy

- By default, unhealthy pods can only be evicted if PDB allows it

  (unhealthy = running, but not ready)

- In many cases, unhealthy pods aren't doing useful work anyway, and can safely be removed

- This behavior is enabled by setting the appropriate field in the PDB manifest:
  ```yaml
  spec:
    unhealthyPodEvictionPolicy: AlwaysAllow
  ```

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Node upgrade

- Example: upgrading kubelet or the Linux kernel on a node

- **Voluntary** disruption

- Consequence:

  - all workloads running on that node are temporarily interrupted, and restarted

  - this might disrupt these workloads

- Mitigations:

  - migrate workloads off the node first (as if we were shutting it down)

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Node upgrade notes

- Is it necessary to drain a node before doing an upgrade?

- From [the documentation][node-upgrade-docs]:

  *Draining nodes before upgrading kubelet ensures that pods are re-admitted and containers are re-created, which may be necessary to resolve some security issues or other important bugs.*

- It's *probably* safe to upgrade in-place for:

  - kernel upgrades

  - kubelet patch-level upgrades (1.X.Y → 1.X.Z)

- It's *probably* better to drain the node for minor-revision kubelet upgrades (1.X → 1.Y)

- When in doubt, test extensively in staging environments!

[node-upgrade-docs]: https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/#manual-deployments

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

## Manual rescheduling

- Example: moving workloads around to accommodate noisy neighbors or other issues

  (e.g. pod X is doing a lot of disk I/O and this is starving other pods)

- **Voluntary** disruption

- Consequence:

  - the moved workloads are temporarily interrupted

- Mitigations:

  - define an appropriate number of replicas, declare PDBs

  - use the [eviction API][eviction-API] to move workloads

[eviction-API]: https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/

???

:EN:- Voluntary and involuntary disruptions
:EN:- Pod Disruption Budgets

:FR:- "Disruptions" volontaires et involontaires
:FR:- Pod Disruption Budgets

.debug[[k8s/disruptions.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/disruptions.md)]

---

class: pic

.interstitial[]

---

name: toc-the-horizontal-pod-autoscaler
class: title

The Horizontal Pod Autoscaler

.nav[
[Previous part](#toc-disruptions)
|
[Back to table of contents](#toc-part-7)
|
[Next part](#toc-collecting-metrics-with-prometheus)
]

.debug[(automatically generated title slide)]

---

# The Horizontal Pod Autoscaler

- What is the Horizontal Pod Autoscaler, or HPA? 
- It is a controller that can perform *horizontal* scaling automatically

- Horizontal scaling = changing the number of replicas

  (adding/removing pods)

- Vertical scaling = changing the size of individual replicas

  (increasing/reducing CPU and RAM per pod)

- Cluster scaling = changing the size of the cluster

  (adding/removing nodes)

.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)]

---

## Principle of operation

- Each HPA resource (or "policy") specifies:

  - which object to monitor and scale (e.g. a Deployment, ReplicaSet...)

  - min/max scaling ranges (the max is a safety limit!)

  - a target resource usage (e.g. the default is CPU=80%)

- The HPA continuously monitors the CPU usage for the related object

- It computes how many pods should be running:

  `TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)`

- It scales the related object up/down to this target number of pods

.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)]

---

## Prerequisites

- The metrics server needs to be running

  (i.e. we need to be able to see pod metrics with `kubectl top pods`)

- The pods that we want to autoscale need to have resource requests

  (because the target CPU% is not absolute, but relative to the request)

- The latter actually makes a lot of sense:

  - if a Pod doesn't have a CPU request, it might be using 10% of CPU...

  - ...but only because there is no CPU time available!

  - this makes sure that we won't add pods to nodes that are already resource-starved

.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)]

---

## Testing the HPA

- We will start a CPU-intensive web service

- We will send some traffic to that service

- We will create an HPA policy

- The HPA will automatically scale up the service for us

.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)]

---

## A CPU-intensive web service

- Let's use `jpetazzo/busyhttp`

  (it is a web server that will use 1s of CPU for each HTTP request)

.lab[

- Deploy the web server:
  ```bash
  kubectl create deployment busyhttp --image=jpetazzo/busyhttp
  ```

- Expose it with a ClusterIP service:
  ```bash
  kubectl expose deployment busyhttp --port=80
  ```

- Get the ClusterIP allocated to the service, and store it in a variable:
  ```bash
  CLUSTERIP=$(kubectl get svc busyhttp -o jsonpath={.spec.clusterIP})
  ```

]

.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)]

---

## Monitor what's going on

- Let's start a bunch of commands to watch what is happening

.lab[

- Monitor pod CPU usage:
  ```bash
  watch kubectl top pods -l app=busyhttp
  ```

- Monitor service latency:
  ```bash
  httping http://$CLUSTERIP/
  ```

- Monitor cluster events:
  ```bash
  kubectl get events -w
  ```

]

.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)]

---

## Send traffic to the service

- We will use `ab` (Apache Bench) to send traffic

.lab[

- Send a lot of requests to the service, with a concurrency level of 3:
  ```bash
  ab -c 3 -n 100000 http://$CLUSTERIP/
  ```

]

The latency (reported by `httping`) should increase above 3s.

The CPU utilization should increase to 100%.

(The server is single-threaded and won't go above 100%.) 
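The next slide creates the HPA with `kubectl autoscale`; for reference, a roughly equivalent manifest sketch (assuming the cluster serves the `autoscaling/v2` API) would look like this:

```yaml
# Sketch of an HPA roughly equivalent to `kubectl autoscale ... --max=10`
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: busyhttp
spec:
  scaleTargetRef:              # the object to monitor and scale
    apiVersion: apps/v1
    kind: Deployment
    name: busyhttp
  minReplicas: 1
  maxReplicas: 10              # safety limit
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # % of the pods' CPU *requests*
```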
.debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Create an HPA policy - There is a helper command to do that for us: `kubectl autoscale` .lab[ - Create the HPA policy for the `busyhttp` deployment: ```bash kubectl autoscale deployment busyhttp --max=10 ``` ] By default, it will assume a target of 80% CPU usage. This can also be set with `--cpu-percent=`. -- *The autoscaler doesn't seem to work. Why?* .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- ## What did we miss? - The events stream gives us a hint, but to be honest, it's not very clear: `missing request for cpu` - We forgot to specify a resource request for our Deployment! - The HPA target is not an absolute CPU% - It is relative to the CPU requested by the pod .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Adding a CPU request - Let's edit the deployment and add a CPU request - Since our server can use up to 1 core, let's request 1 core .lab[ - Edit the Deployment definition: ```bash kubectl edit deployment busyhttp ``` - In the `containers` list, add the following block: ```yaml resources: requests: cpu: "1" ``` ] .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Results - After saving and quitting, a rolling update happens (if `ab` or `httping` exits, make sure to restart it) - It will take a minute or two for the HPA to kick in: - the HPA runs every 30 seconds by default - it needs to gather metrics from the metrics server first - If we scale further up (or down), the HPA will react after a few minutes: - it won't scale up if it already scaled in the last 3 minutes - it won't scale down if it already scaled in the last 5 minutes .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- ## What about other metrics? - The HPA in API group `autoscaling/v1` only supports CPU scaling - The HPA in API group `autoscaling/v2beta2` supports metrics from various API groups: - metrics.k8s.io, aka metrics server (per-Pod CPU and RAM) - custom.metrics.k8s.io, custom metrics per Pod - external.metrics.k8s.io, external metrics (not associated to Pods) - Kubernetes doesn't implement any of these API groups - Using these metrics requires [registering additional APIs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis) - The metrics provided by metrics server are standard; everything else is custom - For more details, see [this great blog post](https://medium.com/uptime-99/kubernetes-hpa-autoscaling-with-custom-and-external-metrics-da7f41ff7846) or [this talk](https://www.youtube.com/watch?v=gSiGFH4ZnS8) .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Cleanup - Since `busyhttp` uses CPU cycles, let's stop it before moving on .lab[ - Delete the `busyhttp` Deployment: ```bash kubectl delete deployment busyhttp ``` ] ??? 
:EN:- Auto-scaling resources :FR:- *Auto-scaling* (dimensionnement automatique) des ressources .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/horizontal-pod-autoscaler.md)] --- class: pic .interstitial[] --- name: toc-collecting-metrics-with-prometheus class: title Collecting metrics with Prometheus .nav[ [Previous part](#toc-the-horizontal-pod-autoscaler) | [Back to table of contents](#toc-part-8) | [Next part](#toc-extending-the-kubernetes-api) ] .debug[(automatically generated title slide)] --- # Collecting metrics with Prometheus - Prometheus is an open-source monitoring system including: - multiple *service discovery* backends to figure out which metrics to collect - a *scraper* to collect these metrics - an efficient *time series database* to store these metrics - a specific query language (PromQL) to query these time series - an *alert manager* to notify us according to metrics values or trends - We are going to use it to collect and query some metrics on our Kubernetes cluster .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Why Prometheus? - We don't endorse Prometheus more or less than any other system - It's relatively well integrated within the cloud-native ecosystem - It can be self-hosted (this is useful for tutorials like this) - It can be used for deployments of varying complexity: - one binary and 10 lines of configuration to get started - all the way to thousands of nodes and millions of metrics .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Exposing metrics to Prometheus - Prometheus obtains metrics and their values by querying *exporters* - An exporter serves metrics over HTTP, in plain text - This is what the *node exporter* looks like: http://demo.robustperception.io:9100/metrics - Prometheus itself exposes its own internal metrics, too: http://demo.robustperception.io:9090/metrics - If you want to expose custom metrics to Prometheus: - serve a text page like these, and you're good to go - libraries are available in various languages to help with quantiles etc. .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## How Prometheus gets these metrics - The *Prometheus server* will *scrape* URLs like these at regular intervals (by default: every minute; can be more/less frequent) - The list of URLs to scrape (the *scrape targets*) is defined in configuration .footnote[Worried about the overhead of parsing a text format?
Check this [comparison](https://github.com/RichiH/OpenMetrics/blob/master/markdown/protobuf_vs_text.md) of the text format with the (now deprecated) protobuf format!] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Defining scrape targets This is maybe the simplest configuration file for Prometheus: ```yaml scrape_configs: - job_name: 'prometheus' static_configs: - targets: ['localhost:9090'] ``` - In this configuration, Prometheus collects its own internal metrics - A typical configuration file will have multiple `scrape_configs` - In this configuration, the list of targets is fixed - A typical configuration file will use dynamic service discovery .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Service discovery This configuration file will leverage existing DNS `A` records: ```yaml scrape_configs: - ... - job_name: 'node' dns_sd_configs: - names: ['api-backends.dc-paris-2.enix.io'] type: 'A' port: 9100 ``` - In this configuration, Prometheus resolves the provided name(s) (here, `api-backends.dc-paris-2.enix.io`) - Each resulting IP address is added as a target on port 9100 .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Dynamic service discovery - In the DNS example, the names are re-resolved at regular intervals - As DNS records are created/updated/removed, scrape targets change as well - Existing data (previously collected metrics) is not deleted - Other service discovery backends work in a similar fashion .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Other service discovery mechanisms - Prometheus can connect to e.g. a cloud API to list instances - Or to the Kubernetes API to list nodes, pods, services ... - Or a service like Consul, Zookeeper, etcd, to list applications - The resulting configurations files are *way more complex* (but don't worry, we won't need to write them ourselves) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Time series database - We could wonder, "why do we need a specialized database?" - One metrics data point = metrics ID + timestamp + value - With a classic SQL or noSQL data store, that's at least 160 bits of data + indexes - Prometheus is way more efficient, without sacrificing performance (it will even be gentler on the I/O subsystem since it needs to write less) - Would you like to know more? Check this video: [Storage in Prometheus 2.0](https://www.youtube.com/watch?v=C4YV-9CrawA) by [Goutham V](https://twitter.com/putadent) at DC17EU .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Checking if Prometheus is installed - Before trying to install Prometheus, let's check if it's already there .lab[ - Look for services with a label `app=prometheus` across all namespaces: ```bash kubectl get services --selector=app=prometheus --all-namespaces ``` ] If we see a `NodePort` service called `prometheus-server`, we're good! (We can then skip to "Connecting to the Prometheus web UI".) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Running Prometheus on our cluster We need to: - Run the Prometheus server in a pod (using e.g. 
a Deployment to ensure that it keeps running) - Expose the Prometheus server web UI (e.g. with a NodePort) - Run the *node exporter* on each node (with a Daemon Set) - Set up a Service Account so that Prometheus can query the Kubernetes API - Configure the Prometheus server (storing the configuration in a Config Map for easy updates) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Helm charts to the rescue - To make our lives easier, we are going to use a Helm chart - The Helm chart will take care of all the steps explained above (including some extra features that we don't need, but won't hurt) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Step 1: install Helm - If we already installed Helm earlier, this command won't break anything .lab[ - Install the Helm CLI: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ | bash ``` ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Step 2: install Prometheus - The following command, just like the previous ones, is idempotent (it won't error out if Prometheus is already installed) .lab[ - Install Prometheus on our cluster: ```bash helm upgrade prometheus --install prometheus \ --repo https://prometheus-community.github.io/helm-charts \ --namespace prometheus --create-namespace \ --set server.service.type=NodePort \ --set server.service.nodePort=30090 \ --set server.persistentVolume.enabled=false \ --set alertmanager.enabled=false ``` ] Curious about all these flags? They're explained in the next slide. .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Explaining all the Helm flags - `helm upgrade prometheus` → upgrade the release named `prometheus`
(a "release" is an instance of an app deployed with Helm) - `--install` → if it doesn't exist, install it (instead of upgrading) - `prometheus` → use the chart named `prometheus` - `--repo ...` → the chart is located on the following repository - `--namespace prometheus` → put it in that specific namespace - `--create-namespace` → create the namespace if it doesn't exist - `--set ...` → here are some *values* to be used when rendering the chart's templates .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Values for the Prometheus chart Helm *values* are parameters to customize our installation. - `server.service.type=NodePort` → expose the Prometheus server with a NodePort - `server.service.nodePort=30090` → set the specific NodePort number to use - `server.persistentVolume.enabled=false` → do not use a PersistentVolumeClaim - `alertmanager.enabled=false` → disable the alert manager entirely .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Connecting to the Prometheus web UI - Let's connect to the web UI and see what we can do .lab[ - Figure out the NodePort that was allocated to the Prometheus server: ```bash kubectl get svc --all-namespaces | grep prometheus-server ``` - With your browser, connect to that port - It should be 30090 if we just installed Prometheus with the Helm chart! ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Querying some metrics - This is easy... if you are familiar with PromQL .lab[ - Click on "Graph", and in "expression", paste the following: ``` sum by (instance) ( irate( container_cpu_usage_seconds_total{ pod=~"worker.*" }[5m] ) ) ``` ] - Click on the blue "Execute" button and on the "Graph" tab just below - We see the cumulated CPU usage of worker pods for each node
(if we just deployed Prometheus, there won't be much data to see, though) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Getting started with PromQL - We can't learn PromQL in just 5 minutes - But we can cover the basics to get an idea of what is possible (and have some keywords and pointers) - We are going to break down the query above (building it one step at a time) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Graphing one metric across all tags This query will show us CPU usage across all containers: ``` container_cpu_usage_seconds_total ``` - The suffix of the metrics name tells us: - the unit (seconds of CPU) - that it's the total used since the container creation - Since it's a "total," it is an increasing quantity (we need to compute the derivative if we want e.g. CPU % over time) - We see that the metrics retrieved have *tags* attached to them .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Selecting metrics with tags This query will show us only metrics for worker containers: ``` container_cpu_usage_seconds_total{pod=~"worker.*"} ``` - The `=~` operator allows regex matching - We select all the pods with a name starting with `worker` (it would be better to use labels to select pods; more on that later) - The result is a smaller set of containers .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Transforming counters in rates This query will show us CPU usage % instead of total seconds used: ``` 100*irate(container_cpu_usage_seconds_total{pod=~"worker.*"}[5m]) ``` - The [`irate`](https://prometheus.io/docs/prometheus/latest/querying/functions/#irate) operator computes the "per-second instant rate of increase" - `rate` is similar but allows decreasing counters and negative values - with `irate`, if a counter goes back to zero, we don't get a negative spike - The `[5m]` tells how far to look back if there is a gap in the data - And we multiply with `100*` to get CPU % usage .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Aggregation operators This query sums the CPU usage per node: ``` sum by (instance) ( irate(container_cpu_usage_seconds_total{pod=~"worker.*"}[5m]) ) ``` - `instance` corresponds to the node on which the container is running - `sum by (instance) (...)` computes the sum for each instance - Note: all the other tags are collapsed (in other words, the resulting graph only shows the `instance` tag) - PromQL supports many more [aggregation operators](https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## What kind of metrics can we collect? - Node metrics (related to physical or virtual machines) - Container metrics (resource usage per container) - Databases, message queues, load balancers, ... (check out this [list of exporters](https://prometheus.io/docs/instrumenting/exporters/)!) - Instrumentation (=deluxe `printf` for our code) - Business metrics (customers served, revenue, ...) 
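Since instrumentation and business metrics were just mentioned, here is what such metrics could look like in the standard Prometheus text exposition format (the metric name, labels, and values are hypothetical, for illustration only):

```
# HELP orders_total Orders placed since the process started.
# TYPE orders_total counter
orders_total{payment="card"} 1027
orders_total{payment="cash"} 94
```

Any process that serves a page like this over HTTP can become a scrape target.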
.debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Node metrics - CPU, RAM, disk usage on the whole node - Total number of processes running, and their states - Number of open files, sockets, and their states - I/O activity (disk, network), per operation or volume - Physical/hardware (when applicable): temperature, fan speed... - ...and much more! .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Container metrics - Similar to node metrics, but not totally identical - RAM breakdown will be different - active vs inactive memory - some memory is *shared* between containers, and specially accounted for - I/O activity is also harder to track - async writes can cause deferred "charges" - some page-ins are also shared between containers For details about container metrics, see:
http://jpetazzo.github.io/2013/10/08/docker-containers-metrics/ .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Application metrics - Arbitrary metrics related to your application and business - System performance: request latency, error rate... - Volume information: number of rows in database, message queue size... - Business data: inventory, items sold, revenue... .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## Detecting scrape targets - Prometheus can leverage Kubernetes service discovery (with proper configuration) - Services or pods can be annotated with: - `prometheus.io/scrape: true` to enable scraping - `prometheus.io/port: 9090` to indicate the port number - `prometheus.io/path: /metrics` to indicate the URI (`/metrics` by default) - Prometheus will detect and scrape these (without needing a restart or reload) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## Querying labels - What if we want to get metrics for containers belonging to a pod tagged `worker`? - The cAdvisor exporter does not give us Kubernetes labels - Kubernetes labels are exposed through another exporter - We can see Kubernetes labels through the metric `kube_pod_labels` (each pod appears as a time series with a constant value of `1`) - Prometheus *kind of* supports "joins" between time series - But only if the names of the tags match exactly .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: extra-details ## What if the tags don't match? - Older versions of the cAdvisor exporter used the tag `pod_name` for the name of a pod - The Kubernetes service endpoints exporter uses the tag `pod` instead - See [this blog post](https://www.robustperception.io/exposing-the-software-version-to-prometheus) or [this other one](https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/) to see how to perform "joins" - Note that Prometheus cannot "join" time series with different labels (see [Prometheus issue #2204](https://github.com/prometheus/prometheus/issues/2204) for the rationale) - There is a workaround involving relabeling, but it's "not cheap" - see [this comment](https://github.com/prometheus/prometheus/issues/2204#issuecomment-261515520) for an overview - or [this blog post](https://5pi.de/2017/11/09/use-prometheus-vector-matching-to-get-kubernetes-utilization-across-any-pod-label/) for a complete description of the process .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- ## In practice - Grafana is a beautiful (and useful) frontend to display all kinds of graphs - Not everyone needs to know Prometheus, PromQL, Grafana, etc. - But in a team, it is valuable to have at least one person who knows them - That person can set up queries and dashboards for the rest of the team - It's a little bit like knowing how to optimize SQL queries, Dockerfiles... Don't panic if you don't know these tools! ...But make sure at least one person in your team is on it 💯 ???
:EN:- Collecting metrics with Prometheus :FR:- Collecter des métriques avec Prometheus .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/prometheus.md)] --- class: pic .interstitial[] --- name: toc-extending-the-kubernetes-api class: title Extending the Kubernetes API .nav[ [Previous part](#toc-collecting-metrics-with-prometheus) | [Back to table of contents](#toc-part-8) | [Next part](#toc-custom-resource-definitions) ] .debug[(automatically generated title slide)] --- # Extending the Kubernetes API There are multiple ways to extend the Kubernetes API. We are going to cover: - Controllers - Dynamic Admission Webhooks - Custom Resource Definitions (CRDs) - The Aggregation Layer But first, let's re(re)visit the API server ... .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Revisiting the API server - The Kubernetes API server is a central point of the control plane - Everything connects to the API server: - users (that's us, but also automation like CI/CD) - kubelets - network components (e.g. `kube-proxy`, pod network, NPC) - controllers; lots of controllers .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Some controllers - `kube-controller-manager` runs built-in controllers (watching Deployments, Nodes, ReplicaSets, and much more) - `kube-scheduler` runs the scheduler (it's conceptually not different from another controller) - `cloud-controller-manager` takes care of "cloud stuff" (e.g. provisioning load balancers, persistent volumes...) - Some components mentioned above are also controllers (e.g. Network Policy Controller) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## More controllers - Cloud resources can also be managed by additional controllers (e.g. the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller)) - Leveraging Ingress resources requires an Ingress Controller (many options available here; we can even install multiple ones!) - Many add-ons (including CRDs and operators) have controllers as well 🤔 *What's even a controller?!?* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## What's a controller? According to the [documentation](https://kubernetes.io/docs/concepts/architecture/controller/): *Controllers are **control loops** that
**watch** the state of your cluster,
then make or request changes where needed.* *Each controller tries to move the current cluster state closer to the desired state.* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## What controllers do - Watch resources - Make changes: - purely at the API level (e.g. Deployment, ReplicaSet controllers) - and/or configure resources (e.g. `kube-proxy`) - and/or provision resources (e.g. load balancer controller) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Extending Kubernetes with controllers - Random example: - watch resources like Deployments, Services ... - read annotations to configure monitoring - Technically, this is not extending the API (but it can still be very useful!) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Other ways to extend Kubernetes - Prevent or alter API requests before resources are committed to storage: *Admission Control* - Create new resource types leveraging Kubernetes storage facilities: *Custom Resource Definitions* - Create new resource types with different storage or different semantics: *Aggregation Layer* - Spoiler alert: often, we will combine multiple techniques (and involve controllers as well!) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Admission controllers - Admission controllers can vet or transform API requests - The diagram on the next slide shows the path of an API request (courtesy of Banzai Cloud) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- class: pic  .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Types of admission controllers - *Validating* admission controllers can accept/reject the API call - *Mutating* admission controllers can modify the API request payload - Both types can also trigger additional actions (e.g. 
automatically create a Namespace if it doesn't exist) - There are a number of built-in admission controllers (see [documentation](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do) for a list) - We can also dynamically define and register our own .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- class: extra-details ## Some built-in admission controllers - ServiceAccount: automatically adds a ServiceAccount to Pods that don't explicitly specify one - LimitRanger: applies resource constraints specified by LimitRange objects when Pods are created - NamespaceAutoProvision: automatically creates namespaces when an object is created in a non-existent namespace *Note: #1 and #2 are enabled by default; #3 is not.* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Dynamic Admission Control - We can set up *admission webhooks* to extend the behavior of the API server - The API server will submit incoming API requests to these webhooks - These webhooks can be *validating* or *mutating* - Webhooks can be set up dynamically (without restarting the API server) - To setup a dynamic admission webhook, we create a special resource: a `ValidatingWebhookConfiguration` or a `MutatingWebhookConfiguration` - These resources are created and managed like other resources (i.e. `kubectl create`, `kubectl get`...) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Webhook Configuration - A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains: - the address of the webhook - the authentication information to use with the webhook - a list of rules - The rules indicate for which objects and actions the webhook is triggered (to avoid e.g. triggering webhooks when setting up webhooks) - The webhook server can be hosted in or out of the cluster .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Dynamic Admission Examples - Policy control ([Kyverno](https://kyverno.io/), [Open Policy Agent](https://www.openpolicyagent.org/docs/latest/)) - Sidecar injection (Used by some service meshes) - Type validation (More on this later, in the CRD section) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Kubernetes API types - Almost everything in Kubernetes is materialized by a resource - Resources have a type (or "kind") (similar to strongly typed languages) - We can see existing types with `kubectl api-resources` - We can list resources of a given type with `kubectl get
` .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Creating new types - We can create new types with Custom Resource Definitions (CRDs) - CRDs are created dynamically (without recompiling or restarting the API server) - CRDs themselves are resources: - we can create a new type with `kubectl create` and some YAML - we can see all our custom types with `kubectl get crds` - After we create a CRD, the new type works just like built-in types .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Examples - Representing composite resources (e.g. clusters like databases, message queues ...) - Representing external resources (e.g. virtual machines, object store buckets, domain names ...) - Representing configuration for controllers and operators (e.g. custom Ingress resources, certificate issuers, backups ...) - Alternate representations of other objects; services and service instances (e.g. encrypted secrets, git endpoints ...) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## The aggregation layer - We can delegate entire parts of the Kubernetes API to external servers - This is done by creating APIService resources (check them with `kubectl get apiservices`!) - The APIService resource maps a type (kind) and version to an external service - All requests concerning that type are sent (proxied) to the external service - This allows us to have resources like CRDs, but that aren't stored in etcd - Example: `metrics-server` .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Why? - Using a CRD for live metrics would be extremely inefficient (etcd **is not** a metrics store; write performance is way too slow) - Instead, `metrics-server`: - collects metrics from kubelets - stores them in memory - exposes them as PodMetrics and NodeMetrics (in API group metrics.k8s.io) - is registered as an APIService .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Drawbacks - Requires a server - ... that implements a non-trivial API (aka the Kubernetes API semantics) - If we need REST semantics, CRDs are probably way simpler - *Sometimes* synchronizing external state with CRDs might do the trick (unless we want the external state to be our single source of truth) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- ## Documentation - [Custom Resource Definitions: when to use them](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) - [Custom Resource Definitions: how to use them](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) - [Built-in Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) - [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) - [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) ???
:EN:- Overview of Kubernetes API extensions :FR:- Comment étendre l'API Kubernetes .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/extending-api.md)] --- class: pic .interstitial[] --- name: toc-custom-resource-definitions class: title Custom Resource Definitions .nav[ [Previous part](#toc-extending-the-kubernetes-api) | [Back to table of contents](#toc-part-8) | [Next part](#toc-operators) ] .debug[(automatically generated title slide)] --- # Custom Resource Definitions - CRDs are one of the (many) ways to extend the API - CRDs can be defined dynamically (no need to recompile or reload the API server) - A CRD is defined with a CustomResourceDefinition resource (CustomResourceDefinition is conceptually similar to a *metaclass*) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Creating a CRD - We will create a CRD to represent different recipes of pizzas - We will be able to run `kubectl get pizzas` and it will list the recipes - Creating/deleting recipes won't do anything else (because we won't implement a *controller*) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## A bit of history Things related to Custom Resource Definitions: - Kubernetes 1.??: `apiextensions.k8s.io/v1beta1` introduced - Kubernetes 1.16: `apiextensions.k8s.io/v1` introduced - Kubernetes 1.22: `apiextensions.k8s.io/v1beta1` [removed][changes-in-122] - Kubernetes 1.25: [CEL validation rules available in beta][crd-validation-rules-beta] - Kubernetes 1.28: [validation ratcheting][validation-ratcheting] in [alpha][feature-gates] - Kubernetes 1.29: [CEL validation rules available in GA][cel-validation-rules] - Kubernetes 1.30: [validation ratcheting][validation-ratcheting] in [beta][feature-gates]; enabled by default [crd-validation-rules-beta]: https://kubernetes.io/blog/2022/09/23/crd-validation-rules-beta/ [cel-validation-rules]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules [validation-ratcheting]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/4008-crd-ratcheting [feature-gates]: https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/#feature-gates-for-alpha-or-beta-features [changes-in-122]: https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/ .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## First slice of pizza ```yaml apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: pizzas.container.training spec: group: container.training version: v1alpha1 scope: Namespaced names: plural: pizzas singular: pizza kind: Pizza shortNames: - piz ``` .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## The joys of API deprecation - Unfortunately, the CRD manifest on the previous slide is deprecated! 
- It is using `apiextensions.k8s.io/v1beta1`, which is dropped in Kubernetes 1.22 - We need to use `apiextensions.k8s.io/v1`, which is a little bit more complex (a few optional things become mandatory, see [this guide](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#customresourcedefinition-v122) for details) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Second slice of pizza - The next slide will show file [k8s/pizza-2.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/pizza-2.yaml) - Note the `spec.versions` list - we need exactly one version with `storage: true` - we can have multiple versions with `served: true` - `spec.versions[].schema.openAPIV3Schema` is required (and must be a valid OpenAPI schema; here it's a trivial one) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ```yaml apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: pizzas.container.training spec: group: container.training scope: Namespaced names: plural: pizzas singular: pizza kind: Pizza shortNames: - piz versions: - name: v1alpha1 served: true storage: true schema: openAPIV3Schema: type: object ``` .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Baking some pizza - Let's create the Custom Resource Definition for our Pizza resource .lab[ - Load the CRD: ```bash kubectl apply -f ~/container.training/k8s/pizza-2.yaml ``` - Confirm that it shows up: ```bash kubectl get crds ``` ] .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Creating custom resources The YAML below defines a resource using the CRD that we just created: ```yaml kind: Pizza apiVersion: container.training/v1alpha1 metadata: name: hawaiian spec: toppings: [ cheese, ham, pineapple ] ``` .lab[ - Try to create a few pizza recipes: ```bash kubectl apply -f ~/container.training/k8s/pizzas.yaml ``` ] .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Type validation - Recent versions of Kubernetes will issue errors about unknown fields - We need to improve our OpenAPI schema (to add e.g. the `spec.toppings` field used by our pizza resources) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## Creating a bland pizza - Let's try to create a pizza anyway! .lab[ - Only provide the most basic YAML manifest: ```bash
kubectl create -f- <<EOF
kind: Pizza
apiVersion: container.training/v1alpha1
metadata:
  name: bland
EOF
``` ] - With our trivial schema, this bland pizza is accepted (the schema doesn't require any field) - Stricter validation (better schemas, admission webhooks, CEL rules...) can go further: - preventing some updates
(e.g. major version downgrades) - checking a key or certificate format or validity - and much more! .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## CRDs in the wild - [gitkube](https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml) - [A redis operator](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml) - [cert-manager](https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml) *How big are these YAML files?* *What's the size (e.g. in lines) of each resource?* .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## CRDs in practice - Production-grade CRDs can be extremely verbose (because of the openAPI schema validation) - This can (and usually will) be managed by a framework .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## (Ab)using the API server - If we need to store something "safely" (as in: in etcd), we can use CRDs - This gives us primitives to read/write/list objects (and optionally validate them) - The Kubernetes API server can run on its own (without the scheduler, controller manager, and kubelets) - By loading CRDs, we can have it manage totally different objects (unrelated to containers, clusters, etc.) .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- ## What's next? - Creating a basic CRD is relatively straightforward - But CRDs generally require a *controller* to do anything useful - The controller will typically *watch* our custom resources (and take action when they are created/updated) - Most serious use-cases will also require *validation web hooks* - When our CRD data format evolves, we'll also need *conversion web hooks* - Doing all that work manually is tedious; use a framework! ??? :EN:- Custom Resource Definitions (CRDs) :FR:- Les CRDs *(Custom Resource Definitions)* .debug[[k8s/crd.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/crd.md)] --- class: pic .interstitial[] --- name: toc-operators class: title Operators .nav[ [Previous part](#toc-custom-resource-definitions) | [Back to table of contents](#toc-part-8) | [Next part](#toc-an-elasticsearch-operator) ] .debug[(automatically generated title slide)] --- # Operators The Kubernetes documentation describes the [Operator pattern] as follows: *Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop.* Another good definition from [CoreOS](https://coreos.com/blog/introducing-operators.html): *An operator represents **human operational knowledge in software,**
to reliably manage an application.* There are many different use cases spanning different domains; but the general idea is: *Manage some resources (that reside inside or outside the cluster),
using Kubernetes manifests and tooling.* [Operator pattern]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/ .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## Some use cases - Managing external resources ([AWS], [GCP], [KubeVirt]...) - Setting up database replication or distributed systems
(Cassandra, Consul, CouchDB, ElasticSearch, etcd, Kafka, MongoDB, MySQL, PostgreSQL, RabbitMQ, Redis, ZooKeeper...) - Running and configuring CI/CD
([ArgoCD], [Flux]), backups ([Velero]), policies ([Gatekeeper], [Kyverno])... - Automating management of certificates
([cert-manager]), secrets ([External Secrets Operator], [Sealed Secrets]...) - Configuration of cluster components ([Istio], [Prometheus]) - etc. [ArgoCD]: https://argoproj.github.io/cd/ [AWS]: https://aws-controllers-k8s.github.io/community/docs/community/services/ [cert-manager]: https://cert-manager.io/ [External Secrets Operator]: https://external-secrets.io/ [Flux]: https://fluxcd.io/ [Gatekeeper]: https://open-policy-agent.github.io/gatekeeper/website/docs/ [GCP]: https://github.com/paulczar/gcp-cloud-compute-operator [Istio]: https://istio.io/latest/docs/setup/install/operator/ [KubeVirt]: https://kubevirt.io/ [Kyverno]: https://kyverno.io/ [Prometheus]: https://prometheus-operator.dev/ [Sealed Secrets]: https://github.com/bitnami-labs/sealed-secrets [Velero]: https://velero.io/ .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## What are they made from? - Operators combine two things: - Custom Resource Definitions - controller code watching the corresponding resources and acting upon them - A given operator can define one or multiple CRDs - The controller code (control loop) typically runs within the cluster (running as a Deployment with 1 replica is a common scenario) - But it could also run elsewhere (nothing mandates that the code run on the cluster, as long as it has API access) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## Operators for e.g. replicated databases - Kubernetes gives us Deployments, StatefulSets, Services ... - These mechanisms give us building blocks to deploy applications - They work great for services that are made of *N* identical containers (like stateless ones) - They also work great for some stateful applications like Consul, etcd ... (with the help of highly persistent volumes) - They're not enough for complex services: - where different containers have different roles - where extra steps have to be taken when scaling or replacing containers .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## How operators work - An operator creates one or more CRDs (i.e., it creates new "Kinds" of resources on our cluster) - The operator also runs a *controller* that will watch its resources - Each time we create/update/delete a resource, the controller is notified (we could write our own cheap controller with `kubectl get --watch`) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- ## Operators are not magic - Look at this ElasticSearch resource definition: [k8s/eck-elasticsearch.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/eck-elasticsearch.yaml) - What should happen if we flip the TLS flag? Twice? - What should happen if we add another group of nodes? - What if we want different images or parameters for the different nodes? *Operators can be very powerful.
But we need to know exactly the scenarios that they can handle.* ??? :EN:- Kubernetes operators :FR:- Les opérateurs .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/operators.md)] --- class: pic .interstitial[] --- name: toc-an-elasticsearch-operator class: title An ElasticSearch Operator .nav[ [Previous part](#toc-operators) | [Back to table of contents](#toc-part-8) | [Next part](#toc-last-words) ] .debug[(automatically generated title slide)] --- # An ElasticSearch Operator - We will install [Elastic Cloud on Kubernetes](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html), an ElasticSearch operator - This operator requires PersistentVolumes - We will install Rancher's [local path storage provisioner](https://github.com/rancher/local-path-provisioner) to automatically create these - Then, we will create an ElasticSearch resource - The operator will detect that resource and provision the cluster - We will integrate that ElasticSearch cluster with other resources (Kibana, Filebeat, Cerebro ...) .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Installing a Persistent Volume provisioner (This step can be skipped if you already have a dynamic volume provisioner.) - This provisioner creates Persistent Volumes backed by `hostPath` (local directories on our nodes) - It doesn't require anything special ... - ... But losing a node = losing the volumes on that node! .lab[ - Install the local path storage provisioner: ```bash kubectl apply -f ~/container.training/k8s/local-path-storage.yaml ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Making sure we have a default StorageClass - The ElasticSearch operator will create StatefulSets - These StatefulSets will instantiate PersistentVolumeClaims - These PVCs need to be explicitly associated with a StorageClass - Or we need to tag a StorageClass to be used as the default one .lab[ - List StorageClasses: ```bash kubectl get storageclasses ``` ] We should see the `local-path` StorageClass. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Setting a default StorageClass - This is done by adding an annotation to the StorageClass: `storageclass.kubernetes.io/is-default-class: true` .lab[ - Tag the StorageClass so that it's the default one: ```bash kubectl annotate storageclass local-path \ storageclass.kubernetes.io/is-default-class=true ``` - Check the result: ```bash kubectl get storageclasses ``` ] Now, the StorageClass should have `(default)` next to its name. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Install the ElasticSearch operator - The operator provides: - a few CustomResourceDefinitions - a Namespace for its other resources - a ValidatingWebhookConfiguration for type checking - a StatefulSet for its controller and webhook code - a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions - All these resources are grouped in a convenient YAML file .lab[ - Install the operator: ```bash kubectl apply -f ~/container.training/k8s/eck-operator.yaml ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Check our new custom resources - Let's see which CRDs were created .lab[ - List all CRDs: ```bash kubectl get crds ``` ] This operator supports ElasticSearch, but also Kibana and APM. Cool! 
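To give an idea of what we'll be manipulating, here is a minimal sketch of a resource using one of these new CRDs (the version and node count below are illustrative placeholders; the manifest that we will actually apply is `eck-elasticsearch.yaml` from the repository):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: demo
spec:
  version: 7.17.0   # ElasticSearch version to deploy (placeholder)
  nodeSets:
  - name: default
    count: 1        # number of ElasticSearch nodes in this set
```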
.debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Create the `eck-demo` namespace - For clarity, we will create everything in a new namespace, `eck-demo` - This namespace is hard-coded in the YAML files that we are going to use - We need to create that namespace .lab[ - Create the `eck-demo` namespace: ```bash kubectl create namespace eck-demo ``` - Switch to that namespace: ```bash kns eck-demo ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- class: extra-details ## Can we use a different namespace? Yes, but then we need to update all the YAML manifests that we are going to apply in the next slides. The `eck-demo` namespace is hard-coded in these YAML manifests. Why? Because when defining a ClusterRoleBinding that references a ServiceAccount, we have to indicate in which namespace the ServiceAccount is located. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Create an ElasticSearch resource - We can now create a resource with `kind: Elasticsearch` - The YAML for that resource will specify all the desired parameters: - how many nodes we want - image to use - add-ons (kibana, cerebro, ...) - whether to use TLS or not - etc. .lab[ - Create our ElasticSearch cluster: ```bash kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Operator in action - Over the next few minutes, the operator will create our ES cluster - It will report our cluster status through the CRD .lab[ - Check the logs of the operator: ```bash stern --namespace=elastic-system operator ``` - Watch the status of the cluster through the CRD: ```bash kubectl get es -w ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Connecting to our cluster - It's not easy to use the ElasticSearch API from the shell - But let's at least check if ElasticSearch is up! .lab[ - Get the ClusterIP of our ES instance: ```bash kubectl get services ``` - Issue a request with `curl`: ```bash curl http://`CLUSTERIP`:9200 ``` ] We get an authentication error. Our cluster is protected! .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Obtaining the credentials - The operator creates a user named `elastic` - It generates a random password and stores it in a Secret .lab[ - Extract the password: ```bash kubectl get secret demo-es-elastic-user \ -o go-template="{{ .data.elastic | base64decode }} " ``` - Use it to connect to the API: ```bash curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200 ``` ] We should see a JSON payload with the `"You Know, for Search"` tagline. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Sending data to the cluster - Let's send some data to our brand new ElasticSearch cluster! 
- We'll deploy a filebeat DaemonSet to collect node logs .lab[ - Deploy filebeat: ```bash kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml ``` - Wait until some pods are up: ```bash watch kubectl get pods -l k8s-app=filebeat ``` - Check that a filebeat index was created: ```bash curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200/_cat/indices ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Deploying an instance of Kibana - Kibana can visualize the logs injected by filebeat - The ECK operator can also manage Kibana - Let's give it a try! .lab[ - Deploy a Kibana instance: ```bash kubectl apply -f ~/container.training/k8s/eck-kibana.yaml ``` - Wait for it to be ready: ```bash kubectl get kibana -w ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Connecting to Kibana - Kibana is automatically set up to connect to ElasticSearch (this is arranged by the YAML that we're using) - However, it will ask for authentication - It's using the same user/password as ElasticSearch .lab[ - Get the NodePort allocated to Kibana: ```bash kubectl get services ``` - Connect to it with a web browser - Use the same user/password as before ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Setting up Kibana After the Kibana UI loads, we need to click around a bit .lab[ - Pick "explore on my own" - Click on "Use Elasticsearch data / Connect to your Elasticsearch index" - Enter `filebeat-*` for the index pattern and click "Next step" - Select `@timestamp` as the time filter field name - Click on "discover" (the small icon looking like a compass on the left bar) - Play around! ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Scaling up the cluster - At this point, we have only one node - We are going to scale up - But first, we'll deploy Cerebro, a UI for ElasticSearch - This will let us see the state of the cluster, how indexes are sharded, etc. .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Deploying Cerebro - Cerebro is stateless, so it's fairly easy to deploy (one Deployment + one Service) - However, it needs the address and credentials for ElasticSearch - We prepared yet another manifest for that! .lab[ - Deploy Cerebro: ```bash kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml ``` - Look up the NodePort number and connect to it: ```bash kubectl get services ``` ] .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- ## Scaling up the cluster - We can see on Cerebro that the cluster is "yellow" (because our index is not replicated) - Let's change that! .lab[ - Edit the ElasticSearch cluster manifest: ```bash kubectl edit es demo ``` - Find the field `count: 1` and change it to 3 - Save and quit ] ??? :EN:- Deploying ElasticSearch with ECK :FR:- Déployer ElasticSearch avec ECK .debug[[k8s/eck.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/eck.md)] --- class: pic .interstitial[] --- name: toc-last-words class: title Last words .nav[ [Previous part](#toc-an-elasticsearch-operator) | [Back to table of contents](#toc-part-9) | [Next part](#toc-links-and-resources) ] .debug[(automatically generated title slide)] --- # Last words - Congratulations! 
- We learned a lot about Kubernetes, its internals, its advanced concepts -- - That was just the easy part - The hard challenges will revolve around *culture* and *people* -- - ... What does that mean? .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Running an app involves many steps - Write the app - Tests, QA ... - Ship *something* (more on that later) - Provision resources (e.g. VMs, clusters) - Deploy the *something* on the resources - Manage, maintain, monitor the resources - Manage, maintain, monitor the app - And much more .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Who does what? - The old "devs vs ops" division has changed - In some organizations, "ops" are now called "SRE" or "platform" teams (and they have very different sets of skills) - Do you know which team is responsible for each item on the list on the previous page? - Acknowledge that a lot of tasks are outsourced (e.g. if we add "buy/rack/provision machines" in that list) .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## What do we ship? - Some organizations embrace "you build it, you run it" - When "build" and "run" are owned by different teams, where's the line? - What does the "build" team ship to the "run" team? - Let's see a few options, and what they imply .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Shipping code - Team "build" ships code (hopefully in a repository, identified by a commit hash) - Team "run" containerizes that code ✔️ no extra work for developers ❌ very little advantage of using containers .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Shipping container images - Team "build" ships container images (hopefully built automatically from a source repository) - Team "run" uses these images to create e.g. Kubernetes resources ✔️ universal artefact (supports all languages uniformly) ✔️ easy to start a single component (good for monoliths) ❌ complex applications will require a lot of extra work ❌ adding/removing components in the stack also requires extra work ❌ complex applications will run very differently between dev and prod .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Shipping Compose files (Or another kind of dev-centric manifest) - Team "build" ships a manifest that works on a single node (as well as images, or ways to build them) - Team "run" adapts that manifest to work on a cluster ✔️ all teams can start the stack in a reliable, deterministic manner ❌ adding/removing components still requires *some* work (but less than before) ❌ there will be *some* differences between dev and prod .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Shipping Kubernetes manifests - Team "build" ships ready-to-run manifests (YAML, Helm charts, Kustomize ...) 
- Team "run" adjusts some parameters and monitors the application ✔️ parity between dev and prod environments ✔️ "run" team can focus on SLAs, SLOs, and overall quality ❌ requires *a lot* of extra work (and new skills) from the "build" team ❌ Kubernetes is not a very convenient development platform (at least, not yet) .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## What's the right answer? - It depends on our teams - existing skills (do they know how to do it?) - availability (do they have the time to do it?) - potential skills (can they learn to do it?) - It depends on our culture - owning "run" often implies being on call - do we reward on-call duty without encouraging hero syndrome? - do we give people resources (time, money) to learn? .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- class: extra-details ## Tools to develop on Kubernetes *If we decide to make Kubernetes the primary development platform, here are a few tools that can help us.* - Docker Desktop - Draft - Minikube - Skaffold - Tilt - ... .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Where do we run? - Managed vs. self-hosted - Cloud vs. on-premises - If cloud: public vs. private - Which vendor/distribution to pick? - Which versions/features to enable? .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Developer experience *These questions constitute a quick "smoke test" for our strategy:* - How do we on-board a new developer? - What do they need to install to get a dev stack? - How does a code change make it from dev to prod? - How does someone add a component to a stack? .debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- ## Some guidelines - Start small - Outsource what we don't know - Start simple, and stay simple as long as possible (try to stay away from complex features that we don't need) - Automate (regularly check that we can successfully redeploy by following scripts) - Transfer knowledge (make sure everyone is on the same page/level) - Iterate! 
.debug[[k8s/lastwords.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/lastwords.md)] --- class: pic .interstitial[] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous part](#toc-last-words) | [Back to table of contents](#toc-part-9) | [Next part](#toc-all-content-after-this-slide-is-bonus-material) ] .debug[(automatically generated title slide)] --- # Links and resources All things Kubernetes: - [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups - [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes) - [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b) All things Docker: - [Docker documentation](http://docs.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Play With Docker Hands-On Labs](http://training.play-with-docker.com/) Everything else: - [Local meetups](https://www.meetup.com/) .footnote[These slides (and future updates) are on → http://container.training/] .debug[[k8s/links.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/links.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks!
Questions?  .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)] --- class: pic .interstitial[] --- name: toc-all-content-after-this-slide-is-bonus-material class: title (All content after this slide is bonus material) .nav[ [Previous part](#toc-links-and-resources) | [Back to table of contents](#toc-part-9) | [Next part](#toc-volumes) ] .debug[(automatically generated title slide)] --- # (All content after this slide is bonus material) .debug[[kadm-twodays.yml](https://github.com/jpetazzo/container.training/tree/main/slides/kadm-twodays.yml)] --- class: pic .interstitial[] --- name: toc-volumes class: title Volumes .nav[ [Previous part](#toc-all-content-after-this-slide-is-bonus-material) | [Back to table of contents](#toc-part-10) | [Next part](#toc-managing-configuration) ] .debug[(automatically generated title slide)] --- # Volumes - Volumes are special directories that are mounted in containers - Volumes can have many different purposes: - share files and directories between containers running on the same machine - share files and directories between containers and their host - centralize configuration information in Kubernetes and expose it to containers - manage credentials and secrets and expose them securely to containers - store persistent data for stateful services - access storage systems (like Ceph, EBS, NFS, Portworx, and many others) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- class: extra-details ## Kubernetes volumes vs. Docker volumes - Kubernetes and Docker volumes are very similar (the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/) says otherwise ...
but it refers to Docker 1.7, which was released in 2015!) - Docker volumes allow us to share data between containers running on the same host - Kubernetes volumes allow us to share data between containers in the same pod - Both Docker and Kubernetes volumes enable access to storage systems - Kubernetes volumes are also used to expose configuration and secrets - Docker has specific concepts for configuration and secrets
(but under the hood, the technical implementation is similar) - If you're not familiar with Docker volumes, you can safely ignore this slide! .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Volumes ≠ Persistent Volumes - Volumes and Persistent Volumes are related, but very different! - *Volumes*: - appear in Pod specifications (we'll see that in a few slides) - do not exist as API resources (**cannot** do `kubectl get volumes`) - *Persistent Volumes*: - are API resources (**can** do `kubectl get persistentvolumes`) - correspond to concrete volumes (e.g. on a SAN, EBS, etc.) - cannot be associated with a Pod directly; but through a Persistent Volume Claim - won't be discussed further in this section .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Adding a volume to a Pod - We will start with the simplest Pod manifest we can find - We will add a volume to that Pod manifest - We will mount that volume in a container in the Pod - By default, this volume will be an `emptyDir` (an empty directory) - It will "shadow" the directory where it's mounted .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Our basic Pod ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-without-volume spec: containers: - name: nginx image: nginx ``` This is an MVP! (Minimum Viable Pod😉) It runs a single NGINX container. .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Trying the basic pod .lab[ - Create the Pod: ```bash kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml ``` - Get its IP address: ```bash IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP}) ``` - Send a request with curl: ```bash curl $IPADDR ``` ] (We should see the "Welcome to NGINX" page.) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Adding a volume - We need to add the volume in two places: - at the Pod level (to declare the volume) - at the container level (to mount the volume) - We will declare a volume named `www` - No type is specified, so it will default to `emptyDir` (as the name implies, it will be initialized as an empty directory at pod creation) - In that pod, there is also a container named `nginx` - That container mounts the volume `www` to path `/usr/share/nginx/html/` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## The Pod with a volume ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-volume spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ ``` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Trying the Pod with a volume .lab[ - Create the Pod: ```bash kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml ``` - Get its IP address: ```bash IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP}) ``` - Send a request with curl: ```bash curl $IPADDR ``` ] (We should now see a "403 Forbidden" error page.)
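To confirm that the volume is indeed empty (hence the 403: NGINX has no index page to serve, and directory listing is disabled by default), we can peek inside the container: ```bash # The mounted directory should be empty: kubectl exec nginx-with-volume -- ls /usr/share/nginx/html/ ```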
.debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Populating the volume with another container - Let's add another container to the Pod - Let's mount the volume in *both* containers - That container will populate the volume with static files - NGINX will then serve these static files - To populate the volume, we will clone the Spoon-Knife repository - this repository is https://github.com/octocat/Spoon-Knife - it's very popular (more than 100K forks!) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Sharing a volume between two containers .small[ ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-git spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ - name: git image: alpine command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ] volumeMounts: - name: www mountPath: /www/ restartPolicy: OnFailure ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Sharing a volume, explained - We added another container to the pod - That container mounts the `www` volume on a different path (`/www`) - It uses the `alpine` image - When started, it installs `git` and clones the `octocat/Spoon-Knife` repository (that repository contains a tiny HTML website) - As a result, NGINX now serves this website .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Trying the shared volume - This one will be time-sensitive! - We need to catch the Pod IP address *as soon as it's created* - Then send a request to it *as fast as possible* .lab[ - Watch the pods (so that we can catch the Pod IP address) ```bash kubectl get pods -o wide --watch ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Shared volume in action .lab[ - Create the pod: ```bash kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml ``` - As soon as we see its IP address, access it: ```bash curl $IP ``` - A few seconds later, the state of the pod will change; access it again: ```bash curl $IP ``` ] The first time, we should see "403 Forbidden". The second time, we should see the HTML file from the Spoon-Knife repository. .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Explanations - Both containers are started at the same time - NGINX starts very quickly (it can serve requests immediately) - But at this point, the volume is empty (NGINX serves "403 Forbidden") - The other container installs git and clones the repository (this takes a bit longer) - When the other container is done, the volume holds the repository (NGINX serves the HTML file) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## The devil is in the details - The default `restartPolicy` is `Always` - This would cause our `git` container to run again ... and again ...
and again (with an exponential back-off delay, as explained [in the documentation](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)) - That's why we specified `restartPolicy: OnFailure` .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Inconsistencies - There is a short period of time during which the website is not available (because the `git` container hasn't done its job yet) - With a bigger website, we could get inconsistent results (where only a part of the content is ready) - In real applications, this could cause incorrect results - How can we avoid that? .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Init Containers - We can define containers that should execute *before* the main ones - They will be executed in order (instead of in parallel) - They must all succeed before the main containers are started - This is *exactly* what we need here! - Let's see one in action .footnote[See [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) documentation for all the details.] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Defining Init Containers .small[ ```yaml apiVersion: v1 kind: Pod metadata: name: nginx-with-init spec: volumes: - name: www containers: - name: nginx image: nginx volumeMounts: - name: www mountPath: /usr/share/nginx/html/ initContainers: - name: git image: alpine command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ] volumeMounts: - name: www mountPath: /www/ ``` ] .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Trying the init container .lab[ - Create the pod: ```bash kubectl create -f ~/container.training/k8s/nginx-4-with-init.yaml ``` - Try to send HTTP requests as soon as the pod comes up ] - This time, instead of "403 Forbidden" we get a "connection refused" - NGINX doesn't start until the git container has done its job - We never get inconsistent results (a "half-ready" container) .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Other uses of init containers - Load content - Generate configuration (or certificates) - Database migrations - Waiting for other services to be up (to avoid flurry of connection errors in main container) - etc. .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- ## Volume lifecycle - The lifecycle of a volume is linked to the pod's lifecycle - This means that a volume is created when the pod is created - This is mostly relevant for `emptyDir` volumes (other volumes, like remote storage, are not "created" but rather "attached" ) - A volume survives across container restarts - A volume is destroyed (or, for remote storage, detached) when the pod is destroyed ??? 
:EN:- Sharing data between containers with volumes :EN:- When and how to use Init Containers :FR:- Partager des données grâce aux volumes :FR:- Quand et comment utiliser un *Init Container* .debug[[k8s/volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volumes.md)] --- class: pic .interstitial[] --- name: toc-managing-configuration class: title Managing configuration .nav[ [Previous part](#toc-volumes) | [Back to table of contents](#toc-part-10) | [Next part](#toc-managing-secrets) ] .debug[(automatically generated title slide)] --- # Managing configuration - Some applications need to be configured (obviously!) - There are many ways for our code to pick up configuration: - command-line arguments - environment variables - configuration files - configuration servers (getting configuration from a database, an API...) - ... and more (because programmers can be very creative!) - How can we do these things with containers and Kubernetes? .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Passing configuration to containers - There are many ways to pass configuration to code running in a container: - baking it into a custom image - command-line arguments - environment variables - injecting configuration files - exposing it over the Kubernetes API - configuration servers - Let's review these different strategies! .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Baking custom images - Put the configuration in the image (it can be in a configuration file, but also `ENV` or `CMD` actions) - It's easy! It's simple! - Unfortunately, it also has downsides: - multiplication of images - different images for dev, staging, prod ... - minor reconfigurations require a whole build/push/pull cycle - Avoid doing it unless you don't have the time to figure out other options .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Command-line arguments - Indicate what should run in the container - Pass `command` and/or `args` in the container options in a Pod's template - Both `command` and `args` are arrays - Example ([source](https://github.com/jpetazzo/container.training/blob/main/k8s/consul-1.yaml#L70)): ```yaml args: - "agent" - "-bootstrap-expect=3" - "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NS)\"" - "-client=0.0.0.0" - "-data-dir=/consul/data" - "-server" - "-ui" ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## `args` or `command`? 
- Use `command` to override the `ENTRYPOINT` defined in the image - Use `args` to keep the `ENTRYPOINT` defined in the image (the parameters specified in `args` are added to the `ENTRYPOINT`) - When in doubt, use `command` - It is also possible to use *both* `command` and `args` (they will be strung together, just like `ENTRYPOINT` and `CMD`) - See the [docs](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes) for details on how they interact .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Command-line arguments, pros & cons - Works great when options are passed directly to the running program (otherwise, a wrapper script can work around the issue) - Works great when there aren't too many parameters (to avoid a 20-line `args` array) - Requires documentation and/or understanding of the underlying program ("which parameters and flags do I need, again?") - Well-suited for mandatory parameters (without default values) - Not ideal when we need to pass a real configuration file anyway .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Environment variables - Pass options through the `env` map in the container specification - Example: ```yaml env: - name: ADMIN_PORT value: "8080" - name: ADMIN_AUTH value: Basic - name: ADMIN_CRED value: "admin:0pensesame!" ``` .warning[`value` must be a string! Make sure that numbers and fancy strings are quoted.] 🤔 Why this weird `{name: xxx, value: yyy}` scheme? It will be revealed soon! .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## The downward API - In the previous example, environment variables have fixed values - We can also use a mechanism called the *downward API* - The downward API allows exposing pod or container information - either through special files (we won't show that for now) - or through environment variables - The value of these environment variables is computed when the container is started - Remember: environment variables won't (can't) change after container start - Let's see a few concrete examples!
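For instance, the pod's own name can be exposed with the same `valueFrom` construct used in the examples that follow (`metadata.name` is one of the supported fields): ```yaml - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name ```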
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Exposing the pod's namespace ```yaml - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ``` - Useful to generate FQDN of services (in some contexts, a short name is not enough) - For instance, these two commands should be equivalent: ``` curl api-backend curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Exposing the pod's IP address ```yaml - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP ``` - Useful if we need to know our IP address (we could also read it from `eth0`, but this is more solid) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Exposing the container's resource limits ```yaml - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.memory ``` - Useful for runtimes where memory is garbage collected - Example: the JVM (the memory available to the JVM should be set with the `-Xmx` flag) - Best practice: set a memory limit, and pass it to the runtime - Note: recent versions of the JVM can do this automatically (see [JDK-8146115](https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115) and [this blog post](https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/) for detailed examples) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## More about the downward API - [This documentation page](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) tells more about these environment variables - And [this one](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) explains the other way to use the downward API (through files that get created in the container filesystem) - That second link also includes a list of all the fields that can be used with the downward API .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Environment variables, pros and cons - Works great when the running program expects these variables - Works great for optional parameters with reasonable defaults (since the container image can provide these defaults) - Sort of auto-documented (we can see which environment variables are defined in the image, and their values) - Can be (ab)used with longer values ... - ... You *can* put an entire Tomcat configuration file in an environment variable ... - ... But *should* you? (Do it if you really need to, we're not judging! But we'll see better ways.)
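To check which variables a container actually ends up with, we can always dump its environment (for instance with the `nginx-with-volume` Pod from the volumes chapter, if it's still around): ```bash # Print the environment of the pod's first container: kubectl exec nginx-with-volume -- env ```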
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Injecting configuration files - Sometimes, there is no way around it: we need to inject a full config file - Kubernetes provides a mechanism for that purpose: `configmaps` - A configmap is a Kubernetes resource that exists in a namespace - Conceptually, it's a key/value map (values are arbitrary strings) - We can think about them in (at least) two different ways: - as holding entire configuration file(s) - as holding individual configuration parameters *Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like configmaps. We'll cover them just after!* .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Configmaps storing entire files - In this case, each key/value pair corresponds to a configuration file - Key = name of the file - Value = content of the file - There can be one key/value pair, or as many as necessary (for complex apps with multiple configuration files) - Examples: ``` # Create a configmap with a single key, "app.conf" kubectl create configmap my-app-config --from-file=app.conf # Create a configmap with a single key, "app.conf", using the content of another file kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf # Create a configmap with multiple keys (one per file in the config.d directory) kubectl create configmap my-app-config --from-file=config.d/ ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Configmaps storing individual parameters - In this case, each key/value pair corresponds to a parameter - Key = name of the parameter - Value = value of the parameter - Examples: ``` # Create a configmap with two keys kubectl create cm my-app-config \ --from-literal=foreground=red \ --from-literal=background=blue # Create a configmap from a file containing key=val pairs kubectl create cm my-app-config \ --from-env-file=app.conf ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Exposing configmaps to containers - Configmaps can be exposed as plain files in the filesystem of a container - this is achieved by declaring a volume and mounting it in the container - this is particularly effective for configmaps containing whole files - Configmaps can be exposed as environment variables in the container - this is achieved with the downward API - this is particularly effective for configmaps containing individual parameters - Let's see how to do both!
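Note that there is also a shortcut to expose *all* the keys of a configmap as environment variables at once, with `envFrom` (a sketch; `my-app` is a made-up image name, `my-app-config` is the configmap from the previous examples): ```yaml containers: - name: app image: my-app # hypothetical image envFrom: - configMapRef: name: my-app-config ```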
.debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Example: HAProxy configuration - We are going to deploy HAProxy, a popular load balancer - It expects to find its configuration in a specific place: `/usr/local/etc/haproxy/haproxy.cfg` - We will create a ConfigMap holding the configuration file - Then we will mount that ConfigMap in a Pod running HAProxy .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Blue/green load balancing - In this example, we will deploy two versions of our app: - the "blue" version in the `blue` namespace - the "green" version in the `green` namespace - In both namespaces, we will have a Deployment and a Service (both named `color`) - We want to load balance traffic between both namespaces (we can't do that with a simple service selector: these don't cross namespaces) .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Deploying the app - We're going to use the image `jpetazzo/color` (it is a simple "HTTP echo" server showing which pod served the request) - We can create each Namespace, Deployment, and Service by hand, or... .lab[ - We can deploy the app with a YAML manifest: ```bash kubectl apply -f ~/container.training/k8s/rainbow.yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Testing the app - Reminder: Service `x` in Namespace `y` is available through: `x.y`, `x.y.svc`, `x.y.svc.cluster.local` - Since the `cluster.local` suffix can change, we'll use `x.y.svc` .lab[ - Check that the app is up and running: ```bash kubectl run --rm -it --restart=Never --image=nixery.dev/curl my-test-pod \ curl color.blue.svc ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Creating the HAProxy configuration Here is the file that we will use, [k8s/haproxy.cfg](https://github.com/jpetazzo/container.training/tree/master/k8s/haproxy.cfg): ``` global daemon defaults mode tcp timeout connect 5s timeout client 50s timeout server 50s listen very-basic-load-balancer bind *:80 server blue color.blue.svc:80 server green color.green.svc:80 # Note: the services above must exist, # otherwise HAProxy won't start.
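# A few more notes on this configuration: # - "mode tcp" load balances raw TCP connections # (HAProxy doesn't parse HTTP at all here) # - "listen" declares a frontend and its backend in a single section # - there is no "balance" directive, so HAProxy uses its default # algorithm, round-robin (each server gets one connection in turn)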
``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Creating the ConfigMap .lab[ - Create a ConfigMap named `haproxy` holding the configuration file: ```bash kubectl create configmap haproxy --from-file=~/container.training/k8s/haproxy.cfg ``` - Check what our configmap looks like: ```bash kubectl get configmap haproxy -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Using the ConfigMap Here is [k8s/haproxy.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/haproxy.yaml), a Pod manifest using that ConfigMap: ```yaml apiVersion: v1 kind: Pod metadata: name: haproxy spec: volumes: - name: config configMap: name: haproxy containers: - name: haproxy image: haproxy:1 volumeMounts: - name: config mountPath: /usr/local/etc/haproxy/ ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Creating the Pod .lab[ - Create the HAProxy Pod: ```bash kubectl apply -f ~/container.training/k8s/haproxy.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod haproxy -o wide IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP) ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Testing our load balancer - If everything went well, we should see a perfect round robin (one request to `blue`, one request to `green`, one request to `blue`, etc.) .lab[ - Send a few requests: ```bash for i in $(seq 10); do curl $IP done ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Exposing configmaps with the downward API - We are going to run a Docker registry on a custom port - By default, the registry listens on port 5000 - This can be changed by setting environment variable `REGISTRY_HTTP_ADDR` - We are going to store the port number in a configmap - Then we will expose that configmap as a container environment variable .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Creating the configmap .lab[ - Our configmap will have a single key, `http.addr`: ```bash kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80 ``` - Check our configmap: ```bash kubectl get configmap registry -o yaml ``` ] .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Using the configmap We are going to use the following pod definition: ```yaml apiVersion: v1 kind: Pod metadata: name: registry spec: containers: - name: registry image: registry env: - name: REGISTRY_HTTP_ADDR valueFrom: configMapKeyRef: name: registry key: http.addr ``` .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- ## Using the configmap - The resource definition from the previous slide is in [k8s/registry.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/registry.yaml) .lab[ - Create the registry pod: ```bash kubectl apply -f ~/container.training/k8s/registry.yaml ``` - Check the IP address allocated to the pod: ```bash kubectl get pod registry -o wide IP=$(kubectl get pod registry -o json | jq -r .status.podIP) ``` - Confirm that the registry is available on port 80: ```bash curl
$IP/v2/_catalog ``` ] ??? :EN:- Managing application configuration :EN:- Exposing configuration with the downward API :EN:- Exposing configuration with Config Maps :FR:- Gérer la configuration des applications :FR:- Configuration au travers de la *downward API* :FR:- Configurer les applications avec des *Config Maps* .debug[[k8s/configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/configuration.md)] --- class: pic .interstitial[] --- name: toc-managing-secrets class: title Managing secrets .nav[ [Previous part](#toc-managing-configuration) | [Back to table of contents](#toc-part-10) | [Next part](#toc-stateful-sets) ] .debug[(automatically generated title slide)] --- # Managing secrets - Sometimes our code needs sensitive information: - passwords - API tokens - TLS keys - ... - *Secrets* can be used for that purpose - Secrets and ConfigMaps are very similar .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Similarities between ConfigMaps and Secrets - ConfigMaps and Secrets are key-value maps (a Secret can contain zero, one, or many key-value pairs) - They can both be exposed with the downward API or volumes - They can both be created with YAML or with a CLI command (`kubectl create configmap` / `kubectl create secret`) .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## ConfigMaps and Secrets are different resources - They can have different RBAC permissions (e.g. the default `view` role can read ConfigMaps but not Secrets) - They indicate a different *intent*: *"You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."* *"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."* (Source: [the author of both features](https://stackoverflow.com/a/36925553/580281)) .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Secrets have an optional *type* - The type indicates which keys must exist in the secrets, for instance: `kubernetes.io/tls` requires `tls.crt` and `tls.key` `kubernetes.io/basic-auth` requires `username` and `password` `kubernetes.io/ssh-auth` requires `ssh-privatekey` `kubernetes.io/dockerconfigjson` requires `.dockerconfigjson` `kubernetes.io/service-account-token` requires `token`, `namespace`, `ca.crt` (the whole list is in [the documentation](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types)) - This is merely for our (human) convenience: “Ah yes, this secret is a ...” .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Accessing private repositories - Let's see how to access an image on a private registry! - These images are protected by a username + password (on some registries, it's token + password, but it's the same thing) - To access a private image, we need to: - create a secret - reference that secret in a Pod template - or reference that secret in a ServiceAccount used by a Pod .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## In practice - Let's try to access an image on a private registry!
- image = docker-registry.enix.io/jpetazzo/private:latest - user = reader - password = VmQvqdtXFwXfyy4Jb5DR .lab[ - Create a Deployment using that image: ```bash kubectl create deployment priv \ --image=docker-registry.enix.io/jpetazzo/private ``` - Check that the Pod won't start: ```bash kubectl get pods --selector=app=priv ``` ] .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Creating a secret - Let's create a secret with the information provided earlier .lab[ - Create the registry secret: ```bash kubectl create secret docker-registry enix \ --docker-server=docker-registry.enix.io \ --docker-username=reader \ --docker-password=VmQvqdtXFwXfyy4Jb5DR ``` ] Why do we have to specify the registry address? If we use multiple sets of credentials for different registries, it prevents leaking the credentials of one registry to *another* registry. .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Using the secret - The first way to use a secret is to add it to `imagePullSecrets` (in the `spec` section of a Pod template) .lab[ - Patch the `priv` Deployment that we created earlier: ```bash kubectl patch deploy priv --patch=' spec: template: spec: imagePullSecrets: - name: enix ' ``` ] .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Checking the results .lab[ - Confirm that our Pod can now start correctly: ```bash kubectl get pods --selector=app=priv ``` ] .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Another way to use the secret - We can add the secret to the ServiceAccount - This is convenient to automatically use credentials for *all* pods (as long as they're using a specific ServiceAccount, of course) .lab[ - Add the secret to the ServiceAccount: ```bash kubectl patch serviceaccount default --patch=' imagePullSecrets: - name: enix ' ``` ] .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- ## Secrets are displayed with base64 encoding - When shown with e.g. `kubectl get secrets -o yaml`, secrets are base64-encoded - Likewise, when defining a Secret with YAML, `data` values are base64-encoded - Example: ```yaml kind: Secret apiVersion: v1 metadata: name: pin-codes data: onetwothreefour: MTIzNA== zerozerozerozero: MDAwMA== ``` - Keep in mind that this is just *encoding*, not *encryption* - It is very easy to [automatically extract and decode secrets](https://medium.com/@mveritym/decoding-kubernetes-secrets-60deed7a96a3) .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- class: extra-details ## Using `stringData` - When creating a Secret, it is possible to bypass base64 - Just use `stringData` instead of `data` (the values must still be strings, so the PIN codes below are quoted): ```yaml kind: Secret apiVersion: v1 metadata: name: pin-codes stringData: onetwothreefour: "1234" zerozerozerozero: "0000" ``` - It will show up as base64 if you `kubectl get -o yaml` - No `type` was specified, so it defaults to `Opaque` .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- class: extra-details ## Encryption at rest - It is possible to [encrypt secrets at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) - This means that secrets will be safe if someone ... - steals our etcd servers - steals our backups - snoops the e.g.
iSCSI link between our etcd servers and SAN - However, starting the API server will now require human intervention (to provide the decryption keys) - This is only for extremely regulated environments (military, nation states...) .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- class: extra-details ## Immutable ConfigMaps and Secrets - Since Kubernetes 1.19, it is possible to mark a ConfigMap or Secret as *immutable* ```bash kubectl patch configmap xyz --patch='{"immutable": true}' ``` - This brings performance improvements when using lots of ConfigMaps and Secrets (lots = tens of thousands) - Once a ConfigMap or Secret has been marked as immutable: - its content cannot be changed anymore - the `immutable` field can't be changed back either - the only way to change it is to delete and re-create it - Pods using it will have to be re-created as well ??? :EN:- Handling passwords and tokens safely :FR:- Manipulation de mots de passe, clés API etc. .debug[[k8s/secrets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/secrets.md)] --- class: pic .interstitial[] --- name: toc-stateful-sets class: title Stateful sets .nav[ [Previous part](#toc-managing-secrets) | [Back to table of contents](#toc-part-10) | [Next part](#toc-running-a-consul-cluster) ] .debug[(automatically generated title slide)] --- # Stateful sets - Stateful sets are a type of resource in the Kubernetes API (like pods, deployments, services...) - They offer mechanisms to deploy scaled stateful applications - At first glance, they look like Deployments: - a stateful set defines a pod spec and a number of replicas *R* - it will make sure that *R* copies of the pod are running - that number can be changed while the stateful set is running - updating the pod spec will cause a rolling update to happen - But they also have some significant differences .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Stateful sets unique features - Pods in a stateful set are numbered (from 0 to *R-1*) and ordered - They are started and updated in order (from 0 to *R-1*) - A pod is started (or updated) only when the previous one is ready - They are stopped in reverse order (from *R-1* to 0) - Each pod knows its identity (i.e. which number it is in the set) - Each pod can discover the IP address of the others easily - The pods can persist data on attached volumes 🤔 Wait a minute ... Can't we already attach volumes to pods and deployments? .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Revisiting volumes - [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) are used for many purposes: - sharing data between containers in a pod - exposing configuration information and secrets to containers - accessing storage systems - Let's see examples of the latter usage .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Volume types - There are many [types of volumes](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) available: - public cloud storage (GCEPersistentDisk, AWSElasticBlockStore, AzureDisk...) - private cloud storage (Cinder, VsphereVolume...) - traditional storage systems (NFS, iSCSI, FC...) - distributed storage (Ceph, Glusterfs, Portworx...)
- Using a persistent volume requires: - creating the volume out-of-band (outside of the Kubernetes API) - referencing the volume in the pod description, with all its parameters .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Using a cloud volume Here is a pod definition using an AWS EBS volume (that has to be created first): ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-ebs-volume spec: containers: - image: ... name: container-using-my-ebs-volume volumeMounts: - mountPath: /my-ebs name: my-ebs-volume volumes: - name: my-ebs-volume awsElasticBlockStore: volumeID: vol-049df61146c4d7901 fsType: ext4 ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Using an NFS volume Here is another example using a volume on an NFS server: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-nfs-volume spec: containers: - image: ... name: container-using-my-nfs-volume volumeMounts: - mountPath: /my-nfs name: my-nfs-volume volumes: - name: my-nfs-volume nfs: server: 192.168.0.55 path: "/exports/assets" ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Shortcomings of volumes - Their lifecycle (creation, deletion...) is managed outside of the Kubernetes API (we can't just use `kubectl apply/create/delete/...` to manage them) - If a Deployment uses a volume, all replicas end up using the same volume - That volume must then support concurrent access - some volumes do (e.g. NFS servers support multiple read/write access) - some volumes support concurrent reads - some volumes support concurrent access for colocated pods - What we really need is a way for each replica to have its own volume .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Individual volumes - The Pods of a Stateful set can have individual volumes (i.e. in a Stateful set with 3 replicas, there will be 3 volumes) - These volumes can be either: - allocated from a pool of pre-existing volumes (disks, partitions ...) - created dynamically using a storage system - This introduces a bunch of new Kubernetes resource types: Persistent Volumes, Persistent Volume Claims, Storage Classes (and also `volumeClaimTemplates`, that appear within Stateful Set manifests!) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Stateful set recap - A Stateful set manages a number of identical pods (like a Deployment) - These pods are numbered, and started/upgraded/stopped in a specific order - These pods are aware of their number (e.g., #0 can decide to be the primary, and #1 can be secondary) - These pods can find the IP addresses of the other pods in the set (through a *headless service*) - These pods can each have their own persistent storage .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- ## Obtaining per-pod storage - Stateful Sets can have *persistent volume claim templates* (declared in `spec.volumeClaimTemplates` in the Stateful set manifest) - A claim template will create one Persistent Volume Claim per pod (the PVC will be named `
<claim template name>-<pod name>
`) - Persistent Volume Claims are matched 1-to-1 with Persistent Volumes - Persistent Volume provisioning can be done: - automatically (by leveraging *dynamic provisioning* with a Storage Class) - manually (human operator creates the volumes ahead of time, or when needed) ??? :EN:- Deploying apps with Stateful Sets :EN:- Understanding Persistent Volume Claims and Storage Classes :FR:- Déployer une application avec un *Stateful Set* :FR:- Comprendre les *Persistent Volume Claims* et *Storage Classes* .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/statefulsets.md)] --- class: pic .interstitial[] --- name: toc-running-a-consul-cluster class: title Running a Consul cluster .nav[ [Previous part](#toc-stateful-sets) | [Back to table of contents](#toc-part-10) | [Next part](#toc-pv-pvc-and-storage-classes) ] .debug[(automatically generated title slide)] --- # Running a Consul cluster - Here is a good use-case for Stateful sets! - We are going to deploy a Consul cluster with 3 nodes - Consul is a highly-available key/value store (like etcd or Zookeeper) - One easy way to bootstrap a cluster is to tell each node: - the addresses of other nodes - how many nodes are expected (to know when quorum is reached) .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Bootstrapping a Consul cluster *After reading the Consul documentation carefully (and/or asking around), we figure out the minimal command-line to run our Consul cluster.* ``` consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \ -bootstrap-expect=3 \ -retry-join=`X.X.X.X` \ -retry-join=`Y.Y.Y.Y` ``` - Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes - A node can add its own address (it will work fine) - ... Which means that we can use the same command-line on all nodes (convenient!) 
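- In a Stateful set fronted by a headless service, we could even use the pods' stable DNS names instead of IP addresses (a sketch, assuming both the set and its headless service are named `consul`): ``` consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \ -bootstrap-expect=3 \ -retry-join=consul-0.consul \ -retry-join=consul-1.consul \ -retry-join=consul-2.consul ```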
.debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Cloud Auto-join - Since version 1.4.0, Consul can use the Kubernetes API to find its peers - This is called [Cloud Auto-join] - Instead of passing an IP address, we need to pass a parameter like this: ``` consul agent -retry-join "provider=k8s label_selector=\"app=consul\"" ``` - Consul needs to be able to talk to the Kubernetes API - We can provide a `kubeconfig` file - If Consul runs in a pod, it will use the *service account* of the pod [Cloud Auto-join]: https://www.consul.io/docs/agent/cloud-auto-join.html#kubernetes-k8s- .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Setting up Cloud auto-join - We need to create a service account for Consul - We need to create a role that can `list` and `get` pods - We need to bind that role to the service account - And of course, we need to make sure that Consul pods use that service account .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Putting it all together - The file `k8s/consul-1.yaml` defines the required resources (service account, role, role binding, service, stateful set) - Inspired by this [excellent tutorial](https://github.com/kelseyhightower/consul-on-kubernetes) by Kelsey Hightower (many features from the original tutorial were removed for simplicity) .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Running our Consul cluster - We'll use the provided YAML file .lab[ - Create the stateful set and associated service: ```bash kubectl apply -f ~/container.training/k8s/consul-1.yaml ``` - Check the logs as the pods come up one after another: ```bash stern consul ``` - Check the health of the cluster: ```bash kubectl exec consul-0 -- consul members ``` ] .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Caveats - The scheduler may place two Consul pods on the same node - if that node fails, we lose two Consul pods at the same time - this will cause the cluster to fail - Scaling down the cluster will cause it to fail - when a Consul member leaves the cluster, it needs to inform the others - otherwise, the last remaining node doesn't have quorum and stops functioning - This Consul cluster doesn't use real persistence yet - data is stored in the containers' ephemeral filesystem - if a pod fails, its replacement starts from a blank slate .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Improving pod placement - We need to tell the scheduler: *do not put two of these pods on the same node!* - This is done with an `affinity` section like the following one: ```yaml affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: consul topologyKey: kubernetes.io/hostname ``` .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Using a lifecycle hook - When a Consul member leaves the cluster, it needs to execute: ```bash consul leave ``` - This is done with a `lifecycle` section like the following one: ```yaml lifecycle: preStop: exec: command: [ "sh", "-c", "consul leave" ] ``` .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Running a better Consul cluster - Let's try to add 
the scheduling constraint and lifecycle hook - We can do that in the same namespace or another one (as we like) - If we do that in the same namespace, we will see a rolling update (pods will be replaced one by one) .lab[ - Deploy a better Consul cluster: ```bash kubectl apply -f ~/container.training/k8s/consul-2.yaml ``` ] .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Still no persistence, though - We aren't using actual persistence yet (no `volumeClaimTemplate`, Persistent Volume, etc.) - What happens if we lose a pod? - a new pod gets rescheduled (with an empty state) - the new pod tries to connect to the two others - it will be accepted (after 1-2 minutes of instability) - and it will retrieve the data from the other pods .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- ## Failure modes - What happens if we lose two pods? - manual repair will be required - we will need to instruct the remaining one to act solo - then rejoin new pods - What happens if we lose three pods? (aka all of them) - we lose all the data (ouch) ??? :EN:- Scheduling pods together or separately :EN:- Example: deploying a Consul cluster :FR:- Lancer des pods ensemble ou séparément :FR:- Exemple : lancer un cluster Consul .debug[[k8s/consul.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/consul.md)] --- class: pic .interstitial[] --- name: toc-pv-pvc-and-storage-classes class: title PV, PVC, and Storage Classes .nav[ [Previous part](#toc-running-a-consul-cluster) | [Back to table of contents](#toc-part-10) | [Next part](#toc-openebs-) ] .debug[(automatically generated title slide)] --- # PV, PVC, and Storage Classes - When an application needs storage, it creates a PersistentVolumeClaim (either directly, or through a volume claim template in a Stateful Set) - The PersistentVolumeClaim is initially `Pending` - Kubernetes then looks for a suitable PersistentVolume (maybe one is immediately available; maybe we need to wait for provisioning) - Once a suitable PersistentVolume is found, the PVC becomes `Bound` - The PVC can then be used by a Pod (as long as the PVC is `Pending`, the Pod cannot run) .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Access modes - PV and PVC have *access modes*: - ReadWriteOnce (only one node can access the volume at a time) - ReadWriteMany (multiple nodes can access the volume simultaneously) - ReadOnlyMany (multiple nodes can access, but they can't write) - ReadWriteOncePod (only one pod can access the volume; new in Kubernetes 1.22) - A PVC lists the access modes that it requires - A PV lists the access modes that it supports ⚠️ A PV with only ReadWriteMany won't satisfy a PVC with ReadWriteOnce! 
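For instance, a PV advertising multiple access modes could look like this (a minimal sketch using a `hostPath` volume; the name and path are made up): ```yaml kind: PersistentVolume apiVersion: v1 metadata: name: my-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce - ReadOnlyMany hostPath: path: /tmp/my-pv ```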
.debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Capacity - A PVC must express a storage size request (field `spec.resources.requests.storage`, in bytes) - A PV must express its size (field `spec.capacity.storage`, in bytes) - Kubernetes will only match a PV and PVC if the PV is big enough - These fields are only used for "matchmaking" purposes: - nothing prevents the Pod mounting the PVC from using more space - nothing requires the PV to actually be that big .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Storage Class - What if we have multiple storage systems available? (e.g. NFS and iSCSI; or AzureFile and AzureDisk; or Cinder and Ceph...) - What if we have a storage system with multiple tiers? (e.g. SAN with RAID1 and RAID5; general purpose vs. io optimized EBS...) - Kubernetes lets us define *storage classes* to represent these (see if you have any available at the moment with `kubectl get storageclasses`) .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Using storage classes - Optionally, each PV and each PVC can reference a StorageClass (field `spec.storageClassName`) - When creating a PVC, specifying a StorageClass means “use that particular storage system to provision the volume!” - Storage classes are necessary for [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) (but we can also ignore them and perform manual provisioning) .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Default storage class - We can define a *default storage class* (by annotating it with `storageclass.kubernetes.io/is-default-class=true`) - When a PVC is created, **IF** it doesn't indicate which storage class to use **AND** there is a default storage class **THEN** the PVC `storageClassName` is set to the default storage class .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Additional constraints - A PersistentVolumeClaim can also specify a volume selector (referring to labels on the PV) - A PersistentVolume can also be created with a `claimRef` (indicating to which PVC it should be bound) .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- class: extra-details ## Which PV gets associated to a PVC? - The PV must be `Available` - The PV must satisfy the PVC constraints (access mode, size, optional selector, optional storage class) - The PVs with the closest access mode are picked - Then the PVs with the closest size - It is possible to specify a `claimRef` when creating a PV (this will associate it to the specified PVC, but only if the PV satisfies all the requirements of the PVC; otherwise another PV might end up being picked) - For all the details about the PersistentVolumeClaimBinder, check [this doc](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/persistent-storage.md#matching-and-binding) .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Creating a PVC - Let's create a standalone PVC and see what happens! 
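Such a claim looks roughly like this (a sketch; the actual `k8s/pvc.yaml` may differ in name and size): ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ```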
.lab[ - Check if we have a StorageClass: ```bash kubectl get storageclasses ``` - Create the PVC: ```bash kubectl create -f ~/container.training/k8s/pvc.yaml ``` - Check the PVC: ```bash kubectl get pvc ``` ] .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Four possibilities 1. If we have a default StorageClass with *immediate* binding: *a PV was created and associated to the PVC* 2. If we have a default StorageClass that *waits for first consumer*: *the PVC is still `Pending` but has a `STORAGECLASS`* ⚠️ 3. If we don't have a default StorageClass: *the PVC is still `Pending`, without a `STORAGECLASS`* 4. If we have a StorageClass, but it doesn't work: *the PVC is still `Pending` but has a `STORAGECLASS`* ⚠️ .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Immediate vs WaitForFirstConsumer - Immediate = as soon as there is a `Pending` PVC, create a PV - What if: - the PV is only available on a node (e.g. local volume) - ...or on a subset of nodes (e.g. SAN HBA, EBS AZ...) - the Pod that will use the PVC has scheduling constraints - these constraints turn out to be incompatible with the PV - WaitForFirstConsumer = don't provision the PV until a Pod mounts the PVC .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Using the PVC - Let's mount the PVC in a Pod - We will use a stray Pod (no Deployment, StatefulSet, etc.) - We will use [k8s/mounter.yaml](https://github.com/jpetazzo/container.training/tree/master/k8s/mounter.yaml), shown on the next slide - We'll need to update the `claimName`! ⚠️ .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ```yaml kind: Pod apiVersion: v1 metadata: generateName: mounter- labels: container.training/mounter: "" spec: volumes: - name: pvc persistentVolumeClaim: claimName: my-pvc-XYZ45 containers: - name: mounter image: alpine stdin: true tty: true volumeMounts: - name: pvc mountPath: /pvc workingDir: /pvc ``` .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Running the Pod .lab[ - Edit the `mounter.yaml` manifest - Update the `claimName` to put the name of our PVC - Create the Pod - Check the status of the PV and PVC ] Note: this "mounter" Pod can be useful to inspect the content of a PVC. .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Scenario 1 & 2 If we have a default Storage Class that can provision PVC dynamically... - We should now have a new PV - The PV and the PVC should be `Bound` together .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Scenario 3 If we don't have a default Storage Class, we must create the PV manually. ```bash kubectl create -f ~/container.training/k8s/pv.yaml ``` After a few seconds, check that the PV and PVC are bound: ```bash kubectl get pv,pvc ``` .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Scenario 4 If our default Storage Class can't provision a PV, let's do it manually. The PV must specify the correct `storageClassName`. 
```bash
STORAGECLASS=$(kubectl get pvc --selector=container.training/pvc \
    -o jsonpath={..storageClassName})
kubectl patch -f ~/container.training/k8s/pv.yaml --dry-run=client -o yaml \
    --patch '{"spec": {"storageClassName": "'$STORAGECLASS'"}}' \
    | kubectl create -f-
```
Check that the PV and PVC are bound: ```bash kubectl get pv,pvc ``` .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Checking the Pod - If the PVC was `Pending`, then the Pod was `Pending` too - Once the PVC is `Bound`, the Pod can be scheduled and can run - Once the Pod is `Running`, check it out with `kubectl attach -ti` .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## PV and PVC lifecycle - We can't delete a PV if it's `Bound` - If we `kubectl delete` it, it goes to the `Terminating` state - We can't delete a PVC if it's in use by a Pod - Likewise, if we `kubectl delete` it, it goes to the `Terminating` state - Deletion is prevented by *finalizers* (=like a post-it note saying “don't delete me!”) - When the mounting Pods are deleted, their PVCs are freed up - When PVCs are deleted, their PVs are freed up ??? :EN:- Storage provisioning :EN:- PV, PVC, StorageClass :FR:- Création de volumes :FR:- PV, PVC, et StorageClass .debug[[k8s/pv-pvc-sc.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/pv-pvc-sc.md)] --- ## Putting it all together - We want to run that Consul cluster *and* actually persist data - We'll use a StatefulSet that will leverage PV and PVC - If we have a dynamic provisioner: *the cluster will come up right away* - If we don't have a dynamic provisioner: *we will need to create Persistent Volumes manually* .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Persistent Volume Claims and Stateful sets - A Stateful set can define one (or more) `volumeClaimTemplate` - Each `volumeClaimTemplate` will create one Persistent Volume Claim per Pod - Each Pod will therefore have its own individual volume - These volumes are numbered (like the Pods) - Example: - a Stateful set is named `consul` - it is scaled to 3 replicas - it has a `volumeClaimTemplate` named `data` - then it will create pods `consul-0`, `consul-1`, `consul-2` - these pods will have volumes named `data`, referencing PersistentVolumeClaims named `data-consul-0`, `data-consul-1`, `data-consul-2` (see the sketch below)
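A minimal sketch of such a template, as it would appear in the StatefulSet spec (values are illustrative; the actual Consul manifest used later may differ):

```yaml
# Goes under the StatefulSet's "spec", next to "template":
volumeClaimTemplates:
- metadata:
    name: data            # yields PVCs named data-consul-0, data-consul-1, ...
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
```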
.debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Persistent Volume Claims are sticky - When updating the stateful set (e.g. image upgrade), each pod keeps its volume - When pods get rescheduled (e.g. node failure), they keep their volume (this requires a storage system that is not node-local) - These volumes are not automatically deleted (when the stateful set is scaled down or deleted) - If a stateful set is scaled back up later, the pods get their data back .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Deploying Consul - Let's use a new manifest for our Consul cluster - The only differences between that file and the previous one are: - `volumeClaimTemplate` defined in the Stateful Set spec - the corresponding `volumeMounts` in the Pod spec .lab[ - Apply the persistent Consul YAML file: ```bash kubectl apply -f ~/container.training/k8s/consul-3.yaml ``` ] .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## No dynamic provisioner - If we don't have a dynamic provisioner, we need to create the PVs - We are going to use local volumes (similar conceptually to `hostPath` volumes) - We can use local volumes without installing extra plugins - However, they are tied to a node - If that node goes down, the volume becomes unavailable .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Observing the situation - Let's look at Persistent Volume Claims and Pods .lab[ - Check that we now have an unbound Persistent Volume Claim: ```bash kubectl get pvc ``` - We don't have any Persistent Volume: ```bash kubectl get pv ``` - The Pod `consul-0` is not scheduled yet: ```bash kubectl get pods -o wide ``` ] *Hint: leave these commands running with `-w` in different windows.* .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Explanations - In a Stateful Set, the Pods are started one by one - `consul-1` won't be created until `consul-0` is running - `consul-0` has a dependency on an unbound Persistent Volume Claim - The scheduler won't schedule the Pod until the PVC is bound (because the PVC might be bound to a volume that is only available on a subset of nodes; for instance, EBS volumes are tied to an availability zone) .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Creating Persistent Volumes - Let's create 3 local directories (`/mnt/consul`) on node2, node3, node4 - Then create 3 Persistent Volumes corresponding to these directories (one of them is sketched after the lab below) .lab[ - Create the local directories: ```bash
for NODE in node2 node3 node4; do
  ssh $NODE sudo mkdir -p /mnt/consul
done
``` - Create the PV objects: ```bash kubectl apply -f ~/container.training/k8s/volumes-for-consul.yaml ``` ]
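Each of the PVs in that file conceptually looks like this (a hedged sketch for node2; the actual `volumes-for-consul.yaml` may differ in names and sizes):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-on-node2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt/consul      # the directory that we just created
  nodeAffinity:            # local PVs must pin themselves to a node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: [ node2 ]
```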
.debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Check our Consul cluster - The PVs that we created will be automatically matched with the PVCs - Once a PVC is bound, its pod can start normally - Once the pod `consul-0` has started, `consul-1` can be created, etc. - Eventually, our Consul cluster is up, and backed by "persistent" volumes .lab[ - Check that our Consul cluster indeed has 3 members: ```bash kubectl exec consul-0 -- consul members ``` ] .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Devil is in the details (1/2) - The size of the Persistent Volumes is bogus (it is used when matching PVs and PVCs together, but there is no actual quota or limit) - The Pod might end up using more than the requested size - The PV may or may not have the capacity that it's advertising - It works well with dynamically provisioned block volumes - ...Less so in other scenarios! .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Devil is in the details (2/2) - This specific example worked because we had exactly 1 free PV per node: - if we had created multiple PVs per node ... - we could have ended up with two PVCs bound to PVs on the same node ... - which would have required two pods to be on the same node ... - which is forbidden by the anti-affinity constraints in the StatefulSet - To avoid that, we need to associate the PVs with a Storage Class that has: ```yaml volumeBindingMode: WaitForFirstConsumer ``` (this means that a PVC will be bound to a PV only after being used by a Pod) - See [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more details .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## If we have a dynamic provisioner These are the steps when dynamic provisioning happens: 1. The Stateful Set creates PVCs according to the `volumeClaimTemplate`. 2. The Stateful Set creates Pods using these PVCs. 3. The PVCs are automatically annotated with our Storage Class. 4. The dynamic provisioner provisions volumes and creates the corresponding PVs. 5. The PersistentVolumeClaimBinder associates the PVs and the PVCs together. 6. PVCs are now bound, the Pods can start. .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Validating persistence (1) - When the StatefulSet is deleted, the PVC and PV still exist - And if we recreate an identical StatefulSet, the PVC and PV are reused - Let's see that!
.lab[ - Put some data in Consul: ```bash kubectl exec consul-0 -- consul kv put answer 42 ``` - Delete the Consul cluster: ```bash kubectl delete -f ~/container.training/k8s/consul-3.yaml ``` ] .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Validating persistence (2) .lab[ - Wait until the last Pod is deleted: ```bash kubectl wait pod consul-0 --for=delete ``` - Check that the PV and PVC are still here: ```bash kubectl get pv,pvc ``` ] .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Validating persistence (3) .lab[ - Re-create the cluster: ```bash kubectl apply -f ~/container.training/k8s/consul-3.yaml ``` - Wait until it's up - Then access the key that we set earlier: ```bash kubectl exec consul-0 -- consul kv get answer ``` ] .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- ## Cleaning up - PV and PVC don't get deleted automatically - This is great (less risk of accidental data loss) - This is not great (storage usage increases) - Managing PVC lifecycle: - remove them manually - add their StatefulSet to their `ownerReferences` (as sketched below) - delete the Namespace that they belong to
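The `ownerReferences` approach can be done with a patch, for example (a hedged sketch using our Consul example; the owner's UID has to be looked up at runtime):

```bash
# Make the consul StatefulSet the owner of one of its PVCs, so that
# deleting the StatefulSet will also (eventually) delete the PVC.
STS_UID=$(kubectl get statefulset consul -o jsonpath={.metadata.uid})
kubectl patch pvc data-consul-0 --patch '{
  "metadata": {
    "ownerReferences": [{
      "apiVersion": "apps/v1",
      "kind": "StatefulSet",
      "name": "consul",
      "uid": "'$STS_UID'"
    }]
  }
}'
```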
??? :EN:- Defining volumeClaimTemplates :FR:- Définir des volumeClaimTemplates .debug[[k8s/volume-claim-templates.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/volume-claim-templates.md)] --- class: pic .interstitial[] --- name: toc-openebs- class: title OpenEBS .nav[ [Previous part](#toc-pv-pvc-and-storage-classes) | [Back to table of contents](#toc-part-10) | [Next part](#toc-stateful-failover) ] .debug[(automatically generated title slide)] --- # OpenEBS - [OpenEBS] is a popular open-source storage solution for Kubernetes - Uses the concept of "Container Attached Storage" (1 volume = 1 dedicated controller pod + a set of replica pods) - Supports a wide range of storage engines: - LocalPV: local volumes (hostpath or device), no replication - Jiva: for lighter workloads with basic cloning/snapshotting - cStor: more powerful engine that also supports resizing, RAID, disk pools ... - [Mayastor]: newer, even more powerful engine with NVMe and vhost-user support [OpenEBS]: https://openebs.io/ [Mayastor]: https://github.com/openebs/MayaStor#mayastor .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- class: extra-details ## What are all these storage engines? - LocalPV is great if we want good performance, no replication, easy setup (it is similar to the Rancher local path provisioner) - Jiva is great if we want replication and easy setup (data is stored in the containers' filesystems) - cStor is more powerful and flexible, but requires more extensive setup - Mayastor is designed to achieve extreme performance levels (with the right hardware and disks) - The OpenEBS documentation has a [good comparison of engines] to help us pick [good comparison of engines]: https://docs.openebs.io/docs/next/casengines.html#cstor-vs-jiva-vs-localpv-features-comparison .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## Installing OpenEBS with Helm - The OpenEBS control plane can be installed with Helm - It will run as a set of containers on Kubernetes worker nodes .lab[ - Install OpenEBS: ```bash
helm upgrade --install openebs openebs \
    --repo https://openebs.github.io/charts \
    --namespace openebs --create-namespace \
    --version 2.12.9
``` ] ⚠️ We stick to OpenEBS 2.x because 3.x requires additional configuration. .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## Checking what was installed - Wait a little bit ... .lab[ - Look at the pods in the `openebs` namespace: ```bash kubectl get pods --namespace openebs ``` - And the StorageClasses that were created: ```bash kubectl get sc ``` ] .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## The default StorageClasses - OpenEBS typically creates three default StorageClasses - `openebs-jiva-default` provisions 3 replicated Jiva pods per volume - data is stored in `/openebs` in the replica pods - `/openebs` is a localpath volume mapped to `/var/openebs/pvc-...` on the node - `openebs-hostpath` uses LocalPV with local directories - volumes are hostpath volumes created in `/var/openebs/local` on each node - `openebs-device` uses LocalPV with local block devices - requires available disks and/or a bit of extra configuration - the default configuration filters out loop, LVM, and MD devices .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## When do we need custom StorageClasses? - To store LocalPV hostpath volumes on a different path on the host - To change the number of replicated Jiva pods - To use a different Jiva pool (i.e. a different path on the host to store the Jiva volumes) - To create a cStor pool - ...
.debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- class: extra-details ## Defining a custom StorageClass Example for a LocalPV hostpath class using an extra mount on `/mnt/vol001`:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localpv-hostpath-mntvol001
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: BasePath
        value: "/mnt/vol001"
      - name: StorageType
        value: "hostpath"
provisioner: openebs.io/local
```
- `provisioner` needs to be set accordingly - Storage engine is chosen by specifying the annotation `openebs.io/cas-type` - Storage engine configuration is set with the annotation `cas.openebs.io/config`
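The same pattern works for the other engines. For instance, here is a hedged sketch of a Jiva class with 2 replicas instead of 3 (assuming the OpenEBS 2.x annotation-based configuration; the class name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jiva-2-replicas
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
provisioner: openebs.io/provisioner-iscsi   # the Jiva provisioner
```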
.debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## Checking the default hostpath StorageClass - Let's inspect the StorageClass that OpenEBS created for us .lab[ - Let's look at the OpenEBS LocalPV hostpath StorageClass: ```bash kubectl get storageclass openebs-hostpath -o yaml ``` ] .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## Create a host path PVC - Let's create a Persistent Volume Claim using an explicit StorageClass .lab[ - Create a PVC that explicitly requests the `openebs-hostpath` class:
```bash
kubectl apply -f- <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
EOF
```
] .debug[[k8s/openebs.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/openebs.md)] --- ## Stateful failover - For this part, we assume that a PostgreSQL StatefulSet (named `postgres`, with a Pod `postgres-0` using a PVC for its data directory) has been deployed on the cluster .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Testing our PostgreSQL pod - We will use `kubectl exec` to get a shell in the pod - Good to know: we need to use the `postgres` user in the pod .lab[ - Get a shell in the pod, as the `postgres` user: ```bash kubectl exec -ti postgres-0 -- su postgres ``` - Check that default databases have been created correctly: ```bash psql -l ``` ] (This should show us 3 lines: postgres, template0, and template1.) .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Inserting data in PostgreSQL - We will create a database and populate it with `pgbench` .lab[ - Create a database named `demo`: ```bash createdb demo ``` - Populate it with `pgbench`: ```bash pgbench -i demo ``` ] - The `-i` flag means "initialize" (create and populate the test tables) - If you want more data in the test tables, add e.g. `-s 10` (to get 10x more rows) .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Checking how much data we have now - The `pgbench` tool inserts rows in table `pgbench_accounts` .lab[ - Check that the `demo` database exists: ```bash psql -l ``` - Check how many rows we have in `pgbench_accounts`: ```bash psql demo -c "select count(*) from pgbench_accounts" ``` - Check that `pgbench_history` is currently empty: ```bash psql demo -c "select count(*) from pgbench_history" ``` ] .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Testing the load generator - Let's use `pgbench` to generate a few transactions .lab[ - Run `pgbench` for 10 seconds, reporting progress every second: ```bash pgbench -P 1 -T 10 demo ``` - Check the size of the history table now: ```bash psql demo -c "select count(*) from pgbench_history" ``` ] Note: on small cloud instances, a typical speed is about 100 transactions/second. .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Generating transactions - Now let's use `pgbench` to generate more transactions - While it's running, we will disrupt the database server .lab[ - Run `pgbench` for 10 minutes, reporting progress every second: ```bash pgbench -P 1 -T 600 demo ``` - You can use a longer time period if you need more time to run the next steps ] .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Find out which node is hosting the database - We can find that information with `kubectl get pods -o wide` .lab[ - Check the node running the database: ```bash kubectl get pod postgres-0 -o wide ``` ] We are going to disrupt that node. -- By "disrupt" we mean: "disconnect it from the network". .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Node failover ⚠️ This will partially break your cluster! - We are going to disconnect the node running PostgreSQL from the cluster - We will see what happens, and how to recover - We will not reconnect the node to the cluster - This whole lab will take at least 10-15 minutes (due to various timeouts) ⚠️ Only do this lab at the very end, when you don't want to run anything else after!
.debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Disconnecting the node from the cluster .lab[ - Find out where the Pod is running, and SSH into that node: ```bash
kubectl get pod postgres-0 -o jsonpath={.spec.nodeName}
ssh nodeX
``` - Check the name of the network interface: ```bash sudo ip route ls default ``` - The output should look like this: ``` default via 10.10.0.1 `dev ensX` proto dhcp src 10.10.0.13 metric 100 ``` - Shut down the network interface: ```bash sudo ip link set ensX down ``` ] .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- class: extra-details ## Another way to disconnect the node - We can also use `iptables` to block all traffic exiting the node (except SSH traffic, so we can repair the node later if needed) .lab[ - SSH to the node to disrupt: ```bash ssh `nodeX` ``` - Allow SSH traffic leaving the node, but block all other traffic: ```bash
sudo iptables -I OUTPUT -p tcp --sport 22 -j ACCEPT
sudo iptables -I OUTPUT 2 -j DROP
``` ] .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Watch what's going on - Let's look at the status of Nodes, Pods, and Events .lab[ - In a first pane/tab/window, check Nodes and Pods: ```bash watch kubectl get nodes,pods -o wide ``` - In another pane/tab/window, check Events: ```bash kubectl get events --watch ``` ] .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Node Ready → NotReady - After \~30 seconds, the control plane stops receiving heartbeats from the Node - The Node is marked NotReady - It is not *schedulable* anymore (the scheduler won't place new pods there, except some special cases) - All Pods on that Node are also *not ready* (they get removed from service Endpoints) - ... But nothing else happens for now (the control plane is waiting: maybe the Node will come back shortly?) .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Pod eviction - After \~5 minutes, the control plane will evict most Pods from the Node - These Pods are now `Terminating` - The Pods controlled by e.g. ReplicaSets are automatically moved (or rather: new Pods are created to replace them) - But nothing happens to the Pods controlled by StatefulSets at this point (they remain `Terminating` forever) - Why? 🤔 -- - This is to avoid *split brain scenarios* .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- class: extra-details ## Split brain 🧠⚡️🧠 - Imagine that we create a replacement pod `postgres-0` on another Node - And 15 minutes later, the Node is reconnected and the original `postgres-0` comes back - Which one is the "right" one? - What if they have conflicting data? 😱 - We *cannot* let that happen! - Kubernetes won't do it - ... Unless we tell it to .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## The Node is gone - One thing we can do is tell Kubernetes "the Node won't come back" (there are other methods, but this one is the simplest here) - This is done with a simple `kubectl delete node` .lab[ - `kubectl delete` the Node that we disconnected
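For example (a sketch; the Pod's `.spec.nodeName` still tells us where `postgres-0` was running):
```bash
NODE=$(kubectl get pod postgres-0 -o jsonpath={.spec.nodeName})
kubectl delete node $NODE
```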
] .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Pod rescheduling - Kubernetes removes the Node - After a brief period of time (\~1 minute) the "Terminating" Pods are removed - A replacement Pod is created on another Node - ... But it doesn't start yet! - Why? 🤔 .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Multiple attachment - By default, a disk can only be attached to one Node at a time (sometimes it's a hardware or API limitation; sometimes enforced in software) - In our Events, we should see `FailedAttachVolume` and `FailedMount` messages - After \~5 more minutes, the disk will be force-detached from the old Node - ... Which will allow attaching it to the new Node! 🎉 - The Pod will then be able to start - Failover is complete! .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Check that our data is still available - We are going to reconnect to the (new) pod and check .lab[ - Get a shell on the pod: ```bash kubectl exec -ti postgres-0 -- su postgres ``` - Check how many transactions are now in the `pgbench_history` table: ```bash psql demo -c "select count(*) from pgbench_history" ``` ] If the 10-second test that we ran earlier gave e.g. 80 transactions per second, and we failed the node after 30 seconds, we should have about 2400 rows in that table. .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)] --- ## Double-check that the pod has really moved - Just to make sure the system is not bluffing! .lab[ - Look at which node the pod is now running on: ```bash kubectl get pod postgres-0 -o wide ``` ] ??? :EN:- Using highly available persistent volumes :EN:- Example: deploying a database that can withstand node outages :FR:- Utilisation de volumes à haute disponibilité :FR:- Exemple : déployer une base de données survivant à la défaillance d'un nœud .debug[[k8s/stateful-failover.md](https://github.com/jpetazzo/container.training/tree/main/slides/k8s/stateful-failover.md)]