RUN
RUN
COPY
RUN
FROM
RUN
COPY
RUN
CMD, EXPOSE ...
```

* The build fails as soon as an instruction fails

* If a `RUN` instruction fails, the build doesn't produce an image

* If it succeeds, it produces a clean image (without test libraries and data)

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

class: pic

.interstitial[]

---

name: toc-dockerfile-examples
class: title

Dockerfile examples

.nav[
[Previous part](#toc-tips-for-efficient-dockerfiles)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-reducing-image-size)
]

.debug[(automatically generated title slide)]

---

# Dockerfile examples

There are a number of tips, tricks, and techniques that we can use in Dockerfiles.

But sometimes, we have to use different (and even opposed) practices depending on:

- the complexity of our project,

- the programming language or framework that we are using,

- the stage of our project (early MVP vs. super-stable production),

- whether we're building a final image or a base for further images,

- etc.

We are going to show a few examples using very different techniques.

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## When to optimize an image

When authoring official images, it is a good idea to reduce as much as possible:

- the number of layers,

- the size of the final image.

This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it.

.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
    && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
    && tar -xzf wordpress.tar.gz -C /usr/src/ \
    && rm wordpress.tar.gz \
    && chown -R www-data:www-data /usr/src/wordpress
```
]

(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## When to *not* optimize an image

Sometimes, it is better to prioritize *maintainer convenience*.

In particular, if:

- the image changes a lot,

- the image has very few users (e.g. only 1, the maintainer!),

- the image is built and run on the same machine,

- the image is built and run on machines with a very fast link ...

In these cases, just keep things simple!

(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

```dockerfile
FROM debian:sid

RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages

COPY . /blog
WORKDIR /blog
VOLUME /blog/_site

EXPOSE 4000

CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Multi-dimensional versioning systems

Images can have a tag, indicating the version of the image.

But sometimes, there are multiple important components, and we need to indicate the versions for all of them.

This can be done with environment variables:

```dockerfile
ENV PIP=9.0.3 \
    ZC_BUILDOUT=2.11.2 \
    SETUPTOOLS=38.7.0 \
    PLONE_MAJOR=5.1 \
    PLONE_VERSION=5.1.0 \
    PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```

(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Entrypoints and wrappers

It is very common to define a custom entrypoint.

That entrypoint will generally be a script, performing any combination of:

- pre-flight checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file),

- generation or validation of configuration files,

- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),

- and more.

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## A typical entrypoint script

```bash
#!/bin/sh
set -e

# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
    set -- redis-server "$@"
fi

# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    chown -R redis .
    exec su-exec redis "$0" "$@"
fi

exec "$@"
```

(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Factoring information

To facilitate maintenance (and avoid human errors), avoid repeating information like:

- version numbers,

- remote asset URLs (e.g. source tarballs) ...

Instead, use environment variables.

.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
    && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
    && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
    && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
    && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
    && tar -xf "node-v$NODE_VERSION.tar.xz" \
    && cd "node-v$NODE_VERSION" \
    ...
```
]

(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Overrides

In theory, development and production images should be the same.

In practice, we often need to enable specific behaviors in development (e.g. debug statements).

One way to reconcile both needs is to use Compose to enable these behaviors.

Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
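Note that Compose can also keep these development overrides in a separate file: when both `docker-compose.yml` and `docker-compose.override.yml` are present, Compose merges them automatically. Here is a minimal sketch of what such an override file could look like (hypothetical; not part of the demo app):

```yaml
# docker-compose.override.yml — merged on top of docker-compose.yml
services:
  www:
    environment:
      DEBUG: 1                   # enable debug behaviors in development only
    command: python counter.py   # use the Flask development server
    volumes:
      - ./www:/src               # mount the source code for faster iteration
```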
.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Production image

This Dockerfile builds an image leveraging gunicorn:

```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```

(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Development Compose file

This Compose file uses the same image, but with a few overrides for development:

- the Flask development server is used (overriding `CMD`),

- the `DEBUG` environment variable is set,

- a volume is used to provide a faster local development workflow.

.small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
]

(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## How do we know which practices are best?

- The main goal of containers is to make our lives easier.

- In this chapter, we showed many ways to write Dockerfiles.

- These Dockerfiles sometimes use diametrically opposed techniques.

- Yet, they were the "right" ones *for a specific situation.*

- It's OK (and even encouraged) to start simple and evolve as needed.

- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

???

:EN:Optimizing images
:EN:- Dockerfile tips, tricks, and best practices
:EN:- Reducing build time
:EN:- Reducing image size

:FR:Optimiser ses images
:FR:- Bonnes pratiques, trucs et astuces
:FR:- Réduire le temps de build
:FR:- Réduire la taille des images

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

class: pic

.interstitial[]

---

name: toc-reducing-image-size
class: title

Reducing image size

.nav[
[Previous part](#toc-dockerfile-examples)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-multi-stage-builds)
]

.debug[(automatically generated title slide)]

---

# Reducing image size

* In the previous example, our final image contained:

  * our `hello` program

  * its source code

  * the compiler

* Only the first one is strictly necessary.

* We are going to see how to obtain an image without the superfluous components.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Can't we remove superfluous files with `RUN`?

What happens if we do one of the following commands?

- `RUN rm -rf ...`

- `RUN apt-get remove ...`

- `RUN make clean ...`

--

This adds a layer which removes a bunch of files.

But the previous layers (which added the files) still exist.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Removing files with an extra layer

When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get install somepackage` | Size of files added (e.g. a few MB) | Sum of this layer + all previous ones |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get remove somepackage` | Almost zero (just metadata) | Same as previous one |

Therefore, `RUN rm` does not reduce the size of the image or free up disk space.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Removing unnecessary files

Various techniques are available to obtain smaller images:

- collapsing layers,

- adding binaries that are built outside of the Dockerfile,

- squashing the final image,

- multi-stage builds.

Let's review them quickly.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Collapsing layers

You will frequently see Dockerfiles like this:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```

Or the (more readable) variant:

```dockerfile
FROM ubuntu
RUN apt-get update \
    && apt-get install xxx \
    && ... \
    && apt-get remove xxx \
    && ...
```

This `RUN` command gives us a single layer.

The files that are added, then removed in the same layer, do not grow the layer size.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Collapsing layers: pros and cons

Pros:

- works on all versions of Docker

- doesn't require extra tools

Cons:

- not very readable

- some unnecessary files might still remain if the cleanup is not thorough

- that layer is expensive (slow to build)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Building binaries outside of the Dockerfile

This results in a Dockerfile looking like this:

```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```

Of course, this implies that the file `xxx` exists in the build context.

That file has to exist before you can run `docker build`.

For instance, it can:

- exist in the code repository,

- be created by another tool (script, Makefile...),

- be created by another container image and extracted from the image.

See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Building binaries outside: pros and cons

Pros:

- final image can be very small

Cons:

- requires an extra build tool

- we're back in dependency hell and "works on my machine"

Cons, if binary is added to code repository:

- breaks portability across different platforms

- grows repository size a lot if the binary is updated frequently

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Squashing the final image

The idea is to transform the final image into a single-layer image.

This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
  ```bash
  docker image build --squash ...
  ```

- Export/import the final image.
  ```bash
  docker build -t temp-image .
  docker run --entrypoint true --name temp-container temp-image
  docker export temp-container | docker import - final-image
  docker rm temp-container
  docker rmi temp-image
  ```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Squashing the image: pros and cons

Pros:

- single-layer images are smaller and faster to download

- removed files no longer take up storage and network resources

Cons:

- we still need to actively remove unnecessary files

- squash operation can take a lot of time (on big images)

- squash operation does not benefit from cache
  (even if we change just a tiny file, the whole image needs to be re-squashed)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage builds

Multi-stage builds allow us to have multiple *stages*.

Each stage is a separate image, and can copy files from previous stages.

We're going to see how they work in more detail.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: pic

.interstitial[]

---

name: toc-multi-stage-builds
class: title

Multi-stage builds

.nav[
[Previous part](#toc-reducing-image-size)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-publishing-images-to-the-docker-hub)
]

.debug[(automatically generated title slide)]

---

# Multi-stage builds

* At any point in our `Dockerfile`, we can add a new `FROM` line.

* This line starts a new stage of our build.

* Each stage can access the files of the previous stages with `COPY --from=...`.

* When a build is tagged (with `docker build -t ...`), the last stage is tagged.

* Previous stages are not discarded: they will be used for caching, and can be referenced.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage builds in practice

* Each stage is numbered, starting at `0`

* We can copy a file from a previous stage by indicating its number, e.g.:

  ```dockerfile
  COPY --from=0 /file/from/first/stage /location/in/current/stage
  ```

* We can also name stages, and reference these names:

  ```dockerfile
  FROM golang AS builder
  RUN ...
  FROM alpine
  COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
  ```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage builds for our C program

We will change our Dockerfile to:

* give a nickname to the first stage: `compiler`

* add a second stage using the same `ubuntu` base image

* add the `hello` binary to the second stage

* make sure that `CMD` is in the second stage

The resulting Dockerfile is on the next slide.
.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage build `Dockerfile`

Here is the final Dockerfile:

```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```

Let's build it, and check that it works correctly:

```bash
docker build -t hellomultistage .
docker run hellomultistage
```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Comparing single/multi-stage build image sizes

List our images with `docker images`, and check the size of:

- the `ubuntu` base image,

- the single-stage `hello` image,

- the multi-stage `hellomultistage` image.

We can achieve even smaller images if we use smaller base images.

However, if we use common base images (e.g. if we standardize on `ubuntu`), these common images will be pulled only once per node, so they are virtually "free."

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: extra-details

## Build targets

* We can also tag an intermediary stage with the following command:
  ```bash
  docker build --target STAGE --tag NAME .
  ```

* This will create an image (named `NAME`) corresponding to stage `STAGE`

* This can be used to easily access an intermediary stage for inspection
  (instead of parsing the output of `docker build` to find out the image ID)

* This can also be used to describe multiple images from a single Dockerfile
  (instead of using multiple Dockerfiles, which could go out of sync)

---

class: extra-details

## Dealing with download caches

* In some cases, our images contain temporary downloaded files or caches

  (examples: packages downloaded by `pip`, Maven, etc.)

* These can sometimes be disabled (e.g. `pip install --no-cache-dir ...`)

* The cache can also be cleaned immediately after installing
  (e.g. `pip install ... && rm -rf ~/.cache/pip`)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: extra-details

## Download caches and multi-stage builds

* Download+install packages in a build stage

* Copy the installed packages to a run stage

* Example: in the specific case of Python, use a virtual env

  (install in the virtual env; then copy the virtual env directory)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: extra-details

## Download caches and BuildKit

* BuildKit has a caching feature for run stages

* It can address download caches elegantly

* Example:
  ```dockerfile
  RUN --mount=type=cache,target=/pipcache pip install --cache-dir /pipcache ...
  ```

* The cache won't be in the final image, but it'll persist across builds
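Here is a minimal sketch of what this could look like for a Python application (the base image and requirements file are hypothetical, and this requires BuildKit, e.g. `DOCKER_BUILDKIT=1`):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11-slim
COPY requirements.txt .
# The pip cache lives in the mount: it is shared across builds,
# but never ends up in the image layers themselves.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```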
???

:EN:Optimizing our images and their build process
:EN:- Leveraging multi-stage builds

:FR:Optimiser les images et leur construction
:FR:- Utilisation d'un *multi-stage build*

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: pic

.interstitial[]

---

name: toc-publishing-images-to-the-docker-hub
class: title

Publishing images to the Docker Hub

.nav[
[Previous part](#toc-multi-stage-builds)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-exercise--writing-better-dockerfiles)
]

.debug[(automatically generated title slide)]

---

# Publishing images to the Docker Hub

We have built our first images.

We can now publish them to the Docker Hub!

*You don't have to do the exercises in this section, because they require an account on the Docker Hub, and we don't want to force anyone to create one.*

*Note, however, that creating an account on the Docker Hub is free (and doesn't require a credit card), and hosting public images is free as well.*

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

## Logging into our Docker Hub account

* This can be done from the Docker CLI:
  ```bash
  docker login
  ```

.warning[When running Docker for Mac/Windows, or Docker on a Linux workstation, it can (and will when possible) integrate with your system's keyring to store your credentials securely. However, on most Linux servers, it will store your credentials in `~/.docker/config.json`.]

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

## Image tags and registry addresses

* Docker image tags are like Git tags and branches.

* They are like *bookmarks* pointing at a specific image ID.

* Tagging an image doesn't *rename* an image: it adds another tag.

* When pushing an image to a registry, the registry address is in the tag.

  Example: `registry.example.net:5000/image`

* What about Docker Hub images?

--

* `jpetazzo/clock` is, in fact, `index.docker.io/jpetazzo/clock`

* `ubuntu` is, in fact, `library/ubuntu`, i.e. `index.docker.io/library/ubuntu`

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

## Tagging an image to push it on the Hub

* Let's tag our `figlet` image (or any other to our liking):
  ```bash
  docker tag figlet jpetazzo/figlet
  ```

* And push it to the Hub:
  ```bash
  docker push jpetazzo/figlet
  ```

* That's it!

--

* Anybody can now `docker run jpetazzo/figlet` anywhere.

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

## The goodness of automated builds

* You can link a Docker Hub repository with a GitHub or Bitbucket repository

* Each push to GitHub or Bitbucket will trigger a build on Docker Hub

* If the build succeeds, the new image is available on Docker Hub

* You can map tags and branches between source and container images

* If you work with public repositories, this is free

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

class: extra-details

## Setting up an automated build

* We need a Dockerized repository!

* Let's go to https://github.com/jpetazzo/trainingwheels and fork it.
* Go to the Docker Hub (https://hub.docker.com/) and sign in. Select "Repositories" in the blue navigation menu.

* Select "Create" in the top-right bar, and pick "Create Repository".

* Connect your Docker Hub account to your GitHub account.

* Click the "Create" button.

* Then go to the "Builds" tab.

* Click on the GitHub icon and select your user and the repository that we just forked.

* In the "Build rules" section near the bottom of the page, put `/www` in the "Build Context" column (or whichever directory the Dockerfile is in).

* Click "Save and Build" to build the repository immediately (without waiting for a git push).

* Subsequent builds will happen automatically, thanks to GitHub hooks.

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

## Building on the fly

- Some services can build images on the fly from a repository

- Example: [ctr.run](https://ctr.run/)

.lab[

- Use ctr.run to automatically build a container image and run it:
  ```bash
  docker run ctr.run/github.com/undefinedlabs/hello-world
  ```

]

There might be a long pause before the first layer is pulled, because the API behind `docker pull` doesn't allow streaming build logs, and there is no feedback during the build.

It is possible to view the build logs by setting up an account on [ctr.run](https://ctr.run/).

???

:EN:- Publishing images to the Docker Hub

:FR:- Publier des images sur le Docker Hub

.debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)]

---

class: pic

.interstitial[]

---

name: toc-exercise--writing-better-dockerfiles
class: title

Exercise — writing better Dockerfiles

.nav[
[Previous part](#toc-publishing-images-to-the-docker-hub)
|
[Back to table of contents](#toc-part-3)
|
[Next part](#toc-naming-and-inspecting-containers)
]

.debug[(automatically generated title slide)]

---

# Exercise — writing better Dockerfiles

Let's update our Dockerfiles to leverage multi-stage builds!

The code is at: https://github.com/jpetazzo/wordsmith

Use a different tag for these images, so that we can compare their sizes.

What's the size difference between single-stage and multi-stage builds?

.debug[[containers/Exercise_Dockerfile_Advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Exercise_Dockerfile_Advanced.md)]

---

class: pic

.interstitial[]

---

name: toc-naming-and-inspecting-containers
class: title

Naming and inspecting containers

.nav[
[Previous part](#toc-exercise--writing-better-dockerfiles)
|
[Back to table of contents](#toc-part-4)
|
[Next part](#toc-labels)
]

.debug[(automatically generated title slide)]

---

class: title

# Naming and inspecting containers

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Objectives

In this lesson, we will learn about an important Docker concept: container *naming*.

Naming allows us to:

* Easily reference a container.

* Ensure the uniqueness of a specific container.

We will also see the `inspect` command, which gives a lot of details about a container.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Naming our containers

So far, we have referenced containers with their ID.

We have copy-pasted the ID, or used a shortened prefix.

But each container can also be referenced by its name.
If a container is named `thumbnail-worker`, I can do:

```bash
$ docker logs thumbnail-worker
$ docker stop thumbnail-worker
etc.
```

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Default names

When we create a container, if we don't give a specific name, Docker will pick one for us.

It will be the concatenation of:

* A mood (furious, goofy, suspicious, boring...)

* The name of a famous inventor (tesla, darwin, wozniak...)

Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ...

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Specifying a name

You can set the name of the container when you create it.

```bash
$ docker run --name ticktock jpetazzo/clock
```

If you specify a name that already exists, Docker will refuse to create the container.

This lets us enforce the uniqueness of a given resource.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Renaming containers

* You can rename containers with `docker rename`.

* This allows you to "free up" a name without destroying the associated container.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Inspecting a container

The `docker inspect` command will output a very detailed JSON map.

```bash
$ docker inspect <containerID>
[{
...
(many pages of JSON here)
...
```

There are multiple ways to consume that information.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Parsing JSON with the Shell

* You *could* grep and cut or awk the output of `docker inspect`.

* Please, don't.

* It's painful.

* If you really must parse JSON from the Shell, use JQ! (It's great.)

```bash
$ docker inspect <containerID> | jq .
```

* We will see a better solution which doesn't require extra tools.

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

## Using `--format`

You can specify a format string, which will be parsed by Go's text/template package.

```bash
$ docker inspect --format '{{ json .Created }}' <containerID>
"2015-02-24T07:21:11.712240394Z"
```

* The generic syntax is to wrap the expression with double curly braces.

* The expression starts with a dot representing the JSON object.

* Then each field or member can be accessed in dotted notation syntax.

* The optional `json` keyword asks for valid JSON output.

  (e.g. here it adds the surrounding double-quotes.)

???

:EN:Managing container lifecycle
:EN:- Naming and inspecting containers

:FR:Suivre ses conteneurs à la loupe
:FR:- Obtenir des informations détaillées sur un conteneur
:FR:- Associer un identifiant unique à un conteneur

.debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)]

---

class: pic

.interstitial[]

---

name: toc-labels
class: title

Labels

.nav[
[Previous part](#toc-naming-and-inspecting-containers)
|
[Back to table of contents](#toc-part-4)
|
[Next part](#toc-restarting-and-attaching-to-containers)
]

.debug[(automatically generated title slide)]

---

# Labels

* Labels allow us to attach arbitrary metadata to containers.
* Labels are key/value pairs.

* They are specified at container creation.

* You can query them with `docker inspect`.

* They can also be used as filters with some commands (e.g. `docker ps`).

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)]

---

## Using labels

Let's create a few containers with a label `owner`.

```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```

We didn't specify a value for the `owner` label in the last example.

This is equivalent to setting the value to be an empty string.

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)]

---

## Querying labels

We can view the labels with `docker inspect`.

```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
            "Labels": {
                "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
                "owner": ""
            },
```

We can use the `--format` flag to list the value of a label.

```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)]

---

## Using labels to select containers

We can list containers having a specific label.

```bash
$ docker ps --filter label=owner
```

Or we can list containers having a specific label with a specific value.

```bash
$ docker ps --filter label=owner=alice
```

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)]

---

## Use-cases for labels

* HTTP vhost of a web app or web service.

  (The label is used to generate the configuration for NGINX, HAProxy, etc.)

* Backup schedule for a stateful service.

  (The label is used by a cron job to determine if/when to backup container data.)

* Service ownership.

  (To determine internal cross-billing, or who to page in case of outage.)

* etc.

???

:EN:- Using labels to identify containers

:FR:- Étiqueter ses conteneurs avec des méta-données

.debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)]

---

class: pic

.interstitial[]

---

name: toc-restarting-and-attaching-to-containers
class: title

Restarting and attaching to containers

.nav[
[Previous part](#toc-labels)
|
[Back to table of contents](#toc-part-4)
|
[Next part](#toc-getting-inside-a-container)
]

.debug[(automatically generated title slide)]

---

# Restarting and attaching to containers

We have started containers in the foreground, and in the background.

In this chapter, we will see how to:

* Put a container in the background.

* Attach to a background container to bring it to the foreground.

* Restart a stopped container.

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Background and foreground

The distinction between foreground and background containers is arbitrary.

From Docker's point of view, all containers are the same.

All containers run the same way, whether there is a client attached to them or not.

It is always possible to detach from a container, and to reattach to a container.

Analogy: attaching to a container is like plugging a keyboard and screen into a physical server.
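A quick way to see this in action (a minimal sketch; the container name `clock1` is arbitrary):

```bash
docker run -dit --name clock1 jpetazzo/clock  # starts in the background
docker attach clock1                          # brings it to the foreground
# ... press ^P^Q to detach again; the container is still running:
docker ps --filter name=clock1
```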
.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Detaching from a container (Linux/macOS)

* If you have started an *interactive* container (with option `-it`), you can detach from it.

* The "detach" sequence is `^P^Q`.

* Otherwise you can detach by killing the Docker client.

  (But not by hitting `^C`, as this would deliver `SIGINT` to the container.)

What does `-it` stand for?

* `-t` means "allocate a terminal."

* `-i` means "connect stdin to the terminal."

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Detaching cont. (Windows PowerShell and cmd.exe)

* Docker for Windows has a different detach experience due to shell features.

* `^P^Q` does not work.

* `^C` will detach, rather than stop the container.

* Using Bash, the Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.

* Both PowerShell and Bash work well in Windows 10; just be aware of the differences.

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## Specifying a custom detach sequence

* You don't like `^P^Q`? No problem!

* You can change the sequence with `docker run --detach-keys`.

* This can also be passed as a global option to the engine.

Start a container with a custom detach command:

```bash
$ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock
```

Detach by hitting `^X x`. (This is ctrl-x then x, not ctrl-x twice!)

Check that our container is still running:

```bash
$ docker ps -l
```

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## Attaching to a container

You can attach to a container:

```bash
$ docker attach <containerID>
```

* The container must be running.

* There *can* be multiple clients attached to the same container.

* If you don't specify `--detach-keys` when attaching, it defaults back to `^P^Q`.

Try it on our previous container:

```bash
$ docker attach $(docker ps -lq)
```

Check that `^X x` doesn't work, but `^P ^Q` does.

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Detaching from non-interactive containers

* **Warning:** if the container was started without `-it`...

  * You won't be able to detach with `^P^Q`.

  * If you hit `^C`, the signal will be proxied to the container.

* Remember: you can always detach by killing the Docker client.

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Checking container output

* Use `docker attach` if you intend to send input to the container.

* If you just want to see the output of a container, use `docker logs`.

```bash
$ docker logs --tail 1 --follow <containerID>
```

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Restarting a container

When a container has exited, it is in stopped state.

It can then be restarted with the `start` command.

```bash
$ docker start <containerID>
```

The container will be restarted using the same options you launched it with.

You can re-attach to it if you want to interact with it:

```bash
$ docker attach <containerID>
```

Use `docker ps -a` to identify the container ID of a previous `jpetazzo/clock` container, and try those commands.
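For instance (a minimal sketch; the container ID and status shown are hypothetical):

```bash
$ docker ps -a --filter ancestor=jpetazzo/clock --format '{{.ID}}\t{{.Status}}'
068cc5c342b2    Exited (0) 5 minutes ago
$ docker start 068cc5c342b2
$ docker attach 068cc5c342b2
```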
.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

## Attaching to a REPL

* REPL = Read Eval Print Loop

* Shells, interpreters, TUI ...

* Symptom: you `docker attach`, and see nothing

* The REPL doesn't know that you just attached, and doesn't print anything

* Try hitting `^L` or `Enter`

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

class: extra-details

## SIGWINCH

* When you `docker attach`, the Docker Engine sends SIGWINCH signals to the container.

* SIGWINCH = WINdow CHange; indicates a change in window size.

* This will cause some CLI and TUI programs to redraw the screen.

* But not all of them.

???

:EN:- Restarting old containers
:EN:- Detaching and reattaching to container

:FR:- Redémarrer des anciens conteneurs
:FR:- Se détacher et rattacher à des conteneurs

.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)]

---

class: pic

.interstitial[]

---

name: toc-getting-inside-a-container
class: title

Getting inside a container

.nav[
[Previous part](#toc-restarting-and-attaching-to-containers)
|
[Back to table of contents](#toc-part-4)
|
[Next part](#toc-limiting-resources)
]

.debug[(automatically generated title slide)]

---

class: title

# Getting inside a container

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Objectives

On a traditional server or VM, we sometimes need to:

* log into the machine (with SSH or on the console),

* analyze the disks (by removing them or rebooting with a rescue system).

In this chapter, we will see how to do that with containers.

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Getting a shell

Every once in a while, we want to log into a machine.

In a perfect world, this shouldn't be necessary.

* You need to install or update packages (and their configuration)?

  Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)

* You need to view logs and metrics?

  Collect and access them through a centralized platform.

In the real world, though ... we often need shell access!

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Not getting a shell

Even without a perfect deployment system, we can do many operations without getting a shell.

* Installing packages can (and should) be done in the container image.

* Configuration can be done at the image level, or when the container starts.

* Dynamic configuration can be stored in a volume (shared with another container).

* Logs written to stdout are automatically collected by the Docker Engine.

* Other logs can be written to a shared volume.

* Process information and metrics are visible from the host.

_Let's save logging, volumes ...
for later, but let's have a look at process information!_

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Viewing container processes from the host

If you run Docker on Linux, container processes are visible on the host.

```bash
$ ps faux | less
```

* Scroll around the output of this command.

* You should see the `jpetazzo/clock` container.

* A containerized process is just like any other process on the host.

* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

class: extra-details

## What's the difference between a container process and a host process?

* Each process (containerized or not) belongs to *namespaces* and *cgroups*.

* The namespaces and cgroups determine what a process can "see" and "do".

* Analogy: each process (containerized or not) runs with a specific UID (user ID).

* UID=0 is root, and has elevated privileges. Other UIDs are normal users.

_We will give more details about namespaces and cgroups later._

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a running container

* Sometimes, we need to get a shell anyway.

* We _could_ run some SSH server in the container ...

* But it is easier to use `docker exec`.

```bash
$ docker exec -ti ticktock sh
```

* This creates a new process (running `sh`) _inside_ the container.

* This can also be done "manually" with the tool `nsenter`.

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Caveats

* The tool that you want to run needs to exist in the container.

* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.

  (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)

* Most importantly: the container needs to be running.

* What if the container is stopped or crashed?

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Getting a shell in a stopped container

* A stopped container is only _storage_ (like a disk drive).

* We cannot SSH into a disk drive or USB stick!

* We need to connect the disk to a running machine.

* How does that translate into the container world?

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Analyzing a stopped container

As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.

```bash
docker run jpetazzo/crashtest
```

The container starts, but then stops immediately, without any output.

What would MacGyver™ do?

First, let's check the status of that container.

```bash
docker ps -l
```

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Viewing filesystem changes

* We can use `docker diff` to see files that were added / changed / removed.

```bash
docker diff <containerID>
```

* The container ID was shown by `docker ps -l`.

* We can also see it with `docker ps -lq`.

* The output of `docker diff` shows some interesting log files!
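The output prefixes each path with `A` (added), `C` (changed), or `D` (deleted). For this container, it should look something like this (abridged, illustrative output):

```bash
$ docker diff $(docker ps -lq)
C /var
C /var/log
C /var/log/nginx
A /var/log/nginx/error.log
```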
.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Accessing files

* We can extract files with `docker cp`.

```bash
docker cp <containerID>:/var/log/nginx/error.log .
```

* Then we can look at that log file.

```bash
cat error.log
```

(The directory `/run/nginx` doesn't exist.)

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

## Exploring a crashed container

* We can restart a container with `docker start` ...

* ... But it will probably crash again immediately!

* We cannot specify a different program to run with `docker start`

* But we can create a new image from the crashed container

```bash
docker commit <containerID> debugimage
```

* Then we can run a new container from that image, with a custom entrypoint

```bash
docker run -ti --entrypoint sh debugimage
```

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

class: extra-details

## Obtaining a complete dump

* We can also dump the entire filesystem of a container.

* This is done with `docker export`.

* It generates a tar archive.

```bash
docker export <containerID> | tar tv
```

This will give a detailed listing of the content of the container.

???

:EN:- Troubleshooting and getting inside a container

:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*

.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)]

---

class: pic

.interstitial[]

---

name: toc-limiting-resources
class: title

Limiting resources

.nav[
[Previous part](#toc-getting-inside-a-container)
|
[Back to table of contents](#toc-part-4)
|
[Next part](#toc-container-networking-basics)
]

.debug[(automatically generated title slide)]

---

# Limiting resources

- So far, we have used containers as convenient units of deployment.

- What happens when a container tries to use more resources than available?

  (RAM, CPU, disk usage, disk and network I/O...)

- What happens when multiple containers compete for the same resource?

- Can we limit resources available to a container?

  (Spoiler alert: yes!)

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Container processes are normal processes

- Containers are closer to "fancy processes" than to "lightweight VMs".

- A process running in a container is, in fact, a process running on the host.

- Let's look at the output of `ps` on a container host running 3 containers:

```
  0  2662  0.2  0.3 /usr/bin/dockerd -H fd://
  0  2766  0.1  0.1 \_ docker-containerd --config /var/run/docker/containe
  0 23479  0.0  0.0   \_ docker-containerd-shim -namespace moby -workdir
  0 23497  0.0  0.0   |   \_ `nginx`: master process nginx -g daemon off;
101 23543  0.0  0.0   |       \_ `nginx`: worker process
  0 23565  0.0  0.0   \_ docker-containerd-shim -namespace moby -workdir
102 23584  9.4 11.3   |   \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
  0 23707  0.0  0.0   \_ docker-containerd-shim -namespace moby -workdir
  0 23725  0.0  0.0       \_ `/bin/sh`
```

- The highlighted processes are containerized processes.

  (That host is running nginx, elasticsearch, and alpine.)

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## By default: nothing changes

- What happens when a process uses too much memory on a Linux system?
--

- Simplified answer:

  - swap is used (if available);

  - if there is not enough swap space, eventually, the out-of-memory killer is invoked;

  - the OOM killer uses heuristics to kill processes;

  - sometimes, it kills an unrelated process.

--

- What happens when a container uses too much memory?

- The same thing!

  (i.e., a process eventually gets killed, possibly in another container.)

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Limiting container resources

- The Linux kernel offers rich mechanisms to limit container resources.

- For memory usage, the mechanism is part of the *cgroup* subsystem.

- This subsystem allows limiting the memory for a process or a group of processes.

- A container engine leverages these mechanisms to limit memory for a container.

- The out-of-memory killer has a new behavior:

  - it runs when a container exceeds its allowed memory usage,

  - in that case, it only kills processes in that container.

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Limiting memory in practice

- The Docker Engine offers multiple flags to limit memory usage.

- The two most useful ones are `--memory` and `--memory-swap`.

- `--memory` limits the amount of physical RAM used by a container.

- `--memory-swap` limits the total amount (RAM+swap) used by a container.

- The memory limit can be expressed in bytes, or with a unit suffix.

  (e.g.: `--memory 100m` = 100 megabytes.)

- We will see two strategies: limiting RAM usage, or limiting both RAM and swap.

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Limiting RAM usage

Example:

```bash
docker run -ti --memory 100m python
```

If the container tries to use more than 100 MB of RAM, *and* swap is available:

- the container will not be killed,

- memory above 100 MB will be swapped out,

- in most cases, the app in the container will be slowed down (a lot).

If we run out of swap, the global OOM killer still intervenes.

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Limiting both RAM and swap usage

Example:

```bash
docker run -ti --memory 100m --memory-swap 100m python
```

If the container tries to use more than 100 MB of memory, it is killed.

On the other hand, the application will never be slowed down because of swap.

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## When to pick which strategy?

- Stateful services (like databases) will lose or corrupt data when killed

- Allow them to use swap space, but monitor swap usage

- Stateless services can usually be killed with little impact

- Limit their mem+swap usage, but monitor if they get killed

- Ultimately, this is no different from "do I want swap, and how much?"

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Limiting CPU usage

- There are no less than 3 ways to limit CPU usage:

  - setting a relative priority with `--cpu-shares`,

  - setting a CPU% limit with `--cpus`,

  - pinning a container to specific CPUs with `--cpuset-cpus`.

- They can be used separately or together.
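For example, here is a minimal sketch combining all three (the values are arbitrary; each flag is detailed on the next slides):

```bash
# At most 2 CPUs worth of cycles, only on cores 0-3,
# and twice the default scheduling weight under contention:
docker run -ti --cpus 2.0 --cpuset-cpus 0-3 --cpu-shares 2048 python
```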
.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Setting relative priority

- Each container has a relative priority used by the Linux scheduler.

- By default, this priority is 1024.

- As long as CPU usage is not maxed out, this has no effect.

- When CPU usage is maxed out, each container receives CPU cycles in proportion to its relative priority.

- In other words: a container with `--cpu-shares 2048` will receive twice as much as the default.

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Setting a CPU% limit

- This setting will make sure that a container doesn't use more than a given % of CPU.

- The value is expressed in CPUs; therefore:

  `--cpus 0.1` means 10% of one CPU,

  `--cpus 1.0` means 100% of one whole CPU,

  `--cpus 10.0` means 10 entire CPUs.

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Pinning containers to CPUs

- On multi-core machines, it is possible to restrict the execution on a set of CPUs.

- Examples:

  `--cpuset-cpus 0` forces the container to run on CPU 0;

  `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;

  `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.

- This will not reserve the corresponding CPUs!

  (They might still be used by other containers, or uncontainerized processes.)

.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

## Limiting disk usage

- Most storage drivers do not support limiting the disk usage of containers.

  (With the exception of devicemapper, but the limit cannot be set easily.)

- This means that a single container could exhaust disk space for everyone.

- In practice, however, this is not a concern, because:

  - data files (for stateful services) should reside on volumes,

  - assets (e.g. images, user-generated content...) should reside on object stores or on volumes,

  - logs are written on standard output and gathered by the container engine.

- Container disk usage can be audited with `docker ps -s` and `docker diff`.
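For example (a minimal sketch; `<containerID>` is a placeholder):

```bash
docker ps -s                # per-container writable layer size (SIZE column)
docker system df            # totals for images, containers, volumes, build cache
docker diff <containerID>   # files added or changed in a specific container
```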
.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)]

---

class: pic

.interstitial[]

---

name: toc-container-networking-basics
class: title

Container networking basics

.nav[
[Previous part](#toc-limiting-resources)
|
[Back to table of contents](#toc-part-5)
|
[Next part](#toc-container-network-drivers)
]

.debug[(automatically generated title slide)]

---

class: title

# Container networking basics

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Objectives

We will now run network services (accepting requests) in containers.

At the end of this section, you will be able to:

* Run a network service in a container.

* Connect to that network service.

* Find a container's IP address.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Running a very simple service

- We need something small, simple, easy to configure

  (or, even better, that doesn't require any configuration at all)

- Let's use the official NGINX image (named `nginx`)

- It runs a static web server listening on port 80

- It serves a default "Welcome to nginx!" page

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Running an NGINX server

```bash
$ docker run -d -P nginx
66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e
```

- Docker will automatically pull the `nginx` image from the Docker Hub

- `-d` / `--detach` tells Docker to run it in the background

- `-P` / `--publish-all` tells Docker to publish all ports

  (publish = make them reachable from other computers)

- ...OK, how do we connect to our web server now?

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Finding our web server port

- First, we need to find the *port number* used by Docker

  (the NGINX container listens on port 80, but this port will be *mapped*)

- We can use `docker ps`:

```bash
$ docker ps
CONTAINER ID  IMAGE  ...  PORTS                    ...
e40ffb406c9e  nginx  ...  0.0.0.0:`12345`->80/tcp  ...
```

- This means: *port 12345 on the Docker host is mapped to port 80 in the container*

- Now we need to connect to the Docker host!

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Finding the address of the Docker host

- When running Docker on your Linux workstation:

  *use `localhost`, or any IP address of your machine*

- When running Docker on a remote Linux server:

  *use any IP address of the remote machine*

- When running Docker Desktop on Mac or Windows:

  *use `localhost`*

- In other scenarios (`docker-machine`, local VM...):

  *use the IP address of the Docker VM*

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Connecting to our web server (GUI)

Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80.

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## Connecting to our web server (CLI)

You can also use `curl` directly from the Docker host.

Make sure to use the right port number if it is different from the example below:

```bash
$ curl localhost:12345
Welcome to nginx!
...
```

.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)]

---

## How does Docker know which port to map?

* There is metadata in the image telling "this image has something on port 80".

* We can see that metadata with `docker inspect`:

```bash
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```

* This metadata was set in the Dockerfile, with the `EXPOSE` keyword.
* We can see that with `docker history`: ```bash $ docker history nginx IMAGE CREATED CREATED BY 7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "… 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM] 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Why can't we just connect to port 80? - Our Docker host has only one port 80 - Therefore, we can only have one container at a time on port 80 - Therefore, if multiple containers want port 80, only one can get it - By default, containers *do not* get "their" port number, but a random one (not "random" as in "crypto random", but as in "it depends on various factors") - We'll see later how to force a port number (including port 80!) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- class: extra-details ## Using multiple IP addresses *Hey, my network-fu is strong, and I have questions...* - Can I publish one container on 127.0.0.2:80, and another on 127.0.0.3:80? - My machine has multiple (public) IP addresses, let's say A.A.A.A and B.B.B.B. Can I have one container on A.A.A.A:80 and another on B.B.B.B:80? - I have a whole IPv4 subnet, can I allocate it to my containers? - What about IPv6? You can do all these things when running Docker directly on Linux. (On other platforms, *generally not*, but there are some exceptions.) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Finding the web server port in a script Parsing the output of `docker ps` would be painful. There is a command to help us: ```bash $ docker port <containerID> 80 0.0.0.0:12345 ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Manual allocation of port numbers If you want to set port numbers yourself, no problem: ```bash $ docker run -d -p 80:80 nginx $ docker run -d -p 8000:80 nginx $ docker run -d -p 8080:80 -p 8888:80 nginx ``` * We are running three NGINX web servers. * The first one is exposed on port 80. * The second one is exposed on port 8000. * The third one is exposed on ports 8080 and 8888. Note: the convention is `port-on-host:port-on-container`. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Plumbing containers into your infrastructure There are many ways to integrate containers in your network. * Start the container, letting Docker allocate a public port for it. Then retrieve that port number and feed it to your configuration. * Pick a fixed port number in advance, when you generate your configuration. Then start your container by setting the port numbers manually. * Use an orchestrator like Kubernetes or Swarm. The orchestrator will provide its own networking facilities. Orchestrators typically provide mechanisms to enable direct container-to-container communication across hosts, as well as publishing and load balancing for inbound traffic.
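For instance, the first approach could look like this (a minimal sketch, assuming the `nginx` image; the variable names are arbitrary):

```bash
# Start a container and let Docker allocate a public port for it.
CID=$(docker run -d -P nginx)

# Ask Docker which host address and port were mapped to container port 80.
# (docker port can print one line per address family; we keep the first.)
HOSTPORT=$(docker port "$CID" 80 | head -n 1)

# Feed that value to our configuration (here, we merely display it).
echo "The web server is reachable at $HOSTPORT"
```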
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Finding the container's IP address We can use the `docker inspect` command to find the IP address of the container. ```bash $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID> 172.17.0.3 ``` * `docker inspect` is an advanced command that can retrieve a ton of information about our containers. * Here, we provide it with a format string to extract exactly the private IP address of the container. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Pinging our container Let's try to ping our container *from another container.* ```bash docker run alpine ping `<ipAddressOfThatContainer>` PING 172.17.0.X (172.17.0.X): 56 data bytes 64 bytes from 172.17.0.X: seq=0 ttl=64 time=0.106 ms 64 bytes from 172.17.0.X: seq=1 ttl=64 time=0.250 ms 64 bytes from 172.17.0.X: seq=2 ttl=64 time=0.188 ms ``` When running on Linux, we can even ping that IP address directly! (And connect to a container's ports even if they aren't published.) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## How often do we use `-p` and `-P` ? - When running a stack of containers, we will often use Compose - Compose will take care of exposing containers (through a `ports:` section in the `docker-compose.yml` file) - It is, however, fairly common to use `docker run -P` for a quick test - Or `docker run -p ...` when an image doesn't `EXPOSE` a port correctly .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Section summary We've learned how to: * Expose a network port. * Connect to an application running in a container. * Find a container's IP address. ??? :EN:- Exposing single containers :FR:- Exposer un conteneur isolé .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- class: pic .interstitial[] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous part](#toc-container-networking-basics) | [Back to table of contents](#toc-part-5) | [Next part](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports different network drivers. The built-in drivers include: * `bridge` (default) * `null` (for the special network called `none`) * `host` (for the special network called `host`) * `container` (that one is a bit magic!) The network is selected with `docker run --net ...`. Each network is managed by a driver. The different drivers are explained in more detail on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet.
(Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! Use cases: * Performance-sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) ??? :EN:Advanced container networking :EN:- Transparent network access with the "host" driver :EN:- Sharing is caring with the "container" driver :FR:Paramétrage réseau avancé :FR:- Accès transparent au réseau avec le mode "host" :FR:- Partage de la pile réseau avec le mode "container" .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- class: pic .interstitial[] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous part](#toc-container-network-drivers) | [Back to table of contents](#toc-part-5) | [Next part](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect and disconnect containers to and from networks. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## The Container Network Model Docker has "networks". We can manage them with the `docker network` commands; for instance: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` New networks can be created (with `docker network create`).
(Note: networks `none` and `host` are special; let's set them aside for now.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## What's a network? - Conceptually, a Docker "network" is a virtual switch (we can also think about it like a VLAN, or a WiFi SSID, for instance) - By default, containers are connected to a single network (but they can be connected to zero, or many networks, even dynamically) - Each network has its own subnet (IP address range) - A network can be local (to a single Docker Engine) or global (span multiple hosts) - Containers can have *network aliases* providing DNS-based service discovery (and each network has its own "domain", "zone", or "scope") .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Service discovery - A container can be given a network alias (e.g. with `docker run --net some-network --net-alias db ...`) - The containers running in the same network can resolve that network alias (i.e. if they do a DNS lookup on `db`, it will give the container's address) - We can have a different `db` container in each network (this avoids naming conflicts between different stacks) - When we name a container, it automatically adds the name as a network alias (i.e. `docker run --name xyz ...` is like `docker run --net-alias xyz ...`) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Network isolation - Networks are isolated - By default, containers in network A cannot reach those in network B - A container connected to both networks A and B can act as a router or proxy - Published ports are always reachable through the Docker host address (`docker run -P ...` makes a container port available to everyone) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## How to use networks - We typically create one network per "stack" or app that we deploy - More complex apps or stacks might require multiple networks (e.g. `frontend`, `backend`, ...) - Networks allow us to deploy multiple copies of the same stack (e.g. `prod`, `dev`, `pr-442`...)
- If we use Docker Compose, this is managed automatically for us .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## CNM vs CNI - CNM is the model used by Docker - Kubernetes uses a different model, architected around CNI (CNI is a kind of API between a container engine and *CNI plugins*) - Docker model: - multiple isolated networks - per-network service discovery - network interconnection requires extra steps - Kubernetes model: - single flat network - per-namespace service discovery - network isolation requires extra steps (Network Policies) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. ```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses Since Docker Engine 1.10, name resolution is implemented by a dynamic resolver. Archeological note: when CNM was introduced (in Docker Engine 1.9, November 2015), name resolution was implemented with `/etc/hosts`, and it was updated each time containers were added/removed.
This could cause interesting race conditions since `/etc/hosts` was a bind-mount (and couldn't be updated atomically). .small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous part](#toc-the-container-network-model) | [Back to table of contents](#toc-part-5) | [Next part](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a Redis data store. * We will place them both on the `dev` network created before. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it, so we will try to run it and see what happens! Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page:  * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using a FQDN or an IP address here; just `redis`. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right network alias (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --net-alias redis -d redis ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly:  * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## A few words on *scope* - Container names are unique (there can be only one `--name redis`) - Network aliases are not unique - We can have the same network alias in different networks: ```bash docker run --net dev --net-alias redis ... docker run --net prod --net-alias redis ...
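# (Same alias in two different networks: each container resolving `redis`
#  gets the address of the `redis` alias in *its own* network.)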
``` - We can even have multiple containers with the same alias in the same network (in that case, we get multiple DNS entries, aka "DNS round robin") .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). Since Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... } ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Good to know ... 
* Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). * `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet used by the network. A full example would look like this. ```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard code container IP addresses in your code!* *I repeat: don't hard code container IP addresses in your code!* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Network drivers * A network is managed by a *driver*. * The built-in drivers include: * `bridge` (default) * `none` * `host` * `macvlan` * `overlay` (for Swarm clusters) * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host.
* If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of scope for this intro-level workshop! Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` If you want to learn more about Swarm mode, you can check [this video](https://www.youtube.com/watch?v=EuzoEaE6Cqs) or [these slides](https://container.training/swarm-selfpaced.yml.html). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Connecting and disconnecting dynamically * So far, we have specified which network to use when starting the container. * The Docker Engine also allows connecting and disconnecting while the container is running. * This feature is exposed through the Docker API, and through two Docker CLI commands: * `docker network connect <network> <container>` * `docker network disconnect <network> <container>` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Dynamically connecting to a network * We have a container named `es` connected to a network named `dev`. * Let's start a simple alpine container on the default network: ```bash $ docker run -ti alpine sh / # ``` * In this container, try to ping the `es` container: ```bash / # ping es ping: bad address 'es' ``` This doesn't work, but we will change that by connecting the container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Finding the container ID and connecting it * Figure out the ID of our alpine container; here are two methods: * looking at `/etc/hostname` in the container, * running `docker ps -lq` on the host. * Run the following command on the host: ```bash $ docker network connect dev `<container_id>` ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Checking what we did * Try again to `ping es` from the container.
* It should now work correctly: ```bash / # ping es PING es (172.20.0.3): 56 data bytes 64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms 64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms ^C ``` * Interrupt it with Ctrl-C. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Looking at the network setup in the container We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`: .small[ ```bash / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1 valid_lft forever preferred_lft forever / # ``` ] Each network connection is materialized with a virtual network interface. As we can see, we can be connected to multiple networks at the same time. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Disconnecting from a network * Let's try the symmetrical command to disconnect the container: ```bash $ docker network disconnect dev <container_id> ``` * From now on, if we try to ping `es`, it will not resolve: ```bash / # ping es ping: bad address 'es' ``` * Trying to ping the IP address directly won't work either: ```bash / # ping 172.20.0.3 ... (nothing happens until we interrupt it with Ctrl-C) ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases are scoped per network * Each network has its own set of network aliases. * We saw this earlier: `es` resolves to different addresses in `dev` and `prod`. * If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, in connection order) and stops as soon as the name is found. * Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services, but only the ones in `dev` or `prod`. * However, we can look up `es.dev` or `es.prod` if we need to. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Finding out about our networks and names * We can do reverse DNS lookups on containers' IP addresses. * If the IP address belongs to a network (other than the default bridge), the result will be: ``` name-or-first-alias-or-container-id.network-name ``` * Example: .small[ ```bash $ docker run -ti --net prod --net-alias hello alpine / # apk add --no-cache drill ... OK: 5 MiB in 13 packages / # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03 inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ... / # drill -t ptr `3.0.21.172`.in-addr.arpa ... ;; ANSWER SECTION: 3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`. ...
``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Building with a custom network * We can run a build on a custom network with `docker build --network NAME`. * This can be used to check that a build doesn't access the network. (But keep in mind that most Dockerfiles will fail, because they need to install remote packages and dependencies!) * This may be used to access an internal package repository. (But try to use a multi-stage build instead, if possible!) ??? :EN:Container networking essentials :EN:- The Container Network Model :EN:- Container isolation :EN:- Service discovery :FR:Mettre ses conteneurs en réseau :FR:- Le "Container Network Model" :FR:- Isolation des conteneurs :FR:- *Service discovery* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous part](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-part-6) | [Next part](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Local development in a container We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied to `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292.
.debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening. -- ```bash $ docker ps -l ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. -- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*)  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *bind mount* to share local files with the container * Make changes locally * Changes are reflected in the container .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. * `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile via `CMD`. Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell). .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * `[host-path]` and `[container-path]` are created if they don't exist. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Hold your horses... 
and your mounts - The `-v /path/on/host:/path/in/container` syntax is the "old" syntax - The modern syntax looks like this: `--mount type=bind,source=/path/on/host,target=/path/in/container` - `--mount` is more explicit, but `-v` is quicker to type - `--mount` supports all mount types; `-v` doesn't support `tmpfs` mounts - `--mount` fails if the path on the host doesn't exist; `-v` creates it With the new syntax, our command becomes: ```bash docker run --mount=type=bind,source=$(pwd),target=/src -dP namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. -- * The color should have changed.  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Understanding volumes - Volumes do *not* copy or synchronize files between the host and the container - Changes made on the host are immediately visible in the container (and vice versa) - When running on Linux: - volumes and bind mounts correspond to directories on the host - if Docker runs in a Linux VM, these directories are in the Linux VM - When running on Docker Desktop: - volumes correspond to directories in a small Linux VM running Docker - access to bind mounts is translated to host filesystem access (a bit like a network filesystem) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Docker Desktop caveats - When running Docker natively on Linux, accessing a mount = native I/O - When running Docker Desktop, accessing a bind mount = file access translation - That file access translation has relatively good performance *in general* (watch out, however, for that big `npm install` working on a bind mount!) - There are some corner cases when watching files (with mechanisms like inotify) - Features like "live reload" or programs like `entr` don't always behave properly (due to e.g. file attribute caching, and other interesting details!)
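If in doubt, it is easy to check that a bind mount propagates changes on your platform (a quick sketch; the file name is arbitrary):

```bash
# On the host: create a file in the directory that we are going to bind-mount.
echo hello > testfile

# In a container: the file should be visible immediately.
docker run --rm -v "$(pwd)":/src alpine cat /src/testfile

# Clean up.
rm testfile
```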
.debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post][immutable-deployments] by Chad Fowler, where he explains the concept of immutable infrastructure.)* [immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/ -- * Let's majorly mess up our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployment* and *canary deployments*. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the container, using familiar tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container which is already running. If you sometimes find yourself wishing you could SSH into a container, you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run ruby commands in the area the app is running and more! $ docker exec -it <yourContainerID> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <yourContainerID> ``` And remove it.
```bash $ docker rm <yourContainerID> ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow. ??? :EN:Developing with containers :EN:- “Containerize” a development environment :FR:Développer au jour le jour :FR:- « Containeriser » son environnement de développement .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: pic .interstitial[] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous part](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-part-6) | [Next part](#toc-gentle-introduction-to-yaml) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes  .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. * Using remote storage and custom storage with *volume drivers*. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways: * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. ```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image. * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either. * If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories.
They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped or removed, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Populating volumes * When an empty volume is mounted on a non-empty directory, the directory is copied to the volume. * This makes it easy to "promote" a normal directory to a volume. * Non-empty volumes are always mounted as-is. Let's populate the webapps volume with the webapps.dist directory from the Tomcat image. ```bash $ docker run -v webapps:/usr/local/tomcat/webapps.dist tomcat true ``` Note: running `true` will cause the container to exit successfully once the `webapps.dist` directory has been copied to the `webapps` volume, instead of starting Tomcat. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When a host path does not contain a `/`, it is considered a volume name. Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container.
* In this example, we will run a text editor in the other container. (But this could be an FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume. ```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. Then run `curl localhost:1234` again to see your changes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Using custom "bind-mounts" In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. * Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis busybox telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis busybox telnet redis 6379 ``` Issue a few commands. ```bash GET counter INFO server QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. 
Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To find out which paths are actually volumes, and to what they are bound, use `docker inspect` (again): ```bash $ docker inspect <yourContainerID> [{ "ID": "<yourContainerID>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket. ```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or providing extra features. For instance: * [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS, EFS). * [Portworx](https://portworx.com/) - provides a distributed block store for containers. * [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale to several petabytes. It provides interfaces for object, block and file storage. * and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)! .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes vs. Mounts * Since Docker 17.06, a new option is available: `--mount`. * It offers a new, richer syntax to manipulate data in containers. * It makes an explicit difference between: - volumes (identified with a unique name, managed by a storage plugin), - bind mounts (identified with a host path, not managed). * The former `-v` / `--volume` option is still usable.
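For instance, assuming a volume named `myvolume`, these two commands should be equivalent:

```bash
# Classic syntax:
$ docker run -v myvolume:/path/in/container alpine
# New syntax (type=volume is the default, so it can be omitted):
$ docker run --mount type=volume,source=myvolume,target=/path/in/container alpine
```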
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## `--mount` syntax Binding a host path to a container path: ```bash $ docker run \ --mount type=bind,source=/path/on/host,target=/path/in/container alpine ``` Mounting a volume to a container path: ```bash $ docker run \ --mount source=myvolume,target=/path/in/container alpine ``` Mounting a tmpfs (in-memory, for temporary files): ```bash $ docker run \ --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: pic .interstitial[] --- name: toc-gentle-introduction-to-yaml class: title Gentle introduction to YAML .nav[ [Previous part](#toc-working-with-volumes) | [Back to table of contents](#toc-part-6) | [Next part](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- # Gentle introduction to YAML - YAML Ain't Markup Language (according to [yaml.org][yaml]) - *Almost* required when working with containers: - Docker Compose files - Kubernetes manifests - Many CI pipelines (GitHub, GitLab...) - If you don't know much about YAML, this is for you! [yaml]: https://yaml.org/ .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## What is it? - Data representation language ```yaml - country: France capital: Paris code: fr population: 68042591 - country: Germany capital: Berlin code: de population: 84270625 - country: Norway capital: Oslo code: no # It's a trap! population: 5425270 ``` - Even without knowing YAML, we probably can add a country to that file :) .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Trying YAML - Method 1: in the browser https://onlineyamltools.com/convert-yaml-to-json https://onlineyamltools.com/highlight-yaml - Method 2: in a shell ```bash yq . 
foo.yaml ``` - Method 3: in Python ```python import yaml; yaml.safe_load(""" - country: France capital: Paris """) ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Basic stuff - Strings, numbers, boolean values, `null` - Sequences (=arrays, lists) - Mappings (=objects) - Superset of JSON (if you know JSON, you can just write JSON) - Comments start with `#` - A single *file* can have multiple *documents* (separated by `---` on a single line) .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Sequences - Example: sequence of strings ```yaml [ "france", "germany", "norway" ] ``` - Example: the same sequence, without the double-quotes ```yaml [ france, germany, norway ] ``` - Example: the same sequence, in "block collection style" (=multi-line) ```yaml - france - germany - norway ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Mappings - Example: mapping strings to numbers ```yaml { "france": 68042591, "germany": 84270625, "norway": 5425270 } ``` - Example: the same mapping, without the double-quotes ```yaml { france: 68042591, germany: 84270625, norway: 5425270 } ``` - Example: the same mapping, in "block collection style" ```yaml france: 68042591 germany: 84270625 norway: 5425270 ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Combining types - In a sequence (or mapping) we can have different types (including other sequences or mappings) - Example: ```yaml questions: [ name, quest, favorite color ] answers: [ "Arthur, King of the Britons", Holy Grail, purple, 42 ] ``` - Note that we need to quote "Arthur" because of the comma - Note that we don't have the same number of elements in questions and answers .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## More combinations - Example: ```yaml - service: nginx ports: [ 80, 443 ] - service: bind ports: [ 53/tcp, 53/udp ] - service: ssh ports: 22 ``` - Note that `ports` doesn't always have the same type (the code handling that data will probably have to be smart!) 
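A quick way to check how a given parser types such values (a sketch, assuming Python 3 with the PyYAML package installed):

```bash
$ python3 -c 'import yaml, json
print(json.dumps(yaml.safe_load("""
- service: bind
  ports: [ 53/tcp, 53/udp ]
- service: ssh
  ports: 22
""")))'
[{"service": "bind", "ports": ["53/tcp", "53/udp"]}, {"service": "ssh", "ports": 22}]
```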
.debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic booleans ```yaml codes: france: fr germany: de norway: no ``` -- ```json { "codes": { "france": "fr", "germany": "de", "norway": false } } ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic booleans - `no` can become `false` (it depends on the YAML parser used) - It should be quoted instead: ```yaml codes: france: fr germany: de norway: "no" ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic floats ```yaml version: libfoo: 1.10 fooctl: 1.0 ``` -- ```json { "version": { "libfoo": 1.1, "fooctl": 1 } } ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic floats - Trailing zeros disappear - These should also be quoted: ```yaml version: libfoo: "1.10" fooctl: "1.0" ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic times ```yaml portmap: - 80:80 - 22:22 ``` -- ```json { "portmap": [ "80:80", 1342 ] } ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic times - `22:22` becomes `1342` - That's 22 minutes and 22 seconds = 1342 seconds - Again, it should be quoted .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Document separator - A single YAML *file* can have multiple *documents* separated by `---`: ```yaml This is a document consisting of a single string. --- 💡 name: The second document type: This one is a mapping (key→value) --- 💡 - Third document - This one is a sequence ``` - Some folks like to add an extra `---` at the beginning and/or at the end (it's not mandatory but can help e.g. to `cat` multiple files together) .footnote[💡 Ignore this; it's here to work around [this issue][remarkyaml].] [remarkyaml]: https://github.com/gnab/remark/issues/679 .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Multi-line strings Try the following block in a YAML parser: ```yaml add line breaks: "in double quoted strings\n(like this)" preserve line break: | by using a pipe (|) (this is great for embedding shell scripts, configuration files...) do not preserve line breaks: > by using a greater-than (>) (this is great for embedding very long lines) ``` See https://yaml-multiline.info/ for advanced multi-line tips! (E.g. to strip or keep extra `\n` characters at the end of the block.) .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- class: extra-details ## Advanced features Anchors let you "memorize" and re-use content: ```yaml debian: &debian packages: deb latest-stable: bullseye also-debian: *debian ubuntu: <<: *debian latest-stable: jammy ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- class: extra-details ## YAML, good or evil? - Natural progression from XML to JSON to YAML - There are other data languages out there (e.g. HCL, domain-specific things crafted with Ruby, CUE...)
- Compromises are made, for instance: - more user-friendly → more "magic" with side effects - more powerful → steeper learning curve - Love it or loathe it but it's a good idea to understand it! - Interesting tool if you appreciate YAML: https://carvel.dev/ytt/ ??? :EN:- Understanding YAML and its gotchas :FR:- Comprendre le YAML et ses subtilités .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- class: pic .interstitial[] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous part](#toc-gentle-introduction-to-yaml) | [Back to table of contents](#toc-part-6) | [Next part](#toc-exercise--writing-a-compose-file) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfile = great to build *one* container image. What if we have multiple containers? What if some of them require particular `docker run` parameters? How do we connect them all together? ... Compose solves these use-cases (and a few more). .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Life before Compose Before we had Compose, we would typically write custom scripts to: - build container images, - run containers using these images, - connect the containers together, - rebuild, restart, update these images and containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Life with Compose Compose enables a simple, powerful onboarding workflow: 1. Check out our code. 2. Run `docker compose up`. 3. Our app is up and running! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic  .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Life after Compose (Or: when do we need something else?) - Compose is *not* an orchestrator - It isn't designed to run containers on multiple nodes (it can, however, work with Docker Swarm Mode) - Compose isn't ideal if we want to run containers on Kubernetes - it uses different concepts (Compose services ≠ Kubernetes services) - it needs a Docker Engine (although containerd support might be coming) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## First rodeo with Compose 1. Write Dockerfiles 2. Describe our stack of containers in a YAML file (the "Compose file") 3. `docker compose up` (or `docker compose up -d` to run in the background) 4. Compose pulls and builds the required images, and starts the containers 5. Compose shows the combined logs of all the containers (if running in the background, use `docker compose logs`) 6. Hit Ctrl-C to stop the whole stack (if running in the background, use `docker compose stop`) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Iterating After making changes to our source code, we can: 1. `docker compose build` to rebuild container images 2. 
`docker compose up` to restart the stack with the new images We can also combine both with `docker compose up --build` Compose will be smart, and only recreate the containers that have changed. When working with interpreted languages: - don't rebuild each time - leverage a `volumes` section instead .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash git clone https://github.com/jpetazzo/trainingwheels cd trainingwheels ``` Second step: start the app. ```bash docker compose up ``` Watch Compose build and run the app. That Compose stack exposes a web server on port 8000; try connecting to it. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose We should see a web page like this:  Each time we reload, the counter should increase. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When we hit Ctrl-C, Compose tries to gracefully terminate all of the containers. After ten seconds (or if we press `^C` again) it will forcibly kill them. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## The Compose file * Historically: docker-compose.yml or .yaml * Recently (kind of): can also be named compose.yml or .yaml (Since [version 1.28.6, March 2021](https://docs.docker.com/compose/releases/release-notes/#1286)) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Example Here is the file used in the demo: .small[ ```yaml version: "3" services: www: build: www ports: - ${PORT-8000}:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src redis: image: redis ``` ] .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file structure A Compose file has multiple sections: * `services` is mandatory. Each service corresponds to a container. * `version` is optional (it used to be mandatory). It can be ignored. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-compose-file network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: extra-details ## Compose file versions * Version 1 is legacy and shouldn't be used. (If you see a Compose file without a `services` block, it's a legacy v1 file.) * Version 2 added support for networks and volumes. * Version 3 added support for deployment options (scaling, rolling updates, etc). The [Docker documentation](https://docs.docker.com/compose/compose-file/) has excellent information about the Compose file format if you need to know more about versions. 
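To make that structure concrete, here is a minimal, hypothetical Compose file using the optional `networks` and `volumes` sections alongside `services`; `docker compose config` validates it and prints the resolved configuration:

```bash
$ cat compose.yml
services:
  web:
    build: web              # one service = one container
    ports:
      - "8000:5000"
    networks: [ front ]     # connect this service to the "front" network
    volumes:
      - data:/var/lib/app   # mount the "data" named volume
networks:
  front: {}
volumes:
  data: {}
$ docker compose config
```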
.debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Containers in Compose file Each service in the YAML file must contain either `build` or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry). * If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they also include minor improvements. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Environment variables - We can use environment variables in Compose files (like `$THIS` or `${THAT}`) - We can provide default values, e.g. `${PORT-8000}` - Compose will also automatically load the environment file `.env` (it should contain `VAR=value`, one per line) - This is a great way to customize build and run parameters (base image versions to use, build and run secrets, port numbers...) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Configuring a Compose stack - Follow [12-factor app configuration principles][12factorconfig] (configure the app through environment variables) - Provide (in the repo) a default environment file suitable for development (no secret or sensitive value) - Copy the default environment file to `.env` and tweak it (or: provide a script to generate `.env` from a template) [12factorconfig]: https://12factor.net/config .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Running multiple copies of a stack - Copy the stack to two different directories, e.g. `front` and `frontcopy` - Compose prefixes images and containers with the directory name: `front_www`, `front_www_1`, `front_db_1` `frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1` - Alternatively, use `docker compose -p frontcopy` (to set the `--project-name` of a stack, which defaults to the directory name) - Each copy is isolated from the others (runs on a different network) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Checking stack status We have `ps`, `docker ps`, and similarly, `docker compose ps`: ```bash $ docker compose ps Name Command State Ports ---------------------------------------------------------------------------- trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp ``` Shows the status of all the containers of our stack. Doesn't show the other containers.
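The same goes for the other Compose commands: when running multiple copies of a stack, each copy can be driven through its project name. A sketch, with `frontcopy` as an arbitrary project name:

```bash
$ docker compose -p frontcopy up -d    # start a separate copy of the stack
$ docker compose -p frontcopy ps       # status of that copy only
$ docker compose -p frontcopy down     # stop and remove that copy
```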
.debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker compose kill ``` Likewise, `docker compose rm` will let you remove containers (after confirmation): ```bash $ docker compose rm Going to remove trainingwheels_redis_1, trainingwheels_www_1 Are you sure? [yN] y Removing trainingwheels_redis_1... Removing trainingwheels_www_1... ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker compose down` will stop and remove containers. It will also remove other resources, like networks that were created for the application. ```bash $ docker compose down Stopping trainingwheels_www_1 ... done Stopping trainingwheels_redis_1 ... done Removing trainingwheels_www_1 ... done Removing trainingwheels_redis_1 ... done ``` Use `docker compose down -v` to remove everything including volumes. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes - When an image gets updated, Compose automatically creates a new container - The data in the old container is lost... - ...Except if the container is using a *volume* - Compose will then re-attach that volume to the new container (and data is then retained across database upgrades) - All good database images use volumes (e.g. all official images) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Gotchas with volumes - Unfortunately, Docker volumes don't have labels or metadata - Compose tracks volumes thanks to their associated container - If the container is deleted, the volume gets orphaned - Example: `docker compose down && docker compose up` - the old volume still exists, detached from its container - a new volume gets created - `docker compose down -v`/`--volumes` deletes volumes (but **not** `docker compose down && docker compose down -v`!) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Managing volumes explicitly Option 1: *named volumes* ```yaml services: app: volumes: - data:/some/path volumes: data: ``` - Volume will be named `<project>_data` - It won't be orphaned with `docker compose down` - It will correctly be removed with `docker compose down -v` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Managing volumes explicitly Option 2: *relative paths* ```yaml services: app: volumes: - ./data:/some/path ``` - Makes it easy to colocate the app and its data (for migration, backups, disk usage accounting...)
- Won't be removed by `docker compose down -v` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Managing complex stacks - Compose provides multiple features to manage complex stacks (with many containers) - `-f`/`--file`/`$COMPOSE_FILE` can be a list of Compose files (separated by `:` and merged together) - Services can be assigned to one or more *profiles* - `--profile`/`$COMPOSE_PROFILES` can be a list of comma-separated profiles (see [Using service profiles][profiles] in the Compose documentation) - These variables can be set in `.env` [profiles]: https://docs.docker.com/compose/profiles/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Dependencies - A service can have a `depends_on` section (listing one or more other services) - This is used when bringing up individual services (e.g. `docker compose up blah` or `docker compose run foo`) ⚠️ It doesn't make a service "wait" for another one to be up! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: extra-details ## A bit of history and trivia - Compose was initially named "Fig" - Compose is one of the only components of Docker written in Python (almost everything else is in Go) - In 2020, Docker introduced "Compose CLI": - `docker compose` command to deploy Compose stacks to some clouds - in Go instead of Python - progressively getting feature parity with `docker-compose` - also provides numerous improvements (e.g. leverages BuildKit by default) ??? :EN:- Using compose to describe an environment :EN:- Connecting services together with a *Compose file* :FR:- Utiliser Compose pour décrire son environnement :FR:- Écrire un *Compose file* pour connecter les services entre eux .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-a-compose-file class: title Exercise — writing a Compose file .nav[ [Previous part](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-part-6) | [Next part](#toc-installing-docker) ] .debug[(automatically generated title slide)] --- # Exercise — writing a Compose file Let's write a Compose file for the wordsmith app! The code is at: https://github.com/jpetazzo/wordsmith .debug[[containers/Exercise_Composefile.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Exercise_Composefile.md)] --- class: pic .interstitial[] --- name: toc-installing-docker class: title Installing Docker .nav[ [Previous part](#toc-exercise--writing-a-compose-file) | [Back to table of contents](#toc-part-7) | [Next part](#toc-docker-engine-and-other-container-engines) ] .debug[(automatically generated title slide)] --- class: title # Installing Docker  .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Objectives At the end of this lesson, you will know: * How to install Docker. * When to use `sudo` when running Docker commands. *Note:* if you were provided with a training VM for a hands-on tutorial, you can skip this chapter, since that VM already has Docker installed, and Docker has already been set up to run without `sudo`.
.debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Installing Docker There are many ways to install Docker. We can arbitrarily distinguish: * Installing Docker on an existing Linux machine (physical or VM) * Installing Docker on macOS or Windows * Installing Docker on a fleet of cloud VMs .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Installing Docker on Linux * The recommended method is to install the packages supplied by Docker Inc.: - add Docker Inc.'s package repositories to your system configuration - install the Docker Engine * Detailed installation instructions (distro by distro) are available on: https://docs.docker.com/engine/installation/ * You can also install from binaries (if your distro is not supported): https://docs.docker.com/engine/installation/linux/docker-ce/binaries/ * To quickly set up a dev environment, Docker provides a convenience install script: ```bash curl -fsSL get.docker.com | sh ``` .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- class: extra-details ## Docker Inc. packages vs distribution packages * Docker Inc. releases new versions monthly (edge) and quarterly (stable) * Releases are immediately available on Docker Inc.'s package repositories * Linux distros don't always update to the latest Docker version (Sometimes, updating would break their guidelines for major/minor upgrades) * Sometimes, some distros have carried packages with custom patches * Sometimes, these patches added critical security bugs ☹ * Installing through Docker Inc.'s repositories is a bit of extra work … … but it is generally worth it! .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Installing Docker on macOS and Windows * On macOS, the recommended method is to use Docker Desktop for Mac: https://docs.docker.com/docker-for-mac/install/ * On Windows 10 Pro, Enterprise, and Education, you can use Docker Desktop for Windows: https://docs.docker.com/docker-for-windows/install/ * On older versions of Windows, you can use the Docker Toolbox: https://docs.docker.com/toolbox/toolbox_install_windows/ * On Windows Server 2016, you can also install the native engine: https://docs.docker.com/install/windows/docker-ee/ .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Docker Desktop * Special Docker edition available for Mac and Windows * Integrates well with the host OS: * installed like normal user applications on the host * provides user-friendly GUI to edit Docker configuration and settings * Only supports running one Docker VM at a time ... ... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster. .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- class: extra-details ## Docker Desktop internals * Leverages the host OS virtualization subsystem (e.g. 
the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS) * Under the hood, runs a tiny VM (transparent to our daily use) * Accesses network resources like normal applications (and therefore, plays better with enterprise VPNs and firewalls) * Supports filesystem sharing through volumes (we'll talk about this later) .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Running Docker on macOS and Windows When you execute `docker version` from the terminal: * the CLI connects to the Docker Engine over a standard socket, * the Docker Engine is, in fact, running in a VM, * ... but the CLI doesn't know or care about that, * the CLI sends a request using the REST API, * the Docker Engine in the VM processes the request, * the CLI gets the response and displays it to you. All communication with the Docker Engine happens over the API. This will also allow us to use remote Engines exactly as if they were local. .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Important PSA about security * If you have access to the Docker control socket, you can take over the machine (Because you can run containers that will access the machine's resources) * Therefore, on Linux machines, the `docker` user is equivalent to `root` * You should restrict access to it like you would protect `root` * By default, the Docker control socket belongs to the `docker` group * You can add trusted users to the `docker` group * Otherwise, you will have to prefix every `docker` command with `sudo`, e.g.: ```bash sudo docker version ``` .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- class: pic .interstitial[] --- name: toc-docker-engine-and-other-container-engines class: title Docker Engine and other container engines .nav[ [Previous part](#toc-installing-docker) | [Back to table of contents](#toc-part-7) | [Next part](#toc-init-systems-and-pid-) ] .debug[(automatically generated title slide)] --- # Docker Engine and other container engines * We are going to cover the architecture of the Docker Engine. * We will also present other container engines. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine external architecture  .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Docker Engine external architecture * The Engine is a daemon (service running in the background). * All interaction is done through a REST API exposed over a socket. * On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`. * We can also use a TCP socket, with optional mutual TLS authentication. * The `docker` CLI communicates with the Engine over the socket. Note: strictly speaking, the Docker API is not fully REST. Some operations (e.g. dealing with interactive containers and log streaming) don't fit the REST model.
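To see that API first-hand, we can query it directly through the socket (a sketch; assumes a local Engine and a `curl` built with UNIX socket support, output abridged):

```bash
$ curl --unix-socket /var/run/docker.sock http://localhost/version
{"Platform":{"Name":"..."},"Version":"...","ApiVersion":"...", ...}
```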
.debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine internal architecture  .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Docker Engine internal architecture * Up to Docker 1.10: the Docker Engine is one single monolithic binary. * Starting with Docker 1.11, the Engine is split into multiple parts: - `dockerd` (REST API, auth, networking, storage) - `containerd` (container lifecycle, controlled over a gRPC API) - `containerd-shim` (per-container; does almost nothing but allows restarting the Engine without restarting the containers) - `runc` (per-container; does the actual heavy lifting to start the container) * Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`. For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Other container engines The following list is not exhaustive. Furthermore, we limited the scope to Linux containers. We can also find containers (or things that look like containers) on other platforms like Windows, macOS, Solaris, FreeBSD ... .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## LXC * The venerable ancestor (first released in 2008). * Docker initially relied on it to execute containers. * No daemon; no central API. * Each container is managed by a `lxc-start` process. * Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing us to interact with the container. * No notion of image (container filesystems have to be managed manually). * Networking has to be set up manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## LXD * Re-uses LXC code (through liblxc). * Builds on top of LXC to offer a more modern experience. * Daemon exposing a REST API. * Can manage images, snapshots, migrations, networking, storage. * "offers a user experience similar to virtual machines but using Linux containers instead." .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## CRI-O * Designed to be used with Kubernetes as a simple, basic runtime. * Compares to `containerd`. * Daemon exposing a gRPC interface. * Controlled using the CRI API (Container Runtime Interface defined by Kubernetes). * Needs an underlying OCI runtime (e.g. runc). * Handles storage, images, networking (through CNI plugins). We're not aware of anyone using it directly (i.e. outside of Kubernetes). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## systemd * "init" system (PID 1) in most modern Linux distributions. * Offers tools like `systemd-nspawn` and `machinectl` to manage containers. * `systemd-nspawn` is "In many ways it is similar to chroot(1), but more powerful". * `machinectl` can interact with VMs and containers managed by systemd. * Exposes a DBUS API.
* Basic image support (tar archives and raw disk images). * Networking has to be set up manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Kata containers * OCI-compliant runtime. * Fusion of two projects: Intel Clear Containers and Hyper runV. * Runs each container in a lightweight virtual machine. * Requires running on bare metal *or* with nested virtualization. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## gVisor * OCI-compliant runtime. * Implements a subset of the Linux kernel system calls. * Written in Go, and itself uses a smaller set of system calls. * Can be heavily sandboxed. * Can run in two modes: * KVM (requires bare metal or nested virtualization), * ptrace (no requirement, but slower). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Overall ... * The Docker Engine is very developer-centric: - easy to install - easy to use - no manual setup - first-class image build and transfer * As a result, it is a fantastic tool in development environments. * On servers: - Docker is a good default choice - If you use Kubernetes, the engine doesn't matter .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- class: pic .interstitial[] --- name: toc-init-systems-and-pid- class: title Init systems and PID 1 .nav[ [Previous part](#toc-docker-engine-and-other-container-engines) | [Back to table of contents](#toc-part-7) | [Next part](#toc-advanced-dockerfile-syntax) ] .debug[(automatically generated title slide)] --- # Init systems and PID 1 In this chapter, we will consider: - the role of PID 1 in the world of Docker, - how to avoid some common pitfalls due to the misuse of init systems. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## What's an init system? - On UNIX, the "init system" (or "init" in short) is PID 1. - It is the first process started by the kernel when the system starts. - It has multiple responsibilities: - start every other process on the machine, - reap orphaned zombie processes. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- class: extra-details ## Orphaned zombie processes ?!? - When a process exits (or "dies"), it becomes a "zombie". (Zombie processes show up in `ps` or `top` with the status code `Z`.) - Its parent process must *reap* the zombie process. (This is done by calling `waitpid()` to retrieve the process' exit status.) - When a process exits, if it has child processes, these processes are "orphaned." - They are then re-parented to PID 1, init. - Init therefore needs to take care of these orphaned processes when they exit. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## Don't use init systems in containers - It's often tempting to use an init system or a process manager. (Examples: *systemd*, *supervisord*...) - Our containers are then called "system containers". (By contrast with "application containers".) - "System containers" are similar to lightweight virtual machines.
- They have multiple downsides: - when starting multiple processes, their logs get mixed on stdout, - if the application process dies, the container engine doesn't see it. - Overall, they make it harder to operate and troubleshoot containerized apps. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## Exceptions and workarounds - Sometimes, it's convenient to run a real init system like *systemd*. (Example: a CI system whose goal is precisely to test an init script or unit file.) - If we need to run multiple processes: can we use multiple containers? (Example: [this Compose file](https://github.com/jpetazzo/container.training/blob/master/compose/simple-k8s-control-plane/docker-compose.yaml) runs multiple processes together.) - When deploying with Kubernetes: - a container belongs to a pod, - a pod can have multiple containers. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## What about these zombie processes? - Our application runs as PID 1 in the container. - Our application may or may not be designed to reap zombie processes. - If our application uses subprocesses and doesn't reap them ... ... this can lead to PID exhaustion! (Or, more realistically, to a confusing herd of zombie processes.) - How can we solve this? .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## Tini to the rescue - Docker can automatically provide a minimal `init` process. - This is enabled with `docker run --init ...` - It uses a small init system ([tini](https://github.com/krallin/tini)) as PID 1: - it reaps zombies, - it forwards signals, - it exits when the child exits. - It is totally transparent to our application. - We should use it if our application creates subprocesses but doesn't reap them. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- class: extra-details ## What about Kubernetes? - Kubernetes does not expose that `--init` option. - However, we can achieve the same result with [Process Namespace Sharing](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/). - When Process Namespace Sharing is enabled, PID 1 will be `pause`. - That `pause` process takes care of reaping zombies. - Process Namespace Sharing is available since Kubernetes 1.16. - If you're using an older version of Kubernetes ... ... you might have to add `tini` explicitly to your Docker image. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- class: pic .interstitial[] --- name: toc-advanced-dockerfile-syntax class: title Advanced Dockerfile Syntax .nav[ [Previous part](#toc-init-systems-and-pid-) | [Back to table of contents](#toc-part-7) | [Next part](#toc-buildkit) ] .debug[(automatically generated title slide)] --- class: title # Advanced Dockerfile Syntax  .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Objectives We have seen simple Dockerfiles to illustrate how Docker builds container images. In this section, we will give a recap of the Dockerfile syntax, and introduce advanced Dockerfile commands that we might come across sometimes, or that we might want to use in some specific scenarios.
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `Dockerfile` usage summary * `Dockerfile` instructions are executed in order. * Each instruction creates a new layer in the image. * Docker maintains a cache with the layers of previous builds. * When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer. * The `FROM` instruction MUST be the first non-comment instruction. * Lines starting with `#` are treated as comments. * Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata. (As a result, each call to these instructions makes the previous one useless.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `RUN` instruction The `RUN` instruction can be specified in two ways. With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`: ```dockerfile RUN apt-get update ``` Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`: ```dockerfile RUN [ "apt-get", "update" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `RUN` instruction `RUN` will do the following: * Execute a command. * Record changes made to the filesystem. * Work great to install libraries, packages, and various files. `RUN` will NOT do the following: * Record state of *processes*. * Automatically start daemons. If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Collapsing layers It is possible to execute multiple commands in a single step: ```dockerfile RUN apt-get update && apt-get install -y wget && apt-get clean ``` It is also possible to break a command onto multiple lines: ```dockerfile RUN apt-get update \ && apt-get install -y wget \ && apt-get clean ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `EXPOSE` instruction The `EXPOSE` instruction tells Docker what ports are to be published in this image. ```dockerfile EXPOSE 8080 EXPOSE 80 443 EXPOSE 53/tcp 53/udp ``` * All ports are private by default. * Declaring a port with `EXPOSE` is not enough to make it public. * The `Dockerfile` doesn't control on which port a service gets exposed. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Exposing ports * When you `docker run -p ...`, that port becomes public. (Even if it was not declared with `EXPOSE`.) * When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public. A *public port* is reachable from other containers and from outside the host. A *private port* is not reachable from outside. 
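A quick way to see `EXPOSE` and `-P` interact (a sketch; the actual host port is picked by Docker and will vary):

```bash
$ docker run -d --name web -P nginx   # the official nginx image declares EXPOSE 80
$ docker port web                     # show the resulting port mappings
80/tcp -> 0.0.0.0:32768
```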
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `COPY` instruction The `COPY` instruction adds files and content from your host into the image. ```dockerfile COPY . /src ``` This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Build context isolation Note: you can only reference files and directories *inside* the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent: ```dockerfile COPY . /src COPY / /src ``` Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail. Otherwise, a `Dockerfile` could succeed on host A, but fail on host B. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD` `ADD` works almost like `COPY`, but has a few extra features. `ADD` can get remote files: ```dockerfile ADD http://www.example.com/webapp.jar /opt/ ``` This would download the `webapp.jar` file and place it in the `/opt` directory. `ADD` will automatically unpack zip files and tar archives: ```dockerfile ADD ./assets.zip /var/www/htdocs/assets/ ``` This would unpack `assets.zip` into `/var/www/htdocs/assets`. *However,* `ADD` will not automatically unpack remote archives. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD`, `COPY`, and the build cache * Before creating a new layer, Docker checks its build cache. * For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup. * For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed. * `ADD` always needs to download the remote file before it can check if it has been changed. (It cannot use, e.g., ETags or If-Modified-Since headers.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `VOLUME` The `VOLUME` instruction tells Docker that a specific directory should be a *volume*. ```dockerfile VOLUME /var/lib/mysql ``` Filesystem access in volumes bypasses the copy-on-write layer, offering native performance to I/O done in those directories. Volumes can be attached to multiple containers, making it possible to "port" data from one container to another, e.g. to upgrade a database to a newer version. It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `WORKDIR` instruction The `WORKDIR` instruction sets the working directory for subsequent instructions. It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container. ```dockerfile WORKDIR /src ``` You can specify `WORKDIR` again to change the working directory for further operations.
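Note that the working directory can also be overridden at run time with `-w` / `--workdir`; a quick check:

```bash
$ docker run --rm -w /tmp alpine pwd
/tmp
```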
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENV` instruction The `ENV` instruction specifies environment variables that should be set in any container launched from the image. ```dockerfile ENV WEBAPP_PORT 8080 ``` This will result in the following environment variable being set in any container created from this image: ```bash WEBAPP_PORT=8080 ``` You can also specify environment variables when you use `docker run`. ```bash $ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ... ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `USER` instruction The `USER` instruction sets the user name or UID to use when running the image. It can be used multiple times to change back to root or to another user. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `CMD` instruction The `CMD` instruction is a default command run when a container is launched from the image. ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` Means we don't need to specify `nginx -g "daemon off;"` when running the container. Instead of: ```bash $ docker run <dockerhubUsername>/web_image nginx -g "daemon off;" ``` We can just do: ```bash $ docker run <dockerhubUsername>/web_image ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `CMD` instruction Just like `RUN`, the `CMD` instruction comes in two forms. The first executes in a shell: ```dockerfile CMD nginx -g "daemon off;" ``` The second executes directly, without shell processing: ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `CMD` instruction The `CMD` can be overridden when you run a container. ```bash $ docker run -it <dockerhubUsername>/web_image bash ``` Will run `bash` instead of `nginx -g "daemon off;"`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENTRYPOINT` instruction The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point. Note: you have to use the "exec" syntax (`[ "..." ]`). ```dockerfile ENTRYPOINT [ "/bin/ls" ] ``` If we were to run: ```bash $ docker run training/ls -l ``` Instead of trying to run `-l`, the container will run `/bin/ls -l`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `ENTRYPOINT` instruction The entry point can be overridden as well. ```bash $ docker run -it training/ls bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr $ docker run -it --entrypoint bash training/ls root@d902fb7b1fc7:/# ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## How `CMD` and `ENTRYPOINT` interact The `CMD` and `ENTRYPOINT` instructions work best when used together.
```dockerfile ENTRYPOINT [ "nginx" ] CMD [ "-g", "daemon off;" ] ``` The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. On the command line we can then potentially override the options when needed. ```bash $ docker run -d <dockerhubUsername>/web_image -t ``` This will override the options provided by `CMD` with new flags. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Advanced Dockerfile instructions * `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one. * `LABEL` adds arbitrary metadata to the image. * `ARG` defines build-time variables (optional or mandatory). * `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default). * `HEALTHCHECK` defines a command assessing the status of the container. * `SHELL` sets the default program to use for string-syntax RUN, CMD, etc. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## The `ONBUILD` instruction The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built. This is useful for building images which will be used as a base to build other images. ```dockerfile ONBUILD COPY . /src ``` * You can't chain `ONBUILD` instructions with `ONBUILD`. * `ONBUILD` can't be used to trigger `FROM` instructions. ??? :EN:- Advanced Dockerfile syntax :FR:- Dockerfile niveau expert .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: pic .interstitial[] --- name: toc-buildkit class: title Buildkit .nav[ [Previous part](#toc-advanced-dockerfile-syntax) | [Back to table of contents](#toc-part-7) | [Next part](#toc-application-configuration) ] .debug[(automatically generated title slide)] --- # Buildkit - "New" backend for Docker builds - announced in 2017 - ships with Docker Engine 18.09 - enabled by default on Docker Desktop in 2021 - Huge improvements in build efficiency - 100% compatible with existing Dockerfiles - New features for multi-arch - Not just for building container images .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Old vs New - Classic `docker build`: - copy whole build context - linear execution - `docker run` + `docker commit` + `docker run` + `docker commit`... - Buildkit: - copy files only when they are needed; cache them - compute dependency graph (dependencies are expressed by `COPY`) - parallel execution - doesn't rely on Docker, but on internal runner/snapshotter - can run in "normal" containers (including in Kubernetes pods) .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Parallel execution - In multi-stage builds, all stages can be built in parallel (example: https://github.com/jpetazzo/shpod; [before][shpod-before-parallel] and [after][shpod-after-parallel]) - Stages are built only when they are necessary (i.e. 
if their output is tagged or used in another necessary stage)

- Files are copied from context only when needed

- Files are cached in the builder

[shpod-before-parallel]: https://github.com/jpetazzo/shpod/blob/c6efedad6d6c3dc3120dbc0ae0a6915f85862474/Dockerfile
[shpod-after-parallel]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Turning it on and off

- On recent versions of Docker Desktop (since 2021): *enabled by default*

- On older versions, or on Docker CE (Linux): `export DOCKER_BUILDKIT=1`

- Turning it off: `export DOCKER_BUILDKIT=0`

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Multi-arch support

- Historically, Docker only ran on x86_64 / amd64 (the Intel/AMD 64-bit architecture)

- Folks have been running it on 32-bit ARM for ages (e.g. Raspberry Pi)

- This required a Go compiler and appropriate base images (which means changing/adapting Dockerfiles to use these base images)

- Docker [image manifest v2 schema 2][manifest] introduces multi-arch images

  (`FROM alpine` automatically gets the right image for your architecture)

[manifest]: https://docs.docker.com/registry/spec/manifest-v2-2/

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Why?

- Raspberry Pi (32-bit and 64-bit ARM)

- Other ARM-based embedded systems (ODROID, NVIDIA Jetson...)

- Apple M1, M2...

- AWS Graviton

- Ampere Altra (e.g. on Hetzner, Oracle Cloud, Scaleway...)

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Multi-arch builds in a nutshell

Use the `docker buildx build` command:

```bash
docker buildx build … \
    --platform linux/amd64,linux/arm64,linux/arm/v7,linux/386 \
    [--tag jpetazzo/hello --push]
```

- Requires all base images to be available for these platforms

- Must not use binary downloads with hard-coded architectures!
(streamlining a Dockerfile for multi-arch: [before][shpod-before-multiarch], [after][shpod-after-multiarch])

[shpod-before-multiarch]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
[shpod-after-multiarch]: https://github.com/jpetazzo/shpod/blob/c50789e662417b34fea6f5e1d893721d66d265b7/Dockerfile

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Native vs emulated vs cross

- Native builds: *aarch64 machine running aarch64 programs building aarch64 images/binaries*

- Emulated builds: *x86_64 machine running aarch64 programs building aarch64 images/binaries*

- Cross builds: *x86_64 machine running x86_64 programs building aarch64 images/binaries*

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Native

- Dockerfiles are (relatively) simple to write

  (nothing special to do to handle multi-arch; just avoid hard-coded archs)

- Best performance

- Requires "exotic" machines

- Requires setting up a build farm

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Emulated

- Dockerfiles are (relatively) simple to write

- Emulation performance can vary (from "OK" to "ouch this is slow")

- Emulation isn't always perfect (weird bugs/crashes are rare but can happen)

- Doesn't require special machines

- Supports arbitrary architectures thanks to QEMU

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Cross

- Dockerfiles are more complicated to write

- Requires cross-compilation toolchains

- Performance is good

- Doesn't require special machines

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Native builds

- Requires base images to be available

- To view available architectures for an image:

  ```bash
  regctl manifest get --list <imagename>
  docker manifest inspect <imagename>
  ```

- Nothing special to do, *except* when downloading binaries!

  ```
  https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_`amd64`.zip
  ```

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Finding the right architecture

`uname -m` → armv7l, aarch64, i686, x86_64

`GOARCH` (from `go env`) → arm, arm64, 386, amd64

In Dockerfile, add `ARG TARGETARCH` (or `ARG TARGETPLATFORM`)

- `TARGETARCH` matches `GOARCH`

- `TARGETPLATFORM` → linux/arm/v7, linux/arm64, linux/386, linux/amd64

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

class: extra-details

## Welp

Sometimes, binary releases be like:

```
Linux_arm64.tar.gz
Linux_ppc64le.tar.gz
Linux_s390x.tar.gz
Linux_x86_64.tar.gz
```

This needs a bit of custom mapping.
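A minimal sketch of such a mapping, using the `TARGETARCH` build argument mentioned earlier (the download URL is made up; only the architecture translation is the point):

```dockerfile
FROM alpine
ARG TARGETARCH
# Translate Docker's TARGETARCH (amd64, arm64...) into this project's naming scheme
RUN case "$TARGETARCH" in \
      amd64)   ARCH=x86_64 ;; \
      arm64)   ARCH=arm64 ;; \
      ppc64le) ARCH=ppc64le ;; \
      s390x)   ARCH=s390x ;; \
      *) echo "Unsupported architecture: $TARGETARCH" >&2; exit 1 ;; \
    esac \
 && wget -O /tmp/release.tar.gz "https://example.com/releases/Linux_${ARCH}.tar.gz" \
 && tar -xzf /tmp/release.tar.gz -C /usr/local/bin \
 && rm /tmp/release.tar.gz
```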
.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Emulation

- Leverages `binfmt_misc` and QEMU on Linux

- Enabling:
  ```bash
  docker run --rm --privileged aptman/qus -s -- -p
  ```

- Disabling:
  ```bash
  docker run --rm --privileged aptman/qus -- -r
  ```

- Checking status:
  ```bash
  ls -l /proc/sys/fs/binfmt_misc
  ```

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

class: extra-details

## How it works

- `binfmt_misc` lets us register _interpreters_ for binaries, e.g.:

  - [DOSBox][dosbox] for DOS programs

  - [Wine][wine] for Windows programs

  - [QEMU][qemu] for Linux programs for other architectures

- When we try to execute e.g. a SPARC binary on our x86_64 machine:

  - `binfmt_misc` detects the binary format and invokes `qemu-sparc the-binary ...`

  - QEMU translates SPARC instructions to x86_64 instructions

  - system calls go straight to the kernel

[dosbox]: https://www.dosbox.com/
[qemu]: https://www.qemu.org/
[wine]: https://www.winehq.org/

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

class: extra-details

## QEMU registration

- The `aptman/qus` image mentioned earlier contains static QEMU builds

- It registers all these interpreters with the kernel

- For more details, check:

  - https://github.com/dbhi/qus

  - https://dbhi.github.io/qus/

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Cross-compilation

- Cross-compilation is about 10x faster than emulation

  (non-scientific benchmarks!)

- In Dockerfile, add: `ARG BUILDARCH BUILDPLATFORM TARGETARCH TARGETPLATFORM`

- Can use `FROM --platform=$BUILDPLATFORM <image>`

- Then use `$TARGETARCH` or `$TARGETPLATFORM`

  (e.g. for Go, `export GOARCH=$TARGETARCH`)

- Check [tonistiigi/xx][xx] and [Toni's blog][toni] for some amazing cross tools!

[xx]: https://github.com/tonistiigi/xx
[toni]: https://medium.com/@tonistiigi/faster-multi-platform-builds-dockerfile-cross-compilation-guide-part-1-ec087c719eaf

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Checking runtime capabilities

Build and run the following Dockerfile:

```dockerfile
FROM --platform=linux/amd64 busybox AS amd64
FROM --platform=linux/arm64 busybox AS arm64
FROM --platform=linux/arm/v7 busybox AS arm32
FROM --platform=linux/386 busybox AS ia32
FROM alpine
RUN apk add file
WORKDIR /root
COPY --from=amd64 /bin/busybox /root/amd64/busybox
COPY --from=arm64 /bin/busybox /root/arm64/busybox
COPY --from=arm32 /bin/busybox /root/arm32/busybox
COPY --from=ia32 /bin/busybox /root/ia32/busybox
CMD for A in *; do echo "$A => $($A/busybox uname -a)"; done
```

It will indicate which executables can be run on your engine.

.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)]

---

## Cache directories

```bash
RUN --mount=type=cache,target=/pipcache pip install --cache-dir /pipcache ...
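# For example, with a (hypothetical) requirements file, the complete
# instruction could be:
# RUN --mount=type=cache,target=/pipcache pip install --cache-dir /pipcache -r requirements.txt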
``` - The `/pipcache` directory won't be in the final image - But it will persist across builds - This can simplify Dockerfiles a lot - we no longer need to `download package && install package && rm package` - download to a cache directory, and skip `rm` phase - Subsequent builds will also be faster, thanks to caching .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## More than builds - Buildkit is also used in other systems: - [Earthly] - generic repeatable build pipelines - [Dagger] - CICD pipelines that run anywhere - and more! [Earthly]: https://earthly.dev/ [Dagger]: https://dagger.io/ .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- class: pic .interstitial[] --- name: toc-application-configuration class: title Application Configuration .nav[ [Previous part](#toc-buildkit) | [Back to table of contents](#toc-part-8) | [Next part](#toc-logging) ] .debug[(automatically generated title slide)] --- # Application Configuration There are many ways to provide configuration to containerized applications. There is no "best way" — it depends on factors like: * configuration size, * mandatory and optional parameters, * scope of configuration (per container, per app, per customer, per site, etc), * frequency of changes in the configuration. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Command-line parameters ```bash docker run jpetazzo/hamba 80 www1:80 www2:80 ``` * Configuration is provided through command-line parameters. * In the above example, the `ENTRYPOINT` is a script that will: - parse the parameters, - generate a configuration file, - start the actual service. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Command-line parameters pros and cons * Appropriate for mandatory parameters (without which the service cannot start). * Convenient for "toolbelt" services instantiated many times. (Because there is no extra step: just run it!) * Not great for dynamic configurations or bigger configurations. (These things are still possible, but more cumbersome.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Environment variables ```bash docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana ``` * Configuration is provided through environment variables. * The environment variable can be used straight by the program, or by a script generating a configuration file. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Environment variables pros and cons * Appropriate for optional parameters (since the image can provide default values). * Also convenient for services instantiated many times. (It's as easy as command-line parameters.) * Great for services with lots of parameters, but you only want to specify a few. (And use default values for everything else.) * Ability to introspect possible parameters and their default values. * Not great for dynamic configurations. 
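For example, the default values mentioned above can be implemented with shell parameter expansion in the entrypoint script. A minimal sketch (the variable names and the `mywebapp` program are hypothetical):

```bash
#!/bin/sh
# Use the values provided with `docker run -e`, or fall back to defaults
: "${WEBAPP_PORT:=8080}"
: "${WEBAPP_HOST:=0.0.0.0}"
exec mywebapp --host "$WEBAPP_HOST" --port "$WEBAPP_PORT"
```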
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration ``` FROM prometheus COPY prometheus.conf /etc ``` * The configuration is added to the image. * The image may have a default configuration; the new configuration can: - replace the default configuration, - extend it (if the code can read multiple configuration files). .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration pros and cons * Allows arbitrary customization and complex configuration files. * Requires writing a configuration file. (Obviously!) * Requires building an image to start the service. * Requires rebuilding the image to reconfigure the service. * Requires rebuilding the image to upgrade the service. * Configured images can be stored in registries. (Which is great, but requires a registry.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Configuration volume ```bash docker run -v appconfig:/etc/appconfig myapp ``` * The configuration is stored in a volume. * The volume is attached to the container. * The image may have a default configuration. (But this results in a less "obvious" setup, that needs more documentation.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Configuration volume pros and cons * Allows arbitrary customization and complex configuration files. * Requires creating a volume for each different configuration. * Services with identical configurations can use the same volume. * Doesn't require building / rebuilding an image when upgrading / reconfiguring. * Configuration can be generated or edited through another container. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume * This is a powerful pattern for dynamic, complex configurations. * The configuration is stored in a volume. * The configuration is generated / updated by a special container. * The application container detects when the configuration is changed. (And automatically reloads the configuration when necessary.) * The configuration can be shared between multiple services if needed. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume example In a first terminal, start a load balancer with an initial configuration: ```bash $ docker run --name loadbalancer jpetazzo/hamba \ 80 goo.gl:80 ``` In another terminal, reconfigure that load balancer: ```bash $ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \ 80 google.com:80 ``` The configuration could also be updated through e.g. a REST API. (The REST API being itself served from another container.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Keeping secrets .warning[Ideally, you should not put secrets (passwords, tokens...) 
in:]

* command-line or environment variables (anyone with Docker API access can get them),

* images, especially stored in a registry.

Secrets management is better handled with an orchestrator (like Swarm or Kubernetes).

Orchestrators will allow you to pass secrets in a "one-way" manner.

Managing secrets securely without an orchestrator can be contrived. E.g.:

- read the secret on stdin when the service starts,

- pass the secret using an API endpoint.

.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)]

---

class: pic

.interstitial[]

---

name: toc-logging
class: title

Logging

.nav[ [Previous part](#toc-application-configuration) | [Back to table of contents](#toc-part-8) | [Next part](#toc-orchestration-an-overview) ]

.debug[(automatically generated title slide)]

---

# Logging

In this chapter, we will explain the different ways to send logs from containers.

We will then show one particular method in action, using ELK and Docker's logging drivers.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## There are many ways to send logs

- The simplest method is to write on the standard output and error.

- Applications can write their logs to local files.

  (The files are usually periodically rotated and compressed.)

- It is also very common (on UNIX systems) to use syslog.

  (The logs are collected by syslogd or an equivalent like journald.)

- In large applications with many components, it is common to use a logging service.

  (The code uses a library to send messages to the logging service.)

*All these methods are available with containers.*

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Writing on stdout/stderr

- The standard output and error of containers is managed by the container engine.

- This means that each line written by the container is received by the engine.

- The engine can then do "whatever" with these log lines.

- With Docker, the default configuration is to write the logs to local files.

- The files can then be queried with e.g. `docker logs` (and the equivalent API request).

- This can be customized, as we will see later.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Writing to local files

- If we write to files, it is possible to access them, but it is cumbersome.

  (We have to use `docker exec` or `docker cp`.)

- Furthermore, if the container is stopped, we cannot use `docker exec`.

- If the container is deleted, the logs disappear.

- What should we do for programs that can only log to local files?

--

- There are multiple solutions.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Using a volume or bind mount

- Instead of writing logs to a normal directory, we can place them on a volume.

- The volume can be accessed by other containers.

- We can run a program like `filebeat` in another container accessing the same volume.

  (`filebeat` reads local log files continuously, like `tail -f`, and sends them to a centralized system like ElasticSearch.)

- We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`.

- The container will write log files to a directory mapped to a host directory.

- The log files will appear on the host and be consumable directly from the host.
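For instance, a sidecar-style setup with a shared volume could look like this (a sketch; the `myapp` image and the paths are illustrative, and `filebeat` would still need a configuration file to ship the logs anywhere):

```bash
# The application writes its log files to a named volume...
docker run -d --name myapp -v logs:/var/log/myapp myapp
# ...and a log shipper container reads them from the same volume
docker run -d --name shipper -v logs:/var/log/myapp:ro \
    docker.elastic.co/beats/filebeat:7.17.0
```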
.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Using logging services

- We can use logging frameworks (like log4j or the Python `logging` package).

- These frameworks require some code and/or configuration in our application code.

- These mechanisms can be used identically inside or outside of containers.

- Sometimes, we can leverage containerized networking to simplify their setup.

- For instance, our code can send log messages to a server named `log`.

- The name `log` will resolve to different addresses in development, production, etc.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Using syslog

- What if our code (or the program we are running in containers) uses syslog?

- One possibility is to run a syslog daemon in the container.

- Then that daemon can be set up to write to local files or forward to the network.

- Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`.

- We can expose a syslog socket to the container (by using a volume or bind-mount).

- Then just create a symlink from `/dev/log` to the syslog socket.

- Voilà!

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Using logging drivers

- If we log to stdout and stderr, the container engine receives the log messages.

- The Docker Engine has a modular logging system with many plugins, including:

  - json-file (the default one)
  - syslog
  - journald
  - gelf
  - fluentd
  - splunk
  - etc.

- Each plugin can process and forward the logs to another process or system.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## A word of warning about `json-file`

- By default, log file size is unlimited.

- This means that a very verbose container *will* use up all your disk space.

  (Or a less verbose container, but running for a very long time.)

- Log rotation can be enabled by setting a `max-size` option.

- Older log files can be removed by setting a `max-file` option.

- Just like other logging options, these can be set per container, or globally.

Example:

```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Demo: sending logs to ELK

- We are going to deploy an ELK stack.

- It will accept logs over a GELF socket.

- We will run a few containers with the `gelf` logging driver.

- We will then see our logs in Kibana, the web interface provided by ELK.

*Important foreword: this is not an "official" or "recommended" setup; it is just an example. We used ELK in this demo because it's a popular setup and we keep being asked about it; but you will have equal success with Fluent or other logging stacks!*

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## What's in an ELK stack?
- ELK is three components:

  - ElasticSearch (to store and index log entries)

  - Logstash (to receive log entries from various sources, process them, and forward them to various destinations)

  - Kibana (to view/search log entries with a nice UI)

- The only component that we will configure is Logstash

- We will accept log entries using the GELF protocol

- Log entries will be stored in ElasticSearch, and displayed on Logstash's stdout for debugging

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Running ELK

- We are going to use a Compose file describing the ELK stack.

- The Compose file is in the container.training repository on GitHub.

```bash
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training
$ cd elk
$ docker-compose up
```

- Let's have a look at the Compose file while it's deploying.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Our basic ELK deployment

- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.

- We don't need to change the configuration of ElasticSearch.

- We need to tell Kibana the address of ElasticSearch:

  - it is set with the `ELASTICSEARCH_URL` environment variable,

  - by default it is `localhost:9200`, we change it to `elasticsearch:9200`.

- We need to configure Logstash:

  - we pass the entire configuration file through command-line arguments,

  - this is a hack so that we don't have to create an image just for the config.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Sending logs to ELK

- The ELK stack accepts log messages through a GELF socket.

- The GELF socket listens on UDP port 12201.

- To send a message, we need to change the logging driver used by Docker.

- This can be done globally (by reconfiguring the Engine) or on a per-container basis.

- Let's override the logging driver for a single container:

```bash
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
    alpine echo hello world
```

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Viewing the logs in ELK

- Connect to the Kibana interface.

- It is exposed on port 5601.

- Browse http://X.X.X.X:5601.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## "Configuring" Kibana

- Kibana should prompt you to "Configure an index pattern": in the "Time-field name" drop down, select "@timestamp", and hit the "Create" button.

- Then:

  - click "Discover" (in the top-left corner),
  - click "Last 15 minutes" (in the top-right corner),
  - click "Last 1 hour" (in the list in the middle),
  - click "Auto-refresh" (top-right corner),
  - click "5 seconds" (top-left of the list).

- You should see a series of green bars (with one new green bar every minute).

- Our 'hello world' message should be visible there.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Important afterword

**This is not a "production-grade" setup.**

It is just an educational example.

Since we have only one node, we set up a single ElasticSearch instance and a single Logstash instance.

In a production setup, you need an ElasticSearch cluster (both for capacity and availability reasons).

You also need multiple Logstash instances.
And if you want to withstand bursts of logs, you need some kind of message queue: Redis if you're cheap, Kafka if you want to make sure that you don't drop messages on the floor. Good luck.

If you want to learn more about the GELF driver, have a look at [this blog post](https://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

class: pic

.interstitial[]

---

name: toc-orchestration-an-overview
class: title

Orchestration, an overview

.nav[ [Previous part](#toc-logging) | [Back to table of contents](#toc-part-8) | [Next part](#toc-links-and-resources) ]

.debug[(automatically generated title slide)]

---

# Orchestration, an overview

In this chapter, we will:

* Explain what orchestration is and why we would need it.

* Present (from a high-level perspective) some orchestrators.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## What's orchestration?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## What's orchestration?

According to Wikipedia:

*Orchestration describes the __automated__ arrangement, coordination, and management of complex computer systems, middleware, and services.*

--

*[...] orchestration is often discussed in the context of __service-oriented architecture__, __virtualization__, provisioning, Converged Infrastructure and __dynamic datacenter__ topics.*

--

What does that really mean?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 1: dynamic cloud instances

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 1: dynamic cloud instances

- Every night, scale down (by shutting down extraneous replicated instances)

- Every morning, scale up (by deploying new copies)

- "Pay for what you use" (i.e. save big $$$ here)

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 1: dynamic cloud instances

How do we implement this?

- Crontab

- Autoscaling (save even bigger $$$)

That's *relatively* easy.

Now, how are things for our IAAS provider?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 2: dynamic datacenter

- Q: what's the #1 cost in a datacenter?

--

- A: electricity!

--

- Q: what uses electricity?

--

- A: servers, obviously

- A: ... and associated cooling

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 2: dynamic datacenter

- If only we could turn off unused servers during the night...

- Problem: we can only turn off a server if it's totally empty!

  (i.e. all VMs on it are stopped/moved)

- Solution: *migrate* VMs and shut down empty servers

  (e.g.
combine two hypervisors with 40% load into 80%+0%, and shut down the one at 0%) .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Example 2: dynamic datacenter How do we implement this? - Shut down empty hosts (but keep some spare capacity) - Start hosts again when capacity gets low - Ability to "live migrate" VMs (Xen already did this 10+ years ago) - Rebalance VMs on a regular basis - what if a VM is stopped while we move it? - should we allow provisioning on hosts involved in a migration? *Scheduling* becomes more complex. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## What is scheduling? According to Wikipedia (again): *In computing, scheduling is the method by which threads, processes or data flows are given access to system resources.* The scheduler is concerned mainly with: - throughput (total amount of work done per time unit); - turnaround time (between submission and completion); - response time (between submission and start); - waiting time (between job readiness and execution); - fairness (appropriate times according to priorities). In practice, these goals often conflict. **"Scheduling" = decide which resources to use.** .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Exercise 1 - You have: - 5 hypervisors (physical machines) - Each server has: - 16 GB RAM, 8 cores, 1 TB disk - Each week, your team requests: - one VM with X RAM, Y CPU, Z disk Scheduling = deciding which hypervisor to use for each VM. Difficulty: easy! .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM. Difficulty: ??? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Exercise 2 - You have: - 1000+ hypervisors (and counting!) - Each server has different resources: - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk - Multiple times a day, a different team asks for: - up to 50 VMs with different characteristics Scheduling = deciding which hypervisor to use for each VM.  .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Exercise 3 - You have machines (physical and/or virtual) - You have containers - You are trying to put the containers on the machines - Sounds familiar? .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[] ## We can't fit a job of size 6 :( .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- class: pic ## Scheduling with one resource .center[] ## ... Now we can! 
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Scheduling with two resources

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Scheduling with three resources

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## You need to be good at this

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## But also, you must be quick!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## And be web scale!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## And think outside (?) of the box!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Good luck!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## TL;DR

* Scheduling with multiple resources (dimensions) is hard.

* Don't expect to solve the problem with a Tiny Shell Script.

* There are literally tons of research papers written on this.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## But our orchestrator also needs to manage ...

* Network connectivity (or filtering) between containers.

* Load balancing (external and internal).

* Failure recovery (if a node or a whole datacenter fails).

* Rolling out new versions of our applications.

  (Canary deployments, blue/green deployments...)

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Some orchestrators

We are going to briefly present a few orchestrators.

There is no "absolute best" orchestrator.

It depends on:

- your applications,

- your requirements,

- your pre-existing skills...

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Nomad

- Open Source project by Hashicorp.

- Arbitrary scheduler (not just for containers).

- Great if you want to schedule mixed workloads.

  (VMs, containers, processes...)

- Less integration with the rest of the container ecosystem.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Mesos

- Open Source project in the Apache Foundation.

- Arbitrary scheduler (not just for containers).

- Two-level scheduler.

- Top-level scheduler acts as a resource broker.

- Second-level schedulers (aka "frameworks") obtain resources from top-level.

- Frameworks implement various strategies.

  (Marathon = long running processes; Chronos = run at intervals; ...)

- Commercial offering through DC/OS by Mesosphere.
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Rancher - Rancher 1 offered a simple interface for Docker hosts. - Rancher 2 is a complete management platform for Docker and Kubernetes. - Technically not an orchestrator, but it's a popular option. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Swarm - Tightly integrated with the Docker Engine. - Extremely simple to deploy and setup, even in multi-manager (HA) mode. - Secure by default. - Strongly opinionated: - smaller set of features, - easier to operate. .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- ## Kubernetes - Open Source project initiated by Google. - Contributions from many other actors. - *De facto* standard for container orchestration. - Many deployment options; some of them very complex. - Reputation: steep learning curve. - Reality: - true, if we try to understand *everything*; - false, if we focus on what matters. ??? :EN:- Orchestration overview :FR:- Survol de techniques d'orchestration .debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks! Questions?  .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)] --- class: pic .interstitial[] --- name: toc-links-and-resources class: title Links and resources .nav[ [Previous part](#toc-orchestration-an-overview) | [Back to table of contents](#toc-part-9) | [Next part](#toc-) ] .debug[(automatically generated title slide)] --- # Links and resources - [Docker Community Slack](https://community.docker.com/registrations/groups/4316) - [Docker Community Forums](https://forums.docker.com/) - [Docker Hub](https://hub.docker.com) - [Docker Blog](https://blog.docker.com/) - [Docker documentation](https://docs.docker.com/) - [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker) - [Docker on Twitter](https://twitter.com/docker) - [Play With Docker Hands-On Labs](https://training.play-with-docker.com/) .footnote[These slides (and future updates) are on → https://container.training/] .debug[[containers/links.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/links.md)]
---

## Overrides

In theory, development and production images should be the same.

In practice, we often need to enable specific behaviors in development (e.g. debug statements).

One way to reconcile both needs is to use Compose to enable these behaviors.

Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Production image

This Dockerfile builds an image leveraging gunicorn:

```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```

(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## Development Compose file

This Compose file uses the same image, but with a few overrides for development:

- the Flask development server is used (overriding `CMD`),

- the `DEBUG` environment variable is set,

- a volume is used to provide a faster local development workflow.

.small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
]

(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

## How to know which best practices are better?

- The main goal of containers is to make our lives easier.

- In this chapter, we showed many ways to write Dockerfiles.

- These Dockerfiles sometimes use diametrically opposed techniques.

- Yet, they were the "right" ones *for a specific situation.*

- It's OK (and even encouraged) to start simple and evolve as needed.

- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

???

:EN:Optimizing images
:EN:- Dockerfile tips, tricks, and best practices
:EN:- Reducing build time
:EN:- Reducing image size
:FR:Optimiser ses images
:FR:- Bonnes pratiques, trucs et astuces
:FR:- Réduire le temps de build
:FR:- Réduire la taille des images

.debug[[containers/Dockerfile_Tips.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Dockerfile_Tips.md)]

---

class: pic

.interstitial[]

---

name: toc-reducing-image-size
class: title

Reducing image size

.nav[ [Previous part](#toc-dockerfile-examples) | [Back to table of contents](#toc-part-3) | [Next part](#toc-multi-stage-builds) ]

.debug[(automatically generated title slide)]

---

# Reducing image size

* In the previous example, our final image contained:

  * our `hello` program

  * its source code

  * the compiler

* Only the first one is strictly necessary.

* We are going to see how to obtain an image without the superfluous components.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Can't we remove superfluous files with `RUN`?

What happens if we run one of the following commands?

- `RUN rm -rf ...`

- `RUN apt-get remove ...`

- `RUN make clean ...`

--

This adds a layer which removes a bunch of files.

But the previous layers (which added the files) still exist.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Removing files with an extra layer

When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get install somepackage` | Size of files added (e.g. a few MB) | Sum of this layer + all previous ones |
| `...` | ... | Sum of this layer + all previous ones |
| `RUN apt-get remove somepackage` | Almost zero (just metadata) | Same as previous one |

Therefore, `RUN rm` does not reduce the size of the image or free up disk space.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Removing unnecessary files

Various techniques are available to obtain smaller images:

- collapsing layers,

- adding binaries that are built outside of the Dockerfile,

- squashing the final image,

- multi-stage builds.

Let's review them quickly.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Collapsing layers

You will frequently see Dockerfiles like this:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```

Or the (more readable) variant:

```dockerfile
FROM ubuntu
RUN apt-get update \
    && apt-get install xxx \
    && ... \
    && apt-get remove xxx \
    && ...
```

This `RUN` command gives us a single layer.

The files that are added, then removed in the same layer, do not grow the layer size.

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Collapsing layers: pros and cons

Pros:

- works on all versions of Docker

- doesn't require extra tools

Cons:

- not very readable

- some unnecessary files might still remain if the cleanup is not thorough

- that layer is expensive (slow to build)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Building binaries outside of the Dockerfile

This results in a Dockerfile looking like this:

```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```

Of course, this implies that the file `xxx` exists in the build context.

That file has to exist before you can run `docker build`.

For instance, it can:

- exist in the code repository,

- be created by another tool (script, Makefile...),

- be created by another container image and extracted from the image.

See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Building binaries outside: pros and cons

Pros:

- final image can be very small

Cons:

- requires an extra build tool

- we're back in dependency hell and "works on my machine"

Cons, if binary is added to code repository:

- breaks portability across different platforms

- grows repository size a lot if the binary is updated frequently

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Squashing the final image

The idea is to transform the final image into a single-layer image.

This can be done in (at least) two ways.
- Activate experimental features and squash the final image: ```bash docker image build --squash ... ``` - Export/import the final image. ```bash docker build -t temp-image . docker run --entrypoint true --name temp-container temp-image docker export temp-container | docker import - final-image docker rm temp-container docker rmi temp-image ``` .debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)] --- ## Squashing the image: pros and cons Pros: - single-layer images are smaller and faster to download - removed files no longer take up storage and network resources Cons: - we still need to actively remove unnecessary files - squash operation can take a lot of time (on big images) - squash operation does not benefit from cache (even if we change just a tiny file, the whole image needs to be re-squashed) .debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)] --- ## Multi-stage builds Multi-stage builds allow us to have multiple *stages*. Each stage is a separate image, and can copy files from previous stages. We're going to see how they work in more detail. .debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)] --- class: pic .interstitial[] --- name: toc-multi-stage-builds class: title Multi-stage builds .nav[ [Previous part](#toc-reducing-image-size) | [Back to table of contents](#toc-part-3) | [Next part](#toc-publishing-images-to-the-docker-hub) ] .debug[(automatically generated title slide)] --- # Multi-stage builds * At any point in our `Dockerfile`, we can add a new `FROM` line. * This line starts a new stage of our build. * Each stage can access the files of the previous stages with `COPY --from=...`. * When a build is tagged (with `docker build -t ...`), the last stage is tagged. * Previous stages are not discarded: they will be used for caching, and can be referenced. .debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)] --- ## Multi-stage builds in practice * Each stage is numbered, starting at `0` * We can copy a file from a previous stage by indicating its number, e.g.: ```dockerfile COPY --from=0 /file/from/first/stage /location/in/current/stage ``` * We can also name stages, and reference these names: ```dockerfile FROM golang AS builder RUN ... FROM alpine COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/ ``` .debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)] --- ## Multi-stage builds for our C program We will change our Dockerfile to: * give a nickname to the first stage: `compiler` * add a second stage using the same `ubuntu` base image * add the `hello` binary to the second stage * make sure that `CMD` is in the second stage The resulting Dockerfile is on the next slide. 
.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Multi-stage build `Dockerfile`

Here is the final Dockerfile:

```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```

Let's build it, and check that it works correctly:

```bash
docker build -t hellomultistage .
docker run hellomultistage
```

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

## Comparing single/multi-stage build image sizes

List our images with `docker images`, and check the size of:

- the `ubuntu` base image,

- the single-stage `hello` image,

- the multi-stage `hellomultistage` image.

We can achieve even smaller images if we use smaller base images.

However, if we use common base images (e.g. if we standardize on `ubuntu`), these common images will be pulled only once per node, so they are virtually "free."

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: extra-details

## Build targets

* We can also tag an intermediary stage with the following command:

  ```bash
  docker build --target STAGE --tag NAME
  ```

* This will create an image (named `NAME`) corresponding to stage `STAGE`

* This can be used to easily access an intermediary stage for inspection

  (instead of parsing the output of `docker build` to find out the image ID)

* This can also be used to describe multiple images from a single Dockerfile

  (instead of using multiple Dockerfiles, which could go out of sync)

---

class: extra-details

## Dealing with download caches

* In some cases, our images contain temporary downloaded files or caches

  (examples: packages downloaded by `pip`, Maven, etc.)

* These can sometimes be disabled

  (e.g. `pip install --no-cache-dir ...`)

* The cache can also be cleaned immediately after installing

  (e.g. `pip install ... && rm -rf ~/.cache/pip`)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: extra-details

## Download caches and multi-stage builds

* Download+install packages in a build stage

* Copy the installed packages to a run stage

* Example: in the specific case of Python, use a virtual env

  (install in the virtual env; then copy the virtual env directory)

.debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)]

---

class: extra-details

## Download caches and BuildKit

* BuildKit has a caching feature for run stages

* It can address download caches elegantly

* Example:

  ```bash
  RUN --mount=type=cache,target=/pipcache pip install --cache-dir /pipcache ...
  ```

* The cache won't be in the final image, but it'll persist across builds

???
:EN:Optimizing our images and their build process :EN:- Leveraging multi-stage builds :FR:Optimiser les images et leur construction :FR:- Utilisation d'un *multi-stage build* .debug[[containers/Multi_Stage_Builds.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Multi_Stage_Builds.md)] --- class: pic .interstitial[] --- name: toc-publishing-images-to-the-docker-hub class: title Publishing images to the Docker Hub .nav[ [Previous part](#toc-multi-stage-builds) | [Back to table of contents](#toc-part-3) | [Next part](#toc-exercise--writing-better-dockerfiles) ] .debug[(automatically generated title slide)] --- # Publishing images to the Docker Hub We have built our first images. We can now publish them to the Docker Hub! *You don't have to do the exercises in this section, because they require an account on the Docker Hub, and we don't want to force anyone to create one.* *Note, however, that creating an account on the Docker Hub is free (and doesn't require a credit card), and hosting public images is free as well.* .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- ## Logging into our Docker Hub account * This can be done from the Docker CLI: ```bash docker login ``` .warning[When running Docker for Mac/Windows, or Docker on a Linux workstation, it can (and will when possible) integrate with your system's keyring to store your credentials securely. However, on most Linux servers, it will store your credentials in `~/.docker/config.json`.] .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- ## Image tags and registry addresses * Docker image tags are like Git tags and branches. * They are like *bookmarks* pointing at a specific image ID. * Tagging an image doesn't *rename* an image: it adds another tag. * When pushing an image to a registry, the registry address is in the tag. Example: `registry.example.net:5000/image` * What about Docker Hub images? -- * `jpetazzo/clock` is, in fact, `index.docker.io/jpetazzo/clock` * `ubuntu` is, in fact, `library/ubuntu`, i.e. `index.docker.io/library/ubuntu` .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- ## Tagging an image to push it to the Hub * Let's tag our `figlet` image (or any other image to our liking): ```bash docker tag figlet jpetazzo/figlet ``` * And push it to the Hub: ```bash docker push jpetazzo/figlet ``` * That's it! -- * Anybody can now `docker run jpetazzo/figlet` anywhere. .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- ## The goodness of automated builds * You can link a Docker Hub repository with a GitHub or BitBucket repository * Each push to GitHub or BitBucket will trigger a build on Docker Hub * If the build succeeds, the new image is available on Docker Hub * You can map tags and branches between source and container images * If you work with public repositories, this is free .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- class: extra-details ## Setting up an automated build * We need a Dockerized repository! * Let's go to https://github.com/jpetazzo/trainingwheels and fork it.
* Go to the Docker Hub (https://hub.docker.com/) and sign in. Select "Repositories" in the blue navigation menu. * Select "Create" in the top-right bar, and select "Create Repository+". * Connect your Docker Hub account to your GitHub account. * Click the "Create" button. * Then go to the "Builds" tab. * Click on the GitHub icon and select your user and the repository that we just forked. * In the "Build rules" section near the bottom of the page, put `/www` in the "Build Context" column (or whichever directory the Dockerfile is in). * Click "Save and Build" to build the repository immediately (without waiting for a git push). * Subsequent builds will happen automatically, thanks to GitHub hooks. .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- ## Building on the fly - Some services can build images on the fly from a repository - Example: [ctr.run](https://ctr.run/) .lab[ - Use ctr.run to automatically build a container image and run it: ```bash docker run ctr.run/github.com/undefinedlabs/hello-world ``` ] There might be a long pause before the first layer is pulled, because the API behind `docker pull` doesn't allow streaming build logs, and there is no feedback during the build. It is possible to view the build logs by setting up an account on [ctr.run](https://ctr.run/). ??? :EN:- Publishing images to the Docker Hub :FR:- Publier des images sur le Docker Hub .debug[[containers/Publishing_To_Docker_Hub.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Publishing_To_Docker_Hub.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-better-dockerfiles class: title Exercise — writing better Dockerfiles .nav[ [Previous part](#toc-publishing-images-to-the-docker-hub) | [Back to table of contents](#toc-part-3) | [Next part](#toc-naming-and-inspecting-containers) ] .debug[(automatically generated title slide)] --- # Exercise — writing better Dockerfiles Let's update our Dockerfiles to leverage multi-stage builds! The code is at: https://github.com/jpetazzo/wordsmith Use a different tag for these images, so that we can compare their sizes. What's the size difference between single-stage and multi-stage builds? .debug[[containers/Exercise_Dockerfile_Advanced.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Exercise_Dockerfile_Advanced.md)] --- class: pic .interstitial[] --- name: toc-naming-and-inspecting-containers class: title Naming and inspecting containers .nav[ [Previous part](#toc-exercise--writing-better-dockerfiles) | [Back to table of contents](#toc-part-4) | [Next part](#toc-labels) ] .debug[(automatically generated title slide)] --- class: title # Naming and inspecting containers  .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Objectives In this lesson, we will learn about an important Docker concept: container *naming*. Naming allows us to: * Easily reference a container. * Ensure the uniqueness of a specific container. We will also see the `inspect` command, which gives a lot of details about a container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Naming our containers So far, we have referenced containers with their ID. We have copy-pasted the ID, or used a shortened prefix. But each container can also be referenced by its name.
If a container is named `thumbnail-worker`, I can do: ```bash $ docker logs thumbnail-worker $ docker stop thumbnail-worker etc. ``` .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Default names When we create a container, if we don't give a specific name, Docker will pick one for us. It will be the concatenation of: * A mood (furious, goofy, suspicious, boring...) * The name of a famous inventor (tesla, darwin, wozniak...) Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ... .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Specifying a name You can set the name of the container when you create it. ```bash $ docker run --name ticktock jpetazzo/clock ``` If you specify a name that already exists, Docker will refuse to create the container. This lets us enforce the uniqueness of a given resource. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Renaming containers * You can rename containers with `docker rename`. * This allows you to "free up" a name without destroying the associated container. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Inspecting a container The `docker inspect` command will output a very detailed JSON map. ```bash $ docker inspect <containerID> [{ ... (many pages of JSON here) ... ``` There are multiple ways to consume that information. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Parsing JSON with the Shell * You *could* grep and cut or awk the output of `docker inspect`. * Please, don't. * It's painful. * If you really must parse JSON from the Shell, use JQ! (It's great.) ```bash $ docker inspect <containerID> | jq . ``` * We will see a better solution which doesn't require extra tools. .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- ## Using `--format` You can specify a format string, which will be parsed by Go's text/template package. ```bash $ docker inspect --format '{{ json .Created }}' <containerID> "2015-02-24T07:21:11.712240394Z" ``` * The generic syntax is to wrap the expression with double curly braces. * The expression starts with a dot representing the JSON object. * Then each field or member can be accessed in dotted notation syntax. * The optional `json` keyword asks for valid JSON output. (e.g. here it adds the surrounding double-quotes.) ??? :EN:Managing container lifecycle :EN:- Naming and inspecting containers :FR:Suivre ses conteneurs à la loupe :FR:- Obtenir des informations détaillées sur un conteneur :FR:- Associer un identifiant unique à un conteneur .debug[[containers/Naming_And_Inspecting.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Naming_And_Inspecting.md)] --- class: pic .interstitial[] --- name: toc-labels class: title Labels .nav[ [Previous part](#toc-naming-and-inspecting-containers) | [Back to table of contents](#toc-part-4) | [Next part](#toc-restarting-and-attaching-to-containers) ] .debug[(automatically generated title slide)] --- # Labels * Labels allow us to attach arbitrary metadata to containers.
* Labels are key/value pairs. * They are specified at container creation. * You can query them with `docker inspect`. * They can also be used as filters with some commands (e.g. `docker ps`). .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)] --- ## Using labels Let's create a few containers with a label `owner`. ```bash docker run -d -l owner=alice nginx docker run -d -l owner=bob nginx docker run -d -l owner nginx ``` We didn't specify a value for the `owner` label in the last example. This is equivalent to setting the value to an empty string. .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)] --- ## Querying labels We can view the labels with `docker inspect`. ```bash $ docker inspect $(docker ps -lq) | grep -A3 Labels "Labels": { "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>", "owner": "" }, ``` We can use the `--format` flag to list the value of a label. ```bash $ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}' ``` .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)] --- ## Using labels to select containers We can list containers having a specific label. ```bash $ docker ps --filter label=owner ``` Or we can list containers having a specific label with a specific value. ```bash $ docker ps --filter label=owner=alice ``` .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)] --- ## Use-cases for labels * HTTP vhost of a web app or web service. (The label is used to generate the configuration for NGINX, HAProxy, etc.) * Backup schedule for a stateful service. (The label is used by a cron job to determine if/when to back up container data.) * Service ownership. (To determine internal cross-billing, or who to page in case of outage.) * etc. ??? :EN:- Using labels to identify containers :FR:- Étiqueter ses conteneurs avec des méta-données .debug[[containers/Labels.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Labels.md)] --- class: pic .interstitial[] --- name: toc-restarting-and-attaching-to-containers class: title Restarting and attaching to containers .nav[ [Previous part](#toc-labels) | [Back to table of contents](#toc-part-4) | [Next part](#toc-getting-inside-a-container) ] .debug[(automatically generated title slide)] --- # Restarting and attaching to containers We have started containers in the foreground, and in the background. In this chapter, we will see how to: * Put a container in the background. * Attach to a background container to bring it to the foreground. * Restart a stopped container. .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Background and foreground The distinction between foreground and background containers is arbitrary. From Docker's point of view, all containers are the same. All containers run the same way, whether there is a client attached to them or not. It is always possible to detach from a container, and to reattach to a container. Analogy: attaching to a container is like plugging a keyboard and screen into a physical server.
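For instance, here is a quick round trip (a sketch; the container name `myclock` is arbitrary, and we reuse the `jpetazzo/clock` image seen earlier):

```bash
# Start an interactive container, but detached (in the background).
# The -ti flags make it possible to detach again with ^P^Q later.
docker run -d -ti --name myclock jpetazzo/clock

# Bring it to the foreground by attaching to it.
docker attach myclock

# After detaching (with ^P^Q), check that it is still running.
docker ps -l
```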
.debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Detaching from a container (Linux/macOS) * If you have started an *interactive* container (with option `-it`), you can detach from it. * The "detach" sequence is `^P^Q`. * Otherwise you can detach by killing the Docker client. (But not by hitting `^C`, as this would deliver `SIGINT` to the container.) What does `-it` stand for? * `-t` means "allocate a terminal." * `-i` means "connect stdin to the terminal." .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Detaching, continued (Windows PowerShell and cmd.exe) * Docker for Windows has a different detach experience due to shell features. * `^P^Q` does not work. * `^C` will detach, rather than stop the container. * Using Bash, the Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells. * Both PowerShell and Bash work well in Windows 10; just be aware of the differences. .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- class: extra-details ## Specifying a custom detach sequence * You don't like `^P^Q`? No problem! * You can change the sequence with `docker run --detach-keys`. * This can also be passed as a global option to the engine. Start a container with a custom detach sequence: ```bash $ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock ``` Detach by hitting `^X x`. (This is ctrl-x then x, not ctrl-x twice!) Check that our container is still running: ```bash $ docker ps -l ``` .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- class: extra-details ## Attaching to a container You can attach to a container: ```bash $ docker attach <containerID> ``` * The container must be running. * There *can* be multiple clients attached to the same container. * If you don't specify `--detach-keys` when attaching, it defaults back to `^P^Q`. Try it on our previous container: ```bash $ docker attach $(docker ps -lq) ``` Check that `^X x` doesn't work, but `^P^Q` does. .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Detaching from non-interactive containers * **Warning:** if the container was started without `-it`... * You won't be able to detach with `^P^Q`. * If you hit `^C`, the signal will be proxied to the container. * Remember: you can always detach by killing the Docker client. .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Checking container output * Use `docker attach` if you intend to send input to the container. * If you just want to see the output of a container, use `docker logs`. ```bash $ docker logs --tail 1 --follow <containerID> ``` .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Restarting a container When a container has exited, it is in the stopped state. It can then be restarted with the `start` command. ```bash $ docker start <containerID> ``` The container will be restarted using the same options you launched it with.
You can re-attach to it if you want to interact with it: ```bash $ docker attach <containerID> ``` Use `docker ps -a` to identify the container ID of a previous `jpetazzo/clock` container, and try those commands. .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- ## Attaching to a REPL * REPL = Read Eval Print Loop * Shells, interpreters, TUIs ... * Symptom: you `docker attach`, and see nothing * The REPL doesn't know that you just attached, and doesn't print anything * Try hitting `^L` or `Enter` .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- class: extra-details ## SIGWINCH * When you `docker attach`, the Docker Engine sends SIGWINCH signals to the container. * SIGWINCH = WINdow CHange; indicates a change in window size. * This will cause some CLI and TUI programs to redraw the screen. * But not all of them. ??? :EN:- Restarting old containers :EN:- Detaching and reattaching to container :FR:- Redémarrer des anciens conteneurs :FR:- Se détacher et rattacher à des conteneurs .debug[[containers/Start_And_Attach.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Start_And_Attach.md)] --- class: pic .interstitial[] --- name: toc-getting-inside-a-container class: title Getting inside a container .nav[ [Previous part](#toc-restarting-and-attaching-to-containers) | [Back to table of contents](#toc-part-4) | [Next part](#toc-limiting-resources) ] .debug[(automatically generated title slide)] --- class: title # Getting inside a container  .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Objectives On a traditional server or VM, we sometimes need to: * log into the machine (with SSH or on the console), * analyze the disks (by removing them or rebooting with a rescue system). In this chapter, we will see how to do that with containers. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Getting a shell Every once in a while, we want to log into a machine. In a perfect world, this shouldn't be necessary. * You need to install or update packages (and their configuration)? Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...) * You need to view logs and metrics? Collect and access them through a centralized platform. In the real world, though ... we often need shell access! .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Not getting a shell Even without a perfect deployment system, we can do many operations without getting a shell. * Installing packages can (and should) be done in the container image. * Configuration can be done at the image level, or when the container starts. * Dynamic configuration can be stored in a volume (shared with another container). * Logs written to stdout are automatically collected by the Docker Engine. * Other logs can be written to a shared volume. * Process information and metrics are visible from the host. _Let's save logging, volumes ...
for later, but let's have a look at process information!_ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Viewing container processes from the host If you run Docker on Linux, container processes are visible on the host. ```bash $ ps faux | less ``` * Scroll around the output of this command. * You should see the `jpetazzo/clock` container. * A containerized process is just like any other process on the host. * We can use tools like `lsof`, `strace`, `gdb` ... to analyze them. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- class: extra-details ## What's the difference between a container process and a host process? * Each process (containerized or not) belongs to *namespaces* and *cgroups*. * The namespaces and cgroups determine what a process can "see" and "do". * Analogy: each process (containerized or not) runs with a specific UID (user ID). * UID=0 is root, and has elevated privileges. Other UIDs are normal users. _We will give more details about namespaces and cgroups later._ .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a running container * Sometimes, we need to get a shell anyway. * We _could_ run some SSH server in the container ... * But it is easier to use `docker exec`. ```bash $ docker exec -ti ticktock sh ``` * This creates a new process (running `sh`) _inside_ the container. * This can also be done "manually" with the tool `nsenter`. .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Caveats * The tool that you want to run needs to exist in the container. * Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time. (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.) * Most importantly: the container needs to be running. * What if the container is stopped or crashed? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Getting a shell in a stopped container * A stopped container is only _storage_ (like a disk drive). * We cannot SSH into a disk drive or USB stick! * We need to connect the disk to a running machine. * How does that translate into the container world? .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Analyzing a stopped container As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`. ```bash docker run jpetazzo/crashtest ``` The container starts, but then stops immediately, without any output. What would MacGyver™ do? First, let's check the status of that container. ```bash docker ps -l ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Viewing filesystem changes * We can use `docker diff` to see files that were added / changed / removed. ```bash docker diff <containerID> ``` * The container ID was shown by `docker ps -l`. * We can also see it with `docker ps -lq`. * The output of `docker diff` shows some interesting log files!
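For instance, we can combine both commands (the exact file list depends on what the container wrote before crashing):

```bash
# Show the filesystem changes of the most recently created container
docker diff $(docker ps -lq)
```

In the output, each line is prefixed with `A` (added), `C` (changed), or `D` (deleted).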
.debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Accessing files * We can extract files with `docker cp`. ```bash docker cp <containerID>:/var/log/nginx/error.log . ``` * Then we can look at that log file. ```bash cat error.log ``` (The directory `/run/nginx` doesn't exist.) .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- ## Exploring a crashed container * We can restart a container with `docker start` ... * ... But it will probably crash again immediately! * We cannot specify a different program to run with `docker start` * But we can create a new image from the crashed container ```bash docker commit <containerID> debugimage ``` * Then we can run a new container from that image, with a custom entrypoint ```bash docker run -ti --entrypoint sh debugimage ``` .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- class: extra-details ## Obtaining a complete dump * We can also dump the entire filesystem of a container. * This is done with `docker export`. * It generates a tar archive. ```bash docker export <containerID> | tar tv ``` This will give a detailed listing of the content of the container. ??? :EN:- Troubleshooting and getting inside a container :FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem* .debug[[containers/Getting_Inside.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Getting_Inside.md)] --- class: pic .interstitial[] --- name: toc-limiting-resources class: title Limiting resources .nav[ [Previous part](#toc-getting-inside-a-container) | [Back to table of contents](#toc-part-4) | [Next part](#toc-container-networking-basics) ] .debug[(automatically generated title slide)] --- # Limiting resources - So far, we have used containers as convenient units of deployment. - What happens when a container tries to use more resources than available? (RAM, CPU, disk usage, disk and network I/O...) - What happens when multiple containers compete for the same resource? - Can we limit resources available to a container? (Spoiler alert: yes!) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Container processes are normal processes - Containers are closer to "fancy processes" than to "lightweight VMs". - A process running in a container is, in fact, a process running on the host. - Let's look at the output of `ps` on a container host running 3 containers: ``` 0 2662 0.2 0.3 /usr/bin/dockerd -H fd:// 0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe 0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir 0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off; 101 23543 0.0 0.0 | \_ `nginx`: worker process 0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir 102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2 0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir 0 23725 0.0 0.0 \_ `/bin/sh` ``` - The highlighted processes are containerized processes. (That host is running nginx, elasticsearch, and alpine.) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## By default: nothing changes - What happens when a process uses too much memory on a Linux system?
-- - Simplified answer: - swap is used (if available); - if there is not enough swap space, eventually, the out-of-memory killer is invoked; - the OOM killer uses heuristics to kill processes; - sometimes, it kills an unrelated process. -- - What happens when a container uses too much memory? - The same thing! (i.e., a process eventually gets killed, possibly in another container.) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Limiting container resources - The Linux kernel offers rich mechanisms to limit container resources. - For memory usage, the mechanism is part of the *cgroup* subsystem. - This subsystem allows limiting the memory for a process or a group of processes. - A container engine leverages these mechanisms to limit memory for a container. - The out-of-memory killer has a new behavior: - it runs when a container exceeds its allowed memory usage, - in that case, it only kills processes in that container. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Limiting memory in practice - The Docker Engine offers multiple flags to limit memory usage. - The two most useful ones are `--memory` and `--memory-swap`. - `--memory` limits the amount of physical RAM used by a container. - `--memory-swap` limits the total amount (RAM+swap) used by a container. - The memory limit can be expressed in bytes, or with a unit suffix. (e.g.: `--memory 100m` = 100 megabytes.) - We will see two strategies: limiting RAM usage, or limiting both. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Limiting RAM usage Example: ```bash docker run -ti --memory 100m python ``` If the container tries to use more than 100 MB of RAM, *and* swap is available: - the container will not be killed, - memory above 100 MB will be swapped out, - in most cases, the app in the container will be slowed down (a lot). If we run out of swap, the global OOM killer still intervenes. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Limiting both RAM and swap usage Example: ```bash docker run -ti --memory 100m --memory-swap 100m python ``` If the container tries to use more than 100 MB of memory, it is killed. On the other hand, the application will never be slowed down because of swap. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## When to pick which strategy? - Stateful services (like databases) will lose or corrupt data when killed - Allow them to use swap space, but monitor swap usage - Stateless services can usually be killed with little impact - Limit their mem+swap usage, but monitor if they get killed - Ultimately, this is no different from "do I want swap, and how much?" .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Limiting CPU usage - There are no fewer than 3 ways to limit CPU usage: - setting a relative priority with `--cpu-shares`, - setting a CPU% limit with `--cpus`, - pinning a container to specific CPUs with `--cpuset-cpus`. - They can be used separately or together, as shown in the sketch below.
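Here is what combining these flags could look like (a sketch; the values and the `python` image are arbitrary):

```bash
# At most half of one CPU, pinned to cores 0 and 1,
# with twice the default scheduling priority
docker run -ti --cpus 0.5 --cpuset-cpus 0,1 --cpu-shares 2048 python
```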
.debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Setting relative priority - Each container has a relative priority used by the Linux scheduler. - By default, this priority is 1024. - As long as CPU usage is not maxed out, this has no effect. - When CPU usage is maxed out, each container receives CPU cycles in proportion to its relative priority. - In other words: a container with `--cpu-shares 2048` will receive twice as many CPU cycles as a container with the default priority. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Setting a CPU% limit - This setting will make sure that a container doesn't use more than a given % of CPU. - The value is expressed in CPUs; therefore: `--cpus 0.1` means 10% of one CPU, `--cpus 1.0` means 100% of one whole CPU, `--cpus 10.0` means 10 entire CPUs. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Pinning containers to CPUs - On multi-core machines, it is possible to restrict the execution on a set of CPUs. - Examples: `--cpuset-cpus 0` forces the container to run on CPU 0; `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7; `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11. - This will not reserve the corresponding CPUs! (They might still be used by other containers, or uncontainerized processes.) .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- ## Limiting disk usage - Most storage drivers do not support limiting the disk usage of containers. (The exception is devicemapper, but the limit cannot be set easily.) - This means that a single container could exhaust disk space for everyone. - In practice, however, this is not a concern, because: - data files (for stateful services) should reside on volumes, - assets (e.g. images, user-generated content...) should reside on object stores or on volumes, - logs are written to standard output and gathered by the container engine. - Container disk usage can be audited with `docker ps -s` and `docker diff`. .debug[[containers/Resource_Limits.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Resource_Limits.md)] --- class: pic .interstitial[] --- name: toc-container-networking-basics class: title Container networking basics .nav[ [Previous part](#toc-limiting-resources) | [Back to table of contents](#toc-part-5) | [Next part](#toc-container-network-drivers) ] .debug[(automatically generated title slide)] --- class: title # Container networking basics  .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Objectives We will now run network services (accepting requests) in containers. At the end of this section, you will be able to: * Run a network service in a container. * Connect to that network service. * Find a container's IP address.
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Running a very simple service - We need something small, simple, easy to configure (or, even better, that doesn't require any configuration at all) - Let's use the official NGINX image (named `nginx`) - It runs a static web server listening on port 80 - It serves a default "Welcome to nginx!" page .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Running an NGINX server ```bash $ docker run -d -P nginx 66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e ``` - Docker will automatically pull the `nginx` image from the Docker Hub - `-d` / `--detach` tells Docker to run it in the background - `-P` / `--publish-all` tells Docker to publish all ports (publish = make them reachable from other computers) - ...OK, how do we connect to our web server now? .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Finding our web server port - First, we need to find the *port number* used by Docker (the NGINX container listens on port 80, but this port will be *mapped*) - We can use `docker ps`: ```bash $ docker ps CONTAINER ID IMAGE ... PORTS ... e40ffb406c9e nginx ... 0.0.0.0:`12345`->80/tcp ... ``` - This means: *port 12345 on the Docker host is mapped to port 80 in the container* - Now we need to connect to the Docker host! .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Finding the address of the Docker host - When running Docker on your Linux workstation: *use `localhost`, or any IP address of your machine* - When running Docker on a remote Linux server: *use any IP address of the remote machine* - When running Docker Desktop on Mac or Windows: *use `localhost`* - In other scenarios (`docker-machine`, local VM...): *use the IP address of the Docker VM* .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (GUI) Point your browser to the IP address of your Docker host, on the port shown by `docker ps` for container port 80.  .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Connecting to our web server (CLI) You can also use `curl` directly from the Docker host. Make sure to use the right port number if it is different from the example below: ```bash $ curl localhost:12345 Welcome to nginx! ... ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## How does Docker know which port to map? * There is metadata in the image telling Docker "this image has something listening on port 80". * We can see that metadata with `docker inspect`: ```bash $ docker inspect --format '{{.Config.ExposedPorts}}' nginx map[80/tcp:{}] ``` * This metadata was set in the Dockerfile, with the `EXPOSE` keyword.
* We can see that with `docker history`: ```bash $ docker history nginx IMAGE CREATED CREATED BY 7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "… 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM] 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Why can't we just connect to port 80? - Our Docker host has only one port 80 - Therefore, we can only have one container at a time on port 80 - Therefore, if multiple containers want port 80, only one can get it - By default, containers *do not* get "their" port number, but a random one (not "random" as in "cryptographically random", but as in "it depends on various factors") - We'll see later how to force a port number (including port 80!) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- class: extra-details ## Using multiple IP addresses *Hey, my network-fu is strong, and I have questions...* - Can I publish one container on 127.0.0.2:80, and another on 127.0.0.3:80? - My machine has multiple (public) IP addresses, let's say A.A.A.A and B.B.B.B. Can I have one container on A.A.A.A:80 and another on B.B.B.B:80? - I have a whole IPv4 subnet, can I allocate it to my containers? - What about IPv6? You can do all these things when running Docker directly on Linux. (On other platforms, *generally not*, but there are some exceptions.) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Finding the web server port in a script Parsing the output of `docker ps` would be painful. There is a command to help us: ```bash $ docker port <containerID> 80 0.0.0.0:12345 ``` .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Manual allocation of port numbers If you want to set port numbers yourself, no problem: ```bash $ docker run -d -p 80:80 nginx $ docker run -d -p 8000:80 nginx $ docker run -d -p 8080:80 -p 8888:80 nginx ``` * We are running three NGINX web servers. * The first one is exposed on port 80. * The second one is exposed on port 8000. * The third one is exposed on ports 8080 and 8888. Note: the convention is `port-on-host:port-on-container`. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Plumbing containers into your infrastructure There are many ways to integrate containers in your network. * Start the container, letting Docker allocate a public port for it. Then retrieve that port number and feed it to your configuration (see the sketch after this list). * Pick a fixed port number in advance, when you generate your configuration. Then start your container by setting the port numbers manually. * Use an orchestrator like Kubernetes or Swarm. The orchestrator will provide its own networking facilities. Orchestrators typically provide mechanisms to enable direct container-to-container communication across hosts, as well as publishing and load balancing for inbound traffic.
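The first approach could look like this (a sketch; `web` is an arbitrary container name, and the final `echo` stands in for a real configuration step):

```bash
# Start the container; Docker allocates a host port for container port 80
docker run -d --name web -P nginx

# Retrieve the allocated address and port
HOSTPORT=$(docker port web 80)

# Feed it to our configuration (placeholder step)
echo "upstream is at $HOSTPORT"
```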
.debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Finding the container's IP address We can use the `docker inspect` command to find the IP address of the container. ```bash $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID> 172.17.0.3 ``` * `docker inspect` is an advanced command that can retrieve a ton of information about our containers. * Here, we provide it with a format string to extract exactly the private IP address of the container. .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Pinging our container Let's try to ping our container *from another container.* ```bash docker run alpine ping `<ipAddress>` PING 172.17.0.X (172.17.0.X): 56 data bytes 64 bytes from 172.17.0.X: seq=0 ttl=64 time=0.106 ms 64 bytes from 172.17.0.X: seq=1 ttl=64 time=0.250 ms 64 bytes from 172.17.0.X: seq=2 ttl=64 time=0.188 ms ``` When running on Linux, we can even ping that IP address directly! (And connect to a container's ports even if they aren't published.) .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## How often do we use `-p` and `-P`? - When running a stack of containers, we will often use Compose - Compose will take care of exposing containers (through a `ports:` section in the `docker-compose.yml` file) - It is, however, fairly common to use `docker run -P` for a quick test - Or `docker run -p ...` when an image doesn't `EXPOSE` a port correctly .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- ## Section summary We've learned how to: * Expose a network port. * Connect to an application running in a container. * Find a container's IP address. ??? :EN:- Exposing single containers :FR:- Exposer un conteneur isolé .debug[[containers/Container_Networking_Basics.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Networking_Basics.md)] --- class: pic .interstitial[] --- name: toc-container-network-drivers class: title Container network drivers .nav[ [Previous part](#toc-container-networking-basics) | [Back to table of contents](#toc-part-5) | [Next part](#toc-the-container-network-model) ] .debug[(automatically generated title slide)] --- # Container network drivers The Docker Engine supports different network drivers. The built-in drivers include: * `bridge` (default) * `null` (for the special network called `none`) * `host` (for the special network called `host`) * `container` (that one is a bit magic!) The network is selected with `docker run --net ...`. Each network is managed by a driver. The different drivers are explained in more detail on the following slides. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The default bridge * By default, the container gets a virtual `eth0` interface. (In addition to its own private `lo` loopback interface.) * That interface is provided by a `veth` pair. * It is connected to the Docker bridge. (Named `docker0` by default; configurable with `--bridge`.) * Addresses are allocated on a private, internal subnet.
(Docker uses 172.17.0.0/16 by default; configurable with `--bip`.) * Outbound traffic goes through an iptables MASQUERADE rule. * Inbound traffic goes through an iptables DNAT rule. * The container can have its own routes, iptables rules, etc. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The null driver * Container is started with `docker run --net none ...` * It only gets the `lo` loopback interface. No `eth0`. * It can't send or receive network traffic. * Useful for isolated/untrusted workloads. .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The host driver * Container is started with `docker run --net host ...` * It sees (and can access) the network interfaces of the host. * It can bind any address, any port (for ill and for good). * Network traffic doesn't have to go through NAT, bridge, or veth. * Performance = native! Use cases: * Performance-sensitive applications (VOIP, gaming, streaming...) * Peer discovery (e.g. Erlang port mapper, Raft, Serf...) .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- ## The container driver * Container is started with `docker run --net container:id ...` * It re-uses the network stack of another container. * It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc. * Those containers can communicate over their `lo` interface. (i.e. one can bind to 127.0.0.1 and the others can connect to it.) ??? :EN:Advanced container networking :EN:- Transparent network access with the "host" driver :EN:- Sharing is caring with the "container" driver :FR:Paramétrage réseau avancé :FR:- Accès transparent au réseau avec le mode "host" :FR:- Partage de la pile réseau avec le mode "container" .debug[[containers/Network_Drivers.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Network_Drivers.md)] --- class: pic .interstitial[] --- name: toc-the-container-network-model class: title The Container Network Model .nav[ [Previous part](#toc-container-network-drivers) | [Back to table of contents](#toc-part-5) | [Next part](#toc-service-discovery-with-containers) ] .debug[(automatically generated title slide)] --- class: title # The Container Network Model  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Objectives We will learn about the CNM (Container Network Model). At the end of this lesson, you will be able to: * Create a private network for a group of containers. * Use container naming to connect services together. * Dynamically connect containers to networks, and disconnect them. * Set the IP address of a container. We will also explain the principle of overlay networks and network plugins. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## The Container Network Model Docker has "networks". We can manage them with the `docker network` commands; for instance: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f blog-dev overlay 228a4355d548 blog-prod overlay ``` New networks can be created (with `docker network create`).
(Note: networks `none` and `host` are special; let's set them aside for now.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## What's a network? - Conceptually, a Docker "network" is a virtual switch (we can also think of it as a VLAN, or a WiFi SSID, for instance) - By default, containers are connected to a single network (but they can be connected to zero, or many networks, even dynamically) - Each network has its own subnet (IP address range) - A network can be local (to a single Docker Engine) or global (spanning multiple hosts) - Containers can have *network aliases* providing DNS-based service discovery (and each network has its own "domain", "zone", or "scope") .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Service discovery - A container can be given a network alias (e.g. with `docker run --net some-network --net-alias db ...`) - The containers running in the same network can resolve that network alias (i.e. if they do a DNS lookup on `db`, it will give the container's address) - We can have a different `db` container in each network (this avoids naming conflicts between different stacks) - When we name a container, Docker automatically adds that name as a network alias (i.e. `docker run --name xyz ...` is like `docker run --net-alias xyz ...`) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Network isolation - Networks are isolated - By default, containers in network A cannot reach those in network B - A container connected to both networks A and B can act as a router or proxy - Published ports are always reachable through the Docker host address (`docker run -P ...` makes a container port available to everyone) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## How to use networks - We typically create one network per "stack" or app that we deploy - More complex apps or stacks might require multiple networks (e.g. `frontend`, `backend`, ...) - Networks allow us to deploy multiple copies of the same stack (e.g. `prod`, `dev`, `pr-442`, ...)
- If we use Docker Compose, this is managed automatically for us .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic  .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## CNM vs CNI - CNM is the model used by Docker - Kubernetes uses a different model, architected around CNI (CNI is a kind of API between a container engine and *CNI plugins*) - Docker model: - multiple isolated networks - per-network service discovery - network interconnection requires extra steps - Kubernetes model: - single flat network - per-namespace service discovery - network isolation requires extra steps (Network Policies) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Creating a network Let's create a network called `dev`. ```bash $ docker network create dev 4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba ``` The network is now visible with the `network ls` command: ```bash $ docker network ls NETWORK ID NAME DRIVER 6bde79dfcf70 bridge bridge 8d9c78725538 none null eb0eeab782f4 host host 4c1ff84d6d3f dev bridge ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Placing containers on a network We will create a *named* container on this network. It will be reachable with its name, `es`. ```bash $ docker run -d --name es --net dev elasticsearch:2 8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863 ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Communication between containers Now, create another container on this network. .small[ ```bash $ docker run -ti --net dev alpine sh root@0ecccdfa45ef:/# ``` ] From this new container, we can resolve and ping the other one, using its assigned name: .small[ ```bash / # ping es PING es (172.18.0.2) 56(84) bytes of data. 64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms 64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms ^C --- es ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms root@0ecccdfa45ef:/# ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving container addresses Since Docker Engine 1.10, name resolution is implemented by a dynamic resolver. Archeological note: when CNM was introduced (in Docker Engine 1.9, November 2015) name resolution was implemented with `/etc/hosts`, and it was updated each time containers were added/removed.
This could cause interesting race conditions, since `/etc/hosts` was a bind-mount (and couldn't be updated atomically). .small[ ```bash [root@0ecccdfa45ef /]# cat /etc/hosts 172.18.0.3 0ecccdfa45ef 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters 172.18.0.2 es 172.18.0.2 es.dev ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-service-discovery-with-containers class: title Service discovery with containers .nav[ [Previous part](#toc-the-container-network-model) | [Back to table of contents](#toc-part-5) | [Next part](#toc-local-development-workflow-with-docker) ] .debug[(automatically generated title slide)] --- # Service discovery with containers * Let's try to run an application that requires two containers. * The first container is a web server. * The other one is a Redis data store. * We will place them both on the `dev` network created earlier. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Running the web server * The application is provided by the container image `jpetazzo/trainingwheels`. * We don't know much about it, so we will try to run it and see what happens! Start the container, exposing all its ports: ```bash $ docker run --net dev -d -P jpetazzo/trainingwheels ``` Check the port that has been allocated to it: ```bash $ docker ps -l ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Test the web server * If we connect to the application now, we will see an error page:  * This is because the Redis service is not running. * This container tries to resolve the name `redis`. Note: we're not using a FQDN or an IP address here; just `redis`. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Start the data store * We need to start a Redis container. * That container must be on the same network as the web server. * It must have the right network alias (`redis`) so the application can find it. Start the container: ```bash $ docker run --net dev --net-alias redis -d redis ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Test the web server again * If we connect to the application now, we should see that the app is working correctly:  * When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## A few words on *scope* - Container names are unique (there can be only one `--name redis`) - Network aliases are not unique - We can have the same network alias in different networks: ```bash docker run --net dev --net-alias redis ... docker run --net prod --net-alias redis ...
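# Same alias "redis" in two different networks: no conflict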
``` - We can even have multiple containers with the same alias in the same network (in that case, we get multiple DNS entries, aka "DNS round robin") .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Names are *local* to each network Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network. ```bash $ docker run --rm alpine ping es ping: bad address 'es' ``` Names can be resolved only when containers are on the same network. Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`! We will use *network aliases*. A container can have multiple network aliases. Network aliases are *local* to a given network (only exist in this network). Multiple containers can have the same network alias (even on the same network). Since Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Creating containers on another network Create the `prod` network. ```bash $ docker network create prod 5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c ``` We can now create multiple containers with the `es` alias on the new `prod` network. ```bash $ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2 38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771 $ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2 1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Resolving network aliases Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image. ```bash $ docker run --net prod --rm alpine nslookup es Name: es Address 1: 172.23.0.3 prod-es-2.prod Address 2: 172.23.0.2 prod-es-1.prod ``` (You can ignore the `can't resolve '(null)'` errors.) .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Connecting to aliased containers Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint. Try the following command a few times: .small[ ```bash $ docker run --rm --net dev centos curl -s es:9200 { "name" : "Tarot", ... } ``` ] Then try it a few times by replacing `--net dev` with `--net prod`: .small[ ```bash $ docker run --rm --net prod centos curl -s es:9200 { "name" : "The Symbiote", ... } ``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Good to know ... 
* Docker will not create network names and aliases on the default `bridge` network. * Therefore, if you want to use those features, you have to create a custom network first. * Network aliases are *not* unique on a given network. * i.e., multiple containers can have the same alias on the same network. * In that scenario, the Docker DNS server will return multiple records. (i.e. you will get DNS round robin out of the box.) * Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS. * Creation of networks and network aliases is generally automated with tools like Compose. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## A few words about round robin DNS Don't rely exclusively on round robin DNS to achieve load balancing. Many factors can affect DNS resolution, and you might see: - all traffic going to a single instance; - traffic being split (unevenly) between some instances; - different behavior depending on your application language; - different behavior depending on your base distro; - different behavior depending on other factors (sic). It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Custom networks When creating a network, extra options can be provided. * `--internal` disables outbound traffic (the network won't have a default gateway). * `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed). * `--subnet` (in CIDR notation) indicates the subnet to use. * `--ip-range` (in CIDR notation) indicates the subnet to allocate from. * `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Setting containers' IP address * It is possible to set a container's address with `--ip`. * The IP address has to be within the subnet used for the network. A full example would look like this. ```bash $ docker network create --subnet 10.66.0.0/16 pubnet 42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135 $ docker run --net pubnet --ip 10.66.66.66 -d nginx b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09 ``` *Note: don't hard code container IP addresses in your code!* *I repeat: don't hard code container IP addresses in your code!* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Network drivers * A network is managed by a *driver*. * The built-in drivers include: * `bridge` (default) * `none` * `host` * `macvlan` * `overlay` (for Swarm clusters) * More drivers can be provided by plugins (OVS, VLAN...) * A network can have a custom IPAM (IP allocator). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Overlay networks * The features we've seen so far only work when all containers are on a single host.
* If containers span multiple hosts, we need an *overlay* network to connect them together. * Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*. * Other plugins (Weave, Calico...) can provide overlay networks as well. * Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (overlay) Out of scope for this intro-level workshop! Very short instructions: - enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes) - `docker network create mynet --driver overlay` - `docker service create --network mynet myimage` If you want to learn more about Swarm mode, you can check [this video](https://www.youtube.com/watch?v=EuzoEaE6Cqs) or [these slides](https://container.training/swarm-selfpaced.yml.html). .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Multi-host networking (plugins) Out of scope for this intro-level workshop! General idea: - install the plugin (they often ship within containers) - run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!) - some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location") - you can then `docker network create --driver pluginname` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Connecting and disconnecting dynamically * So far, we have specified which network to use when starting the container. * The Docker Engine also allows connecting and disconnecting while the container is running. * This feature is exposed through the Docker API, and through two Docker CLI commands: * `docker network connect <network> <container>` * `docker network disconnect <network> <container>` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Dynamically connecting to a network * We have a container named `es` connected to a network named `dev`. * Let's start a simple alpine container on the default network: ```bash $ docker run -ti alpine sh / # ``` * In this container, try to ping the `es` container: ```bash / # ping es ping: bad address 'es' ``` This doesn't work, but we will change that by connecting the container. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Finding the container ID and connecting it * Figure out the ID of our alpine container; here are two methods: * looking at `/etc/hostname` in the container, * running `docker ps -lq` on the host. * Run the following command on the host: ```bash $ docker network connect dev `<container_id>` ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Checking what we did * Try again to `ping es` from the container.
* It should now work correctly: ```bash / # ping es PING es (172.20.0.3): 56 data bytes 64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms 64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms ^C ``` * Interrupt it with Ctrl-C. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Looking at the network setup in the container We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`: .small[ ```bash / # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever 20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1 valid_lft forever preferred_lft forever / # ``` ] Each network connection is materialized with a virtual network interface. As we can see, we can be connected to multiple networks at the same time. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- ## Disconnecting from a network * Let's try the symmetrical command to disconnect the container: ```bash $ docker network disconnect dev `<container_id>` ``` * From now on, if we try to ping `es`, it will not resolve: ```bash / # ping es ping: bad address 'es' ``` * Trying to ping the IP address directly won't work either: ```bash / # ping 172.20.0.3 ... (nothing happens until we interrupt it with Ctrl-C) ``` .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Network aliases are scoped per network * Each network has its own set of network aliases. * We saw this earlier: `es` resolves to different addresses in `dev` and `prod`. * If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, it is the connection order) and stops as soon as the name is found. * Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services, but only the ones in `dev` or `prod`. * However, we can look up `es.dev` or `es.prod` if we need to. .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Finding out about our networks and names * We can do reverse DNS lookups on containers' IP addresses. * If the IP address belongs to a network (other than the default bridge), the result will be: ``` name-or-first-alias-or-container-id.network-name ``` * Example: .small[ ```bash $ docker run -ti --net prod --net-alias hello alpine / # apk add --no-cache drill ... OK: 5 MiB in 13 packages / # ifconfig eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03 inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ... / # drill -t ptr `3.0.21.172`.in-addr.arpa ... ;; ANSWER SECTION: 3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`. ...
``` ] .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: extra-details ## Building with a custom network * We can build an image on a custom network with `docker build --network NAME`. * This can be used to check that a build doesn't access the network. (But keep in mind that most Dockerfiles will fail, because they need to install remote packages and dependencies!) * This may be used to access an internal package repository. (But try to use a multi-stage build instead, if possible!) ??? :EN:Container networking essentials :EN:- The Container Network Model :EN:- Container isolation :EN:- Service discovery :FR:Mettre ses conteneurs en réseau :FR:- Le "Container Network Model" :FR:- Isolation des conteneurs :FR:- *Service discovery* .debug[[containers/Container_Network_Model.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Network_Model.md)] --- class: pic .interstitial[] --- name: toc-local-development-workflow-with-docker class: title Local development workflow with Docker .nav[ [Previous part](#toc-service-discovery-with-containers) | [Back to table of contents](#toc-part-6) | [Next part](#toc-working-with-volumes) ] .debug[(automatically generated title slide)] --- class: title # Local development workflow with Docker  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Objectives At the end of this section, you will be able to: * Share code between container and host. * Use a simple local development workflow. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Local development in a container We want to solve the following issues: - "Works on my machine" - "Not the same version" - "Missing dependency" By using Docker containers, we will get a consistent development environment. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Working on the "namer" application * We have to work on some application whose code is at: https://github.com/jpetazzo/namer. * What is it? We don't know yet! * Let's download the code. ```bash $ git clone https://github.com/jpetazzo/namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the code ```bash $ cd namer $ ls -1 company_name_generator.rb config.ru docker-compose.yml Dockerfile Gemfile ``` -- Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe? .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Looking at the `Dockerfile` ```dockerfile FROM ruby COPY . /src WORKDIR /src RUN bundler install CMD ["rackup", "--host", "0.0.0.0"] EXPOSE 9292 ``` * This application is using a base `ruby` image. * The code is copied to `/src`. * Dependencies are installed with `bundler`. * The application is started with `rackup`. * It is listening on port 9292.
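Once the image is built (see the next slide), we can sanity-check those last two points directly from the image metadata; a quick sketch, assuming the image is tagged `namer`:

```bash
# Show the command and exposed ports baked into the image.
docker image inspect namer \
  --format '{{.Config.Cmd}} {{.Config.ExposedPorts}}'
```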
.debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Building and running the "namer" application * Let's build the application with the `Dockerfile`! -- ```bash $ docker build -t namer . ``` -- * Then run it. *We need to expose its ports.* -- ```bash $ docker run -dP namer ``` -- * Check on which port the container is listening. -- ```bash $ docker ps -l ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Connecting to our application * Point our browser to our Docker node, on the port allocated to the container. -- * Hit "reload" a few times. -- * This is an enterprise-class, carrier-grade, ISO-compliant company name generator! (With 50% more bullshit than the average competition!) (Wait, was that 50% more, or 50% less? *Anyway!*)  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Making changes to the code Option 1: * Edit the code locally * Rebuild the image * Re-run the container Option 2: * Enter the container (with `docker exec`) * Install an editor * Make changes from within the container Option 3: * Use a *bind mount* to share local files with the container * Make changes locally * Changes are reflected in the container .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Our first volume We will tell Docker to map the current directory to `/src` in the container. ```bash $ docker run -d -v $(pwd):/src -P namer ``` * `-d`: the container should run in detached mode (in the background). * `-v`: the following host directory should be mounted inside the container. * `-P`: publish all the ports exposed by this image. * `namer` is the name of the image we will run. * We don't specify a command to run because it is already set in the Dockerfile via `CMD`. Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell). .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Mounting volumes inside containers The `-v` flag mounts a directory from your host into your Docker container. The flag structure is: ```bash [host-path]:[container-path]:[rw|ro] ``` * `[host-path]` and `[container-path]` are created if they don't exist. * You can control the write status of the volume with the `ro` and `rw` options. * If you don't specify `rw` or `ro`, it will be `rw` by default. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Hold your horses... 
and your mounts - The `-v /path/on/host:/path/in/container` syntax is the "old" syntax - The modern syntax looks like this: `--mount type=bind,source=/path/on/host,target=/path/in/container` - `--mount` is more explicit, but `-v` is quicker to type - `--mount` supports all mount types; `-v` doesn't support `tmpfs` mounts - `--mount` fails if the path on the host doesn't exist; `-v` creates it With the new syntax, our command becomes: ```bash docker run --mount=type=bind,source=$(pwd),target=/src -dP namer ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Testing the development container * Check the port used by our new container. ```bash $ docker ps -l CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ... ``` * Open the application in your web browser. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Making a change to our application Our customer really doesn't like the color of our text. Let's change it. ```bash $ vi company_name_generator.rb ``` And change ```css color: royalblue; ``` To: ```css color: red; ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Viewing our changes * Reload the application in our browser. -- * The color should have changed.  .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Understanding volumes - Volumes are *not* copying or synchronizing files between the host and the container - Changes made in the host are immediately visible in the container (and vice versa) - When running on Linux: - volumes and bind mounts correspond to directories on the host - if Docker runs in a Linux VM, these directories are in the Linux VM - When running on Docker Desktop: - volumes correspond to directories in a small Linux VM running Docker - access to bind mounts is translated to host filesystem access (a bit like a network filesystem) .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Docker Desktop caveats - When running Docker natively on Linux, accessing a mount = native I/O - When running Docker Desktop, accessing a bind mount = file access translation - That file access translation has relatively good performance *in general* (watch out, however, for that big `npm install` working on a bind mount!) - There are some corner cases when watching files (with mechanisms like inotify) - Features like "live reload" or programs like `entr` don't always behave properly (due to e.g. file attribute caching, and other interesting details!) 
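A common mitigation for the `npm install` case (a sketch, not part of the namer app; the image and commands are hypothetical) is to bind-mount the source code but keep heavy directories like `node_modules` in a named volume, so package installs do native I/O:

```bash
# Source code comes from a bind mount; node_modules lives in a
# named volume (native I/O inside the Docker Desktop VM).
docker run -d -w /src \
  -v "$(pwd)":/src \
  -v node_modules:/src/node_modules \
  -P node:lts sh -c "npm install && npm start"
```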
.debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Trash your servers and burn your code *(This is the title of a [2013 blog post][immutable-deployments] by Chad Fowler, where he explains the concept of immutable infrastructure.)* [immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/ -- * Let's majorly mess up our container. (Remove files or whatever.) * Now, how can we fix this? -- * Our old container (with the blue version of the code) is still running. * See on which port it is exposed: ```bash docker ps ``` * Point our browser to it to confirm that it still works fine. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Immutable infrastructure in a nutshell * Instead of *updating* a server, we deploy a new one. * This might be challenging with classical servers, but it's trivial with containers. * In fact, with Docker, the most logical workflow is to build a new image and run it. * If something goes wrong with the new image, we can always restart the old one. * We can even keep both versions running side by side. If this pattern sounds interesting, you might want to read about *blue/green deployment* and *canary deployments*. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Recap of the development workflow 1. Write a Dockerfile to build an image containing our development environment. (Rails, Django, ... and all the dependencies for our app) 2. Start a container from that image. Use the `-v` flag to mount our source code inside the container. 3. Edit the source code outside the container, using familiar tools. (vim, emacs, textmate...) 4. Test the application. (Some frameworks pick up changes automatically. Others require you to Ctrl-C + restart after each modification.) 5. Iterate and repeat steps 3 and 4 until satisfied. 6. When done, commit+push source code changes. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Debugging inside the container Docker has a command called `docker exec`. It allows users to run a new process in a container which is already running. If you sometimes find yourself wishing you could SSH into a container, you can use `docker exec` instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation. .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## `docker exec` example ```bash $ # You can run ruby commands in the container where the app is running, and more! $ docker exec -it <yourContainerID> bash root@5ca27cf74c2e:/opt/namer# irb irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact => [0, 1, 4, 9, 16] irb(main):002:0> exit ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: extra-details ## Stopping the container Now that we're done, let's stop our container. ```bash $ docker stop <yourContainerID> ``` And remove it.
```bash $ docker rm <yourContainerID> ``` .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- ## Section summary We've learned how to: * Share code between container and host. * Set our working directory. * Use a simple local development workflow. ??? :EN:Developing with containers :EN:- “Containerize” a development environment :FR:Développer au jour le jour :FR:- « Containeriser » son environnement de développement .debug[[containers/Local_Development_Workflow.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Local_Development_Workflow.md)] --- class: pic .interstitial[] --- name: toc-working-with-volumes class: title Working with volumes .nav[ [Previous part](#toc-local-development-workflow-with-docker) | [Back to table of contents](#toc-part-6) | [Next part](#toc-gentle-introduction-to-yaml) ] .debug[(automatically generated title slide)] --- class: title # Working with volumes  .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Objectives At the end of this section, you will be able to: * Create containers holding volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Working with volumes Docker volumes can be used to achieve many things, including: * Bypassing the copy-on-write system to obtain native disk I/O performance. * Bypassing copy-on-write to leave some files out of `docker commit`. * Sharing a directory between multiple containers. * Sharing a directory between the host and a container. * Sharing a *single file* between the host and a container. * Using remote storage and custom storage with *volume drivers*. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes are special directories in a container Volumes can be declared in two different ways: * Within a `Dockerfile`, with a `VOLUME` instruction. ```dockerfile VOLUME /uploads ``` * On the command-line, with the `-v` flag for `docker run`. ```bash $ docker run -d -v /uploads myapp ``` In both cases, `/uploads` (inside the container) will be a volume. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes bypass the copy-on-write system Volumes act as passthroughs to the host filesystem. * The I/O performance on a volume is exactly the same as I/O performance on the Docker host. * When you `docker commit`, the content of volumes is not brought into the resulting image. * If a `RUN` instruction in a `Dockerfile` changes the content of a volume, those changes are not recorded either. * If a container is started with the `--read-only` flag, the volume will still be writable (unless the volume is a read-only volume). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Volumes can be shared across containers You can start a container with *exactly the same volumes* as another one. The new container will have the same volumes, in the same directories.
They will contain exactly the same thing, and remain in sync. Under the hood, they are actually the same directories on the host anyway. This is done using the `--volumes-from` flag for `docker run`. We will see an example in the following slides. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Sharing app server logs with another container Let's start a Tomcat container: ```bash $ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat ``` Now, start an `alpine` container accessing the same volume: ```bash $ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*" ``` Then, from another window, send requests to our Tomcat container: ```bash $ curl localhost:8080 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes exist independently of containers If a container is stopped or removed, its volumes still exist and are available. Volumes can be listed and manipulated with `docker volume` subcommands: ```bash $ docker volume ls DRIVER VOLUME NAME local 5b0b65e4316da67c2d471086640e6005ca2264f3... local pgdata-prod local pgdata-dev local 13b59c9936d78d109d094693446e174e5480d973... ``` Some of those volume names were explicit (pgdata-prod, pgdata-dev). The others (the hex IDs) were generated automatically by Docker. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Naming volumes * Volumes can be created without a container, then used in multiple containers. Let's create a couple of volumes directly. ```bash $ docker volume create webapps webapps ``` ```bash $ docker volume create logs logs ``` Volumes are not anchored to a specific path. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Populating volumes * When an empty volume is mounted on a non-empty directory, the directory is copied to the volume. * This makes it easy to "promote" a normal directory to a volume. * Non-empty volumes are always mounted as-is. Let's populate the webapps volume with the webapps.dist directory from the Tomcat image. ```bash $ docker run -v webapps:/usr/local/tomcat/webapps.dist tomcat true ``` Note: running `true` will cause the container to exit successfully once the `webapps.dist` directory has been copied to the `webapps` volume, instead of starting Tomcat. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Using our named volumes * Volumes are used with the `-v` option. * When a host path does not contain a `/`, it is considered a volume name. Let's start a web server using the two previous volumes. ```bash $ docker run -d -p 1234:8080 \ -v logs:/usr/local/tomcat/logs \ -v webapps:/usr/local/tomcat/webapps \ tomcat ``` Check that it's running correctly: ```bash $ curl localhost:1234 ... (Tomcat tells us how happy it is to be up and running) ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Using a volume in another container * We will make changes to the volume from another container.
* In this example, we will run a text editor in the other container. (But this could be an FTP server, a WebDAV server, a Git receiver...) Let's start another container using the `webapps` volume. ```bash $ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp ``` Vandalize the page, save, exit. Then run `curl localhost:1234` again to see your changes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Using custom "bind-mounts" In some cases, you want a specific directory on the host to be mapped inside the container: * You want to manage storage and snapshots yourself. (With LVM, or a SAN, or ZFS, or anything else!) * You have a separate disk with better performance (SSD) or resiliency (EBS) than the system disk, and you want to put important data on that disk. * You want to share your source directory between your host (where the source gets edited) and the container (where it is compiled or executed). Wait, we already met the last use-case in our example development workflow! Nice. ```bash $ docker run -d -v /path/on/the/host:/path/in/container image ... ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Migrating data with `--volumes-from` The `--volumes-from` option tells Docker to re-use all the volumes of an existing container. * Scenario: migrating from Redis 2.8 to Redis 3.0. * We have a container (`myredis`) running Redis 2.8. * Stop the `myredis` container. * Start a new container, using the Redis 3.0 image, and the `--volumes-from` option. * The new container will inherit the data of the old one. * Newer containers can use `--volumes-from` too. * Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes). .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Data migration in practice Let's create a Redis container. ```bash $ docker run -d --name redis28 redis:2.8 ``` Connect to the Redis container and set some data. ```bash $ docker run -ti --link redis28:redis busybox telnet redis 6379 ``` Issue the following commands: ```bash SET counter 42 INFO server SAVE QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Upgrading Redis Stop the Redis container. ```bash $ docker stop redis28 ``` Start the new Redis container. ```bash $ docker run -d --name redis30 --volumes-from redis28 redis:3.0 ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Testing the new Redis Connect to the Redis container and see our data. ```bash docker run -ti --link redis30:redis busybox telnet redis 6379 ``` Issue a few commands. ```bash GET counter INFO server QUIT ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes lifecycle * When you remove a container, its volumes are kept around. * You can list them with `docker volume ls`. * You can access them by creating a container with `docker run -v`. * You can remove them with `docker volume rm` or `docker system prune`. 
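For instance, here is a quick experiment showing that lifecycle (a sketch using hypothetical names):

```bash
# Create a container with a named volume, then remove the container.
docker run -d --name tempredis -v mydata:/data redis
docker rm -f tempredis
# The volume survives its container...
docker volume ls
# ...and can be mounted in a new container, or removed explicitly.
docker run --rm -v mydata:/data alpine ls /data
docker volume rm mydata
```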
Ultimately, _you_ are the one responsible for logging, monitoring, and backup of your volumes. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes defined by an image Wondering if an image has volumes? Just use `docker inspect`: ```bash $ docker inspect training/datavol [{ "config": { . . . "Volumes": { "/var/webapp": {} }, . . . }] ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: extra-details ## Checking volumes used by a container To see which paths are actually volumes, and to what they are bound, use `docker inspect` (again): ```bash $ docker inspect <yourContainerID> [{ "ID": "<yourContainerID>", . . . "Volumes": { "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468" }, "VolumesRW": { "/var/webapp": true }, }] ``` * We can see that our volume is present on the file system of the Docker host. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Sharing a single file The same `-v` flag can be used to share a single file (instead of a directory). One of the most interesting examples is to share the Docker control socket. ```bash $ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh ``` From that container, you can now run `docker` commands communicating with the Docker Engine running on the host. Try `docker ps`! .warning[Since that container has access to the Docker socket, it has root-like access to the host.] .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volume plugins You can install plugins to manage volumes backed by particular storage systems, or providing extra features. For instance: * [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS, EFS). * [Portworx](https://portworx.com/) - provides a distributed block store for containers. * [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale to several petabytes. It provides interfaces for object, block and file storage. * and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)! .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Volumes vs. Mounts * Since Docker 17.06, a new option is available: `--mount`. * It offers a new, richer syntax to manipulate data in containers. * It makes an explicit difference between: - volumes (identified with a unique name, managed by a storage plugin), - bind mounts (identified with a host path, not managed). * The former `-v` / `--volume` option is still usable.
.debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## `--mount` syntax Binding a host path to a container path: ```bash $ docker run \ --mount type=bind,source=/path/on/host,target=/path/in/container alpine ``` Mounting a volume to a container path: ```bash $ docker run \ --mount source=myvolume,target=/path/in/container alpine ``` Mounting a tmpfs (in-memory, for temporary files): ```bash $ docker run \ --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine ``` .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- ## Section summary We've learned how to: * Create and manage volumes. * Share volumes across containers. * Share a host directory with one or many containers. .debug[[containers/Working_With_Volumes.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Working_With_Volumes.md)] --- class: pic .interstitial[] --- name: toc-gentle-introduction-to-yaml class: title Gentle introduction to YAML .nav[ [Previous part](#toc-working-with-volumes) | [Back to table of contents](#toc-part-6) | [Next part](#toc-compose-for-development-stacks) ] .debug[(automatically generated title slide)] --- # Gentle introduction to YAML - YAML Ain't Markup Language (according to [yaml.org][yaml]) - *Almost* required when working with containers: - Docker Compose files - Kubernetes manifests - Many CI pipelines (GitHub, GitLab...) - If you don't know much about YAML, this is for you! [yaml]: https://yaml.org/ .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## What is it? - Data representation language ```yaml - country: France capital: Paris code: fr population: 68042591 - country: Germany capital: Berlin code: de population: 84270625 - country: Norway capital: Oslo code: no # It's a trap! population: 5425270 ``` - Even without knowing YAML, we probably can add a country to that file :) .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Trying YAML - Method 1: in the browser https://onlineyamltools.com/convert-yaml-to-json https://onlineyamltools.com/highlight-yaml - Method 2: in a shell ```bash yq . 
foo.yaml ``` - Method 3: in Python ```python import yaml; yaml.safe_load(""" - country: France capital: Paris """) ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Basic stuff - Strings, numbers, boolean values, `null` - Sequences (=arrays, lists) - Mappings (=objects) - Superset of JSON (if you know JSON, you can just write JSON) - Comments start with `#` - A single *file* can have multiple *documents* (separated by `---` on a single line) .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Sequences - Example: sequence of strings ```yaml [ "france", "germany", "norway" ] ``` - Example: the same sequence, without the double-quotes ```yaml [ france, germany, norway ] ``` - Example: the same sequence, in "block collection style" (=multi-line) ```yaml - france - germany - norway ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Mappings - Example: mapping strings to numbers ```yaml { "france": 68042591, "germany": 84270625, "norway": 5425270 } ``` - Example: the same mapping, without the double-quotes ```yaml { france: 68042591, germany: 84270625, norway: 5425270 } ``` - Example: the same mapping, in "block collection style" ```yaml france: 68042591 germany: 84270625 norway: 5425270 ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Combining types - In a sequence (or mapping) we can have different types (including other sequences or mappings) - Example: ```yaml questions: [ name, quest, favorite color ] answers: [ "Arthur, King of the Britons", Holy Grail, purple, 42 ] ``` - Note that we need to quote "Arthur" because of the comma - Note that we don't have the same number of elements in questions and answers .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## More combinations - Example: ```yaml - service: nginx ports: [ 80, 443 ] - service: bind ports: [ 53/tcp, 53/udp ] - service: ssh ports: 22 ``` - Note that `ports` doesn't always have the same type (the code handling that data will probably have to be smart!) 
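To see that concretely, we can parse the snippet with Python (method 3 from earlier); a small sketch, assuming PyYAML is installed:

```bash
# Prints "nginx list" and "ssh int": same key, different types.
python3 -c '
import yaml
for svc in yaml.safe_load("""
- service: nginx
  ports: [80, 443]
- service: ssh
  ports: 22
"""):
    print(svc["service"], type(svc["ports"]).__name__)
'
```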
.debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic booleans ```yaml codes: france: fr germany: de norway: no ``` -- ```json { "codes": { "france": "fr", "germany": "de", "norway": false } } ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic booleans - `no` can become `false` (it depends on the YAML parser used) - It should be quoted instead: ```yaml codes: france: fr germany: de norway: "no" ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic floats ```yaml version: libfoo: 1.10 fooctl: 1.0 ``` -- ```json { "version": { "libfoo": 1.1, "fooctl": 1 } } ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic floats - Trailing zeros disappear - These should also be quoted: ```yaml version: libfoo: "1.10" fooctl: "1.0" ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic times ```yaml portmap: - 80:80 - 22:22 ``` -- ```json { "portmap": [ "80:80", 1342 ] } ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## ⚠️ Automatic times - `22:22` becomes `1342` - That's 22 minutes and 22 seconds = 1342 seconds - Again, it should be quoted .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Document separator - A single YAML *file* can have multiple *documents* separated by `---`: ```yaml This is a document consisting of a single string. --- 💡 name: The second document type: This one is a mapping (key→value) --- 💡 - Third document - This one is a sequence ``` - Some folks like to add an extra `---` at the beginning and/or at the end (it's not mandatory but can help e.g. to `cat` multiple files together) .footnote[💡 Ignore this; it's here to work around [this issue][remarkyaml].] [remarkyaml]: https://github.com/gnab/remark/issues/679 .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- ## Multi-line strings Try the following block in a YAML parser: ```yaml add line breaks: "in double quoted strings\n(like this)" preserve line break: | by using a pipe (|) (this is great for embedding shell scripts, configuration files...) do not preserve line breaks: > by using a greater-than (>) (this is great for embedding very long lines) ``` See https://yaml-multiline.info/ for advanced multi-line tips! (E.g. to strip or keep extra `\n` characters at the end of the block.) .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- class: extra-details ## Advanced features Anchors let you "memorize" and re-use content: ```yaml debian: &debian packages: deb latest-stable: bullseye also-debian: *debian ubuntu: <<: *debian latest-stable: jammy ``` .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- class: extra-details ## YAML, good or evil? - Natural progression from XML to JSON to YAML - There are other data languages out there (e.g. HCL, domain-specific things crafted with Ruby, CUE...)
- Compromises are made, for instance: - more user-friendly → more "magic" with side effects - more powerful → steeper learning curve - Love it or loathe it, but it's a good idea to understand it! - Interesting tool if you appreciate YAML: https://carvel.dev/ytt/ ??? :EN:- Understanding YAML and its gotchas :FR:- Comprendre le YAML et ses subtilités .debug[[shared/yaml.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/yaml.md)] --- class: pic .interstitial[] --- name: toc-compose-for-development-stacks class: title Compose for development stacks .nav[ [Previous part](#toc-gentle-introduction-to-yaml) | [Back to table of contents](#toc-part-6) | [Next part](#toc-exercise--writing-a-compose-file) ] .debug[(automatically generated title slide)] --- # Compose for development stacks Dockerfile = great to build *one* container image. What if we have multiple containers? What if some of them require particular `docker run` parameters? How do we connect them all together? ... Compose solves these use-cases (and a few more). .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Life before Compose Before we had Compose, we would typically write custom scripts to: - build container images, - run containers using these images, - connect the containers together, - rebuild, restart, update these images and containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Life with Compose Compose enables a simple, powerful onboarding workflow: 1. Check out our code. 2. Run `docker compose up`. 3. Our app is up and running! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic  .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Life after Compose (Or: when do we need something else?) - Compose is *not* an orchestrator - It isn't designed to run containers across multiple nodes (it can, however, work with Docker Swarm Mode) - Compose isn't ideal if we want to run containers on Kubernetes - it uses different concepts (Compose services ≠ Kubernetes services) - it needs a Docker Engine (although containerd support might be coming) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## First rodeo with Compose 1. Write Dockerfiles 2. Describe our stack of containers in a YAML file (the "Compose file") 3. `docker compose up` (or `docker compose up -d` to run in the background) 4. Compose pulls and builds the required images, and starts the containers 5. Compose shows the combined logs of all the containers (if running in the background, use `docker compose logs`) 6. Hit Ctrl-C to stop the whole stack (if running in the background, use `docker compose stop`) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Iterating After making changes to our source code, we can: 1. `docker compose build` to rebuild container images 2.
`docker compose up` to restart the stack with the new images We can also combine both with `docker compose up --build` Compose will be smart, and only recreate the containers that have changed. When working with interpreted languages: - don't rebuild each time - leverage a `volumes` section instead .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose First step: clone the source code for the app we will be working on. ```bash git clone https://github.com/jpetazzo/trainingwheels cd trainingwheels ``` Second step: start the app. ```bash docker compose up ``` Watch Compose build and run the app. That Compose stack exposes a web server on port 8000; try connecting to it. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Launching Our First Stack with Compose We should see a web page like this:  Each time we reload, the counter should increase. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Stopping the app When we hit Ctrl-C, Compose tries to gracefully terminate all of the containers. After ten seconds (or if we press `^C` again) it will forcibly kill them. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## The Compose file * Historically: docker-compose.yml or .yaml * Recently (kind of): can also be named compose.yml or .yaml (Since [version 1.28.6, March 2021](https://docs.docker.com/compose/releases/release-notes/#1286)) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Example Here is the file used in the demo: .small[ ```yaml version: "3" services: www: build: www ports: - ${PORT-8000}:5000 user: nobody environment: DEBUG: 1 command: python counter.py volumes: - ./www:/src redis: image: redis ``` ] .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Compose file structure A Compose file has multiple sections: * `services` is mandatory. Each service corresponds to a container. * `version` is optional (it used to be mandatory). It can be ignored. * `networks` is optional and indicates to which networks containers should be connected. (By default, containers will be connected on a private, per-compose-file network.) * `volumes` is optional and can define volumes to be used and/or shared by the containers. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: extra-details ## Compose file versions * Version 1 is legacy and shouldn't be used. (If you see a Compose file without a `services` block, it's a legacy v1 file.) * Version 2 added support for networks and volumes. * Version 3 added support for deployment options (scaling, rolling updates, etc). The [Docker documentation](https://docs.docker.com/compose/compose-file/) has excellent information about the Compose file format if you need to know more about versions. 
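Whatever the version, a convenient way to see how Compose interprets a given file is `docker compose config`, which validates the file and prints the fully resolved configuration (variables interpolated, defaults applied); for example:

```bash
# Print the normalized configuration of the current Compose file
docker compose config
# Or just list the services it defines
docker compose config --services
```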
.debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Containers in Compose file Each service in the YAML file must contain either `build` or `image`. * `build` indicates a path containing a Dockerfile. * `image` indicates an image name (local, or on a registry). * If both are specified, an image will be built from the `build` directory and named `image`. The other parameters are optional. They encode the parameters that you would typically add to `docker run`. Sometimes they have a few minor improvements. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Container parameters * `command` indicates what to run (like `CMD` in a Dockerfile). * `ports` translates to one (or multiple) `-p` options to map ports. You can specify local ports (i.e. `x:y` to expose public port `x`). * `volumes` translates to one (or multiple) `-v` options. You can use relative paths here. For the full list, check: https://docs.docker.com/compose/compose-file/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Environment variables - We can use environment variables in Compose files (like `$THIS` or `${THAT}`) - We can provide default values, e.g. `${PORT-8000}` - Compose will also automatically load the environment file `.env` (it should contain `VAR=value`, one per line) - This is a great way to customize build and run parameters (base image versions to use, build and run secrets, port numbers...) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Configuring a Compose stack - Follow [12-factor app configuration principles][12factorconfig] (configure the app through environment variables) - Provide (in the repo) a default environment file suitable for development (no secret or sensitive value) - Copy the default environment file to `.env` and tweak it (or: provide a script to generate `.env` from a template) [12factorconfig]: https://12factor.net/config .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Running multiple copies of a stack - Copy the stack in two different directories, e.g. `front` and `frontcopy` - Compose prefixes images and containers with the directory name: `front_www`, `front_www_1`, `front_db_1` `frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1` - Alternatively, use `docker compose -p frontcopy` (to set the `--project-name` of a stack, which defaults to the dir name) - Each copy is isolated from the others (runs on a different network) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Checking stack status We have `ps`, `docker ps`, and similarly, `docker compose ps`: ```bash $ docker compose ps Name Command State Ports ---------------------------------------------------------------------------- trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp ``` Shows the status of all the containers of our stack. Doesn't show the other containers.
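Tying this together with the environment variables seen earlier, here is a minimal sketch (run from the trainingwheels directory) that overrides `${PORT-8000}` through `.env` and verifies the result:

```bash
# Publish the web service on port 9000 instead of the default 8000.
echo "PORT=9000" > .env
docker compose up -d
docker compose ps   # the www service now maps port 9000 on the host
```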
.debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (1) If you have started your application in the background with Compose and want to stop it easily, you can use the `kill` command: ```bash $ docker compose kill ``` Likewise, `docker compose rm` will let you remove containers (after confirmation): ```bash $ docker compose rm Going to remove trainingwheels_redis_1, trainingwheels_www_1 Are you sure? [yN] y Removing trainingwheels_redis_1... Removing trainingwheels_www_1... ``` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Cleaning up (2) Alternatively, `docker compose down` will stop and remove containers. It will also remove other resources, like networks that were created for the application. ```bash $ docker compose down Stopping trainingwheels_www_1 ... done Stopping trainingwheels_redis_1 ... done Removing trainingwheels_www_1 ... done Removing trainingwheels_redis_1 ... done ``` Use `docker compose down -v` to remove everything including volumes. .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Special handling of volumes - When an image gets updated, Compose automatically creates a new container - The data in the old container is lost... - ...Except if the container is using a *volume* - Compose will then re-attach that volume to the new container (and data is then retained across database upgrades) - All good database images use volumes (e.g. all official images) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Gotchas with volumes - Unfortunately, Docker volumes don't have labels or metadata - Compose tracks volumes thanks to their associated container - If the container is deleted, the volume gets orphaned - Example: `docker compose down && docker compose up` - the old volume still exists, detached from its container - a new volume gets created - `docker compose down -v`/`--volumes` deletes volumes (but **not** `docker compose down && docker compose down -v`!) .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Managing volumes explicitly Option 1: *named volumes* ```yaml services: app: volumes: - data:/some/path volumes: data: ``` - Volume will be named `<project>_data` (i.e. prefixed with the project name) - It won't be orphaned with `docker compose down` - It will correctly be removed with `docker compose down -v` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Managing volumes explicitly Option 2: *relative paths* ```yaml services: app: volumes: - ./data:/some/path ``` - Makes it easy to colocate the app and its data (for migration, backups, disk usage accounting...)
- Won't be removed by `docker compose down -v` .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Managing complex stacks - Compose provides multiple features to manage complex stacks (with many containers) - `-f`/`--file`/`$COMPOSE_FILE` can be a list of Compose files (separated by `:` and merged together) - Services can be assigned to one or more *profiles* - `--profile`/`$COMPOSE_PROFILES` can be a list of comma-separated profiles (see [Using service profiles][profiles] in the Compose documentation) - These variables can be set in `.env` [profiles]: https://docs.docker.com/compose/profiles/ .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- ## Dependencies - A service can have a `depends_on` section (listing one or more other services) - This is used when bringing up individual services (e.g. `docker compose up blah` or `docker compose run foo`) ⚠️ It doesn't make a service "wait" for another one to be up! .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: extra-details ## A bit of history and trivia - Compose was initially named "Fig" - Compose is one of the only components of Docker written in Python (almost everything else is in Go) - In 2020, Docker introduced "Compose CLI": - `docker compose` command to deploy Compose stacks to some clouds - in Go instead of Python - progressively getting feature parity with the Python `docker-compose` - also provides numerous improvements (e.g. leverages BuildKit by default) ??? :EN:- Using compose to describe an environment :EN:- Connecting services together with a *Compose file* :FR:- Utiliser Compose pour décrire son environnement :FR:- Écrire un *Compose file* pour connecter les services entre eux .debug[[containers/Compose_For_Dev_Stacks.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Compose_For_Dev_Stacks.md)] --- class: pic .interstitial[] --- name: toc-exercise--writing-a-compose-file class: title Exercise — writing a Compose file .nav[ [Previous part](#toc-compose-for-development-stacks) | [Back to table of contents](#toc-part-6) | [Next part](#toc-installing-docker) ] .debug[(automatically generated title slide)] --- # Exercise — writing a Compose file Let's write a Compose file for the wordsmith app! The code is at: https://github.com/jpetazzo/wordsmith .debug[[containers/Exercise_Composefile.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Exercise_Composefile.md)] --- class: pic .interstitial[] --- name: toc-installing-docker class: title Installing Docker .nav[ [Previous part](#toc-exercise--writing-a-compose-file) | [Back to table of contents](#toc-part-7) | [Next part](#toc-docker-engine-and-other-container-engines) ] .debug[(automatically generated title slide)] --- class: title # Installing Docker  .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Objectives At the end of this lesson, you will know: * How to install Docker. * When to use `sudo` when running Docker commands. *Note:* if you were provided with a training VM for a hands-on tutorial, you can skip this chapter, since that VM already has Docker installed, and Docker has already been set up to run without `sudo`.
.debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Installing Docker There are many ways to install Docker. We can arbitrarily distinguish: * Installing Docker on an existing Linux machine (physical or VM) * Installing Docker on macOS or Windows * Installing Docker on a fleet of cloud VMs .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Installing Docker on Linux * The recommended method is to install the packages supplied by Docker Inc.: - add Docker Inc.'s package repositories to your system configuration - install the Docker Engine * Detailed installation instructions (distro by distro) are available on: https://docs.docker.com/engine/installation/ * You can also install from binaries (if your distro is not supported): https://docs.docker.com/engine/installation/linux/docker-ce/binaries/ * To quickly set up a dev environment, Docker provides a convenience install script: ```bash curl -fsSL get.docker.com | sh ``` .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- class: extra-details ## Docker Inc. packages vs distribution packages * Docker Inc. releases new versions monthly (edge) and quarterly (stable) * Releases are immediately available on Docker Inc.'s package repositories * Linux distros don't always update to the latest Docker version (Sometimes, updating would break their guidelines for major/minor upgrades) * Sometimes, some distros have carried packages with custom patches * Sometimes, these patches added critical security bugs ☹ * Installing through Docker Inc.'s repositories is a bit of extra work … … but it is generally worth it! .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Installing Docker on macOS and Windows * On macOS, the recommended method is to use Docker Desktop for Mac: https://docs.docker.com/docker-for-mac/install/ * On Windows 10 Pro, Enterprise, and Education, you can use Docker Desktop for Windows: https://docs.docker.com/docker-for-windows/install/ * On older versions of Windows, you can use the Docker Toolbox: https://docs.docker.com/toolbox/toolbox_install_windows/ * On Windows Server 2016, you can also install the native engine: https://docs.docker.com/install/windows/docker-ee/ .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Docker Desktop * Special Docker edition available for Mac and Windows * Integrates well with the host OS: * installed like normal user applications on the host * provides user-friendly GUI to edit Docker configuration and settings * Only supports running one Docker VM at a time ... ... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster. .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- class: extra-details ## Docker Desktop internals * Leverages the host OS virtualization subsystem (e.g.
the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS) * Under the hood, runs a tiny VM (transparent to our daily use) * Accesses network resources like normal applications (and therefore, plays better with enterprise VPNs and firewalls) * Supports filesystem sharing through volumes (we'll talk about this later) .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Running Docker on macOS and Windows When you execute `docker version` from the terminal: * the CLI connects to the Docker Engine over a standard socket, * the Docker Engine is, in fact, running in a VM, * ... but the CLI doesn't know or care about that, * the CLI sends a request using the REST API, * the Docker Engine in the VM processes the request, * the CLI gets the response and displays it to you. All communication with the Docker Engine happens over the API. This also allows us to use remote Engines exactly as if they were local. .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- ## Important PSA about security * If you have access to the Docker control socket, you can take over the machine (Because you can run containers that will access the machine's resources) * Therefore, on Linux machines, membership in the `docker` group is equivalent to `root` access * You should restrict access to it like you would protect `root` * By default, the Docker control socket belongs to the `docker` group * You can add trusted users to the `docker` group * Otherwise, you will have to prefix every `docker` command with `sudo`, e.g.: ```bash sudo docker version ``` .debug[[containers/Installing_Docker.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Installing_Docker.md)] --- class: pic .interstitial[] --- name: toc-docker-engine-and-other-container-engines class: title Docker Engine and other container engines .nav[ [Previous part](#toc-installing-docker) | [Back to table of contents](#toc-part-7) | [Next part](#toc-init-systems-and-pid-) ] .debug[(automatically generated title slide)] --- # Docker Engine and other container engines * We are going to cover the architecture of the Docker Engine. * We will also present other container engines. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine external architecture  .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Docker Engine external architecture * The Engine is a daemon (service running in the background). * All interaction is done through a REST API exposed over a socket. * On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`. * We can also use a TCP socket, with optional mutual TLS authentication. * The `docker` CLI communicates with the Engine over the socket. Note: strictly speaking, the Docker API is not fully REST. Some operations (e.g. dealing with interactive containers and log streaming) don't fit the REST model.
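Since all interaction goes through the API, we can poke at the Engine directly. For instance, with a reasonably recent `curl` (7.40+), and assuming the default UNIX socket and sufficient permissions:

```bash
# ask the Engine for its version over the UNIX socket
curl --unix-socket /var/run/docker.sock http://localhost/version
```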
.debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- class: pic ## Docker Engine internal architecture  .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Docker Engine internal architecture * Up to Docker 1.10: the Docker Engine is a single monolithic binary. * Starting with Docker 1.11, the Engine is split into multiple parts: - `dockerd` (REST API, auth, networking, storage) - `containerd` (container lifecycle, controlled over a gRPC API) - `containerd-shim` (per-container; does almost nothing, but makes it possible to restart the Engine without restarting the containers) - `runc` (per-container; does the actual heavy lifting to start the container) * Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`. For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Other container engines The following list is not exhaustive. Furthermore, we limited the scope to Linux containers. We can also find containers (or things that look like containers) on other platforms like Windows, macOS, Solaris, FreeBSD ... .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## LXC * The venerable ancestor (first released in 2008). * Docker initially relied on it to execute containers. * No daemon; no central API. * Each container is managed by a `lxc-start` process. * Each `lxc-start` process exposes a custom API over a local UNIX socket, which can be used to interact with the container. * No notion of image (container filesystems have to be managed manually). * Networking has to be set up manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## LXD * Re-uses LXC code (through liblxc). * Builds on top of LXC to offer a more modern experience. * Daemon exposing a REST API. * Can manage images, snapshots, migrations, networking, storage. * "offers a user experience similar to virtual machines but using Linux containers instead." .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## CRI-O * Designed to be used with Kubernetes as a simple, basic runtime. * Compares to `containerd`. * Daemon exposing a gRPC interface. * Controlled using the CRI API (Container Runtime Interface defined by Kubernetes). * Needs an underlying OCI runtime (e.g. runc). * Handles storage, images, networking (through CNI plugins). We're not aware of anyone using it directly (i.e. outside of Kubernetes). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## systemd * "init" system (PID 1) in most modern Linux distributions. * Offers tools like `systemd-nspawn` and `machinectl` to manage containers. * `systemd-nspawn` is described in its man page as "in many ways ... similar to chroot(1), but more powerful". * `machinectl` can interact with VMs and containers managed by systemd. * Exposes a DBUS API.
* Basic image support (tar archives and raw disk images). * Network has to be set up manually. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Kata containers * OCI-compliant runtime. * Fusion of two projects: Intel Clear Containers and Hyper runV. * Runs each container in a lightweight virtual machine. * Requires running on bare metal *or* with nested virtualization. .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## gVisor * OCI-compliant runtime. * Implements a subset of the Linux kernel system calls. * Written in Go; itself uses a reduced set of host system calls. * Can be heavily sandboxed. * Can run in two modes: * KVM (requires bare metal or nested virtualization), * ptrace (no requirement, but slower). .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- ## Overall ... * The Docker Engine is very developer-centric: - easy to install - easy to use - no manual setup - first-class image build and transfer * As a result, it is a fantastic tool in development environments. * On servers: - Docker is a good default choice - If you use Kubernetes, the engine doesn't matter .debug[[containers/Container_Engines.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Container_Engines.md)] --- class: pic .interstitial[] --- name: toc-init-systems-and-pid- class: title Init systems and PID 1 .nav[ [Previous part](#toc-docker-engine-and-other-container-engines) | [Back to table of contents](#toc-part-7) | [Next part](#toc-advanced-dockerfile-syntax) ] .debug[(automatically generated title slide)] --- # Init systems and PID 1 In this chapter, we will consider: - the role of PID 1 in the world of Docker, - how to avoid some common pitfalls due to the misuse of init systems. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## What's an init system? - On UNIX, the "init system" (or "init" for short) is PID 1. - It is the first process started by the kernel when the system starts. - It has multiple responsibilities: - start every other process on the machine, - reap orphaned zombie processes. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- class: extra-details ## Orphaned zombie processes ?!? - When a process exits (or "dies"), it becomes a "zombie". (Zombie processes show up in `ps` or `top` with the status code `Z`.) - Its parent process must *reap* the zombie process. (This is done by calling `waitpid()` to retrieve the process' exit status.) - When a process exits, if it has child processes, these processes are "orphaned." - They are then re-parented to PID 1, init. - Init therefore needs to take care of these orphaned processes when they exit. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## Don't use init systems in containers - It's often tempting to use an init system or a process manager. (Examples: *systemd*, *supervisord*...) - Our containers are then called "system containers". (By contrast with "application containers".) - "System containers" are similar to lightweight virtual machines.
- They have multiple downsides: - when starting multiple processes, their logs get mixed on stdout, - if the application process dies, the container engine doesn't see it. - Overall, they make it harder to operate and troubleshoot containerized apps. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## Exceptions and workarounds - Sometimes, it's convenient to run a real init system like *systemd*. (Example: a CI system whose goal is precisely to test an init script or unit file.) - If we need to run multiple processes: can we use multiple containers? (Example: [this Compose file](https://github.com/jpetazzo/container.training/blob/master/compose/simple-k8s-control-plane/docker-compose.yaml) runs multiple processes together.) - When deploying with Kubernetes: - a container belongs to a pod, - a pod can have multiple containers. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## What about these zombie processes? - Our application runs as PID 1 in the container. - Our application may or may not be designed to reap zombie processes. - If our application uses subprocesses and doesn't reap them ... ... this can lead to PID exhaustion! (Or, more realistically, to a confusing herd of zombie processes.) - How can we solve this? .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- ## Tini to the rescue - Docker can automatically provide a minimal `init` process. - This is enabled with `docker run --init ...` - It uses a small init system ([tini](https://github.com/krallin/tini)) as PID 1: - it reaps zombies, - it forwards signals, - it exits when the child exits. - It is totally transparent to our application. - We should use it if our application creates subprocesses but doesn't reap them. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- class: extra-details ## What about Kubernetes? - Kubernetes does not expose that `--init` option. - However, we can achieve the same result with [Process Namespace Sharing](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/). - When Process Namespace Sharing is enabled, PID 1 will be `pause`. - That `pause` process takes care of reaping zombies. - Process Namespace Sharing is available since Kubernetes 1.16. - If you're using an older version of Kubernetes ... ... you might have to add `tini` explicitly to your Docker image. .debug[[containers/Init_Systems.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Init_Systems.md)] --- class: pic .interstitial[] --- name: toc-advanced-dockerfile-syntax class: title Advanced Dockerfile Syntax .nav[ [Previous part](#toc-init-systems-and-pid-) | [Back to table of contents](#toc-part-7) | [Next part](#toc-buildkit) ] .debug[(automatically generated title slide)] --- class: title # Advanced Dockerfile Syntax  .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Objectives We have seen simple Dockerfiles to illustrate how Docker builds container images. In this section, we will give a recap of the Dockerfile syntax, and introduce advanced Dockerfile commands that we might come across sometimes, or that we might want to use in specific scenarios.
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `Dockerfile` usage summary * `Dockerfile` instructions are executed in order. * Each instruction creates a new layer in the image. * Docker maintains a cache with the layers of previous builds. * When there are no changes in the instructions and files making a layer, the builder re-uses the cached layer, without executing the instruction for that layer. * The `FROM` instruction MUST be the first non-comment instruction. * Lines starting with `#` are treated as comments. * Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata. (As a result, each call to these instructions makes the previous one useless.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `RUN` instruction The `RUN` instruction can be specified in two ways. With shell wrapping, which runs the specified command inside a shell, with `/bin/sh -c`: ```dockerfile RUN apt-get update ``` Or using the `exec` method, which avoids shell string expansion, and allows execution in images that don't have `/bin/sh`: ```dockerfile RUN [ "apt-get", "update" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `RUN` instruction `RUN` will do the following: * Execute a command. * Record changes made to the filesystem. * Work great to install libraries, packages, and various files. `RUN` will NOT do the following: * Record state of *processes*. * Automatically start daemons. If you want to start something automatically when the container runs, you should use `CMD` and/or `ENTRYPOINT`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Collapsing layers It is possible to execute multiple commands in a single step: ```dockerfile RUN apt-get update && apt-get install -y wget && apt-get clean ``` It is also possible to break a command onto multiple lines: ```dockerfile RUN apt-get update \ && apt-get install -y wget \ && apt-get clean ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `EXPOSE` instruction The `EXPOSE` instruction tells Docker what ports are to be published in this image. ```dockerfile EXPOSE 8080 EXPOSE 80 443 EXPOSE 53/tcp 53/udp ``` * All ports are private by default. * Declaring a port with `EXPOSE` is not enough to make it public. * The `Dockerfile` doesn't control on which port a service gets exposed. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Exposing ports * When you `docker run -p ...`, that port becomes public. (Even if it was not declared with `EXPOSE`.) * When you `docker run -P ...` (without port number), all ports declared with `EXPOSE` become public. A *public port* is reachable from other containers and from outside the host. A *private port* is not reachable from outside. 
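To see this in action, we can run an image whose Dockerfile uses `EXPOSE`, publish all its ports with `-P`, and check which public ports were allocated:

```bash
# the official nginx image EXPOSEs port 80; -P publishes all EXPOSEd ports
docker run -d --name web -P nginx
docker port web
```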
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `COPY` instruction The `COPY` instruction adds files and content from your host into the image. ```dockerfile COPY . /src ``` This will add the contents of the *build context* (the directory passed as an argument to `docker build`) to the directory `/src` in the container. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Build context isolation Note: you can only reference files and directories *inside* the build context. Absolute paths are taken as being anchored to the build context, so the two following lines are equivalent: ```dockerfile COPY . /src COPY / /src ``` Attempts to use `..` to get out of the build context will be detected and blocked by Docker, and the build will fail. Otherwise, a `Dockerfile` could succeed on host A, but fail on host B. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD` `ADD` works almost like `COPY`, but has a few extra features. `ADD` can get remote files: ```dockerfile ADD http://www.example.com/webapp.jar /opt/ ``` This would download the `webapp.jar` file and place it in the `/opt` directory. `ADD` will automatically unpack local tar archives (optionally compressed with gzip, bzip2, or xz): ```dockerfile ADD ./assets.tar.gz /var/www/htdocs/assets/ ``` This would unpack `assets.tar.gz` into `/var/www/htdocs/assets`. *However,* `ADD` will not automatically unpack remote archives. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `ADD`, `COPY`, and the build cache * Before creating a new layer, Docker checks its build cache. * For most Dockerfile instructions, Docker only looks at the `Dockerfile` content to do the cache lookup. * For `ADD` and `COPY` instructions, Docker also checks if the files to be added to the container have been changed. * `ADD` always needs to download the remote file before it can check if it has been changed. (It cannot use, e.g., ETags or If-Modified-Since headers.) .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## `VOLUME` The `VOLUME` instruction tells Docker that a specific directory should be a *volume*. ```dockerfile VOLUME /var/lib/mysql ``` Filesystem access in volumes bypasses the copy-on-write layer, offering native performance to I/O done in those directories. Volumes can be attached to multiple containers, making it possible to "port" data from one container to another, e.g. to upgrade a database to a newer version. It is possible to start a container in "read-only" mode. The container filesystem will be made read-only, but volumes can still have read/write access if necessary. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `WORKDIR` instruction The `WORKDIR` instruction sets the working directory for subsequent instructions. It also affects `CMD` and `ENTRYPOINT`, since it sets the working directory used when starting the container. ```dockerfile WORKDIR /src ``` You can specify `WORKDIR` again to change the working directory for further operations.
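A minimal sketch showing that `WORKDIR` affects both subsequent build instructions and the directory in which the container starts:

```dockerfile
FROM alpine
WORKDIR /src
# this RUN executes in /src
RUN pwd > /tmp/builddir
# WORKDIR creates the directory if it doesn't exist yet
WORKDIR /src/app
# the container will start in /src/app, so this prints "/src/app"
CMD ["pwd"]
```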
.debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENV` instruction The `ENV` instruction specifies environment variables that should be set in any container launched from the image. ```dockerfile ENV WEBAPP_PORT 8080 ``` This will result in the following environment variable being set in any container created from this image: ```bash WEBAPP_PORT=8080 ``` You can also specify environment variables when you use `docker run`. ```bash $ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ... ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `USER` instruction The `USER` instruction sets the user name or UID to use when running the image. It can be used multiple times to change back to root or to another user. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `CMD` instruction The `CMD` instruction is a default command run when a container is launched from the image. ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` Means we don't need to specify `nginx -g "daemon off;"` when running the container. Instead of: ```bash $ docker run <yourname>/web_image nginx -g "daemon off;" ``` We can just do: ```bash $ docker run <yourname>/web_image ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## More about the `CMD` instruction Just like `RUN`, the `CMD` instruction comes in two forms. The first executes in a shell: ```dockerfile CMD nginx -g "daemon off;" ``` The second executes directly, without shell processing: ```dockerfile CMD [ "nginx", "-g", "daemon off;" ] ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `CMD` instruction The `CMD` can be overridden when you run a container. ```bash $ docker run -it <yourname>/web_image bash ``` Will run `bash` instead of `nginx -g "daemon off;"`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## The `ENTRYPOINT` instruction The `ENTRYPOINT` instruction is like the `CMD` instruction, but arguments given on the command line are *appended* to the entry point. Note: you have to use the "exec" syntax (`[ "..." ]`). ```dockerfile ENTRYPOINT [ "/bin/ls" ] ``` If we were to run: ```bash $ docker run training/ls -l ``` Instead of trying to run `-l`, the container will run `/bin/ls -l`. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## Overriding the `ENTRYPOINT` instruction The entry point can be overridden as well. ```bash $ docker run -it training/ls bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr $ docker run -it --entrypoint bash training/ls root@d902fb7b1fc7:/# ``` .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## How `CMD` and `ENTRYPOINT` interact The `CMD` and `ENTRYPOINT` instructions work best when used together.
```dockerfile ENTRYPOINT [ "nginx" ] CMD [ "-g", "daemon off;" ] ``` The `ENTRYPOINT` specifies the command to be run and the `CMD` specifies its options. On the command line we can then potentially override the options when needed. ```bash $ docker run -d <yourname>/web_image -t ``` This will override the options provided by `CMD` with the new flags. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- ## Advanced Dockerfile instructions * `ONBUILD` lets you stash instructions that will be executed when this image is used as a base for another one. * `LABEL` adds arbitrary metadata to the image. * `ARG` defines build-time variables (optional or mandatory). * `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default). * `HEALTHCHECK` defines a command assessing the status of the container. * `SHELL` sets the default program to use for string-syntax RUN, CMD, etc. .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: extra-details ## The `ONBUILD` instruction The `ONBUILD` instruction is a trigger. It sets instructions that will be executed when another image is built from the image being built. This is useful for building images that will be used as a base to build other images. ```dockerfile ONBUILD COPY . /src ``` * You can't chain `ONBUILD` instructions with `ONBUILD`. * `ONBUILD` can't be used to trigger `FROM` instructions. ??? :EN:- Advanced Dockerfile syntax :FR:- Dockerfile niveau expert .debug[[containers/Advanced_Dockerfiles.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Advanced_Dockerfiles.md)] --- class: pic .interstitial[] --- name: toc-buildkit class: title Buildkit .nav[ [Previous part](#toc-advanced-dockerfile-syntax) | [Back to table of contents](#toc-part-7) | [Next part](#toc-application-configuration) ] .debug[(automatically generated title slide)] --- # Buildkit - "New" backend for Docker builds - announced in 2017 - ships with Docker Engine 18.09 - enabled by default on Docker Desktop in 2021 - Huge improvements in build efficiency - 100% compatible with existing Dockerfiles - New features for multi-arch - Not just for building container images .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Old vs New - Classic `docker build`: - copy whole build context - linear execution - `docker run` + `docker commit` + `docker run` + `docker commit`... - Buildkit: - copy files only when they are needed; cache them - compute dependency graph (dependencies are expressed by `COPY`) - parallel execution - doesn't rely on Docker, but on internal runner/snapshotter - can run in "normal" containers (including in Kubernetes pods) .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Parallel execution - In multi-stage builds, all stages can be built in parallel (example: https://github.com/jpetazzo/shpod; [before][shpod-before-parallel] and [after][shpod-after-parallel]) - Stages are built only when they are necessary (i.e.
if their output is tagged or used in another necessary stage) - Files are copied from context only when needed - Files are cached in the builder [shpod-before-parallel]: https://github.com/jpetazzo/shpod/blob/c6efedad6d6c3dc3120dbc0ae0a6915f85862474/Dockerfile [shpod-after-parallel]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Turning it on and off - On recent versions of Docker Desktop (since 2021): *enabled by default* - On older versions, or on Docker CE (Linux): `export DOCKER_BUILDKIT=1` - Turning it off: `export DOCKER_BUILDKIT=0` .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Multi-arch support - Historically, Docker only ran on x86_64 / amd64 (the Intel/AMD 64-bit architecture) - Folks have been running it on 32-bit ARM for ages (e.g. Raspberry Pi) - This required a Go compiler and appropriate base images (which means changing/adapting Dockerfiles to use these base images) - Docker [image manifest v2 schema 2][manifest] introduces multi-arch images (`FROM alpine` automatically gets the right image for your architecture) [manifest]: https://docs.docker.com/registry/spec/manifest-v2-2/ .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Why? - Raspberry Pi (32-bit and 64-bit ARM) - Other ARM-based embedded systems (ODROID, NVIDIA Jetson...) - Apple M1, M2... - AWS Graviton - Ampere Altra (e.g. on Hetzner, Oracle Cloud, Scaleway...) .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Multi-arch builds in a nutshell Use the `docker buildx build` command: ```bash docker buildx build … \ --platform linux/amd64,linux/arm64,linux/arm/v7,linux/386 \ [--tag jpetazzo/hello --push] ``` - Requires all base images to be available for these platforms - Must not use binary downloads with hard-coded architectures!
(streamlining a Dockerfile for multi-arch: [before][shpod-before-multiarch], [after][shpod-after-multiarch]) [shpod-before-multiarch]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile [shpod-after-multiarch]: https://github.com/jpetazzo/shpod/blob/c50789e662417b34fea6f5e1d893721d66d265b7/Dockerfile .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Native vs emulated vs cross - Native builds: *aarch64 machine running aarch64 programs building aarch64 images/binaries* - Emulated builds: *x86_64 machine running aarch64 programs building aarch64 images/binaries* - Cross builds: *x86_64 machine running x86_64 programs building aarch64 images/binaries* .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Native - Dockerfiles are (relatively) simple to write (nothing special to do to handle multi-arch; just avoid hard-coded archs) - Best performance - Requires "exotic" machines - Requires setting up a build farm .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Emulated - Dockerfiles are (relatively) simple to write - Emulation performance can vary (from "OK" to "ouch this is slow") - Emulation isn't always perfect (weird bugs/crashes are rare but can happen) - Doesn't require special machines - Supports arbitrary architectures thanks to QEMU .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Cross - Dockerfiles are more complicated to write - Requires cross-compilation toolchains - Performance is good - Doesn't require special machines .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Native builds - Requires base images to be available - To view available architectures for an image: ```bash regctl manifest get --list <image> docker manifest inspect <image> ``` - Nothing special to do, *except* when downloading binaries! ``` https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_`amd64`.zip ``` .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Finding the right architecture `uname -m` → armv7l, aarch64, i686, x86_64 `GOARCH` (from `go env`) → arm, arm64, 386, amd64 In Dockerfile, add `ARG TARGETARCH` (or `ARG TARGETPLATFORM`) - `TARGETARCH` matches `GOARCH` - `TARGETPLATFORM` → linux/arm/v7, linux/arm64, linux/386, linux/amd64 .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- class: extra-details ## Welp Sometimes, binary releases be like: ``` Linux_arm64.tar.gz Linux_ppc64le.tar.gz Linux_s390x.tar.gz Linux_x86_64.tar.gz ``` This needs a bit of custom mapping.
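A sketch of such a mapping, using a hypothetical download URL (the `case` statement translates Docker's `TARGETARCH` values to the names used by that particular release):

```dockerfile
FROM alpine
RUN apk add --no-cache curl
# TARGETARCH is set automatically by BuildKit/buildx (amd64, arm64, ...)
ARG TARGETARCH
RUN case "$TARGETARCH" in \
      amd64) ARCH=x86_64 ;; \
      arm64) ARCH=arm64 ;; \
      *) echo "unsupported architecture: $TARGETARCH" && exit 1 ;; \
    esac \
 && curl -fsSL -o /tmp/tool.tar.gz \
    "https://example.com/releases/Linux_${ARCH}.tar.gz"
```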
.debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Emulation - Leverages `binfmt_misc` and QEMU on Linux - Enabling: ```bash docker run --rm --privileged aptman/qus -s -- -p ``` - Disabling: ```bash docker run --rm --privileged aptman/qus -- -r ``` - Checking status: ```bash ls -l /proc/sys/fs/binfmt_misc ``` .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- class: extra-details ## How it works - `binfmt_misc` lets us register _interpreters_ for binaries, e.g.: - [DOSBox][dosbox] for DOS programs - [Wine][wine] for Windows programs - [QEMU][qemu] for Linux programs for other architectures - When we try to execute e.g. a SPARC binary on our x86_64 machine: - `binfmt_misc` detects the binary format and invokes `qemu-<arch> the-binary ...` - QEMU translates SPARC instructions to x86_64 instructions - system calls go straight to the kernel [dosbox]: https://www.dosbox.com/ [QEMU]: https://www.qemu.org/ [wine]: https://www.winehq.org/ .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- class: extra-details ## QEMU registration - The `aptman/qus` image mentioned earlier contains static QEMU builds - It registers all these interpreters with the kernel - For more details, check: - https://github.com/dbhi/qus - https://dbhi.github.io/qus/ .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Cross-compilation - Cross-compilation is about 10x faster than emulation (non-scientific benchmarks!) - In Dockerfile, add: `ARG BUILDARCH BUILDPLATFORM TARGETARCH TARGETPLATFORM` - Can use `FROM --platform=$BUILDPLATFORM <image>` - Then use `$TARGETARCH` or `$TARGETPLATFORM` (e.g. for Go, `export GOARCH=$TARGETARCH`) - Check [tonistiigi/xx][xx] and [Toni's blog][toni] for some amazing cross tools! [xx]: https://github.com/tonistiigi/xx [toni]: https://medium.com/@tonistiigi/faster-multi-platform-builds-dockerfile-cross-compilation-guide-part-1-ec087c719eaf .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Checking runtime capabilities Build and run the following Dockerfile: ```dockerfile FROM --platform=linux/amd64 busybox AS amd64 FROM --platform=linux/arm64 busybox AS arm64 FROM --platform=linux/arm/v7 busybox AS arm32 FROM --platform=linux/386 busybox AS ia32 FROM alpine RUN apk add file WORKDIR /root COPY --from=amd64 /bin/busybox /root/amd64/busybox COPY --from=arm64 /bin/busybox /root/arm64/busybox COPY --from=arm32 /bin/busybox /root/arm32/busybox COPY --from=ia32 /bin/busybox /root/ia32/busybox CMD for A in *; do echo "$A => $($A/busybox uname -a)"; done ``` It will indicate which executables can be run on your engine. .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## Cache directories ```bash RUN --mount=type=cache,target=/pipcache pip install --cache-dir /pipcache ...
``` - The `/pipcache` directory won't be in the final image - But it will persist across builds - This can simplify Dockerfiles a lot - we no longer need to `download package && install package && rm package` - download to a cache directory, and skip `rm` phase - Subsequent builds will also be faster, thanks to caching .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- ## More than builds - Buildkit is also used in other systems: - [Earthly] - generic repeatable build pipelines - [Dagger] - CICD pipelines that run anywhere - and more! [Earthly]: https://earthly.dev/ [Dagger]: https://dagger.io/ .debug[[containers/Buildkit.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Buildkit.md)] --- class: pic .interstitial[] --- name: toc-application-configuration class: title Application Configuration .nav[ [Previous part](#toc-buildkit) | [Back to table of contents](#toc-part-8) | [Next part](#toc-logging) ] .debug[(automatically generated title slide)] --- # Application Configuration There are many ways to provide configuration to containerized applications. There is no "best way" — it depends on factors like: * configuration size, * mandatory and optional parameters, * scope of configuration (per container, per app, per customer, per site, etc), * frequency of changes in the configuration. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Command-line parameters ```bash docker run jpetazzo/hamba 80 www1:80 www2:80 ``` * Configuration is provided through command-line parameters. * In the above example, the `ENTRYPOINT` is a script that will: - parse the parameters, - generate a configuration file, - start the actual service. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Command-line parameters pros and cons * Appropriate for mandatory parameters (without which the service cannot start). * Convenient for "toolbelt" services instantiated many times. (Because there is no extra step: just run it!) * Not great for dynamic configurations or bigger configurations. (These things are still possible, but more cumbersome.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Environment variables ```bash docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana ``` * Configuration is provided through environment variables. * The environment variable can be used straight by the program, or by a script generating a configuration file. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Environment variables pros and cons * Appropriate for optional parameters (since the image can provide default values). * Also convenient for services instantiated many times. (It's as easy as command-line parameters.) * Great for services with lots of parameters, but you only want to specify a few. (And use default values for everything else.) * Ability to introspect possible parameters and their default values. * Not great for dynamic configurations. 
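A common pattern is a tiny entrypoint script that falls back to defaults when variables are not set (a sketch; `mywebapp` and `WEBAPP_PORT` are just example names):

```bash
#!/bin/sh
# use WEBAPP_PORT if provided by the user, else default to 8080
: "${WEBAPP_PORT:=8080}"
# replace the shell with the actual service
exec mywebapp --port "$WEBAPP_PORT"
```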
.debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration ```dockerfile FROM prometheus COPY prometheus.conf /etc ``` * The configuration is added to the image. * The image may have a default configuration; the new configuration can: - replace the default configuration, - extend it (if the code can read multiple configuration files). .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Baked-in configuration pros and cons * Allows arbitrary customization and complex configuration files. * Requires writing a configuration file. (Obviously!) * Requires building an image to start the service. * Requires rebuilding the image to reconfigure the service. * Requires rebuilding the image to upgrade the service. * Configured images can be stored in registries. (Which is great, but requires a registry.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Configuration volume ```bash docker run -v appconfig:/etc/appconfig myapp ``` * The configuration is stored in a volume. * The volume is attached to the container. * The image may have a default configuration. (But this results in a less "obvious" setup that needs more documentation.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Configuration volume pros and cons * Allows arbitrary customization and complex configuration files. * Requires creating a volume for each different configuration. * Services with identical configurations can use the same volume. * Doesn't require building / rebuilding an image when upgrading / reconfiguring. * Configuration can be generated or edited through another container. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume * This is a powerful pattern for dynamic, complex configurations. * The configuration is stored in a volume. * The configuration is generated / updated by a special container. * The application container detects when the configuration is changed. (And automatically reloads the configuration when necessary.) * The configuration can be shared between multiple services if needed. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Dynamic configuration volume example In a first terminal, start a load balancer with an initial configuration: ```bash $ docker run --name loadbalancer jpetazzo/hamba \ 80 goo.gl:80 ``` In another terminal, reconfigure that load balancer: ```bash $ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \ 80 google.com:80 ``` The configuration could also be updated through e.g. a REST API. (The REST API being itself served from another container.) .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- ## Keeping secrets .warning[Ideally, you should not put secrets (passwords, tokens...)
in:] * command-line or environment variables (anyone with Docker API access can get them), * images, especially stored in a registry. Secrets management is better handled with an orchestrator (like Swarm or Kubernetes). Orchestrators allow passing secrets in a "one-way" manner. Managing secrets securely without an orchestrator can be convoluted. E.g.: - read the secret on stdin when the service starts, - pass the secret using an API endpoint. .debug[[containers/Application_Configuration.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Application_Configuration.md)] --- class: pic .interstitial[] --- name: toc-logging class: title Logging .nav[ [Previous part](#toc-application-configuration) | [Back to table of contents](#toc-part-8) | [Next part](#toc-orchestration-an-overview) ] .debug[(automatically generated title slide)] --- # Logging In this chapter, we will explain the different ways to send logs from containers. We will then show one particular method in action, using ELK and Docker's logging drivers. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## There are many ways to send logs - The simplest method is to write on the standard output and error. - Applications can write their logs to local files. (The files are usually periodically rotated and compressed.) - It is also very common (on UNIX systems) to use syslog. (The logs are collected by syslogd or an equivalent like journald.) - In large applications with many components, it is common to use a logging service. (The code uses a library to send messages to the logging service.) *All these methods are available with containers.* .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Writing on stdout/stderr - The standard output and error of containers is managed by the container engine. - This means that each line written by the container is received by the engine. - The engine can then do "whatever" with these log lines. - With Docker, the default configuration is to write the logs to local files. - The files can then be queried with e.g. `docker logs` (and the equivalent API request). - This can be customized, as we will see later. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Writing to local files - If we write to files, it is possible to access them, but it's cumbersome. (We have to use `docker exec` or `docker cp`.) - Furthermore, if the container is stopped, we cannot use `docker exec`. - If the container is deleted, the logs disappear. - What should we do for programs that can only log to local files? -- - There are multiple solutions. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Using a volume or bind mount - Instead of writing logs to a normal directory, we can place them on a volume. - The volume can be accessed by other containers. - We can run a program like `filebeat` in another container accessing the same volume. (`filebeat` reads local log files continuously, like `tail -f`, and sends them to a centralized system like ElasticSearch.) - We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`. - The container will write log files to a directory mapped to a host directory. - The log files will appear on the host and be consumable directly from the host.
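A sketch of the shared-volume pattern (the image names are placeholders):

```bash
# the app writes its logs to /var/log/app, backed by a named volume
docker volume create applogs
docker run -d --name app -v applogs:/var/log/app myapp
# a log shipper (e.g. filebeat) reads the same volume, read-only
docker run -d --name shipper -v applogs:/logs:ro mylogshipper
```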
.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Using logging services - We can use logging frameworks (like log4j or the Python `logging` package). - These frameworks require some code and/or configuration in our application code. - These mechanisms can be used identically inside or outside of containers. - Sometimes, we can leverage containerized networking to simplify their setup. - For instance, our code can send log messages to a server named `log`. - The name `log` will resolve to different addresses in development, production, etc. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Using syslog - What if our code (or the program we are running in containers) uses syslog? - One possibility is to run a syslog daemon in the container. - Then that daemon can be set up to write to local files or forward to the network. - Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`. - We can expose a syslog socket to the container (by using a volume or bind-mount). - Then just create a symlink from `/dev/log` to the syslog socket. - Voilà! .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Using logging drivers - If we log to stdout and stderr, the container engine receives the log messages. - The Docker Engine has a modular logging system with many plugins, including: - json-file (the default one) - syslog - journald - gelf - fluentd - splunk - etc. - Each plugin can process and forward the logs to another process or system. .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## A word of warning about `json-file` - By default, log file size is unlimited. - This means that a very verbose container *will* use up all your disk space. (Or a less verbose container, but running for a very long time.) - Log rotation can be enabled by setting a `max-size` option. - Older log files can be removed by setting a `max-file` option. - Just like other logging options, these can be set per container, or globally. Example: ```bash $ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch ``` .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## Demo: sending logs to ELK - We are going to deploy an ELK stack. - It will accept logs over a GELF socket. - We will run a few containers with the `gelf` logging driver. - We will then see our logs in Kibana, the web interface provided by ELK. *Important foreword: this is not an "official" or "recommended" setup; it is just an example. We used ELK in this demo because it's a popular setup and we keep being asked about it; but you will have equal success with Fluentd or other logging stacks!* .debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)] --- ## What's in an ELK stack?
- ELK is made of three components:

  - ElasticSearch (to store and index log entries)

  - Logstash (to receive log entries from various sources, process them, and forward them to various destinations)

  - Kibana (to view/search log entries with a nice UI)

- The only component that we will configure is Logstash

- We will accept log entries using the GELF protocol

- Log entries will be stored in ElasticSearch, and displayed on Logstash's stdout for debugging

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Running ELK

- We are going to use a Compose file describing the ELK stack.

- The Compose file is in the container.training repository on GitHub.

```bash
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training
$ cd elk
$ docker-compose up
```

- Let's have a look at the Compose file while it's deploying.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Our basic ELK deployment

- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.

- We don't need to change the configuration of ElasticSearch.

- We need to tell Kibana the address of ElasticSearch:

  - it is set with the `ELASTICSEARCH_URL` environment variable,

  - by default it is `localhost:9200`; we change it to `elasticsearch:9200`.

- We need to configure Logstash:

  - we pass the entire configuration file through command-line arguments,

  - this is a hack so that we don't have to create an image just for the config.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Sending logs to ELK

- The ELK stack accepts log messages through a GELF socket.

- The GELF socket listens on UDP port 12201.

- To send a message, we need to change the logging driver used by Docker.

- This can be done globally (by reconfiguring the Engine) or on a per-container basis.

- Let's override the logging driver for a single container:

```bash
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
  alpine echo hello world
```

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Viewing the logs in ELK

- Connect to the Kibana interface.

- It is exposed on port 5601.

- Browse http://X.X.X.X:5601.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## "Configuring" Kibana

- Kibana should prompt you to "Configure an index pattern": in the "Time-field name" drop-down, select "@timestamp", and hit the "Create" button.

- Then:

  - click "Discover" (in the top-left corner),

  - click "Last 15 minutes" (in the top-right corner),

  - click "Last 1 hour" (in the list in the middle),

  - click "Auto-refresh" (top-right corner),

  - click "5 seconds" (top-left of the list).

- You should see a series of green bars (with one new green bar every minute).

- Our 'hello world' message should be visible there.

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

## Important afterword

**This is not a "production-grade" setup.**

It is just an educational example.

Since we have only one node, we set up a single ElasticSearch instance and a single Logstash instance.

In a production setup, you need an ElasticSearch cluster (both for capacity and availability reasons).

You also need multiple Logstash instances.

And if you want to withstand bursts of logs, you need some kind of message queue: Redis if you're cheap, Kafka if you want to make sure that you don't drop messages on the floor. Good luck.
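One more practical note: earlier, we enabled the `gelf` driver for a single container. To make it the default for all containers instead, the logging driver can be set in the daemon configuration. Here is a minimal sketch, assuming a standard Linux install where the configuration file lives at `/etc/docker/daemon.json` and the GELF endpoint is reachable on `localhost:12201`:

```bash
# Make gelf the engine-wide default logging driver.
# (File path and GELF address are assumptions; adjust to your setup.)
$ sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://localhost:12201"
  }
}
EOF
$ sudo systemctl restart docker
```

Note that the logging driver is selected when a container is created: containers created before the change keep their previous driver.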
If you want to learn more about the GELF driver, have a look at [this blog post](https://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).

.debug[[containers/Logging.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Logging.md)]

---

class: pic

.interstitial[]

---

name: toc-orchestration-an-overview
class: title

Orchestration, an overview

.nav[
[Previous part](#toc-logging)
|
[Back to table of contents](#toc-part-8)
|
[Next part](#toc-links-and-resources)
]

.debug[(automatically generated title slide)]

---

# Orchestration, an overview

In this chapter, we will:

* Explain what orchestration is and why we would need it.

* Present (from a high-level perspective) some orchestrators.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## What's orchestration?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## What's orchestration?

According to Wikipedia:

*Orchestration describes the __automated__ arrangement, coordination, and management of complex computer systems, middleware, and services.*

--

*[...] orchestration is often discussed in the context of __service-oriented architecture__, __virtualization__, provisioning, Converged Infrastructure and __dynamic datacenter__ topics.*

--

What does that really mean?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 1: dynamic cloud instances

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 1: dynamic cloud instances

- Every night, scale down (by shutting down extraneous replicated instances)

- Every morning, scale up (by deploying new copies)

- "Pay for what you use" (i.e. save big $$$ here)

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 1: dynamic cloud instances

How do we implement this?

- Crontab

- Autoscaling (save even bigger $$$)

That's *relatively* easy.

Now, how are things for our IaaS provider?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 2: dynamic datacenter

- Q: what's the #1 cost in a datacenter?

--

- A: electricity!

--

- Q: what uses electricity?

--

- A: servers, obviously

- A: ... and associated cooling

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 2: dynamic datacenter

- If only we could turn off unused servers during the night...

- Problem: we can only turn off a server if it's totally empty!

  (i.e. all VMs on it are stopped/moved)

- Solution: *migrate* VMs and shut down empty servers
  (e.g. combine two hypervisors at 40% load into one at 80% and one at 0%, then shut down the one at 0%)

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Example 2: dynamic datacenter

How do we implement this?

- Shut down empty hosts (but keep some spare capacity)

- Start hosts again when capacity gets low

- Ability to "live migrate" VMs

  (Xen already did this 10+ years ago)

- Rebalance VMs on a regular basis

  - what if a VM is stopped while we move it?

  - should we allow provisioning on hosts involved in a migration?

*Scheduling* becomes more complex.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## What is scheduling?

According to Wikipedia (again):

*In computing, scheduling is the method by which threads, processes or data flows are given access to system resources.*

The scheduler is concerned mainly with:

- throughput (total amount of work done per time unit);

- turnaround time (between submission and completion);

- response time (between submission and start);

- waiting time (between job readiness and execution);

- fairness (appropriate times according to priorities).

In practice, these goals often conflict.

**"Scheduling" = deciding which resources to use.**

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Exercise 1

- You have:

  - 5 hypervisors (physical machines)

- Each server has:

  - 16 GB RAM, 8 cores, 1 TB disk

- Each week, your team requests:

  - one VM with X RAM, Y CPU, Z disk

Scheduling = deciding which hypervisor to use for each VM.

Difficulty: easy!

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Exercise 2

- You have:

  - 1000+ hypervisors (and counting!)

- Each server has different resources:

  - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk

- Multiple times a day, a different team asks for:

  - up to 50 VMs with different characteristics

Scheduling = deciding which hypervisor to use for each VM.

Difficulty: ???

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Exercise 2

- You have:

  - 1000+ hypervisors (and counting!)

- Each server has different resources:

  - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk

- Multiple times a day, a different team asks for:

  - up to 50 VMs with different characteristics

Scheduling = deciding which hypervisor to use for each VM.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Exercise 3

- You have machines (physical and/or virtual)

- You have containers

- You are trying to put the containers on the machines

- Sounds familiar?

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Scheduling with one resource

.center[]

## We can't fit a job of size 6 :(

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Scheduling with one resource

.center[]

## ... Now we can!
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Scheduling with two resources

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Scheduling with three resources

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## You need to be good at this

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## But also, you must be quick!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## And be web scale!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## And think outside (?) of the box!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: pic

## Good luck!

.center[]

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## TL,DR

* Scheduling with multiple resources (dimensions) is hard.

* Don't expect to solve the problem with a Tiny Shell Script.

* There are literally tons of research papers written on this.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## But our orchestrator also needs to manage ...

* Network connectivity (or filtering) between containers.

* Load balancing (external and internal).

* Failure recovery (if a node or a whole datacenter fails).

* Rolling out new versions of our applications.

  (Canary deployments, blue/green deployments...)

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Some orchestrators

We are going to briefly present a few orchestrators.

There is no "absolute best" orchestrator.

It depends on:

- your applications,

- your requirements,

- your pre-existing skills...

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Nomad

- Open Source project by HashiCorp.

- Arbitrary scheduler (not just for containers).

- Great if you want to schedule mixed workloads.

  (VMs, containers, processes...)

- Less integration with the rest of the container ecosystem.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Mesos

- Open Source project at the Apache Software Foundation.

- Arbitrary scheduler (not just for containers).

- Two-level scheduler.

- Top-level scheduler acts as a resource broker.

- Second-level schedulers (aka "frameworks") obtain resources from the top level.

- Frameworks implement various strategies.

  (Marathon = long-running processes; Chronos = run at intervals; ...)

- Commercial offering through DC/OS by Mesosphere.
.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Rancher

- Rancher 1 offered a simple interface for Docker hosts.

- Rancher 2 is a complete management platform for Docker and Kubernetes.

- Technically not an orchestrator, but it's a popular option.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Swarm

- Tightly integrated with the Docker Engine.

- Extremely simple to deploy and set up, even in multi-manager (HA) mode.

- Secure by default.

- Strongly opinionated:

  - smaller set of features,

  - easier to operate.

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

## Kubernetes

- Open Source project initiated by Google.

- Contributions from many other actors.

- *De facto* standard for container orchestration.

- Many deployment options; some of them very complex.

- Reputation: steep learning curve.

- Reality:

  - true, if we try to understand *everything*;

  - false, if we focus on what matters.

???

:EN:- Orchestration overview
:FR:- Survol de techniques d'orchestration

.debug[[containers/Orchestration_Overview.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/Orchestration_Overview.md)]

---

class: title, self-paced

Thank you!

.debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)]

---

class: title, in-person

That's all, folks!
Questions?

.debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/main/slides/shared/thankyou.md)]

---

class: pic

.interstitial[]

---

name: toc-links-and-resources
class: title

Links and resources

.nav[
[Previous part](#toc-orchestration-an-overview)
|
[Back to table of contents](#toc-part-9)
|
[Next part](#toc-)
]

.debug[(automatically generated title slide)]

---

# Links and resources

- [Docker Community Slack](https://community.docker.com/registrations/groups/4316)
- [Docker Community Forums](https://forums.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker Blog](https://blog.docker.com/)
- [Docker documentation](https://docs.docker.com/)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Docker on Twitter](https://twitter.com/docker)
- [Play With Docker Hands-On Labs](https://training.play-with-docker.com/)

.footnote[These slides (and future updates) are on → https://container.training/]

.debug[[containers/links.md](https://github.com/jpetazzo/container.training/tree/main/slides/containers/links.md)]