Kubernetes – An Enterprise Guide: Master containerized application deployments, integrate enterprise systems, and achieve scalability, Third Edition

Marc Boorshtein, Scott Surovich

Docker and Container Essentials

Containers have become an incredibly popular and influential technology that marks a significant change from legacy applications. Everyone, from tech companies to big corporations and end users, has now widely embraced containers to handle their day-to-day tasks. It’s worth noting that the conventional method of installing ready-made commercial applications is gradually transforming into fully containerized setups. Considering the sheer magnitude of this technological shift, it becomes essential for people working in the field of information technology to gain knowledge and understand the concept of containers.

This chapter will provide an overview of the issues that containers aim to solve. We will begin by highlighting the significance of containers. Then, we will introduce Docker, the runtime that played a pivotal role in the rise of containerization, and discuss its relationship with Kubernetes.

This chapter intends to provide you with an understanding of running containers in Docker. One common question you may have heard is: “What is the relationship of Docker to Kubernetes?” Well, in today’s world, Docker is not tied to Kubernetes at all – you do not need Docker to run Kubernetes and you don’t need it to create containers. We are discussing Docker in this chapter to provide you with the skills to run containers locally and test your images before you deploy them to a Kubernetes cluster.

By the end of this chapter, you will have a clear understanding of how to install Docker and how to effectively utilize the commonly used Docker command-line interface (CLI) commands.

In this chapter, we will cover the following main topics:

  • Understanding the need for containerization
  • Understanding why Kubernetes removed Docker
  • Understanding Docker
  • Installing Docker
  • Using the Docker CLI

Technical requirements

This chapter has the following technical requirements:

  • An Ubuntu 22.04 server (or another machine with a supported Docker installation) to follow the hands-on exercises
  • The scripts for this chapter, located in the chapter1 directory of this book’s GitHub repository

Understanding the need for containerization

You may have experienced a conversation like this at your office or school:

Developer: “Here’s the new application. It went through weeks of testing and you are the first to get the new release.”

….. A little while later …..

User: “It’s not working. When I click the submit button, it shows an error about a missing dependency.”

Developer: “That’s weird; it’s working fine on my machine.”

Encountering such issues can be incredibly frustrating for developers when they’re deploying an application. Oftentimes, these problems occur because a library that was present on the developer’s machine is missing from the final package. One might think that a simple solution would be to include all the libraries in the release, but what if this release includes a newer version of a library that replaces an older version, which another application may still rely on?

Developers have to carefully consider their new releases and the potential conflicts they may cause with existing software on users’ workstations. It becomes a delicate balancing act that often requires larger deployment teams to thoroughly test the application on various system configurations. This situation can result in additional work for the developer or, in extreme cases, render the application completely incompatible with an existing one.

Over the years, there have been several attempts to simplify application delivery. One solution is VMware’s ThinApp, which aims to virtualize an application (not to be confused with virtualizing the entire operating system (OS)). It allows you to bundle the application and its dependencies into a single executable package. By doing so, all the application’s dependencies are contained within the package, eliminating conflicts with other application dependencies. This not only ensures application isolation but also enhances security and reduces the complexities of OS migrations.

You might not have come across terms like application packaging or application-on-a-stick until now, but it seems like a great solution to the infamous “it worked on my machine” problem. However, there are reasons why it hasn’t gained widespread adoption as anticipated. Firstly, most solutions in this space are paid offerings that require a significant investment. Additionally, they require a “clean PC,” meaning that for each application you want to virtualize, you need to start with a fresh system. The package you create captures the differences between the base installation and any changes made afterward. These differences are then packaged into a distribution file, which can be executed on any workstation.

We’ve mentioned application virtualization to highlight that application issues such as “it works on my machine” have had different solutions over the years. Products such as ThinApp are just one attempt at solving the problem. Other attempts include running the application on a server using Citrix, Remote Desktop, Linux containers, chroot jails, and even virtual machines.

Understanding why Kubernetes removed Docker

Kubernetes removed Docker as a supported container runtime in version 1.24. While it has been removed as a runtime engine option, you can still create new containers using Docker and they will run on any runtime that supports the Open Container Initiative (OCI) specification. OCI is a set of standards for containers and their runtimes. These standards ensure that containers remain portable, regardless of the container platform or the runtime used to execute them.

When you create a container using Docker, you are creating a container that is fully OCI compliant, so it will still run on Kubernetes clusters that are running any Kubernetes-compatible container runtime.

To fully explain the impact and the supported alternatives, we need to understand what a container runtime is. A high-level definition would be that a container runtime is the software layer that runs and manages containers. Like many components that make up a Kubernetes cluster, the runtime is not included as part of Kubernetes – it is a pluggable module that needs to be supplied by a vendor, or by you, to create a functioning cluster.

There are many technical reasons that led to the decision to deprecate and remove Docker, but at a high level, the main concerns were as follows:

  • Docker contains multiple pieces inside of the Docker runtime to support its own remote API and user experience (UX). Kubernetes only requires one component in the executable, dockerd, which is the runtime process that manages containers. All other pieces of the executable contribute nothing to using Docker in a Kubernetes cluster. These extra components make the binary bloated and can lead to additional bugs, security issues, or performance problems.
  • Docker does not conform to the Container Runtime Interface (CRI) standard, which was introduced to create a set of standards to easily integrate container runtimes in Kubernetes. Since it doesn’t comply, the Kubernetes team has had extra work that only caters to supporting Docker.

When it comes to local container testing and development, you can still use Docker on your workstation or server. Considering the previous statement, if you build a container on Docker and the container successfully runs on your Docker runtime system, it will run on a Kubernetes cluster that does not use Docker as the runtime.

Removing Docker will have very little impact on most users of Kubernetes in new clusters. Containers will still run using any standard method, as they would with Docker as the container runtime. If you happen to manage a cluster, you may need to learn new commands when you troubleshoot a Kubernetes node – you will not have a Docker command on the node to look at running containers, clean up volumes, and so on.

Kubernetes supports a number of runtimes in place of Docker. Two of the most commonly used runtimes are as follows:

  • containerd
  • CRI-O

While these are the two commonly used runtimes, there are a number of other compatible runtimes available. You can always view the latest supported runtimes on the Kubernetes GitHub page at https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md.
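
If you already have access to a running cluster, a quick way to see which runtime each node is using is the wide node listing; the CONTAINER-RUNTIME column shows the runtime and its version:

kubectl get nodes -o wide
# The CONTAINER-RUNTIME column shows a value such as containerd://1.7.2 or cri-o://1.28.1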

For more details on the impact of deprecating and removing Docker, refer to the article called Don’t Panic: Kubernetes and Docker on the Kubernetes.io site at https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/.

Introducing Docker

Both the industry and end users were seeking a solution that was both convenient and affordable, and this is where Docker containers came in. While containers have been utilized in different ways over time, Docker has brought about a transformation by providing a runtime and tools for everyday users and developers.

Docker brought an abstraction layer to the masses. It was easy to use and didn’t require a clean PC for every application before creating a package, thus offering a solution for dependency issues, but most attractive of all, it was free. Docker became a standard for many projects on GitHub, where teams would often create a Docker container and distribute the Docker image or Dockerfile to team members, providing a standard testing or development environment. This adoption by end users is what eventually brought Docker to the enterprise and, ultimately, what made it the standard it has become today.

Within the scope of this book, we will be focusing on what you will need to know when trying to use a local Kubernetes environment. Docker has a long and interesting history of how it evolved into the standard container image format that we use today. We encourage you to read about the company and how they ushered in the container world we know today.

While our focus is not to teach Docker inside out, we feel that those of you who are new to Docker would benefit from a quick primer on general container concepts.

If you have some Docker experience and understand terminology such as ephemeral and stateless, you can jump to the Installing Docker section.

Docker versus Moby

When the Docker runtime was developed, it was a single code base. The single code base contained every function that Docker offered, whether you used them or not. This led to inefficiencies, and it started to hinder the progression of Docker and containers in general. To address this, Docker broke the runtime up into modular components and donated them to the open-source Moby project, which now serves as the upstream for Docker’s own products.

The following table shows the differences between the Docker and Moby projects.

| Feature | Docker | Moby |
| --- | --- | --- |
| Development | The primary contributor is Docker, with some community support | It is open-source software with heavy community development and support |
| Project scope | The complete platform that includes all components to build and run containers | It is a modular platform for building container-based components and solutions |
| Ownership | It is a branded product, offered by Docker, Inc. | It is an open-source project that is used to build various container solutions |
| Configuration | A full default configuration is included to make it easy for users to use it quickly | It has more available customizations, providing users with the ability to address their specific requirements |
| Commercial support | It offers full support, including enterprise support | It is offered as open-source software; there is no direct support from the Moby project |

Table 1.1: Docker versus Moby features

To recap: Moby is a project that was started by Docker, but it is not the complete Docker runtime. The Docker runtime is built from the Moby open-source components, combined with Docker’s own open-sourced components.

Now, let’s move on to understanding Docker a little more and how you can use it to create and manage containers.

Understanding Docker

This book assumes that you have a foundational understanding of Docker and container concepts. However, we know that not everyone will have prior experience with Docker or containers. Therefore, we have included this crash course to introduce you to container concepts and guide you through the usage of Docker.

If you are new to containers, we suggest reading the documentation that can be found on Docker’s website for additional information: https://docs.docker.com/.

Containers are ephemeral

The first thing to understand is that containers are ephemeral.

The term “ephemeral” means something that exists for a short period. Containers can be intentionally terminated or automatically restarted without any user involvement or consequences. To better understand this concept, let’s look at an example – imagine someone interactively adds files to a web server running within a container. The uploaded files are temporary because they were not originally part of the base image.

This means that once a container is built and running, any changes that are made to the container will not be saved once it is removed, or destroyed, from the Docker host. Let’s look at a full example:

  1. You start a container running a web server using NGINX on your host without any base HTML pages.
  2. Using a Docker command, you execute a copy command to copy some web files into the container’s filesystem.
  3. To test that the copy was successful, you go to the website and confirm that it is serving the correct web pages.
  4. Happy with the results, you stop the container and remove it from the host. Later that day, you want to show a coworker the website and you start your NGINX container. You go to the website again, but when the site opens, you receive a 404 error (page not found error).

What happened to the files you uploaded before you stopped and removed the container from the host?

The reason your web pages cannot be found after the container was restarted is that all containers are ephemeral. Whatever is in the base container image is all that will be included each time the container is initially started. Any changes that you make inside a container are short-lived.

If you need to add permanent files to an existing image, you need to rebuild the image with the files included or, as we will explain in the Persistent data section later in this chapter, you could mount a Docker volume in your container.
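
As a minimal sketch of the rebuild approach, assuming the Bitnami NGINX image used in our examples (which serves content from /app) and a local html directory holding your pages, a Dockerfile that bakes the content into a new image could look like this:

# Dockerfile: bake the web content into a new, permanent image layer
FROM bitnami/nginx:latest
COPY ./html/ /app/

Building it with docker build -t my-nginx:1.0 . produces an image whose pages survive the container being removed.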

At this point, the main concept to understand is that containers are ephemeral.

But wait! You may be wondering, “If containers are ephemeral, how did I add web pages to the server?” Ephemeral just means that changes will not be saved; it doesn’t stop you from making changes to a running container.

Any changes made to a running container will be written to a temporary layer, called the container layer, which is a directory on the local host filesystem. Docker uses a storage driver to handle requests that use the container layer; the storage driver is responsible for managing and storing images and containers on your Docker host.

This location will store all changes in the container’s filesystem so that when you add the HTML pages to the container, they will be stored on the local host. The container layer is tied to the container ID of the running image and it will remain on the host system until the container is removed from Docker, either by using the CLI or by running a Docker prune job (see Figure 1.1).

Considering that containers are ephemeral and the image layers are read-only, you might wonder how it’s possible to modify data within a container. Docker addresses this by utilizing image layering, which involves creating interconnected layers that collectively function as a single filesystem. Through this, changes can be made to the container’s data, even though the underlying image remains immutable.

Docker images

A Docker image is composed of multiple image layers, each accompanied by a JavaScript Object Notation (JSON) file that stores metadata specific to the layer. When a container image is launched, these layers are combined to form the application that users interact with.

You can read more about the contents of an image on Docker’s GitHub at https://github.com/moby/moby/blob/master/image/spec/v1.1.md.

Image layers

As we mentioned in the previous section, a running container uses a container layer that is “on top” of the base image layer, as shown in the following diagram:

Figure 1.1: Docker image layers

The image layers cannot be written to since they are in a read-only state, but the temporary container layer is in a writeable state. Any data that you add to the container is stored in this layer and will be retained as long as the container is running.

To deal with multiple layers efficiently, Docker implements copy-on-write, which means that if a file already exists in a lower layer, it will not be created again; a file is only written if it does not exist in the current image or needs to change. In the container world, if a file exists in a lower layer, the layers above it do not need to include it. For example, if layer 1 had a file called /opt/nginx/index.html in it, layer 2 does not need the same file in its layer.

This explains how the system handles files that either exist or do not exist, but what about a file that has been modified? There will be times when you’ll need to replace a file that is in a lower layer. You may need to do this when you are building an image or as a temporary fix to a running container issue. The copy-on-write system knows how to deal with these issues. Since images read from the top down, the container uses only the highest layer file. If your system had a/opt/nginx/index.html file in layer 1 and you modified and saved the file, the running container would store the new file in the container layer. Since the container layer is the topmost layer, the new copy ofindex.html would always be read before the older version in the image layer.
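
A quick way to see what has accumulated in a container’s writable layer is the docker diff command, which lists the files that have been added (A), changed (C), or deleted (D) compared to the image:

docker diff <container ID or name>
# Example output – an uploaded page shows up as an addition to the writable layer:
# C /app
# A /app/index.html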

Persistent data

Being limited to ephemeral-only containers would severely limit the use cases for Docker. You will probably encounter use cases where persistent storage is needed or data must be retained even if a container is stopped.

Remember, when you store data in the container image layer, the base image does not change. When the container is removed from the host, the container layer is also removed. If the same image is used to start a new container, a new container image layer is created. While containers themselves are ephemeral, you can achieve data persistence by incorporating a Docker volume. By utilizing a Docker volume, data can be stored outside the container, enabling it to persist beyond the container’s lifespan.
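
As a short example (the volume name webdata and the mount point /data are just for illustration), the following creates a named volume and mounts it into a container; anything written under /data lives in the volume and survives the container being removed:

docker volume create webdata                                      # create a named volume
docker run -d --name web1 -v webdata:/data bitnami/nginx:latest   # mount the volume at /data
docker rm -f web1                                                 # remove the container
docker run -d --name web2 -v webdata:/data bitnami/nginx:latest   # the files in the volume are still there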

Accessing services running in containers

Unlike a physical machine or a virtual machine, containers do not connect to a network directly. When a container needs to send or receive traffic, it goes through the Docker host system using a bridged network address translation (NAT) connection. This means that when you run a container and you want to receive incoming traffic requests, you need to expose the ports for each of the containers that you wish to receive traffic on. On a Linux-based system, iptables has rules to forward traffic to the Docker daemon, which will service the assigned ports for each container. There is no need to worry about how the iptables rules are created, as Docker will handle that for you by using the port information provided when you start the container. If you are new to Linux, iptables may be new to you.

At a high level, iptables is used to manage network traffic and keep it secure within a cluster. It controls the flow of network connections between components in the cluster, deciding which connections are allowed and which ones are blocked.
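
For example, to make an NGINX container reachable from outside the host, you publish a port with the -p <host port>:<container port> option and Docker creates the forwarding rules for you (host port 8080 here is just an example):

docker run -d --name web-test -p 8080:80 nginx   # forward host port 8080 to port 80 in the container
curl http://localhost:8080                       # the request is NATed through to the container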

That concludes the introduction to container fundamentals and Docker concepts. In the next section, we will guide you through the process of installing Docker on your host.

Installing Docker

The hands-on exercises in this book will require that you have a working Docker host. To install Docker, we have included a script located in this book’s GitHub repository, in the chapter1 directory, called install-docker.sh.

Today, you can install Docker on just about every hardware platform out there. Each version of Docker acts and looks the same on each platform, making development and using Docker easy for people who need to develop cross-platform applications. By making the functions and commands the same between different platforms, developers do not need to learn a different container runtime to run images.

The following is a table of Docker’s available platforms. As you can see, there are installations for multiple OSs, as well as multiple architectures:

| Desktop Platform | x86_64/amd64 | arm64 (Apple Silicon) |
| --- | --- | --- |
| Docker Desktop (Linux) | Yes | |
| Docker Desktop (macOS) | Yes | Yes |
| Docker Desktop (Windows) | Yes | |

On the server side, Docker Engine packages are available for CentOS, Debian, Fedora, Raspberry Pi OS, RHEL, SLES, and Ubuntu, covering a mix of the x86_64/amd64, arm64/aarch64, arm (32-bit), ppc64le, and s390x architectures (the exact set varies by distribution).

Table 1.2: Available Docker platforms

Images that are created using one architecture cannot run on a different architecture. This means that you cannot create an image based on x86 hardware and expect that same image to run on your Raspberry Pi running an ARM processor. It’s also important to note that while you can run a Linux container on a Windows machine, you cannot run a Windows container on a Linux machine.

While images, by default, are not cross-architecture compatible, there are new tools to create what’s known as a multi-platform image. Multi-platform images can be used across different architectures or processors from a single image reference, rather than having multiple images, such as one for NGINX on x86, another one for ARM, and another one for PowerPC. This will help you simplify your management and deployment of containerized applications. Since multi-platform images contain a variant for each architecture you include, the correct variant needs to be selected when the image is deployed. Luckily, the container runtime will help out and automatically select the correct architecture from the image manifest.
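
You can check which architectures a published image provides by inspecting its manifest; for example (output abbreviated and illustrative):

docker manifest inspect nginx | grep architecture
# "architecture": "amd64",
# "architecture": "arm64",
# "architecture": "s390x",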

The use of multi-platform images provides portability, flexibility, and scalability for your containers across cloud platforms, edge deployments, and hybrid infrastructure. With the use of ARM-based servers growing in the industry and the heavy use of Raspberry Pi by people learning Kubernetes, cross-platform images will help make consuming containers quicker and easier.

For example, in 2020, Apple released the M1 chip, ending the era of Apple running Intel processors in favor of the ARM processor. We’re not going to get into the details of the difference, only that they are different and this leads to important challenges for container developers and users. Docker does have Docker Desktop, a macOS tool for running containers that lets you use the same workflows that you used if you had a Docker installation on Linux, Windows, or x86 macOS. Docker will try to match the architecture of the underlying host when pulling or building images. On ARM-based systems, if you are attempting to pull an image that does not have an ARM version, Docker will throw an error due to the architecture incompatibilities. If you are attempting to build an image, it will build an ARM version on macOS, which cannot run on x86 machines.

Multi-platform images can be complex to create. If you want additional details on creating multi-platform images, visit the Multi-platform images page on Docker’s website: https://docs.docker.com/build/building/multi-platform/.

The installation procedures that are used to install Docker vary between platforms. Luckily, Docker has documented many of them on their website: https://docs.docker.com/install/.

In this chapter, we will install Docker on an Ubuntu 22.04 system. If you do not have an Ubuntu machine to install on, you can still read about the installation steps, as each step will be explained and does not require that you have a running system to understand the process. If you have a different Linux installation, you can use the installation procedures outlined on Docker’s site at https://docs.docker.com/. Steps are provided for CentOS, Debian, Fedora, and Ubuntu, and there are generic steps for other Linux distributions.

Preparing to install Docker

Now that we have introduced Docker, the next step is to select an installation method. Docker’s installation changes between not only different Linux distributions but also versions of the same Linux distribution. Our script is based on using an Ubuntu 22.04 server, so it may not work on other versions of Ubuntu. You can install Docker using one of two methods:

  • Add the Docker repositories to your host system
  • Install using Docker scripts

The first option is considered the best option since it allows for easy installation and updates to the Docker engine. The second option is designed for installing Docker on testing/development environments and is not recommended for deployment in production environments.

Since the preferred method is to add Docker’s repository to our host, we will use that option.

Installing Docker on Ubuntu

Now that we have added the required repositories, the next step is to install Docker.

We have provided a script in the chapter1 folder of the Git repository called install-docker.sh. When you execute the script, it will automatically install all of the necessary binaries required for Docker to run.

To provide a brief summary of the script, it begins by modifying a specific value in the /etc/needrestart/needrestart.conf file. In Ubuntu 22.04, there was a change in how daemons are restarted, where users might be required to manually select which system daemons to restart. To simplify the exercises described in the book, we alter the restart value in the needrestart.conf file to “automatic” instead of prompting for each changed service.
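
We have not reproduced the script here, but the change it makes boils down to something like the following (a sketch based on the stock needrestart.conf layout; the exact command in the repository’s script may differ):

# Switch needrestart from interactive prompts ('i') to automatic restarts ('a')
sudo sed -i "s/#\$nrconf{restart} = 'i';/\$nrconf{restart} = 'a';/" /etc/needrestart/needrestart.conf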

Next, we install a few utilities like vim, ca-certificates, curl, and GnuPG. The first three utilities are fairly common, but the last one, GnuPG, may be new to some readers and might need some explaining. GnuPG, an acronym for GNU Privacy Guard, enhances Ubuntu with a range of cryptographic capabilities such as encryption, decryption, digital signatures, and key management.

In our Docker deployment, we need to add Docker’s GPG public key, which is part of a cryptographic key pair that secures communication and maintains data integrity. GPG keys use asymmetric encryption, which involves the use of two different, but related, keys, known as a public key and a private key. These keys are generated together as a pair, but they provide different functions. The private key, which remains confidential, is used to generate the digital signatures on the downloaded files. The public key is publicly available and is used to verify digital signatures created by the private key.

Next, we need to add Docker’s repository to our local repository list. When we add the repository to the list, we need to include the Docker certificate. The docker.gpg key was downloaded by the script from Docker’s site and stored on the local server under /etc/apt/keyrings/docker.gpg. When we add the repository to the repository list, we add the key by using the signed-by option in the /etc/apt/sources.list.d/docker.list file. The full entry is shown here:

deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu   jammy stable
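
For reference, downloading the key and registering the repository look roughly like Docker’s documented setup (a sketch; the script in the repository may differ slightly):

sudo install -m 0755 -d /etc/apt/keyrings                          # create the keyring directory
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg               # store Docker's GPG key
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list                     # register the repository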

By including the Docker repository in our local apt repository list, we gain the ability to install the Docker binaries effortlessly. This process entails using a straightforward apt-get install command, which will install the five essential packages for Docker: docker-ce, docker-ce-cli, containerd.io, docker-buildx-plugin, and docker-compose-plugin. As previously stated, all these files are signed with Docker’s GPG key. Thanks to the inclusion of Docker’s key on our server, we can be confident that the files are safe and originate from a reliable source.
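
The installation step itself is a single command, using the package names listed above:

sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin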

Once Docker is successfully installed, the next step involves enabling and configuring the Docker daemon to start automatically during system boot using the systemctl command. This process follows the standard procedure applied to most system daemons installed on Linux servers.
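
On a systemd-based host such as Ubuntu 22.04, that is the familiar pair of commands:

sudo systemctl enable docker   # start the daemon automatically at boot
sudo systemctl start docker    # start it now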

Rather than go over each line of code in each script, we have included comments in the scripts to help you understand what each command and step is doing. Where it may help with some topics, we will include sections of code in the chapters for reference.

After installing Docker, let’s get some configuration out of the way. First, you will rarely execute commands as root in the real world, so we need to grant permissions to use Docker to your user.

Granting Docker permissions

In a default installation, Docker requires root access, so you will need to run all Docker commands as root. Rather than using sudo with every Docker command, you can add your user account to a new group on the server that provides Docker access without requiring sudo for every command.

If you are logged on as a standard user and try to run a Docker command, you will receive an error:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images/json: dial unix /var/run/docker.sock: connect: permission denied

To allow your user, or any other user you may want to add, to execute Docker commands, you need to add the users to a new group called docker that was created during the installation of Docker. The following is an example command you can use to add the currently logged-on user to the group:

sudo usermod -aG docker $USER

To apply the new group membership to your account, you can either log off and log back into the Docker host, or activate the group changes using the newgrp command:

newgrp docker

Now, let’s test that Docker is working by running the standard hello-world image (note that we do not require sudo to run the Docker command):

docker run hello-world

You should see the following output, which verifies that your user has access to Docker:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:37a0b92b08d4919615c3ee023f7ddb068d12b8387475d64c622ac30f45c29c51
Status: Downloaded newer image for hello-world:latest
Hello from Docker!

This message shows that your installation is working correctly – congratulations!

To generate this message, Docker took the following steps:

  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the hello-world image from Docker Hub (amd64).
  3. The Docker daemon created a new container from the image that runs the executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious – you can run an Ubuntu container with the following:

$ docker run -it ubuntu bash

For more examples and ideas, visit https://docs.docker.com/get-started/.

Now that we’ve granted Docker permission, we can start unlocking the most common Docker commands by learning how to use the Docker CLI.

Using the Docker CLI

You used the Docker CLI when you ran the hello-world container to test your installation. The Docker command is what you will use to interact with the Docker daemon. Using this single executable, you can do the following, and more:

  • Start and stop containers
  • Pull and push images
  • Run a shell in an active container
  • Look at container logs
  • Create Docker volumes
  • Create Docker networks
  • Prune old images and volumes

This chapter is not meant to include an exhaustive explanation of every Docker command; instead, we will explain some of the common commands that you will need to use to interact with the Docker daemon and containers.

You can break down Docker commands into two categories: general Docker commands and Docker management commands. The standard Docker commands allow you to manage containers, while management commands allow you to manage Docker options such as managing volumes and networking.

docker help

It is quite common to forget the syntax or options of a command, and Docker acknowledges this. If you ever find yourself in a situation where you can’t recall a command, you can always depend on the docker help command. It will show you what a command can do and how to use it.
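
You can ask for help on the CLI as a whole or on a single command:

docker help         # list every available command
docker help run     # detailed help for one command (equivalent to docker run --help)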

docker run

To run a container, use the docker run command with the provided image name. But, before executing a docker run command, you should understand the options you can supply when starting a container.

In its simplest form, an example command you can use to run an NGINX web server would be docker run bitnami/nginx:latest. This will start a container running NGINX, and it will run in the foreground, showing logs of the application running in the container. Pressing Ctrl + C will stop the running container and terminate the NGINX server:

nginx 22:52:27.42
nginx 22:52:27.42 Welcome to the Bitnami nginx container
nginx 22:52:27.43 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-nginx
nginx 22:52:27.43 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-nginx/issues
nginx 22:52:27.44
nginx 22:52:27.44 INFO  ==> ** Starting NGINX setup **
nginx 22:52:27.49 INFO  ==> Validating settings in NGINX_* env vars
nginx 22:52:27.50 INFO  ==> Initializing NGINX
nginx 22:52:27.53 INFO  ==> ** NGINX setup finished! **
nginx 22:52:27.57 INFO  ==> ** Starting NGINX **

As you saw, when you used Ctrl + C to stop the container, NGINX also stopped. In most cases, you want a container to start and continue to run without being in the foreground, allowing the system to run other tasks while the container also continues to run. To run a container as a background process, you need to add the -d, or --detach, option to your Docker command, which will run your container in detached mode. Now, when you run a detached container, you will only see the container ID, instead of the interactive or attached screen:

[root@localhost ~]# docker run -d bitnami/nginx:latest
13bdde13d0027e366a81d9a19a56c736c28feb6d8354b363ee738d2399023f80
[root@localhost ~]#

By default, containers will be given a random name once they are started. In our previous detached example, if we list the running containers, we will see that the container has been given the name silly_keldysh, as shown in the following output:

CONTAINER ID      IMAGE                      NAMES
13bdde13d002      bitnami/nginx:latest       silly_keldysh

If you do not assign a name to your container, it can quickly get confusing as you start to run multiple containers on a single host. To make management easier, you should always start your container with a name that will make it easier to manage. Docker provides another option with the run command: the --name option. Building on our previous example, we will name our container nginx-test. Our new docker run command will be as follows:

docker run --name nginx-test -d bitnami/nginx:latest

Just like running any detached image, this will return the container ID, but not the name you provided. In order to verify that the container ran with the name nginx-test, we can list the containers using the docker ps command, which we will explain next.

docker ps

Often, you will need to retrieve a list of running containers or a list of containers that have been stopped. The Docker CLI has a ps command that will list all running containers; by adding an extra flag, it will also list containers that have been stopped. The output will list the containers, including their container ID, image tag, entry command, creation date, status, ports, and container name. The following is an example of containers that are currently running:

CONTAINER ID   IMAGE                  COMMAND                  STATUS
13bdde13d002   bitnami/nginx:latest   "/opt/bitnami/script…"   Up 4 hours
3302f2728133   registry:2             "/entrypoint.sh /etc…"   Up 3 hours

This is helpful if the container you are looking for is currently running, but what if the container has stopped, or even worse, what if the container failed to start and then stopped? You can view the status of all containers, including previously run containers, by adding the -a flag to the docker ps command. When you execute docker ps -a, you will see the same output from a standard ps command, but you will notice that the list may include additional containers.

How can you tell which containers are running versus which ones have stopped? If you look at the STATUS field of the list, the running containers will show a running time; for example, Up xx hours, or Up xx days. However, if the container has been stopped for any reason, the status will show when it stopped; for example, Exited (0) 10 minutes ago.

IMAGE                  COMMAND                  CREATED          STATUS
bitnami/nginx:latest   "/opt/bitnami/script…"   10 minutes ago   Up 10 minutes
bitnami/nginx:latest   "/opt/bitnami/script…"   12 minutes ago   Exited (0) 10 minutes ago

A stopped container does not mean there was an issue with running the image. There are containers that may execute a single task and, once completed, the container may stop gracefully. One way to determine whether an exit was graceful or whether it was due to a failed startup is to look at the exited status code. There are a number of exit codes that you can use to find out why a container has exited.

| Exit Code | Description |
| --- | --- |
| 0 | The command was executed successfully without any issues. |
| 1 | The command failed due to an unexpected error. |
| 2 | The command was unable to find the specified resource or encountered a similar issue. |
| 125 | The command failed due to a Docker-related error. |
| 126 | The command failed because the Docker binary or script could not be executed. |
| 127 | The command failed because the Docker binary or script could not be found. |
| 128+ | The command failed due to a specific Docker-related error or exception. |

Table 1.3: Docker exit codes
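
If a container has already exited, you can also read the exit code directly with docker inspect; for example, for a container named nginx-test:

docker inspect --format '{{.State.ExitCode}}' nginx-test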

docker start and stop

You may need to stop a container due to limited system resources, limiting you to running a few containers simultaneously. To stop a running container and free up resources, use the docker stop command with the name of the container, or the container ID, you want to stop.

If you need to start that container at a future time for additional testing or development, execute docker start <name>, which will start the container with all of the options that it was originally started with, including any networks or volumes that were assigned.
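
Using the nginx-test container from the earlier example:

docker stop nginx-test    # stop the container and free its resources
docker start nginx-test   # start it again with its original options, networks, and volumes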

docker attach

In order to troubleshoot an issue or inspect a log file, it may be necessary to interact with a container. One way to connect to a container that is currently running is by using the docker attach <container ID/name> command. When you perform this action, you establish a connection with the active process of the running container. If you attach to a container that is executing a process, it is unlikely that you will see any prompt. In fact, it’s likely that you will see a blank screen for a period of time until the container starts producing output that is displayed on the screen.

You should always be cautious when attaching to a container. It’s easy to accidentally stop the running process and, in turn, stop the container. Let’s use an example of attaching to a web server running NGINX. First, we need to verify that the container is running using docker ps:

CONTAINER ID   IMAGE                 COMMAND                   STATUS
4a77c14a236a   nginx                 "/docker-entrypoint.…"    Up 33 seconds

Using the attach command, we execute docker attach 4a77c14a236a.

When you attach to a process, you will only be able to interact with the running process, and the only output you will see is data being sent to standard output. In the case of the NGINX container, the attach command has been attached to the NGINX process. To show this, we will leave the attachment and curl to the web server from another session. Once we curl to the container, we will see logs outputted to the attached console:

[root@astra-master manifests]# docker attach 4a77c14a236a
172.17.0.1 - - [15/Oct/2021:23:28:31 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [15/Oct/2021:23:28:33 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [15/Oct/2021:23:28:34 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [15/Oct/2021:23:28:35 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [15/Oct/2021:23:28:36 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"

We mentioned that you need to be careful once you attach to the container. Those who are new to Docker may attach to the NGINX image and assume that nothing is happening on the server, or that a process appears to be hung, so they may decide to break out of the container using the standard Ctrl + C keyboard command. This will stop the container and send them back to a Bash prompt, where they may run docker ps to look at the running containers:

root@localhost:~# docker ps
CONTAINER ID      IMAGE  COMMAND    CREATED    STATUS
root@localhost:~#

What happened to the NGINX container? We didn’t execute a docker stop command, and the container was running until we attached to the container. Why did the container stop after we attached to it?

As we mentioned, when an attachment is made to a container, you are attached to the running process. All keyboard commands will act in the same way as if you were at a physical server that was running NGINX in a regular shell. This means that when the user used Ctrl + C to return to a prompt, they stopped the running NGINX process.

If we press Ctrl + C to exit the container, we will receive an output that shows that the process has been terminated. The following output shows an example of what happens in our NGINX example:

2023/06/27 19:38:02 [notice] 1#1: signal 2 (SIGINT) received, exiting
2023/06/27 19:38:02 [notice] 31#31: exiting
2023/06/27 19:38:02 [notice] 30#30: exiting
2023/06/27 19:38:02 [notice] 29#29: exiting
2023/06/27 19:38:02 [notice] 31#31: exit
2023/06/27 19:38:02 [notice] 30#30: exit
2023/06/27 19:38:02 [notice] 29#29: exit
2023/06/27 19:38:02 [notice] 32#32: exiting
2023/06/27 19:38:02 [notice] 32#32: exit
2023/06/27 19:38:03 [notice] 1#1: signal 17 (SIGCHLD) received from 31
2023/06/27 19:38:03 [notice] 1#1: worker process 29 exited with code 0
2023/06/27 19:38:03 [notice] 1#1: worker process 31 exited with code 0
2023/06/27 19:38:03 [notice] 1#1: worker process 32 exited with code 0
2023/06/27 19:38:03 [notice] 1#1: signal 29 (SIGIO) received
2023/06/27 19:38:03 [notice] 1#1: signal 17 (SIGCHLD) received from 29
2023/06/27 19:38:03 [notice] 1#1: signal 17 (SIGCHLD) received from 30
2023/06/27 19:38:03 [notice] 1#1: worker process 30 exited with code 0
2023/06/27 19:38:03 [notice] 1#1: exit

If a container’s running process stops, the container will also stop, and that’s why the docker ps command does not show a running NGINX container.

To exit an attachment, rather than use Ctrl + C to return to a prompt, you should use Ctrl + P, followed by Ctrl + Q, which will detach from the container without stopping the running process.

There is an alternative to the attach command: the docker exec command. The exec command differs from the attach command since you supply the process to execute on the container.

docker exec

A better option when it comes to interacting with a running container is the exec command. Rather than attach to the container, you can use the docker exec command to execute a process in the container. You need to supply the container name and the process you want to execute in the image. Of course, the process must be included in the running image – if you do not have the Bash executable in the image, you will receive an error when trying to execute Bash in the container.

We will use an NGINX container as an example again. We will verify that NGINX is running using docker ps and then, using the container ID or the name, we execute into the container. The command syntax is docker exec <options> <container name> <command>:

root@localhost:~# docker exec -it nginx-test bash
I have no name!@a7c916e7411:/app$

The option we included is -it, which tells exec to run in an interactive TTY session. Here, the process we want to execute is Bash.

Notice how the prompt changed from the original user and hostname. The host’s name is localhost, while the prompt inside the container shows the container ID, a7c916e7411. You may also have noticed that the current working directory changed from ~ to /app and that the prompt is not running as a root user, as shown by the $ prompt.

You can use this session the same way you would a standard SSH connection; you are running Bash in the container and since we are not attached to the running process in the container, Ctrl + C will not stop any process from running.

To exit an interactive session, you only need to type in exit, followed by Enter, which will exit the container. If you then run docker ps, you will notice that the container is still in a running state.

Next, let’s see what we can learn about Docker log files.

docker logs

The docker logs command allows you to retrieve logs from a container using the container name or container ID. You can view the logs from any container that is listed in your ps command; it doesn’t matter if it’s currently running or stopped.

Log files are often the only way to troubleshoot why a container may not be starting up, or why a container is in an exited state. For example, if you attempt to run an image and the image starts and suddenly stops, you may find the answer by looking at the logs for that container.

To look at the logs for a container, you can use the docker logs <container ID or name> command.

To view the logs for a container with a container ID of 7967c50b260f, you would use the following command:

docker logs 7967c50b260f

This will output the logs from the container to your screen, which may be very long and verbose. Since many logs may contain a lot of information, you can limit the output by supplying the logs command with additional options. The following table lists the options available for viewing logs:

| Log Option | Description |
| --- | --- |
| -f | Follow the log output (can also use --follow). |
| --tail xx | Show the log output starting from the end of the file and retrieve xx lines. |
| --until xxx | Show the log output before the xxx timestamp. xxx can be a timestamp, for example, 2020-02-23T18:35:13, or a relative time, for example, 60m. |
| --since xxx | Show the log output after the xxx timestamp. xxx can be a timestamp, for example, 2020-02-23T18:35:13, or a relative time, for example, 60m. |

Table 1.4: Log options

Checking log files is a process you will find yourself doing often, and since they can be very lengthy, knowing options like tail, until, and since will help you to find the information in a log quicker.
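
For example, to follow only the most recent entries of the nginx-test container, or to look at just the last 30 minutes:

docker logs --tail 50 -f nginx-test
docker logs --since 30m nginx-test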

docker rm

Once you assign a name to a container, the assigned name cannot be used on a different container unless you remove it using the docker rm command. If you had a container running called nginx-test that was stopped and you attempted to start another container with the name nginx-test, the Docker daemon would return an error, stating that the name is in use:

Conflict.  The container name "/nginx-test" is already in use

The original nginx-test container is not running, but the daemon knows that the container name was used previously and that it’s still in the list of previously run containers.

When you want to reuse a specific name, you must first remove the existing container before launching a new one with the same name. This scenario commonly occurs during container image testing. You may initially start a container but encounter issues with the application or image. In such instances, you would stop the container, resolve the problems with the image or application, and wish to redeploy it using the same name. However, since the previous container with that name still exists in the Docker history, it becomes necessary to remove it before reutilizing the name.

You can also add the --rm option to your Docker command to automatically remove the container after it is stopped.

To remove the nginx-test container, simply execute docker rm nginx-test:

root@localhost ~:# docker rm nginx-test
nginx-test
root@localhost ~:#

Assuming the container name is correct and it’s not running, the only output you will see is the name of the container that you have removed.

We haven’t discussed Docker volumes, but when removing a container that has a volume, or volumes, attached, it’s a good practice to add the -v option to your remove command. Adding the -v option to the docker rm command will remove any volumes that were attached to the container.
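
Putting both options together:

docker rm -v nginx-test                                      # remove the container and the anonymous volumes attached to it
docker run --rm -d --name nginx-test bitnami/nginx:latest    # this container will be removed automatically when it stops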

docker pull/run

docker pull and docker run are used to either pull an image or run an image. If you try to run a container whose image doesn’t exist on the Docker host already, Docker will initiate a pull request to get the image and then run it.

When you attempt to pull or run a container, Docker will download an image that is compatible with the host’s architecture. If you want to download an image that is based on a different architecture, you can add the --platform tag to the pull or run command. For example, if you are on a system that is running on arm64 architecture and you want to pull an x86 image, you would add linux/amd64 as your platform. When running a pull, make sure to specify the architecture:

root@localhost ~:# docker pull --platform=linux/amd64 ubuntu:22.04
22.04: Pulling from library/ubuntu
6b851dcae6ca: Pull complete
Digest: sha256:6120be6a2b7ce665d0cbddc3ce6eae60fe94637c6a66985312d1f02f63cc0bcd
Status: Downloaded newer image for ubuntu:22.04
WARNING: image with reference ubuntu was found but does not match the specified platform: wanted linux/amd64, actual: linux/arm64/v8
docker.io/library/ubuntu:22.04

Adding --platform=linux/amd64 is what told Docker to get the right platform. You can use the same parameter for docker run to make sure that the right container image platform is used.

docker build

Similar to pull and run, Docker will attempt to build the image based on the host’s architecture. Assuming you are building on an arm64-based system, you can tell Docker to create an x86 image by using the buildx sub-command:

root@localhost ~:# docker buildx build --platform linux/amd64 --tag docker.io/mlbiam/openunison-kubernetes-operator --no-cache -f ./src/main/docker/Dockerfile .

This addition tells Docker to generate the x86 version, which will run on any x86-based hardware.
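
buildx can also produce a true multi-platform image in a single invocation. A hedged sketch is shown below (the builder name and image tag are placeholders, and pushing requires access to the target registry):

docker buildx create --name multiarch --use                   # create and select a multi-platform capable builder
docker buildx build --platform linux/amd64,linux/arm64 \
  --tag docker.io/<your-repo>/<image>:latest --push .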

Summary

In this chapter, you learned how Docker can be used to solve common development issues, including the dreaded “it works on my machine” problem. We also presented an introduction to the most commonly used Docker CLI commands that you will use on a daily basis.

In the next chapter, we will start our Kubernetes journey with an introduction to KinD, a utility that provides an easy way to run multi-node Kubernetes test servers on a single workstation.

Questions

  1. A single Docker image can be used on any Docker host, regardless of the architecture used.
    a. True
    b. False

Answer: b

See the discussion of multi-platform images earlier in this chapter.

  2. What does Docker use to merge multiple image layers into a single filesystem?
    a. Merged filesystem
    b. NTFS filesystem
    c. EXT4 filesystem
    d. Union filesystem

Answer: d

  3. Kubernetes is only compatible with the Docker runtime engine.
    a. True
    b. False

Answer: b

  4. When you edit a container’s filesystem interactively, what layer are the changes written to?
    a. OS layer
    b. Bottom-most layer
    c. Container layer
    d. Ephemeral layer

Answer: c

  5. Assuming the image contains the required binaries, what Docker command allows you to gain access to a running container’s bash prompt?
    a. docker shell -it <container> /bin/bash
    b. docker run -it <container> /bin/bash
    c. docker exec -it <container> /bin/bash
    d. docker spawn -it <container> /bin/bash

Answer: c

  6. If you start a container with a simple run command, without any flags, and the container is stopped, the Docker daemon will delete all traces of the container.
    a. True
    b. False

Answer: b

  7. What command will show you a list of all containers, including any stopped containers?
    a. docker ps -all
    b. docker ps -a
    c. docker ps -list
    d. docker list all

Answer: b

Join our book’s Discord space

Join the book’s Discord workspace for a monthly Ask Me Anything session with the authors:

https://packt.link/K8EntGuide


Key benefits

  • Practical insights on running Kubernetes in enterprise environments, backed by real-world experience
  • Strategies for securing clusters with runtime security, direct pod mounting, and Vault integration for secrets management
  • A dual-perspective approach that covers Kubernetes administration and development for a complete understanding

Description

Kubernetes – An Enterprise Guide, Third Edition, provides a practical and up-to-date resource for navigating modern cloud-native technologies. This edition covers advanced Kubernetes deployments, security best practices, and key strategies for managing enterprise workloads efficiently. The book explores critical topics such as virtual clusters, container security, and secrets management, offering actionable insights for running Kubernetes in production environments. Learn how to transition to microservices with Istio, implement GitOps and CI/CD for streamlined deployments, and enhance security using OPA/Gatekeeper and KubeArmor. Designed for professionals, this guide equips you with the knowledge to integrate Kubernetes with industry-leading tools and optimize business-critical applications. Stay ahead in the evolving cloud landscape with strategies that drive efficiency, security, and scalability.

Who is this book for?

This book is designed for DevOps engineers, developers, and system administrators looking to deepen their knowledge of Kubernetes for enterprise environments. It is ideal for professionals who want to enhance their skills in containerization, automation, and cloud-native deployments. While prior experience with Docker and Kubernetes is helpful, beginners can get up to speed with the included Kubernetes bootcamp, which provides foundational concepts and a refresher for those needing it.

What you will learn

  • Manage secrets securely using Vault and External Secret Operator
  • Create multitenant clusters with vCluster for isolated environments
  • Monitor Kubernetes clusters with Prometheus and visualize metrics using Grafana
  • Aggregate and analyze logs centrally with OpenSearch for deeper insights
  • Build a CI/CD developer platform by integrating GitLab and ArgoCD
  • Deploy applications in an Istio service mesh and enforce security with OPA and GateKeeper
  • Secure container runtimes and prevent attacks using KubeArmor

Product Details

Country selected
Publication date, Length, Edition, Language, ISBN-13
Publication date :Aug 30, 2024
Length:682 pages
Edition :3rd
Language :English
ISBN-13 :9781835081754
Languages :
