Docker image creation. Docker container management: basic features

Help with Docker image and container management commands.

Terms

Image is a static, read-only template based on a specific OS.

Container is a running instance of an image.

Permissions to run docker

To run Docker containers under your user (without sudo), you need to be added to the appropriate group:

sudo usermod -aG docker YOUR_USER
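The group membership takes effect only after you log out and back in (or start a new session with newgrp). A quick sanity check, assuming the standard hello-world image is reachable:

newgrp docker
docker run hello-world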

Docker service

Docker service management:

sudo service docker start|stop|restart|status
sudo restart docker  # alias

Images

List of available images:

docker images

Download the image (or the entire repository) from the official registry (image repository):

docker pull ubuntu:14.04

View image information:

docker inspect ubuntu

Delete an image:

docker rmi IMAGE_NAME

Create a new image from a container's changes:

docker commit CONTAINER_ID IMAGE_NAME

Containers

Attention!

After a Docker container starts, services/daemons (such as SSH, Supervisor and others) do not start automatically! I spent several hours debugging the error "ssh_exchange_identification: read: Connection reset by peer" when trying to connect to the container via SSH, only to find that the sshd daemon simply had not been started. You will have to start the necessary daemons or a supervisor manually after the container starts:

docker exec CONTAINER_ID bash -c "service ssh start"

List of all containers (running and stopped):

docker ps -a

Remove container(s):

docker rm CONTAINER_ID CONTAINER_ID

Delete all containers:

docker rm $(docker ps -aq)

Create and run a Docker container with Ubuntu 14.04 in interactive mode (open the shell of this container):

docker run -it ubuntu bash

docker run [options] image [command]

-i  Interactive mode, keep STDIN open
-t  Allocate a pseudo-TTY that attaches stdin and stdout
--name  Container name to use instead of the generated ID
-w  Specify the working directory (--workdir)
-e  Set an environment variable in the container
-u  User:group under which the container should be run
-v  Mount a file or directory of the host system into the container
-p  Forward the port(s) of the container: <host port>:<container port> (--publish=)
--entrypoint  Replace the default command from the Dockerfile's ENTRYPOINT
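Putting several of these options together, a hypothetical example (the container name web, the host path /srv/app and the variable APP_ENV are illustrative only):

docker run -it --name web -w /app -u root -e APP_ENV=dev -v /srv/app:/app -p 127.0.0.1:8080:80 ubuntu bash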

Note

To detach the TTY without stopping the container, press Ctrl+P followed by Ctrl+Q.

Create and run a Docker container in daemon mode with SSH port forwarding:

docker run -itd -p 127.0.0.1:221:22 ubuntu

Create and run a container and then delete this container after stopping (useful for debugging):

docker run -i -t --rm ubuntu bash

Start a stopped container interactively:

docker start -i CONTAINER_ID

Connect to the daemonized container:

docker attach CONTAINER_ID

Docker commands

Usage: docker [OPTIONS] COMMAND [arg...]
       docker daemon [ --help | ... ]
       docker [ --help | -v | --version ]

A self-sufficient runtime for containers.

Options:
  --config=~/.docker              Location of client config files
  -D, --debug=false               Enable debug mode
  --disable-legacy-registry=false Do not contact legacy registries
  -H, --host=[]                   Daemon socket(s) to connect to
  -h, --help=false                Print usage
  -l, --log-level=info            Set the logging level
  --tls=false                     Use TLS; implied by --tlsverify
  --tlscacert=~/.docker/ca.pem    Trust certs signed only by this CA
  --tlscert=~/.docker/cert.pem    Path to TLS certificate file
  --tlskey=~/.docker/key.pem      Path to TLS key file
  --tlsverify=false               Use TLS and verify the remote
  -v, --version=false             Print version information and quit

Commands:
    attach    Attach to a running container
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    cp        Copy files/folders between a container and the local filesystem
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    events    Get real time events from the server
    exec      Run a command in a running container
    export    Export a container's filesystem as a tar archive
    history   Show the history of an image
    images    List images
    import    Import the contents from a tarball to create a filesystem image
    info      Display system-wide information
    inspect   Return low-level information on a container or image
    kill      Kill a running container
    load      Load an image from a tar archive or STDIN
    login     Register or log in to a Docker registry
    logout    Log out from a Docker registry
    logs      Fetch the logs of a container
    network   Manage Docker networks
    pause     Pause all processes within a container
    port      List port mappings or a specific mapping for the CONTAINER
    ps        List containers
    pull      Pull an image or a repository from a registry
    push      Push an image or a repository to a registry
    rename    Rename a container
    restart   Restart a container
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save an image(s) to a tar archive
    search    Search the Docker Hub for images
    start     Start one or more stopped containers
    stats     Display a live stream of container(s) resource usage statistics
    stop      Stop a running container
    tag       Tag an image into a repository
    top       Display the running processes of a container
    unpause   Unpause all processes within a container
    volume    Manage Docker volumes
    wait      Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.

Docker is a popular tool that, through the use of containers, provides everything you need to run applications. By using Docker containers, you can be sure that your application will run the same on any machine you run it on.

In this tutorial, you'll learn about the relationship between containers and Docker images, and how to install, start, stop, and remove containers.

Review

A Docker image can be thought of as a template used to create containers. Images typically start with a root file system, onto which various changes and their corresponding launch parameters are layered. Unlike typical Linux distributions, a Docker image usually contains only the parts needed to run the application. Images are stateless and immutable; more precisely, they are the starting point, the basis for Docker containers.

The images come to life the moment you issue the docker run command - it immediately creates a container by adding a new read-write layer on top of the image. This combination of read-only layers (with a read-write layer added on top) is also known as UnionFS, a file system that cascades union mounts of file systems. When a change is made to an existing file in a running container, the file is copied from the read-only layer to the read-write layer, where the changes are applied. And now the original file is hidden by the read-write version, but it is not deleted. Such changes in write and read levels only exist within a given individual container. When a container is deleted, all changes are also lost (unless they were saved).
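The copy-on-write behaviour described above is easy to observe with docker diff, which lists files added (A), changed (C) or deleted (D) in a container's read-write layer:

docker diff CONTAINER_ID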

Working with containers

Each time you use the docker run command, a new container is created from the image you specify. More specific examples will be discussed below.

Step 1: Create two containers

The docker run command below creates a new container that will use the Ubuntu image as its base. The -t switch will provide the terminal, and -i will provide the ability to interact with it. To get inside the container, you can use the standard bash command. That is, you can enter:

$ docker run -ti ubuntu

$ docker run -i -t ubuntu:14.04 /bin/bash

(in the second case, you will run the /bin/bash command inside the container and you will automatically be inside the container)

The command line will confirm that you are inside the container as the superuser. After the @ sign you will see the ID of the container you are in:

root@11cc47339ee1:/#

Now, using the echo command, make changes to the /tmp directory, and then check that the changes were written using the cat command:

Echo "Example1" > /tmp/Example1.txt cat /tmp/Example1.txt

On the screen you should see:

Example1

Now exit the container:

exit

Once this command has been executed and you exit the command line, the Docker container stops working. You can see this if you use the docker ps command:

Among the running containers, you will not see the one used above:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

However, you can add the -a switch to see all containers - both running and stopped - and then the container you were working in will be highlighted:

$ docker ps -a
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS                        PORTS   NAMES
11cc47339ee1   ubuntu   "/bin/bash"   9 minutes ago   Exited (127) 10 seconds ago           small_sinoussi

When a container is created, it receives an ID and an automatically generated name. In this case, 11cc47339ee1 is the identification number (ID) of the container, and small_sinoussi is the generated name. The ps -a command shows this data, as well as the image from which the container was created (ubuntu), when the container was created (9 minutes ago), and what command was run in it ("/bin/bash"). You can also see the status of the container (it exited 10 seconds ago). If the container were still running, you would see the "Up" status and the time it has been running.

Now you can enter the command again to create the container:

$ docker run -ti ubuntu

Even though the command looks the same as last time, it will create a completely new container - it will have a different ID number, and if you try to look at the contents of the Example1 file you edited earlier, you won't find it.

root@6e4341887b69:/# cat /tmp/Example1.txt

The output will be:

cat: /tmp/Example1.txt: No such file or directory

It may seem to you that the data has disappeared, but of course that is not the case. Exit the second container to ensure that both containers (including the first one with the desired file) exist on the system.

root@6e4341887b69:/# exit
$ docker ps -a

The output will be:

CONTAINER ID   IMAGE    COMMAND       CREATED              STATUS                       PORTS   NAMES
6e4341887b69   ubuntu   "/bin/bash"   About a minute ago   Exited (1) 6 seconds ago             kickass_borg
11cc47339ee1   ubuntu   "/bin/bash"   15 minutes ago       Exited (127) 6 minutes ago           small_sinoussi

Step 2: Restart the first container

To restart an already created container, use the start command with the -a and -i switches. At the end, write the identification number or the name of the container you want to work with. The command will look like this:

docker start -ai 11cc47339ee1

Now you're back in the bash shell inside the container, and you can verify that the file you created at the beginning of the article is still here:

cat /tmp/Example1.txt

You will see on the screen:

Example1

Now you can exit the container:

exit

This way, all changes inside the container are saved, even if you stop and then restart the container. Data is deleted only when the container itself is deleted. Also, the example above shows that the changes affect one individual container (and not all containers at once).
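To keep such changes beyond the life of a container, its read-write layer can be saved into a new image. A minimal sketch, reusing the container from earlier (the image name my-ubuntu-example is illustrative):

docker commit 11cc47339ee1 my-ubuntu-example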

Step 3: Remove both containers

The final step is to delete the two containers you created by following this tutorial. To do this, you need to use the docker rm command. However, it only affects stopped containers. After the command, you must specify the identification number or name of one or more containers. For example, to delete containers created earlier, you need to enter the command:

docker rm 6e4341887b69 small_sinoussi

The screen will display:

6e4341887b69 small_sinoussi

Now both containers have been deleted.

Conclusion

In this tutorial, you learned the basic commands for working with Docker: how to create, stop, restart, and delete containers.

Docker is the most common containerization system, allowing you to run the software needed for development in containers without installing it on your local system. In this material, we will analyze Docker container management.

Docker consists of several components:
  1. Image: a set of software configured by its developers, downloaded from the official registry
  2. Container: an instance of an image, an entity on the server created from it; the container does not have to be an exact copy and can be adjusted using a Dockerfile
  3. Volume: an area on disk that the container uses and into which data is saved; after the container is deleted no software remains, but the data can still be used

A network is built over this whole structure in a special way, which allows you to forward ports as needed and make the container accessible from the outside (by default it runs on a local IP address) through a virtual bridge. A container can be exposed either to the whole world or to a single address.
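A sketch of the difference, using the stock nginx image as an assumed example: the first command publishes container port 80 on all host interfaces, the second one only on localhost:

docker run -d -p 8080:80 nginx
docker run -d -p 127.0.0.1:8080:80 nginx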

Docker container management: basic features

Let's install Docker on an Ubuntu or Debian server, if it is not already installed, according to the instructions. It is better to run commands as an unprivileged user via sudo.

Running the simplest container will show that everything works:

docker run hello-world

Basic commands for managing containers

You can display all active containers like this:

docker ps

With the -a switch, all containers will be displayed, including inactive ones:

docker ps -a

Docker assigns names to containers randomly; if necessary, you can specify a name explicitly:

docker run --name my-container hello-world

We launch a container named my-linux-container based on the ubuntu image and go to the container console using the bash shell

docker run -it --name my-linux-container ubuntu bash

To exit the container and return to the host system, execute:

exit

All images from which containers are created are downloaded from hub.docker.com automatically when a container is first created; images already present locally can be listed by running docker images.

Creating a container from an already downloaded image will be much faster (almost instantly)

When you leave the container with exit, it stops. To avoid this, detach instead with the keyboard shortcut Ctrl+P followed by Ctrl+Q.

You can remove all containers that are not active

docker rm $(docker ps -a -f status=exited -q)
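On newer Docker releases (1.13+) the same cleanup is available as a single built-in command:

docker container prune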

Or delete them one by one:

docker rm CONTAINER_ID

Instead of the identifier in the last command, you can specify the container name.

In docker, containers are managed using a small number of intuitive commands:

docker container start ID

docker container stop ID

docker container restart ID

docker container inspect ID

The latter is especially useful: it displays all information about the container, its configuration files and the disk partitions it uses. The full list of commands can easily be found in the built-in help or on the official Docker website.
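Because the inspect output is long, a Go-template filter can extract a single field; for example, the container's IP address:

docker inspect -f '{{.NetworkSettings.IPAddress}}' ID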

Creating your own Docker image and using the Dockerfile

Images are usually created from existing ones using additional instructions specified in a Dockerfile:

FROM ubuntu
CMD echo "hello world"

This creates a new image based on the standard ubuntu image.

We build the image, giving it a name (the dot at the end of the command means that the current directory, and therefore the Dockerfile in it, is used):

docker build -t my-ubuntu .

Running docker images will now also show the newly created my-ubuntu image.

You can run it, and it will print hello world to the console; this is its only difference from the default image.

Usually we need more complex rules; for example, suppose we need to include python3 in the image. Go to a new directory and create a Dockerfile:

FROM ubuntu
RUN apt-get update && apt-get install -y python3

Note that both commands are chained into a single RUN instruction on one line.

docker build -t my-ubuntu-with-python3 .

We launch the container and enter it:

docker run -it my-ubuntu-with-python3 bash

Inside, as root, run dpkg -l | grep python3; the command will show that the package is present in the system, which means the build succeeded.
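As a sketch of the distinction relied on above (RUN executes at build time and bakes its result into the image, while CMD only sets the default command that runs when a container starts), a hypothetical Dockerfile combining both:

FROM ubuntu
RUN apt-get update && apt-get install -y python3
CMD ["python3", "--version"]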

We have touched on the topic of containers more than once and have reviewed many systems for building them. Today we will introduce another great system, Docker.

Let's start by describing the basic functionality that will be useful in further articles in the series, and briefly recall the Docker architecture. Docker uses a client-server architecture and consists of a client (the docker utility, which accesses the server via a RESTful API) and a daemon in the Linux operating system (see Fig. 1). Although Docker runs on non-Linux operating systems, they are not covered in this article.

Main components of Docker:
    • Containers – user environments in which applications are executed, isolated using operating system technologies. The easiest way to define a Docker container is as an application running from an image. Incidentally, this is precisely what ideologically distinguishes Docker from, for example, LXC (Linux Containers), even though they use the same Linux kernel technologies. The developers of the Docker project follow the principle: one container equals one application.
    • Images – read-only application templates. New layers can be added on top of existing images; together they represent the file system, each layer modifying or extending the previous one. Typically, a new image is created either by saving an already running container into a new image on top of an existing one, or by using special instructions for the utility. AUFS, btrfs, vfs and Device Mapper can be used to separate the different container layers at the file-system level. If you plan to use Docker in conjunction with SELinux, Device Mapper is required.
    • Registries containing repositories of images – network storage for images. They can be either private or public. The most famous registry is Docker Hub.

To isolate containers in GNU/Linux operating systems, standard Linux kernel technologies are used, such as:
  • Namespaces (Linux Namespaces).
  • Control groups (Cgroups).
  • Privilege management tools (Linux Capabilities).
  • Additional mandatory access control systems, such as AppArmor or SELinux.

Let's look at the listed technologies in a little more detail.

The control group mechanism (Cgroups) provides fine-grained control over the allocation, prioritization and management of system resources. Control groups are implemented in the Linux kernel. In modern distributions, control groups are managed through systemd, but it is still possible to manage them using the libcgroup library and the cgconfig utilities. The main cgroup hierarchies (also called controllers) are listed below; a resource-limit sketch follows the list:

  • blkio – sets limits on I/O operations and access to block devices;
  • cpu – distributes processor time between tasks using the process scheduler;
  • cpuacct – creates automatic reports on CPU resource usage; works in conjunction with the cpu controller described above;
  • cpuset – assigns specific processors and memory nodes to tasks;
  • devices – regulates tasks' access to certain devices;
  • freezer – pauses or resumes tasks;
  • memory – sets limits and generates reports on memory usage by the tasks of a control group;
  • net_cls – tags network packets with a class identifier (classid), allowing the traffic controller (tc) and the firewall (iptables) to take these tags into account when processing traffic;
  • perf_event – allows control groups to be monitored with the perf utility;
  • hugetlb – allows the use of large virtual memory pages and applying limits to them.
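A minimal sketch of how these controllers surface in the docker CLI; the limit values below (half a gigabyte of memory, a relative CPU share) are illustrative only:

docker run -it --memory 512m --cpu-shares 512 ubuntu bash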

Namespaces, in turn, control not the distribution of resources but access to kernel data structures. In practice this means isolating processes from each other and being able to have parallel "identical", but non-intersecting, hierarchies of processes, users and network interfaces. If desired, different services can even have their own loopback interfaces.

Examples of namespaces used by Docker (an illustration follows the list):
  • PID, Process ID – isolation of the process hierarchy.
  • NET, Networking – isolation of network interfaces.
  • IPC, InterProcess Communication – management of inter-process communication.
  • MNT, Mount – management of mount points.
  • UTS, Unix Timesharing System – isolation of the hostname and other uname identifiers.
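A sketch of how namespaces show up in the CLI: the flags below weaken isolation by sharing the host's network and PID namespaces with the container (useful for debugging; illustrative here):

docker run -it --net=host --pid=host ubuntu bash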

A mechanism called Capabilities allows the root user's privileges to be broken into small groups of privileges that can be assigned individually. This functionality appeared in GNU/Linux starting with kernel version 2.2. Containers are initially launched with a limited set of privileges.

Using the docker command options, you can enable or disable:
  • mounting operations;
  • socket access;
  • performing some file system operations, such as changing file attributes or ownership.

You can learn more about privileges using the man page CAPABILITIES(7).
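A hedged example of adjusting capabilities from the docker CLI: drop everything, then add back only the ability to bind privileged ports (the nginx image is an assumed stand-in):

docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx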

Installing Docker

Let's look at installing Docker using CentOS as an example. When running CentOS you have a choice: use the latest upstream version, or the version compiled by the CentOS project with Red Hat additions. A description of the changes is available on the corresponding page.

This is mainly a backport of fixes from new upstream versions, plus changes proposed by Red Hat developers but not yet accepted into the main code. The most noticeable difference at the time of writing was that in newer versions the docker service was split into three parts: the docker daemon, containerd and runc. Red Hat does not yet consider this change stable and ships the monolithic executable of version 1.10.

Repository settings for installing the upstream version, as well as installation instructions for other distributions and operating systems, are given in the installation guide on the official website. Specifically, the settings for the CentOS 7 repository:

# cat /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Install the necessary packages, then start and enable the service:

# yum install -y docker-engine
# systemctl start docker.service
# systemctl enable docker.service

Checking the service status:

# systemctl status docker.service

You can also view system information about Docker and the environment:

# docker info

If you instead install Docker from the CentOS repositories and run the same command, you will see minor differences due to the older version of the software. From the docker info output we can learn that Device Mapper is used as the storage driver, and that the backing storage is a file in /var/lib/docker/:

# ls -lh /var/lib/docker/devicemapper/devicemapper/data
-rw-------. 1 root root 100G Dec 27 12:00 /var/lib/docker/devicemapper/devicemapper/data

Options for starting the daemon are stored, as is typical on CentOS, in /etc/sysconfig/. In this case the file is named docker. The corresponding line of /etc/sysconfig/docker describing the options:

OPTIONS="--selinux-enabled --log-driver=journald"

If you were to run the docker command as a non-root user or a user who is not a member of the docker group, you would see an error like this:

$ docker search mysql
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon. Is the docker daemon running on this host?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon. Is the docker daemon running on this host?

Note that effectively adding a user to the docker group is the same as adding that user to the root group.

The RHEL/CentOS developers have a slightly different approach to Docker daemon security than the upstream developers of Docker itself. Read more about Red Hat's approach in an article by RHEL distribution developer Dan Walsh.

If you want the “standard” behavior of Docker installed from the CentOS repositories (i.e., described in the official documentation), then you need to create a docker group and add to the daemon launch options:

OPTIONS="--selinux-enabled --log-driver=journald ↵ --group=docker"

OPTIONS = "--selinux-enabled --log-driver=journald ↵ --group=docker"

Then we restart the service and check that the docker socket file belongs to the docker group and not root:

# ls -l /var/run/docker.sock

Docker image search and tags

Let's try to find a container on Docker Hub.

$ docker search haproxy

In this output, we received a list of HAProxy images. The topmost item is HAProxy from the official repository. Such images are distinguished by the fact that their name does not contain the "/" symbol separating the user's repository name from the name of the container itself. The output also shows haproxy images from the public user repositories eeacms and million12.

You can create images like those yourself by registering on Docker Hub. Official images are maintained by a dedicated team sponsored by Docker, Inc. Features of the official repository:

  • These are recommended images for use, based on best practices and guidelines.
  • They provide base images that can serve as a starting point for finer tuning, for example base images of Ubuntu and CentOS, or libraries and development environments.
  • They contain the latest versions of software with vulnerabilities fixed.
  • This is the official product distribution channel. To search only official images, you can use the --filter "is-official=true" option of docker search.

The number of stars in docker search output reflects the popularity of an image. It is similar to a Like button on social networks or bookmarks for other users. Automated means that the image is built automatically by a special script on Docker Hub. You should usually prefer automatically built images, because their contents can be verified by examining the corresponding Dockerfile.

Download the official HA Proxy image:

$ docker pull haproxy
Using default tag: latest

The full image name might look like this:

[username/]image_name[:tag]
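For instance, pulling a specific tag from a user repository (the 1.5 tag here is an assumed example):

$ docker pull eeacms/haproxy:1.5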

You can view the list of downloaded images with the command docker images:

Running containers

To run a container, it is not necessary to download the image first: if it is absent, it will be downloaded automatically. Let's try to run a container with Ubuntu. We will not specify a repository, so the latest official image supported by Canonical will be downloaded.

$ docker run -it ubuntu
root@d7402d1f7c54:/#

Besides the run command, we specified two options: -i (start the container in interactive mode) and -t (allocate a pseudo-terminal). As you can see from the output, we have root privileges in the container, and the container ID is displayed as the host name. The latter may not be true for all containers and depends on the container developer. Let's check that this is indeed an Ubuntu environment:

root@d7402d1f7c54:/# cat /etc/*release | grep DISTRIB_DESCRIPTION
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"

The uname -a command cannot be used for this purpose, since the container runs on the host's kernel.

One option is to give the container a unique name that can be referenced for convenience instead of the Container ID. It is specified with --name <name>. If the option is omitted, the name is generated automatically.

Automatically generated container names carry no semantic meaning, but as an interesting aside, the names are randomly composed of an adjective and the surname of a famous scientist, inventor or hacker. The generator code contains, for each name, a brief description of what the person is known for.

You can view the list of running containers with the docker ps command. To do this, open a second terminal:

$ docker ps

However, if we issue docker ps, we will not find the container created from the mysql image. Let's use the -a option, which shows all containers, not just running ones:

$ docker ps -a

Obviously, the required parameters were not specified when the container was started. A description of the environment variables needed to run the container can be found on the official MySQL image page on Docker Hub. Let's try again, using the -e option, which sets environment variables in the container:

$ docker run --name mysql-test ↵ -e MYSQL_ROOT_PASSWORD=docker -d mysql
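To confirm that the server actually started this time, it can help to check the container's log stream (the exact output will vary):

$ docker logs mysql-test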

We can also run a command inside the running container with docker exec:

$ docker exec -it mysql-test bash

The last parameter is the command we want to execute inside the container, in this case the Bash command interpreter. The -it options are similar in purpose to those used earlier with the docker run command.

In effect, after running this command another process, bash, is added inside the mysql-test container. This can be seen clearly using the pstree command. A shortened output following the docker exec command: