Docker has quickly become one of the frontrunners among container management systems, and it is easy to see why, considering what it can do to boost a company’s productivity.
If you have worked with Docker before, then you know that the same containers a developer builds and tests can run in production, on VMs, and in many other places, thanks to the nature of containerization.
This post will cover some useful ways to boost productivity when using Docker. We will also look at features provided by Docker EE, Docker’s premium enterprise-grade offering.
Tip 1 - Keep your Docker images lightweight
A Dockerfile is a set of instructions that describes the process of building an image.
It contains:
Files included
Environment variables
Installation steps
Relevant commands
Networking details
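A minimal, hypothetical Dockerfile touches on each of these points (the base image, package, and paths below are illustrative placeholders, not a prescribed setup):

FROM ubuntu:22.04                                 # base image
ENV APP_ENV=production                            # environment variables
RUN apt-get update && apt-get install -y nginx    # installation steps
COPY ./site /var/www/html                         # files included
EXPOSE 80                                         # networking details
CMD ["nginx", "-g", "daemon off;"]                # command to run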
The build context has a huge influence on the build-time performance of Dockerfiles. The context is the set of files sent to the Docker daemon to build your image, and the larger this context is, the slower your build will be.
This can raise the question: What should you do if you have a large build context for your container?
The common causes of this are large asset files or additional library files that are not actually needed at build time. The fix is a .dockerignore file, which excludes those files from your build.
Once the image is built, you can easily check its size by running the command docker image ls.
Here’s a quick example of building an image without a .dockerignore file. In this example, the build context includes a var/opt directory that could contain a large number of unneeded log files.
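As a sketch (assuming a project directory that happens to contain a var/opt folder full of logs), the Dockerfile below copies the whole context into the image:

FROM alpine
COPY . /app        # pulls in everything in the build context, logs included

Dropping a .dockerignore file next to the Dockerfile keeps those files out of the context entirely (patterns are relative to the context root):

var/opt

You can then rebuild and check the result with docker image ls (myapp is a placeholder tag):

$ docker build -t myapp .
$ docker image ls myapp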
Tip 2 - Utilize multi-stage builds to remove build dependencies
One issue that we can run into when using containers as consistent build environments is the size of our image. In these setups, our images include all of the build-time dependencies, which are not necessary at runtime.
We can use multi-stage builds to address this and reduce the size of our image, meaning faster build times and fewer resources consumed by our system.
Multi-stage builds are easy to recognize — they have multiple FROM statements.
Every FROM starts a new stage of the build process.
With multi-stage builds, we use the AS keyword to name a stage; later stages can then copy out only the artifacts they need, leaving the build dependencies behind while keeping a consistent build environment.
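Here is a minimal multi-stage sketch, assuming a small Go program (the image tags and paths are placeholders). The first stage, named builder with AS, carries the full toolchain; the final image copies over only the compiled binary:

FROM golang:1.21 AS builder               # build stage with the full Go toolchain
WORKDIR /src
COPY . .
RUN go build -o /bin/app .

FROM alpine                               # slim runtime stage
COPY --from=builder /bin/app /bin/app     # only the artifact crosses over
ENTRYPOINT ["/bin/app"]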
Tip 3 - Docker command completion in the CLI
Docker’s CLI syntax is extensive and continues to expand, adding new commands and options. This, of course, can make it difficult to recall every possible command. That is where command completion for your terminal comes into play.
Command completion is a plugin available for your terminal that gives you auto-complete options by hitting TAB.
The Docker team has prepared completion scripts for docker, docker-machine, and docker-compose, for both the Bash and Zsh shells.
To install command completion locally on your Mac, first install bash-completion via Homebrew:
brew install bash-completion
Next, you would place the appropriate completion script in
/etc/bash_completion.d/
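For example, for Bash, something like the following should work for docker-machine (the URL mirrors the layout of the docker/machine repository at the time of writing; check the documentation linked below for the current locations of each script):

$ sudo curl -L https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine.bash -o /etc/bash_completion.d/docker-machine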
Completion will then be available after your next login.
The official Docker documentation for installation:
https://docs.docker.com/machine/completion/#installing-command-completion
Tip 4 - Some Network Tricks
When it comes to networking, Docker has an internal pool of IPs that it uses for its containers’ IP addresses. These are invisible from the outside by default and are reachable via bridge interfaces.
There will be times when you want to create a new container and connect it to an existing network stack. This can come in handy for debugging or auditing your network.
For troubleshooting, if we had a network connectivity issue, we could use
docker run --network (or its shorthand, --net)
to attach a container in the right place and determine the root cause of the issue.
To use the Docker Host Network Stack:
$ docker run --net=host …
Running this command gives your new container the ability to attach to the same network interfaces as the Docker host. Generally speaking,
--net=host
is only needed when you are running programs with very specific, unusual network needs. Because it reuses the host’s network stack, it removes the network isolation between container and host and is therefore considered insecure.
Let’s say you have Nginx running inside a Docker container and MySQL running on localhost on the host, and you want to connect to MySQL from within Nginx. Since MySQL is bound to localhost, its port is not exposed to the outside world.
You could run the following command to share the network stack with the Docker host; from the container’s point of view, localhost (127.0.0.1) will then refer to the Docker host.
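For example (the official nginx image here stands in for your containerized service):

$ docker run --net=host nginx

From inside this container, Nginx can reach MySQL at 127.0.0.1:3306, just as a process running directly on the host would.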
In this situation, any port opened in your Docker container would also be opened on the Docker host. What I want to highlight is that the Docker host and the Docker container share the exact same network interfaces and therefore the same IP address.
To use another container’s network stack:
$ docker run --net=container:<name|id> …
Running this command attaches a new container to the same network interfaces as the other container. You can specify the target container by id or name.
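For example, here is a debugging sketch, where my-nginx is a placeholder container name and nicolaka/netshoot is a community image packed with network tools:

$ docker run -it --net=container:my-nginx nicolaka/netshoot ss -lntp

This lists the listening sockets inside my-nginx’s network namespace without installing anything in that container.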
For Docker Enterprise Edition:
Tip 5 - Utilize the Live API to enhance automation
One of the featured tools that comes with Docker’s Enterprise Edition is the Universal Control Plane, or UCP for short. The UCP API is a REST (representational state transfer) API available over HTTPS.
It enables programmatic access to the swarm resources managed by UCP. UCP exposes the full Docker Engine API, making it possible to extend your existing code with UCP features.
It is secured with RBAC policies to ensure that only those authorized to make changes or deploy applications to Docker can do so.
It is easy to access as well. Once you are in your UCP, you would go to Live API to access it.
Here, in the Live API, you can do anything on your browser that you can do programmatically.
The system manages swarm resources by using collections, which you can access by selecting the /collection endpoint.
One endpoint that a developer would be particularly interested in is /services.
Frequently, a service is the image for a microservice within the context of a larger application.
If a developer wants to run tests before deploying an application, rather than build testing into their environment, they could jump into the Live API.
Here they could change any parameters and then test it by clicking “Try it out”.
This would run the command and show the output and corresponding HTTP status codes, for example, a 400 would tell the developer there was a bad parameter.
If the testing was successful and produced no errors, then the copy feature can be used to paste the parameters right into their code.
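As a rough sketch of the same flow from the command line (UCP_HOST, the credentials, and the jq dependency are assumptions for illustration), you can obtain a session token and hit the /services endpoint directly:

$ AUTHTOKEN=$(curl -sk -d '{"username":"admin","password":"secret"}' https://UCP_HOST/auth/login | jq -r .auth_token)

$ curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://UCP_HOST/services

The first call logs in and captures the token from the auth_token field; the second lists swarm services through the Engine API that UCP exposes.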
On the administrative side of things, the /accounts endpoint would be a popular choice for performing bulk operations on user accounts.
Final Words
While there are many other Docker hacks that can boost your productivity, these are just a few to get started with. If you are new to Docker, or you have set up a successful proof of concept and need some direction, Stone Door Group offers our Docker Accelerator. This comprehensive services offering enables you to transition a Docker pet project into a secure, production-grade Docker enterprise environment.
About the Author
Amber Ernst is a Docker Certified Associate and Docker Accredited Instructor for Stone Door Group. She is part of a team of certified and experienced DevOps consultants who tackle some of the most challenging enterprise digital transformation projects. To talk to Amber about your Docker deployment, drop us a line at letsdothis@stonedoorgroup.com.