Docker is one of the most popular container-management platforms. In the good old days, organizations relied on virtual machines to implement their microservices architectures. Virtual machines provide a secure, isolated environment and are quite useful for tasks that are sensitive to security breaches or that carry the risk of infecting the host, such as beta testing an operating system or working with virus-infected files.

However, virtual machines virtualize the hardware of the underlying host and sit on top of a piece of software called a hypervisor. Each guest operating system (VM instance) running on the host machine has its own operating system and kernel, so VMs consume a lot of resources. Since each guest OS has its own system and configuration files, a VM instance is quite large, usually in the range of gigabytes. VMs also take quite a lot of time to boot up. Because of these downsides, developers adopted a newer and simpler technology called containers. You can check out the complete guide on Docker vs. Virtual Machines for better clarity.

In this tutorial, you will explore everything about containers and the most popular container-management platform, Docker. You will look at the Docker architecture and understand why it is better to use Docker containers. So, without further ado, let’s get started.

You can check out this complete step-by-step guide on Docker.


What is Docker?

Docker is a free-to-use, open-source container-management platform that provides tons of tools and utilities to build, test, and deploy applications. You can create packaged, isolated, and platform-independent containers with all the libraries and dependencies pre-built. This allows you to develop and share applications easily.

You can separate your applications from the underlying host infrastructure and deliver software quickly by reducing the delay between writing code and deploying it to production. Docker containers sit on top of the operating system of the underlying host machine and share its kernel. However, each container is isolated from the others because it has its own user space; only the kernel space is shared.

This is the main reason Docker containers are so fast, easy to implement, lightweight (in the range of megabytes), and quick to boot up. Containers are not a new concept: FreeBSD introduced jails, an early form of containerization, back in 2000, and LXC (Linux Containers), released in 2008, used control groups and namespaces to implement containerization. Docker was introduced in 2013 with a few major changes. Docker was created to run only on Linux machines, but it now supports macOS and Windows as well.

You can check these tutorials to install Docker on Linux and Windows.
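
Once Docker is installed, you can run a quick sanity check from the terminal (the exact version string will, of course, differ from machine to machine):

$ docker --version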

An Example to Understand the Use of Docker Containers

Suppose you and your team members are working on a web application deployed as a microservices architecture. Your application has the following microservices: a front-end node, a back-end or server node, a database node, and a separate node for caching. Although each team member works on their own laptop, they might need a copy of each of these components to work on.

Now, suppose a feature in the server component requires an older version of a dependency to be installed. However, another member who needs access to the same component has a newer version of the dependency installed on their system. This leads to a dependency conflict. Now, imagine there are thousands of such dependency conflicts across the components of your application. This makes developing and sharing application code quite hectic.

In such cases, you need a shared environment in which all your team members can run standalone components of your application, each deployed as a microservice and isolated from the other components. Moreover, these isolated environments should contain all the packages, libraries, and dependencies required to run that particular component. This allows each team member to easily build, test, deploy, and share applications. This is exactly what Docker containers bring to the table.

The Docker Platform

You can leverage the tools and utilities provided by the Docker platform to package, build, and run software applications in isolated, containerized environments called containers. You can run multiple Docker containers on a single host machine; the security and isolation provided by the Docker architecture allow you to do so.

When you compare containers to virtual machines, containers are very lightweight. Regardless of this, they contain everything you need to run an application without relying much on the host system's infrastructure. Moreover, Docker containers are platform-independent, which means the same container environment can be reproduced by all your team members on different host machines. This allows you to share applications very easily, ensuring that each member gets the same container environment and version of the application.

Docker provides several tools to manage the container lifecycle; a minimal sketch of that lifecycle follows the list below.

  • You can easily create a containerized environment with pre-built packages and develop your applications inside it.
  • You can even deploy different components of your application in different containers and allow these containers to talk to each other and share data through Docker networking.
  • Each container then becomes a standalone unit for testing and distributing your application.
  • Once your application is up and running, you can easily deploy it to the production environment using that same container, or even as an orchestrated service.
  • It doesn’t matter whether your host is a cloud service, a virtual machine, a bare-metal server, or a Linux laptop; the container provides the same service throughout.
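
Here is that minimal lifecycle sketch, using the official Nginx image (the container name 'demo' is just an illustrative choice):

$ docker create --name demo nginx:latest   # create a container without starting it
$ docker start demo                        # start the created container
$ docker stop demo                         # stop the running container
$ docker rm demo                           # remove the stopped container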

What Can You Use Docker for?

There are multiple reasons why DevOps professionals love Docker and why organizations of all sizes are switching to it at a rapid pace. Consider the following use cases.

You can allow your developers to write code on their local systems inside Docker containers. They can share their work with team members and colleagues and can even work on the same container simultaneously. You can push your applications into the test environment, integrate them with CI/CD workflows, and perform automated as well as manual tests. When you find a bug, you can fix it inside the container environment itself and then redeploy the container to the testing environment for validation.

Moreover, getting a bug fix to the user is as simple as pushing the new image to the production environment. In this way, Docker allows you to work in a standardized environment and streamlines the overall development lifecycle. Containers can be run on the local system, uploaded as Docker images to registries, and shared with members across the organization. In simpler words, Docker enables faster, more consistent delivery of software applications.

You can create highly portable workloads and manage them dynamically. Responsive deployment and scaling become easy with Docker. You can run containers anywhere: on a local desktop, in data centers, in the cloud, or even in hybrid environments.

Since Docker containers are very lightweight, you can easily manage your workloads, scale them up, or tear them down.

Docker provides a cost-effective, viable alternative to resource-intensive, hypervisor-based virtual machines, which makes it a good fit for organizations of all sizes, from small startups to large tech giants. You can run multiple containers on the same hardware and thus save a lot on hardware resources.

Workflow of Docker

Docker uses a client-server architecture to carry out all of its operations. The three major components that form the core of the Docker architecture are -

  • The Docker Daemon (Server)
  • REST API (Docker Engine)
  • Docker CLI (Client)

These components work together to allow communication between the client and the server. You will see each of them one by one.

Docker Daemon

The Docker daemon is a background process that runs persistently and is responsible for managing all the Docker objects: images, containers, volumes, and networks. It listens for instructions arriving through the Docker API in the form of requests, processes them, and acts accordingly.
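
For instance, once the daemon is up, you can ask it for a summary of the objects it manages (this assumes Docker is installed and the daemon is running; the output varies by machine):

$ docker info   # number of containers and images, storage driver, and other daemon details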

Docker REST API

The API acts as a middleman between the server and the client. The client application uses it to interact with the server (the daemon). The REST API can be consumed by any HTTP client.
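
As an illustration, on a Linux host where the daemon listens on the default UNIX socket, you can query the REST API directly with curl (a sketch assuming the default socket path and that your user has permission to access it):

$ curl --unix-socket /var/run/docker.sock http://localhost/version          # engine and API version details
$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json  # JSON list of running containers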

Docker Client 

Clients are used to interact with the Docker daemon. A client can be as simple as a command-line interface. You can talk to the server directly by executing simple commands inside the command line (the client) to create and manage Docker objects.

The Docker daemon does the heavy lifting of creating, running, and sharing Docker containers whenever a Docker user executes commands in the command line. The interaction between the Docker daemon and the CLI happens through the REST API, over a network interface or UNIX sockets. Typically, both the client and the daemon run on the same machine, but you can also connect a client to a daemon on a remote machine.
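
For example, the CLI can be pointed at a remote daemon through the DOCKER_HOST environment variable (a sketch: 'remote-host' is a placeholder, and the remote daemon must be configured to accept TCP connections, ideally secured with TLS):

$ export DOCKER_HOST=tcp://remote-host:2375   # point the client at a remote daemon
$ docker ps                                   # now lists containers running on remote-host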

Docker Architecture


The four key components that make up the entire Docker architecture are - 

  1. The Docker Daemon or the server
  2. The Docker Command Line Interface or the client
  3. Docker Registries
  4. Docker Objects - 
    1. Images
    2. Containers
    3. Network
    4. Storage

Now, this tutorial will explore each of the Docker architectural components one by one.

  • Docker Daemon

The Docker daemon, also known as ‘dockerd’, constantly listens for requests put forward through the Docker API. It carries out all the heavy tasks, such as creating and managing Docker objects, including containers, volumes, images, and networks. A Docker daemon can also communicate with other daemons on the same or different host machines. For example, in a swarm cluster, the host machine’s daemon communicates with daemons on other nodes to carry out tasks.

  • Docker CLI

Docker users can leverage simple clients, such as the command line, to interact with Docker. When a user executes a Docker command such as docker run, the CLI sends this request to dockerd via the REST API. The Docker CLI can also communicate with more than one daemon.

  • Docker Registries

The official Docker registry, called Docker Hub, contains several official image repositories. A repository is a set of related Docker images that are uniquely identified by Docker tags. Docker Hub provides tons of useful official and vendor-specific images, including Nginx, Apache, Python, Java, MongoDB, Node, MySQL, Ubuntu, Fedora, and CentOS.

You can even create your own private repository on Docker Hub and store your custom Docker images there using the docker push command. Docker also allows you to run your own private registry on your local machine using an image called ‘registry’. Once you run a container based on the registry image, you can use the docker push command to push images to this private registry.
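
A minimal sketch of this private-registry workflow, assuming Docker is installed locally (the repository name 'my-ubuntu' is just an illustrative choice):

$ docker run -d -p 5000:5000 --name my-registry registry:2   # start a local private registry
$ docker tag ubuntu:latest localhost:5000/my-ubuntu:latest   # retag an image for the local registry
$ docker push localhost:5000/my-ubuntu:latest                # push the image to the private registry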

  • Docker Objects

A Docker user frequently interacts with Docker objects such as images, containers, volumes, plugins, and networks. Now, you will see each of them briefly.

  • Docker Images - Docker images are read-only templates built up as multiple layers of files. You can build Docker images using a simple text file called a Dockerfile, which contains the instructions for building the image. The first instruction is a FROM instruction, which pulls a base image from a Docker registry. Once this base image layer is created, subsequent instructions build up the container environment, with each instruction adding a new layer on top of the previous one.

    A Docker image is simply a blueprint of the container environment. Once you create a container, Docker adds a writable layer on top of the image, in which you can make changes. The image holds all the metadata that describes the container environment. You can either pull a Docker image directly from Docker Hub or create your own customized image on top of a base image using a Dockerfile. Once you have created a Docker image, you can push it to Docker Hub or any other registry and share it with the outside world.

  • Docker Containers - Docker containers are isolated, encapsulated, packaged, and secured application environments that contain all the packages, libraries, and dependencies required to run an application. For example, if you create a container from the Ubuntu image, you get access to an isolated Ubuntu environment. You can also access the bash shell of this Ubuntu environment and execute commands.

    Containers have access to the resources that you define in the Dockerfile while creating the image. Such configurations include the build context, network connections, storage, CPU, memory, ports, and so on. For example, if you want a container with Java libraries installed, you can use the Java image from Docker Hub and run a container based on this image using the docker run command.

    You can also create containers from the custom images that you build for your application using Dockerfiles. Containers are very lightweight and can be spun up within a matter of seconds.

  • Networks - You can create a secured channel so that all the isolated containers in a cluster can communicate and share data or information. You can use several network drivers and plugins to achieve this; Docker networks form the base of communication in any Docker cluster (see the sketch after this list for an example). There are five chief types of Docker network drivers. They are -

    • Bridge Driver - The bridge network driver is mostly used when you have a multi-container application running on the same host machine. This is the default network driver.

    • Host Driver - If you don’t require any type of network isolation between the Docker host machine and the containers on the network, you can use the Host driver.

    • Overlay Driver - When you use Docker swarm mode to run containers on different hosts on the same network, you can use the overlay network driver. It allows different swarm services hosting different components of multi-container applications to communicate with each other.

    • Macvlan - The macvlan driver assigns a MAC address to each container in the network, so each container can act as a standalone physical host. The MAC addresses are used to route traffic to the appropriate containers. This is useful in cases such as migrating from a VM setup.

    • None - The only use of the none network driver is to disable networking for a container.

  • Storage - As soon as you remove a container, all the data written inside it is lost. To avoid this, you need a solution for persistent storage. Docker provides several options for persistent storage with which you can share, store, and back up your valuable data. These are -

    • Volumes - You can have Docker manage directories on the host machine and mount them as volumes inside containers. Volumes live in a part of the host machine’s file system that is outside the container's copy-on-write mechanism. Docker has several commands that you can use to create, manage, list, and delete volumes.

    • Volume Container - You can use a dedicated container as a volume and mount it to other containers. This container will be independent of other containers and can be easily mounted to multiple containers.

    • Directory Mounts - You can mount a local directory on the host machine into a container. With volumes, the directory lives under Docker's own storage area on the host; with directory mounts (also called bind mounts), you can easily mount any directory on your host as the source.

    • Storage Plugins - You can use storage plugins to connect to external storage platforms, such as arrays or appliances, by mapping them to the host's storage.
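
To tie networks and storage together, here is a minimal sketch that creates a user-defined bridge network and a named volume, then runs two containers that can reach each other by name (the names 'app-net', 'app-data', 'db', and 'web' are illustrative choices):

$ docker network create --driver bridge app-net   # user-defined bridge network
$ docker volume create app-data                   # named volume for persistent data
$ docker run -d --name db --network app-net -v app-data:/data redis:latest   # Redis persists its data in /data
$ docker run -d --name web --network app-net nginx:latest                    # 'web' can reach 'db' by name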

Basic Commands to Work With Docker

Check out some of the basic Docker commands that will help you pull or build Docker images, run Docker containers, list them, and so on.

Pulling an Image

You can use the docker pull command to pull a Docker image from the Docker Hub registry. The syntax of the docker pull command is -

$ docker pull [OPTIONS] NAME[:TAG|@DIGEST]

On executing this command, the Docker daemon first checks whether the image you want to pull already exists locally by comparing the image digests. If a match is found, it does not pull the image again. If not, it starts pulling the required image from the Docker Hub registry.

Let’s try to pull an Ubuntu image with the tag ‘latest’.

$ docker pull ubuntu:latest


You can also verify the pull by using the docker images command to list all the images.

$ docker images


This command lists all the images present on your local machine, along with parameters such as repository name, tag, image ID, creation date, and size.

Build Docker Images

Another way to create Docker images is by using a Dockerfile. You write instructions inside a Dockerfile to define your container environment, and then execute the docker build command mentioned below to build the image from that Dockerfile.

$ docker build [OPTIONS] PATH | URL | -

You can specify the name and tag of the image by using the -t option. You also need to specify the location of the Dockerfile, which can be any path on your local host or even a GitHub URL.

Now, use the Dockerfile below to create an Nginx web server inside a Docker container. Simply copy this text into a file called Dockerfile, without any extension.

FROM nginx:latest
COPY ./index.html /usr/share/nginx/html/index.html

Here, you used the FROM instruction to pull the Nginx base image from Docker Hub. This image contains the pre-built libraries for an Nginx environment. Next, the COPY instruction copies your index file to the directory /usr/share/nginx/html/, which is where the Nginx daemon looks for the files you want to serve.
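
If you don't already have an index.html, you can create a minimal one for this example (any HTML file will do):

$ echo '<h1>Hello from Nginx in Docker!</h1>' > index.html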

Your final directory structure should contain your index.html file and the Dockerfile; this directory also becomes your build context. From inside this directory, execute the docker build command mentioned below.

$ docker build -t webserver:latest .

Here, you specified the name of the image and the tag using the -t option. The next parameter is the location of the Dockerfile; a dot means that the Dockerfile is in the current directory.


In the build output, you can see that the instructions are executed step by step and that each instruction creates a new image layer. All these layers are read-only.

Create and Run a Docker Container

Now that you have successfully created your Docker image, you can use the docker run command to create and run a container based on it. The docker run command acts as a combination of the docker pull command (to pull the image if it is not present locally), the docker create command (to create a container), and the docker start command (to start the container).

The syntax of the docker run command is -

$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

Now, try to run a container based on the web server image you just built.

$ docker run -it --rm -d -p 8080:80 --name=myweb webserver:latest

Here, the -p option publishes port 80 of the container to port 8080 of the host machine, which allows you to access the served pages in a browser on your own machine. The -d option runs the container in detached (background) mode, and the --name option gives the container a name. The -i (interactive) and -t (pseudo-TTY) options give you interactive access to the container. Finally, the --rm option removes the container as soon as you stop it.


The container is now actively running, and you can access your served HTML page at localhost:8080 in your browser.
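
You can also verify this from the terminal (assuming curl is available on your machine):

$ docker ps                    # the 'myweb' container should be listed as Up
$ curl http://localhost:8080   # prints the contents of your index.html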



Other Useful Docker Commands

Some other useful Docker commands are - 

  • docker rm - You can use this command to remove one or more containers.
  • docker rmi - This command can be used to remove one or more images.
  • docker ps - This command lists all the containers that are actively running on your machine.
  • docker ps -a - You can use this command to list all the containers, including stopped ones.
  • docker exec - This command runs a command inside a running container.
  • docker start - This command starts a container that is in the exited state.
  • docker stop - This command stops a running container. A short sketch using some of these commands follows this list.
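
Here is that sketch, reusing the 'myweb' container from the previous section (remember that it was started with --rm, so stopping it also removes it):

$ docker exec -it myweb bash   # open an interactive shell inside the running container; type 'exit' to leave
$ docker ps                    # list the running containers
$ docker stop myweb            # stop the container; --rm removes it automatically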

You can check out the complete list and explanation of Docker commands.


Final Thoughts!

In this article, you looked at how Docker plays a significant role in the world of containerization. It is no surprise that Docker provides you with tons of tools and utilities: with the help of simple Docker commands, you can quickly and easily spin up a container and manage it. Further, you learned about the key components that make up the Docker architecture, including the Docker daemon, the Docker CLI, and the REST API. You also saw how you can leverage Docker objects such as images, volumes, networks, and containers to achieve the full power of containerization.

In the end, you also looked at some of the most frequently used Docker commands for pulling and building Docker images and for working with containers. You can check out this Docker Tutorial for Beginners; this comprehensive, step-by-step guide is all you need to get your hands dirty with Docker.

Why stop here? To give yourself a chance to work as a DevOps professional, you can check out the following courses provided by Simplilearn.

You can leverage these online courses to skill up and attain industry-grade certificates taught by Docker professionals.

For any queries or suggestions, please use the comment box. Our experts will get back to you as soon as possible.

Happy Learning!