Docker from the Crust to the Core: What and How It Works


To begin, know that Docker is a fundamental concept nowadays: a container is a convenient tool for packaging, shipping, and running applications without depending on specific hardware or software. But before we start, we must understand the difference between a “virtual machine” (VM) and a “container”.


What are containers and VMs?

First, the goal of both is the same: to isolate an application and its dependencies in a self-contained unit that can run anywhere.

Virtual Machines: A VM is an emulation of a physical computer that runs and executes programs just like a real machine. VMs run on top of a physical machine using a hypervisor.

Hypervisor: a piece of software or hardware that controls the virtual machines running on a physical computer, referred to as the “host machine.” The hypervisor provides the VMs with the virtual hardware resources they need, such as RAM and CPU, and splits these resources between them. So if a virtual machine is running a heavy application, the hypervisor can allocate more resources to it. A virtual machine never accesses the hardware directly; it always goes through the hypervisor.

Containers: The one big difference between containers and VMs is that containers share the host system’s kernel, while each VM runs its own operating system. Each container has its own workspace, isolated from the resources of other containers, but shares the host kernel with them.


What are the Docker components?

  • Docker Engine
  • Docker Client
  • Docker Daemon
  • Dockerfile
  • Docker Image
  • Union File Systems
  • Volumes
  • Docker Containers
  • Docker Registry


How does Docker work?

Docker Engine:

It is the layer on which Docker runs. Its primary responsibility is managing containers, images, builds, and more.

Docker Client:

It is the layer through which the user communicates with the Docker Daemon; it is essentially the UI for Docker.
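
For example, every docker command typed in a terminal goes through the client, which forwards the request to the daemon. A few typical commands (the image and container names are only illustrative):

    # Ask the daemon to download an image from a registry
    docker pull nginx

    # Ask the daemon to create and start a container from that image
    docker run -d --name my-web nginx

    # Ask the daemon to list running containers
    docker ps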

Docker Daemon:

It runs on the host machine and executes the commands sent by the Docker Client, such as creating, running, and distributing containers.
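
By default the client reaches the daemon through a local Unix socket, but the daemon can also listen on a TCP address. A rough sketch (the remote address below is just an example):

    # Default: the client talks to the local daemon through its Unix socket
    docker -H unix:///var/run/docker.sock ps

    # The same thing via an environment variable, pointing at a remote daemon
    export DOCKER_HOST=tcp://192.168.1.10:2375
    docker ps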

Dockerfile:

The Dockerfile is where we write the steps, or instructions, used to build a Docker image. Some types of instructions (a small example combining them follows the list):

  • ENV for creating environment variables > ENV API_URL www.example.com
  • RUN for executing commands while the image is built > RUN apt-get -y update
  • COPY for copying files and directories from the build context into the image’s filesystem > COPY . /usr/src/my-app
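
Putting these instructions together, a minimal Dockerfile could look like the sketch below. It assumes a hypothetical Node.js application whose entry point is index.js; the base image, paths, and URL are only examples:

    # Start from a base image (here, an official Node.js image)
    FROM node:18

    # Create an environment variable available at build time and at runtime
    ENV API_URL=www.example.com

    # Execute a command inside the image while it is being built
    RUN apt-get -y update

    # Copy files from the build context into the image
    COPY . /usr/src/my-app

    # Set the working directory and the command the container runs on start
    WORKDIR /usr/src/my-app
    CMD ["node", "index.js"]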

Docker Image:

A Docker image is a file made up of multiple layers, created from the instructions in a Dockerfile. The image is used to run code inside a Docker container: it is a complete, executable snapshot of an application, and when a Docker user launches it, it can become one or many container instances. Images are read-only, so to create a container Docker adds a read-write filesystem on top of the image’s read-only filesystem.
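
Building an image from a Dockerfile and starting containers from it looks roughly like this (my-app is just an example tag):

    # Build an image from the Dockerfile in the current directory
    docker build -t my-app .

    # List local images
    docker images

    # Launch two independent containers from the same read-only image
    docker run -d --name app-1 my-app
    docker run -d --name app-2 my-app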

Union File Systems:

Union file systems are used to build images: a Docker image is stored as a series of read-only layers. When we start a container, Docker takes the read-only image and adds a read-write layer on top. If the running container changes an existing file, that file is copied from the base read-only layer up to the top read-write layer, where the changes are applied.
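
You can inspect this layering yourself; as a sketch, using the example image and container names from above:

    # Show the read-only layers that make up an image
    docker history my-app

    # Show what the container has changed in its top read-write layer
    # (A = added, C = changed, D = deleted)
    docker diff app-1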

Volumes:

They are directories (or files) that live outside of the default union file system and exist as regular directories and files on the host filesystem; they are used to persist data and to share data between containers.
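
A quick sketch of both a named volume and a bind mount, using example names and paths:

    # Create a named volume managed by Docker
    docker volume create my-data

    # Mount the named volume into a container; data written to /var/lib/data
    # survives after the container is removed
    docker run -d --name db -v my-data:/var/lib/data my-app

    # Alternatively, bind-mount a host directory directly into the container
    docker run -d --name db2 -v /home/user/data:/var/lib/data my-app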

Docker container:

A Docker container encapsulates the application software, so it contains everything the application needs to run: the application code, runtime, system tools, system libraries, and settings. Docker containers are created from Docker images. When an image is launched, Docker creates a network interface so the container can talk to the local host, attaches an available IP address to the container, and performs the operation you specified to run your application. Once the container has been created, you can run it in any environment without making changes.
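
For instance, running a web server container and exposing it on the host could look like this (the name and ports are only examples):

    # Create a container from the nginx image, attach it to Docker's default
    # network, and map host port 8080 to container port 80
    docker run -d --name web -p 8080:80 nginx

    # The application is now reachable on the host
    curl http://localhost:8080

    # Stop and remove the container when done
    docker stop web
    docker rm web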

Docker Registry:

It’s a server-side application that stores Docker images and lets you distribute them. The registry is open source, and we use it to (a minimal setup is sketched after the list):

  • Tightly control where your images are stored
  • Own your image publication pipeline
  • Integrate image storage and deployment tightly into your internal development workflow
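
As a minimal sketch, you can run the open source registry yourself and push the example my-app image to it (the port and names are only illustrative):

    # Run the open source registry as a container on port 5000
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag a local image with the registry's address, then push it
    docker tag my-app localhost:5000/my-app
    docker push localhost:5000/my-app

    # Any machine that can reach the registry can now pull the image
    docker pull localhost:5000/my-app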


Conclusion:

Docker is a powerful technology that every software developer should learn and use, because applications are no longer tied, as they once were, to specific software or hardware requirements.

If you have any questions, please feel free to contact me or leave them in the comments.

To see similar work, which is also very useful for every developer or cognitive researcher, you can follow me on different social networks.
👉 YouTube, Twitter, LinkedIn 👈