Docker Infrastructure

Muhammad Hanif
3 min read · May 24, 2021

--

What is Docker?

Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.

Docker Infrastructure

We talked a bit about containers and images in the previous sections, but containers and images aren't the only essential components that make Docker work. Visualized, the entire Docker architecture looks like this:

Source: https://docs.docker.com/get-started/overview/

Docker Client

The command-line application users use to enter commands. Docker commands use the Docker API to communicate with the Docker daemon, so if the daemon isn't running, Docker commands won't work.
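
For example, a few basic client commands look like this (a minimal sketch; hello-world is just an illustrative image, not something from our project):

# Check that the client can reach the daemon
docker version

# Run a container from an image (pulls the image first if it is missing)
docker run hello-world

# List running containers
docker ps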

Docker Daemon

A background process that listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.
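
On a typical Linux host the daemon runs as a background service. Assuming Docker was installed as a systemd service (an assumption, not part of our project setup), you can check and start it like this:

# Check whether the Docker daemon is running
systemctl status docker

# Start it if it is not
sudo systemctl start docker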

Docker Images

A read-only template with instructions for creating a Docker container. Using an existing image is simple: you just pull it from a registry and use it. Building a new image, however, requires a Dockerfile, which contains the instructions for creating the image. You can also base an image on another image and customize it to your liking. Once you've built an image, it is static.
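
As a minimal sketch, a Dockerfile that bases a new image on an existing one might look like this (the base image and file names here are placeholders, not our project's actual Dockerfile):

# Dockerfile: customize an existing base image
FROM python:3.9-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

Pulling an existing image and building your own then come down to two commands:

# Pull an existing image from a registry
docker pull python:3.9-slim

# Build a new image from the Dockerfile in the current directory
docker build -t myapp:latest .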

Docker Containers

A runnable instance of an image. Containers can be created, started, stopped, moved, and deleted through the Docker daemon.
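
The basic lifecycle through the docker CLI looks like this (the image and container names below are placeholders):

# Create and start a container from an image, detached, with a name
docker run -d --name web myapp:latest

# Stop and start it again
docker stop web
docker start web

# Remove the container once it is stopped
docker rm web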

Container Registry

A registry that stores images users can pull from and push to. It can be used to ship your images to your production server and to drive CI/CD (more on that later).
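
Pushing to and pulling from a registry looks roughly like this (the registry URL and image names are placeholders, not our actual registry):

# Tag the local image with the registry's address
docker tag myapp:latest registry.example.com/myteam/myapp:latest

# Log in and push the image
docker login registry.example.com
docker push registry.example.com/myteam/myapp:latest

# On another machine (e.g. the production server), pull it back down
docker pull registry.example.com/myteam/myapp:latest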

Docker implementation in our project

For our project, we use Docker to deploy our application to Heroku. Conveniently, Heroku has an API that automatically pulls our latest image and rebuilds our application from it, which makes CI/CD much easier (recall that I mentioned CI/CD earlier in the article).
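
Done by hand with the Heroku CLI, a container deployment looks roughly like this (a sketch only; the app name is a placeholder and these are not necessarily the exact steps our pipeline runs):

# Log in to the Heroku container registry
heroku container:login

# Build and push the image for the web process
heroku container:push web --app my-staging-app

# Release the pushed image so Heroku serves it
heroku container:release web --app my-staging-app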

For our Dockerfile, we use the exact same Dockerfile that I used for the example in the previous section. All that's left to do is edit our ci.yml file to accommodate Docker orchestration.

In our "staging" stage (which is exactly the same as deployment, just to a different URL for beta testing), we first log in to the Heroku container registry with our credentials, which are already set as environment variables. After that, we pull the latest version of our image (the one prior to this build). Then we build the image using our Dockerfile, using caching to speed things up.
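
A sketch of what that staging job might look like in ci.yml (the stage name, variable names, and image path are placeholders, not our actual configuration):

staging:
  stage: staging
  image: docker:latest
  services:
    - docker:dind
  script:
    # Log in to the Heroku container registry with an API key from CI environment variables
    - docker login -u _ -p "$HEROKU_API_KEY" registry.heroku.com
    # Pull the previous image so it can serve as a build cache (ignore failure on the first run)
    - docker pull registry.heroku.com/$STAGING_APP/web || true
    # Build the new image from our Dockerfile, reusing cached layers
    - docker build --cache-from registry.heroku.com/$STAGING_APP/web -t registry.heroku.com/$STAGING_APP/web .
    # Push the result so Heroku can release it
    - docker push registry.heroku.com/$STAGING_APP/web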

How it runs in our project

First, we push our code to GitLab. Our runner on AWS then automatically builds the Docker image and pushes it to the registry (via gitlab-ci). The registry stores this newest image. Last but not least, our app automatically pulls the newest image we created and updates itself from that image.
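
From the developer's side, the whole chain is kicked off by an ordinary push (the branch name here is a placeholder):

# Commit and push; gitlab-ci and the runner handle the rest
git add .
git commit -m "update app"
git push origin staging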
