Demystifying Docker: An Introduction
Many years back, a server was capable of running only one application. So at that time, if you had 2000 applications, then you needed 2000 servers. Such a dreadful scenario it was! VMware tried solving this problem using virtual machines, which enabled us to run several applications on a single server. But there was a catch: in virtual machines, each application needs a dedicated operating system. So if you are running four applications on the same server, four full OS instances will be there, each consuming its own share of CPU, memory, and disk. Another problem was that if the host machine went down, it took with it all the projects hosted in virtual machines on that machine.
So, as a cure to the above problems, containers were invented. Containers are isolated from one another: a process in one container cannot read the files of another, and a failure in one container is not going to affect the others. The container is like a whole package that takes care of everything, like the code files and the dependencies of your project, and with the help of this containerization, your developer friend will be able to run your project super-easily!!
🤔What is Docker Image?
📌A docker image is a snapshot or a blueprint of a complete environment for an application.
📌It includes everything that an application needs to run like libraries, config, dependencies, etc.
📌It essentially encapsulates the application and all its requirements together.
📌We can create our own images, or we can use images provided by third parties; you can find these third-party images on DockerHub.
🤔What is Docker Container?
📍A docker container is like a magical box stamped out from a docker image; from one image, you can create as many identical containers as you want.
📍A container is an actual instance of the environment configured by the image.
📍When we run a docker image, it creates a live, running container.
📍These containers are isolated and lightweight, and they feel like little virtual machines (though they are not).
A big difference between containers and a virtual machine is that containers share the host OS Kernel, which makes them lightweight.
Suppose you have to run a project on three different machines with different operating systems: Linux, Windows, and Mac. Now you are going to deal with issues where some dependency works on one OS while on another OS it throws errors.
But when containers come into the picture, we can run the same docker container on all three machines, and the container will take care of how to deal with the different underlying operating systems. It has nothing to do with the dependencies already installed on the host; it is a completely isolated environment. Yes, that cool a container is!!
🧨Once we have a docker image, we use these commands to run the container:
🦉docker run -it --rm imageName
-i --> interactive mode, -t --> attaches a terminal, --rm --> removes the container automatically when it exits
🦉To view information about running docker containers: docker ps
🦉To kill/exit a docker container: docker kill containerId
🦉To remove a docker container: docker rm containerId
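For example, here is a minimal session using the public ubuntu image (chosen just for illustration):
docker run -it --rm ubuntu      # starts an interactive Ubuntu container
docker ps                       # in another terminal: lists running containers
docker kill containerId         # stops it, using the ID shown by docker ps
docker rm containerId           # only needed if the container was started without --rm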
🧨Running a docker container in the background
🦉docker run --detach -it imageName
🧨To attach to a docker container running in the background:
🦉docker attach containerId
🧨We can also give a custom name to the docker container
🦉 docker run -it --name nameYouWant imageName
🧨To get more information about the docker image:
🦉docker inspect imageName
🧨To pause a container:
🦉docker pause containerId
🧨To unpause a container:
🦉docker unpause containerId
🧨Suppose a docker container is running in the background. Then we can run commands on this container without attaching to it, like:
🦉docker exec -it containerId bash --> going inside the bash shell
or
🦉docker exec -it containerId ls
🧨To create your own docker image:
🦉First create a Dockerfile. Then run docker build -t nameYouWantToGive . (the trailing dot tells docker to build from the current directory)
🦉To make a container from your image: docker run -it yourImageName
🧨Dockerizing a local project
Suppose I have a node project containing an index.js file (a minimal one is sketched below).
Now, to dockerize it, make a Dockerfile.
Delete node_modules from this project (the dependencies will be reinstalled inside the image).
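If you want to follow along, a minimal index.js could look like this (a hypothetical sketch; any Node script works):
// index.js -- a tiny HTTP server, assumed purely for illustration
const http = require('http');
const server = http.createServer((req, res) => {
  res.end('Hello from inside a container!');
});
server.listen(3000, () => console.log('Listening on port 3000'));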
In the Dockerfile you can write something like:
FROM node
WORKDIR /developers/nodejs/mysys
COPY . . -->That means copy everything from current directory to the WORKDIR
RUN npm ci
CMD ["node", "index.js"]-->Once the container will be booted up this command will run
🦉Now build the image: docker build -t yourImageName .
🦉docker run -it yourImageName
🧨If you want to properly control (and cleanly stop) the process inside your container from your host machine, then run it like this:
🦉docker run -it --init imageName
🧨Suppose you want to expose some port of your container so that you can access it from your local machine:
🦉docker run -it --init --publish portOnYourMachine:exposedPortOfContainer imageName
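For example, if the Node server sketched earlier listens on port 3000 inside the container, a concrete run (the image name my-node-app is assumed) could be:
docker run -it --init --publish 3000:3000 my-node-app
# http://localhost:3000 on your machine now reaches the app inside the container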
🧨Clean up the whole system: delete every stopped container, unused image, network, etc.
🦉docker system prune -a
🧨To see all docker images
🦉docker image ls
🎉The difference between Docker exec and run is:
🎭Docker run always creates a new container from an image and then works in it. It always takes an imageName.
🎭Docker exec always works on an already running container. It takes the container name or ID.
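A quick contrast (the container name web and image name my-node-app are assumed for illustration):
docker run -it --name web my-node-app    # creates a NEW container named web from the image
docker exec -it web bash                 # opens a shell in the EXISTING running container web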
🎊Remember, the Dockerfile name is case sensitive; it must be spelled exactly Dockerfile.
👓Dockerizing Project from GitHub
Suppose you have a project on GitHub and you want to run it in a container. These are the steps you have to follow:
Make an empty folder.
Make a Dockerfile in it. Into this file you can write:
FROM node -->using node as the base image
WORKDIR /coding/files
RUN apt-get update && apt-get install -y git
#RUN command will run these lines in CLI for setting up the project
RUN git clone linkToGitRepo
WORKDIR /coding/files/yourRepoFolder -->moving into the cloned folder before installing
RUN npm ci
CMD ["npm", "start"]-->assuming the repo's package.json defines a start script
🔐Dockerizing a Python Project
Suppose we have a flask project and we want to dockerize it. So this is how it can be done:
Firstly, make an app.py file:
from flask import Flask

app = Flask(__name__)

@app.route('/home')
def execute():
    return 'hello'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3005)
Secondly, make a Dockerfile:
FROM python -->taking the official python image as the base
WORKDIR /myfiles/newfolder/ -->directory inside the container
COPY . . -->copying everything from the source directory to the destination directory
RUN pip install --no-cache-dir Flask -->installing Flask while building the image
CMD ["python", "app.py"] -->this will run while booting up the container
Now build this image:
docker build -t nameYouWant . -->the dot signifies the current directory; Flask and everything else gets installed in this step itself
Now run this container:
docker run -it --init --publish portOnYourMachine:portOnContainer imageName
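With the Flask app above listening on port 3005, a concrete run-through (the image name flask-demo is assumed) could be:
docker build -t flask-demo .
docker run -it --init --publish 3005:3005 flask-demo
# http://localhost:3005/home should now answer with 'hello'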
🔖Bind Mount
Suppose I have dockerized a project and now I am making some changes in the files in my local machine. Now I want to reflect those changes also in the files in the docker container. So for that, we have to use this command:
docker run -it --init -p portOnOurMachine:portOnContainer -v "${pwd}":directoryPathOnContainer imageName
Bind Mount is like a two-way pipeline, If you will be making some changes in the container they will be visible in your files of the local machine, or if you will be doing some changes in the file of your local machine, they will be visible in the container.
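For instance, with the Node project from earlier (paths and names assumed; use "$(pwd)" in bash, "${pwd}" in PowerShell):
docker run -it --init -p 3000:3000 -v "$(pwd)":/developers/nodejs/mysys my-node-app
# edits to files in your current folder now appear instantly inside the container, and vice versa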
🌈Docker Volume
Suppose you want some data to be independent of the container it belongs to. You want that data to persist whether or not the container is running, and even if the container is deleted. That data should also be shareable between different containers. One use case: you can put node modules in a Docker Volume.
docker volume create nameYouWant
docker run -it --init -p portOnOurMachine:portOnConatiner -v "${pwd}":directoryPathOnContainer -v nameYouGivenAbove:filePathInContainer imageName
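A concrete sketch (volume, path, and image names assumed):
docker volume create node_modules_cache
docker run -it --init -p 3000:3000 -v "$(pwd)":/developers/nodejs/mysys -v node_modules_cache:/developers/nodejs/mysys/node_modules my-node-app
# the bind mount syncs your source code, while node_modules lives in the named volume and survives container deletion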
🚀Communication between microservices using Docker
Two Docker containers cannot talk to each other by name out of the box. But there are cases when we want communication between docker containers. So for that, we need a bridge network.
⚡To list out all the bridges:
🌊docker network ls
⚡To create a bridge:
🌊docker network create bridgeName
⚡To inspect this bridge:
🌊docker network inspect nameYouHadGiven
🌄On running the docker system prune command, all unused custom bridges also get deleted
Now while booting up both the containers we have to give custom names to the containers:
🌞docker run -it --init --name nameYouWant --network bridgeName -p portOnOurMachine:portOnContainer -v "${pwd}":directoryPathOnContainer -v nameYouGivenAbove:filePathInContainer imageName
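For example, to let an app container reach a database container by name (all names here are assumed):
docker network create my-bridge
docker run -d --name db --network my-bridge mongo
docker run -it --init --name api --network my-bridge -p 3000:3000 my-node-app
# inside the api container, the database is now reachable at the hostname db (e.g., mongodb://db:27017)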
🌌Docker Compose
Suppose we have 5 projects booted up in different containers. Managing them all individually is a tedious task. So to make this easy we use Docker Compose. With Compose, we use a YAML file to configure our application's services. Then, with a single command, we create and start all the services from that configuration.
For Docker compose we firstly have to create a file named docker-compose.yml:

version: "3"

networks:
  networkName:
    driver: bridge

# Create the volumes here only if you have not already created them
volumes:
  volume1:
  volume2:

services:
  project1:
    build: "relative path to that project's folder"
    networks:
      - networkName
    ports:
      - "3000:3001"
    volumes:
      # Bind mount
      - ./Project1:/develop/node/newFolder
      # Docker volume
      - volume1:/pathOnContainer
  project2:
    build: "relative path to that project's folder"
    networks:
      - networkName
    ports:
      - "3002:3001"  # host ports must differ between services
    volumes:
      # Bind mount
      - ./Project2:/develop/node/newFolder
      # Docker volume
      - volume2:/pathOnContainer

##Indentation matters in a yml file
Now run this command:
docker compose up -d
###Now all the microservices can be started and controlled with this single command
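And when you are done, a single command stops and removes everything the compose file created:
docker compose down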
💥Pushing your image to DockerHub
🥇docker login
#tag the image with your DockerHub username
🥇docker tag imageName username/nameYouWant
🥇docker push username/nameYouWant
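A concrete run-through (yourUsername and flask-demo are placeholders):
docker login
docker tag flask-demo yourUsername/flask-demo
docker push yourUsername/flask-demo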
So this sums up this article. I hope you enjoyed the read! This blog post has been crafted with the invaluable insights and knowledge gained from the Node.js Backend Course, which was expertly led by Sanket Singh.