Developers sometimes work with complicated environments. Setting up such an environment usually takes a lot of time and is painful. There is a solution for that called Docker.
The simplest thing you can do is install Docker and then use a single command to invoke some action in a container, for example listing the files inside a directory, and remove the container right away.
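With Docker installed, that single command looks like this:

```shell
# download alpine:3.10.0 if needed, run ls inside it,
# and remove the container (--rm) as soon as it exits
docker run --rm alpine:3.10.0 ls
```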
This will automatically download Alpine Linux version 3.10.0, invoke the shell command ls, print the output to the console and clean up the container.
Previously, after the n-th setup of some complicated environment, a person frustrated with the work would usually start using a virtual machine, most likely VirtualBox, to set up the environment and then pass this virtual machine around on a pen drive. This usually works in small teams but won't work when there are hundreds of developers working on the same codebase. Now there is a great tool that allows us to combine the processes running on that virtual machine into one program, extend it, change it and keep the environment clean. Most importantly, it runs everything with one command: when a new employee comes to the office, just tell him to install Docker, or preinstall Docker on his development machine. Then he only needs to download the repository with the code and type docker-compose up to see the application running.
Ok so let’s start.
The main concept of Docker is that you run a single program in a single container, so you need multiple containers to run multiple programs. To create a container you use a special file named Dockerfile, and to group containers there are many tools. I use docker-compose because it's simple for development purposes, although it's becoming less mainstream now that Docker comes with the Kubernetes orchestrator preinstalled.
An application in a container runs as long as its main process keeps the console blocked, and you can define what should happen after a crash (e.g. restart).
A Dockerfile is a simple KEY VALUE definition that allows the user to manage the dependencies of the software.
As you can see, there will be two containers defined by directories inside the docker folder: demo_db, which will serve MariaDB, and demo_webservice, which will serve the Flask web application. These two services will be managed by docker-compose using the docker-compose.yml file. The Flask application source code will live in workspace/demo_webservice, and my local Python environment in workspace/env/demo_webservice-env.
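The resulting layout could look like this (a sketch; only the paths mentioned above are shown):

```
docker/
├── docker-compose.yml
├── demo_db/
│   └── Dockerfile
└── demo_webservice/
    └── Dockerfile
workspace/
├── demo_webservice/          # Flask application source code
└── env/
    └── demo_webservice-env/  # local Python virtual environment
```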
OK, so let's start with picking a base image for demo_db from Docker Hub, because another main advantage of Docker is that you don't need to install everything from scratch. For MariaDB there are several options.
Let's pick the latest one, 10.4.6-bionic, as a base and create a simple Dockerfile.
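A minimal Dockerfile along those lines (the maintainer address is a placeholder; database-schema.sql is the script described below, placed in the directory the MariaDB image scans on first start):

```dockerfile
FROM mariadb:10.4.6-bionic
MAINTAINER maintainer@example.com
ADD database-schema.sql /docker-entrypoint-initdb.d/
```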
The convention is to make the Dockerfile as small as possible, so it won't take up too much space in a container registry if we want to use one, and it is more flexible to extend it later and define some parameters in the compose file.
- FROM is the base image name, followed by a version
- MAINTAINER is usually the email address of the person who maintains the container
- ADD is the command to add a file to the container; to complicate things a little, I will create the schema using pure SQL. According to the documentation, this script will be loaded when the container starts. Lines in a Dockerfile are very important and have a very powerful feature: each line has its own hash and commit (a layer). For example, suppose you add something on line 4.
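For instance, appending a fourth line to the Dockerfile above (the environment variable here is just an illustration):

```dockerfile
FROM mariadb:10.4.6-bionic
MAINTAINER maintainer@example.com
ADD database-schema.sql /docker-entrypoint-initdb.d/
ENV MYSQL_DATABASE=demo
```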
Docker will then only build the last line, as the first three lines are already cached as layers. But if you modify line 3, it will rebuild starting from line 3.
The SQL script inside the file database-schema.sql will simply create one table, like this:
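A sketch of such a schema for the todo application (the table and column names are my assumptions):

```sql
CREATE TABLE IF NOT EXISTS todo (
    id INT NOT NULL AUTO_INCREMENT,
    description VARCHAR(255) NOT NULL,
    done BOOLEAN NOT NULL DEFAULT FALSE,
    PRIMARY KEY (id)
);
```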
So now I can build the demo_db container to see if everything is working by running:
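From inside the docker/demo_db directory (the image name is my choice):

```shell
docker build -t demo_db .
```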
Then the built image can be seen by running:
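```shell
docker images
```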
To test whether the SQL script is working, I can run my demo_db instance from the command line and expose port 3306 to my local machine, so I can connect to it using a local MySQL client, or use the MySQL client inside the container once it is started.
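The run command might look like this (the password and database name are placeholders):

```shell
docker run --rm -d \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=demo \
  -p 3306:3306 \
  demo_db
```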
- --rm tells Docker to remove the container after it stops
- -d tells Docker not to block the current console and to start the container in the background
- -e sets environment variables, here the root password and the demo database
- -p maps ports; the first number is the local machine port, the second is the container port
The output of the command is the hash (ID) of the container that was just run. It is also possible to operate on a container by name by defining --name; otherwise the name is randomly generated. To see if the container is running, type:
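```shell
docker ps
```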
I don't need to have a MySQL client installed locally; I can simply go into the container shell by running:
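Using the container ID printed by docker run (or shown by docker ps):

```shell
docker exec -it <container-id> sh
# then, inside the container, connect with the bundled client:
mysql -u root -p demo
```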
This gives me a great ability to test new software without installing it on my local machine. When I stop the container by running:
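```shell
docker stop <container-id>
```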
it will disappear automatically, along with all my changes, so be sure to store changes somewhere other than inside the container if you need them. The best way is to treat a container as just another application runtime.
The table is there, but there is no data and no application, so let's build a simple one. I will use a Python virtual environment to keep my Python version separated. To create a new environment I type:
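Using the environment path mentioned earlier:

```shell
python3 -m venv workspace/env/demo_webservice-env
source workspace/env/demo_webservice-env/bin/activate
```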
Then I install my dependencies:
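Here I'm assuming Flask as the web framework and mysqlclient as the MariaDB driver (which explains the build dependencies needed later in the container):

```shell
pip install Flask mysqlclient
```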
And create a basic application inside demo_webservice/main.py. On my local machine this application connects to the database via localhost, but inside a container demo_webservice can resolve the demo_db name, so the host just needs to change to demo_db.
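A sketch of that connection code (the credentials are placeholders, and mysqlclient is my assumed driver; only the host differs between local and container runs):

```python
def connection_params(host="127.0.0.1"):
    """Build connection parameters; the host is "127.0.0.1" locally
    and "demo_db" inside the compose network."""
    return {
        "host": host,
        "port": 3306,
        "user": "root",
        "passwd": "secret",  # placeholder, matches MYSQL_ROOT_PASSWORD
        "db": "demo",
    }

def connect(host="127.0.0.1"):
    import MySQLdb  # imported lazily; assumes the mysqlclient package is installed
    return MySQLdb.connect(**connection_params(host))
```

Inside the container the same code simply calls connect("demo_db").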
My app is a simple REST webservice with some basic routes to manage a todo list.
It uses the JSON format for information exchange between the backend and the frontend.
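A minimal sketch of such a service (the route and helper names here are illustrative; the real application would query MariaDB instead of an in-memory list):

```python
TODOS = []  # in-memory store standing in for the MariaDB table

def add_todo(text):
    """Append a todo item and return it."""
    todo = {"id": len(TODOS) + 1, "text": text, "done": False}
    TODOS.append(todo)
    return todo

def create_app():
    from flask import Flask, jsonify, request  # assumes Flask is installed

    app = Flask(__name__)

    @app.route("/todos", methods=["GET"])
    def list_todos():
        return jsonify(TODOS)

    @app.route("/todos", methods=["POST"])
    def create_todo():
        return jsonify(add_todo(request.json["text"])), 201

    return app

if __name__ == "__main__":
    # 0.0.0.0 so the service is reachable from outside the container
    create_app().run(host="0.0.0.0", port=5000, debug=True)
```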
This way the frontend of the application is just one 4 KB non-minimized file named index.html.
After both the frontend and the backend are created, it's time to create a Dockerfile for demo_webservice. This container will be a little more complicated, as I need to get some dependencies for the database connection and install some Python dependencies inside the container.
Let's start by saving the Python dependencies to a requirements.txt file by typing:
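```shell
pip freeze > requirements.txt
```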
Here is the Dockerfile for demo_webservice:
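A sketch reconstructed from the line-by-line description below (the exact apk package names are my assumptions for building a native MySQL driver on Alpine):

```dockerfile
FROM python:3.7.3-alpine3.8
MAINTAINER maintainer@example.com
RUN apk add --no-cache gcc musl-dev mariadb-dev
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
WORKDIR /app
```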
So this time the base for the container is python:3.7.3-alpine3.8.
The third line adds the build dependencies for the Python packages.
Then I copy my requirements.txt to the base directory of the container.
After that I simply install those dependencies.
At the end I set the directory that will be the working directory whenever the container runs.
I can test my build by typing, in the docker directory:
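Assuming the image name matches the directory:

```shell
docker build -t demo_webservice demo_webservice/
```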
To remove images from my local Docker repository I can pass either the image name:version or the IMAGE ID. So for now let's remove the images and run everything with a single command.
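For example, by name:

```shell
docker rmi demo_db demo_webservice
```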
To start everything using one command I need docker-compose. Docker-compose keeps the configuration of the docker run parameters in a single YAML file. This way I don't need to worry about how to run each container and I can focus on development.
So here is what my compose YAML file looks like. I will go through all the parameters below the definition.
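A sketch of such a file, reconstructed from the description that follows (the password, paths and start command are placeholders):

```yaml
version: "3"

services:
  demo_db:
    build: ./demo_db
    image: demo_db
    container_name: demo_db
    hostname: demo_db
    restart: always
    networks:
      - demo_network
    volumes:
      - ./data:/var/lib/mysql
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: demo

  demo_webservice:
    build: ./demo_webservice
    image: demo_webservice
    container_name: demo_webservice
    hostname: demo_webservice
    restart: always
    networks:
      - demo_network
    depends_on:
      - demo_db
    volumes:
      - ../workspace/demo_webservice:/app
    ports:
      - "5000:5000"
    command: python main.py

networks:
  demo_network:
```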
Starting from the top: the first line is the docker-compose file version.
Then I defined the containers as services; in this demo there are only two services.
Next, inside each service, there are the definitions for docker-compose. First is the directory docker-compose should use to build the container image, then which image to use, how to name the container, and how the container should be visible inside the internal network.
After that there is what to do when the container crashes, the definition of the container's network, the dependencies of this container (we can define multiple dependencies, so in theory the container won't start until its dependencies have started), the definition of mapped directories, the ports exposed inside the container network, the ports exposed from the container to the external network (my local machine), the command that starts the container, and also the environment variables.
Finally, there is a simple network definition, demo_network, so only containers inside this network can communicate with each other.
OK, so to start the application I need to go to the docker directory and type:
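```shell
docker-compose up
```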
It takes a little time for the database to initialise for the first time, so some errors will be visible from the Python application, as it can't connect to the database right away, but in the end everything will work.
To start compose without blocking the terminal:
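```shell
docker-compose up -d
```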
After all of these struggles I can navigate to localhost:5000 on a Mac (or, on Windows or Linux, probably to the IP returned by the following command) and see the web application.
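On Docker Toolbox setups that command is most likely docker-machine ip (an assumption; on native Linux, localhost usually works directly):

```shell
docker-machine ip default
```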
Because the application runs in debug mode and the container has the parameter restart: always, I can modify main.py and see the results in the browser or inside the container immediately, without bothering to rerun my code after changes.
As you can see, the first start creates the docker/data directory where MariaDB stores all of the database data. When I stop the environment I can zip this directory and save it if I want a snapshot of my database.
Some more useful commands for compose: to remove all the containers, their data and the compose network:
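Here, down removes the containers and the compose network, and -v additionally removes the volumes:

```shell
docker-compose down -v
```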
Rebuild of environment
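One way to force a rebuild of the images:

```shell
docker-compose build --no-cache
```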
That’s all for this post.
As always, all the code is available on GitHub.