Shiva's Blog

Introduction to Docker and containerization

Shiva Bhusal · Jan 10, 2019 · 10 min read


In computer science, containerization is the practice of running a process in a small container, isolated from the other processes running on the system, with its own operating environment.

Any dependencies of the application are packaged within the container.

Linux Containers (LXC)

The concept of containerization was not introduced by Docker. The APIs behind LXC have been in the Linux kernel since 2008, providing powerful ways to create containers with their own memory, processors, and file system. In the project's own words: " is the umbrella project behind LXC, LXD and LXCFS. The goal is to offer a distro and vendor neutral environment for the development of Linux container technologies." —

Ruby API for LXC

Ruby-LXC provides a Ruby API so that you can create Linux-based containers programmatically.

Use cases:

  • Platform as a Service (PaaS) offerings like Heroku or DigitalOcean. You can monitor the containers and charge users based on usage and bandwidth consumed.
  • You can deploy applications built around a microservice architecture.
  • You can build a CI/CD platform like Travis CI, GitLab, etc.

For more info, see the project's documentation.


Currently (as of 2019), ruby-lxc is not published to, so you install it in your project via GitHub:

gem "ruby-lxc", github: "lxc/ruby-lxc", require: "lxc"

Life Cycle management of containers

To build a PaaS service, you need to be able to create a container at runtime, run commands in it, monitor system attributes at any time, shut the container down, back it up, and destroy it.

create, start, stop and destroy

  require 'lxc'

  c ='foo')
  c.create('ubuntu') # create a container named foo from the ubuntu template

  # attach to the running container and run a command inside it
  c.attach do
    LXC.run_command('ifconfig eth0')
  end


It creates the file-system structure for the container according to the given template. This usually consists of downloading and installing a Linux distribution inside the container's root file system.

Other tasks like cloning and inspection are also easy. See the GitHub docs for more info.


Hypervisors

These are pieces of software that create an abstraction layer between the physical machine you have and the operating systems you wish to run on top of it. Example: you have an x86 machine and wish to run macOS, Ubuntu, and Windows 8 on it simultaneously. You can easily switch between the OSes.

There are two types of hypervisors:

  • Type 1: they run directly on the machine, without the help of any host OS.

  • Type 2: they run on top of a host OS.

Type 1 and Type 2 Hypervisors

Docker / Docker Engine

Docker is a containerization tool that makes it possible to create containers on all the major OSes: Linux, Windows, and macOS. Containers are natively supported by the Linux kernel but not by the other OSes, so on those platforms Docker uses hypervisors like VirtualBox and Hyper-V to run a small Linux distribution such as Boot2Docker on the host OS, via the docker-machine tool.


Installation

For macOS and Windows, installing Docker Desktop installs all the tools: docker, docker-machine, and docker-compose.

For Linux: Ubuntu

First, remove any previously installed Docker packages and tools.

sudo apt-get remove docker docker-engine containerd runc

then, update the apt package index

sudo apt-get update

and install packages that allow apt to fetch over HTTPS

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
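The post stops here, but Docker's official Ubuntu instructions continue with adding Docker's GPG key and package repository before installing the engine itself. A sketch of those remaining steps as they stood around 2019 (check the current Docker docs before running them):

```
# Add Docker's official GPG key
curl -fsSL | sudo apt-key add -

# Add the stable package repository for your Ubuntu release
sudo add-apt-repository \
   "deb [arch=amd64] \
   $(lsb_release -cs) \
   stable"

# Install the Docker engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli
```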

Dockerizing a simple Ruby App

  • Make a directory: mkdir simple_rack_app
  • Go inside that dir: cd simple_rack_app
  • Create a Ruby app

    • We will use Bundler as the dependency manager, with a Gemfile to declare the dependent libraries
  • Create a, Gemfile, my_rack_app.rb, and Dockerfile with the following content




require File.absolute_path('./my_rack_app', __dir__)



# Gemfile

# without this line, the following error will occur
# -----------------------------------------------------------------------
# Your Gemfile has no gem server sources. If you need gems that are not already on
# your machine, add a line like this to your Gemfile:
# source ''
# Could not find concurrent-ruby-1.1.5 in any of the sources
# ------------------------------------------------------------------------------
source ''

gem 'rack'
gem 'puma'
gem 'faker'


# my_rack_app.rb

require 'rack'
require 'faker'

class MyApp
  # this rack app will return a random string every time you reload the browser
  def call(env)
    ['200', { 'Content-Type' => 'text/html' }, ["A barebones rack app. #{}"]]
  end
end


# Dockerfile

# Use an official Ruby 2.6.3 runtime as a parent image
FROM ruby:2.6.3

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install latest version of bundler
RUN gem install bundler

# Install any needed packages/gems
RUN bundle install

# Make port 3000 available to the world outside this container
# if not exposed, you need to publish it with the -p flag, like
# docker run -p 4000:3000 -t simple_rack_app
EXPOSE 3000

# Run puma when the container launches
# I am running the app in the container on port 3000; you can run it on any port;
# if you choose 80, do not forget to expose port 80 outside the container
CMD puma -p 3000


Now we are going to build the Docker image, which we will run every time we need to test, or deploy to production. We can also publish the image publicly.

docker build --tag=simple_rack_app .

This will create an image named simple_rack_app in your local-machine.

Actually, it runs all the commands in a temporary container and, after successful execution, stores the result as an image in the local file system. If you check the file size of the image, it is normally a huge 800–900 MB. If you want the container to be online/live, you need to run the image using the docker run command.
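One way to keep images from ballooning is a .dockerignore file next to the Dockerfile, so that COPY . /app skips files the app does not need at runtime. The entries below are typical examples, not from the original project:

```
# .dockerignore
.git
log/
tmp/
*.md
```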

You can see the list of all the images in your file-system using this command.

docker image ls

You can also specify a version for this particular build: --tag=simple_rack_app:v0.0.1


REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
simple_rack_app     latest              3389ebd1a559        2 hours ago         883MB
<none>              <none>              decd859ed7bc        2 hours ago         842MB
ruby                2.6.3               8fe6e1f7b421        8 days ago          840MB
hello-world         latest              fce289e99eb9        7 months ago        1.84kB

Running Image

# in Linux
docker run -p 4000:3000 -t simple_rack_app

# In Mac try this
docker run -p 3000:3000 -t simple_rack_app


Puma starting in single mode...
* Version 4.0.1 (ruby 2.6.3-p62), codename: 4 Fast 4 Furious
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://
Use Ctrl-C to stop

Also, it responds to CTRL-C

Note: Remember to pass the -t flag so that Docker can attach a pseudo-TTY to the child process (our container). Otherwise, the container won't respond to Ctrl+C, and to terminate it you will need to stop the container or restart the Docker engine.

Viewing all open Containers

> docker container ls

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
f63d55a711ea        simple_rack_app     "/bin/sh -c 'puma -p…"   2 hours ago         Up 2 hours          3000/tcp                 cranky_chatelet
30b2faf38218        simple_rack_app     "/bin/sh -c 'puma -p…"   2 hours ago         Up 2 hours>3000/tcp   amazing_hermann

Stopping Container

> docker stop f63d55a711ea


> docker container stop f63d55a711ea
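Stopped containers still take up disk space. Once a container is no longer needed, it can be removed with standard Docker commands:

```
# remove a single stopped container by ID
docker container rm f63d55a711ea

# or remove all stopped containers at once
docker container prune
```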


Docker Machine

Docker Machine helps you install the Docker engine on multiple virtual hosts. The docker-machine command is used to provision virtual machines using various drivers like VirtualBox, VMware, Hyper-V, etc.
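As a quick sketch, provisioning a VirtualBox-backed host and pointing your shell at it looks like this (the machine name default is arbitrary):

```
# create a VM named "default" running the Docker engine
docker-machine create --driver virtualbox default

# point the current shell's docker CLI at that machine
eval $(docker-machine env default)

# list the machines docker-machine knows about
docker-machine ls
```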


Docker Storage

You would definitely wish your data to persist no matter how many times you rebuild your image. Normally this won't be the case: every time you rebuild the image, all the data you created inside it during the session is wiped out, because all files created inside a container are stored on a writable container layer. This layer is also private to the container and cannot be shared with fellow containers on the same host.

Types of Mounts in docker

Sharing Codebase in Development between host and Container

In development, you want your Rails application to auto-load changed modules so that changes are reflected in the browser when you reload. You won't want to rebuild the image every time you change your code base; that's really disruptive.

Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.


docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx:latest
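Applied to the simple_rack_app image built earlier, a bind mount lets the container see your working copy directly, so edits on the host show up without a rebuild (a sketch; run it from the project directory):

```
docker run -t \
  -p 3000:3000 \
  --mount type=bind,source="$(pwd)",target=/app \
  simple_rack_app
```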

Persisting data between multiple builds, e.g. a database

The best way to make data persistent between rebuilds is to create a volume on the host machine and share it with the containers.


# Create a new volume
docker volume create bundler-cache-vol

# Check if the volume is created
docker volume inspect bundler-cache-vol

# Run an image using the same volume
docker run -t \
  -p 3000:80 \
  --mount source=bundler-cache-vol,target=/bundle \
  simple_rack_app

Making database persistent:

# Create a new volume
docker volume create pg-database-vol

# Check if the volume is created
docker volume inspect pg-database-vol

# Run an image using the same volume
docker run -t \
  -p 3000:80 \
  --mount source=pg-database-vol,target=/var/lib/postgresql/data \
  postgres:9.6

Docker Compose


Docker Compose is a tool that helps you run multiple containers that belong to the same application. Normally, you use docker-compose to manage the dependencies/services of the application. For example, your Rails app needs services like PostgreSQL, Redis, and Nginx at the same time; you spawn containers for those and link them by exposing some ports. You define the configuration in a docker-compose.yml file.


Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.


For Mac and Windows users, when you install Docker Desktop, Docker Compose comes along.

For Linux users, download the binaries to /usr/local/bin/

sudo curl -L "$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# make it executable
sudo chmod +x /usr/local/bin/docker-compose

test if it runs

docker-compose --version

docker-compose version 1.24.1, build 1110ad01

If that fails, create a symbolic link at /usr/bin/docker-compose:

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose


It is mainly used to orchestrate the multiple services needed to run an application in different environments. You define the dependencies as services and choose, for each of them, the image to use, the ports to expose, and the locations to mount shared volumes and directories.

It is basically a three-step process:

  • Define your app’s environment with a Dockerfile so it can be reproduced anywhere.

  • Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.

  • Run docker-compose up and Compose starts and runs your entire app.

A sample docker-compose.yml

version: '3'

services:
  postgres:
    image: postgres:9.6
    ports:
      - '5432:5432'
    volumes:
      - postgres:/var/lib/postgresql/data

  app:
    build: .
    command: bundle exec rails s
    env_file:
      - .env
    volumes: # maps the volume `bundle` to `/bundle` to cache gems
      - bundle:/bundle
      - .:/app
    ports:
      - '3000:3000'
    depends_on:
      - postgres

volumes:
  bundle:
  postgres:
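With a compose file in place, the day-to-day workflow is a handful of standard Compose commands (the service name app here stands for whatever you named your application service):

```
# build images and start all services in the background
docker-compose up -d

# list running services and tail one service's logs
docker-compose ps
docker-compose logs -f app

# stop and remove the containers and networks (named volumes survive)
docker-compose down
```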




Conclusion

Docker is a very useful tool for making sure that everything that runs in development also runs in production. It also helps us configure the development environment when a new developer joins the team; previously, it used to take us a whole day to install all the dependencies and prepare seed data.

Docker Compose helps configure dependencies into separate containers and orchestrate them with a single command. We can easily peek into the containers to see logs and other variables.

Docker Machine is a tool to spawn multiple virtual or physical machines, to deploy the application in a microservice fashion.
