Containerization
In computer science, containerization is the practice of running a process inside a small container, isolated from the other processes on the system, with its own operating environment.
Any dependencies of the application are packaged inside the container.
Linux Containers (LXC)
The concept of containerization was not introduced by Docker. The API for LXC has been in the Linux kernel since 2008. It provides powerful APIs to create containers with their own memory, processors and file-system.
linuxcontainers.org is the umbrella project behind LXC, LXD and LXCFS. The goal is to offer a distro and vendor neutral environment for the development of Linux container technologies. -- https://linuxcontainers.org
Ruby API for LXC
Ruby-LXC provides a Ruby API so that you can create Linux-based containers programmatically.
Use cases:
- Platform as a Service (PaaS) offerings like Heroku or DigitalOcean. You can monitor the containers and charge the user based on usage and bandwidth consumed.
- Deploying applications built with a micro-service architecture.
- Building a CI/CD platform like Travis, GitLab, etc.
For more info see https://github.com/lxc/ruby-lxc
Installation
Currently (as of 2019), ruby-lxc is not published to RubyGems.org, so you install it in your project via GitHub:
gem "ruby-lxc", github: "lxc/ruby-lxc", require: "lxc"
Life Cycle management of containers
To build a PaaS service, you need to be able to create a container at runtime, run commands in it, monitor system attributes at any time, shut the container down, back it up and destroy it.
create, start, stop and destroy
require 'lxc'
c = LXC::Container.new('foo')
c.create('ubuntu') # create a container named foo with ubuntu template
c.start
# attach to a running container
c.attach do
LXC.run_command('ifconfig eth0')
end
c.stop
c.destroy
c.create('ubuntu')
This creates the structure for the container according to the given template. It usually consists of downloading and installing a Linux distribution inside the container's root file-system.
Other tasks like cloning and inspection are also easy. See the GitHub doc for more info.
Hypervisor
Hypervisors are software that create an abstraction layer between the physical machine you have and the operating systems you wish to run on top of it. Example: you have an x86 machine and wish to run macOS, Ubuntu and Windows 8 on it simultaneously; a hypervisor lets you switch easily between the OSes.
There are two types of hypervisors:
- Type 1: run directly on the machine, without the help of any host OS.
- Type 2: run on top of a host OS.
Docker / Docker Engine
Docker is a containerization tool that makes it possible to create containers on all the major OSes: Linux, Windows and macOS. Containers are natively supported by the Linux kernel but not by the other OSes, so on those platforms Docker uses a hypervisor such as VirtualBox or Hyper-V to run a small Linux distribution like Boot2Docker on the host OS, provisioned via the docker-machine tool.
Installation
For macOS and Windows, installing Docker Desktop installs all the tools: docker, docker-machine and docker-compose.
For Linux : Ubuntu
First, remove any previously installed Docker packages and tools.
sudo apt-get remove docker docker-engine docker.io containerd runc
Then, update the package index:

sudo apt-get update

and install the packages that let apt-get make HTTPS calls:
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
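After those prerequisites, Docker's official Ubuntu instructions (as of 2019) add Docker's GPG key and apt repository and then install the engine. Roughly the following, though check the current docs for your release:

```shell
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add the stable Docker repository for your Ubuntu release
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

# Install the Docker engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
```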
Dockerizing a simple Ruby App
- Make a directory
mkdir simple_rack_app
- Go inside that dir
cd simple_rack_app
- Create a Ruby app. We will use Bundler as the dependency manager, with a Gemfile to declare the dependent libraries.
- Create a Dockerfile with the following content.
Files
./config.ru
# config.ru
require File.absolute_path('./my_rack_app', __dir__)
run MyApp.new
Gemfile
# Gemfile
# without this line; following error will occur
# -----------------------------------------------------------------------
# Your Gemfile has no gem server sources. If you need gems that are not already on
# your machine, add a line like this to your Gemfile:
# source 'https://rubygems.org'
# Could not find concurrent-ruby-1.1.5 in any of the sources
# ------------------------------------------------------------------------------
source 'https://rubygems.org'
gem 'rack'
gem 'puma'
gem 'faker'
./my_rack_app.rb
# my_rack_app.rb
require 'rack'
require 'faker'
class MyApp
# this rack app will return a random string every time you reload the browser
def call(env)
['200', { 'Content-Type' => 'text/html' }, ["A barebones rack app. #{Faker::Name.name}"]]
end
end
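Before dockerizing, it helps to see what Rack expects of MyApp: any object that responds to call(env) and returns a [status, headers, body] triple. A minimal sketch in plain Ruby (no gems, using a lambda instead of the MyApp class above, and a fixed body instead of a Faker name):

```ruby
# A Rack-compatible app is any object responding to call(env)
# and returning a [status, headers, body] triple.
app = ->(env) { ['200', { 'Content-Type' => 'text/html' }, ['A barebones rack app.']] }

status, headers, body = app.call({})
puts status                    # "200"
puts headers['Content-Type']   # "text/html"
puts body.join                 # "A barebones rack app."
```

This is exactly the contract `run MyApp.new` in config.ru relies on: Puma hands the app a request env hash and writes the returned triple back to the client.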
Dockerfile
# Dockerfile
# Use an official Ruby 2.6.3 runtime as a parent image
FROM ruby:2.6.3
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install Latest version of bundler
RUN gem install bundler
# Install any needed packages/gems
RUN bundle install
# Make port 3000 available to the world outside this container
# (EXPOSE documents the port; you still publish it at run time
# with the -p flag, e.g. -p 4000:3000, i.e. -p HOST_PORT:CONTAINER_PORT)
EXPOSE 3000
# Define environment variable
ENV NAME World
# Run puma when the container launches
# I am running app in the container in port 3000; you can run in any;
# if you choose to use 80, do not forget to expose port 80 outside the container
CMD puma -p 3000
Building the Image
Now we are going to build the Docker image, which we will run every time we need to test or deploy to production. We can also publish the image publicly.
docker build --tag=simple_rack_app .
This creates an image named simple_rack_app on your local machine. Docker actually runs all the commands in a temporary container and, after successful execution, stores the result as an image in the local file-system. If you check the file-size of the image, it is typically huge, around 800-900 MB. If you want the container to be online/live, you need to run the image using the docker run command.
You can see the list of all the images in your file-system using this command.
docker image ls
You can also specify a version for this particular build with --tag=simple_rack_app:v0.0.1
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
simple_rack_app latest 3389ebd1a559 2 hours ago 883MB
<none> <none> decd859ed7bc 2 hours ago 842MB
ruby 2.6.3 8fe6e1f7b421 8 days ago 840MB
hello-world latest fce289e99eb9 7 months ago 1.84kB
Running Image
# options must come before the image name;
# anything after the image name is passed as arguments to the container
docker run -p 4000:3000 -t simple_rack_app
# or map the container's port 3000 to the same port on the host
docker run -p 3000:3000 -t simple_rack_app
OUTPUT
Puma starting in single mode...
* Version 4.0.1 (ruby 2.6.3-p62), codename: 4 Fast 4 Furious
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
Also, it responds to Ctrl-C.
Note:
Remember to pass the -t flag so that the docker process can attach a pseudo-TTY to the child process (our container). Otherwise, the container won't respond to Ctrl-C, and to terminate it you will need to kill it or restart the docker-engine.
Viewing all running Containers
> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f63d55a711ea simple_rack_app "/bin/sh -c 'puma -p…" 2 hours ago Up 2 hours 3000/tcp cranky_chatelet
30b2faf38218 simple_rack_app "/bin/sh -c 'puma -p…" 2 hours ago Up 2 hours 0.0.0.0:3000->3000/tcp amazing_hermann
Stopping Container
> docker stop f63d55a711ea
or
> docker container stop f63d55a711ea
f63d55a711ea
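Stopped containers, and the dangling &lt;none&gt; images from earlier builds visible in the docker image ls output above, still occupy disk space. They can be cleaned up like this:

```shell
# Remove a stopped container (and its writable layer)
docker container rm f63d55a711ea

# Remove dangling <none> images left over from earlier builds
docker image prune
```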
Docker Machine
Docker Machine helps you install docker-engine on multiple virtual hosts. The docker-machine command is used to provision virtual machines using various drivers like virtualbox, vmware, hyperv, etc.
Docker Storage
You would definitely wish your data to persist no matter how many times you re-build your image. Normally this won't be the case: all the files created inside a container are stored on a writable container layer, which is wiped out when the container goes away. This layer is also private to the container and cannot be shared with fellow containers on the same host.
Sharing the codebase between the host and a container in development
In development, you want your Rails application to auto-load changed modules and to see the changes reflected in the browser when you reload. You do not want to re-build the image every time you change your code base; that would be really tedious.
Bind mounts have been around since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker's storage directory on the host machine, and Docker manages that directory's contents.
Example:
docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
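The same idea applies to the Rails/Rack development case described above: bind-mount the source tree over the image's /app directory so that code edits on the host are picked up without rebuilding. A sketch using the simple_rack_app image from earlier:

```shell
# Mount the current directory over /app inside the container;
# edits on the host are visible in the container immediately
docker run -t \
  -p 3000:3000 \
  --mount type=bind,source="$(pwd)",target=/app \
  simple_rack_app
```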
Persisting data between multiple-builds eg. Database
The best way to make data persistent between re-build is creating a volume in the host machine and, sharing it with the containers.
Example:
# Create a new volume
docker volume create bundler-cache-vol
# Check if the volume is created
docker volume inspect bundler-cache-vol
# Run an image using the same volume
docker run -t \
-p 3000:80 \
--mount source=bundler-cache-vol,target=/bundle \
myrailsapp
Making database persistent:
# Create a new volume
docker volume create pg-database-vol
# Check if the volume is created
docker volume inspect pg-database-vol
# Run an image using the same volume
docker run -t \
-p 3000:80 \
--mount source=pg-database-vol,target=/var/lib/postgresql/data \
myrailsapp
Docker Compose
Introduction
Docker Compose is a tool that helps you run multiple containers that belong to the same application. Normally, you use docker-compose to manage the dependencies/services of the application. For example, your Rails app needs PostgreSQL, Redis and NGINX at the same time; you spawn a container for each, and link them by exposing some ports. You define the configuration in a docker-compose.yml file.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.
Installation
For Mac and Windows users, when you install Docker Desktop
, Docker Compose
comes along.
For Linux users, download the binaries to /usr/local/bin/
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Test that it runs:
docker-compose --version
docker-compose version 1.24.1, build 1110ad01
If that fails, create a symbolic link at /usr/bin/docker-compose:
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
Usage
It is mainly used to orchestrate multiple services needed to run an application in different environments.
You can define the dependencies as services, choose an image for each of them, which ports to connect, and where to mount shared volumes and directories.
It is basically a three-step process:
1. Define your app's environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
3. Run docker-compose up and Compose starts and runs your entire app.
A sample docker-compose.yml
version: '3'
services:
  postgres:
    image: postgres:9.6
    ports:
      - '5432:5432'
    volumes:
      - postgres:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s
    env_file:
      - .env
    volumes: # maps the volume `bundle` to `/bundle` to cache gems
      - bundle:/bundle
      - .:/app
    ports:
      - '3000:3000'
    links:
      - postgres
volumes:
  bundle:
  postgres:
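With that file in place, the everyday workflow is a handful of docker-compose commands:

```shell
# Build (if needed) and start all services in the background
docker-compose up -d

# Tail the logs of a single service
docker-compose logs -f web

# Open a shell inside the running web container
docker-compose exec web bash

# Stop and remove the containers and network (named volumes survive)
docker-compose down
```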
References
https://docs.docker.com/storage/
https://docs.docker.com/storage/volumes/
Summary
Docker is a very useful tool to make sure everything runs in production the same way it runs locally. It also helps us configure the development environment when a new developer joins the team; it used to take us a whole day to install all the dependencies and prepare seed data.
Docker Compose helps to configure dependencies in separate containers and orchestrate them with a single command. We can easily peek into the containers to see the logs and other variables.
Docker Machine is a tool to spawn multiple virtual or physical machines, for deploying the application in a micro-service fashion.