Bootstrapping Your Django/Rails/Whatever App With Docker

I recently created a fairly simple web site for a non-profit. Since I just finished reading The Docker Book 1, I thought it would be nice to use this project as my first non-trivial Docker-based exercise.

Naturally, since this is a real application that people are depending on, I wanted to make something good, not just sprinkle in enough Docker to pad my resume. This wasn't simply a fun project that I could throw away if things got weird, after all. I therefore not only had to learn how to make things work for five minutes with Docker - I had to learn how to make something that was supportable, reliable and easy to maintain.

I'm happy to say that I'm quite pleased with the results and I can't wait to share my experiences. My first lesson was in setting up my development tools using Docker containers.

Software Prerequisites

Don't you hate it when a tutorial makes you install a bunch of weird-ass tools? Me too. So I'll try to keep it as simple as possible.

All of the scripts below were tested on a Debian 8 machine. Here's my Docker version:

sudo docker version
Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:53:29 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:53:29 2017
 OS/Arch:      linux/amd64
 Experimental: false

All you absolutely need to run the examples below is a Docker version >= 1.9. If you're using Windows or OS X you may need to also run the examples from within some type of Linux-based VM. I can't provide instructions on how to do that however because I don't have any experience working with Docker on those platforms. If you run into any issues or have any tips then I would love to hear about them!

There are a few commands below that require non-standard tools like curl or elinks. I provide alternatives to these scripts when they are used.

Wait But Why Use Docker For Development?

Here's what I mean. Docker is a great tool for hosting applications. You simply take your code deliverable (e.g. war file, Rails project, etc.), squish it together with your runtime (e.g. Tomcat, Unicorn, etc.) and then run that as a binary blob on your server. It makes deployments a lot more consistent and reliable (in addition to other benefits).

But what I'm talking about at this phase is using a Docker container as a lightweight VM to host everything I need to develop my application, and by that I mean everything that I need to:

  • Run a dev app server
  • Compile, test and debug my code
  • Build any executables

Now what you may be saying is, "Tom, this is dumb. It's easy to install Django (or Rails, or Spring Boot, or whatever), and it's easy to create a new project. Why on earth would you choose to add a Docker layer at this phase?"

Well, the short answer is that it's actually not easy to install everything that you need to develop a lot of applications, and I'm going to use the Django framework (which is easier than most) as an example.

Note:

The example in this article builds a development environment for a Django-based application, but using the basic steps below you could build a development environment for virtually any type of application (e.g. Rails, Node, JEE, etc.).

This is a Docker tutorial that uses Django as a means to an end.

If you visit the How to install Django page you'll see that all you really have to do is run the following command:

sudo pip install Django

Great, right? Well, yes, if you're running Linux or OS X. If you're a Windows user, there's a special howto. Also, even if you are running some sort of *nix kernel, you have to worry about the following:

  • Do you have a previous version installed? Does anything running on your system depend on this older version?
  • Do you have another version of Django that was installed by your system's package manager (e.g. apt-get)? Will this code "collide" somehow with the version that you'll install using pip?
    • Will any of Django's dependencies collide with similar code that is installed by your system's package manager? Will having two versions of dependency libraries cause unrelated software on your laptop to magically stop working? And how on earth will you find out?
  • Which version of Python are you using? If you have both Python 2 and 3 installed, then you may also have two versions of pip. Which one should you use?
  • Most Python devs would also recommend that you use virtualenv to isolate your Django libraries. But again, are you using the Python 2 version or the Python 3 version?
  • Oh, and you don't know how to use virtualenv? Just read another series of tutorials. But hopefully after you're done with that you will also look into virtualenvwrapper, which makes working with virtualenv so much easier.
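For comparison, here's a rough sketch of the traditional, non-Docker setup that the bullets above describe, using Python 3's built-in venv module (virtualenv works along the same lines). The /tmp/fwd-venv path is just an arbitrary example.

```shell
# A sketch of the non-Docker approach: isolate Django in a virtual environment.
# Using the stdlib venv module; --without-pip keeps the example minimal.
python3 -m venv --without-pip /tmp/fwd-venv

# "Activating" the environment makes `python` resolve to the isolated copy
. /tmp/fwd-venv/bin/activate
which python   # now points inside /tmp/fwd-venv

# Normally you would now run `pip install Django`, which would install into
# /tmp/fwd-venv only - and every version question above still applies.
deactivate
```

And that's before you've decided which Python, which pip and which wrapper tool you're standardizing on.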

Needless to say, things get complicated pretty easily. And this is just for one framework. There's a whole different list of bullet points for pretty much every other framework and language out there and the knowledge of one can't be used with another. Being an expert with pip won't help you use npm, just like knowing a ton about rvm won't help you do anything with leiningen.

Wouldn't it be great if you could just run some sort of standalone EXE-type thing that had everything you needed to develop your app and didn't mess up anything else on your system? Oh, and ideally, wouldn't it be great to be able to use a single skillset to run your dev server, regardless of the code's language, framework or choice of build tool?

I agree, so that's what we're going to tackle first.

Iteration 1 - Starting Our Project

Creating Our Dev Docker Image

Let's first create an initial folder structure for our application, which I'm going to call fun_with_docker:

mkdir -vp $HOME/Dev/Python/fun_with_docker_project/fun_with_docker

Here's our results:

mkdir: created directory ‘/home/tom/Dev/Python/fun_with_docker_project’
mkdir: created directory ‘/home/tom/Dev/Python/fun_with_docker_project/fun_with_docker’

Note:

For the remainder of this tutorial, this will be the base folder for our Django project. If you would like to use a different folder then don't forget to update the values in all of the code snippets below.

Next let's create our "dev" Docker image like so:

tee <<EOF > $HOME/Dev/Python/fun_with_docker_project/fun_with_docker/Dockerfile-app-dev
FROM python:3.4-alpine
MAINTAINER Tom Purl "<tom@tompurl.com>"
ENV REFRESHED_AT 2017-02-28
RUN apk add --update build-base postgresql-dev
RUN rm -rf /var/cache/apk/*
RUN pip install Django psycopg2
WORKDIR /usr/src/app

EXPOSE 8000
EOF

Note:

The tee command above writes to a file without opening a text editor, using something called a heredoc. I use them because they're quick and easy and I am lazy.

If you would rather just edit the file in a text editor, then the file name is the path after the > character in the first line. Simply copy everything between the first line and the EOF line into the Dockerfile.
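If heredocs are new to you, here's a minimal standalone example (the /tmp/heredoc-demo.txt path is arbitrary). Everything between the first line and the EOF marker is fed to tee's standard input, and the > redirection sends it into the file:

```shell
# Write two lines to a file without opening a text editor
tee <<EOF > /tmp/heredoc-demo.txt
first line
second line
EOF

# Prove that the file now contains both lines
cat /tmp/heredoc-demo.txt
```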

The Dockerfile-app-dev file is a manifest that will create the Docker image we'll use to run our container. You can think of a Docker image as the "EXE" and the container as the running executable.

So what exactly is this file doing?

  • The FROM statement specifies the base image on which we're building ours. This is the official Python 3.4 image from the Docker Hub.
    • The alpine part in the tag means that it was built on top of the Alpine Linux distribution, which means that the image is smaller than one built on top of a more traditional distro like Ubuntu or Fedora.
  • The first RUN line installs some build tools and the Postgres development libs using the Alpine package manager, apk.
  • The third RUN line installs a few Python libs, notably Django and the psycopg2 Postgres driver.
  • The WORKDIR line "cd's" us to the /usr/src/app directory.
  • Port 8000 is then exposed to the host OS. We'll use this later to connect to Django's built-in app server.

So let's build our image. Please note that you will probably want to replace the tompurl part with something that is meaningful to you and unique on your machine.

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
sudo docker build -f Dockerfile-app-dev -t tompurl/fwd-app-dev .

Here's the results. I truncated them a little bit for brevity's sake. Also, your results can be fairly different. The key is that the last line tells you that it was "Successfully built".

Sending build context to Docker daemon 2.048 kB

Step 1/8 : FROM python:3.4-alpine
 ---> 765c483d587c
Step 2/8 : MAINTAINER Tom Purl "<tom@tompurl.com>"
 ---> Using cache
 ---> c06a20173caa
Step 3/8 : ENV REFRESHED_AT 2017-02-28
 ---> Using cache
 ---> a0204fa17dab
Step 4/8 : RUN apk add --update build-base postgresql-dev
 ---> Running in f6269aa6a7bd
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
(1/29) Upgrading libcrypto1.0 (1.0.2j-r0 -> 1.0.2k-r0)
(2/29) Upgrading libssl1.0 (1.0.2j-r0 -> 1.0.2k-r0)
(3/29) Installing binutils-libs (2.26-r0)
...
(29/29) Installing postgresql-dev (9.5.6-r0)
Executing busybox-1.24.2-r12.trigger
OK: 199 MiB in 60 packages
 ---> 9db064430f4e
Removing intermediate container f6269aa6a7bd
Step 5/8 : RUN rm -rf /var/cache/apk/*
 ---> Running in b3d662a53028
 ---> c807f0cf28e8
Removing intermediate container b3d662a53028
Step 6/8 : RUN pip install Django psycopg2
 ---> Running in b8761032db71
Collecting Django
  Downloading Django-1.10.6-py2.py3-none-any.whl (6.8MB)
Collecting psycopg2
  Downloading psycopg2-2.7.1.tar.gz (421kB)
Installing collected packages: Django, psycopg2
  Running setup.py install for psycopg2: started
    Running setup.py install for psycopg2: finished with status 'done'
Successfully installed Django-1.10.6 psycopg2-2.7.1
 ---> 2ea51a9c99ca
Removing intermediate container b8761032db71
Step 7/8 : WORKDIR /usr/src/app
 ---> e79d7862ff12
Removing intermediate container 340a21534462
Step 8/8 : EXPOSE 8000
 ---> Running in c56e7674544d
 ---> 6f381df6a598
Removing intermediate container c56e7674544d
Successfully built 6f381df6a598

Yay! That was successful and we have an image. If you wanted to actually see it you could type this command:

sudo docker image ls tompurl/fwd-app-dev

Here's the results:

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
tompurl/fwd-app-dev   latest              6f381df6a598        23 seconds ago      286 MB

Sweetness!

Running Our Dev Container

Now remember, no container is actually running at this point. What we've done so far is create the image, not the container. The container is the running version of an image. This may seem pedantic but it's an important distinction to make when working with Docker.

At this point you probably want to start "kicking the tires" on your Django environment to see what you've actually done so far. To do this, we can actually launch a container that we can interact with using the following command:

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
sudo docker run \
     --name fwd-app-dev \
     --rm \
     --user "$(id -u):$(id -g)" \
     -it \
     -v "$PWD":/usr/src/app \
     -w /usr/src/app \
     -p 8000:8000 \
     tompurl/fwd-app-dev \
     ash

So what do these commands mean?

  • --name: This is just an arbitrary name for our container. This has to be unique for saved containers, which I'll refer to later.
  • --rm: Once this container is stopped it will be automatically removed. This means that (almost) all changes we make will disappear when we exit. I say almost because we're also using the -v switch.
  • -it: Opens an interactive TTY, which is what gives us our interactive session.
  • -v: Here we're telling Docker to map our current directory (on the host) to the /usr/src/app directory in the container. This is called a volume, and everything we write into this directory will persist, even if we delete this container.
    • This is why it's ok to use the --rm switch above :-)
  • --user: We're telling Docker here to run all commands in the container as our user and group IDs. This keeps us from creating files in our volume that are owned by the root user.
  • -w: cd's to that directory.
  • -p: This switch maps port 8000 on the host to port 8000 in the container, which we expose in the Dockerfile-app-dev file above.
  • tompurl/fwd-app-dev: This is the name of the image that the container is based on, and should be the name of the image that we just created (which probably won't have the string tompurl in it :-) ).
  • ash: This is the command that we're passing to the container. ash is a lighter-weight version of the bash shell that is the default in Alpine Linux. So we're saying "launch this container and run the ash command". Along with the -it switches this gives us our interactive shell.
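If you're curious what the --user switch actually receives, you can expand it by hand on the host. The exact numbers vary by machine; 1000:1000 is just a common default on single-user Linux systems:

```shell
# Prints your numeric user and group IDs, e.g. "1000:1000"
echo "$(id -u):$(id -g)"
```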

You are now "logged into" a container running the image we just built (although you haven't really "logged in" - we're not using ssh or anything like that). You should see the following in your console:

/usr/src/app $

Let's take a look at the name of the container's OS. Run the following commands from within the container:

cat /etc/os-release

You should see the following output:

NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.4.6
PRETTY_NAME="Alpine Linux v3.4"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"

Sweet! At this point we can now actually create our Django project. From within the container run this command:

django-admin.py startproject fun_with_docker .

You shouldn't see any output.

Iteration 1 Testing

Ok, just for fun let's now exit the container by typing the exit command. You should now be in your Django project dir. Execute the following command:

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
tree .

Here's the results:

.
├── Dockerfile-app-dev
├── fun_with_docker
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py

1 directory, 6 files

If you don't have the tree application installed on your computer then you can also view the files in your favorite file explorer.
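A rough stand-in for tree is find piped through sort. Here's a self-contained demo that rebuilds a miniature version of the layout in a scratch directory (the mktemp path is arbitrary); in the real project you'd just run the find pipeline from the project root:

```shell
# Recreate a miniature version of the project layout in a scratch directory
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/fun_with_docker"
touch "$demo_dir/manage.py" "$demo_dir/fun_with_docker/settings.py"

# A poor man's `tree`: list every file under the current dir, sorted
( cd "$demo_dir" && find . -type f | sort )
```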

As you can see, even though we "threw away" our container by exiting it, our files are still there because they're part of the volume that we mapped. So everything that we create under /usr/src/app while the container is running persists. Everything else, though, is thrown away, so please don't change anything else unless you remove the --rm switch in the run command.

Ok, so at this point we just have a skeleton Django project without a database, but that doesn't mean that we can't run a super simple test. But first, let's run our dev server in a container. Remember, you have to exit the container first if you still have a terminal session open.

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
sudo docker run \
     --name fwd-app-dev \
     --rm \
     --user "$(id -u):$(id -g)" \
     -v "$PWD":/usr/src/app \
     -w /usr/src/app \
     -p 8000:8000 \
     -d \
     tompurl/fwd-app-dev \
     ash -c "python manage.py runserver 0.0.0.0:8000"

Here's the results, which is just a hash for your container. Your results will be different than mine:

ddfa2e06a9d32b33bfca40b34c773821a627e227384a2b0685c4ea0db44c1b25

Ok, what we just did is a little different from anything else we've done so far. For starters, you shouldn't have an interactive session in your container. The command should have returned a hash string similar to the one you see above.

So what just happened? Well first let's talk about the differences in how we executed the docker run command:

  • What we removed:
    • -it: We don't want to run an interactive session, so we don't want a TTY or to redirect STDIN or anything like that.
  • What we added:
    • -d: This tells Docker to daemonize the container, meaning that it will run in the background.
    • ash -c yada yada: You may have noticed that our ash command got a lot longer. This time we actually want to run the built-in Django app server instead of just an ash session.

So now you may be wondering if your runserver process is even running. Well, let's take a look!

sudo docker ps

Here's something similar to what you'll see:

CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                    NAMES
ddfa2e06a9d3        tompurl/fwd-app-dev   "ash -c 'python ma..."   14 seconds ago      Up 11 seconds       0.0.0.0:8000->8000/tcp   fwd-app-dev

Yeppers, there it is. Now let's actually see if we can poke and prod our skeleton app:

curl http://localhost:8000/ | elinks -dump

And here's what I get:

                                   It worked!

Congratulations on your first Django-powered page.

   Of course, you haven't actually done any work yet. Next, start your first
   app by running python manage.py startapp [app_label].

   You're seeing this message because you have DEBUG = True in your Django
   settings file and you haven't configured any URLs. Get to work!

I tested this with elinks, which is a console-based web browser. You can also test this with a regular web browser if you'd like, but remember, I'm lazy so I like to use scripts :-)

At this point you may be saying, "This is great, but I'm flying blind - how do I see what that process was writing to the console inside the container?"

Well, you could look at the logs for the runserver process if you run this command from your Docker host (i.e. not from within the container):

sudo docker logs fwd-app-dev 2>&1

You should see results like this:

[24/Mar/2017 01:15:17] "GET / HTTP/1.1" 200 1767

What this command does is read the output of the command that you specified when you started your container. In our case, that's this command:

ash -c "python manage.py runserver 0.0.0.0:8000"

Please note that this command should not be backgrounded with an & or nohup command. If you did that then you wouldn't be able to read the logs using the docker logs command.

If you wanted to tail the logs you could just run this command:

sudo docker logs -f fwd-app-dev

If you feel too disconnected from your server using this strategy then it's easy to change. First, simply shut down your container using the following command:

sudo docker stop fwd-app-dev

You should see the following results:

fwd-app-dev

Then you can run the -it version of the run command above, the one that simply runs the ash command. This opens up an interactive session in the container where you can more easily and quickly poke and prod your app.

Iteration 2 - Networking And A Proper Database

Creating A Database Image

At this point it's really tempting to start developing your Django application using the built-in SQLite driver because it's so easy. However, we're going to end up deploying our application on top of Postgres, so we really should be developing with Postgres too.

However, like Django, Postgres can be a difficult package to install. Thank goodness Docker makes it so easy.

First, let's create a folder where we can store the database data so that it doesn't disappear if we destroy the Postgres container:

mkdir -vp "$HOME/docker/container/fwd-pg-dev/var/lib/postgresql/data"

Here's my results:

mkdir: created directory ‘/home/tom/docker/container/fwd-pg-dev’
mkdir: created directory ‘/home/tom/docker/container/fwd-pg-dev/var’
mkdir: created directory ‘/home/tom/docker/container/fwd-pg-dev/var/lib’
mkdir: created directory ‘/home/tom/docker/container/fwd-pg-dev/var/lib/postgresql’
mkdir: created directory ‘/home/tom/docker/container/fwd-pg-dev/var/lib/postgresql/data’

Super! Now let's create our database with this command:

export HOST_PGDATA_HOME="$HOME/docker/container/fwd-pg-dev/var/lib/postgresql/data"
export POSTGRES_PASSWORD=somethingClever
export POSTGRES_USER=fwd
export POSTGRES_DB=fwddb

sudo docker run \
     --name fwd-pg-dev \
     -e POSTGRES_PASSWORD="$POSTGRES_PASSWORD" \
     -e POSTGRES_USER="$POSTGRES_USER" \
     -e POSTGRES_DB="$POSTGRES_DB" \
     -v "$HOST_PGDATA_HOME":/var/lib/postgresql/data \
     -d \
     postgres:9.6-alpine

You should just get a unique hash returned that looks something like this:

91edde5df2b452731b2c06c9a6a59442a7eb8b04ef3048758144aa9f887ad457

The run command should look pretty familiar by now, but we're changing things yet again. First, you may have noticed that we didn't create a Dockerfile this time. We just launch an instance of the postgres:9.6-alpine image. That's because the official Postgres image on the Docker Hub is good enough for our needs, so we don't need to install anything on top of it.

Next you'll notice that we're setting the value of a bunch of environment variables in the container with the -e switch. That's because the Postgres image is designed to be configured using environment variables, which makes our job much easier.

You'll also notice that I set the value of these container-level environment variables with bash variables. This is not necessary. I just like setting these values at the top of the script to make things easier to read and change.

The first time you start the database it will take a few minutes. Remember that the docker logs command is your friend.

sudo docker logs fwd-pg-dev 2>&1

Here's what I got:

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
performing post-bootstrap initialization ... No usable system locales were found.
Use the option "--debug" to see details.
ok
syncing data to disk ... ok

Success. You can now start the database server using:


WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start....LOG:  database system was shut down at 2017-03-24 01:16:14 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
CREATE DATABASE

CREATE ROLE


/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down...LOG:  received fast shutdown request
.LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

LOG:  database system was shut down at 2017-03-24 01:16:48 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

When you see the LOG: database system is ready to accept connections message then you know that the db is up.

Another thing about the run command above: you may have noticed that I didn't include the --rm option this time. That's because keeping this container around, even when it's shut down, makes life a little easier. It's not strictly necessary - the data files live on our filesystem either way - but it makes restarts a little easier and faster.

So how does that change things? Well, first you can shut down the container using the stop command from before:

sudo docker stop fwd-pg-dev

If everything goes well you should just get this:

fwd-pg-dev

Using the ps command we can now see that the container isn't running:

sudo docker ps

If no other containers are running you should just see this:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

However, the container is still there on your filesystem. You can see it like this:

sudo docker ps -a | grep fwd-pg-dev

…which should show you something similar to this:

91edde5df2b4        postgres:9.6-alpine        "docker-entrypoint..."   About a minute ago   Exited (0) 12 seconds ago                       fwd-pg-dev

The next time you want to start it, all you have to do is execute this command:

sudo docker start fwd-pg-dev

Which should just return this:

fwd-pg-dev

Please note that we didn't execute the run command, we executed the start command, and this takes us back to the distinction between images and containers. Running an image with the run command creates a container, which can then be stopped (using stop) and restarted (using start), provided you didn't automatically remove it with the --rm option.

Confusing? Yes, a little, and it was a big stumbling block for me when I first started. And if it's unclear, don't worry because you'll figure it out soon. The big thing to remember is that there's a difference between images and containers, and they both have their own verbs that sound very similar.

Networking Our Containers

So far, we're in great shape. We have an app skeleton container and a database container. Now we just need them to talk to each other. Docker gives us lots of options for setting this up, but my favorite by far is the one confusingly referred to as Docker Networking.

What does that mean? Well, again let's explain with an example. First, let's create a "Docker Network" with the following command:

sudo docker network create fwd-network-dev

This command also returns a unique hash that looks something like this:

87d2d5aa8fa2823f1d386e2510cc600db12fb9e2029c5c7144a2ec94b9589cd0

Ok, so what did we just do? Well, we created a virtual network that Docker containers can use to talk to each other. Right now it's pretty empty though, which you can see here with the inspect command:

sudo docker network inspect fwd-network-dev

This command returns the JSON representation of your network, and it should look very similar to this:

[
    {
        "Name": "fwd-network-dev",
        "Id": "87d2d5aa8fa2823f1d386e2510cc600db12fb9e2029c5c7144a2ec94b9589cd0",
        "Created": "2017-03-23T20:18:06.605061419-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

Notice the empty Containers property. Let's fix that by adding our database to this network. Start up your fwd-pg-dev container if it's not already running and run the following command:

sudo docker network connect fwd-network-dev fwd-pg-dev

This command won't return any output if it's successful. You can interrogate your network using the inspect command again:

sudo docker network inspect fwd-network-dev

Here are the results:

[
    {
        "Name": "fwd-network-dev",
        "Id": "87d2d5aa8fa2823f1d386e2510cc600db12fb9e2029c5c7144a2ec94b9589cd0",
        "Created": "2017-03-23T20:18:06.605061419-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "91edde5df2b452731b2c06c9a6a59442a7eb8b04ef3048758144aa9f887ad457": {
                "Name": "fwd-pg-dev",
                "EndpointID": "38748730ef4b7634d6adf1c20ca3dc6a00fda4be561d544713c4e1feed446628",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Yay! There's our Postgres container. Now let's run a fwd-app-dev container and connect it to the network. We'll do that by using an updated version of our previous run command:

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
sudo docker run \
     --name fwd-app-dev \
     --net=fwd-network-dev \
     --rm \
     --user "$(id -u):$(id -g)" \
     -it \
     -v "$PWD":/usr/src/app \
     -w /usr/src/app \
     -p 8000:8000 \
     tompurl/fwd-app-dev \
     ash

This will drop you into the container's shell.

Now at this point, you could open another terminal window and execute the network inspect command again and see both of your containers listed. But I'm going to show you another test you can run that highlights a really cool feature of Docker Networks.

From within your fwd-app-dev container, execute the following command:

nc -zvv fwd-pg-dev 5432

You should see a response that looks something like this:

fwd-pg-dev (172.21.0.2:5432) open
sent 0, rcvd 0

Ok, so what does this mean? Well, first let's look at the netcat (nc) command. This is "pinging" port 5432 on our db server, which is the port on which Postgres is listening. And the response is that yes, this port is actually open and listening for connections.

But wait a minute - we ran that command from our fwd-app-dev container. How on earth does it know that our Postgres server container's name is fwd-pg-dev?

That's one of the cool features of Docker Networking. Every container in our virtual network is part of something like a virtual DNS registry. So you can actually refer to any container by its container name as if they were hostnames, and the DNS-like configuration is automatic. You can see what I mean by running this command from within the fwd-app-dev container:

nslookup fwd-pg-dev

That should give you something like the following:

nslookup: can't resolve '(null)': Name does not resolve

Name:      fwd-pg-dev
Address 1: 172.21.0.2 fwd-pg-dev.fwd-network-dev

As you can see, the fwd-pg-dev container has an IP address of 172.21.0.2 and a FQDN of fwd-pg-dev.fwd-network-dev.

Running A Migration

Now let's configure our application to talk to the database so we can populate some tables. First, let's configure the application by opening this file in your favorite text editor:

  • $HOME/Dev/Python/fun_with_docker_project/fun_with_docker/fun_with_docker/settings.py

Locate the DATABASES section and replace it with the following:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'fwddb',
        'USER': 'fwd',
        'PASSWORD': 'somethingClever',
        'HOST': 'fwd-pg-dev',
    }
}

Note:

Please note that you absolutely don't have to edit this file from within the container. One of the nice things about this setup is that you can edit your code from your laptop and then test it using a container.

Next, open an interactive terminal into the fwd-app-dev image using this command:

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
sudo docker run \
     --name fwd-app-dev \
     --net=fwd-network-dev \
     --rm \
     --user "$(id -u):$(id -g)" \
     -it \
     -v "$PWD":/usr/src/app \
     -w /usr/src/app \
     -p 8000:8000 \
     tompurl/fwd-app-dev \
     ash

Once you are "inside" the container, run this command:

python ./manage.py migrate

You should see output that looks something like this:

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK

Great! We're in business. Now just for funzies let's add a super user, again from within the fwd-app-dev container.

cat <<EOF | python ./manage.py shell
from django.contrib.auth.models import User
User.objects.filter(username="tom").exists() or \
    User.objects.create_superuser("tom", "tom@tompurl.com", "somethingClever")
EOF

You should get output that looks something like this:

Python 3.4.6 (default, Jan 20 2017, 21:41:39) 
[GCC 5.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> >>> <User: tom>
>>>

Iteration 2 Testing

A nice holistic test of all of our work so far is to log into the default Django admin interface. To do this, we first need to execute the runserver command in a container that's attached to our network. If you still have an interactive session open in the fwd-app-dev container, exit it first.

Now run this:

cd "$HOME/Dev/Python/fun_with_docker_project/fun_with_docker"
sudo docker run \
     --name fwd-app-dev \
     --net=fwd-network-dev \
     --rm \
     --user "$(id -u):$(id -g)" \
     -v "$PWD":/usr/src/app \
     -w /usr/src/app \
     -p 8000:8000 \
     -d \
     tompurl/fwd-app-dev \
     ash -c "python manage.py runserver 0.0.0.0:8000"

Now visit the following URL in your favorite web browser:

  • http://localhost:8000/admin/

Log in using the credentials for the super user that you created.
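
If you'd like to check from the command line before reaching for a browser, here's a small Python sketch (the server_is_up helper is invented for this example) that tells you whether anything is answering on that port:

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def server_is_up(url, timeout=5):
    """Return True if something answers HTTP at the given URL."""
    try:
        urlopen(url, timeout=timeout)
        return True
    except HTTPError:
        return True   # The server answered, even if with an error code.
    except (URLError, OSError):
        return False  # Nothing listening, name resolution failure, etc.

# With the runserver container up, this should print True:
print(server_is_up("http://localhost:8000/admin/"))
```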

Conclusion

What Do We Have?

All right, so what do we have?

  • A skeleton Django application
  • A fwd-app-dev image that we can use to run and test our Django application
  • A fwd-pg-dev container that we can use as a "real" database
  • A dedicated network (fwd-network-dev) that sandboxes all of these components in their own containers

Now that you have all of this set up you can start developing your next awesome Django application. Here are some tips:

  • The Django development server reloads automatically every time you save a file in your project, so you shouldn't have to restart the fwd-app-dev container very often.
  • You should be able to leave the fwd-pg-dev container running all of the time. If you reboot your laptop then simply start the container with this command:
    • sudo docker start fwd-pg-dev

Was It Worth Our Time?

Earlier in the document I mentioned that it would be nice to be able to use a single skillset for setting up development environments on our laptops. We could then invest our precious time into using and improving that skillset instead of learning how to install the latest cool-kid language/framework/build tool combination quickly and safely.

I'm confident that I could use the basic process above to set up a development environment for many different frameworks quickly and easily. I therefore feel that my time invested in learning about Docker was worthwhile. The basic process above has become my preferred method for setting up everything I need to start working on a new application.

What's Next?

In my next tutorial, I'm going to show you how you can host your code in a "production" container using Nginx and uwsgi. I'll also cover a simple workflow for deploying your image to external servers.

Good luck!

Footnotes:

1

I highly recommend this book as an intro to Docker. It's by far the best book that I've found on the topic and it's dirt cheap.