
Drupal continuous integration with Docker

Continuous integration platforms are a vital component of any development shop. We rely on them heavily to keep projects at the quality they deserve. As early adopters of Docker (0.7.6) for our QA and Staging platform, we thought it was time to take our CI environment to the next level!

by nick.schuch /

The platforms

So first a little bit about the platforms.

Out with the old

The original environment was powered by Jenkins, which is awesome: it is a powerful CI server with many community-contributed plugins. However, we were running two environments per project:

  • PR - Environment to run test suite when pull requests are created on Github.
  • HEAD - Environment to run test suite when commits are pushed to master.

This caused a few issues. The main one was that we would occasionally have leftover files and services (such as a Solr index that was still populated), which is a huge problem when it comes to ensuring consistent builds. The other major issue was that the infrastructure ran on a single host, so it didn't scale well (if at all); builds had to come to a stop for us to turn off the host and increase its resources.

In with the new

We went into the development of this new infrastructure with the following goals:

  • Leverage our existing Docker containers (QA/Staging environments) for platform consistency.
  • Allow concurrent builds so we get a faster feedback loop on popular projects.
  • Use a Jenkins master/agent configuration for scalability. If we have a busy month, no worries, we will add more agents.
  • Leave current PR builds available for QA purposes while the Pull Request is still open.
  • Provide ability to switch versions of PHP, Solr, RabbitMQ etc.

As you can see we had some ambitious goals, and we were able to achieve them!

The workflow

So let's look at this build process in a little more detail. Some key takeaways are:

  • Jobs are sent to a node on the cluster, not run on the host itself.
  • All builds are fresh environments, with the services started first so we can leverage Docker container links. This means that on every single build we spin up a new environment with no build artifacts that could tamper with results. The environments are also isolated, as opposed to the old CI where we used the same MySQL instance for all the databases.
  • We are running phing tasks so we have consistency in the commands that we run to test our code.
  • Github pull requests get notified with a message that looks like the one below. While this is something we have seen time and time again, I still think this is awesome.
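To make the first two points concrete, here is a rough sketch of how a fresh, linked environment might be brought up for one build. The container names, images and the build_env helper are illustrative, not our actual scripts; the docker commands are echoed rather than executed, so the flow reads as documentation.

```shell
#!/bin/sh
# Sketch: one fresh, isolated environment per build.
# Names and images are hypothetical; commands are echoed,
# not executed, so the flow is easy to follow.
set -eu

# Emit the docker commands for a single build environment.
build_env() {
  build_id="$1"
  # 1. Services start first so the web container can link to them.
  echo "docker run -d --name mysql_build_${build_id} mysql"
  echo "docker run -d --name solr_build_${build_id} solr"
  # 2. The web container links to its own private services:
  #    no leftover artifacts, no MySQL shared between builds.
  echo "docker run -d --name web_build_${build_id} --link mysql_build_${build_id}:mysql --link solr_build_${build_id}:solr previousnext/lamp55"
}

# Jenkins exposes BUILD_NUMBER to jobs; default to 42 for the demo.
build_env "${BUILD_NUMBER:-42}"
```

On a real node these commands would be executed (and the containers torn down after the run) rather than echoed.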

Under the hood

As you might have already guessed, we are using Docker under the hood, but what about the glue that holds it all together? We utilize the following technologies:

  • Puppet - We take advantage of the Docker Puppet module to lock in our Docker version and ensure that we have all our containers pulled from the Docker Hub and ready to go. We also use Puppet to define our builds for projects with Hiera data, for example:
    human_name:      'PNX: Pull request'
    description:     'Triggered by Github.'
    project:         'pnx'
    github_project:  'previousnext/pnx'
    application:     'previousnext/lamp55'
      - 'phing prepare'
      - 'phing test'
  • Nginx - We call this the "Router". It acts as a single-point proxy that routes requests to our built environments, and also provides a nice security layer.
  • Jenkins - This is our "trigger man". It runs all our builds and is in charge of our nodes that we build on.
  • Bash - These are bash scripts generated by Puppet. The scripts range from builds, to github commenting and container cleanup.
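As an illustration of the Github commenting step, the scripts ultimately boil down to a single call to the Github issues API (pull requests are issues as far as comments are concerned). The repository, PR number, message and token variable below are placeholders, not our real values:

```shell
#!/bin/sh
# Sketch: post a build-status comment on a Github pull request.
# REPO, PR_NUMBER, MESSAGE and GITHUB_TOKEN are placeholders you
# would populate from the Jenkins job environment.
set -eu

REPO="${REPO:-previousnext/pnx}"
PR_NUMBER="${PR_NUMBER:-1}"
MESSAGE="${MESSAGE:-Build passed.}"

# Build the JSON payload for the comment body.
payload=$(printf '{"body": "%s"}' "$MESSAGE")
echo "$payload"

# Uncomment to actually post (requires a token with repo scope):
# curl -s -H "Authorization: token $GITHUB_TOKEN" \
#   -d "$payload" \
#   "https://api.github.com/repos/$REPO/issues/$PR_NUMBER/comments"
```

Note the sketch does no JSON escaping of the message; a production script would handle quotes in the comment body.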

For a good start with Jenkins and Docker, check out The Docker Book. It was released very recently and is a great resource for getting started with Docker and integrating it with Jenkins.

Below is a diagram of a slave used for builds. It depicts how all these technologies work together.


What we have achieved in such a short period of time is making a big difference. Not only do we now have more consistent builds (and everything else discussed above), we also have a CI framework that has opened up more than just testing possibilities (which we will cover in future blog posts). Until then, if you are looking for a better way to do CI, I am happy to say this is a great option.


Posted by nick.schuch
Sys Ops Lead



Comment by ygerasimov


Great article! Thank you a lot!

How much time does it take to run a build with spinning docker environment?

Comment by nick.schuch


Docker containers spin up almost instantly and spend roughly 15sec starting the relevant processes. We are running coding standards 1.5min into the build, and Simpletest/Behat tests (after database sync and preparation) 4 to 5 minutes in.

Comment by ygerasimov


nice! another question -- how do you have such a nice comment in github when build is done? have you written custom jenkins plugin for that?

Comment by Tom Behets


Hi, first of all, great post! I am currently looking into setting up a CI system and the combination of puppet and docker looks very promising.

I have a few questions:
- How is your master/slave setup handled? I read you use jenkins swarm. So this means you have jenkins running on every slave?
- What about database syncing? You have a database server that gets synced once in a while, and where the builds are syncing to?
- Are you willing to share some code?


Comment by nick.schuch


- How is your master/slave setup handled? I read you use jenkins swarm. So this means you have jenkins running on every slave?

The "Puppet jenkins module" above provides examples on how to configure a master and slaves. Essentially the Jenkins master gets a full install and the slaves get a small jar file that acts as a daemon to connect to the master.
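For reference, the slave-side daemon is the Swarm client jar, started with something like the command below. The master URL, label and executor count are placeholders; check the Swarm plugin documentation for the full option list.

```shell
#!/bin/sh
# Sketch: how a slave might connect to the Jenkins master via the
# Swarm plugin client jar. URL and options are placeholders.
MASTER_URL="${MASTER_URL:-http://jenkins.example.com:8080}"

cmd="java -jar swarm-client.jar -master $MASTER_URL -labels docker -executors 2"
echo "$cmd"
# eval "$cmd"   # uncomment on a real slave
```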

- What about database syncing? You have a database server that gets synced once in a while, and where the builds are syncing to?

"QA" would be our "database server" that we sync to once in a while, so we don't pull from Production frequently. We perform a database sync aon each build (since each container spun up is it's own mysql install).

- Are you willing to share some code?

We won't be able to share as our repo currently has private data. But we have linked to our containers and some awesome contrib modules. As a first start I would recommend reading The Docker Book; the final chapters focus on using Jenkins as a CI with Docker, which provides a good foundation.

Comment by Tom Behets


Allright, thanks for your answer.

Comment by Ibn Saeed



Is there an alternative to Jenkins? Java is not in our workflow.