Automating Ghost theme pipeline

Since I first learned about CI/CD back in my university days, I have always been keen to apply these practices in any scenario I find myself in. Nonetheless, the existing setup of my blog and the associated workflow made applying these practices a bit of a challenge.

For the past two months, I have been making improvements around the theme, initially from a user experience perspective and later from a process perspective, specifically in the delivery pipeline for the theme, so that changes in the source code are delivered more easily into the Bits of Knowledge production instance.

As with any Continuous Integration/Continuous Delivery scenario, my goal is to manage the release of new versions of the theme in a repeatable, predictable, and automatic way.

Non-automated workflow

As I already wrote in a previous post, I have been running my blog on Docker, specifically using docker-compose, for over a year now. However, from a CI/CD pipeline point of view, that only represents the final stage of the workflow: how to run the system in production. So what about the remaining usual stages of a CI/CD workflow?

Glad you asked! Well, there wasn’t a pipeline per se at this point; instead, it was a succession of operations, each performed manually, to get the changes from source code to live in production. A simplified version of the manual workflow I was following can be seen in the following image:


Thus, getting a new version of the theme live meant that I had to manually log in to the remote server to pull the changes from GitHub and then restart the Docker containers, so that the running Ghost application could pick up the changes in the theme. As a result, if I ever forgot to either pull the changes or restart the containers, the blog would not run with the most up-to-date version of my theme.
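For illustration, those manual steps boil down to a couple of commands over SSH. Here is a small sketch of them as a script; the host name and clone path are hypothetical placeholders, and with DRY_RUN=1 (the default here) it only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of the manual release steps (hypothetical host and paths).
set -eu

HOST="${HOST:-user@droplet}"          # assumed SSH target for the server
THEME_DIR="${THEME_DIR:-~/BitKnown}"  # assumed location of the theme clone

# With DRY_RUN=1 the commands are printed instead of executed.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

deploy() {
    # 1. Pull the latest theme changes on the server.
    run ssh "$HOST" "cd $THEME_DIR && git pull"
    # 2. Restart the containers so Ghost picks up the new theme.
    run ssh "$HOST" "docker-compose restart"
}

deploy
```

Forgetting either of the two steps leaves the blog running a stale theme, which is exactly the failure mode described above.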

Towards an automated workflow

Before actually working on solutions for these siloed stages, I first focused on the actual expectations for the new process and what it should look like, as shown in the diagram below.


There are several changes with respect to the original non-automated workflow, which are going to be implemented over several iterations of the pipeline process:

  • Introduction of a CI step to run tests and validations on the theme, which didn’t happen before; I relied instead on manual validation of my changes.
  • The pipeline is moving away from a git-based workflow for deployment towards the use of binaries/build artifacts generated as part of the CI process.
  • There are no manual steps left in the pipeline, which means that a push to GitHub should trigger the full execution of the pipeline.

First iteration (semi-automated process)

The first iteration focused on introducing a CI system for automated tests and binary generation. In this case, the choice was Jenkins, due to the need to run locally on the blog’s Droplet and to leverage my existing knowledge and practices with Jenkinsfiles and Groovy.

Before diving into some nitty-gritty details, if you need an introduction to these topics, go check out the following:

  • For a primer on Jenkinsfiles and pipelines, check out this
  • For the plugin used to connect Jenkins and GitHub, check out GitHub Branch Source.

CI/CD pipeline

Since I was already running the blog on Docker, I decided to publish new versions of the theme as Docker images built on top of the ghost image. That way, the changes to the current infrastructure would be minimal, only needing to replace the image name in the docker-compose file running the services on the Droplet and to reconfigure the associated volumes.


Let’s dive into the Dockerfile used in the bitknown repo. It is structured as a multi-stage build: it first generates the production assets of the theme with a Node image, and then copies them into the themes folder of the ghost image.

FROM node:10 as builder

ENV NPM_CONFIG_PREFIX=/home/node/.npm-global

USER node
RUN mkdir ~/.npm-global \
    && mkdir ~/app \
    && npm install -g yarn

WORKDIR /home/node/app

COPY . ./
RUN yarn install --no-cache --frozen-lockfile

RUN yarn build

FROM ghost:2.4.0

LABEL maintainer "renehernandez"

COPY --from=builder /home/node/app/dist  /var/lib/ghost/content/themes/BitKnown


Now let’s focus on the Jenkins side of the solution. For Jenkins to generate binaries every time I push a commit, the repo needs a Jenkinsfile that specifies the pipeline’s behavior and outcomes. As shown in the code section below, the Jenkinsfile does the following:

  1. Build the docker image for testing
  2. Run the tests in this docker image
  3. Get the version number of the prod image about to be published
  4. Build the production image
  5. Finally, publish the prod image (under the renehr9102/bitknown_ghost name) to Docker Hub and, if it is a release version (meaning we are on the master branch), push an associated latest tag.

def isReleaseVersion() {
    return env.BRANCH_NAME == 'master'
}

def formatVersion(def version) {
    if (!isReleaseVersion()) {
        return "${version}-${env.BRANCH_NAME}${env.BUILD_ID}"
    }
    return "${version}"
}

def productionImageName = "renehr9102/bitknown_ghost"
def testImageName = "bitknown_test"

timestamps {
    node('master') {
        checkout scm
        def version = ''

        try {
            def testDockerfile = 'Dockerfile.test'
            stage('Build Test Image') {
                sh "docker build -f ${testDockerfile} -t $testImageName ./"
            }

            stage('Run Tests') {
                sh "docker run --rm $testImageName yarn test"
            }

            stage('Get Version') {
                def testImage = docker.image(testImageName)

                testImage.inside {
                    def packageJSON = readJSON file: 'package.json'
                    version = packageJSON.version
                }
            }

            stage('Build Production Image') {
                def formattedVersion = formatVersion(version)
                def image = docker.build("$productionImageName:$formattedVersion")

                stage("Publish image with tag $formattedVersion") {
                    docker.withRegistry('', 'dockerhub') {
                        image.push()

                        if (isReleaseVersion()) {
                            stage('Update latest tag') {
                                image.push('latest')
                            }
                        }
                    }
                }
            }
        } finally {
            stage("Cleanup images") {
                sh "docker rmi $testImageName:latest"

                if (version) {
                    sh "docker rmi $productionImageName:${formatVersion(version)}"
                    if (isReleaseVersion()) {
                        sh "docker rmi $productionImageName:latest"
                    }
                }
            }
        }
    }
}
Compose and run it in production

With these moving parts now in place, we still need to figure out how to download the newest image of the blog into the server.
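As a sketch of what that download step looks like operationally, fetching a published image and recreating the container with docker-compose typically boils down to two commands. The service name ghost is an assumption here, and with DRY_RUN=1 (the default) the script only prints the commands:

```shell
#!/bin/sh
# Sketch: fetch the newest published theme image and redeploy it.
# Assumes a docker-compose service named "ghost" pointing at the
# renehr9102/bitknown_ghost image; DRY_RUN=1 prints instead of running.
set -eu

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

update() {
    run docker-compose pull ghost   # download the newest image from Docker Hub
    run docker-compose up -d ghost  # recreate the container from the new image
}

update
```

Before this can work, though, the compose file itself has to change, as the next paragraphs explain.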

As I showed in the docker-compose example in the Ghost Setup section of the previous post, I was using a single Docker volume to store all the content from the blog, and mapping the theme’s clone on the server to the corresponding theme folder in the ghost image, as shown in the code below.

version: "3"

services:
  ghost:
    image: ghost
    volumes:
      - ghost_content:/var/lib/ghost/content
      - $GHOST_THEME:/var/lib/ghost/content/themes/BitKnown
      - $GHOST_CONFIG:/var/lib/ghost/config.production.json

volumes:
  ghost_content:


Unfortunately, I could not use the docker-compose file as-is and just replace the ghost image with the renehr9102/bitknown_ghost image, because the volume mappings would override the existing content as follows:

  • The ghost_content volume would override the contents of the /var/lib/ghost/content path with whatever is currently stored in the volume, effectively making inaccessible what was present before; in this case, the BitKnown theme at /var/lib/ghost/content/themes/BitKnown
  • The environment variable $GHOST_THEME would map the clone’s content to the BitKnown theme folder in the container, possibly loading a completely different version of the theme.

After plenty of reading and googling, I realized that the best alternative was to break down the ghost_content volume into multiple named volumes, to avoid overriding the existing BitKnown theme folder in the image. Therefore, I backed up each subfolder of the original named volume separately and then restored them into separate named volumes, following the steps in the Docker guide for volume backup and restore. After all these operational changes, I ended up with the following compose specification for the ghost container:

version: "3"

services:
  ghost:
    image: renehr9102/bitknown_ghost
    volumes:
      - ghost_content_apps:/var/lib/ghost/content/apps
      - ghost_content_data:/var/lib/ghost/content/data
      - ghost_content_images:/var/lib/ghost/content/images
      - ghost_content_logs:/var/lib/ghost/content/logs
      - ghost_content_settings:/var/lib/ghost/content/settings
      - $GHOST_CONFIG:/var/lib/ghost/config.production.json

volumes:
  ghost_content_apps:
  ghost_content_data:
  ghost_content_images:
  ghost_content_logs:
  ghost_content_settings:
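The per-subfolder backup and restore mentioned above can be sketched as a small script using the tar-in-a-helper-container approach from the Docker volume backup guide. The volume names match the compose file; everything else (the alpine helper image, archiving to the current directory) is an assumption, and with DRY_RUN=1 (the default) the docker commands are only printed:

```shell
#!/bin/sh
# Sketch of splitting the single ghost_content volume into per-folder
# named volumes, via tar archives created inside a helper container.
# With DRY_RUN=1 (the default here) the docker commands are only printed.
set -eu

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

migrate_folder() {
    folder="$1"
    vol="ghost_content_${folder}"
    # 1. Archive the subfolder out of the original volume.
    run docker run --rm -v ghost_content:/from -v "$PWD":/backup alpine \
        tar czf "/backup/${folder}.tar.gz" -C "/from/${folder}" .
    # 2. Restore the archive into a fresh named volume.
    run docker volume create "$vol"
    run docker run --rm -v "${vol}:/to" -v "$PWD":/backup alpine \
        tar xzf "/backup/${folder}.tar.gz" -C /to
}

for f in apps data images logs settings; do
    migrate_folder "$f"
done
```

Note that no themes volume is created: the themes folder is now baked into the image, which is the whole point of the new setup.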



To sum up, in this post we went over the first iteration of the CI/CD pipeline for the blog theme. We described the previous non-automated workflow and its challenges, as well as the main stages of the new process. Then, we dove into the details of the first iteration of the pipeline and how the selected tools work together to start bringing these automation goals to fruition.

These practices are likely to evolve over time, as I come up with new ways of streamlining the CI/CD workflow for the theme. Nevertheless, they could serve as a good starting point for anyone interested in getting into DevOps and automation.

Last, but not least, a big kudos to Nancy for her help with suggestions for improving the user experience of the BitKnown theme.

Thank you so much for reading this post. Hope you liked reading it as much as I did writing it. Stay tuned for more!!

Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer’s view in any way.