
Dinner Dash - Days 9 - 27 - Containerize

Unfortunately, very little has progressed on this over the last two and a half weeks. On the plus side, I am moving house! Downside: productivity has hit the floor.

In between removal services, estate agents and viewings, I finally found some time over the last week to complete Bret Fisher's Docker course (a great little course; expect a review in the coming weeks). Taking the principles from the course, I then decided to try to apply them to this application. After all, how hard could it be?

The first thing I set up was the Dockerfile and docker-compose file. I wanted these to ensure I'd have a replicable base image across all environments. I based them on Bret Fisher's suggested Node defaults; I might develop them further as the front-end becomes involved, but for now they give me a good setup. The final Dockerfile is below:

FROM node:8-alpine

RUN npm install -g mocha

RUN mkdir -p /opt/app

ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV

ARG PORT=80
ENV PORT $PORT
EXPOSE $PORT 5858 9229

# curl isn't included in the alpine base image, so install it for the healthcheck
RUN apk add --no-cache curl

# check every 30s to ensure this service returns HTTP 200
HEALTHCHECK CMD curl -fs http://localhost:$PORT/healthz || exit 1

WORKDIR /opt
COPY package.json /opt
COPY yarn.lock /opt
RUN npm install -g -s --no-progress yarn && \
    yarn && \
    yarn cache clean
ENV PATH /opt/node_modules/.bin:$PATH

# copy in our source code last, as it changes the most

WORKDIR /opt/app
COPY . /opt/app

# if you want to use npm start instead, then use `docker run --init` in production
# so that signals are passed properly. Note the code in index.js is needed to catch Docker signals
# using node here is still more graceful at stopping than npm with --init, afaik
# I still can't come up with a good production way to run with npm and graceful shutdown
CMD [ "yarn", "start" ]

This builds an image from Node's official Alpine image and installs Mocha globally (to allow tests to run). It then copies package.json and yarn.lock into the image and runs yarn followed by yarn cache clean, so dependencies live in a layer that only rebuilds when those files change. Next it copies the application code into the image, having exposed ports 80, 5858 and 9229 so debugging tools can attach. Finally, it runs yarn start to launch the application.
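Since NODE_ENV and PORT are declared as build args with defaults, the same Dockerfile can produce both production and development images. A quick sketch of how that looks (the dev tag name here is just illustrative):

```shell
# Production image: the ARG defaults apply (NODE_ENV=production, PORT=80)
docker build -t dinnerdash:latest .

# Development image: override the NODE_ENV build arg at build time
docker build --build-arg NODE_ENV=development -t dinnerdash:dev .
```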

Note for WebStorm users
If you want to use WebStorm's Docker integration, it's worth taking note of a few things. If you get an error that reads Cannot run program "docker-compose" (in directory "/Users/{username}/personal/{appName}"): error=2, No such file or directory, check that in Build, Execution, Deployment > Docker > Tools you've got docker-compose set to the correct location. It took me an embarrassingly long time to find that fix.

Once I'd got the Dockerfile set up the way I wanted, it was time to write the docker-compose.yml file.

version: '3.1'

services:
  node:
    build:
      context: .
      args:
        - NODE_ENV=development
    image: dinnerdash:latest
    # you can use legacy debug config or new inspect
    #command: ../node_modules/.bin/nodemon --debug=0.0.0.0:5858
    command: ../node_modules/.bin/nodemon --inspect=0.0.0.0:9229
    ports:
      - "80:80"
      - "5858:5858"
      - "9229:9229"
    volumes:
      - .:/opt/app
      # this is a workaround to prevent host node_modules from accidentally getting mounted in container
      # in case you want to use node/npm both outside container for test/lint etc. and also inside container
      # this will overwrite the default node_modules dir in container so it won't conflict with our
      # /opt/node_modules location. Thanks to PR from @brnluiz
      - notused:/opt/app/node_modules
    environment:
      - NODE_ENV=development
  mocha:
    image: dinnerdash:latest
    entrypoint: mocha tests/**/*.test.js

volumes:
    notused:

This allows me to specify the services to spin up. That's not especially useful now, but when I start running databases and back-end APIs it'll be helpful to break them out into separate microservices. For now, I just have it create the node service, which runs the Express application and attaches the volumes. I also have a mocha service set up, which reuses the same image and overrides the entrypoint to run the test suite.
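With the compose file in place, day-to-day usage looks something like this (using the service names defined above):

```shell
# Build the image and start the app under nodemon, with the inspector on 9229
docker-compose up --build node

# Run the Mocha suite in a one-off container built from the same image
docker-compose run --rm mocha

# Tear everything down; -v also removes the notused volume
docker-compose down -v
```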

With the local Docker setup created, it was time to update TravisCI to run using Docker instead. Initially, I tried just adding the docker-compose up command as a before_install step. However, I kept hitting an issue where yarn seemed to be trying to write to a directory it didn't have access to, and I kept receiving an EACCES permission denied error. Unfortunately, my Googling didn't turn up anything, so if anyone does know the answer, please drop it in the comments below.

After a bit of playing around, I concluded the best way forward would be to skip TravisCI's install step, as installation happens inside the container regardless.

The next obstacle was getting TravisCI to build the image and push it to Heroku's private container registry. To achieve this, I wrote a quick bash script which detects whether the current branch is develop and, if so, logs in to Heroku's registry and pushes the image up after tagging it. This takes the place of the old deploy strategy.
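I haven't included the full script in the post, but a minimal sketch of what scripts/registry-push.sh does might look like the following. Note the assumptions: the app is named dinnerdash, the process type is web, and TRAVIS_BRANCH and HEROKU_API_KEY are provided by the Travis environment (Heroku's registry accepts the literal username _ with an API key as the password).

```shell
#!/usr/bin/env bash
# Sketch of scripts/registry-push.sh: only deploy the develop branch.
set -euo pipefail

BRANCH="${TRAVIS_BRANCH:-}"
DEPLOYED=false

if [ "$BRANCH" = "develop" ]; then
  # Log in to Heroku's container registry with the API key
  docker login --username=_ --password="${HEROKU_API_KEY}" registry.heroku.com
  # Heroku expects images tagged as registry.heroku.com/<app>/<process-type>
  docker tag dinnerdash:latest registry.heroku.com/dinnerdash/web
  docker push registry.heroku.com/dinnerdash/web
  DEPLOYED=true
else
  echo "Branch '${BRANCH}' is not develop; skipping registry push."
fi
```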

The current final .travis.yml is as follows:

language: node_js
node_js:
- '6'
- '7'
- '8'
services:
  - docker
before_install:
  - docker-compose build node
install: true
after_success: ./scripts/registry-push.sh

Todo

With this done, the next line of DevOps work is to look at transitioning away from Heroku and into AWS. The next task I'm going to tackle, however, is the creation of the login process.

On further updates

As mentioned, I will be moving over the next couple of weeks, so updates are going to be sporadic. I'll try to develop some more this week, but after that I think I'll be off-comms until after the 26th of September, so I'll pick up the updates from then.

Chris Gray
