Fat Codes

Software engineering related ramblings by fat

CI/CD for NodeJS Microservices with Gitlab

At Apio we write a lot of microservices: we have a main platform through which our customers interact with their IoT fleet, plus more vertical platforms such as Smart Lighting for public lighting and Smart Industry for controlling production machines.

To manage this many microservices we needed to establish some sort of standard path for the whole microservice lifecycle, from design to deployment. In this post I will focus on the last part: deployment.

TLDR: We write mostly NodeJS HTTP microservices. We attach a .gitlab-ci.yml file to each git repository, run every required test and then, if the commit is tagged as a release, build a Docker image of that release.

Our average .gitlab-ci.yml file looks like this:

stages:
  - test
  - build
  

build:
  stage: build
  only:
    - tags
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG


lint:
  image: node:lts-alpine
  stage: test
  script:
    - npm ci
    - npm run lint

audit:
  image: node:lts-alpine
  stage: test
  script:
    - npm audit
    
test:
  image: node:lts-alpine
  stage: test
  script:
    - npm ci
    - npm i -g mocha nyc
    - npm run test
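The build job above expects a Dockerfile at the root of the repository. A minimal sketch for one of these NodeJS HTTP services might look like the following (the port and the `index.js` entry point are hypothetical; adjust them to your service):

```dockerfile
FROM node:lts-alpine
WORKDIR /app

# Install only production dependencies, pinned by the lockfile
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Hypothetical port and entry point for illustration
EXPOSE 3000
CMD ["node", "index.js"]
```

Copying `package*.json` before the rest of the sources lets Docker cache the dependency layer, so rebuilds only re-run `npm ci` when the lockfile changes.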

Now you might say “Dude, there’s no deployment code in there!” and you would be almost right. Due to the nature of our products we have several environments, roughly one per client, and some of them have prod and test sub-environments. Sometimes N environments out of the total T need to stick to an older version of a service until its dependent services get updated too. For this reason we tend to be selective about which environments get automatic updates.

But how do we handle automatic updates? Since we use Docker extensively, we found Watchtower to be very useful. Watchtower runs as a Docker container on your hosts and continuously looks for updated images; when it finds a new image for a container you want to update automatically, it pulls the new image and replaces the old container with a new one.

docker login <your private registry url>
sudo docker run -d   \
    --name watchtower  \
    -v /root/.docker/config.json:/config.json  \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower
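To be selective about which containers get auto-updated, Watchtower supports an opt-in mode. A sketch of how that could look (the registry host and image name here are made up for illustration):

```shell
# With --label-enable, Watchtower only watches containers that carry
# the com.centurylinklabs.watchtower.enable=true label.
sudo docker run -d \
    --name watchtower \
    -v /root/.docker/config.json:/config.json \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower --label-enable

# A service that opts in to automatic updates:
docker run -d \
    --label com.centurylinklabs.watchtower.enable=true \
    registry.example.com/my-service:1.2.0
```

Containers without the label are left alone, which matches the “some environments stay pinned to an older version” requirement.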

Every time we publish a new tag for one of our microservices, a new Docker image is built and published to the registry. The environments that are in auto-update mode then get an automatic upgrade of the microservice thanks to Watchtower.
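Because the build job is restricted with `only: tags`, a release is just a tag push (the version number below is an example):

```shell
# Tag the release commit and push the tag; GitLab CI picks it up,
# runs the test stage and then the Kaniko build job, publishing
# $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG to the registry.
git tag v1.4.0
git push origin v1.4.0
```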

If something goes wrong, we notice thanks to our monitoring stack (Prometheus + Jaeger) and our error reporting system (Sentry).
