Automating Node.js Deployment with GitLab CI/CD, Docker, and OpenShift

Saad Hasan
9 min read · Jan 20, 2024


This is a simple demo; you don’t need to be an expert to implement it. It just puts you on the right track for a kickstart.

Let’s dive right into the action without any formal introduction.

Prerequisites

Before we begin, make sure you have the following tools installed:

  1. Node.js and npm (locally)
  2. Git
  3. Docker / Docker Desktop
  4. GitLab account
  5. DockerHub account
  6. OpenShift CLI (oc) — You can use the free tier Sandbox plan for 60 days.

Step 1: Set up a Node.js Project on GitLab

After you create your GitLab account, if you don’t have one already, clone the code from this repo and then push it to your own repository.

The project tree structure is as follows: an app folder containing the source code, the Node dependency file (package.json), an ocp folder containing the deployment.yml file, a Dockerfile to dockerize our app and push it to the DockerHub repo, and one hidden file, .gitlab-ci.yml, for the GitLab CI/CD pipeline.
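For reference, the layout described above might look something like this (file names beyond those mentioned in this article are assumptions):

```
.
├── app/
│   └── index.js          # application source code
├── package.json          # Node dependency file
├── package-lock.json
├── ocp/
│   └── deployment.yml    # OpenShift deployment manifest
├── Dockerfile            # image build instructions
└── .gitlab-ci.yml        # GitLab CI/CD pipeline (hidden file)
```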

Step 2: Dockerize the Node.js Application

In this step, we’ll create a Docker image for our Node.js application using a Dockerfile. A Dockerfile is a script that contains instructions for building a Docker image. Please make sure to place the Dockerfile in the project root; otherwise, the docker build/run/push commands will not work as shown.

Here’s the content of the Dockerfile:

# Use an official Node.js runtime as a parent image (amd64 architecture) for the builder stage
FROM amd64/node:21-alpine3.18 AS builder

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy the application code
COPY . .

# Stage 2: Create the final AMD64 image
FROM amd64/node:21-alpine3.18

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the application code from the builder stage
COPY --from=builder /usr/src/app /usr/src/app

# Expose the port your app runs on
EXPOSE 9000

# Command to run your application
CMD ["node", "app/index.js"]

Let me discuss the line amd64/node:21-alpine3.18 in the Dockerfile. The choice of base-image architecture depends on the architecture of the host machine or the environment where you plan to run your Docker containers (a MacBook reporting x86_64 in my case). The FROM directive specifies the base image, and pinning an architecture helps ensure compatibility: the amd64/ prefix indicates that the base image is built for the amd64 architecture, which is the most common architecture for desktop and server machines, including many cloud environments. You might not hit this issue yourself, but if you do, as I did, this is the way to handle it 🤝

Hint: to check your machine’s architecture, run the command uname -m in the terminal.
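For example, on an Intel/AMD machine this typically prints x86_64, while on ARM machines (such as Apple-silicon Macs) it prints arm64 or aarch64:

```shell
# Print the hardware architecture of the current machine.
# Typical outputs: x86_64 (Intel/AMD), arm64 or aarch64 (ARM).
uname -m
```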

Alright 👍 it was necessary to explain this to avoid confusion.

The next step is to build the Docker image locally:

docker buildx build --platform linux/amd64 -t saadrepo/nodeapp:2 .
# This command builds the Docker image with the specified tag
# (-t your-dockerhub-username/node-app, saadrepo/nodeapp:2 in my case)
# using the current directory (.) as the build context.

Please also note that we used buildx with --platform to produce an image for the amd64 architecture. In your case, a plain build may be enough:

docker build -t saadrepo/nodeapp:2 .

OK, you should see results similar to those below.

docker build command

To verify image creation, run the command docker images. If you are building locally and not pushing to the DockerHub repo, a simple local tag is enough:

 docker build -t nodeapp:2 .

check image creation :

list docker images

Once the image is built, you can run it locally to test the application:

docker run -d -p 9000:9000 nodeapp:2
# Make sure port 9000 is not already in use; if it is, kill the process (PID) holding it.
docker run locally

Alright, the run completed. How do we check whether the container was actually created? Since we ran it locally, we can open the Docker Desktop app (which must be up and running during the image build and run) and see if it’s there.

Docker container app in Docker Desktop

This maps port 9000 on your host machine to port 9000 in the Docker container. Accessing http://localhost:9000 in your web browser should now display your Node.js application’s output; alternatively, you can click the browser icon next to the container in Docker Desktop.

Hello Docker!

Step 3: Push Docker Image to DockerHub

As I mentioned above, you need an account on DockerHub; create a repo there and make it public or private as you prefer.

Before pushing the image, you must log in to DockerHub using the docker login command. This command prompts you to enter your DockerHub username and password.

Login succeeds!

Next, use the docker push command to upload your Docker image to DockerHub.

docker buildx build --platform linux/amd64 -t saadrepo/nodeapp:2 --push .
# Or as below
# docker push your-dockerhub-username/node-app:latest
docker push to DockerHub repo

Go to DockerHub and check that the image was pushed.

DockerHub image

Step 4: OpenShift Console

Now, before we get into creating the GitLab pipeline, it’s a prerequisite to make sure you have an OpenShift account and have already created a namespace (or project) to deploy your application into.

OpenShift allows developers to create isolated environments for testing and development, often called “developer sandboxes”.

Here is what it looks like

OCP Sandbox

Now we are ready for the OpenShift part.

Step 5: Configure GitLab CI/CD Pipeline

Go to GitLab and create a new file named .gitlab-ci.yml in the repository root (it must be in the root).

stages:
  - test
  - build
  - deploy-oc

variables:
  IMAGE_NAME: saadrepo/nodeapp:2
  OCP_CLUSTER_URL: https://api.sandbox-m2.ll9k.p1.openshiftapps.com:6443
  PROJECT_NAME: nodeapp
  DOCKER_DRIVER: overlay2

run-test:
  stage: test
  image: node:14-alpine3.14
  before_script:
    - apk add --no-cache nodejs npm
  script:
    - npm install
    - echo "test is done"

build:
  stage: build
  image: docker:25.0.0-rc.2-cli
  services:
    - docker:25.0.0-rc.2-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  script:
    # Build the amd64 image and push it to the registry in one step
    - docker buildx build --platform linux/x86_64 -t $IMAGE_NAME --push .

deploy:
  stage: deploy-oc
  only:
    - main
  script:
    - wget -O oc.tar https://downloads-openshift-console.apps.sandbox-m2.ll9k.p1.openshiftapps.com/amd64/linux/oc.tar
    - tar -xf oc.tar
    - chmod +x oc
    - mv oc /usr/local/bin/
    - rm oc.tar
    - oc login --server=$OCP_CLUSTER_URL --token=$OCP_TOKEN
    - oc project saadrh-dev
    - cd ocp
    - oc apply -f deployment.yml

The .gitlab-ci.yml file is a GitLab CI/CD configuration file that defines the pipeline for a Node.js application. Let's break down the key components:

  1. Stages: The pipeline has three stages: test, build, and deploy-oc.
  2. Variables:
  • IMAGE_NAME: The name and tag for the Docker image.
  • OCP_CLUSTER_URL: The URL for the OpenShift cluster.
  • PROJECT_NAME: The name of the OpenShift project.
  • DOCKER_DRIVER: The Docker storage driver.

3. Test Stage (run-test job):

  • This stage uses the node:14-alpine3.14 image.
  • It installs Node.js and npm.
  • It runs npm install and echoes "test is done".

4. Build Stage (build job):

  • This stage uses the docker:25.0.0-rc.2-cli image with the Docker-in-Docker service (docker:25.0.0-rc.2-dind).
  • It logs in to the Docker registry.
  • It builds an amd64 Docker image using docker buildx and pushes it to the registry.

5. Deploy-oc Stage (deploy job):

  • This stage is configured to run only on changes to the main branch.
  • It downloads the oc CLI for OpenShift (you could instead do this in a before_script and save the binary as an artifact for later use).
  • It logs in to the OpenShift cluster using the provided URL and token.
  • It sets the OpenShift project to saadrh-dev.
  • It navigates to the ocp directory and applies the deployment configuration from deployment.yml.

Now, let’s talk about the deployment.yml file.

The purpose of this file is to define Kubernetes or OpenShift resources for deploying and managing a containerized application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-web-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-web-app
  template:
    metadata:
      labels:
        app: simple-web-app
    spec:
      containers:
        - name: simple-web-app
          image: saadrepo/nodeapp:2
          ports:
            - containerPort: 9000
          resources:
            limits:
              memory: 64Mi # Adjust based on your application requirements
              cpu: 10m     # Adjust based on your application requirements
            requests:
              memory: 64Mi
              cpu: 10m
---
apiVersion: v1
kind: Service
metadata:
  name: simple-web-app-service
spec:
  selector:
    app: simple-web-app
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
  type: LoadBalancer

This example includes a Deployment resource specifying the number of replicas and a container image (saadrepo/nodeapp:2), and a Service resource exposing the application on port 9000.

Cool, don’t forget to add the CI/CD variables CI_REGISTRY_USER and CI_REGISTRY_PASSWORD (your DockerHub username and password) as well as OCP_TOKEN (your OpenShift token) in the GitLab project settings.

Once you commit the change to the GitLab repo, it automatically triggers the .gitlab-ci.yml pipeline, which shows the three stages we declared in the file.

List of GitLab pipelines and their status

Click on the pipeline created to check the job status

The three jobs ran successfully ✌️

It probably won’t work smoothly the first time, but you can edit the pipeline back and forth and check its status after each change.

Step 6: OpenShift Deployment Validation

The validation step is to confirm that the application has been successfully deployed and is accessible. To do so, go to the OpenShift console, navigate to the project you created (saadrh-dev in this case), then go to Workloads > Deployments and check the deployment and the pod status.

OpenShift Deployments
Pod creation ( blue is good, red is bad)

Now, the last step is to check whether the app we deployed inside the pod is accessible and returns its result successfully. Go to Networking > Routes and click the route URL to open the app.

OpenShift Routes
Congratulations, the app is up and running 😎

One last note: our deployment.yml file doesn’t include a Route resource, so you can expose the service that was already created using the oc command below:

oc expose service simple-web-app-service
# oc expose service Service_Name
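Alternatively, if you prefer to keep everything declarative, a Route resource could be appended to deployment.yml. This is a sketch, not the article’s actual manifest; the route name is an assumption, and OpenShift will generate the host automatically:

```yaml
# Hypothetical Route resource; the name simple-web-app-route is an assumption.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: simple-web-app-route
spec:
  to:
    kind: Service
    name: simple-web-app-service
  port:
    targetPort: 9000
```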

Fantastic news! We’ve completed this significant milestone. In the upcoming article, I’ll delve deeper into the GitLab Runner. Stay tuned for more insights!
