Using a GitHub Actions Pipeline For Creating Containers

GitHub Actions is a powerful CI/CD tool, and it’s one of my preferred choices due to its seamless integration with GitHub. A common workflow I use involves building a container from a Dockerfile and pushing it to a remote repository.

Prerequisites

In this post, I’ll quickly walk you through the steps to set up this type of pipeline. To get the most out of it, you should already have a basic understanding of containers, how to write Dockerfiles, and what a container registry is. We’ll briefly touch on each one.

What is GitHub Actions?

GitHub Actions is a CI/CD tool for automating integration and deployment steps on a remote runner. Every workflow starts with a fresh build environment, and it hooks in really nicely with GitHub. It’s my tool of choice for running my deployment automations.

Docker and Containers

Containers are just a way to package and run software. Docker is the most popular containerization software by far. Don’t count out Podman, though: many of its commands line up with Docker’s, and the team over there has done a nice job bringing it up to spec.

Containerization works by virtualizing and packaging things up at the OS level. This gives us nicely isolated packages and runtime environments. I’m used to working with containers, so that’s what I use. There may be something better, but in this case I stick to what I know.

Container Registry

This is our storage for our containers. Container images are just blobs of data at the end of the day. However, registries give us a nice way of naming, indexing, and tagging different versions of those blobs.
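As an illustration, a full image reference like myregistry.com/python-app:latest encodes the registry host, the repository name, and the tag. Here is a minimal sketch of pulling those pieces apart with shell parameter expansion (the registry and repository names are just example values):

```shell
# Hypothetical image reference: registry host / repository : tag
ref="myregistry.com/python-app:latest"

registry="${ref%%/*}"   # everything before the first slash -> registry host
rest="${ref#*/}"        # drop the registry prefix
repo="${rest%%:*}"      # everything before the colon -> repository name
tag="${rest##*:}"       # everything after the colon -> tag

echo "registry=$registry repo=$repo tag=$tag"
```

When you push, the registry files the blob under that repository name and tag, which is what lets you pull a specific version later.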

Our Example Project

Let’s get started! First, we’ll scaffold a project. Here, we have a simple Python script that prints a greeting. Our first step is to containerize this project.

Overall Project Structure

We have a simple Python project in our repository. It’s made up of three files, and from a high-level view it looks like the following:

├── main.py                       # This file makes up our application
├── dockerfile                    # Dockerfile we'll be building
└── .github/workflows/deploy.yml  # Where we define our pipeline
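If you’re starting from an empty repository, the layout above can be scaffolded with a couple of commands (the file contents come in the next section):

```shell
# Create the workflows directory GitHub Actions looks in,
# plus empty placeholders for the app, Dockerfile, and pipeline
mkdir -p .github/workflows
touch main.py dockerfile .github/workflows/deploy.yml
```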

Defining our files

Our Python application is dead simple: it’s a hello-world script. We won’t get into building and packaging more complicated Python projects; that’s for another tutorial.

main.py

def main():
    print("Hello World, from docker container!")

if __name__ == "__main__":
    main()
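It’s worth sanity-checking the script locally before containerizing it. A quick way to do that (here we recreate main.py with a heredoc so the snippet is self-contained):

```shell
# Write out the hello-world script and run it with the local interpreter
cat > main.py <<'EOF'
def main():
    print("Hello World, from docker container!")

if __name__ == "__main__":
    main()
EOF

python3 main.py
```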

Our dockerfile is a bit more involved. We define our base image, copy in our Python script, and add a command to run it.

dockerfile

FROM python:3.12
COPY main.py .
CMD ["python", "./main.py"]

The Basic Pipeline

Let’s get into the meat of the post, AKA making our GitHub actions pipeline. We’ll start small by setting up a basic pipeline and grow from there.

GitHub Actions YAML Setup

Next, we’ll build the pipeline. To do this, create a YAML file in the .github/workflows directory. GitHub Actions specifically looks in this directory to find Actions workflows. For now, we’ll set up the pipeline to build our Docker image.

Our Starting file

Luckily, there are a bunch of prebuilt actions that help us, so we don’t have to write out all the steps from scratch. I’ll quickly review what each one does. We define a job that runs on Ubuntu; there we check out the code from the git repository, run the build, and push to the registry.

  • actions/checkout This action checks out our git repo onto our runner machine. This is where most actions will start.

  • docker/setup-qemu-action This action installs QEMU. QEMU is lower-level software that provides emulation, which lets Docker build images for architectures other than the runner’s.

  • docker/setup-buildx-action This sets up our Docker Buildx builder. This is the actual software that builds our container images.

  • docker/build-push-action This action wraps the docker build and docker push commands. It’s bundled into one interface so you don’t have to call those steps manually.

The Basic Pipeline

This is what those actions look like together, making up our basic pipeline.

name: test-cd-pipeline

# A trigger is required; we'll refine this in the advanced section
on: push

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - 
        name: Checkout main
        uses: actions/checkout@v2
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Build and push
        id: docker_build
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: myregistry.com/python-app:latest

The Advanced Pipeline

The above should work but there are a few things to clean up. We want to deal with dynamic tagging, handling private registries and controlling when the action runs.

Additional Configurations

These are the configurations that I like to add to my production pipelines.

  1. Specifying registry credentials for our pipeline When registries are private, we need a way to log into the registry. This is done using the docker/login-action. Basic authentication is shown below (there are other ways to authenticate, by the way):

    name: Login to Container Registry
    uses: docker/login-action@v2
    with:
      registry: myregistry.com
      username: username
      password: password
    
  2. Tagging our Application Uniquely Using latest for every release is not a great way to distinguish versions. I personally prefer to make a tag based on the commit hash. This is shown in this step:

    name: Get Tag
    run: |
      tag=$(git rev-parse --short HEAD)
      echo "$tag"
      echo "TAG=${tag}" >> $GITHUB_ENV
    
  3. When does this run? This makes sure that the workflow only runs on the master branch when a push occurs. The workflow_dispatch trigger allows us to kick off jobs from the GitHub UI.

    on:
      push:
        branches:
          - 'master'
      workflow_dispatch:
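To see what the Get Tag step produces, here is a local simulation: a fixed example hash stands in for `git rev-parse --short HEAD`, and a temp file stands in for GITHUB_ENV, the file GitHub Actions reads environment exports from. The hash value is made up for illustration:

```shell
# Simulate the runner's GITHUB_ENV file locally
GITHUB_ENV=$(mktemp)

# Stand-in for: tag=$(git rev-parse --short HEAD)
tag="3fa9c1d"

echo "$tag"
echo "TAG=${tag}" >> "$GITHUB_ENV"

# Later steps read this file, so they can reference ${{ env.TAG }}
cat "$GITHUB_ENV"
```

On a real runner the tag follows the commit, so every push gets a unique, traceable image tag.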
    

Exposed Secrets? Let’s Not Do That

To avoid storing credentials directly in the YAML file, we’ll use GitHub Actions secrets. You can store your username and password as secrets and reference them with the ${{ secrets.<NAME> }} templating syntax. Now our pipeline is set up to push to a private registry securely.

  name: Login to Container Registry
  uses: docker/login-action@v2
  with:
    registry: myregistry.com
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}

Final Configuration

Here is the final configuration, with everything put together.

name: test-cd-pipeline

on:
  push:
    branches:
      - 'master'
  workflow_dispatch:
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - 
        name: Checkout main
        uses: actions/checkout@v2
      - 
        name: Get Tag
        run: |
          tag=$(git rev-parse --short HEAD)
          echo "$tag"
          echo "TAG=${tag}" >> $GITHUB_ENV
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      -
        name: Login to Container Registry
        uses: docker/login-action@v2 
        with:
          registry: myregistry.com
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      -
        name: Build and push
        id: docker_build
        uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: myregistry.com/python-app:${{ env.TAG }}

Trying it Out

Go ahead and try this out on your own project. There are a lot of places to go from here. You can run test cases, update downstream infrastructure with your new container tag and even expand this for test branches.

I write about DevOps-related content. If you’re into that sort of thing, check out how to containerize a Django application and use that container in NixOS.
