Docker Compose Workflows to Kubernetes Migration

Take your Rails development to the next level: Our guide shows you how to seamlessly migrate your Docker Compose workflow to Kubernetes. From local development to cloud scaling – learn how to operate your application efficiently and reliably.

Containerizing application components is the first step in deploying and scaling modern, stateless applications on distributed platforms. If you’ve used Docker Compose in development, you’ve already modernized and containerized your application by:

  • Extracting the necessary configuration information from your code.
  • Externalizing the state of your application.
  • Packaging your application for repeatable use.

To run your services on a distributed platform like Kubernetes, you’ll need to translate your Compose service definitions into Kubernetes objects. A tool called Kompose helps developers migrate Compose workflows to container orchestrators like Kubernetes or OpenShift.

In this tutorial, you’ll translate Compose services into Kubernetes objects using Kompose. You’ll use the object definitions provided by Kompose as a starting point and adjust them to ensure your setup uses secrets, services, and PersistentVolumeClaims in the manner Kubernetes expects. By the end of this tutorial, you’ll have a Rails application with a PostgreSQL database running on a Kubernetes cluster. This setup mirrors the functionality of the code in “Containerizing a Ruby on Rails Application for Development with Docker Compose” and is a solid starting point for building a production-ready solution that meets your needs.

Prerequisites

  • A Kubernetes 1.19+ cluster with role-based access control (RBAC) enabled.
  • The kubectl command-line tool installed on your local machine or development server, configured to connect to your cluster.
  • Docker installed on your local machine or development server.
  • A Docker Hub account.

Step 1 — Installing Kompose

To get started with Kompose, navigate to the project’s GitHub Releases page and copy the link to the latest version. Then, download the latest version of Kompose and make it executable.

curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv ./kompose /usr/local/bin/kompose
kompose version
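
The command above assumes 64-bit Linux. On other platforms, grab the matching asset from the same Releases page; for example, on an Intel Mac the download step would look like the following (an illustration, not the full platform list):

# macOS (Intel) example; choose the asset that matches your OS and architecture
curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-darwin-amd64 -o kompose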

Step 2 — Cloning and Packaging the Application

Clone the project code and create the application image, which you’ll later run on your Kubernetes cluster.

git clone https://github.com/do-community/rails-sidekiq.git rails_project
cd rails_project
git checkout compose-workflow
docker build -t your_dockerhub_user/rails-kubernetes .
docker push your_dockerhub_user/rails-kubernetes
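
If the push fails with an authentication error, you likely need to log in to Docker Hub first and then retry. A minimal sketch, with your_dockerhub_user standing in for your Docker Hub account name:

# Authenticate with Docker Hub (prompts for your password or access token), then retry the push
docker login -u your_dockerhub_user
docker push your_dockerhub_user/rails-kubernetes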

Step 3 — Converting Compose Services to Kubernetes Objects with Kompose

Use Kompose to convert your Compose service definitions into Kubernetes objects. Adjust the generated YAML files to fit your needs.

kompose convert
mkdir k8s-manifests
mv *.yaml k8s-manifests
cd k8s-manifests
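
Before editing the generated files, you can optionally validate them against the Kubernetes API without creating any objects, using kubectl's client-side dry run:

# Validate the generated manifests without creating any resources
kubectl apply --dry-run=client -f .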

Step 4 — Creating Kubernetes Secrets

Create a Secret for your database name, user, and password, and reference it in your application and database deployments (a sketch of this follows at the end of this step).

echo -n 'your_database_name' | base64
echo -n 'your_database_username' | base64
echo -n 'your_database_password' | base64
nano secret.yaml

Add the following code to the `secret.yaml` file:

apiVersion: v1
kind: Secret
metadata:
  name: database-secret
data:
  DATABASE_NAME: your_encoded_database_name
  DATABASE_PASSWORD: your_encoded_password
  DATABASE_USER: your_encoded_username
  POSTGRES_DB: your_encoded_database_name
  POSTGRES_PASSWORD: your_encoded_password
  POSTGRES_USER: your_encoded_username

Save and close the file.
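
For the secret values to reach your containers, the app and db deployments need to reference database-secret. The exact deployment and container names depend on what Kompose generated from your Compose file; the following is a minimal sketch of the envFrom approach, which injects every key in the Secret as an environment variable (the container name and image shown here are placeholders):

# Sketch: an envFrom block inside the pod template of a deployment manifest
spec:
  containers:
    - name: app
      image: your_dockerhub_user/rails-kubernetes
      envFrom:
        - secretRef:
            name: database-secret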

Step 5 — Adjusting the PersistentVolumeClaim and Exposing the Application Frontend

Before running your application, we’ll make two final adjustments to ensure that our database storage is properly provisioned and that we expose our application frontend using a load balancer.

First, we’ll modify the storage resource in the PersistentVolumeClaim that Kompose created for us. This claim allows us to dynamically provision storage to manage the state of our application.

To work with PersistentVolumeClaims, a StorageClass must be created and configured to provide storage resources. In our case, since we’re working with centron Kubernetes, our default StorageClass provisioner is set to dobs.csi.centron.de – centron Block Storage.

We can verify this by running the following command:

kubectl get storageclass

If you’re using a centron cluster, you’ll see the following output:

NAME                         PROVISIONER           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
do-block-storage (default)   dobs.csi.centron.de   Delete          Immediate           true                   76m

If you’re not using a centron cluster, you’ll need to create a StorageClass and configure a provisioner of your choice. For details on how to do this, please refer to the official documentation.
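
As an illustration only (not centron-specific), a StorageClass backed by the in-tree AWS EBS provisioner could look like the following; the provisioner and parameters you need depend entirely on your cluster:

# Example only: substitute a provisioner appropriate to your environment
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2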

When Kompose created the db-data-persistentvolumeclaim.yaml file, it set the storage resource to a size that doesn’t meet our provisioner’s minimum size requirements. Therefore, we need to modify our PersistentVolumeClaim to use the smallest possible centron Block Storage unit: 1 GB. Adjust this as needed based on your storage requirements.

Open db-data-persistentvolumeclaim.yaml:

nano db-data-persistentvolumeclaim.yaml

Replace the storage value with 1Gi:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: db-data
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

Note that `accessModes: ReadWriteOnce` means the volume provided as a result of this claim can be mounted read-write by a single node only. Please refer to the documentation for more information on the different access modes.

Save and close the file when you’re done.

Next, open app-service.yaml:

nano app-service.yaml

We’ll expose this service externally using a centron Load Balancer. If you’re not using a centron cluster, please consult your cloud provider’s documentation on their load balancers. Alternatively, you can refer to the official Kubernetes documentation on setting up a high-availability cluster with kubeadm; however, you won’t be able to use PersistentVolumeClaims for storage provisioning in this case.

Specify LoadBalancer as the service type within the service specification:

apiVersion: v1
kind: Service
. . .
spec:
  type: LoadBalancer
  ports:
. . .

When we create the app service, a load balancer will automatically be created, providing us with an external IP through which we can access our application.

Save and close the file when you’ve finished editing.
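
Once the manifests are applied in the next step, you can retrieve the external IP that the load balancer assigns. A minimal sketch, assuming Kompose named the Service app to match app-service.yaml:

# Check the Service for its EXTERNAL-IP once the load balancer has been provisioned
kubectl get service app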

With all our files in place, we’re ready to launch and test the application.
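
As a minimal sketch of that launch step, assuming you are still in the k8s-manifests directory and that secret.yaml lives alongside the other manifests, you would create the objects and watch the pods start roughly like this:

# Create all objects defined in this directory, then watch the pods come up
kubectl apply -f .
kubectl get pods -w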

Note: If you’d like to compare your modified Kubernetes manifests with a set of reference files to ensure your changes match this tutorial, the accompanying GitHub repository includes a set of tested manifests. You can compare each file individually, or switch your local Git branch to the kubernetes-workflow branch.

If you choose to switch branches, make sure to copy your secret.yaml file into the newly checked-out version, since it was added to .gitignore earlier in this tutorial.
