Julio Rodriguez

Automating Deployments with GitLab CI/CD on Kubernetes

Context

At Reach Tools, deploying web applications was a manual, time-consuming process. Engineers had to SSH into servers, pull code, build artifacts, run migrations, and restart services — all by hand. A single deployment could take a couple of hours, and with multiple projects shipping daily, the accumulated time and risk were unsustainable.

I designed and implemented a fully automated CI/CD platform using GitLab pipelines orchestrated by Kubernetes-native runners — transforming deployments from a manual burden into a repeatable, self-service process that completes in under 20 minutes.

The Problem I Solved

  • Manual deployments: Every release required SSH access, manual builds, and service restarts — error-prone and impossible to standardize
  • Hours per deployment: A single deploy could take 2+ hours when accounting for build, test, config changes, and verification across environments
  • No pipeline standardization: Each project had its own ad-hoc deployment process — some used scripts, some were entirely manual
  • Mixed tech stacks: Java, Node.js, and AWS Lambda projects each had different build and deploy requirements with no unified approach
  • No rollback capability: If a deployment failed, rolling back meant manually reverting code and repeating the entire process
  • Runner bottlenecks: Shared CI runners on VMs couldn’t scale with the growing number of projects and parallel pipelines

My Approach

GitLab Runner Architecture on Kubernetes

Instead of running GitLab Runners on static VMs that become bottlenecks, I deployed runners directly on the EKS cluster using the Kubernetes executor. Each pipeline job spawns an ephemeral pod that executes the job and is destroyed after completion.

[Diagram: the GitLab server assigns jobs to a GitLab Runner controller pod on the EKS cluster; the controller spawns ephemeral job pods (Job Pod 1 — Build Java, Job Pod 2 — Build Node.js, Job Pod 3 — Deploy Lambda, Job Pod N — …) in the gitlab-runners namespace.]

Why this architecture:

  • Elastic horizontal scaling — the cluster spawns as many job pods as demand requires (up to node capacity), so jobs no longer wait in a queue
  • Clean execution environments — every job starts from a fresh pod, eliminating “works on my runner” issues
  • Resource efficiency — pods are destroyed after job completion, no idle runners consuming resources
  • Isolation — each job runs in its own pod with its own resources, no noisy-neighbor interference between pipelines
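The runner itself is installed with the official gitlab-runner Helm chart. A minimal sketch of the values, assuming an illustrative GitLab URL and resource numbers (the actual limits would be tuned per workload):

```yaml
# values.yaml — illustrative GitLab Runner config for the Kubernetes executor
gitlabUrl: https://gitlab.example.com/   # placeholder GitLab instance URL
concurrent: 20                           # max job pods running in parallel
runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab-runners"     # dedicated namespace for job pods
        cpu_request = "500m"
        memory_request = "512Mi"
        # Each job gets a fresh pod that is deleted when the job finishes
```

The registration token is injected at install time (e.g. via `--set` or a sealed secret), never committed to the repository.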

Standardized Pipeline Templates

The biggest productivity win was creating reusable pipeline templates that any project could adopt with minimal configuration. Instead of each team writing pipelines from scratch, I built shared templates for each tech stack.

[Diagram: shared pipeline templates (Java Template, Node.js Template, Lambda Template) extended by projects — the Java API and Node.js Service extend the Java and Node.js templates, the React App extends the Node.js template, and the Payment Lambda extends the Lambda template.]
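Adopting a template takes only a few lines in a project's own pipeline file. A sketch, assuming a hypothetical templates repository path and variable name:

```yaml
# .gitlab-ci.yml in a Java project — pulls in the shared Java template
include:
  - project: "platform/pipeline-templates"   # hypothetical central templates repo
    file: "java.gitlab-ci.yml"

# Only project-specific values live here; stages and jobs come from the template
variables:
  APP_NAME: "java-api"
```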

Pipeline Flow — End to End

[Diagram: developer pushes code → GitLab detects the change → runner spawns a job pod → Build stage (Maven build + JAR / npm ci + build / npm ci + zip) → Test stage (unit tests, linting) → Publish stage (build + push image to ECR / package Lambda artifact) → Deploy stage (kubectl set image + rollout / aws lambda update-function-code) → rollout status check.]
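For the Java path, the stages above can be sketched roughly as follows. Image tags, variable names, and timeouts are illustrative, not the exact template:

```yaml
# Sketch of the Java template's stage sequence
stages: [build, test, publish, deploy]

build:
  stage: build
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B package -DskipTests       # produce the application JAR
  artifacts:
    paths: [target/*.jar]

test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B verify                    # unit tests + linting plugins

publish:
  stage: publish
  image: docker:24
  services: [docker:24-dind]           # assumes privileged DinD is allowed on the runner
  script:
    - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    - docker build -t "$ECR_REGISTRY/$APP_NAME:$CI_COMMIT_SHORT_SHA" .
    - docker push "$ECR_REGISTRY/$APP_NAME:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image "deployment/$APP_NAME" "$APP_NAME=$ECR_REGISTRY/$APP_NAME:$CI_COMMIT_SHORT_SHA"
    - kubectl rollout status "deployment/$APP_NAME" --timeout=300s   # fails the job if rollout stalls
```

The `rollout status` check is what makes failed deployments visible in the pipeline itself rather than being discovered later.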

Semantic Versioning & Branch Strategy

I standardized the deployment triggers across all projects:

  • Feature branches → run build + test stages only (validate before merge)
  • Staging branch → full pipeline including deploy to staging environment
  • Main branch → full pipeline including deploy to production environment
  • Tags → trigger versioned releases with semantic versioning

This gave teams a clear, predictable deployment model: merge to staging to test, merge to main to ship.
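In GitLab terms, these triggers translate into `rules:` clauses on the deploy jobs. A sketch, with a hypothetical deploy wrapper script standing in for the template's deploy logic:

```yaml
# Illustrative branch/tag trigger rules on the deploy jobs
deploy-staging:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "staging"'
  script:
    - ./deploy.sh staging                               # hypothetical wrapper around kubectl

deploy-production:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'     # main
    - if: '$CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/'        # semantic-version tags, e.g. v1.4.2
  script:
    - ./deploy.sh production
```

Feature branches match neither rule, so they run only the build and test stages defined earlier in the template.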

Security & Secrets Management

All sensitive values are managed outside the codebase:

  • GitLab CI/CD Variables — masked and protected, scoped per environment
  • AWS IAM Roles — the runner service account assumes scoped roles for ECR push and EKS deploy via IRSA (IAM Roles for Service Accounts)
  • No hardcoded credentials — ECR login uses IAM-based authentication, kubectl uses the runner’s service account token
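The IRSA wiring boils down to an annotation on the Kubernetes service account that runner job pods use. A sketch with placeholder account ID and role name:

```yaml
# Illustrative service account for runner job pods (role ARN is a placeholder)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner-jobs
  namespace: gitlab-runners
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/gitlab-ci-deployer
```

Pods running under this service account receive short-lived AWS credentials injected by EKS, so ECR pushes and Lambda updates work without any access keys stored in CI variables.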

Results

  • Deployment time reduced from ~2 hours to under 20 minutes — the full pipeline (build, test, publish, deploy) completes automatically without manual intervention
  • 100% of projects automated — Java, Node.js, and Lambda workloads all deploy through standardized pipelines
  • Zero runner bottlenecks — Kubernetes-native runners scale horizontally, spawning pods on demand instead of queuing on shared VMs
  • Consistent deployments — every release follows the same stages regardless of tech stack, eliminating ad-hoc processes
  • Built-in rollback — failed deployments are detected automatically via kubectl rollout status; previous image versions remain in ECR, so rollback is immediate
  • Developer self-service — engineers deploy by merging code, no SSH access or manual steps required
  • Clean environments — ephemeral pods eliminate state drift and “works on my runner” issues
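The rollback path can be exposed as a manual pipeline job; a sketch of what such a job might look like in a template (job name and timeout are illustrative):

```yaml
# Illustrative manual rollback job — reverts to the previous ReplicaSet
rollback:
  stage: deploy
  image: bitnami/kubectl:latest
  when: manual                    # triggered by an engineer after a failed verification
  script:
    - kubectl rollout undo "deployment/$APP_NAME"
    - kubectl rollout status "deployment/$APP_NAME" --timeout=300s
```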

Key Takeaways

  1. Kubernetes-native runners eliminate CI bottlenecks — ephemeral pods scale with demand and provide clean, isolated execution environments
  2. Shared pipeline templates save weeks of work — invest time in reusable templates once, every new project inherits the standard automatically
  3. Branch-based deployment triggers are predictable — teams know exactly what happens when they merge to staging or main
  4. Separate build from deploy — publishing Docker images to ECR decouples the artifact from the deployment target, enabling rollbacks and multi-environment deploys
  5. Scoped IAM with IRSA — avoid storing AWS credentials; let Kubernetes service accounts assume roles directly

Tools & Technologies

  • GitLab CI/CD — Pipeline orchestration and shared templates
  • GitLab Runner (Kubernetes Executor) — Ephemeral job pods on EKS
  • AWS EKS — Kubernetes cluster hosting runners and application workloads
  • AWS ECR — Docker image registry
  • AWS Lambda — Serverless function deployments
  • Helm — GitLab Runner deployment and configuration
  • Docker — Container image builds
  • Maven — Java build toolchain
  • kubectl — Kubernetes deployment operations
  • IRSA — IAM Roles for Service Accounts (secure credential-free access)