Building Self-Hosted CI/CD with Rust and Kubernetes
In today’s cloud-first world, many development teams rely heavily on managed CI/CD services. While these services offer convenience, they also introduce dependencies on external providers and can become expensive as teams scale. This guide explores how to build a robust, self-hosted CI/CD pipeline using Rust-based tools and Kubernetes.
Why Self-Hosted CI/CD?
Self-hosting your CI/CD infrastructure offers several advantages:
- Control: Full control over your build environment and data
- Cost: Predictable costs that scale with your infrastructure, not your usage
- Security: Your code and artifacts never leave your infrastructure
- Customization: Complete flexibility to customize your pipeline
- Compliance: Easier to meet regulatory requirements
Architecture Overview
Our self-hosted CI/CD solution consists of:
- Git Server: Gitea for repository hosting
- CI/CD Engine: Tekton Pipelines on Kubernetes
- Artifact Storage: Harbor container registry
- Monitoring: Prometheus and Grafana
- Security Scanning: Trivy for vulnerability scanning
Setting Up the Foundation
Kubernetes Cluster Setup
First, ensure you have a Kubernetes cluster. For this guide, we’ll assume you have a cluster with at least 3 nodes and sufficient resources.
```yaml
# cluster-config.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ci-cd
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
```
Installing Tekton Pipelines
Tekton provides cloud-native CI/CD capabilities:
```bash
# Install Tekton Pipelines
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Install Tekton Dashboard
kubectl apply --filename https://storage.googleapis.com/tekton-releases/dashboard/latest/tekton-dashboard-release.yaml
```
Rust-Based Build Tools
Custom Pipeline Controller
We’ll create a Rust-based controller to manage our pipelines:
```rust
use k8s_openapi::api::core::v1::Pod;
use kube::api::PostParams;
use kube::{Api, Client, ResourceExt};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct BuildRequest {
    pub repository: String,
    pub branch: String,
    pub commit_sha: String,
}

pub struct PipelineController {
    client: Client,
}

impl PipelineController {
    pub fn new(client: Client) -> Self {
        Self { client }
    }

    pub async fn trigger_build(&self, request: BuildRequest) -> Result<String, Box<dyn std::error::Error>> {
        // Build the pod spec that backs this build request
        let build_pod = self.create_build_pod(&request).await?;

        // Submit it to the cluster in the ci-cd namespace
        let api: Api<Pod> = Api::namespaced(self.client.clone(), "ci-cd");
        let result = api.create(&PostParams::default(), &build_pod).await?;
        Ok(result.name_any())
    }

    async fn create_build_pod(&self, request: &BuildRequest) -> Result<Pod, Box<dyn std::error::Error>> {
        // Translate the build request into a pod spec (or into a Tekton
        // TaskRun via the CRD API, if you prefer to drive Tekton directly)
        todo!("Implement build pod creation")
    }
}
```
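Whatever object the controller creates, it needs a deterministic, Kubernetes-legal name. One simple convention (an assumption of this guide, not a Tekton requirement) combines a sanitized repository name with the short commit SHA:

```rust
/// Derive a Kubernetes-friendly object name for a build.
/// The "build-<repo>-<short-sha>" scheme is a hypothetical convention.
fn build_object_name(repository: &str, commit_sha: &str) -> String {
    // Keep only the final path segment of the repository URL
    let repo = repository
        .trim_end_matches(".git")
        .rsplit('/')
        .next()
        .unwrap_or(repository);
    // Kubernetes names must be lowercase alphanumerics or '-'
    let sanitized: String = repo
        .to_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect();
    // Use at most the first 7 characters of the commit SHA
    let short_sha = &commit_sha[..commit_sha.len().min(7)];
    format!("build-{}-{}", sanitized, short_sha)
}

fn main() {
    let name = build_object_name("https://git.example.com/team/my-app.git", "a1b2c3d4e5f6");
    println!("{}", name); // build-my-app-a1b2c3d
}
```

Deterministic names also make it easy to deduplicate triggers: a second webhook for the same commit maps to the same object name and can be rejected by the API server.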
Build Agent
A lightweight Rust agent for executing build steps:
```rust
use tokio::process::Command;
use tracing::{error, info};

pub struct BuildAgent {
    workspace: String,
}

impl BuildAgent {
    pub fn new(workspace: String) -> Self {
        Self { workspace }
    }

    pub async fn clone_repository(&self, repo_url: &str, branch: &str) -> Result<(), Box<dyn std::error::Error>> {
        info!("Cloning repository: {} branch: {}", repo_url, branch);
        let output = Command::new("git")
            .args(["clone", "--branch", branch, repo_url, &self.workspace])
            .output()
            .await?;
        if !output.status.success() {
            error!("Failed to clone repository: {}", String::from_utf8_lossy(&output.stderr));
            return Err("Git clone failed".into());
        }
        Ok(())
    }

    pub async fn run_tests(&self) -> Result<(), Box<dyn std::error::Error>> {
        info!("Running tests");
        let output = Command::new("cargo")
            .args(["test", "--release"])
            .current_dir(&self.workspace)
            .output()
            .await?;
        if !output.status.success() {
            error!("Tests failed: {}", String::from_utf8_lossy(&output.stderr));
            return Err("Tests failed".into());
        }
        Ok(())
    }

    pub async fn build_container(&self, image_name: &str) -> Result<(), Box<dyn std::error::Error>> {
        info!("Building container image: {}", image_name);
        let output = Command::new("docker")
            .args(["build", "-t", image_name, "."])
            .current_dir(&self.workspace)
            .output()
            .await?;
        if !output.status.success() {
            error!("Container build failed: {}", String::from_utf8_lossy(&output.stderr));
            return Err("Container build failed".into());
        }
        Ok(())
    }
}
```

Note the use of `tokio::process::Command` rather than `std::process::Command`: the blocking standard-library version would stall the async runtime's worker thread for the duration of each git, cargo, or docker invocation.
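The image name passed to `build_container` should tie each image back to the commit that produced it. A small helper like this (the registry/project layout is an assumption matching a typical Harbor setup) keeps tagging consistent across the pipeline:

```rust
/// Compose a full image reference, tagging with the short commit SHA
/// so every image is traceable to the commit that built it.
/// The "<registry>/<project>/<app>" layout is a hypothetical Harbor convention.
fn image_reference(registry: &str, project: &str, app: &str, commit_sha: &str) -> String {
    // Use at most the first 7 characters of the commit SHA as the tag
    let short_sha = &commit_sha[..commit_sha.len().min(7)];
    format!("{}/{}/{}:{}", registry, project, app, short_sha)
}

fn main() {
    let image = image_reference("harbor.internal", "ci", "my-app", "deadbeefcafe");
    println!("{}", image); // harbor.internal/ci/my-app:deadbee
}
```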
Pipeline Definitions
Basic Rust Application Pipeline
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: rust-app-pipeline
  namespace: ci-cd
spec:
  params:
    - name: repo-url
      type: string
    - name: branch
      type: string
      default: main
    - name: image-name
      type: string
  workspaces:
    - name: shared-data
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: shared-data
      params:
        - name: url
          value: $(params.repo-url)
        - name: revision
          value: $(params.branch)
    - name: test
      runAfter: ["fetch-source"]
      taskRef:
        name: rust-test
      workspaces:
        - name: source
          workspace: shared-data
    - name: build
      runAfter: ["test"]
      taskRef:
        name: rust-build
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: image
          value: $(params.image-name)
    - name: security-scan
      runAfter: ["build"]
      taskRef:
        name: trivy-scan
      params:
        - name: image
          value: $(params.image-name)
    - name: deploy-staging
      runAfter: ["security-scan"]
      taskRef:
        name: kubectl-deploy
      params:
        - name: image
          value: $(params.image-name)
        - name: environment
          value: staging
```
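A Pipeline is only a template; to execute it you submit a PipelineRun that binds the parameters and backs the shared workspace with a volume. A minimal example (the repository URL, image name, and PVC name are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: rust-app-run-
  namespace: ci-cd
spec:
  pipelineRef:
    name: rust-app-pipeline
  params:
    - name: repo-url
      value: https://git.example.com/team/my-app.git
    - name: branch
      value: main
    - name: image-name
      value: harbor.internal/ci/my-app:latest
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: ci-workspace-pvc
```

Using `generateName` instead of `name` lets you submit the same manifest repeatedly with `kubectl create`; each run gets a unique suffix.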
Custom Rust Build Task
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: rust-build
  namespace: ci-cd
spec:
  params:
    - name: image
      type: string
  workspaces:
    - name: source
  steps:
    - name: cargo-build
      image: rust:1.75
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/bash
        set -e
        # Cache dependencies
        cargo fetch
        # Run clippy for linting
        cargo clippy -- -D warnings
        # Build release binary
        cargo build --release
        # Run security audit (cargo-audit must be preinstalled in the image;
        # the stock rust image does not ship it)
        cargo audit
    - name: container-build
      image: gcr.io/kaniko-project/executor:latest
      workingDir: $(workspaces.source.path)
      env:
        - name: DOCKER_CONFIG
          value: /kaniko/.docker
      command:
        - /kaniko/executor
        - --dockerfile=./Dockerfile
        - --destination=$(params.image)
        - --cache=true
```
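The Kaniko step expects a Dockerfile at the workspace root. A minimal multi-stage Dockerfile for a Rust service might look like this (the binary name `my-app` is a placeholder for your crate's binary):

```dockerfile
# Build stage: compile the release binary
FROM rust:1.75 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Runtime stage: small image containing only the binary
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]
```

The multi-stage split keeps the toolchain out of the final image, which shrinks it from over a gigabyte to tens of megabytes and reduces the surface Trivy has to scan.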
Monitoring and Observability
Rust Metrics Collector
```rust
use prometheus::{Counter, Encoder, Histogram, HistogramOpts, IntGauge, Registry, TextEncoder};

pub struct MetricsCollector {
    registry: Registry,
    build_counter: Counter,
    build_duration: Histogram,
    active_builds: IntGauge,
}

impl MetricsCollector {
    pub fn new() -> Self {
        let registry = Registry::new();
        let build_counter = Counter::new("ci_builds_total", "Total number of builds")
            .expect("Failed to create counter");
        // Histogram has no `new(name, help)` constructor; it is built from opts
        let build_duration = Histogram::with_opts(HistogramOpts::new(
            "ci_build_duration_seconds",
            "Build duration in seconds",
        ))
        .expect("Failed to create histogram");
        let active_builds = IntGauge::new("ci_active_builds", "Number of active builds")
            .expect("Failed to create gauge");

        registry.register(Box::new(build_counter.clone())).unwrap();
        registry.register(Box::new(build_duration.clone())).unwrap();
        registry.register(Box::new(active_builds.clone())).unwrap();

        Self {
            registry,
            build_counter,
            build_duration,
            active_builds,
        }
    }

    pub fn increment_builds(&self) {
        self.build_counter.inc();
    }

    pub fn observe_build_duration(&self, duration: f64) {
        self.build_duration.observe(duration);
    }

    pub fn set_active_builds(&self, count: i64) {
        self.active_builds.set(count);
    }

    pub async fn metrics_handler(&self) -> Result<impl warp::Reply, warp::Rejection> {
        // Encode all registered metrics in the Prometheus text format
        let encoder = TextEncoder::new();
        let metric_families = self.registry.gather();
        let mut buffer = Vec::new();
        encoder.encode(&metric_families, &mut buffer).unwrap();
        Ok(warp::reply::with_header(
            String::from_utf8(buffer).unwrap(),
            "content-type",
            "text/plain; version=0.0.4",
        ))
    }
}
```
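`observe_build_duration` expects wall-clock seconds as an `f64`. A minimal sketch of capturing that with the standard library, wrapping an arbitrary build step in a timer:

```rust
use std::time::Instant;

/// Run a closure and return its result along with the elapsed
/// wall-clock time in seconds, ready for a Prometheus histogram.
fn timed<T>(f: impl FnOnce() -> T) -> (T, f64) {
    let start = Instant::now();
    let result = f();
    (result, start.elapsed().as_secs_f64())
}

fn main() {
    // Stand-in for a build step; in the collector above the second
    // value would be passed to metrics.observe_build_duration(seconds)
    let (sum, seconds) = timed(|| (0..1_000).sum::<u64>());
    println!("sum={} took {:.6}s", sum, seconds);
}
```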
Security Considerations
Secret Management
```rust
use k8s_openapi::api::core::v1::Secret;
use kube::{Api, Client};

pub struct SecretManager {
    client: Client,
    namespace: String,
}

impl SecretManager {
    pub fn new(client: Client, namespace: String) -> Self {
        Self { client, namespace }
    }

    pub async fn get_secret(&self, name: &str, key: &str) -> Result<String, Box<dyn std::error::Error>> {
        let api: Api<Secret> = Api::namespaced(self.client.clone(), &self.namespace);
        let secret = api.get(name).await?;

        // k8s_openapi's ByteString is already base64-decoded during
        // deserialization, so the raw bytes can be used directly;
        // decoding a second time would fail on binary data
        if let Some(data) = secret.data {
            if let Some(value) = data.get(key) {
                return Ok(String::from_utf8(value.0.clone())?);
            }
        }
        Err(format!("Secret key {} not found in {}", key, name).into())
    }
}
```
Deployment and Scaling
Auto-scaling Build Agents
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: build-agent-hpa
  namespace: ci-cd
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: build-agent
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods
      pods:
        metric:
          name: pending_builds
        target:
          type: AverageValue
          averageValue: "2"
```
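With an `AverageValue` target of 2 pending builds per agent, the HPA's core rule, desiredReplicas = ceil(currentReplicas × currentMetric / target), collapses to ceil(totalPending / 2), clamped between `minReplicas` and `maxReplicas`. A small sketch of that arithmetic makes the scaling behavior easy to sanity-check:

```rust
/// Mirror of the HPA scaling rule for the pending_builds metric:
/// desired = ceil(total_pending / target_per_pod),
/// clamped to [min_replicas, max_replicas].
fn desired_replicas(total_pending: u32, target_per_pod: u32, min: u32, max: u32) -> u32 {
    // Ceiling division without floating point
    let desired = (total_pending + target_per_pod - 1) / target_per_pod;
    desired.clamp(min, max)
}

fn main() {
    // 9 queued builds at 2 per agent -> 5 agents
    println!("{}", desired_replicas(9, 2, 2, 20)); // 5
    // An idle queue still keeps the minimum of 2 agents warm
    println!("{}", desired_replicas(0, 2, 2, 20)); // 2
    // A burst of 100 builds is capped at maxReplicas
    println!("{}", desired_replicas(100, 2, 2, 20)); // 20
}
```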
Conclusion
Building a self-hosted CI/CD pipeline with Rust and Kubernetes provides excellent control, performance, and cost benefits. The Rust ecosystem offers powerful tools for building custom CI/CD components that are both fast and reliable.
Key takeaways:
- Start Simple: Begin with basic pipelines and add complexity gradually
- Monitor Everything: Comprehensive monitoring is crucial for troubleshooting
- Security First: Implement security scanning and secret management from the start
- Scale Gradually: Use auto-scaling to handle varying build loads efficiently
The combination of Rust’s performance and safety guarantees with Kubernetes’ orchestration capabilities creates a powerful foundation for any development team’s CI/CD needs.
Questions or feedback? Feel free to reach out on LinkedIn or GitHub.