From Code to Containers: What Docker and Kubernetes Taught Me
Technical
Feb 20, 2026 · 5 min read · By Rugved Chandekar


Docker · Kubernetes · DevOps · Backend

At a certain point in backend development, writing good code isn't enough. The real question becomes: how reliably does that code run across different environments?

On one machine it works perfectly. In staging it fails. In production it behaves differently again. This is the environment problem, and Docker and Kubernetes are the industry's answer to it.

The Problem

Before containers, deploying an application meant managing environments manually. You'd document dependencies, hope the versions matched, and debug mysterious failures caused by differences between your laptop and the server. The classic "it works on my machine" problem.

The more complex the application — multiple services, specific runtime versions, environment variables, database connections — the more painful this became.

My First Docker Container

Docker solves the environment problem by packaging the application and all its dependencies into a container — a lightweight, isolated unit that runs identically everywhere.

A basic Python application Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Build it once, run it anywhere. No dependency conflicts, no environment drift.
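The workflow itself is two commands. A sketch, assuming the Dockerfile above sits in the current directory and using `my-api` as an illustrative image name:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-api .

# Run it, mapping container port 8000 to the host
docker run --rm -p 8000:8000 my-api
```

The same image that passes locally is what gets pushed to a registry and pulled in staging and production.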

For multi-service applications, Docker Compose orchestrates them locally:

version: '3.8'
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

One command — docker-compose up — starts the entire stack. Consistent every time.
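One caveat worth knowing: plain `depends_on` only waits for the db container to start, not for Postgres to actually accept connections. A sketch of the common fix using a healthcheck (the command and intervals here are illustrative, not the only valid choice):

```yaml
services:
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait until the healthcheck passes

  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, the api service doesn't come up until the database is genuinely ready, which removes a whole class of flaky startup failures.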

Understanding Kubernetes

Docker handles running containers. Kubernetes handles running containers at scale. It's a container orchestration platform that manages deployment, scaling, load balancing, and self-healing across clusters of machines.

A basic Kubernetes deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: my-api:latest
        ports:
        - containerPort: 8000
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8000
  type: LoadBalancer

This runs 3 replicas of the API, exposes it through a load balancer, and automatically restarts any container that fails. Infrastructure resilience without manual intervention.
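The self-healing gets stronger with explicit health probes. A sketch of what could be added to the container spec above — the `/health` path is an assumed endpoint the API would need to expose:

```yaml
        livenessProbe:           # restart the container if this fails
          httpGet:
            path: /health        # assumed health endpoint
            port: 8000
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:          # hold traffic from the Service until this passes
          httpGet:
            path: /health
            port: 8000
          periodSeconds: 5
```

The liveness probe tells Kubernetes when to restart a hung container; the readiness probe tells it when a container is allowed to receive traffic. They answer different questions, and conflating them is a common early mistake.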

The Mental Model Shift

Learning Docker and Kubernetes wasn't just about commands. It changed how I think about applications.

Previously, I thought in terms of servers: this code runs on this machine. Now I think in terms of workloads: this service needs X resources and should run Y replicas. The infrastructure underneath becomes an abstraction.

This shift matters because it separates application concerns from infrastructure concerns. Your code describes what it needs. Kubernetes figures out where and how to run it.

Practical Outcomes

After learning and applying these tools:

  • Zero environment-related bugs — the container either works or it doesn't, and the failure is reproducible
  • Faster onboarding — new team members clone the repo and run docker-compose up; they're running the full stack in minutes
  • Consistent staging/production parity — same container image, same behavior in every environment
  • Easier horizontal scaling — adjusting replicas is a one-line change in a YAML file
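Scaling, for instance, is either an edit to `replicas:` in the manifest or a single imperative command. A sketch using the Deployment name from the manifest above (the filename is illustrative):

```shell
# Scale the Deployment from 3 to 5 replicas imperatively
kubectl scale deployment api-deployment --replicas=5

# Or change replicas: in the YAML and reapply declaratively
kubectl apply -f deployment.yaml
```

The declarative route is the one that belongs in version control; the imperative command is useful for quick experiments.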

Why This Changed How I Architect Systems

Containerization is now part of my architecture thinking from day one — not something I add after the fact when deployment becomes painful.

When I design a new service, I'm already thinking about how it will be containerized, what its resource requirements are, and how it will be exposed. The deployment story is part of the design story.

Good code that can't be deployed reliably isn't production-ready. Docker and Kubernetes close that gap.

Building backend systems that need to run reliably at scale? Let's discuss architecture.

Rugved Chandekar, AI Systems Engineer @ Idyllic Services. Docker & Kubernetes. IEEE Author.