Kubernetes for Developers: Deployments, Services, ConfigMaps, Ingress and Scaling
Kubernetes (K8s) is the standard platform for running containerized applications at scale. While DevOps engineers manage cluster infrastructure, developers need to understand how to write Kubernetes manifests, debug running workloads, and reason about deployments. This guide focuses on what application developers need to know.
Core Concepts
```
Cluster
└── Node (virtual/physical machine)
    └── Pod (one or more containers, shared network + storage)
        └── Container (your Docker image)

Workloads:
  Deployment    -- manages ReplicaSets, handles rolling updates
  StatefulSet   -- for stateful apps (databases)
  DaemonSet     -- one pod per node (log collectors, monitoring agents)
  Job / CronJob -- run-to-completion tasks

Networking:
  Service -- stable endpoint for a set of pods
  Ingress -- HTTP routing from outside the cluster

Config:
  ConfigMap -- non-sensitive configuration
  Secret    -- sensitive data (passwords, tokens)

Storage:
  PersistentVolumeClaim -- request storage for stateful workloads
```
Deployments
A Deployment manages a set of identical Pods and handles rolling updates:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow 1 extra pod during update
      maxUnavailable: 0  # never have fewer than desired replicas
  template:
    metadata:
      labels:
        app: api-server
        version: "1.2.0"
    spec:
      containers:
        - name: api
          image: myregistry/api-server:1.2.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: api-config
                  key: log_level
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
```
Key fields:
- `resources.requests`: what the scheduler uses for placement decisions
- `resources.limits`: hard caps; the container is OOMKilled if it exceeds its memory limit
- `readinessProbe`: the pod receives traffic only when this passes
- `livenessProbe`: the pod is restarted if this fails
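For apps that boot slowly, a `startupProbe` can hold off the liveness probe until the first successful check, so slow startups are not mistaken for deadlocks. A sketch of a hypothetical addition to the container spec above (the thresholds are illustrative, not from the manifest):

```yaml
# Hypothetical addition to the container spec above: the liveness probe
# does not run until this startup probe succeeds once.
startupProbe:
  httpGet:
    path: /health
    port: 3000
  failureThreshold: 30  # allow up to 30 * 10s = 5 minutes to start
  periodSeconds: 10
```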
Services
A Service provides a stable network endpoint for a set of pods (which come and go):
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: api-server
  namespace: production
spec:
  selector:
    app: api-server     # routes to pods with this label
  ports:
    - port: 80          # service port
      targetPort: 3000  # container port
  type: ClusterIP       # only accessible within the cluster
```
Service types:
- `ClusterIP` (default): internal cluster communication only
- `NodePort`: exposes a port on every node (development use)
- `LoadBalancer`: provisions a cloud load balancer (AWS ALB, GCP LB)
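Exposing the same pods externally only requires changing `type`. A sketch of a hypothetical `LoadBalancer` variant of the Service above (the name `api-server-public` is illustrative; the external IP is assigned by the cloud provider):

```yaml
# service-lb.yaml -- hypothetical externally reachable variant
apiVersion: v1
kind: Service
metadata:
  name: api-server-public
  namespace: production
spec:
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer  # cloud provider provisions an external load balancer
```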
Other services in the cluster reach this Service at `api-server.production.svc.cluster.local:80`.

ConfigMaps and Secrets
```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
  namespace: production
data:
  log_level: "info"
  max_connections: "100"
  feature_flags: |
    new_dashboard=true
    beta_api=false
---
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: production
type: Opaque
stringData:  # plaintext; Kubernetes base64-encodes automatically
  url: "postgres://user:password@db:5432/myapp"
  password: "supersecretpassword"
```
Mount as environment variables (shown in Deployment above) or as files:
```yaml
# In the container spec:
volumeMounts:
  - name: config-volume
    mountPath: /app/config
# In the pod spec:
volumes:
  - name: config-volume
    configMap:
      name: api-config
```
In production, use sealed-secrets, Vault, or cloud-provider secret managers instead of plain Kubernetes Secrets (which are only base64-encoded, not encrypted by default).
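Secrets can be mounted as files the same way. A sketch assuming the `db-credentials` Secret defined earlier (each key in the Secret becomes a file under the mount path; the volume name `db-creds` is illustrative):

```yaml
# In the container spec:
volumeMounts:
  - name: db-creds
    mountPath: /app/secrets
    readOnly: true
# In the pod spec:
volumes:
  - name: db-creds
    secret:
      secretName: db-credentials  # creates /app/secrets/url, /app/secrets/password
```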
Ingress
Ingress routes external HTTP(S) traffic to internal services:
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.myapp.com
      secretName: api-tls-cert
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-server
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```
Requires an Ingress Controller (nginx, Traefik, AWS ALB Ingress Controller) to be installed in the cluster.
Horizontal Pod Autoscaler
Scale pods automatically based on CPU or memory:
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60  # scale up when average CPU > 60%
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue
          averageValue: 400Mi
```
The HPA requires `resources.requests` to be set on your containers; it uses requests as the baseline for utilization calculations.
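The `autoscaling/v2` API also supports a `behavior` section to damp flapping when load oscillates. A sketch of a hypothetical addition to the HPA spec above, with illustrative values:

```yaml
# Hypothetical addition under the HPA's spec:
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300  # wait 5 min of low load before scaling down
    policies:
      - type: Pods
        value: 1
        periodSeconds: 60            # remove at most 1 pod per minute
  scaleUp:
    stabilizationWindowSeconds: 0    # scale up immediately
```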
Rolling Updates and Rollbacks
```bash
# Deploy a new image version
kubectl set image deployment/api-server api=myregistry/api-server:1.3.0 -n production

# Watch the rollout
kubectl rollout status deployment/api-server -n production

# View rollout history
kubectl rollout history deployment/api-server -n production

# Roll back to the previous version
kubectl rollout undo deployment/api-server -n production

# Roll back to a specific revision
kubectl rollout undo deployment/api-server --to-revision=2 -n production
```
Essential kubectl Commands for Developers
```bash
# View resources
kubectl get pods -n production
kubectl get pods -n production -o wide        # show node and IP
kubectl get all -n production                 # pods, services, deployments
kubectl get events -n production --sort-by=.lastTimestamp

# Inspect a resource
kubectl describe pod api-server-7d4b9f-abc12 -n production
kubectl describe deployment api-server -n production

# View logs
kubectl logs api-server-7d4b9f-abc12 -n production
kubectl logs api-server-7d4b9f-abc12 -n production --previous  # crashed container
kubectl logs -l app=api-server -n production --tail=100        # all pods with label
kubectl logs api-server-7d4b9f-abc12 -n production -f          # follow

# Execute commands in a pod
kubectl exec -it api-server-7d4b9f-abc12 -n production -- sh
kubectl exec api-server-7d4b9f-abc12 -n production -- node -e "console.log(process.env)"

# Port forward for local debugging
kubectl port-forward pod/api-server-7d4b9f-abc12 3000:3000 -n production
kubectl port-forward service/api-server 8080:80 -n production

# Apply/delete manifests
kubectl apply -f deployment.yaml
kubectl apply -f ./k8s/   # apply all files in directory
kubectl delete -f deployment.yaml

# Scale manually
kubectl scale deployment api-server --replicas=5 -n production
```
Common Interview Questions
Q: What is the difference between a Pod and a Deployment?
A Pod is the smallest deployable unit in Kubernetes: one or more containers sharing a network namespace and storage. Pods are ephemeral; if a Pod crashes, it is not automatically restarted unless something manages it. A Deployment manages a desired number of Pod replicas, automatically restarts failed Pods, and handles rolling updates and rollbacks. Always use Deployments (or StatefulSets) rather than creating Pods directly.
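For contrast with the Deployment manifest earlier, a bare Pod looks like this. A sketch (the name `api-server-standalone` is illustrative); if its node dies, nothing recreates it:

```yaml
# Hypothetical standalone Pod -- no controller will recreate it if it is lost
apiVersion: v1
kind: Pod
metadata:
  name: api-server-standalone
  namespace: production
spec:
  containers:
    - name: api
      image: myregistry/api-server:1.2.0
```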
Q: What is the difference between readinessProbe and livenessProbe?
`readinessProbe` determines if a Pod should receive traffic. If it fails, the Pod is removed from Service endpoints and gets no traffic until the probe passes again. `livenessProbe` determines if a Pod is healthy enough to keep running. If it fails repeatedly, Kubernetes restarts the container. Use readiness to signal "not ready to handle requests yet"; use liveness to detect and recover from deadlocks or stuck processes.
Q: How do you debug a pod that is stuck in CrashLoopBackOff?
Check the logs of the crashed container: `kubectl logs <pod> --previous` (the `--previous` flag shows logs from the last crashed instance). Use `kubectl describe pod <pod>` to see events, exit codes, and reasons. If the container exits immediately, add a command override to keep it alive for inspection: `command: ["sleep", "infinity"]` in the pod spec.
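The command override mentioned above looks like this inside the Deployment's container spec (assuming the image contains a `sleep` binary; remember to remove the override once you are done debugging):

```yaml
# Hypothetical debugging override: keeps the container alive so you can
# `kubectl exec` into it instead of letting it crash-loop
containers:
  - name: api
    image: myregistry/api-server:1.2.0
    command: ["sleep", "infinity"]
```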
Practice DevOps on Froquiz
Kubernetes and container orchestration are tested in backend and platform engineering interviews. Explore our Docker and DevOps quizzes on Froquiz to test your knowledge.
Summary
- Pods are ephemeral; Deployments manage desired replicas, updates, and restarts
- `resources.requests` drives scheduling; `resources.limits` prevents runaway containers
- `readinessProbe` controls traffic routing; `livenessProbe` controls container restarts
- Services provide stable DNS names and load balancing across pod replicas
- ConfigMaps for non-sensitive config; Secrets for passwords (encrypt Secrets at rest in production)
- HPA scales pods automatically based on CPU/memory; it requires `resources.requests` to be set
- Rolling updates deploy new versions gradually; `kubectl rollout undo` rolls back instantly
- `kubectl logs --previous`, `kubectl describe`, and `kubectl exec` are your debugging tools