Kubernetes Networking
Networking is the foundation of distributed Kubernetes systems. Understanding how packets actually flow lets you debug effectively, design resilient architectures, and reason about performance.
This lesson covers pod-to-pod communication, services, ingress, CNI plugins, and network policies: the layers that make K8s networking work.
The Networking Model
Kubernetes networking has four core principles:
- Every pod gets its own unique IP address.
- Pods can communicate directly with all other pods, across nodes, without NAT.
- Nodes can communicate with all pods (and vice versa) without NAT.
- A pod sees its own IP as the same IP others use to reach it (consistent IP visibility).
External access is opt-in: pods are private by default; you have to explicitly expose them via Services or Ingress.
The Network Layers
A typical K8s cluster has multiple overlapping networks:
- Cluster network: connects the nodes to each other and to the outside world.
- Node network: each node's own host interfaces and IPs.
- Pod network: the pod CIDR; one IP per pod, managed by the CNI plugin.
- Service network: the service CIDR; virtual ClusterIPs that exist only as kube-proxy rules.
CNI: The Plugin That Makes Pod Networking Work
Kubernetes itself doesn't implement pod networking. It delegates to a CNI (Container Network Interface) plugin.
When a pod is created, the CNI plugin:
- Assigns an IP from the pod CIDR.
- Creates a virtual network interface in the pod's network namespace.
- Sets up routing rules so other pods can reach this one.
- Applies network policies (if supported).
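The steps above are driven by an on-disk CNI configuration that the kubelet hands to the plugin. A minimal conflist for the reference bridge plugin might look like this (all values are illustrative):

```json
{
  "cniVersion": "1.0.0",
  "name": "podnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Production plugins like Cilium or Calico install richer configurations, but the contract is the same: given a network namespace, return an interface and an IP.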
Major CNI plugins
- Cilium: eBPF-based dataplane; network policy, deep observability, optional kube-proxy replacement.
- Calico: mature and policy-rich; routes pods via BGP or an overlay.
- Flannel: simple overlay; easy to operate, but no network policy support on its own.
- Cloud-provider CNIs (e.g. AWS VPC CNI, Azure CNI): pod IPs come straight from the cloud VPC.
Service Networking
Pods are ephemeral. Services provide stable virtual IPs that proxy to the current set of healthy backend pods.
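A minimal Service manifest looks like this (the name, label selector, and ports are illustrative):

```yaml
# Stable virtual IP in front of all pods labeled app=api.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: api
  ports:
    - port: 80         # the Service's stable port
      targetPort: 8080 # the container port on the backend pods
```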
How Services actually work
Pod ──▶ ClusterIP (10.96.42.7)
                │
                ▼  (kube-proxy iptables/IPVS rules)
    ┌───────────┼───────────┬───────────┐
    ▼           ▼           ▼           ▼
 Pod IP1     Pod IP2     Pod IP3     Pod IP4
        (the actual current backend pods)
kube-proxy watches Services and Endpoints, maintaining iptables (or IPVS) rules on every node. When a pod sends a packet to a ClusterIP, the kernel's netfilter rewrites the destination to a real pod IP, chosen randomly (iptables mode) or round-robin (IPVS mode) among the available backends.
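In iptables mode, the random choice is implemented as a chain of probabilistic rules: the first backend matches with probability 1/n, the next with 1/(n-1) of what falls through, and so on, which works out to a uniform spread. A small Python sketch of that selection logic (illustrative, not kube-proxy's actual code):

```python
import random

def pick_backend(backends):
    """Mimic kube-proxy's iptables 'statistic' rule chain: match the
    first rule with probability 1/remaining, else fall through."""
    remaining = list(backends)
    while len(remaining) > 1:
        if random.random() < 1.0 / len(remaining):
            return remaining[0]
        remaining.pop(0)
    return remaining[0]

# Simulate traffic to a Service with three backend pod IPs (made up).
counts = {ip: 0 for ip in ["10.244.1.5", "10.244.2.9", "10.244.3.2"]}
for _ in range(30000):
    counts[pick_backend(list(counts))] += 1
# Each backend ends up with roughly a third of the requests.
```

The fall-through chain is why each backend sees an equal share even though the individual rule probabilities differ.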
Service types in detail
- ClusterIP (default): a virtual IP reachable only from inside the cluster.
- NodePort: exposes the Service on a static port (30000-32767 by default) on every node.
- LoadBalancer: provisions an external cloud load balancer that forwards to the Service.
- ExternalName: a DNS CNAME to an external hostname; no proxying at all.
DNS
CoreDNS (the default cluster DNS) runs in virtually every cluster, exposing service names as DNS records:
- my-service.my-namespace.svc.cluster.local → ClusterIP
- pod-ip.my-namespace.pod.cluster.local → individual pod (rarely used directly)
Pods get the cluster DNS in their /etc/resolv.conf, so a simple curl http://my-service (within namespace) or curl http://my-service.my-namespace (cross-namespace) resolves correctly.
Ingress: HTTP/HTTPS Routing
Services give you Layer 4. Ingress gives you Layer 7.
          [ Internet ]
                │
                ▼
     [ Cloud Load Balancer ]
                │
                ▼
    [ Ingress Controller Pod ]
     (nginx, Traefik, ALB, etc.)
                │
     ┌──────────┼──────────┐
     ▼          ▼          ▼
[Service A] [Service B] [Service C]
  /api/*      /admin/*    /static/*
The Ingress resource is declarative routing config. The Ingress controller is the actual implementation: it runs as pods, watches Ingress resources, and configures itself accordingly.
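An illustrative Ingress implementing path-based routing like the diagram above (the class name, service names, and ports are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80
```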
Gateway API: The Ingress Successor
Gateway API is the modern replacement for Ingress. It is more expressive and role-oriented: GatewayClass (infrastructure provider), Gateway (cluster operator), and HTTPRoute (application developer) split up responsibilities that Ingress crammed into a single resource.
Gateway API is expected to replace Ingress in most clusters over time. New clusters should start with it; existing ones should plan a migration.
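For comparison, an illustrative HTTPRoute equivalent to a path-based Ingress rule (the Gateway name and backend are assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: my-gateway   # a Gateway owned by the cluster/platform team
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: service-a
          port: 80
```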
Network Policies
By default, all pods can talk to all other pods. NetworkPolicies restrict this.
# Allow only pods labeled "app=frontend" to talk to "app=api"
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-from-frontend-only
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
For production, default-deny is the right baseline:
# Block all ingress to all pods unless explicitly allowed elsewhere
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
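The same pattern extends to outbound traffic. An illustrative variant that denies both directions (if you apply this, you must then explicitly allow DNS egress, or name resolution breaks):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```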
Service Mesh: Layer 7 Networking as Infrastructure
Service meshes (Istio, Linkerd, Consul Connect) layer additional networking concerns on top of basic K8s networking: mutual TLS between pods, fine-grained traffic management (retries, timeouts, traffic splitting), and uniform L7 observability.
We covered this in Service Meshes; in the K8s context, it's typically deployed as sidecar proxies (Envoy for Istio, linkerd2-proxy for Linkerd) injected into every pod.
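As one concrete example of what a mesh adds on top, Istio can enforce mutual TLS mesh-wide with a single resource (assumes Istio is installed; shown for illustration):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT            # reject any plaintext pod-to-pod traffic
```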
Multi-Cluster Networking
For larger SaaS platforms running multiple K8s clusters, multi-cluster networking (e.g. service mesh federation, Cilium Cluster Mesh, or Submariner) connects services across clusters, regions, and providers.
Recap
- Kubernetes networking has four principles: unique pod IPs, direct pod-to-pod, node-to-pod, consistent IP visibility.
- Four overlapping networks: cluster, node, pod, service. Each managed differently.
- CNI plugins implement pod networking; pick one consciously (Cilium, Calico, Flannel, cloud-native).
- Services proxy stable virtual IPs to current backend pods via kube-proxy and iptables/IPVS.
- Ingress provides L7 routing; Gateway API is the modern replacement.
- NetworkPolicies restrict pod-to-pod traffic; need a CNI that supports them.
- Service mesh layers L7 networking concerns (mTLS, traffic management, observability) as infrastructure.
- Multi-cluster networking enables federated platforms across regions and providers.