Kubernetes Networking Fundamentals: Complete Guide
Kubernetes networking is notoriously complex. This guide explains how pod-to-pod, pod-to-service, and external-to-cluster networking works, along with CNI plugins and cloud provider integrations.
Kubernetes Networking Model
Kubernetes imposes specific requirements on any networking implementation:
Core Requirements
- All pods can communicate with all other pods without NAT
- Nodes can communicate with all pods without NAT
- The IP a pod sees for itself is the same IP others use to reach it
Networking Layers
```
┌──────────────────────────────────────────────────────────┐
│               External Traffic (Internet)                │
└─────────────────────────┬────────────────────────────────┘
                          │ Ingress / LoadBalancer
┌─────────────────────────▼────────────────────────────────┐
│       Service (ClusterIP, NodePort, LoadBalancer)        │
└─────────────────────────┬────────────────────────────────┘
                          │ kube-proxy / iptables / IPVS
┌─────────────────────────▼────────────────────────────────┐
│                   Pod Networking (CNI)                   │
│          Pod-to-Pod communication across nodes           │
└──────────────────────────────────────────────────────────┘
```
Pod Networking
How Pods Get IPs
- The kubelet creates the pod sandbox
- The CNI plugin is invoked to set up the pod's network interface
- The pod receives an IP from the cluster's pod CIDR range
- Routes are configured so pod-to-pod traffic can flow across nodes
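You can verify steps 3 and 4 directly against a cluster. A quick check (assuming `kubectl` access; the pod name `my-pod` is illustrative):

```shell
# Show the pod CIDR the controller manager assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# Confirm a pod's IP falls inside its node's range
kubectl get pod my-pod -o jsonpath='{.status.podIP}'
```

Note that `.spec.podCIDR` may be empty with CNIs that manage IPs outside the node-CIDR model (for example, AWS VPC CNI allocates from VPC subnets instead).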
CNI (Container Network Interface)
Plugins that handle the actual network setup:
| CNI Plugin | Best For | Key Features |
|---|---|---|
| AWS VPC CNI | EKS | Native VPC IPs, security groups per pod |
| Calico | Multi-cloud, on-prem | Network policies, BGP, eBPF |
| Cilium | Advanced networking | eBPF, L7 policies, observability |
| Flannel | Simple overlay | Easy setup, VXLAN/host-gw |
| Azure CNI | AKS | VNet integration, NSG support |
Kubernetes Services
ClusterIP
Internal-only virtual IP:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```
- Gets virtual IP from service CIDR
- kube-proxy programs iptables/IPVS rules
- Traffic load-balanced across pods
NodePort
- Opens port on every node (30000-32767)
- Traffic to NodeIP:NodePort forwarded to service
- Rarely used directly in production
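For completeness, a minimal NodePort manifest. The `nodePort` field is optional; if omitted, Kubernetes picks a free port from the 30000-32767 range (the value below is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # ClusterIP port
      targetPort: 8080  # container port
      nodePort: 30080   # illustrative; must fall in 30000-32767
```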
LoadBalancer
- Provisions cloud load balancer
- External IP exposed
- Integrates with AWS ELB, Azure LB, GCP LB
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```
Ingress
HTTP(S) routing for multiple services:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx   # replaces the deprecated kubernetes.io/ingress.class annotation
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1
                port:
                  number: 80
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 80
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
```
Ingress Controllers
- NGINX: Most common, feature-rich
- AWS ALB Controller: Native ALB integration
- Traefik: Modern, auto-discovery
- Istio Gateway: Part of service mesh
Network Policies
Firewall rules for pod-to-pod traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
```
Key Points
- Default: With no policies selecting a pod, all traffic to and from it is allowed
- With policy: Once any policy selects a pod, only explicitly allowed traffic passes
- CNI support required: Not all CNIs enforce network policies (plain Flannel, for example, does not)
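A common starting point is a default-deny policy that blocks all ingress to every pod in a namespace, after which specific allow-rules are layered on top. A minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # empty selector = every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all inbound traffic is denied
```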
Cloud Provider Integration
AWS EKS Networking
- VPC CNI: Pods get VPC IPs directly
- Security Groups per Pod: Native SG enforcement
- AWS Load Balancer Controller: ALB/NLB provisioning
```shell
# Pod IP from VPC, not overlay
kubectl get pod -o wide
NAME     IP          NODE
my-pod   10.0.1.50   10.0.1.100   # VPC IP
```
GKE Networking
- VPC-native: Alias IP ranges for pods
- Dataplane V2: Cilium-based, eBPF
- Private clusters: Nodes without public IPs
AKS Networking
- Azure CNI: VNet IPs for pods
- Kubenet: Overlay with routes
- Azure CNI Overlay: Better IP efficiency
DNS in Kubernetes
CoreDNS
- Default DNS server in Kubernetes
- Resolves service names to ClusterIPs
- Forwards external DNS to upstream
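CoreDNS behavior is driven by a Corefile stored in the `coredns` ConfigMap in `kube-system`. An abridged version of the default (plugin order and options vary slightly between versions):

```
.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf   # send non-cluster names to the node's upstream resolvers
    cache 30
    loop
    reload
}
```

The `kubernetes` plugin answers `*.cluster.local` queries from the API server's Service and Pod records; everything else falls through to `forward`.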
DNS Resolution
```
# Service DNS name
my-service.my-namespace.svc.cluster.local

# Pod DNS name
10-0-1-50.my-namespace.pod.cluster.local

# Shortened (within namespace)
my-service
```
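Short names work because the kubelet injects search domains into each pod's resolver configuration. Inspecting it (the pod name is illustrative) typically shows something like:

```shell
kubectl exec my-pod -- cat /etc/resolv.conf
# Typical contents:
#   search my-namespace.svc.cluster.local svc.cluster.local cluster.local
#   nameserver 10.96.0.10   # kube-dns Service ClusterIP; varies by cluster
#   options ndots:5
```

The search list is why `my-service` expands to the fully qualified name within the same namespace.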
Troubleshooting Kubernetes Networking
Common Issues
- Pod can't reach service: Check service selector, endpoints
- DNS not resolving: Check CoreDNS pods, configmap
- Network policy blocking: Review policies, default deny
- External traffic not reaching: Check ingress, load balancer, security groups
Debugging Commands
```shell
# Check endpoints
kubectl get endpoints my-service

# Test DNS
kubectl run test --rm -it --restart=Never --image=busybox -- nslookup my-service

# Test connectivity
kubectl run test --rm -it --restart=Never --image=nicolaka/netshoot -- curl my-service

# View iptables rules programmed by kube-proxy
kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name | \
  xargs -I {} kubectl exec -n kube-system {} -- iptables -t nat -L
```
Key Takeaways
- CNI plugin handles pod IP assignment and routing
- Services provide stable endpoints for pod groups
- Ingress handles HTTP routing to services
- Network policies control pod-to-pod traffic
- Cloud-native CNIs (VPC CNI, Azure CNI) provide VPC-level integration
Need Kubernetes Networking Help?
We design and optimize Kubernetes network architectures. Contact us for a consultation.