Kubernetes Eviction
- Out-of-memory (OOM) in Kubernetes – Part 4: Pod evictions, OOM scenarios and flows leading to them
- Pod evictions
- Allocatable
- --kube-reserved
- Eviction mechanism at a glance
- Node Allocatable, illustrated
- Metric watched for during pod evictions
- OOM Scenario #2: Pods’ memory usage exceeds node’s “allocatable” value
- OOM Scenario #3: Node available memory drops below the --eviction-hard flag value
- Changing the --eviction-hard memory threshold
- Interactions between Kubelet’s pod eviction mechanism and the kernel’s OOM killer
- Is Kubelet killing containers due to OOM?
- Conclusions around pod evictions
- Is it a problem that kubectl top pod shows a memory usage >100%?
- Signals and exit codes
- Metrics Testing
- Flows leading to out-of-memory situations
- OOM Scenarios
- OOM1: Container is OOMKilled when it exceeds its memory limit
- OOM2: Pods’ memory usage exceeds node’s “allocatable” value
- OOM3: Node available memory drops below the hard eviction threshold
- OOM4: Pods’ memory usage exceeds node’s “allocatable” value (fast allocation)
- OOM5: Container has a limit set, app inside allocates memory but the app’s runtime eventually fails the allocations way before the limit
- Q&A
- Utils
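The "Allocatable", --kube-reserved and --eviction-hard entries above all revolve around the same arithmetic: the Kubelet advertises as allocatable whatever is left of the node's capacity after subtracting the kube/system reservations and the hard eviction threshold, and it starts evicting pods once memory.available drops below that threshold. A minimal sketch of both checks, with made-up reservation and threshold values (illustrative assumptions, not Kubelet defaults):

```python
# Sketch of the memory accounting behind "Allocatable" and --eviction-hard.
# All byte values below are illustrative assumptions, not Kubelet defaults.

MIB = 1024 ** 2
GIB = 1024 ** 3

capacity        = 8 * GIB     # total node memory (shown as Capacity by `kubectl describe node`)
kube_reserved   = 512 * MIB   # assumed --kube-reserved=memory=512Mi
system_reserved = 256 * MIB   # assumed --system-reserved=memory=256Mi
eviction_hard   = 100 * MIB   # assumed --eviction-hard=memory.available<100Mi

# Allocatable = Capacity - kube-reserved - system-reserved - hard eviction threshold
allocatable = capacity - kube_reserved - system_reserved - eviction_hard
print(f"allocatable: {allocatable / GIB:.2f} GiB")   # 7.15 GiB with the numbers above

def kubelet_would_evict(memory_available: int) -> bool:
    """Hard eviction check: pods get evicted once memory.available < --eviction-hard."""
    return memory_available < eviction_hard

# Pods plus OS daemons have eaten into the node until only ~80 MiB is left available:
print(kubelet_would_evict(80 * MIB))   # True  -> Kubelet starts evicting pods
print(kubelet_would_evict(2 * GIB))    # False -> plenty of headroom, no eviction
```

This is also the gap OOM Scenario #3 plays in: pods can keep allocating inside allocatable, but once the node-wide memory.available dips under the hard threshold the Kubelet evicts pods rather than leaving it to the kernel's OOM killer.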
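For the "Signals and exit codes" entry: container runtimes report termination by signal using the usual 128 + signal-number convention, which is why an OOM-killed container shows exit code 137 (128 + SIGKILL) while a gracefully stopped one typically shows 143 (128 + SIGTERM). A small decoding sketch:

```python
import signal

def decode_exit_code(code: int) -> str:
    """Map a container exit code to a signal name using the 128 + N convention."""
    if code > 128:
        sig = signal.Signals(code - 128)
        return f"terminated by {sig.name} (exit code {code})"
    return f"exited normally with code {code}"

print(decode_exit_code(137))  # terminated by SIGKILL (exit code 137) -- typical OOMKilled
print(decode_exit_code(143))  # terminated by SIGTERM (exit code 143) -- e.g. graceful stop/eviction
print(decode_exit_code(0))    # exited normally with code 0
```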
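OOM1 (container exceeding its memory limit) is easy to reproduce with any program that keeps allocating and touching memory inside a container whose pod sets resources.limits.memory; the allocator below is a hypothetical stand-in for whatever memory-leak tool the article itself uses, not the article's code. Once the cgroup memory limit is crossed, the kernel OOM killer sends SIGKILL and the container is reported as OOMKilled with exit code 137.

```python
import time

# Naive allocator for reproducing OOM1: run inside a container whose pod sets
# resources.limits.memory (e.g. a few hundred MiB). bytearray(n) is zero-filled,
# so the pages are actually touched and the container's RSS keeps growing until
# the cgroup limit is exceeded and the kernel OOM killer SIGKILLs the process.
chunks = []
CHUNK_MIB = 10

while True:
    chunks.append(bytearray(CHUNK_MIB * 1024 * 1024))
    print(f"allocated ~{len(chunks) * CHUNK_MIB} MiB", flush=True)
    time.sleep(0.1)
```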