Introduction
CPU affinity—also known as processor affinity—is a mechanism that binds or “affines” a specific process or thread to one or more CPU cores. When affinity is set, the operating system scheduler restricts execution of that task to the assigned CPU(s), rather than allowing it to float freely across all available cores.
CPU affinity is a critical concept in performance tuning, real-time systems, and high-performance computing (HPC). Properly managing affinity can reduce context switches, cache misses, and resource contention, leading to improved execution predictability and performance.
What Is CPU Affinity?
When a system has multiple CPU cores (multi-core or multi-processor), the scheduler typically places tasks on any available core. CPU affinity overrides this behavior by explicitly assigning tasks to specific cores.
There are two primary types:
- Hard (static) affinity: the process can only run on a fixed set of CPUs.
- Soft (preferred) affinity: the scheduler prefers to run the process on a given set but can migrate it if needed.
Why Use CPU Affinity?
| Goal | Benefit |
|---|---|
| Improved performance | Keeps data in CPU caches (reduces cache thrashing) |
| Determinism | Enhances predictability for real-time tasks |
| Reduced overhead | Minimizes inter-core memory access and migration costs |
| Resource partitioning | Enables isolated workloads (e.g., containers, VMs) |
| Hyper-threading control | Avoid placing latency-sensitive work on SMT sibling threads |
How It Works (Conceptually)
When a process is pinned to a CPU:
- The OS scheduler considers only the specified CPUs for that process.
- The CPU cache (L1/L2/L3) becomes more effective if the same CPU repeatedly runs the task.
- Thread migration is prevented unless explicitly allowed.
Without affinity, threads may jump between cores, causing:
- Cache invalidation
- TLB flushes
- NUMA penalties on multi-socket systems
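On Linux, this churn can be observed directly: the kernel exposes per-task context-switch counters in /proc. The following sketch (Linux-only, standard library only; the field names come from proc(5)) reads them before and after a burst of CPU-bound work:

```python
import os

def ctx_switches():
    """Return this process's context-switch counters from /proc/self/status (Linux-only)."""
    counts = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("voluntary_ctxt_switches", "nonvoluntary_ctxt_switches")):
                key, _, value = line.partition(":")
                counts[key] = int(value)
    return counts

if os.path.exists("/proc/self/status"):
    before = ctx_switches()
    sum(i * i for i in range(500_000))   # some CPU-bound work
    after = ctx_switches()
    # Nonvoluntary switches are preemptions; a pinned, isolated thread
    # should see this delta stay near zero.
    print({k: after[k] - before[k] for k in after})
```

A rising nonvoluntary count under load is a hint that the thread is being preempted and possibly migrated; pinning plus CPU isolation (covered below) is the usual remedy.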
Setting CPU Affinity: Examples by Platform
Linux: taskset
# Run a program on CPU 0 and 1
taskset -c 0,1 ./myprogram
# Check the current affinity of process 1234
taskset -cp 1234
C Code (Linux – sched_setaffinity)
#define _GNU_SOURCE
#include <sched.h>

cpu_set_t mask;
CPU_ZERO(&mask);
CPU_SET(2, &mask);                          // Pin to CPU 2
sched_setaffinity(0, sizeof(mask), &mask);  // pid 0 = the calling process
Python (using psutil)
import psutil
p = psutil.Process()
p.cpu_affinity([0, 1])
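On Linux, the same can be done without third-party packages: the standard library's os module exposes the affinity syscalls directly. These functions exist only on platforms that provide sched_setaffinity(2), so the sketch guards for that:

```python
import os

# Linux-only: os.sched_getaffinity / os.sched_setaffinity wrap the
# sched_getaffinity(2) / sched_setaffinity(2) syscalls; pid 0 = this process.
if hasattr(os, "sched_setaffinity"):
    original = os.sched_getaffinity(0)       # e.g. {0, 1, 2, 3}
    target = {0, 1} & original or original   # only request CPUs we actually have
    os.sched_setaffinity(0, target)
    print("pinned to:", sorted(os.sched_getaffinity(0)))
    os.sched_setaffinity(0, original)        # restore the original mask
```

Intersecting with the current mask avoids requesting a CPU the process is not allowed to use (which raises OSError).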
Windows: PowerShell
# Set affinity to CPUs 0 and 2 for the process with PID 1234
(Get-Process -Id 1234).ProcessorAffinity = 0x5
(0x5 is binary 0101: each bit selects one logical CPU, so bits 0 and 2 mean CPU 0 and CPU 2)
Thread-Level Affinity
Many threading libraries allow binding individual threads:
OpenMP
#define _GNU_SOURCE
#include <omp.h>
#include <sched.h>
#include <unistd.h>

#pragma omp parallel
{
    // Pin each OpenMP thread to the CPU matching its thread number,
    // wrapping around if there are more threads than CPUs.
    int tid = omp_get_thread_num();
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(tid % ncpus, &cpuset);
    sched_setaffinity(0, sizeof(cpuset), &cpuset);  // pid 0 = the calling thread
}
OpenMP 4.0 and later also supports portable binding through the OMP_PROC_BIND and OMP_PLACES environment variables, which avoids platform-specific calls.
POSIX Threads
#define _GNU_SOURCE
#include <pthread.h>

pthread_t thread;
pthread_create(&thread, NULL, my_function, NULL);

cpu_set_t cpuset;
CPU_ZERO(&cpuset);
CPU_SET(1, &cpuset);
// Pin the running thread to CPU 1. It may briefly run elsewhere first;
// to bind before start, set the mask via pthread_attr_setaffinity_np instead.
pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
Use Cases
| Domain | Benefit of CPU Affinity |
|---|---|
| High-frequency trading | Locks latency-sensitive threads to isolated CPUs |
| Gaming engines | Assign physics, rendering, and AI to separate cores |
| Databases | Pin worker threads to CPUs for NUMA-aware performance |
| Real-time audio | Reduces jitter and latency by avoiding preemption |
| Embedded systems | Predictable performance with strict timing requirements |
| Virtualization/Containers | Isolate tenant workloads to specific CPU cores |
NUMA and CPU Affinity
On Non-Uniform Memory Access (NUMA) systems (e.g., multi-socket servers), memory access time depends on which CPU accesses which region of memory. Combining:
- CPU affinity (pin task to a NUMA node’s CPU)
- Memory affinity (allocate memory on the same NUMA node)
…is critical for maximizing throughput and minimizing latency.
Linux NUMA tools:
numactl --cpunodebind=0 --membind=0 ./app
Measuring Performance Impact
| Metric | What to Monitor |
|---|---|
| Context switches | Reduced when threads stay on same CPU |
| Cache hit rates | Higher when CPU affinity is stable |
| CPU utilization | Watch for idle cores or hotspots introduced by pinning |
| Latency/jitter | Lower variance with pinning |
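The latency/jitter row can be probed with a few lines of standard-library Python. This is a crude probe, not a rigorous benchmark (a loaded machine and a quiet one behave very differently), but it shows the shape of the measurement: compare timer-read jitter unpinned versus pinned to a single CPU. The pinning calls are Linux-only and guarded accordingly:

```python
import os
import statistics
import time

def jitter_ns(samples=2000):
    """Standard deviation of back-to-back timer reads: a crude latency probe."""
    deltas = []
    last = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        deltas.append(now - last)
        last = now
    return statistics.pstdev(deltas)

baseline = jitter_ns()                       # free-floating across all CPUs
if hasattr(os, "sched_setaffinity"):         # Linux-only pinning
    saved = os.sched_getaffinity(0)
    os.sched_setaffinity(0, {min(saved)})    # pin to a single CPU
    pinned = jitter_ns()
    os.sched_setaffinity(0, saved)           # restore the original mask
    print(f"jitter unpinned: {baseline:.0f} ns, pinned: {pinned:.0f} ns")
```

Under load, the pinned run is often (not always) less jittery; a single quiet-machine run proves little, so repeat under realistic pressure.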
Tools
- htop / top (with the thread view enabled)
- perf (Linux)
- numactl, numastat
- Windows Resource Monitor
- psutil or schedtool
Best Practices
| Practice | Why It Matters |
|---|---|
| Use affinity selectively | Only pin high-priority or real-time threads |
| Avoid over-pinning | Let OS balance background work |
| Combine with CPU isolation | Reserve CPUs for real-time tasks using isolcpus (Linux) |
| Coordinate with hyper-threading | Avoid sibling cores for critical threads |
| Test under load | Behavior may change with workload or system pressure |
Pitfalls and Risks
| Issue | Description |
|---|---|
| Resource underutilization | Pinned threads may leave other CPUs idle |
| Starvation | Background threads may get ignored if too many pinned threads exist |
| Migration prevention | Disabling migration can reduce OS’s ability to balance load |
| Hardcoding CPU IDs | Non-portable and brittle across systems |
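The last pitfall is the easiest to avoid: derive CPU IDs from the running system instead of hardcoding them. A minimal sketch using only the standard library (the affinity query is Linux-only, with a portable fallback; `pick_cpus` is an illustrative helper, not a library function):

```python
import os

def pick_cpus(n):
    """Choose up to n CPU IDs from those this process may actually use,
    rather than hardcoding IDs like [0, 1] that may not exist or be allowed
    (e.g., inside a container restricted with --cpuset-cpus)."""
    if hasattr(os, "sched_getaffinity"):        # Linux: honor the current mask
        allowed = sorted(os.sched_getaffinity(0))
    else:                                       # portable fallback: assume 0..N-1
        allowed = list(range(os.cpu_count() or 1))
    return set(allowed[:n])

print(pick_cpus(2))   # e.g. {0, 1} on an unrestricted machine
```

Querying the current mask rather than os.cpu_count() matters in containers and cgroup-restricted environments, where the process may be allowed far fewer CPUs than the host has.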
CPU Sets and Isolation on Linux
Advanced setups may use cgroups or CPU sets to restrict entire applications:
# Create a CPU set and run a task within it
cset shield --cpu=2,3 --kthread=on
cset shield --exec -- ./my_rt_app
Additionally, the isolcpus= boot parameter removes CPUs from the general scheduler's control entirely; tasks must then be placed on those CPUs explicitly (e.g., with taskset), which reserves them for dedicated workloads.
CPU Affinity in Virtualization and Cloud
| Platform | Affinity Support |
|---|---|
| Docker | --cpuset-cpus="1,2" to pin containers |
| Kubernetes | Use cpuManagerPolicy=static for deterministic placement |
| VMware/KVM | Pin vCPUs to pCPUs for latency-sensitive virtual machines |
| AWS EC2 | Use CPUOptions to control threads per core |
Conclusion
CPU affinity gives developers and system administrators a fine-grained control mechanism to optimize performance, predictability, and resource isolation. By carefully pinning threads or processes to specific CPUs, systems can benefit from:
- Lower latency
- Reduced cache misses
- Improved determinism
- Isolation in multi-tenant environments
However, affinity should be used judiciously. When overused or misconfigured, it can lead to poor CPU utilization and unexpected bottlenecks. As such, affinity is a powerful but nuanced tool in the concurrency and systems performance toolbox.
Related Keywords
- Cache Locality
- Context Switch
- CPU Isolation
- CPU Set
- Hyper-Threading
- NUMA Node
- Preemptive Scheduling
- Processor Binding
- Real-Time Scheduling
- Sched Set Affinity
- Soft Affinity
- Static Pinning
- Taskset
- Thread Migration
- Workload Partitioning