Work stealing
Go's runtime scheduler uses work stealing internally: an idle processor (P) takes runnable goroutines from another processor's local run queue so that cores stay busy. Treat scheduler behavior as an implementation detail and measure before tuning around it.
Canonical guidance
- assume the scheduler is adaptive, not predictable
- optimize blocking and contention before second-guessing work stealing
- use the execution tracer (runtime/trace, inspected with go tool trace) when scheduler interactions are the real question
Use when
- investigating uneven worker utilization
- debugging scheduler-sensitive latency
- reasoning about CPU-bound parallel work
Avoid
- assuming goroutines run fairly or in launch order
- shaping correctness around observed local scheduling behavior
- attributing all latency variance to the scheduler first
Preferred pattern
- measure runnable, blocked, and CPU time with the execution tracer before changing pool sizes or scheduler knobs
Anti-pattern
- cargo-cult tuning based on a single local run
Explanation: This is tempting because scheduler behavior is visible in symptoms, but symptoms often come from blocking or load shape instead.
Why
- runtime scheduling is dynamic and workload-dependent
Sources
- runtime package - Go Team
- Diagnostics - Go Team