Worker pools
Use worker pools when you need bounded concurrency, backpressure, and predictable lifecycle management.
Canonical guidance
- use worker pools to cap concurrency, not as a default pattern for all parallel work
- make queueing, cancellation, and shutdown explicit
- prefer simple worker loops over elaborate pool frameworks
Use when
- bounded background processing
- controlled fan-out over many jobs
- limiting DB, network, or CPU concurrency
Avoid
- unbounded goroutine-per-item models when load can spike
- worker pools with no stop path
- queues that can grow forever with no policy
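The "no queue that can grow forever" rule can be made concrete with a bounded channel and a non-blocking send, so producers get an explicit error instead of silent memory growth. A minimal sketch; the names `TrySubmit` and `ErrQueueFull` are illustrative, not from any library:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrQueueFull is returned when the job queue is at capacity;
// the caller decides whether to retry, drop, or shed load.
var ErrQueueFull = errors.New("queue full")

// TrySubmit enqueues a job without blocking. When the buffer is
// full, the producer gets an error instead of unbounded queueing.
func TrySubmit(jobs chan<- int, job int) error {
	select {
	case jobs <- job:
		return nil
	default:
		return ErrQueueFull
	}
}

func main() {
	jobs := make(chan int, 2) // the capacity is the policy: at most 2 queued jobs
	fmt.Println(TrySubmit(jobs, 1)) // <nil>
	fmt.Println(TrySubmit(jobs, 2)) // <nil>
	fmt.Println(TrySubmit(jobs, 3)) // queue full
}
```

The channel capacity is the queue policy, stated in one place; everything past it is the caller's decision.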
Preferred pattern
func Run(ctx context.Context, workers int, jobs <-chan Job) error {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-ctx.Done():
					return // cancellation: stop without draining the queue
				case job, ok := <-jobs:
					if !ok {
						return // jobs closed: graceful shutdown after drain
					}
					job.Process(ctx)
				}
			}
		}()
	}
	wg.Wait()
	return ctx.Err() // nil after a graceful drain, the context's error on cancellation
}
Anti-pattern
- spawning “workers” while also spawning a fresh goroutine per task
Explanation: This anti-pattern is common because goroutine-per-task is easy to write, but spawning unbounded goroutines alongside a pool defeats the pool's resource-control purpose.
Why
- a worker pool is primarily a resource-control pattern: the fixed worker count is the concurrency cap, and the bounded queue is where backpressure becomes visible
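When the goal is only a concurrency cap over tasks known up front, and not a long-lived queue with its own lifecycle, a buffered-channel semaphore is often simpler than a pool. A sketch under that assumption; `RunCapped` is an illustrative name, and the peak counter exists only to show the cap holding:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// RunCapped calls fn for each of n items with at most limit calls in
// flight, using a buffered channel as a counting semaphore. It returns
// the peak number of concurrently running calls, for demonstration.
func RunCapped(n, limit int, fn func(int)) int64 {
	sem := make(chan struct{}, limit)
	var wg sync.WaitGroup
	var cur, peak atomic.Int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire before spawning: bounds live goroutines
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // release the semaphore slot
			c := cur.Add(1)
			for { // record the high-water mark of concurrency
				p := peak.Load()
				if c <= p || peak.CompareAndSwap(p, c) {
					break
				}
			}
			fn(i)
			cur.Add(-1)
		}(i)
	}
	wg.Wait()
	return peak.Load()
}

func main() {
	peak := RunCapped(50, 4, func(i int) {})
	fmt.Println("cap held:", peak <= 4) // cap held: true
}
```

Acquiring the semaphore before spawning is what makes this resource control rather than decoration: the loop itself blocks once the limit is reached.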
Related pages
Sources
- Go Concurrency Patterns: Pipelines and cancellation - Sameer Ajmani
- Go Concurrency Patterns: Context - Sameer Ajmani