Profiling
Measure first with pprof and the execution tracer before changing code for performance.
Canonical guidance
- optimize only after measurement
- use CPU, heap, allocation, block, mutex, and trace profiles as the workload demands
- prefer the execution tracer when latency depends on scheduler behavior or blocking interactions
- identify the real bottleneck before redesigning code
- adopt profile-guided optimization (PGO) only after stable, representative profiles show a benefit
Use when
- latency regressions
- CPU spikes
- memory growth
- suspected allocation overhead
Avoid
- micro-optimizing without profiles
- trusting intuition over measurements
- only benchmarking synthetic happy paths
Preferred pattern
```go
package debugserver

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

// Start serves the pprof endpoints on localhost only, in a background goroutine.
func Start() {
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
}
```
Anti-pattern
- rewriting APIs around assumed performance problems with no data
Explanation: This anti-pattern is tempting because intuition is faster than instrumentation, but it often optimizes the wrong thing and adds complexity.
Why
- Go ships strong built-in profiling tooling; skipping it usually wastes effort on the wrong fix
Sources
- Profiling Go Programs - Russ Cox
- Diagnostics - Go Team
- Profile-guided optimization - Go Team