You deploy a new microservice. It runs flawlessly for three days. On the fourth day, the SRE team flags a gradual memory creep. There are no massive allocation spikes, yet heap usage forms a distinct "sawtooth" pattern that climbs higher with every garbage collection cycle until the OOM killer terminates the pod. The culprit is rarely a heavy variable or a global map; it is almost always a goroutine leak.

In Go, goroutines are cheap to create but expensive to orphan. A leaked goroutine holds its stack (starting at 2 KB but often growing), keeps references to heap variables, and prevents the garbage collector from reclaiming the associated memory. This post dissects the mechanics of context propagation failures and provides rigorous patterns to ensure every goroutine you spawn eventually dies.

The Root Cause: Cooperative Multitasking

To fix leaks, we must understand why they happen. The Go runtime scheduler does not expose a mechanism to forcibly kill a goroutine from the outside.
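Because a goroutine can only be stopped cooperatively, every long-lived goroutine must watch for a cancellation signal itself. A minimal sketch of that pattern, under assumptions not in the original text (the `worker` function name and the unbuffered `jobs` channel are hypothetical): each worker `select`s on `ctx.Done()` so that a single `cancel()` call lets all of them exit instead of blocking forever.

```go
package main

import (
	"context"
	"fmt"
	"runtime"
	"time"
)

// worker exits cleanly when its context is cancelled, instead of
// blocking forever on a channel receive and leaking its stack.
func worker(ctx context.Context, jobs <-chan int) {
	for {
		select {
		case <-ctx.Done():
			return // cancellation signal: the goroutine terminates itself
		case j, ok := <-jobs:
			if !ok {
				return // channel closed: nothing left to do
			}
			_ = j // process the job here
		}
	}
}

func main() {
	before := runtime.NumGoroutine()

	ctx, cancel := context.WithCancel(context.Background())
	jobs := make(chan int) // unbuffered: workers block until cancelled
	for i := 0; i < 10; i++ {
		go worker(ctx, jobs)
	}

	cancel()                           // signal every worker to exit
	time.Sleep(100 * time.Millisecond) // give the scheduler time to reap them

	// Difference is 0 when no workers leaked.
	fmt.Println(runtime.NumGoroutine() - before)
}
```

Without the `ctx.Done()` case, each worker would park forever on the receive from `jobs` and survive the life of the process; `runtime.NumGoroutine` is a cheap way to verify in tests that the count returns to its baseline.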