
Fixing 'JavaScript heap out of memory' in Node.js Docker Containers

The error is all too familiar. Your Node.js application runs flawlessly on your local machine. You deploy it to a Kubernetes cluster or a Docker container, and suddenly, under load, it crashes.

The logs reveal the culprit:

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

Or, even more confusingly, the pod restarts with code 137 (OOM Killed), yet your monitoring tools show the heap usage was seemingly below the container's memory limit.

This is not necessarily a memory leak in your code. It is often a fundamental misunderstanding between the Node.js runtime (V8) and the Linux container environment (cgroups). This article explains the disconnect and provides a production-grade configuration to resolve it.

The Root Cause: V8 vs. Linux Cgroups

To fix the problem, you must understand how V8 allocates memory. By default, Node.js tries to determine the optimal size for the Old Generation heap (where long-lived objects reside) based on the available system memory.

The Misunderstanding

In a non-containerized environment, V8 inspects the operating system's total memory. If your server has 16GB of RAM, V8 sets a heap limit relative to that 16GB (typically 25-50% of it, depending on the version).

However, inside a Docker container, os.totalmem() still returns the host machine's total memory, not the container's limit.

The Scenario:

  1. Your Kubernetes node is an AWS m5.xlarge with 16GB RAM.
  2. You restrict your Pod to limits: memory: 512Mi.
  3. Node.js starts. It sees 16GB of host RAM.
  4. V8 defaults the max-old-space-size to something aggressive, perhaps 2GB or 4GB.
  5. Your app runs. The heap grows to 600MB.
  6. Crash. The Linux kernel invokes the OOM Killer because the process exceeded the 512MB container constraint, even though V8 thought it had gigabytes of headroom remaining.

While recent versions of Node.js (v14+) attempt to respect cgroup memory limits, relying on implicit behavior is dangerous in production. The safest approach is explicit configuration.

The Fix: Explicit Heap Sizing with NODE_OPTIONS

The solution requires restricting V8's heap usage so that it stays comfortably within the container's memory limits. You must leave room for non-heap memory, which includes:

  • Stack memory
  • Buffers (File I/O, Network I/O)
  • Native C++ modules
  • V8 internal structures

A widely accepted best practice is to set the heap limit to roughly 75% of the available container memory.

Method 1: The Kubernetes Downward API (Recommended)

This is the most robust, "Cloud Native" approach. We dynamically inject the memory limit into the container as an environment variable and use it to calculate the Node flags.

1. Update Kubernetes Deployment

Modify your deployment.yaml to pass the memory limit into the container environment using the Downward API.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-api
spec:
  template:
    spec:
      containers:
        - name: api
          image: my-node-app:latest
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "512Mi" # Hard limit
          env:
            # Expose the limit to the container as an integer (in Megabytes)
            - name: K8S_MEMORY_LIMIT_MIB
              valueFrom:
                resourceFieldRef:
                  containerName: api
                  resource: limits.memory
                  divisor: 1Mi

2. Configure NODE_OPTIONS in Docker

You do not need to change your application code. Instead, modify how the application starts in your Dockerfile or package.json.

We will use NODE_OPTIONS to pass --max-old-space-size. This environment variable allows you to inject flags into the Node process globally.

Option A: Hardcoded (Simple but brittle) If you know your limit is 512MB, set the heap to ~384MB.

ENV NODE_OPTIONS="--max-old-space-size=384"
CMD ["node", "dist/index.js"]
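To confirm the flag actually took effect, ask V8 for its configured ceiling. Assuming a local Node install, `heap_size_limit` should land at or slightly above the value you passed (V8 adds room for the young generation on top of old space):

```shell
# Print the effective V8 heap ceiling in MiB with the flag applied.
NODE_OPTIONS="--max-old-space-size=384" \
  node -e "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1048576))"
```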

Option B: Dynamic Calculation (Production Grade) This script calculates the limit at runtime based on the environment variable provided by Kubernetes. This allows you to scale your pod limits without rebuilding the Docker image.

Create a script docker-entrypoint.sh:

#!/bin/sh
# docker-entrypoint.sh

# Default to 512MB if not set
limit_in_mib=${K8S_MEMORY_LIMIT_MIB:-512}

# Reserve ~25% for non-heap overhead (Stack, Buffers, etc)
# This prevents OOM Kills when heap is full but container memory is maxed
heap_size=$(awk "BEGIN { print int($limit_in_mib * 0.75) }")

echo "Container Memory Limit: ${limit_in_mib}MiB"
echo "Configuring Node Heap Limit: ${heap_size}MiB"

export NODE_OPTIONS="${NODE_OPTIONS} --max-old-space-size=${heap_size}"

# Execute the passed command
exec "$@"
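The awk arithmetic is POSIX sh friendly, and it is easy to sanity-check the calculation on its own before wiring it into the entrypoint:

```shell
# 75% of a 512 MiB limit, truncated to an integer — should print 384.
limit_in_mib=512
heap_size=$(awk "BEGIN { print int($limit_in_mib * 0.75) }")
echo "$heap_size"
```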

Update your Dockerfile:

FROM node:20-alpine

WORKDIR /app
COPY . .

# Ensure the script is executable
RUN chmod +x docker-entrypoint.sh

# Set the entrypoint
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["node", "dist/index.js"]

Deep Dive: Why 75%?

You might wonder, "If I have 512MB, why can't I give 500MB to the heap?"

What the OS charges to a Node.js process is its RSS (Resident Set Size), the total memory resident for the process:

$$ \text{RSS} = \text{Heap} + \text{Stack} + \text{Code Segment} + \text{Off-Heap Buffers} $$

  1. Heap: Where your JavaScript objects (variables, closures) live. Controlled by --max-old-space-size.
  2. Stack: Function call frames and local bookkeeping.
  3. Buffers: Buffer.from(), file streams, and crypto operations often allocate memory outside the V8 heap in C++ land.

If you set --max-old-space-size=512 and your container limit is 512Mi, the moment your application handles a file upload or a large crypto operation, the Total RSS will exceed 512Mi.

Since the heap cannot shrink fast enough—and V8 doesn't track off-heap Buffers as part of that limit—the Linux OOM killer will terminate the process immediately. The 25% buffer zone is the "safety margin" for these off-heap operations.
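This is easy to demonstrate: allocating a large `Buffer` grows RSS (tracked under `external`/`arrayBuffers` in `process.memoryUsage()`) while `heapUsed` barely moves. That delta is exactly the memory V8's limit does not police. A rough illustration:

```javascript
// Allocate 64 MiB off-heap and compare the before/after memory picture.
const before = process.memoryUsage();

const buf = Buffer.alloc(64 * 1024 * 1024); // backing store lives outside the V8 heap
buf.fill(1); // touch the pages so the OS actually commits them

const after = process.memoryUsage();
const mib = (n) => Math.round(n / 1048576);

console.log(`heapUsed delta:     ${mib(after.heapUsed - before.heapUsed)} MiB`);
console.log(`arrayBuffers delta: ${mib(after.arrayBuffers - before.arrayBuffers)} MiB`);
```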

Common Edge Cases

1. libuv Thread Pool and Connections

If your application is high-concurrency (e.g., thousands of WebSocket connections), the memory overhead per connection (TCP buffers) is significant and off-heap. In these cases, you may need to lower the heap ratio to 60% or 50%.

2. Node.js Versions < 14

If you are running legacy Node.js (v10 or v12), this configuration is mandatory. Those V8 engines are entirely unaware of cgroup limits and will be OOM killed under load without explicit caps.

3. Tooling Mismatches

Be careful when reading metrics.

  • process.memoryUsage().heapUsed: The actual JS objects.
  • process.memoryUsage().rss: The total memory seen by the OS.
  • Kubernetes Metrics: Usually tracks RSS + Cache.

If Kubernetes says you are using 100% memory, but heapUsed is low, you have a non-heap memory leak (often in native modules or uncontrolled Buffer creation), and increasing max-old-space-size will not fix it.
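A lightweight in-process sampler makes the two failure modes distinguishable in your logs: if `rss` climbs while `heapUsed` stays flat, the leak is off-heap and no heap flag will save you. A minimal sketch:

```javascript
// Log heap vs. RSS so heap leaks and off-heap leaks look different in the logs.
function logMemory() {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  const mib = (n) => (n / 1048576).toFixed(1);
  console.log(
    `rss=${mib(rss)}MiB heapUsed=${mib(heapUsed)}MiB ` +
      `heapTotal=${mib(heapTotal)}MiB external=${mib(external)}MiB`
  );
}

// Sample every 30 seconds; unref() so the timer doesn't keep the process alive.
setInterval(logMemory, 30_000).unref();
logMemory();
```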

Conclusion

The "JavaScript heap out of memory" error in Docker is rarely a bug in your code logic; it is a configuration mismatch between the runtime and the infrastructure.

By accepting that os.totalmem() is unreliable in containers and explicitly calculating --max-old-space-size via NODE_OPTIONS, you ensure that V8's Garbage Collector runs before the container hits its hard limit. This results in stable applications, predictable scaling, and significantly fewer 3:00 AM pager alerts.