Posts

Showing posts with the label PEFT

How to Fix 'CUDA out of memory' When Fine-Tuning Llama 3

  You have prepared your dataset, configured your environment, and loaded the Llama 3 weights. You initiate the SFTTrainer, look away for a moment, and return to find the dreaded RuntimeError: CUDA out of memory. This is the most common bottleneck in LLM engineering today. Even developers with NVIDIA RTX 4090s (24GB VRAM) or A100s encounter this when attempting to fine-tune Llama 3 8B, let alone the 70B variant.

  The issue is rarely the raw size of the model weights. The problem lies in the training overhead (gradients, optimizer states, and activation maps), which can balloon memory usage to 4x or 5x the model size. This guide provides a rigorous, architectural approach to solving OOM errors using PyTorch, QLoRA, and the latest Hugging Face ecosystem.

  The Anatomy of an OOM Error

  To fix the OOM error, you must understand where the VRAM is going. When you load Llama 3 8B in standard FP16 (16-bit floating point), the math looks like this: Model Weights: ~15GB (8 b...
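The FP16 arithmetic the excerpt begins can be sketched as a back-of-envelope estimate. This is a rough calculation, not a profiler reading: it assumes roughly 8.03B parameters for Llama 3 8B, FP16 weights and gradients (2 bytes each), and two FP32 AdamW moment buffers per parameter (8 bytes), and it ignores activations and CUDA allocator overhead.

```python
# Rough VRAM estimate for full fine-tuning Llama 3 8B in FP16 with AdamW.
# Assumptions (not measured): ~8.03B parameters, FP16 weights and
# gradients, two FP32 AdamW moment buffers; activations excluded.

def full_finetune_vram_gb(n_params: float) -> dict:
    gb = 1024 ** 3
    weights   = n_params * 2 / gb   # FP16: 2 bytes per parameter
    gradients = n_params * 2 / gb   # FP16 gradient per parameter
    optimizer = n_params * 8 / gb   # AdamW: two FP32 moments (4 + 4 bytes)
    return {
        "weights_gb": weights,
        "gradients_gb": gradients,
        "optimizer_gb": optimizer,
        "total_gb": weights + gradients + optimizer,
    }

est = full_finetune_vram_gb(8.03e9)
for name, gigabytes in est.items():
    print(f"{name}: {gigabytes:.1f}")
```

On these assumptions the weights alone land near the ~15GB the post quotes, while gradients and optimizer states push the total toward ~90GB, which is why even a 40GB A100 cannot full fine-tune the 8B model in FP16.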

How to Fix "CUDA Out of Memory" When Fine-Tuning Llama 3 with LoRA

  You have successfully downloaded the Llama 3 8B weights. You set up your PyTorch environment, configured a basic LoRA adapter, and launched the training script on your RTX 3090 or Google Colab T4. Then, before the first epoch even starts, you hit the wall: RuntimeError: CUDA out of memory. Tried to allocate...

  This is the single most common barrier for engineers moving from using LLMs to fine-tuning them. It is frustrating because the math seems like it should work: if Llama 3 8B is roughly 15GB in half-precision (FP16), why does it crash a 24GB or even 40GB card? The answer lies in the hidden memory overhead of the training process itself. This guide provides a root-cause analysis of VRAM consumption and a production-grade code solution using QLoRA (Quantized LoRA) to fit Llama 3 training pipelines onto consumer hardware.

  The Root Cause: Where Did the VRAM Go?

  To fix the OOM (Out of Memory) error, you must understand what consumes GPU memory. It is not just the...
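To see why QLoRA changes the picture, the same kind of estimate can be run for a 4-bit base model with small trainable adapters. The numbers here are illustrative assumptions, not measurements: roughly 0.5 bytes per base parameter after 4-bit (NF4) quantization, and a hypothetical ~42M trainable LoRA parameters (the real count depends on the rank and target modules you choose).

```python
# Rough QLoRA VRAM estimate: 4-bit frozen base + small trainable adapters.
# Assumptions (illustrative): NF4 base weights at ~0.5 bytes/param,
# LoRA adapters in BF16, FP32 AdamW moments kept only for adapter params.

def qlora_vram_gb(n_base: float, n_lora: float) -> dict:
    gb = 1024 ** 3
    base      = n_base * 0.5 / gb   # 4-bit quantized, frozen base weights
    adapters  = n_lora * 2 / gb     # BF16 trainable LoRA matrices
    gradients = n_lora * 2 / gb     # gradients exist only for adapters
    optimizer = n_lora * 8 / gb     # AdamW moments only for adapters
    return {
        "base_gb": base,
        "trainable_gb": adapters + gradients + optimizer,
        "total_gb": base + adapters + gradients + optimizer,
    }

# 42M adapter parameters is a hypothetical figure for illustration.
est = qlora_vram_gb(8.03e9, 42e6)
for name, gigabytes in est.items():
    print(f"{name}: {gigabytes:.2f}")
```

Under these assumptions the static footprint drops to roughly 4GB, leaving most of a 24GB RTX 3090 (and even a 16GB T4) free for activations, which is the headroom that makes the training pipeline fit on consumer hardware.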