You deploy a new trigger, test it via the UI with a single record, and everything works perfectly. Later that day, an administrator runs a Data Loader update, and the batch fails. The debug logs reveal the notorious System.LimitException: Too many SOQL queries: 101.
This exception is a hard platform enforcement: it is thrown the moment a single synchronous transaction issues its 101st SOQL query. In almost all cases, the root cause is an architectural anti-pattern: a SOQL query or DML operation placed inside a for loop, or triggers executing recursively without a bypass mechanism.
The Architecture Behind the Exception
Salesforce operates on a multitenant architecture. To prevent a single customer from monopolizing shared database resources and degrading performance for others, the platform enforces strict runtime constraints. Any comprehensive Apex governor limits tutorial will highlight that synchronous Apex transactions are strictly capped at 100 SOQL queries.
When a bulk data operation occurs, Salesforce processes records in chunks of up to 200. If an Apex trigger iterates over Trigger.new and executes a SOQL query for every single iteration, a batch of 200 records will generate 200 individual database queries. Once the transaction hits query number 101, the execution terminates immediately, throwing the exception and rolling back all uncommitted data.
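To make the failure mode concrete, here is a minimal sketch of the anti-pattern described above (the trigger name and Description update are illustrative, not taken from a real org):

```apex
// ANTI-PATTERN: do not use. One SOQL query per record in Trigger.new.
trigger AccountTriggerBad on Account (after update) {
    for (Account acc : Trigger.new) {
        // With a 200-record batch, this line alone issues 200 queries,
        // breaching the 100-query limit at iteration 101.
        List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id];
        for (Contact c : contacts) {
            c.Description = 'Account updated';
        }
        update contacts; // DML inside a loop also burns the 150 DML statement limit
    }
}
```

A single-record UI test never exposes this, because one record means one query; only a bulk operation pushes the count past 100.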
How to Bulkify Apex Triggers
To permanently resolve the "Too many SOQL queries: 101" error, you must move all database interactions out of your loops. This process is known as bulkification.
The standard pattern requires gathering the necessary identifiers into a collection, performing a single query using the IN operator, and storing the results in memory—typically a Map—for rapid retrieval.
The Solution Implementation
The following code demonstrates how to properly bulkify Apex triggers. In this scenario, we need to update related Contact records whenever an Account's Industry field changes.
trigger AccountTrigger on Account (after update) {
    // 1. Initialize collections to hold data in memory
    Set<Id> accountIdsWithChanges = new Set<Id>();
    List<Contact> contactsToUpdate = new List<Contact>();

    // 2. Iterate to identify exactly which records require processing
    for (Account acc : Trigger.new) {
        Account oldAcc = Trigger.oldMap.get(acc.Id);
        if (acc.Industry != oldAcc.Industry) {
            accountIdsWithChanges.add(acc.Id);
        }
    }

    // 3. Exit early if no records meet the criteria, saving CPU time
    if (accountIdsWithChanges.isEmpty()) {
        return;
    }

    // 4. Perform a single SOQL query outside the loop
    //    Store results in a Map for O(1) time complexity retrieval
    Map<Id, Account> accountsWithContacts = new Map<Id, Account>(
        [SELECT Id, Industry, (SELECT Id, Description FROM Contacts)
         FROM Account
         WHERE Id IN :accountIdsWithChanges]
    );

    // 5. Iterate over the targeted records using the in-memory Map
    for (Id accId : accountIdsWithChanges) {
        Account processedAccount = accountsWithContacts.get(accId);
        // Loop through the pre-queried related records
        for (Contact relatedContact : processedAccount.Contacts) {
            relatedContact.Description = 'Account Industry updated to: ' + processedAccount.Industry;
            contactsToUpdate.add(relatedContact);
        }
    }

    // 6. Perform a single bulk DML operation
    if (!contactsToUpdate.isEmpty()) {
        update contactsToUpdate;
    }
}
Principles of Salesforce SOQL Optimization
Understanding why the code above works is essential for scalable Salesforce SOQL optimization.
Collection over Iteration: A Set<Id> acts as a de-duplicating filter. By evaluating acc.Industry != oldAcc.Industry, we query the database only for records that actually changed, which reduces database processing time and memory consumption.

The IN Binding: Passing the Set to the SOQL IN clause (WHERE Id IN :accountIdsWithChanges) consolidates the data request. Instead of making 200 round trips to the database, the transaction makes one, evaluating all IDs simultaneously.

Map Retrieval Complexity: By passing the SOQL result directly into a map constructor, new Map<Id, Account>([...]), you create an indexed, in-memory lookup table. Retrieving records from this map via .get() has O(1) time complexity. This avoids nested iterations over generic lists, which quickly exhaust the 10,000-millisecond synchronous CPU time limit.

Parent-to-Child Subqueries: Instead of querying the Contact object directly and filtering by AccountId, we query the Account and fetch related Contacts via a subquery (SELECT Id, Description FROM Contacts). This structures the data perfectly for our loop and counts as a single query toward the 100-query limit.
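For comparison, the direct-query alternative also costs exactly one SOQL query, but the grouping the subquery provides for free must then be rebuilt by hand. A sketch, reusing the accountIdsWithChanges set from the trigger above:

```apex
// Alternative: one direct query on Contact, regrouped by AccountId in memory.
Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId, Description
                  FROM Contact
                  WHERE AccountId IN :accountIdsWithChanges]) {
    if (!contactsByAccount.containsKey(c.AccountId)) {
        contactsByAccount.put(c.AccountId, new List<Contact>());
    }
    contactsByAccount.get(c.AccountId).add(c);
}
// Still a single query toward the limit; the subquery form simply
// hands us this parent-to-child structure without the manual grouping.
```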
Edge Cases and Trigger Recursion
If you have already removed queries from loops but still receive the exception, your transaction is likely suffering from trigger recursion.
When an Account trigger updates a Contact, it fires the Contact trigger. If that Contact trigger performs logic that updates the Account, the Account trigger fires again. Each recursive loop consumes an additional portion of your 100 SOQL query limit.
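To confirm that recursion is the culprit, log query consumption at the top of the trigger using the standard Limits class. If the same trigger fires repeatedly within one transaction with a climbing count, recursion is eating the budget:

```apex
// Diagnostic: log how much of the SOQL budget each invocation has consumed.
System.debug('AccountTrigger fired. SOQL queries used: '
    + Limits.getQueries() + ' of ' + Limits.getLimitQueries());
```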
To prevent this, implement a static recursion check in a dedicated handler class:
public class AccountTriggerHandler {
    // Static set persists across the entire transaction execution context
    private static Set<Id> processedAccountIds = new Set<Id>();

    public static void handleAfterUpdate(List<Account> newList, Map<Id, Account> oldMap) {
        List<Account> accountsToProcess = new List<Account>();
        for (Account acc : newList) {
            // Only process if the ID is not in the static Set
            if (!processedAccountIds.contains(acc.Id)) {
                accountsToProcess.add(acc);
                // Mark as processed to block future recursive executions
                processedAccountIds.add(acc.Id);
            }
        }
        if (accountsToProcess.isEmpty()) {
            return;
        }
        // Proceed with bulkified SOQL and DML...
    }
}
By ensuring your logic only fires once per record per transaction, you seal the leaks that cause unexpected governor limit exceptions in complex, heavily customized orgs. Always assume your code will run against batches of 200 records, map your data in memory, and control the flow of execution with static variables.
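The cheapest way to enforce that assumption is a bulk unit test that exercises the trigger against a full 200-record batch. A sketch, assuming the AccountTrigger shown earlier (the test class name and field values are illustrative):

```apex
@isTest
private class AccountTriggerBulkTest {
    @isTest
    static void handlesFull200RecordBatch() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accounts.add(new Account(Name = 'Bulk Test ' + i, Industry = 'Banking'));
        }
        insert accounts;

        for (Account acc : accounts) {
            acc.Industry = 'Technology';
        }

        Test.startTest();
        update accounts; // fires the trigger against a 200-record chunk
        Test.stopTest();
        // A non-bulkified trigger would throw System.LimitException
        // before execution ever reached this point.
    }
}
```

Running this test before every deployment catches the "works for one record, fails for 200" regression long before a Data Loader job does.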