You have built a robust Electron application. The logic is sound, the architecture is modular, and the UI is modern. But the moment you attempt to load a large dataset—say, a 50MB JSON file containing logs or analytical data—the application freezes. The CSS animations halt, the loading spinner sticks, and the window becomes unresponsive for a noticeable 500ms to 2 seconds.
This is the classic "IPC Jank." It is not a React issue; it is a serialization bottleneck. This post details exactly why this happens and provides a production-ready implementation to offload serialization, keeping your Renderer at 60fps.
The Why: The Hidden Cost of IPC
When you transmit data between the Main process and the Renderer process in Electron using ipcMain.handle and ipcRenderer.invoke, you are crossing a process boundary.
Under the hood, Electron uses the Structured Clone Algorithm to serialize JavaScript objects into a format suitable for IPC transmission (often via named pipes).
- Main Process: You pass a massive JavaScript Object. Electron traverses this object recursively to serialize it. This consumes CPU on the Main thread.
- IPC Channel: The serialized bits travel to the Renderer.
- Renderer Process (The Bottleneck): The Renderer receives the bits. Before your await ipcRenderer.invoke() promise resolves, the Renderer's main thread (the UI thread) must deserialize the entire payload back into a JavaScript Object.
If that deserialization takes 400ms, your UI is blocked for 400ms. The event loop cannot process paint updates, click events, or CSS animations.
To fix this, we must bypass the deserialization of complex objects on the UI thread.
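You can measure the parsing half of this cost in isolation, outside Electron entirely. The sketch below (plain Node, with a synthetic payload of roughly 10MB) times a single JSON.parse call; whatever thread runs it can do nothing else for that duration — in a Renderer, that means no paint and no input handling:

```typescript
// Synthetic demonstration: JSON.parse cost grows with payload size,
// and it blocks the calling thread for the entire duration.
const rows = Array.from({ length: 200_000 }, (_, i) => ({
  id: i,
  label: `row-${i}`,
  value: Math.random(),
}));
const payload = JSON.stringify(rows); // roughly 10MB of text

const start = performance.now();
const parsed = JSON.parse(payload);
const elapsed = performance.now() - start;

console.log(`Parsed ${(payload.length / 1e6).toFixed(1)}MB in ${elapsed.toFixed(0)}ms`);
```

On a typical machine this lands in the tens-to-hundreds of milliseconds — every one of them stolen from the UI thread if the parse happens there.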
The Fix: Raw Buffers and Web Workers
The solution is a two-step architectural shift:
- Transport: Send data as a raw Buffer (Uint8Array) instead of a JSON Object. Electron optimizes Buffer transfer, avoiding the deep object traversal cost during IPC.
- Parsing: Pass that Buffer to a Web Worker within the Renderer. The Worker parses the data on a background thread and returns the result.
1. The Main Process (main.ts)
Instead of returning a parsed Object, we return the raw bytes. If you are reading from a file, return the buffer directly. If you are generating data, stringify it to a Buffer first.
import { app, BrowserWindow, ipcMain } from 'electron';
import fs from 'node:fs/promises';
import path from 'node:path';
async function setupMainIPC() {
ipcMain.handle('get-large-dataset', async () => {
// SCENARIO: Reading a 100MB JSON file.
const filePath = path.join(app.getPath('userData'), 'large-analytics.json');
try {
// CRITICAL: Do NOT pass 'utf-8'. Read as a raw Buffer so Electron
// can ship it over IPC as binary data instead of a structured-cloned object.
const rawBuffer = await fs.readFile(filePath);
return rawBuffer;
} catch (error) {
console.error('Failed to read data', error);
throw error;
}
});
}
const createWindow = () => {
const win = new BrowserWindow({
webPreferences: {
preload: path.join(__dirname, 'preload.js'),
// Security best practices
contextIsolation: true,
nodeIntegration: false,
},
});
setupMainIPC();
win.loadURL('http://localhost:5173'); // Assuming Vite dev server
};
app.whenReady().then(createWindow);
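For the second scenario mentioned above — data generated in memory rather than read from disk — the same principle applies: stringify once in the Main process and hand bytes across the boundary. A minimal sketch (the generateAnalytics helper is hypothetical, standing in for whatever builds your dataset):

```typescript
// Hypothetical in-memory scenario: build the object graph in Main,
// but cross the IPC boundary as bytes, not as a structured-cloned object.
function generateAnalytics(): { events: number[] } {
  return { events: Array.from({ length: 1_000 }, (_, i) => i) };
}

function toIpcBuffer(data: unknown): Buffer {
  // One stringify on the Main process; the Renderer's worker does the parse.
  return Buffer.from(JSON.stringify(data), 'utf-8');
}

const buf = toIpcBuffer(generateAnalytics());
// Return `buf` from your ipcMain.handle callback exactly as with readFile.
```

Note that JSON.stringify still runs on the Main process thread; for very large generated datasets you may want to move that step into a Node worker_thread as well.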
2. The Preload Script (preload.ts)
Expose the API securely via contextBridge.
import { contextBridge, ipcRenderer } from 'electron';
contextBridge.exposeInMainWorld('api', {
// Returns Promise<Uint8Array>
getLargeDataset: () => ipcRenderer.invoke('get-large-dataset'),
});
3. The Worker Logic (json.worker.ts)
We create a dedicated Web Worker to handle the CPU-intensive JSON.parse. By moving this off the main thread, the UI remains interactive even while parsing 100MB of text.
// json.worker.ts
self.onmessage = (event: MessageEvent<Uint8Array>) => {
const rawBuffer = event.data;
try {
const startTime = performance.now();
// 1. Decode Buffer to String
const decoder = new TextDecoder('utf-8');
const jsonString = decoder.decode(rawBuffer);
// 2. Parse JSON (The heavy lifting)
const data = JSON.parse(jsonString);
const endTime = performance.now();
console.log(`Worker parsed data in ${endTime - startTime}ms`);
// 3. Send back to UI thread
self.postMessage({ type: 'SUCCESS', data });
} catch (error) {
self.postMessage({
type: 'ERROR',
error: error instanceof Error ? error.message : 'Unknown Parsing Error'
});
}
};
4. The Renderer Implementation (React Hook)
Finally, we consume this in the UI. We fetch the buffer, spawn the worker, and handle the response.
// useLargeData.ts (Custom Hook)
import { useState, useEffect, useRef } from 'react';
// Type definition for the exposed Electron API
declare global {
interface Window {
api: {
getLargeDataset: () => Promise<Uint8Array>;
};
}
}
export function useLargeData<T>() {
const [data, setData] = useState<T | null>(null);
const [loading, setLoading] = useState(false);
const workerRef = useRef<Worker | null>(null);
useEffect(() => {
// Initialize Worker
workerRef.current = new Worker(new URL('./json.worker.ts', import.meta.url), {
type: 'module',
});
// Cleanup
return () => {
workerRef.current?.terminate();
};
}, []);
const loadData = async () => {
if (!workerRef.current) return;
setLoading(true);
try {
// 1. Fetch raw bytes from Main (Fast IPC)
const buffer = await window.api.getLargeDataset();
// 2. Setup one-time listener for the worker response
const workerHandler = (e: MessageEvent) => {
const { type, data, error } = e.data;
if (type === 'SUCCESS') {
setData(data); // React batch update
} else {
console.error('Worker Error:', error);
}
setLoading(false);
// Remove listener to prevent memory leaks or duplicate handling
workerRef.current?.removeEventListener('message', workerHandler);
};
workerRef.current.addEventListener('message', workerHandler);
      // 3. Transfer the underlying ArrayBuffer to the Worker (zero-copy).
      // Listing buffer.buffer in the transfer list moves ownership and
      // detaches the Uint8Array on this side. Omit the transfer list if
      // you still need the bytes here; a structured-clone copy is also fast.
      workerRef.current.postMessage(buffer, [buffer.buffer]);
} catch (err) {
console.error('IPC Failed', err);
setLoading(false);
}
};
return { data, loading, loadData };
}
The Explanation
Why is this solution drastically faster?
- IPC Binary Stream: By returning the result of fs.readFile (a Buffer) directly, Electron does not treat the data as a complex JS Object. It serializes it as binary data, which is significantly cheaper than traversing a generic Object tree.
- TextDecoder: In the worker, TextDecoder is a native browser API implemented in C++. It is highly optimized for converting raw bytes into a string.
- Parallelism: The bottleneck, JSON.parse, is O(N). By moving it to a Web Worker, the Main UI thread is free to render spinners or animations; the OS scheduler handles the threading.
- Transferable Objects: In the postMessage call (workerRef.current.postMessage(buffer, [buffer.buffer])), the second argument is a transfer list. It tells the JS engine to move memory ownership to the worker rather than copying it, making the hand-off near-instantaneous regardless of payload size.
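The transfer semantics are easy to verify directly. structuredClone exposes the same transfer-list mechanism that postMessage uses, so this standalone Node sketch shows ownership actually moving: the receiving side gets the bytes, and the sender's view is left detached:

```typescript
// Transferring an ArrayBuffer detaches it from the sender:
// ownership moves instead of bytes being copied.
const original = new Uint8Array([1, 2, 3, 4]);
const moved = structuredClone(original, { transfer: [original.buffer] });

console.log(moved.byteLength);    // 4 -- the data arrived intact
console.log(original.byteLength); // 0 -- the source view is now detached
```

This detachment is exactly why the hook above must not touch the Uint8Array after calling postMessage with a transfer list.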
Conclusion
Electron performance often degrades not because of DOM complexity, but because of main-thread blocking during IPC serialization. By treating large datasets as binary blobs and leveraging Web Workers for parsing, you decouple data processing from frame rendering.
Implement this pattern for any dataset exceeding 5MB, and your application will feel native, fluid, and professional.