On-Chain Series VII: Scaling JSON Processing with Worker Threads

In this post, we introduce our latest infrastructure improvement: Worker Thread Pools for JSON parsing, an architecture designed to eliminate event-loop blocking when processing large blockchain data payloads. Building on our previous work in high-throughput API design, this solution provides seamless handling of 20MB+ JSON blobs, automatic worker recovery, intelligent queue management, and a 40x improvement in tail latency.

  • June 11, 2025
  • By Vlad Cealicu

When processing blockchain data at scale, every millisecond counts. At CoinDesk Data, we handle massive volumes of on-chain data stored as JSON blobs, with individual blocks reaching up to 20MB. While this might not sound enormous, parsing these payloads can block the Node.js event loop for up to a full second—an eternity when serving thousands of API requests.

The Problem: Event Loop Blocking

Node.js excels at I/O operations thanks to its event-driven architecture, but CPU-intensive tasks like JSON parsing run on the main thread. When JSON.parse() processes a 20MB blockchain data blob, it can monopolize the CPU for 500-1000ms, blocking all other operations.

Consider this scenario: Your API serves thousands of requests per second, each potentially triggering JSON parsing operations. A single parse operation blocking for one second means every other request during that window experiences at least that much additional latency. The cascade effect is devastating.
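
To make the blocking concrete, here is a minimal, self-contained sketch (hypothetical, not from our codebase): it builds a roughly 20MB JSON string, then uses a 10ms heartbeat timer to show how long JSON.parse stalls the event loop.

// Build a ~20MB JSON string, then watch a 10ms heartbeat stall while it parses.
const payload = JSON.stringify({
    transactions: Array.from({ length: 200000 }, (_, i) => ({
        hash: '0x' + i.toString(16).padStart(64, '0'),
        value: i,
    })),
});

let last = Date.now();
const heartbeat = setInterval(() => {
    const now = Date.now();
    if (now - last > 50) {
        console.log('event loop blocked for ~' + (now - last) + 'ms');
    }
    last = now;
}, 10);

setTimeout(() => {
    JSON.parse(payload); // runs on the main thread, starving the heartbeat
    clearInterval(heartbeat);
}, 100);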

The Solution: Worker Thread Pools

We developed a worker pool implementation that offloads JSON parsing to separate threads, keeping the main event loop responsive. Here's how it works:

// Main thread remains responsive
const spotReserved = workerPool.reserveSpot();
if (!spotReserved) {
    // Queue is full, fail fast
    return res.status(503).json({ error: 'Service temporarily unavailable' });
}

// Fetch data asynchronously
const blockData = await fetchBlockFromStorage(blockNumber);

// Parse in worker thread, main thread continues serving requests
workerPool.execute(blockData, (err, parsed) => {
    if (err) {
        workerPool.releaseSpot();
        return handleError(err);
    }
    // Process parsed data
    processBlockData(parsed);
});
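
The snippet above is the main-thread side; the worker itself can be a small script that parses whatever string it receives and posts the result back. Here is a minimal sketch using Node's built-in worker_threads module (parse-worker.js is a hypothetical filename; our production worker also handles timeouts and metrics):

// parse-worker.js — receives a raw JSON string, returns the parsed object
const { parentPort } = require('worker_threads');

parentPort.on('message', (rawJson) => {
    try {
        // The structured-clone copy back to the main thread has a cost,
        // but the expensive parse itself happens off the main thread.
        parentPort.postMessage({ ok: true, parsed: JSON.parse(rawJson) });
    } catch (err) {
        parentPort.postMessage({ ok: false, error: err.message });
    }
});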

Key Features

1. Reservation System: Before fetching data from storage, we reserve a spot in the queue. This prevents memory exhaustion and provides immediate feedback when the system is at capacity (a sketch of the pool skeleton follows the metrics sample below).

2. FIFO Queue Management: Jobs wait in a queue when all workers are busy, ensuring fair processing and preventing starvation.

3. Automatic Worker Recovery: Failed or timed-out workers are automatically replaced, maintaining pool stability.

4. Performance Metrics: The system tracks detailed performance metrics, helping identify bottlenecks:

{
    total_jobs: 45892,
    successful_jobs: 45790,
    worker_timeouts: 12,
    queue_full_rejections: 89,
    timing_buckets: {
        under_10ms: 38421,
        over_100ms: 5234,
        over_500ms: 1876,
        over_1000ms: 234,
        over_2000ms: 23
    }
}
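
Putting features 1–3 together, a stripped-down pool might look like the following. This is an illustrative sketch rather than our production code: it shows the FIFO queue and worker dispatch, with the reservation accounting and recovery paths elided (those are sketched in the sections below).

const { Worker } = require('worker_threads');

class JsonWorkerPool {
    constructor(workerFile, size) {
        this.workerFile = workerFile;
        this.queue = [];    // FIFO: jobs wait here when all workers are busy
        this.idle = [];     // workers ready to take a job
        for (let i = 0; i < size; i++) {
            this.idle.push(new Worker(workerFile));
        }
    }

    execute(rawJson, callback) {
        this.queue.push({ rawJson, callback });
        this.dispatch();
    }

    dispatch() {
        while (this.idle.length > 0 && this.queue.length > 0) {
            const worker = this.idle.pop();
            const job = this.queue.shift(); // oldest job first: no starvation
            worker.once('message', (result) => {
                this.idle.push(worker);     // return the worker to the pool
                this.dispatch();            // pick up the next queued job
                job.callback(result.ok ? null : new Error(result.error), result.parsed);
            });
            worker.postMessage(job.rawJson);
        }
    }
}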

Performance Impact

In production, this approach yielded dramatic improvements:

  • API Response Times: 99th percentile latency dropped from 2 seconds to just 50ms
  • Event Loop Blocking: Eliminated 1-second parsing freezes
  • Throughput: Sustained full request throughput, with consistent sub-100ms response times under load

Memory Management

The reservation system prevents memory bloat by limiting concurrent operations:

const totalCommitted = queue.length + reservedSlots;
if (totalCommitted >= maxQueueSize) {
    // Reject early, before allocating resources
    return false;
}

This approach ensures predictable memory usage even under extreme load.
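
The reserve/release pair behind this check can be plain counter arithmetic. Here is a sketch of two methods that might sit alongside execute() on the pool class above (reservedSlots and maxQueueSize are assumed to be initialized in the constructor):

reserveSpot() {
    if (this.queue.length + this.reservedSlots >= this.maxQueueSize) {
        return false;               // at capacity: the caller fails fast with a 503
    }
    this.reservedSlots += 1;        // hold the slot while the storage fetch runs
    return true;
}

releaseSpot() {
    this.reservedSlots = Math.max(0, this.reservedSlots - 1);
}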

Implementation Considerations

Worker Pool Sizing: Our servers typically run with 2–4 CPU cores, and we generally size the worker pool to the number of CPUs minus one. We've found that even with just 2 CPUs, running 2 workers performs well without blocking the event loop: between our API workload and NGINX, we typically operate at only 30–40% of core capacity, leaving room for efficient concurrency without oversubscription or unnecessary memory overhead.
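
In code, that sizing rule is a one-liner (JsonWorkerPool and parse-worker.js refer to the hypothetical sketches above):

const os = require('os');

// One worker per core, minus one left for the event loop and NGINX.
const poolSize = Math.max(1, os.cpus().length - 1);
const pool = new JsonWorkerPool('./parse-worker.js', poolSize);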

Timeout Configuration: 30-second timeouts catch edge cases where parsing hangs due to malformed data.
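
A hedged sketch of how such a timeout could wrap each dispatched job — worker.terminate() kills the hung thread, and a replacement keeps the pool at full strength (this assumes the pool sketch above, with its workerFile, idle, and dispatch members):

const { Worker } = require('worker_threads');

const PARSE_TIMEOUT_MS = 30000;

function runWithTimeout(pool, worker, job) {
    const timer = setTimeout(() => {
        worker.terminate();                          // hung parse: kill the thread
        job.callback(new Error('parse timed out'));
        pool.idle.push(new Worker(pool.workerFile)); // respawn to keep the pool full
        pool.dispatch();
    }, PARSE_TIMEOUT_MS);

    worker.once('message', (result) => {
        clearTimeout(timer);
        pool.idle.push(worker);
        pool.dispatch();
        job.callback(result.ok ? null : new Error(result.error), result.parsed);
    });
    worker.postMessage(job.rawJson);
}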

Error Boundaries: Workers isolate failures, preventing a single bad payload from crashing the entire service.
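
Crash recovery follows the same pattern: listen for the worker's 'error' and 'exit' events and respawn on abnormal exit. A minimal sketch, again against the hypothetical pool above:

const { Worker } = require('worker_threads');

function spawnResilientWorker(pool) {
    const worker = new Worker(pool.workerFile);
    worker.on('error', (err) => {
        // A malformed payload can only take down this one thread, not the service.
        console.error('worker crashed:', err.message);
    });
    worker.on('exit', (code) => {
        if (code !== 0) {
            pool.idle.push(spawnResilientWorker(pool)); // replace the dead worker
        }
    });
    return worker;
}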

Real-World Application

Processing Ethereum blocks with thousands of transactions, each containing complex nested structures, previously caused regular service disruptions. With worker threads:

  • 20MB blocks parse without blocking API responses
  • System remains responsive even during blockchain congestion events
  • Graceful degradation when approaching capacity limits

The difference is stark: what previously caused 1-second freezes now happens invisibly in the background while the API continues serving requests at full speed.

Conclusion

Worker threads transform Node.js from single-threaded to multi-threaded for CPU-intensive operations. For blockchain data processing, where even "modest" 20MB JSON payloads can block the event loop for a full second, this pattern is essential for building reliable, high-performance systems.

The complete implementation handles edge cases like worker crashes, memory leaks, and race conditions—critical for production blockchain infrastructure. By moving CPU-intensive work off the main thread, we've achieved a 40x improvement in tail latency, ensuring users get consistent, fast responses regardless of the blockchain data being processed.
