When pushing payloads exceeding 1MB over SSE (Server-Sent Events), you will typically encounter truncated data fields, silent EventSource disconnects, or HTTP 413/502 errors. The immediate goal is to isolate hard infrastructure limits, bypass proxy buffer constraints, and implement safe payload chunking without breaking stream continuity or triggering client-side memory exhaustion.
Key Diagnostic Signals:
- EventSource connection closed unexpectedly
- Truncated text/event-stream responses (incomplete JSON/strings)
- Proxy errors: upstream prematurely closed connection, buffer overflow, or 502 Bad Gateway

Protocol Fundamentals & Architecture:
The SSE specification defines no explicit maximum payload size. Hard limits originate from infrastructure buffering and client-side parsing mechanics. Reverse proxies and load balancers enforce strict response buffer caps (often defaulting to 4KB–8KB). Browsers accumulate data: lines in memory until a double newline (\n\n) triggers event dispatch. When payloads exceed these thresholds, the underlying TCP stream stalls, proxies terminate the connection, or the JavaScript engine triggers an Out-Of-Memory (OOM) condition.
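The dispatch mechanics can be illustrated with a minimal parser sketch (a simplification of the real EventSource parsing rules, for illustration only): nothing reaches the application until the blank line arrives, so a single 1MB data: line is held in memory whole before dispatch.

```javascript
// Sketch: simplified event-stream framing. data: lines accumulate until a
// blank line (the \n\n boundary) dispatches the event — one huge line means
// one huge in-memory buffer before the app sees anything.
function parseStream(text, onEvent) {
  let dataLines = [];
  for (const line of text.split('\n')) {
    if (line.startsWith('data:')) {
      dataLines.push(line.slice(5).trimStart()); // buffer grows until dispatch
    } else if (line === '' && dataLines.length) {
      onEvent(dataLines.join('\n')); // dispatch only at the blank line
      dataLines = [];
    }
  }
}
```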
As detailed in Understanding the Event Stream Format, unchunked payloads violate streaming best practices and bypass incremental parsing. Additional compounding factors include:
- Transfer-Encoding: chunked headers stripped or rewritten by intermediaries
- EventSource internal memory allocation limits per message

Split payloads exceeding 50KB into discrete blocks. Emit each chunk with an incremental id field. The client must reassemble using lastEventId and a message buffer.
Server (Node.js/Express):
const CHUNK_SIZE = 50000; // ~50KB
function streamLargePayload(res, data) {
const payload = JSON.stringify(data);
let chunkIndex = 0;
for (let i = 0; i < payload.length; i += CHUNK_SIZE) {
const chunk = payload.slice(i, i + CHUNK_SIZE);
res.write(`id: ${chunkIndex}\ndata: ${chunk}\n\n`);
chunkIndex++;
}
}
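The client also needs a way to know when the last chunk has arrived. One workable convention (an assumption, not part of the SSE spec; streamWithLength and the len: prefix are illustrative names) is to declare the total payload length in a preliminary meta event before the chunks:

```javascript
// Sketch: prefix the chunk stream with the total payload length so the
// client can detect completion. The "len:" convention is illustrative.
function streamWithLength(res, data) {
  const CHUNK_SIZE = 50000; // ~50KB, matching the chunking above
  const payload = JSON.stringify(data);
  res.write(`data: len:${payload.length}\n\n`); // declared total size
  let id = 0;
  for (let i = 0; i < payload.length; i += CHUNK_SIZE) {
    res.write(`id: ${id}\ndata: ${payload.slice(i, i + CHUNK_SIZE)}\n\n`);
    id++;
  }
}
```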
Client Reassembly:
const buffer = [];
const source = new EventSource('/stream');
source.addEventListener('message', (e) => {
buffer.push(e.data);
// Validate completion via length check, checksum, or terminal marker
if (isComplete(buffer)) {
const fullPayload = JSON.parse(buffer.join(''));
processPayload(fullPayload);
buffer.length = 0; // Clear buffer for next stream
}
});
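The isComplete check above is left to the application. A minimal sketch of one option, assuming the server declares the total payload length up front in a "len:<n>" meta event (an illustrative convention, not part of the SSE spec):

```javascript
// Sketch: completion check via a declared total length.
let expectedLength = null;

function handleChunk(buffer, data) {
  if (expectedLength === null && data.startsWith('len:')) {
    expectedLength = Number(data.slice(4)); // remember declared total size
    return false; // the meta event carries no payload
  }
  buffer.push(data);
  return isComplete(buffer);
}

function isComplete(buffer) {
  // Complete once accumulated characters reach the declared total.
  const received = buffer.reduce((n, chunk) => n + chunk.length, 0);
  return expectedLength !== null && received >= expectedLength;
}
```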
Disable response buffering to allow true streaming. Ensure Transfer-Encoding: chunked is preserved end-to-end.
Nginx (nginx.conf or site block):
location /stream {
proxy_pass http://backend;
proxy_buffering off;
proxy_cache off;
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding on;
}
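As an alternative to editing nginx.conf, the backend can opt individual responses out of buffering via the X-Accel-Buffering header, which nginx honors per response. A sketch for the Express stack used above:

```javascript
// Sketch: per-response alternative to proxy_buffering off. Setting
// X-Accel-Buffering: no tells nginx not to buffer this stream, without
// touching the nginx configuration.
function sseHeaders(res) {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('X-Accel-Buffering', 'no'); // nginx: stream, do not buffer
}
```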
Apache (.htaccess or VirtualHost):
SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1
# Do not set force-proxy-request-1.0: mod_proxy checks only for the
# variable's presence, so even a value of 0 downgrades the request to
# HTTP/1.0 and breaks chunked streaming.
Enable gzip or brotli at the server layer. Compression typically reduces payload size by 60–80%, keeping streams under implicit proxy limits.
Nginx Compression:
gzip on;
gzip_types text/event-stream application/json;
gzip_min_length 1000;
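Note that compression layers buffer output, which can delay event delivery. With Node's compression middleware (an assumed stack detail), flush after each write so gzip does not hold chunks back:

```javascript
// Sketch: flush each event through the compression layer. The compression
// middleware adds res.flush(); the guard keeps this safe on plain responses.
function sendEvent(res, id, data) {
  res.write(`id: ${id}\ndata: ${data}\n\n`);
  if (typeof res.flush === 'function') res.flush(); // push compressed bytes out now
}
```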
Prevent premature connection drops during large payload transmission. Set proxy_read_timeout well above 60s and align keepalive_timeout with the expected stream lifecycle.
Nginx Timeouts:
proxy_read_timeout 3600s;
proxy_connect_timeout 10s;
keepalive_timeout 3600s;
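Idle timeouts can also be held off from the application side by emitting periodic SSE comment lines, which clients ignore. A minimal sketch (the 25s interval is an assumption, chosen to sit below common 30–60s proxy idle defaults):

```javascript
// Sketch: SSE heartbeat. Lines starting with ":" are comments — never
// dispatched by EventSource, but they reset idle timers along the path.
function writeHeartbeat(res) {
  res.write(': keep-alive\n\n');
}

function startHeartbeat(res, intervalMs = 25000) {
  const timer = setInterval(() => writeHeartbeat(res), intervalMs);
  return () => clearInterval(timer); // call when the stream closes
}
```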
Deploy synthetic load tests pushing 1MB, 5MB, and 10MB payloads. Verify EventSource.readyState remains OPEN (1). Monitor proxy error logs for 502/413 spikes. Track browser heap allocation via Chrome DevTools: open the Performance tab, record a stream session, and inspect the JS Heap graph for sharp upward spikes indicating OOM risk.
Implement server-side metrics for sse_message_size_bytes and sse_stream_duration_seconds. Configure alerts on connection resets exceeding 0.5% of total active streams.
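A minimal sketch of the alert check (in production these values would be exported through a metrics client such as prom-client; the object and function names here are illustrative):

```javascript
// Sketch: accumulate values behind sse_message_size_bytes and check the
// reset-rate alert threshold described above.
const metrics = { messageSizes: [], resets: 0, activeStreams: 0 };

function recordMessage(sizeBytes) {
  metrics.messageSizes.push(sizeBytes); // feeds sse_message_size_bytes
}

function resetRateExceeded() {
  // Alert when connection resets exceed 0.5% of total active streams.
  return metrics.activeStreams > 0 &&
         metrics.resets / metrics.activeStreams > 0.005;
}
```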
DevTools Quick Check:
- Network tab: filter requests by type event-stream
- Content Download shows progressive growth (not a single spike)
- Resource timing: performance.getEntriesByType('resource').filter(r => r.name.includes('/stream')).map(r => ({duration: r.duration, size: r.transferSize}))
- Validate proxy configuration with nginx -T or