Short polling: the client sends repeated requests at fixed intervals. This wastes bandwidth when nothing has changed and leaves high latency between the actual event and client awareness.
// Wasteful: polls every 3 seconds
setInterval(async () => {
const res = await fetch('/api/updates');
const data = await res.json();
if (data.length) render(data);
}, 3000);
Long polling: the server holds each request open until data is available, then responds; the client immediately re-opens. Better latency, but connection churn and complex error handling.
// Long-poll loop
async function poll() {
const res = await fetch('/api/updates?wait=30');
const data = await res.json();
render(data);
poll(); // immediately re-connect
}
poll();
SSE is an HTML5 standard (W3C / WHATWG) that allows a server to push events to a client over a single, long-lived HTTP connection. The client opens the connection once; the server writes events as they occur.
EventSource API, no library needed.

| Spec | Status |
|---|---|
| WHATWG HTML Living Standard §9.2 | Active |
| W3C Server-Sent Events (2015) | REC |
| Chrome / Edge / Firefox / Safari | Full support |
| IE 11 | No support (polyfill available) |
| Node.js / Deno / Bun | Server-side: trivial to implement |
// Client — 3 lines
const es = new EventSource('/stream');
es.onmessage = (e) => {
console.log(e.data);
};
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache

data: Hello, SSE!

data: Another event
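Given the support table above, a feature check still costs nothing before constructing the client. A minimal sketch (supportsSSE is an illustrative helper name, not a standard API):

```javascript
// Sketch: feature-detect EventSource before relying on it.
// supportsSSE is an illustrative helper name.
function supportsSSE(globalObj = globalThis) {
  return typeof globalObj.EventSource === 'function';
}

if (supportsSSE()) {
  // safe to use new EventSource(...)
} else {
  // fall back to a polyfill (see references) or plain polling
}
```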
| Feature | SSE | WebSocket | Short Polling | Long Polling |
|---|---|---|---|---|
| Direction | Server → Client | Bidirectional | Client → Server | Client → Server |
| Protocol | HTTP/1.1 or HTTP/2 | ws:// / wss:// | HTTP | HTTP |
| Data format | Text (UTF-8) | Text + Binary | Any | Any |
| Auto-reconnect | Built-in | Manual | N/A | Manual |
| Event IDs / Resume | Native | DIY | DIY | DIY |
| Proxy/firewall friendly | Yes | Sometimes blocked | Yes | Yes |
| Browser API | EventSource | WebSocket | fetch | fetch |
| Max connections (HTTP/1.1) | 6 per domain | Unlimited | 6 per domain | 6 per domain |
| Complexity | Low | Medium | Low | Medium |
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive
X-Accel-Buffering: no
The response body is an indefinite-length text stream. Each event is separated by a blank line (\n\n).
: This is a comment (ignored by client)

id: 1
event: user-login
data: {"user":"alice","time":"10:42"}

id: 2
event: metric
data: {"cpu":42,"mem":68}

id: 3
data: Simple message (event = "message")

retry: 5000
| Field | Purpose |
|---|---|
| data: | Event payload. Multiple data: lines are joined with \n |
| event: | Event type name. Defaults to "message" if omitted |
| id: | Sets the last event ID. Sent back on reconnect via the Last-Event-ID header |
| retry: | Reconnection delay in ms. The browser uses this for auto-reconnect timing |
| : (colon) | Comment line. Used as a keep-alive heartbeat |
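To make the multi-line data: rule concrete, here is a rough sketch of the joining behavior (joinDataLines is illustrative; EventSource performs this parsing for you):

```javascript
// Sketch of how a client joins multiple data: lines in one event block
// (illustrative only; the browser does this internally).
function joinDataLines(block) {
  return block
    .split('\n')
    .filter(line => line.startsWith('data:'))
    .map(line => line.replace(/^data: ?/, ''))
    .join('\n');
}

joinDataLines('data: line one\ndata: line two');
// → "line one\nline two", delivered as a single e.data string
```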
Lines end with \n; events are separated by \n\n; a payload may span multiple data: fields.
// Basic usage
const es = new EventSource('/api/stream');
// With credentials (cookies / auth)
const es = new EventSource('/api/stream', {
withCredentials: true
});
// Default "message" events
es.onmessage = (e) => {
console.log(e.data); // string payload
console.log(e.lastEventId); // event ID
console.log(e.origin); // server origin
};
// Connection opened
es.onopen = () => {
console.log('Connected!');
};
// Error / disconnection
es.onerror = (e) => {
if (es.readyState === EventSource.CONNECTING) {
console.log('Reconnecting...');
}
};
| Constant | Value | Meaning |
|---|---|---|
| CONNECTING | 0 | Connecting or reconnecting |
| OPEN | 1 | Connection established |
| CLOSED | 2 | Connection closed, no retry |
// Permanently close — no auto-reconnect
es.close();
// readyState === EventSource.CLOSED (2)
event: notification
data: {"type":"info","msg":"Deploy started"}

event: notification
data: {"type":"success","msg":"Deploy complete"}

event: heartbeat
data: ping

event: metric
data: {"cpu":23,"mem":55,"disk":71}
The event: field sets the event type. Without it, events are dispatched as "message".
Named events are received with addEventListener, not onmessage.
const es = new EventSource('/api/events');
// Only fires for event: notification
es.addEventListener('notification', (e) => {
const { type, msg } = JSON.parse(e.data);
showToast(type, msg);
});
// Only fires for event: metric
es.addEventListener('metric', (e) => {
const { cpu, mem, disk } = JSON.parse(e.data);
updateDashboard({ cpu, mem, disk });
});
// Only fires for event: heartbeat
es.addEventListener('heartbeat', (e) => {
resetIdleTimer();
});
// Catch-all for unnamed events
es.onmessage = (e) => {
console.log('Unnamed event:', e.data);
};
Note: onmessage only receives events without an event: field; named events require addEventListener('eventName', ...).

app.get('/stream', (req, res) => {
const lastId = req.headers['last-event-id'];
// Resume from where client left off
let cursor = lastId
? parseInt(lastId, 10)
: 0;
// Send missed events
const missed = eventLog.filter(
e => e.id > cursor
);
missed.forEach(e => {
res.write(`id: ${e.id}\n`);
res.write(`data: ${e.data}\n\n`);
});
// Continue streaming new events...
});
retry: 5000
data: After disconnect, browser waits
data: 5 seconds before reconnecting
import express from 'express';
const app = express();
const clients = new Set();
app.get('/events', (req, res) => {
// 1. Set SSE headers
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'X-Accel-Buffering': 'no', // Nginx
});
// 2. Send initial retry interval
res.write('retry: 5000\n\n');
// 3. Register client
clients.add(res);
console.log(`Client connected (${clients.size})`);
// 4. Handle disconnect
req.on('close', () => {
clients.delete(res);
console.log(`Client left (${clients.size})`);
});
});
// Broadcast helper
function broadcast(event, data, id) {
const msg =
(id ? `id: ${id}\n` : '') +
(event ? `event: ${event}\n` : '') +
`data: ${JSON.stringify(data)}\n\n`;
for (const client of clients) {
client.write(msg);
}
}
app.listen(3000);
// Send comment as heartbeat every 30s
setInterval(() => {
for (const client of clients) {
client.write(': heartbeat\n\n');
}
}, 30_000);
Prevents proxies and load balancers from closing idle connections. The : prefix makes it a comment, ignored by EventSource.
| Header | Why? |
|---|---|
| Content-Type: text/event-stream | Required for EventSource to accept the response |
| Cache-Control: no-cache | Prevents caching of the stream |
| Connection: keep-alive | Keeps the TCP connection open (HTTP/1.1) |
| X-Accel-Buffering: no | Disables Nginx response buffering |
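These four headers can be centralized in a tiny helper so every SSE route stays consistent (sseHeaders is an illustrative name, not an Express API):

```javascript
// Sketch: one place for the standard SSE response headers
// (sseHeaders is an illustrative helper name).
function sseHeaders() {
  return {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no', // Nginx: disable buffering
  };
}

// Usage in a route: res.writeHead(200, sseHeaders());
```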
// Trigger from any route or event
app.post('/deploy', (req, res) => {
broadcast('deploy', {
status: 'started',
user: req.user.name,
time: new Date().toISOString()
}, nextId());
res.json({ ok: true });
});
The native EventSource API lacks support for custom headers, POST requests, and request bodies. The Fetch API with ReadableStream provides full control.
async function streamSSE(url, token) {
const res = await fetch(url, {
method: 'POST',
headers: {
'Authorization': `Bearer ${token}`,
'Content-Type': 'application/json',
'Accept': 'text/event-stream',
},
body: JSON.stringify({ channel: 'deploys' }),
});
const reader = res.body
.pipeThrough(new TextDecoderStream())
.getReader();
let buffer = '';
while (true) {
const { value, done } = await reader.read();
if (done) break;
buffer += value;
const events = buffer.split('\n\n');
buffer = events.pop(); // keep incomplete
for (const raw of events) {
const event = parseSSE(raw);
if (event) handleEvent(event);
}
}
}
function parseSSE(raw) {
const event = { data: '', event: 'message',
id: null, retry: null };
for (const line of raw.split('\n')) {
if (line.startsWith('data: '))
event.data += line.slice(6) + '\n';
else if (line.startsWith('event: '))
event.event = line.slice(7);
else if (line.startsWith('id: '))
event.id = line.slice(4);
else if (line.startsWith('retry: '))
event.retry = parseInt(line.slice(7), 10);
}
event.data = event.data.trimEnd();
return event.data ? event : null;
}
| Need | EventSource | Fetch API |
|---|---|---|
| Custom headers (Bearer token) | No | Yes |
| POST / PUT / PATCH | No (GET only) | Yes |
| Request body | No | Yes |
| Auto-reconnect | Built-in | Manual |
| Last-Event-ID | Automatic | Manual |
const userConnections = new Map(); // userId => Set<res>
app.get('/notifications/stream', auth, (req, res) => {
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
});
const userId = req.user.id;
if (!userConnections.has(userId))
userConnections.set(userId, new Set());
userConnections.get(userId).add(res);
// Send unread count on connect
const unread = getUnreadCount(userId);
res.write(`event: unread-count\n`);
res.write(`data: ${unread}\n\n`);
req.on('close', () => {
userConnections.get(userId)?.delete(res);
});
});
// Called when a new notification is created
function notifyUser(userId, notification) {
const conns = userConnections.get(userId);
if (!conns) return;
const payload =
`id: ${notification.id}\n` +
`event: notification\n` +
`data: ${JSON.stringify(notification)}\n\n`;
for (const res of conns) res.write(payload);
}
function NotificationBell() {
const [count, setCount] = useState(0);
const [items, setItems] = useState([]);
useEffect(() => {
const es = new EventSource(
'/notifications/stream',
{ withCredentials: true }
);
es.addEventListener('unread-count', (e) => {
setCount(Number(e.data));
});
es.addEventListener('notification', (e) => {
const n = JSON.parse(e.data);
setItems(prev => [n, ...prev]);
setCount(prev => prev + 1);
showToast(n.title, n.body);
});
return () => es.close();
}, []);
return <Bell count={count} items={items} />;
}
import os from 'node:os';
app.get('/metrics/stream', (req, res) => {
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
});
let id = 0;
const interval = setInterval(() => {
const cpus = os.cpus();
const load = os.loadavg();
const mem = process.memoryUsage();
const payload = {
ts: Date.now(),
cpu: Math.round(load[0] * 100 / cpus.length),
mem: Math.round(mem.heapUsed / 1e6),
rss: Math.round(mem.rss / 1e6),
conns: clients.size,
};
res.write(`id: ${++id}\n`);
res.write(`event: metric\n`);
res.write(`data: ${JSON.stringify(payload)}\n\n`);
}, 1000);
req.on('close', () => clearInterval(interval));
});
const es = new EventSource('/metrics/stream');
const MAX_POINTS = 60; // 1 minute window
const chart = new Chart(ctx, {
type: 'line',
data: {
labels: [],
datasets: [{
label: 'CPU %',
data: [],
borderColor: '#f5a623',
}, {
label: 'Heap MB',
data: [],
borderColor: '#4ecb8d',
}]
},
options: { animation: false }
});
es.addEventListener('metric', (e) => {
const m = JSON.parse(e.data);
const t = new Date(m.ts).toLocaleTimeString();
chart.data.labels.push(t);
chart.data.datasets[0].data.push(m.cpu);
chart.data.datasets[1].data.push(m.mem);
if (chart.data.labels.length > MAX_POINTS) {
chart.data.labels.shift();
chart.data.datasets.forEach(d => d.data.shift());
}
chart.update();
});
SSE is the de facto standard for LLM streaming responses. OpenAI, Anthropic, Google, and others all use text/event-stream for token-by-token output.
app.post('/chat', async (req, res) => {
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
});
const stream = await openai.chat.completions.create({
model: 'gpt-4o',
messages: req.body.messages,
stream: true,
});
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content;
if (content) {
res.write(`data: ${JSON.stringify({
content
})}\n\n`);
}
}
res.write('data: [DONE]\n\n');
res.end();
});
async function chat(messages) {
const res = await fetch('/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ messages }),
});
const reader = res.body
.pipeThrough(new TextDecoderStream())
.getReader();
let buffer = '';
let output = '';
while (true) {
const { value, done } = await reader.read();
if (done) break;
buffer += value;
const parts = buffer.split('\n\n');
buffer = parts.pop();
for (const part of parts) {
const data = part.replace('data: ', '');
if (data === '[DONE]') return output;
const { content } = JSON.parse(data);
output += content;
renderMarkdown(output); // live update
}
}
}
Browsers enforce a maximum of 6 concurrent connections per domain under HTTP/1.1. Each SSE stream holds one for its lifetime, so six open tabs can exhaust the limit and block every further request to that origin.
Workarounds: use HTTP/2 (multiplexing lifts the limit), move streams to a dedicated domain such as stream.example.com, or use a SharedWorker to share one connection across tabs.

server {
listen 443 ssl http2;
server_name api.example.com;
location /events {
proxy_pass http://backend:3000;
proxy_http_version 1.1;
proxy_set_header Connection '';
proxy_buffering off;
proxy_cache off;
chunked_transfer_encoding off;
}
}
// shared-worker.js
const ports = new Set();
const es = new EventSource('/stream');
es.onmessage = (e) => {
for (const port of ports)
port.postMessage(e.data);
};
onconnect = (e) => ports.add(e.ports[0]);
// Client
const es = new EventSource('/api/stream', {
withCredentials: true // sends cookies
});
// Server (Express + CORS)
app.use(cors({
origin: 'https://app.example.com',
credentials: true
}));
app.get('/api/stream', (req, res) => {
// req.cookies available via cookie-parser
if (!req.session.userId) {
return res.status(401).end();
}
// ... stream events
});
// EventSource can't set custom headers
// Pass token in URL (use with caution)
const token = getAuthToken();
const es = new EventSource(
`/api/stream?token=${token}`
);
// Server-side validation
app.get('/api/stream', (req, res) => {
const token = req.query.token;
const user = verifyJWT(token);
if (!user) return res.status(401).end();
// ... stream events
});
Warning: Tokens in URLs appear in access logs, browser history, and Referer headers. Use short-lived tokens.
// Full header control with Fetch
const res = await fetch('/api/stream', {
headers: {
'Authorization': `Bearer ${token}`,
'Accept': 'text/event-stream',
}
});
// ... process ReadableStream
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Authorization, Last-Event-ID
Access-Control-Expose-Headers: X-Request-Id
| Method | EventSource | Fetch SSE | Security |
|---|---|---|---|
| Cookies | Yes | Yes | Good (HttpOnly, SameSite) |
| Query token | Yes | Yes | Weak (URL exposure) |
| Bearer header | No | Yes | Good |
const es = new EventSource('/stream');
es.onerror = (e) => {
switch (es.readyState) {
case EventSource.CONNECTING: // 0
// Browser is auto-reconnecting
console.log('Reconnecting...');
break;
case EventSource.CLOSED: // 2
// Server sent HTTP error or closed
console.log('Connection closed');
break;
}
};
The browser automatically reconnects when the connection drops. The server controls the delay via the retry: field.
// Announce shutdown, then end each stream. Note: retry: 0 does not
// stop reconnects; clients should call es.close() on this event.
function closeAllClients(reason) {
for (const client of clients) {
client.write(`event: shutdown\n`);
client.write(`data: ${reason}\n\n`);
client.end();
}
clients.clear();
}
// Signal permanent close with HTTP 204
app.get('/stream', (req, res) => {
if (maintenance) {
return res.status(204).end(); // no reconnect
}
// ...
});
async function connectWithBackoff(url, opts = {}) {
let attempt = 0;
const maxDelay = 30_000;
const baseDelay = 1_000;
while (true) {
try {
await streamSSE(url, opts);
attempt = 0; // reset on success
} catch (err) {
attempt++;
const delay = Math.min(
baseDelay * 2 ** attempt + Math.random() * 1000,
maxDelay
);
console.log(`Retry #${attempt} in ${delay}ms`);
await new Promise(r => setTimeout(r, delay));
}
}
}
| Status | EventSource Behavior |
|---|---|
| 200 | Connection established, events flow |
| 301/307 | Follows redirect, then connects |
| 204 | No reconnect — permanent close |
| 401/403 | No reconnect — fires onerror |
| 500/502/503 | Auto-reconnect after retry delay |
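With the Fetch approach the Last-Event-ID header must be re-sent by hand on every reconnect; a minimal sketch (resumeHeaders is an illustrative helper) that a reconnect loop could use:

```javascript
// Sketch: build request headers that resume from the last seen event.
// resumeHeaders is an illustrative helper; track lastEventId as you
// parse events and pass it on each reconnect attempt.
function resumeHeaders(lastEventId) {
  const headers = { 'Accept': 'text/event-stream' };
  if (lastEventId != null) {
    headers['Last-Event-ID'] = String(lastEventId);
  }
  return headers;
}

// On reconnect: fetch(url, { headers: resumeHeaders(lastEventId) })
```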
Each open stream holds a file descriptor; the default ulimit -n is often 1024, so raise it when scaling.

import Redis from 'ioredis';
const sub = new Redis();
const pub = new Redis();
// Each server instance subscribes
sub.subscribe('events');
sub.on('message', (channel, msg) => {
// Broadcast to local SSE clients
for (const client of clients) {
client.write(`data: ${msg}\n\n`);
}
});
// Any server can publish
function emit(event) {
pub.publish('events', JSON.stringify(event));
}
location /events {
proxy_pass http://upstream;
proxy_buffering off; # critical!
proxy_cache off;
proxy_read_timeout 86400s; # 24h
proxy_set_header Connection '';
proxy_http_version 1.1;
}
# Raise file descriptor limit
ulimit -n 65536
# /etc/sysctl.conf
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
An attacker opens thousands of SSE connections, exhausting server resources.
// Mitigation: per-IP connection limits
const ipConnections = new Map();
const MAX_PER_IP = 10;
app.get('/stream', (req, res) => {
const ip = req.ip;
const count = ipConnections.get(ip) || 0;
if (count >= MAX_PER_IP) {
return res.status(429)
.send('Too many connections');
}
ipConnections.set(ip, count + 1);
req.on('close', () => {
const c = ipConnections.get(ip) - 1;
if (c <= 0) ipConnections.delete(ip);
else ipConnections.set(ip, c);
});
// ... stream
});
Remove the client from the Set and clear all timers on disconnect.

// Always clean up
req.on('close', () => {
clients.delete(res);
clearInterval(timer);
clearTimeout(timeout);
});
If user-controlled data flows into SSE events, newlines can inject fields.
// DANGEROUS: unsanitized user input
res.write(`data: ${userInput}\n\n`);
// If userInput = "hi\nevent: admin\ndata: pwned"
// attacker injects a fake event!
// SAFE: serialize as JSON
res.write(`data: ${JSON.stringify({
message: userInput
})}\n\n`);
// Throttle: max 10 events/second/client
function throttledWrite(client, msg) {
client._count = (client._count || 0) + 1;
if (client._count > 10) return; // drop
client.write(msg);
}
// Reset counters every second
setInterval(() => {
for (const c of clients) c._count = 0;
}, 1000);
Validate the Origin header and set CORS properly.

| Resource | URL |
|---|---|
| WHATWG Living Standard | html.spec.whatwg.org §9.2 |
| MDN EventSource | developer.mozilla.org/en-US/docs/Web/API/EventSource |
| Can I Use | caniuse.com/eventsource |
| SSE Polyfill (IE11) | github.com/Yaffle/EventSource |