# WebSockets & Subscriptions

Real-time event streams via WebSocket subscriptions (`eth_subscribe`) for `newHeads`, `logs`, and `newPendingTransactions`.
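A minimal sketch of opening a `newHeads` stream with a raw JSON-RPC `eth_subscribe` call, assuming the `ws` package; the endpoint URL is a placeholder:

```ts
import WebSocket from 'ws';

const WS_URL = 'wss://example-node/websocket'; // placeholder endpoint

const ws = new WebSocket(WS_URL);

ws.on('open', () => {
  // Request a newHeads subscription; the node replies with a subscription id.
  ws.send(
    JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'eth_subscribe', params: ['newHeads'] })
  );
});

ws.on('message', (data) => {
  const msg = JSON.parse(data.toString());

  if (msg.id === 1) {
    // Subscription confirmation, or an error such as
    // "no new subscription can be created" when the node-wide cap is hit.
    if (msg.error) console.error('subscribe failed:', msg.error.message);
    else console.log('subscription id:', msg.result);
    return;
  }

  if (msg.method === 'eth_subscription') {
    const header = msg.params.result;
    console.log('new block:', parseInt(header.number, 16), header.hash);
  }
});
```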
## Configuration Limits

| Parameter | Default | Description |
|---|---|---|
| `max_subscriptions_new_head` | 10,000 | Maximum concurrent `newHeads` subscriptions per node |
| Subscription buffer | 10 blocks | Internal buffer per subscription; slow consumers trigger disconnect |
| Connection capacity | 100 | Maximum subscriptions per connection (all types combined) |
⚠️ Nodes enforce `max_subscriptions_new_head` globally. When the limit is reached, new `eth_subscribe('newHeads')` calls return `no new subscription can be created`.

## Subscription Types
| Type | Description |
|---|---|
| `newHeads` | Streams canonical block headers. Capped by `max_subscriptions_new_head` (default 10,000). Use for lightweight UI updates. |
| `logs` | Filters logs by address/topics across live blocks. Obeys `max_blocks_for_log` and inherits Cosmos rate limits. |
| `newPendingTransactions` | Surfaces pending transactions from the mempool. Sensitive to `mempool.cache_size` pressure; avoid on public endpoints. |
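For `logs`, the subscription takes a filter object. A sketch over a raw WebSocket, with the endpoint, contract address, and topic hash as placeholders:

```ts
import WebSocket from 'ws';

// Placeholders: swap in your endpoint, contract, and event signature hash.
const WS_URL = 'wss://example-node/websocket';
const CONTRACT_ADDRESS = '0xYourContractAddress';
const TRANSFER_TOPIC =
  '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'; // keccak256("Transfer(address,address,uint256)")

const ws = new WebSocket(WS_URL);

ws.on('open', () => {
  // Filtered logs subscription: only events emitted by CONTRACT_ADDRESS
  // whose first topic matches TRANSFER_TOPIC are streamed.
  ws.send(JSON.stringify({
    jsonrpc: '2.0',
    id: 2,
    method: 'eth_subscribe',
    params: ['logs', { address: CONTRACT_ADDRESS, topics: [TRANSFER_TOPIC] }],
  }));
});

ws.on('message', (data) => {
  const msg = JSON.parse(data.toString());
  if (msg.method === 'eth_subscription') {
    const log = msg.params.result;
    console.log('log:', log.transactionHash, log.logIndex);
  }
});
```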
## Connection Management

- **Heartbeat loop**: Send keep-alives (WebSocket ping/pong or a lightweight RPC call) every ~20s to keep proxies from reclaiming idle sockets.
- **Backoff strategy**: Reconnect with exponential backoff when the node drops the socket (e.g., capacity reached). Avoid thundering herds. See the reconnect sketch after this list.
- **Replay window**: Persist the last seen block number and resubscribe with a `fromBlock` when you reconnect to cover missed events.
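A reconnect loop combining the heartbeat and backoff ideas might look like the following sketch (again using the `ws` package; the endpoint and the re-issued subscription are placeholders):

```ts
import WebSocket from 'ws';

const WS_URL = 'wss://example-node/websocket'; // placeholder endpoint

function connect(attempt = 0): void {
  const ws = new WebSocket(WS_URL);
  let heartbeat: NodeJS.Timeout | undefined;

  ws.on('open', () => {
    attempt = 0; // reset backoff once the socket is healthy
    // Keep proxies from reclaiming an idle socket (~20s cadence).
    heartbeat = setInterval(() => ws.ping(), 20_000);
    // Re-issue your eth_subscribe calls here (newHeads, logs, ...).
    ws.send(
      JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'eth_subscribe', params: ['newHeads'] })
    );
  });

  ws.on('close', () => {
    if (heartbeat) clearInterval(heartbeat);
    // Exponential backoff with a cap plus jitter to avoid thundering herds.
    const delay = Math.min(30_000, 1_000 * 2 ** attempt) + Math.random() * 1_000;
    setTimeout(() => connect(attempt + 1), delay);
  });

  ws.on('error', (err) => {
    console.error('ws error:', err.message);
    ws.close();
  });
}

connect();
```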
## Replay & Gap Handling
When a WebSocket disconnects:

- Track the last processed block (store `block.number` and `block.hash`)
- On reconnect, fetch the current head via HTTP `eth_blockNumber`
- Backfill gaps using `eth_getLogs` with block ranges ≤ 2,000 blocks (respects `MaxBlocksForLog`)
- Deduplicate by `(transactionHash, logIndex)` to handle overlaps
- Resume the subscription from the current head
```ts
// Gap backfill pattern
async function backfillGap(fromBlock: number, toBlock: number) {
  const logs = await httpProvider.send('eth_getLogs', [
    {
      fromBlock: `0x${fromBlock.toString(16)}`,
      toBlock: `0x${toBlock.toString(16)}`,
      address: contractAddress,
    },
  ]);

  // Deduplicate by (transactionHash, logIndex) and process
  const seen = new Set<string>();
  for (const log of logs) {
    const key = `${log.transactionHash}-${log.logIndex}`;
    if (!seen.has(key)) {
      seen.add(key);
      processLog(log);
    }
  }
}
```
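The helper above assumes the requested range already fits within the limit. A small wrapper that splits a larger gap into chunks of at most 2,000 blocks (to respect `MaxBlocksForLog`) could look like this sketch:

```ts
// Split a large gap into ranges of at most MAX_RANGE blocks so each
// eth_getLogs call stays within the node's MaxBlocksForLog limit.
const MAX_RANGE = 2_000;

async function backfillLargeGap(fromBlock: number, toBlock: number): Promise<void> {
  for (let start = fromBlock; start <= toBlock; start += MAX_RANGE) {
    const end = Math.min(start + MAX_RANGE - 1, toBlock);
    await backfillGap(start, end);
  }
}
```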
## Best Practices

- **One subscription per topic**: Fan out internally rather than creating duplicate `newHeads` subscriptions (see the sketch after this list)
- **Monitor buffer health**: Track dropped subscriptions (channel closure) as a signal of slow consumption
- **Hybrid approach**: Use WebSocket for real-time updates, HTTP for historical queries and backfills
- **Avoid trace/sim during WS handling**: Offload heavy `debug_traceTransaction` calls to background workers
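One way to implement the single-subscription fan-out mentioned above, sketched with Node's `EventEmitter` (the consumer callbacks are placeholders):

```ts
import { EventEmitter } from 'node:events';

// One upstream newHeads subscription, many internal consumers.
const headBus = new EventEmitter();
headBus.setMaxListeners(0); // allow any number of internal listeners

// Wire this as the handler for the single upstream subscription's messages.
export function onNewHead(header: { number: string; hash: string }): void {
  headBus.emit('head', header);
}

// Internal consumers attach to the bus instead of opening their own subscriptions.
headBus.on('head', (header) => console.log('UI update for block', parseInt(header.number, 16)));
headBus.on('head', (header) => console.log('metrics tick for', header.hash));
```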
## Troubleshooting

| Error | Cause | Fix |
|---|---|---|
| `no new subscription can be created` | Node `newHeads` limit (10,000) reached | Check node config for `max_subscriptions_new_head`; reuse existing subscriptions or request a limit increase. |
| Subscription closed unexpectedly | Consumer not draining the buffer fast enough | Increase processing speed or buffer size; slow consumers trigger auto-close. |
| Missing blocks after reconnect | Gap in the stream during disconnect | Backfill using `eth_getLogs` with the last processed block as `fromBlock`. |
| Connection drops frequently | Network instability or missing heartbeats | Implement a ping/pong heartbeat (every ~20–30s); reconnect with exponential backoff. |
## References

- WebSocket implementation: `github.com/sei-protocol/sei-chain/evmrpc/subscribe.go`
- Configuration: EVM RPC Config