Streaming & Subscriptions
Most bot bugs come from getting streams wrong. This page is the operational manual: when to subscribe, what each stream emits, how to reconnect, and how to keep your in-memory state correct.
Looking for the schema of each event? This guide is about consuming streams. The exhaustive list of emitted event types is in the Events & Subscriptions reference.
All streaming RPCs
Hydra has six server-streaming RPCs. Each takes a request and returns a one-way stream of messages until you cancel or the connection drops.
| RPC | Service | Scope | Request shape | Use it for |
|---|---|---|---|---|
| SubscribeClientEvents | event | Per network | { network } | On-chain activity: sync, blocks, transactions, balance updates |
| SubscribeNodeEvents | event | Per network | { network } | Channel and payment lifecycle, peer connections, watchtower status |
| SubscribeMarketEvents | orderbook | Per pair | { base, quote } | Public market data: orderbook updates, trades, daily stats, candles |
| SubscribeDexEvents | orderbook | Global (your account) | {} | Your order activity across all markets |
| SubscribeSimpleSwaps | swap | Global | {} | Progress updates for SimpleSwap operations |
| SubscribeHtlcEvents | htlc | Per network | { network } | HTLC lifecycle (create / claim / refund / expire) |
Subscription cardinality
- Per-network — subscriptions only emit events for the network in the request. To watch Bitcoin Signet and Ethereum Sepolia, open two SubscribeClientEvents streams, one per network.
- Per-pair (SubscribeMarketEvents) — one stream per (base, quote) pair you trade.
- Global (SubscribeDexEvents, SubscribeSimpleSwaps) — one stream covers everything; no parameters.
A typical bot opens 4–6 streams at startup and keeps them all alive for its full lifetime.
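The cardinality rules above can be expressed as a small planner that derives the full stream set from a bot's configuration. This is a sketch only — `BotConfig`, `StreamPlan`, and `planStreams` are illustrative names of ours, not part of the Hydra API:

```typescript
// Sketch: derive the startup stream set from bot config.
// BotConfig / StreamPlan are illustrative shapes, not Hydra SDK types.
interface BotConfig {
  networks: string[]                              // networks the bot touches
  pairs: Array<{ base: string; quote: string }>   // markets it trades
}

interface StreamPlan {
  rpc: string
  request: Record<string, unknown>
}

function planStreams(cfg: BotConfig): StreamPlan[] {
  const plans: StreamPlan[] = []
  // Per-network streams: one of each per network in play.
  for (const network of cfg.networks) {
    plans.push({ rpc: 'SubscribeClientEvents', request: { network } })
    plans.push({ rpc: 'SubscribeNodeEvents', request: { network } })
    plans.push({ rpc: 'SubscribeHtlcEvents', request: { network } })
  }
  // Per-pair streams: one per traded market.
  for (const { base, quote } of cfg.pairs) {
    plans.push({ rpc: 'SubscribeMarketEvents', request: { base, quote } })
  }
  // Global streams: exactly one each, no parameters.
  plans.push({ rpc: 'SubscribeDexEvents', request: {} })
  plans.push({ rpc: 'SubscribeSimpleSwaps', request: {} })
  return plans
}
```

Open every plan in the returned list at startup and keep each stream alive for the bot's full lifetime.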
The golden rule: subscribe before you act
If you call CreateOrder, SimpleSwap, OpenChannel, or RequestChannelLiquidity before subscribing to the matching stream, the corresponding event can fire on the server side before your stream attaches. You'll never see it. The bot will look like it's hung.
Time →
❌ Wrong: [CreateOrder] ──────────► [server emits Trade] ──► (lost)
▲
[client connects to MarketEvents stream] —— sees nothing
✅ Right: [client connects to MarketEvents stream] ──► (idle)
▲
[CreateOrder] ──────────► [server emits Trade] ──► (delivered)
Rule of thumb: subscribe at app startup, then wait a small grace period (~500 ms is plenty) for the streams to fully establish before doing your first action.
Some streams emit a Synced / is_synced=true event once the initial state is replayed. If you need to know "I'm caught up," watch for that event before issuing your first write.
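One way to implement the "don't write until caught up" rule is a small gate that tracks the sync flag per stream. A minimal sketch, assuming you flip each flag from the handler that sees the Synced event (the `SyncGate` name is ours, not the SDK's):

```typescript
// Sketch: tracks which streams have emitted their initial Synced event.
// Mark a stream synced from its handler when Synced / is_synced=true arrives.
class SyncGate {
  private readonly synced = new Map<string, boolean>()

  constructor(streamLabels: string[]) {
    for (const label of streamLabels) this.synced.set(label, false)
  }

  markSynced(label: string): void {
    if (!this.synced.has(label)) throw new Error(`unknown stream: ${label}`)
    this.synced.set(label, true)
  }

  // Check this before your first write (CreateOrder, SimpleSwap, ...).
  allSynced(): boolean {
    for (const ok of this.synced.values()) if (!ok) return false
    return true
  }
}
```

Remember to reset the relevant flag to false when a stream drops, so writes pause until its replay completes again.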
Lifecycle of a single subscription
call Subscribe* ──► attach phase
                        │
                        ▼
               initial state replay
      (Syncing → Synced / is_synced=true)
                        │
                        ▼
               live event stream ◄───────────────┐
         (events arrive as they happen)          │
                        │                        │
        any failure ⇒ stream ends:               │
          • UNAVAILABLE                          │
          • DEADLINE_EXCEEDED (rare)             │
          • CANCELLED (you cancelled)            │
          • INTERNAL                             │
                        │                        │
                        ▼                        │
                   stream ends                   │
                        │                        │
        reconnect, replay, resume ───────────────┘
There's no built-in "resume from cursor" — the protocol is at-most-once delivery while the stream is alive, with full state replay on reconnect.
Reconnection pattern
A long-running bot will see streams drop — proxy timeouts, network blips, server restarts. Your reconnect loop is the difference between a bot that runs for an hour and one that runs for weeks.
```typescript
async function runForever<T>(
  startStream: () => grpc.ClientReadableStream<T>,
  onEvent: (evt: T) => void,
  label: string,
) {
  let backoff = 1000 // 1 s, doubles up to 30 s
  while (true) {
    const stream = startStream()
    let opened = false
    try {
      await new Promise<void>((resolve, reject) => {
        stream.on('data', (evt: T) => {
          if (!opened) { opened = true; backoff = 1000 } // reset on first event
          onEvent(evt)
        })
        stream.on('error', reject)
        stream.on('end', resolve)
      })
      console.warn(`[${label}] stream ended cleanly, reconnecting`)
    } catch (err) {
      console.error(`[${label}] stream error:`, err)
    }
    await new Promise(r => setTimeout(r, backoff + Math.random() * 500)) // jitter
    backoff = Math.min(backoff * 2, 30_000)
  }
}

// Usage
runForever(
  () => events.subscribeClientEvents(makeReq(), {}),
  evt => handleClientEvent(evt),
  'client',
)
```
Backoff guidelines
| Failure mode | Recommended backoff |
|---|---|
| First attempt after a reconnect | 1 s base, doubled to 30 s ceiling |
| UNAVAILABLE (transient) | Above schedule |
| DEADLINE_EXCEEDED | Above schedule |
| INTERNAL | Above schedule, but log loudly — server bug |
| INVALID_ARGUMENT / FAILED_PRECONDITION | Don't reconnect — your request is malformed; fix and restart |
| UNAUTHENTICATED / PERMISSION_DENIED | Don't reconnect — re-authenticate at the app level |
Add ±500 ms jitter to avoid thundering-herd on shared infrastructure.
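The table above can be encoded as a small classifier over gRPC status codes plus a jittered backoff schedule. The numeric codes follow the standard gRPC status-code assignment; the `ReconnectDecision` shape and function names are illustrative, not part of any SDK:

```typescript
// Standard gRPC status-code numbers (per the gRPC spec).
const INVALID_ARGUMENT = 3, DEADLINE_EXCEEDED = 4, PERMISSION_DENIED = 7,
      FAILED_PRECONDITION = 9, INTERNAL = 13, UNAVAILABLE = 14, UNAUTHENTICATED = 16

// Illustrative decision shape: what the reconnect loop should do next.
type ReconnectDecision =
  | { action: 'retry'; logLevel: 'warn' | 'error' }
  | { action: 'fix-request' }   // malformed request: fix and restart
  | { action: 'reauth' }        // re-authenticate at the app level

function classifyStreamError(code: number): ReconnectDecision {
  switch (code) {
    case UNAVAILABLE:
    case DEADLINE_EXCEEDED:
      return { action: 'retry', logLevel: 'warn' }
    case INTERNAL:
      return { action: 'retry', logLevel: 'error' }   // log loudly: server bug
    case INVALID_ARGUMENT:
    case FAILED_PRECONDITION:
      return { action: 'fix-request' }
    case UNAUTHENTICATED:
    case PERMISSION_DENIED:
      return { action: 'reauth' }
    default:
      return { action: 'retry', logLevel: 'warn' }    // unknown: treat as transient
  }
}

// Backoff schedule: 1 s base, doubling to a 30 s ceiling, plus jitter.
function nextBackoffMs(attempt: number): number {
  const base = Math.min(1000 * 2 ** attempt, 30_000)
  return base + Math.random() * 500
}
```

Call `classifyStreamError` in the reconnect loop's catch block and only re-enter the backoff schedule when the decision is `retry`.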
Replay and idempotency on reconnect
After a reconnect, the server replays its current state from the start. Your handler will see:
- Sync events (Syncing → Synced / is_synced=true).
- The current state of every entity in scope (e.g. each open channel for SubscribeNodeEvents).
- Live events as they happen.
This means events you've already processed once may be re-emitted. Make your handlers idempotent:
| Stream | Natural dedupe key |
|---|---|
| SubscribeClientEvents | transaction.txid for tx events, (asset_id, network) for balance events |
| SubscribeNodeEvents | channel.id + status for channel events, payment.id for payment events |
| SubscribeMarketEvents | orderbook_update.updated_orders keys, trade.taker_order_id + timestamp for trades |
| SubscribeDexEvents | Order ID + status |
| SubscribeSimpleSwaps | simple_swap_id + update kind |
| SubscribeHtlcEvents | HTLC ID + status |
A simple in-memory Map / HashMap keyed by these tuples is enough for most bots. For high-volume bots, evict old keys with a TTL so the map doesn't grow unbounded.
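A minimal sketch of such a TTL-evicting dedupe map (the `TtlDedupe` name and injectable clock are ours, for illustration and testability):

```typescript
// Sketch: dedupe seen-event keys with TTL eviction so memory stays bounded.
// The clock is injectable so the behavior is easy to test.
class TtlDedupe {
  private readonly seen = new Map<string, number>() // key -> expiry timestamp (ms)

  constructor(
    private readonly ttlMs: number,
    private readonly now: () => number = () => Date.now(),
  ) {}

  // Returns true the first time a key is seen within the TTL window.
  firstSeen(key: string): boolean {
    this.evict()
    if (this.seen.has(key)) return false
    this.seen.set(key, this.now() + this.ttlMs)
    return true
  }

  private evict(): void {
    const t = this.now()
    for (const [key, expiry] of this.seen) {
      if (expiry <= t) this.seen.delete(key)
    }
  }
}
```

In a handler you would build the key from the table above, e.g. `` `${channelId}:${status}` `` for SubscribeNodeEvents channel events, and only apply the event when `firstSeen` returns true.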
Ordering guarantees
Within a single stream: events are delivered in the order the server produced them. Causally-related events on the same entity (e.g. channel Opening → Active → ClosedRedeemable) arrive in order.
Across different streams: no ordering guarantee. The same on-chain event may surface as:
- a TransactionUpdate on SubscribeClientEvents, and
- a ChannelUpdate on SubscribeNodeEvents
…but their relative arrival order is undefined. If your logic depends on "did I see the funding tx confirm before the channel went active?", reconcile via timestamps or via a state machine that tolerates either order.
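A state machine that tolerates either order can be as simple as tracking both signals per entity and acting only once both have arrived. A sketch, assuming a channel becomes "ready" only after its funding tx confirms (ClientEvents) and the channel goes active (NodeEvents) — the class and method names are illustrative:

```typescript
// Sketch: a readiness tracker that tolerates either arrival order of the
// funding-tx confirmation (ClientEvents) and channel-active (NodeEvents).
// Map your real event types onto these two signals.
class ChannelReadiness {
  private readonly fundingConfirmed = new Set<string>()
  private readonly channelActive = new Set<string>()

  // Both handlers return true exactly when the channel has just become
  // (or already is) fully ready.
  onFundingConfirmed(channelId: string): boolean {
    this.fundingConfirmed.add(channelId)
    return this.isReady(channelId)
  }

  onChannelActive(channelId: string): boolean {
    this.channelActive.add(channelId)
    return this.isReady(channelId)
  }

  // Ready only once both signals have arrived, regardless of order.
  isReady(channelId: string): boolean {
    return this.fundingConfirmed.has(channelId) && this.channelActive.has(channelId)
  }
}
```

Because both handlers are idempotent set-inserts, replayed events on reconnect are harmless here too.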
Cancellation & cleanup
Always cancel streams when you're done with them — leaked streams hold server-side resources.
```typescript
const stream = orderbook.subscribeMarketEvents(req, {})
// ... later
stream.cancel()
```
In particular, when you change trading pair, cancel the old SubscribeMarketEvents stream before opening a new one. Don't let stale streams accumulate.
Common pitfalls
| Symptom | Cause | Fix |
|---|---|---|
| Bot places order, never sees the fill | Subscribed to SubscribeMarketEvents (or SubscribeDexEvents) after placing the order | Always subscribe at startup |
| SimpleSwap returns successfully, bot then waits forever | Didn't subscribe to SubscribeSimpleSwaps first | Subscribe before calling SimpleSwap |
| Bot processes the same channel-opened event twice on restart | Treating events as "deliver once" | Make handlers idempotent — keyed dedupe |
| Events stop arriving after ~minutes | Stream silently dropped, no reconnect logic | Wrap each stream in the reconnect pattern |
| Memory grows over time | Dedup map without TTL | Evict entries older than a few minutes |
| Channel state in your bot disagrees with GetChannels | Lost events between dropped stream and reconnect | On reconnect, your handler will see fresh state events — trust those over your in-memory snapshot |
| "network is required" error from SubscribeClientEvents | Sent the request without a network field | SubscribeClientEvents and SubscribeNodeEvents are per-network — set network explicitly |
| Bot crashes on reconnect because a "duplicate" channel ID arrives | Treating channel-created events as one-shot | Replay always includes existing channels — handle "already known" as a no-op |
Recommended bot architecture for streams
- One runForever loop per stream. Don't share retry state across streams.
- Pure handlers. Each event handler updates an in-memory mirror of state; nothing else. Keep network calls out of handlers.
- Sync flag per stream. Track whether each stream has emitted its initial Synced event. Don't issue writes until all streams you depend on are synced.
- Backpressure. Don't await a slow operation inside the recv-loop — push the event onto an in-memory queue and process out of band, otherwise the stream's send window fills and the server may drop you.
- Persist your dedupe checkpoint. A simple "last-seen ID per entity" on disk lets you skip events already applied after a restart.
- Log the stream that died. When (not if) a stream drops, the log line should identify which stream — debugging cross-stream bugs without that label is painful.
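The backpressure point can be sketched as a bounded queue that the stream handler pushes onto synchronously, while a single worker drains it out of band. This is illustrative (the `EventQueue` name and drop-oldest policy are our choices, relying on reconnect replay to restore anything dropped):

```typescript
// Sketch: decouple the recv-loop from slow processing with a bounded queue.
// push() is called from the stream's data handler: it never blocks or awaits.
class EventQueue<T> {
  private items: T[] = []
  private dropped = 0

  constructor(private readonly maxSize: number) {}

  push(evt: T): void {
    if (this.items.length >= this.maxSize) {
      this.items.shift() // drop oldest; reconnect replay will restore state
      this.dropped++
    }
    this.items.push(evt)
  }

  // Called by the out-of-band worker; returns and clears the pending batch.
  drain(): T[] {
    const batch = this.items
    this.items = []
    return batch
  }

  get droppedCount(): number { return this.dropped }
}
```

A worker can then run `for (const evt of queue.drain()) await process(evt)` on a short interval, and a nonzero `droppedCount` is a useful metric to alert on.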
Quick reference
```typescript
// Minimum viable subscription pattern
const stream = events.subscribeClientEvents({ network: SIGNET }, {})
stream.on('data', evt => handleClient(evt))
stream.on('error', err => reconnect(err))
stream.on('end', () => reconnect(new Error('stream ended')))
```
See also
- Bot Quickstart — end-to-end bot template that uses streams correctly
- Events & Subscriptions reference — every event type and its fields
- Errors — full error catalog with retry guidance per status code