Streaming & Subscriptions

How to consume gRPC server-streaming RPCs in Hydra — subscribe, reconnect, dedupe, dispose

Most bot bugs come from getting streams wrong. This page is the operational manual: when to subscribe, what each stream emits, how to reconnect, and how to keep your in-memory state correct.

Looking for the schema of each event? This guide is about consuming streams. The exhaustive list of emitted event types is in the Events & Subscriptions reference.

All streaming RPCs

Hydra has six server-streaming RPCs. Each takes a request and returns a one-way stream of messages until you cancel or the connection drops.

| RPC | Service | Scope | Request shape | Use it for |
| --- | --- | --- | --- | --- |
| SubscribeClientEvents | event | Per network | { network } | On-chain activity: sync, blocks, transactions, balance updates |
| SubscribeNodeEvents | event | Per network | { network } | Channel and payment lifecycle, peer connections, watchtower status |
| SubscribeMarketEvents | orderbook | Per pair | { base, quote } | Public market data: orderbook updates, trades, daily stats, candles |
| SubscribeDexEvents | orderbook | Global (your account) | {} | Your order activity across all markets |
| SubscribeSimpleSwaps | swap | Global | {} | Progress updates for SimpleSwap operations |
| SubscribeHtlcEvents | htlc | Per network | { network } | HTLC lifecycle (create / claim / refund / expire) |

Subscription cardinality

  • Per-network (SubscribeClientEvents, SubscribeNodeEvents, SubscribeHtlcEvents) — emit events only for the network in the request. To watch Bitcoin Signet and Ethereum Sepolia, open two SubscribeClientEvents streams, one per network.
  • Per-pair (SubscribeMarketEvents) — one stream per (base, quote) pair you trade.
  • Global (SubscribeDexEvents, SubscribeSimpleSwaps) — one stream covers everything; no parameters.

A typical bot opens 4–6 streams at startup and keeps them all alive for its full lifetime.


The golden rule: subscribe before you act

If you call CreateOrder, SimpleSwap, OpenChannel, or RequestChannelLiquidity before subscribing to the matching stream, the corresponding event can fire on the server side before your stream attaches. You'll never see it. The bot will look like it's hung.

                    Time →

  ❌ Wrong:    [CreateOrder] ──────────► [server emits Trade] ──► (lost)
                                                                 ▲
                                            [client connects to MarketEvents stream] —— sees nothing

  ✅ Right:    [client connects to MarketEvents stream] ──► (idle)
                                                       ▲
                  [CreateOrder] ──────────► [server emits Trade] ──► (delivered)

Rule of thumb: subscribe at app startup, then do your first action after a small grace period (~500 ms is plenty) for the streams to fully establish.

Some streams emit a Synced / is_synced=true event once the initial state is replayed. If you need to know "I'm caught up," watch for that event before issuing your first write.
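The Synced gate is small enough to sketch directly. A minimal example, assuming you label each stream yourself; the stream names and where you call markSynced from are this sketch's own convention, not part of the Hydra API.

```typescript
// Minimal "sync gate": block the first write until every stream you
// depend on has reported its Synced / is_synced=true event.
class SyncGate {
  private synced = new Set<string>()
  private waiters: Array<() => void> = []

  constructor(private readonly required: string[]) {}

  // Call this from a stream's handler when it sees its Synced event.
  markSynced(stream: string): void {
    this.synced.add(stream)
    if (this.isReady()) {
      this.waiters.forEach(resolve => resolve())
      this.waiters = []
    }
  }

  isReady(): boolean {
    return this.required.every(s => this.synced.has(s))
  }

  // Await this before your first CreateOrder / SimpleSwap / OpenChannel.
  whenReady(): Promise<void> {
    if (this.isReady()) return Promise.resolve()
    return new Promise(resolve => this.waiters.push(resolve))
  }
}
```

Usage: create `new SyncGate(['client', 'market'])` at startup, call `gate.markSynced('client')` from the matching handler, and `await gate.whenReady()` before the first write.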


Lifecycle of a single subscription

   call Subscribe*  ──►  attach phase
                              │
                              ▼
                       initial state replay
                       (Syncing → Synced / is_synced=true)
                              │
                              ▼
                       live event stream  ◄──┐
                              │              │
                       (events arrive       any failure ⇒ stream ends:
                        as they happen)      • UNAVAILABLE
                                             • DEADLINE_EXCEEDED (rare)
                                             • CANCELLED (you cancelled)
                                             • INTERNAL
                              │
                              ▼
                          stream ends
                              │
                  reconnect, replay, resume ────────┘

There's no built-in "resume from cursor" — the protocol is at-most-once delivery while the stream is alive, with full state replay on reconnect.


Reconnection pattern

A long-running bot will see streams drop — proxy timeouts, network blips, server restarts. Your reconnect loop is the difference between a bot that runs for an hour and one that runs for weeks.

import * as grpc from '@grpc/grpc-js'   // assuming the @grpc/grpc-js client

async function runForever<T>(
  startStream: () => grpc.ClientReadableStream<T>,
  onEvent: (evt: T) => void,
  label: string,
) {
  let backoff = 1000   // 1s, doubles up to 30s
  while (true) {
    const stream = startStream()
    let opened = false

    try {
      await new Promise<void>((resolve, reject) => {
        stream.on('data', evt => {
          if (!opened) { opened = true; backoff = 1000 }   // reset on first event
          onEvent(evt)
        })
        stream.on('error', reject)
        stream.on('end', resolve)
      })
      console.warn(`[${label}] stream ended cleanly, reconnecting`)
    } catch (err) {
      console.error(`[${label}] stream error:`, err)
    }

    await new Promise(r => setTimeout(r, backoff + Math.random() * 500))   // jitter
    backoff = Math.min(backoff * 2, 30_000)
  }
}

// Usage
runForever(
  () => events.subscribeClientEvents(makeReq(), {}),
  evt  => handleClientEvent(evt),
  'client',
)

Backoff guidelines

| Failure mode | Recommended backoff |
| --- | --- |
| First attempt after a reconnect | 1 s base, doubled to 30 s ceiling |
| UNAVAILABLE (transient) | Above schedule |
| DEADLINE_EXCEEDED | Above schedule |
| INTERNAL | Above schedule, but log loudly — server bug |
| INVALID_ARGUMENT / FAILED_PRECONDITION | Don't reconnect — your request is malformed; fix and restart |
| UNAUTHENTICATED / PERMISSION_DENIED | Don't reconnect — re-authenticate at the app level |

Add ±500 ms jitter to avoid thundering-herd on shared infrastructure.
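The table above maps directly onto a small classifier. A sketch using the standard numeric gRPC status codes; the Decision labels are this example's own convention, not Hydra API names.

```typescript
// Standard gRPC status codes (subset relevant to the backoff table).
enum Code {
  CANCELLED = 1,
  INVALID_ARGUMENT = 3,
  DEADLINE_EXCEEDED = 4,
  PERMISSION_DENIED = 7,
  FAILED_PRECONDITION = 9,
  INTERNAL = 13,
  UNAVAILABLE = 14,
  UNAUTHENTICATED = 16,
}

type Decision = 'retry' | 'retry-loudly' | 'fix-request' | 'reauthenticate'

function classify(code: number): Decision {
  switch (code) {
    case Code.UNAVAILABLE:
    case Code.DEADLINE_EXCEEDED:
      return 'retry'            // transient: follow the backoff schedule
    case Code.INTERNAL:
      return 'retry-loudly'     // retry, but log loudly: likely a server bug
    case Code.INVALID_ARGUMENT:
    case Code.FAILED_PRECONDITION:
      return 'fix-request'      // malformed request: don't reconnect
    case Code.UNAUTHENTICATED:
    case Code.PERMISSION_DENIED:
      return 'reauthenticate'   // re-authenticate at the app level
    default:
      return 'retry'            // unknown codes: assume transient
  }
}

// Jittered exponential backoff matching runForever above:
// 1 s base, doubling to a 30 s ceiling, plus up to 500 ms of jitter.
function nextDelay(attempt: number): number {
  const base = Math.min(1000 * 2 ** attempt, 30_000)
  return base + Math.random() * 500
}
```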


Replay and idempotency on reconnect

After a reconnect, the server replays its current state from the start. Your handler will see:

  1. Sync events (Syncing → Synced / is_synced=true).
  2. The current state of every entity in scope (e.g. each open channel for SubscribeNodeEvents).
  3. Live events as they happen.

This means events you've already processed once may be re-emitted. Make your handlers idempotent:

| Stream | Natural dedupe key |
| --- | --- |
| SubscribeClientEvents | transaction.txid for tx events, (asset_id, network) for balance events |
| SubscribeNodeEvents | channel.id + status for channel events, payment.id for payment events |
| SubscribeMarketEvents | orderbook_update.updated_orders keys, trade.taker_order_id + timestamp for trades |
| SubscribeDexEvents | Order ID + status |
| SubscribeSimpleSwaps | simple_swap_id + update kind |
| SubscribeHtlcEvents | HTLC ID + status |

A simple in-memory Map / HashMap keyed by these tuples is enough for most bots. For high-volume bots, evict old keys with a TTL so the map doesn't grow unbounded.
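A sketch of such a TTL map: serialize the tuple from the table above into a string key, and let `seen()` tell you whether to process or skip. The class is this example's own, not a Hydra helper.

```typescript
// Minimal TTL dedupe map: seen() returns true the first time a key is
// observed within the TTL window, false for replays.
class DedupeMap {
  private entries = new Map<string, number>()

  constructor(private readonly ttlMs: number) {}

  // Returns true if this key is new (process the event), false if duplicate.
  seen(key: string, now = Date.now()): boolean {
    this.evict(now)
    if (this.entries.has(key)) return false
    this.entries.set(key, now)
    return true
  }

  // O(n) sweep per call; fine at bot volumes. For very hot streams,
  // sweep on a timer instead.
  private evict(now: number): void {
    this.entries.forEach((at, key) => {
      if (now - at > this.ttlMs) this.entries.delete(key)
    })
  }
}
```

Usage (field names assumed for illustration): `if (dedupe.seen(evt.channel.id + ':' + evt.status)) handle(evt)`.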


Ordering guarantees

Within a single stream: events are delivered in the order the server produced them. Causally-related events on the same entity (e.g. channel Opening → Active → Closed → Redeemable) arrive in order.

Across different streams: no ordering guarantee. The same on-chain event may surface as:

  • a TransactionUpdate on SubscribeClientEvents, and
  • a ChannelUpdate on SubscribeNodeEvents

…but their relative arrival order is undefined. If your logic depends on "did I see the funding tx confirm before the channel went active?", reconcile via timestamps or via a state machine that tolerates either order.
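One way to tolerate either order is a small join on the entity ID: act only once both facts have arrived, whichever stream delivered first. The class, method names, and the two event kinds joined here are illustrative assumptions, not Hydra API names.

```typescript
// Order-tolerant join of two causally-related events from different
// streams: funding tx confirmed (SubscribeClientEvents) and channel
// active (SubscribeNodeEvents). onReady fires exactly once per channel.
class ChannelReadiness {
  private confirmed = new Set<string>()
  private active = new Set<string>()

  constructor(private readonly onReady: (channelId: string) => void) {}

  fundingConfirmed(channelId: string): void {
    this.confirmed.add(channelId)
    this.check(channelId)
  }

  channelActive(channelId: string): void {
    this.active.add(channelId)
    this.check(channelId)
  }

  private check(channelId: string): void {
    // Fire once, whichever event arrived last; then forget the channel
    // so replayed events become no-ops.
    if (this.confirmed.has(channelId) && this.active.has(channelId)) {
      this.confirmed.delete(channelId)
      this.active.delete(channelId)
      this.onReady(channelId)
    }
  }
}
```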


Cancellation & cleanup

Always cancel streams when you're done with them — leaked streams hold server-side resources.

const stream = orderbook.subscribeMarketEvents(req, {})
// ... later
stream.cancel()

In particular, when you change trading pair, cancel the old SubscribeMarketEvents stream before opening a new one. Don't let stale streams accumulate.


Common pitfalls

| Symptom | Cause | Fix |
| --- | --- | --- |
| Bot places order, never sees the fill | Subscribed to SubscribeMarketEvents (or SubscribeDexEvents) after placing the order | Always subscribe at startup |
| SimpleSwap returns successfully, bot then waits forever | Didn't subscribe to SubscribeSimpleSwaps first | Subscribe before calling SimpleSwap |
| Bot processes the same channel-opened event twice on restart | Treating events as "deliver once" | Make handlers idempotent — keyed dedupe |
| Events stop arriving after ~minutes | Stream silently dropped, no reconnect logic | Wrap each stream in the reconnect pattern |
| Memory grows over time | Dedupe map without TTL | Evict entries older than a few minutes |
| Channel state in your bot disagrees with GetChannels | Lost events between dropped stream and reconnect | On reconnect, your handler will see fresh state events — trust those over your in-memory snapshot |
| `network is required` error from SubscribeClientEvents | Sent the request without a network field | SubscribeClientEvents and SubscribeNodeEvents are per-network — set network explicitly |
| Bot crashes on reconnect because a "duplicate" channel ID arrives | Treating channel-created events as one-shot | Replay always includes existing channels — handle "already known" as a no-op |

Operational checklist

  1. One runForever loop per stream. Don't share retry state across streams.
  2. Pure handlers. Each event handler updates an in-memory mirror of state; nothing else. Keep network calls out of handlers.
  3. Sync flag per stream. Track whether each stream has emitted its initial Synced event. Don't issue writes until all streams you depend on are synced.
  4. Backpressure. Don't await a slow operation inside the recv-loop — push the event onto an in-memory queue and process out of band, otherwise the stream's send window fills and the server may drop you.
  5. Persist your dedupe checkpoint. A simple "last-seen ID per entity" on disk lets you skip events already applied after a restart.
  6. Log the stream that died. When (not if) a stream drops, the log line should identify which stream — debugging cross-stream bugs without that label is painful.
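
Points 2 and 4 above can be sketched as a single queue: the recv-loop only enqueues, and one consumer drains out of band, so slow processing never stalls the gRPC receive path. A minimal sketch, not a Hydra helper.

```typescript
// Decouple the stream's data callback from slow processing: push() is
// cheap and synchronous; drain() processes events sequentially out of band.
class EventQueue<T> {
  private queue: T[] = []
  private draining = false

  constructor(private readonly process: (evt: T) => Promise<void>) {}

  // Call this (and nothing slower) from stream.on('data', ...).
  push(evt: T): void {
    this.queue.push(evt)
    void this.drain()
  }

  private async drain(): Promise<void> {
    if (this.draining) return       // single consumer at a time
    this.draining = true
    try {
      while (this.queue.length > 0) {
        await this.process(this.queue.shift()!)
      }
    } finally {
      this.draining = false
    }
  }
}
```

Usage: `stream.on('data', evt => q.push(evt))`, with all database writes and follow-up RPCs inside the process callback.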

Quick reference

// Minimum viable subscription pattern
const stream = events.subscribeClientEvents({ network: SIGNET }, {})
stream.on('data',  evt  => handleClient(evt))
stream.on('error', err  => reconnect(err))
stream.on('end',   ()   => reconnect(new Error('stream ended')))
