Low-latency Solana data with Jito ShredStream: Rust, TypeScript, Go

Written by: Olha Diachuk · 12 min read

Date: December 18, 2025 · Updated on: December 18, 2025

Jito ShredStream gives you ordered entries from the leader before a normal RPC node finishes replay. RPC Fast measured traders receiving transactions ~2 minutes earlier through direct ShredStream than through Yellowstone gRPC on the same node setup. That head start only pays off if someone can consume it, and not everyone on your team writes Rust deserializers for fun (we do 😀).

This guide focuses on one question: How do you build and operate Jito ShredStream clients in Rust, TypeScript, and Go without blowing up reliability or cost?

We skip protocol theory. Here we stay practical:

  • What stack to use for which job;
  • How to wire clients in each language;
  • Common mistakes and patterns that protect your P&L.

Rust, TypeScript, and Go each push Jito ShredStream in a different way. The goal in all three cases is identical: decode entries fast, keep heartbeats healthy, and avoid backpressure that bleeds alpha.

Rust: reference implementation with solana-stream-sdk

Rust is the reference path for ShredStream today. Validators DAO ships a first-class SDK, solana-stream-sdk, that wraps jito-labs/mev-protos and exposes a ShredstreamClient and filter helpers with good ergonomics (solana-stream-sdk crate, GitHub repo).

Typical production pattern:

  • One async task per subscription group (per market or per account set).
  • Bincode decode into solana_entry::entry::Entry for pipeline fan-out.
  • Aggressive metrics around slot lag, entry batch size, and decode latency.

Minimal Rust example focused on entries for one program:

use solana_stream_sdk::{
    CommitmentLevel,
    ShredstreamClient,
    SubscribeEntriesRequest,
};
use solana_entry::entry::Entry;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    let endpoint = env::var("SHREDS_ENDPOINT")
        .unwrap_or_else(|_| "https://shreds-ams.erpc.global".to_string());

    let mut client = ShredstreamClient::connect(&endpoint).await?;

    // Filter by program or account set in real systems
    let request = ShredstreamClient::create_entries_request_for_account(
        "6EF8rrecthR5Dkzon8Nwu78hRvfCKubJ14M5uBEwF6P",
        Some(CommitmentLevel::Processed),
    );

    let mut stream = client.subscribe_entries(request).await?;

    while let Some(entry_msg) = stream.message().await? {
        let entries: Vec<Entry> = bincode::deserialize(&entry_msg.entries)?;
        for entry in entries {
            // Route entry.transactions into your internal bus here
            println!("slot={} tx_count={}", entry_msg.slot, entry.transactions.len());
        }
    }

    Ok(())
}

High-level guidance for Rust

  • Treat Processed as default for low-latency trading and run your own confirmation tracking.
  • Keep decode on dedicated worker threads once throughput grows (see the sketch after this list).
  • Wire Prometheus on:
    • slot delay vs block_time
    • entry batch size distribution
    • decode failures and bincode errors
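
To make the first two points concrete, here is a minimal sketch of the decode-offload pattern. It assumes the same solana_entry types as the example above, a tokio runtime, and plain atomic counters standing in for whatever Prometheus registry you use; all names are illustrative, not SDK API:

use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

use solana_entry::entry::Entry;
use tokio::sync::mpsc;

/// Counters you would normally export through your metrics stack.
#[derive(Default)]
struct IngestMetrics {
    entries_decoded: AtomicU64,
    decode_failures: AtomicU64,
}

/// Raw batch as it comes off the stream: (slot, serialized entries).
type RawBatch = (u64, Vec<u8>);

fn spawn_decode_worker(
    mut rx: mpsc::Receiver<RawBatch>,
    metrics: Arc<IngestMetrics>,
) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        while let Some((slot, bytes)) = rx.recv().await {
            // Bincode decode is CPU-bound; keep it off the async reactor.
            let decoded = tokio::task::spawn_blocking(move || {
                bincode::deserialize::<Vec<Entry>>(&bytes)
            })
            .await;

            match decoded {
                Ok(Ok(entries)) => {
                    metrics
                        .entries_decoded
                        .fetch_add(entries.len() as u64, Ordering::Relaxed);
                    // Fan the decoded entries for `slot` into your internal bus here.
                    let _ = (slot, entries);
                }
                _ => {
                    metrics.decode_failures.fetch_add(1, Ordering::Relaxed);
                }
            }
        }
    })
}

The stream loop from the example above then only forwards (entry_msg.slot, entry_msg.entries) into the channel and measures slot lag, while the workers absorb decode cost and surface failures through the counters.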

If you need DEX-specific parsing on top, libraries like solana-streamer-sdk extend this approach for PumpFun and Raydium streams (solana-streamer-sdk).

TypeScript: NAPI bridge with @validators-dao/solana-stream-sdk

Pure TypeScript struggles with raw shred decoding at Jito volumes. Validators DAO addresses this with NAPI and Rust under the hood, exposed through @validators-dao/solana-stream-sdk and @validators-dao/solana-shreds-client (NPM package, NPM shreds client).

You stay in Node, while Rust handles entry decoding. That keeps latency predictable and CPU under control (ELSOUL announcement, press release).

Minimal TS example that subscribes to shreds and tracks receive latency by slot:

import {
  ShredsClient,
  ShredsClientCommitmentLevel,
  // decodeSolanaEntries,
} from '@validators-dao/solana-stream-sdk'
import 'dotenv/config'

const endpoint = process.env.SHREDS_ENDPOINT!
const client = new ShredsClient(endpoint)

// Start simple. Add filters once you trust observability.
const request = {
  accounts: {},
  transactions: {},
  slots: {},
  commitment: ShredsClientCommitmentLevel.Processed,
}

const receivedSlots = new Map<number, Date[]>()

client.subscribeEntries(
  JSON.stringify(request),
  (_error: any, buffer: any) => {
    const receivedAt = new Date()
    if (!buffer) {
      return
    }

    const { slot /*, entries */ } = JSON.parse(buffer)

    // Optional decode step if you need full entries
    // const decoded = decodeSolanaEntries(new Uint8Array(entries))

    if (!receivedSlots.has(slot)) {
      receivedSlots.set(slot, [receivedAt])
    } else {
      receivedSlots.get(slot)!.push(receivedAt)
    }

    // Export latency metrics from here
  },
)

High-level guidance for TypeScript

  • Restrict ShredStream ingestion to Node backends, not browser apps.
  • Keep logic in TypeScript, decoding in Rust through NAPI. Avoid custom binary parsing in JS.
  • Run one ingestion process per region and push normalized events into Kafka, NATS, or Redis for your traders or risk systems.
  • Watch memory. Long-running Node processes with heavy buffers deserve heap profiling.

Go: thin gRPC client on top of Jito protobufs

There is no single dominant Go SDK equivalent to solana-stream-sdk today. Multiple projects publish Go bindings generated from the same Jito shredstream.proto (cheap-dev jito-sdk, Prophet-Solutions jito-sdk, bloXroute mev-protos-go). All expose a ShredstreamClient and the Heartbeat RPC from the official schemas (jito-labs/mev-protos).

Production Go teams usually:

  • Generate their own client from the upstream mev-protos repository for full control.
  • Implement robust heartbeat scheduling using the TTL in HeartbeatResponse.
  • Offload heavy decoding or strategy logic to internal microservices written in Rust or C++, where the lowest latency is required.

Minimal Go example that focuses on heartbeats to a Shredstream endpoint:

package main

import (
    "context"
    "log"
    "net"
    "os"
    "time"

    "google.golang.org/grpc"
    shredpb "github.com/cheap-dev/jito-sdk/pb/shredstream"
    sharedpb "github.com/cheap-dev/jito-sdk/pb/shared" // package name from that repo
)

func main() {
    endpoint := os.Getenv("SHREDSTREAM_HEARTBEAT_ENDPOINT")
    if endpoint == "" {
        log.Fatal("SHREDSTREAM_HEARTBEAT_ENDPOINT missing")
    }

    // Plaintext gRPC for brevity; use TLS credentials against production endpoints
    conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    client := shredpb.NewShredstreamClient(conn)

    // Source IP should match what the proxy observes
    ipStr := os.Getenv("SHREDSTREAM_SOURCE_IP")
    if ipStr == "" {
        log.Fatal("SHREDSTREAM_SOURCE_IP missing")
    }
    ip := net.ParseIP(ipStr)
    if ip == nil {
        log.Fatalf("invalid ip: %s", ipStr)
    }

    // Port: the UDP port where your shred receiver listens (0 here as a placeholder)
    socket := &sharedpb.Socket{Ip: ip.String(), Port: 0}

    for {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        hb := &shredpb.Heartbeat{
            Socket:  socket,
            Regions: []string{"fra"}, // pick regions from Jito docs
        }
        resp, err := client.SendHeartbeat(ctx, hb)
        cancel()
        if err != nil {
            log.Printf("heartbeat error: %v", err)
            time.Sleep(time.Second)
            continue
        }

        ttl := time.Duration(resp.TtlMs) * time.Millisecond
        log.Printf("heartbeat ok ttl=%s", ttl)

        // Re-send well before the TTL expires to avoid drops
        sleepFor := ttl / 2
        if sleepFor <= 0 {
            sleepFor = 500 * time.Millisecond
        }
        time.Sleep(sleepFor)
    }
}

High-level guidance for Go

  • Treat Go as the right place for infrastructure glue:
    • heartbeats
    • routing between regional proxies
    • backpressure control and health checks
  • For heavy entry parsing or trading logic, either:
    • bind into Rust through FFI, or
    • forward raw entry payloads into a Rust microservice through a low-latency channel (see the sketch after this list).
  • Store the generated protobufs in your own module rather than importing unpinned third-party repos into your core trading codebase.
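
For the second option in that list, the Rust side of the hand-off can stay very small. A minimal sketch, assuming the Go process forwards each raw entry batch as a 4-byte length-prefixed frame over a local TCP socket; the framing, port, and handler are illustrative, not part of any SDK:

use solana_entry::entry::Entry;
use tokio::io::AsyncReadExt;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Local port the Go ingest process connects to; illustrative only.
    let listener = TcpListener::bind("127.0.0.1:7070").await?;

    loop {
        let (mut socket, _) = listener.accept().await?;
        tokio::spawn(async move {
            loop {
                // 4-byte big-endian length prefix, then the bincode-encoded entries.
                let len = match socket.read_u32().await {
                    Ok(len) => len as usize,
                    Err(_) => break, // peer closed
                };
                let mut buf = vec![0u8; len];
                if socket.read_exact(&mut buf).await.is_err() {
                    break;
                }
                match bincode::deserialize::<Vec<Entry>>(&buf) {
                    Ok(entries) => {
                        // Hand entries to your strategy / risk pipeline here.
                        println!("received batch with {} entries", entries.len());
                    }
                    Err(err) => eprintln!("decode failed: {err}"),
                }
            }
        });
    }
}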

Common mistakes and how to solve them

TL;DR

Most ShredStream failures come from design, not from Jito or the provider.

If you:

  • Treat ShredStream as an early signal, not a database
  • Engineer for bursts, not averages
  • Respect heartbeats and TTL
  • Keep ingest thin and separate from the experiments
  • Check ROI against P&L and risk

…then ShredStream becomes an advantage instead of a new outage source.

Mistake 1. Treating ShredStream as a full indexer

  • What you see in practice: Geyser / indexed RPC removed too early; ad-hoc queries and backfills hurt; incident and risk analysis require custom scripts each time.
  • Root cause: ShredStream gives ordered entries, not a queryable state or history.
  • What to do instead: Keep ShredStream as the earliest signal only. Normalize entries into an append-only log (Kafka, Redpanda, NATS, ClickHouse) with retention. Pair with Geyser or indexed RPC for history, joins, and slow queries.

Mistake 2. No plan for packet and slot bursts

  • What you see in practice: Works in dev, stalls in mainnet spikes; OOM events or restarts during high volatility; latency and backlog grow exactly when the market is most profitable.
  • Root cause: Design sized for the average TPS, not the worst 1 percent of slots.
  • What to do instead: Size for peak slots using captured traffic. Use bounded queues and an explicit drop/shed policy in Rust and Go (see the sketch after this table). In TS, avoid single long event loops; shard by slot or region. Export “queue depth”, “backlog age”, and “dropped entries” metrics.

Mistake 3. Ignoring heartbeats and TTL

  • What you see in practice: Streams “randomly” close every few minutes; unexplained gaps in data; the provider reports healthy infra while you see holes.
  • Root cause: Heartbeat RPC not implemented or not aligned with the TTL from HeartbeatResponse.
  • What to do instead: Use generated Heartbeat / HeartbeatResponse types from clients. Run a dedicated heartbeat loop/task separate from entry processing. Use the TTL from the response instead of hardcoding. Alert on time since last successful heartbeat and on heartbeat error rate.

Mistake 4. Overloading ShredStream clients with downstream responsibilities

  • What you see in practice: A single ingest service writes to multiple DBs, caches, and queues; a slow analytics or warehouse path stalls the critical trading path; any schema change is risky.
  • Root cause: The ShredStream client is responsible for ingest, enrichment, storage, and analytics prep.
  • What to do instead: Keep ingest thin: subscribe, validate, normalize, publish to one internal stream. Move enrichment, joins, and heavy persistence into downstream consumers. Use stable schemas (protobuf / Avro) between stages to decouple teams.

Mistake 5. Mixing experiment and production in one client

  • What you see in practice: A research feature or UI tweak breaks ingest; rollbacks during incidents are slow and high-risk; no clear SLO boundaries.
  • Root cause: Ingest, strategies, and dashboards share one binary and deployment unit.
  • What to do instead: Run one stable ingest service per region/provider. Expose a typed internal stream to which strategies, risk, and dashboards subscribe. Version and deploy ingest and strategy layers independently. Use feature flags in downstream services, not in the ingest binary.

Mistake 6. Ignoring the regional and provider layout

  • What you see in practice: ShredStream is “not much faster” than RPC for some desks; latency and volatility vary by venue without a clear reason.
  • Root cause: ShredStream region and client location not aligned with validators or trading venues.
  • What to do instead: Align ShredStream regions and client placement with validator and exchange locations. Measure p50 / p99 latency per region from your own stack, not provider dashboards. Where possible, run ingest close to both ShredStream and trading venues.

Mistake 7. No ROI check on ShredStream adoption

  • What you see in practice: More infra and provider complexity; no clear evidence of better fills or lower slippage; internal debate on “is this worth it?”
  • Root cause: ShredStream treated as an infra upgrade, not a business metric.
  • What to do instead: Define a primary metric per use case: fill rate, slippage, missed arb count, VaR. Run an A/B test: a ShredStream-backed vs legacy pipeline for selected strategies. If the delta is small, reduce the scope or simplify the stack instead of expanding it.
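
The “bounded queues and explicit drop/shed policy” advice from the burst mistake is cheap to implement. A minimal Rust sketch, assuming the ingest loop feeds decode workers through a bounded tokio channel; the capacity, names, and drop policy are illustrative, not tuned values:

use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

use tokio::sync::mpsc;

/// Raw batch as it comes off the stream: (slot, serialized entries).
type RawBatch = (u64, Vec<u8>);

struct BoundedIngest {
    tx: mpsc::Sender<RawBatch>,
    dropped: Arc<AtomicU64>,
}

impl BoundedIngest {
    fn new(capacity: usize) -> (Self, mpsc::Receiver<RawBatch>) {
        let (tx, rx) = mpsc::channel(capacity);
        (
            Self {
                tx,
                dropped: Arc::new(AtomicU64::new(0)),
            },
            rx,
        )
    }

    /// Shed load instead of letting the backlog grow during bursty slots.
    /// Dropping the incoming batch vs. the oldest one is a strategy decision;
    /// this sketch simply drops the new batch and counts it.
    fn offer(&self, batch: RawBatch) {
        if let Err(mpsc::error::TrySendError::Full(_)) = self.tx.try_send(batch) {
            // Export this counter: it is the "dropped entries" metric above.
            self.dropped.fetch_add(1, Ordering::Relaxed);
        }
    }
}

In the stream loop this becomes ingest.offer((entry_msg.slot, entry_msg.entries)), and the dropped counter, together with queue depth, tells you exactly when you are shedding during volatile slots.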

Advanced notes for leads and architects

For teams with serious Rust usage, the first three mistakes in the table are mostly solved by:

  • Thin Rust ingest services that talk to ShredStream and immediately publish normalized entries into a log or in-memory bus.
  • Business logic implemented in separate services, often a mix of Rust, Go, and TypeScript, with well-defined schemas (see the sketch below).
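
As a concrete shape for that split, here is a minimal sketch of a versioned event type and an in-memory bus, using a tokio broadcast channel to stand in for Kafka or NATS; the struct fields and function names are illustrative:

use tokio::sync::broadcast;

/// Stable, versioned event that downstream services consume.
/// Kept independent from whatever the ShredStream SDK returns.
#[derive(Clone, Debug)]
pub struct NormalizedEntryEvent {
    pub schema_version: u16,
    pub slot: u64,
    pub tx_count: usize,
    pub received_at_unix_ms: u64,
}

/// Ingest owns the sender; every consumer gets its own receiver.
pub fn new_bus(capacity: usize) -> broadcast::Sender<NormalizedEntryEvent> {
    let (tx, _) = broadcast::channel(capacity);
    tx
}

/// Called from the thin ingest loop after decoding an entry batch.
pub fn publish(bus: &broadcast::Sender<NormalizedEntryEvent>, event: NormalizedEntryEvent) {
    // A send error only means there are currently no subscribers;
    // the thin ingest service keeps going either way.
    let _ = bus.send(event);
}

/// A strategy, risk, or dashboard service subscribes independently.
pub async fn consume(bus: &broadcast::Sender<NormalizedEntryEvent>) {
    let mut rx = bus.subscribe();
    // Lagged/closed errors end the loop in this sketch; handle them explicitly in production.
    while let Ok(event) = rx.recv().await {
        // Business logic lives here, decoupled from ingest deploys.
        println!("slot={} txs={}", event.slot, event.tx_count);
    }
}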

For Go-heavy infra teams, existing SDKs like the ShredstreamClient implementations on pkg.go.dev reduce protocol risk, but they do not fix design mistakes. You still need:

  • Bounded channels around the gRPC stream;
  • Circuit-breaking between regions;
  • Clear separation between ingest and “side effects” (DB, warehouse, BI).

For TypeScript, the most practical pattern is:

  • Never treat Node as the main ingest path.
  • Consume an internal stream from Rust/Go, not Jito directly.
  • Focus on analytics, monitoring, decision support, and smaller experiments.

If you align these patterns with ShredStream-specific checks (heartbeats, TTL, bursts), most “mysterious” issues disappear, and incidents become standard SRE work instead of protocol archaeology.
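
One of those checks, “time since last successful heartbeat”, fits in a few lines. A minimal Rust sketch, assuming the heartbeat loop stores a unix-millisecond timestamp after every successful SendHeartbeat call; the check interval and threshold are illustrative:

use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn unix_ms() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64
}

/// Spawn next to the heartbeat loop; that loop calls
/// `last_ok_ms.store(unix_ms(), Ordering::Relaxed)` after each success.
async fn heartbeat_watchdog(last_ok_ms: Arc<AtomicU64>, ttl_ms: u64) {
    let mut ticker = tokio::time::interval(Duration::from_millis((ttl_ms / 4).max(250)));
    loop {
        ticker.tick().await;
        let age = unix_ms().saturating_sub(last_ok_ms.load(Ordering::Relaxed));
        if age > ttl_ms {
            // Page on this instead of debugging "random" stream closes later.
            eprintln!("ALERT: last successful heartbeat was {age} ms ago (ttl={ttl_ms} ms)");
        }
    }
}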

FAQ for CTOs and leads

Do we need ShredStream for every Solana product?

No. It is most useful where latency and coverage have a direct revenue impact: HFT, aggressive MM, high-value copy trading, and advanced risk. Wallets, explorers, and most SaaS workloads stay fine on good Yellowstone gRPC and RPC nodes.

Which language should our team start with?

If you have Rust in production, use Rust near ShredStream and fan out internally. If your infra team is mostly Go, a Go client plus an internal stream is cleaner. Use TypeScript only for downstream analytics and dashboards.

How does this fit with existing Solana RPC providers?

Providers like RPCFast support ShredStream and Yellowstone gRPC from the same dedicated node (RPCFast docs). You subscribe directly to their ShredStream endpoint and still keep standard RPC for everything else.

Is this safe to put on Kubernetes?

Yes, but handle it like any low-latency service: pinned nodes, careful resource limits, local SSD for queues, and strict autoscaling rules. Dysnix uses PredictKube and prescaling patterns for similar workloads to keep p99 under 50 ms where needed.

Who owns incidents when data lags?

In healthy setups:

  • Provider owns their regional ShredStream SLA;
  • Your SRE team owns ingest services and internal stream SLOs;
  • Trading/product teams own strategy-level behavior when input data is good.
