CLOSED BETA • EMAIL FOR ACCESS

TCP for APIs

Layer 8 networking for your API calls

Not an API gateway. An API Aqueduct™ — bidirectional flow control for requests and responses. Built on 40-year-old telecom-grade BEAM with a new Gleam.

Two Protocols:

📦

STREAM (TCP)

Reliable delivery. Queue 429s. Retry 500s. Guaranteed completion.

TORRENT (UDP)

Speed over reliability. Race 3 regions. First wins. Sub-80ms.

Closed beta. Free tier (1M requests) coming after first enterprise customer.

Beta Status: Running 9 machines across 3 regions on Fly.io's edge network. Working with design partners to perfect the protocols before public launch.

Built for Teams Who Can't Afford API Failures

Every API can fail. Your workflows shouldn't.

🤖

AI Agent Builders

Your agents call dozens of APIs per workflow. One 429 error crashes the entire sequence. Restart from scratch. Lose all context.

❌ Without EZThrottle:

  • OpenAI rate limit at step 8/10
  • Entire agent workflow crashes
  • Restart from beginning
  • Lost progress, wasted tokens

✅ With EZThrottle:

  • Rate limit? We queue it (STREAM)
  • API down? We retry a different region
  • Workflow completes successfully
  • Zero manual intervention
429 handling: Automatic backoff & queueing
500 handling: Multi-region failover
Workflows: Multi-step coordination
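The queue-and-retry behavior above can be sketched client-side. This is a conceptual illustration of what STREAM does on a 429, not the EZThrottle SDK: `flakyApi` is a stub standing in for a real provider, and the backoff delays are shortened for the demo.

```javascript
// Sketch: a 429 at step 8 is retried with exponential backoff
// instead of crashing the 10-step workflow.
let step8Attempts = 0;
async function flakyApi(step) {
  if (step === 8) {
    step8Attempts++;
    if (step8Attempts < 3) return { status: 429 }; // rate limited twice
  }
  return { status: 200, step };
}

async function withBackoff(fn, arg, maxRetries = 5) {
  for (let i = 0; i <= maxRetries; i++) {
    const resp = await fn(arg);
    if (resp.status !== 429) return resp;
    // exponential backoff: 10ms, 20ms, 40ms, ... (short for the demo)
    await new Promise((r) => setTimeout(r, 10 * 2 ** i));
  }
  throw new Error("retries exhausted");
}

async function runWorkflow() {
  const results = [];
  for (let step = 1; step <= 10; step++) {
    // never crashes on a 429; the step is simply delayed
    results.push(await withBackoff(flakyApi, step));
  }
  return results;
}

runWorkflow().then((r) => console.log(`completed ${r.length}/10 steps`));
```

With the stub above, the workflow prints `completed 10/10 steps` despite the two rate-limit hits at step 8.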
📊

Data Engineers

Your ETL jobs process millions of records across multiple APIs. Manual rate limiting is slow, brittle, and loses data on failures.

❌ Without EZThrottle:

  • Hit Shopify rate limit at record 5,000
  • Pipeline fails, restarts from zero
  • Manual sleep() between requests
  • 6 hour job fails at hour 4

✅ With EZThrottle:

  • Automatic rate limit coordination
  • Resume from checkpoint on failure
  • Regional failover for 500 errors
  • Job completes faster, reliably
429 handling: Distributed rate limiting
500 handling: Checkpoint & resume
Workflows: Paginated batch processing
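Checkpoint-and-resume can be sketched in a few lines: record the last completed page and restart from there instead of from zero. The data and the simulated failure are illustrative; this is the concept, not the EZThrottle SDK.

```javascript
// Sketch: process records in pages, persist a checkpoint after each
// page, and resume from it after a transient failure.
const records = Array.from({ length: 20 }, (_, i) => i);
const PAGE = 5;
let checkpoint = 0; // index of next unprocessed record

function processPages(failAt = null) {
  const processed = [];
  for (let i = checkpoint; i < records.length; i += PAGE) {
    if (failAt !== null && i === failAt) throw new Error("transient 500");
    processed.push(records.slice(i, i + PAGE));
    checkpoint = i + PAGE; // progress saved after each page
  }
  return processed;
}

try {
  processPages(10); // simulated failure at record 10
} catch (_) {
  // the checkpoint survives the failure
}
console.log(`resuming at record ${checkpoint}`); // not from zero
const remaining = processPages();
console.log(`processed ${remaining.length} remaining pages`);
```

The job resumes at record 10 and only two pages remain, instead of reprocessing all 5,000 records after a failed run.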
🛒

E-commerce Platforms

Your checkout flow calls 5+ APIs. Payment succeeds, but if fulfillment fails, orders are orphaned. Manual cleanup required.

❌ Without EZThrottle:

  • Stripe charge succeeds
  • ShipStation API returns 500
  • Order exists but won't ship
  • Manual reconciliation needed

✅ With EZThrottle:

  • Payment succeeds
  • Fulfillment retries automatically
  • Multi-region failover on 500
  • Order ships, zero manual work
429 handling: Queue burst traffic
500 handling: Guaranteed completion
Workflows: Transactional consistency
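Regional failover for the fulfillment step looks roughly like this: on a 500 from one region, try the next rather than orphaning the order. The region stubs are made up for illustration; they are not real ShipStation endpoints or the EZThrottle SDK.

```javascript
// Sketch: walk regions in order, skipping any that return a 5xx,
// so a single regional outage never strands a paid order.
const regions = {
  "us-east": () => ({ status: 500 }), // simulated outage
  "us-west": () => ({ status: 200, shipped: true }),
};

function fulfillWithFailover(order) {
  for (const [region, call] of Object.entries(regions)) {
    const resp = call(order);
    if (resp.status < 500) return { region, ...resp };
  }
  throw new Error("all regions failed");
}

const result = fulfillWithFailover({ id: "order-123" });
console.log(`shipped via ${result.region}`); // us-west
```

Payment and fulfillment complete even though the first region is down, which is the "transactional consistency" outcome described above.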

Your APIs can fail. Your workflows won't.

STREAM for reliability. TORRENT for speed. Layer 8 networking.

Request Beta Access

Not an API Gateway. An API Aqueduct™

Flow control in both directions

API Gateways

  • One direction: Client → Gateway → Server
  • Synchronous: Block until response
  • No flow control: Proxy and pray
  • Stateless: No memory of failures
  • Regional: Single point of failure

AWS API Gateway, Kong, Nginx — great for routing, terrible for reliability.

API Aqueducts

  • Bidirectional: Downstream (to API) + Upstream (to you)
  • Asynchronous: Queue, deliver when ready
  • Flow control: Smooth traffic, prevent 429s
  • Stateful: Remember failures, retry smart
  • Distributed: Self-healing network

EZThrottle — built for reliability from the ground up.

The Aqueduct Model

🌊

Downstream Flow

Requests flow from you → EZThrottle → APIs. Smooth, controlled, rate-limited.

⬆️

Upstream Flow

Responses flow from APIs → EZThrottle → You. Deduplicated, validated, delivered.

⚖️

Pressure Control

Buffer bursts, smooth peaks, prevent overload. Like Roman castellum tanks.
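The castellum idea can be shown with a toy "pressure valve": a burst is admitted up to capacity and the overflow is buffered for later draining instead of being rejected. The numbers here are illustrative, not EZThrottle's actual limits.

```javascript
// Sketch: admit up to `capacity` requests immediately; absorb the
// rest of the burst into a buffer to be drained at a steady pace.
class PressureValve {
  constructor(capacity) {
    this.tokens = capacity; // available send slots
    this.buffered = [];     // overflow, drained later
  }
  submit(req) {
    if (this.tokens > 0) {
      this.tokens--;
      return "sent";
    }
    this.buffered.push(req);
    return "buffered";
  }
}

const valve = new PressureValve(5);
const outcomes = Array.from({ length: 8 }, (_, i) => valve.submit(i));
console.log(
  `${outcomes.filter((o) => o === "sent").length} sent, ` +
  `${valve.buffered.length} buffered`
);
```

A burst of 8 against a capacity of 5 yields `5 sent, 3 buffered`: the peak is smoothed rather than turned into 429s.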

Two Protocols. One Network.

Choose reliability or speed. We handle the rest.

📦

STREAM

Like TCP for APIs

Reliable delivery. Guaranteed completion. Your request will succeed, even if it takes hours.

How It Works:

  • 429 Rate Limit: Queue it. Deliver at optimal pace.
  • 500 Error: Retry different region automatically.
  • Network split: Store and forward when healed.
  • Result: Your workflow completes. Always.
// STREAM: Guaranteed delivery
const result = await ezthrottle.stream({
  url: "https://api.openai.com/chat",
  body: { prompt: "..." },
  timeout: "1h" // How long to keep trying
});
// Returns: When completed (might be queued)
// Guarantee: Will succeed or timeout

Perfect For:

Workflows Batch jobs Data pipelines ETL

TORRENT

Like UDP for APIs

Speed over reliability. Race multiple regions. First response wins. Sub-80ms latency.

How It Works:

  • Parallel execution: Send to 3 regions simultaneously.
  • First wins: Return fastest response, cancel others.
  • No retries: Fast or fail. No queueing.
  • Result: Sub-80ms responses. Feels instant.
// TORRENT: Fastest response
const result = await ezthrottle.torrent({
  url: "https://api.anthropic.com/complete",
  body: { prompt: "..." },
  regions: ["us-east", "us-west", "eu-west"]
});
// Returns: Immediately (first to respond)
// Guarantee: Speed, not reliability

Perfect For:

Chat UIs Autocomplete Search Real-time apps
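The "first wins" racing above maps naturally onto Promise.race: fire the same request at several regions and take whichever answers first. The latencies here are simulated stand-ins for real region round-trips; the actual TORRENT protocol also cancels the losing requests server-side, which Promise.race alone does not.

```javascript
// Sketch: race three simulated regions; the fastest response wins.
function regionCall(region, latencyMs) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ region, status: 200 }), latencyMs)
  );
}

async function torrentRace() {
  return Promise.race([
    regionCall("us-east", 30),
    regionCall("us-west", 12), // fastest in this simulation
    regionCall("eu-west", 55),
  ]);
}

torrentRace().then((winner) => console.log(`winner: ${winner.region}`));
```

The caller always sees the best latency any region can offer, at the cost of sending duplicate work, which is exactly the speed-over-reliability trade-off.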

Use Both in the Same App

STREAM for your background jobs. TORRENT for your interactive UI. One network. One bill. Your choice.

How It Works

Layer 8 networking with BEAM reliability

The Problem: APIs Have No Flow Control

Without Layer 8:

// Your code makes 100 requests/sec
for (let i = 0; i < items.length; i++) {
    await api.post('/process', items[i])
    // ❌ 429 Rate Limit Exceeded
    // Your app crashes
}

Result: Process fails. Data lost. Manual restart required.

With EZThrottle:

// Same code, proxied through EZThrottle
for (let i = 0; i < items.length; i++) {
    await ezthrottle.stream.post('/process', items[i])
    // ✓ Automatically queued if rate limited
    // ✓ Retried on failures
}

Result: Process completes. No crashes. Zero manual intervention.

The Architecture

1

Your Request Arrives

Proxied to EZThrottle's edge network (9 machines, 3 regions)

2

Syn Registry Routes

Distributed Syn process registry finds the least-busy queue across our cluster

3

Protocol Selection

STREAM (reliable) or TORRENT (fast) based on your call

4

Queue or Race

STREAM: Queue if needed. TORRENT: Race 3 regions parallel.

5

Response & Delivery

Get response, trigger webhook if configured, return to client
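Step 2 above, routing to the least-busy queue, reduces to a minimum over queue depths. The depths and node names below are made up for illustration; the real routing runs on the Syn process registry on BEAM, not in JavaScript.

```javascript
// Sketch: pick the queue with the smallest depth across the cluster.
const queues = [
  { node: "iad-1", depth: 42 },
  { node: "ord-1", depth: 7 },
  { node: "fra-1", depth: 19 },
];

function leastBusy(qs) {
  return qs.reduce((best, q) => (q.depth < best.depth ? q : best));
}

console.log(leastBusy(queues).node); // ord-1
```

Keeping this decision in a distributed registry means any edge machine can route any request without a central coordinator.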

💡 Why BEAM? Erlang/OTP has powered telecom systems for 40 years with 99.9999999% uptime. WhatsApp handles 2 billion users on BEAM. Discord serves 150 million concurrent connections. If it's reliable enough for them, it's reliable enough for your API calls.

Official SDKs

Python, Node.js, and Go SDKs for both protocols

🐍

Python

$ pip install ezthrottle
from ezthrottle import EZThrottle

client = EZThrottle("your_key")

# STREAM: Reliable delivery
resp = client.stream.post(
    url="https://api.openai.com/chat",
    json={"prompt": "..."}
)

# TORRENT: Fast delivery
resp = client.torrent.post(
    url="https://api.anthropic.com/complete",
    json={"prompt": "..."},
    regions=["us-east", "us-west", "eu"]
)
View on PyPI →
📦

Node.js

$ npm install ezthrottle
const { EZThrottle } = require('ezthrottle');

const ez = new EZThrottle('key');

// STREAM: Reliable
const r1 = await ez.stream.post({
  url: 'https://api.openai.com/chat',
  json: { prompt: '...' }
});

// TORRENT: Fast
const r2 = await ez.torrent.post({
  url: 'https://api.anthropic.com',
  json: { prompt: '...' },
  regions: ['us-east', 'us-west']
});
View on npm →
🔷

Go

$ go get github.com/rjpruitt16/ezthrottle-go
import ez "github.com/rjpruitt16/ezthrottle-go"

client := ez.NewClient("key")

// STREAM: Reliable
resp1, _ := client.Stream.Post(
  "https://api.openai.com/chat",
  map[string]any{"prompt": "..."},
)

// TORRENT: Fast
resp2, _ := client.Torrent.Post(
  "https://api.anthropic.com",
  map[string]any{"prompt": "..."},
  []string{"us-east", "us-west"},
)
View on GitHub →
💡

All SDKs support:

✓ STREAM protocol (reliable) ✓ TORRENT protocol (fast) ✓ Webhook delivery ✓ Regional failover ✓ Automatic retries ✓ Rate limit handling
View Full Documentation →

Built on Telecom-Grade BEAM

With a new Gleam ✨

Why BEAM?

40 Years of Telecom Reliability

Erlang/OTP has powered telecom systems since 1986. 99.9999999% uptime. Nine nines. Proven at scale.

WhatsApp: 2 Billion Users on BEAM

50 engineers serving 2 billion users. All on BEAM. If it scales for them, it scales for your API calls.

Discord: 150M Concurrent Connections

Real-time messaging at insane scale. BEAM handles it. Your API requests? Child's play.

Why Gleam?

Type Safety on BEAM

Rust-like type system. No runtime errors. Catch bugs at compile time. Ship with confidence.

Modern DX, Proven Runtime

Gleam compiles to BEAM bytecode. Best of both worlds: modern language, 40-year-old battle-tested runtime.

Built for Concurrency

Actors, supervision trees, distributed coordination. BEAM's primitives + Gleam's safety = reliable infrastructure.

"BEAM with a new Gleam" isn't just marketing.

It's 40 years of telecom reliability + modern type safety. Infrastructure built to last 2,000 years, like Roman aqueducts.

What We're Building Next

Layer 8 is just the beginning

SHIPPING NOW

STREAM Protocol

TCP for APIs. Reliable delivery.

  • Automatic 429 queueing
  • Regional 500 failover
  • Guaranteed completion
  • Webhook delivery
NEXT (2 WEEKS)

TORRENT Protocol

UDP for APIs. Speed over reliability.

  • Race 3 regions simultaneously
  • First response wins
  • Sub-80ms latency
  • Perfect for real-time UIs
FUTURE

Streams of Intelligence

Bidirectional streaming for AI agents.

  • Long-running agent workflows
  • Bidirectional state sync
  • Checkpoint & resume
  • Multi-day operations

The Vision: Layer 8 Networking

APIs are the nervous system of modern software. They need their own protocol layer. We're building it.

📦

STREAM

Reliability. Like TCP.

TORRENT

Speed. Like UDP.

🌊

INTELLIGENCE

State. Like... nothing that exists.

Request Beta Access →

Early design partners shape the roadmap

Zero Data Storage

✓ Requests processed in-memory only

✓ No database storage of request/response data

✓ Webhooks deliver results directly to you

✓ Automatic deletion if temporary storage is ever needed

Built on open-source Gleam + BEAM. Code transparency for security-conscious teams. Your data flows through, never stays.

Open & Transparent

Our landing page, FAQ, and beta process are all open source on GitHub.

🔍 See Exactly How Beta Access Works

We use email templates for beta requests. No hidden questions, no surprises. You can see the exact template before you apply.

"We're building with design partners. We want to know who we're building with."

View on GitHub

Read the README • See beta email template • Fork if you want

CLOSED BETA

Enterprise Design Partners Needed

We need production workloads to stress-test Layer 8 networking.

Who We're Looking For

✅ Perfect Fit:

  • AI agent platforms (multi-step workflows)
  • Data pipelines (millions of API calls)
  • E-commerce platforms (critical checkout flows)
  • High-volume apps (100K+ requests/day)
  • Production traffic (real users, real stakes)

❌ Not A Fit:

  • Personal projects (too small to stress-test)
  • Tire-kickers (we need real workloads)
  • "Checking it out" (we need commitment)
  • Zero tolerance for bugs (it's beta)
  • Free-forever seekers (we need revenue partners)

Translation: We're looking for serious companies with serious API problems. If 429s and 500s are costing you real money, let's talk.

This is infrastructure. Not a side project. We need partners who get that.

What Beta Partners Get

🎁

1M Requests Free

$1,500 value

Enough volume to properly evaluate both STREAM and TORRENT protocols under load.

💬

Direct Slack Channel

With founder

Not support tickets. Real-time debugging with the person building this.

🎯

Shape The Product

Your feedback = features

Need something specific? We'll build it. Your use case drives the roadmap.

💰

Grandfathered Pricing

Lock in early pricing forever. When we raise rates (we will), you don't pay more. Early bet = permanent discount.

📈

Co-Marketing Rights

If it works well, we'll write a case study together. Your success story = our credibility. Win-win marketing.

After Public Launch (For Reference)

This is what we'll charge once beta ends. Beta partners get better terms.

$0.0015

per request

(Cheaper than sending an SMS)

Small

$15

10K requests/mo

Startup

$150

100K requests/mo

Growth

$1.5K

1M requests/mo

Scale

$15K

10M requests/mo

Volume discounts at 50M+. Enterprise custom pricing available.

Ready To Test Layer 8?

We need 5-10 enterprise design partners with real production workloads.

Before you apply, ask yourself:

  • ✓ Do we have 100K+ API requests/day?
  • ✓ Are 429s and 500s costing us real money?
  • ✓ Can we tolerate beta bugs for 6-8 weeks?
  • ✓ Will we give weekly feedback?
  • ✓ Are we willing to be a reference customer?

If you answered yes to all 5, we want to talk to you.

Apply for Beta Access →

Limited to 5-10 partners. First-come, first-served for qualified companies.

Why We Need Beta Partners

Honest truth: EZThrottle works great in testing. But we're solo-founder-built infrastructure. We need real production workloads to find the edge cases we haven't thought of.

Your 429s and 500s will help us build better failover logic. Your burst traffic will help us tune our queueing. Your multi-region needs will help us optimize routing.

In exchange? You get infrastructure that actually works, direct access to the founder, and locked-in pricing when we raise rates. We both win.

Layer 8 Networking Is Here

TCP for reliability. UDP for speed. Built on 40-year-old BEAM with a new Gleam.

Not an API gateway. An API Aqueduct™

Request Beta Access →

Closed beta. Limited spots. Building with design partners.

© 2025 EZThrottle

TCP for APIs. Built on BEAM with a new Gleam. Layer 8 networking for the modern web.