AWS RTB Fabric: The Hidden Complexity of Adtech Infrastructure and Why This Changes Everything

AWS | AdTech
by Michal Niec

Amazon Web Services recently launched RTB Fabric, a purpose-built infrastructure service for real-time bidding workloads. This isn’t just another cloud service – it’s AWS putting a managed network layer directly between buyers and sellers in the programmatic advertising ecosystem.

❕What makes this announcement significant?

RTB Fabric promises three things that have historically been difficult to achieve simultaneously:

  • Single-digit millisecond latency through a dedicated network
  • Up to 80% cost savings compared to standard AWS data transfer fees
  • Simplified partner integration without colocation infrastructure

At Appliscale, we’ve architected adtech systems for clients processing billions of bid requests. We know firsthand why these promises matter – and why they’re so hard to deliver.

🚤 Why latency is everything in RTB

When a user loads a webpage, selecting which ad to show involves multiple companies making split-second decisions. Supply-side platforms typically give demand-side platforms around 120ms to respond with a bid. That’s not much time.

To put this in perspective: a signal traveling at the speed of light between San Francisco and New York takes about 15ms in a straight line. The average round-trip time for internet packets between these cities? 63ms – consuming half your 120ms budget on transport alone, before any processing begins.

🔍 Why we’re paying attention

We can’t wait to test RTB Fabric in real production scenarios. But the announcement also prompted us to revisit a fundamental question: Why is architecting adtech systems at scale so challenging in the first place?

Next, we’ll break down:

  • The technical hurdles of processing billions of requests at millisecond-scale latency
  • Why optimizing for both speed and cost creates impossible tradeoffs
  • How RTB Fabric addresses these challenges (and where questions remain)
  • Who should seriously consider adopting this infrastructure

The Real Challenge – Why Optimizing Adtech Infrastructure is Brutally Hard

Let’s examine why building performant, cost-effective RTB infrastructure has been such a persistent challenge.

🗂️ The data access problem

When a DSP receives a bid request, it needs to:

  1. Enrich the request with user data (demographics, browsing history, purchase intent)
  2. Check publisher metadata and quality scores
  3. Match against campaign targeting criteria
  4. Calculate bid prices and make a decision

Each data source adds milliseconds. Many of these databases contain billions of records, forcing constant tradeoffs between access speed and storage costs.
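Fetching from those sources one after another would stack their latencies; in practice bidders query them concurrently under a hard deadline. A minimal asyncio sketch, with the lookups stubbed out (function names and latencies are illustrative, not a real API):

```python
import asyncio

# Hypothetical data-source lookups, stubbed with sleeps that stand in for
# real store round-trips. Names and latencies are illustrative only.
async def fetch_user_profile(uid):
    await asyncio.sleep(0.003)                    # ~3ms round-trip
    return {"uid": uid, "segments": ["auto", "travel"]}

async def fetch_publisher_meta(site):
    await asyncio.sleep(0.002)                    # ~2ms round-trip
    return {"site": site, "quality": 0.9}

async def enrich(request, budget_s=0.010):
    # Query all sources concurrently: wall-clock cost is the slowest
    # source, not the sum, and the overall deadline is still enforced.
    user, pub = await asyncio.wait_for(
        asyncio.gather(
            fetch_user_profile(request["uid"]),
            fetch_publisher_meta(request["site"]),
        ),
        timeout=budget_s,
    )
    return {**request, "user": user, "publisher": pub}

enriched = asyncio.run(enrich({"uid": "u1", "site": "news.example"}))
print(enriched["publisher"]["quality"])           # 0.9, in ~3ms total
```

The concurrency buys back the sum-of-latencies cost, but the storage-tier tradeoff (how fast each individual lookup is) remains.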

🔄 The parallelization problem

Processing requests sequentially doesn’t scale. Even at 5ms per request, handling them one-by-one gives you just 200 requests/second – only 17 million per day. In adtech, that’s nothing. A single medium-sized DSP processes billions of requests daily.
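The arithmetic behind that ceiling is worth making explicit:

```python
service_time_s = 0.005                  # 5ms of work per request
seq_rps = 1 / service_time_s            # one-at-a-time throughput
per_day = int(seq_rps * 86_400)         # seconds in a day

print(seq_rps)     # 200.0 requests/second
print(per_day)     # 17280000 -- the "17 million per day" ceiling

# Sustaining e.g. 3 billion requests/day sequentially is impossible; you
# need roughly this many requests in flight at all times:
target_per_day = 3_000_000_000
print(round(target_per_day / per_day))  # ~174 concurrent requests
```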

The solution? Process requests in parallel. But this creates new issues:

1. CPU overhead from context switching. When processors juggle hundreds of thousands of concurrent requests, they spend significant cycles switching between tasks.

2. Connection pool management. Because an HTTP connection stays blocked until the response arrives, servers must maintain massive connection pools. This is especially challenging for SSPs waiting on slow DSP responses: the slower your partner responds, the more connections you need open to sustain the same throughput.
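That pool-size pressure follows directly from Little's law: connections held open ≈ request rate × partner response time. A quick sketch with assumed numbers:

```python
def pool_size(throughput_rps, partner_rtt_s, headroom=1.25):
    """Connections held open = arrival rate x time each stays blocked
    (Little's law), plus headroom for bursts. Numbers are illustrative."""
    return round(throughput_rps * partner_rtt_s * headroom)

# The same 50k req/s of outbound bid traffic, two partners:
print(pool_size(50_000, 0.020))   # fast partner (20ms)  -> 1250 connections
print(pool_size(50_000, 0.100))   # slow partner (100ms) -> 6250 connections
```

A partner that is 5x slower forces you to hold 5x the connections for identical throughput.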

3. The concurrency ceiling. Each machine can handle hundreds of thousands of concurrent requests, but beyond a certain point, adding more parallelism increases overall processing time.

Example: If one request needs 2ms of CPU time, processing 200 simultaneously means all finish at 400ms (200 × 2ms). You need sophisticated rate limiting and prioritization to ensure early requests complete on time while managing queue depth.
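The figures above assume fair-share scheduling on a single core, where every in-flight request gets an equal slice and they all finish together. A sketch of the same arithmetic, plus the admission limit a 120ms deadline implies:

```python
def fair_share_finish_s(concurrent, cpu_per_request_s, cores=1):
    # Under round-robin scheduling all `concurrent` requests complete at
    # roughly the same moment: total CPU demand divided by capacity.
    return concurrent * cpu_per_request_s / cores

finish = fair_share_finish_s(concurrent=200, cpu_per_request_s=0.002)
print(finish)          # 0.4 -> all 200 requests finish around 400ms

# With a 120ms SSP deadline and 2ms of CPU per request, one core can only
# admit this many requests at once and still meet the deadline:
deadline_ms, cpu_ms = 120, 2
max_admit = deadline_ms // cpu_ms
print(max_admit)       # 60 -- hence rate limiting and prioritization
```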

🪨 The consistency problem

Once your servers can handle the load, you face operational challenges:

  • Traffic patterns change constantly – daily cycles, campaign activations, live events
  • Partners have outages – affecting your performance and SLAs
  • Resources can saturate – memory, fast storage, network bandwidth

This requires dynamic scaling, overflow protection, intelligent load balancing, and comprehensive monitoring.
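Overflow protection usually boils down to a cheap admission check in front of the bid path: once in-flight work exceeds what the deadline allows, reject immediately (an instant no-bid) rather than queue a request past its deadline. A minimal sketch (the class and its names are invented for illustration):

```python
class LoadShedder:
    """Reject new work once in-flight requests hit a ceiling, so nothing
    queues long enough to miss its deadline. Invented for illustration."""
    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0
        self.shed = 0

    def try_admit(self):
        if self.in_flight >= self.max_in_flight:
            self.shed += 1            # answer with an immediate no-bid
            return False
        self.in_flight += 1
        return True

    def done(self):
        self.in_flight -= 1

shedder = LoadShedder(max_in_flight=2)
admitted = [shedder.try_admit() for _ in range(3)]
print(admitted)        # [True, True, False] -- the third request is shed
```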

💰 The cost problem

Here’s the part many overlook: data transfer fees.

On AWS, you don’t pay for data coming in, but you pay for data going out. For an SSP sending 1 billion bid requests per month at an average 5KB per request:

Data out: 1 billion × 5KB = 5TB
Standard AWS cost: ~$450/month (5TB × ~$0.09/GB)
At 30B+ requests/month, volume discounts bring this down to roughly $250 per billion requests (~$0.05/GB)
At 1 trillion requests/month: 1,000 × $250 ≈ $250,000

These fees are often neglected during architecture planning but become major line items at scale.
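The calculation above generalizes to a one-liner; the $/GB rates below are illustrative assumptions, not current AWS list prices:

```python
def egress_cost_usd(requests, kb_per_request, usd_per_gb):
    """Monthly data-transfer-out bill for outbound bid traffic.
    Rates are illustrative, not current AWS list prices."""
    gb_out = requests * kb_per_request / 1_000_000    # KB -> GB (decimal)
    return gb_out * usd_per_gb

# 1B requests x 5KB = 5TB out, at a standard ~$0.09/GB rate:
print(egress_cost_usd(1_000_000_000, 5, 0.09))       # 450.0
# The same billion requests at a volume-discounted ~$0.05/GB:
print(egress_cost_usd(1_000_000_000, 5, 0.05))       # 250.0
# 1 trillion requests/month at the discounted rate:
print(egress_cost_usd(1_000_000_000_000, 5, 0.05))   # 250000.0
```

Running this against your own traffic profile early in architecture planning is cheap insurance against a six-figure surprise later.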

📄 How companies handle this today

The traditional solutions:

  1. Run multiple regional data centers (US East, US West, EU, APAC)
  2. Colocate near partners to minimize latency and data transfer costs
  3. Use DNS load balancing to route traffic intelligently
  4. Choose the right tech for your ORTB servers with event-loop architecture, green threads, rate limiting, and auto-scaling
  5. Implement robust observability – execution times, database RTT, connection metrics, timeout alerts

This works, but it’s expensive, operationally complex, and requires specialized expertise.

🛠️ What RTB Fabric provides

AWS RTB Fabric provides a dedicated, high-performance network layer optimized for OpenRTB communication, with several key capabilities:

1. Simplified connectivity. No need to figure out which AWS region and availability zone your partners use. You have two cases: connected via RTB Fabric or not. This dramatically simplifies data transfer billing.

2. Built-in traffic management. Modules such as Rate Limiter, OpenRTB Filter, and Error Masking help you control request volume, validate message formats, and manage response handling directly in the network path. You don’t need to implement safety valves in your application – configure them at the network level.

3. Traffic filtering. DSPs can filter incoming traffic at the network layer, eliminating CPU cycles spent processing bid requests they’re not interested in.

4. Streamlined logging. Easy integration with CloudWatch for metrics and S3 for raw logs.
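RTB Fabric applies filtering in the network path, before traffic reaches your servers; the same idea at the application layer looks like the sketch below (field names follow the OpenRTB request shape, but the criteria and thresholds are invented for the example):

```python
# Field names follow the OpenRTB request shape (country is ISO-3166 alpha-3);
# the wanted-countries set and floor threshold are invented for this sketch.
WANTED_COUNTRIES = {"USA", "GBR", "DEU"}
MIN_BIDFLOOR = 0.10

def worth_processing(bid_request):
    """Cheapest checks first, so rejected requests cost almost no CPU."""
    geo = bid_request.get("device", {}).get("geo", {})
    if geo.get("country") not in WANTED_COUNTRIES:
        return False
    imps = bid_request.get("imp", [])
    return any(i.get("bidfloor", 0.0) >= MIN_BIDFLOOR for i in imps)

requests = [
    {"imp": [{"bidfloor": 0.25}], "device": {"geo": {"country": "USA"}}},
    {"imp": [{"bidfloor": 0.25}], "device": {"geo": {"country": "BRA"}}},
    {"imp": [{"bidfloor": 0.01}], "device": {"geo": {"country": "USA"}}},
]
kept = [r for r in requests if worth_processing(r)]
print(len(kept))       # 1 -- two of three requests never touch the bid path
```

Pushing these checks into the network layer means the rejected requests never consume your bandwidth or CPU at all.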

💵 The critical question: Does it save money?

Let’s run the numbers for an SSP at realistic volumes, assuming a 5KB average bid request and DSP partners outside AWS:

[Table: monthly cost comparison – RTB Fabric vs. standard AWS data transfer rates]

But there are important caveats:

1. Partner location matters. If your partner is already on AWS and in the same region, standard data transfer costs drop significantly.

2. You can optimize placement. A well-architected solution with infrastructure co-located near partners avoids data transfer fees entirely while maintaining low latency.

3. Volume discounts exist. AWS asks customers transferring 500+ TB monthly to contact them for custom pricing. At that scale, negotiated rates may compete with RTB Fabric pricing.

🤝 Partnership requirements

You don’t need both sides on RTB Fabric: External Links connect you to partners who aren’t using the service, such as those operating on premises or in third-party cloud environments. Pricing is more favorable, however, when both parties use RTB Fabric.

📈 Performance expectations

If your partners are already on AWS and you’ve established a private connection between your environments — such as VPC peering, AWS Direct Connect, or PrivateLink — that keeps traffic off the public internet, you may not need RTB Fabric.

RTB Fabric – Who Should Use It and Our Take

We’ve covered the technical challenges and cost analysis. Now let’s cut through the hype and discuss who genuinely benefits from RTB Fabric.

🎯 The ideal candidates

Small to mid-size teams building fast. If you’re a lean team that wants to focus on features rather than infrastructure optimization, RTB Fabric offers genuine value. Instead of investing months building robust, colocated infrastructure, you get:

  • Instant partner connectivity
  • Built-in rate limiting and filtering
  • Automatic scaling
  • Simplified billing

As AWS’s Stephanie Layser noted, because ad tech companies are “operating on thin margins” and AWS charges customers whenever data is transferred out of the cloud to the public web or another platform’s infrastructure, “oftentimes the cost is prohibitive for being able to run RTB in the cloud” (AdExchanger).

Publishers exploring programmatic. This is particularly interesting. Publishers traditionally have limited resources to operate sophisticated adtech infrastructure. RTB Fabric could enable publishers to build robust prebid solutions without massive infrastructure investments.

Companies expanding to new regions. Instead of setting up new data centers, establishing partner connections, and managing regional complexity, RTB Fabric provides immediate global reach.

🤔 Who might not need it

Established players with optimized infrastructure. If you’ve already invested in strategically placed, well-architected infrastructure close to your partners, you’re likely:

  • Paying minimal data transfer fees
  • Achieving similar latency
  • Maintaining operational control

The migration effort might not justify the benefits.

Very high-volume operators. At extreme scale (500+ TB monthly data transfer), custom AWS enterprise agreements may offer comparable economics with more flexibility.

The bigger picture

Data transfer fees are a significant but often underestimated part of adtech economics. Understanding how architectural decisions impact costs is crucial for business sustainability.

👀 What we’re watching

  1. Adoption by publishers – Could RTB Fabric democratize prebid infrastructure for smaller publishers?
  2. Integration with Trusted Ad Servers – This emerging standard could pair well with RTB Fabric’s simplified connectivity model
  3. Real-world performance data – We’re eager to test RTB Fabric with actual production workloads and validate the latency and cost claims
  4. AI and agentic workflows – As AWS’s Stephanie Layser noted, “As ad platforms transition to agentic artificial intelligence (AI) workflows that produce insights, AWS and services like RTB Fabric will help to support the transition” (MediaPost Publications)
  5. Protobuf adoption – The IAB’s recent push to standardize Protobuf usage could significantly improve data transfer efficiency and scaling.

🧐 Our verdict

RTB Fabric represents a thoughtful solution to genuine infrastructure challenges in adtech. The cost savings are real, especially for companies currently paying standard AWS data transfer rates. However, it’s not a silver bullet.

The right choice depends on:

  • Your current infrastructure maturity
  • Partner distribution and connectivity
  • Internal engineering resources
  • Scale and growth trajectory

For teams wanting to move fast without infrastructure overhead, RTB Fabric deserves serious consideration. For established players with optimized infrastructure, the calculus is less clear-cut.

💬 What’s your take?

Are you considering RTB Fabric for your adtech infrastructure? We’d love to hear about your architecture challenges and how you’re thinking about this new option.

At Appliscale, we’ve architected adtech solutions across the stack – from data lakes processing billions of events to low-latency bidding systems. If you’re evaluating infrastructure options for your adtech platform, let’s talk.

Appliscale delivers scalable, high-performance tech solutions, specializing in cloud migration, system architecture, and custom software to empower businesses worldwide.