
How Switching to NixOS Will Unlock $5 Million in Performance Value


Michael Bilow

VP of Data Science & Machine Learning


In the world of CTV advertising, marketing teams usually focus on what they can see: creative, audiences, and conversion data. But at tvScientific, we know that the biggest performance levers often exist where you can’t see them: deep in the backend infrastructure.

We recently overhauled our bidding engine, moving away from standard containerization to a high-performance, single-tenant architecture built on NixOS.

For the technical readers, this is a story about optimizing high-throughput systems. For performance marketers, this is a story about how engineering decisions directly impact your ROAS.

The engineering challenge

A real-time bidder is a unique beast. It processes more than 10 TB of HTTP traffic per day and must respond to requests in 10 milliseconds or less. If we are even a fraction of a second too slow, the opportunity vanishes.

Unlike lightweight web apps, our bidder is a high-footprint program, frequently consuming 20–50+ GB of memory. While the industry trend has moved toward microservices and container orchestration (like Kubernetes), these "heavy" requirements make the bidder behave more like a monolith.

Standard containerized environments struggled to support the specific technologies we use to maintain speed, such as eBPF and io_uring. We realized that to get faster, we had to go back to first principles and return to single-tenant infrastructure.

The solution

Moving back to single-tenant infrastructure usually brings a penalty: slower iteration and harder management. To solve this, our engineering team deployed NixOS, a Linux distribution focused entirely on reproducibility.

Unlike older tools (like Vagrant or Packer) that produce massive, hard-to-wrangle disk images, NixOS allows us to manage single-tenant EC2 infrastructure with the same ease as a Kubernetes cluster.
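To make the contrast concrete, here is a minimal sketch of what a declarative NixOS host definition looks like. The service name, package, and options below are illustrative assumptions, not our actual configuration:

```nix
# Hypothetical NixOS host definition: the whole machine is one
# expression, rebuilt atomically instead of baked into a disk image.
{ config, pkgs, ... }:
{
  networking.hostName = "bidder-01";

  # Pin a kernel recent enough for the io_uring/eBPF features in use.
  boot.kernelPackages = pkgs.linuxPackages_latest;

  systemd.services.bidder = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.our-bidder}/bin/bidder";  # hypothetical package
      LimitMEMLOCK = "infinity";  # eBPF maps require locked memory
    };
  };
}
```

Because the entire host is a single version-controlled expression, rolling out a change is one rebuild rather than re-imaging a machine, which is what makes single-tenant EC2 feel as manageable as a cluster.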

The results were immediate and dramatic:

  • Rapid Deployment: We reduced deploy times from 15 minutes to 3.5 minutes, enabling faster iteration and resilience.
  • Latency Crushed: We decreased tail latency (the time it takes to process the most complex requests) by 4x compared to our previous implementation.

The payoff: speed = selectivity

Why should a CMO care about a Linux distribution or tail latency? Because infrastructure determines opportunity.

By cutting our tail latency by 4x, we drastically reduced the number of bids that "time out." This effectively gives us more traffic for free, allowing us to evaluate auctions that slower platforms simply miss.

This influx of volume creates a massive advantage for our patented AI:

  1. More Access: The AI sees a larger pool of potential impressions.
  2. Higher Selectivity: With more options to choose from, the AI doesn't have to settle. It can be more selective, bidding only on the ads that drive the most incremental value.
  3. Better ROI: We are effectively "optimizing to see more auctions," which leads to better decisions and stronger outcomes.
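The selectivity effect above can be sketched with a toy simulation: if a bidder places the same fixed number of bids per day, drawing the best opportunities from a larger evaluable pool raises the average value of what it wins. The numbers and value distribution below are illustrative assumptions, not our production data:

```python
import random

def simulate(pool_size, bids_per_day=1_000, trials=200, seed=42):
    """Average value of the top `bids_per_day` opportunities drawn from
    a pool of `pool_size` evaluable auctions. Auction values here are
    random stand-ins for predicted incremental value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(pool_size)]
        best = sorted(values, reverse=True)[:bids_per_day]
        total += sum(best) / len(best)
    return total / trials

# Fewer timeouts -> larger evaluable pool -> more selective bidding.
small_pool = simulate(pool_size=2_000)   # slower bidder: many auctions time out
large_pool = simulate(pool_size=8_000)   # faster bidder: sees far more auctions
```

With the same bid budget, the larger pool always yields a higher average value per bid, which is the mechanism behind "optimizing to see more auctions."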

A $5 million upgrade

This wasn't just a code refactor; it was a revenue engine. Based on the increased auction participation and improved AI selectivity, we project this infrastructure upgrade will drive an additional $5 million in improved performance across our advertisers over the next year.

At tvScientific, we believe that the best performance marketing is built on the best engineering. By solving the hard technical problems — like sub-10ms latency and high-memory orchestration — we give our AI the speed it needs to win for you.