April 13, 2026 · 8 min read

How ByteDance Built Seeduplex — Technical Deep Dive

A technical breakdown of how ByteDance's Seed team engineered the first production-scale full-duplex voice AI model, and the architecture decisions behind it.


The Engineering Challenge

Building a full-duplex voice AI that works in a lab is one thing. Deploying it to hundreds of millions of users — with consistent quality, low latency, and high concurrency — is an entirely different engineering challenge.

ByteDance's Seed team spent years solving the latter. Here's what we know about how they did it.

The Fundamental Architecture Decision

The most important decision ByteDance made was to build Seeduplex as a native end-to-end model rather than a pipeline.

Pipeline Architecture (the old way)

Audio In → ASR Model → Text → LLM → Text → TTS Model → Audio Out

Each arrow is a handoff between separate models. Each handoff adds latency. Each model is optimized independently, losing context at every boundary. And critically: the whole thing is sequential — you cannot run step 3 while step 1 is still happening.
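The additive-latency problem can be sketched in a few lines. The per-stage latencies below are illustrative assumptions, not measured figures; the point is only that a sequential pipeline's total latency is the sum of its stages.

```python
import time

# Illustrative per-stage latencies in seconds (assumptions, not measured figures).
STAGE_LATENCY_S = {"asr": 0.03, "llm": 0.05, "tts": 0.02}

def asr(audio: bytes) -> str:
    time.sleep(STAGE_LATENCY_S["asr"])
    return "what's the weather"

def llm(text: str) -> str:
    time.sleep(STAGE_LATENCY_S["llm"])
    return "it's sunny today"

def tts(text: str) -> bytes:
    time.sleep(STAGE_LATENCY_S["tts"])
    return b"\x00" * 320

def run_pipeline(audio: bytes) -> bytes:
    """Audio In -> ASR -> Text -> LLM -> Text -> TTS -> Audio Out.

    Each call must finish before the next begins, so latencies add.
    """
    return tts(llm(asr(audio)))

start = time.perf_counter()
run_pipeline(b"\x00" * 320)
elapsed = time.perf_counter() - start  # always >= sum of all stage latencies
```

No amount of per-stage optimization changes the structure: the user cannot hear any output until every stage has run to completion.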

Native End-to-End Architecture (Seeduplex)

Audio In ─────┐
              ├──► Unified Model ──► Audio Out
Dialogue Ctx ─┘          │
                         └──► Turn Decision

A single model processes raw audio input and generates raw audio output, while simultaneously maintaining dialogue context and making real-time turn-taking decisions. There are no handoffs, no pipeline stages, and no discrete steps.

This is architecturally much harder to build but eliminates the fundamental latency and coherence problems of pipeline systems.

The Three Technical Pillars

1. Joint Speech-Semantic Modeling

Traditional voice AI separates acoustic processing (what sounds are being made) from semantic processing (what those sounds mean). These run as separate models with separate representations.

Seeduplex unifies them. The model learns joint representations that capture both acoustic features and semantic meaning simultaneously. This means:

  • The model can use semantic context to inform acoustic perception ("this person is mid-sentence, so that silence is a pause, not a stop")
  • The model can use acoustic features to inform semantic understanding ("the rising intonation suggests a question even though the syntax is declarative")

This joint modeling is what makes turn-taking dramatically more accurate than threshold-based approaches.

2. Streaming Perception Framework

For full-duplex to work, the model must process audio in real-time — not in chunks after the user stops speaking. Seeduplex implements continuous streaming audio processing:

  • Audio is processed in 20ms frames
  • Each frame is immediately processed by the model
  • The model maintains a running state across frames
  • Output generation is also streaming — audio chunks are produced as they're generated, not after the full response is computed

This streaming architecture reduces perceived latency dramatically. The user hears the first words of the AI's response within ~200ms of when a response is warranted.
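The frame loop above can be sketched as a stateful processing function. The `StreamingState` and the placeholder trigger are assumptions standing in for the model's real internal state and inference; what the sketch shows is the shape of the loop: one 20ms frame in, state updated, output emitted as soon as it is ready.

```python
from __future__ import annotations
from dataclasses import dataclass, field

FRAME_MS = 20
SAMPLE_RATE = 16_000
SAMPLES_PER_FRAME = SAMPLE_RATE * FRAME_MS // 1000  # 320 samples per 20 ms frame

@dataclass
class StreamingState:
    """Running state carried across frames (a stand-in for the model's real state)."""
    frames_seen: int = 0
    energy_history: list[float] = field(default_factory=list)

def response_ready(state: StreamingState) -> bool:
    # Placeholder trigger; the real model decides this continuously.
    return state.frames_seen % 10 == 0

def process_frame(state: StreamingState, frame: list[float]) -> list[float] | None:
    """Consume one 20 ms frame, update running state, maybe emit an output chunk.

    Real model inference would run inside this call; here we just track energy.
    """
    assert len(frame) == SAMPLES_PER_FRAME
    state.frames_seen += 1
    energy = sum(s * s for s in frame) / len(frame)
    state.energy_history.append(energy)
    # Streaming output: emit audio frame by frame as it becomes ready,
    # rather than waiting for the full response to be computed.
    if response_ready(state):
        return [0.0] * SAMPLES_PER_FRAME  # placeholder output chunk
    return None

state = StreamingState()
silence = [0.0] * SAMPLES_PER_FRAME
outputs = [process_frame(state, silence) for _ in range(50)]  # 1 second of audio
```

Because the state persists across calls, nothing forces the system to wait for an utterance boundary before it starts computing.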

3. Dynamic Turn-Taking Algorithm

This is arguably the hardest problem in full-duplex voice AI. The model must continuously answer the question: "Should I be speaking right now, or should I be listening?"

Seeduplex's approach uses three signals simultaneously:

Acoustic signal: Is there speech energy in the input? What is the pitch and rhythm pattern? Is this a sentence-final intonation?

Semantic signal: Based on the words detected so far, does the utterance appear complete? Is this a question that expects a response? Is the user mid-thought?

Dialogue state signal: Where are we in the conversation? What was the last exchange? Is the user asking for clarification or making a new request?

These three signals are fused in the model's decision layer to produce a continuous probability estimate: "probability that the user is done speaking." When this probability crosses a threshold, the model begins generating a response — while still monitoring the input for corrections or interruptions.
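A minimal sketch of that fusion, assuming each signal has already been scored into a [0, 1] value. The weighted sum and the threshold value are illustrative assumptions; in the real model the decision layer is learned, not hand-weighted.

```python
def end_of_turn_probability(acoustic: float, semantic: float, dialogue: float,
                            weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Fuse three [0, 1] signals into P(user is done speaking).

    The weighted sum is an illustrative stand-in for a learned decision layer.
    """
    wa, ws, wd = weights
    p = wa * acoustic + ws * semantic + wd * dialogue
    return min(max(p, 0.0), 1.0)

SPEAK_THRESHOLD = 0.7  # assumed value; crossing it starts response generation

def should_start_speaking(acoustic: float, semantic: float, dialogue: float) -> bool:
    return end_of_turn_probability(acoustic, semantic, dialogue) >= SPEAK_THRESHOLD

# Mid-sentence pause: silence (high acoustic score) but an incomplete
# utterance (low semantic score) keeps the model listening.
should_start_speaking(acoustic=0.9, semantic=0.2, dialogue=0.5)   # False

# Sentence-final intonation plus a complete question: the model responds.
should_start_speaking(acoustic=0.9, semantic=0.9, dialogue=0.8)   # True
```

The first example is why joint modeling beats silence thresholds: a pure acoustic detector would fire on the pause, while the semantic signal vetoes it.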

Solving the Interference Problem

One of the most practically important innovations in Seeduplex is its approach to interference suppression.

The Problem

In real-world deployments, users often have background audio:

  • Navigation apps giving directions while they talk
  • TV or music in the background
  • Other people speaking nearby
  • Hold music if the call was briefly placed on hold

Half-duplex systems use acoustic filtering to suppress these — essentially trying to separate the user's voice from background noise at the signal level. This works for simple cases but fails when the background audio sounds like speech (e.g., a TV announcer or a second person in the room).

Seeduplex's Semantic Approach

Because Seeduplex understands the conversation semantically, it can filter interference at a higher level:

  • It knows what topic is being discussed
  • It knows the user's speech patterns from this session
  • It can recognize that "turn left in 500 meters" is navigation audio, not part of the conversation
  • It can distinguish a second person's speech as background if it doesn't relate to the ongoing conversation

This semantic filtering reduces false trigger rates by 50% compared to acoustic-only approaches — the key metric ByteDance highlighted in the Seeduplex launch.
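The idea can be sketched as a relevance check over recognized speech. The patterns, the word-overlap score, and the threshold below are all assumptions chosen for illustration; the production system does this with learned representations, not regexes.

```python
import re

# Illustrative patterns for known non-conversational audio (assumed, not ByteDance's).
NAVIGATION_PATTERNS = [
    r"\bturn (left|right) in \d+ (meters|feet)\b",
    r"\brecalculating\b",
]

def topical_overlap(utterance: str, topic_words: set[str]) -> float:
    """Crude semantic relevance: fraction of utterance words tied to the topic."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    return len(words & topic_words) / max(len(words), 1)

def is_interference(utterance: str, topic_words: set[str],
                    relevance_floor: float = 0.15) -> bool:
    """Flag recognized speech that is non-conversational or off-topic."""
    if any(re.search(p, utterance.lower()) for p in NAVIGATION_PATTERNS):
        return True  # e.g. navigation prompts are never part of the dialogue
    return topical_overlap(utterance, topic_words) < relevance_floor

topic = {"flight", "booking", "tokyo", "ticket", "seat"}
is_interference("Turn left in 500 meters", topic)                  # True
is_interference("Can I get a window seat on that flight?", topic)  # False
```

The key design point survives the simplification: the filter operates on what was said, not on how it sounded, so it still works when the interference is perfectly clear human speech.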

Scaling to Production

Getting full-duplex to work in a lab is one challenge. Getting it to work for hundreds of millions of Doubao users simultaneously is another.

The Concurrency Challenge

Every active Seeduplex session requires:

  • Continuous bidirectional audio streaming
  • Real-time model inference (every 20ms)
  • Low-latency response generation
  • State maintenance across the session

Multiply this by millions of concurrent sessions and the infrastructure requirements become extraordinary.
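A back-of-envelope calculation makes the scale concrete. The session count below is an illustrative assumption; the per-session rate follows directly from the 20ms frame size stated above.

```python
FRAME_MS = 20
INFERENCE_STEPS_PER_SECOND_PER_SESSION = 1000 // FRAME_MS  # 50 model steps/second

def fleet_inference_load(concurrent_sessions: int) -> int:
    """Model inference steps per second the fleet must sustain (back-of-envelope)."""
    return concurrent_sessions * INFERENCE_STEPS_PER_SECOND_PER_SESSION

# One million concurrent sessions (an assumed figure) already means
# 50 million stateful inference steps per second across the fleet,
# each one needing its session's dialogue state close at hand.
fleet_inference_load(1_000_000)
```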

ByteDance's approach leveraged:

  • Custom model quantization to reduce compute per session
  • Efficient KV-cache management for dialogue state
  • Distributed inference across their global edge network
  • Graceful degradation — the system maintains conversation quality even under high load

Latency Budget

For full-duplex to feel natural, total round-trip latency must stay under ~300ms. ByteDance's published latency breakdown:

Component                            Budget
Audio encoding + transmission        ~20ms
Streaming perception (per frame)     ~10ms
Turn detection decision              ~15ms
Response generation (first token)    ~100ms
Audio decoding + transmission        ~20ms
Total                                ~165ms

This leaves margin for network variability while staying under the 300ms threshold for natural conversation.

What Comes Next

Seeduplex represents the first generation of production full-duplex voice AI. Several areas are under active development:

Broader language support: Current production focus on English and Mandarin, with other languages in training.

Emotional intelligence: Better detection and response to emotional cues in voice.

Multiparty conversations: Handling three or more simultaneous speakers — a much harder turn-taking problem.

Lower latency: Target sub-100ms total latency for the next generation.

The architecture ByteDance has built is a foundation that will support these improvements. The hard problem — unified end-to-end full-duplex modeling at scale — is already solved.
