[FS_ULBC] Discussion on Methodology for Delay & Error Trace Generation
This contribution addresses the ongoing debate within SA4 Audio SWG regarding the methodology for generating delay and error traces for Ultra Low Bitrate Codec (ULBC) evaluation under Non-Terrestrial Network (NTN) conditions. Two competing approaches have emerged: a Fixed BLER approach and a Fixed Resource / Link Budget approach.
The contribution proposes clarifying the purpose of these simulations by distinguishing between Design and Verification phases.
The LTE MTSI testing methodology in TS 26.132 (Annex E and F) operated on "Stationary" conditions:
Packet loss was drawn independently per packet against a fixed probability:

    if (rand(1) < BLER_tx)

Critically, TS 26.132 defined these traces as verification tools (System Testing):
Key Finding: Profiles were treated as Test Vectors to verify robustness against defined impairments, not as "realistic channel recordings" to train codec design.
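The stationary loss model described above can be sketched in a few lines. This is an illustrative reconstruction of the i.i.d. drop logic, not the normative TS 26.132 procedure; the function name and seed handling are assumptions for this sketch.

```python
import random

def generate_stationary_trace(num_packets: int, bler: float, seed: int = 0) -> list[int]:
    """Generate an i.i.d. ("stationary") error trace: 1 = packet lost, 0 = received.

    Each packet is dropped independently with probability `bler`, mirroring the
    if (rand(1) < BLER_tx) check quoted above. Illustrative sketch only.
    """
    rng = random.Random(seed)
    return [1 if rng.random() < bler else 0 for _ in range(num_packets)]

# Example: 10000 packets at 2% target BLER; the observed loss rate
# converges to the configured BLER_tx as the trace length grows.
trace = generate_stationary_trace(num_packets=10000, bler=0.02, seed=42)
observed_bler = sum(trace) / len(trace)
```

Because each packet is drawn independently, such a trace has no burst or memory structure, which is exactly why it was adequate for stationary LTE verification but is questioned for NTN.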
NTN scenarios introduce challenges that invalidate the LTE approach. Two methodologies have been proposed:
Approach 1: Fixed BLER
Methodology:
- Define TBSs for each candidate bitrate and bundling time
- Traverse all link parameters (SCS, Tone, etc.) to evaluate if resulting link budgets satisfy predefined Target BLER
- Generate error trace for each configuration meeting BLER threshold
- Number of output traces = Number of defined Target BLERs (for each TBS)
Underlying Assumption: AI-based Codecs (specifically PLC mechanisms) require specific "real" error patterns during the training/design phase.
Observation: This limits the testing scope to specific "safe" operating points, potentially overlooking codec behavior under unexpected channel degradation.
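The Fixed BLER workflow above can be sketched as follows. The `estimated_bler` function is a hypothetical, toy stand-in for RAN link-level simulation results, and the single SNR grid stands in for traversing the full set of link parameters (SCS, Tone, etc.); both are assumptions made purely to illustrate the control flow.

```python
import random

def estimated_bler(snr_db: float, tbs_bits: int) -> float:
    """Toy monotone model (hypothetical): lower SNR and larger TBS give higher
    BLER. In practice this value comes from RAN link-level simulation."""
    return min(1.0, max(0.0, 0.5 * (1.0 - snr_db / 10.0) * (tbs_bits / 1000.0)))

def fixed_bler_traces(target_blers, snr_grid_db, tbs_bits, num_packets=1000, seed=0):
    """For each Target BLER, keep the first link configuration whose estimated
    BLER satisfies the target, and emit one error trace at that BLER.
    Number of output traces == number of defined Target BLERs (per TBS)."""
    rng = random.Random(seed)
    traces = {}
    for target in target_blers:
        for snr in snr_grid_db:  # stands in for traversing SCS, tone, etc.
            if estimated_bler(snr, tbs_bits) <= target:
                traces[target] = [1 if rng.random() < target else 0
                                  for _ in range(num_packets)]
                break  # configuration meets the threshold; move to next target
    return traces
```

Note that configurations failing the threshold are simply discarded, which is the "realism filtering" behavior questioned later in this contribution.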
Approach 2: Fixed Resource / Link Budget
Methodology:
- Normalize TBS across all candidate codec bitrates assuming consistent packet overhead
- For each unique Link Budget (fixed SNR) derived from specific UE, satellite, and link parameters, generate dedicated error traces
- Number of output traces = Number of unique Link Budgets (for each TBS)
Underlying Assumption: Mimics a "Best Effort" or competitive scenario similar to the EVS selection, where end-to-end quality (MOS) matters more than intermediate BLER.
Observation: Logically sound for optimizing system performance, but implies a vast search space, potentially leading to an unmanageable simulation workload.
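The contrast with Approach 1 can be made concrete with a sketch of the Fixed Resource / Link Budget flow: here no BLER threshold filters the configurations, and one trace is emitted per unique link budget. The `bler_of_snr` callable is a hypothetical placeholder for the RAN link-level simulation output at the normalized TBS.

```python
import random

def link_budget_traces(snr_points_db, bler_of_snr, num_packets=1000, seed=0):
    """One dedicated error trace per unique link budget (fixed SNR point).

    `bler_of_snr` is a hypothetical callable mapping SNR to BLER; in practice
    it would be tabulated from link-level simulation for the normalized TBS.
    Number of output traces == number of unique link budgets (per TBS)."""
    traces = {}
    for i, snr in enumerate(sorted(set(snr_points_db))):
        rng = random.Random(seed + i)  # independent stream per link budget
        p = bler_of_snr(snr)
        traces[snr] = [1 if rng.random() < p else 0 for _ in range(num_packets)]
    return traces
```

Every combination of UE, satellite, and link parameters yields its own SNR point, so the trace count grows with the parameter grid, which is the "vast search space" concern noted above.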
The standard workflow should be:
Delay/Error Profiles Generation → Codec/PLC Verification → System Performance Evaluation
The current deadlock stems from treating RAN simulation outputs as Design Constraints (training data) rather than Verification Tools.
Key Principles:
Robustness over Overfitting: Robust Codec and PLC design should not rely on "learning" a specific channel trace from a specific simulator. The design should handle a variety of harsh conditions (burst losses, high jitter, varying BLER). Data augmentation is standard practice for training robust AI models.
The Role of Traces: As in TS 26.132 Annex F, the generated traces serve as "Test Vectors" defining the challenging conditions under which the Codec must survive. Whether the traces represent 90% or 99% of real-world cases is secondary to sufficiently stress-testing the JBM and PLC algorithms.
Historical Practice: Delay/Error profiles officially generated by SA4 were never distributed to codec proponents for training purposes; they were solely used to verify codec candidates fulfill design constraints and performance requirements.
Re-orient simulation efforts towards generating a Verification Suite rather than a "Perfect Reality Model":
Avoid Excessive "Realism" Filtering: Do not discard simulation results simply because they do not meet a strict low-BLER threshold. High-BLER conditions are valid "Corner Cases" that ULBC must handle, especially in satellite scenarios with tight link budgets.
Limit the Search Space: Select a representative subset of challenging conditions (e.g., Deep Fading, High Doppler) at fixed SNR points resulting in a range of BLERs (e.g., from <1% up to >10%).
Verification Focus: Output traces should verify that candidate codecs degrade gracefully under varied conditions. The burden is on the Codec proponent to design a PLC that works across these profiles, not on the RAN simulation group to provide a "training set" guaranteeing the codec works.
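The search-space limitation recommended above can be sketched as a simple bucketing step: instead of keeping only low-BLER operating points, the suite retains at least one point from each regime, from nominal (<1% BLER) through corner cases (>10% BLER). The function name, thresholds as parameters, and `bler_of` callable are assumptions for this sketch.

```python
def select_verification_points(candidates, bler_of, low=0.01, high=0.10):
    """Pick a small verification suite spanning nominal (<1% BLER) through
    corner-case (>10% BLER) conditions, rather than discarding high-BLER runs.

    `candidates` are simulated operating points (e.g., fixed SNR conditions);
    `bler_of` is a hypothetical callable returning the BLER of a candidate."""
    nominal  = [c for c in candidates if bler_of(c) < low]
    midrange = [c for c in candidates if low <= bler_of(c) <= high]
    corner   = [c for c in candidates if bler_of(c) > high]
    # Keep at least one representative operating point per regime, if available.
    suite = []
    for bucket in (nominal, midrange, corner):
        if bucket:
            suite.append(bucket[0])
    return suite
```

This keeps the trace count bounded by the number of regimes rather than by the full link-parameter grid, while still exposing the codec to the corner cases the contribution argues must not be filtered out.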
The MFTG methodology aims to decouple physical layer simulation assumptions from application-layer codec design by providing a high-resolution library of error traces rather than a single static operating point.
For Performance Comparison: Proponents selecting a specific source bitrate can identify and utilize the trace from the library whose SNR/BLER most closely matches their design's intended link budget.
For Robustness Testing: Proponents can select "stress-test" traces (e.g., those with higher BLER or specific jitter profiles) from the same library to verify their PLC and JBM algorithms.
While the source understands the rationale behind both the Fixed BLER approach and the Fixed Resource / Link Budget approach for GEO network simulation, a compromise solution is necessary for FS_ULBC to progress. MFTG is therefore proposed for consideration and agreement.