All Summaries - Table View

Meeting: TSGS4_135_India | Agenda Item: 9.6

18 documents found

Samsung Electronics Iberia SA
[FS_3DGS_MED] pCR on 3D tiles, LOD and 3DGS delivery format requirements

Summary of 3GPP Change Request S4-260088

Document Information

  • Source: Samsung Electronics Co., Ltd.
  • Title: pCR on 3D tiles, LOD and 3DGS delivery format requirements
  • Specification: 3GPP TR 26.958 v0.1.1 (FS_3DGS_MED)
  • Purpose: Agreement on text additions and modifications

Overview

This change request proposes comprehensive updates to TR 26.958 to address 3D Gaussian Splatting (3DGS) encapsulation and delivery format requirements, with particular focus on spatial random access and level of detail (LOD) mechanisms. The document introduces three main changes to improve clarity and technical accuracy.

Main Technical Contributions

1. Terminology Updates (1st Change)

New and Modified Definitions

  • 3DGS tile (new definition): A spatial volume of the scene represented by a specific bounding volume, containing a set of 3D Gaussians for a given level of detail (LOD)

  • Levels of detail (new definition): Multiple representations of a scene, each with a different set of data which represents different qualities of the scene for a compromise between visual detail and data size

  • 3D tile (modified/removed): The previous generic definition ("discrete spatial partition of a massive geospatial dataset") is replaced with the more specific "3DGS tile" definition to better align with 3DGS-specific requirements

Rationale: The original TR lacked an LOD definition and used a non-3DGS-specific definition for 3D tiles. The new terminology better reflects the technical requirements for 3DGS delivery.
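
To make the new terminology concrete, the two definitions can be sketched as a data structure. This is a hypothetical illustration; all class and field names are invented here and do not appear in the pCR:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class GaussianSet:
    """The 3D Gaussians of one tile at one LOD."""
    positions: np.ndarray  # (N, 3) splat centres
    rotations: np.ndarray  # (N, 4) quaternions
    scales: np.ndarray     # (N, 3) per-axis scales
    opacities: np.ndarray  # (N,)
    sh_coeffs: np.ndarray  # (N, K, 3) spherical-harmonics coefficients

@dataclass
class GaussianTile:
    """A 3DGS tile: a specific bounding volume plus the set of 3D Gaussians
    representing that volume at a given level of detail."""
    bounds_min: tuple  # bounding-box corner (x, y, z)
    bounds_max: tuple
    lod: int           # index into the scene's levels of detail
    gaussians: GaussianSet

# A scene is then a collection of tiles: several tiles may share the same
# bounding volume while differing only in `lod`.
tile = GaussianTile(
    bounds_min=(0.0, 0.0, 0.0), bounds_max=(1.0, 1.0, 1.0), lod=2,
    gaussians=GaussianSet(
        positions=np.zeros((10, 3)), rotations=np.zeros((10, 4)),
        scales=np.zeros((10, 3)), opacities=np.zeros(10),
        sh_coeffs=np.zeros((10, 16, 3)),
    ),
)
```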

2. Use Case Description Refinements (2nd Change)

Updates to Clause 5.3 - Exploration of Large 3DGS Environment

Terminology Harmonization:

  • Replaces generic "3D tiles" references with "3D Gaussians" in use case descriptions
  • Updates "3D tiles" to "3DGS tiles" in working assumptions, with a reference to the new technical clause
  • Retains LOD terminology, as it is widely understood in 3D graphics

Key Technical Aspects:

  • Adaptive delivery of 3D Gaussians at various LODs based on user pose and device capabilities
  • Selection process maintains a constant number of displayed splats for quality consistency
  • Interactive delivery mechanism for 3D Gaussian sets at various detail levels
  • Constrained navigation within captured regions

Working Assumptions Updates

Compression and Packaging:

  • 3DGS tiles with different LODs serialized into the delivery format
  • Signaling for spatial and LOD indices and dependencies
  • Editor's notes identify the need for:
    • Workflow documentation for different uplink/downlink traffic profiles
    • Characterization of Gaussian parameters requiring signaling
    • Evaluation of existing 3GPP media delivery frameworks

Transport and Delivery:

  • Interactive delivery with predictive prefetch
  • Edge-assisted content hosting for latency control
  • Buffering strategies to minimize latencies and visual artifacts

Decoding and Rendering:

  • UE parses 3DGS tile indices and manages GPU residency
  • Real-time splat-based rendering with tile/LOD switching
  • Navigation constraints based on capture information and collision detection
  • Editor's note on expressing the "allowed navigation volume"

3. New Clause on 3DGS Encapsulation and Delivery Formats (3rd Change)

This represents the major technical contribution, introducing a comprehensive new clause (Clause X) covering:

X.1 Introduction

Core Concepts: 3DGS scalability requires support for:

  1. Position-based random access of 3D Gaussians
  2. Delivery/rendering of different LODs

Technical Relationship:

  • Spatial random access and LOD are non-orthogonal: the same spatial volume can have different Gaussian sets for different LODs
  • The viewing frustum (derived from the user pose) determines the required spatial volumes and LODs
  • The frustum comprises position, orientation, horizontal/vertical FoV, and viewing distance
  • These mechanisms are relevant to both rendering efficiency and delivery optimization
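
The frustum-driven selection of spatial volumes and LODs can be sketched as follows. This is an illustrative sketch only, not an algorithm from the pCR: the frustum is approximated by a view cone, LOD is mapped from viewing distance, and all function and field names are hypothetical:

```python
import numpy as np

def select_tiles(tiles, cam_pos, cam_dir, fov_deg, max_dist, n_lods=3):
    """Pick, per visible spatial volume, the tile at the LOD implied by
    viewing distance.

    tiles: list of dicts with 'center' (3,), 'radius', 'volume_id', 'lod'.
    A tile is visible if its bounding sphere overlaps a cone approximating
    the viewing frustum; nearer volumes get finer (lower-index) LODs.
    """
    cam_pos = np.asarray(cam_pos, float)
    cam_dir = np.asarray(cam_dir, float)
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    half_fov = np.radians(fov_deg) / 2.0
    wanted = {}
    for t in tiles:
        to_tile = np.asarray(t["center"], float) - cam_pos
        dist = np.linalg.norm(to_tile)
        if dist - t["radius"] > max_dist:
            continue  # beyond the viewing distance
        # Angle between view direction and tile centre (cone test).
        cos_a = np.clip(to_tile @ cam_dir / max(dist, 1e-9), -1.0, 1.0)
        if np.arccos(cos_a) > half_fov + np.arctan2(t["radius"], max(dist, 1e-9)):
            continue  # outside the frustum cone
        lod = min(int(dist / max_dist * n_lods), n_lods - 1)  # 0 = finest
        if t["lod"] == lod:
            wanted[t["volume_id"]] = t
    return list(wanted.values())
```

Because the same volume exists at several LODs (the non-orthogonality noted above), the sketch keys the result by volume, keeping only the tile whose LOD matches the viewing distance.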

X.2 Requirements

Two Primary Requirements Identified:

  1. 3D Gaussian Set Identification: A method to organize 3DGS data into sets, enabling association of 3D Gaussians at different LODs with spatial volumes

  2. Frustum-Based Access: Method to identify and access required 3D Gaussians at different LODs using user's frustum for efficient delivery, access, and rendering

X.3 3DGS Tiles

Technical Definition:

  • Method to associate spatial volumes with 3D Gaussians through 3DGS tiles
  • A 3DGS tile is a spatial volume with a specific bounding volume plus a set of 3D Gaussians for a given LOD

An editor's note indicates that additional technical details are needed.

X.4 Related Compression Aspects

Compression Technology Requirements:

  1. LOD and Partial Delivery Support: Ability to support LOD and partial spatial data delivery without compression optimization dependency

  2. Optimized Compression: Real-time compression of 3DGS data optimized for LOD support and partial spatial delivery

Technical Impact

The changes establish a structured framework for:

  • Standardized terminology for 3DGS spatial organization
  • Clear requirements for encapsulation and delivery formats
  • A foundation for future work on compression, signaling, and delivery protocols
  • Alignment with interactive 6DoF streaming requirements from TR 26.928

The proposal moves from use-case-embedded technical concepts to a dedicated technical clause, providing clearer separation of concerns and enabling more detailed specification development.

Samsung Electronics Iberia SA
[FS_3DGS_MED] pCR on editorial changes

3GPP Document S4-260089 Summary

Document Information

  • Meeting: TSG-SA4 Meeting #135 (February 9-13, 2026, Goa, India)
  • Source: Samsung Electronics Co., Ltd.
  • Document Type: pCR (pseudo Change Request)
  • Target Specification: 3GPP TR 26.958 v0.1.1
  • Study Item: FS_3DGS_MED (3D Gaussian Splatting for Media)
  • Purpose: Agreement

Overview

This is an editorial change request for the 3D Gaussian Splatting for Media study item Technical Report. The document proposes non-technical corrections and clean-up modifications to improve the clarity and consistency of TR 26.958.

Main Technical Contributions

1. Reason for Change

The document identifies the need for editorial corrections and clean-up in the current version (v0.1.1) of TR 26.958. These changes are purely editorial in nature and do not affect the technical content or agreements previously made in the study.

2. Proposed Changes

The contribution proposes to incorporate editorial modifications to TR 26.958 v0.1.1. The specific editorial changes are contained in an attached document (not visible in the provided content).

Nature of Contribution

This is a maintenance-type contribution focused on:

  • Editorial corrections
  • Document clean-up
  • Improving readability and consistency

Note: The actual detailed editorial changes are not visible in the provided HTML document as they would be in the attachment referenced by the contribution.

Qualcomm Atheros, Inc.
[FS_3DGS_MED] glTF-based Representation Formats for 3D Gaussian Splats

Summary of S4-260119: glTF-based Representation Formats for 3D Gaussian Splats

Introduction and Scope

This contribution addresses Objective 2c of the FS_3DGS_MED Study Item ("Determine relevant formats") by providing a comprehensive analysis of glTF-based representation formats for 3D Gaussian Splatting. The document identifies a gap in TR 26.958 V0.1.1, which currently only mentions PLY as a storage format without comparative analysis of the emerging glTF-based format ecosystem from Khronos and MPEG.

The contribution proposes a two-layer architecture combining:

  • KHR_gaussian_splatting (Khronos) for canonical splat semantics
  • MPEG_gaussian_splatting_transport (MPEG-I Scene Description) for distribution and streaming capabilities

KHR_gaussian_splatting (Khronos Layer)

Core Attribute Semantics

The Khronos extension (review draft published August 2025) defines Gaussian splats as POINTS primitives within standard glTF 2.0 with the following attributes:

  • POSITION (VEC3, required): Splat center position using standard glTF base attribute
  • ROTATION (VEC4, required): Quaternion (x,y,z,w) for local axes orientation
  • SCALE (VEC3, required): Per-axis scale in log-space
  • OPACITY (SCALAR, required): Opacity in range [0,1]
  • SH_DEGREE_l_COEF_n (VEC3, conditional): Spherical harmonics coefficients organized by degree (0-3) and coefficient index for view-dependent lighting
  • COLOR_0 (VEC3/VEC4, recommended): Baseline color for fallback point-cloud rendering
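
As an illustration, a splat primitive carrying these attributes might look as follows (expressed here as a Python dict mirroring glTF JSON). The attribute names are those listed above; the accessor indices and the exact placement of the attributes relative to the extension object are assumptions based on this summary, not a normative excerpt from the draft:

```python
# Sketch of a glTF 2.0 mesh primitive carrying Gaussian splats as POINTS.
# Accessor indices (0..5) are placeholders for this illustration.
primitive = {
    "mode": 0,  # 0 = POINTS in glTF 2.0
    "attributes": {
        "POSITION": 0,            # VEC3, splat centre (required)
        "COLOR_0": 1,             # VEC3/VEC4, fallback point-cloud colour
        "ROTATION": 2,            # VEC4 quaternion (x, y, z, w)
        "SCALE": 3,               # VEC3, per-axis scale in log-space
        "OPACITY": 4,             # SCALAR in [0, 1]
        "SH_DEGREE_1_COEF_0": 5,  # VEC3, first degree-1 SH coefficient
    },
    "extensions": {
        "KHR_gaussian_splatting": {
            # Nested extensions (e.g. the MPEG transport layer) attach here
            # without duplicating the splat semantics.
            "extensions": {},
        }
    },
}
```

A client that ignores the extension still sees a valid POINTS primitive with POSITION and COLOR_0, which is the graceful-degradation path described below.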

Extensibility and Backward Compatibility

Key design features:

  • A nested extensions mechanism inside the KHR_gaussian_splatting object allows other extensions to add compression, alternative encodings, or processing without duplicating semantics
  • Graceful degradation: clients that do not recognize the extension can still render a standard point cloud using POSITION and COLOR_0
  • Provides a strong anchor for MPEG and 3GPP work targeting interoperable distribution and streaming

MPEG_gaussian_splatting_transport (MPEG Layer)

Architecture Approach

The MPEG extension is carried as a nested extension inside KHR_gaussian_splatting.extensions, avoiding semantic duplication and adding only transport-level features.

Transport-Level Features

Alternative SH Layouts

Two MPEG-specific SH coefficient storage modes alongside Khronos default:

  1. mpegProgressive layout:
     • Groups coefficients by SH degree (degrees 1, 2, and 3 as separate SCALAR accessors)
     • Efficient for progressive refinement
     • Receiver can render with only SH degree 0 data and incrementally fetch higher degrees
     • DC (degree 0) term reconstructed from COLOR_0.rgb or carried via KHR SH_DEGREE_0_COEF_0

  2. mpegPerChannel layout:
     • Separates coefficients by color channel (R, G, B)
     • More efficient for certain compression schemes

Progressive Download

  • Optional progressive ordering signaled by listing accessor indices in progressive.stages
  • Ordered from lower to higher fidelity
  • Receiver may initially fetch only first stage and progressively refine without re-decoding previous data
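
A receiver-side sketch of this behavior, assuming progressive.stages is an ordered list of accessor-index groups as described above; the function names are hypothetical, not from the extension:

```python
def fetch_progressively(stages, fetch_accessor, render):
    """Fetch accessor stages from lowest to highest fidelity, rendering
    after each stage so a usable picture appears before all data arrives.

    stages:         ordered list of accessor-index lists (progressive.stages)
    fetch_accessor: callable downloading one accessor's data by index
    render:         callable taking all data fetched so far
    """
    fetched = {}
    for stage in stages:
        for accessor_index in stage:
            fetched[accessor_index] = fetch_accessor(accessor_index)
        render(fetched)  # refine without re-decoding earlier stages
    return fetched
```

Because each stage only adds accessors, previously decoded data is reused untouched, which is the key property the extension signals.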

Timed Delivery for 4D Splats

  • Dynamic 4D Gaussian splat sequences supported using existing MPEG timed media mechanisms
  • Accessor treated as time-varying if and only if it carries MPEG_accessor_timed extension
  • Timed accessors backed by circular buffers as defined by MPEG-I Scene Description

Two-Layer Architecture Benefits for 3GPP

Architectural Summary

  • Layer 1 (Khronos): Canonical splat semantics (geometry, appearance, SH lighting) and fallback point-cloud path
  • Layer 2 (MPEG): Progressive download, timed delivery, and alternative SH layouts as nested extension

3GPP Service Integration Advantages

  1. Alignment with existing 3GPP specifications: glTF already adopted by TS 26.118 (Immersive teleconferencing) and TS 26.119 (MeCAR)

  2. 5GMS adaptive delivery mapping: Progressive download and timed delivery map naturally to 5G Media Streaming

  3. Bandwidth-adaptive quality: Progressive SH degree layout enables network/receiver control of SH levels to fetch, analogous to spatial/temporal layer selection in scalable video codecs

  4. Future-proof extensibility: Clear path for future compression extensions (e.g., from ongoing MPEG Gaussian Splat Coding exploration) and tiled spatial delivery without breaking backward compatibility

Format Comparison

PLY

  • De facto training output format
  • Raw float32 attributes without compression
  • Very large files (typically 200+ MB for a single scene at SH degree 3)
  • Limitations: No extensibility mechanism, no progressive delivery support, no scene graph, no standard metadata support (camera parameters, animation)

SPZ (Splat Zip)

  • Developed by Niantic as compact binary container
  • Applies quantization and packing (~90% size reduction vs PLY)
  • Extension under development in Khronos
  • Superior compression schemes (e.g., Qualcomm's L-GSC) also being considered
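
To illustrate where such size reductions come from, the following generic sketch quantizes float32 positions to 16-bit fixed point relative to the scene bounding box. This is not the actual SPZ or L-GSC scheme, just the basic quantize-and-pack idea that such containers build on:

```python
import numpy as np

def quantize_positions(positions, bits=16):
    """Map each float32 coordinate into [0, 2^bits - 1] relative to the
    bounding box, halving position storage; return the quantized values
    plus the parameters needed to dequantize."""
    positions = np.asarray(positions, np.float32)
    lo, hi = positions.min(axis=0), positions.max(axis=0)
    scale = (2 ** bits - 1) / np.maximum(hi - lo, 1e-9)
    q = np.round((positions - lo) * scale).astype(np.uint16)
    return q, lo, 1.0 / scale
```

Applying similar quantization and packing to rotations, scales, opacities, and SH coefficients is how container formats reach the ~90% overall reduction cited above.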

glTF + KHR_gaussian_splatting + MPEG transport

  • Full scene graph support (nodes, transforms, animations)
  • Standard extensibility
  • Backward-compatible fallback
  • MPEG transport layer for progressive and timed delivery
  • Signaling and usage of different compression schemes through proper extensions
  • Recommended as primary format path for 3GPP

Proposals for TR 26.958

The contribution proposes to include the following in TR 26.958 Section 4 and new subsection under Section 11:

  1. Document KHR_gaussian_splatting as the emerging industry baseline for 3DGS representation in glTF, including:
     • Attribute semantics
     • SH coefficient organization
     • Backward-compatible fallback via POINTS
     • Extensibility mechanism

  2. Document MPEG_gaussian_splatting_transport, being developed within MPEG-I Scene Description, including:
     • Progressive download
     • Timed delivery for dynamic 4D Gaussian splat sequences
     • Alternative SH coefficient layouts (mpegProgressive and mpegPerChannel)

  3. Document the two-layer architecture (Khronos semantics + MPEG transport) and its suitability for 3GPP service integration, noting alignment with the glTF-based approach in TS 26.118 and TS 26.119

Pengcheng Laboratory, China Mobile Com. Corporation
[FS_3DGS_MED] Pseudo-CR on Sport Example for Dynamic 3DGS Content Use Case

Summary of S4-260140: Sport Example for Dynamic 3DGS Content Use Case

Document Overview

This change request proposes adding a sports scenario example to TR 26.958 to illustrate the Dynamic 3DGS (3D Gaussian Splatting) content use case. The contribution is from Pengcheng Laboratory and China Mobile, targeting the FS_3DGS_MED study item.

Main Technical Contributions

Use Case Enhancement - Dynamic 3DGS Content (Section 5.4)

Core Use Case Description (Section 5.4.1)

The document enhances the existing Dynamic 3DGS content use case description with the following key characteristics:

  • Content Type: Time-varying 3DGS content depicting dynamic subjects/scenes (performers, dancers, singers, exhibitions, bands, sport actions)
  • Rendering Approach: Real-time rendering of 3DGS content sequences on the UE
  • Network Support: Delivery and rendering may be assisted through:
    • Partial delivery mechanisms
    • Network-assisted rendering
  • User Interaction: Viewpoint adjustment within a constrained navigation volume while the scene changes dynamically
  • Rendering Primitive: 3D Gaussian splats (as opposed to textured meshes or voxels used in volumetric video)

Scope Definition

  • Primary Focus: Delivery, decoding, and real-time rendering of pre-recorded dynamic 3DGS sequences via:
    • On-demand streaming
    • File download scenarios
  • Future Consideration: Live dynamic 3DGS capturing and delivery (feasibility-dependent, later stage)
  • Alignment: Corresponds to TR 26.928 Use Case 3: Streaming of Immersive 6DoF (non-live/on-demand variant)

Sports Action Example

Scenario Description

The CR introduces a basketball game segment as an illustrative example (Figure 5.1):

  • Content Representation: Dynamic scene represented as a time-indexed sequence of 3D Gaussian splats, encoding both:
    • Evolving motion of players
    • Surrounding environment

Playback Characteristics

  • Temporal Handling:
    • UE receives successive temporal segments
    • Continuous rendering to preserve temporal progression

  • Spatial Navigation:
    • User-controlled viewpoint adjustments including:
      • Limited rotation
      • Translation
      • Zoom
    • Enables observation from different perspectives without altering the temporal sequence

Navigation Constraints

  • Temporal Navigation: Driven by playback timeline
  • Spatial Navigation:
    • User-controlled within permitted range
    • Constrained to allowed-view volume derived from the original capture configuration
    • Ensures visual coherence and avoids out-of-distribution views
  • Combined Interaction: Time-continuous playback with interactive viewpoint exploration
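
The allowed-view-volume constraint can be illustrated with a minimal sketch, assuming the volume is modelled as an axis-aligned box (the contribution does not specify its shape):

```python
def clamp_viewpoint(pos, volume_min, volume_max):
    """Clamp a requested camera position to the allowed-view volume,
    modelled here as an axis-aligned box derived from the capture setup.
    Positions outside the box snap to its nearest face, so the user never
    reaches an out-of-distribution view."""
    return tuple(
        min(max(p, lo), hi) for p, lo, hi in zip(pos, volume_min, volume_max)
    )
```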

Technical Significance

This contribution provides a concrete, large-scale example for Dynamic 3DGS content use cases, specifically addressing:

  • Wide-area environments with complex background dynamics
  • Fast-moving subjects (athletes)
  • Traffic analysis requirements for extensive 3DGS environments
  • Requirement derivation for the FS_3DGS_MED study

The sports scenario serves as a representative example for understanding delivery, rendering, and interaction requirements for dynamic 3DGS content in challenging real-world conditions.

Pengcheng Laboratory, China Mobile Com. Corporation
Pseudo-CR on Dancer Example for Dynamic 3DGS Content Use Case

Summary of S4-260145: Pseudo-CR on Dancer Example for Dynamic 3DGS Content Use Case

Document Overview

This contribution proposes adding a detailed dancer scenario example to TR 26.958 as an illustrative use case for Dynamic 3D Gaussian Splatting (3DGS) content. The document is submitted by Pengcheng Laboratory and China Mobile Com. Corporation for SA4 Meeting #135.

Main Technical Contributions

Dynamic 3DGS Content Use Case Enhancement (Section 5.4)

General Description

The contribution expands the existing Dynamic 3DGS content use case description with the following key characteristics:

  • Content Type: Time-varying 3DGS content depicting dynamic subjects or scenes (performers, dancers, singers, exhibition moments, bands, sport actions)
  • Delivery Model: Pre-recorded dynamic 3DGS sequences via on-demand streaming or file download
  • Rendering: Real-time rendering on UE with potential network assistance (partial delivery or network-assisted rendering)
  • User Interaction: Local viewpoint adjustment within constrained navigation volume while scene changes dynamically over time
  • Rendering Primitive: 3D Gaussian splats (analogous to volumetric video but using splats instead of textured meshes or voxels)
  • Alignment: Corresponds to 3GPP TR 26.928 Use Case 3: Streaming of Immersive 6DoF (non-live/on-demand variant)

Dancer Scenario Example

The contribution introduces a comprehensive dancer performance example with the following technical specifications:

Scene Representation:

  • Dynamic 3DGS sequence representing a dance performance captured over a short temporal interval
  • Time-indexed sequence of 3D Gaussian splats encoding:
    • Continuous body motion
    • Pose transitions
    • Expressive gestures of one or multiple dancers
    • Relevant stage elements

Playback Characteristics:

  • UE receives successive temporal segments
  • Real-time rendering preserving temporal continuity and rhythm
  • Motion evolution according to the encoded timeline
  • Spatial structure coherence maintained across frames

User Interaction Model:

  • Viewpoint Adjustment: Interactive control within a constrained navigation volume derived from the original capture setup
  • Permitted Operations: Limited rotation, translation, or zoom
  • Benefits: Enhanced perception of choreography, spatial relationships between performers, and fine-grained motion details
  • Constraints: Visual consistency ensured while avoiding out-of-distribution views

Navigation Paradigm:

  • Temporal Navigation: Driven by playback timeline (time-continuous)
  • Spatial Navigation: User-controlled within permitted range
  • Combined Experience: Time-continuous playback with interactive viewpoint exploration

Scope Limitations

The contribution explicitly defines the following scope boundaries:

In Scope:

  • Delivery of pre-recorded sequences
  • Decoding of dynamic 3DGS content
  • Real-time rendering on mobile devices
  • On-demand streaming or file download

Out of Scope:

  • Live dynamic 3DGS capturing and delivery (may be considered later depending on feasibility)
  • Capture processes
  • Real-time communication

Technical Focus

The use case specifically targets:

  • Human-centric 3D Gaussian scene reconstruction
  • Capturing intricate details of human motion and dynamic appearance changes within a confined volume
  • A reference implementation for evaluating 3DGS rendering performance on mobile devices
  • High-fidelity character rendering

Visual Material

The contribution includes Figure 5.x illustrating the dancer scenario, showing time-indexed dynamic 3DGS sequence playback with temporal progression preservation and user viewpoint adjustment capabilities within the allowed navigation volume.

Pengcheng Laboratory, China Mobile Com. Corporation
[FS_3DGS_MED] Pseudo-CR on Enhanced Scenario for Avatar Communication Use Case

Summary of 3GPP Change Request S4-260147

Document Information

  • Source: Pengcheng Laboratory, China Mobile Com. Corporation
  • Title: [FS_3DGS_MED] Pseudo-CR on Enhanced Scenario for Avatar Communication Use Case
  • Specification: 3GPP Draft TR 26.958 v0.1.1
  • Meeting: TSG-SA4 Meeting #135, 9-13 February 2026, Goa, India

Main Objective

This contribution proposes an enhanced scenario for avatar-based communication that combines parametric human models with 3D Gaussian Splatting (3DGS) technology. The proposal aims to enable efficient real-time interactive communication by transmitting compact motion parameters to drive a deformable mesh while using 3DGS for high-fidelity appearance rendering.

Technical Contributions

Enhanced Avatar Communication Architecture

The proposal introduces a hybrid representation approach consisting of:

  • Deformable mesh representation driven by parametric human model parameters (e.g., SMPL-X for body/hands, FLAME for face)
  • 3D Gaussian Splat representation for appearance enhancement and fine detail capture
  • Separation of geometry and appearance to optimize transmission efficiency

Technical Processing Pipeline

Sender Side Processing

  • Capture: User captured using one or more cameras
  • Parameter Extraction: Geometric and animation parameters extracted using parametric models:
    • SMPL-X for body and hand motion
    • FLAME for facial geometry and expression
  • Representation Generation:
    • Deformable human mesh reconstruction based on the extracted parameters
    • 3D Gaussian Splat generation for appearance details (fine surface detail, hair, clothing)
  • Spatial Alignment: 3DGS representation aligned with the deformable mesh

Transmission Strategy

  • Base Avatar: Transmitted once at session setup or updated occasionally:
    • Rigged mesh with skeletal structure and blendshapes
    • Static 3DGS representation
  • Animation Stream: Time-varying model parameters transmitted during the session:
    • Compact parametric representation
    • Low-latency transmission for interactive communication
  • Update Frequency: 3DGS updated at a lower frequency than animation parameters
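
The transmission strategy above can be sketched as a message sequence. This is a hypothetical illustration: message types, field names, and the update period are invented for clarity and do not come from the contribution:

```python
def session_messages(base_avatar, frames, gs_update_every=30):
    """Yield the messages a sender might emit: the base avatar once at
    session setup, then compact per-frame animation parameters, with the
    3DGS appearance refreshed at a lower frequency."""
    yield {"type": "base_avatar", "payload": base_avatar}
    for i, params in enumerate(frames):
        if i > 0 and i % gs_update_every == 0:
            yield {"type": "gs_update", "frame": i}  # occasional refresh
        yield {"type": "anim_params", "frame": i,
               "body": params["smplx"], "face": params["flame"]}
```

The point of the sketch is the bandwidth asymmetry: the heavy mesh and 3DGS payload goes out once, while the steady-state stream carries only small parameter vectors.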

Receiver Side Processing

  • Animation Application: Received parameters drive avatar motion
  • Deformation Propagation: 3DGS follows mesh deformation
  • Rendering: Composite approach combining:
    • Mesh-based shading
    • 3DGS-based appearance contributions
  • Viewpoint Adaptation: Supported within application-defined constraints

Working Assumptions

The proposal defines several key working assumptions:

Capture and Animation Extraction:

  • Real-time capture using one or more cameras
  • Real-time derivation of animation parameters from captured signals

Representation:

  • Deformable mesh with associated rig
  • Associated 3DGS for appearance rendering
  • Static or low-frequency updated 3DGS representation

Transmission:

  • One-time or occasional base avatar transmission
  • Continuous time-varying animation parameter transmission
  • Low-latency requirement for interactive communication

Decoding and Rendering:

  • Animation parameter application at the receiver
  • Combined mesh and 3DGS rendering
  • Constrained viewpoint adaptation support

Key Innovation

The main technical innovation is the separation of geometric animation (transmitted as compact parametric data) from appearance representation (using 3DGS), enabling photorealistic real-time avatar communication with efficient bandwidth utilization suitable for bidirectional interactive applications.

Tencent Cloud
[FS_3DGS_MED] Pseudo-CR on objective metrics for 3DGS

Summary of 3GPP Change Request S4-260164

Document Information

  • Source: Tencent
  • Title: Pseudo-CR on objective metrics for 3DGS
  • Specification: 3GPP TR 26.958 v0.1.1
  • Meeting: TSG-SA4 Meeting #135, February 2026

Main Technical Contributions

1. Introduction of Objective Metrics Framework for 3DGS

This change request proposes the adoption of a standardized objective quality evaluation methodology for 3D Gaussian Splatting (3DGS) content. The contribution addresses the current gap in TR 26.958, which contains only placeholders for metrics and reference implementations. The proposal leverages the mpeg-gsc-metrics software tool recently developed by MPEG for computing objective quality metrics.

2. Rationale for Standardization

The CR identifies three key requirements for the study:

  • Image-based evaluation: Enables calculation of objective image-based metrics for comparing source and decoded 3DGS content
  • Industry-standard metrics: Supports commonly used image quality metrics (PSNR, SSIM, IVSSIM, etc.)
  • Viewpoint management: Provides flexible handling of test views with exact camera parameter reuse or custom testing scenarios

The proposed software is a fork of MPEG metrics software intended for storage in the 3GPP git repository to facilitate updates and future experiments.

3. Technical Changes to TR 26.958

3.1 New Section 6.4.1: Objective Metrics

The CR introduces a comprehensive objective metrics section defining:

Supported Metrics:

  • PSNR and MSE: Computed in both RGB and YUV color spaces with weighted averages
  • Object Masked (OM) Metrics: PSNR and SSIM variants computed only on valid pixels defined by the union of object masks
  • Perceptual Metrics: SSIM and IVSSIM
  • Geometric Statistics: Occupancy rate measuring the percentage of valid pixel coverage

Dual-Mode Rasterizer:

  • CPU rasterizer: Software-based implementation ensuring bit-exact rendering regardless of hardware/OS (recommended for normative results)
  • GPU rasterizer: OpenGL-based accelerated rendering for visual inspection and rapid experiments

Evaluation Process:

  1. Viewpoint generation from the original PLY file or explicit definition
  2. Rendering using the standardized rasterizer (CPU or GPU)
  3. Metric computation on rendered pairs
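
For orientation, the core of the image-based evaluation (PSNR over rendered pairs, optionally restricted to masked pixels, plus the occupancy rate) can be sketched as follows. This is a simplified illustration of the metric definitions, not the mpeg-gsc-metrics implementation:

```python
import numpy as np

def psnr_rgb(ref, dec, mask=None, peak=255.0):
    """MSE over valid pixels, then PSNR in dB. If given, `mask` restricts
    the computation to occupied pixels, as in the object-masked (OM)
    metric variants."""
    ref = np.asarray(ref, float)
    dec = np.asarray(dec, float)
    if mask is not None:
        ref, dec = ref[mask], dec[mask]
    mse = np.mean((ref - dec) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def occupancy_rate(mask):
    """Percentage of pixels covered by the union of object masks."""
    return 100.0 * np.asarray(mask, bool).mean()
```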

3.2 New Section 12.4: Objective Metrics Reference Implementation

The CR adds detailed usage documentation for the 3DGS-Metrics command-line tool:

12.4.1 Basic Metric Computation:

  • Simple command-line interface for comparing source and decoded PLY files

12.4.2 Evaluation Using Embedded Camera Parameters:

  • The --useCameraPosition option enables rendering using camera parameters stored in PLY header comments
  • Parameters are typically inserted by content preparation tools using COLMAP photogrammetry data
  • Ensures exact camera intrinsics and extrinsics without external configuration

12.4.3 Evaluation with Loaded Viewpoints:

  • Support for external viewpoint files specifying camera poses
  • CPU rendering option for bit-exact results

12.4.4 Video Generation:

  • The -s flag enables generation of rendered video sequences (Source, Decode, Butterfly comparison)
  • Facilitates visual inspection alongside metric computation

12.4.5 Output Results Format:

  • Detailed per-frame and global average statistics
  • Comprehensive reporting of MSE, PSNR, and SSIM in RGB and YUV color spaces
  • Example output for the Bartender sequence at 1920x1080 resolution:
    • RGB PSNR (avg): 49.79 dB
    • YUV PSNR (avg): 53.96 dB
    • SSIM (avg): 0.998386
    • 100% occupancy

Conclusion

The CR proposes adopting the 3DGS metrics software as the reference tool for objective quality evaluation to ensure all contributions are measured against the same baseline. This standardization will facilitate technical work by providing consistent, comparable, and reproducible results across different proponents within the study.

Tencent Cloud
[FS_3DGS_MED] Pseudo-CR on 3DGS renderer and performance benchmarking

Summary of 3GPP Change Request S4-260168

Document Information

  • Source: Tencent
  • Title: Pseudo-CR on 3DGS renderer and performance benchmarking
  • Specification: 3GPP TR 26.958 v0.1.1
  • Study: FS_3DGS_MED (3D Gaussian Splatting for Media)

Main Objective

This change request proposes adding technical content to TR 26.958 regarding a reference implementation of a 3DGS player for mobile platforms, including mobile renderer features and preliminary experimental benchmark results obtained on commercial mobile devices.

Technical Contributions

1. Mobile Renderer Architecture (Section 12.4.1)

The document proposes a hybrid architecture for the 3DGS mobile player:

  • Native Layer (C++):
    • Implements core rendering using OpenGL ES 3.2
    • Tile-based rasterizer inspired by the original 3DGS method
    • CPU sorting or Compute Shaders for parallel sorting (e.g., Radix sort)
    • Vertex and Fragment shaders for rendering

  • Application Layer (Java/Kotlin):
    • UI management
    • AR runtime lifecycle for camera tracking
    • Resource management

  • Capabilities:
    • Supports standard .ply file loading
    • Real-time interaction (rotation, translation, scaling)
    • Benchmarking mode with dynamic parameter variation

2. Rendering Process Details (Section 12.4.1 - second subsection)

Key technical aspects of the mobile rendering pipeline:

  • Depth Sorting: Critical back-to-front sorting performed by the CPU each frame for proper alpha blending (unlike Z-buffer-based mesh rendering)
  • Sorting Implementation: CPU-based Radix Sort preferred over GPU Compute Shaders on mobile for thermal balance and driver compatibility
  • Data Management:
    • Gaussian attributes loaded into VRAM at startup
    • FP32 textures/buffers for precision in covariance and color calculations
    • Only sorted indices transferred CPU→GPU per frame
    • Vertex shader uses texelFetch for direct reads from persistent buffers
    • Minimizes CPU-GPU bandwidth while maintaining visual fidelity
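
The per-frame depth sorting step can be sketched as follows; this is a simplified illustration of the back-to-front ordering (the player described above uses a CPU Radix Sort, for which NumPy's argsort stands in here):

```python
import numpy as np

def sorted_indices_back_to_front(positions, cam_pos, view_dir):
    """Return splat indices ordered far-to-near along the view direction,
    as required each frame for correct alpha blending. Only this index
    array would be re-uploaded to the GPU; the splat attributes themselves
    stay resident in VRAM."""
    positions = np.asarray(positions, float)
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Signed depth of each splat along the viewing axis.
    depths = (positions - np.asarray(cam_pos, float)) @ view_dir
    return np.argsort(-depths)  # descending depth = back to front
```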

3. Benchmark Methodology (Section 12.4.2)

Proposed benchmarking approach:

  • Dynamic parameter modification during runtime
  • Thermal management API usage for consistent clock speeds
  • AR runtime disabled during benchmarking for fair comparison
  • Variable parameters:
    • Number of Gaussians: 5,000 to 485,436 points
    • Spherical Harmonics degree: 0 (diffuse only) to 3 (full view-dependence)

4. Experimental Results (Section 12.4.3)

Test Configuration

  • Device: Google Pixel 9a (Tensor G4, mid-range, March 2025)
  • Application: Tencent 3DGS mobile player
  • Build: Release mode with optimizations
  • Test duration: 30 seconds per configuration for thermal stability
  • Model: bicycle.ply (485,436 points)
  • Power measurement: Android Battery Manager API

Impact of Number of Points (SH degree=3)

Key findings from Table 1 and Figure 2:

  • 5,000 points: 355 FPS, 24% CPU, 6% GPU, 1.45W
  • 150,000 points: 56 FPS, 47% CPU, 88% GPU, 1.47W (approaching GPU saturation)
  • 200,000 points: 45 FPS, 48% CPU, 99% GPU, 1.33W (GPU saturated)
  • 485,436 points: 19 FPS, 55% CPU, 100% GPU, 1.22W

Conclusion: GPU saturation occurs at ~150k points (87% load) and full saturation at 200k points. Beyond saturation, frame rate decreases linearly with point count.

Impact of Spherical Harmonics Degree (485k points)

Key findings from Table 2 and Figure 3:

  • SH Degree 0: 20.41 FPS, 55% CPU, 100% GPU, 1.45W
  • SH Degree 3: 18.05 FPS, 55% CPU, 100% GPU, 0.99W
  • Performance impact: ~10.8% FPS reduction from degree 0 to 3

Conclusion: Moderate frame rate impact when increasing SH degree from 0 to 3.

5. Overall Analysis (Section 12.4.2.3)

Key conclusions:

  • Real-time rendering of complex 3DGS scenes is feasible on current-generation mobile hardware
  • Scene complexity management required (< 200k visible points recommended)
  • Performance variations observed between identical experiments due to:
    • Background processes
    • Dynamic power management
  • Results should be considered as trends rather than fixed values

Editor's note: Additional benchmarks planned to evaluate impact of other improvements (memory optimization, quantization, sorting algorithms, etc.)

Rationale for Change

  • Provides concrete data to validate real-time 3DGS feasibility on mobile hardware
  • Identifies performance bottlenecks (CPU sorting, memory transfer, GPU rasterization, power consumption)
  • Supports study objectives for reference implementations and performance characteristics
  • Guides future specification work with empirical evidence
Tencent Cloud
[FS_3DGS_MED] Pseudo-CR on 3DGS delivery workflows based on capability negotiation

Summary of S4-260169: Pseudo-CR on 3DGS Delivery Workflows Based on Capability Negotiation

Document Overview

This contribution from Tencent proposes updates to TR 26.958 v0.1.1 to define adaptive delivery workflows for 3D Gaussian Splats (3DGS) content in mobile environments. The document addresses the heterogeneity in both 3DGS scene complexity and UE capabilities through capability negotiation mechanisms.

Motivation and Problem Statement

The contribution identifies a critical gap in the current study: static delivery workflows for 3DGS content pose significant risks, including:

  • Poor Quality of Experience (QoE) when content complexity exceeds UE rendering capabilities
  • Device overheating and thermal throttling
  • Inefficient resource utilization across diverse mobile devices

The heterogeneity exists along two dimensions:

  • Content complexity: Ranging from simple objects (thousands of primitives) to massive scenes (millions of primitives)
  • Device capabilities: Significant variation in GPU power, thermal limits, memory, and battery constraints

Main Technical Contributions

Adaptive Delivery Framework (Clause 9.2)

The contribution proposes updating clause 9.2 with a comprehensive adaptive workflow that introduces:

  1. Capability Reporting Mechanism: UEs report both static and dynamic capabilities to the server
    • Static capabilities: Maximum visible Gaussians at target frame rate (e.g., 30fps), highest supported Spherical Harmonics (SH) degree (0-3), maximum memory, supported quantization/compression formats, GPU rendering capacity, CPU performance class, native screen resolution/frame rate, memory bandwidth
    • Dynamic state: Current thermal status (throttling level), battery level, available GPU/CPU compute headroom, real-time battery charge

  2. Rendering Budget Concept: A negotiated constraint that ensures target frame rates and maximizes session duration based on device capabilities
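
The static/dynamic split can be pictured as a simple structured report; the key names below are hypothetical (the contribution lists the categories, not a concrete syntax):

```python
# Illustrative UE capability report; keys are invented for this sketch,
# only the categories come from the contribution.
capability_report = {
    "static": {
        "max_visible_gaussians_at_30fps": 200_000,
        "max_sh_degree": 3,              # 0 (diffuse) .. 3 (full view-dependence)
        "max_memory_mb": 2048,
        "supported_formats": ["ply", "spz"],
        "native_resolution": [2400, 1080],
        "native_frame_rate": 120,
    },
    "dynamic": {
        "thermal_throttling_level": 0,   # 0 = no throttling
        "battery_percent": 78,
        "gpu_headroom_percent": 40,
        "cpu_headroom_percent": 55,
    },
}
```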

Two Negotiation Modes

The contribution defines two distinct approaches aligned with TR 26.928 principles:

Server-Centric Decision Mode (Clause 9.2.2.2)

In this approach, the UE acts as a data provider while the server makes adaptation decisions:

Workflow Steps:

  1. Hardware assessment: UE evaluates capabilities via system checks (potentially using OpenXR APIs)
  2. Capability reporting: UE transmits a comprehensive capability report (CPU, GPU, memory constraints)
  3. Server decision: Server analyzes the report and determines the optimal delivery strategy
  4. Content adaptation: Server processes the 3DGS model through:
    • Pruning low-opacity or spatially insignificant splats
    • LOD selection from pre-generated levels
    • SH degree reduction (stripping high-order coefficients, transmitting only Direct Color components)
    • Quantization adjustments
  5. Data delivery: Server streams the optimized 3DGS payload
  6. Local adaptation: UE performs final on-device optimizations (further pruning/merging) to fit runtime constraints
  7. Rendering: UE executes the rendering pipeline

Key characteristics:

  • Server employs internal logic or lookup tables to map raw metrics to a rendering budget
  • Server determines primitive count limits (e.g., N primitives for a specific GPU under thermal stress)
  • Reduces both network bandwidth and client rendering load
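
The content-adaptation step (pruning plus SH reduction) can be sketched as follows; `Splat`, its fields, and the example values are illustrative, not an interface from the contribution:

```python
from dataclasses import dataclass

@dataclass
class Splat:
    position: tuple   # (x, y, z)
    opacity: float    # 0..1
    sh: list          # SH coefficients, direct-color (degree 0) first

def adapt_to_budget(splats, max_points, max_sh_degree):
    """Sketch of server-side adaptation: drop the least opaque splats
    until the negotiated point budget is met, then strip SH bands above
    the negotiated degree (degree 0 keeps only the direct color)."""
    kept = sorted(splats, key=lambda s: s.opacity, reverse=True)[:max_points]
    n_coeffs = (max_sh_degree + 1) ** 2   # coefficients per color channel
    for s in kept:
        s.sh = s.sh[:n_coeffs]
    return kept

# A two-splat scene reduced to a one-splat, diffuse-only payload:
scene = [Splat((0, 0, 0), 0.9, list(range(16))),
         Splat((1, 0, 0), 0.1, list(range(16)))]
payload = adapt_to_budget(scene, max_points=1, max_sh_degree=0)
print(len(payload), len(payload[0].sh))   # 1 1
```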

Client-Centric Decision Mode (Clause 9.2.2.3)

In this approach, the UE determines its own requirements and explicitly requests specific content characteristics:

Workflow Steps:

  1. Hardware analysis: UE performs an internal audit of hardware resources and API support
  2. Format determination: UE calculates the optimal 3DGS representation format (point budget, SH degree) based on continuous self-assessment of frame time, thermal headroom, and hardware capabilities
  3. Content request: UE explicitly specifies required format parameters (quantization levels, SH orders, point budget)
  4. Server-side adaptation: Server processes source content to match the UE-specified constraints
  5. Data delivery: Server streams the optimized payload
  6. Local refinement: UE applies final local adaptations
  7. Rendering: UE executes the rendering pipeline

Key characteristics:

  • Decision-making responsibility is delegated to the UE
  • UE continuously monitors its own performance metrics
  • Server acts as a content filter/selector fulfilling explicit UE requests
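
The continuous self-assessment amounts to a small feedback controller over the requested point budget; the constants below are illustrative, not from the contribution:

```python
def update_point_budget(budget, frame_time_ms, thermal_throttled=False,
                        target_ms=33.3, floor=5_000, ceiling=485_000):
    """Shrink the requested point budget when frames run long or the
    device throttles; grow it slowly when there is clear headroom."""
    if thermal_throttled or frame_time_ms > target_ms * 1.1:
        budget = int(budget * 0.8)    # back off quickly
    elif frame_time_ms < target_ms * 0.8:
        budget = int(budget * 1.05)   # recover gradually
    return max(floor, min(ceiling, budget))

print(update_point_budget(100_000, frame_time_ms=50))   # 80000
print(update_point_budget(100_000, frame_time_ms=20))   # 105000
```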

Use Case Alignment

The proposed workflows address requirements from:

  • Clause 5.2: Static 3DGS scene delivery
  • Clause 5.4: Dynamic 3DGS content delivery

Technical Benefits

The contribution ensures:

  • Frame rate stability through capability-aware content delivery
  • Thermal management by preventing device overheating
  • Prevention of application crashes and frame drops
  • Optimized battery consumption
  • Maximized session duration
  • Content complexity aligned with hardware processing limits

Proposed Changes

The document proposes modifications to Clause 9.2 of TR 26.958, specifically:

  • Adding new clause 9.2.1 (Overview)
  • Adding new clause 9.2.2 (Workflow with capability negotiation)
  • Adding new clause 9.2.2.1 (Objectives)
  • Adding new clause 9.2.2.2 (Server-centric 3DGS adaptation) with Figure 2
  • Adding new clause 9.2.2.3 (Client-centric 3DGS adaptation) with Figure 3

Nokia
[FS_3DGS_MED] On Software and Services

Summary of S4-260186: 3DGS Software and Services

Document Overview

This contribution from Nokia provides an overview of consumer-facing 3D Gaussian Splatting (3DGS) software and services for inclusion in the draft TR for the study on 3DGS for Media (FS_3DGS-MED). The document proposes two main changes: addition of normative references and a new clause describing available 3DGS software products.

Main Technical Contributions

Addition of Normative References

The contribution proposes adding multiple new references to support the technical content:

Foundational 3DGS References:

  • Kerbl et al., the foundational 3DGS paper (ACM TOG 2023)
  • Existing 3GPP references (TR 21.905, TR 26.928)

Image Processing and Rendering References:

  • SSIM (Structural Similarity Index), Wang et al. 2004
  • GPU sorting algorithms, Satish et al.
  • Alpha compositing, Porter et al. (SIGGRAPH '84)

Recent 3DGS Research (2024-2025):

  • VGGT: Visual Geometry Grounded Transformer
  • DepthSplat: Connecting Gaussian Splatting and Depth
  • AnySplat: Feed-forward 3DGS from unconstrained views
  • GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting
  • iLRM: Iterative Large 3D Reconstruction Model
  • MetaSapiens: Real-time neural rendering with efficiency-aware pruning
  • Hybrid Transparency Gaussian Splatting (HTGS)
  • Sort-free Gaussian Splatting via Weighted Sum Rendering

Software and Service References: KIRI Engine, Niantic Scaniverse, Polycam, Luma AI, Jawset Postshot, LichtFeld Studio, SuperSplat, Gauzilla

New Clause: Software and Products (11.3)

The contribution proposes adding a comprehensive overview of consumer 3DGS software categorized by platform and capabilities:

Mobile Applications

KIRI Engine:

  • Platform: iOS and Android
  • Processing: Cloud-based
  • Capabilities: Photogrammetry or LiDAR capture with 3DGS generation
  • Export: .ply and other formats
  • Limitations: Limited control over splat parameters; quality depends on capture device

Niantic Scaniverse:

  • Platform: Smartphone (iOS/Android)
  • Processing: Local on-device
  • Pipeline: SfM for camera pose estimation + Gaussian optimization
  • Export: .ply, .spz formats
  • Limitations: Mobile GPU/thermal constraints limit scene size and density; no manual SH order adjustment or splat pruning

Polycam:

  • Platform: Web, iOS, Android
  • Processing: Cloud-based
  • Capabilities: Photos/videos to Gaussian splats; also supports mesh/point cloud
  • Export: .ply for splats, standard formats for meshes
  • Limitations: No control over splat parameters; non-deterministic cloud processing results

Desktop Applications

Jawset Postshot:

  • Platform: Windows desktop
  • Processing: Local GPU-based
  • Workflow: Alignment, optimization, and visualization
  • Export: .ply format
  • Limitations: Limited parameter tuning compared to research tools; no low-level SH coefficient control

LichtFeld Studio:

  • Platform: Linux and Windows desktop
  • Type: Open source
  • Processing: Local GPU-based
  • Input Requirements: Pre-computed SfM data (images, point clouds, camera locations)
  • Features: 3D Gaussian Unscented Transform (3DGUT) rendering, Background Modulation for black segments, Timelapse for intermittent quality checks, masking support
  • Export: .ply format

Web-Based Viewers/Editors

SuperSplat and Gauzilla:

  • Platform: Browser-based
  • Rendering: Client-side via WebGL or WebGPU
  • Capabilities: Rendering, sharing, transformations, cropping, basic filtering
  • Limitations: No training or reconstruction support; lower rendering fidelity vs desktop GPU pipelines
  • Use Case: Post-processing and quick visualization

Hybrid Platform

Luma AI:

  • Platform: iOS and Web
  • Processing: Cloud-based
  • Input: Short handheld videos or image sets
  • Technology: Neural scene representations rendered as Gaussian splats or hybrid neural radiance fields
  • Pipeline: Pose estimation and scene normalization before splat optimization
  • Limitations: No raw Gaussian parameters or SH coefficients exposed; no export capability (as of February 2026); oriented toward visualization rather than pipeline integration

Summary Table

The contribution includes a comparative table summarizing:

  • Product name
  • Application type (Mobile/Desktop/Web)
  • Processing location (Cloud/Local)
  • Export format options

This table provides a quick reference for understanding the landscape of available 3DGS tools and their capabilities.

Key Observations

The contribution demonstrates the rapid proliferation of 3DGS tools across different platforms and use cases, from mobile capture applications to desktop processing tools and web-based viewers. The tools vary significantly in:

  • Processing location (cloud vs. local)
  • User control over parameters
  • Export capabilities
  • Target use cases (capture, processing, viewing, sharing)

This overview provides important context for standardization work by documenting the current state of consumer 3DGS software ecosystem.

Nokia
[FS_3DGS_MED] On Mapping to 3GPP services

Summary of S4-260187: On Mapping to 3DGS Services

1. Introduction and Background

This contribution from Nokia addresses objectives 3 and 5 of the FS_3DGS_MED study (approved in SP-251190 at SA#109). The objectives include:

  • Objective 5: Mapping relevant workflows to 3GPP services
  • Objective 3: Studying content generation aspects including network-based processing and Edge/Cloud operations for 3DGS representations

The document notes that the static 3DGS content generation workflow (documented in draft TR 26.958 v0.1.0) consists of:

  • Capture
  • Structure from Motion (SfM) estimation for sparse point cloud reconstruction
  • Gaussian initialization
  • Training and optimization

The contribution emphasizes that SfM and training are compute-heavy operations requiring architectural consideration. The document reviews two SA4 media service architectures as potential frameworks: the 5G Media Delivery architecture (TS 26.501, TS 26.510) and IMS (TS 23.228, TS 26.114), noting precedents from R18/R19 services like split rendering, avatar communications, media messaging, and spatial computing.

2. Media Service Enabler (MSE) Framework

Architecture Overview

The contribution proposes leveraging the MSE framework (TR 26.857) which provides Application Providers with well-defined client and network-side functions. The reference architecture includes:

Defined Functions:

  • Application: UE-resident function leveraging the MSE
  • MSE Client: Logical internal UE function for a specific MSE
  • MSE Application Function (AF): Dedicated application function for an MSE
  • MSE Application Server (AS): Dedicated application server for an MSE

Defined Interfaces/APIs:

  • MSE-1: Provisioning API for Application Providers
  • MSE-2: Optional ingest/egest API for content processing
  • MSE-3: Inter MSE AF-MSE AS communication
  • MSE-4: User plane interface between MSE Client and Server
  • MSE-5: Control API for configuration and management
  • MSE-6: Client APIs for internal application communication
  • MSE-7: External device APIs for accessing device functions
  • MSE-8: Application APIs for information exchange between Application and Provider

3DGS Workflow Mapping to MSE

The proposed mapping for 3DGS content generation and sharing:

  • AP provisions the service through MSE-1
  • Session handling control plane information is exchanged between MSE AF and MSE Client over MSE-5
  • Media communication runs over MSE-4
  • Application uses device functions (cameras) via MSE-7 to capture images/video
  • Captured media is transmitted over MSE-4 to the MSE AS for SfM and training
  • Generated 3DGS or rendered views are shared back to the UE over MSE-4

3. IMS Data Channel (IMS DC) Architecture

Architecture Overview

The contribution proposes IMS DC (TS 23.228 Annex AC) as an alternative architecture, noting IMS as the backbone for conversational media in 3GPP networks.

New Functional Entities:

  • Data Channel Signalling Function (DCSF): Manages data channel control logic, determines service availability, manages bootstrap and application data channel resources at the MF via the IMS AS, and handles interworking between application data channel media and audio/video media
  • Media Function (MF): Provides media resource management and forwarding of data channel media traffic, manages bootstrap and application data channel resources, anchors application data channels in P2P scenarios, relays traffic between UEs and DC-AS, and handles transcoding. SA2 specifies that the MF supports rendering (S4-251420) but not AIML functionality (S4-260022)
  • DC Application Repository (DC-AR): Stores verified data channel applications for retrieval by the DCSF and download to the UE
  • DC Application Server (DC-AS): Interacts with the DCSF for resource control and traffic forwarding, serves as the endpoint for application data channels, and communicates with the UE through the MF. DC-AS functionalities are not 3GPP-specified

DC-Relevant Reference Points:

  • DC1: Between DCSF and IMS AS
  • DC2: Between IMS AS and MF
  • DC3: Between DCSF and NEF
  • DC4: Between DCSF and DC Application Server
  • DC5: Between DCSF and DC-AR
  • N70/Cx/Dx: Between CSCF and HSS (updated for DC signalling)
  • N71/Sh: Between IMS AS and HSS (updated for DC signalling)

Data Channel Media Handling Reference Points:

  • MDC1: Between MF and DCSF
  • MDC2: Between MF and DC-AS, between BAR and DC-AS, and between MF and BAR
  • MDC3: Between DCSF and DC-AS

3DGS Workflow Mapping to IMS DC

The proposed mapping for 3DGS generation over IMS DC:

  • Service provider provides the IMS DC application to the DC-AR
  • Provisions and configures resources via NEF and DC4
  • UE downloads the IMS DC app
  • IMS DC app sets up an application data channel with the DC-AS for service configuration
  • Uses device camera(s) to capture images/video
  • Transmits captured media to the DC-AS for SfM and 3DGS training
  • Generated 3DGS is shared back to the UE or sent to the MF for view-based rendering

4. Conclusion and Proposal

The contribution proposes to develop mappings for 3DGS content generation and sharing workflows to both an MSE framework and to IMS DC architecture, considering both frameworks appropriate for 3DGS service deployment.

InterDigital New York
Dynamic 3DGS complexity

Summary of S4-260191: Dynamic 3DGS Complexity

Document Overview

This contribution from InterDigital addresses an open editor's note in TR 26.958 regarding scene complexity impacts on Dynamic 3D Gaussian Splatting (3DGS) feasibility for mobile platforms. The document proposes text for Clause 6.3 (Complexity) which is currently empty.

Main Technical Contributions

Scene Complexity Impact on Mobile Platforms

The contribution identifies that dynamic scene complexity significantly affects the feasibility of dynamic 3DGS content on mobile devices. Key complexity drivers include:

  • Number of Gaussians: Direct impact on memory and processing requirements
  • Magnitude of motion: Affects rendering load and temporal prediction efficiency
  • Topology changes: Increases complexity when scene structure varies
  • Variability of Gaussian attributes: Impacts both storage and processing

These parameters directly constrain:

  • Achievable frame rate
  • Session duration
  • Visual quality

FFS: Determining maximum scene complexity that representative UE categories can sustain.
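
To make the memory dimension concrete, here is a rough footprint estimate for one uncompressed frame, assuming the attribute layout of the original INRIA .ply format (this layout is background knowledge, not stated in the contribution):

```python
# Per-Gaussian float32 attributes in the original .ply layout:
# position (3) + normal (3) + SH color (3 DC + 45 rest at degree 3)
# + opacity (1) + scale (3) + rotation quaternion (4) = 62 floats.
FLOATS_PER_GAUSSIAN = 3 + 3 + 48 + 1 + 3 + 4
BYTES_PER_GAUSSIAN = FLOATS_PER_GAUSSIAN * 4   # 248 bytes

def frame_megabytes(num_gaussians):
    """Uncompressed size of one dynamic-3DGS frame, in MB."""
    return num_gaussians * BYTES_PER_GAUSSIAN / 1e6

print(round(frame_megabytes(500_000), 1))   # 124.0 MB per frame at 500k Gaussians
```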

Compression Complexity Considerations

The document highlights that highly dynamic content (multi-person scenes, self-occlusions, cloth/hair motion) presents specific compression challenges:

  • Reduces benefits of temporal prediction
  • Requires more frequent keyframes
  • Weakens temporal coherence assumptions in coding algorithms
  • Increases both encoding and decoding complexity proportional to scene intrinsic complexity and temporal variability
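
A coder facing weakened temporal coherence typically falls back to intra-coded keyframes; a minimal sketch of such a decision rule (thresholds are illustrative, not from the contribution):

```python
def needs_keyframe(tracked_fraction, frames_since_key,
                   min_tracked=0.7, max_interval=30):
    """Insert an intra-coded keyframe when temporal coherence breaks
    down (too few Gaussians still tracked from the last keyframe) or
    on a regular interval as a safety net."""
    return tracked_fraction < min_tracked or frames_since_key >= max_interval

print(needs_keyframe(0.9, 5))    # False: coherent scene, recent keyframe
print(needs_keyframe(0.5, 5))    # True: heavy occlusion/topology change
```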

Dynamic 3DGS Format Categories

The contribution proposes categorizing dynamic 3DGS representations based on temporal association of Gaussian primitives:

  • Tracked: Gaussians maintain temporal associations across frames
  • Partially tracked: Some temporal associations maintained
  • Untracked: No temporal associations

These categories differ in:

  • Efficiency for temporal prediction
  • Robustness to motion/topology changes

FFS: Comparison of these formats regarding bitrate efficiency, latency, UE processing, and visual quality.
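
The bitrate advantage of tracked formats comes from temporal prediction: when a Gaussian keeps its identity across frames, only small attribute residuals need to be coded, whereas an untracked format must code each frame from scratch. A minimal sketch for positions:

```python
def delta_encode(prev_positions, curr_positions):
    """Tracked format: code each position as a delta from the same
    Gaussian's position in the previous frame; small motion yields
    small, highly compressible residuals."""
    return [(cx - px, cy - py, cz - pz)
            for (px, py, pz), (cx, cy, cz) in zip(prev_positions, curr_positions)]

def delta_decode(prev_positions, deltas):
    return [(px + dx, py + dy, pz + dz)
            for (px, py, pz), (dx, dy, dz) in zip(prev_positions, deltas)]

prev = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
curr = [(0.5, 0.0, 0.0), (1.0, 2.5, 3.0)]
deltas = delta_encode(prev, curr)
assert delta_decode(prev, deltas) == curr   # lossless round trip
```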

Limitations of Original 3DGS Format

The document notes that the original INRIA 3DGS representation has inherent limitations for dynamic content:

  • Designed for per-scene optimization
  • Static topology assumption
  • Frame-independent Gaussian attributes
  • Does not exploit temporal redundancy
  • Not optimized for dynamic content

Recent academic developments (references [1]-[4]) explore alternatives addressing these limitations.

FFS: Whether multiple dynamic-oriented 3DGS formats may coexist.

Conclusion

The contribution proposes adding the provided text to Section 6.3.X or another suitable section of TR 26.958 to address the open editor's note on complexity considerations for Dynamic 3DGS content.

Tencent Cloud
[FS_3DGS_MED] Pseudo-CR on 3DGS delivery workflows for large 3DGS scenes

Summary of 3GPP Technical Document S4-260239

Document Overview

This is a pseudo-CR to TR 26.958 v0.1.1 addressing viewport-adaptive delivery workflows for large-scale 3D Gaussian Splatting (3DGS) scenes in the context of FS_3DGS_MED study. The contribution focuses on enabling delivery of massive 3DGS environments (e.g., city-scale digital twins) to mobile devices with constrained resources.

Problem Statement

Large-scale 3DGS scenes (as defined in clause 5.4) cannot be fully loaded into mobile device memory due to:

  • Bandwidth limitations
  • Memory constraints
  • Rendering capacity restrictions

Static delivery workflows would result in:

  • Excessive latency
  • Immediate resource saturation
  • Inability to deliver complete scenes

Simple capability negotiation alone is insufficient for these use cases.

Main Technical Contributions

Viewport-Adaptive Workflow (Clause 9.2.3)

The document proposes a new clause 9.2.3 introducing a viewport-adaptive workflow that extends existing capability negotiation mechanisms by incorporating continuous spatial feedback.

Core Mechanism

  • Dynamic Spatial Context: UE continuously transmits 6DoF pose and Field of View (FoV) to server
  • Metadata Format: Adheres to formats defined in TR 26.928 (XR services)
  • Rendering Budget Management: Server optimizes 3DGS stream relative to user's perspective while staying within negotiated rendering budget

Spatial Optimization Strategies (Clause 9.2.3.2)

Two approaches are defined:

Tiled Environments with LOD

  • Environment partitioned into spatial tiles
  • Multiple levels of detail (LOD) per tile
  • Server selects appropriate LOD based on:
    • Proximity to user
    • Visibility within frustum
  • LOD Distribution:
    • High-density tiles (e.g., LOD 4) for viewport center
    • Lower-density tiles (e.g., LOD 1-3) for peripheral/distant areas
  • Concentrates point budget where user is looking
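
Distance-driven LOD selection per tile can be sketched as below; the LOD numbering follows the example above, but the distance thresholds are illustrative:

```python
import math

def select_lod(tile_center, user_pos, max_lod=4):
    """Pick a per-tile LOD from the user's proximity: the densest level
    near the viewport center, coarser levels for peripheral/distant tiles."""
    d = math.dist(tile_center, user_pos)
    if d < 10:
        return max_lod   # e.g., LOD 4 at the viewport center
    if d < 30:
        return 3
    if d < 80:
        return 2
    return 1             # distant background tiles

print(select_lod((5, 0, 0), (0, 0, 0)))     # 4
print(select_lod((120, 0, 0), (0, 0, 0)))   # 1
```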

Unstructured Scenes

  • Real-time frustum culling, pruning, and merging
  • High point density in center of FoV
  • Aggressive simplification in peripheral zones
  • Dynamic primitive removal/merging for non-visible areas
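
In its simplest form, the real-time culling step reduces to an angular test against the view direction; a coarse sketch only (a real renderer tests all six frustum planes):

```python
import math

def in_frustum(point, cam_pos, view_dir, half_fov_deg):
    """Keep a primitive if the angle between the (unit) view direction
    and the vector from the camera to the point is within the half-FoV."""
    v = [p - c for p, c in zip(point, cam_pos)]
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return True   # point at the camera: keep
    cos_angle = sum(a * b for a, b in zip(v, view_dir)) / norm
    return cos_angle >= math.cos(math.radians(half_fov_deg))

print(in_frustum((0, 0, 10), (0, 0, 0), (0, 0, 1), 45))   # True: ahead
print(in_frustum((10, 0, 0), (0, 0, 0), (0, 0, 1), 45))   # False: off to the side
```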

Server-Centric Decision Workflow (Clause 9.2.3.3)

Two-Phase Approach:

Static Initialization Phase

  1. Hardware Capabilities Assessment: UE evaluates resources via system APIs or OpenXR
  2. Capability Reporting: UE transmits comprehensive capability report to server
  3. Server-Side Capability Decision: Server defines global rendering budget (max point count, SH degree) for session

Dynamic Delivery Phase

  1. Viewpoint and FoV Determination: UE calculates current 6DoF pose and camera frustum
  2. Viewpoint and FoV Information: UE sends spatial metadata to server
  3. Content Adaptation Based on FoV: Server selects visible spatial tiles and adapts content (pruning, merging, LOD selection, quantization) to fit budget and user's view
  4. Optimized 3DGS Data: Server streams adapted content payload (N points) to UE
  5. Local Adaptation: UE performs final on-device adjustments if necessary
  6. 3DGS Rendering: UE renders the scene

Key Characteristic: Server maintains control over rendering budget throughout session based on initial capability assessment.

Client-Centric Decision Workflow (Clause 9.2.3.4)

UE-Driven Approach:

Initialization Phase

  1. Hardware Assessment Analysis: UE performs internal audit of hardware capabilities
  2. Decision of Best Representation Format: UE selects optimal configuration (max point count, SH degree)
  3. 3DGS Format Request: UE requests content from server, specifying desired format parameters (point budget, SH degrees, quantization)

Delivery Phase

  1. Viewpoint and FoV Determination: UE calculates current spatial position and FoV
  2. Viewpoint and FoV Information: UE sends spatial metadata to server
  3. Content Adaptation Based on FoV: Server filters the scene spatially (frustum culling/tile selection) and adapts the data to match the format requested in step 3 of the initialization phase
  4. Optimized 3DGS Data: Server delivers visible content conforming to requested parameters
  5. Local Adaptation: UE applies final local refinements for runtime stability
  6. 3DGS Rendering: UE renders received content

Key Characteristic: UE explicitly requests specific representation format during initialization; server's role restricted to spatial operations while adhering to UE-imposed format constraints.

Alignment with Existing Specifications

  • Builds upon capability negotiation described in clause 9.2.2
  • Aligns with viewport-dependent streaming principles from TR 26.928 (XR services)
  • Addresses use case defined in clause 5.4 (Large 3DGS scenes)

Proposal

The document proposes to agree the changes introducing clause 9.2.3 and its subclauses (9.2.3.1-9.2.3.4) to TR 26.958, including two workflow diagrams (Figures 5 and 6) and one illustration of tile/LOD selection (Figure 4).

Samsung Research America
[FS_3DGS_MED] High level media data workflows for All-in-client configuration

Change Request Summary: High Level Media Data Workflows for All-in-Client Configuration

CR Metadata

  • Document: 3GPP TR 26.958 v0.1.1
  • CR Category: B (addition of feature)
  • Release: Rel-20
  • Work Item: FS_3DGS_MED (3D Gaussian Splatting Media Study)
  • Source: Samsung Electronics Co. Ltd.

Purpose

This CR adds high level media data workflows for the All-in-client configuration to the 3D Gaussian Splatting (3DGS) media study. The workflows describe how different 3DGS service use cases can be realized when processing steps primarily run on the UE.

Technical Contributions

New Clause 9.1: All-in-Client Configuration

9.1.1 Description

Defines the All-in-client configuration as workflows where functionality primarily runs on the UE for the use cases described in clause 5 of the technical report.

9.1.2 Media Workflow Steps

Workflow Description

The CR identifies the following key workflow steps that can be executed on the UE:

  1. Scene Capture
    • Utilizes UE cameras (rear or front) on mobile devices
    • Supports multiple viewpoint capture for coverage and parallax
    • Includes application-guided user interaction
    • Collects auxiliary signals (references clause 5.2)

  2. 3DGS Model Generation
    • Generation of static 3DGS models on the UE (subject to device capability)
    • Referenced to clause 5.2

  3. Animation Stream Generation
    • Creation of time-aligned animation streams for 3D Avatar animation
    • Referenced to clause 5.5

  4. Packaging and Distribution
    • 3DGS assets (static objects, scenes, or dynamic sequences) and animation streams are packaged
    • Distribution via multiple channels: MMS, OTT messaging, or download
    • Supports UE-to-UE or UE-to-network device transmission
    • Referenced to clauses 5.2 and 5.5

  5. Asset Reception and Storage
    • UE receives one or more 3DGS assets
    • Storage in local memory or GPU memory
    • Supports dynamic 3DGS scene content via file delivery (clause 5.4)

  6. Rendering
    • Gaussian selection based on Level of Detail (LOD)
    • LOD selection dependent on:
      • User preferences
      • UE device capabilities and characteristics
      • Camera pose
      • Display resolution
    • Support for time-aligned animation streams
    • Referenced to clauses 5.2 and 5.5

Configuration Characteristics

The CR defines three key characteristics for the All-in-client configuration:

  1. Latency/Performance
    • Dependent on UE device capabilities for capture, generation, and rendering operations

  2. Scalability
    • Limited by UE device capabilities:
      • Local memory
      • GPU memory
      • Decoding capabilities

  3. Network Usage
    • Network only used for distribution/asset transfer
    • No network interaction during playback
    • All viewpoint updates and navigation handled locally after 3DGS data reception

Impact Assessment

The CR states that if not approved, the study would be incomplete, indicating this is a foundational contribution to the 3DGS media workflows study.

Samsung Research America
[FS_3DGS_MED] High level media data workflows for Client-Server configuration

Summary of 3GPP TR 26.958 Change Request

Document Information

  • Meeting: 3GPP TSG-S4 Meeting #135, Goa, India (9-13 February 2026)
  • Document Number: S4-260247
  • CR Type: Category B (addition of feature)
  • Release: Rel-20
  • Work Item: FS_3DGS_MED (3D Gaussian Splatting Media Study)

Main Purpose

This CR introduces high level media data workflows for Client-Server configuration in the context of 3D Gaussian Splatting (3DGS) service delivery. This complements the existing all-in-client configuration by defining workflows where functionality is split between the UE client and the network server.

Technical Contributions

New Clause 9.2: Client-Server Configuration

9.2.1 Description

  • Defines media data workflows where functionality is split between client and server (network)
  • Supports interactive navigation in large or dynamic 3DGS scenes
  • Enables network-assisted processing for resource-intensive operations

9.2.2 Media Workflow Steps

9.2.2.1 Workflow Description

The CR identifies the following workflow steps that execute in the Client-Server configuration:

Server-Side Operations:

  • 3DGS Content Generation: Generation of 3DGS content from 2D captures of scenes (references clause 5.2)
  • Dynamic Content Generation: Creation of dynamic 3DGS content and region-based parts of 3DGS scenes based on the 3DGS model for adaptive delivery
  • Adaptive Selection: Selection of 3D tiles and their Level of Detail (LOD) based on:
    • User movement
    • UE device capabilities
  • Packaging and Distribution: 3DGS assets packaged in the network and delivered via:
    • MMS
    • OTT messaging
    • Download services

Client-Side Operations:

  • Content Reception: UE receives one or more of:
    • 3DGS assets (clause 5.2)
    • 3D tiled LODs (clause 5.3)
    • Dynamic 3DGS scene content via file delivery, partial delivery, or on-demand streaming (clause 5.4)
  • Storage: Content stored in local memory or GPU memory
  • Rendering: UE renders:
    • 3DGS assets, selecting Gaussians based on LODs dependent on user preferences and UE device capabilities and characteristics (camera pose, display resolution, etc.)
    • 3D tiled LODs fetched using adaptive delivery (clause 5.3)
    • 3D Avatars using time-aligned animation streams (clause 5.5)

9.2.2.2 Characteristics

The CR defines key characteristics of the client-server configuration:

Latency/Performance:

  • Dependent on network and application latency
  • Influenced by:
    • Capabilities of the network server generating 3DGS content
    • UE device rendering capabilities

Scalability:

  • Enhanced scalability compared to the all-in-client configuration
  • Leverages theoretically infinite network resources
  • Enables more use cases

Network Usage:

  • Content generation
  • Rendering (full or partial)
  • Distribution

Network Interaction During Playback:

  • Selection of 3D tiles and LODs
  • Sending user pose information
  • Temporal updates
  • Optional partial or full rendering support

Affected Clauses

  • Clause 9.2: New clause added for Client-Server configuration

Relationship to Existing Work

This CR builds upon:

  • Clause 5 use cases (referenced throughout)
  • Clause 9.1 All-in-client configuration (complementary approach)
  • Various delivery mechanisms already defined in the study (MMS, OTT messaging, file delivery, streaming)

Samsung Research America
[FS_3DGS_MED] Mapping 3DGS to 3GPP services with All in UE configuration

Summary of 3GPP TR 26.958 Change Request

Document Information

  • CR Number: S4-260249
  • Specification: 3GPP TR 26.958 v0.1.1
  • Work Item: FS_3DGS_MED (3D Gaussian Splatting for Media)
  • Category: B (addition of feature)
  • Release: Rel-20

Purpose

This CR addresses the mapping of 3D Gaussian Splatting (3DGS) services to 3GPP services and specifications, specifically for the "All in UE" configuration.

Main Technical Contributions

Clause 10: Mapping to 3GPP Services/Specifications

This CR introduces a new clause (Clause 10) that maps high-level media data workflows for 3DGS to different 3GPP services. The mapping covers two configurations, with this CR specifically detailing the "All in UE" configuration.

10.1: All in UE Configuration Mapping

In this configuration, 3DGS content is treated as downloadable or message-based assets. The CR provides a comprehensive mapping table that covers the following functional areas:

Content Generation

  • Functions: Scene capture, static 3DGS model generation, time-aligned animation stream generation for animating 3D Avatars
  • 3GPP Mapping: 3DGS/XR Application on the UE
  • Reference: Media-Aware Application of Media Delivery architecture (TS 26.501)

3DGS File Delivery

  • 3GPP Mapping:
  • MMS (TS 26.140, TS 26.143)
  • RCS messaging
  • HTTP file transfer
  • Reference: Media Access Function of Media Delivery architecture (TS 26.501)
  • Functionality: Provides upload/download function for sending and receiving 3DGS content

Storage

  • 3GPP Mapping: UE Local storage (no 3GPP-specific mapping)

Rendering and Playback

  • 3GPP Mapping: 3DGS/XR Application on the UE
  • Reference: Media-Aware Application (TS 26.501)

Key Technical Notes

NOTE 1 - File-based Delivery Requirements:

  • 5G latency or jitter requirements do not apply (strict 5G QoS is not necessary)
  • Low Packet Error Rate and reliable delivery required
  • Standard 5G bearers specified in TS 23.501 are adequate to carry 3DGS content
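Since reliability for file-based delivery comes from the transport (e.g. TCP under HTTP) rather than from special 5G QoS treatment, an application-level integrity check is the usual safeguard. A minimal sketch, assuming a simple HTTP download with a SHA-256 checksum (the URL scheme and checksum handling are illustrative, not defined by the CR):

```python
import hashlib
import urllib.request

def verify_integrity(data: bytes, expected_sha256: str) -> bool:
    """Application-level end-to-end check: the CR asks for reliable
    delivery with a low packet error rate, but no strict 5G QoS."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

def download_3dgs_asset(url: str, expected_sha256: str) -> bytes:
    """Fetch a 3DGS file over plain HTTP(S) on a standard 5G bearer."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    if not verify_integrity(data, expected_sha256):
        raise ValueError("checksum mismatch: retry the download")
    return data
```

The same pattern applies whether the asset arrives via MMS, RCS, or HTTP: the bearer is best-effort and correctness is confirmed end to end.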

NOTE 2 - Storage: No 3GPP-specific mapping required

Technical Implications

The CR establishes that for the All in UE configuration:

  1. 3DGS content follows a file-based delivery model rather than streaming
  2. Existing 3GPP services (MMS, RCS, HTTP) are sufficient for content delivery
  3. Standard 5G bearers are adequate without requiring enhanced QoS
  4. The architecture aligns with the existing Media Delivery framework in TS 26.501

Samsung Research America
[FS_3DGS_MED] Mapping 3DGS to 3GPP services with Client-Server configuration

3GPP TR 26.958 Change Request Summary

Document Information

  • CR Number: Pseudo CR for TR 26.958 v0.1.1
  • Category: B (addition of feature)
  • Release: Rel-20
  • Work Item: FS_3DGS_MED
  • Source: Samsung Electronics Co. Ltd.

Purpose

This CR addresses the mapping of 3D Gaussian Splatting (3DGS) services to 3GPP services and specifications for the Client-Server configuration. This complements the existing All-in-UE configuration mapping.

Main Technical Contributions

Clause 10.2: Client-Server Configuration Mapping

The CR introduces a comprehensive mapping table that defines how different 3DGS workflow functions map to existing 3GPP services when operating in a Client-Server configuration. In this configuration, 3DGS is delivered as either an interactive XR service or 6DoF media streaming.

Content Generation

  • UE-side 2D Capture: Maps to Media-Aware Application (TS 26.501) and Split Rendering Client (TS 26.565)
  • Network-side 3DGS Generation: Maps to (Edge) Media AS of Media Delivery architecture (TS 26.501), including:
    • 3DGS scene generation from 2D capture
    • Dynamic 3DGS content
    • Region-based parts of 3DGS scenes

Caching

  • 3DGS Model and Tile Caching: Utilizes 5G Edge CDN infrastructure

Delivery

  • Streaming/Real-time Communication: Supports tiled LOD streaming using:
    • Adaptive media delivery protocols (DASH, HLS, RTP, QUIC)
    • Partial delivery or on-demand streaming
  • 3GPP Mapping: (Edge) Media AS and Media Access Function (TS 26.501), Split Rendering Server (TS 26.565)
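Tiled LOD streaming as described here boils down to the client choosing, per tile, which representation to request based on the viewer's position. A hedged sketch of distance-based selection (the `Tile` structure, thresholds, and URL naming are assumptions for illustration, not defined by the CR):

```python
import math
from dataclasses import dataclass

@dataclass
class Tile:
    tile_id: str
    center: tuple    # (x, y, z) centre of the tile's bounding volume
    lod_urls: list   # lod_urls[0] = coarsest ... lod_urls[-1] = finest

def select_lod(tile: Tile, viewer_pos: tuple,
               near: float = 5.0, far: float = 50.0) -> str:
    """Pick one representation per tile: finest LOD inside `near`,
    coarsest beyond `far`, linearly interpolated in between."""
    d = math.dist(tile.center, viewer_pos)
    n = len(tile.lod_urls)
    if d <= near:
        return tile.lod_urls[-1]
    if d >= far:
        return tile.lod_urls[0]
    frac = (far - d) / (far - near)   # 1.0 at near, 0.0 at far
    return tile.lod_urls[min(int(frac * n), n - 1)]
```

The resulting URL would then be fetched through whichever adaptive protocol (DASH, HLS, RTP, QUIC) the session uses, with only the visible tiles requested at high detail.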

Network Rendering

  • Edge Rendering: Partial or full network edge rendering
  • 3GPP Mapping: (Edge) Media AS (TS 26.501) and Split Rendering Server (TS 26.565)

UE Rendering/Playback

  • Client-side Rendering: Maps to Media-Aware Application (TS 26.501) and Split Rendering Client (TS 26.565)

Pose/LOD Reporting

  • Uplink Reporting: Maps to:
    • Media-Aware Application (TS 26.501) and Split Rendering Client (TS 26.565) for pose/LOD capture
    • Real-time/conversational service interfaces (TS 26.506) for transfer of pose and LOD information to generate view/LOD dependent 3DGS content
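The uplink report carries the viewer pose and an LOD hint so the network can generate view-dependent content. As a sketch of what such a message might contain (the JSON field names are illustrative assumptions; the CR names the information to transfer, not a wire format):

```python
import json
import time

def make_pose_lod_report(position, orientation_quat, requested_lod: int) -> str:
    """Serialize a pose/LOD uplink report as JSON for the real-time
    service interface. Field names are hypothetical."""
    report = {
        "timestamp_ms": int(time.time() * 1000),  # capture time, for server-side prediction
        "position": list(position),               # viewer position (x, y, z)
        "orientation": list(orientation_quat),    # viewer orientation quaternion (x, y, z, w)
        "requested_lod": requested_lod,           # LOD hint for view-dependent content
    }
    return json.dumps(report)
```

In practice the client would send such reports at a fixed cadence over the TS 26.506 real-time/conversational interfaces.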

5G QoS Considerations (NOTE 1)

The CR identifies key 5G QoS requirements for 3DGS service delivery (marked as FFS for final determination):

Performance Requirements

  • High bit rates/bandwidth/throughput: Leveraging 3GPP eMBB
  • Low to Ultra Low Latencies: Utilizing 5G URLLC
  • Low jitter delivery

XR-Specific QoS Features

  • New 5QI for XR services
  • PDU Set based QoS with parameters:
    • PDU Set Error Rate
    • PDU Set Delay Budget
    • PDU Set Size
  • Alternative QoS Profiles
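Grouped together, the three PDU Set parameters named above describe how a set of packets carrying one media unit (e.g. one 3DGS tile update) should be treated. A minimal sketch of the parameter set; the example values are illustrative only, not taken from any 3GPP specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PduSetQos:
    """The three PDU Set QoS parameters listed in the CR."""
    pdu_set_error_rate: float      # tolerable fraction of lost PDU Sets
    pdu_set_delay_budget_ms: int   # per-PDU-Set delay budget
    pdu_set_size_bytes: int        # expected PDU Set size

# Hypothetical profile for a view-dependent 3DGS tile stream
example_profile = PduSetQos(pdu_set_error_rate=1e-4,
                            pdu_set_delay_budget_ms=15,
                            pdu_set_size_bytes=64_000)
```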

Network Architecture Features

  • Service Stability: Using 5G Network Slicing (e.g., dedicated network slice for 3DGS delivery) per TS 23.501
  • Edge Computing: Caching with edge CDNs and network processing (generation, rendering) per TS 26.501 and TS 23.558
  • Multicast/Broadcast: Distribution to multiple users per TS 26.502

Additional Considerations (NOTE 2)

For Avatar-related 3DGS services, TS 26.264 may be applicable.

Impact

This CR completes the study on mapping 3DGS to 3GPP services by addressing the Client-Server configuration, which is essential for network-assisted 3DGS delivery scenarios including edge rendering and adaptive streaming use cases.

Samsung Research America
[FS_3DGS_MED] Mapping 3DGS to 5QI

Summary of 3GPP TR 26.958 Change Request

Document Information

  • CR Number: S4-260253
  • Specification: TR 26.958 v0.1.1
  • Work Item: FS_3DGS_MED (Study on 3D Gaussian Splatting for Media)
  • Category: B (addition of feature)
  • Release: Rel-20

Main Objective

This CR proposes to add a new clause (6.X) to TR 26.958 addressing the mapping of 3D Gaussian Splatting (3DGS) services to 3GPP 5G QoS Identifier (5QI) parameters as specified in TS 23.501.

Technical Contributions

Background on 5QI (Clause 6.X.1)

The CR provides a comprehensive table of relevant pre-defined 5QI values from TS 23.501 that have similar QoS characteristics to 3DGS services, including:

  • GBR Resources:
    • 5QI 1-4: Conversational voice/video, real-time gaming, buffered streaming (PDB: 50-300 ms, PER: 10⁻² to 10⁻⁶)
    • 5QI 71-76: Live uplink streaming (PDB: 300-500 ms, PER: 10⁻⁴ to 10⁻⁸)
    • 5QI 80: Low latency eMBB/AR applications (PDB: 10 ms, PER: 10⁻⁶)
  • Delay-Critical GBR:
    • 5QI 88: Motion tracking data, split AI/ML inference (PDB: 10 ms, PER: 10⁻³)
    • 5QI 89-90: Visual content for cloud/edge/split rendering (PDB: 15-20 ms, PER: 10⁻⁴)
  • Non-GBR Resources:
    • 5QI 5-10: IMS signaling, buffered streaming, interactive gaming (PDB: 100-1100 ms)

3DGS Application Flows for 5QI Mapping (Clause 6.X.2)

The CR identifies distinct application flows requiring QoS treatment:

  1. Delivery of 3DGS media content application flows
  2. Delivery from/to UE of static 3DGS scene content
  3. Delivery from network to UE of dynamic or view-based 3DGS content
  4. Delivery of user pose, gaze information, LOD information from UE to network for network-assisted rendering or delivery
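To make the intended outcome concrete, the flows above could be paired with candidate 5QI values drawn from the clause 6.X.1 table. This is purely illustrative: the CR explicitly leaves the actual flow-to-5QI mapping for further study, and the flow names below are invented for the sketch:

```python
# Illustrative only: the CR leaves the real flow-to-5QI mapping FFS.
# Candidate values come from the pre-defined 5QIs of TS 23.501
# surveyed in clause 6.X.1.
CANDIDATE_5QI = {
    "static_3dgs_file_transfer": 9,   # non-GBR buffered download (5QI 5-10 family)
    "dynamic_view_based_3dgs": 90,    # visual content for cloud/edge/split rendering
    "pose_lod_uplink": 88,            # motion tracking data, 10 ms PDB
}

def candidate_5qi(flow: str) -> int:
    """Look up an illustrative candidate 5QI for a 3DGS application flow."""
    return CANDIDATE_5QI[flow]
```

If no existing value fits a flow's QoS/QoE expectations, the CR's third recommendation applies: liaise with SA2 to define a new 5QI.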

Recommendations (Clause 6.X.3)

The CR establishes the following recommendations for the study outcome:

  • Mapping Identification: A mapping of 3DGS application flows to appropriate 5QI values should be identified
  • Reference 5QI Values: Existing 5QI values (from clause 6.X.1) with similar QoS/QoE expectations and RAN resource priority should be used as references when determining appropriate 5QI values and QoS characteristic limits for 3DGS services
  • New 5QI Definition: If existing 5QI values are insufficient, liaison with appropriate 3GPP groups (likely SA2) should be initiated to define new 5QI values and corresponding QoS characteristics specifically for 3DGS services

Rationale

The CR addresses a gap in the FS_3DGS_MED study by providing guidance on how 3DGS services should be treated within the 5G System QoS framework, ensuring proper traffic handling behavior through appropriate QoS Flow configuration.
