# Discussion on AI Traffic Trends for 6G Media

## 1. Introduction

This contribution addresses Work Tasks 2b (Traffic characteristics) and 2d (Media communication for emerging AI services) from the 6G Media study, focusing on AI media traffic analysis. The document provides insights into popular AI applications and their traffic generation patterns, and proposes an organizational structure for the TR clauses.

## 2. Impact of AI Applications on Mobile Traffic

### 2.1 General Observations

- AI-powered applications are emerging as contributors to mobile data traffic
- As of 2025, AI-related network traffic across mobile carriers is still in preliminary phases
- Expected to become a significant contributor to 6G network traffic as adoption increases

### 2.2 Current AI Traffic Types

Four broad categories of consumer AI applications have been identified:

1. **Chat and conversation**: Text-based chats with general-purpose chatbots (e.g., ChatGPT) and voice conversations with AI services. Includes task-specific use cases (scene recognition, solving handwritten math problems). Some use cases take images as conditioning input, increasing UL data volumes and rates.

2. **Document generation**: Creation of longer texts and formatted documents (PDFs, presentations). Prompts include text, voice, documents, and images.

3. **Image generation**: Creation of images from scratch based on prompts, plus AI-powered image manipulation. Adoption is entertainment-driven among younger demographics. Heavy traffic impact on the network.

4. **Video generation**: AI-based video creation. Throughput-intensive in DL, while image inputs drive relatively high UL volumes.

**Key Technical Characteristics:**
- Cloud-based AI inferencing creates bursts in uplink traffic
- Uses existing web-based protocols (e.g., WebRTC for live audio/video)
- Existing codecs (AVC, HEVC) used for encoding before transport
- Text and images are base-64 encoded and encapsulated in JSON (OpenAI API, Gemini API)
- Agentic AI apps are becoming more common
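The base-64/JSON encapsulation noted above has a direct traffic impact: base-64 inflates binary payloads by roughly 4/3 before they even hit the uplink. A minimal sketch, assuming an illustrative request shape (the field names below are hypothetical, merely resembling typical cloud AI APIs such as the OpenAI or Gemini APIs):

```python
import base64
import json

# Stand-in for a ~30 kB JPEG sent as a conditioning input to a prompt.
image_bytes = bytes(30_000)
encoded = base64.b64encode(image_bytes).decode("ascii")

# Hypothetical request body: text prompt plus base-64 image wrapped in JSON.
payload = json.dumps({
    "prompt": "Solve the handwritten equation in this image.",
    "image": {"mime_type": "image/jpeg", "data": encoded},
})

# Base-64 expands the binary by ~4/3, adding to uplink volume per request.
overhead = len(encoded) / len(image_bytes)
print(f"payload bytes: {len(payload)}, base-64 overhead: {overhead:.2f}x")
```

The ~33% encoding overhead, multiplied across image- and video-conditioned prompts, is one reason these apps push uplink volumes up faster than classic media consumption does.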

### 2.3 AI Traffic Trends

**Shifting UL/DL Ratios:**
- Uplink data growing faster than downlink traffic
- Driven by conditioning inputs (images) transmitted to AI inference factories
- Data volume spread per app session is documented (Figure 1)

**Rising Data Volumes:**
- Multi-modal, user-friendly experiences increasing overall traffic
- Users "talking with their data" and interacting with AI assistants
- Sharing photos and videos from smartphones to refine prompts

**Sensitivity to Latency:**
- Conversational AI services respond non-linearly to extended latency
- Application-level reaction to network conditions varies by application
- Example case study: an AI app's response time grew linearly with inserted latency up to 0.5s, after which the response became non-linear; at ~1.5s of inserted latency, response time grew by almost twice the inserted latency (Figure 2)

**Agentic AI Opportunities:**
- AI agents can shift inference loads and network traffic away from peak hours
- Operating in scheduled, off-peak cycles ensures results are ready when needed while avoiding congestion

**Current Traffic Statistics:**
- AI traffic constitutes 0.06% of total traffic in observed mobile network
- 74% downlink, 26% uplink

### 2.4 Agentic AI and Traffic Impact

**Architecture:**
- LLM-driven autonomous agent architecture with LLM as core reasoning engine
- Additional components for planning, memory management, and interaction with external tools
- Multi-agent systems with collaborative reasoning, persistent memory, and autonomous decision-making

**Operational Characteristics:**
- Agentic tasks span multiple steps: data search, analysis, document generation in defined formats
- Example: PDF-format travel plans including flights, accommodation, meeting schedules, budget limitations
- Tested AI agents typically operated for 10-20 minutes
- Data volumes were roughly in line with the other AI apps analyzed
- Outputs are more data-rich, partially offset by interim step results that are not sent to the smartphone

**Protocols for Agentic Communication:**

1. **Remote Procedure Calls (RPC)**: Used to run tasks on remote servers

2. **Model Context Protocol (MCP)**: Open-source standard for connecting AI applications (e.g., LLMs like Claude, ChatGPT) to external systems (local files, databases), tools (search engines, calculators), and workflows. Uses JSON-RPC 2.0 as underlying RPC protocol.

3. **Agent2Agent (A2A) Protocol**: Open standard enabling seamless communication and collaboration between AI agents to solve complex tasks. Complementary to MCP. Provides standard methods and data structures for agent-to-agent communication over HTTPS, irrespective of underlying implementation. MCP can expose AI agents as tools to other agents, while A2A provides inter-agent communication.

**NOTE:** Agentic AI apps, like other AI apps, typically use existing transport protocols (e.g., HTTPS for A2A, JSON-RPC for MCP) and data types (e.g., encoded audio, video, text) for data exchange over the network.
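To illustrate the NOTE, the sketch below constructs an MCP-style tool invocation as it would appear on the wire: a JSON-RPC 2.0 envelope carried over HTTPS. The method and parameter layout follow the `tools/call` shape described in the MCP documentation; the tool name and its arguments are hypothetical:

```python
import json

# JSON-RPC 2.0 envelope for an MCP-style tool call.
# "flight_search" and its arguments are illustrative, not a real MCP tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "flight_search",
        "arguments": {"origin": "HEL", "destination": "NRT", "date": "2026-06-01"},
    },
}

# Serialized form as it would be carried in an HTTPS request body.
wire_bytes = json.dumps(request).encode("utf-8")
print(f"JSON-RPC request, {len(wire_bytes)} bytes on the wire")
```

From a traffic-characterization viewpoint, such control messages are small; the bulk of agentic data volume comes from the tool results and media payloads they trigger.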

## 3. Proposals

The document proposes the following agreements:

1. **Add Clause 2 content to TR 26.870 clause 6.2** as a basis for further work

2. **Take into account** that current AI/ML traffic reuses existing protocols and formats (e.g., audio, video, and text over HTTP, RTP)

3. **Agree to prioritize** characterization of existing popular AI apps and provide initial analysis to SA by June 2026

## 4. References

- [1] Nokia: "The impact of AI-apps on mobile networks", Nov 2025
- [2] Ericsson: "GenAI data traffic today – Ericsson Mobility Report", June 2025
- [3] Model Context Protocol documentation
- [4] Agent2Agent (A2A) Protocol documentation