SoftBank LTM Ranks 3rd in GSMA Open-Telco LLM AI Benchmarks

Philip Lee, Pickool

SoftBank's Large Telecom Model (LTM), trained with a specialized telecom-specific framework, ranks third among 84 models in the GSMA Open-Telco LLM Benchmarks.

TOKYO, Japan — SoftBank Corp. said its generative artificial intelligence foundation model, the Large Telecom Model (LTM), ranked third out of 84 models in the GSMA Open-Telco LLM Benchmarks.

The GSMA benchmark evaluates large language models designed for the telecommunications industry.

The ranking reflects average scores across datasets that measure model performance in tasks such as reading comprehension of telecommunications specifications, question answering, operational log interpretation, mathematical reasoning, and configuration description.
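
To make the averaging concrete, here is a minimal sketch of how a leaderboard based on per-dataset scores could be computed. The dataset names, model names, and scores are invented for illustration and are not actual GSMA benchmark data.

```python
# Hypothetical per-dataset scores; the averaging is an unweighted mean.
scores = {
    "model_a": {"spec_reading": 0.81, "qa": 0.77, "log_interpretation": 0.69,
                "math_reasoning": 0.62, "config_description": 0.74},
    "model_b": {"spec_reading": 0.79, "qa": 0.82, "log_interpretation": 0.71,
                "math_reasoning": 0.58, "config_description": 0.70},
}

def average_score(per_dataset: dict[str, float]) -> float:
    """Unweighted mean across the benchmark datasets."""
    return sum(per_dataset.values()) / len(per_dataset)

# Rank models by their average score, highest first.
leaderboard = sorted(scores, key=lambda m: average_score(scores[m]), reverse=True)
for rank, model in enumerate(leaderboard, start=1):
    print(rank, model, round(average_score(scores[model]), 3))
```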

The benchmark is part of the GSMA's “Open Telco AI” initiative, announced at MWC Barcelona 2026.

The initiative provides a platform for telecom operators, vendors, developers, and academic institutions to share models, datasets, computing resources, and tools.

SoftBank said it developed a telecommunications-specific learning framework for LTM to address the technical structures and interdependencies of network data.

The training process incorporates public telecommunications data, SoftBank’s internal network data, and operational expertise.

The framework employs a phased approach that combines continuous pre-training, fine-tuning, and reinforcement learning.
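
As a rough illustration of what such a phased pipeline can look like, the sketch below sequences continued pre-training, supervised fine-tuning, and reinforcement learning. The phase functions are empty placeholders; this is an assumption-laden outline, not SoftBank's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Phase:
    name: str
    run: Callable[[object, object], object]  # (model, data) -> updated model

def continued_pretraining(model, corpus):
    # Next-token training on domain text (e.g. telecom specs, logs, configs).
    return model

def supervised_finetuning(model, instruction_pairs):
    # Instruction/answer pairs for telecom question answering and tasks.
    return model

def reinforcement_learning(model, preference_data):
    # Preference- or reward-based optimization on top of the fine-tuned model.
    return model

PIPELINE = [
    Phase("continued_pretraining", continued_pretraining),
    Phase("supervised_finetuning", supervised_finetuning),
    Phase("reinforcement_learning", reinforcement_learning),
]

def train(model, datasets: dict):
    """Run each phase in order, feeding it the dataset registered under its name."""
    for phase in PIPELINE:
        model = phase.run(model, datasets[phase.name])
    return model
```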

To process data formats such as tables and code, the framework synthesizes and reconstructs data for specific learning stages.
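
One way to picture that reconstruction step: turning rows of a network table into prose-style training records targeted at a particular stage. The column names and record format below are hypothetical, chosen only to illustrate the idea.

```python
import csv
import io
import json

# Invented example of tabular network data.
RAW_TABLE = """cell_id,band,tx_power_dbm,prb_utilization
A001,n78,43,0.62
A002,n257,35,0.18
"""

def table_to_examples(raw: str, stage: str) -> list[dict]:
    """Rewrite each table row as a text training record tagged for `stage`."""
    examples = []
    for row in csv.DictReader(io.StringIO(raw)):
        text = (f"Cell {row['cell_id']} operates on band {row['band']} "
                f"at {row['tx_power_dbm']} dBm with PRB utilization "
                f"{row['prb_utilization']}.")
        examples.append({"stage": stage, "text": text})
    return examples

print(json.dumps(table_to_examples(RAW_TABLE, "continued_pretraining"), indent=2))
```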

The system uses large language models for data filtering and small language models for hyperparameter optimization, adjusting variables such as learning rates and epoch counts.
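
The sketch below separates those two roles: an LLM-based judge that filters candidate training records, and a cheap proxy (standing in for a small language model) used to search the learning rate and epoch count. All helper names and the toy objective are assumptions made for illustration.

```python
import random

def llm_quality_score(record: str) -> float:
    """Placeholder for an LLM judge that rates a record from 0 to 1."""
    return random.random()  # stand-in; a real filter would call an LLM

def filter_records(records: list[str], threshold: float = 0.7) -> list[str]:
    """Keep only records the judge scores at or above the threshold."""
    return [r for r in records if llm_quality_score(r) >= threshold]

def proxy_validation_loss(lr: float, epochs: int) -> float:
    """Placeholder for training a small model with these settings; toy objective."""
    return abs(lr - 3e-4) * 1e3 + abs(epochs - 3) * 0.1

def search_hyperparameters(trials: int = 20) -> tuple[float, int]:
    """Random search over learning rate and epoch count using the proxy loss."""
    best = None
    for _ in range(trials):
        lr = 10 ** random.uniform(-5, -3)   # learning rate
        epochs = random.randint(1, 5)       # epoch count
        loss = proxy_validation_loss(lr, epochs)
        if best is None or loss < best[0]:
            best = (loss, lr, epochs)
    return best[1], best[2]

print(search_hyperparameters())
```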

SoftBank said it plans to use LTM internally to reduce manual processes, lower operational workloads, and improve network operations efficiency.
