KT Introduces AI Chat to IPTV Service with 95% Voice Recognition Accuracy

Seoul, South Korea - KT Corp (NYSE: KT) announced on Tuesday that it has integrated artificial intelligence conversational capabilities into its Internet Protocol Television (IPTV) service, marking a shift from basic voice command functionality.

The company said the AI agent operates through large language model integration and records voice recognition rates above 95 percent. 

KT stated the system processes queries on topics including weather, news, current affairs, science, and entertainment programming.

According to the company, the technology supports multi-turn questioning, enabling users to ask follow-up queries without restating the context.

KT provided examples where viewers watching news programming can inquire about tariff negotiations, then ask subsequent questions about stock market trends or foreign investment factors.
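KT has not published how the agent tracks context, but the behavior described above is the standard pattern of carrying conversation history into each model call. The sketch below is purely illustrative; `query_llm` is a hypothetical stand-in for any hosted chat-completion API, not a KT or Microsoft interface.

```python
def query_llm(messages):
    """Hypothetical stand-in for a hosted chat-completion API call."""
    # A real system would send `messages` to a language model here.
    return f"(answer informed by {len(messages)} prior turns)"

class ConversationSession:
    """Keeps accumulated turns so follow-ups need no restated context."""

    def __init__(self):
        self.history = []  # alternating user/assistant turns

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        answer = query_llm(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer

session = ConversationSession()
session.ask("What is the state of the tariff negotiations?")
# The follow-up never repeats "tariff negotiations"; the stored
# history supplies that context to the model.
session.ask("How are they affecting the stock market?")
```

The key point is that each new question is sent together with the prior exchange, which is what lets a viewer pivot from tariffs to stock trends without re-establishing the topic.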

The system identifies television content through partial descriptions rather than exact program titles, KT said. 

The company stated that users can reference cast members, locations, or plot elements to locate specific shows across its content catalog and external streaming services, including YouTube, Disney+, Tving, and Coupang Play.
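KT's retrieval approach is not public, but matching a partial description to a catalog entry is commonly done by scoring titles on overlapping attributes rather than requiring an exact name. The sketch below is an assumption-laden toy: the catalog entries and keywords are invented for illustration.

```python
# Hypothetical sample catalog; real systems index cast, locations,
# and plot metadata at much larger scale.
CATALOG = [
    {"title": "Crash Landing on You",
     "keywords": {"romance", "soldier", "paragliding"}},
    {"title": "Extraordinary Attorney Woo",
     "keywords": {"lawyer", "autism", "whale", "courtroom"}},
]

def search(description):
    """Return titles ranked by keyword overlap with the description."""
    words = set(description.lower().split())
    scored = [(len(words & item["keywords"]), item["title"])
              for item in CATALOG]
    return [title for score, title in sorted(scored, reverse=True)
            if score > 0]

print(search("the drama about a lawyer and a whale"))
# -> ['Extraordinary Attorney Woo']
```

A production system would likely use embedding-based semantic search rather than keyword overlap, but the principle is the same: a viewer can name a plot element instead of the title.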

KT reported that the implementation builds on its Giga Genie platform, launched in 2017, which currently serves more than 5 million subscribers.

The company said it has developed an intent classification engine that analyzes user queries and automatically selects from multiple language models.

The current deployment uses Microsoft's Azure OpenAI Service through a partnership arrangement, according to KT. 

The company stated that the system architecture supports additional language model integrations.
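The routing described above, where an intent classifier picks among multiple language models, can be sketched as follows. This is not KT's architecture: the intents, keyword rules, and model names are all assumptions, and a production classifier would be a trained model rather than keyword matching.

```python
def classify_intent(query):
    """Toy keyword-based intent classifier (illustrative only)."""
    q = query.lower()
    if any(w in q for w in ("find", "show", "episode", "starring")):
        return "content_search"
    if any(w in q for w in ("weather", "news", "stock")):
        return "general_knowledge"
    return "chitchat"

# Hypothetical routing table: intent -> model deployment name.
MODEL_ROUTES = {
    "content_search": "catalog-tuned-model",
    "general_knowledge": "azure-openai-gpt",
    "chitchat": "lightweight-dialogue-model",
}

def route(query):
    """Pick a model for the query based on its classified intent."""
    intent = classify_intent(query)
    return intent, MODEL_ROUTES[intent]

print(route("Show me the drama starring that actor from Busan"))
# -> ('content_search', 'catalog-tuned-model')
```

This pattern keeps the architecture open-ended: adding another language model means adding a routing entry, which is consistent with KT's statement that the system supports additional model integrations.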

Service availability begins with the Genie TV Set-top Box 4 device. KT plans to extend the service to its All-in-One Soundbar in November and stated that it will deploy the technology across approximately 5 million AI speaker-equipped set-top boxes throughout 2026.

The company stated that it intends to implement multimodal processing for image and audio content recognition by year-end.