University of Montreal: AI breakthrough enables real-time translation of sign language
- Global Research Partnerships
- Nov 15, 2025

Researchers at the University of Montreal have developed a groundbreaking artificial intelligence system that can translate sign language into spoken language in real time with 95% accuracy, representing a major advancement in accessibility technology for the deaf and hard-of-hearing community.
The system, called SignSpeak, uses advanced computer vision and machine learning algorithms to recognize and interpret hand movements, facial expressions, and body language that comprise sign language communication. Unlike previous systems that were limited to basic vocabulary, SignSpeak can handle complex grammatical structures and contextual nuances.
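The article does not describe SignSpeak's internals, but a common baseline for this kind of pipeline is to extract hand keypoints from each video frame with a pose-estimation model and classify fixed-length windows of those keypoints against a known sign vocabulary. The sketch below illustrates that idea with a toy nearest-centroid classifier; the sign names, keypoint counts, and centroid data are hypothetical stand-ins, not details of the actual SignSpeak system:

```python
import numpy as np

# Toy vocabulary: each sign is represented by the centroid of flattened
# hand-keypoint trajectories. In a real pipeline the keypoints would come
# from a pose-estimation model (e.g. per-frame hand landmarks); here they
# are random placeholders.
rng = np.random.default_rng(0)
NUM_KEYPOINTS = 21          # 21 landmarks per hand, (x, y) per frame
FRAMES = 16                 # fixed-length sliding window of video frames
FEATURE_DIM = FRAMES * NUM_KEYPOINTS * 2

signs = ["hello", "thanks", "yes"]                       # hypothetical vocabulary
centroids = {s: rng.normal(size=FEATURE_DIM) for s in signs}

def classify_window(window: np.ndarray) -> str:
    """Assign a keypoint window of shape (FRAMES, NUM_KEYPOINTS, 2)
    to the sign whose centroid is nearest in Euclidean distance."""
    flat = window.reshape(-1)
    return min(signs, key=lambda s: np.linalg.norm(flat - centroids[s]))

# Simulate an observed window: the "thanks" trajectory plus sensor noise.
sample = centroids["thanks"].reshape(FRAMES, NUM_KEYPOINTS, 2)
sample = sample + rng.normal(scale=0.05, size=sample.shape)
print(classify_window(sample))  # → thanks
```

A production system handling grammar and facial expressions, as the researchers describe, would replace this per-window classifier with a sequence-to-sequence model operating over much richer features, but the window-over-keypoints framing is the same.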
Dr. Marie-Claire Dubois, who led the research team at the university's Computer Science and Operations Research Department, explained that the breakthrough came from training the AI on an extensive database of sign language videos from native signers across different dialects and regional variations.
"What sets our system apart is its ability to understand the full linguistic complexity of sign language, including grammar, syntax, and cultural context," said Dr. Dubois. "This isn't just word-for-word translation – it's true language interpretation."
The technology has been tested in real-world scenarios including medical appointments, educational settings, and business meetings, with consistently high accuracy rates. The research team is now working with technology companies to integrate SignSpeak into smartphones, tablets, and video conferencing platforms.