Sign language has long been a vital tool for many of the world’s estimated 360 million people with severe hearing loss. But because most hearing individuals do not understand sign language, real-time, unwritten communication between the hearing and deaf communities is often impossible. Now, technology is poised to make such interactions more feasible.
Hoping to facilitate communication between the signing and non-signing communities, Microsoft Research in February 2012 initiated the Kinect Sign Language project in collaboration with the Chinese Academy of Sciences (CAS) and Beijing Union University. The Kinect Sign Language Translator enables real-time conversations between signing and non-signing participants by turning sign language into words spoken by a computer and simultaneously changing spoken words into sign language rendered by an avatar.
Some of the attendees at the Kinect Sign Language Working Group’s inaugural event
Early last month, the Kinect Sign Language Working Group, a research community with a website for sharing data and algorithms, was established at the Institute of Computing Technology, CAS, in Beijing. P. Anandan, managing director of Microsoft Research Outreach, attended the inaugural event, as did other dignitaries representing the community’s founding members: the CAS, Beijing Union University, and Microsoft Research. We encourage experts from other research institutions, schools for the deaf and hard of hearing, and non-governmental organizations to join the Kinect Sign Language Working Group.
The community’s vision is to advance research in sign-language recognition. As a first step, we are opening the DEVISIGN Chinese Sign Language Database to academia. Compiled by the Visual Information Processing and Learning (VIPL) group of the Institute of Computing Technology under the sponsorship of Microsoft Research Asia, DEVISIGN covers about 4,400 standard Chinese Sign Language words, based on 331,050 pieces of vocabulary data from 30 signers (13 male and 17 female). Each piece of vocabulary data comprises RGB video (in AVI format) and depth and skeleton information (in BIN format). DEVISIGN thus provides sign-language researchers with a rich store of data for training and evaluating their algorithms and for creating state-of-the-art practical applications, such as adapting a recognition system to an unknown signer.
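To make the dataset’s structure concrete, here is a minimal sketch of how a researcher might index such a multimodal corpus, pairing each word’s RGB clip with its depth and skeleton captures. The directory layout and file-naming convention below are assumptions for illustration only; they are not DEVISIGN’s documented format.

```python
# Hypothetical sketch: index a DEVISIGN-style corpus by (signer, word),
# grouping the RGB (.avi) and depth/skeleton (.bin) files that belong to
# the same vocabulary sample. File names follow a made-up convention
# such as 'P01/word0042_color.avi' -- an assumption, not the real scheme.
from collections import defaultdict
from pathlib import PurePosixPath

def index_samples(paths):
    """Group per-sample files under a (signer, word) key by modality."""
    samples = defaultdict(dict)
    for p in map(PurePosixPath, paths):
        signer = p.parent.name                     # e.g. 'P01'
        word, _, modality = p.stem.rpartition("_") # e.g. 'word0042', 'color'
        samples[(signer, word)][modality] = str(p)
    return dict(samples)

files = [
    "P01/word0042_color.avi",
    "P01/word0042_depth.bin",
    "P01/word0042_skeleton.bin",
]
index = index_samples(files)
# index[("P01", "word0042")] now maps 'color', 'depth', and 'skeleton'
# to their file paths, giving one complete multimodal training sample.
```

Grouping the three modalities under one key like this is a common first step before training or evaluating a recognizer, since each sample must present its RGB, depth, and skeleton streams together.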
In the near future, we hope to expand the sign-language database with contributions from new community members, which will help advance research and development for Chinese Sign Language and potentially other sign languages. In addition, we intend to organize workshops and to post sign-language-recognition algorithms from researchers worldwide.
Microsoft Research Asia Director Tim Pan expects the Kinect Sign Language project to provide cost-effective and reliable communication between deaf and hearing users.
No single field of expertise can fulfill such an expansive mission. Doing so requires “the collaboration of experts in such diverse fields as machine learning, sign language, social science, and more,” noted Microsoft Research Asia Director Tim Pan during the community’s inaugural event. “In the long run,” he added, “the community will work together to turn ideas into reality, and we fully expect the Kinect Sign Language project to provide cost-effective, easy, and reliable communication between deaf and hearing users.”
—Guobin Wu, Research Program Manager, Microsoft Research Asia