IIIT Hyderabad takes a step forward in inclusive education, expanding learning opportunities for visually impaired learners.

Hyderabad: Researchers at the International Institute of Information Technology (IIIT) Hyderabad have launched Drishti Library, an AI-powered initiative under the Bhashini mission aimed at improving access to higher-education resources for visually impaired students.

For years, visually impaired learners have faced significant challenges due to the lack of accessible study materials in higher education. Drishti Library seeks to bridge this gap by converting textbooks into Braille and audiobook formats across multiple Indian languages.

The initiative will begin with Punjabi and gradually expand to cover more languages and academic disciplines. Drishti Library was officially unveiled at a recent Language AI for Accessibility symposium, marking a key step toward more inclusive education in India.

Developed at IIIT Hyderabad by researcher Krishna Tulsiyan under the guidance of Prof C.V. Jawahar and Prof Gurpreet Singh Lehal, Drishti Library is part of the Central government–led Bhashini initiative, a national mission focused on building AI-driven language technologies for all Indian languages.

Bhashini’s core mandate—to strengthen optical character recognition (OCR), speech synthesis, and other language tools for Indian languages—forms the technological backbone of Drishti. The library is built on OCR systems developed by a national consortium and aligns with Bhashini’s broader vision of creating inclusive, language-first digital public infrastructure.

“Any book in an Indian language can be scanned, proofread, converted, and finally made ready for Braille embossers or audio delivery,” said Prof Lehal.
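The workflow Prof Lehal describes follows a familiar scan-to-speech pattern. As a rough illustration only (Drishti Library relies on consortium-built OCR and TTS models, not the off-the-shelf libraries used here), a minimal Python sketch of such a pipeline might look like this:

    # Illustrative sketch only: Drishti Library uses its own consortium-built
    # OCR and TTS systems; pytesseract and pyttsx3 stand in for those components.
    import pytesseract              # wrapper around the Tesseract OCR engine
    import pyttsx3                  # offline text-to-speech using system voices
    from PIL import Image

    def page_to_text(image_path: str, lang: str = "pan") -> str:
        """OCR a scanned page; 'pan' is Tesseract's Punjabi language pack."""
        return pytesseract.image_to_string(Image.open(image_path), lang=lang)

    def text_to_audio(text: str, out_path: str) -> None:
        """Synthesize speech to an audio file (quality depends on installed voices)."""
        engine = pyttsx3.init()
        engine.save_to_file(text, out_path)
        engine.runAndWait()

    if __name__ == "__main__":
        # Scan -> OCR -> (manual proofreading happens between these steps) -> audio.
        text = page_to_text("scanned_page.png")
        text_to_audio(text, "chapter_01.mp3")

In the workflow described above, the Braille path would branch off after proofreading, sending the corrected text to a Braille translation tool and embosser rather than to a speech engine.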

The generated audio content is delivered through an audiobook reader application developed by the Product Lab at IIIT Hyderabad, led by Prakash Yalla and Satish Kathirisetti, along with Meghana Tatavolu, Afrin Sayed, Sairam Bonu, Akhila Vennigalla, and Vidushi Garg.

Key features of the application include adjustable playback speed, intuitive audio-based navigation controls, and other accessibility-focused interface enhancements.

Despite significant progress, challenges remain—particularly in achieving natural-sounding speech across all languages. “For some languages, we have very good text-to-speech systems. However, for Punjabi, high-quality TTS is still lacking. That will improve with time,” Prof Lehal noted.

While the initial focus was on generating content for undergraduate and postgraduate studies, the scope has since expanded to support visually impaired students preparing for competitive examinations such as the UPSC, an area IIIT Hyderabad researchers are actively working on.