Multimodal Kinect-supported interaction for the visually impaired

Authors
  • Gross, Richard
Publication Date
Jan 01, 2012
Source
Fraunhofer-ePrints
Language
English
License
Unknown
Abstract

This thesis proposes a new computer interface targeted specifically at blind and visually impaired people. We use the Microsoft Kinect to track a user's position and have implemented a novel spatial interface to control text-to-speech synthesis of a document. Actions are selected solely by hand movements relative to the body. All feedback is auditory, given through synthesized speech or earcons, brief, distinctive sounds that convey information. Blind or visually impaired users do not have to point at a screen or memorize keyboard commands; instead they can rely on their proprioceptive sense to explore documents and execute actions. The test results are encouraging. Even when participants found themselves lost, they were always able to find their way back to an interface state they knew how to navigate. Furthermore, most negative feedback can be attributed to current technical limitations rather than to the spatial interface itself.
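To make the described mechanism concrete, the sketch below shows one way hand positions relative to the torso could be mapped to spatial zones, with an earcon played and an action executed on each zone change. It is a minimal illustration in Python, assuming skeleton coordinates such as a Kinect would provide; all zone names, thresholds, and actions here are invented for illustration and are not the thesis's actual gesture set.

    # Hypothetical sketch: maps the hand's offset from the torso to a named
    # spatial zone and gives auditory feedback on each change. The zones,
    # thresholds, and actions are illustrative assumptions, not the thesis's
    # actual implementation (which reads the Microsoft Kinect skeleton stream).

    from dataclasses import dataclass

    @dataclass
    class Point3D:
        x: float  # metres, right of the sensor
        y: float  # metres, above the sensor
        z: float  # metres, away from the sensor

    def classify_zone(hand: Point3D, torso: Point3D) -> str:
        """Classify the hand's position relative to the torso into a zone."""
        dx = hand.x - torso.x
        dy = hand.y - torso.y
        if dy > 0.35:
            return "above"        # hand raised above shoulder height
        if dx < -0.25:
            return "left"
        if dx > 0.25:
            return "right"
        return "rest"             # hand near the body: neutral position

    # Illustrative zone-to-action map; the real gesture set is in the thesis.
    ACTIONS = {
        "above": "read_current_heading",
        "left": "previous_paragraph",
        "right": "next_paragraph",
        "rest": "pause_speech",
    }

    def play_earcon(zone: str) -> None:
        print(f"[earcon] {zone}")         # stand-in for real audio output

    def execute(action: str) -> None:
        print(f"[action] {action}")       # stand-in for TTS engine control

    def on_skeleton_frame(hand: Point3D, torso: Point3D,
                          state={"zone": None}) -> None:
        """Called once per tracked frame; fires only when the zone changes."""
        zone = classify_zone(hand, torso)
        if zone != state["zone"]:
            state["zone"] = zone
            play_earcon(zone)             # auditory confirmation of the zone
            execute(ACTIONS[zone])        # trigger the mapped document action

    if __name__ == "__main__":
        torso = Point3D(0.0, 0.0, 2.0)
        on_skeleton_frame(Point3D(0.40, 0.10, 2.0), torso)  # right: next paragraph
        on_skeleton_frame(Point3D(0.05, 0.50, 2.0), torso)  # above: read heading

Because feedback fires only on zone transitions, a user who drifts into an unfamiliar zone immediately hears which zone they entered, which is consistent with the abstract's observation that lost participants could always navigate back to a known interface state.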
