Communication is a basic human right that everyone should have access to. However, for many individuals who struggle with communication, augmentative and alternative communication (AAC) approaches are essential. These approaches can include tools like notebooks or electronic tablets with symbols that users can select to form messages and communicate effectively.
Unfortunately, many existing AAC systems do not fully cater to the needs of individuals with motor or visual impairments. This is where the innovative research conducted at Penn State comes into play, aiming to bridge this gap and provide better support for AAC users.
Researchers at Penn State, led by Krista Wilkinson and Syed Billah, have developed a groundbreaking prototype application that uses movement sensors and artificial intelligence (AI) to translate body-based communicative movements into speech output. This technology has the potential to revolutionize the way individuals with communication challenges express themselves.
The initial testing of this prototype involved three individuals with motor or visual impairments who served as community advisors for the project. All three reported that the technology improved their ability to communicate quickly and effectively with people outside their immediate social circle.
The research and initial findings of this technology have been published in the journal “Augmentative and Alternative Communication,” showcasing the potential impact it can have on the field of AAC.
Aided and Unaided AAC
There are two primary types of AAC that individuals can use. Aided AAC relies on external tools, from pointing at pictures in a communication book to selecting symbols on an electronic tablet. For example, a person may communicate a food preference by pointing to the corresponding image on a tablet. While aided AAC is generally easy for listeners to understand, it can be physically challenging for individuals with visual or motor impairments, as noted by Wilkinson.
On the other hand, unaided AAC involves body-based communication, such as facial expressions, shrugs, or gestures that are unique to the individual. This form of AAC relies solely on the individual’s physical movements to convey messages and emotions.
An innovative approach to assistive communication technology is emerging, aimed at bridging the gap between aided and unaided communication for individuals with disabilities. One such technology involves using natural gestures to convey messages, allowing users to communicate more freely with those around them.
For example, consider a person with limited speech and a motor impairment who can still move their arms and hands. This individual may raise a hand when shown a specific object, indicating a desire for that item. Such unaided communication is often more efficient and less physically taxing, because the gestures are already part of the person's everyday repertoire. The downside is that these gestures may be understood only by people who know the individual well, making it difficult for them to interact independently with a wider range of communication partners.
To address this issue, researchers have been developing a prototype technology that combines unaided gestures with artificial intelligence (AI) for more seamless communication. Using AI-based gesture recognition, the system learns to interpret individualized movements that carry specific meanings for the user. This personalized approach reduces errors and avoids forcing users to learn predetermined movements, improving the user experience and the overall utility of the system.
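To give a rough sense of how a system might learn one person's idiosyncratic gestures, here is a minimal sketch in Python. It is not the Penn State team's actual pipeline; the feature set (summary statistics over windows of accelerometer magnitudes) and the nearest-centroid classifier are illustrative assumptions chosen for simplicity.

```python
# Minimal sketch of per-user gesture recognition from wrist-sensor data.
# Assumption: each gesture arrives as a window of accelerometer magnitudes.
import math
from statistics import mean, stdev

def features(window):
    """Summarize one sensor window as (mean, spread, range)."""
    return (mean(window), stdev(window), max(window) - min(window))

class GestureRecognizer:
    """Nearest-centroid classifier personalized to one user's gestures."""
    def __init__(self):
        self.centroids = {}  # gesture label -> mean feature vector

    def train(self, label, windows):
        # Average the feature vectors of this user's example windows.
        feats = [features(w) for w in windows]
        self.centroids[label] = tuple(
            mean(f[i] for f in feats) for i in range(3)
        )

    def predict(self, window):
        # Pick the known gesture whose centroid is closest in feature space.
        f = features(window)
        return min(self.centroids,
                   key=lambda g: math.dist(f, self.centroids[g]))

# Toy usage: two personalized gestures with distinct motion profiles.
rec = GestureRecognizer()
rec.train("raise_hand", [[0.1, 0.9, 2.0, 0.8], [0.2, 1.0, 1.9, 0.7]])
rec.train("wave",       [[1.5, 0.2, 1.6, 0.1], [1.4, 0.3, 1.7, 0.2]])
print(rec.predict([0.15, 0.95, 2.1, 0.75]))  # prints "raise_hand"
```

Because the classifier is trained only on that user's own example movements, nothing about the gestures needs to be standardized in advance, which mirrors the personalized approach described above.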
During the development and testing of the prototype, input and feedback from individuals who would benefit from the technology were essential. Community advisors like Emma Elko, who has cortical visual impairment and uses aided AAC to communicate, provided valuable insights into how the technology could be optimized for different disabilities. A sensor worn on Emma's wrist captured her unique communicative movements, which were analyzed so the system could accurately distinguish between her different gestures.
Ultimately, the goal of integrating AI into assistive communication technology is to empower individuals with disabilities to communicate more effectively and independently. By combining natural gestures with advanced algorithms, this innovative approach has the potential to open up new possibilities for individuals with limited communication abilities, allowing them to connect with a broader range of people and engage more fully in the world around them.