Online Seminar on Emerging AT for Communication

“We are facing some exhilarating and interestingly challenging times ahead of us in the field of assistive technology”, is one of the conclusions David Banes, CEO of Access & Inclusion, drew from WIPO’s recent report on “Technology Trends 2021: Assistive technology”. WIPO and AAATE are presenting the report in detail over a three-part online seminar series, which kicked off on 16 April 2021 with a session dedicated to assistive technology for communication.


WIPO uses patent information to explore the trends in specific technology areas and assess the state of the art in this sector. Communication is a broad field, and assistive technology in this field has to cover speech and language and, more broadly, human interaction, literacy and information access.

Communication is a complex process. It involves different skills and faculties, including speech, hearing, vision, cognition, and physical coordination such as gestures. In many cases, only the combination of verbal and nonverbal communication can create effective communication between partners.

Technologies intended to facilitate communication often seek to address personal and specific barriers where communication has become challenging because of limitations in one or more areas of human function.

WIPO found a broad range of patents filed around the world. Most fall into the category of special software and services, but they also incorporate technologies that support visual communication, speech input and output, and the use of input devices.

Among conventional technologies, the fastest-growing patent filings are in emulation software that helps users interact with devices. Filings related to speech input and output have shown an average annual growth rate of nearly 40%.

Most of these patent applications come from the corporate sector: 65% are filed by companies such as IBM, Panasonic, Microsoft and NEC. Individuals file a smaller number, around 25%, while academia accounts for only about 10%.

Some 1,600 patents were filed, some focusing on communication support for navigation and orientation within the environment, others addressing technologies to substitute one or more sensory channels. There was also significant growth in the use of brain-computer interfaces and smart assistants, some of which have already come to market in the products we use in our homes, schools, colleges and workplaces.

Three examples of innovative assistive technologies for communication were selected and presented by their creators in the online seminar.

AlterEgo, presented by Arnav Kapur from the Massachusetts Institute of Technology (MIT) Media Lab, is a peripheral neural interface intended to help people with speech disorders by focusing on the flow of information from the brain to the nerves that conduct signals to speech muscles.

AlterEgo can detect a user’s internally articulated speech through weak electrical signals, originating deep within the mouth cavity and picked up at the skin’s surface, even when only a fraction of the internal speech muscles is engaged by the brain. It then uses a combination of distributed sensing, signal processing and machine learning, and feeds back via bone conduction, transmitting audio overlaid on the user’s natural hearing.

In other words, the user can “speak” in real-time without the need for any discernible action, voice or movement. Additionally, AlterEgo is non-invasive and non-intrusive, physically connected to only a voluntary part of the human body.

The system is in the research stage and is being tested in clinics and hospitals with patients who have Lou Gehrig’s disease, multiple sclerosis and autism.

For the moment, AlterEgo supports people who are locked in as a result of their disease. But future applications might include support with real-time translation when conversing in foreign languages.

Since the system is non-invasive and works with signals right on the skin’s surface, many mainstream applications are possible.

Professor Suranga Nanayakkara from Auckland University presented his work on smart assistive environments and assistive augmentation. He explores how wearing an intelligent interface can support users in navigating the environment and accomplishing their daily tasks. He shared a video of a person trying out the interface, pointing at objects in the environment and getting them read out and explained. Another example was capturing information such as a phone number, with a fingertip, from the screen of one device and dropping it onto another, such as a smartphone.

Assistive augmentation aims to harness the full potential of technology by designing new human-computer interfaces that feel like an extension of our body, mind and behaviour. It builds on three key components: integrating technology with the body or with behaviour, understanding beyond explicit instructions, and enhancing physical capabilities.

However, there are important considerations to address before such technologies are deployed. One of these is backward compatibility: if the augmentation stops working, will the user fall back to their original state of capability, or feel even more challenged? These first concepts of assistive augmentation provide us with a framework to have the necessary ethics and safety discussions before the technologies come to market.

The third example, the OTTAA project, was presented by its founder Hector Costa. OTTAA is an augmentative and alternative communication (AAC) platform that uses environmental data, an artificial intelligence algorithm, and a pictogram-based communication code to create sentences and help non-verbal users to communicate effectively.

The mobile platform allows users to voice words through the device and even send WhatsApp or Facebook messages. The sentences are created through images that the user selects from the screen.

The algorithm suggests to the user the most appropriate pictograms according to their environment, daily routines, and most used phrases. Data points considered include the time of day, user age, gender, location, and previous usage; based on these, it preselects 4 pictograms from a database of 18,000. The algorithm learns from each pictogram a user chooses and continuously suggests related pictograms, enabling the user to quickly and effectively create and voice sentences with the device.
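The learn-from-each-choice loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not OTTAA’s actual implementation: the class name, the context keys, and the frequency-based scoring are all assumptions made for the example.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical sketch of context-aware pictogram preselection.
# Names and scoring are illustrative; OTTAA's real algorithm also
# weighs user age, gender, location and daily routines.

@dataclass
class PictogramSuggester:
    # Counts how often a pictogram was chosen in a given context,
    # e.g. ("morning", "breakfast") -> 3
    usage: Counter = field(default_factory=Counter)

    def record_choice(self, context: str, pictogram: str) -> None:
        """Learn from each pictogram the user selects."""
        self.usage[(context, pictogram)] += 1

    def suggest(self, context: str, candidates: list[str], k: int = 4) -> list[str]:
        """Preselect the k pictograms most often chosen in this context."""
        return sorted(
            candidates,
            key=lambda p: self.usage[(context, p)],
            reverse=True,
        )[:k]

suggester = PictogramSuggester()
for _ in range(3):
    suggester.record_choice("morning", "breakfast")
suggester.record_choice("morning", "school")
print(suggester.suggest("morning", ["park", "breakfast", "school", "doctor", "tv"]))
```

Each selection immediately updates the counts, so the preselected set adapts to the user’s routine without any offline retraining step.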

To date, OTTAA has impacted more than 40,000 people in more than 11 countries. The original language is Spanish, with machine-translated versions in English and Italian.

The Q&A session following the presentation of examples focused on cultural sensitivities when deploying such technologies in different cultural regions. Acceptance and trust are pre-conditions for assistive communications technologies to work and be used; otherwise, already difficult communication might become even more awkward for the users of these technologies.

Another vital aspect pointed out by the audience was that younger generations now growing up with augmentative and alternative communication (AAC) tools will expect to have corresponding tools in higher education and the workplace. This will necessitate educating mainstream teachers and human resources professionals about these technologies and their users.

The 2nd online seminar will focus on emerging assistive technologies for cognition. You can find the agenda and the link for registration here: https://aaate.net/2021/02/25/virtual-workshops-series-on-emerging-assistive-technology/

Resources:

WIPO’s report on “Technology Trends 2021: Assistive technology”: https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055_2021.pdf

More information on AlterEgo: www.media.mit.edu/projects/alterego/overview

More information on the OTTAA project: https://www.unicef.org/innovation/innovation-fund-ottaa-project