Online Seminar on Emerging AT for Cognition

“Many of these innovations will be adapted for use in a wider array of consumer products in the coming years. This means increased commercialization of assistive tech applications for a wider consumer base.”

On 23 April, AAATE, together with WIPO, held an online seminar focused on the challenges associated with cognitive disabilities and the ways technology can be used to improve the quality of life of a growing number of people. The seminar built on the findings of WIPO’s recent report, “WIPO Technology Trends 2021: Assistive technology”.

AAATE was happy to co-host a three-part series of online seminars to explore key aspects of this report and investigate what patenting trends in assistive technology (AT) can tell us about the future of AT. The findings support AAATE’s work towards an inclusive society in which assistive technology, accessible mainstream technology and universally designed products and services can deliver equal access to opportunities.

As Katerina Mavrou, AAATE president, underlined, what counts as emerging and innovative varies across cultural contexts and communities, but we need universally shared ground and a common knowledge base to successfully support stakeholders and ensure access to appropriate assistive and accessible technologies worldwide.

The report gives an overview of the assistive technology patent landscape, trends in AT, the wider context of AT and an outlook on the future of assistive technology. WIPO’s Director General Daren Tang pointed out in the report that “Many of these innovations will be adapted for use in a wider array of consumer products in the coming years. This means increased commercialization of assistive tech applications for a wider consumer base.”

Irene Kitsara, patent analytics expert at WIPO, explained that the report identifies nine enabling technologies that will likely accelerate, or even enable, the emergence of new assistive products, among them artificial intelligence, Internet of Things (IoT) connectivity, advanced sensors, and virtual and augmented reality.

As Lorenzo Desideri, moderating the seminar on behalf of AAATE, underlined, cognitive disability is an umbrella term used to refer to people with a wide variety of health conditions. These can include dementia, neurodevelopmental disorders or brain injuries, to name but a few. The impact on the person, their family and society overall is huge, and evidence shows that people with cognitive disability may require a wide variety of healthcare and social support services. These conditions often lead to high rates of unemployment and poverty.

We do not have reliable statistics on the number of people living with cognitive disability worldwide, but it is estimated that by 2030 the number of people with dementia may rise to 75 million. Another striking figure dates from 2016, when it was estimated that over 52 million children under the age of five were living with developmental disabilities, 95% of them in low- and middle-income countries.

In light of these considerations, what can we expect from assistive technology?

Traditionally, AT for cognition consisted mainly of products that help reduce the cognitive load associated with a task. A good example is the electronic calendar developed for people with memory impairments, which reminds the user in an accessible manner of the day of the week, the tasks for the day, whether food will be delivered and whether appointments are scheduled. Another is the medication dispenser, which helps users take the right medication at the right time.

Today, AT for cognition has developed far beyond that. To explore the potential of emerging AT for cognition in more detail, three of the creators cited in WIPO’s report joined the online seminar to explain in detail the development and capabilities of their innovations, namely a social robot and two examples of smart assistance.

Sara Cooper, robotics engineer at PAL Robotics, presented the ARI Assistive Robot, a high-performance social robot and companion designed to support hospital staff in their caregiving duties and to assist older people in aging healthily. During the Covid-19 pandemic, the robot proved helpful in taking people’s temperature to detect signs of infection, as well as in promoting social distancing to reduce infection rates. For others, the robot serves as a source of entertainment. ARI is used to help administer first-care attention and provide emotional support to people who live in isolation, including the elderly population.

The ARI Assistive Robot has a humanoid design and an expressive face; it is AI-powered and can recognize people, for example inside the home. The robot has an interface that allows users to understand it and enables it to adapt its behavior to the user, for example through speech or gestures.

ARI is deployed in the EU project SHAPES as a therapeutic assistant for older people with early-stage dementia who live independently or in sheltered apartments. In one of the pilots, ARI provides reminders of events, enables video calls, shows the user images and photos, plays games with the user and encourages interaction.

In a second pilot, ARI is deployed as a robot companion supporting older people living independently in rural or urban environments at different locations across Europe, including Ireland, Greece, Italy and Spain. The robot reminds the user of appointments and medication, encourages social engagement and monitors the user’s wellbeing, providing an overall feeling of safety as well as entertainment.

Overall, we see that conventional assistive products for the built environment are merging into smart, connected, robotic systems implemented in smart homes and smart cities, intended to support people in dependent living. Companion and pet robots using a combination of artificial intelligence (AI), Internet of Things (IoT) applications and sensors seem particularly appropriate to address various disabilities, including cognitive impairments. This combination of technologies can support functions that in traditional AT were scattered over different applications, such as task reminders, automated calendars, medication dispensers, navigational aids, software applications for planning tasks and organizing activities, as well as applications monitoring health and emotions.

An important component of the interaction between a machine (or robot) and its environment is the correct recognition of images. Youssef Mroueh, research staff member at the IBM T.J. Watson Research Center, presented his work at IBM on image recognition as assistive technology, which came out of IBM’s AI for Social Good programme.

One output of image recognition is captioning: the generation of a sentence that describes the content of the image. In the context of using image recognition as assistive technology, however, the focus has to be on the purpose of the image. In this context the image is often taken by a non-sighted or partially sighted person, so it might be off-centre, blurry or upside down, and might additionally contain text somewhere.

In 2020, the University of Texas launched a challenge to collect images taken by visually impaired people, caption them and train AI systems to correctly interpret them. Youssef’s research team took part in this challenge. He used a few examples to explain the difficulties involved in training an AI system to caption such images. For example, the picture of a bottle of wine could be captioned as “a bottle of wine”, which is accurate but too general to be of use: the user is likely to want to know which kind of wine (in this case a Chardonnay). Furthermore, the context around the object can be important. Youssef’s team also trained their AI by having it analyze each picture upright, tilted to the left, tilted to the right and upside down, so that it learns to correctly interpret the orientation of pictures.
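The orientation-training idea can be illustrated with a minimal sketch. This is not IBM’s actual pipeline; it simply shows how each training image can be presented to a model in all four cardinal orientations, here modelling an image as a plain row-major grid of pixel values:

```python
# Illustrative sketch only: generate the four cardinal orientations of an
# image so a captioning model can learn to recognize rotated input.

def rotate_quarter(grid):
    """Rotate a row-major pixel grid 90 degrees clockwise."""
    # Reversing the rows and transposing yields a clockwise quarter turn.
    return [list(row) for row in zip(*grid[::-1])]

def orientation_variants(grid):
    """Return the grid in all four orientations, keyed by an orientation label."""
    right = rotate_quarter(grid)           # 90 degrees clockwise
    upside_down = rotate_quarter(right)    # 180 degrees
    left = rotate_quarter(upside_down)     # 270 degrees (= 90 counter-clockwise)
    return {
        "upright": grid,
        "tilted_right": right,
        "upside_down": upside_down,
        "tilted_left": left,
    }

# Tiny 2x2 "image" to show the effect of each rotation.
image = [[1, 2],
         [3, 4]]
variants = orientation_variants(image)
print(variants["tilted_right"])  # [[3, 1], [4, 2]]
```

In a real training setup, each variant would be paired with its orientation label so the model learns both to caption the content and to infer how the photo was held.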

While Youssef’s team developed this showcase AI with visually impaired users in mind, it will be very useful to users with cognitive impairments as well, since pictures taken by these users may have the same characteristics (blurry, off-centre, etc.).

Our third creator worked on improving a mainstream application, the Google Assistant, to make it usable as an assistive tool for people with disabilities. Lorenzo Caggioni has been working at Google since 2010 and in 2019 started project DIVA, intended to support people with cognitive impairments, which finally resulted in the Action Blocks application.

Lorenzo’s story started with his brother Giovanni, who is legally blind and non-verbal. The aim was to find a way for him to interact with the Google Assistant on a smartphone or tablet. As a first step, Lorenzo created physical objects that let Giovanni trigger the Assistant with a common set of commands, allowing him to move to a specific room, listen to YouTube or watch Netflix. Lorenzo soon realized that this would benefit many people, not just his brother, and explored further.

Interacting with software is a complex process that requires the ability to process emotions and feelings, memory, motor skills, reasoning, attention, visual comprehension and, possibly, troubleshooting if a wrong command is given. This can be challenging for people with a variety of cognitive impairments.

Lorenzo and his team reduced the cognitive load by offering a limited set of options. Action Blocks can be displayed with custom images that act as visual cues. Tapping the photograph of a cab can trigger the Assistant to call a cab; tapping the image of lights can have the Assistant switch on the lights in a smart home; the photo of a family member can have the Assistant call that person; and specific pictures can have the Assistant share the user’s location, play a favorite TV show, play selected music, and so on.
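The core idea can be sketched in a few lines: a small, fixed mapping from visual cues to pre-set commands, so a single tap replaces a multi-step interaction. This is an illustrative sketch, not Google’s actual Action Blocks implementation; the image names and commands below are invented for the example:

```python
# Illustrative sketch only: map a limited set of visual cues (images)
# directly to assistant commands, so one tap triggers one action.

ACTION_BLOCKS = {
    "photo_of_cab.png": "call a taxi",
    "photo_of_lights.png": "turn on the living room lights",
    "photo_of_mum.png": "call Mum",
}

def on_tap(image_name: str) -> str:
    """Return the command triggered by tapping an image, if one is configured."""
    command = ACTION_BLOCKS.get(image_name)
    if command is None:
        return "no action configured"  # an unknown tile does nothing
    return f"Assistant: {command}"

print(on_tap("photo_of_lights.png"))  # Assistant: turn on the living room lights
```

The design choice worth noting is that the user never composes a command: all the complexity lives in the one-time configuration of the mapping, typically done by a caregiver.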

These three examples show that the combination of mainstream applications with AI systems trained to support people with disabilities can provide us with powerful and effective tools to support people with different access needs – cognitive and otherwise.

However, technological advancement is not the only, and perhaps not even the crucial, factor that will determine the actual use and value of these applications. On the one hand, generating the right data sets to properly train AI systems for assistive purposes, as well as data protection and the privacy of the users providing the data, are valid concerns. On the other hand, users and caregivers will need to trust these applications enough to give them a chance.

Nonetheless, WIPO’s report shows that there is a great deal of activity in this area: many patents are being filed, and some of these innovations have the potential to make a real difference in the lives of people with disabilities once they come to market.

WIPO’s report on “Technology Trends 2021: Assistive technology”:

Home robot – ARI helping in homes through EU project SHAPES:

ARI: the Social Assistive Robot and Companion:

IBM, Image Captioning as an Assistive Technology:

Action Blocks: one tap to make technology more accessible:

About AAATE:
The AAATE is the interdisciplinary pan-European association devoted to all aspects of assistive technology, such as use, research, development, manufacture, supply, provision and policy. Over 250 members from all over Europe and around the world currently take part in the AAATE.

About WIPO:
The World Intellectual Property Organization (WIPO) is a specialized and self-funding agency of the United Nations with 193 member states. Its mission is to lead the development of a balanced and effective international intellectual property (IP) system that enables innovation and creativity for the benefit of all.

About GAATO:
GAATO is an Alliance of Assistive Technology associations worldwide. It is based in Geneva.