OECD research on AI for labour market accessibility: opportunities and challenges

In a policy session at #AAATE2023, OECD researcher Chloé Touzet summarised key insights from a study of the opportunities and challenges of using AI to foster labour market accessibility, from the perspective of both technology innovators and application users. A panel of accessibility experts analysed the key results and provided additional perspectives. Below is a summary of the discussions, which fed into the recently published report.

People with disabilities are still three times more likely to be unemployed than people without disabilities. At the same time, the use of artificial intelligence (AI) in the labour market is increasing, and its impact on people with disabilities and other vulnerable groups remains unclear.

For this reason, the OECD launched a study addressing the following research questions:

  • What is the potential of AI to foster employment of people with disabilities? 
  • What is preventing this potential from being fully used?
  • How can governments help to avoid the risks and seize the opportunities of AI to support people with disabilities in the labour market? 

Labour market accessibility is attained when people with disabilities can interact in the labour market with a similar amount of time, effort and ease as those without disabilities.

What can AI do to support people with disabilities? A lot, if employed with accessibility in mind. 77% of the 142 case studies that the OECD collected would not exist without AI. Of these case studies, just under 50% were disability-centred solutions, including live captioning, speech recognition, algorithms, gait-correction prosthetics, mental health agents and more. Some applications focused on environmental-adaptation solutions, some on meta-level solutions improving accessibility, and a few on creating new job opportunities.

The study found that AI can cater to different scenarios and is well suited to collaborative development, making it easier for people with disabilities to use mainstream technologies and reducing the cost of solutions.

There is potential for employing AI to improve accessibility, but this aspect of AI use is still under-developed. There are several reasons for this.

Research and development in accessibility-centred AI applications is hindered by a lack of funding, the cost of access to data and computing power, and a lack of accessibility training for developers.

Should these difficulties be overcome, commercialisation again runs into a lack of funding to scale past the prototype phase, the difficulty of finding sustainable business models, and issues with the discoverability of solutions for the people who would benefit.

Finally, there is a lack of user engagement, leading to irrelevant and impractical solutions, aggravated by a lack of interoperability.

All of the above often leads to low reliability of the developed solutions, with errors that are both more likely to affect people with disabilities and more consequential for them. There is often built-in bias and discrimination because training datasets exclude people with disabilities. There are also serious privacy concerns, as people with disabilities are more easily identifiable because of their uniqueness, and inequity of use when mainstream AI is built in an inaccessible manner.

With all this in mind, there is still huge potential in AI for people with disabilities, provided the risks are mitigated by good policies.

The OECD’s research has shown that some governments have started to develop policies, but these are nowhere near sufficient yet. Good approaches are to explicitly regulate against discriminatory uses or impacts of AI on people with disabilities, to revise liability laws and procurement guidance to incentivise the development and deployment of safe and interoperable AI products, to promote process-oriented accessibility standards for AI-based products, and to implement better systems of quality control and enforcement.

For this to work, it will require the production of accessibility-relevant data and enhanced discoverability of accessibility-enhancing solutions. It will also require training HR teams to use AI to close the disability employment gap and including accessible AI training in computer science curricula.

After listening to Chloé Touzet’s presentation of the research, a panel of accessibility experts chaired by Luc de Witte, President of GAATO, discussed the findings and provided additional perspectives grounded in their work and experience. 

“We should see AI as an underlying foundational technology changing every aspect of life”, said Christine Hemphill, Managing Director of Open Inclusion, UK. As such, it can bring us improvements in efficiency and capability as well as joy, agency and independence – if it is equitably shared. However, these gains in efficiency should not come at the cost of quality, and it is particularly unfair when the gains in efficiency apply only to one group of people while the risk and liability must be borne by another. With AI, we have new ways of solving problems and fulfilling wishes, but this will require balancing human and computer intelligence, as well as a very heterogeneous group of people deciding what data to use and where and why to use AI.

AI is having an impact and will either help create unique solutions for people with disabilities or create barriers. Bill Curtis-Davidson, Co-Director of the Partnership on Employment & Accessible Technology (PEAT), USA, can see AI creating new interfaces with technology that can drive accessibility, if we facilitate disability-led innovation and promote people with disabilities as founders and innovators who use AI to build better solutions. Bill pointed to the NIST AI Risk Management Framework as an example.

Klaus Miesenberger, Institut Integriert Studieren, University of Linz, Austria, picked up on Bill’s thought and emphasised the need to move from discussing how people with disabilities can use AI to how they can “do” AI. What environment can allow practitioners to sit down and implement AI? We need a working environment in which to play with AI, and we need to make it much more fun.

“We need understandable, explainable, trustworthy, reliable, ethical AI”, continued Lampros Stergioulas, UNESCO Chair / Professor in AI & Data Science for Society at The Hague University of Applied Sciences, the Netherlands. He returned to the point of the quality and representativeness of the underlying datasets, avoiding biases and distortion. AI has the potential for personalisation and adaptation to individual needs, but we are not there yet. The technological revolution defined by AI will have transformative consequences for our societies, but we need to mitigate the risks and focus on equal access.

In the discussion with the audience, it was pointed out that we can think about AI as human-powered computing or as computer-enabled humans. Our basic conception changes our approach to the technology.

Other important questions arise around the security of AI applications, the reliance on private companies to develop the technology, and the almost complete lack of funding for basic AI research and development. The session concluded with a discussion of the role of DPOs (disabled people’s organisations): are they using and investing in AI?

The OECD’s report, “Using AI to Support People with Disability in the Labour Market: Opportunities and Challenges”, was published in November 2023. You can download the report directly from this link: OECD report