Artificial intelligence (and its compatriot, machine learning) is all around us. From more obvious examples such as Alexa and Siri to less obvious ones like your car screeching at you because your seatbelt isn't fastened, AI is everywhere. The average user may not realize it, but AI plays a pivotal role in how we use technology today. And it's more than merely asking Alexa to adjust the thermostat: the ability to do so by voice rather than by touching the device itself means an artificially intelligent system can make tasks more accessible for a disabled person who can't see or touch the screen. Put another way, AI's ever-burgeoning presence in technology has real potential to affect lives in ways more meaningful than convenience or even productivity. In the case of ambient computing and digital assistants, AI is enabling access to a world that would otherwise be closed off.
Yet as intelligent as we make Alexa or Siri or Google Assistant out to be, the reality is their smartness is literally artificial. They get their data from humans, which is to say an artificially intelligent system is only as "smart" as the data it's fed. For all its myriad benefits, one of the problems with AI is bias: it's fair to question whether the data these systems are given favors certain types of people over others. A prime example of this problem is speech. Alexa and her ilk are built assuming typical, fluent speech; the language models don't take into account atypical speech patterns. On one hand, this is perfectly logical: it's hard enough to teach a machine typical speech, so it's exponentially harder to teach it anything else. But therein lies the rub. For people who stutter (myself included), the bias toward typical speech patterns can make voice-first interfaces feel exclusionary and inaccessible, because the people who build them don't consider alternate ways of speaking. The bias, then, is the supposition that ostensibly "everyone" can competently talk to these genies in boxes.
Element 451 is a company working to address biases in AI. They focus primarily on the education sector, helping colleges and universities gauge student enrollment; more than 60 institutions currently use Element's technology. In an interview conducted in early September, chief executive Ardis Kadiu told me the company's name is a nod to Ray Bradbury's classic novel Fahrenheit 451. He explained that 451 degrees Fahrenheit is the temperature at which paper burns, and Element wanted to evoke a similar "transformation point": the shift from colleges recruiting the traditional way to a more modern, digital way, much as paper is transformed when it burns.
In a statement provided to me, Element said: "Because Element451's AI is about the intent of the student and not their traits (i.e., race, gender, age, disability, etc.), it works to eliminate [or] significantly reduce the bias that comes from historic modeling [of] admissions and enrollment."
What Kadiu and company do is clearly not focused on disability and accessibility. Nonetheless, disability is certainly adjacent to Element's focus on artificial intelligence. Kadiu recognizes AI's ability to be a helper, making everyday tasks like unlocking your phone easier. "[Through] the use of [AI] technology, we are accelerating or taking [a task] that could take 10, 20, 30 different steps and [reducing] that," he said. "So in essence, technology becomes an enabler and it becomes a kind of a facilitator." Kadiu and his team are focused more on efficiency than accessibility, but his comment on the reduction of steps echoes the promise of software like Apple's Shortcuts app on iOS and the Mac. Shortcuts does exactly what the name implies, providing shortcuts to complete what would normally be multi-step processes. They can be efficient and convenient, but they can be accessible as well.
A considerable portion of our conversation delved into the nitty-gritty of AI: how it works, how it's used, and so on. For accessibility, Kadiu acknowledged AI's power to help the disability community in the classroom and other settings. He said repeatedly that "AI is not magic," and that one of the ways it needs to improve is by eliminating biases. The information fed to AI systems must be more diverse and inclusive all around, disabled people included. "[Our] job, as technologists, and people who are building AI, is to expose them to a variety of inputs and data," he said.
In the end, while Element's mission is specific to the education industry, their focus on eliminating bias serves as a lesson for all people in all industries. To make AI smarter and able to make more informed choices, the onus is on technology companies to give these systems as much diverse data as possible, and that data must include disabled people. With over 1.2 billion people worldwide identifying as having some sort of disability, the disability community is the world's largest minority group. It's only logical that they be better represented in the data used to build and train artificial intelligence systems. This is precisely why Alexa and her ilk must be trained to adeptly understand atypical speech: not everyone on the planet speaks without a stutter or other speech impairment. That's quintessential bias right there: most of these digital assistants are trained on typical speech because most people have typical speech.
As Kadiu and Element understand, however, most people is not everyone.