Designing for accessibility


#1

Roger Kibbe, developer evangelist for Bixby, just shared a highly interesting chart on Twitter that I wanted to put up for discussion here!


For your convenience, I’m posting the chart (AFAIK originally by Jen Gentleman) here:

What I found so interesting about this is the message that disabilities are not a binary state but span a whole spectrum from 0 to 100%, and that a lot of people experience at least a situational disability at some point or another. This is also the gist of this humorous voice-themed clip:

The question is: How can we as voice app designers and developers make voice apps more accessible? Some things that come to my mind are:

  • Decrease the speaking rate of TTS-generated speech (using the prosody SSML tag; see the sketch after this list)
  • Make voice apps multimodal (including touch input)
  • Build a comprehensive language model that covers the different ways various groups of people would phrase the same intent
  • Don’t require complex utterances; where possible, stick with the simplest form of input (yes/no)
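To make the first point concrete, here is a minimal TypeScript sketch that wraps response text in the prosody SSML tag. The rate attribute is standard SSML, but the function name and default rate are illustrative assumptions, not any specific framework’s API:

```typescript
// Minimal sketch: wrap response text in SSML that slows down TTS playback.
// The prosody "rate" attribute is standard SSML; the function name and
// default rate value are illustrative assumptions.
function withSlowerRate(text: string, rate: string = 'slow'): string {
  return `<speak><prosody rate="${rate}">${text}</prosody></speak>`;
}

// Example: pass the result to whatever SSML-capable speech output your platform expects.
const ssml = withSlowerRate('Welcome back! What would you like to do next?');
console.log(ssml);
// <speak><prosody rate="slow">Welcome back! What would you like to do next?</prosody></speak>
```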

Looking forward to hearing your thoughts on this!


#2

I love this chart. Moving the conversation from “doing it for just a few people” to “almost everyone” is very powerful.

Reminded me of a Twitter conversation I had with someone a few years ago who said something along the lines of “sorry about the typos, messaging with one hand and baby feeding with the other can be challenging”.

Does it maybe make sense to allow people to ask for something like “slow down” when interacting with a voice app? We could then save that in the user-specific config (rough sketch below). Also, offering a repeat behavior is important, I think.
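A rough TypeScript sketch of what that could look like, assuming a generic request handler and a per-user key/value store. All names here (UserPrefs, store, speak) are hypothetical placeholders, not a specific voice framework’s API:

```typescript
// Per-user preferences: speech rate plus the last prompt, so a "repeat"
// request can replay it. The Map stands in for persistent user storage.
interface UserPrefs {
  speechRate: 'slow' | 'medium' | 'fast';
  lastPrompt?: string;
}

const store = new Map<string, UserPrefs>();

// Wrap every response in the user's preferred rate and remember it.
function speak(userId: string, text: string): string {
  const prefs = store.get(userId) ?? { speechRate: 'medium' };
  prefs.lastPrompt = text;
  store.set(userId, prefs);
  return `<speak><prosody rate="${prefs.speechRate}">${text}</prosody></speak>`;
}

// "Slow down" intent: persist the preference for all future responses.
function onSlowDown(userId: string): string {
  const prefs = store.get(userId) ?? { speechRate: 'medium' };
  prefs.speechRate = 'slow';
  store.set(userId, prefs);
  return speak(userId, 'Okay, I will speak more slowly from now on.');
}

// "Repeat" intent: replay the last prompt at the stored rate.
function onRepeat(userId: string): string {
  const last = store.get(userId)?.lastPrompt ?? 'Sorry, there is nothing to repeat yet.';
  return speak(userId, last);
}
```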

Yes! This is one of the things I learned while doing my master’s thesis on multimodal experiences: one of the main advantages of multimodal systems is increased accessibility (and expressiveness), because people can switch between or choose the modalities they need (or prefer).

The question is how we can design experiences to make sure people understand they can move between modalities and devices. APL is a good start for cross-device experiences, but the mobile phone is still mostly neglected.

This made me think of something: typical voice apps have some sort of Unhandled functionality that just repeats what people can do (“please answer with either, …”). If a user enters that state more than once, there clearly is some problem with speech recognition. Maybe we could then think about switching to simple yes/no prompts (sketched below).
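Here is a quick TypeScript sketch of that fallback logic. The session object and handler names are assumptions for illustration; real frameworks expose something similar as per-session state:

```typescript
// Track consecutive trips through Unhandled in session state.
interface Session {
  unhandledCount: number;
}

function onUnhandled(session: Session): string {
  session.unhandledCount += 1;
  if (session.unhandledCount >= 2) {
    // Recognition seems to be struggling; fall back to the simplest input.
    return 'Let me make this easier. Do you want to continue? Please say yes or no.';
  }
  return 'Sorry, I did not get that. Please answer with one of the options.';
}

// Any successfully handled intent should reset the counter:
function onAnyIntent(session: Session): void {
  session.unhandledCount = 0;
}
```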


#3

I also just learned there is a whole website by Microsoft dedicated to this: Inclusive Design. I think the pages from the tweet above are from their Inclusive 101 PDF.

I like how they frame it as “inclusive” design (open for everyone) as compared to “accessibility” (maybe often perceived as “doing extra work for the disabled”?).


#4

Seems like Amazon has been working on this: