Presenter(s):  AmberNechole Hart

Date / Time:  March 5, 2020 @ 1:40 pm

Key Takeaways

Hart explores why Siri sounds white, why that’s a problem, and how to make digital voice technology more inclusive.

Notes

Hart focused on three questions over the course of her talk titled “Why Does Siri Sound White?”

  1. What is sounding white?
  2. How do products sound white?
  3. Are there opportunities to change the narrative?

To explain what “sounding white” is, Hart spoke about her personal experience.

“As a black person, and honestly it might also happen to other ethnic minorities, as you begin to grow up, you find that sounding white is basically the equivalent of mastering Standard American English,” Hart said.

There’s a historical context to this reality. Hart said that in 1619, when the first ship carrying Africans landed in what would become the United States of America, black people were given a design brief.

“Some of the requirements: you have to be subservient,” Hart said, “and you have to use someone else’s voice to communicate. A limit is that, again, you can’t use your own voice and that you’re disconnected from the land, you’re disconnected from your culture, and many times you’re disconnected from your community.”

African American English is the result of this design brief.

“Not only did we create our own dialect, but we also mastered Standard American English,” Hart said. That mastery allows black folks to code-switch depending on what the context requires of them. Sounding white, at its base, is just the mastery of Standard American English. Siri’s whiteness, then, perpetuates the myth that white voices are intelligent and successful and black voices aren’t.

“The black voice, the way that it’s coded, it’s okay to be used socially, it’s okay in informal conversation,” Hart said, “but it’s always relegated outside of professional and academic spaces.”

Hart focused on voice assistants to explore how products end up sounding white. These devices work by taking in sound from the user, parsing that sound for meaning, and then responding. Hart argued that even with black voice actors performing the roles of digital assistants, the result still wouldn’t sound quite right.

“You have a black voice being your voice assistant, but it’s actually the models underneath, the natural language processing models underneath, that are really impacting it,” Hart said. “It doesn’t sound like the Issa Rae I connected with on Insecure, it doesn’t sound like John Legend.”
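Her point is easier to see laid out as a pipeline. Below is a minimal sketch of how these assistants are typically structured; the function names are hypothetical stand-ins, not any vendor’s actual API. Note that the voice itself only enters at the final, text-to-speech stage.

```python
# A toy stand-in for the pipeline Hart describes: hear, understand,
# decide, speak. Every function here is a hypothetical placeholder,
# not any vendor's real API. The point is structural: the audible
# "voice" is only the last stage, while the recognition and language
# models underneath do the interpretive work.

def transcribe(audio: bytes) -> str:
    # Speech-to-text model. In a real assistant, this is where accents
    # and dialects the model wasn't trained on get mis-heard.
    return "turn on the lights"  # toy transcript for illustration

def parse_intent(transcript: str) -> dict:
    # Natural language processing model, typically trained mostly on
    # Standard American English.
    if "lights" in transcript:
        return {"intent": "lights_on"}
    return {"intent": "unknown"}

def respond(intent: dict) -> str:
    if intent["intent"] == "lights_on":
        return "Okay, turning on the lights."
    return "Sorry, I didn't catch that."

def synthesize(text: str, voice: str = "default") -> bytes:
    # Text-to-speech. Hiring a black voice actor swaps out only this
    # stage; the models above it are untouched.
    return f"[{voice} voice] {text}".encode()

print(synthesize(respond(parse_intent(transcribe(b"raw audio")))))
```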

The problem is that most of these models are trained only on Standard American English. The resulting parsing models end up unable to support African American English, in some instances even coding it as toxic, and they often fail to ‘hear’ other accents and dialects as well.
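A toy sketch makes the mechanism concrete. Assuming a deliberately tiny hypothetical corpus and scikit-learn’s CountVectorizer, it shows that dialect tokens an SAE-only model never saw simply have no representation in the model’s vocabulary, which is the root of both the mis-hearing and the mis-scoring.

```python
# A deliberately tiny illustration of the training gap Hart describes.
# The three-sentence "corpus" is hypothetical; real systems train on
# millions of utterances, but the mechanism is the same: tokens the
# model never saw carry no evidence, so downstream intent and toxicity
# models fall back on their priors when dialect features appear.

from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical training data: Standard American English only.
sae_training = [
    "could you please turn off the lights",
    "what is the weather like today",
    "set a timer for ten minutes",
]

vectorizer = CountVectorizer().fit(sae_training)
vocab = set(vectorizer.get_feature_names_out())
tokenize = vectorizer.build_analyzer()

# The same kind of request, phrased with African American English features.
aae_request = "finna head out, cut the lights off"

unseen = [tok for tok in tokenize(aae_request) if tok not in vocab]
print(unseen)  # ['finna', 'head', 'out', 'cut'] -- no training evidence
```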

Hart offered four ways to change this narrative.

  1. Integrate historical context.
  2. Build out algorithms with linguists.
  3. Incentivize underrepresented voices.
  4. Ask whose voice we do and don’t want to hear.

“If these companies are not interested in inclusion, their bottom line will be interested in it,” Hart said. “I think this is where we’re moving and we really need to start thinking about inclusivity, not as something that we need to do because it’s right, but something that will help us innovate and help push our products forward.”