Building User Trust with Machine Learning Products

Designing user experiences for machine learning (ML) and other emerging technologies can feel intangible; users often see ML as a black box that gives them little visibility or control. As with other forms of automation, users need to trust that ML features will behave as expected, and that they can override them if needed.

Last year, PagerDuty launched Intelligent Alert Grouping, a feature that predicts in real time how to group alerts so software developers can understand the types of problems occurring on their systems. Although Intelligent Alert Grouping is more efficient and flexible than alternatives like hard-coded rules, users hesitated to try the new technology because they feared it might make mistakes.

In this talk, I’ll share my experience designing interactions that helped users preview and experiment with this new ML feature, how our team approached user research and design, and the lessons we learned about how users gain trust in unfamiliar technology.

You’ll come away with:

  • Ideas for applying tactics like journey mapping and prototyping in new ways to address product experiences that change over time
  • Examples of ways to provide more transparency and control in ML product experiences, which can often be invisible to users
  • Strategies for explainability, both inside and outside the product, through effective metaphors and collaboration with user-facing teams

If you’re curious about how to tackle the unique UX challenges of these new technologies, or interested in the unexpected ways users want to experiment with them, this talk is for you.

Track: Technology
Location: Blakely
Date: March 3, 2020
Time: 3:00 pm - 3:45 pm

Claire Pacheco

PagerDuty