Recap: PEPR 2021 — Design

Overview

This is the fifth post in a seven-post series recapping the PEPR 2021 conference. If you're wondering what PEPR is, or want to see the other PEPR conference recaps, check out this post!

The three PEPR 2021 talks on Design are:

  1. Illustrating Privacy Engineering Concepts with Potty Talk
  2. Privacy UX At Scale: Foundational “truths” and design guidelines
  3. Is it Time Our Devices Showed a Little Respect? Informing the Design of Respectful Intelligent Systems

Illustrating Privacy Engineering Concepts with Potty Talk

In Illustrating Privacy Engineering Concepts with Potty Talk, Lorrie Cranor presents from her 3rd-floor bathroom—yes, you read that right. While there, Lorrie describes things to consider when designing effective privacy notice and choice mechanisms through the lens of unconventional use cases like smart toilets.

Lorrie begins by introducing her work on Privacy Illustrated, where she asks people to draw what privacy means to them. From this work, Lorrie identifies what she calls the quintessential example of a private space—the bathroom. The expectations of privacy in the bathroom resonate with people of all ages. To push her students at Carnegie Mellon to think beyond traditional privacy notices and compliance checkboxes, Lorrie asks them to design effective notice and choice mechanisms for smart toilets.

Like many novel developments, smart toilets raise a series of serious design considerations:

  • What are the legal requirements?
  • How will the choice mechanisms work ergonomically?
  • How will notice and choice be communicated to the visually impaired?
  • What happens when a toilet user doesn't make a choice?
  • Could law enforcement use data to identify illegal drug use?
  • Could employers discover which of their employees are pregnant?

To ensure the design decisions are effective, an in-depth evaluation is needed. But how do we measure the effectiveness of our smart toilet design? In reality, different parties will measure effectiveness differently.

Privacy advocates may suggest that a high opt-out rate means the privacy notice is effective. In contrast, smart toilet manufacturers may suggest that a high number of opt-ins indicates an effective notice that instills trust, confidence, and utility. Lorrie suggests that effectiveness should be determined based on (1) whether users understand the choices they've selected and (2) whether those choices match the users' actual preferences.

While it's possible to conduct a lab-based study to examine the effectiveness of notice and choice mechanisms for smart toilets, these studies may not match users' expectations "when nature is calling". Instead, Lorrie suggests that it may be more effective to conduct the study in a bathroom retrofitted with design prototypes. Although this is more complicated logistically, such a study is likely to reveal factors and observations that would have been missed in a lab-based setting.

In the second half of Lorrie's talk, she moves on to discuss the range of controls present in hotel bathrooms and showers—complete with illustrative photos! Lorrie demonstrates light switches that are hidden by hand towels and shower knobs that you're not quite sure how to operate. For the true "Potty Talk" experience, I recommend watching the talk!

Based on all of these anecdotes, Lorrie shares a few key takeaways:

  • Privacy concepts are abstract and icons may not always be intuitive—when needed, add words to aid understanding.
  • Privacy controls should be easy to understand and standardized—users shouldn't have to relearn controls for different systems.
  • Privacy controls should be easy to find—put controls where people will look for them when they're ready to use them.
  • Privacy-enhancing technologies should be simple, intuitive, attractive, and (really importantly) fail-safe.
  • Privacy tools, including notice and choice mechanisms, should be tested with users in context.

Finally, Lorrie closes out the talk with the following words of wisdom:

If you're trying to explain privacy engineering concepts or reason about privacy issues, take a bathroom break and look for some inspiration.

Privacy UX At Scale: Foundational “truths” and design guidelines

In Privacy UX At Scale: Foundational “truths” and design guidelines, Manya Sleeper and Johanna Woll share 3 privacy truths and 3 high-level Privacy UX design principles. Given the constant changes in news, technology, and regulations, design decisions should scale over time and across different needs and contexts—this talk starts you on that journey!

Manya begins by introducing 3 privacy truths curated from hundreds of UX studies spanning the past 10 years. Manya supplements these research findings with lessons from her practical experience designing and advocating for users' needs in global products. These 3 truths serve as an anchor and starting point for new privacy guidelines:

  1. Privacy is complex and contextual
  2. Multiple factors shape privacy "ABCs" (Attitudes, Behaviors, Comprehension)
  3. Privacy is often not users' main task, even though it's important to them

Privacy is complex and contextual: Privacy means different things to different people in different places in different contexts—there is no singular definition of privacy. Privacy concerns are also not binary and everyone is privacy-sensitive some of the time. People face different security and privacy threats and have varying needs related to these threats. While some people may worry about companies or governments accessing their data, others may worry about friends, family, maintenance workers, or young children accessing their data.

Understanding these differences in users' needs is critical to evaluating the impact of design.

Multiple factors shape privacy "ABCs": Users develop "folk models" in order to understand complex systems. These models are based on how the systems are designed and presented to users, as well as their past experiences and conversations with peers. These folk models strongly impact people's expectations, concerns, and behaviors when using a product.

Privacy is often not users' main task: Privacy is still an important goal and underlying value, even though it may not be the focus of a given task. Privacy tools and settings that are carelessly introduced may increase the overall burden on the user. While users often seek increased transparency, education, and control, too much of one of these attributes in the absence of others may be harmful. Transparency without control leads to helplessness or distrust, while too many controls may make users frustrated.

Users should be able to ask for privacy when they need it, but not have it thrust upon them when pursuing their primary task.

In the second half of the talk, Johanna presents 3 design principles to help apply these truths in practice:

  1. Deliver appropriate outcomes.
  2. Design for the context.
  3. Craft with care.

Deliver appropriate outcomes: Privacy is not the same for everyone, and it can be difficult to build products that work for all users. Instead of attempting to understand users' motivations or classify users based on privacy personas, we should focus on delivering appropriate outcomes. When focusing on privacy-centric user journeys, look for opportunities to promote agency at core touchpoints by providing appropriate controls. Even though privacy may not be the primary task, having protections and controls close at hand may help people react quickly to privacy threats when they arise.

Design for the context: Privacy has nuanced meanings and facets that vary based on cultural contexts. To adequately address privacy issues for global audiences, we should test for global contexts. Linguistic nuances and localization should be approached cautiously—it may be necessary to create terms or concepts when no cultural equivalent exists. Progressive disclosure can also help ensure that a user can execute on their primary task while still having access to detailed privacy information when needed.

Craft with care: People have different needs, abilities, and ways of processing information. Consistent iconography and content can help reduce cognitive load on users while enforcing consistency across products. Be careful when choosing terms like remove, clear, move to trash, or auto-delete, and ensure they reflect the underlying functionality. People like to set and forget without doing the heavy lifting, so aim to enable control without too much complexity—balance trade-offs for your users so they don't have to.

To wrap things up, Manya shares that:

Privacy UX is a challenging and constantly evolving space...but we can start by thinking about how to account for constants (while adapting this guidance to specific product goals)

Is it Time Our Devices Showed a Little Respect? Informing the Design of Respectful Intelligent Systems

In Is it Time Our Devices Showed a Little Respect? Informing the Design of Respectful Intelligent Systems, William Seymour discusses respect and how to incorporate it into the systems we build. A few key questions when thinking about respect are:

  1. What does respect mean for your users?
  2. What's important to users and how might respect manifest in their interactions with your system?
  3. Does the system treat people as a means to an end?
  4. What assumptions underpin system design decisions and what other ways exist to achieve the same goals without relying on those assumptions?

So what is respect?

Is respect something that is due to everyone, or is it earned? Is it a duty that we have to each other? What about a duty for designers and developers when designing systems? As a word, respect has a variety of meanings, and we generally rely on context, as well as subtle cues, to evaluate it—it's a key component of human relationships, society, and culture.

Stephen Hudson proposes four categories of respect: directive, obstacle, evaluative, and institutional. Directive respect includes following laws and rules that guide actions, obstacle respect relates to things that could harm or impede our goals, evaluative respect involves respecting one's skill at a task, and institutional respect includes respecting social, cultural, or historical norms.

A different perspective comes from Immanuel Kant. Kant's major thesis was that we should respect other people as human beings, as moral agents, and as people with moral capacity. One should always treat humanity as an end in itself and never merely as a means to an end. This language can be useful for analyzing systems and evaluating how they treat their users. When considering data and privacy, one should treat users and their needs genuinely and avoid tricking them into doing something they wouldn't otherwise agree to.

While Kantian ethics serves as a reasonable starting point, it has received its fair share of criticism. Kantian ethics creates an abstract notion of respect that is genderless, colorless, and apolitical. In other words, everyone is due respect because we are human and equals in terms of moral capacity. However, the world many of us find ourselves in is one of structural oppression. To bridge this gap, models like those proposed by feminist care ethics conceptualize respect by focusing on and nourishing the characteristics and relationships that make each of us unique.

In terms of data and privacy, we should look for opportunities where these conceptualizations of respect may be violated. In the case of machine learning classifiers, we often reduce people to an objective set of features (small or large) in order to make decisions about them. One should consider how systems may inadvertently exacerbate existing inequalities in society and be careful when choosing objective functions, like optimizing engagement time on social media platforms.

The previous theories of respect assume that respect exists outside of humans. Other conceptualizations suggest that respect arises through actions, interactions, social rules, and social norms that are set throughout society. People provide social signals and use social structures to inform others how they would like to be treated.

In reality, systems form complex ecologies of respect with different boundaries between distinct people and stakeholders. Within any given system there are interactions between design and development teams, the people who use the system, and the people who are affected by its usage. One must ensure respect is maintained across each of these boundaries and relationships.

William also discusses how respect applies to different parts of a system's lifecycle. We should not only think about respect when creating new systems but also when deciding how to turn down existing systems.

When creating systems, one may want to consider how different approaches, paradigms, and architectures change the way that respect is communicated to users. When designing a system, we should consider what user experiences are being optimized for, how people's individuality is considered, their ability to express themselves, and how they're ascribed agency.

William provides a number of wonderful examples throughout the talk that help solidify these messages and show how to build respect into the systems you build. Consider how respect fits into the systems you design, how it relates to their implementation, and how it affects people's data and privacy.

Wrapping Up

I hope these posts have piqued your interest in PEPR 2021 and future iterations of the conference. Don't forget to check out the other Conference Recaps for PEPR 2021 as well!

If you liked this post (or have ideas on how to improve it), I'd love to know as always. Cheers!