This post is the second blog post in a seven-post series recapping the PEPR 2021 conference. If you're wondering what PEPR is or want to see the other PEPR conference recaps, check out this post!
The two PEPR 2021 talks on Consent are:
- Designing Meaningful Privacy Choice Experiences for Users
- Engineering a Consent Sandbox to Eliminate Annoying Pop-Ups and Dark Patterns
Designing Meaningful Privacy Choice Experiences for Users
In Designing Meaningful Privacy Choice Experiences for Users, Yuanyuan Feng, Yaxing Yao, and Norman Sadeh introduce a conceptual framework and taxonomy for designing and analyzing privacy notices and privacy choices. Laws often fall short of providing practical guidance for privacy practitioners to satisfy legal obligations, and this work helps bridge the gap between legal definitions and implementation.
Privacy notice and privacy choice are widely adopted legal approaches that outline how people should be informed about data collection, use, and any privacy controls available to users. However, companies frequently employ dark patterns, intentionally or unintentionally, that manipulate users into making undesirable privacy choices: choices can be difficult to find, oversimplified, or contrary to users' expectations.
How do we identify when a privacy choice is effective and respects users?
To answer this question, Yuanyuan presents 5 desirable attributes that form a conceptual framework for designing and analyzing privacy choices:
- Effectiveness — Choices should match preferences and desired outcomes
- Efficiency — Choices should take minimal time and effort
- User Awareness — Users should be aware of and understand choices
- Comprehensiveness — Users should be offered the full range of relevant privacy choices
- Neutrality — Choices should be neutral and free of dark patterns
In addition to this conceptual framework, Yuanyuan also introduces a design space that serves as a taxonomy of important privacy dimensions. This taxonomy is based on existing privacy design literature and other user-centered analysis. The 5 dimensions of the taxonomy are: type, functionality, status timing, channel, and modality.
- Type defines the type of privacy choice offered to users i.e., binary choice, multiple-choice, contextualized choice, and privacy rights-based choices.
- Functionality describes the capabilities offered to support the presentation, enforcement, and feedback of privacy choices that users make.
- Status timing relates to when the privacy choice is presented to users i.e., at setup time, just-in-time, and context-aware.
- Channel establishes the mechanism through which privacy choices are communicated i.e., primary, secondary, or public channels.
- Modality describes the medium used to deliver and record users' privacy choices e.g., visual, auditory, haptic, machine-readable, etc.
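To make the taxonomy concrete, here is a minimal sketch that models a single privacy choice along the five dimensions. All type and field names are my own illustrations, not from the talk or the underlying paper:

```typescript
// Hypothetical TypeScript model of the five taxonomy dimensions.
// Names are illustrative, not from the talk.
type ChoiceType = "binary" | "multiple" | "contextualized" | "rights-based";
type Timing = "at-setup" | "just-in-time" | "context-aware";
type Channel = "primary" | "secondary" | "public";
type Modality = "visual" | "auditory" | "haptic" | "machine-readable";

interface PrivacyChoice {
  type: ChoiceType;
  functionality: string[]; // e.g., presentation, enforcement, feedback
  timing: Timing;
  channel: Channel;
  modalities: Modality[];
}

// Example: an opt-out presented just-in-time inside a companion app.
const optOut: PrivacyChoice = {
  type: "binary",
  functionality: ["presentation", "feedback"],
  timing: "just-in-time",
  channel: "secondary",
  modalities: ["visual", "machine-readable"],
};
```

A structure like this lets a team audit each choice in a product against every dimension rather than only asking whether a notice exists.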
Let's apply this conceptual framework and taxonomy to a concrete example.
Yuanyuan introduces an Internet of Things (IoT) Assistant App that helps users discover and control the information collected from them by nearby IoT devices. This IoT Assistant App affords users different types of privacy choices: users can opt in and opt out of certain data practices and can request access to or deletion of their data. While the IoT Assistant App does not enforce users' privacy decisions (that responsibility falls on IoT system owners), it does provide timely feedback to users that describes the functionality of their privacy choices.
Users can also choose when and how they are made aware of new IoT devices that may be around them. Users are afforded location-based discovery of IoT devices, as well as their choice of context-aware, on-demand, or personalized push notifications. The IoT Assistant App also provides multiple delivery channels for privacy choices: choices are primarily delivered through a secondary channel (the app itself), as well as through public channels via QR codes. Finally, while privacy choices are primarily delivered visually, accessibility labels and push notifications provide additional modalities.
Engineering a Consent Sandbox to Eliminate Annoying Pop-Ups and Dark Patterns
In Engineering a Consent Sandbox to Eliminate Annoying Pop-Ups and Dark Patterns, Benjamin Brook introduces Airgap.js—a third-party script that helps companies respect users' consent preferences. Airgap.js provides a browser firewall that quarantines tracking-related events on websites until user consent is provided and helps remove annoying user experiences associated with disruptive consent banners.
Companies frequently track users and their actions as they arrive at and navigate any given website. This tracking is frequently immediate and automatic, continuing until a specific action is taken, e.g., signing up for a newsletter, purchasing a product, or ultimately leaving the website.
Due to regulations, and to facilitate continued tracking of users, companies have implemented pop-up consent banners. These cookie banners ask for users' consent before tracking begins, but they are intrusive, disruptive, and interrupt users' primary goals.
Benjamin suggests that to address the problem of disruptive cookie banners companies should ask for users' consent just-in-time. To do so, Benjamin proposes that websites could track users locally in their browser and only upload this data after a user provides consent to do so. However, many companies leverage third-party scripts that activate on page load and immediately begin uploading data.
How can we change this paradigm? A browser firewall may be the answer.
A browser firewall, or consent sandbox, would permit essential network requests like loading images while restricting tracking-related requests. These tracking events would be quarantined locally in a user's browser and could be replayed later after a user decides to provide consent to share this information. Benjamin's team tried two possible approaches to achieve this:
- Sandboxed IFrames
- Dynamic Content Security Policies (CSPs)
Sandboxed IFrames: The team began by sandboxing each third-party tracking script into its own IFrame in the browser. These IFrames are unable to make network requests, and each DOM mutation is inspected and compared against users' consent preferences before being allowed to execute.
While this approach was straightforward to implement, it suffered from performance problems. As the size of the website scaled, this implementation introduced substantial memory and CPU strain and rendered optimizations like the defer script attribute completely ineffective. Additionally, it was possible to replay DOM mutations in the wrong order and easily break websites—not good.
Dynamic Content Security Policies: The team's second implementation utilized dynamically generated Content Security Policies (CSPs). These dynamic CSPs provide granular controls that permit or block network requests to particular resources and sources. While this approach was substantially more performant than the sandboxed IFrames, CSPs could not effectively replay tracking events and would require users to fully reload a website if they modified their consent choices.
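A dynamically generated CSP might be built from the user's per-vendor consent, roughly like this sketch. The connect-src directive and the helper below are my own illustration, and the vendor domains are placeholders:

```typescript
// Build a Content-Security-Policy value from consent preferences.
// Vendor domains here are placeholders, not from the talk.
function buildCsp(consent: Record<string, boolean>): string {
  const allowed = Object.entries(consent)
    .filter(([, ok]) => ok)
    .map(([domain]) => `https://${domain}`);
  // 'self' is always permitted; tracker domains only with consent.
  return `connect-src 'self' ${allowed.join(" ")}`.trim();
}

const csp = buildCsp({
  "analytics.example": true,
  "ads.example": false,
});
// csp === "connect-src 'self' https://analytics.example"
```

The generated value would be served in a Content-Security-Policy header, which is also why a consent change requires a full page reload under this approach: the policy is fixed once the page is loaded.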
Neither sandboxed IFrames nor dynamic CSPs are sufficient by themselves. The final solution must intercept every way a page can reach the network: in addition to JavaScript APIs like navigator.sendBeacon, there are many HTML elements that can produce network requests as well.
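One way to gate these JavaScript vectors is to wrap each network entry point (fetch, XMLHttpRequest, navigator.sendBeacon) with a consent check that quarantines disallowed requests. The sketch below is a simplified illustration of that pattern, not Airgap.js source, and uses a plain synchronous function in place of the real browser APIs:

```typescript
// Simplified sketch of gating a network call on consent.
// Real interception would wrap window.fetch, XMLHttpRequest, and
// navigator.sendBeacon; this illustration uses a plain function.
type Send = (url: string) => string;

function gate(
  realSend: Send,
  isAllowed: (url: string) => boolean,
  quarantine: string[],
): Send {
  return (url) => {
    if (isAllowed(url)) return realSend(url);
    quarantine.push(url); // hold for possible replay after consent
    return "blocked";
  };
}
```

Essential requests pass through untouched, while tracking requests are recorded locally so they can be replayed if the user later consents.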
Benjamin walks through a few concrete examples and describes how each network request would be evaluated and ultimately permitted to proceed or be blocked. For the full experience, I recommend you watch the talk!
Most importantly, the chosen solution introduced no noticeable performance overhead for large websites and allowed for the replay of local tracking events once a user has provided consent.
I hope these posts have piqued your interest in PEPR 2021 and future iterations of the conference. Don't forget to check out the other Conference Recaps for PEPR 2021 as well!
If you liked this post (or have ideas on how to improve it), I'd love to know as always. Cheers!