10 min read

Recap: PEPR 2021 — Privacy for Vulnerable Populations

Missed PEPR 2021 and want a recap of the Privacy for Vulnerable Populations talks? Read this.

Overview

This post is the seventh and final blog post in a seven-post series recapping the PEPR 2021 conference. If you're wondering what PEPR is or want to see the other PEPR conference recaps, check out this post!

The three PEPR 2021 talks on Privacy for Vulnerable Populations are:

  1. Considering Privacy when Communicating with Incarcerated People
  2. Security through Care: Abusability Insights from Tech Abuse Advocates
  3. If at First You Don’t Succeed: Norway’s Two Contact-Tracing Apps

Considering Privacy when Communicating with Incarcerated People

In Considering Privacy when Communicating with Incarcerated People, Kentrell Owens interviews family members of incarcerated people to understand their privacy practices and concerns. Kentrell focuses on families' communication practices, their awareness of and attitudes toward surveillance, and any privacy-preserving strategies they use.

The United States leads the world in both its incarceration rate and the total number of people incarcerated. Incarcerated people, and their families, experience a range of surveillance when communicating—prisons claim this is for safety reasons, but the harms should also be considered. Incarcerated people may not report abuse for fear of retaliation, many people held in jail have not been convicted of a crime, and existing racial disparities affect who is subject to surveillance in the first place.

Incarcerated people generally have five communication methods available to them, depending on the capabilities of a given facility: phone calls, physical mail, electronic messaging/email, video visitation, and in-person visitation. To understand the privacy preferences and concerns of family members of incarcerated people, Kentrell interviewed 16 people. Kentrell interviewed family members rather than incarcerated people because:

  • Communication with incarcerated people would be surveilled, which could bias the results and create risks for participants.
  • Recruiting participants would have been difficult.
  • There are unique ethical challenges regarding consent and compensation.

Kentrell asked families about their perceptions of data collection, retention, and use, as well as general security, surveillance, and privacy questions. The results are organized into four categories: (1) communication practices unrelated to surveillance, (2) awareness of surveillance, (3) attitudes about surveillance, and (4) privacy-preserving strategies participants mentioned.

Communication Practices: Families are largely driven by an obligation to stay in touch with their incarcerated relatives. However, the communication methods available to them are expensive and unreliable. It can take days to receive a message and the cost of communicating is offloaded onto families of the incarcerated—you have to pay to send an email.

Awareness of Surveillance: Most participants thought surveillance wasn't possible due to legal, practical, or technical barriers. However, these barriers are effectively non-existent or do not have a material impact on the facilities' ability to surveil incarcerated peoples' communications.

Attitudes About Surveillance: Families mentioned that it was unfair that they were being surveilled because they themselves were not incarcerated. They thought their words and communications would be used against their incarcerated relative, especially when discussing the judicial system. Half the participants believed facilities tracked and monitored communications to strengthen cases or to press more charges.

Privacy-Preserving Strategies: Participants stated that they always try to use the most private communication method available, e.g., physical mail or in-person visitation. With physical mail, senders can omit their real name and return address, which may offer some degree of anonymity. For others, self-censorship is a privacy-preserving strategy, i.e., avoiding talking about a case or concealing the names of others.

When participants were asked what could be improved about communicating with incarcerated relatives, their focus was on ease of communication and cost rather than privacy. Participants also indicated that inconvenience, accessibility barriers, and prior trauma kept them from staying in touch with incarcerated relatives—it should be easier for people on the outside to reach out.

To close things out, Kentrell highlights two problems raised in the paper: (1) the misalignment of stakeholder incentives and (2) the fact that surveillance contributes to worse legal outcomes for people of color.

Misaligned Stakeholder Incentives: The companies that provide communication services to incarceration facilities are generally unresponsive to the needs and concerns of end-users. General functionality issues go unaddressed, and the few privacy controls that exist are ineffective.

Disproportionate Outcomes for People of Color: The ability to post bail and stay out of prison is a major predictor of legal outcomes—those who cannot post bail have all of their communications surveilled. Due to social stratification in the US, poor Black, Latinx, and Indigenous people are disproportionately the targets of incarceration-based surveillance, which reinforces existing wealth and racial disparities.

So what do we do?

End-users should use the most private communication method available for their purposes. Mail or in-person visitation may provide the best forms of anonymity. However, some people value email because it does not record audio or video—even though you must identify yourself and create an account.

Policymakers should recommend data minimization and increased access controls. There have been multiple breaches of telecommunication providers that leaked attorney-client privileged calls and electronic messages. Incarcerated people should also be afforded better control over their data once they leave a facility. Additionally, advocacy groups should share information about surveillance with impacted individuals.

Privacy engineers should consider the threat models of marginalized groups and how technology can be used or misused to exacerbate existing social inequities or create new ones. To help account for this, diversify your teams—having people from marginalized groups can strengthen the validity of your results.

Security through Care: Abusability Insights from Tech Abuse Advocates

In Security through Care: Abusability Insights from Tech Abuse Advocates, Julia Slupska shares how conventional cybersecurity threat models fail to account for gendered surveillance and technology abuse. Julia also conducted 26 qualitative interviews to better understand how technologists can reduce technology-enabled abuse and how they can compensate survivors for the feedback they provide to companies.

Julia's research began with a survey of existing research papers that perform security analyses of Internet of Things (IoT) devices—more specifically, smart locks. These papers explore a variety of threats in an attempt to identify what, or who, could compromise a smart lock system.

Julia highlights a specific research paper focusing on August smart locks, which are Bluetooth-enabled and attach to conventional locks. The lock connects to a smartphone via Bluetooth, which in turn communicates with the August web servers. August smart locks have two user types, owners and guests, with distinct permissions. Owners can lock and unlock the door, add and remove guests, and view user activity. Guests, on the other hand, can only lock and unlock the door.

The paper considers a situation where two individuals (Alice and Bob) are cohabitating and are both co-owners of the August smart lock. A potential threat exists if Bob becomes a malicious actor while still being a co-owner. By definition, co-owners can revoke each other's access.

Now suppose Alice attempts to revoke Bob's access while her device is outside of Bluetooth range. At the same time, Bob's device is in airplane mode with Bluetooth enabled—it can still connect to the August smart lock but not to the August web servers. Because Bob's phone cannot reach the August servers, it never learns of Alice's revocation. Thus, Bob can still lock and unlock the door and, due to a bug, the system won't even record that these actions were performed.
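
To make the failure mode concrete, here is a minimal, hypothetical sketch of the race in Python. The class and method names are my own stand-ins, not August's actual API; the point is only that a lock which learns about revocations through the connecting phone will miss them when that phone is offline from the cloud but still within Bluetooth range:

```python
# Hypothetical model of the revocation race described above; this illustrates
# the failure mode and is not August's actual implementation.

class CloudService:
    """Central server holding the authoritative revocation list."""
    def __init__(self):
        self.revoked = set()

    def revoke(self, user):
        self.revoked.add(user)


class Phone:
    """A phone relays cloud state to the lock over Bluetooth."""
    def __init__(self, user, cloud, online=True):
        self.user, self.cloud, self.online = user, cloud, online
        self.cached_revocations = set()

    def sync(self):
        # In airplane mode (online=False) the phone keeps a stale cache.
        if self.online:
            self.cached_revocations = set(self.cloud.revoked)


class SmartLock:
    """The lock only knows what the connecting phone tells it."""
    def unlock(self, phone):
        phone.sync()
        return phone.user not in phone.cached_revocations


cloud = CloudService()
bob_phone = Phone("bob", cloud, online=False)  # airplane mode, Bluetooth still on
cloud.revoke("bob")                            # Alice revokes Bob via the cloud

print(SmartLock().unlock(bob_phone))  # True: the lock still opens for Bob
```

In this toy model, Bob's phone holds a stale cache of revocations, so the lock keeps honoring access that Alice believes she has revoked.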

The above example poses a serious threat in the context of domestic violence.

In this case, a previously trusted party could pose a substantial psychological, emotional, or physical threat to people in the house. The authors of the research paper describe this scenario as alarming in theory but unlikely in practice. However, this analysis treats trust as a static, unchanging attribute and may dismiss a serious security shortcoming.

More generally, security research papers on IoT devices do not consider individuals inside the home as potential malicious actors. Instead, researchers typically focus on network-based attackers, hackers, burglars, thieves, and arsonists. When researchers do consider insider threats, the insider is typically the company offering the smart lock, e.g., August.

Although these types of technology-enabled abuse can happen to anyone, they are disproportionately committed by men against women—with other compounding factors like class, race, and disability. Technology-enabled abuse is part of broader patterns of behavior that include domestic violence, sexual abuse, image-based sexual abuse, revenge porn, and more.

These issues are downplayed and typically labeled as non-security issues.

Conventional security threat modeling frequently excludes domestic violence and intimate partner violence. In response to these threats, security practitioners often turn toward providing more advanced security and privacy controls. However, while these controls are important, they create "safety work" that places the burden on survivors to navigate complex technology practices in order to secure their devices and their lives.

Companies need to not only think about security but also about abusability.

Julia also conducted 26 qualitative interviews with domestic violence survivors, shelter workers, advocates, and others to understand their needs. While not necessarily security experts, these people see the extremes of how technology can fail on a daily basis. Abuse advocates are developing digital security practices that go beyond the technical controls provided by companies. Advocates provide networks of care within their communities, ranging from maintaining relationships with mechanics who check for tracking devices, to personnel at Best Buy, to the police, and to some extent the technology industry. One of the advocates Julia talked to provides unpaid customer support to survivors to help them remove intimate images from Facebook.

Julia suggests that companies consider leveraging trauma-informed design. Companies and individuals will likely never know whether they're providing services to someone who has experienced trauma, or what that trauma is. To account for this, companies should provide specialized features and services for these individuals. For example, provide regular reminders that inform individuals when their location is being tracked or when their data is being shared with another user—while remaining mindful of the risk of retraumatization.

To wrap things up, Julia reminds us that every device, product, and situation is different. There is no one-size-fits-all design recommendation that prevents technology abuse. However, companies can look for technology abuse risks early on in their design process and account for it in their threat modeling exercises. When possible, companies should partner with technology abuse advocates and survivors and compensate these individuals adequately for their unique security and privacy expertise.

If at First You Don’t Succeed: Norway’s Two Contact-Tracing Apps

In If at First You Don’t Succeed: Norway’s Two Contact-Tracing Apps, Eivind Arvesen shares the problems with Norway's first contact tracing application and how they were fixed using Google and Apple's Exposure Notification System.

While trying to manage and contain the COVID-19 pandemic, public health officials turned to contact tracing. By interviewing those diagnosed with COVID-19, governments attempt to identify additional individuals who may have been exposed and encourage them to isolate and get tested. However, this process is manual and tedious—public health officials needed an automated, scalable solution.

Norway was one of the first countries in Europe to launch a contact tracing application. The application used Bluetooth and location data, had a centralized storage model with device-specific identifiers, and was closed-source. It also required de facto identification of its users through registration in order to participate in contact tracing. It claimed to produce long-term aggregated and anonymized data, but it's likely this data was actually identifiable, or pseudonymous at best.

Norway's first contact tracing application was highly criticized for these decisions.

To address these shortcomings, a democratic proxy was formed to examine the source code, talk to key stakeholders, review documentation, and determine whether the application achieved its security and privacy goals—it did not. In addition to these formal findings, the application received public complaints about battery drainage, registration failures, difficulties handling traffic surges, and limited notification support. The application was launched and collected data liberally, but only provided contact tracing notifications for a few municipalities.

In fact, the Norwegian Data Protection Authority declared the application's data processing forbidden, and Amnesty International ranked it among the most dangerous contact tracing applications in the world. Based on this feedback, the Norwegian Institute of Public Health announced a second national contact tracing application.

The second version of Norway's contact tracing application would leverage Google and Apple's Exposure Notification framework, use only Bluetooth, adopt a decentralized storage model, and be open-source. It also has an appointed external council that serves as a community representative and advocate.

Let's break down some of the major differences between the two applications.

One of the big differences is the set of sensors used. Norway's first application used Bluetooth and GPS, whereas the second application uses only Bluetooth. The first application used GPS both to perform contact tracing and to evaluate public movement patterns to determine the effectiveness of government restrictions. Additionally, the first application stored its data centrally in the public cloud, which enables theft, leakage, misuse, and secondary use of data. In contrast, the decentralized storage model used by the second application stores data locally until a positive COVID-19 diagnosis is received.

Norway's first application required users to register their phone numbers and identified them via static device identifiers. Due to regulatory requirements, it's practically difficult to obtain a phone number without identifying yourself, and phone numbers are generally considered public information. The application also produced data that could be used to build device fingerprints, e.g., analytics data, operating system, version, hardware manufacturer, carrier, screen resolution, and more.
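
As a rough, hypothetical illustration of why this matters (none of the values or field names below are taken from the application itself), combining even a handful of such attributes can produce a stable, quasi-unique fingerprint:

```python
import hashlib

# Hypothetical example: attribute values like these, reported by an analytics
# SDK, can be combined into a quasi-unique device fingerprint.
device_attributes = {
    "os": "Android",
    "os_version": "10",
    "manufacturer": "Samsung",
    "carrier": "Telenor",
    "screen_resolution": "1440x3040",
}

# Concatenate the attributes deterministically and hash them to get a stable
# identifier that persists across sessions even without an explicit device ID.
canonical = "|".join(f"{k}={v}" for k, v in sorted(device_attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])
```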

On the other hand, Norway's second contact tracing application relies on Google and Apple's Exposure Notification System. In this system, a temporary exposure key is created every 24 hours and stored on the user's device for 14 days—the period during which a person is presumed to be contagious with COVID-19. If an individual receives a positive diagnosis, they can upload their previous keys so others can determine whether they were a close contact.

Each temporary exposure key is also used to derive rolling proximity identifiers, which rotate roughly every 15 minutes. While these ephemeral identifiers make it more difficult to track users, they don't make it impossible. The application is still susceptible to third-party correlation attacks, replay attacks, and cross-correlated mapping attacks, but it at least protects against identifier tracking and impersonation.
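
To make the key schedule concrete, here is a minimal Python sketch of the rotation scheme as described above. It is deliberately simplified: the function names and the HMAC-based derivation are my own stand-ins, not the actual Google/Apple specification (which uses HKDF- and AES-based derivations), but it shows how short-lived broadcast identifiers can be derived from a daily key that never leaves the device unless a diagnosis is shared.

```python
import hashlib
import hmac
import os
import time

# Simplified sketch of the rotation scheme described above; not the exact
# Exposure Notification specification.

KEY_LIFETIME = 24 * 60 * 60   # a new temporary exposure key every 24 hours
KEY_RETENTION_DAYS = 14       # keys are kept on-device for 14 days
ID_ROTATION = 15 * 60         # identifiers rotate roughly every 15 minutes


def new_temporary_exposure_key():
    """Random per-day key that stays on the device unless a diagnosis is shared."""
    return os.urandom(16)


def rolling_proximity_identifier(tek, now=None):
    """Derive the short-lived identifier broadcast over Bluetooth for the current interval."""
    interval = int(now if now is not None else time.time()) // ID_ROTATION
    # Derive an unlinkable-looking value from the daily key and the interval number.
    return hmac.new(tek, interval.to_bytes(8, "big"), hashlib.sha256).digest()[:16]


tek = new_temporary_exposure_key()
print(rolling_proximity_identifier(tek).hex())
```

Observers passively collecting Bluetooth broadcasts only ever see these rotating values; linking them back to a person requires the daily key, which is uploaded only after a positive diagnosis.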

Importantly, neither of Norway's contact tracing applications allows users to control what data they wish to share if they receive a positive diagnosis. The legal basis for processing under the General Data Protection Regulation (GDPR) for the first application disallowed police from obtaining health or location data. However, it's unclear whether the data collected by the second application falls under the same restrictions. That is, this data may be obtainable by public health authorities.

In sum, Eivind suggests that Norway's first contact tracing application was badly mismatched with the seven key principles of the GDPR. You do not want to be in the position of solving novel and difficult problems with tools that weren't built with that purpose in mind—security and privacy experts should be involved when developing high-risk engineering solutions. We should work toward objective rules and metrics that cement privacy engineering as an established field with agreed-upon principles that can be incorporated more generally into software engineering practices.

A major pandemic like COVID-19 is not a reason to lower privacy standards.

Wrapping Up

I hope these posts have piqued your interest in PEPR 2021 and future iterations of the conference. Don't forget to check out the other Conference Recaps for PEPR 2021 as well!

If you liked this post (or have ideas on how to improve it), I'd love to know as always. Cheers!