Recap: PEPR 2020 — Incidents

Missed PEPR 2020 and want a recap of the privacy incidents talks? Read this.

Overview

This post is the sixth in a seven-post series. If you want to see all the posts in the series, check out Recap: Privacy Engineering Practice and Respect 2020.

The three PEPR 2020 talks on incidents are:

  1. Building an Effective Feedback Loop for Your Privacy Program through Privacy Incident Response
  2. When Things Go Wrong
  3. Taking Responsibility for Someone Else's Code: Studying the Privacy Behaviors of Mobile Apps at Scale

Building an Effective Feedback Loop for Your Privacy Program through Privacy Incident Response

In Building an Effective Feedback Loop for Your Privacy Program through Privacy Incident Response, Sri Maddipati begins by defining the critical components of a privacy program: privacy policies, risk assessment and compliance, privacy by design, and incident response (IR). Privacy IR is challenging regardless of organization size because:

  1. There are limited channels for detecting privacy incidents
  2. Notification requirements and the repercussions of non-compliance are complex
  3. Long-term remediation and prevention are complicated by cross-team impact
  4. Privacy budget and resources for IR are slim

Next, Sri covers what a typical feedback loop looks like for a privacy program. A privacy-focused software development life cycle (SDLC) is followed by privacy impact assessments, audits, and vendor assessments. While this is happening, the privacy IR team must detect and respond to incidents and develop effective metrics and trend analysis around incident remediation and prevention. From there, one can identify the threats, vulnerabilities, and risks, which feed back into the privacy-focused SDLC.

It's important to track IR metrics to see the big picture, identify room for improvement, justify the need for resources, highlight the effectiveness of IR, and ultimately prevent incidents (we want the observed trends to shrink over time). When choosing privacy incident metrics, you should check data quality, ensure consistency in how metrics are tracked across the org, measure overall privacy health to find blind spots, and find the right audience to share your metrics with. Consider tracking the volume of incidents, their source (external vs. internal), trends by service, type of issue, severity, root cause, etc.
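As a toy illustration of the metric tracking Sri describes, here is a minimal Python sketch; the incident fields and example values are invented for illustration, not taken from the talk:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PrivacyIncident:
    source: str      # "external" or "internal" detection channel
    service: str     # service where the incident occurred
    severity: str    # e.g., "low", "medium", "high"
    root_cause: str

def summarize(incidents):
    """Tally incidents along the dimensions the talk suggests tracking."""
    return {
        "volume": len(incidents),
        "by_source": Counter(i.source for i in incidents),
        "by_service": Counter(i.service for i in incidents),
        "by_severity": Counter(i.severity for i in incidents),
        "by_root_cause": Counter(i.root_cause for i in incidents),
    }

# Hypothetical incident log
incidents = [
    PrivacyIncident("internal", "billing", "high", "logging PII"),
    PrivacyIncident("external", "search", "low", "stale consent flag"),
    PrivacyIncident("internal", "billing", "medium", "logging PII"),
]
summary = summarize(incidents)
print(summary["volume"])                        # 3
print(summary["by_root_cause"]["logging PII"])  # 2
```

A recurring root cause surfacing in a tally like this is exactly the kind of signal that feeds back into the privacy-focused SDLC.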

When Things Go Wrong

In When Things Go Wrong, Lea Kissner shares their insights as a veteran security and privacy incident responder at companies large, medium, and small. Their first piece of advice is to write things down while you have a cool head: before an incident, plan ahead, document your procedures and playbooks, and consider the hard choices you may have to make mid-incident. Once an incident occurs, Lea suggests you:

  1. Assign an incident commander to coordinate and manage operations
  2. Find the cut
  3. Stop the bleeding
  4. Clean up the blood

There should be only one incident commander at a time. They are responsible for documenting the events of the incident and should not be in the weeds debugging issues. Hand-offs between commanders should be explicit to avoid confusion or a gap where no one is acting as incident commander.

The first step of incident response is to find the cut and diagnose it: what's going wrong, who is affected, how many people are affected, and what are their characteristics (e.g., type of user, region)? Determine whether it's an incident or a vulnerability; the remediation and legal repercussions differ for each. The source may be human error or a bug, e.g., race conditions, cache collisions, logic errors, or bad ML models.

Once you've diagnosed the issue, stop the bleeding by rolling back binaries, taking down a service, flipping a feature flag, or some other means. Messy or partial fixes are often better than no fix. Next, clean up the blood using a combination of short-term remediations (getting to a usable but brittle state) and long-term remediations (moving from a not-broken state to a good state). Finally, conduct a postmortem where you ask "why" at least three times. Keep postmortems blameless, look for related issues, and use what you learn to improve your overall incident response program.
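To make "stop the bleeding" concrete, one common mechanism is a feature kill switch. A minimal sketch, assuming an in-memory flag store (real systems would back this with a config service; the flag and function names are hypothetical):

```python
class FeatureFlags:
    """In-memory feature flag store (a stand-in for a real config service)."""

    def __init__(self, flags):
        self._flags = dict(flags)

    def is_enabled(self, name):
        # Fail closed: unknown flags are treated as disabled.
        return self._flags.get(name, False)

    def kill(self, name):
        """Disable a feature immediately, without a rollback or redeploy."""
        self._flags[name] = False

flags = FeatureFlags({"export_user_data": True})

def export_user_data(user_id, flags):
    if not flags.is_enabled("export_user_data"):
        return None  # feature disabled during the incident
    return f"export-for-{user_id}"

assert export_user_data(42, flags) == "export-for-42"
flags.kill("export_user_data")  # stop the bleeding
assert export_user_data(42, flags) is None
```

Flipping a flag like this is a messy, partial fix in Lea's sense: it degrades functionality for everyone, but it halts the harm while the long-term remediation is built.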

Taking Responsibility for Someone Else's Code: Studying the Privacy Behaviors of Mobile Apps at Scale

In Taking Responsibility for Someone Else's Code: Studying the Privacy Behaviors of Mobile Apps at Scale, Serge Egelman shares the work he and his colleagues have done instrumenting the Android platform to assess potential privacy violations. More specifically, they created a custom instrumented Android platform to determine which Android apps access sensitive resources (e.g., location data, call logs, network state, and various identifiers). Serge's team downloads Android apps and runs them to observe their data flows: whether personal data was exfiltrated, which APIs were accessed, and which SDKs were bundled with the application.

Through this analysis, Serge's team found that many third-party SDKs were performing nefarious operations. If an SDK failed to acquire permission to access location information, it would often fall back to a side channel, deriving the information by reading files under /proc/. Developers unaware of these nuances may not accurately disclose their app's behavior, may unintentionally collect data without consent, and may violate laws without knowing it.
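As a sketch of the kind of side channel Serge describes, the snippet below parses text in the format of Linux's /proc/net/arp to recover MAC addresses of devices on the local network, with no location permission involved. The sample content and the location-inference framing are illustrative assumptions, not code from the talk:

```python
# Fabricated sample in the format of /proc/net/arp
SAMPLE_ARP = """\
IP address       HW type     Flags       HW address            Mask     Device
192.168.1.1      0x1         0x2         aa:bb:cc:dd:ee:ff     *        wlan0
192.168.1.7      0x1         0x2         11:22:33:44:55:66     *        wlan0
"""

def extract_mac_addresses(arp_text):
    """Pull MAC addresses out of /proc/net/arp-style text.

    A router's MAC address can be cross-referenced against databases of
    known access points, yielding an approximate location without the
    app ever holding a location permission.
    """
    macs = []
    for line in arp_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            macs.append(fields[3])  # column 4 is "HW address"
    return macs

print(extract_mac_addresses(SAMPLE_ARP))
# ['aa:bb:cc:dd:ee:ff', '11:22:33:44:55:66']
```

The point is not this particular file but the pattern: world-readable system state can leak data the permission model was supposed to gate, and an SDK can exploit that without the app developer noticing.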

While code reuse through the adoption of third-party SDKs has substantial benefits, it can cause severe compliance problems. Each SDK comes with its own API and nuances that developers and those responsible for compliance need to examine for potential issues. Serge recommends reading SDK documentation with special attention to these kinds of security and privacy concerns, and testing to verify that the data processed by the SDK is what you think it is.

Wrapping Up

I hope these posts have piqued your interest in PEPR 2020 and future iterations of the conference! For the other sessions at PEPR 2020, check out Recap: Privacy Engineering Practice and Respect 2020.

If you liked this post (or have ideas on how to improve it), I'd love to know as always. Cheers!