By NCL Google Public Policy Fellow Pollyanna Turner-Ward
On September 11, 2019, policymakers, industry stakeholders, and consumer advocates gathered at The Brookings Institution to discuss the pressing question of how to protect information privacy through federal legislation. Representing the National Consumers League was Executive Director Sally Greenberg.
How did we get here?
To set the scene, panelists first discussed why there is consensus on the need for federal legislation to address privacy and data security. The Snowden revelations showed consumers how much of their data is out there, and they began to question whether companies could be trusted to keep their data safe from the government. More recently, in light of the Cambridge Analytica scandal and increasing instances of identity theft and fraud resulting from data breaches, consumers have begun to question whether companies themselves can be trusted with their data.
Businesses are worried that a lack of consumer trust will interfere with consumers' adoption of digital products and services. For instance, when parents refuse to consent to the collection and use of data about their children's academic performance, that refusal prevents the personalization of those children's learning experiences. Businesses hope that providing individuals with greater privacy protections will increase participation in the digital economy.
In response to consumer privacy concerns, a patchwork of state bills on privacy and data security is also emerging. Businesses claim to be overwhelmed by the prospect of complying with these differing regulatory schemes, especially in light of the EU's General Data Protection Regulation (GDPR), which has already pushed many organizations to comply with privacy and data security rules. Panelists argued that greater international interoperability is necessary both to support businesses and to regain U.S. privacy leadership.
What should federal legislation look like?
Each panelist set forth their idea of what federal legislation should aim to achieve. Intel drafted a privacy bill that includes various protections but lacks a private right of action – that is, the ability to take wrongdoers to court if they violate privacy laws. If companies promise not to use your information in certain ways and then do it anyway, in violation of law, you should have the right to take them to court. NCL's Sally Greenberg directed audience members toward the Public Interest Privacy Principles signed by thirty-four consumer advocacy and civil rights organizations. Advocating in favor of strong protections, strong enforcement, and preemption, and highlighting the importance of "baking data privacy into products and services," she offered NCL's vision of a strong, agile, and adaptive national standard.
Panelists drew comparisons between this approach and that of the EU's GDPR, but criticized the time-consuming and resource-intensive nature of that legislation. They agreed that U.S. legislation should avoid being too prescriptive in the details. Rather than requiring documentation of policies, practices, and data flow maps, legislation should focus on high-level issues.
Breaking down these issues according to consensus and complexity, Cameron F. Kerry listed covered information, de-identification, data security, state enforcement, accountability, and FTC authority as solvable issues. Implementation issues, he said, include notice and transparency and individual rights (access, portability, the right to object to processing, deletion, and nondiscrimination). However, Mr. Kerry noted that disagreement clouds a number of complex issues relating to algorithmic transparency, algorithmic fairness, and data processing limitations (use restrictions). Until consensus is reached in these areas, disagreements about preemption and a private right of action are unlikely to be resolved.
Notice and Transparency
While notice and transparency are important aspects of a comprehensive approach towards privacy and data security, it is difficult for consumers to process the volume of information contained in privacy policies. Consumers also often have little choice but to "agree" to services that are essential to everyday life. As such, legislators may wish to explore the extent to which a company may force an individual to waive their privacy rights as a condition of service. Consent should only have a limited role in relation to sensitive data uses, and companies should focus on designing user interfaces that enable meaningful consumer consent. Panelists criticized the California Consumer Privacy Act (CCPA) for its lack of detail and for putting the burden on individuals to protect themselves. It was agreed that federal standards should move beyond notice-and-consent and put the burden back on businesses.
De-identification
One panelist called de-identification the "secret sauce" to privacy. Preserving the utility of data while removing identification puts the focus on data processing harms. Getting de-identification right is important for valuable research purposes. However, de-identification is often not done well, and confusion lurks around pseudonymization. This technique involves replacing personally identifiable information fields within a data record with artificial identifiers. Because data remains identifiable under that technique, data security and privacy risks persist. Companies must be incentivized to effectively de-identify data, not to re-identify it, and to contractually restrict downstream users from doing the same. To avoid conflating data security levels with pseudonymization levels, a universal and adaptable de-identification standard must be developed.
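To make the panel's distinction concrete, the following is a minimal sketch of pseudonymization, not a recipe for adequate de-identification. The field names, salt handling, and record layout are illustrative assumptions, not anything discussed at the event.

```python
import hashlib
import secrets

# Hypothetical per-dataset secret salt; anyone holding it can re-link records,
# which is one reason pseudonymized data is still treated as identifiable.
SALT = secrets.token_hex(16)

def pseudonymize(record: dict, id_fields=("name", "email", "ssn")) -> dict:
    """Replace directly identifying fields with artificial identifiers.

    The hash is stable for a given salt, so records about the same person can
    still be joined, but the raw values are hidden. This is NOT full
    de-identification: quasi-identifiers such as zip code or birth date remain
    and may allow re-identification when combined with other datasets.
    """
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256((SALT + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # artificial identifier
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "zip": "20036", "diagnosis": "flu"}
print(pseudonymize(record))
```

The takeaway, as the panel noted, is that swapping out direct identifiers does not by itself remove privacy risk; the remaining fields can still lead back to a person.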
Data security
Because data security is critical to privacy, panelists agreed that it is the foundation upon which privacy legislation should be built. Panelists warned against an overly prescriptive approach towards data security but suggested that the Federal Trade Commission (FTC) should offer more guidance. “Reasonable” data security depends upon the nature and scope of data collection and use. This affords organizations flexibility when adopting measures that make sense in terms of information sensitivity, context, and risk of harm.
However, determining data security standards according to the risk of privacy harm is difficult because “risk of privacy harm” is an unsettled and controversial concept. It was also debated whether “information sensitivity” should be used to determine the reasonableness of data security standards. Public Knowledge argued that all data should be protected in the same way because the distinction between sensitive and non-sensitive data is increasingly questionable. When data is aggregated and sophisticated technologies such as machine learning are applied, each and every data point can lead back to an identifiable person.
While use of off-the-shelf software should generally be considered reasonable, higher standards should apply to companies that are more aggressive in their data collection and use. These obligations should extend to third-party processors and service providers, and organizations must continually develop physical, technical, and legal safeguards. To ensure their infrastructure is robust enough to secure their data, they should run tests, conduct impact assessments, and put resources toward data mapping.
Data processing limitations
In sectors ranging from education to healthcare, the use of data undoubtedly has the potential to help us solve many societal problems. However, data use is pervasive, and new and unpredictably harmful outcomes are also possible. Consumers want data to be used in ways that benefit them, for data not to be used in ways that harm them, and for their data to be protected. However, information collection and sharing is largely unbounded. If Congress wishes to move beyond a notice-and-consent model and put the burden back on organizations that handle data, then the boundaries of how data should be collected, retained, used, and shared must be confronted. Without limitations, the high value of data will continue to incentivize organizations to collect and retain data for its own sake, and these practices increase cybersecurity and privacy risks to unforeseen levels.
Calling out data brokers, Intel's David Hoffman stated that databases containing lists of rape victims are simply "unacceptable." However, transfer restrictions are likely to be one of the hardest areas on which to reach consensus. Use restrictions, which govern what organizations can and cannot do with data at a granular level, may be approached by creating presumptively allowed and presumptively prohibited lists. Use and sharing could be presumptively allowed for responsible advertising, legal process and compliance, data security and safety, authentication, product recalls, research purposes, and the fulfillment of product and service requests. Meanwhile, use of data for eligibility determinations, for committing fraud or stalking, or for other unreasonable practices could be presumptively prohibited.
However, it is difficult to determine the standards by which a particular data use should be "green-lighted" or "red-lighted." To determine whether a data use is related to the purpose for which a user originally shared the data, factors may be considered such as whether the use is primary or secondary, how far down the chain of vendors the processing occurs, and whether the processor has a direct or indirect relationship with the data subject. The FTC has done work to articulate "unreasonable" data processing and sharing, and the Center for Democracy and Technology's Consumer Bill of Rights emphasizes respect for context (user expectations) by laying out applicable factors such as consumer privacy risk and information sensitivity.
However, "context" is difficult to operationalize. One option may be to grant the FTC rulemaking authority to determine issues such as which data uses are per se unfair, or which information is sensitive. The deception and unfairness standard has guided the FTC for decades. However, panelists were concerned about giving the FTC a blank check to use the abusiveness standard to deal with data abuses. Instead, the FTC could be given a clear set of instructions in the form of FTC guidance, a legislative preamble, or detailed provisions written into the legislation itself. If this approach is taken, it would be necessary to confront the difficult question of what harm legislation should seek to address. Because privacy injury is not clear or quantifiable, it is difficult to agree on the appropriate harm standard. A specific list of the types of injury resulting from data processing – not an exhaustive list – would give the harm standard substance, and algorithmic data processing ought to be directly confronted.
Because the purpose of data analysis is to draw differences and to make distinctions, the privacy debate cannot be separated from the discrimination debate. Intent to engage in prohibited discrimination is difficult to prove, especially when proxies are used. For instance, rather than directly using a protected characteristic such as race to decide whom to target with payday loan advertisements, an algorithm could use zip code or music taste as a proxy for race. To provide clarity and to promote algorithmic fairness, existing discrimination laws could be augmented with privacy legislation by defining unfair discrimination according to disparate impact on protected classes (disadvantaged groups). Privacy legislation should ensure that data use does not contribute to prohibited discrimination by requiring risk assessments and outcome monitoring.
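One way to picture what "outcome monitoring" could mean in practice is a simple disparate-impact check, loosely modeled on the familiar four-fifths rule of thumb. The sketch below is purely illustrative; the 0.8 threshold, group labels, and data are assumptions and were not specified by the panel.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. ("A", True)."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.

    Values well below 1.0 (commonly below 0.8, the "four-fifths" rule of
    thumb) flag a potentially discriminatory outcome for further review.
    """
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Hypothetical ad-targeting outcomes: (group, shown_offer)
data = ([("A", True)] * 40 + [("A", False)] * 60 +
        [("B", True)] * 20 + [("B", False)] * 80)
print(disparate_impact_ratio(data, protected="B", reference="A"))  # 0.5 -> flag for review
```

A check like this only surfaces skewed outcomes; deciding whether a flagged disparity amounts to unlawful discrimination is exactly the legal question the panel said legislation must confront.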
To increase consumer trust and to provide them with recourse when they suspect that they are the victims of unfair discrimination, legislation should directly confront algorithmic transparency and burden of proof. Consumers cannot be expected to understand the mechanisms that determine what advertisements they are presented with or how automatic decisions are made about them. However, organizations should not be able to escape liability by claiming that they do not have access to the data or algorithm necessary to prove discrimination claims.
Enforcement
Panelists agreed that state attorneys general need to be able to enforce the law and that the FTC requires increased resources and enforcement powers. As Congress cannot anticipate every possible scenario, it is appropriate to give the FTC narrow rulemaking authority, the power to fine for first offenses, the ability to approve codes of conduct, and the mandate to clarify guidance on how to comply with the law on issues such as de-identification. The FTC needs vastly more resources to fulfill this oversight and enforcement role. The jury is out as to whether Congress will pony up.
Sally Greenberg described the importance of also including an option for private parties to bring class-action suits. However, there was disagreement between panelists about whether individuals should be able to privately enforce their rights where the government lacks the resources or will to act. David Hoffman highlighted evidentiary problems associated with the difficulty in proving privacy harms. To better serve the public, he argued in favor of the creation of a uniform standard with strong protections.
Preemption of state laws
A consistent federal standard was emphasized as a key factor driving industry's support for a federal bill. For industry, a bill that does not preempt state law is something of a "deal-breaker"; businesses claim that complying with a patchwork of fifty different data breach notification standards is already hard today. It was suggested that states could be given a window of five years with no preemption to allow them to adapt and innovate, after which the situation could be reviewed. Or the reverse: preempt for five years and sunset the federal law. Both suggestions have merit, but how the questions of preemption and a private right of action will ultimately be answered remains to be seen.