Developing an approach towards consumer privacy and data security

By NCL Google Public Policy Fellow Pollyanna Sanderson

This blog post is the first in a series offering a consumer perspective on developing an approach towards consumer privacy and data security.

For more than 20 years, Congressional inaction on privacy and data security has coincided with a rise in data breaches impacting millions of consumers. In the absence of Congressional action, states and the executive branch have increasingly stepped in. A key part of the White House’s response is the National Telecommunications and Information Administration’s (NTIA) September 2018 Request for Comments (RFC).

While a “Request for Comments” sounds incredibly wonky, it is a key part of the process that informs the government’s approach to consumer privacy. The NTIA’s process gathers input from interested stakeholders on ways to advance consumer privacy while protecting prosperity and innovation. Stakeholder responses offer a glimpse into where consensus and disagreement lie among consumer and industry players on key issues. We have read through the comments and, in this series of blogs, are pleased to offer a consumer perspective.

This first blog focuses on a fundamental aspect of any proposed approach to privacy and data security: its scope. Reflecting the risks of big data classification and predictive analytics, the Center for Digital Democracy (CDD) suggested framing the issues according to data-processing outputs. This would cover inferences, decisions, and other data uses that undermine individual control and privacy. Focusing instead on data inputs, many interested stakeholders agreed that privacy legislation must cover “personal information.”

The Center for Democracy and Technology noted that personal information is an evolving concept, the scope of which is “unsettled…as a matter of law, policy, and technology.” Various legal definitions exist at the state, federal, and international levels. The Federal Trade Commission (FTC), in 2012, defined it as information capable of being associated with, or reasonably linked or linkable to, a consumer, household, or device. Subject to certain conditions, de-identified information is excluded from this definition. To help address privacy concerns while enabling collection and use, many stakeholders agree that regulatory relief should be provided for effective de-identification techniques. This would incentivize the development and implementation of privacy-enhancing techniques and de-identification technologies such as differential privacy and encryption. Federal law should avoid classifying covered data in a binary way as personal or non-personal: an all-or-nothing approach requiring irreversible de-identification sets a difficult, perhaps impossible, standard.
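To make that point concrete, here is a minimal, illustrative sketch of one such technique: the Laplace mechanism, the basic building block of differential privacy. The function and the breach-count scenario are hypothetical illustrations of ours, not drawn from any comment filed with the NTIA.

```python
import numpy as np

def laplace_mechanism(true_count: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of a count query.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    construction for epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: publish how many consumers in a dataset were
# affected by a breach. One person joining or leaving the dataset changes
# the count by at most 1, so the sensitivity is 1. A smaller epsilon means
# more noise and therefore stronger privacy.
affected = 12_400
private_release = laplace_mechanism(affected, sensitivity=1.0, epsilon=0.5)
print(f"Noisy count released: {private_release:.0f}")
```

The takeaway for policymakers is that protection here is a dial (the epsilon parameter), not a switch, which is precisely why a binary personal/non-personal classification fits such techniques poorly.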

Recognizing that identifiability rests on a spectrum, the EU’s General Data Protection Regulation (GDPR) excludes anonymized information and introduces the concept of pseudonymized data. These concepts demand federal consideration, having been introduced into United States law via the California Consumer Privacy Act (CCPA). The law should clarify how it applies to aggregated, de-identified, pseudonymous, identifiable, and identified information. To be considered de-identified data subject to lower standards, data must not be linkable to an individual, the risk of re-identification must be minimal, the entity must publicly commit not to attempt to re-identify the data, and effective legal, administrative, technical, and/or contractual controls must be applied to safeguard that commitment.
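To see why pseudonymized data sits between identified and de-identified data on that spectrum, consider the common pattern sketched below: replacing a direct identifier with a keyed hash. The names and sample record are hypothetical. Crucially, whoever holds the key can still re-link records to individuals, which is why the legal, administrative, and contractual controls described above matter as much as the technical ones.

```python
import hmac
import hashlib
import secrets

# A secret key held under organizational controls. Without the key, an
# attacker cannot recompute pseudonyms for guessed identifiers; with it,
# records can be re-linked to individuals, which is why the GDPR still
# treats pseudonymized data as personal data.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a stable
    pseudonym derived via HMAC-SHA256 under a secret key."""
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

record = {"email": "consumer@example.com", "zip": "20036", "purchases": 7}
record["email"] = pseudonymize(record["email"])  # pseudonymized, not anonymized
print(record)
```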

While de-identified and other anonymized data may be subject to lower privacy standards, they should not be removed from protection altogether. In its NTIA comment, the CDD highlights that third-party personal data, anonymized data, and other forms of non-personal data may be used to make sensitive inferences and to develop profiles. These could be used for purposes ranging from persuading voters to targeting advertisements. However, individual privacy rights can only be exercised after inferences or profiles have been applied at the individual level. Because profiles and inferences can be built without identifiability, this aspect of corporate data practice would largely escape accountability if de-identified and other anonymized data were not subject to standards of some kind.

This loophole must be closed. Personal information should be broadly defined to address the risks of re-identification and to capture evolving business practices that undermine privacy. While the GDPR does not explicitly include inferred information in its definition of personal data, inspiration could be taken from the CCPA’s definition of personal information, which expressly covers inferences drawn from personal information and used to create consumer profiles.

Our next blog will explore “developing an approach for handling privacy risks and harms.” In its Request for Comments, the NTIA established a risk- and outcome-based approach to consumer privacy as a high-level goal for federal action. However, within industry and society, there is no consensus about what constitutes a privacy risk. Stay tuned for a deep dive into the key issues that arise.

The author completed her undergraduate degree in law at Queen Mary University of London and her Master of Laws at William & Mary. She has focused her career on privacy and data security.