Computer chip defects force consumers to choose between speed and security

October is National Cybersecurity Awareness Month! Since the month was first observed 15 years ago, the world has gone from about 800 million Internet users to approximately 4.5 billion. Over that same period, an enormous amount of time and energy has been dedicated to improving cybersecurity and cyber hygiene.

Sadly, despite those good-faith efforts, consumers do not appear to have become safer. In fact, it is clear by now that most individuals have, in one way or another, been affected by a hack or data breach, whether on a personal computer or through a company they entrusted with their sensitive information.

To make matters worse, beyond today’s heightened cyber threat environment, a new class of hardware-based vulnerabilities found in almost every processor in the world has recently emerged, making it increasingly difficult for consumers to keep their data protected.

A new report released by the National Consumers League’s #DataInsecurity Project, “Data Insecurity: How One of the Worst Computer Defects Ever Sacrificed Security for Speed,” discusses the threat these processor flaws pose to consumers—both in terms of the security of their data and the performance of their computer after security patches are applied—and how they can protect themselves in the future.

The report details seven publicly disclosed exploits, known as “Spectre,” “Meltdown,” “Foreshadow,” “ZombieLoad,” “RIDL,” “Fallout,” and “SWAPGS,” that take advantage of flaws in CPUs made by AMD, ARM, and Intel. While Spectre affects chips from all three manufacturers, the six subsequent exploits primarily affect Intel processors.

In short, the exploits can allow a hacker to obtain unauthorized access to privileged information. And while patches have been released alongside each exploit, they have reduced computer speed and performance, by as much as 40 percent according to some reports. Worse, each patch is only good until the next exploit is discovered.

The flaws create a real dilemma for consumers: apply each temporary “fix” as new exploits are discovered and risk slowing down your device, or skip it and put your sensitive information at risk. Even consumers who apply patches remain at the mercy of companies that hold their sensitive data and face a similar dilemma, particularly as those companies weigh the expense of implementing the fixes, including the cost of adding computing power to make up for what each patch takes away.

The report concludes that the best protection for consumers is to buy a new computer whose CPU has hardware-level security fixes or is immune to some of the exploits. Unfortunately, this is not practical for many consumers, so they are advised to install software updates frequently. NCL is also strongly supporting data security bills, such as the Consumer Privacy Protection Act of 2017, which would require companies to take preventative steps to defend against cyberattacks and data breaches and to provide consumers with notice and appropriate protection when a breach occurs.

As we mark this year’s National Cybersecurity Awareness Month, we should certainly celebrate the progress we have made. We cannot lose sight, however, of the need to better secure our information and systems moving forward. Consumer awareness and smart data hygiene are one part of the answer; companies must do their part to secure our information as well.

If you are interested in learning more, you can find NCL’s latest report here.

NCL: Cars need to come with data deletion buttons to enhance consumer privacy protections

October 3, 2019

Media contact: National Consumers League – Carol McKay, carolm@nclnet.org, (412) 945-3242 or Taun Sterling, tauns@nclnet.org, (202) 207-2832

Washington, DC—The National Consumers League, America’s pioneering worker and consumer advocacy organization, today called on Congress to take steps to rein in car manufacturers’ data collection practices and ensure that consumers have a mechanism to easily delete personal information collected about them by their vehicles.

Thanks to a proliferation of sensors, cellular connectivity, and powerful in-car infotainment systems, modern cars can reportedly generate as much as 25 gigabytes of data per hour, and autonomous vehicles are projected to produce up to 4,000 gigabytes per day. In its new white paper, the consumer group examined the vast scope of personal information being collected about drivers by automobile companies to power a data engine that could be worth $750 billion by 2030.

“Every time a consumer gets in a car, whether it’s a vehicle she owns, rents, or rides in, huge amounts of personal data get shared with car companies with practically no oversight or consumer protections,” said NCL Executive Director Sally Greenberg. “We want to shine a light on car companies’ data practices and encourage Congress to create common-sense rules of the road for this growing marketplace.”

The NCL white paper examines several existing laws and proposed bills to offer legislators a framework for steps they can take to better protect the privacy and data security of the driving public. In particular, NCL is urging Congress to mandate that car manufacturers include easy-to-use data-deletion functionality in all new cars to help consumers take control of their in-car data.

“Consumers just want to get from point A to point B safely,” said Greenberg. “While the data generated by our cars can help fuel innovation in the auto industry, that shouldn’t come at the expense of our privacy. Consumers are looking to Congress to take the lead and ensure that car companies’ data collection practices have some sensible guardrails.”

Read NCL’s new white paper here. (pdf)

###

About the National Consumers League

The National Consumers League, founded in 1899, is America’s pioneer consumer organization. Our mission is to protect and promote social and economic justice for consumers and workers in the United States and abroad. For more information, visit www.nclnet.org.

Protecting information privacy: challenges and opportunities in federal legislation


By NCL Google Public Policy Fellow Pollyanna Turner-Ward

On September 11, 2019, policymakers, industry stakeholders, and consumer advocates gathered at The Brookings Institution to discuss the pressing question of how to protect information privacy through federal legislation. Representing the National Consumers League was Executive Director Sally Greenberg.

How did we get here?

To set the scene, panelists first discussed why there is consensus on the need for federal legislation to address privacy and data security. The Snowden revelations showed consumers how much of their data is out there, and they began to question whether companies could be trusted to keep their data safe from the government. More recently, in light of the Cambridge Analytica scandal and increasing instances of identity theft and fraud resulting from data breaches, consumers have begun to question whether companies themselves can be trusted with their data.

Businesses, meanwhile, worry that a lack of consumer trust will interfere with the adoption of digital products and services. For instance, when parents refuse to consent to the collection and use of data about their children’s academic performance, schools cannot personalize those children’s learning experience. By providing individuals with greater privacy protections, businesses hope to increase individual participation in the digital economy.

In response to consumer privacy concerns, a patchwork of state bills on privacy and data security is also popping up. Businesses claim to be overwhelmed by the prospect of complying with these differing regulatory schemes, especially in light of the EU’s General Data Protection Regulation (GDPR), which has already pushed many organizations to comply with privacy and data security rules. To support businesses and to regain U.S. privacy leadership, greater international interoperability is necessary.

What should federal legislation look like?

Each panelist set forth their idea of what federal legislation should aim to achieve. Intel has drafted a privacy bill that includes various protections but lacks a private right of action, that is, the ability to take wrongdoers to court if they violate privacy laws. If companies promise not to use your information in certain ways and then do so anyway, in violation of the law, you should have the right to take them to court. NCL’s Sally Greenberg directed audience members to the Public Interest Privacy Principles signed by thirty-four consumer advocacy and civil rights organizations. Advocating strong protections, strong enforcement, and preemption, and highlighting the importance of “baking data privacy into products and services,” she offered NCL’s vision of a strong, agile, and adaptive national standard.

Panelists drew comparisons between this approach and the EU’s GDPR but criticized that law’s time-consuming and resource-intensive nature. They agreed that U.S. legislation should avoid being too prescriptive about details: rather than requiring documentation of policies, practices, and data flow maps, it should focus on high-level issues.

Breaking down these issues according to consensus and complexity, Cameron F. Kerry listed covered information, de-identification, data security, state enforcement, accountability, and FTC authority as solvable issues. Implementation issues, he said, include notice and transparency and individual rights (access, portability, the right to object to processing, deletion, and nondiscrimination). However, Mr. Kerry noted that disagreement clouds a number of complex issues relating to algorithmic transparency, algorithmic fairness, and data processing limitations (use restrictions). Until consensus is reached in these areas, disagreements about preemption and a private right of action are unlikely to be resolved.

Notice and transparency

While notice and transparency are important aspects of a comprehensive approach to privacy and data security, it is difficult for consumers to process the volume of information contained in privacy policies. Consumers also often have little choice but to “agree” to services that are essential to everyday life. As such, legislators may wish to explore the extent to which a company may force an individual to waive privacy rights as a condition of service. Consent should play only a limited role, reserved for sensitive data uses, and companies should focus on designing user interfaces that enable meaningful consumer consent. Panelists criticized the California Consumer Privacy Act (CCPA) for its lack of detail and for putting the burden on individuals to protect themselves. It was agreed that federal standards should move beyond notice-and-consent and put the burden back on businesses.

De-identification 

One panelist called de-identification the “secret sauce” of privacy. Preserving the utility of data while removing identification puts the focus on data processing harms, and getting de-identification right matters for valuable research purposes. However, de-identification is often done poorly, and confusion lurks around pseudonymization, a technique that replaces personally identifiable fields within a data record with artificial identifiers. Because pseudonymized data remains identifiable, data security and privacy risks remain. Companies must be incentivized to de-identify data effectively, not to re-identify it, and to contractually restrict downstream users from doing so. To avoid conflating data security levels with pseudonymization levels, a universal and adaptable de-identification standard must be developed.
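To make the pseudonymization concept concrete, here is a minimal sketch in Python. The record, field names, and key handling are invented for illustration; it is a sketch of the general technique, not any particular company’s practice.

import hashlib
import hmac

# Assumption for this sketch: in practice the key would be generated,
# rotated, and stored separately from the data it protects.
SECRET_KEY = b"example-key-stored-separately"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable artificial token (keyed HMAC)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

# Invented record; "name" and "email" are treated as direct identifiers.
record = {"name": "Jane Doe", "email": "jane@example.com", "zip": "20002"}
IDENTIFIER_FIELDS = {"name", "email"}

pseudonymous = {
    field: pseudonymize(value) if field in IDENTIFIER_FIELDS else value
    for field, value in record.items()
}
print(pseudonymous)

Because the same input always yields the same token, pseudonymized records can still be linked across datasets, and anyone holding the key can recompute tokens for known identities. That is exactly why the panelists cautioned that pseudonymized data should not be treated as anonymous.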

Data security 

Because data security is critical to privacy, panelists agreed that it is the foundation upon which privacy legislation should be built. Panelists warned against an overly prescriptive approach towards data security but suggested that the Federal Trade Commission (FTC) should offer more guidance. “Reasonable” data security depends upon the nature and scope of data collection and use. This affords organizations flexibility when adopting measures that make sense in terms of information sensitivity, context, and risk of harm.

However, determining data security standards according to the risk of privacy harm is difficult because “risk of privacy harm” is an unsettled and controversial concept. It was also debated whether “information sensitivity” should be used to determine the reasonableness of data security standards. Public Knowledge argued that all data should be protected in the same way because the distinction between sensitive and non-sensitive data is increasingly questionable. When data is aggregated and sophisticated technologies such as machine learning are applied, each and every data point can lead back to an identifiable person.

While the use of off-the-shelf software should generally be considered reasonable, higher standards should apply to companies that are more aggressive in their data collection and use. Organizations must continually develop physical, technical, and legal safeguards, and extend them to third-party processors and service providers. To ensure their infrastructure is robust enough to secure their data, they should run tests and impact assessments and put resources toward data mapping.

Data processing limitations

In sectors ranging from education to healthcare, the use of data undoubtedly has the potential to help us solve many societal problems. But data use is pervasive, and new and unpredictably bad outcomes are also possible. Consumers want data to be used in ways that benefit them, not in ways that harm them, and they want that data protected. Yet information collection and sharing remain largely unbounded. If Congress wishes to move beyond a notice-and-consent model and put the burden back on organizations that handle data, it must confront the boundaries of how data should be collected, retained, used, and shared. Without limitations, the high value of data will continue to incentivize organizations to collect and retain data for its own sake, practices that raise cybersecurity and privacy risks to unforeseen levels.

Calling out data brokers, Intel’s David Hoffman stated that databases containing lists of rape victims are simply “unacceptable.” Transfer restrictions, however, are likely to be one of the hardest areas in which to reach consensus. Use restrictions, which govern what organizations can and cannot do with data at a granular level, may be approached by creating presumptively allowed and presumptively prohibited lists. Use and sharing could be presumptively allowed for responsible advertising, legal process and compliance, data security and safety, authentication, product recalls, research, and the fulfillment of product and service requests. Meanwhile, the use of data for eligibility determinations, fraud, stalking, or other unreasonable practices could be presumptively prohibited.

It is difficult, however, to determine the standards by which a particular data use should be “green-lighted” or “red-lighted.” To determine whether a use is related to the purpose for which a user originally shared the data, factors such as whether the use is primary or secondary, how far down the chain of vendors processing occurs, and whether the processor has a direct or indirect relationship with the data subject may be considered. The FTC has done work to articulate “unreasonable” data processing and sharing, and the Center for Democracy and Technology’s Consumer Bill of Rights emphasizes respect for context (user expectations) by laying out applicable factors such as consumer privacy risk and information sensitivity.

However, “context” is difficult to operationalize. One option may be to grant the FTC rulemaking authority to determine issues such as which data uses are per se unfair or which information is sensitive. The deception and unfairness standard has guided the FTC for decades, but panelists were wary of giving the agency a blank check to use an abusiveness standard against data abuses. Instead, the FTC could be given a clear set of instructions in the form of FTC guidance, a legislative preamble, or detailed provisions written into the legislation itself. If this approach is taken, it would be necessary to confront the difficult question of what harm the legislation should seek to address. Because privacy injury is neither clear nor easily quantifiable, it is difficult to agree on the appropriate harm standard. A specific, though not exhaustive, list of the types of injury resulting from data processing would give the harm standard substance, and algorithmic data processing ought to be confronted directly.

Because the purpose of data analysis is to draw distinctions, the privacy debate cannot be separated from the discrimination debate. Intent to engage in prohibited discrimination is difficult to prove, especially when proxies are used. For instance, rather than directly using a protected characteristic such as race to decide who sees advertisements for payday loans, an algorithm could use zip code or music taste as a proxy for race. To provide clarity and to promote algorithmic fairness, existing discrimination laws could be augmented by privacy legislation that defines unfair discrimination according to disparate impact on protected classes. Privacy legislation should ensure that data use does not contribute to prohibited discrimination by requiring risk assessments and outcome monitoring.
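As a hypothetical illustration of what outcome monitoring could look like, the short Python sketch below applies the “four-fifths” disparate impact screen drawn from U.S. employment guidance to invented audit numbers; the groups and figures are made up for the example, and this is one possible screen, not a legal test mandated by any bill discussed here.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group that received the favorable outcome."""
    return selected / total

# Invented audit data: favorable outcomes (e.g., seeing a mainstream loan
# offer rather than a payday loan ad) per demographic group.
rates = {
    "group_a": selection_rate(selected=480, total=1000),
    "group_b": selection_rate(selected=300, total=1000),
}

# Four-fifths rule: the lowest group's rate should be at least 80 percent
# of the highest group's rate.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: flag for human review.")

A ratio below 0.8 does not prove unlawful discrimination, but it is the kind of signal a mandated risk assessment could surface for review.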

To increase consumer trust and to give consumers recourse when they suspect they are victims of unfair discrimination, legislation should directly confront algorithmic transparency and the burden of proof. Consumers cannot be expected to understand the mechanisms that determine which advertisements they see or how automated decisions are made about them. At the same time, organizations should not be able to escape liability simply because consumers lack access to the data or algorithms needed to prove discrimination claims.

Enforcement

Panelists agreed that state attorneys general need to be able to enforce the law and that the FTC requires increased resources and enforcement powers. Because Congress cannot anticipate every possible scenario, it is appropriate to give the FTC narrow rulemaking authority, the power to fine first offenses, the ability to approve codes of conduct, and a mandate to clarify guidance on how to comply with the law on issues such as de-identification. The FTC needs vastly more resources to carry out this oversight and enforcement role; the jury is out on whether Congress will pony up.

Sally Greenberg described the importance of also including an option for private parties to bring class-action suits. However, panelists disagreed about whether individuals should be able to privately enforce their rights where the government lacks the resources or will to act. David Hoffman highlighted the evidentiary problems associated with proving privacy harms and argued that a uniform standard with strong protections would better serve the public.

Preemption of state laws 

A consistent federal standard was emphasized as a key factor driving industry’s support for a federal bill; for industry, a bill that does not preempt state law is a deal-breaker. Companies claim that complying with a patchwork of fifty different data breach notification standards is already hard today. It was suggested that states could be given a five-year window with no preemption, allowing them to adapt and innovate, after which the situation could be reviewed; or the reverse, preempt for five years and sunset the federal law. Both suggestions have merit, but how the questions of preemption and a private right of action will ultimately be answered remains to be seen.

Developing an approach towards consumer privacy and data security


By NCL Google Public Policy Fellow Pollyanna Sanderson

This blog post is the first of a series of blogs offering a consumer perspective on developing an approach towards consumer privacy and data security.

For more than 20 years, Congressional inaction on privacy and data security has coincided with a rise in data breaches affecting millions of consumers. In the absence of Congressional action, states and the executive branch have increasingly stepped in. A key part of the White House’s response is the National Telecommunications and Information Administration’s (NTIA) September Request for Comment (RFC).

While a “Request for Comment” sounds incredibly wonky, it is a key part of the process that informs the government’s approach to consumer privacy. The NTIA’s process gathers input from interested stakeholders on ways to advance consumer privacy while protecting prosperity and innovation. Stakeholder responses provide a glimpse into where consensus and disagreements lie among consumer and industry players on key issues. We have read through the comments and in this series of blogs are pleased to offer a consumer perspective.

This first blog focuses on a fundamental aspect of any proposed approach to privacy and data security: its scope. Reflecting the risks of big data classification and predictive analytics, one suggestion, from the Center for Digital Democracy (CDD), was to frame the issues according to data processing outputs; this would cover inferences, decisions, and other data uses that undermine individual control and privacy. Focusing instead on data inputs, many interested stakeholders agreed that privacy legislation must cover “personal information.”

The Center for Democracy and Technology noted that personal information is an evolving concept, the scope of which is “unsettled…as a matter of law, policy, and technology.” Various legal definitions exist at the state, federal, and international levels. The Federal Trade Commission’s (FTC) 2012 definition covers information capable of being associated with, or reasonably linked or linkable to, a consumer, household, or device. Subject to certain conditions, de-identified information is excluded from this definition. To address privacy concerns while enabling collection and use, many stakeholders agree that regulatory relief should be provided for effective de-identification techniques. This would incentivize the development and implementation of privacy-enhancing techniques and de-identification technologies such as differential privacy and encryption. Federal law should avoid classifying covered data in a binary way as personal or non-personal; an all-or-nothing approach requiring irreversible de-identification would set a difficult, perhaps impossible, standard.
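As a rough sketch of one such technology, the Python snippet below applies the classic Laplace mechanism from differential privacy to a counting query. The count and epsilon values are invented for the example; real deployments require far more care around sensitivity, privacy budgets, and composition.

import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query changes by at most 1 when one person's record is added
    # or removed, so the Laplace noise scale is 1 / epsilon.
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Invented example: how many records in a dataset share a sensitive attribute.
true_count = 127
for eps in (0.1, 1.0):
    print(f"epsilon={eps}: released count is roughly {private_count(true_count, eps):.1f}")

Smaller epsilon values add more noise and give stronger privacy, a graduated rather than all-or-nothing kind of protection, which is in the spirit of the non-binary classification of data argued for above.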

In an attempt to recognize that identifiability rests on a spectrum, the EU’s GDPR excludes anonymized information and introduces the concept of pseudonymized data. These concepts demand federal consideration, having been introduced into United States law via the California Consumer Privacy Act (CCPA). The law should clarify how it applies to aggregated, de-identified, pseudonymous, identifiable, and identified information. For data to be considered de-identified and subject to lower standards, it must not be linkable to an individual, the risk of re-identification must be minimal, the entity must publicly commit not to attempt re-identification, and effective legal, administrative, technical, and/or contractual controls must be applied to safeguard that commitment.

While de-identified and other anonymized data may be subject to lower privacy standards, they should not be removed from protection altogether. In its NTIA comment, the CDD highlights that third-party personal data, anonymized data, and other forms of non-personal data may be used to draw sensitive inferences and to build profiles for purposes ranging from persuading voters to targeting advertisements. Yet individual privacy rights can only be exercised after inferences or profiles have been applied at the individual level. Because profiles and inferences can be made without identifiability, this aspect of corporate data practice would largely escape accountability if de-identified and other anonymized data were not subject to standards of some kind.

This loophole must be closed. Personal information should be broadly defined to address risks of re-identification and to capture evolving business practices that undermine privacy. While the GDPR does not include inferred information in its definition of personal information, inspiration could be taken from the definition of personal information given by the CCPA, which includes inferred information drawn from personal information and used to create consumer profiles.

Our next blog will explore “developing an approach for handling privacy risks and harms.” In its Request for Comment, the NTIA established a risk- and outcome-based approach to consumer privacy as a high-level goal for federal action. However, within industry and society, there is a lack of consensus about what constitutes a privacy risk. Stay tuned for a deep dive into the key issues that arise.

The author completed her undergraduate degree in law at Queen Mary University of London and her Master of Laws at William & Mary. She has focused her career on privacy and data security.

Consumer group: Capital One breach highlights need for Congressional action on data security legislation

July 30, 2019

Media contact: National Consumers League – Carol McKay, carolm@nclnet.org, (412) 945-3242, or Taun Sterling, tauns@nclnet.org, (202) 207-2832

Washington, DC—Just one week after consumers received relief from the massive Equifax breach, another massive breach, this time at Capital One bank, is once again placing consumers at risk of identity theft.

In one of the largest financial breaches in history, more than 100 million Capital One accounts and 140,000 Social Security numbers were reportedly compromised. As in previous breaches, the incident appears to have stemmed from a third-party cloud hosting vendor that stored Capital One’s data.

The National Consumers League (NCL), the nation’s pioneering consumer and worker advocacy organization, is calling on Congress to immediately pass comprehensive privacy legislation and protect highly personal data.

“Consumers are sitting ducks if big banks like Capital One, giant hotel chains like Marriott, and credit scoring companies like Equifax don’t take the necessary steps to protect our data,” said John Breyault, NCL’s vice president of public policy, telecommunications, and fraud. “When companies like Capital One are sloppy in protecting consumers’ data, it allows hackers to steal consumer information, which ultimately fuels identity theft and other frauds against us.”

“More than five years after hackers compromised the personal information of nearly 110 million Target customers, criminals are still breaking through supposedly strong firewalls and stealing consumers’ personal data from companies. Any data security legislation must require that consumer data be protected, with strong fines and criminal penalties for companies that fail to do so,” said NCL Executive Director Sally Greenberg.

###

About the National Consumers League

The National Consumers League, founded in 1899, is America’s pioneer consumer organization. Our mission is to protect and promote social and economic justice for consumers and workers in the United States and abroad. For more information, visit www.nclnet.org.

Carpenter v. United States: Impacts on privacy legislation

The U.S. Supreme Court’s decision last week in Carpenter v. United States will shape, for years to come, the relationship consumers have with their wireless devices and the services they use every day. In a 5-4 decision, the Court held that the U.S. government performed a search when it obtained cell-site records, and that because it did so without a warrant, the search was unconstitutional, violating petitioner Timothy Carpenter’s Fourth Amendment rights and reversing the two lower court decisions against him.

In the case, the FBI had requested records as part of an investigation into several Detroit-area armed robberies, and those records included details about call dates, times, and approximate locations. Carpenter asked that the cell phone evidence be suppressed because it was obtained in a search without a warrant.   

You’re thinking, “And? I’m not accused of armed robbery,” but it’s bigger than Timothy Carpenter. The Carpenter decision affects all of us, and in essence redefines government searches in a digital age.

Think of your relationship with your cell phone. According to Pew, 95 percent of Americans now own one. The same study found that for one in five of us, our smartphone is our sole source of Internet service. We carry them to work, to school, to our homes, and to meet up with friends. They go with us to our meetings, appointments, and vacations. They are a key vector through which we’re understood. Part of that is an unprecedented ability to locate us. When 95 percent of us are moving and communicating with our phones, and when 20 percent of us are using them as our only personal Internet connection, government access to when and where we use cell phones becomes an inroad to very intimate surveillance.

The FBI obtained records defined by the Court as “personal location information maintained by a third party” under the Stored Communications Act (SCA). The SCA compels service providers to hand over records of electronically stored communications to the government, without a warrant, provided there is evidence that the information is relevant to an ongoing investigation. Last week’s decision sets a new standard for expectations of digital privacy at a time when consumers and the government are grappling with how to think about our lives online using documents drafted by the nation’s founders.

NCL has previously stated that consumer privacy is an integral part of the data economy, and we advocate for robust consumer protections in this space to encourage safe and secure use of online services. We applaud the Court’s decision and see it as an important step in the fight to safeguard consumers’ data in the United States and beyond.

Rebecca Kielty is spending the summer with John Breyault’s team, working on consumer privacy issues as NCL’s 2018 Google Public Policy Fellow. Rebecca received her B.A. from the University of South Florida Saint Petersburg and her M.A. from Georgetown University.