The Goal of a National Privacy Law in the United States

Joseph Jerome / Jun 27, 2024

An illustration of a US flag draped over a security camera. Shutterstock

The fight for a comprehensive federal privacy framework in the US can sometimes be characterized as an exercise in losing the forest for the trees. A third version of the American Privacy Rights Act (APRA) was recently released, and commentators have been quick to analyze what’s been changed and outright removed from the draft bill. Tracking substantive changes to legislative text that is moving is important, but as we debate the appropriate scope of preemption or private rights of action, it is essential not to lose sight of what the goals of a privacy law like APRA seem to be.

A lack of general understanding or consensus about why privacy law matters to different stakeholders bedevils both APRA and the larger comprehensive privacy project. In its own summary of the latest draft of APRA, R Street Institute laments that comprehensive support for federal action “unravels when it gets to the substance.” Politics and polarization are at play here, but I believe there is simply not nearly enough agreement about what a national US privacy law should even do.

This post was written in advance of a scheduled markup of APRA that was surprisingly canceled even as people were in the hearing room. News of the cancellation came amidst calls by privacy and civil rights organizations to delay APRA’s markup, unified opposition by a coalition of industry groups led by the Chamber of Commerce, and claims by advertisers that APRA would “eviscerate the modern advertising industry.” Politically, the bill was facing intraparty gridlock amongst Republican leaders. This vast and divergent disagreement about APRA echoes a similar disagreement about what privacy law is supposed to do.

Without a clearer understanding of what components of a privacy proposal matter to whom, we are stuck. Even if one has little interest in the machinations of APRA or privacy law generally, this paralysis has ramifications for larger debates across tech policy. It is a problem that different stakeholders – lawmakers, regulated industry, civil society, and academia – are prioritizing different and potentially conflicting goals, and by my count, APRA has at least ten such different goals.

1. Sett(l)ing defaults for targeted advertising.

Advertising is either the lifeblood of the modern internet or its original sin, and for many, privacy law is synonymous with putting limits on targeted advertising. If you participated in the W3C’s “Do Not Track” wars or were one of the early vanguard of privacy professionals, the primary ideological divide when it comes to online privacy is how online advertising should be treated. As the IAPP’s Cobun Zweifel-Keegan more recently suggested, “adtech is built on a privacy fault line.”

This is a bitter divide. Broad-based coalitions have called for bans on “surveillance advertising” at the same time as the Interactive Advertising Bureau, an industry group, has been mobilizing small businesses to come out against extreme “privacy” legislation like APRA. Other marketing groups have cautioned against the unintended consequences of privacy laws on noncontroversial advertising practices. Ultimately, a federal privacy law needs to settle a number of policy questions about targeted advertising. But thirty years after the first banner ad came online, APRA reflects a continuing lack of consensus about the impacts of targeted advertising on privacy and what to do about it.

Should targeted ads be opt-in or opt-out? Should certain data be excluded from ad profiles, or should we shut down targeted advertising altogether? When APRA was first announced, Consumer Reports tech policy director Justin Brookman wrote that the very first thing he does with any privacy bill is look at how it treats targeted ads, and he concluded that it was difficult to assess how APRA addressed targeted advertising. This difficulty is likely because major political actors have very different goals with respect to advertising.

2. Updating civil rights laws to address digital discrimination.

The introduction of digital civil rights to the privacy debate demonstrates how far the conversation has come from being only about targeted advertising. There has been a remarkable shift in acknowledging that illegal discrimination can be amplified by unchecked data processing, which transformed discrimination into a data privacy question. As the Leadership Conference on Civil and Human Rights recently declared, “privacy and civil rights have always been inseparable. We cannot have one without the other.”

The most recent version of APRA removes many of its key provisions governing civil rights, algorithmic fairness, and user rights with respect to consequential decisions, but for some, digital civil rights is the most important part of any privacy framework. The Lawyers’ Committee for Civil Rights Under Law’s David Brody recently explained to Tech Policy Press that privacy rules were “the foundational area of tech and civil rights policy for this generation.”

Civil rights law is a huge and mature field, but there does seem to be general acknowledgement that existing federal anti-discrimination laws need reinforcement to address what happens when humans aren’t in the loop. It is understandable why certain stakeholders would want to avoid addressing this issue within the contours of a privacy bill, but removing civil rights provisions appears to have deflated much of the enthusiasm for APRA within civil society.

3. Establishing baseline rules for artificial intelligence and algorithms.

The removal of the civil rights and algorithmic governance provisions from APRA is also in tension with the view in some corners that federal privacy law might serve as a foundation for regulating artificial intelligence. Some stakeholders have explicitly suggested that establishing privacy rules is a necessary precursor to drafting AI regulations.

There is, however, an ongoing debate on the extent to which AI is a “privacy” issue as it cuts across so many other major tech policy questions. That presents a logical reason to avoid addressing the issue within the confines of APRA, but on the other hand, it is not clear what other legislative vehicles would offer congressional lawmakers an opportunity to regulate AI at present. If your priority is placing guardrails on AI, your best near-term chance at accomplishing this is via a privacy framework.

4. Putting the screws to Big Tech.

Another foundational question is who we want privacy rules to apply to. Viviane Reding, an architect behind the General Data Protection Regulation (GDPR), recently stated that the goal of European privacy rules was to go after Big Tech – not the local butcher or football club. This is an interesting admission, because stakeholders are often not clear about whose privacy violations they intend to stop.

For American lawmakers, APRA has been called necessary to stop Americans from being exploited by “Big Tech and shadowy data brokers.” Thus, APRA includes specific requirements for so-called “large data holders,” “data brokers,” and “covered high-impact social media companies.” It also attempts to minimize impacts on small- and medium-sized businesses.

While it is politically expedient to exclude small businesses from complicated privacy rules, lawmakers may be inviting mischief and gamesmanship by tying privacy protections to revenue targets or number of customers. Most stakeholders seem to want a privacy law that stops Facebook from engaging in the next Cambridge Analytica, but it is not clear if the same is true for the sorts of facial recognition activities undertaken by Clearview AI. Further, the rhetorical drumbeat against Big Tech is undermined when most businesses, from car companies to major retailers, want to fashion themselves as data-hungry tech companies anyway.

5. Adequately and sufficiently policing bad behavior.

“A privacy law is only as strong as how it’s enforced” is a saying among privacy advocates, and the enforcement provisions of a privacy framework are often a matter of significant debate and are vocally contested. Too often, this debate breaks down into support for or opposition to a private right of action, which is the ability of an individual to sue in court for violations of their privacy rights, but the enforcement debate is much larger than this.

APRA attempts to compromise on this issue in a variety of ways, but it is important to acknowledge that not all stakeholders care about the enforceability of their privacy rules. What often goes unsaid is that companies would prefer less oversight. After all, companies follow the law and if a law says not to do something, companies will not do that thing. Lawmakers, too, may view a piece of legislation as a messaging bill or an attempt to establish an aspirational baseline. Regulators, in turn, may choose not to enforce every provision in a law. The Communications Act, for example, permits the Federal Communications Commission (FCC) to forbear from enforcing certain legal provisions, but regulators may also take a conservative approach to interpreting their enforcement authorities.

6. Providing business certainty and eliminating regulatory patchworks.

The growth in corporate support for national privacy rules over the past decade has much to do with two geopolitical facts: (1) global businesses cannot avoid data protection laws and (2) states are on the march, even if the resulting laws do not do much. I bemoan the use of the term “privacy patchwork” to describe different regional legal rules, but businesses, compliance professionals, and even regulators may sincerely desire to deal only with one law.

There are stakeholders in the privacy debate that do not care about the underlying substance of a privacy law – and may well not care one iota about protecting people’s privacy – but they can accept a single rule, even a strict one, so long as it is clear.

7. Protecting users’ privacy.

This may seem like an obvious goal, but my points about enforcement and business certainty also illustrate that privacy compliance need not have anything to do with protecting people’s privacy. University of California Irvine School of Law professor Ari Waldman has put forward a blistering critique of the privacy profession’s “complicity” in undermining privacy, and one of the animating impulses of frameworks like APRA is to move away from notices, user controls, and transparency toward an alternative approach like strict data minimization that may better achieve privacy dividends.

8. Protecting kids. Full stop.

Privacy is often wrapped into the larger ongoing discussion about how to protect children from the ills of social media, smartphones, and AI. Just look at contrasting statements about the Surgeon General’s recent call for a social media “warning label” from Common Sense Media and the Family Online Safety Institute. Common Sense Media supports a warning label but suggests it is no substitute for updating children’s privacy protections; the Family Online Safety Institute criticizes the idea and instead calls on the Surgeon General to add “his voice to the bipartisan call for a federal privacy bill upon which online safety bills can be built.”

The inclusion of kid-specific provisions in APRA has been both a point of debate and necessary to secure additional support from lawmakers. There are many privacy stakeholders that view children as either an especially vulnerable population or a potential cudgel by which to achieve larger privacy protections, but elevating the goal of children’s privacy protections does come at the cost of some of the other goals I’ve highlighted: it does not advance digital civil rights, it does not provide wider business certainty, and it is unclear whether stronger online protections for kids even hurt Big Tech.

9. Limiting use and misuse of sensitive information.

Most privacy frameworks divide data into different buckets based on perceived risk, sensitivity, or need for protections, though privacy advocates have begun to question this sensitive / non-sensitive distinction. However, it remains an important definitional component in APRA, and lists of sensitive information can provide useful insight into larger questions about disfavored business models, important constituencies, and unrelated political concerns. Thus, APRA’s list of sensitive data reflects worries about location surveillance, protects military service members, and aims to address mental privacy before brain-computer interfaces and biochips run rampant.

10. Proving Congress can legislate.

Since the passage of the GDPR eight years ago, major political and economic powers including Brazil, Canada, China, India, Japan, New Zealand, South Korea, and Thailand have put in place data protection laws that are inspired at least in part by the GDPR. Over 130 countries around the world now have privacy rules. In one short decade, the US has become a global outlier. That is not the best reason to pass a privacy law, but it does reflect one of the major underlying goals animating discussions around APRA.


My ordering of privacy goals is completely arbitrary, but that is my point. I wrote this list in a way that I believe flows appropriately, but depending upon my priorities, these ten issues could be stack-ranked in so many different ways.

Prioritization is a necessary part of the political process. Any piece of legislation will have political winners and losers, but as APRA faces a markup this week, it is my hope that stakeholders across the policy and privacy spectrum take care to understand what their friends’ and foes’ goals are.

This piece was updated on the news that a committee markup for the American Privacy Rights Act (APRA) was canceled at the last moment.


Joseph Jerome
Joseph Jerome is Visiting Assistant Professor at the University of Tampa and a Tech & Public Policy Visiting Fellow at Georgetown's McCourt School of Public Policy. He was previously a policy manager at Meta Reality Labs. Before that, he worked on state advocacy efforts at the Center for Democracy &...