Perspective

How to Test New York’s Algorithmic Pricing Law

Stephanie T. Nguyen / Nov 25, 2025

Stephanie T. Nguyen is a Senior Fellow at the Georgetown Institute for Technology Law & Policy and the Vanderbilt Policy Accelerator, and former Chief Technologist at the Federal Trade Commission.

A New York law that took effect this month, the Algorithmic Pricing Disclosure Act (APDA), mandates that any business employing algorithmic pricing clearly disclose to consumers: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” The law also prohibits businesses from setting different prices based on people’s race, gender, or other protected characteristics.

There are limitations, of course. While APDA mandates transparency on whether a price was set by an algorithm, most people still won’t know how surveillance pricing affects them. The law does not prohibit surveillance pricing or meaningfully explain how algorithms influence prices, but it does offer some new ways for researchers to examine these practices.

For instance, APDA does not restrict the collection, sharing, or selling of personal data, nor does it require companies to explain how such data is used to influence prices. Even if this information were provided, the “notice-and-choice” regime has failed: companies rely on pop-up disclosures that are more than merely incomprehensible; they are impossible for consumers to meaningfully digest. Knowing that prices are being manipulated offers limited value if consumers lack practical ways to respond. Moreover, the disclosure will not tell consumers whether the price is higher or lower as a result of algorithmic pricing.

Despite these limits, APDA creates important avenues for further inquiry and investigation. By requiring companies to disclose when algorithms set prices, the law gives researchers, journalists, and investigators a new vantage point. As policymakers assess the effectiveness of the law, there may be additional transparency measures to consider, including requiring covered firms to disclose what data informs their price setting, the range of prices they have offered, and the average prices they have charged for their products over the past six months. Even with these points in mind, relying on observation of these new disclosures assumes companies actually follow the law, an assumption we should question rather than take for granted.

Already, there have been public reports highlighting the disclosures from companies like DoorDash, Uber, and Uber Eats. Timely, targeted research can use these disclosures as signals, helping researchers quickly build evidence about when and where algorithmic pricing occurs and how prices may differ across consumers. APDA’s impact will be shaped in part by how researchers, investigative journalists, and other watchdogs can turn its disclosures into tools for accountability. The questions below suggest where this work can start.

  1. Are different people seeing different prices? Do consumers in New York receive different prices, disclosed as set by algorithms, for the same product or service? For example, how might researchers compare known algorithm-driven prices offered across boroughs or zip codes to test whether location, income, or demographics influence price differences?
  2. What does exercising consumer data rights reveal? Deletion or correction rights apply where data-rights laws exist. For firms that comply with the law and disclose algorithmic pricing based on personal data, what can we learn about the information collected when consumers request their own data? Does algorithmic pricing change after consumers collect their data? How can data access requests uncover what data is collected and what profiles or inferences are made about people? For example, see the company data access requests pursued by The Washington Post with Starbucks and by Consumer Reports with Kroger.
  3. What data are used to influence prices — e.g., location, browsing history, purchase behavior, loyalty status, or inferred income? How do device identifiers, cookies, or app permissions feed into algorithmic pricing systems? Are inferred traits (e.g., “aptitudes,” “intelligence,” “likely parent”) being used to inform the prices set by algorithms?
  4. Do online, in-app and in-store prices set by algorithms differ across retailers? How do known prices set by algorithms shift across users, locations, and devices? Do in-app, web, or in-store prices differ, or have they changed for the same item? How do prices differ or change across grocery chains, pharmacies, convenience stores, or delivery services (e.g., DoorDash, Instacart, etc.)?
  5. Do loyalty program members vs. non-loyalty program members receive different prices set by algorithms? How do these prices change once someone joins a membership or rewards program? How do prices shift with purchase frequency or engagement in a loyalty program (e.g., clicks, swipes, or app usage)?
  6. How are companies complying with algorithmic pricing disclosures? What do the disclosures look like? Are these disclosures consistent or different? Where and how are disclosures presented (e.g., checkout page, product page, terms of service)? What types of surveys or focus groups could explore whether consumers interact with or see the disclosures, and if they do, how do they understand these disclosures?
  7. Which sectors (e.g., retail, travel, delivery apps, gig work, hospitality, event ticketing) are showing disclosures? Do disclosures appear differently or more frequently for digital services (such as streaming services or apps) versus tangible goods (such as groceries or retail)? E.g., comparison of website, mobile app, and in-store digital displays.

Experts have sounded the alarm on these pricing issues for years, and continue to do so. Building on that work, we need studies that operate at the speed of industry practices and track algorithmic pricing as it happens. Research can deliver findings and a body of evidence that guide timely interventions and actionable decision-making. These could include targeted experiments (field audits, secret shopping, data requests, public price tracking, user research, and investigative reporting) that surface early warning signs of harm and make the effects of algorithmic pricing more tangible.
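To make the public price tracking idea concrete, here is a minimal sketch of how researchers might compare recorded price observations across shopper profiles. Everything in it is hypothetical: the item name, the profile labels, and the prices are illustrative placeholders, not real data from any company, and the analysis shown is one simple approach, not a prescribed methodology.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical observations collected by secret shoppers or repeated
# manual checks: (item, shopper profile, observed price). Illustrative only.
observations = [
    ("grocery-delivery-fee", "new-account",    3.99),
    ("grocery-delivery-fee", "new-account",    3.99),
    ("grocery-delivery-fee", "loyalty-member", 2.49),
    ("grocery-delivery-fee", "frequent-buyer", 5.49),
    ("grocery-delivery-fee", "frequent-buyer", 5.99),
]

def price_spread_by_profile(obs):
    """Average the observed price per (item, profile), then report the
    spread (max minus min average) across profiles for each item."""
    by_key = defaultdict(list)
    for item, profile, price in obs:
        by_key[(item, profile)].append(price)
    averages = {key: mean(prices) for key, prices in by_key.items()}
    report = {}
    for item in {key[0] for key in averages}:
        per_profile = {p: avg for (i, p), avg in averages.items() if i == item}
        report[item] = {
            "per_profile": per_profile,
            "spread": round(max(per_profile.values()) - min(per_profile.values()), 2),
        }
    return report

print(price_spread_by_profile(observations))
```

A large spread for the same item across otherwise-similar profiles is not proof of unlawful pricing, but it is the kind of early signal that could prompt a more rigorous field audit or data access request.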

In this environment, community support is essential for such research: asking focused questions, designing sharp investigations, testing methods, and building the collective capacity to understand how consumers may be getting ripped off.

***

The author wishes to thank the following colleagues for their insightful feedback and contributions: Cat Mai, Ania Calderon, Becca Ricks, Tunika Onnekikami, Iretiolu Akinrinade, Erie Meyer, Damon McCoy, Mikey Dickerson, David Choffnes, Jonathan Mayer, Olivier Sylvain, Brian Shearer, Asad Ramzanali, Sam Levine, John Davisson, and the Public Technology Leadership Collaborative.

To stay informed and get involved as research like this unfolds, you can join an email list here.

