
December 2023 US Tech Policy Roundup

Rachel Lau, Kennedy Patlan / Jan 6, 2024

Rachel Lau and Kennedy Patlan work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC.

Man Controlling Trade, created by Michael Lantz for the Federal Trade Commission Building in Washington, D.C.

As 2023 came to an end, US agencies, policymakers, and advocates alike were all busy pushing efforts across the finish line on an array of tech policy topics, including children’s privacy, antitrust, AI, and surveillance.

On the agency side, the Federal Trade Commission (FTC) proposed revisions to its Children’s Online Privacy Protection Act (COPPA) rules to reflect the modern digital age, given that COPPA became law in 1998. Proposed revisions include requiring digital platforms to turn off behavioral advertising for children by default and making it illegal for companies to monetize children’s data without parental consent.

In related news, the National Academies of Sciences, Engineering, and Medicine published recommendations for Congress, industry, and the Department of Education. While the report’s authors concluded that “there is not enough evidence to say that social media causes changes in adolescent health at the population level,” they suggested it could offer both benefits and harms. Rather than advocating a “broad-stroke ban” on children’s use of social media, the report encourages stakeholders to build standards, develop more research, and increase education to protect children and teens who use social media.

Following a two-year process, the FTC and Department of Justice (DOJ) released finalized 2023 merger guidelines in December. The new guidelines account for modern market dynamics, including the rise of platform markets, and are intended to provide a current framework for robust antitrust enforcement, especially against corporate consolidation and merger activity. In related activity, the FTC won a court ruling that temporarily blocked health data analytics company IQVIA from acquiring Propel Media over concerns that the merger would give IQVIA a dominant position in pharmaceutical digital advertising. In December, the FTC also pressed its appeal of the court ruling that denied its request for an injunction blocking Microsoft’s acquisition of Activision Blizzard.

In 2024 presidential campaign news, during a December campaign rally, former President Donald Trump promised to reverse President Biden’s executive order on AI, claiming that the EO promoted censorship. On the industry side, Alphabet announced that it would limit how its AI-driven chatbot, Bard, responds to election questions, and the conservative social media platform Parler announced plans for a 2024 relaunch.

In other industry news, Microsoft announced a new tech-labor partnership with the AFL-CIO focused on AI and its impact on workers. Google also announced MedLM, a new suite of AI models that aims to aid healthcare clinicians and researchers with industry research and studies.

On the philanthropic side, Omidyar Network announced $30 million in funds that will be used to “bridge the gap between AI hype and hysteria.” Initial funds will support organizations including the AFL-CIO Technology Institute, the Collective Intelligence Project, and the Economic Security Project. This news followed Vice President Harris’s announcement of philanthropic commitments for AI issues in November 2023.

Read on to learn more about December developments in AI and tech surveillance reform.

AI Focus Continues in Congress, Agencies, Civil Society, Academia, & Industry

  • Congress:
    • Congress scrambled to pass bills this month as the holiday season and the end of 2023 drew near. The Senate and House passed, and President Joe Biden signed, the National Defense Authorization Act (NDAA, Public Law No. 118-31), which contained many AI-related measures, including provisions for developing AI technologies, expanding AI-related capabilities at the Department of Defense, and appointing a new Chief AI Officer to oversee AI within the department. Also, Sen. Ted Cruz (R-TX) blocked Sen. Josh Hawley’s (R-MO) effort to pass a bill by unanimous consent that would make tech companies liable for AI-generated content by amending Section 230 to exclude synthetic content from the provision’s protections. Cruz blocked the effort citing concerns that increased liability around AI-generated content would limit US competitiveness in the global market.
    • Also in December, a bipartisan group of House Science, Space, and Technology Committee leaders sent a letter to the National Institute of Standards and Technology (NIST) concerning the agency’s new Artificial Intelligence Safety Institute (AISI) and the need for “merit and transparency” in standing up the program’s processes. Science Committee leaders expressed concern that AISI might award funding to organizations outside NIST after reports that the agency planned to partner with the RAND Corporation. The letter concluded with a request for a briefing from NIST leadership. In December, it was also reported that the RAND Corporation played a large role in drafting President Biden’s executive order on AI.
  • Agencies:
    • The Department of Health and Human Services (HHS) released a finalized rule on transparency in the use of AI in clinical settings. Promulgated through the HHS Office of the National Coordinator for Health Information Technology (ONC), the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule contains algorithm transparency requirements, stronger information blocking requirements, and new reporting metrics for certified health IT, and it updates the data standards for the ONC Health IT Certification Program.
    • The Government Accountability Office (GAO) released a report on the federal government’s use of AI, which found that existing policies for responsibly acquiring and using private sector technology are not robust enough to support government plans for increased AI use. GAO also reported that 15 of the agencies it examined had provided “instances of incomplete and inaccurate data” in the AI use case inventories they submitted to the Office of Management and Budget (OMB). The report includes 35 recommendations to 19 agencies; ten of those agencies fully agreed with the recommendations.
  • Civil Society, Academia, and Industry:
    • Civil society, academia, and industry were hard at work in December in the aftermath of the October AI executive order and the draft OMB guidance memorandum released in November. The Electronic Privacy Information Center, the ACLU, the Brennan Center for Justice, the Center for Democracy and Technology, Data & Society, the Leadership Conference on Civil and Human Rights, the NAACP Legal Defense and Educational Fund, the Surveillance Resistance Lab, Upturn, and others published comments in response to the OMB draft guidance before the comment period closed on December 5. Many of these comments focused on strengthening the minimum practices established in the OMB draft, shared concerns about national security exceptions and other potential loopholes, suggested methods for an equity- and rights-respecting design and testing process, and emphasized the need for transparency, accountability, and public reporting measures.
    • The New York Times sued OpenAI and Microsoft for copyright infringement, arguing that the companies used the Times’ published works to train AI without permission. Meta and IBM, alongside dozens of private and public sector partners, launched the AI Alliance to advocate for open-source AI. The Responsible AI Institute released its Responsible AI Safety and Effectiveness (RAISE) benchmarks, intended to help companies develop responsible AI products. The Lawyers’ Committee for Civil Rights Under Law released a model Online Civil Rights Act, which seeks to mitigate and prevent civil rights harms in AI and related tech advancements by establishing a “broad, tech-neutral regulatory and governance regime.” Finally, 17 tech and climate advocacy groups sent a letter to President Biden on concerns about AI’s potential impacts on climate change.
    • Scholars at MIT published a series of AI policy briefs for effective AI governance. Stanford University also started a new project to support AI policymaking: the Stanford Emerging Technology Review (SETR) seeks to bring together technical and policy expertise. Stanford professor Daniel Ho also testified this month before the US House Subcommittee on Cybersecurity, Information Technology, and Government Innovation. Scott Babwah Brennen and Matt Perault at the Center on Technology Policy at the University of North Carolina at Chapel Hill released a policy framework on generative AI in political ads, suggesting that policies target electoral harms rather than specific technologies.
  • What We’re Reading:
    • Sen. John Thune (R-SD) published an op-ed in support of his bill, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA, S.3312), which seeks to establish an AI governance framework. NPR covered the EU’s finalization of “the world’s first comprehensive artificial intelligence rules.” David Evan Harris examined potential regulation for unsecured “open-source” AI, and Jessica Dheere wrote about shareholders’ influence on AI’s democracy impacts in Tech Policy Press. Also in Tech Policy Press, Anthony Barrett, Jessica Newman, Brandie Nonnecke, Dan Hendrycks, Evan Murphy, and Krystal Jackson analyzed the risk management needs of general-purpose AI systems. Trade association Chamber of Progress published an AI-generated analysis of AI-related congressional hearings.

2023 Ends with Advances in Surveillance and Data Privacy Reform

As public awareness of and concerns over data privacy practices continue to evolve, Congress, federal agencies, and major tech companies have faced mounting pressure to reform surveillance policies and strengthen safeguards around consumer data. The end of 2023 brought a series of significant updates, announcements, and efforts that will set the stage for further developments in 2024.

Organizations File Amicus Briefs for Supreme Court Tech Cases

In 2023, the Supreme Court agreed to hear several cases regarding the role of social media platforms in the US and, in particular, their impacts on free speech, privacy, and regulation. From Murthy v. Missouri to Moody v. NetChoice and NetChoice v. Paxton, the Court’s docket underscores the growing need for judicial clarity in a constantly evolving technology landscape. Toward the end of 2023, many organizations filed amicus briefs in these cases, which could have sweeping implications for First Amendment protections online and technology regulation writ large.

New AI Legislation

  • Eliminating Bias in Algorithmic Systems Act of 2023 (S.3478, sponsored by Sens. Ed Markey (D-MA), Cory Booker (D-NJ), Amy Klobuchar (D-MN), Ben Ray Luján (D-NM), Jeff Merkley (D-OR), Elizabeth Warren (D-MA), Peter Welch (D-VT), and Ron Wyden (D-OR)): This bill would mandate that any government agency using, funding, or overseeing algorithms create “an office of civil rights focused on bias, discrimination, and other harms of algorithms.”
  • AI Foundation Model Transparency Act of 2023 (H.R.6881, sponsored by Reps. Donald Beyer (D-VA) and Anna Eshoo (D-CA)): This bill would require the FTC to create disclosure standards for training data and algorithms used in AI foundation models, including on copyright infringement, inaccurate information, and biased outputs.
  • Financial Artificial Intelligence Risk Reduction (FAIRR) Act (S.3554, sponsored by Sens. Mark Warner (D-VA) and John Kennedy (R-LA)): This bill would assign various duties relating to AI in the financial industry to the Financial Stability Oversight Council.
  • Farm Tech Act (H.R.6806, sponsored by Reps. Randy Feenstra (R-IA), David Valadao (R-CA), Eric Sorensen (D-IL), and Josh Harder (D-CA)): This bill would create a certification program for agriculture-related AI software.
  • Artificial Intelligence Literacy Act of 2023 (H.R.6791, sponsored by Reps. Lisa Blunt Rochester (D-DE) and Larry Bucshon (R-IN)): This bill would amend the Digital Equity Act of 2021 to incorporate AI literacy, including adding AI to the Digital Equity Competitive Grant Program and creating new grants aimed at increasing AI education.

Other New Tech Legislation

  • FISA Reform and Reauthorization Act of 2023 (H.R.6611, sponsored by Reps. Michael Turner (R-OH) and James Himes (D-CT)): This bill would reauthorize Section 702 of FISA while reducing the number of FBI personnel with the power to approve US person searches, establishing additional penalties for leaking FISA-derived information, and banning searches designed to suppress political opinions. The bill was unanimously approved and reported by the House Intelligence Committee and may be considered by the full House, but a vote has not yet been announced.
  • Protect Liberty and End Warrantless Surveillance Act (PLEWSA) (H.R.6570, sponsored by Reps. Andy Biggs (R-AZ), Jerrold Nadler (D-NY), Jim Jordan (R-OH), Pramila Jayapal (D-WA), Warren Davidson (R-OH), Sara Jacobs (D-CA), and Russell Fry (R-SC)): This bill would require a warrant for all US person searches, impose reforms upon the Foreign Intelligence Surveillance Court, and stop the federal government from purchasing Americans’ data from tech companies without a warrant. It creates limits on surveillance for foreign intelligence, prohibits reverse targeting of US persons, creates requirements for disclosure of relevant information in FISA applications, and increases reporting requirements with annual and quarterly reports. PLEWSA was reported by the House Judiciary Committee 35 to 2 and may be considered by the full House, but a vote has not yet been announced.
  • Protecting Americans from Unauthorized Surveillance Act (H.R.6577, sponsored by Reps. Ted Lieu (D-CA) and Ken Buck (R-CO)): This bill would “require the involvement of third-party technical experts in every court-ordered decision to surveil a US citizen” by mandating that the Foreign Intelligence Surveillance Court (FISC) utilize a technical expert every time it evaluates a FISA application, not just in particularly complex cases.
  • Stop Terrorism and Illicit Finance Location Exploitation (STIFLE) Act (H.R.6605, sponsored by Reps. Zachary Nunn (R-IA), Joyce Beatty (D-OH), and Michael Lawler (R-NY)): This bill would direct the Secretary of the Treasury to establish a group of technology, national security, and law enforcement experts to study location obfuscation technologies (e.g., VPNs and proxy servers) and their uses in money laundering, sanctions evasion, and terrorism financing. The bill would direct this group to publish a report within one year of the bill’s passage detailing a plan to design and implement more advanced location determination technology to combat these illicit uses.
  • Protecting Military Servicemembers' Data Act of 2023 (H.R.6573/S.1029, sponsored by Rep. Pat Fallon (R-TX) and 22 others in the House and Sens. Bill Cassidy (R-LA), Elizabeth Warren (D-MA), and Marco Rubio (R-FL) in the Senate): This bill would ban data brokers from “selling, reselling, trading, licensing, or providing for consideration lists of US military service members to adversarial nations.”

Public Opinion on AI Topics

Canva and Morning Consult surveyed 1,006 US educators from August 6-11, 2023 on the impact of AI on education. They found that:

  • 59 percent of teachers agree that AI has given their students more ways to be creative.
  • 60 percent agree it has helped educators think of ideas to make students more productive.
  • 72 percent believe AI can help with language learning.
  • 67 percent agree it could help make education more accessible for students with different learning needs.

In a poll published in December 2023, YouGov and the Brigham Young University Center for the Study of Elections and Democracy surveyed 3,044 respondents from August 3-15, 2023 on a variety of topics, including technology and social media. They found that:

  • 26 percent of parents who had concerns about online predators “placed contact restrictions on their children.”
  • 28 percent of parents who “worried about inappropriate content online placed content restrictions on their children.”
  • 28 percent favored suing social media companies for harms their children face on platforms.
  • About 70 percent want some kind of government intervention on platforms to protect children.

EY used a third-party polling entity to survey 1,000 Americans who work office jobs about AI between October 5-16, 2023. They found that:

  • 75 percent of employees are worried about AI making jobs obsolete in the future.
  • 67 percent are afraid of being passed up for promotions due to lack of knowledge about using AI.
  • 66 percent are concerned about falling behind if they don’t use AI in their work.
  • 63 percent of Gen Z report using AI at work, compared to 74 percent of Millennials and 70 percent of Gen X.
  • 73 percent are worried about their organization not offering education or training on AI.
  • 77 percent “would be more comfortable using AI at work if employees from all levels were involved in the adoption process.”

UserTesting commissioned OnePoll to survey 2,000 Americans from October 13 - 17, 2023 on their use of the internet and AI for medical purposes. They found that:

  • More Americans consult healthcare websites (53 percent) and social media sites (46 percent) than their personal doctor (44 percent) when looking for health information.
  • 73 percent “believe they have a better understanding of their personal health than their doctor.”
  • 53 percent “have listed their symptoms to a large language model (LLM) like ChatGPT.”
  • Respondents consult the internet or ChatGPT instead of their doctors because they lack understanding of their health insurance (57 percent), feel embarrassed about their experience (51 percent), or want a second opinion (45 percent).
  • Respondents would trust AI to help pharmacies fill prescriptions (47 percent), schedule doctor appointments (52 percent), and recommend treatments (53 percent).

A Politico / Morning Consult poll of 1,005 registered voters from December 15-19, 2023 on AI found that:

  • Respondents were slightly more positive (43 percent) than negative (39 percent) on how they expect AI to influence their lives, with 18 percent expecting no influence.
  • Respondents were evenly split (50 percent very or somewhat concerned, 50 percent not concerned) on whether they were concerned about losing their job to AI in the next five years.

A CNBC / SurveyMonkey poll of 7,776 American workers from December 4-8, 2023 asked about AI in the workforce. They found that:

  • 29 percent of workers have used AI at work, including 41 percent of Asian workers, 38 percent of Black workers, 36 percent of Hispanic workers, and 23 percent of white workers.
  • 72 percent of workers who have used AI say it made them more productive.
  • 42 percent of workers worry about AI’s impact on their job. Those who use AI at work are more likely (60 percent) than those who do not (35 percent) to worry about AI’s impact on their jobs.

Published in December 2023, a Harris Poll conducted for the American Staffing Association’s Workforce Monitor asked 2,037 American adults about AI from June 20-22, 2023. It found that:

  • 49 percent of respondents believe that AI recruiting tools have greater bias than people.
  • 39 percent of current job seekers used AI tools when applying to jobs.

Public Opinion on Other Tech Topics

Working with Ipsos, the Pew Research Center polled 1,453 US teens between September 26 and October 23, 2023 on their social media use. They found that:

  • 93 percent of teens use YouTube, the most widely used platform in the survey. 63 percent of teens use TikTok, 60 percent use Snapchat, and 59 percent use Instagram.
  • Facebook use among teens dropped from 71 percent in 2014 and 2015 to only 33 percent today.
  • A majority of teens visit YouTube (71 percent), TikTok (58 percent), or Snapchat (51 percent) daily.
  • Teen girls are more likely than teen boys to use Instagram, BeReal, TikTok, Snapchat, and Facebook while teen boys are more likely to use Discord, Twitch, Reddit, and YouTube.

A white paper by Tiffany Johnson, Daniela Molta, and Evan Shapiro analyzed survey data from Publishers Clearing House (PCH) Consumer Insights, which asked 44,985 American adults over the age of 25 about data privacy in Q2 2023. They found that:

  • 38 percent of respondents said they “wouldn’t ever want to share [their] data.”
  • 86 percent of respondents are concerned about their data security, ranking above worries about the current state of the economy (85 percent) and sexual abuse of women (84 percent).
  • 87 percent of respondents thought that consumers should be responsible for their data, with fewer respondents placing responsibility on governments (49 percent) and social media platforms (48 percent).
  • 64 percent of respondents believe “both government and businesses should be responsible for data privacy and security.”
  • 58 percent of respondents are “willing to stop interacting with companies who have a bad reputation around data.”

We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart and Rachel Lau with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
Kennedy Patlan
Kennedy Patlan is a Project Manager at Freedman Consulting, LLC, where she assists with strategic development, project management, and research. Her work covers technology policy, health advocacy, and public-private partnerships.
