
February 2024 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino / Mar 1, 2024

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Rohan Tapiawala, a Freedman Consulting Phillip Bevington policy & research intern, also contributed to this article.

Mark Zuckerberg, CEO of Meta Platforms Inc., center right, addresses the audience, including parents of children injured or lost in events involving social media, during a Senate Judiciary Committee hearing in Washington, DC, US, on Wednesday, Jan. 31, 2024. Photographer: Kent Nishimura/Bloomberg via Getty Images

February may be the shortest month of the year, even in a leap year, but there was no shortage of tech policy developments across the federal government, industry, and civil society this month, from the launch of bipartisan task forces to new agency regulations and progress on key tech legislation. To cover the wide range of activity across topics and venues, a new Tech TidBits & Bytes section in this monthly roundup will provide a snapshot of need-to-know happenings in Congress, agencies, courts, civil society, and industry. We have also added a section on new opportunities for relevant public comment and RFIs in response to the Biden administration’s AI Executive Order, issued last October.

Read on to also learn more about February developments in kids online privacy regulation, efforts to regulate and moderate political generative AI content, the Supreme Court oral arguments on major content moderation cases Moody v. NetChoice and NetChoice v. Paxton, the back and forth on Section 702 reform, and updates on new legislation and public opinion research on tech policy issues.

KOSA Returns with Filibuster-Proof Majority

  • Summary: Kids online safety continued to be a hot topic in February following a major Senate Judiciary Committee hearing on the last day of January. Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) unveiled a new version of the Kids Online Safety Act (KOSA, S. 1409), which has the support of more than 60 Senate co-sponsors, enough to pass the bill even in the face of a filibuster. Although there was no floor vote scheduled or House companion bill for KOSA at the time of publishing, Politico reported that Sen. Lindsey Graham (R-SC) planned to bring multiple kids online safety bills to the Senate floor in early March through unanimous consent, an expedited voting process. The Senate bill would require that companies “exercise reasonable care” to protect kids using their products and enable the most protective privacy and safety settings for kids by default. The new version of the bill “includes a specific definition of a ‘design feature’ as something that will encourage minors to spend more time and attention on the platform, including infinite scrolling, notifications, and rewards for staying online.” It also removes a previously included provision that allowed state attorneys general to enforce the duty of care, leaving enforcement largely up to the Federal Trade Commission (FTC).
  • Stakeholder Response: The sign-on of key Senate stakeholders to the bill indicated a major step forward, with Senate Majority Leader Chuck Schumer (D-NY) committing to “working on a bipartisan basis with [the bill sponsors] to advance this bill in the Senate.” A number of civil society stakeholders also expressed support for the revised bill: a coalition of parents whose children died in connection with social media harms, in partnership with civil society groups like Mental Health America and the Tech Oversight Project, published a letter urging Schumer to bring KOSA to a vote. A group of seven LGBTQ+ organizations also published a letter in support of KOSA, writing that the “considerable changes… significantly mitigate the risk of [KOSA] being misused to suppress LGBTQ+ resources.” The Young People’s Alliance, #HalfTheStory, and Encode Justice also urged Congress to pass KOSA. Other civil society organizations, however, continued to oppose the bill: the Electronic Frontier Foundation and Fight for the Future both argued that KOSA could still be used to censor controversial topics like LGBTQ+ and abortion issues.
  • What We’re Reading: The National Law Review published a historical review of legislative, congressional, and executive actions intended to protect children online. R Street’s Steven Ward, a Resident Privacy and Security Fellow on the Cybersecurity and Emerging Threats team, wrote an article on how children’s online safety intersects with online data security and privacy problems. The Washington Post discussed different considerations and tools currently accessible to parents seeking to protect their kids online.

FTC, FCC Take Action on Deepfakes

  • Summary: Multiple federal agencies and stakeholders took steps this month to strengthen regulations combating the use of AI-generated impersonations. Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology, reported that the Biden Administration is urging companies and Congress to address political deepfakes and suggested that they consider “exploring the possibility of watermarking computer generated content.” These remarks followed a White House convening with key leaders from federal agencies and industry groups that discussed AI-generated voice cloning and practical solutions to protect American citizens from harm. Concurrent with the White House’s attention on deepfakes, the Federal Communications Commission (FCC) and FTC independently took steps to regulate the issue. On February 8, the FCC voted unanimously to make the use of AI to generate a voice in robocalls illegal by extending “anti-robocall rules to cover unsolicited AI deepfake calls.” The FCC’s ruling provides state attorneys general with new tools to pursue bad actors that seek to use AI-generated robocalls to dissuade voters and undermine democracy. A week later, the FTC announced a proposal to expand a new rule that protects governments and businesses from impersonation to also cover individuals. The proposed rule would make it illegal for businesses to create “images, videos, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation,” including AI-generated deepfakes.
  • Stakeholder Response: Amid growing pressure from regulators, AI researchers, and advocacy organizations to address fake and misleading election content, a group of leading AI companies, including Google, Microsoft, Meta, OpenAI, Adobe, and TikTok, signed an agreement to address political deepfakes. Under the agreement, the companies will develop tools to identify and label AI-generated images, videos, and audio recordings designed to deceive voters. The agreement does not ban political deepfakes, but it seeks to mitigate potential risks to the electoral process posed by some AI-generated content. Additionally, IBM published a policy paper outlining three key priorities for policymakers to mitigate the harms of deepfakes: protecting elections, protecting creators, and protecting privacy. Within each priority, IBM also spotlighted its support for legislative efforts to address deepfakes, including the Protect Elections from Deceptive AI Act (S.2770), the NO FAKES Act, and the Preventing Deepfakes of Intimate Images Act (H.R. 3106).
  • In civil society, hundreds of AI experts and leaders signed an open letter urging lawmakers to regulate deepfakes. The letter called for the full criminalization of deepfake child pornography even when fictional children are depicted, establishing criminal penalties for those who create or knowingly circulate harmful deepfakes, and requiring software developers and distributors to prevent their products from being used to create deepfakes, with liability requirements in place if their preventative measures are easily avoided. Notable signatories included Yoshua Bengio, often credited as the “AI godfather;” Frances Haugen, Facebook whistleblower and tech accountability advocate; Andrew Yang, former 2020 presidential candidate; and multiple researchers from Google DeepMind and OpenAI, among others.
  • Congress also ramped up the pressure to rein in deepfakes. Sen. Jeanne Shaheen (D-NH) sent a letter to Attorney General Merrick Garland and Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, urging them to address the election threats posed by AI-generated deepfakes. In the letter, Shaheen called on the Justice Department (DOJ) to collaborate with various agencies to address disinformation campaigns from foreign adversaries and to develop “best practices and novel techniques to track, investigate and disrupt the use of AI technologies to spread misinformation and undermine free and fair elections.”
  • What We’re Reading: Public Citizen published a tracker on state legislation to regulate AI-generated deepfakes designed to undermine the electoral process. The AI Democracy Projects, a collaboration between Proof News and the Science, Technology, and Social Values Lab at the Institute for Advanced Study, released a research report that found that leading AI models often shared inaccurate answers about election information. The Washington Post highlighted a New York University report (of which Justin Hendrix, editor of Tech Policy Press, was a coauthor) that found that despite AI deepfakes driving political disinformation discourse, the prevalence and spread of disinformation on social media remains “the biggest digital threat” to the 2024 elections. In a recent Politico Tech podcast, Emily Slifer, policy director at Thorn, warned that policymakers and tech companies must act quickly to combat the rising concerns over AI-generated online child sexual abuse material. In an op-ed in the Federal Times, Ali Noorani, program director of US democracy at the William and Flora Hewlett Foundation, and Jennifer Pahlka, senior fellow at the Niskanen Center and the Federation of American Scientists, discussed how the federal government must work with the technology community, nonprofits, and philanthropies to bolster its digital capacity and strengthen its internal and external capabilities to safeguard the responsible use of AI.

The Supreme Court Hears Arguments on Free Speech and Content Moderation in NetChoice v. Paxton and Moody v. NetChoice

  • Summary: In hearing two related cases this month, NetChoice v. Paxton and Moody v. NetChoice, the Supreme Court considered key questions on the role of state regulation and free speech in social media platforms’ content moderation. The two cases considered, respectively, Texas and Florida laws that attempted to impose restrictions on social media platforms’ ability to moderate content. The justices and attorneys debated whether platforms are more like common carriers (e.g., telephone lines, internet providers) or newspapers. This question underpins the relevant jurisprudence and precedent that might be used to determine whether social media platforms exercise editorial discretion and have a First Amendment right to content moderation as a form of free speech, or whether the state has a right to regulate that content moderation to ensure equal access to a platform as a communications utility. Also potentially implicated in the outcome of the cases is the debate around Section 230, which shields platforms from liability over their publication of third-party content as well as their moderation decisions. After significantly longer oral arguments than expected, the justices seemed unlikely to support either the Texas or Florida law and generally questioned the power of state governments to regulate platforms’ content moderation. However, it remains to be seen how broadly the justices will resolve the two cases and what impact their decisions might have on broader content moderation and platform regulation debates.
  • Stakeholder Response: Both NetChoice v. Paxton and Moody v. NetChoice sparked dozens of amicus briefs, and responses from civil society after the hearings mostly focused on First Amendment considerations and analyzed the ways in which the justices’ lines of questioning prioritized some issues over others. The Electronic Privacy Information Center (EPIC) commented on the court’s emphasis on free speech, frustrations with the cases’ litigation as pre-enforcement facial challenges, and potential consequences for Section 230, as well as the relative lack of discussion on the laws’ individualized explanation provisions. The Foundation for Individual Rights and Expression (FIRE) also emphasized the First Amendment arguments made by justices, asking the court to “keep the government’s hands out of online content moderation.” The Cato Institute made a similar First Amendment argument in response to the cases, arguing that the Florida and Texas laws are counterproductive to concerns about free expression. The ACLU also argued following the oral arguments that “private platforms’ decisions about what speech to host, publish, and distribute on the internet are protected by the First Amendment and cannot be mandated by the government.”
  • What We’re Reading: Ahead of the hearings, Gabby Miller, Ben Lennett, Justin Hendrix, Maria Fernanda Chanduvi, Divya Goel, and Mateo García Silva provided a review in Tech Policy Press of all the amicus briefs filed in both cases. Also in Tech Policy Press, Ben Lennett shared five important questions in the two cases, and Lauren Wagner argued that platform accountability will require more than data sharing, adding a different angle on the responsibilities of platforms and tech companies. Following the oral arguments, Gabby Miller, Haajrah Gilani, and Ben Lennett provided observations from the hearing in Tech Policy Press. Tim Wu, law professor at Columbia, published an op-ed in the New York Times outlining the dangerous potential for the NetChoice cases to limit the accountability that states could require from platforms.

Section 702 Reauthorization Delayed Again

  • Summary: Earlier this month, House Speaker Rep. Mike Johnson (R-LA) was unsuccessful in bringing a new bill to the House floor that would reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA), which is set to expire on April 19. Section 702 allows the government to surveil non-US persons located outside of the United States. The Reforming Intelligence and Securing America Act (H.R. 7320), introduced by Rep. Laurel Lee (R-FL) this month, was the first movement on Section 702-related issues since December 2023, when the reauthorization deadline was postponed to April 2024. Divisions within the Republican Party regarding a warrant requirement have led to multiple versions of the reauthorization bill and a lack of consensus on which to support. Intelligence Committee members have generally favored looser restrictions than those preferred by members of the Judiciary Committee, which reported out a bill in December (H.R. 6570) that would have drastically limited Section 702 and other surveillance authorities. Liz Elkind reported that House Speaker Johnson delayed the Section 702 vote because Republican members of the House Intelligence Committee threatened to kill a House rule in response to the most recent reauthorization bill, which altered the text regarding the warrant requirements without the consent of members on the committee. The Biden Administration is reportedly seeking to extend the program through a court recertification process that would allow the program to continue “even if the underlying statute expires in the meantime.”
  • Stakeholder Response: Following news that the House had postponed its vote to reauthorize Section 702, the Center for Democracy and Technology called the provision a “controversial surveillance authority repeatedly abused by law enforcement [and] intelligence officials to pull up Americans’ private messages without a warrant,” and emphasized the need for reform.
  • What We’re Reading: Noah Chauvin and Elizabeth Goitein published an explainer with the Brennan Center for Justice analyzing the different Section 702 reauthorization bills and discussing Congress’ potential path forward. The Brennan Center for Justice’s Emile Ayoub and Elizabeth Goitein examined legal loopholes that could enable the government to access swaths of personal information and described legislative proposals to address the potential risks associated with unfettered access to Americans’ data.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across Congress, the executive branch and agencies, civil society, industry, and elsewhere.

In Congress:

  • Rep. Cathy McMorris Rodgers (R-WA) announced that she will not be seeking reelection in November. As the chair of the House Energy and Commerce Committee, McMorris Rodgers has played a key role in transparency and accountability, Section 230 reform, competition and antitrust, and content moderation debates in recent years. She drove the American Data Privacy and Protection Act (ADPPA, H.R.8152), the strongest data privacy bill to advance in Congress, through the committee.
  • House Speaker Mike Johnson (R-LA) and Minority Leader Hakeem Jeffries (D-NY) announced that the House is launching a bipartisan AI task force tasked with producing a report with recommendations for Congress to set new regulatory standards for AI and ensure the US remains a leader in AI innovation. The 24-member task force will be led by Reps. Jay Obernolte (R-CA) and Ted Lieu (D-CA) and will seek to address questions regarding national competitiveness and AI safety.
  • The House Judiciary Committee and its Select Subcommittee on the Weaponization of the Federal Government released an interim staff report accusing the National Science Foundation (NSF) of funding the development of AI-driven censorship and propaganda tools through its efforts to combat misinformation regarding COVID-19 and the 2020 election. The report argued that NSF’s grants to university and non-profit research teams have been used on projects to develop these tools for use by governments and “Big Tech to shape public opinion by restricting certain viewpoints or promoting others.” A spokesperson for NSF responded that the agency’s research helps improve public safety: “It is in this nation’s national and economic security interest to understand how these tools are being used and how people are responding so we can provide options for ways we can improve safety for all.”
  • Sen. Ron Wyden (D-OR) published a letter calling on the FTC and Securities and Exchange Commission (SEC) to protect the privacy of patients in response to an anti-abortion group using mobile phone location data to send “targeted misinformation to people who visited any of 600 reproductive health clinics in 48 states.”
  • Reps. Dan Crenshaw (R-TX) and Josh Gottheimer (D-NJ) sent a letter to Commerce Secretary Gina Raimondo alongside 13 of their colleagues calling for the department to add TikTok parent company ByteDance to the Bureau of Industry and Security entity list. The entity list includes companies owned outside the US that pose potential national security risks and restricts the export of “goods, software, and technology” from the US to listed companies.
  • The House Committee on Oversight and Accountability and the House Committee on the Judiciary both sent letters to FTC Chairwoman Lina Khan requesting interviews with FTC employees to investigate concerns that Khan and the FTC are overstepping their purview and abusing their power.
  • In a letter to President Biden, Rep. Rosa DeLauro (D-CT) and 87 other representatives commended US Trade Representative Katherine Tai’s approach to developing and implementing the Administration’s worker-centered trade policy. The letter praised Tai for respecting Congress’s “role in setting domestic policy” and advancing shared oversight goals on digital competition, privacy, and AI.
  • Sens. John Hickenlooper (D-CO) and Mike Braun (R-IN) sent a letter to the Department of Labor requesting information regarding DOL’s efforts to develop best practices and guidelines to ensure that workers benefit from AI and that those who face labor disruptions from the technology have the necessary federal support.

In the executive branch and agencies:

  • US Secretary of Commerce Gina Raimondo announced the nearly 400-member AI Safety Institute Consortium (AISIC) housed under the AI Safety Institute (USAISI) at NIST, which will convene a wide range of AI stakeholders to develop “guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”
  • The Biden Administration issued an executive order authorizing DOJ and other federal agencies to take steps to prevent “the large-scale transfer of Americans’ personal data to countries of concern.” The executive order tasked the DOJ with developing rules that restrict the sale of Americans' location, health, financial, and biometric data to foreign adversaries including China, Russia, Iran, and North Korea, among other entities linked to those nations. The executive order represented an attempt by the Biden Administration to regulate the data broker industry and limit the ability of US adversaries to purchase granular data to blackmail or surveil American citizens. In response to the executive order, the Consumer Financial Protection Bureau (CFPB) is expected to propose rules under the Fair Credit Reporting Act “to limit certain activities of data brokers, including those that sell personal data to those overseas.”
  • The Department of State announced a new policy implementing visa restrictions on individuals involved in commercial spyware misuse, including users of spyware, individuals deriving financial benefit from misused commercial spyware, and sellers of commercial spyware to governments.
  • The United States Patent and Trademark Office (USPTO) published inventorship guidance with “instructions to examiners and stakeholders on how to determine whether the human contribution to an innovation is significant enough to qualify for a patent when AI also contributed.” The guidelines state that patent protection may be sought for inventions to which a human made a significant contribution.
  • The Centers for Medicare & Medicaid Services (CMS) sent a letter to all Medicare Advantage insurers warning against algorithmic discrimination. The guidance clarified that “health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans.”
  • The DOJ named Princeton University computer science professor Jonathan Mayer as the department's chief AI officer and chief science and technology advisor. Mayer will advise the department’s Emerging Technology Board, which is “tasked with coordinating and governing AI and other types of emerging tech throughout the agency.”
  • UK-based software company Avast Limited was ordered by the FTC to pay $16.5 million and delete data and models to settle charges that the company unfairly collected consumers’ browsing data and sold it to third parties without adequate notice and consumer consent. Avast also reportedly deceived consumers by claiming its software would protect user privacy by preventing third-party tracking, while failing to inform consumers that their detailed browsing data was sold for advertising purposes.
  • In a letter to House Judiciary Committee Chairman Rep. Jim Jordan (R-OH), FTC Chair Lina Khan reported that an FTC investigation found that Twitter employees refused to comply with Elon Musk’s demands to grant outside third-party individuals access to company documents during the “Twitter Files” incident in 2022. Khan argued that the investigation was justified because the deep staff cuts following Musk’s acquisition of the company meant that "there was no one left at the company responsible for interpreting and modifying data policies and practices to ensure Twitter was complying with the FTC's Order to safeguard Americans' personal data."

In civil society:

  • Mozilla Foundation published an updated report of its 2020 paper “Creating Trustworthy AI.” The report discussed four key levers for advancing the development of trustworthy AI, analyzed progress made within each pillar, and highlighted opportunities where more work needs to be done. Mozilla Foundation also announced the Privacy for All campaign to “raise data privacy standards worldwide and lay the groundwork for successful AI regulation.” The campaign will work with civil society groups, researchers, technologists, and consumers around the globe to advance data privacy products and policy.
  • Ten civil society groups including Upturn, Lawyers’ Committee for Civil Rights Under Law, and the Center for Democracy & Technology sent a letter with recommendations on AI EO implementation to Attorney General Merrick Garland and Assistant Attorney General Kristen Clarke. Recommendations included suggestions for increased focus on civil rights principles, like creating an interagency working group in partnership with civil rights enforcement agencies to “develop and expand the federal government’s own anti-discrimination testing capabilities to uncover algorithmic discrimination.”
  • EPIC and US PIRG Education Fund published a report on state privacy laws, arguing that tech companies’ influence on state laws is weakening privacy protections, describing the qualities of a strong privacy law, and concluding that most states’ laws “fail to protect consumers’ privacy and security.”
  • In its public comments, the Center for AI and Digital Policy urged the National Institute of Standards and Technology (NIST) to ensure that guidelines and standards are “human-centered” and designed to create “transparency, accountability, safety, and fairness obligations for the industry.”

In industry:

  • Google’s Threat Analysis Group published a report on the landscape of actors developing, selling, and deploying spyware technology as well as an analysis of available products and recent activity. The report found that a wide range of smaller, less publicly recognizable commercial spyware vendors play an important role in spyware development and that the private sector is responsible for a significant portion of innovation in the spyware space.
  • BSA | The Software Alliance, a software company trade group including IBM, Microsoft, and other large tech companies, published its 2024 US policy agenda. The agenda includes ten policy priorities, including “strong data privacy,” “trustworthy artificial intelligence,” and “preventing the proliferation of harmful online content.”
  • Microsoft announced partnerships with five news organizations to incorporate generative AI technologies “to grow audiences, streamline time-consuming tasks in the newsroom, and build sustainable business operations.”
  • Google settled a class action lawsuit over a user data security glitch in Google Plus, a now defunct social media platform, agreeing to pay $350 million to plaintiffs.
  • The Software & Information Industry Association (SIIA) sent a letter to Members of the House Committee on Science, Space, and Technology in support of the CREATE AI Act of 2023 (H.R. 5077), which would formalize the National Artificial Intelligence Research Resource (NAIRR) pilot program established under the AI Executive Order.
  • In response to NIST’s request for public comments on the safe and trustworthy development and use of AI, several companies and industry groups including Google, IBM, and TechNet, urged the agency to work with foreign governments and use its AI Risk Management Framework as a model to develop common international standards. Among other civil society groups who responded, EPIC also emphasized the AI Risk Management Framework, additional safeguards for generative AI risks, data minimization, and transparency and oversight requirements.

Elsewhere:

  • The Biden campaign joined TikTok to engage younger voters despite President Biden’s previously expressed national security concerns over the platform.
  • Steven Kramer and Paul Carpenter were identified as the alleged culprits behind last month’s robocalls that used AI to impersonate President Biden to urge New Hampshire Democrats to not vote in their primary. Kramer claimed that he hired Carpenter to produce the robocalls in an attempt to draw attention to the “potential abuse of AI in campaigns.” Kramer is expected to face both criminal charges and a potential civil lawsuit as a result of the robocalls.

New and Updated AI EO RFIs and Public Comments

  • The National Telecommunications and Information Administration requested comments on the potential risks, benefits, implications, and policy approaches related to widely available model weights of dual-use foundation models; the comments will inform a report to the President on these topics. The comment period ends on March 27.
  • The Office of Management and Budget issued an RFI to invite public input on enhancing the effectiveness of privacy impact assessments (PIAs) in addressing privacy risks, especially those intensified by AI and technological advancements in data capabilities. The comment period ends on April 1.
  • The Department of Commerce requested comments on a proposed rule implementing President Biden’s executive orders on “Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities” (January 19, 2024) and “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (October 30, 2023). The notice seeks public comments on regulations that would require US Infrastructure as a Service (IaaS) providers to verify the identity of foreign customers and report transactions related to training large AI models with potential cybersecurity implications. The comment period ends on April 29.

New Legislation

  • Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024 (S.3696, sponsored by Sens. Richard J. Durbin (D-IL), Lindsey Graham (R-SC), Josh Hawley (R-MO), and Amy Klobuchar (D-MN)): This bill would address nonconsensual sexually explicit “deepfakes” by creating a federal civil remedy for victims identifiable in an “intimate digital forgery,” defined as a visually manipulated depiction created through software, AI, machine learning, or any technological means to falsely appear authentic. The remedy would be enforceable against individuals who produce or possess such forgeries with the intent to distribute them, as well as those knowingly involved without the victim’s consent.
  • Preventing Rampant Online Technological Exploitation and Criminal Trafficking (PROTECT) Act of 2024 (S.3718, sponsored by Sen. Mike Lee (R-UT)): This bill would mandate that pornography hosting sites verify the age and identity of content uploaders. It would require these sites to obtain consent forms for individuals in uploaded content, with unauthorized images to be removed within 72 hours, punishable by a civil penalty per day per image. Additionally, the law would criminalize the knowing publication of non-consensual pornographic images, commonly known as “revenge porn,” with provisions allowing victims to seek damages.
  • Artificial Intelligence Environmental Impacts Act of 2024 (S.3732, sponsored by Sens. Edward Markey (D-MA), Martin Heinrich (D-NM), Ron Wyden (D-OR), Peter Welch (D-VT), Alex Padilla (D-CA), and Cory Booker (D-NJ)): This bill would focus on evaluating the environmental impact of AI by tasking the EPA with conducting a study on AI’s environmental effects and directing NIST to develop standards and a voluntary reporting system for companies to report the potential environmental impacts of their models. The House version of the bill (H.R. 7197) was also introduced this month by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA). The bill has garnered support from several climate groups and has been endorsed by EPIC and Data & Society, among others.
  • Technology Workforce Framework Act of 2024 (S.3792, sponsored by Sens. Gary Peters (D-MI) and Eric Schmitt (R-MO)): This bill “expands the functions of NIST to include workforce frameworks for critical and emerging technologies” and mandates the development of an artificial intelligence workforce framework and periodic updates to the National Initiative for Cybersecurity Education Workforce Framework for Cybersecurity.
  • Justice in Forensic Algorithms Act of 2024 (H.R.7394, sponsored by Reps. Mark Takano (D-CA) and Dwight Evans (D-PA)): This bill would restrict trade secrets privileges in criminal proceedings, which are used to deny the defense access to source code and software details. It would also direct NIST to establish computational forensic algorithm testing standards and a testing program, mandating federal law enforcement to adhere to these standards and testing requirements when utilizing forensic algorithms.
  • Diversify Tech Act (H.R.7314, sponsored by Reps. Gregory Meeks (D-NY), Barbara Lee (D-CA), Melanie Stansbury (D-NM), and David Trone (D-MD)): This bill would establish a Department of Commerce task force to advance diversity, equity, inclusion, and accessibility in the tech industry by fostering a diverse talent pipeline and supporting the success of tech employees of color.

Public Opinion on AI Topics

Data for Progress surveyed 1,225 US likely voters from February 2-5, 2024, using web panel respondents. This survey found that:

  • A large majority of respondents (80 percent) express concern about the use of deepfakes featuring candidates and political figures during the November 2024 election.
  • A significant majority of respondents (83 percent) believe that “companies that develop AI tools should be required to label AI-generated content that is used to influence an election.”

Data for Progress also conducted a separate nationwide survey of 1,231 US likely voters from January 31-February 1, 2024, using web panel respondents. This survey found that:

  • 85 percent of respondents support the DEFIANCE Act (S. 3696), which proposes allowing Americans to sue individuals creating AI-generated media that portrays them in sexually explicit positions without consent.
  • A majority of respondents express support for key actions outlined within the executive order announced by the Biden Administration in October addressing the use of artificial intelligence. These key actions include:
    • Completing risk assessments about AI use in critical government sectors (73 percent).
    • Creating a national research database for AI data, software, and models (68 percent).
    • Hiring more AI professionals and data scientists across the federal government (57 percent).

Cleveland Clinic, in partnership with Savanta, conducted an online survey of 1,000 adult Americans from November 10-21, 2023. The survey found that:

  • Three in five Americans (60 percent) express optimism about AI leading to improved heart care in healthcare advancements.
  • While 72 percent of respondents believe advice from a computer chatbot is accurate, 89 percent would still consult a doctor before acting on AI recommendations.
  • Although 65 percent of respondents would be comfortable receiving heart health advice from AI, only 22 percent have sought health advice from a computer chatbot or other AI technology.

IBM surveyed 8,584 IT professionals across 20 countries in November 2023 and found that:

  • 42 percent of respondents work at companies that adopted AI early (defined as successful integration into their business, such as data management).
  • Among these self-identified early adopters, 59 percent are accelerating their use or investment.
  • 40 percent of respondents’ companies are still exploring or experimenting, held back by challenges including finding qualified employees, handling complex data, and navigating ethical concerns.

D2L surveyed 3,000 full- and part-time US employees about their use of AI and feelings about professional development courses in January 2024. Some key findings include:

  • 52 percent of Generation Z workers and 45 percent of millennial workers are concerned about being replaced by individuals with superior AI skills within the next year, compared to only 33 percent of Generation X workers.
  • 60 percent of respondents express a desire to use AI tools more frequently in the upcoming 12 months.
  • Nearly 40 percent of respondents feel their employers are not “prioritizing AI professional development opportunities.”

Public Opinion on Other Topics

Pew Research Center conducted a survey of 8,842 US adults from September 25 - October 1, 2023, on American adults’ news consumption habits:

  • Convenience remains the top benefit of getting news on social media, with 20 percent of respondents highlighting factors such as accessibility and availability. The speed at which respondents can receive news on social media was the second most-cited benefit (9 percent).
  • 40 percent of respondents rank inaccuracy as the top concern in obtaining news from social media, marking a 9-percentage-point increase over the past five years. The second-ranked concern, the low quality of news on social media (such as clickbait or a lack of in-depth coverage), was cited by a much smaller share (8 percent) of respondents.
  • A significant majority of those polled (86 percent) at least sometimes get news from a smartphone, computer, or tablet.
  • 58 percent of respondents prefer digital devices to consume news, with the most popular pathways to do so being news websites/apps (67 percent of respondents) and search engines (71 percent), followed by social media (50 percent) and podcasts (30 percent).

Accountable Tech and Data for Progress conducted a national survey of 1,184 likely US voters from February 16-19, 2024, on their concerns about efforts by tech companies to overturn state and federal regulations. They found that:

  • 36 percent of respondents are most concerned about tech companies using industry lobbying groups to hide attempts to overturn regulations.
  • 23 percent of people polled are most concerned with companies expanding their rights of corporate free speech.
  • 23 percent of respondents are most concerned about companies protecting their profits by filing lawsuits to overturn tech regulations.
  • 55 percent of respondents would be less likely to use Meta’s platforms if the company sued to overturn federal laws protecting consumer privacy and children’s safety.
  • 77 percent of respondents are concerned about companies suing the FTC to profit from data collected from minors.
  • 76 percent of respondents are concerned about companies suing to overturn a state law that restricts the collection of minors’ data on social media platforms.

We welcome feedback on how this roundup could be most helpful in your work – please contact Alex Hart with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Senior Associate at Freedman Consulting, LLC, where she assists project teams with research, strategic planning, and communications efforts. Her projects cover a range of issue areas, including technology policy, criminal justice reform, economic development, and diversity and equity...
J.J. Tolentino
J.J. Tolentino is an Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers a wide range of policy issues including technology and civil rights, environmental sustainability, and public health.
