March 2026 US Tech Policy Roundup

Rachel Lau, Shirley Frame, Ben Lennett / Apr 1, 2026

Rachel Lau and Shirley Frame work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

Avery Schott holds up a photo of his daughter Annalee Schott, beside others after the verdict in a landmark trial over whether social media platforms deliberately addict and harm children at Los Angeles Superior Court, Wednesday, March 25, 2026, in Los Angeles. (AP Photo/William Liang)

March’s US tech policy landscape was shaped by a series of high-profile legal decisions and new federal AI policy proposals. Two major jury verdicts against social media companies signaled a shift in how courts approach platform accountability, focusing on product design and user harm rather than traditional content-based liability. The decisions were widely seen as a potential turning point, prompting both celebration from online safety advocates and concern from critics about broader implications for innovation and free expression. At the same time, the Trump administration advanced a national AI policy framework aimed at creating a unified federal approach while limiting the role of states in regulating AI. The proposal, alongside competing legislation in Congress, underscored ongoing disagreements over issues like safety standards and federal preemption.

Meanwhile, activity across the executive branch and Congress highlighted the growing intersection of AI, data governance, and national security. Federal agencies advanced new AI deployments and cybersecurity initiatives while facing scrutiny over data practices, including the use of commercially available location data to surveil the public and reported mishandling of sensitive government data. In the courts, litigation continued to shape the contours of tech policy, with cases addressing AI guardrails, transparency requirements, competition issues, and questions around liability.

Read on to learn more about March developments in US tech policy.

Landmark verdicts in social media liability cases shift legal landscape

Summary

In March, two landmark verdicts were reached against major social media companies in New Mexico and California. In New Mexico, a jury ordered Meta to pay $375 million in damages after finding the company violated state consumer protection laws by misleading users about the safety of its platforms and enabling child sexual exploitation. The following day, a California jury awarded $6 million in total damages—split between Meta (70%) and Google-owned YouTube (30%)—finding the companies liable for negligently designing addictive features, such as infinite scroll and autoplay, that caused mental distress to a young woman identified as KGM. The California jury determined the companies acted with malice, oppression, and fraud, marking the first time these tech firms were found liable for products inflicting harm on young people through addictive design. Both Meta and YouTube have stated they disagree with the decisions and plan to appeal the verdicts.

Online safety advocates, plaintiffs' lawyers, and civil society groups widely celebrated the verdicts as a critical turning point in holding Big Tech accountable. The Tech Oversight Project heralded the California decision as an "earthquake" that shatters the tech industry's "era of invincibility" regarding their business models. Similarly, Common Sense Media praised the New Mexico ruling as a major victory for families, arguing that Meta ignored child safety threats to protect its bottom line. Legal experts and advocates noted that these trials successfully sidestepped traditional Section 230 protections by focusing on "product design" rather than third-party content, effectively applying traditional product liability tort law to social media. Supporters hoped these verdicts, along with the damning internal documents unearthed during discovery, will pressure tech companies to redesign their platforms and prompt Congress to pass comprehensive federal children's online safety legislation.

Conversely, critics warned that these rulings posed a severe threat to the open internet, free speech, and user privacy. In a post for Techdirt, Mike Masnick argued that attempting to separate a platform's "design" from its "content" effectively nullifies Section 230 protections and cautioned that while massive corporations like Meta and Google can easily absorb multimillion-dollar verdicts and the costs of trials, the looming threat of litigation will bankrupt smaller platforms and startups. Furthermore, he argued that the New Mexico case penalized Meta's implementation of end-to-end encryption, and warned that it sets a dangerous precedent that could force companies to weaken a vital security feature. The Information Technology and Innovation Foundation, an industry-supported think tank, cautioned that relying on state-by-state, case-by-case litigation is an inefficient approach to online safety and could lead to conflicting compliance obligations and free speech overreach.

What we’re reading

  • Mariana Olaizola Rosenblat, “The Jury Has Spoken on Big Tech. Now It’s US Lawmakers’ Turn,” Tech Policy Press.
  • Cristiano Lima-Strong, “Landmark Verdicts Could Unleash New Legal Playbook Over Social Media Harms,” Tech Policy Press.
  • Mike Masnick, “Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For,” Techdirt.

Trump administration releases AI framework urging Congress to preempt state laws

Summary

The White House released a national AI policy framework in March urging Congress to establish a unified federal standard for artificial intelligence governance and preempt a “fragmented patchwork” of state AI laws that impose “undue burdens.” Key provisions included age-assurance requirements, tools for parents to manage children’s digital environments, measures to combat AI-enabled fraud, streamlined permitting for AI infrastructure, and protections against unauthorized AI-generated digital replicas. On copyright, the administration recommended that Congress allow courts to resolve whether AI training on copyrighted material constitutes fair use. The framework would preserve a narrow set of state authorities, including generally applicable consumer and child protection laws, zoning over AI infrastructure, and rules governing states’ own AI use, but would preempt state-level algorithmic discrimination standards, transparency requirements, and accountability measures for high-risk systems. The framework also explicitly rejected the creation of a new federal AI regulatory body and instead proposed embedding oversight in existing sector-specific regulatory bodies and industry standards.

The framework intersects with a number of congressional AI regulation proposals. Sen. Marsha Blackburn (R-TN) released a discussion draft of the TRUMP AMERICA AI Act, which would bundle child safety provisions, including the Kids Online Safety Act (KOSA, S. 1748) and NO FAKES Act (S. 1367), with preemption measures and provisions targeting perceived ideological bias in AI systems. The bill would mandate third-party audits for high-risk AI systems to detect discrimination based on political affiliation, prohibit federal procurement of large language models featuring “manipulation in favor of an ideological dogma, such as diversity, equity, and inclusion,” and sunset Section 230. Diverging from the White House framework, Blackburn’s bill imposes a duty of care on AI developers and declares that AI training on copyrighted works is not fair use, while the framework defers copyright questions to courts. Also, Rep. Don Beyer (D-VA), along with four additional House Democrats, introduced the Guaranteeing and Upholding Americans’ Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act (H.R. 8031) to repeal the December 2025 executive order that sought to establish a moratorium on state-level AI policies, with Sen. Brian Schatz (D-HI) planning to introduce companion legislation in the Senate.

Prior congressional attempts to preempt state AI laws have met resistance from Republicans as well as Democrats. A previous 10-year moratorium on state AI laws, championed by Sen. Ted Cruz (R-TX), was dropped from a budget reconciliation bill by a 99-1 Senate vote in 2025, partly because Blackburn herself argued Congress could not block states without first passing federal child safety legislation. Furthermore, more than 50 Republican state lawmakers sent a letter in March urging the White House to stop its efforts to block state-level AI regulations, arguing that “state-led efforts are fully consistent with conservative principles” and with the administration’s “stated goals of promoting human flourishing while accelerating innovation.”

Civil society groups and public interest organizations criticized the AI framework. Robert Weissman, co-president of Public Citizen, described it as “a national framework to protect Big Tech at the expense of everyday Americans” that “will be dead on arrival in Congress.” Brad Carson, president of Americans for Responsible Innovation, warned that combining state preemption with opposition to open-ended industry liability amounts to “open season on the American public.” Writing in Tech Policy Press, Laura MacCleery argued that the framework followed a familiar preemption playbook historically employed by the tobacco and gun industries.

What we’re reading

  • Ben Lennett, “Trump and GOP Lawmakers Push for New National AI Legislation,” Tech Policy Press.
  • Sydney Saubestre, “Trump’s AI Policy Framework Leaves Most Vulnerable Exposed,” Tech Policy Press.
  • Laura MacCleery, “America’s AI Governance Crisis Is a Democracy Crisis,” Tech Policy Press.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.

In the executive branch and agencies:

  • At a White House event, AI companies including Google, Microsoft, Meta, and OpenAI signed a “Ratepayer Protection Pledge,” outlined by President Trump in his State of the Union Address. The companies committed to independently finance the power generation and grid improvements needed for their data center facilities, with the goal of avoiding strain on existing local grids. While Trump described the agreement as “mandatory,” the pledge includes no legally enforceable measures.
  • Venture capitalist David Sacks stepped down from his role as the White House’s AI and cryptocurrency adviser after reaching the 130-day limit on his status as a special government employee. Axios reported that the White House has no plans to appoint a replacement. Sacks transitioned to co-chairing the President's Council of Advisors on Science and Technology (PCAST) alongside Office of Science and Technology Policy Director Michael Kratsios. President Trump appointed 13 members to the council, predominantly tech industry leaders including Nvidia CEO Jensen Huang, Meta CEO Mark Zuckerberg, Oracle Executive Chairman Larry Ellison, Google co-founder Sergey Brin, and Advanced Micro Devices (AMD) CEO Lisa Su. The council will advise the president on AI policy and other technology issues, though it lacks regulatory authority.
  • President Donald Trump signed an executive order directing federal agencies to expand efforts against transnational cybercrime and ransomware, produce plans to identify and dismantle cybercrime networks, and expand coordination with commercial cybersecurity firms. The White House also released President Trump’s Cyber Strategy for America, a broader seven-page framework that emphasizes offensive and defensive cyber operations, closer public-private coordination, streamlined regulation and stronger protection for federal networks and critical infrastructure.
  • The Federal Trade Commission (FTC) sent letters to 13 companies urging them to ensure they comply with the Protecting Americans’ Data from Foreign Adversaries Act, which forbids data brokers from selling Americans’ sensitive data to foreign adversary countries such as China, Russia, and North Korea. The letter reported instances of Americans’ sensitive data being offered to foreign adversaries and pushed the companies to reevaluate their practices to ensure compliance with federal law.
  • Federal Bureau of Investigation (FBI) Director Kash Patel told the Senate Intelligence Committee that the bureau purchases commercially available data that can reveal individuals’ location and movement history. The US Supreme Court has required law enforcement to obtain a warrant for gathering location data from cell phone providers since its 2018 Carpenter decision, but agencies have circumvented the requirement by purchasing comparable data from third-party brokers.
  • The Department of Homeland Security (DHS) faced mounting cybersecurity disruptions as a funding shutdown and leadership departures weakened the agency's cyber apparatus. Sens. Katie Britt (R-AL) and Susan Collins (R-ME) reported that the Cybersecurity and Infrastructure Security Agency (CISA) furloughed much of its workforce, reducing personnel from roughly 2,000 to 800, and halted cybersecurity assessments of US critical infrastructure. DHS announced emergency measures to preserve priority programs, with congressional Republicans accusing Democrats of putting "politics over public safety" while Democrats demanded reforms to ICE operations as a condition of funding. DHS also reorganized its IT leadership following the exit of former Secretary Kristi Noem, who was replaced by Sen. Markwayne Mullin (R-OK). Bipartisan lawmakers questioned whether the upheaval would weaken the department's capacity to protect critical infrastructure.
  • The Wall Street Journal reported that the Trump administration stands to collect a $10 billion fee from investors involved in the sale of TikTok’s US operations. Investors, including Oracle, Silver Lake and MGX, paid an initial $2.5 billion to the Treasury Department when the deal closed in January, with further installments due until the full $10 billion is paid. Sen. Mark Warner (D-VA), the top Democrat on the Senate Intelligence Committee, wrote to Treasury Secretary Scott Bessent asking for answers about the arrangement, saying it could be part of a pattern of the administration “exercising the power and authority of the government to benefit certain companies and individuals close to the President.” Administration officials said the fee was justified, pointing to President Trump’s role in brokering the deal.
  • Deputy Defense Secretary Steve Feinberg directed the Department of Defense (DOD) to designate the Maven Smart System, Palantir Technologies’ AI-powered weapons-targeting platform, as a department-wide “program of record,” securing long-term funding and military-wide adoption. The memo ordered the transfer of Maven oversight from the National Geospatial-Intelligence Agency to the DOD Chief Digital and Artificial Intelligence Office (CDAO) within 30 days, with the US Army taking responsibility for future contracting with Palantir. While Palantir has said humans would remain responsible for selecting and approving targets, advocates have warned that autonomous weapons targeting systems raise ethical, legal, and security risks.
  • The Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) issued a joint interpretation clarifying how federal securities laws apply to crypto assets and delineating jurisdiction between the two agencies and called for congressional action to provide more durable clarity. The interpretation established a five-category token taxonomy and stated that most major cryptocurrencies, including Bitcoin and Ether, are digital commodities under CFTC rather than SEC oversight.

In Congress:

  • Speaker Mike Johnson (R-LA) delayed a scheduled House floor vote on a clean 18-month extension of Section 702 of the Foreign Intelligence Surveillance Act (FISA) amid growing bipartisan opposition ahead of the April 20 deadline. Section 702 allows the government to collect communications of non-citizens abroad without a warrant, although it incidentally collects Americans' communications as well. Some Republicans, including Rep. Anna Paulina Luna (R-FL), conditioned their support on attaching the SAVE America Act (H.R. 22), a voter registration bill, while the Congressional Progressive Caucus formally voted to oppose any reauthorization without reforms, binding its 98 members against a clean extension. Almost 90 civil liberties and advocacy organizations signed a joint letter urging Democrats to reject any extension without a warrant requirement. A clean extension would sidestep bipartisan reform efforts, including the SAFE Act (S. 3893), reintroduced in February by Sens. Dick Durbin (D-IL) and Mike Lee (R-UT) to add a warrant requirement before accessing the content of communications involving US citizens.
  • A bipartisan group of nine senators, led by Sens. Mark Warner (D-VA) and Josh Hawley (R-MO), called for the expansion of federal data collection on the effects of AI on workers, citing the Consolidated Appropriations Act (H.R. 7148) in a letter to the Department of Labor (DOL), Bureau of Labor Statistics (BLS), and the US Census Bureau. The lawmakers called for integrating AI-specific questions into existing surveys to track how automation directly influences hiring, layoffs, and shifts in workplace tasks.
  • More than 70 Democratic lawmakers called on DHS Inspector General Joseph Cuffari to open a new investigation into warrantless purchases of Americans' location data by ICE and other DHS agencies in a letter led by Sen. Ron Wyden (D-OR) and Rep. Adriano Espaillat (D-NY). The lawmakers noted that ICE canceled a scheduled congressional briefing on the purchases in February 2026 "with no explanation and without any offer to reschedule," and that the Inspector General’s 2023 recommendation for a department-wide policy governing commercial location data remains unimplemented.

In civil society:

  • TechCrunch reported that a toolkit likely originally developed for the US government by defense contractor L3Harris was used for a mass hacking campaign on iPhone users in Ukraine and China. Investigations found that a Russian government intelligence group used the toolkit for espionage against Ukrainian targets and that Chinese cybercriminals repurposed the tools for cryptocurrency theft.
  • The Center for Democracy & Technology (CDT) released a national poll finding that 74 percent of Americans are concerned about the privacy and security of personal data held by the government, consistent across political affiliation, geography, and race and ethnicity. 73 percent agreed that without privacy laws, government agencies would likely use personal data to track and monitor anyone they want, and 44 percent said they would forgo public benefits if unsure how their data would be used. 79 percent agreed Congress should hold agencies accountable for ignoring existing privacy laws. The research was accompanied by a coalition letter signed by more than 20 organizations and individuals urging congressional oversight of federal agencies’ data access.
  • Axios reported that Innovation Council Action, a pro-AI dark money group, plans to spend more than $100 million on the 2026 midterms to support candidates aligned with the Trump administration's deregulatory AI agenda. The group is led by Taylor Budowich, a former Trump White House deputy chief of staff who previously ran the pro-Trump MAGA Inc. super PAC, and has developed a scorecard ranking lawmakers on their alignment with the president's AI priorities. Outgoing White House AI and crypto adviser David Sacks praised the group, stating it "will play a critical role in advancing the innovation agenda championed by President Trump." Innovation Council Action joins an increasingly crowded field of AI-focused political spending, including the OpenAI-backed Leading the Future network and Meta's $65 million state-level super PAC effort.

In industry:

  • Google, Microsoft, Meta, Amazon, OpenAI, LinkedIn, Adobe, Pinterest, Target, Levi Strauss & Co. and Match Group signed the Industry Accord Against Online Scams and Fraud, a voluntary agreement announced at the United Nations Global Fraud Summit in Vienna. The companies committed to share threat intelligence about criminal networks with peers and law enforcement, build automated scam detection systems, and establish clear scam reporting pathways for users – although the agreement carried no enforcement mechanism or penalties for noncompliance. The group also called on governments to declare scam prevention a national priority.
  • Meta announced its plans to discontinue end-to-end encryption (E2EE) for Instagram direct messages, citing low user adoption of the opt-in feature. The move came two weeks after TikTok stated it would forgo E2EE for its direct messaging, claiming the technology could hinder efforts by safety teams and law enforcement to identify harmful activity. Internal documents raised in an ongoing child safety trial in New Mexico showed Meta officials were aware of potential trade-offs between safety and privacy while developing the encryption feature.
  • Moxie Marlinspike, founder of Signal, announced that the end-to-end encryption (E2EE) technology powering his encrypted AI platform Confer will be integrated into Meta’s AI systems, though no public timeline for the rollout has been set. Security researchers have praised Confer's approach to privacy but have noted it lacks full public documentation of its architecture, threat model, and supply chain.

In the courts:

  • Anthropic filed two federal lawsuits – in San Francisco and Washington, DC – challenging the Pentagon's supply chain risk designation and President Trump's directive banning federal agencies from using Claude as unlawful First Amendment retaliation. A wide range of stakeholders filed amicus briefs in support, including Microsoft, retired military leaders, more than 30 AI researchers from Google and OpenAI, and a cross-ideological coalition of civil society organizations. US District Judge Rita Lin granted a preliminary injunction blocking the Department of Defense’s (DOD) designation of Anthropic as a “supply chain risk” in a federal court in San Francisco. Lin also blocked Trump’s directive ordering all federal agencies to cease using Anthropic’s technology, barring all 17 named federal agency defendants from implementing the ban. Lin found that the government's actions constituted "classic illegal First Amendment retaliation.” She also ruled that the government violated Anthropic’s due process rights by providing no advance notice or opportunity to respond. The injunction, delayed seven days to allow a government appeal to the Ninth Circuit, is intended to allow Anthropic to continue operating with defense partners while the case proceeds. However, Undersecretary of Defense Emil Michael called the ruling a “disgrace,” noting that the designation was issued under two statutes and that the § 4713 designation remains in effect pending a separate challenge before the DC Circuit.
  • The Trump administration and the New Civil Liberties Alliance, representing plaintiffs, reached a consent decree, pending final court approval, in the Missouri v. Biden lawsuit barring the US Surgeon General, Centers for Disease Control and Prevention (CDC) and Cybersecurity and Infrastructure Security Agency (CISA) from “threatening social media companies into removing or suppressing constitutionally protected speech on Facebook, Instagram, X, LinkedIn and YouTube.” The prohibition only applies to the named plaintiffs and does not extend to the government sharing information with social media companies, nor does it extend to statements by “government officials that posts on Social Media Companies' platforms are inaccurate, wrong, or contrary to the Administration's views, unless those statements are otherwise coupled with a threat of punishment...”.
  • The Ninth Circuit partially reversed a lower court's injunction against California's Age Appropriate Design Code (CAADCA), a 2022 children's privacy law blocked from taking effect for nearly three years. The court vacated the injunction against the statute as a whole and its age-estimation requirement, reviving provisions requiring platforms to apply the highest privacy settings for children by default and limit the collection of children's geolocation data. However, the court affirmed the injunction against several data-use and dark-patterns restrictions, finding that key terms were unconstitutionally vague, a holding with implications for similar laws in Maryland, South Carolina, and other states. The case returned to the Northern District of California for further proceedings.
  • A federal judge denied xAI’s request for a preliminary injunction against California’s AB 2013, leaving the state’s generative AI training-data disclosure law in effect while the case moves forward. The law, signed in 2024 and effective January 1, 2026, requires developers of generative AI systems made available to Californians to post website documentation describing the data used to train them. xAI sued California in December, arguing that the measure violates the First Amendment, forces disclosure of trade secrets without compensation and is unconstitutionally vague.
  • A Ninth Circuit panel granted a temporary stay on a lower court’s preliminary injunction blocking Perplexity AI’s Comet browser from accessing password-protected parts of Amazon, allowing the AI shopping agent to resume operating while the startup appeals. Amazon had sued Perplexity in November 2025, alleging the startup disguised its AI agents as human browsing and posed security risks to customer data.
  • Journalist Julia Angwin filed a class action lawsuit against Grammarly and its parent company Superhuman alleging that the writing tool’s “Expert Review” feature improperly attributed AI-generated editorial feedback to hundreds of prominent journalists and academics without consent. Superhuman disabled the feature the same day the suit was filed and maintained the legal claims are meritless.
  • Two individual shareholders in Alphabet and Meta filed a federal lawsuit against President Trump and Attorney General Pam Bondi, arguing that the administration unlawfully approved ByteDance’s TikTok restructuring and declined to enforce the 2024 law requiring divestiture or a ban. The suit does not seek to ban TikTok outright, but it could force a court-ordered renegotiation of the deal if it succeeds.
  • A DOJ attorney told a federal judge in Maine that DHS “does not know what happened” to photographs that ICE agents took of legal observers’ faces and license plates during immigration enforcement operations in Maine and Minnesota. The statement was made in an ongoing class action suit against DHS filed in February by two legal observers, who alleged that ICE agents threatened to enter their information into a domestic terrorist database for observing and recording ICE operations.
  • A family filed a wrongful death lawsuit in federal court against Google after their son took his own life following extensive interactions with the Gemini chatbot, alleging Google made specific design choices to ensure Gemini would “maximize engagement.” In response, Google said Gemini "clarified that it was AI and referred the individual to a crisis hotline many times" and that it would "continue to improve our safeguards."

Legislation Updates

The following bills made progress across the Senate and House in March:

  • KIDS Act (H.R. 7757). Introduced by Rep. Brett Guthrie (R-KY), the bill was reported out of the House Committee on Energy and Commerce.
  • Sammy's Law (H.R. 2657). Introduced by Rep. Debbie Wasserman Schultz (D-FL), the bill was reported out of the House Committee on Energy and Commerce.
  • App Store Accountability Act (H.R. 3149). Introduced by Rep. John James (R-MI), the bill was reported out of the House Committee on Energy and Commerce.
  • Health Care Cybersecurity and Resiliency Act of 2026 (S. 3315). Introduced by Sen. Bill Cassidy (R-LA), the bill was reported out of the Committee on Health, Education, Labor, and Pensions.
  • Children and Teens' Online Privacy Protection Act (S. 836). Introduced by Sen. Edward Markey (D-MA), the bill passed the Senate with amendments by unanimous consent and was sent to the House.
  • Chip Security Act (H.R. 3447). Introduced by Rep. Bill Huizenga (R-MI), the bill was reported out of the Committee on Foreign Affairs.

The following bills were introduced in both the Senate and House in March:

  • The GUARDRAILS Act (S. 4216 / H.R. 8031). Introduced by Sen. Brian Schatz (D-HI) and Rep. Donald Beyer (D-VA), the bill would “repeal President Trump's executive order seeking to prevent states from regulating artificial intelligence.”
  • AI Fraud Accountability Act (S. 3982 / H.R. 7786). Introduced by Rep. Vern Buchanan (R-FL) and Sen. Tim Sheehy (R-MT), the bill would “establish protections against digital impersonation fraud, and for other purposes.”

The following bills were introduced in the Senate in March:

  • Sammy's Law (S. 4159). Introduced by Sen. Jon Husted (R-OH), the bill would “require large social media platform providers to create, maintain, and make available to third-party safety software providers a set of real-time application programming interfaces, through which a child or a parent may delegate permission to a third-party safety software provider to manage the online interactions, content, and account settings of such child on the large social media platform in the same manner as is available to the child, and for other purposes.”
  • Privacy Protection Updates Act (S. 4268). Introduced by Sen. Ron Wyden (D-OR), the bill would require “the government to disclose the existence of the Privacy Protection Act and prove that an exception applies if the government wants to search or seize a journalist's materials with a warrant,” including “journalist records stored in the cloud.” A companion House bill, H.R. 8093, was also introduced by Rep. Becca Balint (D-VT).
  • Artificial Intelligence (AI) Data Center Moratorium Act (S. 4214). Introduced by Sen. Bernie Sanders (I-VT), the bill would “enact a reasonable pause to the development of AI to ensure the safety of humanity.”
  • Data Center Water and Energy Transparency Act (S. 4213). Introduced by Sen. Richard Durbin (D-IL), the bill would require “data centers to disclose their energy and water usage.”
  • Consumer Data Privacy and Security Act (S. 4211). Introduced by Sen. Jerry Moran (R-KS), the bill would “strengthen the laws that govern consumers' personal data and create clear standards and regulations for American businesses that collect, process and use consumers' personally identifiable data.”
  • S.Con.Res. 30. Introduced by Sen. Rick Scott (R-FL), the resolution expresses the sense of Congress that the Ratepayer Protection Pledge announced on March 4, 2026, reflects sound national policy to protect ratepayers in the United States, promote electricity affordability, and ensure that all people of the United States, including households, small businesses, schools, hospitals, and farms, have access to reliable and affordable energy as artificial intelligence and data center infrastructure expands across the United States.
  • Research and Oversight of AI in Courts Act of 2026 (S. 4154). Introduced by Sen. Roger Wicker (R-MS), the bill would “establish a task force to address legal and ethical issues related to the use of AI speech-to-text technology and automatic speech recognition technology in the United States judicial system, and for other purposes.”
  • Future of Artificial Intelligence Innovation Act of 2026 (S. 3952). Introduced by Sen. Todd Young (R-IN), the bill would “establish artificial intelligence standards, metrics, and evaluation tools, to support artificial intelligence research, development, and capacity building activities, to promote innovation in the artificial intelligence industry by ensuring companies of all sizes can succeed and thrive, and for other purposes.”
  • Promoting United States Leadership in Standards Act of 2025 (S. 1269). Introduced by Sen. Marsha Blackburn (R-TN), the bill would “promote United States leadership in technical standards by directing the National Institute of Standards and Technology and the Department of State to take certain actions to encourage and enable United States participation in developing standards and specifications for artificial intelligence and other critical and emerging technologies, and for other purposes.”
  • AI Guardrails Act of 2026 (S. 4113). Introduced by Sen. Elissa Slotkin (D-MI), the bill would “provide for limitations on the use of artificial intelligence by the Department of Defense.”
  • Artificial Intelligence-Ready Data Act (S. 4098). Introduced by Sen. Ted Budd (R-NC), the bill would “establish standards and guidelines to make open government data assets artificial intelligence-ready, and for other purposes.”
  • Government Surveillance Reform Act of 2026 (S. 4082). Introduced by Sen. Ron Wyden (D-OR), the bill would “implement reforms relating to foreign intelligence surveillance authorities, and for other purposes.”
  • Websites and Software Applications Accessibility Act of 2026 (S. 3974). Introduced by Sen. Tammy Duckworth (D-IL), the bill would “establish uniform accessibility standards for web content and applications of employers, employment agencies, labor organizations, joint labor-management committees, public entities, public accommodations, testing entities, and commercial providers, and for other purposes.”

The following bills were introduced in the House in March:

  • AI Foundation Model Transparency Act – H.R. 8094. Introduced by Rep. Donald Beyer (D-VA), the bill would require “the Federal Trade Commission to establish transparency requirements for how artificial intelligence foundation models are built, trained, and deployed.”
  • Protect American AI Act of 2026 – H.R. 8037. Introduced by Rep. Michael Baumgartner (R-WA), the bill would “limit the effect of litigation on the environmental application process for data centers and associated infrastructure.”
  • No Harm Data Centers Act – H.R. 8033. Introduced by Rep. Greg Landsman (D-OH), the bill would “ensure that American families are protected from the impacts of data centers on the electric grid, and for other purposes.”
  • Data Center Community Impact Act – H.R. 7858. Introduced by Rep. Bonnie Watson Coleman (D-NJ), the bill would “require the Secretary of Energy to conduct a study on the effect of data centers on communities of color and low-income communities, and for other purposes.”
  • Safe Cloud Storage Act – H.R. 7834. Introduced by Rep. Laurel Lee (R-FL), the bill would “limit liability for certain entities storing child sexual abuse material for law enforcement agencies, and for other purposes.”
  • HBCU AI Research Leadership Act – H.R. 7826. Introduced by Rep. Valerie Foushee (D-NC), the bill would “amend the National Artificial Intelligence Initiative Act of 2020 to provide for a special allocation of certain awards of financial assistance to historically Black Colleges and Universities relating to national artificial intelligence research institutes, and for other purposes.”
  • AI-Ready Networks Act – H.R. 7783. Introduced by Rep. Jennifer McClellan (D-VA), the bill would “direct the Assistant Secretary of Commerce for Communications and Information to publish a report on the integration of artificial intelligence into the commercial telecommunications infrastructure of the United States, and for other purposes.”
  • Small AI Innovators Empowerment Act – H.R. 7968. Introduced by Rep. Suhas Subramanyam (D-VA), the bill would “direct the Department of Commerce, in collaboration with NIST and the SBA, to conduct a study on challenges faced by small artificial intelligence businesses across the country.”
  • Online Privacy Act – H.R. 8014. Introduced by Rep. Zoe Lofgren (D-CA), the bill would set “a national baseline for how Americans' personal data can be collected, used, and shared.”
  • H.Res. 1007. Introduced by Rep. Bryan Steil (R-WI), the resolution expresses the sense of the House of Representatives with respect to the use of artificial intelligence in the financial services and housing industries.

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Project Manager at Freedman Consulting, LLC, where she assists project teams with research and strategic planning efforts. Her projects cover a range of issue areas, including technology, science, and healthcare policy.
Shirley Frame
Shirley Frame is an Associate at Freedman Consulting, LLC, where she assists project teams with strategic planning, research, and policy landscaping. Her projects cover a range of issues, including technology policy, criminal justice, education, and youth development.
Ben Lennett
Ben Lennett is the Managing Editor of Tech Policy Press. A writer and researcher focused on understanding the impact of social media and digital platforms on democracy, he has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
