Analysis

October 2025 US Tech Policy Roundup

Rachel Lau, J.J. Tolentino, Ben Lennett / Oct 31, 2025

Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press. Isabel Epistelomogi, a policy and research intern with Freedman Consulting, also contributed to this article.

Northwest view up to the pediment, rotunda, and dome of the California State Capitol in Sacramento. (Radomianin / Wikimedia Commons)

This month, amid a federal government shutdown that sidelined federal oversight of tech—including staff furloughs at the Federal Trade Commission (FTC) and the Federal Communications Commission (FCC)—and with the House of Representatives in recess, California became a focal point for US tech policy. Governor Gavin Newsom signed a suite of bills addressing artificial intelligence and consumer privacy. Among them, SB53, the Transparency in Frontier Artificial Intelligence Act, requires major AI developers to disclose safety protocols and incident reports and extends whistleblower protections. Newsom also enacted SB524, one of the nation’s first laws governing law enforcement’s use of AI-generated content, while SB243 and AB489 impose new obligations on AI chatbots. On privacy, California enacted three major laws: AB566 will require web browsers to include a universal opt-out mechanism by 2027 so users can restrict the sale or sharing of their data; AB656 simplifies the deletion of social media accounts and associated personal data; and SB361 mandates that data brokers disclose when they sell sensitive personal information.

While California forged ahead, the federal shutdown left most agencies paralyzed. The FCC furloughed over 80% of its staff, halting licensing and enforcement operations. NIST suspended biometric testing programs, and sweeping staff cuts at the Cybersecurity and Infrastructure Security Agency (CISA) sparked bipartisan alarm over weakened cyber defenses. In contrast, Immigration and Customs Enforcement (ICE) expanded its surveillance infrastructure with new biometric and AI-powered monitoring tools, raising fresh privacy concerns. In Congress, lawmakers pressed Meta over deepfake political ads, while Sen. Rand Paul (R-KY) blocked reauthorization of the Cybersecurity Information Sharing Act. Meanwhile, legal battles continued despite the shutdown: the Supreme Court allowed a landmark antitrust ruling against Google’s Play Store to take effect, and YouTube settled President Trump’s lawsuit over the suspension of his account.

Read on to learn more about October developments in US tech policy.

Amid a federal government shutdown, California enacts a slate of AI and privacy bills

Summary

With the federal government shut down, the focus of US tech policy shifted to California this month as a series of bills reached Governor Gavin Newsom's desk. Newsom took significant steps to advance the state’s tech policy agenda, signing multiple bills addressing AI and privacy concerns. California’s actions could preview the tech policy battles ahead in states nationwide and in Congress. What follows is a non-exhaustive overview of the major AI and privacy bills enacted in California.

Artificial Intelligence

California became one of the first states in the country to address the risks and harms associated with frontier AI models and the use of AI in law enforcement. Governor Newsom signed SB53, the Transparency in Frontier Artificial Intelligence Act, which requires AI companies with annual revenues over $500 million to disclose their safety protocols and report the risks associated with their AI tools. The law also strengthens whistleblower protections and mandates the reporting of safety incidents to California’s Office of Emergency Services.

Newsom also signed SB524, one of the nation’s first laws to regulate the use of AI in policing. Starting January 1, 2026, all police reports generated using AI must be clearly labeled and include the name of the officer who verified the content as true. The law also requires agencies to retain AI-generated first drafts, disclose the software used, and maintain audit trails of who generated the report and what footage was involved. While police departments have welcomed AI for streamlining paperwork, the law seeks to address concerns about AI “hallucinations” and other errors.

SB53 drew broad support from consumer and public interest groups. Sunny Gandhi, Vice President of Political Affairs at Encode AI, claimed that the bill’s “passage marks a notable win for California and the AI industry as a whole. Its adaptability and flexible framework will be essential as AI progresses.” Sacha Haworth, Executive Director of Tech Oversight California, called SB53 a “key victory” in holding tech companies accountable for their AI products and a step forward in applying guardrails to the development and deployment of AI.

In contrast, industry reactions were more mixed. Anthropic CEO Dario Amodei said the bill “isn’t perfect but… it gets at core ideas of transparency and how AI companies test their models” that his company has been promoting, and performing, for some time. Collin McCune, Head of Government Affairs at Andreessen Horowitz, criticized the bill for regulating how the technology is developed, warning that it “risks squeezing out startups, slowing innovation, and entrenching the biggest players.”

To address risks associated with youth use of AI models, Governor Newsom signed two bills aimed at safeguarding children from potential harms. SB243 requires AI chatbot companies to identify users expressing suicidal thoughts and to prevent minors from accessing sexually explicit material. The bill also mandates that companies disclose when users are interacting with an AI companion. AB489 prohibits AI chatbots from portraying themselves as licensed medical professionals, bars developers from marketing or presenting chatbots as doctors or nurses, and permits state licensing boards to take legal action against violators.

Common Sense Media CEO Jim Steyer raised concerns that SB243 sets “weaker standards than those in other states and could mislead parents to believe the chatbots are regulated more meaningfully than they actually are.” Common Sense Media dropped its support for the bill after a series of amendments scaled back reporting requirements and eliminated third-party audits. In response, the bill’s author, California Sen. Steve Padilla, described it as a “step in the right direction” and noted that major legislation is often the result of negotiations with businesses and other interest groups affected by the regulations.

Privacy

Newsom also signed three major privacy bills into law, strengthening the state’s data protections by making it easier for consumers to opt out of online tracking and delete social media accounts. The new laws include AB566, the “Opt Me Out Act,” which requires web browsers to offer a universal opt-out button by 2027; AB656, which mandates a simpler process for deleting social media accounts and associated personal data; and SB361, which grants users visibility into whether data brokers sell sensitive personal data such as citizenship status or gender identity.

Tom Kemp, Executive Director of the California Privacy Protection Agency, a government entity created under the California Consumer Privacy Act, released a statement in support of AB566, stating that the “law puts the power back in consumers’ hands and makes exercising your privacy rights at scale as simple as clicking a button in your browser.” In contrast, the California Chamber of Commerce claimed that AB566 posed “a significant threat to California consumers, small businesses, and the fundamental structure of the digital economy.”
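
Neither AB566 nor this roundup specifies the technical protocol browsers must use, but the opt-out preference signal California regulators already recognize under existing privacy rules is the Global Privacy Control (GPC), which participating browsers transmit as a `Sec-GPC: 1` request header. As a purely illustrative sketch, assuming a Node/Express stack that no law prescribes, a site honoring such a browser-level signal might look like this:

```typescript
// Illustrative sketch only: honoring a browser-level opt-out signal.
// Assumes a Node/Express stack; AB566 does not mandate any particular
// implementation. The Global Privacy Control (GPC) spec defines the
// "Sec-GPC: 1" request header as the universal opt-out signal.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Flag every request whose browser asserts an opt-out preference.
app.use((req: Request, res: Response, next: NextFunction) => {
  res.locals.optedOut = req.header("Sec-GPC") === "1";
  next();
});

app.get("/", (req: Request, res: Response) => {
  if (res.locals.optedOut) {
    // Treat the signal as a request to opt out of data sale/sharing:
    // e.g., skip loading third-party trackers and record the preference.
    res.send("Opt-out signal honored; data sale/sharing disabled.");
  } else {
    res.send("No opt-out signal detected.");
  }
});

app.listen(3000);
```

The point of a universal signal is exactly this simplicity: rather than clicking through per-site cookie banners, the user sets the preference once in the browser and every compliant site can read it from the same header.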

What We’re Reading

  • Drew Liebert and David Evan Harris, “California Is Getting its ‘AI Act’ Together,” Tech Policy Press.
  • Jasemine Mithani, “What Tech Bills California Governor Newsom Signed Or Vetoed in 2025,” Tech Policy Press.
  • Cristiano Lima-Strong, “California Signed A Landmark AI Safety Law. What To Know About SB53,” Tech Policy Press.
  • Cristiano Lima-Strong, “Inside the Lobbying Frenzy Over California's AI Companion Bills,” Tech Policy Press.
  • Justin Hendrix, “California Becomes Frontline in Battle Over AI Companions,” Tech Policy Press.

Tech TidBits & Bytes

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and courts.

In the executive branch and agencies:

Government shutdown

  • The National Institute of Standards and Technology (NIST) suspended its review of all biometric testing, ranking, and research activity, including its ranking of facial recognition algorithms, an industry standard relied upon by both the public and private sectors to understand performance and fairness.
  • The FCC furloughed 81% of its 1,044 staff after a lapse in federal funding forced the agency to suspend most operations. The commission halted consumer protection enforcement, licensing services, and public inquiry phone lines for broadcast, wireless, and wireline systems. The agency also stopped approving new equipment authorizations. However, it will continue a congressionally mandated spectrum auction, one of the few operations allowed to proceed during the shutdown.
  • The Trump Administration aggressively downsized the Cybersecurity and Infrastructure Security Agency (CISA), laying off staff and reassigning others to unrelated roles across federal agencies. The cuts targeted divisions of CISA responsible for election security and critical infrastructure, effectively dismantling large parts of the agency’s operations. Cyber programs across the Department of State, Department of Defense, and Department of Justice also faced disruptions and shifting staffing priorities amid the government shutdown. Experts warned these moves would adversely impact US cyber defenses. Amid growing bipartisan concern, Rep. Eric Swalwell (D-CA) wrote a letter to acting CISA Director Madhu Gottumukkala describing the cuts as a major national security risk and warning that CISA’s diminished capacity leaves state and local governments vulnerable to cyber threats. Meanwhile, DHS defended the shakeup as a return to CISA’s “core mission,” framing the agency’s previous election security work as federal overreach.

Immigration and Customs Enforcement (ICE)

  • ICE expanded its surveillance infrastructure following a $75 billion funding boost from the “One Big Beautiful Bill Act,” investing heavily in biometric identification, facial and iris recognition, smartphone spyware, and social media monitoring tools. ICE said the technologies will aid immigration enforcement and domestic terrorism investigations, but civil rights groups and lawmakers raised concerns about the agency’s sweeping new powers to surveil US citizens. The tools include Clearview AI’s facial recognition technology and Paragon spyware systems, which were previously banned under the Biden Administration.
  • ICE also reportedly planned a major expansion of its social media surveillance operations, seeking to hire nearly 30 contractors to monitor platforms like TikTok, Facebook, and Reddit 24/7 from targeting centers in Vermont and California. These analysts would mine social media posts, public records, and commercial databases like LexisNexis to generate leads for deportation raids, with turnaround times as short as 30 minutes. The program also invited proposals for AI-driven sentiment analysis and advanced surveillance tools, raising privacy concerns among civil liberties groups.
  • New documents revealed that ICE contracted with PenLink to access its mass surveillance platform, which contains billions of mobile phone location records. PenLink’s services integrate real-time location data, facial recognition, dark web feeds, and social media activity into a single investigation dashboard. Privacy advocates and lawmakers, including Sen. Ron Wyden (D-OR), warned the purchase could enable mass tracking without a warrant and violate Fourth Amendment protections. ICE previously halted such data purchases in early 2024 under Biden-era policy, but has reversed course under the new administration.

Other

  • The US government announced it will expand facial recognition to all points of departure to track non-citizens, as part of efforts to reduce visa overstays and combat passport fraud. The new provisions will allow border authorities to require photographs and other biometrics, including fingerprints and DNA, from non-citizens at airports, seaports, and land crossings. The policy also removed previous exemptions for children under 14 and adults over 79. The expanded surveillance raised civil liberties concerns, especially given past findings that facial recognition disproportionately misidentifies Black and other minority individuals.
  • In a unanimous vote, the Federal Communications Commission (FCC) approved new rules to close loopholes that allowed previously authorized equipment from Chinese companies, including Huawei and ZTE, to still enter or be marketed in the US despite national security bans. The expanded restrictions included limitations on devices with components from blacklisted firms, following warnings from FCC Chair Brendan Carr that “America’s foreign adversaries are constantly looking for ways to exploit vulnerabilities.” Carr also highlighted the removal of millions of unauthorized listings from US online retailers as part of the agency’s crackdown. Some Chinese companies pushed back against the new rules, arguing the FCC’s retroactive measures exceeded legal authority and would harm small US businesses relying on existing equipment.
  • A deal on TikTok continued to elude finalization, even as US Treasury Secretary Scott Bessent unilaterally announced that the US and China had reached an agreement that would allow TikTok to continue operating in the US. However, according to reporting on the US and China talks, the Chinese government did not publicly confirm an agreement on TikTok, stating only that collaboration would continue on the issue. Bessent’s proposed $14 billion deal would fulfill the requirements of a 2024 law mandating divestment, transferring majority ownership of TikTok’s US operations to American investors and giving US-based stakeholders control over TikTok’s algorithm for US users.
  • Multiple agencies, including the Justice Department and ICE, reported using AI to generate draft reports.
  • A new report from NIST, published on the eve of the government shutdown, raised concerns about Chinese generative AI provider DeepSeek, citing cybersecurity, reasoning, and privacy shortcomings compared to US models like ChatGPT and Claude. Released by NIST’s Center for AI Standards and Innovation in response to President Trump’s AI Action Plan, the analysis found DeepSeek models to be more vulnerable to agent hijacking and more likely to comply with malicious prompts. The report also noted systemic censorship aligned with Chinese government views, including assertions that Taiwan is part of China, and data-sharing with third parties like ByteDance. While DeepSeek models performed competitively on science and reasoning benchmarks, they lagged in software engineering and cybersecurity.
  • In response to energy demand resulting from AI use, the Department of Energy finalized a $1.6 billion loan guarantee to a subsidiary of American Electric Power to rebuild and upgrade 5,000 miles of transmission lines across Indiana, Michigan, Ohio, Oklahoma, and West Virginia. DOE Secretary Chris Wright praised the project for supporting US competitiveness in AI and manufacturing.

In Congress:

  • Senate Republicans released an AI-generated video on X depicting Senate Minority Leader Chuck Schumer (D-NY) appearing to say, “Every day gets better for us,” during the ongoing government shutdown. The video, fabricated by the National Republican Senatorial Committee (NRSC), carried only a small “AI-generated” label. While the NRSC defended the tactic as a savvy use of campaign technology, experts and watchdogs warned it further blurs the line between fact and fiction. With no federal law yet regulating AI in campaign ads, and Democrats largely avoiding similar tactics, critics fear an unchecked partisan AI arms race could undermine democratic norms in the 2026 election cycle and beyond.
  • In a letter, Reps. Debbie Dingell (D-MI) and August Pfluger (R-TX) pressed Meta CEO Mark Zuckerberg for answers about the company’s role in hosting and profiting from AI-generated deepfake ads. The lawmakers cited a report revealing that nearly 150,000 deceptive ads, many impersonating public officials, ran on Meta platforms, generating close to $49 million over seven years. Despite Meta’s stated policies against scams and impersonation, repeat offenders were allegedly allowed to continue advertising. The letter criticized Meta for putting vulnerable users, particularly older Americans, at risk and demanded clarity on how the company detects deepfakes, notifies affected parties, and enforces policy violations.
  • A bipartisan group of Representatives pushed back against the Trump administration’s $100,000 H-1B visa application fee, warning it will devastate small tech companies and hinder US innovation. In a letter to President Trump and Commerce Secretary Lutnick, they urged the Trump Administration to work with Congress on reforming the high-skilled immigration system rather than imposing punitive fees.
  • Senate Homeland Security Committee Chair Rand Paul (R-KY) blocked bipartisan legislation to renew the Cybersecurity Information Sharing Act of 2015 (CISA 2015), which expired at the end of September. The law’s lapse stripped liability protections from private firms sharing threat intelligence with the government, a move experts said could stall cyber information exchange and delay federal responses to breaches.

In civil society:

  • The Center for Democracy & Technology (CDT) published a brief on “the use of generative AI models to develop individualized education programs (IEPs) for disabled students.” The brief outlines how AI is being used to develop IEPs, analyzes benefits and risks, and provides recommendations for responsible AI use for teachers, administrators, tool developers, and the disabled community.
  • The AFL-CIO unveiled its “Workers First Initiative on AI,” marking the labor movement’s first comprehensive agenda to shape AI’s role in the American workplace. Developed with unions across industries, the initiative offered a blueprint for employers and lawmakers to implement AI that upholds worker dignity, protects jobs, and boosts shared prosperity. It included guiding principles for responsible AI development and deployment, a national education campaign, and integration with a state-level task force to advance pro-worker AI policy. AFL-CIO leaders warned that without strong labor standards, AI could accelerate exploitation, subjecting workers to algorithmic quotas, job displacement, and data abuse.
  • Stanford Institute for Human-Centered Artificial Intelligence (HAI) highlighted new research finding that most major AI companies are not living up to the voluntary safety commitments they made in 2023. The average compliance score among the 16 major companies studied, including OpenAI, Anthropic, Google, Microsoft, Apple, Meta, and Amazon, was 53 percent, with OpenAI topping the list at 83 percent. The least-kept safety commitment was securing AI model weights, with 11 companies scoring a zero. Companies also fell short on third-party vulnerability reporting, while performing best on provenance and public reporting. The study’s authors argued that vague promises without enforcement mechanisms leave room for corporate accountability gaps and urged policymakers to require clear, verifiable disclosures in future AI governance frameworks.
  • Former US Surgeon General Vivek Murthy and Common Sense Media CEO Jim Steyer launched a 2026 California ballot initiative, the California Kids AI Safety Act, to protect kids using AI chatbots. The ballot initiative seeks to ban unsafe “companion” chatbots for youth, prohibit the sale of children’s data, and require independent safety audits of child-facing AI products.

In industry:

  • Apple removed ICEBlock, an app tracking ICE sightings, from its App Store following a request from the Department of Justice, which cited safety risks to ICE agents. The app surged in downloads after immigration raids began in Los Angeles, though it did not share agents’ personal data. Apple said the decision was based on law enforcement concerns, while Google confirmed it also removed similar apps. The removal followed a recent deadly shooting at a Dallas ICE facility, allegedly involving ICE tracking tools.
  • Leaked documents revealed that Amazon plans to automate 75% of its operations by 2033, potentially impacting over 600,000 US jobs. Amazon avoided references to automation and AI in the plans, instead suggesting that humans and robots would work alongside each other. Amazon executives claimed the documents reflect only one team’s vision, but economists warned that, if the plans succeed, a nationwide automation wave could follow across blue-collar industries. Amazon also announced plans to cut as many as 30,000 corporate jobs, nearly 10 percent of its corporate workforce.
  • Amazon’s Ring partnered with Flock Safety, a controversial AI-powered surveillance company whose tools are used by ICE, the Secret Service, and over 5,000 law enforcement agencies. The partnership allows police to directly request Ring doorbell footage through Flock’s platforms. While the deal is marketed as an opt-in tool for public safety, it could expand law enforcement’s access to millions of home cameras. Critics, including the Electronic Frontier Foundation, warned that the deal creates a “round-the-clock warrantless digital dragnet,” citing Ring’s poor privacy track record and Flock’s role in nearly one million arrests per year.
  • Meta laid off over 600 employees in its AI division, including more than 100 from the risk review team responsible for ensuring regulatory compliance and user privacy, replacing many manual risk audits with automated systems. Those compliance and privacy roles were created after Meta’s $5 billion FTC fine in 2019, and critics, including current and former staff, expressed skepticism about automation’s ability to manage high-stakes privacy issues.
  • Meta announced new content limitations for teenage users on Instagram, based on the PG-13 standards used by the film industry. The policy will also apply to AI chatbot interactions, following concerns about inappropriate exchanges between users and AI models, and will introduce a new “limited content” setting that gives parents stricter controls over their child’s feed.
  • OpenAI released a new AI-based social media app, Sora, which uses its new Sora 2 video generator to allow users to create realistic, audio-synced videos. In Sora, videos can feature copyrighted characters unless rights holders explicitly opt out. OpenAI did not offer blanket exclusions, instead asking agencies to report violations individually, sparking alarm among Hollywood studios and legal experts, who argued that the opt-out policy undermined copyright protections. Users uploaded AI clips of actors and other public figures, prompting SAG-AFTRA and top Hollywood talent agencies to demand action. OpenAI responded by reinforcing its deepfake guardrails, announcing a collaboration with SAG-AFTRA, CAA, and UTA to implement stricter protections against deepfakes. OpenAI also reiterated support for the bipartisan NO FAKES Act, a proposed US Senate bill that would create liability for unauthorized AI replicas of a person’s voice or likeness.
  • OpenAI announced new parental controls for ChatGPT on web and mobile platforms following a lawsuit filed by the parents of a California teen who died after allegedly receiving harmful information from the chatbot. The new safeguards permit parents to link accounts with their teens to set “quiet hours,” disable voice mode and image generation, restrict chat history from training OpenAI’s models, and apply stricter content protections. Parents will be notified if the system detects potential signs of self-harm, though they will not see the chat transcripts. The rollout comes amid increased scrutiny from US lawmakers as California considers two AI safety bills aimed at protecting minors. OpenAI also said it is developing an age-prediction system to automatically apply teen-appropriate settings.

In the courts:

  • Amid the government shutdown, the Department of Justice (DOJ) and Federal Trade Commission (FTC) paused antitrust cases against Amazon and Apple, while cases involving Google and Meta continued despite federal furloughs. DC District Judge Amit Mehta refused to delay his ruling in the DOJ’s Google Search case, while another Google ad-tech case remained in trial. The FTC’s suit over Meta’s acquisition of Instagram moved forward, awaiting a final decision, but efforts to break up Amazon’s retail platform and Apple’s iPhone ecosystem were paused until the end of the government shutdown. Experts warned that prolonged delays could impact the timeline for trials scheduled as far out as 2027.
  • A US district judge permanently banned Israeli spyware firm NSO Group from targeting WhatsApp users, continuing a six-year legal battle over the company’s surveillance activities. However, the ruling slashed the punitive damages NSO owes from $167 million to just $4 million, citing a lack of precedent around mobile surveillance. The injunction does not apply to NSO’s government clients, but WhatsApp’s chief hailed the ruling as a precedent-setting decision to hold spyware makers accountable.
  • The US Supreme Court rejected Google’s emergency request to block the enforcement of a landmark antitrust ruling over the Google Play Store, greenlighting a major shakeup of Android’s app ecosystem. The case originated from a 2020 lawsuit filed by Epic Games, which accused Google of illegally monopolizing Android app distribution and in-app payments. In 2023, a California jury sided with Epic, and a US district judge ordered Google to permit third-party stores, external payment links, and developer billing systems for three years. Google appealed the decision twice and warned the Court of “unrecoverable expenses” and security risks, but the justices offered no comment as they denied the stay. To comply with the order, Google changed its Play Store policies for US developers, allowing links to external payment services and alternative app downloads.
  • YouTube agreed to pay $24.5 million to settle a lawsuit brought by President Trump over the platform’s suspension of his account following the January 6 Capitol riot. Of that sum, $22 million went to the Trust for the National Mall to support Trump’s White House ballroom project, while the remaining $2.5 million was distributed to co-plaintiffs, including the American Conservative Union. The lawsuit, part of a broader effort by Trump to challenge what he described as “censorship of conservative viewpoints,” also targeted Meta and X, both of which settled earlier this year for $25 million and $10 million, respectively.

Legislation Updates

The following bills made progress across the House and Senate in October:

  • FRAUD Act of 2025 (H.R. 3483). Introduced by Rep. Tom Barrett (R-MI), the bill advanced through the Committee on Veterans' Affairs.
  • Deploying American Blockchains Act of 2025 (S. 1492). Introduced by Sen. Bernie Moreno (R-OH), the bill advanced through the Committee on Commerce, Science, and Transportation.
  • To improve the safety and security of Members of Congress… (S. 2144). Introduced by Sen. Amy Klobuchar (D-MN), the bill was sent to the House after passing the Senate by unanimous consent in September. It was also approved as an amendment to the National Defense Authorization Act for Fiscal Year 2026 (S. 2296).
  • ANCHOR Act (S. 318). Introduced by Sen. Alex Padilla (D-CA), the bill was sent to the House after passing the Senate by unanimous consent in September.

The following bills were introduced in the House and Senate in October:

  • Calling on the United States to champion a regional artificial intelligence strategy… (H.Res. 836). Introduced by Rep. Adriano Espaillat (D-NY), the resolution calls “on the United States to champion a regional artificial intelligence strategy in the Americas to foster inclusive artificial intelligence systems that combat biases within marginalized groups and promote social justice, economic well-being, and democratic values.”
  • AI–WISE Act (H.R. 5784). Introduced by Rep. Hillary J. Scholten (D-MI) on October 17, 2025, the bill would “amend the Small Business Act to help small business concerns critically evaluate artificial intelligence tools, and for other purposes.”
  • AI for Mainstreet Act (H.R. 5764). Introduced by Rep. Mark Alford (R-MO), the bill would “amend the Small Business Act to require small business development centers to assist small business concerns with the use of artificial intelligence, and for other purposes.”
  • Targeting Online Sales of Fentanyl Act (H.R. 5744). Introduced by Rep. Eugene Vindman (D-VA), the bill would “require a Government Accountability Office (GAO) study on the sale of illicit drugs online, and for other purposes.”
  • Quantum LEAP Act of 2025 (H.R. 5712). Introduced by Rep. Charles J. “Chuck” Fleischmann (R-TN), the bill would “establish the Commission on American Quantum Information Science Dominance, and for other purposes.”
  • STOP HATE Act of 2025 (H.R. 5681). Introduced by Rep. Josh Gottheimer (D-NJ), the bill would “require the reporting of certain terms of service of social media companies for purposes of limiting the online presence of terrorist organizations.”
  • GUARD Act (S. 3062). Introduced by Sen. Josh Hawley (R-MO), the bill would “ban AI companions for minors, mandate AI chatbots disclose its non-human status, and create new crimes for companies who make AI for minors that solicits or produces sexual content.”
  • Learning Innovation and Family Empowerment (LIFE) with AI Act (S. 3063). Introduced by Sen. Bill Cassidy (R-LA), the bill would establish “a parent-centered framework to safeguard student data privacy, enhance transparency, and encourage the responsible and optional use of AI in K-12 personalized learning.”
  • Safe Cloud Storage Act (S. 3023). Introduced by Sen. Marsha Blackburn (R-TN), the bill would “limit liability for certain entities storing child sexual abuse material for law enforcement agencies, and for other purposes.”
  • Right to Override Act (S. 2997). Introduced by Sen. Edward J. Markey (D-MA), the bill would “protect the independent judgment of health care professionals acting in the scope of their practice in overriding AI/CDSS outputs, and for other purposes.”

We welcome feedback on how this roundup could be most helpful in your work – please contact contributions@techpolicy.press with your thoughts.

Authors

Rachel Lau
Rachel Lau is a Project Manager at Freedman Consulting, LLC, where she assists project teams with research and strategic planning efforts. Her projects cover a range of issue areas, including technology, science, and healthcare policy.
J.J. Tolentino
J.J. Tolentino is a Senior Associate at Freedman Consulting, LLC where he assists project teams with research, strategic planning, and communication efforts. His work covers issues including technology policy, social and economic justice, and youth development.
Ben Lennett
Ben Lennett is the Managing Editor of Tech Policy Press. A writer and researcher focused on understanding the impact of social media and digital platforms on democracy, he has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
