Friday, March 20, 2026

Draft of US Federal AI Law Unveiled: "Toward a Single National Framework" [Global AI Briefing]

Published: 2026-03-20 09:37:42 | Updated: 2026-03-20 09:37:42
[Financial News] Republican Party (GOP) Senator Marsha Blackburn has released a draft bill to establish a federal regulatory framework for artificial intelligence (AI). It is the first bill to emerge since December last year, when Donald Trump raised the need for a "federally centered, unified AI regulatory regime." Observers have warned that a patchwork of state-level AI rules across the United States is creating confusion and hindering industrial development.
According to Engadget on the 19th (local time), the draft Federal Artificial Intelligence Act presented by Blackburn outlines protection of the so-called "4 Cs" (Children, Creators, Communities, and Conservatives) as its core policy direction. The bill would make AI developers responsible for preventing harm to users and would clarify the application of copyright law so that AI systems cannot be trained on creative works without authorization. It would also require social media platforms to implement safety measures to protect users under 17, and mandate regular reporting on job changes caused by AI.
The proposal also includes a "regulation-and-growth in parallel" strategy designed to keep excessive rules from stifling innovation, and it would oblige AI providers to clearly label AI-generated content and disclose its source.
Engadget noted that political interests surrounding a federal AI law and pushback from the AI industry make it uncertain what form the bill will ultimately take, or whether it will pass at all.
Sony requests removal of 135,000 AI deepfake tracks
Sony Music Entertainment (SME) has asked streaming platforms to remove more than 135,000 AI deepfake tracks that impersonate its artists.
According to the British Broadcasting Corporation (BBC) on the 19th (local time), SME filed the request as the spread of AI technology has made it easy to create and upload fake songs that mimic the voices of famous singers. The company argued that such tracks are surging in number, eroding artists’ actual income and undermining activities such as new releases and promotions.
Industry observers say deepfake music is evolving beyond simple copyright infringement into a broader streaming fraud problem. In some cases, operators have artificially inflated play counts of fake tracks to generate illicit revenue.

The International Federation of the Phonographic Industry (IFPI) stressed that streaming platforms must adopt technology capable of identifying and labeling AI-generated content in order to address these issues. As AI rapidly reshapes how music is produced and distributed, transparency and copyright protection are emerging as critical challenges.
Experts believe the 135,000 tracks identified so far likely represent only a fraction of the volume actually in circulation, and warn that enforcement mechanisms are lagging far behind the pace at which AI-based content is spreading.
SME also asked the UK government last year to remove more than 75,000 tracks that impersonated the voices of its artists.
Meta AI agent leaks sensitive internal information
An AI agent under development at Meta has accidentally exposed sensitive internal information to employees. The incident has heightened concerns over security risks posed by AI agents.
According to The Information on the 19th (local time), an autonomous AI agent that Meta was testing internally bypassed security protocols on its own and exposed confidential internal data. In response, Meta declared a company‐wide security emergency at the "Severity 1" level and launched an investigation.
Reports say the AI agent penetrated deep into internal systems and extracted data without explicit approval from engineers. The sensitive information it obtained was exposed for more than two hours to employees who did not have the proper authorization.
The case is seen as a prime example of how the autonomy of AI agents can threaten internal security, and it is expected to sharpen the ongoing debate over the risks of deploying such agents.

Lee Gu-soon, Reporter (cafe9@fnnews.com)