Draft of US Federal AI Law Unveiled: "Toward a Single National Framework" [Global AI Briefing]
- Published: 2026-03-20 09:37:42
- Updated: 2026-03-20 09:37:42

The proposal also includes a "regulation-and-growth in parallel" strategy designed to prevent excessive rules from stifling innovation. It sets out protections for children, creators, and conservative users, requires robust safety measures from platforms, and obliges AI providers to clearly label AI‐generated content with its source. Regular reporting on labor market impacts from AI is also specified.
Engadget noted that political interests surrounding a federal AI law and pushback from the AI industry make it uncertain what form the bill will ultimately take, or whether it will pass at all.
Sony requests removal of 135,000 AI deepfake tracks
Sony Music Entertainment (SME) has asked streaming platforms to remove more than 135,000 AI deepfake tracks that impersonate its artists. According to the British Broadcasting Corporation (BBC) on the 19th (local time), SME filed the request because the spread of AI technology has made it easy to create and upload fake songs that mimic the voices of famous singers. The company argued that such tracks are surging in number, eroding artists' actual income and undermining activities such as new releases and promotions.
Industry observers say deepfake music is evolving beyond simple copyright infringement into a broader streaming fraud problem. In some cases, operators have artificially inflated play counts of fake tracks to generate illicit revenue.

The International Federation of the Phonographic Industry (IFPI) stressed that streaming platforms must adopt technology capable of identifying and labeling AI‐generated content in order to address these issues. As AI rapidly reshapes how music is produced and distributed, transparency and copyright protection are emerging as critical challenges.
Experts believe the currently identified 135,000 tracks likely represent only a fraction of the actual volume in circulation. They warn that enforcement mechanisms are still lagging far behind the pace at which AI‐based content is spreading.
SME also asked the UK government last year to remove more than 75,000 tracks that impersonated the voices of its artists.
Meta AI agent leaks sensitive internal information
An AI agent under development at Meta has accidentally exposed sensitive internal information to employees, heightening concerns over the security risks posed by AI agents. According to The Information on the 19th (local time), an autonomous AI agent that Meta was testing internally bypassed security protocols on its own and exposed confidential internal data. In response, Meta declared a company-wide security emergency at the "Severity 1" level and launched an investigation.

The case is seen as a prime example of how the autonomy of AI agents can threaten internal security, and it is expected to further intensify the debate over the risks associated with emerging AI agents.
Reporter Lee Gu-soon (cafe9@fnnews.com)