"Suppressing Civil Liberties"—Police Roughly Subdue BJ in Viral Video, Criticism Follows... The Shocking Truth Revealed
- Published: 2025-12-23 11:06:47
- Updated: 2025-12-23 11:06:47

[Financial News] A video showing police forcefully subduing a Broadcast Jockey (BJ) was posted on YouTube, sparking controversy. Viewers left critical comments claiming the police were suppressing civil liberties. However, the footage was later revealed to be a fake video created with artificial intelligence (AI).
According to a Yonhap News report on the 23rd, these fake police dispatch videos began appearing on YouTube, Instagram, and TikTok on October 2, and more than 50 have been posted to date. All were staged to look as if they had been captured by police body cameras at scenes involving assault, arguments, or drunk driving.
These AI-generated videos garnered 12 million cumulative views on Instagram alone during October. Within a month, the TikTok channel gained more than 9,900 followers, making the content a hot topic online.
Many internet users failed to recognize the videos as fakes. In fact, the video showing the arrest of the BJ drew critical comments such as, "Isn't this too excessive?"
Recently, a YouTuber edited and uploaded a video to make it appear as though the police were interfering with an illegal parking report. In response, the head of the local police station urged, "Please stop the witch hunt." Concerns are mounting that AI-generated fake videos are amplifying misunderstandings about alleged police brutality.
The Korean National Police Agency (KNPA) announced that it will launch a preliminary investigation into the relevant social media channels to prevent further harm from the spread of AI-generated false videos.
The police believe that the channel operators transmitted false information with the intent to benefit themselves or others, or to cause harm. They are considering applying charges under the Framework Act on Telecommunications and plan to pursue deletion or blocking of the content in parallel.
However, the relevant provision, Article 47, Paragraph 1 of the Framework Act on Telecommunications, which punished transmitting false information with the intent to harm the public interest, was struck down after the Constitutional Court ruled it unconstitutional in the 2010 "Minerva" case. Since no replacement law has been enacted, it remains unclear whether the channel operators will actually face punishment.
The Framework Act on the Development of Artificial Intelligence, which is set to take effect in January next year, focuses more on promoting the industry than on regulation. Since it does not address AI content that causes social disruption, there are calls for supplementary legislation.
Reporter Ahn Ga-eul (gaa1003@fnnews.com)