[Gangnam Perspective] Question and Question Again
- Input: 2025-11-16 18:26:32 | Updated: 2025-11-16 18:26:32

As it becomes increasingly difficult to distinguish real from fake, AI-generated content is infiltrating daily life at an unimaginable pace, and crime exploiting that confusion has already become a reality. Recently, a woman in her 50s lost 500 million won after being deceived by an 'AI Lee Jung-jae': the romance scam ring lured her with an AI-generated face and a forged driver's license. Last year, there was also a case in which a victim was tricked into transferring 70 million won by an AI impersonation of Elon Musk. With celebrity photos, voices, and IDs forged in an instant, the damage is spreading far faster and wider than expected.
False advertising is also a serious problem. During this year's Parliamentary Inspection of the Administration at the National Assembly of the Republic of Korea, lawmakers pointed out that AI-generated 'fake doctors and pharmacists' are appearing in health supplement advertisements. Consumers are left defenseless against these 'AI doctors,' who speak more clearly and naturally than real experts. The elderly, who tend to be less familiar with digital environments, are especially vulnerable to such manipulation.
According to a survey on 'Cyberbullying & AI Awareness' conducted last year by the CDL Digital Literacy Association among 2,000 men and women aged 14 to 69, 58.5% of respondents reported having encountered content created with deepfake technology. When asked about their ability to identify deepfake content, 55.2% said it was difficult to distinguish. Additionally, 62.7% of respondents pointed out that cyberbullying caused by AI technology and services could become a serious problem.
Experts believe the situation will only worsen. Byung-Ho Choi, a professor at the Korea University AI Research Institute, stated, "It will soon become virtually impossible for the human eye to determine the authenticity of AI content," emphasizing the need for a campaign to question all content. While an era may come when AI detects AI, even that will not be a complete solution. Ultimately, what we must rely on is not technology, but our own critical attitude.
With that in mind, here are a few ways to practice 'intelligent consumption' of AI content. First, check carefully for watermarks on videos and photos, and look for telltale flaws such as distorted backgrounds, faces, hands, animals' paws, or objects. Next, prioritize information distributed by trusted sources such as the government or official media, and double-check any content that is overly extreme or provocative, especially if it comes through unofficial channels. Fact-checking websites and deepfake detection tools can also help.
In this context, the words of René Descartes deserve to be recalled. Before reaching the conclusion, "I think, therefore I am," he thoroughly doubted everything in the world. His insight that 'to find truth, one must, at least once in life, doubt as much as possible' is not confined to the 17th century. Especially today, when the real and the fake are perfectly intertwined, Descartes' attitude may be the most realistic and necessary survival strategy.
AI has already permeated daily life, from communication and finance to consumption and even political judgment, at an irreversible speed. Technology will keep advancing, but human discernment does not automatically improve with it. Ultimately, the question we must ask is not 'what to believe' but 'how to doubt.' Citizenship in the AI era begins with critical thinking, and that ability will be our most reliable shield for the future.
[email protected] Reporter