[Gangnam Perspective] In the Age of AI, Between Use and Blind Faith
- Published
- 2026-04-13 18:13:17

After a few glasses of white wine at a dinner gathering, a headache started to creep in, and I instinctively pulled out my smartphone to ask an AI. It replied, "Given the interaction between alcohol and acetaminophen, it’s best to wait at least four to six hours," and I followed that advice exactly as given.
It is no different when making plans with friends. We ask for restaurant recommendations, compare options, and even complete the reservation through AI. The old habit of manually comparing prices and information on portal sites is gradually disappearing. The AI shopping agent to be launched jointly by OpenAI and Shinsegae Group signals a shift: instead of searching for where something is cheaper, you simply ask and move straight to purchase. A world once confined to science fiction films, where you ask AI for medical advice, buy medicine on its recommendation, and let it choose where to meet, is now becoming reality.
Using AI in everyday life no longer feels strange. Just a year ago, many would have shaken their heads at the idea of "asking AI about your health," but now they shrug as if it is the most natural thing. The same trend is visible in professional fields. Hospitals are using AI to draft medical reports, and in the legal sector AI is being used to prepare initial drafts of legal briefs. At work, AI has entered almost every task, from writing proposals and developing code to summarizing meeting minutes and translating documents.
This trend leads to another problem. Search portals merely show links and leave the judgment to the user, but AI is different. It responds in a confident tone and presents neatly packaged answers. The real danger of AI is not that it can be wrong, but that it can be convincingly wrong. Such confident fabrications are the so-called "hallucinations."
Last year in New York City, a lawyer submitted an AI-generated legal brief to a court, only for most of the cited precedents to be revealed as fabricated, sparking controversy.
When a similar case emerged in the United Kingdom, the High Court of Justice of England and Wales took the unusual step of warning that lawyers who use fabricated AI-generated material in their arguments could be charged with contempt of court. This reflects a reality in which even professionals such as lawyers are increasingly accepting AI’s answers without verification.
The investment world is no exception. Stories of people suffering losses after investing on AI's advice are now common. The real issue is not the loss itself, but that when the basis for a decision is AI, it becomes unclear who should be held responsible.
Anthropic PBC’s latest model, Claude Mythos, has elevated this problem into a risk at the industrial and national level.
This model, described as a "monster AI" that could upend the cybersecurity landscape, is reportedly capable of autonomously finding vulnerabilities and even generating attack code, to the point that the White House has felt compelled to respond. If Claude Mythos, which can independently discover security vulnerabilities that had gone undetected for decades, is turned toward offense rather than defense, existing security systems could be rendered meaningless.
Geoffrey Hinton, often called the "godfather of AI," has warned that there is a 10–20% chance humanity could be wiped out by AI within the next 30 years. It may sound exaggerated, but at least one thing is clear: AI has moved beyond being a simple tool and is now entering a stage where it influences the very structure of human decision-making.
Does that mean we can or should try to stop the tide of the AI era? AI has already taken root across medicine, work, and daily life as a tool for boosting efficiency. According to the Harvard T.H. Chan School of Public Health, using AI for diagnosis can cut treatment costs by up to 50% and improve health outcomes by as much as 40%.
In the end, the problem lies not in the technology itself but in how we use it. Asking AI for input is different from handing over our judgment to it. Will we treat AI’s answers as reference points, or accept them as final conclusions? That choice still rests with us.
Reporter, yjjoe@fnnews.com