AI to Block Deepfakes... Detection Model Development Complete, Opening a New Path for Criminal Investigation
- Published: 2025-07-30 12:00:00
- Updated: 2025-07-30 12:00:00
15 Cases Successfully Analyzed in the First Half of the Year, Supporting the Deletion of Over 10,000 Illegal Election Materials in Cooperation with the Election Commission
[Financial News]
During the 21st presidential election period, videos manipulated to make it appear that a specific candidate had made statements he never made circulated online. Using its 'AI Deepfake Analysis Model', the National Forensic Service quickly detected subtle discrepancies in the speakers' voices and promptly delivered a clear forensic finding of 'deepfake' to investigative agencies, heading off potential confusion among voters.
The Ministry of the Interior and Safety announced on the 30th that, in collaboration with the National Forensic Service, it completed development and verification in April of the 'AI Deepfake Analysis Model', which applies artificial intelligence (AI) technology to determine the authenticity of suspected deepfake images, videos, and audio, and it disclosed the results of roughly two months of applying the model in deepfake crime investigations.
From May to June this year, the 'AI Deepfake Analysis Model' was used, at the request of front-line investigative agencies such as the National Police Agency, to successfully complete forensic analysis in 15 deepfake cases covering 60 items of evidence.
Of these, 13 cases involved deepfakes of presidential candidates during the 21st presidential election period, and 2 involved digital sex crimes.
Notably, during the election period the analysis model was shared with the National Election Commission, contributing to the detection and deletion of more than 10,000 illegal deepfake election materials from online platforms such as YouTube.
The Ministry of the Interior and Safety explained that the model is significant because it brought deepfake forensic analysis, previously impossible due to technical limitations, into official practice and established an investigative system grounded in scientific evidence.
The Ministry of the Interior and Safety and the National Forensic Service expect that deploying the model fully across deepfake evidence forensics will mark a turning point in deepfake crime investigations.
The model was developed because investigative agencies, lacking deepfake detection technology, struggled to analyze related evidence even as deepfake crimes, in which a specific person's face is synthesized onto other content, increased explosively and became a serious social problem.
During development, approximately 2.31 million deepfake data samples (690,000 video and 1.62 million audio samples), drawn from open datasets and in-house production, were used.
Training state-of-the-art deep learning algorithms on this large dataset, with continuous feedback and performance tuning, dramatically improved deepfake detection performance.
The model automatically detects traces of deepfakes in suspect files (images, videos, audio), predicting the probability of synthesis and the proportion of manipulation over time, and so helps investigators quickly determine whether content is a deepfake.
In particular, it can detect tampering in specific facial regions such as the eyes, nose, and mouth, and it handles degraded evidence well, including files with partial data loss or audio quality reduced by repeated uploads and downloads, so it is designed to be effective in real investigative environments.
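The article does not describe the model's internals, but the kind of clip-level report it produces, a synthesis probability and a proportion of manipulation over time, can be sketched in a few lines. Everything below (the `FrameScore` type, the `summarize_scores` helper, and the 0.5 threshold) is an illustrative assumption, not the National Forensic Service implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FrameScore:
    timestamp: float   # seconds into the clip
    synth_prob: float  # detector-predicted probability the frame is synthetic

def summarize_scores(scores: List[FrameScore], threshold: float = 0.5) -> dict:
    """Aggregate per-frame synthesis probabilities into a clip-level report:
    the mean probability, the fraction of frames flagged as synthetic
    (a rough analogue of a 'manipulation proportion over time'),
    and the timestamps of the flagged frames."""
    if not scores:
        raise ValueError("no frame scores provided")
    flagged = [s for s in scores if s.synth_prob >= threshold]
    return {
        "mean_synth_prob": sum(s.synth_prob for s in scores) / len(scores),
        "manipulation_ratio": len(flagged) / len(scores),
        "flagged_timestamps": [s.timestamp for s in flagged],
    }

# Example: a 4-frame clip where the last two frames look synthetic.
report = summarize_scores([
    FrameScore(0.0, 0.10),
    FrameScore(0.5, 0.20),
    FrameScore(1.0, 0.90),
    FrameScore(1.5, 0.95),
])
print(report["manipulation_ratio"])  # 0.5
```

A real system would obtain the per-frame probabilities from a trained detector; the aggregation above only illustrates how frame-level scores could be rolled up into the kind of timeline summary the article describes.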
The two institutions plan to create even stronger synergy by linking the achievements of the 'AI Deepfake Analysis Model' with the 'Voice Phishing Audio Analysis Model' developed by the Ministry of the Interior and Safety in 2023.
Using the two models together is expected to make it possible not only to determine whether content is a deepfake but also to prove whether it was created by imitating and synthesizing a specific politician's voice.
The plan is to further expand the scope of application of the 'AI Deepfake Analysis Model' in the future.
Yongseok Lee, Director of the Digital Government Innovation Office, emphasized, "The 'AI Deepfake Analysis Model' is an example of actively utilizing artificial intelligence technology to solve social problems," and stated, "In the future, the Ministry of the Interior and Safety will actively introduce AI and data analysis into the administrative field for the safety and stability of the public."
Reporter Taekyung Kim (ktitk@fnnews.com)