
Kakao Unveils 'Kanana Safeguard' for AI Safety Verification... Open Source Distribution

Published: 2025-05-27 09:51:07
Updated: 2025-05-27 09:51:07

[Financial News] Kakao is moving to build a safe and trustworthy environment and ecosystem for generative AI technology.
Kakao announced on the 27th that it has developed 'Kanana Safeguard', an AI guardrail model for verifying the safety and reliability of AI services, and that it will release a total of three models as open source, a first among domestic companies.
As generative AI services proliferate, social concern about the risks of harmful content is growing. Recognizing the need for technical and institutional safeguards, such as AI guardrail systems, to address the issue, Kakao developed the 'Kanana Safeguard' model. Major global big tech companies already operate models specialized in detecting the risks that generative AI can produce.
'Kanana Safeguard' is built on Kakao's in-house language model 'Kanana' and is specialized for Korean, having been trained on a dataset Kakao constructed to reflect the Korean language and culture. Evaluated on the F1 score, which combines a model's precision and recall into a single figure, it recorded Korean-language results exceeding those of global models.
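For reference, the F1 score is the harmonic mean of precision and recall. A minimal illustration in Python follows; the numbers are made up purely for illustration and are not Kakao's reported figures.

# F1 score: harmonic mean of precision and recall (illustrative values only)
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.92, 0.88))  # -> roughly 0.90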
The three models released as open source this time each detect harmfulness and risk for a different type of threat:
△ 'Kanana Safeguard' detects harmful content, such as hate, harassment, and sexual material, in user utterances or AI responses.
△ 'Kanana Safeguard-Siren' detects requests that require legal attention, such as those involving personal information or intellectual property rights.
△ 'Kanana Safeguard-Prompt' detects attacks by users attempting to misuse AI services.
All three can be downloaded from Hugging Face (see the usage sketch below). Kakao has applied the Apache 2.0 license, which permits free commercial use, modification, and redistribution, to 'Kanana Safeguard' to help establish a safe AI ecosystem, and it plans to continue updating and enhancing the models.
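For developers, the following is a minimal sketch of how such a model might be loaded after downloading it from Hugging Face. It assumes the models are published as standard Transformers-compatible causal language models; the repository ID and the classification prompt are illustrative assumptions, not details confirmed by the article.

# Minimal sketch, assuming a standard Hugging Face causal LM.
# "kakaocorp/kanana-safeguard-8b" is a hypothetical repository ID used only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kakaocorp/kanana-safeguard-8b"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask the guardrail model to judge a piece of user input (prompt format is assumed).
prompt = "Classify the following user message for harmful content: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))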

Yoonju Cho, Reporter (yjjoe@fnnews.com)