
"Is This Even Possible?" New Self-Driving Car Technology Emerges Without the Need for LiDAR

Published: 2025-10-15 08:00:00
Updated: 2025-10-15 08:00:00
Last month, a driving simulator at the Gwangju Artificial Intelligence Complex in Oryong-dong, Buk-gu, Gwangju, was used to demonstrate self-driving and other artificial intelligence (AI)-based vehicle technologies. (Newsis)

[Financial News] A new artificial intelligence (AI) technology has been developed that enables camera-based self-driving cars to perceive their surroundings more accurately. The technology exploits the 'vanishing point,' a geometric device that gives flat images a sense of depth.
A research team led by Professor Kyungdon Joo at the Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology (UNIST), announced on the 15th that it has developed 'VPOcc,' an AI model that corrects the perspective distortion in images captured by cameras.
The AI systems in self-driving cars and robots perceive their environment using either cameras or Light Detection and Ranging (LiDAR) sensors. Cameras are less expensive and lighter than LiDAR, and they provide rich information such as color and shape. However, because a camera flattens three-dimensional space into a two-dimensional image, the apparent size of an object changes significantly with its distance from the lens.
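The size distortion described above can be illustrated with a standard pinhole camera model. This is a hypothetical sketch for intuition, not code from the VPOcc paper; the focal length and object sizes are made-up example values.

```python
# Illustrative pinhole-camera sketch (not the paper's code): an object's
# projected size on the image shrinks inversely with its distance.
def projected_height_px(real_height_m: float, depth_m: float, focal_px: float) -> float:
    """Apparent height in pixels of an object at a given depth."""
    return focal_px * real_height_m / depth_m

# A 1.7 m pedestrian seen through a camera with a 1000 px focal length:
near = projected_height_px(1.7, 5.0, 1000.0)   # 5 m away
far = projected_height_px(1.7, 50.0, 1000.0)   # 50 m away: 10x smaller on the image
print(near, far)
```

The same pedestrian occupies ten times fewer pixels at 50 m than at 5 m, which is exactly the distance-dependent distortion a 2D image introduces.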
To address this issue, the research team designed the AI to reconstruct information based on the vanishing point. The vanishing point, a device of linear perspective established by Renaissance painters to convey depth, is the point where parallel lines, such as road lanes or railway tracks, appear to converge in the distance. Just as humans perceive depth on a flat canvas by referencing the vanishing point, the model uses it as a reference to recover depth and distance from camera footage more accurately.
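The geometry behind the vanishing point can be sketched in a few lines. Under a pinhole model, every 3D line sharing the same direction projects to a 2D line through a single image point: the projection of the direction vector itself. This is a minimal illustration under an assumed focal length, not the method used in VPOcc.

```python
# Hypothetical sketch: all parallel 3D lines with direction (dx, dy, dz)
# project to image lines that meet at one vanishing point, namely the
# pinhole projection of the direction vector itself.
def vanishing_point(direction, focal_px=1000.0):
    dx, dy, dz = direction
    # Project the direction vector like a point at infinity.
    return (focal_px * dx / dz, focal_px * dy / dz)

# Lane markings running straight ahead (along the camera's z-axis) all
# converge at the image centre, regardless of their lateral offset:
print(vanishing_point((0.0, 0.0, 1.0)))
```

Because the convergence point depends only on the lines' direction, it provides a stable geometric anchor for reasoning about depth anywhere in the image.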
Experimental results showed that VPOcc outperformed existing models in both spatial understanding (mIoU) and reconstruction capability (IoU) across several benchmarks. Notably, in the road environments crucial for self-driving, it predicted distant objects clearly and distinguished overlapping objects more accurately.
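For readers unfamiliar with the metrics named above, IoU (intersection over union) measures the overlap between predicted and ground-truth occupancy, and mIoU averages per-class IoU. The sketch below shows the standard definitions on toy voxel masks; it is not the paper's evaluation code.

```python
# Standard IoU on flattened binary voxel masks (1 = occupied, 0 = empty).
def voxel_iou(pred, gt):
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union

# mIoU: the mean of per-class IoU scores.
def voxel_miou(preds, gts):
    return sum(voxel_iou(p, g) for p, g in zip(preds, gts)) / len(preds)

# Toy example: prediction and ground truth agree on 1 of 3 occupied voxels.
score = voxel_iou([1, 1, 0, 0], [1, 0, 1, 0])
print(score)
```

A score of 1.0 means a perfect reconstruction; the toy masks above overlap in one voxel out of three occupied, giving 1/3.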
UNIST researcher Kim Junsu was the first author of the study; Junhee Lee (UNIST) and researchers from Carnegie Mellon University (CMU) also participated.
Kim Junsu explained, "We began this research believing that integrating the way humans perceive space into AI would enable a more effective understanding of three-dimensional environments. This achievement maximizes the utility of camera sensors, which are more cost-effective and lightweight than LiDAR sensors."
Professor Kyungdon Joo expressed optimism, stating, "The developed technology can be applied not only to robots and self-driving systems but also to various fields such as augmented reality (AR) map creation."
The results of this research won the Silver Prize at the 31st Samsung HumanTech Paper Award last March and have been accepted for presentation at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2025, a leading conference in the field of intelligent robotics. This year's conference will be held in Hangzhou, China, from the 19th to the 25th.
jiany@fnnews.com Yeon Ji-an Reporter