Case ID: M25-079P

Published: 2025-09-18 07:59:50

Last Updated: 2025-09-18 07:59:50


Inventor(s)

Joshua Rego
Sanjeev Koppal
Suren Jayasuriya

Technology categories

Artificial Intelligence/Machine Learning
Computing & Information Technology
Cybersecurity
Imaging
Intelligence & Security
Physical Science

Licensing Contacts

Physical Sciences Team

System and Methods for Epipolar Plane Imaging in Event-Based Cameras for Depth Estimation

Event-based cameras, which asynchronously detect changes in pixel brightness with high temporal resolution, high dynamic range, and low power consumption, have proven very successful for robotic vision and autonomous navigation. Depth estimation from monocular or stereo event cameras is an important task that improves downstream robotic localization and mapping algorithms. Because monocular and stereo event depth estimation are ill-posed and challenging, most state-of-the-art techniques rely on trained deep learning networks to generate sparse or dense depth predictions. This reliance on learned networks, however, limits generalization: such methods exhibit lower accuracy on new scenes or camera parameters that differ from the datasets they were trained on.
 
Researchers at Arizona State University and collaborators have developed an innovative approach to depth estimation that leverages event cameras’ unique capabilities along with concepts from light field photography and epipolar plane images (EPIs). Unlike traditional deep learning methods, which are prone to overfitting and data dependency, this simple but robust approach extracts depth lines corresponding to sparse scene points directly from EPIs, improving depth accuracy without heavy computational requirements. The approach has been validated with a prototype built on a motorized rail setup and shows superior generalization and robustness compared to existing techniques.
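The geometric idea behind EPI-based depth recovery can be sketched in a few lines of code. The following is a minimal illustration under assumed conditions, not the researchers' implementation: a camera translating along a rail at constant speed sees each scene point trace a straight line in the (time, pixel) epipolar plane, and the line's slope is inversely proportional to depth. The focal length, rail speed, noise model, and function names below are all hypothetical.

```python
import numpy as np

# Assumed camera/rig parameters (illustrative values only).
F_PX = 500.0   # focal length in pixels
V = 0.05       # rail translation speed in m/s

# A scene point at depth Z projects to pixel x(t) = x0 + (F_PX * V / Z) * t,
# so its event track in the EPI is a line with slope s = F_PX * V / Z,
# and depth is recovered as Z = F_PX * V / s.

def simulate_epi_track(depth, x0=100.0, n_events=50, noise_px=0.2, seed=0):
    """Simulate the noisy event track one scene point leaves in the EPI."""
    rng = np.random.default_rng(seed)
    t = np.sort(rng.uniform(0.0, 1.0, n_events))   # event timestamps (s)
    x = x0 + (F_PX * V / depth) * t                # ideal pixel positions
    x += rng.normal(0.0, noise_px, n_events)       # pixel-position noise
    return t, x

def depth_from_epi_line(t, x):
    """Fit a line x = s*t + b to an EPI track and convert its slope to depth."""
    s, _b = np.polyfit(t, x, 1)
    return F_PX * V / s

true_depth = 2.0                                   # meters
t, x = simulate_epi_track(true_depth)
est = depth_from_epi_line(t, x)                    # close to true_depth
```

Because the estimate comes from a simple line fit rather than a learned network, it depends only on the camera geometry and rail speed, which is consistent with the strong generalization the approach claims across scenes and camera parameters.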
 
Potential Applications
  • Surveillance & security systems
  • Robotics & industrial automation
  • Autonomous vehicles & drones
  • Depth imaging systems
 
Benefits and Advantages
  • Scalable – Strong generalization to varied scenes and conditions
  • Energy Efficient – Low power consumption enabled by event cameras
  • Precise – Greater accuracy compared to current learning-based methods for monocular and stereo depth estimation