Case ID: M24-210P

Published: 2024-12-13 09:08:48

Last Updated: 2024-12-13 09:08:48


Inventor(s)

Sreenithy Chandran
Shenbagaraj Kannapiran
Suren Jayasuriya
Spring Berman

Technology categories

Applied Technologies
Artificial Intelligence/Machine Learning
Imaging
Physical Science
Wireless & Networking

Licensing Contacts

Physical Sciences Team

Dynamic Non-Line-of-Sight Tracking with Mobile Robots

Background

The rise of autonomous machines navigating changing outdoor environments requires not only line-of-sight tracking but also non-line-of-sight (NLOS) tracking. NLOS imaging techniques are divided into active and passive approaches. Active methods employ high-temporal-resolution light sources and time-resolved detectors, which measure both the intensity and the arrival time of a signal, and are capable of high-resolution 3D object reconstruction. However, their elaborate setup process and long acquisition times limit real-time applications.

Passive methods, on the other hand, offer a low-cost, practical solution for dynamic environments without specialized detectors. These methods are suitable for 2D reconstruction and localization tasks but struggle with low signal-to-noise ratios (SNR). Most existing NLOS imaging methods still rely on time-resolved detectors, and current approaches use laser configurations that require precise optical alignment, making them difficult to deploy in dynamic environments.

Invention Description

Researchers at Arizona State University have developed a new method of NLOS imaging, tailored for use in dynamic environments by small, power-constrained mobile robots such as aerial drones. The method enables localization and tracking of the 2D position and trajectory of an occluded human in real-world environments using standard depth and RGB cameras (cameras that capture red, green, and blue light). This is achieved through a pipeline that processes successive frames from a moving camera and employs an attention-based neural network, whose attention mechanism focuses on the most informative parts of an input sequence by assigning a different level of importance to each element. The pipeline also incorporates a transformer-based neural network, an AI architecture that learns by analyzing patterns in sequential data.
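As an illustrative sketch only (not the researchers' implementation), the core attention operation described above can be expressed as scaled dot-product attention over a sequence of per-frame feature vectors; the sequence length and feature dimension below are arbitrary placeholders:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each frame's features by their relevance to every other frame.

    Q, K, V: arrays of shape (seq_len, d), e.g. features extracted from
    successive camera frames. Returns an array of shape (seq_len, d) in
    which each output row is a relevance-weighted sum of the rows of V.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise frame relevance
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over frames
    return weights @ V

# Toy example: 4 frames with 8-dimensional features (hypothetical sizes)
rng = np.random.default_rng(0)
frames = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(frames, frames, frames)  # self-attention
```

In a transformer-style pipeline, Q, K, and V would be learned linear projections of the frame features, and this operation would be stacked with feed-forward layers; the sketch shows only the attention step itself.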

Potential Applications:

  • Rescue Operations: Detecting and tracking victims in occluded areas during search-and-rescue missions
  • Autonomous Driving: Enhancing pedestrian detection capabilities of self-driving cars by detecting individuals around corners or behind obstacles
  • Surveillance: Monitoring areas not directly visible to the camera, such as behind walls or around corners

Benefits and Advantages:

  • Data-driven approach to NLOS imaging with a moving camera
  • Use of an attention-based neural network to handle dynamic successive frames
  • Pre-processing selection metric for extracting planes with maximum NLOS information from moving camera images
  • Low-cost and practical deployment in dynamic environments

Related Publication: PathFinder: Attention-Driven Dynamic Non-Line-of-Sight Tracking with a Mobile Robot