Background
Mobile vision systems would benefit from the ability to situationally sacrifice image resolution to save system energy when imaging detail is unnecessary. However, upon any resolution reconfiguration request, current system frameworks stop providing image frames to the application until the reconfiguration completes. Frame delivery is bottlenecked by a sequence of reconfiguration procedures and memory management in current operating systems before it resumes at the new resolution. This substantial reconfiguration latency (280 ms) impedes the adoption of otherwise beneficial resolution-energy tradeoff mechanisms.
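For context, the stall described above arises from the standard V4L2 sequence an application must run to change sensor resolution. The C sketch below is illustrative only (buffer mapping and detailed error handling are trimmed, and the four-buffer count is an arbitrary choice); it shows where frame delivery stops and where it resumes at the new resolution.

```c
/*
 * Illustrative sketch (not the invention): the conventional V4L2
 * reconfiguration sequence. Frame delivery stops at VIDIOC_STREAMOFF
 * and does not resume until buffers are re-allocated, re-queued, and
 * streaming is restarted at the new format.
 */
#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>

int reconfigure_resolution(int fd, unsigned int width, unsigned int height)
{
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    struct v4l2_requestbuffers req;
    struct v4l2_format fmt;

    /* 1. Halt the capture stream: no frames reach the application from here on. */
    if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0)
        return -1;

    /* 2. Release the buffers sized for the old resolution. */
    memset(&req, 0, sizeof(req));
    req.count  = 0;
    req.type   = type;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return -1;

    /* 3. Negotiate the new resolution with the driver. */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = type;
    fmt.fmt.pix.width       = width;
    fmt.fmt.pix.height      = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;

    /* 4. Re-allocate and re-queue buffers for the new frame size
     *    (mmap of each buffer for CPU access is omitted for brevity). */
    memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = type;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return -1;

    for (unsigned int i = 0; i < req.count; i++) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type   = type;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = i;
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
            return -1;
    }

    /* 5. Restart streaming; only now do frames resume at the new resolution. */
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
        return -1;

    return 0;
}
```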
Invention Description
Researchers at Arizona State University have developed a new media framework that provides a rapid sensor resolution reconfiguration service as a modification to common media frameworks, including V4L2. This approach virtually eliminates the frame-to-frame reconfiguration latency, reducing it from 226 ms to 33 ms and in effect eliminating dropped frames during sensor resolution reconfiguration. End-to-end resolution reconfiguration latency is also more than halved, from 226 ms to 105 ms. Reconfiguring the sensor to capture at 480p, rather than capturing at 1080p and downsampling to 480p, reduces power consumption by more than 49% in a cloud-based offloading workload measured on a Jetson TX2 board. As a result, this technology enables mobile vision applications to dynamically adjust sensor resolution on the fly while balancing energy efficiency, task accuracy, and user experience.
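As an illustration of the frame-to-frame metric quoted above, the following C sketch measures the gap between the last frame delivered at the old resolution and the first frame delivered at the new one, using standard V4L2 buffer timestamps. It is a hedged example, not the invention's API: reconfigure_resolution() refers to the illustrative routine in the earlier sketch, and the 640x480 target is an assumed 480p mode.

```c
/*
 * Hedged sketch: one way to quantify frame-to-frame reconfiguration
 * latency, i.e. the gap between the driver timestamp of the last frame
 * at the old resolution and the first frame at the new resolution.
 */
#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/time.h>

/* Illustrative routine from the previous sketch, not the invention's API. */
int reconfigure_resolution(int fd, unsigned int width, unsigned int height);

static int dequeue_frame(int fd, struct timeval *ts)
{
    struct v4l2_buffer buf;

    memset(&buf, 0, sizeof(buf));
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)   /* blocks until a frame is ready */
        return -1;
    *ts = buf.timestamp;                     /* driver capture timestamp */
    return ioctl(fd, VIDIOC_QBUF, &buf);     /* hand the buffer back */
}

double measure_frame_gap_ms(int fd)
{
    struct timeval last_old, first_new;

    if (dequeue_frame(fd, &last_old) < 0)            /* last frame at old resolution */
        return -1.0;

    if (reconfigure_resolution(fd, 640, 480) < 0)    /* switch to an assumed 480p mode */
        return -1.0;

    if (dequeue_frame(fd, &first_new) < 0)           /* first frame at new resolution */
        return -1.0;

    return (first_new.tv_sec - last_old.tv_sec) * 1000.0 +
           (first_new.tv_usec - last_old.tv_usec) / 1000.0;
}
```

With the conventional sequence, this gap includes the full buffer teardown and re-allocation path; the invention's reported 33 ms corresponds to roughly a single frame period at 30 fps, which is why the reconfiguration appears seamless to the application.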
Potential Applications
• Mobile vision systems
• Video processing
• Augmented reality
Benefits and Advantages
• Reduces power consumption and reconfiguration latency
• Enables seamless sensor resolution transitions for computer vision without frame loss