
Falcon: A Wide-and-Deep Onboard Active Vision System



Abstract:

The tradeoff between the field of view and resolution of conventional onboard vision systems stems primarily from their fixed optical components. We propose Falcon, a novel active vision system that resolves this tradeoff. The system couples an electric zoom lens and a high-speed camera with a pair of galvanometer mirrors, enabling high-resolution imaging of a moving object over a wide range of distances, from near to far. To calibrate Falcon accurately, we introduce a novel mapping-based calibration method that uses external cameras. We also present a robust, lightweight visual feedback method that builds on this mapping-based calibration for effective object tracking. We verify the effectiveness of Falcon by constructing a prototype and conducting tracking experiments in an indoor setting, which demonstrate the superior performance of our method. We also achieve continuous, high-resolution imaging of a curved mirror on public roads while the vehicle is moving.
Date of Conference: 01-05 October 2023
Date Added to IEEE Xplore: 13 December 2023
Conference Location: Detroit, MI, USA

I. Introduction

Sensing the surrounding environment is a fundamental function of robots, including autonomous vehicles. While recent advances in electronics have enabled a variety of sensors such as LiDAR and millimeter-wave radar, vision sensors remain indispensable thanks to their strong environmental-recognition capability, which is widely valued in vehicle applications.
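
As a rough illustration of the two ideas the abstract outlines, a mapping-based calibration lookup and a visual feedback loop, the following minimal Python sketch shows one plausible structure. The grid-based table, the bilinear interpolation, the proportional gain, and all names (GalvoMap, feedback_step, etc.) are assumptions for illustration, not the authors' implementation.

# A minimal sketch (not the authors' code) of the two ideas the abstract
# describes: (1) a mapping-based calibration that turns a desired gaze
# direction into galvanometer mirror commands via an interpolated lookup
# table, and (2) a proportional visual-feedback loop that keeps a tracked
# object centered in the high-speed camera image. All names, grid sizes,
# and gains are illustrative assumptions.
import numpy as np

class GalvoMap:
    """Lookup from gaze direction (pan, tilt in degrees) to galvanometer
    command angles, built from calibration samples on a regular grid."""

    def __init__(self, pans, tilts, cmd_x, cmd_y):
        # pans: (P,), tilts: (T,) monotonically increasing sample grids
        # cmd_x, cmd_y: (P, T) mirror commands recorded during calibration
        self.pans, self.tilts = pans, tilts
        self.cmd_x, self.cmd_y = cmd_x, cmd_y

    def __call__(self, pan, tilt):
        # Locate the calibration grid cell containing (pan, tilt).
        i = np.clip(np.searchsorted(self.pans, pan) - 1, 0, len(self.pans) - 2)
        j = np.clip(np.searchsorted(self.tilts, tilt) - 1, 0, len(self.tilts) - 2)
        u = (pan - self.pans[i]) / (self.pans[i + 1] - self.pans[i])
        v = (tilt - self.tilts[j]) / (self.tilts[j + 1] - self.tilts[j])

        def lerp2(g):
            # Bilinear interpolation of the four surrounding samples.
            return ((1 - u) * (1 - v) * g[i, j] + u * (1 - v) * g[i + 1, j]
                    + (1 - u) * v * g[i, j + 1] + u * v * g[i + 1, j + 1])

        return lerp2(self.cmd_x), lerp2(self.cmd_y)

def feedback_step(gaze, pixel_err, gain=0.01):
    """One proportional visual-feedback update: nudge the gaze command
    so the tracked object's pixel error shrinks toward zero."""
    pan, tilt = gaze
    ex, ey = pixel_err  # object position minus image center, in pixels
    return pan + gain * ex, tilt + gain * ey

# Example per-frame use, assuming calibration data is already collected:
#   gmap = GalvoMap(pans, tilts, cmd_x, cmd_y)
#   gaze = feedback_step(gaze, tracker_error)  # update gaze from pixel error
#   mirror_cmd = gmap(*gaze)                   # map gaze to galvo commands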

