Multi-Scale, Multi-modal, High-Speed 3D Shape Measurement

Yatong An, Purdue University

Abstract

With robots expanding their applications into more and more scenarios, practical problems from those scenarios are challenging current 3D measurement techniques. For instance, infrastructure inspection robots need large-scale, high-spatial-resolution 3D data for crack and defect detection; medical robots need 3D data well registered with temperature information; and warehouse robots need multi-resolution 3D shape measurement to adapt to different tasks. In the past decades, much progress has been made in improving the performance of 3D shape measurement methods. Yet measurement scale, measurement speed, and the fusion of multiple modalities remain vital aspects to be improved before robots can have a more complete perception of a real scene. In this dissertation, we focus on the digital fringe projection technique, which can usually achieve high-accuracy 3D data, and expand its capability for complicated robot applications by 1) extending the measurement scale, 2) registering with multi-modal information, and 3) improving the measurement speed of the digital fringe projection technique.

The measurement scale of the digital fringe projection technique has mainly been confined to a small range, from several centimeters to tens of centimeters, due to the lack of a flexible and convenient calibration method for a large-scale digital fringe projection system. In this study, we first developed a flexible and convenient large-scale calibration method and then extended the measurement scale of the digital fringe projection technique to several meters. The meter scale is needed in many large-scale robot applications, including large infrastructure inspection.
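To make the underlying measurement principle concrete: digital fringe projection systems typically recover a wrapped phase map from several phase-shifted sinusoidal patterns. The sketch below shows the standard three-step phase-shifting computation (a textbook formula, not necessarily the exact algorithm used in this dissertation), assuming phase shifts of 2π/3 between captured images.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from three phase-shifted fringe images.

    Standard three-step phase-shifting formula with shifts of
    -2*pi/3, 0, +2*pi/3. The result lies in (-pi, pi] and still
    requires phase unwrapping before 3D reconstruction.
    """
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Synthetic check: fringe images with a known phase of 1.2 rad.
phi0, A, B = 1.2, 0.5, 0.4
I1 = A + B * np.cos(phi0 - 2.0 * np.pi / 3.0)
I2 = A + B * np.cos(phi0)
I3 = A + B * np.cos(phi0 + 2.0 * np.pi / 3.0)
phi = wrapped_phase(I1, I2, I3)
```

In practice `I1`, `I2`, `I3` would be full camera frames (NumPy arrays), and the same expression applies pixel-wise.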
Our proposed method includes two steps: 1) accurately calibrate the intrinsic parameters (i.e., focal lengths and principal points) with a small calibration board at close range, where both the camera and the projector are out of focus; and 2) calibrate the extrinsic parameters (translation and rotation) from camera to projector with the assistance of a low-accuracy large-scale 3D sensor (e.g., Microsoft Kinect). This two-step strategy avoids fabricating a large and accurate calibration target, which is usually expensive and inconvenient for pose adjustments. With a small calibration board and a low-cost 3D sensor, we calibrated a large-scale 3D shape measurement system with a FOV of (1120 × 1900 × 1000) mm³ and verified the correctness of our method.

Multi-modal information is required in applications such as medical robots, which may need both to capture the 3D geometry of objects and to monitor their temperature. To allow robots to have a more complete perception of the scene, we further developed a hardware system that can achieve real-time 3D geometry and temperature measurement. Specifically, we proposed a holistic approach to calibrate both a structured light system and a thermal camera under exactly the same world coordinate system, even though the two sensors do not share the same wavelength, and a computational framework to determine the sub-pixel corresponding temperature for each 3D point as well as to discard occluded points. Since the thermal 2D imaging and visible-light 3D imaging systems do not share the same spectrum, they can perform sensing simultaneously in real time. Our hardware system achieved real-time 3D geometry and temperature measurement at 26 Hz with 768 × 960 points per frame.

In dynamic applications, where the measured object or the 3D sensor may be in motion, measurement speed becomes an important factor.
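The sub-pixel temperature lookup described above can be sketched as follows: once the thermal camera is calibrated in the same world coordinate system as the structured light system, each reconstructed 3D point is projected into the thermal image through a pinhole model, and its temperature is bilinearly interpolated at the resulting sub-pixel location. All names below are illustrative; the dissertation's actual framework also handles occlusion, which is omitted here.

```python
import numpy as np

def sample_temperature(P_world, K, R, t, thermal_img):
    """Sub-pixel temperature for one 3D point (hypothetical sketch).

    K: 3x3 thermal-camera intrinsic matrix
    R, t: rotation (3x3) and translation (3,) from world to thermal camera
    thermal_img: 2D array of per-pixel temperatures
    """
    p_cam = R @ P_world + t          # world -> thermal camera coordinates
    uvw = K @ p_cam                  # pinhole projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    dx, dy = u - x0, v - y0
    T = thermal_img                  # bilinear interpolation at (u, v)
    return ((1 - dx) * (1 - dy) * T[y0, x0]
            + dx * (1 - dy) * T[y0, x0 + 1]
            + (1 - dx) * dy * T[y0 + 1, x0]
            + dx * dy * T[y0 + 1, x0 + 1])

# Toy check: identity camera, point projecting to (u, v) = (1.5, 0.5).
K = np.eye(3)
R, t = np.eye(3), np.zeros(3)
T_img = np.array([[0.0, 1.0, 2.0],
                  [10.0, 11.0, 12.0],
                  [20.0, 21.0, 22.0]])
temp = sample_temperature(np.array([1.5, 0.5, 1.0]), K, R, t, T_img)
```

Discarding occluded points would additionally require a visibility test (e.g., comparing the projected depth against a depth buffer rendered from the thermal camera's viewpoint).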
Conventionally, additional fringe patterns are projected for absolute phase unwrapping, which slows down the measurement.
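The conventional approach referred to here can be illustrated with two-frequency temporal phase unwrapping, a standard technique (used as an example, not necessarily the dissertation's method): a second, lower-frequency pattern set is projected so that its phase resolves the 2π ambiguity of the high-frequency wrapped phase.

```python
import numpy as np

def unwrap_two_frequency(phi_h, phi_l, ratio):
    """Conventional two-frequency temporal phase unwrapping (sketch).

    phi_h: wrapped phase of the high-frequency fringes, in (-pi, pi]
    phi_l: phase of a wide (unit-frequency) pattern, assumed free of
           2*pi discontinuities across the field of view
    ratio: number of high-frequency periods per low-frequency period
    """
    # Fringe order from the scaled low-frequency phase.
    k = np.round((ratio * phi_l - phi_h) / (2.0 * np.pi))
    return phi_h + 2.0 * np.pi * k   # absolute (unwrapped) phase

# Toy check: absolute phase 7.0 rad, ratio 10.
Phi_true = 7.0
phi_l = Phi_true / 10.0                      # 0.7, no wrapping needed
phi_h = Phi_true - 2.0 * np.pi               # wrapped into (-pi, pi]
Phi = unwrap_two_frequency(phi_h, phi_l, 10.0)
```

Because the extra low-frequency patterns consume projection and capture cycles, this scheme trades speed for an absolute phase, which is the bottleneck the dissertation targets.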

Degree

Ph.D.

Advisors

Zhang, Purdue University.

Subject Area

Computer science|Electrical engineering
