About

Fusion of Stereo Depth Maps and Sensor Localization in Wireless Sensor Networks

David J. Scherba

M.S. thesis, Electrical Engineering, University of Illinois at Urbana-Champaign, 2005

Peter Bajcsy, Advisor

This thesis addresses the following problem: given multiple images of a scene and sensor localization data, can we improve our knowledge of scene geometry and sensor locations? A novel approach to improving three-dimensional scene geometry and sensor location estimates by fusing localization data from a wireless sensor network (WSN) with depth maps obtained through stereopsis is presented, along with a software prototype. The experiments use "smart" wireless sensors and a digital camera. Sensor locations are determined via an acoustic time-of-flight ranging technique, and the uncalibrated depth map is computed using a binocular stereopsis technique.
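The acoustic ranging step can be illustrated with a minimal sketch: a time-of-flight measurement is converted to a range by multiplying by the speed of sound, and a node position is then recovered from ranges to known anchor positions. The function names, the 2-D setting, and the three-anchor trilateration below are illustrative assumptions, not the thesis's actual localization algorithm.

```python
import math

# Assumed nominal speed of sound (~20 C); the real value varies with temperature.
SPEED_OF_SOUND = 343.0  # m/s

def tof_to_range(tof_seconds):
    """Convert an acoustic time-of-flight measurement to a range in meters."""
    return SPEED_OF_SOUND * tof_seconds

def trilaterate_2d(anchors, ranges):
    """Locate a node in the plane from three anchor positions and ranges.

    Subtracting the third range equation from the first two removes the
    quadratic terms, leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = ranges
    # 2*(x3 - xi)*x + 2*(y3 - yi)*y = di^2 - d3^2 - xi^2 + x3^2 - yi^2 + y3^2
    a11, a12 = 2 * (x3 - x1), 2 * (y3 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    b2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Example: a node at (3, 4) ranged from three anchors at known positions.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist((3.0, 4.0), a) for a in anchors]
estimate = trilaterate_2d(anchors, ranges)  # -> approximately (3.0, 4.0)
```

With noisy ranges and more than three anchors, the same linearization yields an overdetermined system that would be solved in the least-squares sense instead.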

Depth map calibration is performed by (a) fitting a three-dimensional surface to a set of a priori known coplanar sensor locations, and (b) computing the depth map calibration model parameters by minimizing the squared distance between the sensor-defined plane and the corresponding depth map measurements. Fusion is performed by analyzing the expected uncertainties of the outputs of computational stereopsis and WSN localization, and then minimizing the combined uncertainty over a wide range of depth values.
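The two steps above can be sketched in simplified form. Assuming a linear calibration model z = a*d + b relating uncalibrated stereo depth d to the depth implied by the sensor-defined plane, the parameters follow in closed form from least squares; and assuming the fusion reduces to an inverse-variance weighted average of the two depth estimates, minimizing the variance of the combined estimate. Both the linear model and the weighted-average fusion are illustrative assumptions, not the exact models used in the thesis.

```python
def fit_linear_calibration(raw_depths, plane_depths):
    """Least-squares fit of z = a*d + b, mapping uncalibrated stereo depths
    to the depths implied by the plane fitted through the sensor locations."""
    n = len(raw_depths)
    md = sum(raw_depths) / n
    mz = sum(plane_depths) / n
    cov = sum((d - md) * (z - mz) for d, z in zip(raw_depths, plane_depths))
    var = sum((d - md) ** 2 for d in raw_depths)
    a = cov / var
    b = mz - a * md
    return a, b

def fuse_depths(z_stereo, var_stereo, z_wsn, var_wsn):
    """Minimum-variance fusion of two depth estimates: weight each estimate
    by the inverse of its expected variance."""
    w_s, w_w = 1.0 / var_stereo, 1.0 / var_wsn
    return (w_s * z_stereo + w_w * z_wsn) / (w_s + w_w)

# Example: raw stereo depths at pixels where sensors lie on a known plane,
# against the depths the fitted plane predicts at those pixels.
raw = [1.0, 2.0, 3.0, 4.0]
plane = [2.1, 4.0, 6.1, 8.0]  # roughly z = 2*d
a, b = fit_linear_calibration(raw, plane)
```

The inverse-variance weights mean the fused depth leans toward whichever source is more certain at a given depth, which is the sense in which the uncertainty is minimized over a range of depth values.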

Algorithms for computational stereopsis, sensor localization, and depth map and sensor location fusion are presented, followed by multiple experiments and the simulation and experimental fusion results obtained. The contribution of the presented work is a first prototype for improving our knowledge of scene geometry and sensor locations through camera and WSN data fusion, built using TinyOS and the Image to Knowledge (I2K) software tools. A summary of challenges with respect to automation, computational requirements, and the achieved accuracy of depth estimation is included.