Integration of Thermal and Visible Imagery for Robust Foreground Detection in Tele-Immersive Spaces.

Shadi Ashnai

M.S. dissertation, Computer Science, University of Illinois at Urbana-Champaign, 2007

John C. Hart and Peter Bajcsy, Advisors

In this thesis, we propose an integration framework for understanding multi-modal 3D medical volumes. The framework consists of a sequence of operations designed to support the transformation of raw data into knowledge, and to enable learning and exploration of medical hypotheses about imaged specimens. It includes algorithms for reconstructing, integrating, analyzing, and visualizing multi-modal medical data.

In our work, raw medical data are represented by 2D and 3D images. The images correspond to the same tissue and are acquired using different imaging instruments and processes. Each imaging modality captures different properties of the tissue of interest. Some raw data sets, such as magnetic resonance (MR) or computed tomography (CT) images, already form a 3D volume. Other data sets, such as a set of 2D microscopy images of histological cross sections, have to be aligned to reconstruct a 3D volume.
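The alignment of 2D cross sections can be illustrated with a minimal sketch. The thesis does not specify the registration method used; the example below assumes, for illustration only, that consecutive slices differ by a pure translation and estimates it with FFT-based phase correlation:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (row, col) translation that aligns img to ref,
    via FFT-based phase correlation (assumes a pure circular shift)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def align_stack(slices):
    """Shift each slice to match the previous one, then stack into a 3D volume."""
    aligned = [slices[0]]
    for img in slices[1:]:
        shift = estimate_shift(aligned[-1], img)
        aligned.append(np.roll(img, shift, axis=(0, 1)))
    return np.stack(aligned)
```

Real histological sections typically also need rotation, scaling, and non-rigid correction; this sketch only shows the basic pairwise alignment-and-stacking structure of a reconstruction pipeline.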

When multiple multi-modal data sets form 3D volumes, volumes of different modalities can be spatially integrated into the same coordinate system using computer-assisted techniques. During the integration, the following challenges have to be addressed: (a) the spatial resolution of multi-modal volumes might differ in every dimension, (b) the appearance of the same physical tissue varies across modalities, (c) the modality-specific measurements represent grayscale (MRI, CT, or neutron beam), color (histology), or vector (diffusion tensor (DT) images) values, and (d) the file size of 3D volumes requires significant computational resources and scalable algorithms.
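Challenge (a), differing per-axis spatial resolution, can be addressed by resampling each volume onto a common voxel grid before registration. The sketch below is a minimal illustration of that step (not the thesis's actual implementation), assuming SciPy is available and that the voxel sizes of each modality are known:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_common_grid(volume, voxel_size, target_size, order=1):
    """Resample a 3D volume with per-axis physical voxel size `voxel_size`
    (e.g. in mm) onto a grid with `target_size` voxel spacing, using
    trilinear interpolation by default (order=1)."""
    factors = [v / t for v, t in zip(voxel_size, target_size)]
    return zoom(volume, factors, order=order)

# Hypothetical example: an MR volume with anisotropic 1 x 1 x 3 mm voxels
# resampled to an isotropic 1 mm grid, tripling the slice count.
mr = np.random.rand(64, 64, 20)
iso = resample_to_common_grid(mr, (1.0, 1.0, 3.0), (1.0, 1.0, 1.0))
```

Interpolation order matters for challenge (c): intensity volumes tolerate trilinear interpolation, whereas label or vector-valued data generally need nearest-neighbor or component-wise handling.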

In addition to integration, there is a need to support the end users of integrated data by providing 3D visualization and quantitative feedback about the estimated integration accuracy. Due to the large variety of specimens, imaging techniques, and preparation methods used to obtain raw data, the current framework has been designed as computer-assisted rather than fully automated; full automation of each algorithm is outside the scope of this work. The main contribution of this work is the design and prototyping of an integration framework that includes algorithms for detecting and clustering features, extracting foreground in volumes, reconstructing 3D volumes from 2D cross sections, 3D-to-3D registration, and 3D visualization of multi-modal information. The framework can be used not only for transforming raw data into knowledge about the imaged specimens but also for better understanding the uncertainty introduced by integration.

The prototype was applied to a specific study focusing on understanding multi-modal correlations of gender-specific patterns and stuttering patterns of myelinated fibers in animal brain models.