Integration of Data Across Disparate Sensing Systems Over Both Time and Space to Design Smart Environments
This work addresses a set of general problems that arise in the design of smart spaces.
The problems concern knowing where and when sensing takes place, what type of measurement is acquired, and how
the acquired measurements should be interpreted for event detection, recognition, and proactive event-driven action.
The work outlines multi-instrument and sensor measurement systems that provide sensing capabilities for smart spaces,
as well as the theoretical and practical limitations that must be understood when working with novel sensing technologies.
The disparate sensing systems include wireless micro-electro-mechanical systems (MEMS)
sensor networks (such as
MICA sensors by Crossbow Inc.) and cameras that capture a variety of spectra (such as visible, thermal infrared and
hyperspectral information). The sensing systems become mobile over both time and space by engaging a robot equipped
with a robotic arm. The robot performs remotely controlled sensor deployment and adaptive sensor placement based on
sensing feedback.
Our solution addresses:
robotic sensor deployment using various human-computer interfaces,
synchronization of sensors and cameras,
localization of sensors and objects by fusing acoustic time-of-flight and stereo vision approaches,
mutual calibration of measurements from sensors and cameras,
proactive camera control based on sensor readings, and
hazard detection and understanding.
Our prototype solution primarily demonstrates
the integration of data across disparate sensing systems over both time and space.
Figure 1: An overview of the current prototype.
We applied the prototyped solutions to hazard detection and human alert in the context of
hazard aware spaces (HAS).
The goal of HAS is to alert people in the event of dangers such as natural disasters, lapses in human attention to hazards,
and intentionally harmful human behavior. Our prototype HAS system integrates both emerging and standard sensing
technologies in order to deliver sensor and camera data streams to a central location. At the central location, our
data analysis algorithms detect hazards to alert humans and, where possible, analyze hazard sources. Once alerted,
decision makers know when, where, and what events occur; they can then use the gesture-, voice-, or keyboard-controlled
robot to confirm the presence of hazards using sensor and video feedback, as well as to attempt to contain the hazards with the robot.
Sub-projects
Localization using Passive RFID Technology
We investigate the use of passive RFID technology for object localization and tracking.
This work is motivated by the desire to design and build robust smart (pervasive) spaces, and in particular by their
application in the area of hazard aware spaces. We are interested in building spaces that can detect
hazards and autonomously take action to gain more information about the hazard and alert the users of the space.
To enable a robot to operate in such a space, accurate localization and tracking must be carried out, allowing the
robot to then sense and detect hazards and alert humans about when, where, and what hazards occur. We evaluate the
strengths and weaknesses of passive RFID technology as a solution to this problem. In addition, we have researched
and developed a methodology for building a sensor model, which is required for accurate localization. We applied one
specific probabilistic localization method and demonstrated successful global localization of a robot
using these approaches.
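The specific probabilistic localization method is not reproduced here. Below is a minimal sketch, assuming a grid-based Bayes filter in which the sensor model gives the probability of detecting a tag as a function of its distance from the reader; the tag layout, the detection-probability curve, and the grid resolution are illustrative assumptions rather than values from our experiments.

import numpy as np

# Hypothetical sensor model: probability that the reader detects a tag at a
# given distance (cm). The ~90 cm nominal range echoes Figure 3; the linear
# falloff is an assumption for illustration.
def p_detect(dist_cm):
    return np.clip(1.0 - dist_cm / 90.0, 0.05, 0.95)

# Candidate robot positions on a discretized floor (cm).
xs, ys = np.meshgrid(np.arange(0, 300, 10), np.arange(0, 300, 10))
belief = np.ones_like(xs, dtype=float)
belief /= belief.sum()                       # uniform prior (global localization)

# Illustrative regular grid of tags on the floor (as in Figure 4) and one
# hypothetical read cycle: the set of tags that responded.
tags = [(x, y) for x in range(0, 300, 50) for y in range(0, 300, 50)]
detected = {(100, 100), (100, 150), (150, 100)}

# Measurement update: multiply in the likelihood of each tag being detected
# or missed, given its distance from every candidate position.
for (tx, ty) in tags:
    p = p_detect(np.hypot(xs - tx, ys - ty))
    belief *= p if (tx, ty) in detected else (1.0 - p)
belief /= belief.sum()

# Posterior mean = estimated robot location (the red circle in Figure 5).
print(f"estimated position: ({(belief * xs).sum():.0f} cm, {(belief * ys).sum():.0f} cm)")
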
Figure 2: Tag types used in our experimental analysis.
Figure 3: Left: documentation of the setup used to build the sensor model.
Right: image showing the nominal detection range when tags are distributed on the floor and the reader is mounted on
the robot and tilted forward 30 degrees. The fourth "row" has a radius of roughly 90 cm.
Figure 4: Experimental setup. The Alien Reader was mounted on the platform of a robot (left).
The RFID tags were placed on the floor in a regular grid pattern (right).
Figure 5: These images show three time steps in the middle of an experimental run.
Open circles represent undetected tags. Filled blue circles represent detected tags.
The red circle represents the average robot location, and a green circle represents a ground truth measurement.
Robotic Deployment of Wireless MEMS Sensors for Thermal Infrared Camera Calibration
Figure 6: Wireless micro-electro-mechanical systems (MEMS) sensors: MICA motes from UCB
combine sensing, communication, and computing (using the open-source TinyOS operating system and the NesC language).
Calibrated thermal infrared images represent a raster corresponding to the environment, with an associated temperature
for each raster element. We assume inaccessible and/or hostile environments, i.e., environments with barriers to human measurement;
robots are a means of sensor deployment in such scenarios.
The calibration result improves with more sensor data points (both spatially and temporally), so we want to maximize
the number of data points we can extract from a given network in a given amount of time.
The achievable rate depends on a number of factors: the sensor hardware, the sensor communication protocol (software), and the RF environment.
We have empirically tested a number of schemes and currently use an "asynchronous, memory-limited" communication model
implemented at the application level.
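As a concrete illustration of the calibration step, the sketch below assumes a simple linear (gain/offset) relation between raw thermal IR pixel values and temperature, fitted to the temperatures reported by MEMS sensors at known pixel locations. The linear model and the sample values are assumptions for illustration, not the exact calibration procedure used in the cited papers.

import numpy as np

# Calibration points: raw IR pixel values at the image locations of the deployed
# MEMS sensors, paired with the temperatures those sensors report (hypothetical).
raw_pixel_values = np.array([1180.0, 1242.0, 1310.0, 1398.0, 1455.0])
sensor_temps_c   = np.array([21.5,   23.0,   24.8,   27.1,   28.6])

# Least-squares fit of temperature = gain * raw + offset. More sensor data points
# (spatially and temporally) reduce the fit error, which is why we try to maximize
# the number of readings obtained from the network per unit time.
gain, offset = np.polyfit(raw_pixel_values, sensor_temps_c, deg=1)

def calibrate(raw_image):
    """Map a raw thermal IR raster to per-pixel temperatures in Celsius."""
    return gain * np.asarray(raw_image, dtype=float) + offset

# Report the fit quality at the sensor sites.
residual = np.abs(calibrate(raw_pixel_values) - sensor_temps_c)
print(f"gain={gain:.4f} C/count, offset={offset:.1f} C, max residual={residual.max():.2f} C")
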
Application: Sensor Networks for Hazard Detection
If a sensor temperature exceeds a threshold, acquire thermal IR imagery.
Analyze the (calibrated) thermal IR imagery: the source of the thermal change could be a natural hazard (fire) or a security
hazard (intruder).
If the imagery suggests a hazard, take action: a person or robot could investigate further, a sprinkler system
could be triggered, etc.
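A minimal sketch of this decision loop is given below; the threshold values and the four callables (sensor polling, image acquisition, calibration, alerting) are hypothetical placeholders for the actual HAS components, not their real interfaces.

import numpy as np

TEMP_THRESHOLD_C = 45.0      # hypothetical alarm threshold
HOT_PIXEL_FRACTION = 0.02    # hypothetical fraction of hot pixels treated as a hazard

def hazard_monitoring_step(read_sensor_temps, acquire_thermal_ir, calibrate, alert):
    """One pass of the rule: sensor threshold -> acquire IR -> analyze -> act.

    read_sensor_temps() -> list of (sensor_id, temperature_C)
    acquire_thermal_ir() -> raw IR frame (2D array)
    calibrate(frame)     -> per-pixel temperatures in Celsius
    alert(message)       -> notify a person, dispatch the robot, trigger a sprinkler, ...
    """
    hot_sensors = [(sid, t) for sid, t in read_sensor_temps() if t > TEMP_THRESHOLD_C]
    if not hot_sensors:
        return  # nothing to do this cycle

    # A sensor exceeded the threshold: acquire and calibrate thermal IR imagery.
    temps = np.asarray(calibrate(acquire_thermal_ir()), dtype=float)

    # Simple analysis: how much of the image is hotter than the threshold?
    hot_fraction = (temps > TEMP_THRESHOLD_C).mean()
    if hot_fraction > HOT_PIXEL_FRACTION:
        alert(f"Possible hazard (fire or intruder): {hot_fraction:.1%} of pixels above "
              f"{TEMP_THRESHOLD_C} C; triggering sensors: {hot_sensors}")
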
Figure 8: Wireless MEMS sensors deployed. Left: picture in the visible spectrum; right: in the infrared spectrum.
Figure 9: Sensors deployed and activated. Left: signal from the sensors on an oscilloscope before, during, and
after activation. Right: corresponding image from the infrared camera.
Robot teleoperation
The problem of interest here is multi-sensor data
fusion for reliably navigating a vehicle in an unknown environment. Multiple human controls are used to
navigate a robot in a hazardous or complex environment (e.g., in the presence of both unmanned and manned aerial vehicles
on an aircraft carrier deck).
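The fusion scheme from our FUSION 2005 paper on voice, gesture, and human-computer interface controls is not reproduced here; the sketch below only illustrates one simple arbitration idea, choosing the freshest, most confident command among the modalities. The command structure, confidence threshold, and staleness timeout are assumptions for illustration.

import time
from dataclasses import dataclass

@dataclass
class Command:
    modality: str      # "voice", "gesture", or "keyboard"
    action: str        # e.g. "forward", "stop", "turn_left"
    confidence: float  # recognizer confidence in [0, 1]
    timestamp: float   # seconds since epoch

STALE_AFTER_S = 2.0    # hypothetical: ignore commands older than this
MIN_CONFIDENCE = 0.6   # hypothetical: ignore low-confidence recognitions

def fuse_controls(commands, now=None):
    """Pick a single robot action from the latest commands across modalities."""
    now = time.time() if now is None else now
    fresh = [c for c in commands
             if now - c.timestamp <= STALE_AFTER_S and c.confidence >= MIN_CONFIDENCE]
    if not fresh:
        return "stop"  # fail-safe when no trustworthy command is available
    # Prefer the most confident command; break ties by recency.
    best = max(fresh, key=lambda c: (c.confidence, c.timestamp))
    return best.action

# Example: a confident voice command overrides an older, low-confidence gesture.
t = time.time()
print(fuse_controls([Command("gesture", "turn_left", 0.55, t - 1.5),
                     Command("voice", "stop", 0.90, t - 0.2)], now=t))   # -> "stop"
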
Figure 10: Sensor Network Data Fusion.
Multi-sensor modeling
The problem of multi-sensor phenomenology includes several fundamental theoretical, experimental and
validation issues.
Multi-sensor modeling: the objective in our study is to predict image appearance
(i.e., pixel values) based on (a) prior knowledge about the viewed scene and objects, (b) existing data (saved measurements),
and (c) developed prediction models. The goal of the validation component in the multi-sensor modeling process is to assess
the goodness of image predictions based on modeling or application criteria.
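As a small illustration of the validation component, the sketch below compares a predicted image against a measured one using two generic goodness criteria (RMSE and correlation); the actual prediction models and application-specific criteria used in the study are not reproduced here.

import numpy as np

def prediction_goodness(predicted, measured):
    """Generic goodness-of-prediction criteria for two images of the same size."""
    p = np.asarray(predicted, dtype=float).ravel()
    m = np.asarray(measured, dtype=float).ravel()
    rmse = np.sqrt(np.mean((p - m) ** 2))   # average per-pixel error
    corr = np.corrcoef(p, m)[0, 1]          # structural agreement
    return {"rmse": rmse, "correlation": corr}

# Example with hypothetical 2x2 "images" (predicted vs. measured pixel values).
print(prediction_goodness([[20.0, 21.0], [22.0, 23.0]],
                          [[20.5, 21.2], [21.8, 23.4]]))
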
Stereopsis
Goal: Given multiple images of a scene, extract a three-dimensional (depth) representation of the scene.
Figure 11: Computational Stereopsis Flow.
Image matching: matching pixels lie on the same image scanline (after rectification).
We define the disparity as the distance from a pixel in the first (left) image
to its matching pixel in the second (right) image. Depth and disparity are inversely proportional. A correlation score
is computed over all possible matches along the scanline within the disparity window.
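A minimal block-matching sketch for one scanline is given below, assuming rectified images, a sum-of-absolute-differences (SAD) correlation window, and the standard relation depth = focal_length * baseline / disparity; the multiscale refinement referenced in Figure 13 is omitted.

import numpy as np

def scanline_disparity(left_row, right_row, max_disparity=16, half_window=2):
    """For each pixel on a rectified scanline, choose the disparity with the
    smallest SAD score over a small window of neighboring pixels."""
    left_row = np.asarray(left_row, dtype=float)
    right_row = np.asarray(right_row, dtype=float)
    n = len(left_row)
    disparities = np.zeros(n, dtype=int)
    for x in range(half_window, n - half_window):
        lo, hi = x - half_window, x + half_window + 1
        best_d, best_sad = 0, np.inf
        # Try all candidate matches within the disparity window.
        for d in range(0, min(max_disparity, lo) + 1):
            sad = np.abs(left_row[lo:hi] - right_row[lo - d:hi - d]).sum()
            if sad < best_sad:
                best_sad, best_d = sad, d
        disparities[x] = best_d
    return disparities

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Depth and disparity are inversely proportional: Z = f * B / d."""
    d = np.asarray(disparity, dtype=float)
    return np.where(d > 0, focal_length_px * baseline_m / np.maximum(d, 1e-6), np.inf)
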
Figure 12: Stereopsis. Left and Right images
Figure 13: Three-dimensional representation of the scene: ground-truth disparity and
multiscale SAD correlation.
People, Publications, Presentations
Collaborators
Peter Bajcsy Image Spatial Data Analysis Group, NCSA, UIUC
Rob Kooper ISDA, NCSA, UIUC
David Scherba MS student in ECE Department and ISDA, NCSA, UIUC
Martin Urban MS student in ECE Department and ISDA, NCSA, UIUC
Miles Johnson MS Student in Aerospace Department and ISDA, NCSA, UIUC
Kyaw Soe MS student in General Engineering Department and ISDA, NCSA, UIUC
This material is based upon work partially supported by ONR funding. We acknowledge NCSA/UIUC support of this work.
Conference Papers
J.-C. Lementec and P. Bajcsy, "Recognition of arm gestures using multiple orientation sensors: gesture classification,"
Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, pp. 965-970,
October 3-6, 2004, Washington, D.C.
M. Urban, P. Bajcsy, R. Kooper and J.-C. Lementec, "Recognition of arm gestures using multiple orientation sensors: repeatability assessment,"
Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, pp. 553-558,
October 3-6, 2004, Washington, D.C.
P. Bajcsy and S. Saha, "A New Thermal Infrared Camera Calibration Approach Using Wireless MEMS Sensors,"
Proceedings of the Communication Networks and Distributed Systems Modeling and Simulation Conference
(CNDS 2004), January 19-22, 2004, San Diego, California.
S. Saha and P. Bajcsy, "System design issues in single-hop wireless sensor networks,"
Proceedings of the 2nd IASTED International Conference on Communications, Internet,
and Information Technology, pp. 743-748, Scottsdale, Arizona, November 2003.
M. Urban and P. Bajcsy, "Fusion of voice, gesture, and human-computer interface controls for remotely operated robot,"
Proceedings of the 8th International Conference on Information Fusion (FUSION), 8 pp.,
July 25-28, 2005, Philadelphia, Pennsylvania.
D. J. Scherba and P. Bajcsy, "Depth map calibration by stereo and wireless sensor network fusion,"
Proceedings of the 8th International Conference on Information Fusion (FUSION), 8 pp.,
July 25-28, 2005, Philadelphia, Pennsylvania.
"Understanding Multi-Instrument Measurement Systems"
P. Bajcsy, Understanding Complex Systems Symposium, May 17-20, 2004, UIUC [Abstract]
Technical Reports
P. Bajcsy, R. Kooper, D. Scherba and M. Urban,
"Toward Hazard Aware Spaces: Knowing Where, When and What Hazards Occur,"
Technical Report NCSA-ISDA06-003, June 1, 2006.
P. Bajcsy, R. Kooper, M. Johnson and K. Soe,
"Toward Hazard Aware Spaces: Localization using Passive RFID Technology,"
Technical Report NCSA-ISDA06-002, May 25, 2006.
D. Scherba and P. Bajcsy, "Depth Estimation by Fusing Stereo and Wireless Sensor Locations,"
Technical Report NCSA-ALG04-0008, December 2004.
M. Urban, P. Bajcsy and R. Kooper,
"Recognition of Arm Gestures Using Multiple Orientation Sensors,"
Technical Report NCSA-ALG04-0004, July 2004.
D. Scherba and P. Bajcsy, "Communication Models for Monitoring Applications Using Wireless Sensor Networks",
Technical Report NCSA-ALG04-0003, April 2004.
S. Saha and P. Bajcsy,
"System Design Issues for Applications Using Wireless Sensor Networks",
Technical Report NCSA-ALG03-0003, August 2003.