We are investigating how combining 3D stereo vision, touch and sound into a multi-modal virtual environment can improve interaction with and analysis of spatial (3D) data. We specifically focus on providing tools for planning new structures (such as schools or pipelines) within a system of spatial constraints. This planning task is typically performed within a GIS (geographic information system), where it is called suitability analysis. Our proof-of-concept virtual environment uses 3D stereo, force feedback and interactive sound. We take a set of typical GIS raster and vector data and drape the data over a touchable digital elevation model (3D terrain). In addition, the system lets the user configure a planning environment in which the relative importance of each GIS layer can be expressed via force and/or sound. For example, when digitizing a path, the user could model proximity to objects such as roads or houses as repulsion (with different intensities, depending on the object's importance) and hear land-cover values as sound (pitch). We can then replace multiple layers of potentially cluttering 2D maps with a combination of vision, force (gravity, friction) and sound (pitch, tempo, timbre) and facilitate the fusion of data streams from different sensory modalities. With the help of students in an ISU GIS class, we intend to formally evaluate this setup and compare it with traditional GIS suitability analysis.
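The layer-to-modality mappings described above could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the linear force falloff, the `max_range` cutoff and the frequency bounds are all assumptions chosen for the example.

```python
def repulsion_force(distance, importance, max_range=50.0):
    """Hypothetical repulsive force magnitude for an object (e.g. a road
    or house): decays linearly with distance and scales with the object's
    user-assigned importance. Zero beyond max_range (units arbitrary)."""
    if distance >= max_range:
        return 0.0
    return importance * (1.0 - distance / max_range)

def landcover_pitch(value, v_min=0.0, v_max=255.0, f_low=220.0, f_high=880.0):
    """Hypothetical mapping from a land-cover raster value to a pitch in Hz.
    Interpolates exponentially between f_low and f_high so that equal value
    steps produce equal musical intervals."""
    t = (value - v_min) / (v_max - v_min)
    return f_low * (f_high / f_low) ** t
```

For instance, an object with importance 2.0 would push back with force 2.0 at zero distance, fading to nothing at 50 units, while land-cover values spanning 0 to 255 would sweep two octaves from 220 Hz to 880 Hz.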
Date of Conference: 2-3 Oct. 2004