PIs: Alina Zare and Paul Gader
This project aims to translate streams of data from individual sensors into a shared manifold space for joint understanding and processing. The work includes an investigation of computational topology and contrastive learning for manifold learning. In practice, processing chains are generally developed for a particular sensor or set of sensors and a given application. However, across different regions of the world, the available data sets and sensor suites vary in their spatial, spectral, and temporal resolutions. Yet multiple sensors and processing chains often carry duplicate or corroborating information. For example, consider the task of locating all of the buildings in a rural area. This could be accomplished, with varying degrees of certainty, using LiDAR, SAR, hyperspectral imagery, or map data that includes building footprints.

So, instead of developing individual sensor-specific processing pipelines, we propose a mechanism for mapping sensor data into a shared manifold space in which only a single processing chain needs to be developed to achieve the desired goal. In this way, application implementations can be readily applied to whatever data are available. This work will leverage the team's ongoing research on methods for addressing uncertainty and imprecision. Additionally, fusing multiple sensors will enable more robust performance on the desired task.
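To make the shared-embedding idea concrete, the sketch below (not the project's implementation) aligns two hypothetical sensor encoders, one for hyperspectral pixels and one for LiDAR patch features, into a common embedding space using an InfoNCE-style contrastive loss over co-registered sample pairs. All architectures, dimensions, and the random training data are illustrative assumptions.

```python
# Minimal sketch of contrastive alignment of two sensor modalities into a
# shared embedding space. Encoder sizes and data are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorEncoder(nn.Module):
    """Maps one sensor's feature vectors into the shared embedding space."""
    def __init__(self, in_dim, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss: co-registered pairs attract, mismatched pairs repel."""
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Hypothetical input sizes: 200-band hyperspectral pixels, 16-dim LiDAR patch features.
enc_hsi, enc_lidar = SensorEncoder(200), SensorEncoder(16)
opt = torch.optim.Adam(list(enc_hsi.parameters()) + list(enc_lidar.parameters()), lr=1e-3)

for step in range(100):
    # In practice these would be co-registered observations of the same ground locations;
    # random tensors stand in for them here.
    hsi = torch.randn(64, 200)
    lidar = torch.randn(64, 16)
    loss = info_nce(enc_hsi(hsi), enc_lidar(lidar))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this kind of setup, a downstream task such as building detection could be trained once on the shared embeddings and then applied to whichever sensor happens to be available for a given region.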