F6-M Adaptive Manifold Learning for Multi-Sensor Translation and Fusion given Missing Data

PIs: Alina Zare, Paul Gader

The goal of this work is to translate streams of data from individual sensors into a shared manifold space for joint understanding and processing. The work includes investigation of computational topology for manifold learning, data summarization, and intrinsic dimensionality estimation. In practice, processing chains for a given application are generally developed for a particular sensor or set of sensors. However, across different regions of the world, the available data sets and sensor suites may vary and have different spatial, spectral, and temporal resolutions. Yet various sensors and processes often contain duplicate or corroborating information. For example, consider the task of determining the locations of all buildings in a rural area. This could be accomplished (with varying degrees of certainty) using LiDAR, hyperspectral imagery, or map data that includes building profiles. Rather than developing an individual processing chain for each sensor suite, we propose to develop a mechanism for mapping sensor data to a shared manifold space in which a single processing chain can be developed to achieve the desired goal. In this way, application implementations can be readily leveraged for any available data. This work will build on the team's ongoing research in methods for addressing uncertainty and imprecision [Jiao, 2017; Jiao, 2016; Glenn, 2015; Jiao, 2015; Du, 2016], map-guided scene understanding [Sun, 2017; Zou, 2017; Zou, 2016], and deep unmixing [Zhou, 2016; Won, 1995; Won, 1997; Gader, 1992; Kalantari, 2016; Heylen, 2016; Heylen, 2015; Gader, 1997].
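To make the shared-space idea concrete, the sketch below is a minimal illustration (not the proposed method) of mapping two sensors' feature streams into a common low-dimensional space. It uses a CCA-style linear alignment computed with NumPy SVDs as a hypothetical stand-in for the manifold learning to be developed; the function name `shared_embedding` and all parameters are assumptions for illustration only.

```python
import numpy as np

def shared_embedding(X, Y, k=2):
    """Illustrative stand-in for sensor translation: align two modalities'
    features (n x dx and n x dy, co-registered samples) into a shared
    k-dimensional space via a CCA-style linear alignment."""
    # Center each modality.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Whiten each modality with a thin SVD.
    Ux, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Yc, full_matrices=False)
    # Correlate the whitened coordinates and keep the top-k directions.
    U, s, Vt = np.linalg.svd(Ux.T @ Uy)
    Zx = Ux @ U[:, :k]      # sensor-1 samples in the shared space
    Zy = Uy @ Vt.T[:, :k]   # sensor-2 samples in the shared space
    return Zx, Zy, s[:k]    # s[:k] are the canonical correlations

# Synthetic demo: two "sensors" observing the same 2-D latent scene
# with different feature dimensions and noise.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))                       # shared latent structure
X = Z @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(200, 8))
Y = Z @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(200, 5))
Zx, Zy, corr = shared_embedding(X, Y, k=2)
```

In the shared space `Zx` and `Zy` are directly comparable, so a single downstream processing chain (e.g., a building detector) could operate on either sensor's embedded data; the actual project would replace the linear map with learned nonlinear manifold mappings robust to missing data.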