O2-M Ontology-based Deep Learning with Explanation for Human Behavior Prediction

PI: Thien Nguyen

In this project, we will investigate ontology-based deep learning (OBDL) algorithms to predict and explain human behaviors in health domains. The main idea of our algorithms is to incorporate domain knowledge into the design of deep learning models and to utilize domain ontologies to explain the models and their results. We will focus on specific applications in behavior and temporal prediction in the health domain. We will extend our ontology-based deep learning architecture from RBMs to other deep learning models, such as RNNs and LSTMs, that capture temporal information. We will extend OBDL to other application domains related to human behavior prediction, such as Electronic Health Records (with PeaceHealth), Drug Information (with Eli Lilly), and Social Media (with Baidu). We will utilize ontologies to provide more meaningful (semantic) explanations of the deep learning models and their results. We will compare our algorithms with state-of-the-art human behavior prediction models that do not use ontologies.

Common deep learning architectures take a flat representation of features as input. This introduces bias into representation learning, since uncorrelated features are treated the same as correlated features in the same learning process. There is therefore a pressing need for models that can learn feature representations guided by domain knowledge.
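One way to realize this idea is to constrain a model's connectivity with the ontology, so that each hidden unit only connects to the features grouped under a single domain concept. The following is a minimal sketch of that masking scheme, assuming a toy hand-written ontology; the concept names, feature groupings, and one-unit-per-concept layout are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

# Hypothetical toy ontology: feature indices grouped under domain concepts.
# A real ontology (e.g. a clinical vocabulary) would supply these groupings.
ontology_groups = {
    "vitals":    [0, 1, 2],  # e.g. heart rate, blood pressure, temperature
    "lifestyle": [3, 4],     # e.g. smoking status, exercise frequency
}

n_features = 5
n_hidden = len(ontology_groups)  # one hidden unit per ontology concept (an assumption)

# Binary mask: hidden unit h connects only to the features its concept covers,
# so correlated features share a unit and uncorrelated features do not interact.
mask = np.zeros((n_features, n_hidden))
for h, (concept, feats) in enumerate(ontology_groups.items()):
    mask[feats, h] = 1.0

rng = np.random.default_rng(0)
W = rng.normal(size=(n_features, n_hidden)) * mask  # ontology-constrained weights

def hidden_activation(v):
    """Sigmoid activations of the ontology-grouped hidden units (RBM-style)."""
    return 1.0 / (1.0 + np.exp(-(v @ W)))

x = rng.normal(size=n_features)
h = hidden_activation(x)  # one activation per domain concept
```

During training, the same mask would be applied after every gradient update so that cross-concept weights stay zero; the masked weight matrix is what makes the learned hidden units interpretable via the ontology's concepts.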