Current question answering systems are incapable of providing human-interpretable explanations or proofs to support their decisions. In this project, we propose general methods for answering commonsense questions while offering natural language explanations or supporting facts. In particular, we propose Copy-explainer, which generates natural language explanations that subsequently help answer commonsense questions by leveraging structured and unstructured commonsense knowledge from external knowledge graphs and pre-trained language models. Furthermore, we propose Encyclopedia Net, a fact-level causal knowledge graph that facilitates commonsense reasoning for question answering.
Tag: Natural Language Processing
O3-N Understand Transcripts with Natural Language Processing and Deep Learning
We propose comprehensive resources and models for understanding automatically transcribed videos. In particular, in this project, we pursue deep learning models for identifying the important points and questions mentioned in a video transcript. To achieve this objective, we employ two specific deep learning models.
O3-B Ontology-based Interpretable Deep Learning for Consumer Complaint Explanation and Analysis
In this project, we will investigate ontology-based explainable deep learning (OBDL) algorithms for textual data to identify the key factors, concepts, and hypotheses that significantly contribute to the decisions made by deep neural networks.
F8-M TPR Learning: a Symbolic Neural Approach for Vision Language Intelligence
Deep learning (DL) has in recent years been widely used in computer vision and natural language processing (NLP) applications due to its superior performance. However, while images and natural languages are known to be rich in structures expressed, for example, by grammar rules, DL has so far not been capable of explicitly representing and enforcing such structures. In this project, we propose an approach to bridging this gap by exploiting tensor product representations (TPR), a structured neural-symbolic model developed in cognitive science, aiming to integrate DL with explicit language rules, logical rules, or rules that summarize the human knowledge about the subject.
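The core TPR operation can be sketched as follows. In a TPR, a structure is encoded as a sum of outer products binding each filler vector (a symbol) to a role vector (its structural position); with orthonormal roles, any filler can be recovered exactly by unbinding. The dimensions and vectors below are illustrative, not taken from the project:

```python
import numpy as np

# Tensor product representation (TPR) sketch:
#   T = sum_i outer(f_i, r_i)
# binds each filler vector f_i to a role vector r_i.
# Dimensions and data here are illustrative only.

rng = np.random.default_rng(0)
d_f, d_r = 8, 4  # filler and role dimensions (assumed)

# Orthonormal role vectors make exact unbinding possible.
roles, _ = np.linalg.qr(rng.standard_normal((d_r, d_r)))
fillers = rng.standard_normal((3, d_f))  # three symbols to bind

# Bind: sum of outer products, giving one tensor of shape (d_f, d_r).
T = sum(np.outer(f, roles[i]) for i, f in enumerate(fillers))

# Unbind: T @ r_i recovers filler i because the roles are orthonormal.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))  # True
```

This additive bind/unbind scheme is what lets symbolic structure (e.g. a parse tree's positions as roles) live inside continuous tensors that gradient-based DL can operate on.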
O3-B Ontology-based Interpretable Deep Learning for Consumer Complaint Explanation and Analysis
In this project, we will design ontology-based interpretable deep models for consumer complaint explanation and analysis. The main idea of our algorithms is to incorporate domain knowledge into the design of deep learning models and to utilize domain ontologies for explaining the deep learning models and their results through causal modeling.
O4-N Multilingual Knowledge Alignment with Embedding Representation Learning
In this project, we will develop innovative medical embedding learning algorithms and perform knowledge alignment across multiple languages in the medical domain. Our learning algorithms will fully exploit semantic similarity for knowledge alignment across languages.
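As one minimal illustration of embedding-based alignment (not the project's actual method), a classical baseline is orthogonal Procrustes: given paired embeddings X (source language) and Y (target language), find the rotation W minimizing ||XW - Y||_F via an SVD of X^T Y. All names and data below are assumptions for the sketch:

```python
import numpy as np

# Orthogonal Procrustes baseline for cross-lingual embedding alignment.
# Illustrative only: the paired embeddings are synthetic.

rng = np.random.default_rng(1)
d, n = 5, 50
true_W, _ = np.linalg.qr(rng.standard_normal((d, d)))  # hidden rotation
X = rng.standard_normal((n, d))   # source-language embeddings
Y = X @ true_W                    # paired target-language embeddings

# Closed-form solution: W = U V^T from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(X @ W, Y))  # True: the rotation is recovered exactly
```

Real alignment with little or no parallel corpus typically replaces the paired data with a small seed dictionary or an unsupervised matching step, but the mapping learned at the end has this same form.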
F2-M GraphBTM: Graph Enhanced VAE for Biterm Topic Model
GraphBTM is an unsupervised topic model for understanding documents. It learns to discover the latent representation of documents and produces meaningful clusterings of words within the same topic. The goal of GraphBTM is to overcome the limitations of Latent Dirichlet Allocation (LDA), which suffers from the data sparsity problem on short texts, and of the Biterm Topic Model (BTM), whose single whole-corpus topic distribution can be insufficiently expressive.
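The biterm idea underlying BTM and GraphBTM can be sketched briefly: each short document is decomposed into unordered word pairs ("biterms"), so word co-occurrence is modeled directly rather than per-document, which mitigates sparsity in short texts. The toy corpus below is purely illustrative:

```python
from itertools import combinations
from collections import Counter

# Extract biterms (unordered word pairs) from short documents,
# the basic preprocessing step behind BTM-style topic models.
# The corpus is a toy example, not project data.

docs = [["graph", "topic", "model"],
        ["topic", "model", "sparse"],
        ["graph", "neural", "network"]]

biterms = Counter()
for doc in docs:
    # sorted(set(doc)) gives a canonical order so each pair counts once.
    for w1, w2 in combinations(sorted(set(doc)), 2):
        biterms[(w1, w2)] += 1

print(biterms[("model", "topic")])  # 2: the pair co-occurs in two documents
```

BTM then fits a topic model over this corpus-level biterm set; GraphBTM's contribution is to enhance this with a graph-structured VAE over biterm co-occurrence.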
F3-M TPR Learning: a Symbolic Neural Approach for Vision Language Intelligence
Deep learning (DL) has in recent years been widely used in computer vision and natural language processing (NLP) applications due to its superior performance. However, while images and natural languages are known to be rich in structures expressed, for example, by grammar rules, DL has so far not been capable of explicitly representing and enforcing such structures [Huang18]. In this project, we propose an approach to bridging this gap by exploiting tensor product representations (TPR) [Smolensky90a, Smolensky90b], a structured neural-symbolic model developed in cognitive science, aiming to integrate DL with explicit language rules, logical rules, or rules that summarize the human knowledge about the subject.
O3-B Ontology-based Interpretable Deep Learning for Consumer Complaint Explanation and Analysis
The goal of this project is to design ontology-based interpretable deep models for consumer complaint explanation and analysis.
O4-N Multilingual Knowledge Alignment with Embedding Representation Learning
The goal of this project is to create a novel “Multilingual Knowledge Alignment” method in the medical domain that requires little or no parallel corpus, in order to enhance medical knowledge resources in Chinese.