Tag: Natural Language Processing

F12-A Explainable Commonsense Question Answering

Current question answering systems are incapable of providing human-interpretable explanations or proofs to support their decisions. In this project, we propose general methods for answering commonsense questions while offering natural language explanations or supporting facts. In particular, we propose Copy-explainer, which generates natural language explanations that in turn help answer commonsense questions by leveraging structured and unstructured commonsense knowledge from external knowledge graphs and pre-trained language models. Furthermore, we propose Encyclopedia Net, a fact-level causal knowledge graph that facilitates commonsense reasoning for question answering.
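As a minimal sketch of the "structured commonsense knowledge from an external knowledge graph" idea, the snippet below retrieves ConceptNet-style triples whose entities overlap a question's keywords and renders them as natural language supporting facts. The toy triples and the keyword-overlap retrieval are illustrative assumptions, not the project's actual method.

```python
# Toy ConceptNet-style knowledge graph: (head, relation, tail) triples.
# These triples and the retrieval heuristic are illustrative assumptions.
KG = [
    ("umbrella", "UsedFor", "staying dry"),
    ("rain", "Causes", "getting wet"),
    ("towel", "UsedFor", "drying"),
]

def retrieve_facts(question, kg):
    """Return natural-language supporting facts for triples whose
    head or tail entity appears among the question's keywords."""
    words = set(question.lower().replace("?", "").split())
    hits = [t for t in kg if t[0] in words or t[2] in words]
    # Render each matched triple as a short natural-language fact.
    return [f"{h} is used for {t}" if r == "UsedFor" else f"{h} causes {t}"
            for h, r, t in hits]

facts = retrieve_facts("Why carry an umbrella in the rain?", KG)
```

A real system would replace the keyword match with entity linking and score the retrieved facts jointly with a pre-trained language model.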

F8-M TPR Learning: a Symbolic Neural Approach for Vision Language Intelligence

Deep learning (DL) has in recent years been widely used in computer vision and natural language processing (NLP) applications due to its superior performance. However, while images and natural languages are known to be rich in structures expressed, for example, by grammar rules, DL has so far not been capable of explicitly representing and enforcing such structures. In this project, we propose an approach to bridging this gap by exploiting tensor product representations (TPR), a structured neural-symbolic model developed in cognitive science, with the aim of integrating DL with explicit language rules, logical rules, or rules that summarize human knowledge about the subject.
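The core TPR operation can be illustrated in a few lines: each symbolic filler (e.g. a word) is bound to a structural role (e.g. a grammatical slot) via an outer product, the bindings are summed into one tensor, and a filler is recovered by multiplying with its role vector. The vectors below are toy assumptions; real systems learn them.

```python
import numpy as np

rng = np.random.default_rng(0)
d_f, d_r = 8, 8  # filler and role vector dimensions (illustrative)

# Random filler vectors for three words.
fillers = {w: rng.standard_normal(d_f) for w in ["dog", "bites", "man"]}
# Orthonormal role vectors make exact unbinding possible.
roles = dict(zip(["subj", "verb", "obj"], np.eye(d_r)))

# Bind: T = sum_i  f_i (outer) r_i  -- one tensor encodes the structure.
T = sum(np.outer(fillers[w], roles[r])
        for w, r in [("dog", "subj"), ("bites", "verb"), ("man", "obj")])

# Unbind: T @ r recovers the filler bound to role r (exact here
# because the role vectors are orthonormal).
recovered = T @ roles["subj"]
assert np.allclose(recovered, fillers["dog"])
```

With merely approximately orthogonal roles, unbinding is approximate, which is the usual regime when role vectors are learned end-to-end.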

O3-B Ontology-based Interpretable Deep Learning for Consumer Complaint Explanation and Analysis

In this project, we will design ontology-based interpretable deep models for consumer complaint explanation and analysis. The main idea of our algorithms is to incorporate domain knowledge into the design of deep learning models and to utilize domain ontologies for explaining the models and their results through causal modeling.

F2-M GraphBTM: Graph Enhanced VAE for Biterm Topic Model

GraphBTM is an unsupervised topic model for understanding documents: it learns to discover latent representations of documents and produces meaningful clusters of words within the same topic. The goal of GraphBTM is to overcome the limitations of Latent Dirichlet Allocation (LDA), which suffers from data sparsity in short texts, and of the Biterm Topic Model (BTM), which assumes a single, insufficient whole-corpus topic distribution.
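The biterm idea underlying BTM (and hence GraphBTM) can be sketched briefly: instead of per-document word counts, the model is fit on unordered word pairs that co-occur in the same short document, which aggregates co-occurrence signal across the corpus and eases short-text sparsity. The extraction step below is a minimal sketch under that definition, not the project's implementation.

```python
from itertools import combinations
from collections import Counter

def extract_biterms(docs):
    """Count unordered word pairs (biterms) co-occurring in the
    same short document, pooled over the whole corpus."""
    counts = Counter()
    for doc in docs:
        words = sorted(set(doc.lower().split()))  # unique words per doc
        for pair in combinations(words, 2):       # all unordered pairs
            counts[pair] += 1
    return counts

docs = ["apple fruit sweet", "apple phone screen", "fruit sweet juice"]
biterms = extract_biterms(docs)
```

BTM then draws each biterm from a topic sampled from a corpus-level topic distribution; GraphBTM instead encodes the biterm graph with a GCN and amortizes inference with a VAE.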

F3-M TPR Learning: a Symbolic Neural Approach for Vision Language Intelligence

Deep learning (DL) has in recent years been widely used in computer vision and natural language processing (NLP) applications due to its superior performance. However, while images and natural languages are known to be rich in structures expressed, for example, by grammar rules, DL has so far not been capable of explicitly representing and enforcing such structures [Huang18]. In this project, we propose an approach to bridging this gap by exploiting tensor product representations (TPR) [Smolensky90a, Smolensky90b], a structured neural-symbolic model developed in cognitive science, with the aim of integrating DL with explicit language rules, logical rules, or rules that summarize human knowledge about the subject.