About the Tutorial
Understanding natural language interactions in realistic settings requires models that can
deal with noisy textual inputs, reason about the dependencies between different textual elements,
and leverage the dependencies between textual content and the context from which it emerges.
Neural-symbolic approaches, which combine highly expressive neural representations with symbolic reasoning
capabilities, are a natural fit for these settings. Despite this, most current NLP work focuses solely on
learning neural representations.
In this tutorial, we will motivate neural-symbolic modeling as a general approach for a wide range of
natural language scenarios. We will review several recently proposed approaches designed around different
NLP domains, such as knowledge-base completion, quantitative reasoning, question-answering, relation extraction,
and grounding text in images. We will propose a general formulation for neural-symbolic modeling and discuss
the key research challenges and opportunities for NLP tasks that can drive future research directions. Finally,
we will provide an interactive hands-on demonstration showing how to model a complex natural language domain
using a declarative neural-symbolic framework.