Left to right: Risto Miikkulainen, Gary Marcus, Greg Wayne, Stephen Muggleton, Antoine Bordes, Leo de Penning, Tarek R. Besold, Dan Roth, Barbara Hammer, Alessio Lomuscio, Michael Witbrock, Jay McClelland, Artur d'Avila Garcez, James Davidson, Isaac Noble, Kristina Toutanova
"Top-Down and Bottom-Up Interactions between Low-Level Reactive Control and Symbolic Rule Learning in Embodied Agents" (Clement Moulin-Frier, Xerxes Arsiwalla, Jordi-Ysard Puigbo, Marti Sanchez-Fibla, Armin Duff, Paul Verschure)
"Accuracy and Interpretability Trade-offs in Machine Learning Applied to Safer Gambling" (Sanjoy Sankar, Tillman Weyde, Artur D'Avila Garcez, Gregory Slabaugh, Simo Dragicevic, Chris Percy)
"A Simple but Tough-to-Beat Baseline for Sentence Embeddings" (Sanjeev Arora, Yingyu Liang, Tengyu Ma)
"MS MARCO: A Human-Generated MAchine Reading COmprehension Dataset" (Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, Li Deng)
11:50 - 12:45 Poster session
=== 12:45 - 14:00 Lunch break ===
14:00 - 14:20 "Variable binding through assemblies in spiking neural networks" (Robert Legenstein, Christos Papadimitriou, Santosh Vempala, Wolfgang Maass)
14:20 - 14:40 "Pre-Wiring and Pre-Training: What does a neural network need to learn truly general identity rules?" (Raquel Alhama, Willem Zuidema)
14:40 - 15:00 "ReasoNet: Learning to Stop Reading in Machine Comprehension" (Yelong Shen, Po-Sen Huang, Jianfeng Gao, Weizhu Chen)
=== 15:00 - 15:30 Coffee Break ===
15:30 - 16:00 Invited talk Dan Roth (University of Illinois at Urbana-Champaign, USA)
16:00 - 17:25 Panel on "Explainable AI" (Yoshua Bengio, Marco Gori, Alessio Lomuscio, Gary Marcus, Stephen Muggleton, Michael Witbrock)
Additional support generously provided by Facebook.
Mission Statement
While early work on knowledge representation and inference was primarily symbolic, the corresponding approaches subsequently fell out of favor, and were largely supplanted by connectionist methods. In this workshop, we will work to close the gap between the two paradigms, and aim to formulate a new unified approach that is inspired by our current understanding of human cognitive processing. This is important to help improve our understanding of Neural Information Processing and build better Machine Learning systems, including the integration of learning and reasoning in dynamic knowledge-bases, and reuse of knowledge learned in one application domain in analogous domains.
The workshop brings together established leaders and promising young scientists in the fields of neural computation, logic and artificial intelligence, knowledge representation, natural language understanding, machine learning, cognitive science and computational neuroscience. Invited lectures by senior researchers will be complemented with presentations based on contributed papers reporting recent work (following an open call for papers) and a poster session, giving ample opportunity for participants to interact and discuss the complementary perspectives and emerging approaches.
The workshop targets a single broad theme of general interest to the vast majority of the NIPS community: translations between connectionist models and symbolic knowledge representation and reasoning, with the goal of achieving an effective integration of neural learning and cognitive reasoning, known as neural-symbolic computing. Neural-symbolic computing is now an established research topic relevant to almost everyone studying neural information processing.
Keywords: neural-symbolic computing; language processing; cognitive agents; multimodal learning; deep networks; symbol manipulation; variable binding; memory-based networks; dynamic knowledge-bases; integration of learning and reasoning; explainable AI.
Call for Papers
We invite submission of papers dealing with topics related to the research questions discussed in the workshop. The reported work can range from theoretical/foundational research to reports on applications and/or implemented systems.
We also explicitly encourage the submission of more controversial papers, which can serve as a basis for open discussion during the event.
Besides the keywords listed above, possible topics of interest include, but are by no means limited to:
The representation of symbolic knowledge by connectionist systems;
Neural learning theory;
Integration of logic and probabilities, e.g., in neural networks, but also more generally;
Structured learning and relational learning in neural networks;
Logical reasoning carried out by neural networks;
Integrated neural-symbolic approaches;
Extraction of symbolic knowledge from trained neural networks.
Submissions are limited to at most eight pages; an additional ninth page containing only cited references is allowed. Shorter papers are also expressly welcome.
Reviewing will be single-blind, i.e., you are free (but not obliged) to indicate your name and affiliation on the paper.
Please note that at least one author of each accepted paper must register for the event and be available to present the paper at the workshop.
Publication
Accepted papers will be published in official workshop proceedings submitted to CEUR-WS.org.
Authors of the best papers will be invited to submit a revised and extended version of their papers to a journal special issue after the workshop.
Important Dates
Deadline for paper submission (EXTENDED): October 14, 2016
Notification of paper acceptance (DELAYED): November 2, 2016
Camera-ready paper due: November 14, 2016
Workshop date: December 9, 2016
NIPS 2016 main conference: December 5-8, 2016
Admission
The workshop is open to anybody; please register via NIPS 2016.