Show Us the Way: Learning to Manage Dialog from Demonstrations

We present our submission to the End-to-End Multi-Domain Dialog Challenge Track of the Eighth Dialog System Technology Challenge. Our proposed dialog system adopts a pipeline architecture, with distinct components for Natural Language Understanding, Dialog State Tracking, Dialog Management and Natural Language Generation. At the core of our system is a reinforcement learning algorithm which uses Deep Q-learning from Demonstrations to learn a dialog policy with the help of expert examples. We find that demonstrations are essential to training an accurate dialog policy when both the state and action spaces are large. Evaluation of our Dialog Management component shows that our approach is effective, outperforming both supervised and reinforcement learning baselines.

[paper]
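The large-margin supervised loss that Deep Q-learning from Demonstrations adds on top of the usual temporal-difference objective can be sketched as follows. This is a minimal illustration only: the margin value is a placeholder, and the paper's full combined loss (TD, n-step and regularization terms) is not reproduced here.

```python
import numpy as np

def dqfd_margin_loss(q_values, expert_action, margin=0.8):
    """Large-margin classification loss from DQfD (Hester et al.):
    J_E(Q) = max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) = margin for a != a_E and 0 otherwise.
    It pushes the expert's action to score at least `margin` above
    every other action; the margin value here is illustrative."""
    q = np.asarray(q_values, dtype=float)
    l = np.full_like(q, margin)
    l[expert_action] = 0.0            # no penalty on the expert's own action
    return float(np.max(q + l) - q[expert_action])

# Zero loss once the expert action dominates by the margin:
zero_loss = dqfd_margin_loss([1.0, 2.0], expert_action=1)   # -> 0.0
# Positive loss when another action scores too high (≈ 1.8 here):
pos_loss = dqfd_margin_loss([1.0, 2.0], expert_action=0)
```

Because the loss is zero whenever the expert's Q-value already dominates, demonstrations shape the policy early in training without constraining it once it matches the expert.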

Audio Visual Scene-Aware Dialog System Using Dynamic Memory Networks

The audio visual scene-aware dialog (AVSD) task, proposed as one of the tracks in the Eighth Dialog System Technology Challenge (DSTC8), is a multimodal dialog task which aims to automatically generate a response to an input question about the content of a video clip in the context of a given dialog. In this paper, we propose a number of models for this task based on dynamic memory networks (DMNs). Compared to the baseline model released by the AVSD organizers, our DMN-based AVSD model with a single modality achieves performance improvements of more than 4.2% in the BLEU-4 score and 18.1% in the CIDEr score, demonstrating the effectiveness of DMNs for encoding long-term context information in dialog tasks. We also present a multimodal variant of the DMN-based model which incorporates all modalities.

[paper]
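As a rough illustration of the episodic-memory mechanism underlying DMNs, one pass can be sketched like this. This is a simplified sketch, not the models from the paper: real DMNs use learned gating networks and a GRU-based memory update rather than the additive update shown here.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dmn_memory_pass(facts, question, memory):
    """One simplified episodic-memory pass: each fact vector is gated by
    its relevance to the question and the current memory, and the
    attention-weighted episode updates the memory."""
    scores = np.array([f @ question + f @ memory for f in facts])
    gates = softmax(scores)                       # attention over facts
    episode = sum(g * f for g, f in zip(gates, facts))
    return memory + episode                       # real DMNs use a GRU here

facts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
m = dmn_memory_pass(facts, question=np.array([1.0, 0.0]), memory=np.zeros(2))
# the fact aligned with the question receives the larger gate
```

Running several such passes lets the memory attend to different facts on each pass, which is what makes DMNs suited to long dialog histories.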

LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories

We explore the capabilities of a bidirectional LSTM model to learn representations of word senses from semantically annotated corpora. We show that using an architecture that is aware of word order, such as an LSTM, enables us to create better representations. We assess our proposed model on various standard benchmarks for evaluating semantic representations, reaching state-of-the-art performance on the SemEval-2014 word-to-sense similarity task.

[site] [paper] [presentation]
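The abstract's central claim, that an order-aware encoder can separate contexts a bag-of-words model conflates, can be illustrated with a toy recurrent encoder. This is purely illustrative: a plain RNN cell with random, untrained weights stands in for the paper's bidirectional LSTM trained on a sense-annotated corpus.

```python
import numpy as np

rng = np.random.default_rng(0)
embed = rng.normal(scale=0.1, size=(3, 4))   # toy embeddings for a 3-word vocab
W_h = rng.normal(scale=0.5, size=(4, 4))     # recurrent weights (untrained)

def rnn_encode(token_ids):
    """Order-aware context encoder (plain RNN cell; LSTMEmbed itself
    uses a bidirectional LSTM)."""
    h = np.zeros(4)
    for t in token_ids:
        h = np.tanh(embed[t] + W_h @ h)
    return h

# Reordered contexts receive different encodings...
fwd = rnn_encode([0, 1, 2])
rev = rnn_encode([2, 1, 0])
# ...while a bag-of-words average cannot tell them apart:
bow_fwd = embed[[0, 1, 2]].mean(axis=0)
bow_rev = embed[[2, 1, 0]].mean(axis=0)
```

Since many sense distinctions hinge on word order ("river bank loan" vs. "bank loan river"), this sensitivity is what the averaged-context baselines lack.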

Embedding Words and Senses Together via Joint Knowledge-Enhanced Training

We propose a new model that jointly learns word and sense embeddings and represents them in a unified vector space by exploiting large corpora and knowledge obtained from semantic networks. We evaluate the main features of our approach qualitatively and quantitatively on various tasks, highlighting the advantages of the proposed method over state-of-the-art word- and sense-based models.

[site] [source] [paper] [poster]
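The idea of placing words and their senses in one shared vector space can be sketched with a toy update rule. Everything here is illustrative: the vocabulary entries are hypothetical, there is no negative sampling, and the objective is far simpler than the knowledge-enhanced training in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
# words and senses live in a single vector space (hypothetical entries)
vecs = {k: rng.normal(scale=0.1, size=dim)
        for k in ("bank", "bank_river", "water")}
ctx_water = rng.normal(scale=0.1, size=dim)   # toy context vector for "water"

def joint_step(word, sense, context_vec, lr=0.5):
    """On a sense-annotated occurrence, pull BOTH the surface word and
    its tagged sense toward the same context vector, tying the two
    kinds of embedding together in one space."""
    for key in (word, sense):
        vecs[key] += lr * (context_vec - vecs[key])

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for _ in range(20):                      # repeated co-occurrence with "water"
    joint_step("bank", "bank_river", ctx_water)
# "bank" and its river sense now point in nearly the same direction
```

Because word and sense vectors are trained against the same contexts, similarities between a word and any sense (its own or another word's) become directly comparable, which is what a unified space buys.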

[Tutorial] Semantic Representations of Word Senses and Concepts

This tutorial will first provide a brief overview of the recent literature concerning word representation (both count-based and neural-network-based). It will then describe the advantages of moving from the word level to the deeper level of word senses and concepts, providing an extensive review of state-of-the-art systems. Approaches covered will not only include those which draw upon knowledge resources such as WordNet, Wikipedia, BabelNet or Freebase as reference, but also the so-called multi-prototype approaches which learn sense distinctions by using different clustering techniques. Our tutorial will discuss the advantages and potential limitations of all approaches, showing their most successful applications to date. We will conclude by presenting current open problems and lines of future work.

[presentation] [paper]