Multi-View S2S
Official
GitHub - SALT-NLP/Multi-View-Seq2Seq: Source codes for the paper "Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization"
This repo contains code for the following paper: Jiaao Chen, Diyi Yang: Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization, EMNLP 2020. If you would like to refer to it, please cite the paper above. These instructions will get you running the code for Multi-View Conversation Summarization.
Summary
We propose a multi-view sequence-to-sequence model by first extracting conversational structures of unstructured daily chats from different views to represent conversations and then utilizing a multi-view decoder to incorporate different views to generate dialogue summaries.

- We propose to utilize rich conversational structures (i.e., structured views) together with generic views for abstractive conversation summarization.
- We design a multi-view sequence-to-sequence model that consists of a conversation encoder to encode different views and a multi-view decoder with multi-view attention to generate dialogue summaries.
- We perform experiments on a large-scale conversation summarization dataset (SAMSum) and demonstrate the effectiveness of our proposed methods.
- We conduct thorough error analyses and discuss specific challenges that current approaches face in this task.
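The multi-view decoding idea can be sketched in a few lines: at each step the decoder attends over each view's encoded sequence separately, then combines the per-view context vectors with normalized view weights. This is a simplified numpy illustration, not the repo's implementation; names like `multi_view_context` and the dummy view shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys):
    """Dot-product attention: context vector over one view's encoder outputs."""
    scores = softmax(keys @ query)          # (T,)
    return scores @ keys                    # (d,)

def multi_view_context(query, views, view_scores):
    """Combine per-view contexts with softmax-normalized view weights.

    query:       decoder state, shape (d,)
    views:       list of encoder outputs, each of shape (T_i, d)
    view_scores: unnormalized score per view, shape (n_views,)
    """
    contexts = np.stack([attend(query, v) for v in views])  # (n_views, d)
    weights = softmax(view_scores)                          # (n_views,)
    return weights @ contexts                               # (d,)

rng = np.random.default_rng(0)
q = rng.normal(size=8)
topic_view = rng.normal(size=(5, 8))    # e.g. a topic-segmented view
stage_view = rng.normal(size=(3, 8))    # e.g. a stage-segmented view
ctx = multi_view_context(q, [topic_view, stage_view], np.array([0.2, 0.8]))
print(ctx.shape)  # (8,)
```

In the paper the view weights come from a learned gate over the decoder state; here they are fixed scores purely for illustration.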
Architecture


Pre-Training


GitHub - UKPLab/sentence-transformers: Multilingual Sentence & Image Embeddings with BERT
This framework provides an easy method to compute dense vector representations for sentences, paragraphs, and images. The models are based on transformer networks like BERT / RoBERTa / XLM-RoBERTa etc. and achieve state-of-the-art performance in various tasks. Text is embedded in a vector space such that similar text is close and can efficiently be found using cosine similarity.
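Sentence embeddings like these are used to measure how similar adjacent utterances are when segmenting a conversation into views. A minimal sketch of the cosine-similarity step, with dummy vectors standing in for the output of `SentenceTransformer.encode` (so the block runs without a model download):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dummy 4-dim "sentence embeddings"; in practice these would come from
# sentence-transformers, e.g. model.encode(["Hi Tom!", "Hey!", "New topic..."]).
u1 = np.array([0.9, 0.1, 0.0, 0.3])
u2 = np.array([0.8, 0.2, 0.1, 0.4])
u3 = np.array([0.0, 0.9, 0.8, 0.1])

print(cosine_sim(u1, u2))  # similar adjacent utterances -> high
print(cosine_sim(u1, u3))  # topic shift -> lower
```

A drop in similarity between consecutive utterances is one signal that a topic boundary can be placed there.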
Experiments

GitHub - pltrdy/rouge: A full Python Implementation of the ROUGE Metric (not a wrapper)
This implementation is independent from the "official" ROUGE script (a.k.a. ROUGE-155). Results may be slightly different; see the discussion in issue #2. Clone & install: git clone https://github.com/pltrdy/rouge, cd rouge, then python setup.py install or pip install -U .
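The package above computes ROUGE-1/2/L. As a minimal illustration of what the metric measures (not the package's implementation), ROUGE-1 F1 is just the harmonic mean of unigram precision and recall between a generated and a reference summary:

```python
from collections import Counter

def rouge1_f1(hypothesis: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of clipped unigram precision and recall."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("tom will pick up the cake",
                "tom will pick up the cake at 5"))  # 6/7 ≈ 0.857
```

Real ROUGE toolkits additionally apply stemming and handle ROUGE-2 (bigrams) and ROUGE-L (longest common subsequence); this sketch skips all of that.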

Performance

Amazon Mechanical Turk
Access a global, on-demand, 24x7 workforce.

Further Readings
Natural Language Understanding with Sequence to Sequence Models
Natural Language Understanding (NLU), the technology behind conversational AI (chatbots, virtual assistants, augmented analytics), typically includes the intent classification and slot filling tasks, which aim to extract a semantic representation of user utterances. Intent classification focuses on predicting the intent of the query, while slot filling extracts semantic concepts from the query.
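A one-line illustration of the two tasks (the query and labels are made up, following the common ATIS-style convention): intent classification assigns a single label to the whole query, while slot filling labels spans inside it.

```python
query = "book a flight to Paris tomorrow"
intent = "book_flight"                                 # intent classification: one label per query
slots = {"destination": "Paris", "date": "tomorrow"}   # slot filling: labeled spans
print(intent, slots)
```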


Seq2Seq Model | Sequence To Sequence With Attention
Deep Learning at scale is disrupting many industries, powering chatbots and bots never seen before. Meanwhile, a person just starting out in Deep Learning would first read about the basics of neural networks and architectures like CNNs and RNNs.

