
Multi-Task Learning

Papers

TOASTS: Analyzing Multi-Task Learning for Abstractive Text Summarization

https://arxiv.org/pdf/2210.14606.pdf

Multi-task learning for abstractive text summarization with key information guide network

https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1186/s13634-020-00674-7.pdf

Summaries

Further Readings

An Overview of Multi-Task Learning in Deep Neural Networks
This post gives a general overview of the current state of multi-task learning and is also available as a review article on arXiv. Its starting point is that in machine learning (ML) we typically care about optimizing for a particular metric, whether that is a score on a certain benchmark or a business KPI.
https://ruder.io/multi-task/
An Overview of Multi-Task Learning
As a promising area in machine learning, multi-task learning (MTL) aims to improve the performance of multiple related learning tasks by leveraging useful information shared among them. The paper gives an overview of MTL, starting from a definition; a minimal code sketch of the basic idea follows below.
https://academic.oup.com/nsr/article/5/1/30/4101432
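A common concrete realization of this idea, described in Ruder's post as hard parameter sharing, is a single shared encoder feeding task-specific heads, trained on a joint loss so that every task's gradients update the shared parameters. The sketch below is a minimal illustration only, not code from either reference; the layer sizes, the two toy classification tasks, and the 0.5 loss weight are assumptions chosen for the example.

```python
# Minimal sketch of hard parameter sharing for multi-task learning:
# one shared encoder, two task-specific heads, a weighted joint loss.
# All shapes, task names, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes_a=3, n_classes_b=5):
        super().__init__()
        # Parameters in `shared` receive gradients from both tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, n_classes_a)  # head for task A
        self.head_b = nn.Linear(hidden, n_classes_b)  # head for task B

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)

model = SharedEncoderMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: the same inputs carry one label set per task.
x = torch.randn(16, 32)
y_a = torch.randint(0, 3, (16,))
y_b = torch.randint(0, 5, (16,))

logits_a, logits_b = model(x)
loss = loss_fn(logits_a, y_a) + 0.5 * loss_fn(logits_b, y_b)  # weighted joint loss
opt.zero_grad()
loss.backward()
opt.step()
```

How the per-task losses are weighted is itself a design choice; a fixed weight, as used here, is the simplest option.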
Too long, didn't read: AI for Text Summarization and Generation of tldrs
TLDR (or TL;DR) is a common internet acronym for "Too Long; Didn't Read." It likely originated on the comedy forum Something Awful around 2002 and then became more popular in online forums like Reddit.
https://towardsdatascience.com/too-long-didnt-read-ai-for-text-summarization-and-generation-of-tldrs-dc020590aed8