%I Association for Computational Linguistics
%L discovery10153278
%J Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
%X In this paper, we aim to improve abstractive dialogue summarization quality and, at the same time, enable granularity control. Our model has two primary components: 1) a two-stage generation strategy that first produces a preliminary summary sketch serving as the basis for the final summary. This sketch provides a weakly supervised signal in the form of pseudo-labeled interrogative pronoun categories and key phrases extracted with a constituency parser. 2) a simple strategy to control the granularity of the final summary: the model can automatically determine, or be directed to control, the number of generated summary sentences for a given dialogue by predicting and highlighting different text spans in the source text. Our model achieves state-of-the-art performance on SAMSum, the largest dialogue summarization corpus, with a ROUGE-L score as high as 50.79. In addition, we conduct a case study showing competitive human evaluation results and controllability comparable to human-annotated summaries.
%O © 2022 ACL. Original content in this paper is licensed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/).
%A Chien-Sheng Wu
%A Linqing Liu
%A Wenhao Liu
%A Pontus Stenetorp
%A Caiming Xiong
%T Controllable Abstractive Dialogue Summarization with Sketch Supervision
%B Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
%P 5108-5122
%D 2021