Abstract |
---|
Neural sequence-to-sequence models have been successfully applied to abstractive text summarization. However, summarization is inherently hierarchical: a human summarizer reads the source text several times and abstracts information at multiple levels, whereas the basic sequence-to-sequence model has no corresponding multi-level structure. To address this, we propose a novel multi-level encoder that captures information from the text at different levels. Experiments show that our model outperforms the baseline by 2 ROUGE points. |
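The abstract names a multi-level encoder but this record gives no architectural details. Below is a minimal sketch of one common way to realize such an encoder: a word-level RNN over each sentence feeding a sentence-level RNN over the document. The GRU cells, the mean-pooling step, the layer sizes, and the two-level word/sentence split are all illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn


class MultiLevelEncoder(nn.Module):
    """Hierarchical encoder sketch: word-level then sentence-level RNNs."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Level 1: word-level encoder, run independently over each sentence.
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        # Level 2: sentence-level encoder, run over per-sentence vectors.
        self.sent_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True,
                               bidirectional=True)

    def forward(self, docs):
        # docs: (batch, n_sents, n_words) token ids.
        b, s, w = docs.shape
        emb = self.embed(docs.view(b * s, w))           # (b*s, w, emb)
        word_states, _ = self.word_rnn(emb)             # (b*s, w, 2*hid)
        # Mean-pool word states into one vector per sentence
        # (the pooling choice is also an assumption).
        sent_vecs = word_states.mean(dim=1).view(b, s, -1)
        sent_states, _ = self.sent_rnn(sent_vecs)       # (b, s, 2*hid)
        # Return both levels so a decoder could attend over each.
        return word_states.view(b, s, w, -1), sent_states


# Hypothetical usage: 2 documents, 4 sentences each, 12 tokens per sentence.
enc = MultiLevelEncoder(vocab_size=30000)
docs = torch.randint(0, 30000, (2, 4, 12))
word_states, sent_states = enc(docs)
print(word_states.shape, sent_states.shape)  # (2, 4, 12, 512) (2, 4, 512)
```

Exposing both representation levels to the decoder's attention is one plausible reading of "abstracting information at multiple levels"; the paper may combine the levels differently.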
Year | Venue | Field |
---|---|---|
2017 | 2017 IEEE Symposium Series on Computational Intelligence (SSCI) | Automatic summarization, Logic gate, Task analysis, Computer science, Speech recognition, Encoder, Source text, Decoding methods, Vocabulary
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Junshuai Liu | 1 | 0 | 0.34 |
Xin Xin | 2 | 58 | 7.73 |
Li Li | 3 | 1 | 0.70 |
Shaozhuang Liu | 4 | 1 | 0.70 |
Xiaoyu Ma | 5 | 1 | 0.70 |