[1] Supreme People's Court. Provisions of the Supreme People's Court on the publication of judgments on the Internet by the People's Courts (2016 revision) [EB/OL]. (2016-10-01)[2022-07-02]. https://pkulaw.com/.
[2] WIDYASSARI A P, RUSTAD S, SHIDIK G F, et al. Review of automatic text summarization techniques & methods[J]. Journal of King Saud University (Computer and Information Sciences), 2022, 34(4): 1029-1046.
[3] MORATANCH N, CHITRAKALA S. A survey on extractive text summarization[C]//Proceedings of the 2017 International Conference on Computer, Communication and Signal Processing, 2017: 1-6.
[4] AKHIL R. Extractive text summarization by deep learning[J]. arXiv:1708.04439, 2017.
[5] ZHANG Y, LI D, WANG Y H, et al. Abstract text summarization with a convolutional seq2seq model[J]. Applied Sciences, 2019, 9(8): 1665.
[6] LUO D, XING C P. Logical formula of trial in civil and commercial cases[N/OL]. People's Court Newspaper, (2018-04-04)[2022-07-02]. http://rmfyb.chinacourt.org/paper/html/2018-04/04/content_137571.htm.
[7] HELLI B, MOGHADDAM M E. A text-independent Persian writer identification system using LCS based classifier[C]//Proceedings of the 2008 IEEE International Symposium on Signal Processing and Information Technology, 2008: 203-206.
[8] LUHN H P. The automatic creation of literature abstracts[J]. IBM Journal of Research and Development, 1958, 2(2): 159-165.
[9] MIHALCEA R, TARAU P. TextRank: bringing order into text[C]//Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004: 404-411.
[10] PAGE L, BRIN S, MOTWANI R, et al. The PageRank citation ranking: bringing order to the Web[R]. Stanford Digital Library Technologies Project, 1998.
[11] XIAO W, CARENINI G. Extractive summarization of long documents by combining global and local context[J]. arXiv:1909.08089, 2019.
[12] KIM Y, RUSH A M. Sequence-level knowledge distillation[J]. arXiv:1606.07947, 2016.
[13] KIM M, SINGH M D, LEE M. Towards abstraction from extraction: multiple timescale gated recurrent unit for summarization[J]. arXiv:1607.00718, 2016.
[14] SEE A, LIU P J, MANNING C D. Get to the point: summarization with pointer-generator networks[J]. arXiv:1704.04368, 2017.
[15] FARZINDAR A, LAPALME G. LetSum, an automatic legal text summarizing[C]//Proceedings of the 17th Annual Conference on Legal Knowledge and Information Systems. Amsterdam: IOS Press, 2004: 11-18.
[16] HACHEY B, GROVER C. Extractive summarisation of legal texts[J]. Artificial Intelligence and Law, 2006, 14(4): 305-345.
[17] POLSLEY S, JHUNJHUNWALA P, HUANG R. CaseSummarizer: a system for automated summarization of legal texts[C]//Proceedings of the 26th International Conference on Computational Linguistics: System Demonstrations, 2016: 258-262.
[18] REN P J, CHEN Z M, REN Z C, et al. Leveraging contextual sentence relations for extractive summarization using a neural attention model[C]//Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2017: 95-104.
[19] ZHANG Y, WANG Z Q, WANG H L. Single document extractive summarization with satellite and nuclear relations[J]. Journal of Chinese Information Processing, 2019, 33(8): 67-76.
[20] WU R S, ZHANG Y F, WANG H L, et al. Abstractive summarization based on hierarchical structure[J]. Journal of Chinese Information Processing, 2019, 33(8): 90-98.
[21] LI Y Y, KRISHNAMURTHY R, RAGHAVAN S, et al. Regular expression learning for information extraction[C]//Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, 2008: 21-30.
[22] CUI L Y, ZHANG Y. Hierarchically-refined label attention network for sequence labeling[J]. arXiv:1908.08676, 2019.
[23] TSOUMAKAS G, KATAKIS I. Multi-label classification: an overview[J]. International Journal of Data Warehousing and Mining, 2007, 3(3): 1-13.
[24] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21: 5485-5551.
[25] ZHANG J Q, ZHAO Y, SALEH M, et al. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization[C]//Proceedings of the 37th International Conference on Machine Learning, 2020: 11328-11339.
[26] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[27] LIN M, CHEN Q, YAN S. Network in network[J]. arXiv:1312.4400, 2013.
[28] ZHANG J J, FANG M, LI X. Multi-label learning with discriminative features for each label[J]. Neurocomputing, 2015, 154: 305-316.
[29] LIN C Y, HOVY E. Automatic evaluation of summaries using n-gram co-occurrence statistics[C]//Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, 2003: 150-157.
[30] LEWIS M, LIU Y, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 7871-7880.