[1] QIN L, XIE T, CHE W, et al. A survey on spoken language understanding: recent advances and new frontiers[C]//Proceedings of the International Joint Conference on Artificial Intelligence, 2021.
[2] WELD H, HUANG X, LONG S, et al. A survey of joint intent detection and slot-filling models in natural language understanding[J]. ACM Computing Surveys (CSUR), 2021, 55(8): 1-38.
[3] CHEN Q, ZHUO Z, WANG W. BERT for joint intent classification and slot filling[J]. arXiv:1902.10909, 2019.
[4] GANGADHARAIAH R, NARAYANASWAMY B. Joint multiple intent detection and slot labeling for goal-oriented dialog[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019.
[5] CUI L, ZHANG Y. Hierarchically-refined label attention network for sequence labeling[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019.
[6] CHEN J, HUANG H, TIAN S, et al. Feature selection for text classification with Naive Bayes[J]. Expert Systems with Applications, 2009, 36(3): 5432-5435.
[7] HAFFNER P, TUR G, WRIGHT J H. Optimizing SVMs for complex call classification[C]//Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003.
[8] SCHAPIRE R E, SINGER Y. BoosTexter: a boosting-based system for text categorization[J]. Machine Learning, 2000, 39: 135-168.
[9] XU P, SARIKAYA R. Convolutional neural network based triangular CRF for joint intent detection and slot filling[C]//Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, 2013.
[10] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[11] RAVURI S, STOLCKE A. Recurrent neural network and LSTM models for lexical utterance classification[C]//Proceedings of the Sixteenth Annual Conference of the International Speech Communication Association, 2015.
[12] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[13] XIA C, ZHANG C, YAN X, et al. Zero-shot user intent detection via capsule neural networks[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018.
[14] HINTON G E, KRIZHEVSKY A, WANG S D. Transforming auto-encoders[C]//Proceedings of the International Conference on Artificial Neural Networks, 2011.
[15] SABOUR S, FROSST N, HINTON G E. Dynamic routing between capsules[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
[16] RABINER L R. A tutorial on hidden Markov models and selected applications in speech recognition[J]. Proceedings of the IEEE, 1989, 77(2): 257-286.
[17] SUN G, GUAN Y, WANG X, et al. A maximum entropy Markov model for chunking[C]//Proceedings of the International Conference on Machine Learning and Cybernetics, 2005.
[18] PENG F, MCCALLUM A. Information extraction from research papers using conditional random fields[J]. Information Processing and Management, 2006, 42(4): 963-979.
[19] GERS F A, SCHMIDHUBER J, CUMMINS F. Learning to forget: continual prediction with LSTM[J]. Neural Computation, 2000, 12(10): 2451-2471.
[20] CHO K, VAN MERRIËNBOER B, BAHDANAU D, et al. On the properties of neural machine translation: encoder-decoder approaches[J]. arXiv:1409.1259, 2014.
[21] YAO K, ZWEIG G, HWANG M, et al. Recurrent neural networks for language understanding[C]//Proceedings of the Interspeech, 2013.
[22] YAO K, PENG B, ZHANG Y, et al. Spoken language understanding using long short-term memory neural networks[C]//Proceedings of the IEEE Spoken Language Technology Workshop (SLT), 2014.
[23] MESNIL G, HE X, DENG L, et al. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding[C]//Proceedings of the Interspeech, 2013.
[24] LIU B, LANE I. Recurrent neural network structured output prediction for spoken language understanding[C]//Proceedings of the NIPS Workshop on Machine Learning for Spoken Language Understanding and Interactions, 2015.
[25] KURATA G, XIANG B, ZHOU B, et al. Leveraging sentence-level information with encoder LSTM for semantic slot filling[J]. arXiv:1601.01530, 2016.
[26] HUANG Z, XU W, YU K. Bidirectional LSTM-CRF models for sequence tagging[J]. arXiv:1508.01991, 2015.
[27] MA X, HOVY E. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF[J]. arXiv:1603.01354, 2016.
[28] LAMPLE G, BALLESTEROS M, SUBRAMANIAN S, et al. Neural architectures for named entity recognition[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016.
[29] HAKKANI-TÜR D Z, TUR G, WANG Y Y, et al. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM[C]//Proceedings of the 17th Annual Meeting of the International Speech Communication Association, 2016.
[30] ZHANG X, WANG H. A joint model of intent determination and slot filling for spoken language understanding[C]//Proceedings of the International Joint Conference on Artificial Intelligence, 2016.
[31] LIU B, LANE I. Attention-based recurrent neural network models for joint intent detection and slot filling[C]//Proceedings of the Interspeech, 2016: 685-689.
[32] GOO C, GAO G, HSU Y, et al. Slot-gated modeling for joint slot filling and intent prediction[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018.
[33] QIN L, CHE W, LI Y, et al. A stack-propagation framework with token-level intent detection for spoken language understanding[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019.
[34] XU P, SARIKAYA R. Exploiting shared information for multi-intent natural language sentence classification[C]//Proceedings of the Interspeech, 2013.
[35] KIM B, RYU S, LEE G G. Two-stage multi-intent detection for spoken language understanding[J]. Multimedia Tools and Applications, 2017, 76(9): 11377-11390.
[36] YANG C N, FENG C S. Multi-intention recognition model with combination of syntactic feature and convolution neural network[J]. Journal of Computer Applications, 2018, 38(7): 1839-1845.
[37] LIU J, LI Y L, LIN M. Research of short text multi-intent detection with capsule network[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(10): 1735-1743.
[38] WANG Y, SHEN Y, JIN H. A bi-model based RNN semantic frame parsing model for intent detection and slot filling[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018.
[39] E H H, NIU P, CHEN Z, et al. A novel bi-directional interrelated model for joint intent detection and slot filling[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
[40] QIN L, XU X, CHE W, et al. AGIF: an adaptive graph-interactive framework for joint multiple intent detection and slot filling[C]//Findings of the Association for Computational Linguistics: EMNLP 2020, 2020.
[41] QIN L, WEI F, XIE T, et al. GL-GIN: fast and accurate non-autoregressive model for joint multiple intent detection and slot filling[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021.
[42] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[C]//Proceedings of the International Conference on Learning Representations, 2018.
[43] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.