[1] VAGHEFI S A, STAMMBACH D, MUCCIONE V, et al. ChatClimate: grounding conversational AI in climate science[J]. Communications Earth & Environment, 2023, 4: 480.
[2] NI J W, BINGLER J, COLESANTI SENNI C, et al. CHATREPORT: democratizing sustainability disclosure analysis through LLM-based tools[J]. arXiv:2307.15770, 2023.
[3] JIANG H Q, DING Y, CHEN R, et al. Carbon price forecasting with LLM-based refinement and transfer-learning[C]//Proceedings of the 33rd International Conference on Artificial Neural Networks. Cham: Springer, 2024: 139-154.
[4] FAIZ A, KANEDA S, WANG R H, et al. LLMCarbon: modeling the end-to-end carbon footprint of large language models[J]. arXiv:2309.14393, 2023.
[5] ZHANG Y P, CHEN M F, TIAN C H, et al. Multi-strategy retrieval-augmented generation method for military domain knowledge question answering systems[J]. Journal of Computer Applications, 2025, 45(3): 746-754. (in Chinese)
[6] WEI Y J, FAN J C. An agricultural policy question answering system based on ChatGLM2-6B[J]. Frontiers of Data & Computing, 2024, 6(4): 116-127. (in Chinese)
[7] JI Z W, LEE N, FRIESKE R, et al. Survey of hallucination in natural language generation[J]. ACM Computing Surveys, 2023, 55(12): 1-38.
[8] MANAKUL P, LIUSIE A, GALES M J F. SelfCheckGPT: zero-resource black-box hallucination detection for generative large language models[J]. arXiv:2303.08896, 2023.
[9] TAM D, MASCARENHAS A, ZHANG S Y, et al. Evaluating the factual consistency of large language models through news summarization[C]//Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg: ACL, 2023: 5220-5255.
[10] SHEN J M, LIU J L, FINNIE D, et al. “Why is this misleading?”: detecting news headline hallucinations with explanations[C]//Proceedings of the ACM Web Conference 2023. New York: ACM, 2023: 1662-1672.
[11] LI J Y, CHENG X X, ZHAO W X, et al. HaluEval: a large-scale hallucination evaluation benchmark for large language models[J]. arXiv:2305.11747, 2023.
[12] MAHMOOD R, WANG G, KALRA M, et al. Fact-checking of AI-generated reports[J]. arXiv:2307.14634, 2023.
[13] YUAN W Z, NEUBIG G, LIU P F. BARTScore: evaluating generated text as text generation[J]. arXiv:2106.11520, 2021.
[14] AMAYUELAS A, WONG K, PAN L M, et al. Knowledge of knowledge: exploring known-unknowns uncertainty with large language models[C]//Findings of the Association for Computational Linguistics: ACL 2024. Stroudsburg: ACL, 2024: 6416-6432.
[15] TAO J Y, XI X F, SHENG S L, et al. Review on enhancing reasoning abilities of large language model through structured thinking prompts[J]. Computer Engineering and Applications, 2025, 61(6): 64-83. (in Chinese)
[16] World Resources Institute, World Business Council for Sustainable Development. Greenhouse gas protocol[R/OL]. (2011)[2024-11-26]. https://ghgprotocol.org.
[17] HAN M, CAO Z X, WANG J T, et al. Enterprise carbon emission analysis and knowledge Q&A system based on large language model[EB/OL]. [2024-11-26]. https://github.com/czx666nnn/Carbon_analyze.
[18] ZHANG Q L, CHEN Q, LI Y L, et al. Sequence model with self-adaptive sliding window for efficient spoken document segmentation[C]//Proceedings of the 2021 IEEE Automatic Speech Recognition and Understanding Workshop. Piscataway: IEEE, 2021: 411-418.
[19] BAI J Z, BAI S, YANG S S, et al. Qwen-VL: a frontier large vision-language model with versatile abilities[J]. arXiv:2308.12966, 2023.
[20] LU X H. BM25S: orders of magnitude faster lexical search via eager sparse scoring[J]. arXiv:2407.03618, 2024.
[21] XIAO S T, LIU Z, ZHANG P T, et al. C-Pack: packed resources for general Chinese embeddings[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 641-649.
[22] CHEN J L, XIAO S T, ZHANG P T, et al. BGE M3-embedding: multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation[J]. arXiv:2402.03216, 2024.
[23] ASAI A, WU Z Q, WANG Y Z, et al. Self-RAG: learning to retrieve, generate, and critique through self-reflection[J]. arXiv:2310.11511, 2023.
[24] HE D K, TANG Z J, CHEN Q Q, et al. Dynamic topic modelling approach of short text driven by fine-tuned large language model[J/OL]. Data Analysis and Knowledge Discovery [2025-01-28]. https://link.cnki.net/urlid/10.1478.G2.20250124.1340.004. (in Chinese)
[25] POURREZA M, RAFIEI D. DIN-SQL: decomposed in-context learning of text-to-SQL with self-correction[J]. arXiv:2304.11015, 2023.
[26] Carbon Disclosure Project (CDP). CDP global climate change report 2023[R]. 2023.
[27] LIN C Y. ROUGE: a package for automatic evaluation of summaries[C]//Proceedings of the ACL-04 Workshop on Text Summarization Branches Out. Stroudsburg: ACL, 2004: 74-81.
[28] ZHANG T Y, KISHORE V, WU F, et al. BERTScore: evaluating text generation with BERT[C]//Proceedings of the 8th International Conference on Learning Representations, 2020: 1-43.
[29] XU W B, YAN L, HAN P Y, et al. TCSR-SQL: towards table content-aware text-to-SQL with self-retrieval[J]. arXiv:2407.01183, 2024.
[30] YU T, ZHANG R, YANG K, et al. Spider: a large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 3911-3921.
[31] GRATTAFIORI A, DUBEY A, JAUHRI A, et al. The Llama 3 herd of models[J]. arXiv:2407.21783, 2024.