[1] Birhane A, Kasirzadeh A, Leslie D, et al. Science in the age of large language models[J]. Nature Reviews Physics, 2023, 5(5): 277-280.
[2] Wang H C, Fu T F, Du Y Q, et al. Scientific discovery in the age of artificial intelligence[J]. Nature, 2023, 620(7972): 47-60.
[3] Bender E M, Koller A. Climbing towards NLU: on meaning, form, and understanding in the age of data[C]//Jurafsky D, Chai J, Schluter N, et al. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020: 5185-5198.
[4] Boiko D A, MacKnight R, Kline B, et al. Autonomous chemical research with large language models[J]. Nature, 2023, 624(7992): 570-578.
[5] Jablonka K M, Schwaller P, Ortega-Guerrero A, et al. Leveraging large language models for predictive chemistry[J]. Nature Machine Intelligence, 2024, 6(2): 161-169.
[6] Irwin R, Dimitriadis S, He J Z, et al. Chemformer: a pre-trained transformer for computational chemistry[J]. Machine Learning: Science and Technology, 2022, 3(1): 015022.
[7] Tshitoyan V, Dagdelen J, Weston L, et al. Unsupervised word embeddings capture latent knowledge from materials science literature[J]. Nature, 2019, 571(7763): 95-98.
[8] Butler K T, Davies D W, Cartwright H, et al. Machine learning for molecular and materials science[J]. Nature, 2018, 559(7715): 547-555.
[9] Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold[J]. Nature, 2021, 596(7873): 583-589.
[10] Senior A W, Evans R, Jumper J, et al. Improved protein structure prediction using potentials from deep learning[J]. Nature, 2020, 577(7792): 706-710.
[11] Lin Z M, Akin H, Rao R, et al. Evolutionary-scale prediction of atomic-level protein structure with a language model[J]. Science, 2023, 379(6637): 1123-1130.
[12] Shoombuatong W, Schaduangrat N, Mookdarsanit P, et al. Advancing the accuracy of clathrin protein prediction through multi-source protein language models[J]. Scientific Reports, 2025, 15: 24403.
[13] Carvalho T F M, Silva J C F, Calil I P, et al. Rama: a machine learning approach for ribosomal protein prediction in plants[J]. Scientific Reports, 2017, 7: 16273.
[14] Ye G A, Zheng W F, He H, et al. Current status and development of nuclear fuel reprocessing technology in China[J]. Atomic Energy Science and Technology, 2020, 54(S1): 75-83.
[15] Yu T, Zhang Y Y, Zhang R Z, et al. Distribution ratio prediction of major components in 30% TBP/kerosene-HNO3 system based on machine learning[J]. Atomic Energy Science and Technology, 2025, 59(1): 14-23.
[16] Baron P, Cornet S M, Collins E D, et al. A review of separation processes proposed for advanced fuel cycles based on technology readiness level assessments[J]. Progress in Nuclear Energy, 2019, 117: 103091.
[17] Sanchez-Lengeling B, Aspuru-Guzik A. Inverse molecular design using machine learning: generative models for matter engineering[J]. Science, 2018, 361(6400): 360-365.
[18] Lu X, Li Y, Chen D D, et al. Challenges of high-fidelity virtual reactor for exascale computing and research progress of China Virtual Reactor[J]. Nuclear Engineering and Design, 2023, 413: 112566.
[19] Weidinger L, Uesato J, Rauh M, et al. Taxonomy of risks posed by language models[C]//Diaz F, Ekstrand M D, Hytönen E M T, et al. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, USA: Association for Computing Machinery, 2022: 214-229.
[20] Ji Z W, Lee N, Frieske R, et al. Survey of hallucination in natural language generation[J]. ACM Computing Surveys, 2023, 55(12): 1-38.
[21] Wu K, Wu E, Wei K, et al. An automated framework for assessing how well LLMs cite relevant medical references[J]. Nature Communications, 2025, 16: 3615.
[22] Chen Q Y, Hu Y, Peng X Q, et al. Benchmarking large language models for biomedical natural language processing applications and recommendations[J]. Nature Communications, 2025, 16: 3280.
[23] Sandmann S, Hegselmann S, Fujarski M, et al. Benchmark evaluation of DeepSeek large language models in clinical decision-making[J]. Nature Medicine, 2025, 31(8): 2546-2549.
[24] OpenAI. Introducing OpenAI o3 and o4-mini[EB/OL]. San Francisco: OpenAI, 2025[2025-06-30].
[25] DeepSeek-AI, Guo D, Yang D, et al. DeepSeek-R1: incentivizing reasoning capability in LLMs via reinforcement learning[EB/OL]. 2025[2025-07-17].
[26] Yang A, Li A F, Yang B S, et al. Qwen3 technical report[EB/OL]. 2025[2025-07-17].
[27] Gemini Team. Gemini 2.5: our most intelligent AI model[EB/OL]. Mountain View: Google, 2025[2025-06-30].
[28] Wei J, Wang X, Schuurmans D, et al. Chain-of-thought prompting elicits reasoning in large language models[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc, 2022: 24824-24837.
[29] Pires T, Schlinger E, Garrette D. How multilingual is multilingual BERT?[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019: 4996-5001.
[30] Zhang Y, Li Y F, Cui L Y, et al. Siren's song in the AI ocean: a survey on hallucination in large language models[EB/OL]. 2023[2025-07-17].
[31] Agrawal A, Suzgun M, Mackey L, et al. Do language models know when they're hallucinating references?[EB/OL]. 2023[2025-07-17].