Abstract
The past five years have seen a dramatic increase in the usage of artificial intelligence (AI) algorithms in pure mathematics and theoretical sciences. This might appear counter-intuitive as mathematical sciences require rigorous definitions, derivations and proofs, in contrast to the experimental sciences, which rely on the modelling of data with error bars. In this Perspective, we categorize the approaches to mathematical and theoretical discovery as 'top-down', 'bottom-up' and 'meta-mathematics'. We review the progress over the past few years, comparing and contrasting both the advances and the shortcomings of each approach. We believe that although the theorist is not in danger of being replaced by AI systems in the near future, the combination of human expertise and AI algorithms will become an integral part of theoretical research.
Acknowledgements
The author is most grateful to A. Bhattacharya, A. Kosyak and M. Duncan for many valuable comments on the draft. The author thanks the many collaborators on AI-assisted mathematics over the past few years for the great fun and friendship: D. Aggarwal, L. Alessandretti, G. Arias-Tamargo, A. Ashmore, J. Bao, A. Baronchelli, P. Berglund, D. Berman, K. Bull, L. Calmon, H. Chen, S. Chen, A. Constantin, P.-P. Dechant, R. Deen, S. Garoufalidis, E. Heyes, E. Hirst, J. Hofscheier, J. Ipiña, V. Jejjala, A. Kasprzyk, M. Kim, S. Lal, K.-H. Lee, S.-J. Lee, J. Li, A. Lukas, S. Majumder, C. Mishra, G. Musiker, B. Nelson, A. Nestor, T. Oliver, B. Ovrut, T. Peterken, S. Pietromonaco, A. Pozdnyakov, D. Riabchenko, D. Rodriguez-Gomez, H. Sá Earp, M. Sharnoff, T. Silva, E. Sultanow, Y. Xiao, S.-T. Yau and Z. Zaz. The research is funded in part by STFC grant ST/J00037X/2 and by a Leverhulme Trust project grant.
Ethics declarations
Competing interests
The author declares no competing interests.
Peer review
Peer review information
Nature Reviews Physics thanks Rak-Kyeong Seong, Adam Zsolt Wagner and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
He, Y.-H. AI-driven research in pure mathematics and theoretical physics. Nat. Rev. Phys. 6, 546–553 (2024). https://doi.org/10.1038/s42254-024-00740-1