- Fei Liu (ORCID: 0000-0001-6719-0409),
- Xi Lin (ORCID: 0000-0001-5298-6893),
- Shunyu Yao (ORCID: 0000-0003-0427-2217),
- Zhenkun Wang (ORCID: 0000-0003-1152-6780),
- Xialiang Tong (ORCID: 0009-0006-8153-1576),
- Mingxuan Yuan (ORCID: 0000-0002-2236-8784) &
- Qingfu Zhang (ORCID: 0000-0003-0786-0671)
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15513)
Included in the conference series: Evolutionary Multi-Criterion Optimization (EMO)
Abstract
Multiobjective evolutionary algorithms (MOEAs) are a major approach to solving multiobjective optimization problems (MOPs). However, designing efficient and powerful operators for MOEAs usually requires a tedious trial-and-error process with domain experts. This paper investigates a novel approach that leverages large language models (LLMs) to design MOEA operators. First, with proper prompt engineering, we directly employ an LLM as a black-box search operator for a decomposition-based MOEA in a zero-shot manner, avoiding the time-consuming manual operator design process. In addition, we design a white-box operator that interprets and approximates the behaviour of the LLM, and propose a new version of decomposition-based MOEA, termed MOEA/D-LMO. The white-box operator improves generalization and reduces runtime by eliminating costly LLM interactions. Experimental studies on different test benchmarks show that the proposed method achieves competitive performance with widely used MOEAs. Furthermore, operators learned from LLMs on only a few instances generalize well to unseen problems with different patterns and settings.
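The abstract's core idea, using an LLM reply as a zero-shot variation operator inside a decomposition-based MOEA, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the prompt wording, the `llm_call` hook, and the Tchebycheff aggregation shown here are assumptions based on standard decomposition-based MOEA practice, and the actual prompt and reply parsing in MOEA/D-LMO may differ.

```python
def tchebycheff(objs, weight, ideal):
    """Tchebycheff aggregation g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|,
    the scalarization commonly used in decomposition-based MOEAs."""
    return max(w * abs(f - z) for w, f, z in zip(weight, objs, ideal))

def build_prompt(scored_parents):
    """Hypothetical zero-shot prompt: list parent points with their
    aggregated fitness, then ask the LLM for a new, better point."""
    lines = [f"point: {p}, fitness: {g:.4f}" for p, g in scored_parents]
    return ("The following points are sorted from worst to best "
            "(lower fitness is better):\n" + "\n".join(lines) +
            "\nReturn one new point as a comma-separated list that differs "
            "from all points above and has lower fitness.")

def llm_offspring(scored_parents, llm_call, n_var):
    """Treat the LLM reply as a black-box variation operator for one subproblem."""
    reply = llm_call(build_prompt(scored_parents))
    child = [float(v) for v in reply.replace("[", "").replace("]", "").split(",")]
    assert len(child) == n_var, "LLM reply had the wrong number of variables"
    return child

# Offline stand-in for a real chat-completion call, so the sketch runs as-is.
parents = [([0.9, 0.8], 0.45), ([0.4, 0.7], 0.28)]
child = llm_offspring(parents, lambda prompt: "0.3, 0.6", n_var=2)
print(child)  # [0.3, 0.6]
```

In a full algorithm this operator would be invoked once per subproblem per generation, with the new point evaluated and used to update neighbouring subproblems as in standard MOEA/D; the paper's white-box operator then replaces `llm_call` with a cheap learned approximation.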
Acknowledgments
The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China [GRF Project No. CityU11215723] and the Natural Science Foundation of China (Project No. 62276223).
Author information
Authors and Affiliations
City University of Hong Kong, Hong Kong, China
Fei Liu, Xi Lin, Shunyu Yao & Qingfu Zhang
The City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
Qingfu Zhang
Southern University of Science and Technology, Shenzhen, China
Zhenkun Wang
Huawei Noah’s Ark Lab, Shenzhen, China
Xialiang Tong & Mingxuan Yuan
Corresponding author
Correspondence to Fei Liu.
Editor information
Editors and Affiliations
University of New South Wales, Canberra, NSW, Australia
Hemant Singh
University of New South Wales, Canberra, NSW, Australia
Tapabrata Ray
SLB Cambridge Research, Cambridge, UK
Joshua Knowles
RMIT, Melbourne, VIC, Australia
Xiaodong Li
University of Warwick, Coventry, Warwickshire, UK
Juergen Branke
University of New South Wales, Canberra, NSW, Australia
Bing Wang
Japan Aerospace Exploration Agency, Tokyo, Japan
Akira Oyama
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Liu, F. et al. (2025). Large Language Model for Multiobjective Evolutionary Optimization. In: Singh, H., et al. (eds.) Evolutionary Multi-Criterion Optimization. EMO 2025. Lecture Notes in Computer Science, vol 15513. Springer, Singapore. https://doi.org/10.1007/978-981-96-3538-2_13
Publisher Name: Springer, Singapore
Print ISBN: 978-981-96-3537-5
Online ISBN: 978-981-96-3538-2
eBook Packages: Computer Science, Computer Science (R0)