dietmarwo/fast-cma-es

A Python 3 gradient-free optimization library

fcmaes complements scipy optimize by providing additional optimization methods, faster C++/Eigen based implementations and a coordinated parallel retry mechanism. It supports the multi-threaded application of different gradient-free optimization algorithms. There are 35 real world tutorials showing in detail how to use fcmaes. See performance for detailed fcmaes performance figures.

fcmaes started as a fast CMA-ES implementation combined with a new smart parallel retry mechanism aimed at solving hard optimization problems from the space flight planning domain. It evolved into a general library of state-of-the-art gradient-free optimization algorithms applicable to all kinds of real world problems, covering multi-objective and constrained problems. Its main algorithms are implemented both in Python and C++ and support both parallel fitness function evaluation and a parallel retry mechanism.

Update: How to compute weighted spherical t-designs

Spherical t-design: a new example shows how to compute weighted spherical t-designs.

Update: Interaction between AI code generation and optimization

LLMs can help generate code implementing a trading strategy. They can even propose ways to optimize the final return. prophet_opt.py shows:

  • The o1-preview prompts used to generate the strategy back-testing code.

  • How to identify the parameters to optimize using the AI.

  • How the parameter optimization process can be automated efficiently utilizing trading simulations executed in parallel.

This idea can be applied wherever parameters of time-consuming simulations have to be optimized. If you aim to optimize multiple objectives, you may find many other examples in the tutorials.

Features

  • Focused on optimization problems that are hard to solve, utilizing modern many-core CPUs.

  • Parallel fitness function evaluation and different parallel retry mechanisms.

  • Excellent scaling with the number of available CPU-cores.

  • Minimized algorithm overhead - relative to the objective function evaluation time - even for high dimensions.

  • Multi-objective/constrained optimization algorithm MODE combining features from differential evolution and NSGA-II, supporting parallel function evaluation. Features enhanced multiple constraint ranking, improving its performance in handling constraints for engineering design optimization.

  • QD support: CVT-map-elites including a CMA-ES emitter and a new "diversifier" meta algorithm utilizing CVT-map-elites archives.

  • Selection of highly efficient single-objective algorithms to choose from.

  • Ask-tell interface for CMA-ES, CR-FM-NES, DE, MODE and PGPE (see the ask/tell sketch after this list).

  • Large collection of 35 tutorials related to real world problems: tutorials.
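
To illustrate the ask/tell interface mentioned above, here is a minimal sketch. It assumes the Python CMA-ES module exposes a Cmaes class whose ask() returns a population of candidate vectors and whose tell() takes their fitness values and returns a nonzero stop flag on termination; the class name, constructor signature and the sphere objective are assumptions, so check the tutorials for the exact API of each algorithm.

import numpy as np
from scipy.optimize import Bounds
from fcmaes import cmaes

def sphere(x):
    # toy objective used only for illustration
    return float(np.sum(np.asarray(x) ** 2))

bounds = Bounds([-5.0] * 10, [5.0] * 10)
es = cmaes.Cmaes(bounds, popsize=32)   # assumed class and constructor, verify against the docs
for _ in range(200):
    xs = es.ask()                      # propose a population of candidate solutions
    ys = [sphere(x) for x in xs]       # evaluate them; this step can run in parallel
    stop = es.tell(ys)                 # report the fitness values back
    if stop != 0:                      # nonzero status assumed to signal termination
        break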

Changes from version 1.6.3:

  • Logging now based on loguru. All examples are adapted.

  • New dependencies: loguru + numba.

  • New tutorial related to the GECCO 2023 Space Optimization Competition: ESAChallenge.

  • You can define an initial population as a guess for multi-objective optimization.

Changes from version 1.4.0:

  • Pure Python versions of the algorithms are now usable also for parallel retry. Pure Python algorithms: CMA-ES, CR-FM-NES, DE, MODE (multi-objective), MAP-Elites + Diversifier (quality diversity). All Python algorithms support an ask/tell interface and parallel function evaluation. Additionally, parallel retry / advanced retry (smart boundary management) are supported for these algorithms.

  • Python version > 3.7 is required; 3.6 is no longer supported.

  • PEP 484-compatible type hints, useful for IDEs like PyCharm.

  • Most algorithms now support a unified ask/tell interface: cmaes, cmaescpp, crfmnes, crfmnescpp, de, decpp, mode, modecpp, pgpecpp. This is useful for monitoring and parallel fitness evaluation.

  • Added support for Quality Diversity (QD): MAP-Elites with an additional CMA-ES emitter, the new meta-algorithm Diversifier (a generalized variant of CMA-ME), "drill down" for specified niches, and bidirectional archive <-> store transfer between the QD archive and the smart boundary management meta-algorithm (advretry). All QD algorithms support parallel optimization utilizing all CPU cores and provide statistics for the solutions associated with a specific niche: mean, stdev, maximum, minimum and count.

Derivative-free optimization of machine learning models often involves several thousand decision variables and requires GPU/TPU based parallelization of both the fitness evaluation and the optimization algorithm. CR-FM-NES, PGPE and the QD Diversifier applied to CR-FM-NES (CR-FM-NES-ME) are excellent choices in this domain. Since fcmaes has a different focus (parallel optimizations and parallel fitness evaluations), we contributed these algorithms to EvoJax, which utilizes JAX for GPU/TPU execution.

Optimization algorithms

To utilize modern many-core processors, all single-objective algorithms should be used with the parallel retry for cheap fitness functions; for expensive fitness functions, use parallel function evaluation instead. A sketch contrasting the two modes follows.
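
The following is a minimal sketch of the two modes, not a definitive recipe: retry.minimize and the De_cpp wrapper appear in the usage section below, while the 'workers' keyword for in-algorithm parallel fitness evaluation and the toy objective are assumptions to verify against the documentation of the optimizer you use.

import numpy as np
from scipy.optimize import Bounds
from fcmaes import retry, de
from fcmaes.optimizer import De_cpp

def fitness(x):
    # toy objective standing in for a real simulation
    return float(np.sum(np.asarray(x) ** 2))

bounds = Bounds([-5.0] * 20, [5.0] * 20)

# cheap fitness function: run many independent optimizations in parallel (parallel retry)
ret = retry.minimize(fitness, bounds, optimizer=De_cpp(50000))

# expensive fitness function: a single optimization evaluating its population in parallel;
# the 'workers' keyword is an assumption - check the signature of the optimizer you use
ret = de.minimize(fitness, bounds=bounds, workers=8)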

  • MO-DE: A new multi-objective optimization algorithm merging concepts from differential evolution and NSGA. Implemented both in Python and in C++. Provides an ask/tell interface and supports constraints and parallel function evaluation. Can also be applied to single-objective problems with constraints. Supports mixed integer problems (see CFD for details).

  • CVT-map-elites/CMA: A new Python implementation of CVT-map-elites including a CMA-ES emitter providing low algorithm overhead and excellent multi-core scaling even for fast fitness functions. Enables "drill down" for specific selected niches. See mapelites.py and Map-Elites.

  • Diversifier: A new Python meta-algorithm based on CVT-map-elites archives generalizing ideas from CMA-ME to other wrapped algorithms. See diversifier.py and Quality Diversity.

  • BiteOpt: Algorithm from Aleksey Vaneev (BiteOpt). Only a C++ version is provided. If your problem is single-objective and you have no clue which algorithm to apply, try this first. It works well with almost all problems. For constraints you have to use weighted penalties (see the penalty sketch after this list).

  • Differential Evolution: Implemented both in Python and in C++. Additional concepts implemented are temporal locality, stochastic reinitialization of individuals based on their age, and oscillating CR/F parameters. Provides an ask/tell interface and supports parallel function evaluation. Supports mixed integer problems (see CFD for details).

  • CMA-ES: Implemented both in Python and in C++. Provides an ask/tell interface and supports parallel function evaluation. Good option for a low number of decision variables (< 500).

  • CR-FM-NES: Fast Moving Natural Evolution Strategy for High-Dimensional Problems, see https://arxiv.org/abs/2201.11422. Derived from https://github.com/nomuramasahir0/crfmnes. Implemented both in Python and in C++. Both implementations provide parallel function evaluation and an ask/tell interface. Good option for a high number of decision variables (> 100).

  • PGPE: Parameter Exploring Policy Gradients, see http://mediatum.ub.tum.de/doc/1099128/631352.pdf. Implemented in C++. Provides parallel function evaluation and an ask/tell interface. Good option for a very high number of decision variables (> 1000) and for machine learning tasks. An equivalent Python implementation can be found at pgpe.py; use this on GPUs/TPUs.

  • Wrapper for cmaes, which provides different CMA-ES variants implemented in Python like separable CMA-ES and CMA-ES with Margin (see https://arxiv.org/abs/2205.13482), which improves support for mixed integer problems. The wrapper additionally supports parallel function evaluation.

  • Dual Annealing: Eigen based implementation in C++. Use the scipy implementation if you prefer a pure Python variant or need more configuration options.

  • Expressions: There are two operators for constructing expressions over optimization algorithms: sequence and random choice. Not only the single-objective algorithms above, but also scipy and NLopt optimization methods and custom algorithms can be used for defining algorithm expressions.
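
As mentioned for BiteOpt, constraints have to be folded into the objective as weighted penalties. The sketch below shows one common way to do this; the penalty wrapper and the toy constraint are illustrative assumptions, not part of the fcmaes API, while Bite_cpp and retry.minimize are taken from the usage section further down.

import numpy as np
from scipy.optimize import Bounds
from fcmaes import retry
from fcmaes.optimizer import Bite_cpp

def objective(x):
    # toy objective: sphere function
    return float(np.sum(np.asarray(x) ** 2))

def violation(x):
    # toy inequality constraint sum(x) >= 1, returned as a non-negative violation
    return max(0.0, 1.0 - float(np.sum(x)))

def penalized(x):
    # fold the weighted constraint violation into the objective
    return objective(x) + 1e3 * violation(x)

bounds = Bounds([-5.0] * 10, [5.0] * 10)
ret = retry.minimize(penalized, bounds, optimizer=Bite_cpp(50000))

The penalty weight trades off feasibility against objective quality; if solutions stay infeasible, increase it.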

Installation

Linux

  • pip install fcmaes

Windows

For parallel fitness function evaluation use the native Python optimizers or the ask/tell interface of the C++ ones. Python multiprocessing works better on Linux. To get optimal scaling from parallel retry and parallel function evaluation use:

  • Linux subsystem for Windows (WSL).

The Linux subsystem can read/write NTFS, so you can do your development on an NTFS partition. Just the Python call is routed to Linux. If performance of the fitness function is an issue and you don’t want to use the Linux subsystem for Windows, think about using the fcmaes Java port: fcmaes-java.

MacOS

  • pip install fcmaes

The C++ shared library is outdated; use the native Python optimizers.

Usage

Usage is similar to scipy.optimize.minimize.

For parallel retry use:

from fcmaes import retry

ret = retry.minimize(fun, bounds)

The retry logs the mean and standard deviation of the results, so it can be used to test and compare optimization algorithms. You may choose different algorithms for the retry:

from fcmaes.optimizer import Bite_cpp, De_cpp, Cma_cpp, Sequence

ret = retry.minimize(fun, bounds, optimizer=Bite_cpp(100000))
ret = retry.minimize(fun, bounds, optimizer=De_cpp(100000))
ret = retry.minimize(fun, bounds, optimizer=Cma_cpp(100000))
ret = retry.minimize(fun, bounds, optimizer=Sequence([De_cpp(50000), Cma_cpp(50000)]))
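
For readers who want something runnable, here is a self-contained variant of the snippets above. The Rastrigin objective, the use of scipy.optimize.Bounds for the bounds argument and the scipy-style result fields (ret.x, ret.fun) are assumptions chosen to mirror the scipy.optimize.minimize analogy, so adapt them to your fcmaes version if needed.

import numpy as np
from scipy.optimize import Bounds
from fcmaes import retry
from fcmaes.optimizer import De_cpp

def rastrigin(x):
    # standard multimodal test function
    x = np.asarray(x)
    return float(10 * len(x) + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))

bounds = Bounds([-5.12] * 12, [5.12] * 12)
ret = retry.minimize(rastrigin, bounds, optimizer=De_cpp(50000))
print(ret.x, ret.fun)   # best solution and value, assuming a scipy-style result object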

More examples can be found at https://github.com/dietmarwo/fast-cma-es/blob/master/examples. Check the tutorials for more details.

Dependencies

Runtime:

Compile time (binaries for Linux and Windows are included):

Optional dependencies:

  • matplotlib for the optional plot output.

  • NLopt: Install with 'pip install nlopt'.

  • pygmo2: Install with 'pip install pygmo'.

Example dependencies:

  • pykep: Install with 'pip install pykep'.

Citing

@misc{fcmaes2025,
    author = {Dietmar Wolz},
    title = {fcmaes - A Python-3 derivative-free optimization library},
    note = {Python/C++ source code, with description and examples},
    year = {2025},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {Available at \url{https://github.com/dietmarwo/fast-cma-es}},
}
