🌟 TensorNEAT: JAX-based NEAT Library for GPU Acceleration 🌟
TensorNEAT has been selected to receive the GECCO 2024 Best Paper Award 🏆
Many thanks to everyone who has been supporting TensorNEAT; we remain committed to advancing it toward future 'open-endedness'!
TensorNEAT is a JAX-based library for NeuroEvolution of Augmenting Topologies (NEAT) algorithms, focused on harnessing GPU acceleration to make evolving neural network structures for complex tasks more efficient. Its core mechanism is the tensorization of network topologies, which enables parallel processing and significantly boosts computational speed and scalability on modern hardware accelerators. TensorNEAT is compatible with the EvoX framework.
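To illustrate the idea of tensorization (a toy sketch only, not TensorNEAT's actual internals; all names and shapes below are illustrative assumptions), networks with different topologies can be padded into fixed-shape arrays so that `jax.vmap` evaluates a whole population in one batched call:

```python
# Toy sketch of topology tensorization (NOT TensorNEAT's real data layout):
# pad every genome to a fixed number of connection slots and mask unused ones,
# so a whole population fits in one tensor and can be evaluated with jax.vmap.
import jax
import jax.numpy as jnp

MAX_CONNS = 8  # assumed fixed upper bound on connections per network

def forward(weights, mask, inputs):
    # Toy "network": one output node summing its masked, weighted inputs.
    return jnp.sum(weights * mask * inputs)

# Three networks with different effective connection counts (3, 5, 8),
# all stored in arrays of identical shape.
weights = jnp.zeros((3, MAX_CONNS))
weights = weights.at[0, :3].set(1.0).at[1, :5].set(0.5).at[2, :].set(0.25)
mask = (weights != 0).astype(jnp.float32)
inputs = jnp.ones(MAX_CONNS)

# One batched, GPU-friendly call evaluates the entire population.
outputs = jax.vmap(forward, in_axes=(0, 0, None))(weights, mask, inputs)
print(outputs)  # [3.  2.5 2. ]
```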
JAX-based network for neuroevolution:
- Batch inference across networks with different architectures, GPU-accelerated.
- Evolve networks with irregular structures and fully customize their behavior.
- Visualize the network and represent it in mathematical formulas or code.
GPU-accelerated NEAT implementation:
- Run NEAT and HyperNEAT on GPUs.
- Achieve a 500x speedup compared to CPU-based NEAT libraries.
Rich in extended content:
- Compatible with EvoX for multi-device and distributed support.
- Test neuroevolution algorithms on advanced RL tasks (Brax, Gymnax).
TensorNEAT can use the NEAT algorithm to solve RL tasks. Here are some results:
The following animations show the behaviors in Brax environments:
*(Animations of the evolved policies in the halfcheetah, hopper, and walker2d environments.)*
The following graphs show the networks of the control policies generated by the NEAT algorithm:
*(Policy network visualizations for halfcheetah, hopper, and walker2d.)*
You can use the following code to run an RL task (Brax Hopper) in TensorNEAT:
```python
# Import necessary modules
from tensorneat.pipeline import Pipeline
from tensorneat.algorithm.neat import NEAT
from tensorneat.genome import DefaultGenome, BiasNode
from tensorneat.problem.rl import BraxEnv
from tensorneat.common import ACT, AGG

# Define the pipeline
pipeline = Pipeline(
    algorithm=NEAT(
        pop_size=1000,
        species_size=20,
        survival_threshold=0.1,
        compatibility_threshold=1.0,
        genome=DefaultGenome(
            num_inputs=11,
            num_outputs=3,
            init_hidden_layers=(),
            node_gene=BiasNode(
                activation_options=ACT.tanh,
                aggregation_options=AGG.sum,
            ),
            output_transform=ACT.tanh,
        ),
    ),
    problem=BraxEnv(
        env_name="hopper",
        max_step=1000,
    ),
    seed=42,
    generation_limit=100,
    fitness_target=5000,
)

# Initialize state
state = pipeline.setup()
# Run until termination
state, best = pipeline.auto_run(state)
```
More examples of RL tasks in TensorNEAT can be found in `./examples/brax` and `./examples/gymnax`.
You can define a custom function and use the NEAT algorithm to solve the function fitting task:
- Import necessary modules:
```python
import jax, jax.numpy as jnp

from tensorneat.pipeline import Pipeline
from tensorneat.algorithm.neat import NEAT
from tensorneat.genome import DefaultGenome, BiasNode
from tensorneat.problem.func_fit import CustomFuncFit
from tensorneat.common import ACT, AGG
```
- Define a custom function to be fit, and then create the function fitting problem:
```python
def pagie_polynomial(inputs):
    x, y = inputs
    res = 1 / (1 + jnp.pow(x, -4)) + 1 / (1 + jnp.pow(y, -4))
    # Important! Returns an array with one item, NOT a scalar
    return jnp.array([res])

custom_problem = CustomFuncFit(
    func=pagie_polynomial,
    low_bounds=[-1, -1],
    upper_bounds=[1, 1],
    method="sample",
    num_samples=100,
)
```
- Define a custom activation function for the NEAT algorithm:
```python
def square(x):
    return x ** 2

ACT.add_func("square", square)
```
- Define the NEAT algorithm:
```python
algorithm = NEAT(
    pop_size=10000,
    species_size=20,
    survival_threshold=0.01,
    genome=DefaultGenome(
        num_inputs=2,
        num_outputs=1,
        init_hidden_layers=(),
        node_gene=BiasNode(
            # Using (identity, inversion, square)
            # as possible activation functions
            activation_options=[ACT.identity, ACT.inv, ACT.square],
            # Using (sum, product) as possible aggregation functions
            aggregation_options=[AGG.sum, AGG.product],
        ),
        output_transform=ACT.identity,
    ),
)
```
- Define the Pipeline and then run it:
```python
pipeline = Pipeline(
    algorithm=algorithm,
    problem=custom_problem,
    generation_limit=50,
    fitness_target=-1e-4,
    seed=42,
)

# Initialize state
state = pipeline.setup()
# Run until termination
state, best = pipeline.auto_run(state)
# Show result
pipeline.show(state, best)
```
More examples of function fitting tasks in TensorNEAT can be found in `./examples/func_fit`.
Start your journey with TensorNEAT in a few simple steps:
- Import necessary modules:
```python
from tensorneat.pipeline import Pipeline
from tensorneat import algorithm, genome, problem, common
```
- Configure the NEAT algorithm and define a problem:
```python
algorithm = algorithm.NEAT(
    pop_size=10000,
    species_size=20,
    survival_threshold=0.01,
    genome=genome.DefaultGenome(
        num_inputs=3,
        num_outputs=1,
        output_transform=common.ACT.sigmoid,
    ),
)
problem = problem.XOR3d()
```
- Initialize the pipeline and run:
```python
pipeline = Pipeline(
    algorithm,
    problem,
    generation_limit=200,
    fitness_target=-1e-6,
    seed=42,
)
state = pipeline.setup()
# run until termination
state, best = pipeline.auto_run(state)
# show results
pipeline.show(state, best)
```
You will obtain results within a few generations:
```
Fitness limit reached!
input: [0. 0. 0.], target: [0.], predict: [0.00037953]
input: [0. 0. 1.], target: [1.], predict: [0.9990619]
input: [0. 1. 0.], target: [1.], predict: [0.9991497]
input: [0. 1. 1.], target: [0.], predict: [0.0004661]
input: [1. 0. 0.], target: [1.], predict: [0.998262]
input: [1. 0. 1.], target: [0.], predict: [0.00077246]
input: [1. 1. 0.], target: [0.], predict: [0.00082464]
input: [1. 1. 1.], target: [1.], predict: [0.99909043]
loss: 8.861396736392635e-07
```
- Visualize the best network:
```python
network = algorithm.genome.network_dict(state, *best)
algorithm.genome.visualize(network, save_path="./imgs/xor_network.svg")
```
- Transform the network into LaTeX formulas or Python code:
```python
from tensorneat.common import ACT  # needed for ACT.obtain_sympy / ACT.sigmoid
from tensorneat.common.sympy_tools import to_latex_code, to_python_code

sympy_res = algorithm.genome.sympy_func(
    state, network, sympy_output_transform=ACT.obtain_sympy(ACT.sigmoid)
)
latex_code = to_latex_code(*sympy_res)
print(latex_code)

python_code = to_python_code(*sympy_res)
print(python_code)
```
LaTeX formulas:
```latex
\begin{align}
h_{0} &= \frac{1}{0.27 e^{4.28 i_{1}} + 1} \newline
h_{1} &= \frac{1}{0.3 e^{- 4.8 h_{0} + 9.22 i_{0} + 8.09 i_{1} - 10.24 i_{2}} + 1} \newline
h_{2} &= \frac{1}{2.83 e^{5.66 h_{1} - 6.08 h_{0} - 3.03 i_{2}} + 1} \newline
o_{0} &= \frac{1}{0.68 e^{- 20.86 h_{2} + 11.12 h_{1} + 14.22 i_{0} - 1.96 i_{2}} + 1}
\end{align}
```
Python code:
```python
h = np.zeros(3)
o = np.zeros(1)
h[0] = 1 / (0.269965 * exp(4.279962 * i[1]) + 1)
h[1] = 1 / (0.300038 * exp(-4.802896 * h[0] + 9.215506 * i[0] + 8.091845 * i[1] - 10.241107 * i[2]) + 1)
h[2] = 1 / (2.825013 * exp(5.660946 * h[1] - 6.083459 * h[0] - 3.033361 * i[2]) + 1)
o[0] = 1 / (0.679321 * exp(-20.860441 * h[2] + 11.122242 * h[1] + 14.216276 * i[0] - 1.961642 * i[2]) + 1)
```
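The generated snippet can then be evaluated directly; the example below is only an illustration and assumes the output is a sequence of bare assignments (as shown above) that expects `np`, `exp`, and an input array `i` in scope:

```python
# Illustrative only: evaluate the generated Python code for one input vector.
# Assumes the generated snippet is bare assignments using `np`, `exp`, and `i`.
import numpy as np
from numpy import exp

namespace = {"np": np, "exp": exp, "i": np.array([0.0, 1.0, 0.0])}
exec(python_code, namespace)
print(namespace["o"])  # network output for this input
```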
- Install the correct version of JAX. We recommend `jax >= 0.4.28`.
For the CPU-only version, you may use:
```bash
pip install -U jax
```
For NVIDIA GPUs, you may use:
pip install -U "jax[cuda12]"
For details on installing JAX, please check https://github.com/google/jax.
- Install `tensorneat` from the GitHub source code:
```bash
pip install git+https://github.com/EMI-Group/tensorneat.git
```
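As a quick sanity check after installation (an illustrative snippet, not an official step), confirm that the package imports and that JAX sees the expected devices:

```python
# Illustrative sanity check: the package imports and JAX lists the available devices.
import jax
import tensorneat

print(jax.devices())  # e.g. [CudaDevice(id=0)] with a GPU build, [CpuDevice(id=0)] otherwise
```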
TensorNEAT doesn't natively support multi-device or distributed execution, but these features can be accessed via the EvoX framework. EvoX is a high-performance, distributed, GPU-accelerated framework for evolutionary algorithms. For more details, visit the EvoX GitHub.
TensorNEAT includes an EvoX Adaptor, which allows TensorNEAT algorithms to run within the EvoX framework. Additionally, TensorNEAT provides a monitor for use with EvoX.
Here is an example of creating an EvoX algorithm and monitor:
```python
from tensorneat.common.evox_adaptors import EvoXAlgorithmAdaptor, TensorNEATMonitor
from tensorneat.algorithm import NEAT
from tensorneat.genome import DefaultGenome, BiasNode
from tensorneat.common import ACT, AGG

# define algorithm in TensorNEAT
neat_algorithm = NEAT(
    pop_size=1000,
    species_size=20,
    survival_threshold=0.1,
    compatibility_threshold=1.0,
    genome=DefaultGenome(
        max_nodes=50,
        max_conns=200,
        num_inputs=17,
        num_outputs=6,
        node_gene=BiasNode(
            activation_options=ACT.tanh,
            aggregation_options=AGG.sum,
        ),
        output_transform=ACT.tanh,
    ),
)

# use adaptor to create EvoX algorithm
evox_algorithm = EvoXAlgorithmAdaptor(neat_algorithm)

# monitor in EvoX
monitor = TensorNEATMonitor(neat_algorithm, is_save=False)
```
Using this code, you can run the NEAT algorithm within EvoX and leverage EvoX's multi-device and distributed capabilities.
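A minimal wiring sketch is shown below; the `StdWorkflow`/`init`/`step` names follow EvoX's JAX-era API as we understand it, and the problem object is a hypothetical placeholder, so treat this as an assumption rather than the library's exact interface (the complete example referenced below is authoritative).

```python
# Sketch only: wiring the adapted algorithm and monitor into an EvoX workflow.
# Assumptions: evox.workflows.StdWorkflow with init/step and a `monitors` keyword;
# `walker2d_problem` is a hypothetical EvoX neuroevolution problem instance.
import jax
from evox import workflows

workflow = workflows.StdWorkflow(
    algorithm=evox_algorithm,   # adapted TensorNEAT algorithm from above
    problem=walker2d_problem,   # hypothetical placeholder problem
    monitors=[monitor],         # TensorNEATMonitor from above
)

state = workflow.init(jax.random.PRNGKey(42))  # initialize all components
for _ in range(100):                           # one generation per step
    state = workflow.step(state)
```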
For a complete example, see `./example/with_evox/walker2d_evox.py`, which demonstrates EvoX's multi-device functionality.
TensorNEAT also implements the HyperNEAT algorithm. Here is sample code to use it:
```python
from tensorneat.pipeline import Pipeline
from tensorneat.algorithm.neat import NEAT
from tensorneat.algorithm.hyperneat import HyperNEAT, FullSubstrate
from tensorneat.genome import DefaultGenome
from tensorneat.common import ACT

# Create the substrate for HyperNEAT.
# This substrate is used to solve the XOR3d problem (3 inputs).
# input_coors has 4 coordinates because we need an extra one to represent bias.
substrate = FullSubstrate(
    input_coors=((-1, -1), (-0.33, -1), (0.33, -1), (1, -1)),
    hidden_coors=((-1, 0), (0, 0), (1, 0)),
    output_coors=((0, 1),),
)

# The NEAT algorithm calculates the connection strength in the HyperNEAT substrate.
# It has 4 inputs (in-node and out-node coordinates in substrates) and 1 output (connection strength).
neat = NEAT(
    pop_size=10000,
    species_size=20,
    survival_threshold=0.01,
    genome=DefaultGenome(
        num_inputs=4,   # size of query coordinates from the substrate
        num_outputs=1,  # the connection strength
        init_hidden_layers=(),
        output_transform=ACT.tanh,
    ),
)

# Define the HyperNEAT algorithm.
algorithm = HyperNEAT(
    substrate=substrate,
    neat=neat,
    activation=ACT.tanh,
    activate_time=10,
    output_transform=ACT.sigmoid,
)
```
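The resulting HyperNEAT algorithm plugs into a `Pipeline` just like NEAT in the quickstart; the sketch below reuses the XOR3d problem (which matches the substrate's three inputs), with illustrative termination settings:

```python
# Sketch: run the HyperNEAT algorithm defined above on XOR3d via the standard Pipeline.
# The generation_limit / fitness_target values are illustrative assumptions.
from tensorneat.pipeline import Pipeline
from tensorneat import problem

pipeline = Pipeline(
    algorithm=algorithm,      # the HyperNEAT algorithm defined above
    problem=problem.XOR3d(),  # 3-input XOR, matching the substrate's input layout
    generation_limit=200,
    fitness_target=-1e-6,
    seed=42,
)

state = pipeline.setup()
state, best = pipeline.auto_run(state)
pipeline.show(state, best)
```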
For complete examples, see `./examples/func_fit/xor_hyperneat.py` and `./examples/gymnax/cartpole_hyperneat.py`.
- Improve TensorNEAT documentation and tutorials.
- Implement more NEAT-related algorithms, such as ES-HyperNEAT.
- Add gradient descent support for networks in NEAT.
- Further optimize TensorNEAT to increase computation speed and reduce memory usage.
We warmly welcome community developers to contribute to TensorNEAT and look forward to your pull requests!
- Engage in discussions and share your experiences on GitHub Issues.
- Join our QQ group (ID: 297969717).
Thanks to Kenneth O. Stanley and Risto Miikkulainen for the NEAT algorithm, which has greatly advanced neuroevolution.
Thanks to the Google team for JAX, making GPU programming easy and efficient.
Thanks to neat-python and pureples for their clear Python implementations of NEAT and HyperNEAT.
Thanks to Brax and gymnax for efficient benchmarking frameworks.
Thanks to EvoX. Integrating with EvoX allows TensorNEAT to combine the NEAT algorithm with other evolutionary algorithms, expanding its potential. EvoX also provides multi-device and distributed support for TensorNEAT.
If you use TensorNEAT in your research and want to cite it in your work, please use:
```bibtex
@inproceedings{10.1145/3638529.3654210,
  author    = {Wang, Lishuang and Zhao, Mengfei and Liu, Enyu and Sun, Kebin and Cheng, Ran},
  title     = {Tensorized NeuroEvolution of Augmenting Topologies for GPU Acceleration},
  year      = {2024},
  isbn      = {9798400704949},
  doi       = {10.1145/3638529.3654210},
  booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
  pages     = {1156--1164},
  numpages  = {9},
  keywords  = {neuroevolution, GPU acceleration, algorithm library},
  location  = {Melbourne, VIC, Australia},
  series    = {GECCO '24}
}
```