SciML/Optimization.jl

Mathematical Optimization in Julia. Local, global, gradient-based and derivative-free. Linear, Quadratic, Convex, Mixed-Integer, and Nonlinear Optimization in one simple, fast, and differentiable interface.

Badges: Join the chat at https://julialang.zulipchat.com (#sciml-bridged) · Global Docs · codecov · Build Status · ColPrac: Contributor's Guide on Collaborative Practices for Community Packages · SciML Code Style · DOI

Optimization.jl is a package with a scope that is beyond your normal global optimization package. Optimization.jl seeks to bring together all of the optimization packages it can find, local and global, into one unified Julia interface. This means you learn one package and you learn them all! Optimization.jl adds a few high-level features, such as integrating with automatic differentiation, to make its usage fairly simple for most cases, while allowing all of the options in a single unified interface.

Installation

Assuming that you already have Julia correctly installed, it suffices to install Optimization.jl in the standard way:

using Pkg
Pkg.add("Optimization")

The packages relevant to the core functionality of Optimization.jl will be installed accordingly and, in most cases, you do not have to worry about the manual installation of dependencies. However, the solver wrapper packages need to be installed explicitly if you intend to use the specific optimization algorithms they offer, as shown in the sketch below and in the examples that follow.
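For instance, to run the examples in this README you would add the two wrapper packages they rely on (the package names are the ones used in the examples below; install only the wrappers whose solvers you need):

using Pkg
Pkg.add("OptimizationOptimJL")  # wraps the Optim.jl solvers (NelderMead, BFGS, Fminbox, ...)
Pkg.add("OptimizationBBO")      # wraps the BlackBoxOptim.jl global solvers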

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the unreleased features.

Examples

using Optimization
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

prob = OptimizationProblem(rosenbrock, x0, p)

using OptimizationOptimJL
sol = solve(prob, NelderMead())

using OptimizationBBO
prob = OptimizationProblem(rosenbrock, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())

Warning: The output of the second optimization task (BBO_adaptive_de_rand_1_bin_radiuslimited()) is currently misleading in the sense that it returns Status: failure (reached maximum number of iterations). However, convergence is actually reached and the confusing message stems from the reliance on the Optim.jl output struct (where the situation of reaching the maximum number of iterations is rightly regarded as a failure). The improved output struct will soon be implemented.

The output of the first optimization task (with the NelderMead() algorithm) is given below:

* Status: success

* Candidate solution
   Final objective value:     3.525527e-09

* Found with
   Algorithm:     Nelder-Mead

* Convergence measures
   √(Σ(yᵢ-ȳ)²)/n ≤ 1.0e-08

* Work counters
   Seconds run:   0  (vs limit Inf)
   Iterations:    60
   f(x) calls:    118
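Beyond the printed summary, the solution object can be queried directly. A minimal sketch, assuming the standard SciML solution accessors (sol.u, sol.objective, and SciMLBase.successful_retcode), which also sidesteps the misleading printed status noted in the warning above:

sol.u          # minimizer; ≈ [1.0, 1.0] for this Rosenbrock problem
sol.objective  # final objective value

using SciMLBase
SciMLBase.successful_retcode(sol)  # programmatic convergence check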

We can also explore other methods in a similar way:

using ForwardDiff
f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(f, x0, p)
sol = solve(prob, BFGS())

For instance, the above optimization task produces the following output:

* Status: success

* Candidate solution
   Final objective value:     7.645684e-21

* Found with
   Algorithm:     BFGS

* Convergence measures
   |x - x'|               = 3.48e-07 ≰ 0.0e+00
   |x - x'|/|x'|          = 3.48e-07 ≰ 0.0e+00
   |f(x) - f(x')|         = 6.91e-14 ≰ 0.0e+00
   |f(x) - f(x')|/|f(x')| = 9.03e+06 ≰ 0.0e+00
   |g(x)|                 = 2.32e-09 ≤ 1.0e-08

* Work counters
   Seconds run:   0  (vs limit Inf)
   Iterations:    16
   f(x) calls:    53
   ∇f(x) calls:   53
The same gradient-based setup also handles box constraints, here via Fminbox:

prob = OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, Fminbox(GradientDescent()))
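Other automatic-differentiation backends plug into OptimizationFunction in exactly the same way. A hedged sketch, assuming Zygote.jl is installed (Optimization.AutoZygote() is one of the supported backend selectors; any other supported backend can be swapped in):

using Zygote  # the AD package backing AutoZygote

f_zygote = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
prob_zygote = OptimizationProblem(f_zygote, x0, p)
sol_zygote = solve(prob_zygote, BFGS())  # reverse-mode gradients instead of ForwardDiff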

The examples clearly demonstrate that Optimization.jl provides an intuitive way of specifying optimization tasks and offers relatively easy access to a wide range of optimization algorithms.
