Paulnkk/Nonlinear-Optimization-Algorithms

Implementation of nonlinear Optimization Algorithms in Python
During my time as a scientific assistant at the Karlsruhe Institute of Technology (Germany), I implemented several standard optimization algorithms for unconstrained nonlinear problems in Python: the gradient descent method, Newton's method, the conjugate gradient method, the BFGS method, and a trust region method.
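
The class interfaces of the repository are not shown on this page; purely as an illustration of one of the listed methods, a textbook sketch of the BFGS inverse-Hessian update (the function name and signature are assumptions, not the repository's API) could look like this:

import numpy as np

def bfgs_update(H, s, y):
    # Textbook BFGS update of the inverse-Hessian approximation H.
    # s = x_{k+1} - x_k, y = grad f(x_{k+1}) - grad f(x_k); assumes y @ s > 0.
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)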

In addition, I implemented an Armijo line search.
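
The repository's exact line-search code is not reproduced here; a minimal sketch of a standard Armijo backtracking line search, with illustrative parameter names, might look as follows:

import numpy as np

def armijo(f, grad_f, x, d, t0=1.0, beta=0.5, sigma=1e-4):
    # Shrink the step size t until the sufficient-decrease condition
    # f(x + t*d) <= f(x) + sigma * t * grad_f(x)^T d holds.
    t = t0
    fx = f(x)
    slope = grad_f(x) @ d  # directional derivative; negative for a descent direction
    while f(x + t * d) > fx + sigma * t * slope:
        t *= beta
    return t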

The code is written in an object-oriented style: each method is implemented as a class (bfgs.py, cg.py, gradv.py, newtonm.py, and tr.py) and executed via the ros_test.py script. The script ros_test.py implements the Rosenbrock function, which is minimized from a given starting point x_0 with each method.
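
As a rough, self-contained sketch of what such a driver does (the actual class interface used by ros_test.py is not shown here, so plain steepest descent with inline Armijo backtracking stands in for the repository's methods), one could write:

import numpy as np

def rosenbrock(x):
    # Classic 2-D Rosenbrock function with minimum at (1, 1).
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

x = np.array([-1.2, 1.0])  # a common starting point x_0
for _ in range(20000):
    g = rosenbrock_grad(x)
    if np.linalg.norm(g) < 1e-6:  # stop once the gradient is small
        break
    d = -g  # steepest-descent direction
    t = 1.0
    # Armijo backtracking: shrink t until sufficient decrease holds.
    while rosenbrock(x + t * d) > rosenbrock(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
print(x)  # slowly approaches the minimizer (1, 1); steepest descent converges slowly on Rosenbrock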

