Optimal control module #549

Merged: sawyerbfuller merged 18 commits into python-control:master from murrayrm:obc, Mar 2, 2021
Commits (18)
00297d6  draft unit test  (murrayrm, Dec 28, 2020)
23cb793  convert to pytest  (murrayrm, Dec 30, 2020)
8711900  initial minimal implementation (working)  (murrayrm, Feb 13, 2021)
6f9c092  minor refactoring plus additional comments on method  (murrayrm, Feb 13, 2021)
bd322e7  remove polytope dependence; implement MPC iosys w/ notebook example  (murrayrm, Feb 15, 2021)
0d14642  add'l unit tests, cache sim results, equality constraint support  (murrayrm, Feb 15, 2021)
2456f36  slight code refactoring + docstrings + initial doc/obc.rst  (murrayrm, Feb 16, 2021)
769eaa5  add info/debug messages + code refactor, result object  (murrayrm, Feb 19, 2021)
ea2884d  slight refactoring of cost functions + example tweaks  (murrayrm, Feb 20, 2021)
9494092  rename obc to optimal, new examples/unit tests  (murrayrm, Feb 20, 2021)
acc4439  update unit tests for speed and coverage  (murrayrm, Feb 21, 2021)
5f261cc  updated argument checking + unit tests (and coverage) + fixes  (murrayrm, Feb 21, 2021)
980fa5f  PEP8 cleanup  (murrayrm, Feb 21, 2021)
7741fe9  add basis functions, solver options, examples/tests  (murrayrm, Feb 27, 2021)
df91cac  set up benchmarks/profiling via asv  (murrayrm, Feb 27, 2021)
d735f79  add unit tests for additional coverage  (murrayrm, Feb 28, 2021)
5838c2f  clean up steering-optimal example  (murrayrm, Feb 28, 2021)
c49ee90  updated benchmarks + performance tweaks  (murrayrm, Mar 1, 2021)
.gitignore (5 changes: 4 additions, 1 deletion)
@@ -7,7 +7,7 @@ MANIFEST
 control/_version.py
 __conda_*.txt
 record.txt
-build.log
+*.log
 *.egg-info/
 .eggs/
 .coverage
@@ -23,3 +23,6 @@ Untitled*.ipynb
 # Files created by or for emacs (RMM, 29 Dec 2017)
 *~
 TAGS
+
+# Files created by or for asv (airspeed velocity)
+.asv/
asv.conf.json (161 changes: 161 additions, 0 deletions)
@@ -0,0 +1,161 @@
{
// The version of the config file format. Do not change, unless
// you know what you are doing.
"version": 1,

// The name of the project being benchmarked
"project": "python-control",

// The project's homepage
"project_url": "http://python-control.org/",

// The URL or local path of the source code repository for the
// project being benchmarked
"repo": ".",

// The Python project's subdirectory in your repo. If missing or
// the empty string, the project is assumed to be located at the root
// of the repository.
// "repo_subdir": ".",

// Customizable commands for building, installing, and
// uninstalling the project. See asv.conf.json documentation.
//
// "install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"],
// "uninstall_command": ["return-code=any python -mpip uninstall -y {project}"],
"build_command": [
"python make_version.py",
"python setup.py build",
"PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"
],

// List of branches to benchmark. If not provided, defaults to "master"
// (for git) or "default" (for mercurial).
// "branches": ["master"], // for git
// "branches": ["default"], // for mercurial

// The DVCS being used. If not set, it will be automatically
// determined from "repo" by looking at the protocol in the URL
// (if remote), or by looking for special directories, such as
// ".git" (if local).
// "dvcs": "git",

// The tool to use to create environments. May be "conda",
// "virtualenv" or other value depending on the plugins in use.
// If missing or the empty string, the tool will be automatically
// determined by looking for tools on the PATH environment
// variable.
"environment_type": "conda",

// timeout in seconds for installing any dependencies in environment
// defaults to 10 min
//"install_timeout": 600,

// the base URL to show a commit for the project.
"show_commit_url": "http://github.com/python-control/python-control/commit/",

// The Pythons you'd like to test against. If not provided, defaults
// to the current version of Python used to run `asv`.
// "pythons": ["2.7", "3.6"],

// The list of conda channel names to be searched for benchmark
// dependency packages in the specified order
// "conda_channels": ["conda-forge", "defaults"],

// The matrix of dependencies to test. Each key is the name of a
// package (in PyPI) and the values are version numbers. An empty
// list or empty string indicates to just test against the default
// (latest) version. null indicates that the package is to not be
// installed. If the package to be tested is only available from
// PyPi, and the 'environment_type' is conda, then you can preface
// the package name by 'pip+', and the package will be installed via
// pip (with all the conda available packages installed first,
// followed by the pip installed packages).
//
// "matrix": {
// "numpy": ["1.6", "1.7"],
// "six": ["", null], // test with and without six installed
// "pip+emcee": [""], // emcee is only available for install with pip.
// },

// Combinations of libraries/python versions can be excluded/included
// from the set to test. Each entry is a dictionary containing additional
// key-value pairs to include/exclude.
//
// An exclude entry excludes entries where all values match. The
// values are regexps that should match the whole string.
//
// An include entry adds an environment. Only the packages listed
// are installed. The 'python' key is required. The exclude rules
// do not apply to includes.
//
// In addition to package names, the following keys are available:
//
// - python
// Python version, as in the *pythons* variable above.
// - environment_type
// Environment type, as above.
// - sys_platform
// Platform, as in sys.platform. Possible values for the common
// cases: 'linux2', 'win32', 'cygwin', 'darwin'.
//
// "exclude": [
// {"python": "3.2", "sys_platform": "win32"}, // skip py3.2 on windows
// {"environment_type": "conda", "six": null}, // don't run without six on conda
// ],
//
// "include": [
// // additional env for python2.7
// {"python": "2.7", "numpy": "1.8"},
// // additional env if run on windows+conda
// {"platform": "win32", "environment_type": "conda", "python": "2.7", "libpython": ""},
// ],

// The directory (relative to the current directory) that benchmarks are
// stored in. If not provided, defaults to "benchmarks"
// "benchmark_dir": "benchmarks",

// The directory (relative to the current directory) to cache the Python
// environments in. If not provided, defaults to "env"
"env_dir": ".asv/env",

// The directory (relative to the current directory) that raw benchmark
// results are stored in. If not provided, defaults to "results".
"results_dir": ".asv/results",

// The directory (relative to the current directory) that the html tree
// should be written to. If not provided, defaults to "html".
"html_dir": ".asv/html",

// The number of characters to retain in the commit hashes.
// "hash_length": 8,

// `asv` will cache results of the recent builds in each
// environment, making them faster to install next time. This is
// the number of builds to keep, per environment.
// "build_cache_size": 2,

// The commits after which the regression search in `asv publish`
// should start looking for regressions. Dictionary whose keys are
// regexps matching to benchmark names, and values corresponding to
// the commit (exclusive) after which to start looking for
// regressions. The default is to start from the first commit
// with results. If the commit is `null`, regression detection is
// skipped for the matching benchmark.
//
// "regressions_first_commits": {
// "some_benchmark": "352cdf", // Consider regressions only after this commit
// "another_benchmark": null, // Skip regression detection altogether
// },

// The thresholds for relative change in results, after which `asv
// publish` starts reporting regressions. Dictionary of the same
// form as in ``regressions_first_commits``, with values
// indicating the thresholds. If multiple entries match, the
// maximum is taken. If no entry matches, the default is 5%.
//
// "regressions_thresholds": {
// "some_benchmark": 0.01, // Threshold of 1%
// "another_benchmark": 0.5, // Threshold of 50%
// },
}
39 changes: 39 additions & 0 deletionsbenchmarks/README
View file
Open in desktop
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,39 @@
This directory contains various scripts that can be used to measure the
performance of the python-control package. The scripts are intended to be
used with the airspeed velocity package (https://pypi.org/project/asv/) and
are mainly intended for use by developers in identifying potential
improvements to their code.

Running benchmarks
------------------
To run the benchmarks listed here against the current (uncommitted) code,
you can use the following command from the root directory of the repository:

PYTHONPATH=`pwd` asv run --python=python

You can also run benchmarks against specific commits using

asv run <range>

where <range> is a range of commits to benchmark. To check against the HEAD
of the branch that is currently checked out, use

asv run HEAD^!

Code profiling
--------------
You can also use the benchmarks to profile code and look for bottlenecks.
To profile a given test against the current (uncommitted) code use

PYTHONPATH=`pwd` asv profile --python=python <file>.<test>

where <file> is the name of one of the files in the benchmark/ subdirectory
and <test> is the name of a test function in that file.

If you have the `snakeviz` profiling visualization package installed, the
following command will profile a test against the HEAD of the current branch
and open a graphical representation of the profiled code:

asv profile --gui snakeviz <file>.<test> HEAD

RMM, 27 Feb 2021
benchmarks/__init__.py (empty file added)