Abstract
In this paper, we propose an infeasible arc-search interior-point algorithm for solving nonlinear programming problems. Most interior-point algorithms are categorized as line-search methods, since they compute the next iterate on a straight line determined by a search direction that approximates the central path. The proposed arc-search interior-point algorithm instead uses an arc for this approximation. We discuss the convergence properties of the proposed algorithm. We also conduct numerical experiments on the CUTEst benchmark problems and compare the performance of the proposed arc-search algorithm with that of a line-search algorithm. The numerical results indicate that the proposed arc-search algorithm reaches an optimal solution in fewer iterations, but with longer computation time, than the line-search algorithm. A modification that leads to a faster arc-search algorithm is also discussed.
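As a schematic illustration (our own sketch, not code from the paper), a line-search iterate moves along a straight line in the step parameter, whereas an arc-search iterate follows an ellipse whose first- and second-order terms come from derivatives along the central path; the two agree to first order at a zero step:

```python
import math

# Illustrative sketch only: dx and ddx stand in for first- and second-order
# direction terms; the ellipse form below is the standard arc-search
# parametrization, not necessarily the exact formulas of this paper.

def line_iterate(x, dx, alpha):
    # Line-search update: move along a straight line.
    return [xi - alpha * di for xi, di in zip(x, dx)]

def arc_iterate(x, dx, ddx, alpha):
    # Arc-search update: follow an ellipse in the step angle alpha.
    s, c = math.sin(alpha), math.cos(alpha)
    return [xi - di * s + ddi * (1.0 - c) for xi, di, ddi in zip(x, dx, ddx)]

x, dx, ddx = [1.0, 2.0], [0.3, -0.1], [0.05, 0.02]
# For small alpha the two nearly coincide, since sin(a) ~ a and
# 1 - cos(a) ~ a**2/2, so the arc bends away from the line only at
# second order in the step angle.
```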
References
Byrd, R. H., Hribar, M. E., Nocedal, J.: An interior point algorithm for large-scale nonlinear programming. SIAM J. Optim. 9, 877–900 (1999)
Byrd, R. H., Gilbert, J. C., Nocedal, J.: A trust region method based on interior point techniques for nonlinear programming. Math. Program. 89, 149–185 (2000)
El-Bakry, A. S., Tapia, R. A., Tsuchiya, T., Zhang, Y.: On the formulation and theory of the Newton interior-point method for nonlinear programming. J. Optim. Theory Appl. 89, 507–541 (1996)
Forsgren, A., Gill, P. E.: Primal-dual interior methods for nonconvex nonlinear programming. SIAM J. Optim. 8, 1132–1152 (1998)
Gould, N., Scott, J.: A note on performance profiles for benchmarking software. ACM Trans. Math. Softw. 43(2), 15 (2016)
Gould, N. I. M., Orban, D., Toint, P. L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60(3), 545–557 (2015)
Kheirfam, B.: An arc-search infeasible interior-point algorithm for horizontal linear complementarity problem in the \(N^{-\infty }\) neighbourhood of the central path. Int. J. Comput. Math. 94(12), 2271–2282 (2017)
Lu, T., Shiou, S.: Inverses of 2 × 2 block matrices. Comput. Math. Appl. 43, 119–129 (2002)
Lustig, I., Marsten, R., Shanno, D.: Computational experience with a primal-dual interior-point method for linear programming. Linear Algebra Appl. 152, 192–222 (1991)
Lustig, I., Marsten, R., Shanno, D.: On implementing Mehrotra's predictor-corrector interior-point method for linear programming. SIAM J. Optim. 2, 432–449 (1992)
Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2, 575–601 (1992)
Nocedal, J., Wächter, A., Waltz, R. A.: Adaptive barrier update strategies for nonlinear interior methods. SIAM J. Optim. 19, 1674–1693 (2009)
Plantenga, T.: A trust region method for nonlinear programming based on primal interior-point techniques. SIAM J. Sci. Comput. 20, 282–305 (1998)
Tits, A. L., Wächter, A., Bakhtiari, S., Urban, T. J., Lawrence, C. T.: A primal-dual interior-point method for nonlinear programming with strong global and local convergence properties. SIAM J. Optim. 14(1), 173–199 (2003)
Ulbrich, M., Ulbrich, S., Vicente, L. N.: A globally convergent primal-dual interior-point filter method for nonlinear programming. Math. Program. 100, 379–410 (2004)
Vanderbei, R., Shanno, D.: An interior-point algorithm for nonconvex nonlinear programming. Comput. Optim. Appl. 13, 231–252 (1999)
Wright, S.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)
Yang, Y.: A polynomial arc-search interior-point algorithm for convex quadratic programming. Eur. J. Oper. Res. 215(1), 25–38 (2011)
Yang, Y.: A polynomial arc-search interior-point algorithm for linear programming. J. Optim. Theory Appl. 158, 859–873 (2013)
Yang, Y.: CurveLP - a MATLAB implementation of an infeasible interior-point algorithm for linear programming. Numer. Algorithms 74, 967–996 (2017)
Yang, Y.: Two computationally efficient polynomial-iteration infeasible interior-point algorithms for linear programming. Numer. Algorithms 79, 957–992 (2018)
Yang, Y., Yamashita, M.: An arc-search \(O(nL)\) infeasible-interior-point algorithm for linear programming. Optim. Lett. 12(1), 781–798 (2018)
Yang, X., Liu, H., Zhang, Y.: An arc-search infeasible-interior-point method for symmetric optimization in a wide neighborhood of the central path. Optim. Lett. 11(1), 135–152 (2017)
Ye, Y.: Interior Point Algorithms: Theory and Analysis. Wiley, New York (1997)
Zhang, M., Yuan, B., Zhou, Y., Luo, X., Huang, Z.: A primal-dual interior-point algorithm with arc-search for semidefinite programming. Optim. Lett. 13(5), 1157–1175 (2019)
Zhao, G., Zhu, J.: Analytical properties of the central trajectory in interior point methods. In: Du, D. Z., Sun, J. (eds.) Advances in Optimization and Approximation, pp. 362–375. Springer, Boston (1994)
Funding
This research was partially supported by JSPS KAKENHI (Grant Number: 18K11176).
Author information
Authors and Affiliations
Department of Mathematical and Computing Science, Tokyo Institute of Technology, Tokyo, Japan
Makoto Yamashita & Einosuke Iida
US NRC, Office of Research, 21 Church Street, Rockville, MD, USA
Yaguang Yang
Corresponding author
Correspondence to Makoto Yamashita.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: Derivatives
In this section, we give notation related to derivatives. The Hessian matrix of \(f : {\mathbb {R}}^{n} \to {\mathbb {R}}\) is the \(n \times n\) matrix
\(\nabla ^{2} f(x) = \left [ \frac {\partial ^{2} f(x)}{\partial x_{i} \partial x_{j}} \right ]_{i,j=1,\ldots ,n}.\)
The Jacobian for \(h : {\mathbb {R}}^{n} \to {\mathbb {R}}^{m}\) is the \(m \times n\) matrix whose \((i,j)\) entry is \(\frac {\partial h_{i}(x)}{\partial x_{j}}\). The Jacobian for \(g : {\mathbb {R}}^{n} \to {\mathbb {R}}^{p}\) is the \(p \times n\) matrix whose \((i,j)\) entry is \(\frac {\partial g_{i}(x)}{\partial x_{j}}\).
For the right-hand side of (8), we use
??
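As a small, self-contained sanity check of this notation (the example functions and helper names below are our own, hypothetical choices, not from the paper), the Hessian and Jacobian entries can be approximated by central finite differences:

```python
def hessian_fd(f, x, h=1e-5):
    """Approximate the Hessian entries d^2 f / (dx_i dx_j) of f: R^n -> R."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * h * h)
    return H

def jacobian_fd(hfun, x, m, h=1e-7):
    """Approximate the m x n Jacobian, entry (i, j) = d h_i / dx_j."""
    n = len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = hfun(xp), hfun(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return J

# Example: f(x) = x0^2 * x1 has exact Hessian [[2*x1, 2*x0], [2*x0, 0]].
f = lambda x: x[0] ** 2 * x[1]
H = hessian_fd(f, [1.0, 2.0])
```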
Appendix 2: The largest step angle
In this section, we give analytical forms to compute the largest \(\alpha _{w_{i}}\) and \(\alpha _{s_{i}}\) for each i in (12). For simplicity, we drop the index i and the iteration number k here; for example, \({w_{i}^{k}}\) is simply written as w. For (11a), we should have
\(w(\alpha ) = w - \dot {w}\sin (\alpha ) + \ddot {w}(1-\cos (\alpha )) \ge \delta w\) for all \(\alpha \in [0, \frac {\pi }{2}]\), or equivalently,
\(\dot {w}\sin (\alpha ) + \ddot {w}\cos (\alpha ) \le (1-\delta ) w + \ddot {w}\).  (28)
We split this computation into seven cases by the signs of \(\dot {w}\) and \(\ddot {w}\).
Case 1
(\(\dot {w}=0\)and\(\ddot {w}\ne 0\))
If \(\ddot {w} \ge -(1-\delta ) w\), then \(w(\alpha ) \ge \delta w\) holds for any \(\alpha \in [0, \frac {\pi }{2}]\). If \(\ddot {w} \le -(1-\delta )w < 0\), to meet (28), we must have \(\cos (\alpha ) \ge \frac {w-\delta w +\ddot {w}}{\ddot {w}} \ge 0\), or \(\alpha \le \cos ^{-1}\left (\frac {w-\delta w +\ddot {w}} {\ddot {w}} \right )\). Therefore, \(\alpha _{w} = \frac {\pi }{2}\) in the first case, and \(\alpha _{w} = \cos ^{-1}\left (\frac {w-\delta w +\ddot {w}} {\ddot {w}} \right )\) otherwise.
Case 2
(\(\ddot {w}=0\)and\(\dot {w}\ne 0\))
If \(\dot {w} \le (1 - \delta ) w\), then \(w(\alpha ) \ge \delta w\) holds for any \(\alpha \in [0, \frac {\pi }{2}]\). If \(\dot {w} \ge (1 - \delta ) w > 0\), to meet (28), we must have \(\sin (\alpha ) \le \frac {w-\delta w }{\dot {w}}\), or \(\alpha \le \sin ^{-1}\left (\frac {w-\delta w } {\dot {w}} \right )\). Therefore, \(\alpha _{w} = \frac {\pi }{2}\) in the first case, and \(\alpha _{w} = \sin ^{-1}\left (\frac {w-\delta w } {\dot {w}} \right )\) otherwise.
Case 3
(\(\dot {w}>0\)and\(\ddot {w}>0\))
Let \(\beta = \sin ^{-1} \left (\frac {\ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right )\). We can express \(\dot {w}=\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\cos (\beta )\) and \(\ddot {w}=\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\sin (\beta )\). Then, (28) can be rewritten as
\(\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\sin (\alpha +\beta ) \le (1-\delta ) w + \ddot {w}\).  (29)
If \(\ddot {w} + w- \delta w \ge \sqrt {\dot {w}^{2}+\ddot {w}^{2}}\), then \(w(\alpha ) \ge \delta w\) holds for any \(\alpha \in [0, \frac {\pi }{2}]\). If \(\ddot {w} + w- \delta w \le \sqrt {\dot {w}^{2}+\ddot {w}^{2}}\), to meet (29), we must have \(\sin (\alpha + \beta ) \le \frac {w-\delta w + \ddot {w}} {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}}\), or \(\alpha + \beta \le \sin ^{-1}\left (\frac {w-\delta w + \ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right )\). Therefore, \(\alpha _{w} = \frac {\pi }{2}\) in the first case, and \(\alpha _{w} = \sin ^{-1}\left (\frac {w-\delta w + \ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right ) - \beta \) otherwise.
Case 4
(\(\dot {w}>0\)and\(\ddot {w}<0\))
Let \(\beta = \sin ^{-1} \left (\frac {-\ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right )\). We can express \(\dot {w}=\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\cos (\beta )\) and \(\ddot {w}=-\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\sin (\beta )\). Then, (28) can be rewritten as
\(\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\sin (\alpha -\beta ) \le (1-\delta ) w + \ddot {w}\).  (30)
If \(\ddot {w} + w- \delta w \ge \sqrt {\dot {w}^{2}+\ddot {w}^{2}}\), then \(w(\alpha ) \ge \delta w\) holds for any \(\alpha \in [0, \frac {\pi }{2}]\). If \(\ddot {w} +w- \delta w \le \sqrt {\dot {w}^{2}+\ddot {w}^{2}}\), to meet (30), we must have \(\sin (\alpha - \beta ) \le \frac {w-\delta w + \ddot {w}} {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}}\), or \(\alpha - \beta \le \sin ^{-1}\left (\frac {w-\delta w + \ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right )\). Therefore, \(\alpha _{w} = \frac {\pi }{2}\) in the first case, and \(\alpha _{w} = \min \left \{\frac {\pi }{2},\; \beta + \sin ^{-1}\left (\frac {w-\delta w + \ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right ) \right \}\) otherwise.
Case 5
(\(\dot {w}<0\)and\(\ddot {w}<0\))
Let \(\beta = \sin ^{-1} \left (\frac {-\ddot {w} } {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right )\). We can express \(\dot {w}=-\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\cos (\beta )\) and \(\ddot {w}=-\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\sin (\beta )\). Then, (28) can be rewritten as
\(-\sqrt {\dot {w}^{2}+\ddot {w}^{2}}\sin (\alpha +\beta ) \le (1-\delta ) w + \ddot {w}\).  (31)
If \(\ddot {w} +(w- \delta w) \ge 0\), then \(w(\alpha ) \ge \delta w\) holds for any \(\alpha \in [0, \frac {\pi }{2}]\). If \(\ddot {w} +(w- \delta w) \le 0\), to meet (31), we must have \(\sin (\alpha + \beta ) \ge \frac {-(w-\delta w + \ddot {w})} {\sqrt {\dot {w}^{2}+\ddot {w}^{2}}}\), or \(\alpha + \beta \le \pi - \sin ^{-1} \left (\frac {-(w-\delta w + \ddot {w})}{\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right )\). Therefore, \(\alpha _{w} = \frac {\pi }{2}\) in the first case, and \(\alpha _{w} = \min \left \{\frac {\pi }{2},\; \pi - \beta - \sin ^{-1} \left (\frac {-(w-\delta w + \ddot {w})}{\sqrt {\dot {w}^{2}+\ddot {w}^{2}}} \right ) \right \}\) otherwise.
Case 6
(\(\dot {w}<0\)and\(\ddot {w}>0\))
Clearly, (28) always holds for any \(\alpha \in [0, \frac {\pi }{2}]\). Therefore, we can take \(\alpha _{w} = \frac {\pi }{2}\).
Case 7
(\(\dot {w}=0\)and\(\ddot {w}=0\))
Clearly, (28) always holds for any \(\alpha \in [0, \frac {\pi }{2}]\). Therefore, we can take \(\alpha _{w} = \frac {\pi }{2}\).
A similar analysis applies to (11b), yielding analogous analytical forms for \(\alpha _{s}\).
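Collecting the seven cases, the computation of the largest step angle can be sketched as a single function. This is an illustrative implementation in our own notation (the names `largest_step_angle` and `w_of_alpha` are ours): `wd` and `wdd` stand for \(\dot{w}\) and \(\ddot{w}\), the arc \(w(\alpha) = w - \dot{w}\sin(\alpha) + \ddot{w}(1-\cos(\alpha))\) is taken from the derivation above, and \(w > 0\) with \(0 < \delta < 1\) is assumed:

```python
import math

def w_of_alpha(w, wd, wdd, alpha):
    # Arc value w(alpha) = w - w'*sin(alpha) + w''*(1 - cos(alpha)).
    return w - wd * math.sin(alpha) + wdd * (1.0 - math.cos(alpha))

def largest_step_angle(w, wd, wdd, delta):
    """Largest alpha in [0, pi/2] with w(alpha) >= delta*w (assumes w > 0
    and 0 < delta < 1), following the seven-case analysis above."""
    c = (1.0 - delta) * w              # the slack w - delta*w >= 0
    r = math.hypot(wd, wdd)            # sqrt(wd^2 + wdd^2)
    half_pi = math.pi / 2.0
    if wd == 0.0 and wdd == 0.0:       # Case 7: condition always holds
        return half_pi
    if wd == 0.0:                      # Case 1
        return half_pi if wdd >= -c else math.acos((c + wdd) / wdd)
    if wdd == 0.0:                     # Case 2
        return half_pi if wd <= c else math.asin(c / wd)
    if wd > 0.0 and wdd > 0.0:         # Case 3
        if c + wdd >= r:
            return half_pi
        beta = math.asin(wdd / r)
        return math.asin((c + wdd) / r) - beta
    if wd > 0.0 and wdd < 0.0:         # Case 4
        if c + wdd >= r:
            return half_pi
        beta = math.asin(-wdd / r)
        return min(half_pi, beta + math.asin((c + wdd) / r))
    if wd < 0.0 and wdd < 0.0:         # Case 5
        if c + wdd >= 0.0:
            return half_pi
        beta = math.asin(-wdd / r)
        return min(half_pi, math.pi - beta - math.asin(-(c + wdd) / r))
    return half_pi                     # Case 6: wd < 0, wdd > 0
```

For every sign pattern, the returned angle keeps \(w(\alpha) \ge \delta w\), and it is attained with equality whenever the case analysis gives an angle strictly below \(\frac{\pi}{2}\).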
Appendix 3: Details on numerical results
Tables 1 and 2 report the objective value, the number of iterations, and the computation time (in seconds) of the proposed arc-search algorithm (Algorithm 1) and the line-search algorithm (Algorithm 2) for the QCQP and Other type problems. The symbol “Unattained” indicates that an algorithm stopped prematurely, mainly because of numerical errors. We excluded the problems for which all three algorithms (Algorithm 1, Algorithm 1 with \(\ddot {\ddot {v}}\), and Algorithm 2) stopped with “Unattained.”
Cite this article
Yamashita, M., Iida, E. & Yang, Y. An infeasible interior-point arc-search algorithm for nonlinear constrained optimization. Numer. Algorithms 89, 249–275 (2022). https://doi.org/10.1007/s11075-021-01113-w