Axioms, Volume 7, Issue 2, 10.3390/axioms7020039
Review

What Do You Mean by “Nonlinear Eigenvalue Problems”?

Dipartimento di Ingegneria dell’Informazione e Scienze Matematiche, Università di Siena, I-53100 Siena, Italy
Submission received: 30 March 2018 / Revised: 5 June 2018 / Accepted: 6 June 2018 / Published: 9 June 2018

Abstract: A nonlinear eigenvalue problem is generally described by an equation of the form F(λ, x) = 0, where F(λ, 0) = 0 for all λ, and contains by definition two unknowns: the eigenvalue parameter λ and the “nontrivial” vector(s) x corresponding to it. The nonlinear dependence of F can be in either of them (and of course in both), and also the research in this area seems to follow two quite different directions. In this review paper, we try to collect some points of possible common interest for both fields.
MSC:
Primary 47J10; Secondary 47A56

    1. Introduction

    Nonlinear eigenvalue problems are generally described by equations of the form
    F(λ, x) = 0  (λ ∈ K, x ∈ E)
    where K (= R or C) is the field of real or complex numbers, and E is a real or complex Banach space that can in particular be the n-space R^n or C^n. In Equation (1), F is a continuous map of K × E into E, and it is assumed that F(λ, 0) = 0 for all scalars λ. That is to say, x = 0 solves Equation (1) trivially for all λ; one therefore looks for those λ’s (the eigenvalues of F) such that Equation (1) has a solution x ≠ 0 (an eigenvector of F corresponding to λ).
    Of course, Equation (1) contains as a (very) special case the proper eigenvalue–eigenvector equation of Linear Algebra and Linear Functional Analysis,
    F(λ, x) = Ax − λx = (A − λI)x = 0
    in which A ∈ L(E), the space of all bounded linear operators acting in E, and I is the identity map; to stress the linearity of A, we write as usual Ax rather than A(x). In addition to Equation (2), consider now the following special forms of Equation (1):
    F(λ, x) = G(λ)x = 0,
    and
    F(λ, x) = A(x) − λC(x) = 0.
    Evidently, both Equations (3) and (4) encompass the classical case in Equation (2). However, there is quite a difference between them: in Equation (3), F depends linearly on x and arbitrarily on λ, the latter dependence being driven by a map G: K → L(E), while in Equation (4) it is rather the opposite, for here it is the dependence on x that is (possibly) nonlinear, as dictated by the continuous maps A, C: E → E. One first consequence is that the terms eigenvalue/eigenvector/eigenspace retain their usual significance in the case of Equation (3), while on the contrary they have in general a poor meaning in the case of Equation (4). On the other hand, in the latter case, assuming that C(x) ≠ 0 for x ≠ 0, the eigenvalue associated with an eigenvector is uniquely determined, for A(x̂) − λ₁C(x̂) = A(x̂) − λ₂C(x̂) with x̂ ≠ 0 implies that λ₁ = λ₂, while in the former case this is not necessarily true.
    In fact, in the past decades, both Equations (3) and (4) have usually been referred to as Nonlinear Eigenvalue Problems. Those of the type in Equation (3), especially with E = K^n, have been extensively studied in Numerical Analysis and Matrix Analysis (see, for instance, the review paper [1], where the abbreviation NLEVP is used to designate them), while problems of the type in Equation (4) have formed a main subject in Nonlinear Functional Analysis and its applications to differential equations, and are at the basis of, among others, Bifurcation Theory; see, for instance, the nowadays classic book [2] or the more recent [3].
    Historically, the study of Nonlinear Eigenvalue Problems can be dated back to nearly one century ago, if we look in particular at the work—inspired by D. Hilbert and E. Landau—of E. Schmidt and A. Hammerstein, and subsequently of M. Golomb, on parameter-dependent nonlinear integral equations of the form
    λu(x) = ∫_Ω k(x, y) f(y, u(y)) dy.
    On the eastern side of Europe, the investigation of this kind of problem received a strong impulse in the former Soviet Union through the work of M.A. Krasnosel’skii and I.T. Gohberg. They were both pupils of M.G. Krein, and their subsequent work during many decades of the second half of the last century seems to have developed mainly on problems of the type in Equation (3) by Gohberg, and mainly on problems of the type in Equation (4) by Krasnosel’skii. For this reason, and in honor of these two true giants of Nonlinear Functional Analysis, I will often refer in the sequel to Equation (3) as describing problems of type G, and to Equation (4) as describing problems of type K.
    The present paper does not contain new results in either field. It is rather a tentative review, whose prominent scope is to indicate some problems and methods followed in each of the two classes, with a look at possible future interactions between them. This is done in the main part of the paper (Section 2). In fact, as to problems of type G—of which I became aware only a short time ago—my presentation (Section 2.1) will be that of a beginner, and limited to a few historical remarks, accompanied by some indications for further study and some motivation from ODEs.
    The reader will find something more about problems of type K, for which I have focused on a short account of some basic results and methods from Bifurcation Theory on the one hand (Section 2.2), and on a brief description of a very special—maybe the “closest to linear”—nonlinear eigenvalue problem on the other (Section 2.3). The latter is the p-Laplace equation
    −div(|∇u|^{p−2}∇u) = μ|u|^{p−2}u in Ω,  u = 0 on ∂Ω
    where p > 1 and Ω is a bounded domain in R^n. Here, one can prove the existence—exactly as for the classical Laplace operator, p = 2—of countably many eigenvalues, which can be naturally arranged in an increasing sequence
    μ₁ < μ₂ ≤ … ≤ μ_k ≤ …,  μ_k → +∞.
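For p = 2 this is the classical Dirichlet Laplacian, whose eigenvalues on the interval Ω = (0, 1) are μ_k = (kπ)². The increasing, divergent sequence can be observed numerically; the following is only an illustrative finite-difference sketch (the grid size is an arbitrary choice, not from the text):

```python
import numpy as np

# Finite-difference model of -u'' = mu*u on (0,1) with u(0) = u(1) = 0:
# the discrete eigenvalues approximate mu_k = (k*pi)^2 and increase to +inf.
m = 400                      # interior grid points (arbitrary)
h = 1.0 / (m + 1)
main = 2.0 * np.ones(m) / h**2
off = -1.0 * np.ones(m - 1) / h**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # tridiagonal -u''
mu = np.sort(np.linalg.eigvalsh(L))

exact = (np.arange(1, 6) * np.pi) ** 2
assert np.allclose(mu[:5], exact, rtol=1e-3)   # first eigenvalues near (k*pi)^2
assert np.all(np.diff(mu) > 0)                 # strictly increasing sequence
```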
    The importance of this example is also because it shows—via the Lusternik–Schnirelmann theory—the full strength of variational methods, and of Critical Point Theory in particular. As is well known, these consist in searching for a solution of a given equation as a critical point of a functional (i.e., a point where the derivative of the functional vanishes), and are of the utmost importance both for equations of the form
    A(x) = 0
    and for equations of the form
    A(x) = λC(x)
    the latter being in fact the nonlinear eigenvalue problem in Equation (4) for the pair (A, C). Indeed, if A = ∇f and C = ∇g, then solutions of Equation (7) are the free critical points of f, while solutions of Equation (8) are—modulo technicalities—the constrained critical points of f on the manifold M = {x : g(x) = const.}. As explained for instance in [4], the Lusternik–Schnirelmann theory not only guarantees, under appropriate assumptions on the nonlinear operators A and C, the existence of infinitely many distinct eigenvalues of Equation (8), and thus in particular of Equation (5), but also provides for them a “minimax” characterization of the form (when C = I)
    λ_n = sup_{K ∈ K_n} inf_{u ∈ K} ⟨A(u), u⟩,
    obtained via suitable families K_n of subsets K of the unit sphere. This realizes a conceptually beautiful (and also practically useful, see for instance [5]) extension of the Courant–Weyl principle for the eigenvalues of a linear compact self-adjoint operator. The variational characterization of the eigenvalues seems to be a main point of common interest for either type of nonlinear eigenvalue problem; see, e.g., [6] and the references quoted in Section 2.1 and Section 2.3.
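In the linear, finite-dimensional case, the Courant–Weyl principle that this minimax formula generalizes can be checked directly. A small numpy sketch (the random symmetric matrix and the dimension are arbitrary illustrative choices): with eigenvalues arranged decreasingly, λ_n is the minimum of the Rayleigh quotient over the span of the first n eigenvectors.

```python
import numpy as np

# Courant-Weyl check: lambda_n = min of <Ax,x>/<x,x> over the span of the
# first n eigenvectors of a symmetric matrix (eigenvalues in decreasing order).
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 5))
A = (G + G.T) / 2                        # arbitrary symmetric test matrix
w, U = np.linalg.eigh(A)                 # ascending eigenvalues
order = np.argsort(w)[::-1]              # reorder to decreasing
w, U = w[order], U[:, order]

n = 3
V = U[:, :n]                             # orthonormal basis of the optimal subspace
restricted = V.T @ A @ V                 # A restricted to that subspace
min_rayleigh = np.min(np.linalg.eigvalsh(restricted))
assert np.isclose(min_rayleigh, w[n - 1])
```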
    In the second part of the paper (Section 3), we return to Equation (1) and look at the case in which a small “perturbation parameter” ϵ enters the problem, originating an equation of the form
    F(ϵ, λ, x) = 0
    of which Equation (1) is seen as the unperturbed form for ϵ = 0. We consider parameter-dependent forms of both Equations (3) and (4), precisely
    G(ϵ, λ)x = 0
    and, taking C = I in Equation (4) and adding to a linear A a nonlinear perturbation ϵB,
    Ax + ϵB(x) = λx.
    In both Equations (10) and (11), one common problem is—in the light of what is done for linear operators [7]—to see how the perturbed eigenvalues λ(ϵ) (provided that they exist) depend on ϵ near a given unperturbed eigenvalue λ₀. To this purpose, we review the main points of the recent contributions [8,9], respectively, to Equation (10) and to Equation (11).
    Two more points deserve to be mentioned before closing this Introduction. The first is that, for a better understanding, Nonlinear Eigenvalue Problems—both of type G and of type K—should be set in the more general respective contexts of Nonlinear Spectral Theory. References for this are [10,11], respectively. The interested reader might look at [12] for a recent contribution to the latter. The second fact, clear enough from this Introduction, is that we have not even attempted to mention the various numerical methods used for the practical solution of Equation (3) in the case E = K^n. The reader interested in this rich and fundamental research field might look into the excellent and very recent survey paper [13].
    Let me repeat in conclusion that the only reasonable scope of this paper is to possibly arouse the curiosity of some expert in either field towards the problems treated in the other, and to give a chance of possible inspiration for further study.

    2. The Two Types of NLEVP

    2.1. Problems of Type G: (Linear) Operator- and Matrix-Valued Functions

    A good point to start a presentation of nonlinear eigenvalue problems of the type in Equation (3) is perhaps R.E.L. Turner’s paper [6] of 1968. Given a complex Hilbert space H, rather than considering the spectrum of a single bounded linear operator A acting in H, he considers for λ ∈ C operators of the form
    A_B(λ) ≡ A − ∑_{k=1}^{N} λ^k B_k
    where the B_k ∈ L(H) are given (k = 1, …, N), with B₁ = I; thus, if N = 1, we are back to the familiar A − λI considered in linear spectral theory. The spectrum of A_B(λ) is defined as the set of those λ ∈ C for which A_B(λ) fails to be a homeomorphism of H onto itself. In particular, a point λ₀ ∈ C such that A_B(λ₀) is not injective, i.e., such that the nullspace Ker(A_B(λ₀)) ≠ {0}, is an eigenvalue of A_B(λ). Note that, in the case N = 1, these definitions of spectrum and eigenvalues of A − λI yield what we usually call the spectrum and eigenvalues of A. The new point of view is that the spectrum is now attributed to the (polynomial) function of C into L(H) defined by putting
    G(λ) = A_B(λ).
    In the case H = C, the spectrum so defined consists very simply of the zeroes of the polynomial G itself. Now recall (see, e.g., [14] or [15]) that, if A is compact, self-adjoint and nonnegative, then:
    • The spectrum of A is at most countable and consists of a finite or infinite decreasing sequence of non-negative eigenvalues (λ_n):
      λ₁ ≥ λ₂ ≥ … ≥ λ_k ≥ …
      If the sequence is infinite, then λ_n → 0.
    • The eigenvectors (u_n) associated with the eigenvalues (λ_n) form an orthonormal basis of H.
    Turner first generalizes this to operators as in Equation (12) where A is compact, self-adjoint and nonnegative, B_k is self-adjoint and non-negative for k = 1, …, N, and A belongs to the Schatten class C_r (i.e., its eigenvalues (α_i) satisfy the condition ∑_i (α_i)^r < ∞) for some r < 1/2. Another basic fact concerning the spectrum of an operator A as above is the variational characterization of its positive eigenvalues (λ_n): indeed,
    λ_n = max_{x ⊥ u₁,…,u_{n−1}} ⟨Ax, x⟩/⟨x, x⟩ = min_{x ∈ [u₁,…,u_n]} ⟨Ax, x⟩/⟨x, x⟩
        = min_{V ∈ V_{n−1}} max_{x ⊥ V} ⟨Ax, x⟩/⟨x, x⟩ = max_{V ∈ V_n} min_{x ∈ V} ⟨Ax, x⟩/⟨x, x⟩
    where V is a vector subspace of H, and V_n denotes the family of all vector subspaces of dimension n.
    Turner generalizes the variational principle as follows. For x ∈ H, x ≠ 0, let Z(x) be the unique non-negative zero of the polynomial λ ↦ ⟨G(λ)x, x⟩ [6]. Note that, in the case N = 1, as ⟨G(λ)x, x⟩ = ⟨Ax, x⟩ − λ⟨x, x⟩, we have
    Z(x) = ⟨Ax, x⟩/⟨x, x⟩
    so that the function Z is the usual Rayleigh quotient of A, of which the eigenvalues are extremal values, as shown by Equation (14). Then, under the stated assumptions on A and the B_k, if moreover the eigenvectors of A_B(λ) corresponding to non-negative eigenvalues form a basis for H, the variational principles in Equations (14) and (15) hold with ⟨Ax, x⟩/⟨x, x⟩ replaced by Z(x).
    Finally, we have by definition of Z(x) that
    ⟨G(Z(x))x, x⟩ = 0 for all x ∈ H, x ≠ 0.
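In finite dimensions, Turner's functional Z(x) is easy to compute. A sketch, assuming for illustration a quadratic pencil G(λ) = A − λI − λ²B₂ with A, B₂ symmetric positive definite (the matrices are random choices, not from the text): the scalar polynomial λ ↦ ⟨G(λ)x, x⟩ then has exactly one non-negative root, and plugging it back in verifies the defining identity above.

```python
import numpy as np

# Z(x) = unique nonnegative zero of lambda -> <G(lambda)x, x> for the
# quadratic pencil G(lambda) = A - lambda*I - lambda^2*B2 (so B1 = I).
rng = np.random.default_rng(1)
n = 4
def random_spd(r, n):
    M = r.standard_normal((n, n))
    return M @ M.T + np.eye(n)           # symmetric positive definite
A = random_spd(rng, n)
B2 = random_spd(rng, n)

x = rng.standard_normal(n)
c0, c1, c2 = x @ A @ x, x @ x, x @ B2 @ x      # all > 0
# <G(lambda)x,x> = c0 - c1*lambda - c2*lambda^2: one positive, one negative root
roots = np.roots([-c2, -c1, c0])
Z = max(rt.real for rt in roots if rt.real >= 0)
# defining property of the Rayleigh functional: <G(Z(x))x, x> = 0
val = x @ (A - Z * np.eye(n) - Z**2 * B2) @ x
assert abs(val) < 1e-9
```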
    Results similar to those of Turner, and practically at the same time, were obtained by K.P. Hadeler in [16,17]. He considered several-parameter-dependent operators of the form
    A − (λ₁B₁ + λ₂B₂ + ⋯ + λ_N B_N)
    with B_j bounded self-adjoint for j = 1, …, N, and in connection with the variational property of their eigenvalues introduced the general concept of the Rayleigh functional of a matrix function, as follows. Let α ↦ T(α) be a differentiable mapping of the real interval (a, b) to the set S_n of real symmetric matrices of order n. Then, a Rayleigh functional of T is a continuous real-valued function p on R^n \ {0} such that p(x) ∈ (a, b) for all x ∈ R^n \ {0} and
    • p(cx) = p(x) if c ≠ 0;
    • ⟨T(p(x))x, x⟩ = 0;
    • ⟨T′(p(x))x, x⟩ > 0.
    The last is a definiteness condition that can be replaced by ⟨T′(p(x))x, x⟩ < 0, and is plainly satisfied in the basic case T(α) = A − αI, where T′(α) = −I. Thus, looking at Equations (16) and (17), we see that this is a sensible extension of the definition and properties of the Rayleigh quotient.
    The results of Turner and Hadeler indicated above were developed and improved by, among others, H. Langer. For instance, in [18], studying combinations T(λ) of bounded self-adjoint operators of the form in Equation (12) considered by Turner, he assumed that, for each nonzero vector x, the polynomial p_x(λ) ≡ ⟨T(λ)x, x⟩ has only real roots
    λ₁(x) ≤ λ₂(x) ≤ … ≤ λ_n(x).
    Under this assumption, he showed that the ranges Λ_i of the functions λ_i are intervals, called spectral zones, whose interiors do not overlap.
    A systematization of the spectral theory (that is, of the properties of eigenvalues and eigenvectors) of polynomial operator pencils, as had been named the families
    A(λ) = A₀ + λA₁ + ⋯ + λ^n A_n
    where λ ∈ C is a spectral parameter, and A_i, i = 1, …, n, are linear operators in a Hilbert space, was given by A.S. Markus in his book [19]. Among others, he considered in depth the problem of the factorization of pencils, which in the simplest case consists in representing a quadratic pencil A(λ) = λ²I + λB + C in the form
    A(λ) = (λI − Y)(λI − Z).
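A toy finite-dimensional check of what this factorization asserts: expanding (λI − Y)(λI − Z) = λ²I − λ(Y + Z) + YZ shows that the quadratic pencil with B = −(Y + Z) and C = YZ factors through the pair (Y, Z), and every eigenvalue of Z is then a zero of det A(λ), since λI − Z annihilates the corresponding eigenvector. The matrices below are arbitrary random choices, only for illustration.

```python
import numpy as np

# Factorization A(lambda) = (lambda I - Y)(lambda I - Z) of the quadratic
# pencil lambda^2 I + lambda B + C with B = -(Y + Z), C = Y Z.
rng = np.random.default_rng(2)
n = 3
Y = rng.standard_normal((n, n))
Z = rng.standard_normal((n, n))
B = -(Y + Z)
C = Y @ Z

lam = rng.standard_normal()                    # arbitrary test value
A_lam = lam**2 * np.eye(n) + lam * B + C
F_lam = (lam * np.eye(n) - Y) @ (lam * np.eye(n) - Z)
assert np.allclose(A_lam, F_lam)               # the two sides agree

# an eigenvalue of the right factor Z is an eigenvalue of the pencil
mu = np.linalg.eigvals(Z)[0]
A_mu = mu**2 * np.eye(n) + mu * B + C
assert abs(np.linalg.det(A_mu)) < 1e-8
```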
    The importance of many results in [19] is due to the fact that they hold for the more general case of holomorphic (i.e., analytic) operator-valued functions, namely operators A(λ) expressed as the sum of convergent power series in L(E):
    A(λ) = ∑_{n=0}^{∞} λ^n A_n.
    For an updated reference reviewing the spectral properties of self-adjoint analytic operator functions, and in particular the factorization problem, see [10]. On the other hand, for further work on the variational characterization of eigenvalues as well as for the development of the theory of Rayleigh functionals, the interested reader can look for instance into the quite recent papers by Binding, Eschwé and Langer [20], Hasanov [21], Voss [22], and Schwetlick and Schreiber [23], and the references therein.
    Let us now add some more specific indications for the case in which E = K^m, so that the function G appearing in Equation (3) takes its values in the space K^{m×m} of m × m real or complex matrices. We shall stress the finite-dimensionality of the ambient space E by using the letter M rather than G, and often the letter v rather than x for the vectors of E. A well-known reference book for the matter is the one by Gohberg, Lancaster and Rodman [24], and the very Introduction to this book explains that problems of the form
    M(λ)v = 0,  λ ∈ C, v ∈ C^m, v ≠ 0
    where M(λ) ∈ C^{m×m}, appear naturally when dealing with linear systems of higher-order ordinary differential equations (ODE) with constant coefficients:
    d^n u/dt^n + A_{n−1} d^{n−1}u/dt^{n−1} + ⋯ + A₁ du/dt + A₀u = 0
    where A_i ∈ C^{m×m} for i = 0, 1, …, n − 1. Indeed, looking for solutions of the form u(t) = e^{λt}v (λ ∈ C, v ∈ C^m) of Equation (21) leads to the equation
    e^{λt}{λ^n + λ^{n−1}A_{n−1} + ⋯ + λA₁ + A₀}v = 0
    which—as long as v ≠ 0, and putting A_n = I—is equivalent to Equation (20) with
    M(λ) = ∑_{i=0}^{n} λ^i A_i.
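The reduction of the higher-order system to first order is the standard companion linearization, and it also computes the eigenvalues of M(λ). A sketch for the monic quadratic case n = 2 (random coefficient matrices, chosen only for illustration): every eigenvalue of the companion matrix is a zero of det M(λ).

```python
import numpy as np

# Companion linearization of M(lambda) = lambda^2 I + lambda A1 + A0:
# the 2m x 2m block matrix [[0, I], [-A0, -A1]] has the same eigenvalues.
rng = np.random.default_rng(3)
m = 3
A0 = rng.standard_normal((m, m))
A1 = rng.standard_normal((m, m))
Zero, I = np.zeros((m, m)), np.eye(m)
companion = np.block([[Zero, I], [-A0, -A1]])
lams = np.linalg.eigvals(companion)            # the 2m eigenvalues of M

# each companion eigenvalue is a zero of the characteristic equation det M = 0
for lam in lams:
    M = lam**2 * I + lam * A1 + A0
    assert abs(np.linalg.det(M)) < 1e-6
```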
    Thus, e^{λ₀t}v₀ is a nontrivial solution of Equation (21) if and only if λ₀ is an eigenvalue of Equation (20), i.e., it is a zero of the characteristic equation
    det M(λ) = 0
    and v₀ ∈ Ker M(λ₀). More generally, the function
    u(t) = e^{λ₀t}{(t^k/k!)v₀ + ⋯ + (t/1!)v_{k−1} + v_k}
    is a solution of Equation (21) if and only if the vectors v₀, v₁, …, v_k satisfy the relations
    ∑_{j=0}^{l} (1/j!) (d^j M/dλ^j)(λ₀) v_{l−j} = 0,  l = 0, 1, …, k.
    Such a set of vectors v₀, v₁, …, v_k is called a Jordan chain of length k + 1 for the matrix function M(λ), corresponding to the eigenvalue λ₀ and starting with the eigenvector v₀. The above definitions extend from matrix polynomials as in Equation (23) to any analytic matrix function M(λ). It is good to see the explicit form of Equation (26), which is
    l = 0:  M(λ₀)v₀ = 0
    l = 1:  M(λ₀)v₁ + (1/1!)(dM/dλ)(λ₀)v₀ = 0
    l = 2:  M(λ₀)v₂ + (1/1!)(dM/dλ)(λ₀)v₁ + (1/2!)(d²M/dλ²)(λ₀)v₀ = 0
    ⋯
    l = k:  M(λ₀)v_k + (1/1!)(dM/dλ)(λ₀)v_{k−1} + ⋯ + (1/k!)(d^kM/dλ^k)(λ₀)v₀ = 0.
    If n = 1 in Equation (23), we have M(λ) = λA₁ + A₀; and if moreover A₁ = −I, then M(λ) = A₀ − λI. In this case, (dM/dλ)(λ₀) = −I, while (d^jM/dλ^j)(λ₀) = 0 for all j > 1, so that the above equalities reduce to (putting A₀ = A)
    (A − λ₀I)v₀ = 0
    (A − λ₀I)v₁ = v₀
    ⋯
    (A − λ₀I)v_k = v_{k−1}
    and are those defining an ordinary Jordan chain for the matrix A corresponding to λ₀ and v₀, used to represent A in its Jordan canonical form and in particular to construct a basis of the generalized eigenspace E_{λ₀}(A) associated with λ₀. We recall that this is defined as
    E_{λ₀}(A) = Ker((A − λ₀I)^p)
    where p is the least integer such that Ker((A − λ₀I)^p) = Ker((A − λ₀I)^{p+1}), and that the dimension dim E_{λ₀}(A) of E_{λ₀}(A) is equal to the algebraic multiplicity of λ₀, that is, the multiplicity of the eigenvalue as a zero of the characteristic polynomial det(A − λI). We say that λ₀ is semisimple if p = 1 in Equation (29)—that is, if the algebraic multiplicity coincides with the geometric multiplicity of λ₀, defined as dim Ker(A − λ₀I)—and that λ₀ is simple if they are both equal to 1.
    These familiar concepts from Linear Algebra, concerning the basic case M(λ) = A₀ − λI, need to be extended to analytic matrix functions M(λ). To this purpose, we quote from [25]; see also ([26], Chapter 7).
    • Let x₀ be an eigenvector corresponding to an eigenvalue λ₀. The maximal length of a Jordan chain starting at x₀ is called the multiplicity of x₀ and denoted by m(x₀). An eigenvalue λ₀ is said to be normal if it is an isolated eigenvalue and the multiplicity of each corresponding eigenvector is finite.
    • Suppose that λ₀ is a normal eigenvalue. Then, a corresponding canonical system of Jordan chains
      x₀^k, x₁^k, …, x_{m_k−1}^k  (k = 1, …, N)
      is defined by the following rules:
      (1)
      The vectors x₀¹, …, x₀^N form a basis of Ker M(λ₀) (and so N = dim Ker M(λ₀)).
      (2)
      x₀¹, x₁¹, …, x_{m₁−1}¹ is a Jordan chain of the maximal length m₁ ≡ m(x₀¹).
      (3)
      Once the vectors x₀¹, x₀², …, x₀^{k−1} (1 < k ≤ N) have been chosen, pick an eigenvector x₀^k linearly independent from x₀¹, x₀², …, x₀^{k−1} and form a Jordan chain x₀^k, x₁^k, …, x_{m_k−1}^k of the maximal length m_k ≡ m(x₀^k).
    • A canonical system is not defined uniquely; however, the numbers m₁, m₂, …, m_N do not depend on the choice of Jordan chains and are called the partial multiplicities of the eigenvalue λ₀. The number Q(λ₀) ≡ m₁ + ⋯ + m_N is the algebraic multiplicity of the eigenvalue λ₀.
    The next statement—which is based on results found in [27]—proves that these definitions are a coherent generalization of the usual ones.
    Proposition 1.
    An eigenvalue λ₀ is a zero of det M(λ) of multiplicity Q(λ₀).
    Based on Proposition 1, the definitions of simple and semisimple eigenvalue carry over to the case of matrix polynomials, and more generally to analytic matrix functions. For instance, one may check that the matrix function
    M₂(λ) = [ λ − 1 + e^{−λ}    0    ]
            [ 0                λ + 1 ]
    considered in [8] has λ₀ = 0 as a double (i.e., of algebraic multiplicity 2), nonsemisimple (i.e., of geometric multiplicity 1) eigenvalue, with Jordan chain
    H₀ = (1, 0)^T,  H₁ = (α, 0)^T
    for any α ∈ R. This example also shows that, in the nonlinear case, generalized eigenvectors need not be linearly independent. Indeed, in the construction (and notation) recalled above, the generating vectors x₀¹, …, x₀^N of the system of Jordan chains are chosen to be linearly independent, but this is not necessarily so for the vectors in each corresponding chain, generated by the rules in Equation (27).
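The claims about this example can be checked numerically, reading the matrix function as M₂(λ) = diag(λ − 1 + e^{−λ}, λ + 1) as above (a verification sketch, not part of the original text): the chain relations M(0)H₀ = 0 and M(0)H₁ + M′(0)H₀ = 0 hold for every α, and det M₂ vanishes to second order at λ₀ = 0.

```python
import numpy as np

# The matrix function M2(lambda) of the example and its lambda-derivative.
def M2(lam):
    return np.array([[lam - 1.0 + np.exp(-lam), 0.0], [0.0, lam + 1.0]])
def dM2(lam):
    return np.array([[1.0 - np.exp(-lam), 0.0], [0.0, 1.0]])

H0 = np.array([1.0, 0.0])
for alpha in (0.0, 1.0, -3.5):               # the chain works for any alpha
    H1 = np.array([alpha, 0.0])
    assert np.allclose(M2(0.0) @ H0, 0)
    assert np.allclose(M2(0.0) @ H1 + dM2(0.0) @ H0, 0)

# det M2 has a double zero at 0: det M2(0) = 0 and (det M2)'(0) = 0
eps = 1e-6
d_det = (np.linalg.det(M2(eps)) - np.linalg.det(M2(-eps))) / (2 * eps)
assert abs(np.linalg.det(M2(0.0))) < 1e-12 and abs(d_det) < 1e-6
```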
    An especially important source for the study of NLEVP are the Delay Differential Equations (DDE), or systems of them. For instance, in [26] the so-called Wright equation is considered:
    x′(t) = −αx(t − 1)[1 + x(t)]
    where α > 0. The objective is to determine the periodic orbits (if any) of Equation (32). To do this, one must first look at the linearized equation of Equation (32) near x ≡ 0, which is
    x′(t) = −αx(t − 1).
    Solutions e^{λt} of this exist iff λ satisfies the characteristic equation
    λe^λ + α = 0.
    For α = π/2, this has λ = iπ/2 as a simple purely imaginary root, corresponding to the periodic solution e^{iπt/2}. Studying the properties of these nonlinear eigenvalues, that is, of the roots λ(α) of Equation (34) as functions of α, and using deep topological and functional-analytic results from [26], it is possible to demonstrate that Equation (32) has a Hopf bifurcation at α = π/2, and that for every α > π/2, Equation (32) has a nonconstant periodic solution with period close to 4. Finally, the authors show that, for p > 4, there is a periodic solution of Equation (32) of period p.
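The purely imaginary root and its motion with α are easy to verify numerically (an illustrative sketch; the Newton iteration and the offset 0.1 are arbitrary choices, not from the text): λ = iπ/2 satisfies the characteristic equation exactly at α = π/2, and tracking the root for slightly larger α shows its real part becoming positive, consistent with the loss of stability at the Hopf bifurcation.

```python
import cmath
import math

# Check the root lambda = i*pi/2 of lambda*e^lambda + alpha = 0 at alpha = pi/2.
alpha = math.pi / 2
lam = 1j * math.pi / 2
residual = lam * cmath.exp(lam) + alpha
assert abs(residual) < 1e-12

# Newton's method tracks the root lambda(alpha) past alpha = pi/2.
def char_root(alpha, lam0, iters=50):
    lam = lam0
    for _ in range(iters):
        f = lam * cmath.exp(lam) + alpha
        df = (1 + lam) * cmath.exp(lam)      # derivative of lambda*e^lambda
        lam = lam - f / df
    return lam

root = char_root(math.pi / 2 + 0.1, 1j * math.pi / 2)
assert root.real > 0                          # root crosses into Re > 0
```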
    One can also consider systems of DDE, for instance
    x′(t) = [ 1   0 ] x(t) + [ −1  0 ] x(t − 1)
            [ 0  −1 ]        [  0  0 ]
    whose characteristic matrix is precisely that displayed in Equation (30). The general form of a system of N delay differential equations with delays τ₁, …, τ_N is
    x′(t) = A₀x(t) + ∑_{i=1}^{N} A_i x(t − τ_i)
    with A₀, A_i ∈ C^{N×N}, and the corresponding characteristic matrix is
    M(λ) = λI − A₀ − ∑_{i=1}^{N} A_i e^{−λτ_i}.
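A quick check that ties the two displays together, with the matrices read off from the two-dimensional delay system above (a verification sketch, not from the text): building M(λ) = λI − A₀ − A₁e^{−λ} entrywise reproduces diag(λ − 1 + e^{−λ}, λ + 1), and λ = 0 is indeed an eigenvalue.

```python
import numpy as np

# Characteristic matrix of x'(t) = A0 x(t) + A1 x(t-1) with the matrices of
# the example system; it equals diag(lambda - 1 + e^{-lambda}, lambda + 1).
A0 = np.array([[1.0, 0.0], [0.0, -1.0]])
A1 = np.array([[-1.0, 0.0], [0.0, 0.0]])

def M(lam, tau=1.0):
    return lam * np.eye(2) - A0 - A1 * np.exp(-lam * tau)

# lambda = 0 is an eigenvalue, with eigenvector (1, 0)^T
assert abs(np.linalg.det(M(0.0))) < 1e-12
assert np.allclose(M(0.0) @ np.array([1.0, 0.0]), 0)
```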
    More general forms of Equation (35) are considered in Section 3.

    2.2. Problems of Type K: Nonlinear Operators and Bifurcation

    Throughout this Section, E will be a real Banach space, of finite or infinite dimension. Originally, bifurcation theory deals with the local study of Equation (1) near a point (λ₀, 0) ∈ R × E, and studies precisely the conditions under which, from the given point (λ₀, 0) of the line R × {0} ⊂ R × E of the trivial solutions of Equation (1), there bifurcates a branch of nontrivial solutions, that is, of solutions (λ, x) with x ≠ 0. Of course, the basic situation that comes to one’s mind is the case F(λ, x) = Ax − λx, with λ₀ an eigenvalue of the linear operator A, the “branch” being here the special subset {λ₀} × (Ker(A − λ₀I) \ {0}) of R × E. The interesting case is when F depends in a less obvious way on λ and x; an easy example of what we mean is given for instance by the equation
    F(λ, x) ≡ ax + bx³ − λx = 0,  (λ, x) ∈ R²
    in which the parabola λ = a + bx² bifurcates at the point (a, 0) from the line of the trivial solutions. For a motivating introduction to the theory, and a discussion of some important physical problems that fall in this context, an excellent source is the old review paper by Stakgold [28].
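The scalar example is simple enough to verify directly (an illustrative sketch with sample values a = 1, b = 2; any a, b behave the same way): every point of the parabola λ = a + bx² solves F(λ, x) = 0 exactly, and the branch meets the trivial line at (a, 0).

```python
import numpy as np

# Nontrivial zeros of F(lambda, x) = a*x + b*x^3 - lambda*x lie on the
# parabola lambda = a + b*x^2, which branches off the trivial line at (a, 0).
a, b = 1.0, 2.0
F = lambda lam, x: a * x + b * x**3 - lam * x

for x in np.linspace(-1, 1, 11):
    lam = a + b * x**2                 # a point on the bifurcating branch
    assert abs(F(lam, x)) < 1e-12      # it solves the equation

assert F(a, 0.0) == 0.0                # the branch meets the trivial solutions
```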
    The previous “naive” idea of bifurcation needs to be made both more precise and more general, and this is done by saying that (λ₀, 0) is a bifurcation point for Equation (1) if any neighborhood of (λ₀, 0) in R × E contains nontrivial solutions of Equation (1). For this definition to make sense, it is enough that F be defined in an open set U ⊂ R × E with (λ₀, 0) ∈ U, and this is what we assume from now on. For the next step, we further assume that F is differentiable at the point (λ₀, 0), so that F can be linearized near that point as
    F(λ, x) = F(λ₀, 0) + D_λF(λ₀, 0)(λ − λ₀) + D_xF(λ₀, 0)x + R(λ, x) = D_xF(λ₀, 0)x + R(λ, x)
    where the remainder term R satisfies
    R(λ, x) = o(‖(λ, x) − (λ₀, 0)‖) as (λ, x) → (λ₀, 0).
    Some more regularity on F immediately yields a necessary condition for bifurcation:
    Theorem 1.
    Suppose that F is of class C¹ in a neighborhood of (λ₀, 0). If D_xF(λ₀, 0) is a homeomorphism of E onto itself, then (λ₀, 0) cannot be a bifurcation point for Equation (1).
    Proof. 
    The assumption implies, via the Implicit Function Theorem, that there is a neighborhood I × V of (λ₀, 0) such that, for any λ ∈ I, there is a unique x = x(λ) ∈ V such that F(λ, x) = 0. As by assumption F(λ, 0) = 0 for any λ, we must have x(λ) = 0 for λ ∈ I, so that there is no nontrivial solution of Equation (1) in the neighborhood I × V of (λ₀, 0). ☐
    For simplicity, we shall henceforth consider only the special case
    F(λ, x) = A(x) − λx
    where A(0) = 0 and A is of class C¹ near x = 0. Here, D_xF(λ₀, 0) = A′(0) − λ₀I, and we have a more explicit form of the remainder term in the linearized form in Equation (36) of F: for we can write A(x) = A′(0)x + B(x) with B(x) = o(‖x‖) as x → 0, so that Equation (37) yields
    F(λ, x) = A′(0)x + B(x) − λx
            = (A′(0) − λ₀I)x + B(x) − (λ − λ₀)x
    and comparing this with Equation (36), we see that R(λ, x) = B(x) − (λ − λ₀)x in this special case. Summing up, the equation we want to study is
    A(x) − λx = 0
    with A(0) = 0 and A of class C¹ near x = 0, and it can be written as
    Tx − λ₀x + B(x) = (λ − λ₀)x
    where T ≡ A′(0) and B(x) = o(‖x‖) as x → 0. The necessary condition for bifurcation implicitly stated in Theorem 1 can now be rephrased as follows:
    (λ₀, 0) bifurcation point of A(x) − λx = 0  ⇒  λ₀ ∈ σ(A′(0)).
    The standard case considered in the literature is when λ₀ is in the point spectrum of A′(0), and we formalize this more precisely under the form of a basic assumption, which is plainly satisfied if dim E < ∞:
    H0. 
    λ₀ is an isolated eigenvalue of T ≡ A′(0), and T − λ₀I is a Fredholm operator of index zero.
    Let us recall (see, e.g., [14]) that a bounded linear operator L between two Banach spaces E and F is said to be a Fredholm operator if its nullspace Ker L has finite dimension and its range Im L is closed and has finite codimension; in this case, the index of L, ind L, is defined as
    ind L = dim Ker L − codim Im L.
    Thus, if dim E = dim F < ∞, then any linear operator is Fredholm of index zero. From the Riesz–Schauder theory of such operators (see, e.g., [15]), it is known that the nullspaces Ker L^j (j > 1) are also finite-dimensional, and that they stabilize for j sufficiently large; with reference to the case L = T − λ₀I, this means that there exists a least integer p such that Ker(T − λ₀I)^p = Ker(T − λ₀I)^{p+1}, and moreover one has
    E = Ker(T − λ₀I)^p ⊕ Im(T − λ₀I)^p.
    It follows in particular that the algebraic multiplicity of λ₀ is finite, where in general this is defined—consistently with the definition recalled in Section 2.1 for the case dim E < ∞—as the dimension of the subspace
    ⋃_{j=1}^{∞} Ker(T − λ₀I)^j.
    In the following, when speaking of the multiplicity of an eigenvalue, we refer to the algebraic multiplicity. We recall that this coincides with the geometric multiplicity dim Ker(T − λ₀I) when T is a self-adjoint operator in a Hilbert space.
    Remark 1.
    The assumption H0 alone is not sufficient to guarantee that an eigenvalue of the “linear part” T of A at 0 is a bifurcation point for A. To see this, consider the example ([29], Chapter 11) given by the system
    x + y³ = λx
    y − x³ = λy.
    Here, E = R² and we have (in our notations) T = I, λ₀ = 1 and B(x, y) = (y³, −x³). Multiplying the first equation by y and the second by x, and subtracting the second from the first, we obtain x⁴ + y⁴ = 0, showing that Equation (40) has no nontrivial solution whatsoever. One way of seeing this is that the two-dimensional eigenspace associated with λ₀ is completely destroyed by the addition of the perturbing term B.
    Three typical situations are then considered, each of them guaranteeing bifurcation from λ₀, and described by the following assumptions, respectively:
    H1. 
    λ₀ is a simple eigenvalue of A′(0).
    H2. 
    A is compact and λ₀ ≠ 0 is an eigenvalue of odd multiplicity of A′(0).
    H3. 
    A is a gradient operator in a Hilbert space and λ₀ is an isolated eigenvalue of finite multiplicity of A′(0).
    These assumptions call immediately for some explanation. In fact, it could be noted at once that both H2 and H3 are a strengthening of H0. However, to proceed in some order, in the remaining part of this subsection we shall give a precise statement for each of the three bifurcation results roughly indicated above, preceded by a comment on the respective assumption and followed by an indication of the proof.
    Thus, starting with H2, we recall that, if A is compact, then the linear operator A′(0) is compact, too [30]. Therefore, H0 is redundant in this case, as it expresses a basic spectral property of any such operator [14].
    Theorem 2.
    If H2 is satisfied, then λ₀ is a bifurcation point for Equation (38). Moreover, it is a global bifurcation point in the following sense: if S denotes the closure in R × E of the set of nontrivial solutions of Equation (38), then S ∪ {(λ₀, 0)} has a connected subset S_{λ₀} containing (λ₀, 0), and which is either unbounded in R × E or contains a point (λ₁, 0) with λ₁ an eigenvalue of odd multiplicity of T.
    Proof. 
    The proof relies on the Leray–Schauder degree. Roughly speaking, this is a topological tool to detect the fixed points of a compact map, and can be briefly introduced as follows (see, for instance, [3] (Part I) for a complete presentation). Suppose we have a continuous compact map C of E into itself and a bounded open set Ω ⊂ E with 0 ∈ Ω, and suppose that C(x) ≠ x for x ∈ ∂Ω. Then, there exists an integer, denoted d(I − C, Ω, 0) and called the (Leray–Schauder) degree of I − C relative to the set Ω and to the point 0, having the following properties:
    (i)
    If d(I − C, Ω, 0) ≠ 0, then there exists an x ∈ Ω such that x = C(x).
    (ii)
    d(I, Ω, 0) = 1.
    (iii)
    If Ω₀ ⊂ Ω and I − C has no zeroes in Ω \ Ω̄₀, then d(I − C, Ω, 0) = d(I − C, Ω₀, 0).
    (iv)
    Suppose C₁, C₂ : E → E are compact maps, and put
    H(t, x) = x − [C₁(x) + t(C₂(x) − C₁(x))],  t ∈ [0, 1], x ∈ E.
    If H(t, x) ≠ 0 for t ∈ [0, 1] and x ∈ ∂Ω, then
    d(I − C₁, Ω, 0) = d(I − C₂, Ω, 0).
    (v)
    If C is a linear compact map and I − C is injective, then
    d(I − C, Ω, 0) = (−1)^ν
    where ν is the number of eigenvalues > 1 of C, each counted with its algebraic multiplicity.
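Property (v) is the computational heart of the argument, and in finite dimensions it reduces to a sign of a determinant: for a linear map C with I − C injective, the (Brouwer) degree on a ball around 0 is sign det(I − C) = (−1)^ν. A diagonal example keeps the eigenvalue count transparent (the chosen eigenvalues are arbitrary illustrative values):

```python
import numpy as np

# Property (v) in finite dimensions: d(I - C, Omega, 0) = sign det(I - C)
# = (-1)^nu, with nu = number of eigenvalues of C greater than 1.
mu = np.array([3.0, 2.0, 0.5, -1.0])     # eigenvalues of C (two exceed 1)
C = np.diag(mu)
nu = int(np.sum(mu > 1))                 # here nu = 2
sign = np.sign(np.linalg.det(np.eye(4) - C))
assert nu == 2
assert sign == (-1) ** nu                # det(I - C) = (-2)(-1)(0.5)(2) = 2 > 0
```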
    To prove that λ₀ is a bifurcation point, it is enough to show that for any sufficiently small r > 0 there exists a solution (λ_r, x_r) of Equation (38) with λ_r ∈ [λ₀ − r, λ₀ + r] and ‖x_r‖ = r. Thus, let B_r = {x ∈ E : ‖x‖ < r} be the open ball centered at x = 0 and with radius r; we consider the degree of various maps with respect to this neighborhood of 0. Precisely, assume for instance λ₀ > 0 and write A(x) − λx = −λ(x − μA(x)), μ = 1/λ, for λ near λ₀. Consider thus the equivalent equation
    x − μA(x) = 0
    and let μ vary in an interval [μ̲, μ̄] containing μ₀ = 1/λ₀ as an interior point and no other characteristic values (as the reciprocals of the nonzero eigenvalues are named) of T ≡ A′(0) except μ₀. Assume by way of contradiction that x − μA(x) ≠ 0 for ‖x‖ = r and μ ∈ [μ̲, μ̄]; then, using the Homotopy invariance Property (iv) with C₁ = μ̲A, C₂ = μ̄A, we would have
    d(I − μ̲A, B_r, 0) = d(I − μ̄A, B_r, 0).
    On the other hand, for small r > 0, using again Property (iv), we have
    d(I − μ̲A, B_r, 0) = d(I − μ̲T, B_r, 0)
    because I − μ̲A is homotopic to I − μ̲T on B_r; indeed, since the latter operator is a homeomorphism and since B(x) = o(‖x‖) as x → 0, we have (diminishing r if necessary)
    ‖x − μ̲(Tx + tB(x))‖ ≥ ‖x − μ̲Tx‖ − μ̲t‖B(x)‖ ≥ k‖x‖
    for some k > 0 and for all (t, x) ∈ [0, 1] × B̄_r. Similarly,
    d(I − μ̄A, B_r, 0) = d(I − μ̄T, B_r, 0).
    However, using Property (v), we have
    d(I − μ̲T, B_r, 0) = (−1)^ν̲,  d(I − μ̄T, B_r, 0) = (−1)^ν̄
    where ν̄ = ν̲ + h, with h an odd integer (the algebraic multiplicity of λ₀); therefore, the two degrees in Equation (45) are different, contradicting the previous equalities in Equations (42)–(44). This proves that x_r − μ_rA(x_r) = 0 for some x_r ∈ ∂B_r and some μ_r ∈ [μ̲, μ̄], and therefore that there is bifurcation from (μ₀, 0) for Equation (41), or equivalently from (λ₀, 0) for Equation (38). The proof that, under the stated assumptions, the bifurcation has a global character, in the sense described by the statement of Theorem 2, requires the much deeper topological analysis performed by P.H. Rabinowitz in his famous paper [31]. ☐
    We now go on to comment on assumption H3, and to briefly discuss the corresponding bifurcation result. For the next definition, and the statements following it, see for instance [2] or [30].
    Definition 1.
    Let H be a real Hilbert space with scalar product denoted.. An operatorA:HH is said to be a gradient (or potential) operatorif there exists a differentiable functionala:HR such that
⟨A(x), y⟩ = a′(x)y for all x, y ∈ H.
One then writes A = ∇a; the functional a—the potential of A—is uniquely determined by the requirement that a(0) = 0, and is explicitly given by the formula
a(x) = ∫_0^1 ⟨A(tx), x⟩ dt.
A bounded linear operator is a gradient if and only if it is self-adjoint. Moreover, if a gradient operator A is differentiable at a point x0, then A′(x0) is self-adjoint.
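The explicit formula for the potential can be checked numerically; the following sketch (my own toy example, with a hypothetical gradient operator A, not taken from the references) verifies that a(x) = ∫_0^1 ⟨A(tx), x⟩ dt recovers the potential of a simple gradient map on R³.

```python
import numpy as np

# Toy gradient operator on R^3: A(x) = Mx + x**3 (componentwise), M symmetric;
# its potential, normalized by a(0) = 0, is a(x) = 0.5*<Mx, x> + 0.25*sum(x_i^4).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
A = lambda x: M @ x + x**3
a = lambda x: 0.5 * (x @ M @ x) + 0.25 * np.sum(x**4)

# Check the representation a(x) = ∫_0^1 <A(tx), x> dt by trapezoidal quadrature.
x = np.array([0.7, -0.3, 1.1])
t = np.linspace(0.0, 1.0, 20001)
vals = np.array([A(ti * x) @ x for ti in t])
integral = np.sum(0.5 * (vals[1:] + vals[:-1])) * (t[1] - t[0])

assert abs(integral - a(x)) < 1e-6
```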
    Theorem 3.
If H3 is satisfied, then λ0 is a bifurcation point of Equation (38). Moreover, for each r > 0 sufficiently small, Equation (38) has at least two distinct solutions (λr, xr) such that ‖xr‖ = r.
    Proof. 
The proof makes use of the Lyapunov–Schmidt method (see, for instance, ([3], Chapter 2) or ([29], Chapter 11)), which allows one to reduce the infinite-dimensional problem in Equation (38) to a problem in the finite-dimensional space Ker(T − λ0I). Indeed, consider the equivalent form Equation (39) of Equation (38), and rewrite it as
    Lx+B(x)=δx
where L = T − λ0I and δ = λ − λ0. Now recalling that T = A′(0), the assumption H3 implies that H is the orthogonal sum
H = Ker L ⊕ Im L.
    Then, lettingP,Q denote the orthogonal projections ofH ontoKerL andImL, respectively, we have
x = Px + Qx ≡ v + w
    and using this in Equation (48), we obtain the equivalent system
    PB(v+w)=δv
    Lw+QB(v+w)=δw.
The restriction L|Im L of L to Im L is a homeomorphism of Im L onto itself. A standard application of the Implicit Function Theorem, together with the condition B(x) = o(‖x‖) as x → 0, then allows one to solve the complementary equation, Equation (52), in the form
    w=w(δ,v)
with w(0, 0) = 0, where δ and v belong to suitably small neighborhoods J and V of δ = 0 and v = 0, respectively, in R and in Ker L. Replacing this in Equation (51) first yields
⟨PB(v + w(δ, v)), v⟩ = δ‖v‖²
    whence, applying once more the Implicit Function Theorem, one can recoverδ as a function ofv,
    δ=δ(v),δ(0)=0,
    forv in a neighborhoodV0V of 0 inKerL. Finally, putting
    ϕ(v)=w(δ(v),v),vV0
    and replacing this in Equation (51), one is left with the finite-dimensional equation (thebifurcation equation)
F0(v) ≡ PB(v + ϕ(v)) = δ(v)v.
    Any solutionvV0,v0, of this equation will give rise to a solution(δ,x),x0,
    (δ,x)=(δ(v),v+ϕ(v))
of the original Equation (48), and the continuity (in fact, C1 regularity) of the maps δ = δ(v), w = w(δ, v) will ensure that this solution (δ, x) stays in a given small neighborhood of (0, 0) in R × H provided that v is small enough. Thus, proving bifurcation from λ0 for Equation (38)—or equivalently, bifurcation from δ = 0 for Equation (48)—reduces to proving that Equation (57) has solutions v ≠ 0 of arbitrarily small norm.
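The reduction scheme above can be followed concretely in a two-dimensional toy problem (my own illustration; the operators T and B are hypothetical, chosen so that the projections P, Q are coordinate projections): the complementary equation is solved for w, and the bifurcation equation then determines δ(v).

```python
import numpy as np

# Toy Lyapunov-Schmidt reduction in H = R^2 for Lx + B(x) = delta*x, with
# T = diag(1, 3), lambda0 = 1, hence L = T - I = diag(0, 2), Ker L = span{e1}:
#   P-equation (on Ker L):  v^3 + w^2 = delta*v
#   Q-equation (on Im L):   2w + v^2  = delta*w   =>   w(delta, v) = -v^2/(2 - delta)
T = np.diag([1.0, 3.0])
B = lambda x: np.array([x[0]**3 + x[1]**2, x[0]**2])   # B(x) = o(||x||) near 0
lam0 = 1.0

v = 0.1                                    # small component along Ker L
w_of = lambda d: -v**2 / (2.0 - d)         # complementary equation solved for w
delta = 0.0
for _ in range(50):                        # fixed point for the bifurcation equation
    delta = (v**3 + w_of(delta)**2) / v

w = w_of(delta)
x = np.array([v, w])
lam = lam0 + delta

assert np.linalg.norm(T @ x + B(x) - lam * x) < 1e-12  # solves Tx + B(x) = lam*x
assert abs(delta - v**2) < 1e-2                        # delta(v) ~ v^2 to leading order
```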
    Remark 2.
The Lyapunov–Schmidt reduction can be applied more generally, and with minor modifications, in a Banach space E whenever the basic assumption H0 (i.e., that L = T − λ0I is Fredholm of index zero) holds and is supplemented by the transversality condition
Ker L ∩ Im L = {0}
which is plainly satisfied when T is self-adjoint, as the two subspaces in Equation (58) are then orthogonal. Note that Equation (58) is in general equivalent to Ker L = Ker L², and thus to the fact that the algebraic and geometric multiplicities of λ0 coincide. H0 and Equation (58) imply the direct decomposition of E into (closed) subspaces as in Equation (49), and therefore allow for the same reduction on taking for P, Q the (continuous) projections associated with Equation (49).
Returning to the proof of Theorem 3, we now bring in the assumption that the whole of A, and therefore also its “nonlinear part” B, is a gradient. Here, we limit ourselves to giving the main idea of the particularly clear proof provided by C. Stuart [32]. Thus, let f be such that ∇f = L + B, and consider the reduced functional f0 : V0 ⊂ Ker L → R defined by putting
f0(v) ≡ f(v + ϕ(v)).
    Moreover, for smallr>0 put
Mr = {v ∈ V0 : g(v) ≡ ‖v‖² + ‖ϕ(v)‖² = r²}.
Then, Mr is a level set of the C1 functional g, and is compact because it is a closed and bounded subset of the finite-dimensional space Ker L. Thus, f0 attains its minimum and its maximum on Mr, and if v0 ∈ Mr is such an extremal point we have, by the Lagrange multiplier rule,
∇f0(v0) = λ∇g(v0).
Performing the computations of ∇f0(v0) and ∇g(v0) by the definitions in Equations (59) and (60), and using the fact that w(δ, v) satisfies the complementary equation, Equation (52), one checks that λ = δ(v0) and that Equation (57) is satisfied. ☐
We finally come to H1. Unlike H2 and H3, in general H1 is independent of H0, and must be supplemented with it to guarantee bifurcation. Of course, when E is finite dimensional, H0 does not play any role, and indeed H1 can in this case be viewed as a special case of H2, because any continuous map is then compact.
    Theorem 4.
    IfH0 andH1 are satisfied, thenλ0 is a bifurcation point of Equation (38).Moreover, if A is of classC2 in a neighborhood ofx=0, then near(λ0,0) the solution set of Equation (38)consists of the trivial solutions{(λ,0)} and of aC1 curve
γ(t) = (λ(t), x(t)), t ∈ ]−δ, δ[
with γ(0) = (λ0, 0) and x(t) ≠ 0 for t ≠ 0. Finally, if Ker L = [ϕ], then as t → 0
x(t) = tϕ + o(|t|),  λ(t) = λ0 + o(1).
    The statement of Theorem 4 means that near(λ0,0), the solution set of Equation (38) is topologically equivalent to the “cross”
(]−1, 1[ × {0}) ∪ ({0} × ]−1, 1[).
As to the proof, this goes for a first part along the same lines used to prove the previous Theorem 3, that is, using the Lyapunov–Schmidt decomposition in the sense indicated in Remark 2. What is specific here is that, since dim Ker L = 1, one ends up with an equation in R; a further nontrivial application of the Implicit Function Theorem then leads to the result: see, for instance, ([3], Chapter 2).

    2.3. A Very Special Nonlinear Problem: Thep-Laplace Equation

    LetΩ be a bounded open set inRn, letp>1, and letE be the Sobolev spaceW01,p(Ω), equipped with the norm
‖v‖_{W0^{1,p}} = (∫_Ω |∇v|^p dx)^{1/p}.
    That this is actually a norm inW01,p(Ω), equivalent to the standard one ofW1,p(Ω), is a consequence ofPoincaré’s inequality(see e.g., [14]), stating that
∫_Ω |v|^p dx ≤ C ∫_Ω |∇v|^p dx
for some C > 0 and for all v ∈ W0^{1,p}(Ω). Let E* = W^{−1,p′}(Ω) be the dual space of E. A (weak) solution of the p-Laplace Equation (5) is a function u ∈ E such that
    Ap(u)=λBp(u)
where λ = μ^{−1} (it will soon be clear that μ = 0 is not an eigenvalue of Equation (5)) and Ap, Bp : E → E* are defined by duality via the equations
⟨Ap(u), v⟩ = ∫_Ω |u|^{p−2}uv dx,  ⟨Bp(u), v⟩ = ∫_Ω |∇u|^{p−2}∇u·∇v dx
where u, v ∈ E and ⟨·,·⟩ denotes the duality pairing between E* and E.
The proof of the existence of countably many eigenvalues and eigenfunctions of Equation (65) relies on the Lusternik–Schnirelmann (LS) theory of critical points for an even functional on a symmetric manifold. Complete presentations of this theory, in both finite and infinite dimensional spaces, can be found, among others, in [3,4,29,33,34]. Theorem 5 below is essentially a simplified version of Theorem A in [35], save that with respect to [35] we have for expository convenience interchanged the roles of the operators A and B. Thus, let E be a real, infinite dimensional, uniformly convex Banach space with dual E*, and consider the problem
    A(u)=λB(u)
where A, B : E → E* are continuous gradient operators with potentials a, b, respectively: A = ∇a, B = ∇b. Definition 1 of gradient operator extends of course to mappings of E into E*, replacing the scalar product with the duality pairing.
Suppose that b(u) > 0 for u ≠ 0; then, the eigenvectors of Equation (67) satisfying a normalization condition b(u) = r (r > 0) are precisely the constrained critical points of a on the level set
    Mr={uE:b(u)=r}.
    The additional key assumptions that we make onA andB are as follows:
• A, B are odd (that is, A(−u) = −A(u) for u ∈ E, and similarly for B).
• A is non-negative (that is, ⟨A(u), u⟩ ≥ 0 for u ∈ E) and strongly sequentially continuous (that is, if (un) ⊂ E converges weakly to u0 ∈ E, then A(un) converges strongly to A(u0) in E*).
• B is strongly monotone in the following sense: there exist constants k > 0 and p > 1 such that, for all u, v ∈ E,
  ⟨B(u) − B(v), u − v⟩ ≥ k‖u − v‖^p.
By the above assumptions on B, Mr is symmetric (that is, u ∈ Mr ⟹ −u ∈ Mr) and sphere-like, in the sense that each ray through the origin hits Mr in exactly one point. If K ⊂ Mr is compact and symmetric, then the genus of K, denoted γ(K), is defined as
γ(K) = inf{n ∈ N : there exists a continuous odd map of K into R^n ∖ {0}}.
If V is a subspace of E with dim V = n, then γ(Mr ∩ V) = n. For n ∈ N, put
Kn(r) = {K ⊂ Mr : K compact and symmetric, γ(K) ≥ n}.
    Theorem 5.
Let A, B : E → E* be as above. Suppose moreover that a(u) ≠ 0 implies A(u) ≠ 0. For n ∈ N and r > 0, put
Cn(r) ≡ sup_{K ∈ Kn(r)} inf_{u ∈ K} a(u)
    whereKn(r) is as in Equation (69). Then
sup_{Mr} a(u) = C1(r) ≥ ⋯ ≥ Cn(r) ≥ Cn+1(r) ≥ ⋯ ≥ 0.
Moreover, Cn(r) → 0 as n → ∞, and if Cn(r) > 0, then Cn(r) is attained and is a critical value of a on Mr: thus, there exist un(r) ∈ Mr and λn(r) ∈ R such that
    Cn(r)=a(un(r))
    and
    A(un(r))=λn(r)B(un(r)).
    Here are a few indications for the Proof of Theorem 5:
(i) The sequence (Cn(r)) is non-increasing because, for any n ∈ N, we have Kn+1(r) ⊂ Kn(r), as shown by Equation (69). (ii) In addition, C1(r) = sup_{Mr} a(u) because K1(r) contains all sets of the form {x} ∪ {−x}, x ∈ Mr. (iii) The proof that Cn(r) → 0 as n → ∞, together with a lot of related information, can be found for instance in [34]. (iv) Finally, the assumption that A(u) ≠ 0 whenever a(u) ≠ 0, together with the stated continuity properties of A and B, ensures that a satisfies the crucial Palais–Smale (PS) condition on Mr at any level C > 0, needed to prove the final (and most important) assertion of the Theorem via the standard deformation methods of Critical Point Theory; see for this any of the above cited references.
Of special importance—with reference to the p-Laplace equation—is the case in which A and B have the additional property of being positively homogeneous of the same degree p − 1 > 0, meaning that A(tu) = t^{p−1}A(u) for u ∈ E and t > 0, and similarly for B. In this case, we have from Equation (47)
a(u) = ⟨A(u), u⟩/p,  b(u) = ⟨B(u), u⟩/p
so that a(u) ≠ 0 implies A(u) ≠ 0. Moreover, the use of Equation (74) in Equations (72) and (73) yields at once the relation Cn(r) = λn(r)r. In fact, here, λn(r) is independent of r > 0: to see this, it is convenient to re-parameterize the level sets putting, for R > 0,
MR = {u ∈ E : b(u) = R^p/p} = {u ∈ E : ⟨B(u), u⟩ = R^p}.
As a and b are p-homogeneous, it follows that MR = RM1, that each K ∈ Kn(R) is the image of the corresponding set in Kn(1) under the map u → Ru, and that Cn(R) = R^pCn(1). By these remarks, we thus have the equalities
λn(R)R^p/p = Cn(R) = R^pCn(1)
    showing as expected thatλn(R) is independent ofR, and precisely that
λn(R) = pCn(1) = sup_{K ∈ Kn} inf_{u ∈ K} ⟨A(u), u⟩ ≡ λn
where Kn ≡ Kn(1). From Theorem 5, we then get immediately the following statements about λn:
• sup_{M1} ⟨A(u), u⟩ = λ1 ≥ λ2 ≥ ⋯ ≥ λn ≥ ⋯;
• λn → 0 as n → ∞; and
• if λn > 0, then there exists un ∈ M1 (that is, ⟨B(un), un⟩ = 1) such that A(un) = λnB(un); in particular, λn = ⟨A(un), un⟩.
    Remark 3.
The situation just described contains as a more special case that of two linear operators A and B, in which the above formulae hold with p = 2. Suppose in particular that A acts in a real Hilbert space H and B = I; then M1 = {u ∈ H : ‖u‖ = 1} is the unit sphere in H, while A is a compact, self-adjoint, non-negative linear operator (strong sequential continuity and compactness are equivalent properties for a linear operator acting in a reflexive Banach space, see e.g., [15]). Then, Equation (75) and the statements following this formula yield a good part of the familiar spectral properties of such operators: indeed, it is not hard to see that the LS variational characterization in Equation (75) of λn reduces in this case to the classical Courant minimax principle expressed by Equation (15), so that the sequence in Equation (75) of the LS eigenvalues of A coincides with the decreasing sequence of all the eigenvalues of A, each counted with its multiplicity.
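In the finite-dimensional case of Remark 3, the first of the statements above—λ1 = sup_{M1} ⟨A(u), u⟩—can be checked directly for a symmetric non-negative matrix (a minimal sketch of my own, not from the cited references):

```python
import numpy as np

# Courant/LS characterization lambda1 = sup_{||u||=1} <Au, u> for a symmetric
# non-negative matrix A: every Rayleigh quotient lies below the top eigenvalue,
# and the top eigenvector attains the supremum.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
A = X @ X.T                                  # symmetric, non-negative definite

w, V = np.linalg.eigh(A)                     # ascending eigenvalues, orthonormal V
lam1 = w[-1]

for _ in range(1000):                        # <Au, u> <= lam1 on the unit sphere
    u = rng.standard_normal(5)
    u /= np.linalg.norm(u)
    assert u @ A @ u <= lam1 + 1e-10

u_top = V[:, -1]                             # the supremum is attained here
assert abs(u_top @ A @ u_top - lam1) < 1e-10
```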
Returning finally to the p-Laplacian, it is now a matter of applying the above information to the operators Ap, Bp defined in Equation (66). One can check (see [36,37], for instance) that they satisfy all the requirements for the application of Theorem 5. Moreover, they are evidently positively homogeneous of degree p − 1, and finally Ap is (strictly) positive, for
⟨Ap(v), v⟩ = ∫_Ω |v|^p dx > 0 for v ∈ E, v ≠ 0.
    This implies that each of the numbersλn defined in Equation (75) for the pairAp,Bp is strictly positive, whence it follows—using the last statement of Theorem 5—that the eigenvalue problem in Equation (65) for thep-Laplacian possesses an infinite sequence of eigenvaluesλn>0, each given by
λn = sup_{K ∈ Kn} inf_{v ∈ K} ∫_Ω |v|^p dx  (n = 1, 2, …)
    where
Kn = {K ⊂ {v ∈ W0^{1,p}(Ω) : ∫_Ω |∇v|^p dx = 1} : K compact and symmetric, γ(K) ≥ n}.
Setting μn = λn^{−1}, this finally proves the properties of Equation (5) stated in the Introduction, and in particular Equation (6).
    Remark 4.
For the very special properties enjoyed by the first eigenvalue μ1 in the sequence in Equation (6) and by the associated eigenfunctions, see for instance [37]. Anyway, it follows from our discussion that λ1 = μ1^{−1} is the best constant in Poincaré’s inequality, Equation (64):
λ1 = sup_{v ∈ W0^{1,p}, v ≠ 0} ∫_Ω |v|^p dx / ∫_Ω |∇v|^p dx.
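For p = 2 and Ω = (0, 1), these quantities are explicit: μ1 = π² and λ1 = 1/π². A minimal finite-difference sketch (my own discretization, assuming the standard three-point stencil) confirms this:

```python
import numpy as np

# For p = 2, Omega = (0, 1): mu1 = pi^2 is the first Dirichlet eigenvalue of
# -u'' = mu*u, and lambda1 = mu1^(-1) = 1/pi^2 is the best constant in
# Poincare's inequality  ∫|v|^2 dx <= C ∫|v'|^2 dx.
n = 400
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # discrete -d2/dx2

mu1 = np.linalg.eigvalsh(A)[0]   # smallest discrete eigenvalue (ascending order)
lam1 = 1.0 / mu1                 # best discrete Poincare constant

assert abs(mu1 - np.pi**2) < 1e-2
assert abs(lam1 - 1.0 / np.pi**2) < 1e-4
```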
To conclude this section, let us remark that research on problems related to the p-Laplacian has grown enormously in the last decades, and even remaining within the strict context of a “spectral theory” for Equation (5), one should at least mention the following relevant points: (i) the problem of the asymptotic distribution of the LS eigenvalues (along the lines of the classical Weyl’s law for the Laplacian); (ii) the question of the existence of other eigenvalues outside the LS sequence; and (iii) the Fredholm alternative for perturbed non-homogeneous versions of Equation (5). For information on these issues, we refer the reader to [37,38,39] and to the recent and very clear review paper [36]. Related material can be found in [40].

    3. Nonlinear Perturbation of an Isolated Eigenvalue

    As a way to introduce and motivate the more specific content of this section, let me start recalling a famous and beautiful result of F. Rellich in perturbation theory of linear eigenvalue problems:
    Theorem 6.
    ([41], Theorem 1). LetA(ϵ) be a family of Hermitiann×n matrices depending analytically on the real parameter ϵ for ϵ near 0. Letλ0 be an eigenvalue of multiplicitym>1 ofA=A(0). Then, for ϵ near 0,A(ϵ) possesses m eigenvalues
    λ1(ϵ),,λm(ϵ)
    and corresponding orthonormal eigenvectorsu1(ϵ),,um(ϵ); that is, for all sufficiently small ϵ, we have
    A(ϵ)ui(ϵ)=λi(ϵ)ui(ϵ)(i=1,,m).
    Moreover,λi(0)=λ0 for alli=1,,m and the functionsλi andui depend analytically on ϵ nearϵ=0.
As is well known, the “ideal” situation described by Theorem 6 for the splitting of the multiple eigenvalue does not hold in general. In Rellich’s words, “...our question about the eigenvalues reduces to asking whether or not the zeroes of a polynomial [in the case, the characteristic polynomial of a matrix whose elements depend analytically on a parameter ϵ] are themselves regular analytic functions of ϵ for small ϵ. In general the answer is no; a counterexample is λ² − ϵ. What is true is that if λ = λ(0) is a zero for ϵ = 0, then the zero λ(ϵ) can be written as a convergent (for small ϵ) power series in ϵ^{1/h} (Puiseux series), where h is the multiplicity of λ = λ(0).” The example indicated by Rellich can be displayed as
A(ϵ) ≡ [[0, 1], [ϵ, 0]] = [[0, 1], [0, 0]] + ϵ[[0, 0], [1, 0]] ≡ A + ϵB
and shows the unperturbed eigenvalue λ0 = 0 of A, of multiplicity h = 2, splitting into the two simple eigenvalues λ(ϵ) = ±ϵ^{1/2} of A(ϵ). In general, if λ0 has multiplicity h, the perturbed eigenvalue(s) λ(ϵ) will admit an expansion such as
λ(ϵ) = λ0 + ϵ^{1/h}λ1 + ϵ^{2/h}λ2 + ⋯ + ϵλh + ⋯ = λ0 + Σ_{i=1}^∞ ϵ^{i/h}λi
    For the special case thatA(ϵ) is Hermitian, using the reality ofλ(ϵ) Rellich showed in [41] that only integral powers ofϵ can have non-zero coefficients in the expansion of Equation (78), thus proving the analytic dependence onϵ of the perturbed eigenvalues as stated in Theorem 6. Rellich’s work was a main starting point for the very vast literature concerning the systematic analysis of the perturbation of eigenvalues of linear operators, both in finite and infinite dimensional spaces; see Kato’s book [7] and the references therein. Our aim in this section is to indicate some partial results about similar questions fornonlinear eigenvalue problems, both of typeG and of typeK, recently appearing in [8,9], respectively.
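Rellich’s 2×2 example, and the square-root splitting it exhibits, can be checked numerically (a small sketch):

```python
import numpy as np

# Eigenvalues of A(eps) = [[0, 1], [eps, 0]] are +-sqrt(eps): the double
# eigenvalue lambda0 = 0 of A = A(0) splits along a Puiseux (square-root)
# branch, which is not analytic in eps.
for eps in [1e-2, 1e-4, 1e-6]:
    Aeps = np.array([[0.0, 1.0], [eps, 0.0]])
    lams = np.sort(np.linalg.eigvals(Aeps).real)
    assert np.allclose(lams, [-np.sqrt(eps), np.sqrt(eps)], atol=1e-10)
```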

    3.1. A Perturbation Problem of Type G

    In the paper [8], the authors study the splitting of a multiple eigenvalue of the nonlinear eigenvalue problem, depending on the real parameterϵ,
M(λ; ϵ)v = 0,  λ ∈ C, v ∈ Cn, v ≠ 0.
    Here,M(λ;ϵ) is ann×n complex matrix having an eigenvalueλ0 forϵ=0 (i.e., detM(λ0;0)=0). As in the linear case, a perturbation theory for the eigenvalueλ0 consists in the study of the eigenvalues of Equation (79)—and of the corresponding eigenvectors—in the vicinity ofλ0, and will focus precisely on the behaviour of such eigenvalues/eigenvectors as functionsλ(ϵ),v(ϵ) of the parameterϵ forϵ near 0; one assumes to know the solutions of Equation (79) forϵ=0, i.e., to know the nullspace ofM(λ0;0). In the linear case, we have
M(λ; ϵ) = A(ϵ) − λI
    for some assigned functionA ofϵ intoCn×n, and Rellich’s theorem can be rephrased on saying that if this function is analytic and with Hermitian values, and if
dim Ker(A(0) − λ0I) = m
then there exist m pairs of analytic functions λi(ϵ), ui(ϵ) such that λi(0) = λ0, ui(0) ∈ Ker(A(0) − λ0I), each pair satisfying identically Equation (76) for ϵ near 0.
For the study of Equation (79), it is assumed that M(λ; ϵ) depends regularly on λ and ϵ in the following sense: there exists an open set Ω ⊂ C containing λ0, and an open interval I ⊂ R containing zero, such that for all ϵ ∈ I the entries of M are analytic functions of λ in Ω, and for all λ ∈ Ω the entries of M are smooth functions of ϵ in I. In the first part of [8], the authors develop previous work on the subject and consider the case in which the geometric multiplicity of λ0 (that is, the dimension of the nullspace of M(λ0; 0)) is one, while its algebraic multiplicity (that is, the multiplicity of λ0 as a root of the characteristic equation det M(λ; 0) = 0) is m > 1. Thus, λ0 is a multiple, nonsemisimple eigenvalue of M for ϵ = 0. The following notations are used in the sequel:
Mϵ ≡ ∂M/∂ϵ(λ0, 0);  Mλ ≡ ∂M/∂λ(λ0, 0);  Mλλ ≡ ∂²M/∂λ²(λ0, 0);  Mλ^m ≡ ∂^mM/∂λ^m(λ0, 0)
    Theorem 7.
[8]. Let λ0 be an eigenvalue of Equation (79) for ϵ = 0, with algebraic multiplicity equal to m and geometric multiplicity one, and with Jordan chain (H0, …, Hm−1). Let U0 be the corresponding left eigenvector. Assume that the condition U0*MϵH0 ≠ 0 holds. Then, around ϵ = 0, the eigenvalues in the vicinity of λ0 can be expanded as the branches of the Puiseux series in Equation (78), where
λ1^m = −[U0*MϵH0] / [U0*((1/1!)MλHm−1 + (1/2!)MλλHm−2 + ⋯ + (1/m!)Mλ^mH0)].
    Remark 5.
In the classical terminology of Numerical Analysis (see e.g., ([42], p. 137)), a (column) vector U ∈ Cn = Cn×1 is a left eigenvector of a matrix M if U*M = 0, where U* denotes the transpose of the conjugate. “Starring” both sides, this is equivalent to M*U = 0, that is, U is a (“right”) eigenvector of the adjoint matrix M*. With the same notations, for the scalar product in Cn we have
⟨x, y⟩ = Σ_{i=1}^n xi ȳi = y*x
(the last product being the matrix product between y* ∈ C1×n and x ∈ Cn×1), and therefore Equation (80) reads
λ1^m = −⟨MϵH0, U0⟩ / ⟨Z, U0⟩
with Z = (1/1!)MλHm−1 + (1/2!)MλλHm−2 + ⋯ + (1/m!)Mλ^mH0.
    To some extent, the proof of Theorem 7 relies on previous work by Lancaster et al. [25,43] on the perturbation of analytic matrix functions. To indicate the main idea followed to obtain Equation (80), consider that by definition the perturbedλ(ϵ),v(ϵ) have to satisfy for allϵ the condition
    M(λ(ϵ),ϵ)v(ϵ)=0.
    Now use the Taylor expansion ofM(λ,ϵ) around(λ0,0),
M(λ, ϵ) = M(λ0, 0) + Mϵϵ + (1/1!)Mλ(λ − λ0) + (1/2!)Mλλ(λ − λ0)² + ⋯
    and replaceλ with the expansion of Equation (78) forλ(ϵ). Using a similar expansion forv(ϵ),
v(ϵ) = V0 + Σ_{i=1}^∞ ϵ^{i/m}Vi
starting with an eigenvector V0 associated with λ0, and putting all this in Equation (81) yields (equating to zero the coefficients of the increasing powers of ϵ^{1/m}) m + 1 recursive equations that contain the elements of a Jordan chain built upon the unknown vectors V1, …, Vm. Solving these equations with the help of a technical lemma (Lemma 2.1 in [8]) that relates all possible Jordan chains corresponding to the same eigenvalue, one returns to the original chain (H0, …, Hm−1) and finally obtains Equation (80).
    Example 1.
    [8]. Consider the perturbed matrix
M2(λ, ϵ) = [[λ − 1 + e^{−λ} + ϵ, 0], [0, −λ + 1 + ϵ]]
    that forϵ=0 reduces to Equation (30).For the unperturbed eigenvalueλ0=0, in addition to the Jordan chain in Equation (31),one has
U0 = (1, 0)^T,  Mϵ = I,  Mλ = [[0, 0], [0, −1]],  Mλλ = [[1, 0], [0, 0]].
    Substituting these values in Equation (80),one obtains
λ1² = −2.
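Since the entry of M2 that vanishes to second order at λ0 = 0 is λ − 1 + e^{−λ} + ϵ, the perturbed eigenvalues near λ0 are the small zeros of this function, and the Puiseux prediction λ1² = −2 can be tested numerically (my own sketch, assuming this reading of the matrix):

```python
import numpy as np

# Perturbed eigenvalues of M2 near lambda0 = 0 are the small zeros of
# g(lam) = lam - 1 + exp(-lam) + eps (a double zero at 0 when eps = 0).
# Newton iteration starts on the predicted Puiseux branch lam ~ lambda1*sqrt(eps).
eps = 1e-6
g = lambda lam: lam - 1.0 + np.exp(-lam) + eps
dg = lambda lam: 1.0 - np.exp(-lam)

lam = 1j * np.sqrt(2 * eps)        # initial guess consistent with lambda1^2 = -2
for _ in range(50):
    lam = lam - g(lam) / dg(lam)

assert abs(g(lam)) < 1e-14                           # a genuine zero of g
assert abs((lam / np.sqrt(eps))**2 + 2.0) < 1e-2     # lambda1^2 = -2, up to O(sqrt(eps))
```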
    In the second part of [8], the authors consider general linear functional differential equations of the form
x′(t) = ∫_{−τmax}^0 dμ(θ) x(t + θ),  x(t) ∈ Cn
where μ : [−τmax, 0] → Cn×n is a function of bounded variation such that μ(0) = 0. We refer to Chapter 7 of [26] for a thorough discussion of this kind of problem. Note that Equation (84) contains as special cases both equations with discrete delay and equations with continuous delay: indeed, Equation (84) takes the form of Equation (35) if one lets
0 = τ0 < τ1 < ⋯ < τk ≡ τmax
and defines μ : [−τmax, 0] → Cn×n as follows:
μ(0) = 0,  μ(θ) = −Σ_{i=0, τi>−θ}^k Ai for θ ∈ (−τmax, 0),  μ(−τmax) = −Σ_{i=0}^k Ai.
On the other hand, taking in Equation (84) μ(θ) = −∫_θ^0 A(s) ds for θ ∈ [−τmax, 0], with A a continuous function from [−τmax, 0] to Cn×n, yields the system with distributed delay
x′(t) = ∫_{−τmax}^0 A(θ) x(t + θ) dθ.
The relation of Equation (84) with Equation (79) is as follows: looking for solutions x(t) = e^{λt}v (v ∈ Cn) of Equation (84) yields the equation
λv = ∫_{−τmax}^0 dμ(θ) e^{λθ} v ≡ N(λ)v
that is a non-parametric form of Equation (79) with M(λ) = λI − N(λ). The authors then consider the infinite-dimensional vector space X = Cn × L2([−τmax, 0], Cn) and a suitably defined linear operator A acting in X and having the property that Equation (84) can be rewritten as the abstract ordinary differential equation z′(t) = Az(t) in X. Moreover, λ0 is an eigenvalue of the nonlinear eigenvalue problem in Equation (86) if and only if it is an eigenvalue of the linear operator A, and in Theorem 3.1 of [8] it is shown how to build an (ordinary) Jordan chain for A corresponding to λ0 starting from a Jordan chain for λ0 as an eigenvalue of the NLEVP in Equation (86), and vice versa; for this matter, see also ([26], Chapter 7, Theorem 4.2). Further exploiting this functional-analytic point of view, the authors are then able to deal with parameter-dependent forms of Equation (84)—that is, with functions μ = μ(θ, ϵ)—and to reformulate the sensitivity formula in Equation (80) for the eigenvalues λϵ of the perturbed matrix N(λ; ϵ), corresponding to μ = μ(θ, ϵ) as in Equation (86), in terms of eigenvectors and generalized eigenvectors of the linear operator A(ϵ) acting in X. This produces a more readable formula, given in Theorem 3.2 of [8], for the coefficient λ1 of the leading term in the expansion in Equation (78).
    The concluding section of [8] shows applications of the theory to some numerical examples, that deal in particular with a planar time-delay system containing an uncertain delayτ+ϵ and with a model problem for spectral abscissa optimization.
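As a concrete scalar instance of the NLEVP in Equation (86), the textbook delay equation x′(t) = −x(t − 1) has characteristic function M(λ) = λ + e^{−λ}; its roots can be located by Newton’s method (a minimal sketch; the equation is classical, the code is my own):

```python
import numpy as np

# Characteristic roots of x'(t) = -x(t-1): zeros of M(lam) = lam + exp(-lam).
# All roots are complex; Newton's method from a rough guess finds one of the
# rightmost pair, which lies in the open left half-plane (stability).
f = lambda lam: lam + np.exp(-lam)
df = lambda lam: 1.0 - np.exp(-lam)

lam = 0.3 + 1.3j                      # rough initial guess
for _ in range(50):
    lam = lam - f(lam) / df(lam)

assert abs(f(lam)) < 1e-12            # a genuine characteristic root
assert lam.real < 0 and lam.imag > 0
```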

    3.2. A Perturbation Problem of Type K

    In [9] we have considered the following parameter-dependent version of Equation (39),
Tx + ϵB(x) = λx,  x ∈ S
where—as in Section 2.2—T is a self-adjoint bounded linear operator acting in a real Hilbert space H and having λ0 ∈ R as an isolated eigenvalue of finite multiplicity. In Equation (87), S stands for the unit sphere in H, so that S ∩ Ker(T − λ0I) is the unit sphere in some Rn. As to the nonlinear term B, we shall soon give precise assumptions, but roughly speaking we can say that the ϵ term appearing before it in Equation (87) replaces the condition B(x) = o(‖x‖) as x → 0 previously considered for bifurcation in Equation (39). Indeed, rather than looking for solutions of small norm as in Equation (39), we now look for normalized eigenvectors of the perturbed eigenvalue problem Tx + ϵB(x) = λx. Here is our result for Equation (87):
    Theorem 8.
    Let T be a self-adjoint bounded linear operator acting in a real Hilbert space H, and havingλ0 as an isolated eigenvalue of finite multiplicity. Suppose that B is aC1 map of H into itself, and suppose moreover that at least one of the following conditions is satisfied: either
    (a) 
the dimension of the nullspace N ≡ Ker(T − λ0I) is odd; or
    (b) 
    B is a gradient operator.
Then, there exist ϵ0 > 0, δ0 > 0 such that for any ϵ ∈ [−ϵ0, ϵ0], there exist λϵ ∈ [λ0 − δ0, λ0 + δ0] and xϵ ∈ S such that
    Txϵ+ϵB(xϵ)=λϵxϵ.
If moreover B is bounded on S, then λϵ → λ0 as ϵ → 0. Finally, if we suppose in addition that B(0) = 0 and that B is Lipschitz continuous in the unit ball U = {x ∈ H : ‖x‖ ≤ 1} of H, i.e., that there exists k > 0 such that
‖B(x) − B(y)‖ ≤ k‖x − y‖
    forx,yU, then putting
C = inf_{0<‖v‖≤1, v∈N} ⟨B(v), v⟩/‖v‖²,  D = sup_{0<‖v‖≤1, v∈N} ⟨B(v), v⟩/‖v‖²
the following asymptotic estimates for λϵ hold as ϵ → 0+:
λ0 + ϵC + O(ϵ²) ≤ λϵ ≤ λ0 + ϵD + O(ϵ²).
The same estimates, with reversed inequalities, hold for ϵ → 0−.
    Remark 6.
The bounds in Equation (91) are sharp in the sense that there exist perturbing operators B satisfying all the assumptions of the Theorem, and perturbed eigenvalues λ+(ϵ), λ−(ϵ) of T + ϵB, that satisfy at least one of the inequalities in Equation (91) with the equality sign. To see this, just consider a linear operator B0 acting in the finite-dimensional subspace N, and then extend it to all of H by putting B(x) = B0(v) for all x ∈ H, with v the orthogonal projection of x onto N. If B0 : N → N is taken to be self-adjoint, then it has n eigenvalues (counting multiplicities) μ01 ≤ ⋯ ≤ μ0n with normalized eigenvectors v1, …, vn, say; that is, B0vi = μ0ivi and ‖vi‖ = 1. Then, putting for each i = 1, …, n
    λϵ=λ0+ϵμ0i,xϵ=vi
we have n families of eigenvalues/eigenvectors satisfying Equation (87) for all ϵ ∈ R. We have μ0i = ⟨B0vi, vi⟩ = ⟨Bvi, vi⟩ for each i, and the variational characterization of the eigenvalues of B0 gives in particular
μ01 = inf_{0<‖v‖≤1, v∈N} ⟨B(v), v⟩/‖v‖² = C
and similarly μ0n = D. Therefore, taking λ−(ϵ) = λ0 + ϵμ01 (respectively, λ+(ϵ) = λ0 + ϵμ0n), the left-hand side (respectively, the right-hand side) of Equation (91) is satisfied with equality sign and O(ϵ²) = 0.
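The construction of Remark 6 is easy to reproduce numerically (a small sketch with hypothetical matrices T and B0 of my own choosing):

```python
import numpy as np

# T = diag(2, 2, 5) has lambda0 = 2 with eigenspace N = span{e1, e2};
# B(x) = B0(Px), with P the orthogonal projection onto N and B0 self-adjoint.
# Then T + eps*B has eigenvalues lambda0 + eps*mu0i, mu0i = eig(B0): the
# bounds of Equation (91) hold with equality and O(eps^2) = 0.
T = np.diag([2.0, 2.0, 5.0])
B0 = np.array([[1.0, 0.3], [0.3, -1.0]])          # self-adjoint on N
B = np.zeros((3, 3)); B[:2, :2] = B0              # extension B = B0 ∘ P on R^3

eps = 1e-3
mu0 = np.linalg.eigvalsh(B0)                      # mu01 <= mu02 (ascending)
lams = np.sort(np.linalg.eigvalsh(T + eps * B))

assert np.allclose(lams[:2], 2.0 + eps * mu0, atol=1e-12)  # lambda0 + eps*mu0i
assert abs(lams[2] - 5.0) < 1e-12                          # far eigenvalue untouched
```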
The first part of Theorem 8 is proved following the track indicated in Section 2.2, that is, performing the Lyapunov–Schmidt reduction of Equation (87). One non-trivial difference is that here a global version of the Implicit Function Theorem is employed in order to obtain a mapping
(δ, ϵ, v) → w(δ, ϵ, v)
defined in an open neighborhood Y1 = I1 × J1 × V1 ⊂ R × R × N of {0} × {0} × S by the rule that the ϵ-dependent complementary equation (see Equation (52)) can be solved uniquely with respect to w for each given (δ, ϵ, v) ∈ Y1. Moreover, w(0, 0, v) = 0 for any v ∈ S, and the mapping (δ, ϵ, v) → w(δ, ϵ, v) of Y1 into W is of class C1. Next, expressing δ as a C1 function δ(ϵ, v) of (ϵ, v) in a possibly smaller neighborhood J × V ⊂ J1 × V1, and putting for convenience
ϕ(ϵ, v) ≡ w(δ(ϵ, v), ϵ, v),  (ϵ, v) ∈ J × V
    we arrive at theϵ-dependent form of the bifurcation Equation (57), namely
    ϵPB(v+ϕ(ϵ,v))=δ(ϵ,v)v
    that is here accompanied by the norm constraint
‖v + ϕ(ϵ, v)‖² = ‖v‖² + ‖ϕ(ϵ, v)‖² = 1.
    A solution(λ,x) of the original problem in Equation (87) will then be given by the formulae
    λ=λ0+δ(ϵ,v),x=v+ϕ(ϵ,v).
In either of the two Cases (a) and (b) listed in Theorem 8, using as needed either of the methods (topological or variational) recalled in Section 2.2, we find for each small ϵ a solution vϵ of Equations (92) and (93). Therefore, making an appropriate choice of δ0, ϵ0 for the intervals I0 ≡ [−δ0, δ0], J0 ≡ [−ϵ0, ϵ0] and putting
δϵ = δ(ϵ, vϵ) and wϵ = ϕ(ϵ, vϵ) = w(δϵ, ϵ, vϵ)
the first part of Theorem 8, asserting the existence of at least one solution (λϵ, xϵ) ∈ I0 × S of Equation (87) for each ϵ ∈ J0, is proved with λϵ = λ0 + δϵ and xϵ = vϵ + wϵ.
Some words are now in order to explain the estimates in Equation (91). One first shows that the component wϵ of xϵ (as defined in Equation (95)) satisfies ‖wϵ‖ → 0 as ϵ → 0, uniformly with respect to vϵ, and consequently with respect to xϵ. This in turn implies that λϵ → λ0 as ϵ → 0, uniformly with respect to xϵ. Indeed, using Equation (92), we have
    ϵPB(vϵ+wϵ)=δϵvϵ
for all ϵ, whence, taking the scalar product of both members with vϵ, we obtain
ϵ⟨B(vϵ + wϵ), vϵ⟩ = δϵ‖vϵ‖².
    Therefore,
δϵ = ϵ⟨B(xϵ), vϵ⟩/‖vϵ‖².
Moreover, as xϵ = vϵ + wϵ ∈ S and wϵ → 0 as indicated above, necessarily ‖vϵ‖ → 1 as ϵ → 0. Therefore, since
|⟨B(xϵ), vϵ⟩|/‖vϵ‖² ≤ ‖B(xϵ)‖/‖vϵ‖
    it follows, by the boundedness assumption onB, that the term multiplyingϵ in Equation (97) remains bounded asϵ0, implying thatδϵ=O(ϵ) asϵ0, uniformly with respect toxϵ.
We can now prove the asymptotic formula in Equation (91) on λϵ if B satisfies Equation (89). In this respect, the utility of Equation (89) is twofold. First, it permits a significant improvement of the information on wϵ, as it yields by means of straightforward computations the estimate
‖w(δ, ϵ, v)‖ ≤ C1|ϵ|‖v‖
holding for some constant C1 > 0 and all (δ, ϵ, v) ∈ [−δ0, δ0] × [−ϵ0, ϵ0] × (U ∩ N). Moreover, Equation (89) implies via the Schwarz inequality that, for any v and w such that v, v + w ∈ U, one has
|⟨B(v + w), v⟩ − ⟨B(v), v⟩| ≤ k‖v‖‖w‖.
    Writing this forw(δ,ϵ,v) and using Equation (98), we then get the inequality
|⟨B(v + w(δ, ϵ, v)), v⟩ − ⟨B(v), v⟩| ≤ C2|ϵ|‖v‖²
with C2 = kC1, valid for all the (possible) solutions (δ, x = v + w(δ, ϵ, v)) of Equation (87) having sufficiently small ϵ. Using in turn this estimate in Equation (96) for the actual solutions (δϵ, xϵ = vϵ + wϵ), we see that as ϵ → 0
δϵ‖vϵ‖² = ϵ⟨B(vϵ + wϵ), vϵ⟩ = ϵ⟨B(vϵ), vϵ⟩ + ‖vϵ‖²O(ϵ²).
    This implies that
δϵ = ϵ⟨B(vϵ), vϵ⟩/‖vϵ‖² + O(ϵ²)
as ϵ → 0, and thus for ϵ > 0 immediately yields the estimates of Equation (91), in view of the definitions in Equation (90) of C and D.
    Example 2.
    Theorem 8 can be used to evaluate the convergence rate asϵ0 of the eigenvaluesμϵ of the nonlinear elliptic problem
−Δu = μ(u + ϵf(x, u)) in Ω,  u = 0 on ∂Ω
near an eigenvalue μ0 of the unperturbed linear problem −Δu = μu in Ω, u = 0 on ∂Ω. Here, Ω is a bounded domain in R^N (N ≥ 1) with boundary ∂Ω, and Δ = Σ_{i=1}^N ∂²/∂xi² is the familiar Laplace operator acting on sufficiently smooth real functions u defined in Ω. Under appropriate hypotheses on f, and assuming in particular that
mt² ≤ f(x, t)t ≤ Mt²  (x ∈ Ω, t ∈ R)
for some real constants 0 ≤ m ≤ M, one proves that as ϵ → 0+
μ0 − ϵμ0M + O(ϵ²) ≤ μϵ ≤ μ0 − ϵμ0m + O(ϵ²).
These inequalities can be used for actual computation, once an efficient approximation of the linear eigenvalue μ0 is available and the bounds in Equation (103) for f are known with accuracy. For instance, if f(x, t) = f(t) = t/(1 + t²), just put in Equation (104)
m = inf_{t≠0} f(t)t/t² = 0,  M = sup_{t≠0} f(t)t/t² = 1.
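A minimal numerical sketch of Example 2 in one dimension (my own finite-difference discretization on Ω = (0, 1), not taken from [9]) computes a perturbed eigenvalue by inverse iteration and checks the bounds in Equation (104) with m = 0, M = 1:

```python
import numpy as np

# Discrete analogue of Example 2 on (0,1): -u'' = mu*(u + eps*f(u)), u(0)=u(1)=0,
# with f(t) = t/(1+t^2), so that m = 0 and M = 1 in Equation (103).
n = 200
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # discrete -d2/dx2
mu0 = np.linalg.eigvalsh(A)[0]               # unperturbed eigenvalue, mu0 ~ pi^2

f = lambda u: u / (1.0 + u**2)
eps = 1e-3
u = np.sin(np.pi * h * np.arange(1, n + 1))  # start near the first eigenfunction
u /= np.linalg.norm(u)
for _ in range(100):                         # inverse iteration, nonlinear version
    u = np.linalg.solve(A, u + eps * f(u))
    u /= np.linalg.norm(u)
mu_eps = (u @ A @ u) / (u @ (u + eps * f(u)))  # generalized Rayleigh quotient

# Bounds of Equation (104): mu0 - eps*mu0*M <= mu_eps <= mu0 - eps*mu0*m
assert mu0 * (1 - eps) - 1e-6 <= mu_eps <= mu0 + 1e-6
```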

    4. Concluding Remarks, Open Problems and Applicability

    To summarize and motivate again the content of this paper, let me define it as an attempt to identify and logically re-connect (or at least give a common frame to) two important and presently distinct research areas in Mathematical Analysis and its applications that appear in the current literature under the same name ofNonlinear Eigenvalue Problems. As better explained in the Introduction, problems in these two areas are described (in abstract operator form) by the two equations
    G(λ)x=0
    (“problems of type G”) and
    A(x)λC(x)=0
    (“problems of type K”). Some basic facts and solution methods about each of the two equations are reported inSection 2.Section 3 is devoted to discuss two specific problems, one for each type, with the scope of giving samples of very recent research in either field.
    While for the problem discussed inSection 3.1 there are already concrete numerical examples [8], these are still missing for the problem presented inSection 3.2 [9]. The main aim of this final section is to partially fill this gap by further commenting (inSection 4.1) on the formula
λ0 + ϵC + O(ϵ²) ≤ λϵ ≤ λ0 + ϵD + O(ϵ²)
    proved in Theorem 8 and by finally providing, at least in a special case, a recipe ready for use in numerical simulation (Section 4.2).

    4.1. Open Problems

The basic idea standing behind the formula in Equation (108) is that if the (algebraic and geometric) multiplicity m(λ0) of the unperturbed eigenvalue λ0 of the linear operator T in Equation (87) is equal to m, then upon perturbation by a small term ϵB there are potentially m eigenvalue functions λ1(ϵ), …, λm(ϵ) of T + ϵB satisfying Equation (108). This is what actually happens for a linear operator B (essentially under the further assumption that T and B are self-adjoint), as described by Rellich’s Theorem 6. Our idea is that something of this persists also for nonlinear operators that have good similarity with the linear ones: the class of Lipschitz continuous maps considered in Theorem 8 is apparently quite close to that of bounded linear maps, and in fact properly contains the latter. Indeed, the conclusions of Theorem 8 point in this direction. However, many problems remain open, and we describe here three of them (in increasing order of interest and difficulty):
• Verify on specific examples of nonlinear ODEs/PDEs/equations in Rn—by means of explicit computation or by means of a numerical analysis—the existence of at least one “eigenvalue branch” λ(ϵ) satisfying the bounds in Equation (108) as predicted by the theory.
    • Verify by the same means that the bounds in Equation (108) are optimal by producing examples of nonlinear problems (in the same fields as above) where at least one eigenvalue function exists that satisfies the RHS (LHS) bound in Equation (108) with equality sign.
• Exhibit examples of “nonlinear splitting of the multiple eigenvalue”, that is, of nonlinear problems in which, starting from an unperturbed eigenvalue λ0 of multiplicity 2, there exist two different families λ+(ϵ), λ−(ϵ) respecting Equation (108), and possibly each satisfying the RHS (LHS) bound of it with equality sign.
Here is a very simple example that highlights the above issues, and the last one in particular. For the linear case, these questions were answered in Remark 6.
    Example 3.
    Consider the system
x + ϵx³ = λx,
y + ϵy³ = λy.
In the notation of Equation (87) and of Theorem 8, we have here H = R², T = I, λ0 = 1 and B(x, y) = (x³, y³). Solving Equation (109) with the constraint x² + y² = 1 gives the solutions
(x, y) = (0, ±1) or (x, y) = (±1, 0), with λ = 1 + ϵ;
(x, y) = (±1/√2, ±1/√2), with λ = 1 + ϵ/2.
Therefore, λ0 = 1 splits into the two eigenvalue functions (each carrying four distinct eigenvectors)
λ+(ϵ) = 1 + ϵ,  λ−(ϵ) = 1 + ϵ/2.
This is in full agreement with Equation (108), for
D = sup{⟨B(v), v⟩/‖v‖² : 0 < ‖v‖ ≤ 1, v ∈ N} = sup{(x⁴ + y⁴)/(x² + y²) : 0 < x² + y² ≤ 1} = 1
and similarly, replacing “sup” with “inf”, we find that C = 1/2.
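The splitting in Example 3 can also be checked numerically, in the spirit of the first and third open problems above. The following sketch (all function names are ours, not from the paper) applies Newton's method to the system in Equation (109) together with the constraint x² + y² = 1, recovering the two branches λ+(ϵ) and λ−(ϵ):

```python
# Newton's method on  x + eps*x^3 = lam*x,  y + eps*y^3 = lam*y,  x^2 + y^2 = 1.
# A numerical sketch of Example 3; helper names (solve3, newton_branch) are ours.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton_branch(x, y, lam, eps, tol=1e-12):
    """Newton iteration on the 3 unknowns (x, y, lam)."""
    for _ in range(50):
        F = [x + eps*x**3 - lam*x, y + eps*y**3 - lam*y, x*x + y*y - 1.0]
        if max(abs(v) for v in F) < tol:
            break
        J = [[1 + 3*eps*x*x - lam, 0.0, -x],
             [0.0, 1 + 3*eps*y*y - lam, -y],
             [2*x, 2*y, 0.0]]
        dx, dy, dl = solve3(J, [-F[0], -F[1], -F[2]])
        x, y, lam = x + dx, y + dy, lam + dl
    return x, y, lam

eps = 0.1
_, _, lam_plus = newton_branch(0.99, 0.05, 1.09, eps)   # near the (±1, 0) branch
_, _, lam_minus = newton_branch(0.71, 0.71, 1.05, eps)  # near the (±1/√2, ±1/√2) branch
print(lam_plus, lam_minus)  # expected ≈ 1 + eps and 1 + eps/2
```

Started close enough to either family of eigenvectors, the iteration converges quadratically and reproduces the two eigenvalue functions λ+(ϵ) = 1 + ϵ and λ−(ϵ) = 1 + ϵ/2 found analytically above.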

    4.2. Applicability

As indicated in Example 2, the bounds in Equation (108) proved in Theorem 8 for the operator Equation (87) can be used for concrete nonlinear elliptic problems in a bounded domain Ω ⊂ RN (N ≥ 1) such as Equation (102), or more general forms of it in which Δ is replaced by a uniformly elliptic second-order operator in divergence form. In particular, for N = 1 this applies to the Sturm–Liouville problem
−(p(x)u′)′ + q(x)u = μ(u + ϵf(x, u)) in ]a, b[,  u(a) = u(b) = 0
where p ∈ C¹([a, b]), p > 0 and q ∈ C([a, b]). The first remark on the applicability of Theorem 8 to this kind of problem, in one or more space variables, is that one need not worry about the multiplicity of the unperturbed eigenvalue because—as recalled for instance in [9]—the operator B corresponding to the nonlinear term f is a gradient operator, so that assumption b) of Theorem 8 is satisfied. It is also recalled in [9] that these problems can be cast in the operator form of Equation (87) on taking as Hilbert space the Sobolev space H01(Ω) ≡ W01,2(Ω), equipped with the scalar product
⟨u, v⟩ = ∫Ω ∇u(x)·∇v(x) dx
and that the operator B mentioned above is defined via the duality relation
⟨B(u), v⟩ = ∫Ω f(x, u(x)) v(x) dx.
The representation in Equation (114) of B is the key formula to be used to gain information on the constants C, D appearing in Equation (108), for these are defined by the formulae in Equation (90), which involve precisely the nonlinear Rayleigh quotient of B. Indeed, using Equation (114) and the fact (see e.g., [9]) that for v ∈ N we have
‖v‖² = ∫Ω |∇v(x)|² dx = μ0 ∫Ω v²(x) dx
where μ0 is the unperturbed eigenvalue and N the corresponding eigenspace, yields the following quite readable expression for D:
D = sup{⟨B(v), v⟩/‖v‖² : 0 < ‖v‖ ≤ 1, v ∈ N} = sup{∫Ω f(x, v(x)) v(x) dx / (μ0 ∫Ω v²(x) dx) : 0 < ‖v‖ ≤ 1, v ∈ N}
Thus, essentially, in the applications of the theory to elliptic PDEs or ODEs, estimating the Rayleigh quotient of B reduces to estimating the ratio appearing in the RHS of Equation (116). In turn, this can easily be obtained by pointwise bounds on f: clearly, if f satisfies Equation (103), then it follows that for every v ∈ H (and in fact for every v ∈ L²(Ω))
m ≤ ∫Ω f(x, v(x)) v(x) dx / ∫Ω v²(x) dx ≤ M.
We conclude by Equation (116) and the dual formula for C that
m/μ0 ≤ C,  D ≤ M/μ0.
Using the inequalities in Equations (117) and (108) and putting λϵ = 1/μϵ yields bounds on the perturbed eigenvalues μϵ of Equation (102). Considering for instance the right-hand side of Equation (108), we obtain
1/μϵ ≤ 1/μ0 + ϵM/μ0 + O(ϵ²) = (1/μ0)(1 + ϵM + O(ϵ²))
and doing the same with the lower bound yields
μ0/(1 + ϵM + O(ϵ²)) ≤ μϵ ≤ μ0/(1 + ϵm + O(ϵ²)).
    Remark 7.
Note that Equation (119) contains, as it should, the equality
μϵ = μ0/(1 + ϵa)
which plainly holds for the eigenvalues of Equation (102) in the linear case f(x, s) = as, a = const.
Finally, using in Equation (119) the asymptotic relation (1 + x)⁻¹ = 1 − x + O(x²) for x → 0, we obtain as ϵ → 0⁺ the formula in Equation (104), which as remarked is ready for use in numerical experiments once μ0, m and M are known. For instance, taking f(x, s) = f(s) = s/(1 + s²) and using the bounds m = 0, M = 1 (see Equation (105)) yields, for ϵ → 0⁺,
μ0 − ϵμ0 + O(ϵ²) ≤ μϵ ≤ μ0 + O(ϵ²).
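These bounds can be probed by a direct numerical experiment. The sketch below (our own function names; assumed concrete data p ≡ 1, q ≡ 0, Ω = ]0, π[, f(s) = s/(1 + s²), so μ0 = 1) computes a small-amplitude eigenvalue of the perturbed Sturm–Liouville problem by shooting: integrate −u″ = μ(u + ϵf(u)) with RK4 and bisect on μ until u(π) = 0:

```python
import math

# Shooting method for  -u'' = mu*(u + eps*u/(1+u^2)),  u(0) = u(pi) = 0.
# A small-amplitude sketch under assumed data p = 1, q = 0, f(s) = s/(1+s^2);
# the function names (endpoint, shoot) are ours, not from the paper.

def endpoint(mu, eps, slope=1e-2, steps=4000):
    """Integrate from x = 0 with u(0) = 0, u'(0) = slope; return u(pi) via RK4."""
    h = math.pi / steps
    u, p = 0.0, slope

    def rhs(u, p):
        return p, -mu * (u + eps * u / (1.0 + u * u))

    for _ in range(steps):
        k1u, k1p = rhs(u, p)
        k2u, k2p = rhs(u + 0.5*h*k1u, p + 0.5*h*k1p)
        k3u, k3p = rhs(u + 0.5*h*k2u, p + 0.5*h*k2p)
        k4u, k4p = rhs(u + h*k3u, p + h*k3p)
        u += h * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        p += h * (k1p + 2*k2p + 2*k3p + k4p) / 6.0
    return u

def shoot(eps, lo=0.85, hi=0.95, iters=60):
    """Bisect on mu until u(pi) = 0 (first eigenvalue branch)."""
    flo = endpoint(lo, eps)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fmid = endpoint(mid, eps)
        if (flo > 0) == (fmid > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = 0.1
mu_eps = shoot(eps)
print(mu_eps)  # lies in [mu0 - eps*mu0, mu0], i.e. in [0.9, 1.0]
```

Since f(s)/s = 1/(1 + s²) ≈ 1 at small amplitudes, the computed μϵ should sit near the left endpoint of the interval, close to μ0/(1 + ϵ) ≈ 0.9091, in agreement with the bounds just displayed.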
The case of the simple eigenvalue. More information can be gained in the case that dim N = 1, so that N = {tϕ : t ∈ R} for some ϕ that we normalize by taking ‖ϕ‖ = 1. Then, the Rayleigh quotient of B simplifies as
⟨B(v), v⟩/‖v‖² = ⟨B(tϕ), tϕ⟩/(t²‖ϕ‖²) = (1/t)⟨B(tϕ), ϕ⟩ ≡ h(t),  0 < |t| ≤ 1.
Note that h is bounded, since B is sublinear (that is, ‖B(u)‖ ≤ k‖u‖ for all u ∈ H), as follows from Equation (89) and the condition B(0) = 0. It follows by Equations (114) and (121) that
h(t) = (1/t) ∫Ω f(x, tϕ(x)) ϕ(x) dx.
Considering as above the example f(x, s) = f(s) = s/(1 + s²), we obtain
h(t) = ∫Ω ϕ²(x)/(1 + t²ϕ²(x)) dx
showing that h can be extended continuously to t = 0 and that it is an even function of t in [−1, 1]. Therefore, using also Equation (115) and the condition ‖ϕ‖ = 1, we get
D = sup{h(t) : −1 ≤ t ≤ 1} = sup{h(t) : 0 ≤ t ≤ 1} = ∫Ω ϕ²(x) dx = 1/μ0
    while
C = inf{h(t) : −1 ≤ t ≤ 1} = inf{h(t) : 0 ≤ t ≤ 1} = ∫Ω ϕ²(x)/(1 + ϕ²(x)) dx ≡ K.
    These computations show that in the present case:
• The upper bound M/μ0 = 1/μ0 given by Equation (117) for D is optimal.
• The lower bound 0 given by Equation (117) for C can be improved to C = K > 0.
Proceeding in the same way as before (see Equations (118) and (119)) and using as before the asymptotic expansion for 1/(1 + x) as x → 0, we see that, as ϵ → 0⁺, Equation (120) can be replaced by the more precise formula
μ0 − ϵμ0 + O(ϵ²) ≤ μϵ ≤ μ0 − ϵμ0²K + O(ϵ²).
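The constants D = 1/μ0 and C = K entering this formula are plain one-dimensional integrals once ϕ is known, so any quadrature rule evaluates them. A minimal sketch (assuming for concreteness Ω = ]0, π[, p ≡ 1, q ≡ 0, so that μ0 = 1 and ϕ(x) = √(2/π) sin x; the helper name simpson is ours):

```python
import math

# Quadrature for  D = ∫ φ² dx  and  K = ∫ φ²/(1+φ²) dx  with φ(x) = sqrt(2/π) sin x,
# the assumed first eigenfunction for p = 1, q = 0 on ]0, π[ (so mu0 = 1).

def simpson(g, a, b, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

phi2 = lambda x: (2.0 / math.pi) * math.sin(x) ** 2   # φ²(x)

D = simpson(phi2, 0.0, math.pi)                        # = ∫ φ² = 1/mu0
K = simpson(lambda x: phi2(x) / (1.0 + phi2(x)), 0.0, math.pi)

mu0, eps = 1.0, 0.1
lower = mu0 - eps * mu0            # left side of Equation (125), up to O(eps^2)
upper = mu0 - eps * mu0**2 * K     # right side of Equation (125), up to O(eps^2)
print(D, K, lower, upper)
```

In this case K also has the closed form π(1 − 1/√(1 + 2/π)) ≈ 0.686, which the quadrature reproduces; the resulting interval [lower, upper] is strictly narrower than the one given by Equation (120).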
The practical use of Equation (125) for numerical purposes requires explicit knowledge of the eigenvalue μ0 and of the corresponding eigenfunction ϕ. Typical cases in which these data are available are:
• N = 2, Ω a rectangle or a circle, and μ0 the first eigenvalue of the Dirichlet Laplacian in Ω (see e.g., [44]).
• N = 1, Ω = ]a, b[ and μ0 = μn any eigenvalue of the Sturm–Liouville problem in Equation (112) with simple forms of the coefficients p and q. For instance, if ]a, b[ = ]0, π[, p ≡ 1 and q ≡ 0, we have
  μn = n²,  ϕn(x) = (1/n)√(2/π) sin nx  (0 ≤ x ≤ π).
  As to the expression of ϕn, recall that we have normed H01(a, b) via the formula in Equation (113), which in this case reduces to (u, v) = ∫0^π u′(x)v′(x) dx.
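As a quick sanity check on this normalization, one can verify numerically that ∫0^π (ϕn′(x))² dx = 1 for the functions just displayed; a short sketch (the helper name simpson is ours):

```python
import math

# Check that phi_n(x) = (1/n)*sqrt(2/pi)*sin(n x) has unit norm in H_0^1(0, pi)
# under the scalar product (u, v) = ∫ u'v' dx, i.e. ∫ (phi_n')² dx = 1.

def simpson(g, a, b, n=2000):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

norms = []
for n in range(1, 5):
    dphi = lambda x, n=n: math.sqrt(2.0 / math.pi) * math.cos(n * x)  # phi_n'(x)
    norms.append(simpson(lambda x: dphi(x) ** 2, 0.0, math.pi))
print(norms)  # each entry ≈ 1.0
```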

    Acknowledgments

    The author wishes to thank the Referees for their valuable suggestions that have helped to improve the quality of the present paper.

    Conflicts of Interest

The author declares no conflict of interest.

    References

1. Betcke, T.; Higham, N.J.; Mehrmann, V.; Schröder, C.; Tisseur, F. NLEVP: A collection of nonlinear eigenvalue problems. ACM Trans. Math. Softw. 2013, 39, 28.
2. Berger, M.S. Nonlinearity and Functional Analysis; Academic Press: Cambridge, MA, USA, 1977.
3. Ambrosetti, A.; Malchiodi, A. Nonlinear Analysis and Semilinear Elliptic Problems; Cambridge University Press: Cambridge, UK, 2007.
4. Palais, R.S. Critical point theory and the minimax principle. In Proceedings of the Symposia in Pure Mathematics, Berkeley, CA, USA, 1–26 July 1968; American Mathematical Society: Providence, RI, USA, 1970; Volume XV, pp. 185–212.
5. Chabrowski, J. On nonlinear eigenvalue problems. Forum Math. 1992, 4, 359–375.
6. Turner, R.E.L. A class of nonlinear eigenvalue problems. J. Funct. Anal. 1968, 2, 297–322.
7. Kato, T. Perturbation Theory for Linear Operators, 2nd ed.; Springer: Berlin, Germany; New York, NY, USA, 1976.
8. Michiels, W.; Boussaada, I.; Niculescu, S.I. An explicit formula for the splitting of multiple eigenvalues for nonlinear eigenvalue problems and connections with the linearization for the delay eigenvalue problem. SIAM J. Matrix Anal. Appl. 2017, 38, 599–620.
9. Chiappinelli, R. Approximation and convergence rate of nonlinear eigenvalues: Lipschitz perturbations of a bounded self-adjoint operator. J. Math. Anal. Appl. 2017, 455, 1720–1732.
10. Langer, H.; Markus, A.; Matsaev, V. Linearization, factorization, and the spectral compression of a self-adjoint analytic operator function under the condition (VM). In A Panorama of Modern Operator Theory and Related Topics; Birkhäuser/Springer Basel AG: Basel, Switzerland, 2012; pp. 445–463.
11. Appell, J.; De Pascale, E.; Vignoli, A. Nonlinear Spectral Theory; de Gruyter: Berlin, Germany, 2004.
12. Chiappinelli, R. Surjectivity of coercive gradient operators in Hilbert space and Nonlinear Spectral Theory. Ann. Funct. Anal. 2018, 9, in press.
13. Güttel, S.; Tisseur, F. The nonlinear eigenvalue problem. Acta Numer. 2017, 26, 1–94.
14. Brezis, H. Functional Analysis, Sobolev Spaces and Partial Differential Equations; Springer: Berlin/Heidelberg, Germany, 2011.
15. Taylor, A.; Lay, D. Introduction to Functional Analysis; Wiley: Hoboken, NJ, USA, 1980.
16. Hadeler, K.P. Mehrparametrige und nichtlineare Eigenwertaufgaben. Arch. Ration. Mech. Anal. 1967, 27, 306–328.
17. Hadeler, K.P. Variationsprinzipien bei nichtlinearen Eigenwertaufgaben. Arch. Ration. Mech. Anal. 1968, 30, 297–307.
18. Langer, H. Über eine Klasse polynomialer Scharen selbstadjungierter Operatoren im Hilbertraum. J. Funct. Anal. 1973, 12, 13–29.
19. Markus, A.S. Introduction to the Spectral Theory of Polynomial Operator Pencils; Translations of Mathematical Monographs, 71; American Mathematical Society: Providence, RI, USA, 1988.
20. Binding, P.; Eschwé, D.; Langer, H. Variational principles for real eigenvalues of self-adjoint operator pencils. Integral Equ. Oper. Theory 2000, 38, 190–206.
21. Hasanov, M. An approximation method in the variational theory of the spectrum of operator pencils. Acta Appl. Math. 2002, 71, 117–126.
22. Voss, H. A minmax principle for nonlinear eigenproblems depending continuously on the eigenparameter. Numer. Linear Algebra Appl. 2009, 16, 899–913.
23. Schwetlick, H.; Schreiber, K. Nonlinear Rayleigh functionals. Linear Algebra Appl. 2012, 436, 3991–4016.
24. Gohberg, I.; Lancaster, P.; Rodman, L. Matrix Polynomials; Academic Press: New York, NY, USA; London, UK, 1982.
25. Hryniv, R.; Lancaster, P. On the perturbation of analytic matrix functions. Integral Equ. Oper. Theory 1999, 34, 325–338.
26. Hale, J.K.; Verduyn Lunel, S.M. Introduction to Functional Differential Equations; Applied Mathematical Sciences, 99; Springer: New York, NY, USA, 1993.
27. Gohberg, I.; Lancaster, P.; Rodman, L. Invariant Subspaces of Matrices with Applications; John Wiley and Sons: New York, NY, USA, 1986.
28. Stakgold, I. Branching of solutions of non-linear equations. SIAM Rev. 1971, 13, 289–332.
29. Rabinowitz, P.H. Minimax Methods in Critical Point Theory with Applications to Differential Equations; CBMS Regional Conference Series in Mathematics; American Mathematical Society: Providence, RI, USA, 1986; Volume 65.
30. Krasnoselskii, M.A. Topological Methods in the Theory of Nonlinear Integral Equations; Pergamon Press: Oxford, UK, 1964.
31. Rabinowitz, P.H. Some global results for nonlinear eigenvalue problems. J. Funct. Anal. 1971, 7, 487–513.
32. Stuart, C.A. An introduction to bifurcation theory based on differential calculus. In Nonlinear Analysis and Mechanics: Heriot-Watt Symposium; Research Notes in Mathematics; Pitman: Totowa, NJ, USA, 1979; Volume IV, pp. 76–135.
33. Mawhin, J.; Willem, M. Critical Point Theory and Hamiltonian Systems; Applied Mathematical Sciences, 74; Springer: New York, NY, USA, 1989.
34. Zeidler, E. Nonlinear Functional Analysis and Its Applications. III. Variational Methods and Optimization; Springer: New York, NY, USA, 1985.
35. Amann, H. Liusternik-Schnirelman theory and non-linear eigenvalue problems. Math. Ann. 1972, 199, 55–72.
36. Fernandez Bonder, J.; Pinasco, J.P.; Salort, A.M. Quasilinear eigenvalues. Rev. Union Mat. Argent. 2015, 56, 1–25.
37. Lindqvist, P. A nonlinear eigenvalue problem. In Topics in Mathematical Analysis; World Scientific Publishing: Hackensack, NJ, USA, 2008; pp. 175–203.
38. Drábek, P. On the variational eigenvalues which are not of Ljusternik-Schnirelmann type. Abstr. Appl. Anal. 2012, 434631.
39. Drábek, P.; Robinson, S.B. Resonance problems for the p-Laplacian. J. Funct. Anal. 1999, 169, 189–200.
40. Appell, J.; Drábek, P.; Chiappinelli, R. (Eds.) Mini-Workshop: Nonlinear Spectral and Eigenvalue Theory with Applications to the p-Laplace Operator; Abstracts from the Mini-Workshop held 15–21 February 2004; Oberwolfach Report; Mathematisches Forschungsinstitut Oberwolfach: Oberwolfach, Germany, 2004; pp. 407–437.
41. Rellich, F. Perturbation Theory of Eigenvalue Problems; Gordon and Breach Science Publishers: New York, NY, USA; London, UK; Paris, France, 1969.
42. Isaacson, E.; Keller, H.B. Analysis of Numerical Methods; John Wiley and Sons: New York, NY, USA; London, UK; Sydney, Australia, 1966.
43. Lancaster, P.; Markus, A.S.; Zhou, F. Perturbation theory for analytic matrix functions: The semisimple case. SIAM J. Matrix Anal. Appl. 2003, 25, 606–626.
44. Courant, R.; Hilbert, D. Methods of Mathematical Physics; Wiley: Hoboken, NJ, USA, 1953; Volume I.

    © 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

    Share and Cite

    MDPI and ACS Style

Chiappinelli, R. What Do You Mean by “Nonlinear Eigenvalue Problems”? Axioms 2018, 7, 39. https://doi.org/10.3390/axioms7020039
