Vector Spaces & Algebras


Stefan Banach (1892-1945)

Articles previously on this page:

  • Banach spaces  are complete  normed vector spaces.
  • Modules  are vectorial structures over a ring of scalars  (instead of a field).



Related Links (Outside this Site)

Théorie des opérations linéaires  (Banach spaces)  by Stefan Banach  (1932).

Higgs Bundles (43:21,47:10,51:26,42:58) by Laura Schaposnik  (2017-10-10).

 

Vector Spaces and Algebras


(2006-05-07)  
Vectors were so named because they "carry" the distance from the origin.

In medical and other contexts, "vector" is synonymous with "carrier".  The etymology is that of "vehicle":  The Latin verb vehere  means "to transport".

In elementary geometry, a vector  is the difference  between two points in space; it's what has to be traveled to go from a given origin to a destination.  Etymologically, such a thing was perceived as "carrying" the distance between two points (the radius  from a fixed origin to a point).

The term vector  started out its mathematical life as part of the French locution "rayon vecteur"  (radius vector).  The whole expression is still used to identify a point in ordinary (Euclidean) space, as seen from a fixed origin.

As presented next, the term vector  more generally denotes an element of a linear space  (vector space) of unspecified dimensionality  (possibly infinite) over any scalar field  (not just the real numbers).


(2006-03-28)   Vector spaces over a field K  (called the ground field).
Vectors can be added, subtracted or scaled.  The scalars form a field.

The vocabulary is standard: An element of the field K is called a scalar. Elements of the vector space  are called vectors.

By definition, a vector space E  is a set with a well-defined internal addition  (the sum U+V of two vectors is a vector) and a well-defined external multiplication  (i.e., for a scalar x and a vector U, the scaled vector x U is a vector)  with the following properties:

  • (E, + )  is an Abelian group.  This is to say that the addition of vectors is an associative and commutative operation and that subtraction is defined as well (i.e., there's a zero vector, neutral for addition, and every vector has an opposite  which yields zero  when added to it).
  • Scaling is compatible with arithmetic on the field K :
∀x∈K, ∀y∈K, ∀U∈E, ∀V∈E,
  • (x + y) U   =   x U  +  y U
  • x (U + V)   =   x U  +  x V
  • (x y) U   =   x (y U)
  • 1 U   =   U
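Over the ground field R, those four axioms can be spot-checked numerically with numpy arrays standing in for vectors (a sketch on one sample, not a proof; the values are my own):

```python
import numpy as np

x, y = 2.0, -3.5                    # scalars from the ground field R
U = np.array([1.0, 2.0, 3.0])       # vectors of E = R^3
V = np.array([-4.0, 0.5, 2.0])

assert np.allclose((x + y) * U, x * U + y * U)   # (x+y)U = xU + yU
assert np.allclose(x * (U + V), x * U + x * V)   # x(U+V) = xU + xV
assert np.allclose((x * y) * U, x * (y * U))     # (xy)U = x(yU)
assert np.allclose(1 * U, U)                     # 1U = U
print("all four scaling axioms hold on this sample")
```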


(2010-04-23)  
The dimension is the largest possible number of independent vectors.

The modern definition of a vector space doesn't involve the concept of dimension,  which had a towering presence in the historical examples of vector spaces taken from Euclidean geometry:  A line has dimension 1, a plane has dimension 2, "space" has dimension 3, etc.

The concept of dimension is best retrieved by introducing two complementary notions pertaining to a set of vectors  B.

  • B  is said to consist of independent vectors  when all nontrivial linear combinations of the vectors of  B  are nonzero.
  • B  is said to generate E  when every vector of the space E is a linear combination of vectors of B.

A linear combination of vectors is a sum of finitely many  of those vectors, each multiplied by a scaling factor (called a coefficient ).  A linear combination with at least one nonzero  coefficient is said to be nontrivial.

If  B  generates E  and consists of independent vectors,  then it's called a basis  of E.  Note that the trivial space  {0}  has an empty basis (the empty set does generate the space  {0}  because an empty sum is zero).

To prove that all nontrivial vector spaces have a basis requires the Axiom of Choice (in fact, the existence of a basis for any nontrivial vector space is equivalent  to the Axiom of Choice).

Dimension theorem for vector spaces :

A not-so-obvious statement is that two bases of E can always be put in one-to-one correspondence with each other.  Thus, all bases of E  have the same cardinal  (finite or not).  That cardinal is called the dimension  of the space E and it is denoted  dim (E).
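In R^n, the dimension of the span of a finite family of vectors is a matrix rank; a small numpy sketch (the vectors are my own example):

```python
import numpy as np

# Three vectors of R^3; the third row is the sum of the first two,
# so the family is dependent and spans only a 2-dimensional subspace.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, 1.0, 3.0]])
dim_span = np.linalg.matrix_rank(B)   # largest number of independent rows
assert dim_span == 2
print("dim span(B) =", dim_span)
```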

Schauder basis  |  Georg Hamel (1877-1954; Ph.D. 1901)
 
Linearly independent sets  |  Matroids  |  Matroid representations  |  Dimension theorem for vector spaces


(2010-06-21)  
A vector space included in another is called a subspace.

A subset F  of a vector space E is a subspace  of E  if and only if it is stable under addition and scaling  (i.e., the sum of two vectors of F is in F,  and so is any vector of F multiplied by a scalar).

It's an easy exercise to show that the intersection F∩G  of two subspaces F  and G  is a subspace of E.  So is the Minkowski sum F+G (defined as the set of all sums x+y  of a vector  x  from F  and a vector y  from G).

Two subspaces F  and G  of E  for which  F∩G = {0}  and  F+G = E  are said to be  supplementary.  Their sum is then called a direct sum  and the following compact notation is used to state that fact:

E   =   F ⊕ G

In the case of finitely many dimensions, the following relation holds:

dim (F ⊕ G )   =   dim (F)  +  dim (G)

The generalization to nontrivial intersections is Grassmann's Formula :

dim (F + G )   =   dim (F)  +  dim (G)  −  dim (F∩G )

A lesser-known version applies to spaces of finite codimensions:

codim (F + G )   =   codim (F)  +  codim (G)  −  codim (F∩G )
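Grassmann's formula can be checked numerically on explicit subspaces of R^4, spanned by the columns of two matrices (a sketch with subspaces chosen so the intersection is known by construction):

```python
import numpy as np

# F = span(e1, e2)  and  G = span(e2, e3)  inside R^4.
F = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
G = np.array([[0., 0.], [1., 0.], [0., 1.], [0., 0.]])

dim_F = np.linalg.matrix_rank(F)                      # 2
dim_G = np.linalg.matrix_rank(G)                      # 2
dim_sum = np.linalg.matrix_rank(np.hstack([F, G]))    # dim (F+G)
dim_cap = 1     # F ∩ G = span(e2), known by construction

assert dim_sum == dim_F + dim_G - dim_cap             # Grassmann's formula
print("dim(F+G) =", dim_sum)
```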


(2010-12-03) 
Two spaces are isomorphic  if there's a linear bijection between them.

A function f which maps a vector space E  into another space F over the same field K  is said to be linear  if it respects addition and scaling:

∀x∈K, ∀y∈K, ∀U∈E, ∀V∈E,    f ( x U + y V )   =   x f (U )  +  y f (V )

If such a linear function f  is bijective, its inverse is also a linear map, and the vector spaces E  and F  are said to be isomorphic :

E   ≅   F

In particular, two vector spaces which have the same finite dimension over the same field are necessarily isomorphic.


(2010-12-03)  
The equivalence classes (or residues) modulo H  can be called slices.

If H  is a subspace of the vector space E, an equivalence relation can be defined by calling two vectors equivalent when their difference  is in H.

An equivalence class is called a slice,  and it can be expressed as a Minkowski sum of the form  x+H.  The space E  is thus partitioned into slices parallel to H.  The set  of all slices is denoted E/H and is called the quotient  of E  by H.  It's clearly a vector space  (scaling a slice or adding up two slices yields a slice).

When E/H  has finite dimension, that dimension is called the codimension  of H.  A linear subspace of codimension 1  is called a hyperplane  of  E.

The canonical linear map which sends a vector x  of E  to the slice  x+H is called the quotient map of E  onto E/H.

A vector space is always isomorphic to the direct sum of any subspace H and its quotient by that same subspace:

E   ≅   H ⊕ E/H

Use this with  H  =  ker (f )  to prove the following fundamental theorem:


(2010-12-03) 
A vector space is isomorphic to the direct sum of the image and kernel (French: noyau)  of any linear function defined over it.

The image  or range  of a linear function f  which maps a vector space E to a vector space F  is a subspace of F  defined as follows:

im (f )   =   range (f )   =   f (E)   =   { y∈F  |  ∃x∈E,  f (x) = y }

The kernel  (also called nullspace) of f  is the following subspace of E :

ker (f )   =   null (f )   =   { x∈E  |  f (x) = 0 }

The fundamental theorem of linear algebra  states that there's a subspace of E  which is isomorphic to f (E)  and supplementary to  ker (f )  in E.  This result holds for a finite or an infinite number of dimensions and it's commonly expressed by the following isomorphism:

f (E)  ⊕  ker (f )   ≅   E

This is a corollary of the above, since  f (E)  and  E / ker (f )  are isomorphic:  a bijective map between them is obtained by associating  f (x)  with the residue class  x + ker (f ).  Clearly, that association doesn't depend on the choice of  x.  QED

Restricted to vector spaces of finitely many dimensions, the theorem amounts to the following famous result  (of great practical importance).

Rank theorem  (or rank-nullity theorem) :

For any linear function f  over a finite-dimensional space E, we have:

dim (f (E) )  +  dim ( ker (f ) )   =   dim (E )

dim (f (E) )  is called the rank  of f. The nullity  of f  is  dim ( ker (f ) ).

In the language of the matrices normally associated with linear functions:  The rank  and nullity  of a matrix add up to its number of columns.  The rank of a matrix  A  is defined as the largest number of linearly independent columns (or rows)  in it.  Its nullity is the dimension of its nullspace  (consisting, by definition, of the column vectors  x  for which  A x = 0).
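The rank theorem can be illustrated with numpy: the right singular vectors attached to vanishing singular values give an explicit nullspace basis, and rank plus nullity matches the number of columns (a sketch; the matrix is my own example):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1, three columns
_, s, Vt = np.linalg.svd(A)
tol = 1e-10
rank = int(np.sum(s > tol))
null_basis = Vt[rank:]               # rows of Vt beyond the rank span null(A)
for v in null_basis:
    assert np.allclose(A @ v, 0)     # each basis vector is mapped to 0
nullity = len(null_basis)
assert rank + nullity == A.shape[1]  # rank + nullity = number of columns
print("rank =", rank, " nullity =", nullity)
```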


(2007-11-06)  
A normed vector space  is a linear space endowed with a norm.

Vector spaces can be endowed with a function  (called a norm) which associates to any vector V a real number  ||V||  (called the norm  or the length  of V)  such that the following properties hold:

  • ||V||  is positive for any nonzero vector V.
  • ||λV||   =   |λ| ||V||
  • ||U + V||   ≤   ||U||  +  ||V||

In this,  λ  is a scalar  and  |λ|  denotes what's called a valuation  on the field of scalars (a valuation is a special type of one-dimensional norm; the valuation of a product is the product of the valuations of its factors).  Some examples of valuations are the absolute value of real numbers,  the modulus of complex numbers and the p-adic metric  of p-adic numbers.

Let's insist: The norm of a nonzero vector is always a positive real number, even for vector spaces whose scalars aren't real numbers.


(2020-10-09)  
v ↦ [ x ↦ f (x,v) ]  is an isomorphism  from E  to E*.

In this,  E*  is understood to be the continuous dual of E.

In finitely many dimensions,  a pseudo inner product is non-degenerate if andonly if its associated determinant  is nonzero.


(2020-09-30)  
Linear space endowed with a positive-definite sesquilinear form.

 Come back later, we're still working on this one...

Parallelogram identity :

In an inner-product space, the following identity holds.  It reduces to Pythagoras' theorem when  ||u+v|| = ||u−v|| :

||u + v||²  +  ||u − v||²   =   2 ||u||²  +  2 ||v||²

Polarization Identity :

Conversely, a norm which verifies the above parallelogram identity  on a complex linear space is necessarily derived from a sesquilinear inner product,  obtained from the norm alone through the following polarization formula:

<u,v>   =   ¼ ( ||u + v||²  −  ||u − v||²  +  i ||u − i v||²  −  i ||u + i v||² )
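Both identities can be spot-checked with numpy on complex vectors.  Taking <u,v> conjugate-linear in its first argument (numpy's vdot convention), the polarization formula ¼(‖u+v‖² − ‖u−v‖² + i‖u−iv‖² − i‖u+iv‖²) recovers the inner product exactly (a numerical sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
n = lambda w: np.linalg.norm(w)          # the norm derived from <.,.>

# Parallelogram identity:
assert np.isclose(n(u+v)**2 + n(u-v)**2, 2*n(u)**2 + 2*n(v)**2)

# Polarization identity recovers the inner product from the norm alone:
p = 0.25 * (n(u+v)**2 - n(u-v)**2 + 1j*n(u-1j*v)**2 - 1j*n(u+1j*v)**2)
assert np.isclose(p, np.vdot(u, v))
print("parallelogram and polarization identities verified")
```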

A Hilbert space  is an inner-product space which is complete  with respect to the norm associated to the defining inner-product  (i.e., it's a Banach space  with respect to that norm).  The name Hilbert space  itself was coined by John von Neumann (1903-1957) in honor of the pioneering work published by David Hilbert (1862-1943) on the Lebesgue sequence space l²  (which is indeed a Hilbert space).

 Come back later, we're still working on this one...


(2009-09-03)  
Algebraic duality  & topological duality.

In a vector space E, a linear form  is a linear function which maps every vector  of E  to a scalar  of the underlying field K.  The set of all linear forms is called the algebraic dual  of E.  The set of all continuous linear forms is called the [ topological ] dual  of E.

With finitely many dimensions, the two concepts are identical (i.e., every linear form is continuous). Not so with infinitely many dimensions. An element of the dual  (a continuous linear form) is often called a covector.

Unless otherwise specified, we shall use the unqualified term dual to denote the topological dual.  We shall denote it E* (some authors use E*  to denote the algebraic dual and E' for the topological dual).

The bidual E**  of E  is the dual of E*. It's also called second dual  or double dual.

A canonical injective  homomorphism  Φ  exists which immerses E  into E** by defining  Φ(v),  for any element  v  of E,  as the linear form on E*  which maps every element f  of E*  to the scalar f (v).  That's to say:

Φ(v) (f )   =   f (v)

If the canonical homomorphism is a bijection, then E is said to be reflexive  and it is routinely identified with its bidual E**.

E   =  E**

Example of an algebraic dual :

If  E = R^(N)  denotes the space consisting of all real sequences with only finitely many nonzero values,  then the algebraic  dual of E consists of all  real sequences without restrictions.  In other words:

E'   =   R^N

Indeed, an element f  of E'  is a linear form over E  which is uniquely determined by the unrestricted sequence of scalars formed by the images of the elements in the countable  basis  (e0, e1, e2 ... )  of E.

E'  is a Banach space, but E  is not  (it isn't complete).  As a result, an absolutely convergent series  need not converge in E.

For example, the series of general term  en / (n+1)²  doesn't converge in E,  although it's absolutely convergent  (because the series formed by the norms of its terms is a well-known convergent real series).

Representation theorems for  [continuous]  duals :

A representation theorem  is a statement that identifies in concrete terms some abstractly specified entity.  For example, the celebrated Riesz representation theorem  states that the [continuous] dual of the Lebesgue space Lp  (an abstract specification)  is just isomorphic to the space Lq,  where  q  is a simple function of  p  (namely,  1/p + 1/q = 1).

Lebesgue spaces are usually linear spaces with uncountably many  dimensions (their elements are functions over a continuum like R or C).  However, the Lebesgue sequence spaces described in the next section are simpler (they have only countably many  dimensions) and can serve as a more accessible example.


(2012-09-19)  
lp and lq are duals of each other when   1/p + 1/q  =  1

For  p > 1,  the linear space lp is defined as the subspace of R^N consisting of all sequences for which the following series converges:

( || x ||p ) ^p   =   ( || (x0, x1, x2, x3, ... ) ||p ) ^p   =   Σn | xn | ^p

As the notation implies,  ||.||p  is a norm  on lp because of the following famous nontrivial inequality (Minkowski's inequality) which serves as the relevant triangle inequality :

|| x+y ||p     ≤    || x ||p  + || y ||p

For the topology induced by that so-called  "p-norm",  the [topological] dual of lp  is isomorphic to lq ,  where:

1/p  +  1/q   =   1

Thus, lp is reflexive  (i.e., isomorphic to its own bidual) for any  p > 1.
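Minkowski's inequality is easy to spot-check on finite vectors with numpy's ord-p norms (a random sample of my own, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10)
y = rng.normal(size=10)

for p in (1.5, 2.0, 3.0, 7.0):
    lhs = np.linalg.norm(x + y, ord=p)
    rhs = np.linalg.norm(x, ord=p) + np.linalg.norm(y, ord=p)
    assert lhs <= rhs + 1e-12        # triangle inequality for ||.||p
print("Minkowski's inequality holds for all sampled p")
```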

 Tensor Product

(2009-09-03)  
E ⊗ F  is generated  by tensor products.

Consider two vector spaces E  and F  over the same  field of scalars K.  For two covectors f  and g  (respectively belonging to E*  and F*)  we may consider a particular  [continuous]  linear form denoted  f ⊗ g  and defined over the cartesian product E×F  via the relation:

f ⊗ g (u,v)   =   f (u) g (v)

The binary operator  ⊗  thus defined from  (E*)×(F*)  to  (E×F)*  is called tensor product.  (Even when E = F,  the operator  ⊗  is not  commutative.)
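For finite-dimensional E and F, a covector can be modeled as a dot product against a fixed array, and f ⊗ g as the bilinear form below (a small sketch with made-up coefficients):

```python
import numpy as np

f = lambda u: np.dot([1., 2.], u)        # a covector on E = R^2
g = lambda v: np.dot([3., -1., 4.], v)   # a covector on F = R^3

def tensor(f, g):
    """f ⊗ g : a linear form on the product E x F."""
    return lambda u, v: f(u) * g(v)

u = np.array([1., 1.]); v = np.array([0., 2., 1.])
assert np.isclose(tensor(f, g)(u, v), f(u) * g(v))
# Linearity in the first slot, for instance:
w = np.array([2., -1.])
assert np.isclose(tensor(f, g)(u + w, v),
                  tensor(f, g)(u, v) + tensor(f, g)(w, v))
print("f⊗g(u,v) =", tensor(f, g)(u, v))
```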

Example of Use:  The Dehn invariant  (1901)

In the Euclidean plane,  two simple polygonal loops which enclose the same area  are always decomposable into each other.  That's to say,  with a finite number of straight cuts,  we can obtain pieces of one shape which are pairwise congruent to the pieces of the other  (William Wallace, 1807;  Paul Gerwien, 1833;  Farkas Bolyai, 1835).

Hilbert's third problem (the equidecomposability problem, 1898) asked whether the same is true for any pair of polyhedra having the same volume.

Surprisingly enough,  that's not so because volume is not the only invariant preserved by straight cuts in three dimensions.  The other invariant  (there are only two)  is now known as Dehn's invariant,  in honor of Max Dehn (1878-1952),  the doctoral student of Hilbert who based his habilitation thesis on its discovery  (September 1901).  Here's a description:

 Come back later, we're still working on this one...


(2015-02-21)  
Direct sum of vector spaces indexed by a monoid.

 Come back later, we're still working on this one...

When the indexing monoid is the set of natural integers  {0,1,2,3,4...}  or part thereof,  the degree  n  of a vector is the smallest  integer such that the direct sum of the subfamily indexed by  {0,1 ... n}  contains that vector.


(2007-04-30)  
An internal product among vectors turns a vector space  into an algebra.

They're also called distributive algebras  because of a mandatory property they share with rings.  Any ring is trivially a 1-dimensional associative  algebra over the boolean field F2.  Any associative algebra is a ring.

An algebra  is the structure obtained when an internal binary multiplication is well-defined on the vector space E  (the product of two vectors is a vector)  which is bilinear  and distributive over addition.  That's to say:

∀x∈K, ∀U∈E, ∀V∈E, ∀W∈E,
  • x (UV)   =   (x U) V   =   U (x V)
  • U (V + W)   =   U V  +  U W
  • (V + W) U   =   V U  +  W U

The commutator  is the following bilinear function.  If it's identically zero, the algebra is said to be commutative.

[U ,V ]   =   (UV)  −  (VU)

The lesser-used anticommutator  is the following bilinear function.  If it's identically zero, the algebra is said to be anticommutative.

{U ,V }  =  (UV)  +  (VU)

The commutator is anticommutative.  The anticommutator is commutative.

The associator is defined as the following trilinear function.  It measures how the internal multiplication fails to be associative.

[U ,V ,W ]   =   U (VW)  −  (UV) W
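With square matrices (an associative algebra) the associator vanishes, while the cross product on R^3 (a nonassociative algebra) has a nonzero one; a numpy spot-check on samples of my own:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))

comm  = lambda U, V: U @ V - V @ U        # commutator
acomm = lambda U, V: U @ V + V @ U        # anticommutator
assoc = lambda U, V, W, mul: mul(U, mul(V, W)) - mul(mul(U, V), W)

assert np.allclose(comm(A, B), -comm(B, A))       # anticommutative
assert np.allclose(acomm(A, B), acomm(B, A))      # commutative
assert np.allclose(assoc(A, B, C, np.matmul), 0)  # matrices associate

u = np.array([1., 0., 0.]); v = np.array([0., 1., 0.]); w = np.array([0., 1., 1.])
assert not np.allclose(assoc(u, v, w, np.cross), 0)  # cross product doesn't
print("cross-product associator:", assoc(u, v, w, np.cross))
```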

If its internal product has a neutral element, the algebra is called unital :

∃1∈E,  ∀U∈E,      1 U   =   U 1   =   U

By definition,  a derivation  in an algebra is a vectorial endomorphism  D  (i.e.,  D  is a linear operator)  which obeys the following relation:

D (U V )   =   D(U)V  + U D(V)

One nontrivial example of a derivation of some historical importance is the Dirac operator.  The derivations over an algebra form the Lie algebra of derivations,  where the product of two derivations is defined as their commutator.  One important example of that is the Witt algebra,  introduced in 1909 by Elie Cartan (1869-1951)  and studied at length in the 1930s by Ernst Witt (1911-1991).  The Witt algebra is the Lie algebra of the derivations on the Laurent  polynomials with complex coefficients  (which may be viewed as the polynomials of two complex variables X and Y when XY = 1,  namely  C[z,1/z] ).

Associative Algebras :

When the associator defined above is identically zero,  the algebra is said to be associative  (many authors often use the word  "algebra"  to denote only associative  algebras, including Clifford algebras).  In other words,  associative algebras  fulfill the additional requirement:

∀U∈E, ∀V∈E, ∀W∈E,      U (VW)   =   (UV) W

Distributive Algebras  (a.k.a. nonassociative  algebras) :

In a context where distributivity is a mandatory property of algebras,  that locution merely denotes algebras which may or may not be associative.  A good example of this early usage was given in 1942,  by a top expert:

That convention is made necessary by the fact that the unqualified word algebra  is very often used to denote only associative algebras.

Unfortunately, Dick Schafer  himself later recanted and introduced a distinction between not associative  and nonassociative  (no hyphen),  with the latter not precluding associativity.  He explains that in the opening lines of his reference book on the subject,  thus endowed with a catchy title:

I beg to differ.  Hyphenation is too fragile a distinction,  and a group of experts simply can't redefine the hyphenated term non-associative.

Therefore,  unless the full associativity of multiplication is taken for granted,  I'm using only  the following set of inclusive  locutions:

Let's examine all those lesser types of algebras,  strongest first:

Alternative Algebras :

In general, a multilinear function is said to be alternating if its sign changes when the arguments undergo an odd  permutation.  An algebra is said to be alternative  when the aforementioned associator  is alternating.

The alternativity  condition is satisfied if and only if two of the following statements hold  (the third one then follows) :

  • ∀U, ∀V,     U (UV)   =   (UU) V      (Left alternativity.)
  • ∀U, ∀V,     U (VV)   =   (UV) V      (Right alternativity.)
  • ∀U, ∀V,     U (VU)   =   (UV) U      (Flexibility.)

Octonions are a non-associative example of such an alternative algebra.

Power-Associative Algebras :

Power-associativity  states that the n-th power of any element is well-defined regardless of the order in which multiplications are performed:

U¹   =   U
U²   =   U U
U³   =   U U²   =   U² U
U⁴   =   U U³   =   U² U²   =   U³ U
...
∀i > 0,  ∀j > 0,      U^(i+j)   =   U^i U^j

The number of ways to work out a product of  n  identical factors is equal to a Catalan number,  C(2n−2,n−1)/n :   1, 1, 2, 5, 14, 42, 132...  (A000108)
Power-associativity  means that, for any given n, all those ways yield the same result.  The following special case  (n = 3)  is not sufficient:

 Dangerous Bend

∀U,      U (UU)   =   (UU) U

That just says that cubes  are well-defined. The accepted term for that is third-power associativity (I call it cubic associativity  or cube-associativity  for short). It's been put to good use at least once, in print:

The Three Standard Types of Subassociativity :

By definition,  a subalgebra  is an algebra contained in another (the operations of the subalgebra being restrictions of the operations in the whole algebra).  Any intersection of subalgebras is a subalgebra.  The subalgebra generated  by a subset is the intersection of all subalgebras containing it.

The above three types of subassociativity can be fully characterized in terms of the associativity of the subalgebras generated by  1,  2  or  3  elements:

  • If all subalgebras generated by one element are associative,
    then the whole algebra is power-associative.
  • If all subalgebras generated by two elements are associative,
    then the whole algebra is alternative  (theorem of Artin).
  • If all subalgebras generated by three elements are associative,
    then the whole algebra is associative  too.

Flexibility :

A product is said to be flexible  when:

∀U, ∀V,      U (VU)   =   (UV) U

In particular, commutative  or anticommutative  products are flexible.  Flexibility is usually not  considered a form of subassociativity because it doesn't fit into the neat classification of the previous section.

Flexibility is preserved by the Cayley-Dickson construction.  Therefore, all hypercomplex multiplications are flexible  (Richard D. Schafer, 1954).  In particular,  the multiplication of sedenions  is flexible  (but not alternative).

Likewise, Okubo algebras  are flexible. So are Lie algebras.

In a flexible algebra, the right-power of an element is equal to the matching left-power, but that doesn't make powers well-defined.  A flexible algebra is cube-associative but not necessarily power-associative.  In particular, fourth powers need not be well-defined:

A (A A)   =   (A A) A   =   A³
A A³   =   A³ A     may differ from     A² A²

Example of a two-dimensional flexible algebra which isn't power-associative :

 ×  |  A    B
 A  |  B    B
 B  |  B    A

The operator above is flexible  because it's commutative.
Yet,  neither of the two possible fourth powers is well-defined:

A A³   =   A B   =   B        A² A²   =   B B   =   A
B B³   =   B B   =   A        B² B²   =   A A   =   B
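This little two-element table is easy to verify exhaustively in Python (flexibility holds for every pair of basis elements, cubes are well-defined, yet the two candidate fourth powers disagree):

```python
# Multiplication table of the two-dimensional algebra on basis {A, B}.
mul = {('A', 'A'): 'B', ('A', 'B'): 'B',
       ('B', 'A'): 'B', ('B', 'B'): 'A'}
m = lambda u, v: mul[(u, v)]

for u in 'AB':
    for v in 'AB':
        assert m(u, m(v, u)) == m(m(u, v), u)   # flexibility: U(VU) = (UV)U

for u in 'AB':
    cube = m(u, m(u, u))                        # cubes are well-defined:
    assert cube == m(m(u, u), u)                #   U(UU) = (UU)U
    assert m(u, cube) != m(m(u, u), m(u, u))    # but U·U³ ≠ U²·U²
print("flexible and cube-associative, but not power-associative")
```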

Antiflexible Algebras:

An algebra is called antiflexible  when its associator is rotation-invariant:

∀U, ∀V, ∀W,      [U,V,W]   =   [V,W,U]

An antiflexible ring need not be power-associative.

 Sophus Lie
(2015-02-14)  
Anticommutative algebras obeying Jacobi's identity.

Hermann Weyl (1885-1955) named those structures after the Norwegian mathematician Sophus Lie (1842-1899).

The basic internal multiplication in a Lie algebra is a bilinear operator denoted by a square bracket (called a Lie bracket )  which must be anticommutative and obey the so-called Jacobi identity,  namely:

  • [B,A]   =   − [A,B]
  • [A,[B,C]]  +  [B,[C,A]]  +  [C,[A,B]]   =   0

Anticommutativity implies [A,A]  = 0  only in the absence of 2-torsion.

Being anticommutative,  the Lie bracket is flexible. However,  it need not be alternative because its associator  need not be alternating:

[A,B,C]   =   [A,[B,C]] − [[A,B],C]   =   [A,[B,C]] + [C,[A,B]]   =   [[C,A],B]

The operator d  defined by  d(x)  =  [A,x]  is a derivation,  since :

d ([B,C])  = [A,[B,C]]  = [[A,B],C] + [B,[A,C]]  = [d (B),C] + [B,d (C)]

d  is called the Lie-derivative along A.

The cross-product  gives ordinary 3D vectors a Lie algebra structure.

Representations of a Lie Algebra :

The bracket notation  is compatible with the key example appearing in quantum mechanics,  where the Lie bracket is obtained as the commutator  over an ordinary linear algebra of linear operators,  with respect to the functional composition  of operators,  namely:

[U ,V ]   =   UV  −  VU

If the Lie bracket is so defined,  the Jacobi identity  is a simple theorem,  whose proof is straightforward;  just sum up the following three equations:

  • [A,[B,C]]   =   A (BC − CB)  −  (BC − CB) A
  • [B,[C,A]]   =   B (CA − AC)  −  (CA − AC) B
  • [C,[A,B]]   =   C (AB − BA)  −  (AB − BA) C

Linearity makes composition distribute over subtraction,  so the sum of the right-hand sides consists of 12 terms,  where the 6 possible permutations of the operators each appear twice with opposite signs.  The whole sum is thus zero.  QED
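A quick numpy check of that computation on random matrices (a sanity test of the cancellation, not a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))
br = lambda U, V: U @ V - V @ U              # bracket = commutator

jacobi = br(A, br(B, C)) + br(B, br(C, A)) + br(C, br(A, B))
assert np.allclose(jacobi, 0)                # the 12 terms cancel pairwise
print("Jacobi identity verified on a random sample")
```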

Conversely,  an anticommutative algebra obeying the Jacobi identity is said to have a representation  in terms of linear operators if it's isomorphic to the Lie algebra formed by those operators when the bracket is defined as the commutator of operators.

The universal enveloping algebra of a given Lie algebra  L  is the most general unital associative  algebra containing all representations of  L.

Lie algebra  Lie(G)  associated to a Lie group  G :

Loosely speaking,  the Lie algebra associated to a Lie group is its tangent space  at the origin  (about the identity).

Lie(G)  is formed by all the left-invariant vector fields on G.

A vector field X on a Lie group G is said to be invariant under left translations when:

∀g∈G, ∀h∈G,      (dlg)h (Xh)   =   Xgh

where lg is the left-translation within G  (lg(x) = g x) and  dlg  is its differential between tangent spaces.

 Come back later, we're still working on this one...

Lie's third theorem states that every real finite-dimensional Lie algebra is associated to some Lie group. However,  there exist infinite-dimensional Lie algebras not associated to any Lie group. Adrien Douady (1935-2007) pointed out the first example one late evening after a Bourbaki  meeting...

Douady's counter-example is known as Heisenberg's Lie algebra  (non-exponential Lie algebra)  and arises naturally in quantum mechanics  to describe the motion of particles on a straight line by means of three operators  (X,  Y  and  Z)  acting on any square-integrable function f  of a real variable  (such functions form an infinite-dimensional space):

(X f ) (x)  =  x f (x)       (Y f ) (x)  =  −i f ′(x)       (Z f ) (x)  =  i f (x)

They obey the following commutation relations:

[ X, Y ]   =   Z        [ Y, Z ]   =   0        [ X, Z ]   =   0

Thus,  Jacobi's identity holds :

[X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]]  = [X,0] + [Y,0] + [Z,Z]  =  0
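With sympy, and the sign conventions assumed here (X = multiplication by x, Y = −i d/dx, Z = i times the identity — chosen so that [X,Y] = Z comes out right), the commutation relations can be checked symbolically:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Function('f')(x)              # an arbitrary function of x

X = lambda g: x * g                  # position operator
Y = lambda g: -sp.I * sp.diff(g, x)  # momentum-like operator (assumed sign)
Z = lambda g: sp.I * g               # i times the identity (assumed sign)
comm = lambda P, Q: sp.expand(P(Q(f)) - Q(P(f)))

assert sp.simplify(comm(X, Y) - Z(f)) == 0   # [X,Y] = Z
assert sp.simplify(comm(Y, Z)) == 0          # [Y,Z] = 0
assert sp.simplify(comm(X, Z)) == 0          # [X,Z] = 0
print("Heisenberg commutation relations verified")
```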

This cannot be realized as the tangent space of a connected three-dimensional Lie group,  because the Lie algebra associated to any such group is either abelian or solvable,  and the non-exponential Lie algebra is neither.

Semisimple Lie Algebras  &  Semisimple Lie Groups :

A simple Lie algebra is a non-abelian Lie algebra without any nonzero proper  ideal.

A simple Lie group is a connected Lie group whose Lie algebra is simple.

A semisimple Lie algebra is a direct sum of simple Lie algebras.

A semisimple Lie group is a connected Lie group whose Lie algebra is semisimple.

Any Lie algebra L  is the semidirect sum of its radical R (a maximal solvable ideal)  and a semisimple algebra S.

 Come back later, we're still working on this one...

 Leibniz
(2023-04-08)  
They're Lie algebras  if the bracket is alternative  (i.e.,  [x,x] = 0).

Much of the literature about Leibniz algebras consists in pointing out that results known for Lie algebras are also true in this more general context.

Originally,  Leibniz algebras were called D-algebras.

The "D" stood for derivation,  a name which identifies any operator  D  obeying the following Leibniz law  of ordinary derivatives:

D (x y)   =   x D(y)  +  D(x) y

If  D  is the operator corresponding to multiplication to the right by some element  z, that becomes the following substitute for associativity:

(x y) z   =   x (y z)  +  (x z) y

If brackets are used for multiplication,  as is common here, that reads:

(Right)  Leibniz Identity
[[x,y],z]   =   [[x,z],y]  +  [x,[y,z]]

Indeed,  Blokh (1965) and Loday (1993) defined a Leibniz algebra  as a vector space (or a module)  with a bilinear bracket verifying that identity.

In the anticommutative case,  the Leibniz identity and Jacobi's identity are equivalent. Thus a Lie algebra  is just an anticommutative Leibniz algebra.
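Since a matrix-commutator bracket is anticommutative and obeys Jacobi, it must also obey the right Leibniz identity; a numpy spot-check on random matrices of my own:

```python
import numpy as np

rng = np.random.default_rng(4)
x, y, z = (rng.normal(size=(3, 3)) for _ in range(3))
br = lambda u, v: u @ v - v @ u            # bracket = matrix commutator

lhs = br(br(x, y), z)
rhs = br(br(x, z), y) + br(x, br(y, z))
assert np.allclose(lhs, rhs)               # right Leibniz identity
print("Leibniz identity holds for the commutator bracket")
```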

The Koszul duals  of Leibniz algebras are called Zinbiel algebras  ("Leibniz" backwards).  That was a pseudonym used by Loday,  who created a fictitious character by that name as a joke.

 Pascual Jordan
(2015-02-19)  
Turning any linear algebra A into a commutative one A+.

Those structures were introduced in 1933 by Pascual Jordan (1902-1980).  They were named after him by A. Adrian Albert (1905-1972)  in 1946.  (Richard Schafer,  a former student of Albert,  wouldn't use that name,  possibly because Jordan was a notorious Nazi.)

Just like commutators turn linear operators into a Lie algebra,  a Jordan algebra  is formed by using the anti-commutator  or Jordan product :

U ∘ V   =   ½ (UV  +  VU )

Axiomatically,  a Jordan algebra  is a commutative algebra  ( U ∘ V  =  V ∘ U )  obeying Jordan's identity :   (U ∘ V) ∘ (U ∘ U)   =   U ∘ (V ∘ (U ∘ U)).

A Jordan algebra is always power-associative.
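Any associative algebra yields a (special) Jordan algebra under the product ½(UV + VU); a numpy check of commutativity, Jordan's identity and the well-definedness of fourth powers, on random matrices of my own:

```python
import numpy as np

rng = np.random.default_rng(5)
U = rng.normal(size=(3, 3))
V = rng.normal(size=(3, 3))
jp = lambda a, b: 0.5 * (a @ b + b @ a)    # Jordan product

assert np.allclose(jp(U, V), jp(V, U))                    # commutative
Usq = jp(U, U)
assert np.allclose(jp(jp(U, V), Usq), jp(U, jp(V, Usq)))  # Jordan identity
assert np.allclose(jp(U, jp(U, Usq)), jp(Usq, Usq))       # U∘(U∘U²) = U²∘U²
print("Jordan identity verified")
```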


(2019-02-27)  
A  27-dimensional exceptional  Jordan algebra.

 Come back later, we're still working on this one...


 Signature of William K. Clifford
(2007-04-30)  
Unital associative algebras with a quadratic form.

Those structures are named after the British geometer and philosopher William Clifford (1845-1879),  who originated the concept in 1876.

The first description of Clifford algebras  centered on quadratic forms  was given in 1945 by a founder of the Bourbaki  collaboration:  Claude Chevalley (1909-1984).

 Come back later, we're still working on this one...


(2022-02-18)  
Commutative two-dimensional algebra of elements of the form  x + y ε

The second component  (the "Fermionic" part)  has zero square:   ε² = 0.

(a + b ε) (c + d ε)   =   ac  +  (ad + bc) ε
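These dual numbers (a nilpotent unit, here written eps, with eps² = 0) are the basis of forward-mode automatic differentiation: evaluating a polynomial at x + eps returns its value and its derivative.  A minimal Python sketch of the multiplication rule above:

```python
class Dual:
    """Dual number a + b·eps with eps² = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)                 # 3 + eps
p = x * x * x + 2 * x + 1          # p(x) = x³ + 2x + 1
assert p.a == 34.0                 # p(3)  = 27 + 6 + 1
assert p.b == 29.0                 # p'(3) = 3·3² + 2
print("p(3) =", p.a, " p'(3) =", p.b)
```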


(2023-03-16)  
The algebra generated by Dirac's four gamma matrices  (1927).

That's  Cl (1,3)   =   Cl1,3(R),  namely the Clifford algebra built over real four-dimensional space using Minkowski's relativistic metric  Q :

Q ( x0, x1, x2, x3 )    =    (x0)²  −  (x1)²  −  (x2)²  −  (x3)²
 
Q ( ct, x, y, z )    =    c² t²  −  x²  −  y²  −  z²

Spacetime algebra  (of dimension 16)  generated by Dirac's 4 gamma matrices

 Grade (g) | Dimension C(4,g) | Linear basis (notation below) | Multivectors
     0     |        1         | I                             | Scalars
     1     |        4         | a, b, c, d                    | Vectors
     2     |        6         | ab, ac, ad, bc, bd, cd        | Bivectors
     3     |        4         | ae, be, ce, de                | Trivectors = Pseudovectors
     4     |        1         | e                             | Quadrivectors = Pseudoscalars

The grade of a product is the sum of the grades of its factors,  modulo 4.

Products of gamma matrices :   a = γ0,   b = γ1,   c = γ2,   d = γ3   (with  e = abcd)

[ The full 16×16 table of geometric products of the basis multivectors  I, a, b, c, d, ab, ac, ad, bc, bd, cd, ae, be, ce, de, e  appeared here.  Each entry is a signed basis element;  for example:   a a = I,   b b = c c = d d = −I,   b a = −ab,   e e = −I. ]

Among those basis elements,  some pairs commute and all the others anticommute  (except in trivial cases where one factor is the identity or both factors are equal).

There are two opposite ways to associate the letter "a" to the time coordinate (ct).  Likewise, we can assign 3 of the 4 remaining letters (b,c,d,e) to 3 given orthogonal axes in 120 nonoriented ways or 1920 oriented ones.  Thus, a spacetime reference frame can be labeled in 1920 equivalent ways.
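The defining relations of Cl(1,3) can be recomputed from an explicit matrix representation.  With the Dirac matrices in the standard representation, a = γ0 squares to +I while b, c, d square to −I, distinct vectors anticommute, and the pseudoscalar e = abcd squares to −I (a numpy sketch):

```python
import numpy as np

I2 = np.eye(2); O = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli matrices
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
blk = lambda tl, tr, bl, br: np.block([[tl, tr], [bl, br]])

a = blk(I2, O, O, -I2)                               # gamma0
b, c, d = (blk(O, s, -s, O) for s in (sx, sy, sz))   # gamma1..gamma3
e = a @ b @ c @ d                                    # the pseudoscalar
I4 = np.eye(4)

assert np.allclose(a @ a, I4)                # a² = I   (timelike direction)
for g in (b, c, d):
    assert np.allclose(g @ g, -I4)           # b² = c² = d² = −I  (spacelike)
    assert np.allclose(a @ g, -(g @ a))      # distinct vectors anticommute
assert np.allclose(b @ a, -(a @ b))          # b a = −ab
assert np.allclose(e @ e, -I4)               # e² = −I
print("Cl(1,3) relations verified")
```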

Covariant Differential Operators

The simplest one was found by Dirac in 1927.  Its eigenvectors describe particles of spin  ½  and mass  m, like the electron.

... and then, you could linearize the sum of four squares.
Actually, you could even linearize the sum of five squares...

Paul Dirac  (Accademia dei Lincei, Rome, 1975-04-15).

Three Distinct Products :

The product used so far is understood as the canonical multiplicative operator of the spacetime algebra, unambiguously known as the geometric product  and denoted by mere juxtaposition of the operands, without  printing any intervening operator symbol. It decomposes into a commutative part, called inner product or dot product (which is denoted by the dot symbol "·" and always produces a scalar when both operands are vectors) and an anticommutative one, called outer product, exterior product or wedge product  ( ∧ ).

U V   =   U · V  +  U ∧ V
 
with    U · V  =  ½ (UV + VU)    and    U ∧ V  =  ½ (UV − VU)

Such relations define the inner  and  outer  products in any  distributive algebra.  Both are distinct from the canonical algebra product  (geometric product)  unless the algebra is commutative.
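These relations are easy to verify numerically in a matrix representation of the spacetime algebra. The sketch below (my own illustration, using NumPy and the standard Dirac representation of the gamma matrices) checks that the inner product of two distinct gammas vanishes, while the inner product of a gamma with itself reproduces the Minkowski signature  (+,−,−,−):

```python
import numpy as np

# 2x2 building blocks: zero, identity and the Pauli matrices.
O  = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac gamma matrices (standard Dirac basis); they play the roles
# of a, b, c, d in the tables above.
g0 = np.block([[I2, O], [O, -I2]])   # squares to +I
g1 = np.block([[O, sx], [-sx, O]])   # squares to -I
g2 = np.block([[O, sy], [-sy, O]])   # squares to -I
g3 = np.block([[O, sz], [-sz, O]])   # squares to -I

def inner(U, V):
    """Symmetric part:   U.V  =  (UV + VU)/2."""
    return (U @ V + V @ U) / 2

def outer(U, V):
    """Antisymmetric part:   U^V  =  (UV - VU)/2."""
    return (U @ V - V @ U) / 2

I4 = np.eye(4, dtype=complex)

assert np.allclose(inner(g0, g0), I4)      # g0 . g0 = +I
assert np.allclose(inner(g1, g1), -I4)     # g1 . g1 = -I
assert np.allclose(inner(g0, g1), 0 * I4)  # distinct gammas anticommute

# The geometric product is the sum of both parts:
assert np.allclose(g0 @ g1, inner(g0, g1) + outer(g0, g1))
```

The same decomposition applies verbatim to any pair of square matrices, which is why the text can define the inner and outer products in any distributive algebra.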


(2017-12-01)  
The exterior product  (wedge product)  is anticommutative.


(2015-02-23)  
A special linear involution is singled out  (adjunction or conjugation).

As the adjoint  or conjugate of an element U  is usually denoted U*,  such structures are also called  *-algebras  (star-algebras). The following properties are postulated:

 Come back later, we're still working on this one...


(2015-02-23)  
Compact operators resemble ancient infinitesimals...

John von Neumann (1903-1957)  introduced those structures in 1929  (he called them simply rings of operators). Von Neumann  presented their basic theory in 1936 with the help of Francis Murray (1911-1996).

A von Neumann algebra  is an involutive algebra such that...

 Come back later, we're still working on this one...

By definition,  a factor  is a von Neumann algebra with a trivial center  (which is to say that only the scalar multiples of the identity commute with all the elements of the algebra).


(2009-09-25)  

To a mathematician, the juxtaposition  (or cartesian product )  of several vector spaces over the same field K  is always a vector space over that field  (as component-wise definitions of addition and scaling satisfy the above axioms).

When physicists state that some particular juxtaposition of quantities  (possibly a single numerical quantity by itself)  is "not a scalar", "not a vector" or "not a tensor", they mean that the thing lacks an unambiguous and intrinsic definition.

Typically, a flawed vectorial definition would actually depend on the choice of a frame of reference for the physical universe. For example, the derivative of a scalar with respect to the first spatial coordinate is "not a scalar"  (that quantity depends on what spatial frame of reference is chosen).

Less trivially, the gradient of a scalar is  a physical covector (of which the above happens to be one covariant coordinate).  Indeed, the definition of a gradient specifies the same object  (in dual space)  for any choice of a physical coordinate basis.
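That intrinsic character can be checked numerically: under a linear change of coordinates given by a matrix  A, displacement components transform by  A  while gradient components transform by the inverse transpose, and the directional derivative (the pairing of the two) comes out the same either way. A small sketch (the names and numbers are mine, for illustration only):

```python
import numpy as np

# Gradient of the scalar field  f(x, y) = x**2 + 3y  at the point (1, 2):
grad = np.array([2 * 1.0, 3.0])          # components of the covector df

# An invertible linear change of coordinates:
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

v = np.array([0.5, -1.0])                # a displacement: transforms by A
v_new = A @ v
grad_new = np.linalg.inv(A).T @ grad     # a gradient: transforms by (A^-1)^T

# The pairing <df, v> (directional derivative) is coordinate-free:
assert np.isclose(grad @ v, grad_new @ v_new)
```

The two transformation laws are exactly the "vectors transform like displacements, covectors transform like gradients" slogan quoted below, here verified rather than postulated.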

Some physicists routinely introduce  (especially in the context of General Relativity)  vectors  as "things that transform like elementary displacements" and covectors  as "things that transform like gradients". Their students are thus expected to grasp a complicated notion (coordinate transformations)  before the stage is set. Newbies will need several passes through that intertwined logic before they "get it".

I'd rather introduce the mathematical notion of a vector first. Having easily absorbed that straight notion, the student may then be asked to consider whether a particular definition depends on a choice of coordinates.

For example, the linear coefficient of thermal expansion  (CTE)  cannot be properly defined as a scalar  (except for isotropic substances):  it's a tensor. On the other hand, the related cubic  CTE is always a scalar  (which is equal to the trace of the aforementioned CTE tensor).
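That trace relation is easy to check numerically: to first order, the relative volume change of a unit cube equals the sum of the linear strains along the principal axes. A sketch with made-up numbers for a hypothetical anisotropic crystal (my illustration, not data from the article):

```python
import numpy as np

# Hypothetical linear CTE tensor (1/K) of an anisotropic crystal,
# expressed in its principal axes:
alpha = np.diag([8e-6, 8e-6, 20e-6])

dT = 1.0                              # temperature change (K)
strain = np.diag(alpha) * dT          # linear expansion along each axis

# Volume of a unit cube after expansion:
V = np.prod(1.0 + strain)

cubic_cte = (V - 1.0) / dT            # volumetric CTE, measured numerically
print(cubic_cte, np.trace(alpha))     # both are ~ 3.6e-05 (equal to first order)
```

The tiny discrepancy between the two printed values is the second-order cross term, which vanishes in the limit of small strains.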

(2007-08-21)  
Unifying some notations of mathematical physics...

Observing similarities in distinct areas of mathematical physics, and building on groundwork from Grassmann (1844)  and Clifford (1876),  David Hestenes (1933-) has been advocating a denotational unification, which has garnered quite a few enthusiastic followers.

The approach is called Geometric Algebra  by its proponents. The central objects are called multivectors. Their coordinate-free manipulation goes by the name of multivector calculus  or geometric calculus,  a term which first appeared in the title of Hestenes' own doctoral dissertation (1963).

That's unrelated to the abstract field of Algebraic Geometry (which has been at the forefront of mainstream mathematical research for decades).

 (c) Copyright 2000-2025, Gerard P. Michon, Ph.D.
