(2006-05-07) Vectors were so named because they "carry" the distance from the origin.
In medical and other contexts, "vector" is synonymous with "carrier". The etymology is that of "vehicle": the Latin verb vehere means "to transport".
In elementary geometry, a vector is the difference between two points in space; it's what has to be traveled to go from a given origin to a destination. Etymologically, such a thing was perceived as "carrying" the distance between two points (the radius from a fixed origin to a point).
The term vector started out its mathematical life as part of the French locution "rayon vecteur" (radius vector). The whole expression is still used to identify a point in ordinary (Euclidean) space, as seen from a fixed origin.
As presented next, the term vector more generally denotes an element of a linear space (vector space) of unspecified dimensionality (possibly infinite) over any scalar field (not just the real numbers).
(2006-03-28) Vectors can be added, subtracted or scaled. The scalars form a field (called the ground field).
The vocabulary is standard: An element of the field K is called a scalar. Elements of the vector space are called vectors.
By definition, a vector space E is a set with a well-defined internal addition (the sum U+V of two vectors is a vector) and a well-defined external multiplication (i.e., for a scalar x and a vector U, the scaled vector x U is a vector) with the following properties:
(E, +) is an Abelian group. This is to say that the addition of vectors is an associative and commutative operation and that subtraction is defined as well (i.e., there's a zero vector, neutral for addition, and every vector has an opposite which yields zero when added to it).
Scaling is compatible with arithmetic on the field K:
∀x∈K, ∀y∈K, ∀U∈E, ∀V∈E,
(x + y) U  =  x U + y U
x (U + V)  =  x U + x V
(x y) U  =  x (y U)
1 U  =  U
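As a concrete sanity check (mine, not part of the original text), the four compatibility axioms can be verified mechanically for tuples of rationals, i.e. for the space Q³; the helper names add and scale are ad hoc:

```python
from fractions import Fraction

# Componentwise operations turning tuples of rationals into vectors of Q^3
# (hypothetical helpers, introduced just for this illustration):
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(x, u):
    return tuple(x * a for a in u)

u = (Fraction(1, 2), Fraction(-3), Fraction(0))
v = (Fraction(2, 3), Fraction(1, 5), Fraction(7))
x, y = Fraction(4, 9), Fraction(-5, 2)

# The four compatibility axioms listed above:
assert scale(x + y, u) == add(scale(x, u), scale(y, u))     # (x+y)U = xU + yU
assert scale(x, add(u, v)) == add(scale(x, u), scale(x, v)) # x(U+V) = xU + xV
assert scale(x * y, u) == scale(x, scale(y, u))             # (xy)U = x(yU)
assert scale(Fraction(1), u) == u                           # 1U = U
```

Exact rational arithmetic is used so that every equality holds on the nose, with no floating-point tolerance involved.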
(2010-04-23) The dimension is the largest possible number of independent vectors.
The modern definition of a vector space doesn't involve the concept of dimension, which had a towering presence in the historical examples of vector spaces taken from Euclidean geometry: A line has dimension 1, a plane has dimension 2, "space" has dimension 3, etc.
The concept of dimension is best retrieved by introducing two complementary notions pertaining to a set of vectors B.
B is said to consist of independent vectors when all nontrivial linear combinations of the vectors of B are nonzero.
B is said to generate E when every vector of the space E is a linear combination of vectors of B.
A linear combination of vectors is a sum of finitely many of those vectors, each multiplied by a scaling factor (called a coefficient). A linear combination with at least one nonzero coefficient is said to be nontrivial.
If B generates E and consists of independent vectors, then it's called a basis of E. Note that the trivial space {0} has an empty basis (the empty set does generate the space {0} because an empty sum is zero).
To prove that all nontrivial vector spaces have a basis requires the Axiom of Choice (in fact, the existence of a basis for any nontrivial vector space is equivalent to the Axiom of Choice).
Dimension theorem for vector spaces :
A not-so-obvious statement is that two bases of E can always be put in one-to-one correspondence with each other. Thus, all bases of E have the same cardinal (finite or not). That cardinal is called the dimension of the space E and it is denoted dim (E).
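For a concrete finite-dimensional illustration (my example, not the author's), SymPy can extract a basis of the span of a few explicit vectors and thereby exhibit their common dimension:

```python
from sympy import Matrix

# Three hypothetical vectors of Q^3, stored as the columns of M.
# The third column is the sum of the first two, so only two of the
# three vectors are independent.
M = Matrix([[1, 0, 1],
            [0, 1, 1],
            [0, 2, 2]])

basis = M.columnspace()    # a maximal independent subset of the columns
assert len(basis) == 2     # the dimension of the span
assert M.rank() == 2       # agrees with the rank, as it must
```

Any other basis of that span would have the same size, which is the content of the dimension theorem in the finite case.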
(2010-06-21) A vector space included in another is called a subspace.
A subset F of a vector space E is a subspace of E if and only if it is stable by addition and scaling (i.e., the sum of two vectors of F is in F and so is any vector of F multiplied by a scalar).
It's an easy exercise to show that the intersection F∩G of two subspaces F and G is a subspace of E. So is the Minkowski sum F+G (defined as the set of all sums x+y of a vector x from F and a vector y from G).
Two subspaces of E for which F∩G = {0} and F+G = E are said to be supplementary. Their sum is then called a direct sum and the following compact notation is used to state that fact:
E = F ⊕ G
In the case of finitely many dimensions, the following relation holds:
dim (F ⊕ G) = dim (F) + dim (G)
The generalization to nontrivial intersections is Grassmann's formula:
dim (F + G) = dim (F) + dim (G) − dim (F ∩ G)
A lesser-known version applies to spaces of finite codimensions:
codim (F + G) = codim (F) + codim (G) − codim (F ∩ G)
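Grassmann's formula can be spot-checked numerically with NumPy (my example; the subspaces of R^6 and their dimensions 3 and 4 are arbitrary). The dimension of the intersection is obtained independently as the nullity of [A | −B], since the pairs (x, y) with Ax = By parametrize F ∩ G when A and B have independent columns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subspaces of R^6: F and G are the column spaces of two
# random matrices, which have full column rank with probability 1.
A = rng.standard_normal((6, 3))   # basis of F,  dim F = 3
B = rng.standard_normal((6, 4))   # basis of G,  dim G = 4

dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))       # dim (F+G)
# Pairs (x, y) with A x = B y encode vectors of F n G, so its
# dimension is the nullity of [A | -B]:
nullity = (3 + 4) - np.linalg.matrix_rank(np.hstack([A, -B]))

# Grassmann's formula:  dim (F+G) = dim F + dim G - dim (F n G)
assert dim_sum == 3 + 4 - nullity
```

With these generic random subspaces, F+G fills all of R^6 and the intersection turns out to be a line, so both sides equal 6.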
(2010-12-03) Two spaces are isomorphic if there's a linear bijection between them.
A function f which maps a vector space E into another space F over the same field K is said to be linear if it respects addition and scaling:
∀x,y∈K, ∀U,V∈E,  f (x U + y V) = x f (U) + y f (V)
If such a linear function f is bijective, its inverse is also a linear map and the vector spaces E and F are said to be isomorphic.
E ≅ F
In particular, two vector spaces which have the same finite dimension over the same field are necessarily isomorphic.
(2010-12-03) The equivalence classes (or residues) modulo H can be called slices.
If H is a subspace of the vector space E, an equivalence relation can be defined by calling two vectors equivalent when their difference is in H.
An equivalence class is called a slice, and it can be expressed as a Minkowski sum of the form x+H. The space E is thus partitioned into slices parallel to H. The set of all slices is denoted E/H and is called the quotient of E by H. It's clearly a vector space (scaling a slice or adding up two slices yields a slice).
When E/H has finite dimension, that dimension is called the codimension of H. A linear subspace of codimension 1 is called a hyperplane of E.
The canonical linear map which sends a vector x of E to the slice x+H is called the quotient map of E onto E/H.
A vector space is always isomorphic to the direct sum of any subspace H and its quotient by that same subspace:
E ≅ H ⊕ E/H
Use this with H = ker (f) to prove the following fundamental theorem:
(2010-12-03) A vector space is isomorphic to the direct sum of the image and kernel (French: noyau) of any linear function defined over it.
The image or range of a linear function f which maps a vector space E to a vector space F is a subspace of F defined as follows:
im (f) = range (f) = f (E) = { y∈F | ∃x∈E, f (x) = y }
The kernel (also called nullspace) of f is the following subspace of E:
ker (f) = null (f) = { x∈E | f (x) = 0 }
The fundamental theorem of linear algebra states that there's a subspace of E which is isomorphic to f (E) and supplementary to ker (f) in E. This result holds for a finite or an infinite number of dimensions and it's commonly expressed by the following isomorphism:
f (E) ⊕ ker (f) ≅ E
This is a corollary of the above, since f (E) and E / ker (f) are isomorphic because a bijective map between them is obtained by associating uniquely f (x) with the residue class x + ker (f). Clearly, that association doesn't depend on the choice of x.
Restricted to vector spaces of finitely many dimensions, the theorem amounts to the following famous result (of great practical importance).
Rank theorem (or rank-nullity theorem) :
For any linear function f over a finite-dimensional space E, we have:
dim (f (E)) + dim (ker (f)) = dim (E)
dim (f (E)) is called the rank of f. The nullity of f is dim (ker (f)).
In the language of the matrices normally associated with linear functions: The rank and nullity of a matrix add up to its number of columns. The rank of a matrix A is defined as the largest number of linearly independent columns (or rows) in it. Its nullity is the dimension of its nullspace (consisting, by definition, of the column vectors x for which A x = 0).
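Here is a small exact-arithmetic check of the rank-nullity theorem (my example; the 3×5 matrix is arbitrary), using SymPy to compute the rank and the nullspace independently:

```python
from sympy import Matrix

# A hypothetical 3x5 matrix over Q, viewed as a linear map f : Q^5 -> Q^3.
# The third row is the sum of the first two, so the rank is 2.
A = Matrix([[1, 2, 0, 1, 3],
            [0, 1, 1, 0, 1],
            [1, 3, 1, 1, 4]])

rank = A.rank()                   # dim f(E)
nullity = len(A.nullspace())      # dim ker(f)
assert rank + nullity == A.cols   # rank + nullity = number of columns
```

The nullspace basis returned by SymPy exhibits the 3 independent column vectors x with A x = 0, matching the nullity of 3.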
(2007-11-06) A normed vector space is a linear space endowed with a norm.
Vector spaces can be endowed with a function (called norm) which associates to any vector V a real number ||V|| (called the norm or the length of V) such that the following properties hold:
||V|| is positive for any nonzero vector V.
||λV|| = |λ| ||V||
||U +V || ≤ ||U || + ||V ||
In this, λ is a scalar and |λ| denotes what's called a valuation on the field of scalars (a valuation is a special type of one-dimensional norm; the valuation of a product is the product of the valuations of its factors). Some examples of valuations are the absolute value of real numbers, the modulus of complex numbers and the p-adic absolute value of p-adic numbers.
Let's insist: The norm of a nonzero vector is always a positive real number, even for vector spaces whose scalars aren't real numbers.
(2020-10-09) v ↦ [ x ↦ f (x,v) ] is an isomorphism from E to E*.
In this, E* is understood to be the continuous dual of E.
In finitely many dimensions, a pseudo inner product is non-degenerate if and only if its associated determinant is nonzero.
(2020-09-30) Linear space endowed with a positive-definite sesquilinear form.
Parallelogram identity :
In an inner-product space, the following identity holds, which reduces to the Pythagorean theorem when ||u+v|| = ||u−v||
||u + v||² + ||u − v||² = 2 ||u||² + 2 ||v||²
Polarization Identity :
Conversely, a norm which verifies the above parallelogram identity on a complex linear space is necessarily derived from a sesquilinear inner product, obtained from the norm alone through the following polarization formula:
⟨u,v⟩ = ¼ ( ||u + v||² − ||u − v||² + i ||u − i v||² − i ||u + i v||² )
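Both identities are easy to confirm numerically (my check, not the author's; the complex vectors are random). With the ordering of terms used above, the recovered product is conjugate-linear in u, which matches NumPy's vdot convention ⟨u,v⟩ = Σ conj(u_k) v_k:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
norm = np.linalg.norm

# Parallelogram identity:
assert np.isclose(norm(u + v)**2 + norm(u - v)**2,
                  2 * norm(u)**2 + 2 * norm(v)**2)

# Polarization: recover the inner product from the norm alone.
p = 0.25 * (norm(u + v)**2 - norm(u - v)**2
            + 1j * norm(u - 1j*v)**2 - 1j * norm(u + 1j*v)**2)
assert np.isclose(p, np.vdot(u, v))   # vdot conjugates its first argument
```

Swapping the signs of the two imaginary terms would instead yield the product that is linear in u, the other common convention.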
A Hilbert space is an inner-product space which is complete with respect to the norm associated to the defining inner product (i.e., it's a Banach space with respect to that norm). The name Hilbert space itself was coined by John von Neumann (1903-1957) in honor of the pioneering work published by David Hilbert (1862-1943) on the Lebesgue sequence space ℓ² (which is indeed a Hilbert space).
In a vector space E, a linear form is a linear function which maps every vector of E to a scalar of the underlying field K. The set of all linear forms is called the algebraic dual of E. The set of all continuous linear forms is called the [topological] dual of E.
With finitely many dimensions, the two concepts are identical (i.e., every linear form is continuous). Not so with infinitely many dimensions. An element of the dual (a continuous linear form) is often called a covector.
Unless otherwise specified, we shall use the unqualified term dual to denote the topological dual. We shall denote it E* (some authors use E* to denote the algebraic dual and E' for the topological dual).
The bidual E** of E is the dual of E*. It's also called second dual or double dual.
A canonical injective homomorphism Φ exists which immerses E into E** by defining Φ(v), for any element v of E, as the linear form on E* which maps every element f of E* to the scalar f (v). That's to say:
Φ(v) (f) = f (v)
If the canonical homomorphism Φ is a bijection, then E is said to be reflexive and it is routinely identified with its bidual E**.
E = E**
Example of an algebraic dual :
If E = ℂ^(ℕ) denotes the space consisting of all complex sequences with only finitely many nonzero values, then the algebraic dual of E consists of all complex sequences without restrictions. In other words:
E' = ℂ^ℕ
Indeed, an element f of E' is a linear form over E which is uniquely determined by the unrestricted sequence of scalars formed by the images of the elements in the countable basis (e0, e1, e2 ...) of E.
E' is a Banach space, but E is not (it isn't complete). As a result, an absolutely convergent series need not converge in E.
For example, the series of general term en / (n+1)² doesn't converge in E, although it's absolutely convergent (because the series formed by the norms of its terms is a well-known convergent real series).
Representation theorems for [continuous] duals :
A representation theorem is a statement that identifies in concrete terms some abstractly specified entity. For example, the celebrated Riesz representation theorem states that the [continuous] dual of the Lebesgue space Lp (an abstract specification) is just isomorphic to the space Lq where q is a simple function of p (namely, 1/p + 1/q = 1).
Lebesgue spaces are usually linear spaces with uncountably many dimensions (their elements are functions over a continuum like ℝ or ℂ). However, the Lebesgue sequence spaces described in the next section are simpler (they have only countably many dimensions) and can serve as a more accessible example.
(2012-09-19) ℓp and ℓq are duals of each other when 1/p + 1/q = 1
For p > 1, the linear space ℓp is defined as the subspace of ℂ^ℕ consisting of all sequences for which the following series converges:
( || x ||p )^p = ( || (x0, x1, x2, x3, ... ) ||p )^p = Σn | xn |^p
As the notation implies, ||.||p is a norm on ℓp because of the following famous nontrivial inequality (Minkowski's inequality) which serves as the relevant triangle inequality:
|| x+y ||p ≤ || x ||p + || y ||p
For the topology induced by that so-called "p-norm", the [topological] dual of ℓp is isomorphic to ℓq, where:
1/p + 1/q = 1
Thus, ℓp is reflexive (i.e., isomorphic to its own bidual) for any p > 1.
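For finite sequences, both Minkowski's inequality and the Hölder pairing behind the ℓp-ℓq duality can be checked numerically (my illustration; the vectors and the conjugate pair p = 3, q = 3/2 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(50)
y = rng.standard_normal(50)
p, q = 3.0, 1.5              # conjugate exponents: 1/3 + 2/3 = 1

def pnorm(z, r):
    return (np.abs(z)**r).sum()**(1 / r)

# Minkowski's inequality: the triangle inequality for the p-norm.
assert pnorm(x + y, p) <= pnorm(x, p) + pnorm(y, p)

# Hoelder's inequality, the reason the q-norm bounds linear forms on l^p:
assert abs(np.dot(x, y)) <= pnorm(x, p) * pnorm(y, q)
```

Hölder's inequality is precisely what makes every y in ℓq act as a continuous linear form x ↦ Σ xn yn on ℓp.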
(2009-09-03) E ⊗ F is generated by tensor products.
Consider two vector spaces E and F over the same field of scalars K. For two covectors f and g (respectively belonging to E* and F*) we may consider a particular [continuous] linear form denoted f ⊗ g and defined over the cartesian product E×F via the relation:
f ⊗ g (u,v) = f (u) g (v)
The binary operator ⊗ thus defined from (E*)×(F*) to (E⊗F)* is called the tensor product. (Even when E = F, the operator is not commutative.)
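In coordinates, the tensor product of two covectors is just the outer product of their coefficient rows, and the defining relation can be checked directly (my sketch; all numerical values are made up):

```python
import numpy as np

# Covectors represented by their coefficient rows:
f = np.array([1.0, 2.0, -1.0])    # f in E*,  E = R^3
g = np.array([3.0, 0.5])          # g in F*,  F = R^2

T = np.outer(f, g)                # the matrix of the bilinear form f (x) g

u = np.array([0.5, 1.0, 2.0])     # u in E
v = np.array([-1.0, 4.0])         # v in F

# Defining relation:  (f (x) g)(u, v) = f(u) g(v)
assert np.isclose(u @ T @ v, (f @ u) * (g @ v))
```

When E = F, the matrix np.outer(f, g) is the transpose of np.outer(g, f), not its equal, which is the coordinate form of the non-commutativity noted above.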
Example of Use: The Dehn invariant (1901)
In the Euclidean plane, two simple polygonal loops which enclose the same area are always decomposable into each other. That's to say, with a finite number of straight cuts, we can obtain pieces of one shape which are pairwise congruent to the pieces of the other (William Wallace, 1807; Paul Gerwien, 1833; Farkas Bolyai, 1835).
Hilbert's third problem (the equidecomposability problem, 1898) asked whether the same is true for any pair of polyhedra having the same volume.
Surprisingly enough, that's not so, because volume is not the only invariant preserved by straight cuts in three dimensions. The other invariant (there are only two) is now known as Dehn's invariant, in honor of Max Dehn (1878-1952), the doctoral student of Hilbert who based his habilitation thesis on its discovery (September 1901). Here's a description:
(2015-02-21) Direct sum of vector spaces indexed by a monoid.
When the indexing monoid is the set of natural numbers {0,1,2,3,4...} or part thereof, the degree n of a vector is the smallest integer such that the direct sum of the subfamily indexed by {0,1 ... n} contains that vector.
(2007-04-30) An internal product among vectors turns a vector space into an algebra.
They're also called distributive algebras because of a mandatory property they share with rings. Any ring is trivially a 1-dimensional associative algebra over the boolean field F2. Any associative algebra is a ring.
An algebra is the structure obtained when an internal binary multiplication is well-defined on the vector space E (the product of two vectors is a vector) which is bilinear and distributive over addition. That's to say:
∀x∈K, ∀U∈E, ∀V∈E, ∀W∈E,
x (UV)  =  (xU) V  =  U (xV)
U (V + W)  =  UV + UW
(V + W) U  =  VU + WU
The commutator is the following bilinear function. If it's identically zero, the algebra is said to be commutative.
[U ,V ] = (UV) − (VU)
The lesser-used anticommutator is the following bilinear function. If it's identically zero, the algebra is said to be anticommutative.
{U ,V } = (UV) + (VU)
The commutator is anticommutative. The anticommutator is commutative.
The associator is defined as the following trilinear function. It measures how the internal multiplication fails to be associative.
[U ,V ,W ] = U (VW) − (UV) W
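Square matrices under matrix multiplication give a familiar testbed for these three functions (my example, with arbitrary random matrices): the associator vanishes identically while the commutator generally does not.

```python
import numpy as np

rng = np.random.default_rng(3)
U, V, W = (rng.standard_normal((3, 3)) for _ in range(3))

commutator = U @ V - V @ U               # [U,V]
anticommutator = U @ V + V @ U           # {U,V}
associator = U @ (V @ W) - (U @ V) @ W   # [U,V,W]

# Matrix multiplication is associative, so the associator vanishes
# (up to floating-point rounding)...
assert np.allclose(associator, 0)
# ...but it is not commutative: the commutator is generically nonzero.
assert not np.allclose(commutator, 0)
```

The anticommutator of two generic matrices is likewise nonzero, so this algebra is neither commutative nor anticommutative.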
If its internal product has a neutral element, the algebra is called unital :
∃1, ∀U,  1U = U1 = U
By definition, a derivation in an algebra is a vectorial endomorphism D (i.e., D is a linear operator) which obeys the following relation:
D (U V ) = D(U)V + U D(V)
One nontrivial example of a derivation of some historical importance is the Dirac operator. The derivations over an algebra form the Lie algebra of derivations, where the product of two derivations is defined as their commutator. One important example of that is the Witt algebra, introduced in 1909 by Elie Cartan (1869-1951) and studied at length in the 1930s by Ernst Witt (1911-1991). The Witt algebra is the Lie algebra of the derivations on the Laurent polynomials with complex coefficients (which may be viewed as the polynomials of two complex variables X and Y when XY = 1, namely ℂ[z, 1/z]).
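As a quick symbolic check (mine, not part of the original text), d/dz satisfies the derivation law on Laurent polynomials; the two sample elements are made up:

```python
from sympy import symbols, diff, simplify

z = symbols('z')

# Two hypothetical elements of C[z, 1/z]:
U = 3*z**2 + 1/z
V = z - 5/z**2

# The derivation law:  D(UV) = D(U) V + U D(V)
lhs = diff(U * V, z)
rhs = diff(U, z) * V + U * diff(V, z)
assert simplify(lhs - rhs) == 0
```

The classical Witt basis consists of the derivations Ln = −z^(n+1) d/dz, each of which obeys the same law.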
Associative Algebras :
When the associator defined above is identically zero, the algebra is said to be associative (many authors often use the word "algebra" to denote only associative algebras, including Clifford algebras). In other words, associative algebras fulfill the additional requirement:  ∀U, ∀V, ∀W,  U (VW) = (UV) W
In a context where distributivity is a mandatory property of algebras, that locution merely denotes algebras which may or may not be associative. A good example of this early usage was given in 1942, by a top expert:
That convention is made necessary by the fact that the unqualified word algebra is very often used to denote only associative algebras.
Unfortunately, Dick Schafer himself later recanted and introduced a distinction between not associative and nonassociative (no hyphen), with the latter not precluding associativity. He explains that in the opening lines of his reference book on the subject, thus endowed with a catchy title:
I beg to differ. Hyphenation is too fragile a distinction and a group of experts simply can't redefine the hyphenated term non-associative.
Therefore, unless the full associativity of multiplication is taken for granted, I'm using only the following set of inclusive locutions:
Let's examine all those lesser types of algebras, strongest first:
Alternative Algebras :
In general, a multilinear function is said to be alternating if its sign changes when the arguments undergo an odd permutation. An algebra is said to be alternative when the aforementioned associator is alternating.
The alternativity condition is satisfied if and only if two of the following statements hold (the third one then follows):
∀U, ∀V,  U (UV) = (UU) V   (Left alternativity.)
∀U, ∀V,  U (VV) = (UV) V   (Right alternativity.)
∀U, ∀V,  U (VU) = (UV) U   (Flexibility.)
Octonions are a non-associative example of such an alternative algebra.
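The octonions can be built concretely by iterating the Cayley-Dickson construction on coordinate arrays, which lets alternativity and non-associativity be verified directly. This is my own sketch; it uses the standard doubling convention (a,b)(c,d) = (ac − d̄b, da + bc̄):

```python
import numpy as np

def conj(x):
    """Cayley-Dickson conjugation: negate all but the first coordinate."""
    y = x.copy()
    y[1:] = -y[1:]
    return y

def cd_mult(x, y):
    """Product of two coordinate arrays whose length is a power of 2
    (1: reals, 2: complexes, 4: quaternions, 8: octonions)."""
    n = len(x)
    if n == 1:
        return x * y
    a, b = x[:n//2], x[n//2:]
    c, d = y[:n//2], y[n//2:]
    return np.concatenate([cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))])

rng = np.random.default_rng(4)
U, V = rng.standard_normal(8), rng.standard_normal(8)

# Octonions are alternative (left and right):
assert np.allclose(cd_mult(U, cd_mult(U, V)), cd_mult(cd_mult(U, U), V))
assert np.allclose(cd_mult(cd_mult(U, V), V), cd_mult(U, cd_mult(V, V)))

# ...but not associative: some basis triple has a nonzero associator.
e = np.eye(8)
assoc = lambda x, y, z: cd_mult(x, cd_mult(y, z)) - cd_mult(cd_mult(x, y), z)
assert max(np.abs(assoc(e[i], e[j], e[k])).max()
           for i in range(8) for j in range(8) for k in range(8)) > 0
```

Running the same checks on length-4 arrays (quaternions) makes every associator vanish, as the Cayley-Dickson construction only loses associativity at the octonion stage.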
Power-Associative Algebras :
Power-associativity states that the n-th power of any element is well-defined regardless of the order in which multiplications are performed:
U^1 = U,   U^2 = UU,   U^3 = U U^2 = U^2 U,   U^4 = U U^3 = U^2 U^2 = U^3 U,   ...
∀i > 0, ∀j > 0,   U^(i+j) = U^i U^j
The number of ways to work out a product of n identical factors is equal to the Catalan number C(2n−2, n−1)/n: 1, 1, 2, 5, 14, 42, 132... (A000108). Power-associativity means that, for any given n, all those ways yield the same result. The following special case (n = 3) is not sufficient:
∀U,  U (UU) = (UU) U
That just says that cubes are well-defined. The accepted term for that is third-power associativity (I call it cubic associativity or cube-associativity for short). It's been put to good use at least once, in print:
The Three Standard Types of Subassociativity :
By definition, a subalgebra is an algebra contained in another (the operations of the subalgebra being restrictions of the operations in the whole algebra). Any intersection of subalgebras is a subalgebra. The subalgebra generated by a subset is the intersection of all subalgebras containing it.
The above three types of subassociativity can be fully characterized in terms of the associativity of the subalgebras generated by 1, 2 or 3 elements:
If all subalgebras generated by one element are associative, then the whole algebra is power-associative.
If all subalgebras generated by two elements are associative, then the whole algebra is alternative (theorem ofArtin).
If all subalgebras generated by three elements are associative, then the whole algebra is associative too.
In particular, commutative or anticommutative products are flexible. Flexibility is usually not considered a form of subassociativity because it doesn't fit into the neat classification of the previous section.
Flexibility is preserved by the Cayley-Dickson construction. Therefore, all hypercomplex multiplications are flexible (Richard D. Schafer, 1954). In particular, the multiplication of sedenions is flexible (but not alternative).
In a flexible algebra, the right-power of an element is equal to the matching left-power, but that doesn't make powers well-defined. A flexible algebra is cube-associative but not necessarily power-associative. In particular, fourth powers need not be well-defined:
A (AA) = (AA) A = A³    but    A A³ = A³ A may differ from A² A²
Example of a two-dimensional flexible algebra which isn't power-associative :
 ×  |  A  |  B
 A  |  B  |  B
 B  |  B  |  A
The operator tabulated above is flexible because it's commutative. Yet, neither of the two possible fourth powers is well-defined:
A A³ = A B = B   whereas   A² A² = B B = A
B B³ = B B = A   whereas   B² B² = A A = B
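The little table above is easy to encode and check mechanically (my sketch; A and B just name the two basis elements):

```python
# The 2-element multiplication table, as a Python dict:
mul = {('A', 'A'): 'B', ('A', 'B'): 'B',
       ('B', 'A'): 'B', ('B', 'B'): 'A'}

# Commutative (hence flexible, as stated in the text):
assert all(mul[x, y] == mul[y, x] for x in 'AB' for y in 'AB')

# Fourth powers of A are ambiguous:
cube = mul['A', mul['A', 'A']]                 # A^3 = A(AA) = AB = B
assert mul['A', cube] == 'B'                   # A  A^3  = AB = B
assert mul[mul['A', 'A'], mul['A', 'A']] == 'A'  # A^2 A^2 = BB = A, a different result
```

Strictly speaking the algebra is the 2-dimensional span of A and B with this table extended bilinearly, but the ambiguity of fourth powers already shows up on the basis elements themselves.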
Antiflexible Algebras:
An algebra is called antiflexible when its associator is rotation-invariant:
∀U, ∀V, ∀W,   [U,V,W] = [V,W,U]
An antiflexible ring need not be power-associative.
Hermann Weyl (1885-1955) named those structures after the Norwegian mathematician Sophus Lie (1842-1899).
The basic internal multiplication in a Lie algebra is a bilinear operator denoted by a square bracket (called a Lie bracket ) which must be anticommutative and obey the so-called Jacobi identity, namely:
[B,A] = −[A,B]
[A,[B,C]] +[B,[C,A]] +[C,[A,B]] = 0
Anticommutativity implies [A,A] = 0 only in the absence of 2-torsion.
The cross-product gives ordinary 3D vectors a Lie algebra structure.
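Both defining properties of a Lie bracket are easy to confirm for the cross product (my numerical check, with arbitrary random vectors):

```python
import numpy as np

rng = np.random.default_rng(5)
A, B, C = rng.standard_normal((3, 3))   # three random vectors of R^3

# Anticommutativity:  [B,A] = -[A,B]
assert np.allclose(np.cross(B, A), -np.cross(A, B))

# Jacobi identity:  [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
jacobi = (np.cross(A, np.cross(B, C))
          + np.cross(B, np.cross(C, A))
          + np.cross(C, np.cross(A, B)))
assert np.allclose(jacobi, 0)
```

This Lie algebra is in fact so(3), the algebra of infinitesimal rotations, under the usual identification of 3D vectors with antisymmetric 3×3 matrices.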
Representations of a Lie Algebra :
The bracket notation is compatible with the key example appearing in quantum mechanics, where the Lie bracket is obtained as the commutator over an ordinary linear algebra of linear operators with respect to the functional composition of operators, namely:
[U ,V ] = UV − VU
If the Lie bracket is so defined, the Jacobi identity is a simple theorem, whose proof is straightforward; just sum up the following three equations:
[A,[B,C]] = A (BC − CB) − (BC − CB) A
[B,[C,A]] = B (CA − AC) − (CA − AC) B
[C,[A,B]] = C (AB − BA) − (AB − BA) C
Linearity makes products distribute over subtraction, so the sum of the right-hand sides consists of 12 terms, where the 6 possible permutations of the operators each appear twice with opposite signs. The whole sum is thus zero.
Conversely, an anticommutative algebra obeying the Jacobi identity is said to have a representation in terms of linear operators if it's isomorphic to the Lie algebra formed by those operators when the bracket is defined as the commutator of operators.
Loosely speaking, the Lie algebra associated to a Lie group is its tangent space at the origin (about the identity).
Lie(G) is formed by all the left-invariant vector fields on G.
A vector field X on a Lie group G is said to be invariant under left translations when
∀g∈G, ∀h∈G,   (dlg)h (Xh) = Xgh
where lg is the left-translation within G (lg(x) = g x) and dlg is its differential between tangent spaces.
Lie's third theorem states that every real finite-dimensional Lie algebra is associated to some Lie group. However, there exist infinite-dimensional Lie algebras not associated to any Lie group. Adrien Douady (1935-2007) pointed out the first example one late evening after a Bourbaki meeting...
Douady's counter-example is known as Heisenberg's Lie algebra (non-exponential Lie algebra) and arises naturally in quantum mechanics to describe the motion of particles on a straight line by means of three operators (X, Y and Z) acting on any square-integrable function f of a real variable (which form an infinite-dimensional space):
This cannot be realized as the tangent space of a connected three-dimensional Lie group, because the Lie algebra associated to any such group is either abelian or solvable, and the non-exponential Lie algebra is neither.
Semisimple Lie Algebras & Semisimple Lie Groups :
A simple Lie algebra is a non-abelian Lie algebra without anynonzero proper ideal.
A simple Lie group is a connected Lie group whose Lie algebra is simple.
A semisimple Lie algebra is a direct sum of simple Lie algebras.
A semisimple Lie group is a connected Lie group whose Lie algebra is semisimple.
Any Lie algebra L is the semidirect sum of its radical R (a maximal solvable ideal) and a semisimple algebra S.
(2023-04-08) They're Lie algebras if the bracket is alternating (i.e., [x,x] = 0).
Much of the literature about Leibniz algebras consists in pointing out that results known for Lie algebras are also true in this more general context.
Originally, Leibniz algebras were called D-algebras.
The "D" stood for derivation, a name which identifies any operator D obeying the following Leibniz law of ordinary derivatives:
D (x y) = x D(y) + D(x) y
If D is the operator corresponding to multiplication to the right by some element z, that becomes the following substitute for associativity:
(x y) z = x (y z) + (x z) y
If brackets are used for multiplication, as is common here, that reads:
(Right) Leibniz Identity
[[x,y],z] = [[x,z],y] + [x,[y,z]]
Indeed, Blokh (1965) and Loday (1993) defined a Leibniz algebra as a vector space (or a module) with a bilinear bracket verifying that identity.
In the anticommutative case, the Leibniz identity and Jacobi's identity are equivalent. Thus a Lie algebra is just an anticommutative Leibniz algebra.
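That equivalence can be observed concretely for the commutator bracket on matrices, which is anticommutative and obeys Jacobi, hence also the right Leibniz identity (my numerical check, with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(6)
x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

bracket = lambda a, b: a @ b - b @ a   # the commutator bracket

# Right Leibniz identity:  [[x,y],z] = [[x,z],y] + [x,[y,z]]
lhs = bracket(bracket(x, y), z)
rhs = bracket(bracket(x, z), y) + bracket(x, bracket(y, z))
assert np.allclose(lhs, rhs)
```

A genuine (non-Lie) Leibniz algebra would satisfy this identity while failing anticommutativity, which no commutator bracket can do.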
The Koszul duals of Leibniz algebras are called Zinbiel ("Leibniz" backwards). That was a pseudonym used by Loday who created a fictitious character by that name as a joke.
(2015-02-19) Turning any linear algebraA into a commutative one A+.
Those structures were introduced in 1933 by Pascual Jordan (1902-1980). They were named after him by A. Adrian Albert (1905-1972) in 1946. (Richard Schafer, a former student of Albert, wouldn't use that name, possibly because Jordan was a notorious Nazi.)
Just like commutators turn linear operators into a Lie algebra, a Jordan algebra is formed by using the anticommutator or Jordan product :
U ∘ V = ½ (UV + VU)
Axiomatically, a Jordan algebra is a commutative algebra (U∘V = V∘U) obeying Jordan's identity: (U∘V) ∘ (U∘U) = U ∘ (V ∘ (U∘U)).
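Any associative algebra yields a (special) Jordan algebra under the symmetrized product, and both axioms can be verified numerically for matrices (my check, with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(7)
U, V = rng.standard_normal((2, 4, 4))

jordan = lambda a, b: (a @ b + b @ a) / 2   # the Jordan product

# Commutativity:
assert np.allclose(jordan(U, V), jordan(V, U))

# Jordan's identity:  (U o V) o (U o U) = U o (V o (U o U))
UU = jordan(U, U)
assert np.allclose(jordan(jordan(U, V), UU), jordan(U, jordan(V, UU)))
```

Jordan's original motivation was quantum observables: Hermitian matrices are not closed under the ordinary product, but they are closed under this symmetrized one.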
(2019-02-27) A 27-dimensional exceptional Jordan algebra.
(2007-04-30) Unital associative algebras with a quadratic form.
Those structures are named after the British geometer and philosopher William Clifford (1845-1879), who originated the concept in 1876.
The first description of Clifford algebras centered on quadratic forms was given in 1945 by a founder of the Bourbaki collaboration: Claude Chevalley (1909-1984).
(2022-02-18) Commutative two-dimensional algebra of elements of the form x + yε
The second component (the "Fermionic" part) has zero square: ε² = 0.
The grade of a product is the sum of the grades of its factors, modulo 4.
Products of gamma matrices :  a = γ0,  b = γ1,  c = γ2,  d = γ3  (with e = abcd)
The 16 basis elements, sorted by grade: I (grade 0); a, b, c, d (grade 1); ab, ac, ad, bc, bd, cd (grade 2, bivectors); ae, be, ce, de (grade 3); e (grade 4). Products involving the identity I being trivial, the I row and column are omitted.
 ×  |   a    b    c    d |  ab   ac   ad   bc   bd   cd |  ae   be   ce   de |   e
 a  |   I   ab   ac   ad |   b    c    d   de  -ce   be |   e   cd  -bd   bc |  ae
 b  | -ab   -I   bc   bd |   a  -de   ce   -c   -d   ae | -cd   -e  -ad   ac |  be
 c  | -ac  -bc   -I   cd |  de    a  -be    b  -ae   -d |  bd   ad   -e  -ab |  ce
 d  | -ad  -bd  -cd   -I | -ce   be    a   ae    b    c | -bc  -ac   ab   -e |  de
 ab |  -b   -a   de  -ce |   I  -bc  -bd  -ac  -ad    e | -be  -ae   -d    c |  cd
 ac |  -c  -de   -a   be |  bc    I  -cd   ab   -e  -ad | -ce    d  -ae   -b | -bd
 ad |  -d   ce  -be   -a |  bd   cd    I    e   ab   ac | -de   -c    b  -ae |  bc
 bc |  de    c   -b   ae |  ac  -ab    e   -I   cd  -bd |  -d   ce  -be   -a | -ad
 bd | -ce    d  -ae   -b |  ad   -e  -ab  -cd   -I   bc |   c   de    a  -be |  ac
 cd |  be   ae    d   -c |   e   ad  -ac   bd  -bc   -I |  -b   -a   de  -ce | -ab
 ae |  -e  -cd   bd  -bc |  be   ce   de   -d    c   -b |   I   ab   ac   ad |  -a
 be |  cd    e   ad  -ac |  ae    d   -c  -ce  -de   -a | -ab   -I   bc   bd |  -b
 ce | -bd  -ad    e   ab |  -d   ae    b   be    a  -de | -ac  -bc   -I   cd |  -c
 de |  bc   ac  -ab    e |   c   -b   ae   -a   be   ce | -ad  -bd  -cd   -I |  -d
 e  | -ae  -be  -ce  -de |  cd  -bd   bc  -ad   ac  -ab |   a    b    c    d |  -I
Products highlighted in blue commute. The others anticommute (except in trivial cases where one factor is the identity or both are equal).
There are two opposite ways to associate the letter "a" to the time coordinate (ct). Likewise, we can assign 3 of the 4 remaining letters (b,c,d,e) to 3 given orthogonal axes in 120 nonoriented ways or 1920 oriented ones. Thus, a spacetime reference frame can be labeled in 1920 equivalent ways.
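The table can be realized concretely with the Dirac gamma matrices in their standard representation (my sketch; the identification a = γ0, b, c, d = γ1, γ2, γ3 is the one used above):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
off = np.array([[0, 1], [-1, 0]], dtype=complex)  # block structure of gamma^k

a = np.kron(sz, I2)     # gamma0, squares to +I (timelike)
b = np.kron(off, sx)    # gamma1
c = np.kron(off, sy)    # gamma2
d = np.kron(off, sz)    # gamma3
e = a @ b @ c @ d       # the pseudoscalar e = abcd

I4 = np.eye(4)
assert np.allclose(a @ a, I4)              # a^2 = I
assert np.allclose(b @ b, -I4)             # b^2 = -I (spacelike)
assert np.allclose(a @ b, -(b @ a))        # distinct gammas anticommute
assert np.allclose(e @ e, -I4)             # e^2 = -I, as in the table
assert np.allclose((a @ b) @ (c @ d), e)   # sample table entry: ab cd = e
```

Each of the 225 entries of the table can be verified this way, since matrix multiplication faithfully models the geometric product of the spacetime algebra with signature (+,−,−,−).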
Covariant Differential Operators
The simplest one was found by Dirac in 1927. Its eigenvectors describe particles of spin ½ and mass m, like the electron.
... and then, you could linearize the sum of four squares. Actually, you could even linearize the sums of five squares...  Paul Dirac (Accademia dei Lincei, Rome, 1975-04-15).
Three Distinct Products :
The product used so far is understood as the canonical multiplicative operator of the spacetime algebra, unambiguously known as the geometric product and denoted by mere juxtaposition of the operands, without printing any intervening operator symbol. It decomposes into a commutative part, called inner product or dot product (which is denoted by the dot symbol "." and always produces a scalar) and an anticommutative one, called outer product, exterior product or wedge product (∧).
U V = U . V + U ∧ V
with U . V = ½ (UV + VU) and U ∧ V = ½ (UV − VU)
Such relations define the inner and outer products in any distributive algebra. Both are distinct from the canonical algebra product (geometric product) unless the algebra is commutative.
(2017-12-01) The exterior product (wedge product) is anticommutative.
(2015-02-23) A special linear involution is singled out (adjunction or conjugation).
As the adjoint or conjugate of an element U is usually denoted U*, such structures are also called *-algebras (star-algebras). The following properties are postulated:
(2015-02-23) Compact operators resemble ancient infinitesimals...
John von Neumann (1903-1957) introduced those structures in 1929 (he called them simply rings of operators). Von Neumann presented their basic theory in 1936 with the help of Francis Murray (1911-1996).
By definition, a factor is a von Neumann algebra with a trivial center (which is to say that only the scalar multiples of identity commute with all the elements of the algebra).
(2009-09-25)
To a mathematician, the juxtaposition (or cartesian product) of several vector spaces over the same field K is always a vector space over that field (as component-wise definitions of addition and scaling satisfy the above axioms).
When physicists state that some particular juxtaposition of quantities (possibly a single numerical quantity by itself) is "not a scalar", "not a vector" or "not a tensor", they mean that the thing lacks an unambiguous and intrinsic definition.
Typically, a flawed vectorial definition would actually depend on the choice of a frame of reference for the physical universe. For example, the derivative of a scalar with respect to the first spatial coordinate is "not a scalar" (that quantity depends on what spatial frame of reference is chosen).
Less trivially, the gradient of a scalar is a physical covector (of which the above happens to be one covariant coordinate). Indeed, the definition of a gradient specifies the same object (in dual space) for any choice of a physical coordinate basis.
Some physicists routinely introduce (especially in the context of General Relativity) vectors as "things that transform like elementary displacements" and covectors as "things that transform like gradients". Their students are thus expected to grasp a complicated notion (coordinate transformations) before the stage is set. Newbies will need several passes through that intertwined logic before they "get it".
I'd rather introduce the mathematical notion of a vector first. Having easily absorbed that straight notion, the student may then be asked to consider whether a particular definition depends on a choice of coordinates.
For example, the linear coefficient of thermal expansion (CTE) cannot be properly defined as a scalar (except for isotropic substances): it's a tensor. On the other hand, the related cubic CTE is always a scalar (which is equal to the trace of the aforementioned CTE tensor).
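The scalar nature of the cubic CTE can be seen numerically: the trace of a tensor is invariant under any rotation of the spatial frame (my illustration; the anisotropic CTE values are made up):

```python
import numpy as np

# Hypothetical linear-CTE tensor of an anisotropic crystal, expressed
# in its principal axes (units: 1e-6 per kelvin; values are invented):
alpha = np.diag([23.0, 23.0, 5.0])

# The cubic (volumetric) CTE is the trace -- a true scalar, unchanged
# by any orthogonal change of spatial frame Q:
rng = np.random.default_rng(8)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal frame
assert np.isclose(np.trace(Q @ alpha @ Q.T), np.trace(alpha))
print(np.trace(alpha))   # 51.0
```

By contrast, an individual diagonal entry such as alpha[0,0] does change under the rotation, which is exactly why the linear CTE is "not a scalar".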
(2007-08-21) Unifying some notations of mathematical physics...
Observing similarities in distinct areas of mathematical physics and building on groundwork from Grassmann (1844) and Clifford (1876), David Hestenes (1933-) has been advocating a denotational unification, which has garnered quite a few enthusiastic followers.
The approach is called Geometric Algebra by its proponents. The central objects are called multivectors. Their coordinate-free manipulation goes by the name of multivector calculus or geometric calculus, a term which first appeared in the title of Hestenes' own doctoral dissertation (1963).
That's unrelated to the abstract field of Algebraic Geometry (which has been at the forefront of mainstream mathematical research for decades).