
In linear algebra, transposition is an operation that flips a matrix over its diagonal; that is, transposition switches the row and column indices of the matrix A to produce another matrix, called the transpose of A and often denoted A^T (among other notations).[1]
The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley.[2]
The transpose of a matrix A, denoted by A^T,[3] ᵀA, A^tr, ^tA or A^t, may be constructed by any of the following methods:
- Reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^T;
- Write the rows of A as the columns of A^T;
- Write the columns of A as the rows of A^T.
Formally, the i-th row, j-th column element of A^T is the j-th row, i-th column element of A:

$$\left[\mathbf{A}^{\mathrm{T}}\right]_{ij} = \left[\mathbf{A}\right]_{ji}.$$
If A is an m × n matrix, then A^T is an n × m matrix.
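For example, the transpose of a 2 × 3 matrix is the 3 × 2 matrix whose rows are the columns of the original:

$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}.$$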
A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if $\mathbf{A}^{\mathrm{T}} = \mathbf{A}$.
A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if $\mathbf{A}^{\mathrm{T}} = -\mathbf{A}$.
A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, A is Hermitian if $\mathbf{A}^{\mathrm{T}} = \overline{\mathbf{A}}$.
A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a skew-Hermitian matrix; that is, A is skew-Hermitian if $\mathbf{A}^{\mathrm{T}} = -\overline{\mathbf{A}}$.
A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, A is orthogonal if $\mathbf{A}^{\mathrm{T}} = \mathbf{A}^{-1}$.
A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, A is unitary if $\mathbf{A}^{\mathrm{T}} = \overline{\mathbf{A}^{-1}}$.
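As a quick illustration, the following sketch checks the first three of these definitions numerically (NumPy is an assumed choice of tool here, not something the article prescribes):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # symmetric: A^T equals A
S = np.array([[0.0, 5.0],
              [-5.0, 0.0]])           # skew-symmetric: S^T equals -S
H = np.array([[1.0, 1j],
              [-1j, 2.0]])            # Hermitian: H^T equals conj(H)

print(np.array_equal(A.T, A))          # True
print(np.array_equal(S.T, -S))         # True
print(np.array_equal(H.T, H.conj()))   # True
```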
Let A and B be matrices and c be a scalar. Then:
- $(\mathbf{A}^{\mathrm{T}})^{\mathrm{T}} = \mathbf{A}$: the operation of taking the transpose is an involution.
- $(\mathbf{A} + \mathbf{B})^{\mathrm{T}} = \mathbf{A}^{\mathrm{T}} + \mathbf{B}^{\mathrm{T}}$: the transpose respects addition.
- $(c\mathbf{A})^{\mathrm{T}} = c\mathbf{A}^{\mathrm{T}}$: the transpose of a scalar multiple is the scalar multiple of the transpose.
- $(\mathbf{A}\mathbf{B})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}$: the transpose of a product reverses the order of the factors.
- $\det(\mathbf{A}^{\mathrm{T}}) = \det(\mathbf{A})$ for square A.
- $(\mathbf{A}^{\mathrm{T}})^{-1} = (\mathbf{A}^{-1})^{\mathrm{T}}$ whenever A is invertible.
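A minimal numerical check of the product and inverse rules above, again assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
S = rng.standard_normal((3, 3)) + 5 * np.eye(3)   # comfortably invertible

print(np.allclose((A @ B).T, B.T @ A.T))                    # product rule
print(np.allclose(np.linalg.inv(S.T), np.linalg.inv(S).T))  # inverse rule
```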
If A is an m × n matrix and A^T is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: A A^T is m × m and A^T A is n × n. Furthermore, these products are symmetric matrices. Indeed, the matrix product A A^T has entries that are the inner product of a row of A with a column of A^T. But the columns of A^T are the rows of A, so the entry corresponds to the inner product of two rows of A. If p_ij is the entry of the product, it is obtained from rows i and j in A. The entry p_ji is also obtained from these rows, thus p_ij = p_ji, and the product matrix (p_ij) is symmetric. Similarly, the product A^T A is a symmetric matrix.
A quick proof of the symmetry of A A^T results from the fact that it is its own transpose:

$$\left(\mathbf{A}\mathbf{A}^{\mathrm{T}}\right)^{\mathrm{T}} = \left(\mathbf{A}^{\mathrm{T}}\right)^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} = \mathbf{A}\mathbf{A}^{\mathrm{T}}.$$
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
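A sketch of this idiom using SciPy's low-level BLAS bindings (an assumed choice of interface; any BLAS binding exposes a similar flag): the trans_a argument asks dgemm to read A in transposed order, so no transposed copy of A is ever materialized.

```python
import numpy as np
from scipy.linalg import blas

A = np.asfortranarray(np.random.rand(3, 5))
B = np.asfortranarray(np.random.rand(3, 4))

# Compute A^T B: trans_a=1 tells dgemm to interpret A as transposed
# while the data of A stays in its original layout.
C = blas.dgemm(alpha=1.0, a=A, b=B, trans_a=1)

print(np.allclose(C, A.T @ B))   # True
```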
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an n × m matrix in-place, with O(1) additional storage or at most storage much less than mn. For n ≠ m, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
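For the square case the permutation reduces to a set of pairwise swaps, as in the following minimal sketch (an illustration only, not one of the published algorithms the article refers to); the rectangular case requires following the cycles of the underlying permutation and is considerably more involved.

```python
def transpose_square_inplace(a, n):
    """Transpose an n x n matrix stored row-major in the flat list a,
    using O(1) additional storage: swap entry (i, j) with (j, i)."""
    for i in range(n):
        for j in range(i + 1, n):
            a[i * n + j], a[j * n + i] = a[j * n + i], a[i * n + j]

# Example: a 3 x 3 matrix stored row-major.
m = [1, 2, 3,
     4, 5, 6,
     7, 8, 9]
transpose_square_inplace(m, 3)
print(m)   # [1, 4, 7, 2, 5, 8, 3, 6, 9]
```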
As the main use of matrices is to represent linear maps between finite-dimensional vector spaces, the transpose is an operation on matrices that may be seen as the representation of some operation on linear maps.
This leads to a much more general definition of the transpose that works on every linear map, even when linear maps cannot be represented by matrices (such as in the case of infinite-dimensional vector spaces). In the finite-dimensional case, the matrix representing the transpose of a linear map is the transpose of the matrix representing the linear map, independently of the basis choice.
Let X^# denote the algebraic dual space of an R-module X. Let X and Y be R-modules. If u : X → Y is a linear map, then its algebraic adjoint or dual,[5] is the map u^# : Y^# → X^# defined by f ↦ f ∘ u. The resulting functional u^#(f) is called the pullback of f by u. The following relation characterizes the algebraic adjoint of u:[6]

$$\langle u^{\#}(f), x \rangle = \langle f, u(x) \rangle \quad \text{for all } f \in Y^{\#} \text{ and } x \in X,$$
where ⟨•, •⟩ is the natural pairing (i.e. defined by ⟨h, z⟩ := h(z)). This definition also applies unchanged to left modules and to vector spaces.[7]
The definition of the transpose may be seen to be independent of any bilinear form on the modules, unlike the adjoint (below).
The continuous dual space of a topological vector space (TVS) X is denoted by X′. If X and Y are TVSs then a linear map u : X → Y is weakly continuous if and only if u^#(Y′) ⊆ X′, in which case we let ^t u : Y′ → X′ denote the restriction of u^# to Y′. The map ^t u is called the transpose[8] of u.
If the matrix A describes a linear map with respect to bases of V and W, then the matrix A^T describes the transpose of that linear map with respect to the dual bases.
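A small NumPy sketch of this correspondence (an illustrative assumption: the standard bases of R^3 and R^2, with functionals represented by their coordinate vectors in the dual bases): pulling a functional f back through u amounts to multiplying its coordinate vector by A^T.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # u : R^3 -> R^2 in the standard bases
f = np.array([7.0, -1.0])         # functional on R^2, in the dual basis
x = np.array([1.0, 0.5, -2.0])    # vector in R^3

# Pullback: (f o u)(x) = f(Ax); its coordinate vector is A^T f.
print(np.isclose(f @ (A @ x), (A.T @ f) @ x))   # True
```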
Every linear map to the dual space u : X → X^# defines a bilinear form B : X × X → F, with the relation B(x, y) = u(x)(y). By defining the transpose of this bilinear form as the bilinear form ^t B defined by the transpose ^t u : X^## → X^#, i.e. ^t B(y, x) = ^t u(Ψ(y))(x), we find that B(x, y) = ^t B(y, x). Here, Ψ is the natural homomorphism X → X^## into the double dual.
If the vector spaces X and Y have respectively nondegenerate bilinear forms B_X and B_Y, a concept known as the adjoint, which is closely related to the transpose, may be defined:
If u : X → Y is a linear map between vector spaces X and Y, we define g as the adjoint of u if g : Y → X satisfies

$$B_X(x, g(y)) = B_Y(u(x), y) \quad \text{for all } x \in X \text{ and } y \in Y.$$
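In coordinates, writing B_X(x, x′) = xᵀ M_X x′ and B_Y(y, y′) = yᵀ M_Y y′ with invertible Gram matrices M_X and M_Y (an assumed convention for this sketch), the defining relation forces the matrix of the adjoint to be G = M_X⁻¹ Aᵀ M_Y:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))           # u : X -> Y, dim X = 2, dim Y = 3
M_X = np.array([[2.0, 1.0], [1.0, 2.0]])  # nondegenerate form on X
M_Y = np.diag([1.0, 2.0, 3.0])            # nondegenerate form on Y

G = np.linalg.inv(M_X) @ A.T @ M_Y        # matrix of the adjoint g

x = rng.standard_normal(2)
y = rng.standard_normal(3)
# Defining relation: B_X(x, g(y)) == B_Y(u(x), y)
print(np.isclose(x @ M_X @ (G @ y), (A @ x) @ M_Y @ y))   # True
```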
These bilinear forms define an isomorphism between X and X^#, and between Y and Y^#, resulting in an isomorphism between the transpose and adjoint of u. The matrix of the adjoint of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms. In this context, however, many authors use the term transpose to refer to the adjoint as defined here.
The adjoint allows us to consider whether g : Y → X is equal to u⁻¹ : Y → X. In particular, this allows the orthogonal group over a vector space X with a quadratic form to be defined without reference to matrices (nor the components thereof) as the set of all linear maps X → X for which the adjoint equals the inverse.
Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The Hermitian adjoint of a map between such spaces is defined similarly, and the matrix of the Hermitian adjoint is given by the conjugate transpose matrix if the bases are orthonormal.
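For matrices in orthonormal bases this is simply the conjugate transpose, as in the following NumPy sketch (an assumed illustration using the standard inner product on C^2):

```python
import numpy as np

A = np.array([[1 + 2j, 3j],
              [4.0,    5 - 1j]])

A_H = A.conj().T    # Hermitian adjoint: the conjugate transpose

x = np.array([1.0 + 1j, 2.0])
y = np.array([0.5j, -1.0])
# Defining property w.r.t. the standard inner product: <Ax, y> = <x, A_H y>
print(np.isclose(np.vdot(A @ x, y), np.vdot(x, A_H @ y)))   # True
```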