In computer science, array programming refers to solutions that allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings.
Modern programming languages that support array programming (also known as vector or multidimensional languages) have been engineered specifically to generalize operations on scalars to apply transparently to vectors, matrices, and higher-dimensional arrays. These include APL, J, Fortran, MATLAB, Analytica, Octave, R, Cilk Plus, Julia, Perl Data Language (PDL), and Raku. In these languages, an operation that operates on entire arrays can be called a vectorized operation,[1] regardless of whether it is executed on a vector processor, which implements vector instructions. Array programming primitives concisely express broad ideas about data manipulation. The level of concision can be dramatic in certain cases: it is not uncommon[example needed] to find array programming language one-liners that require several pages of object-oriented code.
The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations.
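The contrast between the two styles can be sketched in plain Python (a hypothetical illustration only; the array languages discussed below build the whole-array form directly into the language):

```python
# Scalar thinking: an explicit loop over individual elements.
a = [1, 2, 3]
b = [10, 20, 30]
c = [0, 0, 0]
for i in range(len(a)):
    c[i] = a[i] + b[i]

# Array thinking: one operation applied to the whole aggregate.
# Plain Python needs a helper function for this; array languages do not.
def vadd(x, y):
    """Element-wise addition of two equal-length sequences."""
    return [xi + yi for xi, yi in zip(x, y)]

assert vadd(a, b) == c == [11, 22, 33]
```

The loop version spells out the iteration order and the index bookkeeping; the whole-array version states only the intent, which is what makes the model "high-level".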
Kenneth E. Iverson described the rationale behind array programming (actually referring to APL) as follows:[2]
most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician.
The thesis is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation. It is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter.
Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.
[...]
Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted since a clear statement of an algorithm can usually be used as a basis from which one may easily derive a more efficient algorithm.
The basis behind array programming and thinking is to find and exploit the properties of data where individual elements are similar or adjacent. Unlike object orientation, which implicitly breaks down data to its constituent parts (or scalar quantities), array orientation looks to group data and apply a uniform handling.
Function rank is an important concept to array programming languages in general, by analogy to tensor rank in mathematics: functions that operate on data may be classified by the number of dimensions they act on. Ordinary multiplication, for example, is a scalar ranked function because it operates on zero-dimensional data (individual numbers). The cross product operation is an example of a vector rank function because it operates on vectors, not scalars. Matrix multiplication is an example of a 2-rank function, because it operates on 2-dimensional objects (matrices). Collapse operators reduce the dimensionality of an input data array by one or more dimensions. For example, summing over elements collapses the input array by 1 dimension.
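A collapse operator can be sketched in plain Python (the helper `collapse_sum` is hypothetical, named here only for illustration): summing a rank-2 input along its first axis yields a rank-1 result.

```python
# A rank-2 (2-dimensional) input: a 2 x 3 matrix as a list of rows.
matrix = [[1, 2, 3],
          [4, 5, 6]]

def collapse_sum(rows):
    """Sum over the first axis: the rank-2 input collapses to rank 1."""
    # zip(*rows) iterates over columns; each column is summed to one scalar.
    return [sum(col) for col in zip(*rows)]

assert collapse_sum(matrix) == [5, 7, 9]  # one dimension fewer than the input
```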
Array programming is very well suited to implicit parallelization, a topic of much current research. Further, Intel and compatible CPUs developed and produced after 1997 contained various instruction set extensions, starting from MMX and continuing through SSSE3 and 3DNow!, which include rudimentary SIMD array capabilities. This has continued into the 2020s with instruction sets such as AVX-512, making modern CPUs sophisticated vector processors. Array processing is distinct from parallel processing in that one physical processor performs operations on a group of items simultaneously, while parallel processing aims to split a larger problem into smaller ones (MIMD) to be solved piecemeal by numerous processors. Processors with multiple cores and GPUs with thousands of general computing cores are common as of 2023.
The canonical examples of array programming languages are Fortran, APL, and J. Others include: A+, Analytica, Chapel, IDL, Julia, K, Klong, Q, MATLAB, GNU Octave, Scilab, FreeMat, Perl Data Language (PDL), R, Raku, S-Lang, SAC, Nial, ZPL, Futhark, and TI-BASIC.
In scalar languages such as C and Pascal, operations apply only to single values, so a+b expresses the addition of two numbers. In such languages, adding one array to another requires indexing and looping, the coding of which is tedious.
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i][j] += b[i][j];
In array-based languages, for example Fortran, the nested for-loop above can be written in array format in one line:
a = a + b
or alternatively, to emphasize the array nature of the objects,
a(:,:) = a(:,:) + b(:,:)
While scalar languages like C do not have native array programming elements as part of the language proper, this does not mean programs written in these languages never take advantage of the underlying techniques of vectorization (i.e., utilizing a CPU's vector-based instructions if it has them, or using multiple CPU cores). Some C compilers, like GCC at certain optimization levels, detect and vectorize sections of code that their heuristics determine would benefit from it. Another approach is given by the OpenMP API, which allows one to parallelize applicable sections of code by taking advantage of multiple CPU cores.
In array languages, operations are generalized to apply to both scalars and arrays. Thus, a+b expresses the sum of two scalars if a and b are scalars, or the sum of two arrays if they are arrays.
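This generalization can be imitated in a scalar-style language through operator overloading. A minimal sketch in Python (the `Vec` class is hypothetical, not a real library) makes + accept either another array or a scalar:

```python
class Vec(list):
    """A list subclass whose + is generalized, array-language style."""

    def __add__(self, other):
        # scalar + array: broadcast the scalar over every element
        if isinstance(other, (int, float)):
            return Vec(x + other for x in self)
        # array + array: element-wise sum
        return Vec(x + y for x, y in zip(self, other))

a = Vec([1, 2, 3])
b = Vec([10, 20, 30])
assert a + b == [11, 22, 33]   # array + array
assert a + 10 == [11, 12, 13]  # array + scalar
```

The same expression `a + b` thus means scalar addition, element-wise addition, or broadcasting, depending only on the operands, which is exactly how array languages let one notation cover all ranks.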
An array language simplifies programming, but possibly at a cost known as the abstraction penalty.[3][4][5] Because the additions are performed in isolation from the rest of the coding, they may not produce the most efficient code possible. (For example, additions of other elements of the same array may be subsequently encountered during the same execution, causing unnecessary repeated lookups.) Even the most sophisticated optimizing compiler would have an extremely hard time amalgamating two or more apparently disparate functions which might appear in different program sections or subroutines, even though a programmer could do this easily, aggregating sums on the same pass over the array to minimize overhead.
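The penalty can be made concrete with a small Python sketch (hypothetical `vadd` helper): two whole-array additions make two passes over the data and allocate an intermediate array, whereas a programmer can fuse both sums into a single pass.

```python
def vadd(x, y):
    """Element-wise addition, one pass over the data."""
    return [xi + yi for xi, yi in zip(x, y)]

a, b, c = [1, 2], [3, 4], [5, 6]

# Each whole-array operation is performed in isolation:
tmp = vadd(a, b)    # first pass, allocates an intermediate array
out = vadd(tmp, c)  # second pass over the intermediate

# The fused version does the same work in one pass with no intermediate --
# exactly the amalgamation an optimizing compiler struggles to discover:
fused = [ai + bi + ci for ai, bi, ci in zip(a, b, c)]
assert fused == out == [9, 12]
```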
The previous C code would become the following in the Ada language,[6] which supports array-programming syntax.
A := A + B;
APL uses single-character Unicode symbols with no syntactic sugar.
A←A+B
This operation works on arrays of any rank (including rank 0), and on a scalar and an array. Dyalog APL extends the original language with augmented assignments:
A+←B
Analytica provides the same economy of expression as Ada.
A := A + B;
Dartmouth BASIC had MAT statements for matrix and array manipulation in its third edition (1966).
DIM A(4), B(4), C(4)
MAT A = 1
MAT B = 2 * A
MAT C = A + B
MAT PRINT A, B, C
Stata's matrix programming language Mata supports array programming. Below, we illustrate addition, multiplication, addition of a matrix and a scalar, element by element multiplication, subscripting, and one of Mata's many inverse matrix functions.
. mata:
: A = (1,2,3) \ (4,5,6)
: A
       1   2   3
    +-------------+
  1 |  1   2   3  |
  2 |  4   5   6  |
    +-------------+
: B = (2..4) \ (1..3)
: B
       1   2   3
    +-------------+
  1 |  2   3   4  |
  2 |  1   2   3  |
    +-------------+
: C = J(3,2,1)    // a 3 by 2 matrix of ones
: C
       1   2
    +---------+
  1 |  1   1  |
  2 |  1   1  |
  3 |  1   1  |
    +---------+
: D = A + B
: D
       1   2   3
    +-------------+
  1 |  3   5   7  |
  2 |  5   7   9  |
    +-------------+
: E = A * C
: E
        1    2
    +-----------+
  1 |   6    6  |
  2 |  15   15  |
    +-----------+
: F = A :* B
: F
        1    2    3
    +----------------+
  1 |   2    6   12  |
  2 |   4   10   18  |
    +----------------+
: G = E :+ 3
: G
        1    2
    +-----------+
  1 |   9    9  |
  2 |  18   18  |
    +-----------+
: H = F[(2\1), (1,2)]    // subscripting to get a submatrix of F and
                         // switch rows 1 and 2
: H
        1    2
    +-----------+
  1 |   4   10  |
  2 |   2    6  |
    +-----------+
: I = invsym(F'*F)    // generalized inverse (F*F^(-1)F=F) of a
                      // symmetric positive semi-definite matrix
: I
[symmetric]
                 1             2             3
    +-------------------------------------------+
  1 |            0                              |
  2 |            0          3.25                |
  3 |            0         -1.75   .9444444444  |
    +-------------------------------------------+
: end
The implementation in MATLAB allows the same economy of expression as the Fortran language.
A = A + B;
A variant of the MATLAB language is the GNU Octave language, which extends the original language with augmented assignments:
A += B;
Both MATLAB and GNU Octave natively support linear algebra operations such as matrix multiplication, matrix inversion, and the numerical solution of systems of linear equations, even using the Moore–Penrose pseudoinverse.[7][8]
The Nial example of the inner product of two arrays can be implemented using the native matrix multiplication operator. If a is a row vector of size [1 n] and b is a corresponding column vector of size [n 1], then:
a * b;
By contrast, the entrywise product is implemented as:
a .* b;
The inner product between two matrices having the same number of elements can be implemented with the auxiliary operator (:), which reshapes a given matrix into a column vector, and the transpose operator ':
A(:)' * B(:);
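The MATLAB idiom above flattens both matrices and takes a dot product of the resulting vectors. The same computation in plain Python (the `flatten` helper is hypothetical; note that MATLAB's (:) flattens in column-major order, but since both operands are flattened the same way, the inner product is unaffected):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

def flatten(m):
    """Reshape a matrix (list of rows) into a single flat list."""
    return [x for row in m for x in row]

# Sum of products of corresponding elements, i.e. the matrix inner product.
inner = sum(x * y for x, y in zip(flatten(A), flatten(B)))
assert inner == 1*5 + 2*6 + 3*7 + 4*8  # 70
```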
The rasdaman query language is a database-oriented array-programming language. For example, two arrays could be added with the following query:
SELECT A + B FROM A, B
The R language supports the array paradigm by default. The following example illustrates a process of multiplication of two matrices followed by an addition of a scalar (which is, in fact, a one-element vector) and a vector:
> A <- matrix(1:6, nrow=2)
> A
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
> B <- t(matrix(6:1, nrow=2))  # t() is a transpose operator
> B
     [,1] [,2]
[1,]    6    5
[2,]    4    3
[3,]    2    1
> C <- A %*% B
> C
     [,1] [,2]
[1,]   28   19
[2,]   40   28
> D <- C + 1
> D
     [,1] [,2]
[1,]   29   20
[2,]   41   29
> D + c(1, 1)  # c() creates a vector
     [,1] [,2]
[1,]   30   21
[2,]   42   30
Raku supports the array paradigm via its Metaoperators.[9] The following example demonstrates the addition of arrays @a and @b using the Hyper-operator in conjunction with the plus operator.
[0] > my @a = [[1,1],[2,2],[3,3]];
[[1 1] [2 2] [3 3]]
[1] > my @b = [[4,4],[5,5],[6,6]];
[[4 4] [5 5] [6 6]]
[2] > @a »+« @b;
[[5 5] [7 7] [9 9]]
The matrix left-division operator concisely expresses some semantic properties of matrices. As in the scalar equivalent, if the (determinant of the) coefficient (matrix) A is not null, then it is possible to solve the (vectorial) equation A * x = b by left-multiplying both sides by the inverse of A: A−1 (in both MATLAB and GNU Octave languages: A^-1). The following mathematical statements hold when A is a full rank square matrix:
A^-1 * (A * x) == A^-1 * b
(A^-1 * A) * x == A^-1 * b    (matrix-multiplication associativity)
x = A^-1 * b
where == is the equivalence relational operator. The previous statements are also valid MATLAB expressions if the third one is executed before the others (numerical comparisons may be false because of round-off errors).
If the system is overdetermined – so that A has more rows than columns – the pseudoinverse A+ (in MATLAB and GNU Octave languages: pinv(A)) can replace the inverse A−1, as follows:
pinv(A) * (A * x) == pinv(A) * b
(pinv(A) * A) * x == pinv(A) * b    (matrix-multiplication associativity)
x = pinv(A) * b
However, these solutions are neither the most concise (e.g., the need remains to notationally differentiate overdetermined systems) nor the most computationally efficient. The latter point is easy to understand when considering again the scalar equivalent a * x = b, for which the solution x = a^-1 * b would require two operations instead of the more efficient x = b / a. The problem is that generally matrix multiplications are not commutative, as the extension of the scalar solution to the matrix case would require:
(a * x) / a == b / a
(x * a) / a == b / a    (commutativity does not hold for matrices!)
x * (a / a) == b / a    (associativity also holds for matrices)
x = b / a
The MATLAB language introduces the left-division operator \ to maintain the essential part of the analogy with the scalar case, therefore simplifying the mathematical reasoning and preserving the conciseness:
A \ (A * x) == A \ b
(A \ A) * x == A \ b    (associativity also holds for matrices; commutativity is no longer required)
x = A \ b
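The point of A \ b is that the system is solved directly, without ever forming the inverse matrix. A minimal sketch of that idea in plain Python, for a 2 x 2 full-rank system solved by Cramer's rule (the function name `left_divide_2x2` is hypothetical; production code would use a factorization-based solver from a linear algebra library):

```python
def left_divide_2x2(A, b):
    """Solve A * x = b for a full-rank 2x2 matrix A, without forming inv(A)."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("A is singular: no unique solution")
    # Cramer's rule: each component is a ratio of determinants.
    x1 = (b[0] * a22 - a12 * b[1]) / det
    x2 = (a11 * b[1] - b[0] * a21) / det
    return [x1, x2]

# A * x = b with A = [[2, 1], [1, 3]] and b = [5, 10] has solution x = [1, 3]:
assert left_divide_2x2([[2, 1], [1, 3]], [5, 10]) == [1.0, 3.0]
```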
This is not only an example of terse array programming from the coding point of view, but also from the computational efficiency perspective, which in several array programming languages benefits from quite efficient linear algebra libraries such as ATLAS or LAPACK.[10]
Returning to the previous quotation of Iverson, the rationale behind it should now be evident:
it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, its distributivity over addition, and its ability to represent linear functions and geometric operations) is a different and much more difficult matter. Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for explorations.
The use of specialized and efficient libraries to provide more terse abstractions is also common in other programming languages. In C++, several linear algebra libraries exploit the language's ability to overload operators. In some cases a very terse abstraction in those languages is explicitly influenced by the array programming paradigm, as the NumPy extension library to Python, Armadillo, and Blitz++ libraries do.[11][12]