| Automatically Tuned Linear Algebra Software | |
|---|---|
| Stable release | 3.10.3 / July 28, 2016 |
| Written in | C, FORTRAN 77 |
| Type | Software library |
| License | BSD License |
| Website | math-atlas |
Automatically Tuned Linear Algebra Software (ATLAS) is a software library for linear algebra. It provides a mature open source implementation of BLAS APIs for C and FORTRAN 77.
ATLAS is often recommended as a way to automatically generate an optimized BLAS library. While its performance often trails that of specialized libraries written for one specific hardware platform, it is often the first or even only optimized BLAS implementation available on new systems and is a large improvement over the generic BLAS available at Netlib. For this reason, ATLAS is sometimes used as a performance baseline for comparison with other products.
ATLAS runs on most Unix-like operating systems and on Microsoft Windows (using Cygwin). It is released under a BSD-style license without advertising clause, and many well-known mathematics applications including MATLAB, Mathematica, Scilab, SageMath, and some builds of GNU Octave may use it.
ATLAS provides a full implementation of the BLAS APIs as well as some additional functions from LAPACK, a higher-level library built on top of BLAS. In BLAS, functionality is divided into three groups called levels 1, 2 and 3.
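As an illustration of the interface (a minimal sketch, not taken from ATLAS's documentation), the program below makes one call from each BLAS level through the C interface that ATLAS ships as `cblas.h`. The link flags, commonly `-lcblas -latlas`, vary between installations.

```c
/* Minimal sketch: one call from each BLAS level through the C
 * interface (cblas.h) that ATLAS provides.  Link flags such as
 * `-lcblas -latlas` are typical but installation-dependent. */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double A[2 * 2] = {1.0, 2.0,
                       3.0, 4.0};           /* 2x2, row-major */
    double B[2 * 2] = {5.0, 6.0,
                       7.0, 8.0};
    double C[2 * 2] = {0.0, 0.0, 0.0, 0.0};
    double x[2] = {1.0, 1.0};
    double y[2] = {0.0, 0.0};

    /* Level 1 (vector-vector): y := 2*x + y */
    cblas_daxpy(2, 2.0, x, 1, y, 1);

    /* Level 2 (matrix-vector): y := 1.0*A*x + 0.0*y */
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 2, 2,
                1.0, A, 2, x, 1, 0.0, y, 1);

    /* Level 3 (matrix-matrix): C := 1.0*A*B + 0.0*C (GEMM) */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```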
The optimization approach is called Automated Empirical Optimization of Software (AEOS), which identifies four fundamental approaches to computer-assisted optimization, of which ATLAS employs three.[1]
Most of the Level 3 BLAS is derived from GEMM, so that is the primary focus of the optimization.
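For reference, GEMM computes a general matrix–matrix product (this is the standard BLAS definition, not anything specific to ATLAS):

$$C \leftarrow \alpha\,\mathrm{op}(A)\,\mathrm{op}(B) + \beta C, \qquad \mathrm{op}(X) \in \{X,\; X^{T}\},$$

where op(A) is M×K, op(B) is K×N and C is M×N. Routines such as SYMM, SYRK and TRMM can be cast largely in terms of GEMM, which is why tuning GEMM benefits the whole level.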
The intuition that the arithmetic operations will dominate the data accesses only holds for roughly square matrices; the real measure is something like a surface-area-to-volume ratio, and the difference becomes important for very non-square matrices.
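A rough count makes this concrete. For C = A·B with A of size M×K and B of size K×N:

$$\text{flops} \approx 2MNK, \qquad \text{data} = MK + KN + MN \ \text{elements}.$$

For square matrices (M = N = K = n) the ratio is 2n³ / 3n² = 2n/3, so data reuse grows with n; for a very non-square case such as K = 1 (an outer product) the ratio is 2MN / (MN + M + N) < 2, and the computation stays memory-bound no matter how it is blocked.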
Copying the inputs allows the data to be arranged in a way that provides optimal access for the kernel functions, but this comes at the cost of allocating temporary space, and an extra read and write of the inputs.
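One way to picture the copy step is packing a column-major operand into contiguous NB × NB tiles so the kernel can stream through each tile with unit stride. The sketch below is illustrative only; the function name `pack_blocks`, the fixed `NB`, and the tile layout are assumptions, not ATLAS's actual copy code.

```c
/* Illustrative only: pack a column-major M x K matrix A (leading
 * dimension lda) into contiguous NB x NB tiles.  The extra buffer and
 * the extra read/write of A are exactly the copy costs described in
 * the text above. */
#include <stddef.h>

#define NB 64   /* assumed block size; ATLAS would pick this by search */

static void pack_blocks(int M, int K, const double *A, int lda,
                        double *Apacked)
{
    double *dst = Apacked;
    for (int j0 = 0; j0 < K; j0 += NB) {          /* block column */
        int kb = (K - j0 < NB) ? K - j0 : NB;
        for (int i0 = 0; i0 < M; i0 += NB) {      /* block row */
            int mb = (M - i0 < NB) ? M - i0 : NB;
            for (int j = 0; j < kb; ++j)          /* copy one tile */
                for (int i = 0; i < mb; ++i)
                    *dst++ = A[(size_t)(j0 + j) * lda + (i0 + i)];
        }
    }
}
```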
So the first question GEMM faces is: can it afford to copy the inputs? If so, the inputs are copied into the blocked layout the kernels expect; if not, a no-copy version of GEMM that works on the operands in place is used. The actual decision is made through a simple heuristic which checks for "skinny" cases.
For second-level cache blocking a single cache edge parameter is used. The high level chooses an order in which to traverse the blocks: ijk, jik, ikj, jki, kij, kji. This need not be the same as the order in which the product is computed within a block.
Typically chosen orders are ijk or jik. For jik the ideal situation would be to copy A and the NB-wide panel of B. For ijk, swap the roles of A and B. A cache-blocked loop nest in the jik order is sketched below.
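The sketch is illustrative, not ATLAS's generated code: the inner kernel is a naive triple loop where ATLAS would substitute its tuned, register-blocked NB × NB kernel, and the name `gemm_jik_blocked` and the fixed `NB` are hypothetical.

```c
/* Illustrative cache blocking in the jik order for C := C + A*B, with
 * column-major M x K A, K x N B, M x N C, each stored with leading
 * dimension equal to its row count. */
#include <stddef.h>

#define NB 64   /* assumed blocking factor found by the ATLAS search */

static void gemm_jik_blocked(int M, int N, int K,
                             const double *A, const double *B, double *C)
{
    for (int j0 = 0; j0 < N; j0 += NB)             /* j: columns of C, B */
        for (int i0 = 0; i0 < M; i0 += NB)         /* i: rows of C, A    */
            for (int k0 = 0; k0 < K; k0 += NB) {   /* k: inner dimension */
                int nb = (N - j0 < NB) ? N - j0 : NB;
                int mb = (M - i0 < NB) ? M - i0 : NB;
                int kb = (K - k0 < NB) ? K - k0 : NB;
                /* kernel: C(i0.., j0..) += A(i0.., k0..) * B(k0.., j0..) */
                for (int j = 0; j < nb; ++j)
                    for (int i = 0; i < mb; ++i) {
                        double sum = 0.0;
                        for (int k = 0; k < kb; ++k)
                            sum += A[(size_t)(k0 + k) * M + (i0 + i)]
                                 * B[(size_t)(j0 + j) * K + (k0 + k)];
                        C[(size_t)(j0 + j) * M + (i0 + i)] += sum;
                    }
            }
}
```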
Choosing the bigger of M or N for the outer loop reduces the footprint of the copy. But for large K, ATLAS does not even allocate such a large amount of memory. Instead it defines a parameter, Kp, to give the best use of the L2 cache, and panels are limited to Kp in length. It first tries to allocate enough workspace to copy all of A together with a Kp-long, NB-wide panel of B (in the jik case). If that fails it falls back to copying only an NB-wide panel of A along with the panel of B. (If that fails it uses the no-copy version of GEMM, but this case is unlikely for reasonable choices of cache edge.) Kp is a function of cache edge and NB.
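As a rough accounting following the description above (the exact expressions ATLAS uses are not given here, and space for the block of C is ignored, so this is only an estimate):

$$W_{\text{full copy}} \approx (M + NB)\,K_p \ \text{elements}, \qquad W_{\text{panel copy}} \approx 2\,NB\,K_p \ \text{elements},$$

so the fallback footprint no longer grows with M, which is what keeps the workspace bounded.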
When integrating the ATLAS BLAS with LAPACK, an important consideration is the choice of blocking factor for LAPACK. If the ATLAS blocking factor is small enough, the blocking factor of LAPACK could be set to match that of ATLAS.
To take advantage of recursive factorization, ATLAS provides replacement routines for some LAPACK routines. These simply overwrite the corresponding LAPACK routines from Netlib.
Installing ATLAS on a particular platform is a challenging process which is typically done by a system vendor or a local expert and made available to a wider audience.
For many systems, architectural default parameters are available; these are essentially saved searches plus the results of hand tuning. If these architectural defaults work, they will likely give 10–15% better performance than the install-time search. On such systems the installation process is greatly simplified.