
xtensor


Multi-dimensional arrays with broadcasting and lazy computing.

Introduction

xtensor is a C++ library meant for numerical analysis with multi-dimensional array expressions.

xtensor provides

  • an extensible expression system enabling lazy broadcasting.
  • an API following the idioms of the C++ standard library.
  • tools to manipulate array expressions and build upon xtensor.

Containers of xtensor are inspired by NumPy, the Python array programming library. Adaptors for existing data structures to be plugged into our expression system can easily be written.

In fact, xtensor can be used to process NumPy data structures in place using Python's buffer protocol. Similarly, we can operate on Julia and R arrays. For more details on the NumPy, Julia and R bindings, check out the xtensor-python, xtensor-julia and xtensor-r projects respectively.

Up to version 0.26.0, xtensor requires a C++ compiler supporting C++14. xtensor 0.26.x requires a C++ compiler supporting C++17. xtensor 0.27.x requires a C++ compiler supporting C++20.

Installation

Package managers

We provide a package for the mamba (or conda) package manager:

mamba install -c conda-forge xtensor

Install from sources

xtensor is a header-only library.

You can directly install it from the sources:

cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix
make install

Installing xtensor using vcpkg

You can download and install xtensor using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install xtensor

The xtensor port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.

Trying it online

You can play with xtensor interactively in a Jupyter notebook right now! Just click on the binder link below:

Binder

The C++ support in Jupyter is powered by the xeus-cling C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.


Documentation

For more information on using xtensor, check out the reference documentation:

http://xtensor.readthedocs.io/

Dependencies

xtensor depends on the xtl library and has an optional dependency on the xsimd library:

xtensor | xtl     | xsimd (optional)
--------|---------|-----------------
master  | ^0.8.0  | ^13.2.0
0.27.0  | ^0.8.0  | ^13.2.0
0.26.0  | ^0.8.0  | ^13.2.0
0.25.0  | ^0.7.5  | ^11.0.0
0.24.7  | ^0.7.0  | ^10.0.0
0.24.6  | ^0.7.0  | ^10.0.0
0.24.5  | ^0.7.0  | ^10.0.0
0.24.4  | ^0.7.0  | ^10.0.0
0.24.3  | ^0.7.0  | ^8.0.3
0.24.2  | ^0.7.0  | ^8.0.3
0.24.1  | ^0.7.0  | ^8.0.3
0.24.0  | ^0.7.0  | ^8.0.3
0.23.x  | ^0.7.0  | ^7.4.8
0.22.0  | ^0.6.23 | ^7.4.8

The dependency on xsimd is required if you want to enable SIMD acceleration in xtensor. This can be done by defining the macro XTENSOR_USE_XSIMD before including any header of xtensor.

Usage

Basic usage

Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xview.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

xt::xarray<double> arr2
  {5.0, 6.0, 7.0};

xt::xarray<double> res = xt::view(arr1, 1) + arr2;

std::cout << res;

Outputs:

{7, 11, 14}

Initialize a 1-D array and reshape it in place.

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<int> arr
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

arr.reshape({3, 3});

std::cout << arr;

Outputs:

{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}

Index Access

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

std::cout << arr1(0, 0) << std::endl;

xt::xarray<int> arr2
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

std::cout << arr2(0);

Outputs:

1.0
1

The NumPy to xtensor cheat sheet

If you are familiar with NumPy APIs, and you are interested in xtensor, you can check out the NumPy to xtensor cheat sheet provided in the documentation.

Lazy broadcasting with xtensor

Xtensor can operate on arrays of different shapes or dimensions in an element-wise fashion. Broadcasting rules of xtensor are similar to those of NumPy and libdynd.

Broadcasting rules

In an operation involving two arrays of different dimensions, the array with the lesser number of dimensions is broadcast across the leading dimensions of the other.

For example, if A has shape (2, 3), and B has shape (4, 2, 3), the result of a broadcasted operation with A and B has shape (4, 2, 3).

   (2, 3) # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result

The same rule holds for scalars, which are handled as 0-D expressions. If A is a scalar, the equation becomes:

       () # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result

If matched up dimensions of two input arrays are different, and one of them has size 1, it is broadcast to match the size of the other. Let's say B has the shape (4, 2, 1) in the previous example, so the broadcasting happens as follows:

   (2, 3) # A
(4, 2, 1) # B
---------
(4, 2, 3) # Result

Universal functions, laziness and vectorization

With xtensor, if x, y and z are arrays of broadcastable shapes, the return type of an expression such as x + y * sin(z) is not an array. It is an xexpression object offering the same interface as an N-dimensional array, which does not hold the result. Values are only computed upon access or when the expression is assigned to an xarray object. This allows one to operate symbolically on very large arrays and only compute the result for the indices of interest.

We provide utilities to vectorize any scalar function (taking multiple scalar arguments) into a function that will operate on xexpressions, applying the lazy broadcasting rules which we just described. These functions are called xfunctions. They are xtensor's counterpart to NumPy's universal functions.

In xtensor, arithmetic operations (+, -, *, /) and all special functions are xfunctions.

Iterating over xexpressions and broadcasting iterators

All xexpressions offer two sets of functions to retrieve iterator pairs (and their const counterparts).

  • begin() and end() provide instances of xiterators which can be used to iterate over all the elements of the expression. The order in which elements are listed is row-major in that the index of the last dimension is incremented first.
  • begin(shape) and end(shape) are similar but take a broadcasting shape as an argument. Elements are iterated upon in a row-major way, but certain dimensions are repeated to match the provided shape as per the rules described above. For an expression e, e.begin(e.shape()) and e.begin() are equivalent.

Runtime vs compile-time dimensionality

Two container classes implementing multi-dimensional arrays are provided: xarray and xtensor.

  • xarray can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays.
  • xtensor has a dimension set at compilation time, which enables many optimizations. For example, shapes and strides of xtensor instances are allocated on the stack instead of the heap.

xarray and xtensor containers are both xexpressions and can be involved and mixed in universal functions, assigned to each other, etc.

Besides, two access operators are provided:

  • The variadic template operator(), which can take multiple integral arguments or none.
  • The operator[], which takes a single multi-index argument, which can be of a size determined at runtime. operator[] also supports access with braced initializers.

Performance

Xtensor operations make use of SIMD acceleration depending on what instruction sets are available on the platform at hand (SSE, AVX, AVX512, Neon).


The xsimd project underlies the detection of the available instruction sets, and provides generic high-level wrappers and memory allocators for client libraries such as xtensor.

Continuous benchmarking

Xtensor operations are continuously benchmarked, and are significantly improved at each new version. Current performance on statically dimensioned tensors matches that of the Eigen library. Dynamically dimensioned tensors for which the shape is heap allocated come at a small additional cost.

Stack allocation for shapes and strides

More generally, the library implements a promote_shape mechanism at build time to determine the optimal sequence type to hold the shape of an expression. The shape type of a broadcasting expression whose members have a dimensionality determined at compile time will have a stack-allocated sequence type. If at least one node of a broadcasting expression has a dynamic dimension (for example an xarray), it bubbles up to the entire broadcasting expression, which will have a heap-allocated shape. The same holds for views, broadcast expressions, etc.

Therefore, when building an application with xtensor, we recommend using statically-dimensioned containers whenever possible to improve the overall performance of the application.

Language bindings

xtensor-python

The xtensor-python project provides the implementation of two xtensor containers, pyarray and pytensor, which effectively wrap NumPy arrays, allowing in-place modification, including reshapes.

Utilities to automatically generate NumPy-style universal functions, exposed to Python from scalar functions, are also provided.

xtensor-julia

The xtensor-julia project provides the implementation of two xtensor containers, jlarray and jltensor, which effectively wrap Julia arrays, allowing in-place modification, including reshapes.

Like in the Python case, utilities to generate NumPy-style universal functionsare provided.

xtensor-r

The xtensor-r project provides the implementation of two xtensor containers, rarray and rtensor, which effectively wrap R arrays, allowing in-place modification, including reshapes.

Like for the Python and Julia bindings, utilities to generate NumPy-style universal functions are provided.

Library bindings

xtensor-blas

The xtensor-blas project provides bindings to BLAS libraries, enabling linear-algebra operations on xtensor expressions.

xtensor-io

The xtensor-io project enables the loading of a variety of file formats into xtensor expressions, such as image files, sound files, HDF5 files, as well as NumPy npy and npz files.

Building and running the tests

Building the tests requires the GTest testing framework and cmake.

gtest and cmake are available as packages for most Linux distributions. Besides, they can also be installed with the conda package manager (even on Windows):

conda install -c conda-forge gtest cmake

Once gtest and cmake are installed, you can build and run the tests:

mkdir build
cd build
cmake -DBUILD_TESTS=ON ../
make xtest

You can also use CMake to download the source of gtest, build it, and use the generated libraries:

mkdir build
cd build
cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../
make xtest

Building the HTML documentation

xtensor's documentation is built with three tools: doxygen, sphinx, and breathe.

While doxygen must be installed separately, you can install breathe by typing

pip install breathe sphinx_rtd_theme

Breathe can also be installed with conda:

conda install -c conda-forge breathe

Finally, go to the docs subdirectory and build the documentation with the following command:

make html

License

We use a shared copyright model that enables all contributors to maintain the copyright on their contributions.

This software is licensed under the BSD-3-Clause license. See the LICENSE file for details.
