tensorstore.concat(layers: Iterable[TensorStore | Spec], axis: int | str, *, read: bool | None = None, write: bool | None = None, context: Context | None = None, transaction: Transaction | None = None, rank: int | None = None, dtype: DTypeLike | None = None, domain: IndexDomain | None = None, shape: Iterable[int] | None = None, dimension_units: Iterable[Unit | str | Real | tuple[Real, str] | None] | None = None, schema: Schema | None = None) → TensorStore

Virtually concatenates a sequence of TensorStore layers along an existing dimension.

>>> store = ts.concat([
...     ts.array([1, 2, 3, 4], dtype=ts.uint32),
...     ts.array([5, 6, 7, 8], dtype=ts.uint32)
... ], axis=0)
>>> store
TensorStore({
  'context': {'data_copy_concurrency': {}},
  'driver': 'stack',
  'dtype': 'uint32',
  'layers': [
    {
      'array': [1, 2, 3, 4],
      'driver': 'array',
      'dtype': 'uint32',
      'transform': {'input_exclusive_max': [4], 'input_inclusive_min': [0]},
    },
    {
      'array': [5, 6, 7, 8],
      'driver': 'array',
      'dtype': 'uint32',
      'transform': {
        'input_exclusive_max': [8],
        'input_inclusive_min': [4],
        'output': [{'input_dimension': 0, 'offset': -4}],
      },
    },
  ],
  'schema': {'domain': {'exclusive_max': [8], 'inclusive_min': [0]}},
  'transform': {'input_exclusive_max': [8], 'input_inclusive_min': [0]},
})
>>> await store.read()
array([1, 2, 3, 4, 5, 6, 7, 8], dtype=uint32)

>>> store = ts.concat([
...     ts.array([[1, 2, 3], [4, 5, 6]], dtype=ts.uint32),
...     ts.array([[7, 8, 9], [10, 11, 12]], dtype=ts.uint32)
... ], axis=0)
>>> store
TensorStore({
  'context': {'data_copy_concurrency': {}},
  'driver': 'stack',
  'dtype': 'uint32',
  'layers': [
    {
      'array': [[1, 2, 3], [4, 5, 6]],
      'driver': 'array',
      'dtype': 'uint32',
      'transform': {
        'input_exclusive_max': [2, 3],
        'input_inclusive_min': [0, 0],
      },
    },
    {
      'array': [[7, 8, 9], [10, 11, 12]],
      'driver': 'array',
      'dtype': 'uint32',
      'transform': {
        'input_exclusive_max': [4, 3],
        'input_inclusive_min': [2, 0],
        'output': [
          {'input_dimension': 0, 'offset': -2},
          {'input_dimension': 1},
        ],
      },
    },
  ],
  'schema': {'domain': {'exclusive_max': [4, 3], 'inclusive_min': [0, 0]}},
  'transform': {'input_exclusive_max': [4, 3], 'input_inclusive_min': [0, 0]},
})
>>> await store.read()
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 7,  8,  9],
       [10, 11, 12]], dtype=uint32)

>>> store = ts.concat([
...     ts.array([[1, 2, 3], [4, 5, 6]], dtype=ts.uint32),
...     ts.array([[7, 8, 9], [10, 11, 12]], dtype=ts.uint32)
... ], axis=-1)
>>> store
TensorStore({
  'context': {'data_copy_concurrency': {}},
  'driver': 'stack',
  'dtype': 'uint32',
  'layers': [
    {
      'array': [[1, 2, 3], [4, 5, 6]],
      'driver': 'array',
      'dtype': 'uint32',
      'transform': {
        'input_exclusive_max': [2, 3],
        'input_inclusive_min': [0, 0],
      },
    },
    {
      'array': [[7, 8, 9], [10, 11, 12]],
      'driver': 'array',
      'dtype': 'uint32',
      'transform': {
        'input_exclusive_max': [2, 6],
        'input_inclusive_min': [0, 3],
        'output': [
          {'input_dimension': 0},
          {'input_dimension': 1, 'offset': -3},
        ],
      },
    },
  ],
  'schema': {'domain': {'exclusive_max': [2, 6], 'inclusive_min': [0, 0]}},
  'transform': {'input_exclusive_max': [2, 6], 'input_inclusive_min': [0, 0]},
})
>>> await store.read()
array([[ 1,  2,  3,  7,  8,  9],
       [ 4,  5,  6, 10, 11, 12]], dtype=uint32)

>>> await ts.concat([
...     ts.array([[1, 2, 3], [4, 5, 6]], dtype=ts.uint32).label["x", "y"],
...     ts.array([[7, 8, 9], [10, 11, 12]], dtype=ts.uint32)
... ], axis="y").read()
array([[ 1,  2,  3,  7,  8,  9],
       [ 4,  5,  6, 10, 11, 12]], dtype=uint32)

Parameters:
- layers: Iterable[TensorStore | Spec]
  Sequence of layers to concatenate. If a layer is specified as a Spec rather than a TensorStore, it must have a known domain and will be opened on demand as needed for individual read and write operations.
- axis: int | str
  Existing dimension along which to concatenate. A negative number counts from the end. May also be specified by a dimension label.
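As the reprs above show, the stack driver gives each layer a translated origin along the concatenation axis, plus an output offset that maps combined coordinates back to layer-local coordinates. The bookkeeping can be sketched in pure Python; `concat_layout` below is a hypothetical illustrative helper, not part of the tensorstore API:

```python
def concat_layout(shapes, axis):
    """Compute per-layer placement along `axis` for virtual concatenation.

    For each layer shape, returns the inclusive_min of its slot in the
    combined domain and the offset mapping combined coordinates back to
    layer-local coordinates (the 'offset' seen in the output transforms).
    """
    origin = 0
    layout = []
    for shape in shapes:
        layout.append({"inclusive_min": origin, "offset": -origin})
        origin += shape[axis]  # negative axis indexes from the end, as documented
    return layout, origin  # origin is now the total extent along the axis

# The two 1-D layers of length 4 from the first example: the second layer
# lands at inclusive_min 4 with offset -4, matching its output transform.
layout, total = concat_layout([(4,), (4,)], axis=0)
```

This mirrors why, in the 2-D `axis=0` example, the second layer's transform has `'offset': -2`: its slot starts where the first layer's two rows end.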
- read: bool | None = None
  Allow read access. Defaults to True if neither read nor write is specified.
- write: bool | None = None
  Allow write access. Defaults to True if neither read nor write is specified.
- context: Context | None = None
  Shared resource context. Defaults to a new (unshared) context with default options, as returned by tensorstore.Context(). To share resources, such as cache pools, between multiple open TensorStores, you must specify a context.
- transaction: Transaction | None = None
  Transaction to use for opening/creating, and for subsequent operations. By default, the open is non-transactional.
  Note: To perform transactional operations using a TensorStore that was previously opened without a transaction, use TensorStore.with_transaction.
- rank: int | None = None
  Constrains the rank of the TensorStore. If there is an index transform, the rank constraint must match the rank of the input space.
- dtype: DTypeLike | None = None
  Constrains the data type of the TensorStore. If a data type has already been set, it is an error to specify a different data type.
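The "already set" rule amounts to a simple constraint merge: an unset value accepts anything, while two set values must agree. A minimal illustrative sketch (not the tensorstore implementation, which performs this check internally):

```python
def merge_dtype(existing, new):
    """Merge a dtype constraint; None means unset, conflicts are errors."""
    if existing is None:
        return new
    if new is None or new == existing:
        return existing
    raise ValueError(f"dtype {new!r} conflicts with existing dtype {existing!r}")
```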
- domain: IndexDomain | None = None
  Constrains the domain of the TensorStore. If there is an existing domain, the specified domain is merged with it as follows:
  - The rank must match the existing rank.
  - All bounds must match, except that a finite or explicit bound is permitted to match an infinite and implicit bound, and takes precedence.
  - If both the new and existing domain specify non-empty labels for a dimension, the labels must be equal. If only one of the domains specifies a non-empty label for a dimension, the non-empty label takes precedence.
  Note that if there is an index transform, the domain must match the input space, not the output space.
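The per-dimension merge rules above can be sketched with two small helpers. This is an illustrative model only, not tensorstore's implementation: a bound of None stands in for an infinite, implicit bound, and labels are plain strings with "" meaning unlabeled:

```python
INF = None  # stand-in for an infinite, implicit bound

def merge_bound(existing, new):
    """Merge one bound: equal bounds merge; a finite bound takes precedence
    over an infinite/implicit one; two different finite bounds conflict."""
    if existing == new:
        return existing
    if existing is INF:
        return new
    if new is INF:
        return existing
    raise ValueError(f"bound mismatch: {existing} vs {new}")

def merge_label(existing, new):
    """Merge dimension labels: two non-empty labels must be equal; otherwise
    the non-empty label (if any) takes precedence."""
    if existing and new and existing != new:
        raise ValueError(f"label mismatch: {existing!r} vs {new!r}")
    return existing or new
```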
- shape: Iterable[int] | None = None
  Constrains the shape and origin of the TensorStore. Equivalent to specifying a domain of ts.IndexDomain(shape=shape).
  Note: This option also constrains the origin of all dimensions to be zero.
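The equivalence can be spelled out directly: a shape constraint fixes the origin at zero and the exclusive upper bounds at the given shape, just like ts.IndexDomain(shape=shape). A hypothetical helper showing the resulting bounds:

```python
def shape_to_domain(shape):
    """A shape constraint pins inclusive_min to zero in every dimension
    and exclusive_max to the given shape."""
    shape = list(shape)
    return {"inclusive_min": [0] * len(shape), "exclusive_max": shape}

# shape=[4, 3] constrains the domain to [0, 4) x [0, 3)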
- dimension_units: Iterable[Unit | str | Real | tuple[Real, str] | None] | None = None
  Specifies the physical units of each dimension of the domain. The physical unit for a dimension is the physical quantity corresponding to a single index increment along each dimension. A value of None indicates that the unit is unknown. A dimensionless quantity can be indicated by a unit of "".
- schema: Schema | None = None
  Additional schema constraints to merge with existing constraints.
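The several accepted spellings for a dimension_units entry can be understood as different ways of writing a (multiplier, base_unit) pair. The normalization below is an illustrative sketch under that assumption; its string parsing is deliberately simplified compared to tensorstore's own unit parsing:

```python
import re
from numbers import Real

def normalize_unit(entry):
    """Normalize one dimension_units entry to (multiplier, base_unit), or
    None for an unknown unit.

    A bare number is a dimensionless multiplier; a (multiplier, unit) tuple
    passes through; a string like "4nm" is split into its leading number
    (defaulting to 1.0) and unit suffix.
    """
    if entry is None:
        return None  # unit unknown
    if isinstance(entry, tuple):
        multiplier, unit = entry
        return (float(multiplier), unit)
    if isinstance(entry, Real):
        return (float(entry), "")  # dimensionless
    m = re.match(r"\s*([0-9.eE+-]*)\s*(.*)$", entry)
    multiplier = float(m.group(1)) if m.group(1) else 1.0
    return (multiplier, m.group(2).strip())
```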