tensorstore.open(spec: Spec | Any, *, read: bool | None = None, write: bool | None = None, open_mode: OpenMode | None = None, open: bool | None = None, create: bool | None = None, delete_existing: bool | None = None, assume_metadata: bool | None = None, assume_cached_metadata: bool | None = None, context: Context | None = None, transaction: Transaction | None = None, batch: Batch | None = None, kvstore: KvStore.Spec | KvStore | None = None, recheck_cached_metadata: RecheckCacheOption | None = None, recheck_cached_data: RecheckCacheOption | None = None, recheck_cached: RecheckCacheOption | None = None, rank: int | None = None, dtype: DTypeLike | None = None, domain: IndexDomain | None = None, shape: Iterable[int] | None = None, chunk_layout: ChunkLayout | None = None, codec: CodecSpec | None = None, fill_value: ArrayLike | None = None, dimension_units: Iterable[Unit | str | Real | tuple[Real, str] | None] | None = None, schema: Schema | None = None) → Future[TensorStore]

Opens or creates a TensorStore from a Spec.

>>> store = await ts.open(
...     {
...         'driver': 'zarr',
...         'kvstore': {
...             'driver': 'memory'
...         }
...     },
...     create=True,
...     dtype=ts.int32,
...     shape=[1000, 2000, 3000],
...     chunk_layout=ts.ChunkLayout(inner_order=[2, 1, 0]),
... )
>>> store
TensorStore({
  'context': {
    'cache_pool': {},
    'data_copy_concurrency': {},
    'memory_key_value_store': {},
  },
  'driver': 'zarr',
  'dtype': 'int32',
  'kvstore': {'driver': 'memory'},
  'metadata': {
    'chunks': [101, 101, 101],
    'compressor': {
      'blocksize': 0,
      'clevel': 5,
      'cname': 'lz4',
      'id': 'blosc',
      'shuffle': -1,
    },
    'dimension_separator': '.',
    'dtype': '<i4',
    'fill_value': None,
    'filters': None,
    'order': 'F',
    'shape': [1000, 2000, 3000],
    'zarr_format': 2,
  },
  'transform': {
    'input_exclusive_max': [[1000], [2000], [3000]],
    'input_inclusive_min': [0, 0, 0],
  },
})
Parameters:
spec: Spec | Any

TensorStore Spec to open. May also be specified as JSON or a URL.

read: bool | None = None

Allow read access. Defaults to True if neither read nor write is specified.

write: bool | None = None

Allow write access. Defaults to True if neither read nor write is specified.

open_mode: OpenMode | None = None

Overrides the existing open mode.

open: bool | None = None

Allow opening an existing TensorStore. Overrides the existing open mode.

create: bool | None = None

Allow creating a new TensorStore. Overrides the existing open mode. To open or create, specify create=True and open=True.

delete_existing: bool | None = None

Delete any existing data before creating a new array. Overrides the existing open mode. Must be specified in conjunction with create=True.

assume_metadata: bool | None = None

Neither read nor write stored metadata. Instead, just assume any necessary metadata based on constraints in the spec, using the same defaults for any unspecified metadata as when creating a new TensorStore. The stored metadata need not even exist. Operations such as resizing that modify the stored metadata are not supported. Overrides the existing open mode. Requires that open is True and delete_existing is False. This option takes precedence over assume_cached_metadata if that option is also specified.

Warning

This option can lead to data corruption if the assumed metadata does not match the stored metadata, or multiple concurrent writers use different assumed metadata.

assume_cached_metadata: bool | None = None

Skip reading the metadata when opening. Instead, just assume any necessary metadata based on constraints in the spec, using the same defaults for any unspecified metadata as when creating a new TensorStore. The stored metadata may still be accessed by subsequent operations that need to re-validate or modify the metadata. Requires that open is True and delete_existing is False. The assume_metadata option takes precedence if also specified.

Warning

This option can lead to data corruption if the assumed metadata does not match the stored metadata, or multiple concurrent writers use different assumed metadata.

context: Context | None = None

Shared resource context. Defaults to a new (unshared) context with default options, as returned by tensorstore.Context(). To share resources, such as cache pools, between multiple open TensorStores, you must specify a context.

transaction: Transaction | None = None

Transaction to use for opening/creating, and for subsequent operations. By default, the open is non-transactional.

Note

To perform transactional operations using a TensorStore that was previously opened without a transaction, use TensorStore.with_transaction.

batch: Batch | None = None

Batch to use for reading any metadata required for opening.

Warning

If specified, the returned Future will not, in general, become ready until the batch is submitted. Therefore, immediately awaiting the returned future will lead to deadlock.

kvstore: KvStore.Spec | KvStore | None = None

Sets the associated key-value store used as the underlying storage.

If the kvstore has already been set, it is overridden.

It is an error to specify this if the TensorStore driver does not use a key-value store.

recheck_cached_metadata: RecheckCacheOption | None = None

Time after which cached metadata is assumed to be fresh. Cached metadata older than the specified time is revalidated prior to use. The metadata is used to check the bounds of every read or write operation.

Specifying True means that the metadata will be revalidated prior to every read or write operation. With the default value of "open", any cached metadata is revalidated when the TensorStore is opened but is not rechecked for each read or write operation.

recheck_cached_data: RecheckCacheOption | None = None

Time after which cached data is assumed to be fresh. Cached data older than the specified time is revalidated prior to being returned from a read operation. Partial chunk writes are always consistent regardless of the value of this option.

The default value of True means that cached data is revalidated on every read. To enable in-memory data caching, you must both specify a cache_pool with a non-zero total_bytes_limit and also specify False, "open", or an explicit time bound for recheck_cached_data.

recheck_cached: RecheckCacheOption | None = None

Sets both recheck_cached_data and recheck_cached_metadata.

rank: int | None = None

Constrains the rank of the TensorStore. If there is an index transform, the rank constraint must match the rank of the input space.

dtype: DTypeLike | None = None

Constrains the data type of the TensorStore. If a data type has already been set, it is an error to specify a different data type.

domain: IndexDomain | None = None

Constrains the domain of the TensorStore. If there is an existing domain, the specified domain is merged with it as follows:

  1. The rank must match the existing rank.

  2. All bounds must match, except that a finite or explicit bound is permitted to match an infinite and implicit bound, and takes precedence.

  3. If both the new and existing domain specify non-empty labels for a dimension, the labels must be equal. If only one of the domains specifies a non-empty label for a dimension, the non-empty label takes precedence.

Note that if there is an index transform, the domain must match the input space, not the output space.

shape: Iterable[int] | None = None

Constrains the shape and origin of the TensorStore. Equivalent to specifying a domain of ts.IndexDomain(shape=shape).

Note

This option also constrains the origin of all dimensions to be zero.

chunk_layout: ChunkLayout | None = None

Constrains the chunk layout. If there is an existing chunk layout constraint, the constraints are merged. If the constraints are incompatible, an error is raised.

codec: CodecSpec | None = None

Constrains the codec. If there is an existing codec constraint, the constraints are merged. If the constraints are incompatible, an error is raised.

fill_value: ArrayLike | None = None

Specifies the fill value for positions that have not been written.

The fill value data type must be convertible to the actual data type, and the shape must be broadcast-compatible with the domain.

If an existing fill value has already been set as a constraint, it is an error to specify a different fill value (where the comparison is done after normalization by broadcasting).

dimension_units: Iterable[Unit | str | Real | tuple[Real, str] | None] | None = None

Specifies the physical units of each dimension of the domain.

The physical unit for a dimension is the physical quantity corresponding to a single index increment along that dimension.

A value of None indicates that the unit is unknown. A dimension-less quantity can be indicated by a unit of "".

schema: Schema | None = None

Additional schema constraints to merge with existing constraints.

Examples

Opening an existing TensorStore

To open an existing TensorStore, you can use a minimal Spec that specifies required driver-specific options, like the storage location. Information that can be determined automatically from the existing metadata, like the data type, domain, and chunk layout, may be omitted:

>>> store = await ts.open(
...     {
...         'driver': 'neuroglancer_precomputed',
...         'kvstore': {
...             'driver': 'gcs',
...             'bucket': 'neuroglancer-janelia-flyem-hemibrain',
...             'path': 'v1.2/segmentation/',
...         },
...     },
...     read=True)
>>> store
TensorStore({
  'context': {
    'cache_pool': {},
    'data_copy_concurrency': {},
    'gcs_request_concurrency': {},
    'gcs_request_retries': {},
    'gcs_user_project': {},
  },
  'driver': 'neuroglancer_precomputed',
  'dtype': 'uint64',
  'kvstore': {
    'bucket': 'neuroglancer-janelia-flyem-hemibrain',
    'driver': 'gcs',
    'path': 'v1.2/segmentation/',
  },
  'multiscale_metadata': {'num_channels': 1, 'type': 'segmentation'},
  'scale_index': 0,
  'scale_metadata': {
    'chunk_size': [64, 64, 64],
    'compressed_segmentation_block_size': [8, 8, 8],
    'encoding': 'compressed_segmentation',
    'key': '8.0x8.0x8.0',
    'resolution': [8.0, 8.0, 8.0],
    'sharding': {
      '@type': 'neuroglancer_uint64_sharded_v1',
      'data_encoding': 'gzip',
      'hash': 'identity',
      'minishard_bits': 6,
      'minishard_index_encoding': 'gzip',
      'preshift_bits': 9,
      'shard_bits': 15,
    },
    'size': [34432, 39552, 41408],
    'voxel_offset': [0, 0, 0],
  },
  'transform': {
    'input_exclusive_max': [34432, 39552, 41408, 1],
    'input_inclusive_min': [0, 0, 0, 0],
    'input_labels': ['x', 'y', 'z', 'channel'],
  },
})

Opening by URL

The same TensorStore opened in the previous section can be specified more concisely using a TensorStore URL:

>>> store = await ts.open(
...     'gs://neuroglancer-janelia-flyem-hemibrain/v1.2/segmentation/|neuroglancer-precomputed:',
...     read=True)

Note

The URL syntax is very limited in the options and parameters that may be specified but is convenient in simple cases.

Opening with format auto-detection

Many formats can be auto-detected from a KvStore URL alone:

>>> store = await ts.open(
...     'gs://neuroglancer-janelia-flyem-hemibrain/v1.2/segmentation/',
...     read=True)
>>> store.url
'gs://neuroglancer-janelia-flyem-hemibrain/v1.2/segmentation/|neuroglancer-precomputed:'

A full KvStore JSON spec can also be specified instead of a URL:

>>> store = await ts.open(
...     {
...         'driver': 'gcs',
...         'bucket': 'neuroglancer-janelia-flyem-hemibrain',
...         'path': 'v1.2/segmentation/'
...     },
...     read=True)
>>> store.url
'gs://neuroglancer-janelia-flyem-hemibrain/v1.2/segmentation/|neuroglancer-precomputed:'

Creating a new TensorStore

To create a new TensorStore, you must specify required driver-specific options, like the storage location, as well as Schema constraints like the data type and domain. Suitable defaults are chosen automatically for schema properties that are left unconstrained:

>>> store = await ts.open(
...     {
...         'driver': 'zarr',
...         'kvstore': {
...             'driver': 'memory'
...         },
...     },
...     create=True,
...     dtype=ts.float32,
...     shape=[1000, 2000, 3000],
...     fill_value=42)
>>> store
TensorStore({
  'context': {
    'cache_pool': {},
    'data_copy_concurrency': {},
    'memory_key_value_store': {},
  },
  'driver': 'zarr',
  'dtype': 'float32',
  'kvstore': {'driver': 'memory'},
  'metadata': {
    'chunks': [101, 101, 101],
    'compressor': {
      'blocksize': 0,
      'clevel': 5,
      'cname': 'lz4',
      'id': 'blosc',
      'shuffle': -1,
    },
    'dimension_separator': '.',
    'dtype': '<f4',
    'fill_value': 42.0,
    'filters': None,
    'order': 'C',
    'shape': [1000, 2000, 3000],
    'zarr_format': 2,
  },
  'transform': {
    'input_exclusive_max': [[1000], [2000], [3000]],
    'input_inclusive_min': [0, 0, 0],
  },
})

Partial constraints may be specified on the chunk layout, and the driver will determine a matching chunk layout automatically:

>>> store = await ts.open(
...     {
...         'driver': 'zarr',
...         'kvstore': {
...             'driver': 'memory'
...         },
...     },
...     create=True,
...     dtype=ts.float32,
...     shape=[1000, 2000, 3000],
...     chunk_layout=ts.ChunkLayout(
...         chunk_shape=[10, None, None],
...         chunk_aspect_ratio=[None, 2, 1],
...         chunk_elements=10000000,
...     ),
... )
>>> store
TensorStore({
  'context': {
    'cache_pool': {},
    'data_copy_concurrency': {},
    'memory_key_value_store': {},
  },
  'driver': 'zarr',
  'dtype': 'float32',
  'kvstore': {'driver': 'memory'},
  'metadata': {
    'chunks': [10, 1414, 707],
    'compressor': {
      'blocksize': 0,
      'clevel': 5,
      'cname': 'lz4',
      'id': 'blosc',
      'shuffle': -1,
    },
    'dimension_separator': '.',
    'dtype': '<f4',
    'fill_value': None,
    'filters': None,
    'order': 'C',
    'shape': [1000, 2000, 3000],
    'zarr_format': 2,
  },
  'transform': {
    'input_exclusive_max': [[1000], [2000], [3000]],
    'input_inclusive_min': [0, 0, 0],
  },
})

The schema constraints allow key storage characteristics to be specified independent of the driver/format:

>>> store = await ts.open(
...     {
...         'driver': 'n5',
...         'kvstore': {
...             'driver': 'memory'
...         },
...     },
...     create=True,
...     dtype=ts.float32,
...     shape=[1000, 2000, 3000],
...     chunk_layout=ts.ChunkLayout(
...         chunk_shape=[10, None, None],
...         chunk_aspect_ratio=[None, 2, 1],
...         chunk_elements=10000000,
...     ),
... )
>>> store
TensorStore({
  'context': {
    'cache_pool': {},
    'data_copy_concurrency': {},
    'memory_key_value_store': {},
  },
  'driver': 'n5',
  'dtype': 'float32',
  'kvstore': {'driver': 'memory'},
  'metadata': {
    'blockSize': [10, 1414, 707],
    'compression': {
      'blocksize': 0,
      'clevel': 5,
      'cname': 'lz4',
      'shuffle': 1,
      'type': 'blosc',
    },
    'dataType': 'float32',
    'dimensions': [1000, 2000, 3000],
  },
  'transform': {
    'input_exclusive_max': [[1000], [2000], [3000]],
    'input_inclusive_min': [0, 0, 0],
  },
})

Driver-specific constraints can be used in combination with, or instead of, schema constraints:

>>> store = await ts.open(
...     {
...         'driver': 'zarr',
...         'kvstore': {
...             'driver': 'memory'
...         },
...         'metadata': {
...             'dtype': '>f4'
...         },
...     },
...     create=True,
...     shape=[1000, 2000, 3000])
>>> store
TensorStore({
  'context': {
    'cache_pool': {},
    'data_copy_concurrency': {},
    'memory_key_value_store': {},
  },
  'driver': 'zarr',
  'dtype': 'float32',
  'kvstore': {'driver': 'memory'},
  'metadata': {
    'chunks': [101, 101, 101],
    'compressor': {
      'blocksize': 0,
      'clevel': 5,
      'cname': 'lz4',
      'id': 'blosc',
      'shuffle': -1,
    },
    'dimension_separator': '.',
    'dtype': '>f4',
    'fill_value': None,
    'filters': None,
    'order': 'C',
    'shape': [1000, 2000, 3000],
    'zarr_format': 2,
  },
  'transform': {
    'input_exclusive_max': [[1000], [2000], [3000]],
    'input_inclusive_min': [0, 0, 0],
  },
})

Using assume_metadata for improved concurrent open efficiency

Normally, when opening or creating a chunked format like zarr, TensorStore first attempts to read the existing metadata (and confirms that it matches any specified constraints), or (if creating is allowed) creates a new metadata file based on any specified constraints.

When the same TensorStore stored on a distributed filesystem or cloud storage is opened concurrently from many machines, the simultaneous requests to read and write the metadata file by every machine can create contention and result in high latency on some distributed filesystems.

The assume_metadata open mode allows redundant reading and writing of the metadata file to be avoided, but requires careful use to avoid data corruption.

Example of skipping reading the metadata when opening an existing array

>>> context = ts.Context()
>>> # First create the array normally
>>> store = await ts.open({
...     "driver": "zarr",
...     "kvstore": "memory://"
... },
...     context=context,
...     dtype=ts.float32,
...     shape=[5],
...     create=True)
>>> # Note that the .zarray metadata has been written.
>>> await store.kvstore.list()
[b'.zarray']
>>> await store.write([1, 2, 3, 4, 5])
>>> spec = store.spec()
>>> spec
Spec({
  'driver': 'zarr',
  'dtype': 'float32',
  'kvstore': {'driver': 'memory'},
  'metadata': {
    'chunks': [5],
    'compressor': {
      'blocksize': 0,
      'clevel': 5,
      'cname': 'lz4',
      'id': 'blosc',
      'shuffle': -1,
    },
    'dimension_separator': '.',
    'dtype': '<f4',
    'fill_value': None,
    'filters': None,
    'order': 'C',
    'shape': [5],
    'zarr_format': 2,
  },
  'transform': {'input_exclusive_max': [[5]], 'input_inclusive_min': [0]},
})
>>> # Re-open later without re-reading metadata
>>> store2 = await ts.open(spec,
...     context=context,
...     open=True,
...     assume_metadata=True)
>>> # Read data using the unverified metadata from `spec`
>>> await store2.read()

Example of skipping writing the metadata when creating a new array

>>> context = ts.Context()
>>> spec = ts.Spec(json={"driver": "zarr", "kvstore": "memory://"})
>>> spec.update(dtype=ts.float32, shape=[5])
>>> # Open the array without writing the metadata.  If using a distributed
>>> # filesystem, this can safely be executed on multiple machines concurrently,
>>> # provided that the `spec` is identical and the metadata is either fully
>>> # constrained, or exactly the same TensorStore version is used to ensure the
>>> # same defaults are applied.
>>> store = await ts.open(spec,
...     context=context,
...     open=True,
...     create=True,
...     assume_metadata=True)
>>> await store.write([1, 2, 3, 4, 5])
>>> # Note that the data chunk has been written but not the .zarray metadata
>>> await store.kvstore.list()
[b'0']
>>> # From a single machine, actually write the metadata to ensure the array
>>> # can be re-opened knowing the metadata.  This can be done in parallel with
>>> # any other writing.
>>> await ts.open(spec, context=context, open=True, create=True)
>>> # Metadata has now been written.
>>> await store.kvstore.list()
[b'.zarray', b'0']
