# PyTorch adapter

## Overview
This layer provides functionality that enables you to treat CVAT projects and tasks as PyTorch datasets.

The code of this layer is located in the `cvat_sdk.pytorch` package. To use it, you must install the `cvat_sdk` distribution with the `pytorch` extra.
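
With pip, the installation would typically look like the following (shown as an assumed invocation; adjust the package source and environment to your setup):

```bash
pip install "cvat_sdk[pytorch]"
```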
## Example

```python
import torch
import torchvision.models

from cvat_sdk import make_client
from cvat_sdk.pytorch import ProjectVisionDataset, ExtractSingleLabelIndex

# create a PyTorch model
model = torchvision.models.resnet34(
    weights=torchvision.models.ResNet34_Weights.IMAGENET1K_V1)
model.eval()

# log into the CVAT server
with make_client(host="http://localhost", credentials=('user', 'password')) as client:
    # get the dataset comprising all tasks for the Validation subset of project 12345
    dataset = ProjectVisionDataset(client, project_id=12345,
        include_subsets=['Validation'],
        # use transforms that fit our neural network
        transform=torchvision.models.ResNet34_Weights.IMAGENET1K_V1.transforms(),
        target_transform=ExtractSingleLabelIndex())

    # print the number of images in the dataset (in other words, the number of frames
    # in the included tasks)
    print(len(dataset))

    # get a sample from the dataset
    image, target = dataset[0]

    # evaluate the network on the sample and compare the predicted
    # class index to the target label index
    output = model(image.unsqueeze(0))
    if torch.equal(output.argmax(dim=1)[0], target):
        print("correct prediction")
    else:
        print("incorrect prediction")
```
## Datasets
The key components of this layer are the dataset classes, `ProjectVisionDataset` and `TaskVisionDataset`, representing data & annotations contained in a CVAT project or task, respectively. Both of them are subclasses of the `torch.utils.data.Dataset` abstract class.
The interface of `Dataset` is essentially that of a sequence whose elements are samples from the dataset. In the case of `TaskVisionDataset`, each sample represents a frame from the task and its associated annotations. The order of the samples is the same as the order of frames in the task. Deleted frames are omitted.
In the case of `ProjectVisionDataset`, each sample is a sample from one of the project's tasks, as if obtained from a `TaskVisionDataset` instance created for that task. The full sequence of samples is built by concatenating the sequences of samples from all included tasks in an unspecified order that is guaranteed to be consistent between executions. For details on what tasks are included, see Task filtering.
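
Since the datasets implement this sequence-style `Dataset` interface, they work with the usual PyTorch data utilities. A minimal sketch, assuming `dataset` is a `TaskVisionDataset` constructed as described in the next section and that any supplied transforms make the samples collatable:

```python
from torch.utils.data import DataLoader

# sequence-style access
print(len(dataset))           # number of non-deleted frames
image, target = dataset[0]    # first sample

# iterate over all samples
for index in range(len(dataset)):
    image, target = dataset[index]

# standard batching utilities also work, provided the supplied transforms
# convert samples into tensors that can be collated
loader = DataLoader(dataset, batch_size=16)
```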
## Construction
Both dataset classes are instantiated by passing in an instance of `cvat_sdk.Client` and the ID of the project or task:

```python
dataset = ProjectVisionDataset(client, 123)
dataset = TaskVisionDataset(client, 456)
```

The referenced project or task must contain image data. Video data is currently not supported.
The constructors of these classes also support several keyword-only parameters (a combined usage sketch follows the list):

- `transforms`, `transform`, `target_transform`: see Transform support.
- `label_name_to_index`: see Label index assignment.
- `task_filter`, `include_subsets` (`ProjectVisionDataset` only): see Task filtering.
- `update_policy`: see Caching.
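
For illustration, a construction call combining several of these parameters might look like the following sketch (the project ID and subset name are placeholders):

```python
from cvat_sdk.pytorch import ProjectVisionDataset, ExtractSingleLabelIndex, UpdatePolicy

dataset = ProjectVisionDataset(
    client, 123,
    # preprocess targets (see Transform support)
    target_transform=ExtractSingleLabelIndex(),
    # only use tasks from the Validation subset (see Task filtering)
    include_subsets=['Validation'],
    # only use data that is already cached locally (see Caching)
    update_policy=UpdatePolicy.NEVER,
)
```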
During construction, the dataset objects either populate or validate the local data cache (see Caching for details). Any necessary requests to the CVAT server are performed at this time. After construction, the objects make no more network requests.
## Sample format
Indexing a dataset produces a sample. A sample has the form of a tuple with the following components:
- `sample[0]` (`PIL.Image.Image`): the image.
- `sample[1]` (`cvat_sdk.pytorch.Target`): the annotations and auxiliary data.

The target object contains the following attributes:

- `target.annotations.tags` (`list[cvat_sdk.models.LabeledImage]`): tag annotations associated with the current frame.
- `target.annotations.shapes` (`list[cvat_sdk.models.LabeledShape]`): shape annotations associated with the current frame.
- `target.label_id_to_index` (`Mapping[int, int]`): see Label index assignment.

Note that track annotations are currently inaccessible.
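
As a small sketch of what this looks like in practice (assuming `dataset` was constructed without any transforms):

```python
image, target = dataset[0]

print(image.size)                      # PIL image size: (width, height)
print(len(target.annotations.tags))    # number of tag annotations on this frame
print(len(target.annotations.shapes))  # number of shape annotations on this frame

for shape in target.annotations.shapes:
    print(shape.type, shape.points)
```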
## Transform support
The dataset classes support torchvision-like transforms that you can supply to preprocess each sample before it's returned. You can use this to convert the samples to a more convenient format or to preprocess the data. The transforms are supplied via the following constructor parameters:

- `transforms`: a callable that accepts two arguments (the image and the target) and returns a tuple with two elements.
- `transform`: a callable that accepts an image.
- `target_transform`: a callable that accepts a target.
Let the sample value prior to any transformations be `(image, target)`. Here is what indexing the dataset will return for various combinations of supplied transforms:

- `transforms`: `transforms(image, target)`.
- `transform`: `(transform(image), target)`.
- `target_transform`: `(image, target_transform(target))`.
- `transform` and `target_transform`: `(transform(image), target_transform(target))`.
`transforms` cannot be supplied at the same time as either `transform` or `target_transform`.
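
For example, a joint `transforms` callable can be any function with the two-argument signature described above. The one below is a hypothetical sketch that converts the image to a tensor and reduces the target to the number of shape annotations:

```python
import torchvision.transforms.functional as F

def image_and_shape_count(image, target):
    # convert the PIL image to a tensor and keep only the shape count
    return F.to_tensor(image), len(target.annotations.shapes)

dataset = TaskVisionDataset(client, 456, transforms=image_and_shape_count)
```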
The `cvat_sdk.pytorch` module contains some target transform classes that are intended for common use cases. See Transforms.
## Label index assignment
The annotation model classes (`LabeledImage` and `LabeledShape`) reference labels by their IDs on the CVAT server. This is usually not very useful for machine learning code, since those IDs are unpredictable and will be different between different projects, even if semantically the set of labels is the same.
Therefore, the dataset classes assign to each label a unique index that is intended to be a project-independent identifier. These indices are accessible via the `label_id_to_index` attribute on each sample's target. This attribute maps IDs on the server to the assigned index. The mapping is the same for every sample.
By default, the dataset classes arrange all label IDs in an unspecified order that remains consistent across executions, and assign them sequential indices, starting with 0.
You can override this behavior and specify your own label indices with the `label_name_to_index` constructor parameter. This parameter accepts a mapping from label name to index. The mapping must contain a key for each label in the project/task. When this parameter is specified, label indices are assigned by looking up each label's name in the provided mapping and using the result.
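
For instance, if the labels in the project are named `car` and `person` (hypothetical names used only for illustration), fixed indices could be assigned like this:

```python
dataset = ProjectVisionDataset(
    client, 123,
    label_name_to_index={'car': 0, 'person': 1},
)

# shapes referencing the "car" label now always map to index 0
image, target = dataset[0]
for shape in target.annotations.shapes:
    print(target.label_id_to_index[shape.label_id])
```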
## Task filtering
Note: this section applies only to `ProjectVisionDataset`.
By default, a `ProjectVisionDataset` includes samples from every task belonging to the project. You can change this using the following constructor parameters:

- `task_filter` (`Callable[[models.ITaskRead], bool]`): if set, the callable will be called for every task, with an instance of `ITaskRead` corresponding to that task passed as the argument. Only tasks for which `True` is returned will be included.
- `include_subsets` (`Container[str]`): if set, only tasks whose subset is a member of the container will be included.

Both parameters can be set, in which case tasks must fulfill both criteria to be included.
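
For example, both criteria could be combined as in the following sketch, where the name prefix check is a hypothetical filter:

```python
dataset = ProjectVisionDataset(
    client, 123,
    # keep only tasks whose name starts with "batch-"...
    task_filter=lambda task: task.name.startswith('batch-'),
    # ...and that belong to the Validation subset
    include_subsets=['Validation'],
)
```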
## Caching
The images and annotations of a dataset can be substantial in size, so they are not downloaded from the server every time a dataset object is created. Instead, they are loaded from a cache on the local file system, which is maintained during dataset object construction according to the policy set by the `update_policy` constructor parameter.
The available policies are:

- `UpdatePolicy.IF_MISSING_OR_STALE`: If some data is already cached, query the server to determine if it is out of date. If so, discard it. Then, download all necessary data that is missing from the cache and cache it. This is the default policy.
- `UpdatePolicy.NEVER`: If some necessary data is missing from the cache, raise an exception. Don't make any network requests. Note that this policy permits the use of stale data.
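
A typical offline workflow, sketched below, populates the cache once with the default policy and then reuses it without any network access:

```python
from cvat_sdk.pytorch import TaskVisionDataset, UpdatePolicy

# first run: download any missing or stale data and store it in the cache
dataset = TaskVisionDataset(client, 456,
    update_policy=UpdatePolicy.IF_MISSING_OR_STALE)

# later runs: build the dataset purely from the cache;
# an exception is raised if required data has not been cached yet
dataset = TaskVisionDataset(client, 456,
    update_policy=UpdatePolicy.NEVER)
```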
By default, the cache is located in a platform-specific per-user directory. You can change this location with the `cache_dir` setting in the `Client` configuration.
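
As a sketch of overriding the cache location (assuming the client configuration object is `cvat_sdk.core.client.Config` and that it exposes the `cache_dir` setting mentioned above):

```python
from cvat_sdk.core.client import Client, Config

# place the dataset cache in a custom directory (hypothetical path)
config = Config(cache_dir='/data/cvat_cache')
client = Client('http://localhost', config=config)
client.login(('user', 'password'))
```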
## Transforms
The layer provides some classes whose instances are callables suitable for usage with the `target_transform` dataset constructor parameter that are intended to simplify working with CVAT datasets in common scenarios.
### ExtractBoundingBoxes

Intended for object detection tasks.
Constructor parameters:

- `include_shape_types` (`Iterable[str]`). The values must be from the following list:
  - `"ellipse"`
  - `"points"`
  - `"polygon"`
  - `"polyline"`
  - `"rectangle"`

Effect: Gathers all shape annotations from the input target object whose types are contained in the value of `include_shape_types`. Then returns a dictionary with the following string keys (where `N` is the number of gathered shapes):

"boxes"
(a floating-point tensor of shapeN
x4
).Each row represents the bounding box the corresponding shapein the following format:[x_min, y_min, x_max, y_max]
."labels"
(an integer tensor of shapeN
).Each element is the index of the label of the corresponding shape.
Example:

```python
ExtractBoundingBoxes(include_shape_types=['rectangle', 'ellipse'])
```
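
The resulting dictionary uses the same `boxes`/`labels` layout that torchvision's detection models take as training targets, so the transform can be plugged straight into a dataset, for example:

```python
dataset = TaskVisionDataset(client, 456,
    target_transform=ExtractBoundingBoxes(include_shape_types=['rectangle']))

image, target = dataset[0]
print(target['boxes'].shape)   # torch.Size([N, 4])
print(target['labels'].shape)  # torch.Size([N])
```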
### ExtractSingleLabelIndex

Intended for image classification tasks.
Constructor parameters: None.
Effect: If the input target object contains no tag annotations or more than one tag annotation, raises `ValueError`. Otherwise, returns the index of the label in the solitary tag annotation as a zero-dimensional tensor.
Example:

```python
ExtractSingleLabelIndex()
```