
arcgis.learn module

Functions for calling the Deep Learning Tools.

Data Preparation Methods

export_training_data

arcgis.learn.export_training_data(input_raster, input_class_data=None, chip_format=None, tile_size=None, stride_size=None, metadata_format=None, classvalue_field=None, buffer_radius=None, output_location=None, context=None, input_mask_polygons=None, rotation_angle=0, reference_system='MAP_SPACE', process_all_raster_items=False, blacken_around_feature=False, fix_chip_size=True, additional_input_raster=None, input_instance_data=None, instance_class_value_field=None, min_polygon_overlap_ratio=0, *, gis=None, future=False, estimate=False, **kwargs)

This function generates training sample image chips from the input imagery data with labeled vector data or classified images. The output of this service tool is the data store string where the output image chips, labels, and metadata files will be stored.

Note

This function is supported with ArcGIS Enterprise (Image Server)

Parameter

Description

input_raster

Required ImageryLayer/Raster/Item/String (URL). Raster layer that needs to be exported for training.

input_class_data

Labeled data, either a feature layer or image layer. Vector inputs should follow a training sample format as generated by the ArcGIS Pro Training Sample Manager. Raster inputs should follow a classified raster format as generated by the Classify Raster tool.

chip_format

Optional string. The raster format for the image chip outputs.

  • TIFF: TIFF format

  • PNG: PNG format

  • JPEG: JPEG format

  • MRF: MRF (Meta Raster Format)

tile_size

Optional dictionary. The size of the image chips.

Example: {"x": 256, "y": 256}

stride_size

Optional dictionary. The distance to move in the X and Y when creating the next image chip. When stride is equal to the tile size, there will be no overlap. When stride is equal to half of the tile size, there will be 50% overlap.

Example: {"x": 128, "y": 128}

metadata_format

Optional string. The format of the output metadata labels. The options for output metadata labels for the training data are listed below, including KITTI rectangles, PASCAL VOC rectangles, Classified Tiles (a class map), and RCNN Masks. If your input training sample data is a feature class layer, such as a building layer or a standard classification training sample file, use the KITTI or PASCAL VOC rectangle option. The output metadata is a .txt file or .xml file containing the training sample data contained in the minimum bounding rectangle. The name of the metadata file matches the input source image name. If your input training sample data is a class map, use the Classified Tiles option as your output metadata format.

  • KITTI_rectangles: The metadata follows the same format as the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Object Detection Evaluation dataset. The KITTI dataset is a vision benchmark suite. This is the default. The label files are plain text files. All values, both numerical and strings, are separated by spaces, and each row corresponds to one object. This format can be used with FasterRCNN, RetinaNet, SingleShotDetector and YOLOv3 models.

  • PASCAL_VOC_rectangles: The metadata follows the same format as the Pattern Analysis, Statistical Modeling and Computational Learning, Visual Object Classes (PASCAL_VOC) dataset. The PASCAL VOC dataset is a standardized image data set for object class recognition. The label files are XML files and contain information about image name, class value, and bounding box(es). This format can be used with FasterRCNN, RetinaNet, SingleShotDetector and YOLOv3 models.

  • Classified_Tiles: This option will output one classified image chip per input image chip. No other metadata is created for each image chip. Only the statistics output has more information on the classes, such as class names, class values, and output statistics. This format can be used with BDCNEdgeDetector, DeepLab, HEDEdgeDetector, MultiTaskRoadExtractor, PSPNetClassifier and UnetClassifier models.

  • RCNN_Masks: This option will output image chips that have a mask on the areas where the sample exists. The model generates bounding boxes and segmentation masks for each instance of an object in the image. This format can be used with the MaskRCNN model.

  • Labeled_Tiles: This option will label each output tile with a specific class. This format is used for image classification. This format can be used with the FeatureClassifier model.

  • MultiLabeled_Tiles: Each output tile will be labeled with one or more classes. For example, a tile may be labeled agriculture and also cloudy. This format is used for object classification. This format can be used with the FeatureClassifier model.

  • Export_Tiles: The output will be image chips with no label. This format is used for image enhancement techniques such as Super Resolution and Change Detection. This format can be used with ChangeDetector, CycleGAN, Pix2Pix and SuperResolution models.

  • CycleGAN: The output will be image chips with no label. This format is used for the image translation technique CycleGAN, which is used to train images that do not overlap.

  • Imagenet: Each output tile will be labeled with a specific class. This format is used for object classification; however, it can also be used for object tracking when the Deep Sort model type is used during training.

  • Panoptic_Segmentation: The output will be one classified image chip and one instance per input image chip. The output will also have image chips that mask the areas where the sample exists; these image chips will be stored in a different folder. This format is used for both pixel classification and instance segmentation, so there will be two output labels folders.

classvalue_field

Optional string. Specifies the field which contains the class values. If no field is specified, the system will look for a 'value' or 'classvalue' field. If the feature does not contain a class field, the system will presume all records belong to one class.

buffer_radius

Optional integer. The radius used for point feature classes to define the training sample area.

output_location

This is the output location for training sample data. It can be the server data store path or a shared file system path.

Example:

Server datastore path -

  • /fileShares/deeplearning/rooftoptrainingsamples

  • /rasterStores/rasterstorename/rooftoptrainingsamples

File share path -

  • \\servername\deeplearning\rooftoptrainingsamples

context

Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • exportAllTiles - Choose if the image chips with overlapped labeled data will be exported.

    • True - Export all the image chips, including those that do not overlap labeled data.

    • False - Export only the image chips that overlap the labelled data. This is the default.

  • startIndex - Allows you to set the start index for the sequence of image chips. This lets you append more image chips to an existing sequence. The default value is 0.

  • cellSize - The cell size can be set using this key in the context parameter.

  • extent - Sets the processing extent used by the function

Setting the context parameter will override the values set using the arcgis.env variable for this particular function (cellSize, extent).

Example:

{"exportAllTiles": False, "startIndex": 0}

input_mask_polygons

Optional FeatureLayer. The feature layer that delineates the area where image chips will be created. Only image chips that fall completely within the polygons will be created.

rotation_angle

Optional float. The rotation angle that will be used to generate additional image chips.

An image chip will be generated with a rotation angle of 0, which means no rotation. It will then be rotated at the specified angle to create an additional image chip. The same training samples will be captured at multiple angles in multiple image chips for data augmentation. The default rotation angle is 0.

reference_system

Optional string. Specifies the type of reference system to be used to interpret the input image. The reference system specified should match the reference system used to train the deep learning model.

  • MAP_SPACE : The input image is in a map-based coordinate system. This is the default.

  • IMAGE_SPACE : The input image is in image space, viewed from the direction of the sensor that captured the image, and rotated such that the tops of buildings and trees point upward in the image.

  • PIXEL_SPACE : The input image is in image space, with no rotation and no distortion.

process_all_raster_items

Optional bool. Specifies how all raster items in an image service will be processed.

  • False : all raster items in the image service will be mosaicked together and processed. This is the default.

  • True : all raster items in the image service will be processed as separate images.

blacken_around_feature

Optional bool. Specifies whether to blacken the pixels around each object or feature in each image tile. This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.

  • False : Pixels surrounding objects or features will not be blackened. This is the default.

  • True : Pixels surrounding objects or features will be blackened.

fix_chip_size

Optional bool. Specifies whether to crop the exported tiles such that they are all the same size. This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.

  • True : Exported tiles will be the same size and will center on the feature. This is the default.

  • False : Exported tiles will be cropped such that the bounding geometry surrounds only the feature in the tile.

additional_input_raster

Optional ImageryLayer/Raster/Item/String (URL). An additional input imagery source that will be used for image translation methods.

This parameter is valid when the metadata_format parameter is set to Classified_Tiles, Export_Tiles, or CycleGAN.

input_instance_data

Optional. The training sample data collected that contains classes for instance segmentation.

The input can also be a point feature without a class value field or an integer raster without any class information.

This parameter is only valid when the metadata_format parameter is set to Panoptic_Segmentation.

instance_class_value_field

Optional string. The field that contains the class values for instance segmentation. If no field is specified, the tool will use a value or class value field, if one is present. If the feature does not contain a class field, the tool will determine that all records belong to one class.

This parameter is only valid when the metadata_format parameter is set to Panoptic_Segmentation.

min_polygon_overlap_ratio

Optional float. The minimum overlap percentage for a feature to be included in the training data. If the percentage overlap is less than the value specified, the feature will be excluded from the training chip, and will not be added to the label file.

The percent value is expressed as a decimal. For example, to specify an overlap of 20 percent, use a value of 0.2. The default value is 0, which means that all features will be included.

This parameter improves the performance of the tool and also improves inferencing. The speed is improved since fewer training chips are created. Inferencing is improved since the model is trained to only detect large patches of objects and ignores small corners of features.

This parameter is honoured only when the input_class_data parameter value is a feature service.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

estimate

Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float. Available only on ArcGIS Online.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns:

Output string containing the location of the exported training data
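
A minimal sketch of calling the tool is shown below; the portal URL, layer searches, and output data store path are hypothetical placeholders and should be replaced with items from your own ArcGIS Enterprise deployment.

from arcgis.gis import GIS
from arcgis.learn import export_training_data

# Hypothetical Enterprise portal and layers; substitute your own items.
gis = GIS("https://example.com/portal", "username", "password")
naip_layer = gis.content.search("naip_imagery", "Imagery Layer")[0].layers[0]
building_labels = gis.content.search("building_footprints", "Feature Layer")[0].layers[0]

output = export_training_data(
    input_raster=naip_layer,
    input_class_data=building_labels,
    chip_format="TIFF",
    tile_size={"x": 256, "y": 256},
    stride_size={"x": 128, "y": 128},
    metadata_format="PASCAL_VOC_rectangles",
    classvalue_field="classvalue",
    output_location="/fileShares/deeplearning/rooftoptrainingsamples",
    context={"exportAllTiles": False, "startIndex": 0},
    gis=gis,
)
print(output)  # data store location where chips, labels and metadata were written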

export_point_dataset

arcgis.learn.export_point_dataset(data_path, output_path, block_size=50.0, max_points=8192, extra_features=[], **kwargs)

Note: This function has been deprecated starting from ArcGIS API for Python version 1.9.0. Export data using the Prepare Point Cloud Training Data tool available in the 3D Analyst extension from ArcGIS Pro 2.8 onwards.

estimate_batch_size

arcgis.learn.estimate_batch_size(model, mode='train', **kwargs)

Function to calculate estimated batch size based on GPU capacity, size of model and data.

Parameter

Description

model

Required arcgis.learn imagery model. Model instance for which batch size should be estimated. Not supported for text, tabular, timeseries or tracking models such as FullyConnectedNetwork, MLModel, TimeSeriesModel, SiamMask, PSETAE and EfficientDet models.

mode

Optional string. Default 'train'. The mode for which batch size is estimated. Supported modes are 'train' and 'eval' for calculating batch size in training mode and evaluation mode respectively. Note: max_batchsize is capped at 1024 for train and eval mode and recommended_batchsize is capped at 64 for train mode.

Returns:

Named tuple of recommended_batchsize and max_batchsize
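
A short sketch is shown below; it assumes chips have already been exported to a hypothetical local folder and uses a UnetClassifier purely as an example of a supported imagery model.

from arcgis.learn import prepare_data, UnetClassifier, estimate_batch_size

# Hypothetical path to exported Classified_Tiles chips.
data = prepare_data(r"C:\data\classified_tiles")
model = UnetClassifier(data)

sizes = estimate_batch_size(model, mode="train")
print(sizes.recommended_batchsize, sizes.max_batchsize)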

prepare_data

arcgis.learn.prepare_data(path, class_mapping=None, chip_size=224, val_split_pct=0.1, batch_size=64, transforms=None, collate_fn=<function _bb_pad_collate>, seed=42, dataset_type=None, resize_to=None, working_dir=None, **kwargs)

Prepares a data object from training samples exported by the Export Training Data tool in ArcGIS Pro or Image Server, or from training samples in the supported dataset formats. This data object consists of training and validation data sets with the specified transformations, chip size, batch size, split percentage, etc.

Parameter

Description

path

Required string. Path to data directory, or a list of paths in case of multi-folder training.

class_mapping

Optional dictionary. Mapping from id to its string label. Not supported for the MaskRCNN model.

chip_size

Optional integer, default 224. Size of the image to train the model. Images are cropped to the specified chip_size. If the image size is less than chip_size, the image size is used as chip_size. A chip size that is a multiple of 32 pixels is recommended. Not supported for SuperResolution, SiamMask, WNet_cGAN, Pix2Pix and CycleGAN.

val_split_pct

Optional float. Percentage of training data to keep as validation.

batch_size

Optional integer. Default 64. Batch size for mini batch gradient descent (reduce it if getting CUDA Out of Memory errors). Batch size is required to be greater than 1. If None is provided, a recommended batch size is used. This is estimated based on GPU capacity, size of model and data. To explicitly find the recommended batch_size, use the arcgis.learn.estimate_batch_size() method.

transforms

Optional tuple. Fast.ai transforms for data augmentation of training and validation datasets respectively (we have set good defaults which work well for satellite imagery). If transforms is set to False, no transformation will take place and the chip_size parameter will also not take effect. If the dataset_type is 'PointCloud' or 'PointCloudOD', use Transform3d.

collate_fn

Optional function. Passed to PyTorch to collate data into batches (usually the default works).

seed

Optional integer. Random seed for reproducible train-validation split.

dataset_type

Optional string. The prepare_data() function will infer the dataset_type on its own if the path contains a map.txt file. If the path does not contain the map.txt file, pass one of 'PASCAL_VOC_rectangles', 'KITTI_rectangles', 'Imagenet'. This parameter is mandatory for the dataset types 'PointCloud', 'PointCloudOD', 'ImageCaptioning', 'ChangeDetection', 'WNet_cGAN' and 'ObjectTracking'. Note: For details on dataset_type please refer to this link.

resize_to

Optional integer or tuple of integers. A tuple should be of the form (height, width). Resizes the images to the given size. Works only for 'PASCAL_VOC_rectangles', 'Labelled_Tiles', 'superres' and 'Imagenet'. First resizes the image to the given size and then crops images of size equal to chip_size. Note: If resize_to is less than chip_size, resize_to is used as chip_size.

working_dir

Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.

Keyword Arguments

Parameter

Description

n_masks

Optional int. Default value is 30. Required for the MaXDeepLab panoptic segmentation model. It represents the maximum number of class labels and instances any image can contain. To compute the exact value for your dataset, use the compute_n_masks() method available with the MaXDeepLab model.

downsample_factor

Optional float. Factor to downsample the images for image SuperResolution. For example: if the value is 2 and the image size is 256x256, it will create label images of size 128x128. Default is 4.

min_points

For dataset_type='PointCloud' and 'PointCloudOD': Optional int. Filtering based on the minimum number of points in a block. Set min_points=1000 to filter out blocks with less than 1000 points.

For dataset_type='PSETAE': Optional int. Number of pixels, equal to or multiples of 64, to sample from each masked region of training data, i.e. 64, 128, etc.

extra_features

Optional list. Contains a list of strings which mention extra features to be used for training, applicable with dataset_type 'PointCloud' and 'PointCloudOD'. By default only x, y, and z are considered for training, irrespective of what features were exported. For example: ['intensity', 'numberOfReturns', 'returnNumber', 'red', 'green', 'blue', 'nearInfrared'].

remap_classes

Optional dictionary {int:int}. Mapping from class values to user defined values, in both training and validation data.

For dataset_type='PointCloud': It will remap the LAS classcode structure. For example: {1:3, 2:4} will remap LAS classcode 1 to 3 and classcode 2 to 4.

For dataset_type='PointCloudOD': It will remap object class ids. When this parameter is set as remap_classes={5:3, 2:4}, the class values 5 and 2 will be considered as 3 and 4, respectively.

classes_of_interest

Optional list of int.

For dataset_type='PointCloud': This will filter training blocks based on classes_of_interest. If we have LAS classcodes 1, 3, 5 and 7 in our dataset, but we are mainly interested in classcodes 1 and 3, set classes_of_interest=[1,3]. Only those blocks which have either 1 or 3 LAS classcodes in them will be considered for training; the rest of the blocks will be filtered out. If remapping of the rest of the classcodes is required, set background_classcode to some value.

For dataset_type='PointCloudOD': This will filter training blocks based on classes_of_interest. If we have object classes 2, 3, 10 and 16 in the 3d feature class, but we are mainly interested in object classes 2 and 10, set classes_of_interest=[2,10]. Only those blocks which have either 2 or 10 object classes in them will be considered for training; the rest of the blocks will be filtered out. Set background_classcode to True to discard other classes.

Note: classes_of_interest is applied on the remapped class structure, if remap_classes is also used.

background_classcode

This parameter is only applicable when classes_of_interest is specified.

For dataset_type='PointCloud': Optional int. Default: None. This will remap class values other than classes_of_interest to background_classcode.

For dataset_type='PointCloudOD': Optional Bool. Default: False. If set to True, only classes_of_interest class values will be considered and the rest of the class values will be discarded.

stratify

Optional boolean, default False. If True, prepare_data will try to maintain the class proportion in train and validation data according to val_split_pct. The default value for feature classification is True; the default value for pixel classification is False.

Note: Applies to single label feature classification, object detection and pixel classification.

timesteps_of_interest

Optional list. List of time steps of interest. This will filter the multi-temporal timeseries based on timesteps_of_interest. If the dataset has time-steps [0, 1, 2, 3], but we are mainly interested in 0, 1 and 2, set timesteps_of_interest=[0,1,2]. Only those time-steps will be considered for training; the rest of the time-steps will be filtered out. Applicable only for dataset_type='PSETAE'.

channels_of_interest

Optional list. List of spectral bands/channels of interest. This will filter out bands from rasters of the multi-temporal timeseries based on the channels_of_interest list. If we have bands [0,1,2,3,4] in our dataset, but we are mainly interested in 0, 1 and 2, set channels_of_interest=[0,1,2]. Only those spectral bands will be considered for training. Applicable only for dataset_type='PSETAE'.

n_temporal

Required int. Number of temporal observations or time steps. Applicable only for dataset_type='PSETAE'.

n_temporal_dates

Required list of strings. The dates of the observations will be used for the positional encoding and should be stored as a list of date strings in YYYY-MM-DD format. For example, if we have stacked imagery of n bands each from two dates then ['YYYY-MM-DD', 'YYYY-MM-DD']. Applicable only for dataset_type='PSETAE'.

num_workers

Optional int. Default 0. Number of subprocesses to use for data loading on the Windows operating system. 0 means that the data will be loaded in the main process.

forecast_timesteps

Required int. Default set to 1. How far the model should forecast into the future. A forecast timestep is the interval at which predictions are made. For example, if we have 8-hourly data points and we want to make an 8 hr, 16 hr, or 24 hr forecast, forecast_timesteps is set to 1, 2, 3 respectively and so on. In the case of hourly and monthly data points, for forecasts of 1, 2, 3 hr/month, forecast_timesteps is set to 1, 2, 3 respectively and so on. Applicable only for the climaX model architecture.

hrs_each_step

Optional int. Default set to 1 (hrs). Number of hours in which data is collected. For example, if you have 8-hourly, hourly, monthly, or daily data then hrs_each_step is to be set to 8, 1, 720 (30 days * 24), or 24 hrs respectively. Applicable only for the climaX model architecture.

Returns:

data object
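
A minimal usage sketch follows; the folder path is hypothetical and is assumed to contain chips exported in the PASCAL_VOC_rectangles format.

from arcgis.learn import prepare_data

# Hypothetical export folder; dataset_type is inferred from map.txt when present.
data = prepare_data(
    path=r"C:\data\rooftop_chips",
    chip_size=256,
    val_split_pct=0.2,
    batch_size=16,
    dataset_type="PASCAL_VOC_rectangles",
)
data.show_batch()  # preview a few chips with their labels (assumes show_batch is available on the data object)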

prepare_tabulardata

arcgis.learn.prepare_tabulardata(input_features=None, variable_predict=None, explanatory_variables=None, explanatory_rasters=None, date_field=None, cell_sizes=[3, 4, 5, 6, 7], distance_features=None, preprocessors=None, val_split_pct=0.1, seed=42, batch_size=64, index_field=None, working_dir=None, **kwargs)

Prepares a tabular data object from input_features and optionally rasters.

Parameter

Description

input_features

Optional FeatureLayer object or spatially enabled dataframe. This contains features denoting the value of the dependent variable. Leave empty for using rasters with MLModel.

variable_predict

Optional string or list, denoting the field names of the variable to predict. Keep None for unsupervised training using MLModel. For timeseries it will work for a continuous variable. As of now we support only binary classification in fairness evaluation.

explanatory_variables

Optional list containing field names from input_features. By default the field type is continuous. To override the field type to categorical, pass a 2-sized tuple in the list containing:

  1. field to be taken as input from the input_features.

  2. True/False denoting Categorical/Continuous variable. If the field is text, the value should be 'text', and if the field is an image path, the value should be 'image'.

For example:

["Field_1", ("Field_2", True)] or ["Field_1", ("Field_3", 'text')]

Here Field_1 is treated as continuous, Field_2 as categorical, and Field_3 as text.

explanatory_rasters

Optional list containing Raster objects. By default the rasters are continuous. To mark a raster categorical, pass a 2-sized tuple containing:

  1. Raster object.

  2. True/False denoting Categorical/Continuous variable.

For example:

[raster_1, (raster_2, True)]

Here raster_1 is treated as continuous and raster_2 as categorical. To select only specific bands of a raster, pass a 2- or 3-sized tuple containing:

  1. Raster object.

  2. True/False denoting Categorical/Continuous variable.

  3. Tuple holding the indexes of the bands to be used.

For example:

[raster_1, (raster_2, True, (0,)), (raster_3, (0, 1, 2))]

Here the band with index 0 will be chosen from raster_2 and it will be treated as a categorical variable; bands with indexes 0, 1, 2 will be chosen from raster_3 and they will be treated as continuous.

date_field

Optional field_name. This field contains the date in the input_features. The field type can be a string or date time field. If specified, the field will be split into Year, month, week, day, dayofweek, dayofyear, is_month_end, is_month_start, is_quarter_end, is_quarter_start, is_year_end, is_year_start, hour, minute, second, elapsed, and these will be added to the prepared data as columns. All fields other than elapsed and dayofyear are treated as categorical.

cell_sizes

Size of H3 cells (specified as H3 resolution) for spatially aggregating input features and passing in the cell ids as additional explanatory variables to the model. If a spatial dataframe is passed as input_features, ensure that the spatial reference is 4326, and the geometry type is Point. Not applicable when explanatory_rasters are provided. Not applicable for MLModel.

distance_features

Optional list of FeatureLayer objects. Distance is calculated from features in these layers to features in input_features. The nearest distance to each feature is added in the prepared data. Field names added in the prepared data are "NEAR_DIST_1", "NEAR_DIST_2", etc.

preprocessors

For FullyConnectedNetworks: all the transforms are applied by default and hence users need not pass any additional transforms/preprocessors. For MLModel, which uses Scikit-learn transforms:

  1. Supply a column transformer object.

  2. Supply a list of tuples.

For example:

[('Col_1', 'Col_2', Transform1()), ('Col_3', Transform2())]

Categorical data is by default encoded. If nothing is specified, default transforms are applied to fill missing values and normalize categorical data. For a Raster, use raster.name for the first band, raster.name_1 for the 2nd band, raster.name_2 for the 3rd, and so on.

val_split_pct

Optional float. Percentage of training data to keep as validation. By default 10% of the data is kept for validation.

seed

Optional integer. Random seed for reproducible train-validation split. Default value is 42.

batch_size

Optional integer. Batch size for mini batch gradient descent (reduce it if getting CUDA Out of Memory errors). Default value is 64.

index_field

Optional string. Field name in the input features which will be used as the index field for the data. Used for Time Series, to visualize values on the x-axis.

working_dir

Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.

Keyword Arguments

Parameter

Description

stratify

Optional boolean. If True, prepare_tabulardata will try to maintain the class proportion in train and validation data according to val_split_pct. Default value is False.

Note

Applies to classification problems.

random_split

Optional boolean. Sets the behaviour of the train and validation split to random or last n steps. If set to True then random sampling will be performed. Otherwise, the last n steps will be used as validation. val_split_pct will determine the number of records for validation. Default value is True.

Note

Applies to timeseries

Returns:

TabularData object
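
A minimal sketch is shown below; the feature layer and field names are hypothetical, and the example assumes an active connection to a GIS.

from arcgis.gis import GIS
from arcgis.learn import prepare_tabulardata

# Hypothetical point feature layer with a numeric field to predict.
gis = GIS("home")
wells = gis.content.search("well_yield_points", "Feature Layer")[0].layers[0]

data = prepare_tabulardata(
    input_features=wells,
    variable_predict="yield_gpm",                               # hypothetical continuous field
    explanatory_variables=["depth_m", ("aquifer_type", True)],  # second field marked categorical
    val_split_pct=0.2,
)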

prepare_textdata

arcgis.learn.prepare_textdata(path, task, text_columns=None, label_columns=None, train_file='train.csv', valid_file=None, val_split_pct=0.1, seed=42, batch_size=8, process_labels=False, remove_html_tags=False, remove_urls=False, working_dir=None, dataset_type=None, class_mapping=None, **kwargs)

Prepares a text data object from the files present in the data folder.

Parameter

Description

path

Required directory path. The directory path where the training and validation files are present.

task

Required string. The task for which the dataset is prepared. Available choices at this point are "classification", "sequence_translation" and "entity_recognition".

text_columns

Optional string. The column that will contain the input text. This parameter is mandatory when task is "classification" or "sequence_translation", and when task is "entity_recognition" with input dataset_type as csv.

label_columns

Optional list. This parameter is mandatory when task is "classification" or "sequence_translation". The list of columns denoting the class label/translated text to predict. Provide a list of columns in case of a multi-label classification problem.

train_file

Optional string. The file name containing the training data. Supported file formats/extensions are .csv and .tsv. Default value is train.csv.

valid_file

Optional string. The file name containing the validation data. Supported file formats/extensions are .csv and .tsv. Default value is None. If None then some portion of the training data will be kept for validation (based on the value of the val_split_pct parameter).

val_split_pct

Optional float. Percentage of training data to keep as validation. By default 10% of the data is kept for validation.

seed

Optional integer. Random seed for reproducible train-validation split. Default value is 42.

batch_size

Optional integer. Batch size for mini batch gradient descent (reduce it if getting CUDA Out of Memory errors). Default value is 8.

process_labels

Optional boolean. If true, default processing functions will be called on label columns as well. Default value is False.

remove_html_tags

Optional boolean. If true, remove html tags from text. Default value is False.

remove_urls

Optional boolean. If true, remove urls from text. Default value is False.

working_dir

Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.

dataset_type

Optional list. This parameter is mandatory when task is "entity_recognition". Accepted data formats for this model are 'ner_json', 'BIO', 'LBIOU' or 'csv'. For the csv dataset type, if an entity has multiple values, they should be separated by a comma (,).

class_mapping

Optional dictionary. This parameter is optional and can only be used when the task is entity recognition. The dictionary specifies the location entity. Use the format: class_mapping={'address_tag': 'location'}. The value linked to the 'address_tag' key will be identified as a location entity. If the model extracts multiple location entities from a single document, each location will be listed separately in the results.

Keyword Arguments

Parameter

Description

stratify

Optional boolean. If True, prepare_textdata will try to maintain the class proportion in train and validation data according to val_split_pct. The default value is True.

Note

Applies only to single-label text classification.

encoding

Optional string. Applicable only when task is entity_recognition: the encoding to read the csv/json file. Default is 'UTF-8'.

Returns:

TextData object
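
A brief sketch for a text classification dataset follows; the folder and column names are hypothetical and assume a train.csv with one text column and one label column.

from arcgis.learn import prepare_textdata

# Hypothetical folder containing train.csv with "review" and "sentiment" columns.
data = prepare_textdata(
    path=r"C:\data\reviews",
    task="classification",
    text_columns="review",
    label_columns=["sentiment"],
    batch_size=8,
)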

Transform3d

class arcgis.learn.Transform3d(rotation=[2.5, 2.5, 45], scaling=5, jitter=0.0, **kwargs)

Create transformations for 3D datasets that can be used in prepare_data() to apply data augmentation with a 50% probability. Applicable for dataset_type='PointCloud' and dataset_type='PointCloudOD'.

Parameter

Description

rotation

An optional list of floats. It defines a value in degrees for each of the X, Y, and Z dimensions, which will be used to rotate a block around the X, Y, and Z axes.

Example: A value of [2, 3, 180] means a random value for each of X, Y, and Z will be selected within [-2, 2], [-3, 3], and [-180, 180], respectively. The block will rotate around the respective axis as per the selected random value.

Note: For dataset_type='PointCloudOD', rotation around the X and Y axes will not be considered. Default: [2.5, 2.5, 45]

scaling

An optional float. It defines a percentage value that will be used to apply a scaling transformation to a block.

Example: A value of 5 means that for each of the X, Y, and Z dimensions a random value will be selected within the range of [0, 5], where the block might be scaled up or scaled down randomly in the respective dimension.

Note: For dataset_type='PointCloudOD', the same scale percentage in all three directions is considered. Default: 5

jitter

Optional float within [0, 1]. It defines a value in meters, which is used to add random variations in X, Y, and Z of all points.

Example: If the value provided is 0.1, then a random value within the range [-0.1, 0.1] is selected. The selected value is then added to the point's X coordinate. Similarly, it is applied to the Y and Z coordinates.

Note: Only applicable for dataset_type='PointCloud'. Default: 0.0.

Returns:

Transform3d object
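
A small sketch of pairing Transform3d with prepare_data is shown below; the export path is hypothetical, and the rotation and jitter values are illustrative only.

from arcgis.learn import prepare_data, Transform3d

# Rotate only around the Z axis, keep the default scaling, add a small jitter.
transform = Transform3d(rotation=[0, 0, 90], scaling=5, jitter=0.05)

# Hypothetical folder of exported point cloud training data.
data = prepare_data(
    path=r"C:\data\pointcloud_export",
    dataset_type="PointCloud",
    transforms=transform,
)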

Automated Machine Learning

AutoML

class arcgis.learn.AutoML(data=None, total_time_limit=3600, mode='Basic', algorithms=None, eval_metric='auto', n_jobs=1, ml_task='auto', **kwargs)

Automates the process of model selection, training and hyperparameter tuning of machine learning models within a specified time limit. Based upon MLJar (https://github.com/mljar/mljar-supervised/) and scikit-learn.

Note that automated machine learning support is provided only for supervised learning. Refer to https://supervised.mljar.com/

Parameter

Description

data

Required TabularDataObject. Returned data object from the prepare_tabulardata() function.

total_time_limit

Optional Int. The total time limit in seconds for AutoML training. Default is 3600 (1 Hr).

mode

Optional Str. Can be {Basic, Intermediate, Advanced}. This parameter defines the goal of AutoML and how intensive the AutoML search will be.

Basic : To be used when the user wants to explain and understand the data. Uses a 75%/25% train/test split. Uses the following models: Baseline, Linear, Decision Tree, Random Trees, XGBoost, Neural Network, and Ensemble. Has full explanations in reports: learning curves, importance plots, and SHAP plots.

Intermediate : To be used when the user wants to train a model that will be used in real-life use cases. Uses 5-fold CV (Cross-Validation). Uses the following models: Linear, Random Trees, LightGBM, XGBoost, CatBoost, Neural Network, and Ensemble. Has learning curves and importance plots in reports.

Advanced : To be used for machine learning competitions (maximum performance). Uses 10-fold CV (Cross-Validation). Uses the following models: Decision Tree, Random Trees, Extra Trees, XGBoost, CatBoost, Neural Network, Nearest Neighbors, Ensemble, and Stacking. It has only learning curves in the reports. Default is Basic.

algorithms

Optional. List of str. The list of algorithms that will be used in the training. The algorithms can be: Linear, Decision Tree, Random Trees, Extra Trees, LightGBM, Xgboost, Neural Network.

eval_metric

Optional Str. The metric to be used to compare models. Possible values are: For binary classification - logloss (default), auc, f1, average_precision, accuracy. For multiclass classification - logloss (default), f1, accuracy. For regression - rmse (default), mse, mae, r2, mape, spearman, pearson.

Note - If there are only 2 unique values in the target, then binary classification is performed. If the number of unique values in the target is between 2 and 20 (included), then multiclass classification is performed. In all other cases, regression is performed on the dataset.

n_jobs

Optional. Int. Number of CPU cores to be used. By default, it is set to 1. Set it to -1 to use all the cores.

kwargs

sensitive_variables

Optional. List of strings. Variables in the feature class/dataframe which are sensitive and prone to model bias. Ex - ['sex', 'race'] or ['nationality']

fairness_metric

Optional. String. Name of the fairness metric based on which fairness optimization should be done on the evaluated models. Available metrics for binary classification are 'demographic_parity_difference', 'demographic_parity_ratio', 'equalized_odds_difference', 'equalized_odds_ratio'. 'demographic_parity_ratio' is the default. Available metrics for regression are 'group_loss_ratio' (default) and 'group_loss_difference'.

fairness_threshold

Optional. Float. Required when the chosen metric is group_loss_difference. The threshold value for the fairness metric. Default values are as follows:

- for demographic_parity_difference the metric value should be below 0.25,
- for demographic_parity_ratio the metric value should be above 0.8,
- for equalized_odds_difference the metric value should be below 0.25,
- for equalized_odds_ratio the metric value should be above 0.8,
- for group_loss_ratio the metric value should be above 0.8,
- for group_loss_difference the metric value should be below 0.25.

privileged_groups

Optional. List. List of privileged groups in the sensitive attribute. For example, in a binary classification task, a privileged group is the one with the highest selection rate. Example value: [{"sex": "Male"}]

underprivileged_groups

Optional. List. List of underprivileged groups in the sensitive attribute. For example, in a binary classification task, an underprivileged group is the one with the lowest selection rate. Example value: [{"sex": "Female"}]

Returns:

AutoML Object
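
A condensed workflow sketch follows; it assumes a TabularDataObject named data prepared with prepare_tabulardata() as shown earlier, and the output path is hypothetical.

from arcgis.learn import AutoML

automl = AutoML(data=data, total_time_limit=1800, mode="Intermediate")
automl.fit()
automl.report()                          # compare the models that were trained
print(automl.score())                    # accuracy (classification) or R2 (regression)
automl.save(r"C:\models\automl_best")    # hypothetical output path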

copy_and_overwrite(from_path, to_path)
fairness_score(sensitive_feature, fairness_metrics=None, visualize=False)

Shows sample results for the model.

Returns:

tuple/dataframe

fit(sample_weight=None)

Fits the AutoML model.

classmethod from_model(emd_path)

Creates an AutoML Model Object from an Esri Model Definition (EMD) file. The model object created can only be used for inference on a new dataset and cannot be retrained.

Parameter

Description

emd_path

Required string. Path to Esri Model Definition file.

Returns:

AutoML Object

get_ml_task(all_labels)
predict(input_features=None, explanatory_rasters=None, datefield=None, distance_features=None, output_layer_name='PredictionLayer', gis=None, prediction_type='features', output_raster_path=None, match_field_names=None, cell_sizes=[3, 4, 5, 6, 7], confidence=True, get_local_explanations=False, **kwargs)

Predict on data from a feature layer, dataframe, and/or raster data.

Parameter

Description

input_features

Optional FeatureLayer or spatial dataframe. Required if prediction_type='features'. Contains features with location and some or all fields required to infer the dependent variable value.

explanatory_rasters

Optional list. Required if prediction_type='raster'. Contains a list of raster objects containing some or all fields required to infer the dependent variable value.

datefield

Optional string. Field name from the feature layer that contains the date, time for the input features. Same as prepare_tabulardata().

cell_sizes

Size of H3 cells (specified as H3 resolution) for spatially aggregating input features and passing in the cell ids as additional explanatory variables to the model. If a spatial dataframe is passed as input_features, ensure that the spatial reference is 4326, and the geometry type is Point. Not applicable when explanatory_rasters are provided.

distance_features

Optional list of FeatureLayer objects. These layers are used for calculation of the fields "NEAR_DIST_1", "NEAR_DIST_2", etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().

output_layer_name

Optional string. Used for publishing the output layer.

gis

Optional GIS object. Used for publishing the item. If not specified then the active gis user is taken.

prediction_type

Optional String. Set 'features' or 'dataframe' to make output feature layer predictions. With this, the input_features argument is required.

Set 'raster' to make a prediction raster. With this, rasters must be specified.

output_raster_path

Optional path. Required when prediction_type='raster'; saves the output raster to this path.

match_field_names

Optional dictionary. Specify mapping of field names from the prediction set to the training set. For example:

{
    "Field_Name_1": "Field_1",
    "Field_Name_2": "Field_2"
}

confidence

Optional Bool. Set confidence to True to get prediction confidence for classification use cases. Default is True.

Returns:

FeatureLayer if prediction_type=’features’, dataframe for prediction_type=’dataframe’ else creates an output raster.

predict_proba()
Returns:

output from AutoML’s model.predict_proba() with prediction probability for the training data

report()
Returns:

a report of the different models trained by AutoML along with their performance.

save(path)

Saves the model in the path specified. Creates an Esri Model and a dlpk. Uses pickle to save the model and transforms.

Returns:

path

score()
Returns:

output from AutoML’s model.score(), R2 score in case of regression and Accuracy in case of classification.

show_results(rows=5)

Shows sample results for the model.

Returns:

dataframe

AutoDL

class arcgis.learn.AutoDL(data=None, total_time_limit=2, mode='basic', network=None, verbose=True, **kwargs)

Automates the process of model selection, training and hyperparameter tuning of arcgis.learn supported deep learning models within a specified time limit.

Parameter

Description

data

Required ImageryDataObject. Returned data object from the prepare_data() function.

total_time_limit

Optional Int. The total time limit in hours for AutoDL training. Default is 2 Hr.

mode

Optional String. Can be "basic" or "advanced".

  • basic : To be used when the user wants to train all selected networks.

  • advanced : To be used when the user wants to tune hyperparameters of the two best performing models from basic mode.

network

Optional List of str. The list of models that will be used in the training. For example: Supported Object Detection models: ["SingleShotDetector", "RetinaNet", "FasterRCNN", "YOLOv3", "MaskRCNN", "DETReg", "RTDetrV2", "ATSS", "CARAFE", "CascadeRCNN", "CascadeRPN", "DCN", "Detectors", "DoubleHeads", "DynamicRCNN", "EmpiricalAttention", "FCOS", "FoveaBox", "FSAF", "GHM", "LibraRCNN", "PaFPN", "PISA", "RegNet", "RepPoints", "Res2Net", "SABL", "VFNet"]. Supported Pixel Classification models: ["DeepLab", "UnetClassifier", "PSPNetClassifier", "SamLoRA", "ANN", "APCNet", "CCNet", "CGNet", "HRNet", "DeepLabV3Plus", "Mask2Former", "DMNet", "DNLNet", "FastSCNN", "FCN", "GCNet", "MobileNetV2", "NonLocalNet", "PSANet", "SemFPN", "UperNet"].

verbose

Optional Boolean. To be used to display logs while training the models. Default is True.

Returns:

AutoDL Object
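
A condensed workflow sketch follows; it assumes an imagery data object named data prepared with prepare_data() from an object detection export, and the chosen networks are illustrative.

from arcgis.learn import AutoDL

autodl = AutoDL(
    data=data,
    total_time_limit=2,                            # hours
    mode="basic",
    network=["SingleShotDetector", "RetinaNet"],
)
autodl.fit()
print(autodl.score())    # average precision score per network for detection
autodl.report()          # HTML report of the trained models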

average_precision_score()

Calculates the average of the “average precision score” of all classes for selected networks

fit(**kwargs)

Train the selected networks for the specified number of epochs and using thespecified learning rates

mIOU()

Calculates the mIOU of all classes for selected networks

report(allow_plot=False)

Returns an HTML report of the different models trained by AutoDL along with their performance.

score(allow_plot=False)

Returns output from AutoDL's model.score(): "average precision score" in case of detection and accuracy in case of classification.

show_results(rows=5, threshold=0.25, **kwargs)

Shows sample results for the model.

Parameter

Description

rows

Optional number of rows. By default, 5 rows are displayed.

Returns:

dataframe

supported_classification_models()

Supported classification models.

supported_detection_models()

Supported detection models.

ImageryModel

class arcgis.learn.ImageryModel

ImageryModel is used to fine-tune models trained using AutoDL.

available_metrics()

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

average_precision_score()

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False returns class-wise average precision, otherwise returns mean average precision.

Returns:

dict if mean is False, otherwise float

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs while using the specified learning rates.

Parameter

Description

epochs

Optional integer. Number of cycles of training on the data. Increase it if the model is underfitting. Default value is 10.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitor value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics that is displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

load(path, data)

Loads a compatible saved model for inferencing or fine tuning from the disk, which can be used to further fine tune the models saved using AutoDL.

Parameter

Description

path

Required string. Path to Esri Model Definition (EMD) or DLPK file.

data

Required ImageryDataObject. Returned data object from the prepare_data() function.
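
A short fine-tuning sketch follows; the chip folder and the EMD path of a model previously saved by AutoDL are hypothetical.

from arcgis.learn import prepare_data, ImageryModel

data = prepare_data(r"C:\data\rooftop_chips")               # hypothetical export folder
model = ImageryModel()
model.load(r"C:\models\autodl_best\autodl_best.emd", data)  # hypothetical EMD saved by AutoDL
model.lr_find()                                             # optional: inspect a good learning rate
model.fit(epochs=10, early_stopping=True, checkpoint=True)
model.save(r"C:\models\autodl_finetuned")                   # hypothetical output path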

lr_find(allow_plot=True)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.

mIOU()

Computes mean IOU on the validation set for each class.

Parameter

Description

mean

Optional bool. If False returns class-wise mean IOU, otherwise returns the mean IOU of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns:

dict if mean is False, otherwise float

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed then it stores it at the specified path, with the model name as the directory name, and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector, FeatureClassifier and RetinaNet. The torchscript format is supported by SiamMask. For usage of the SiamMask model in ArcGIS Pro 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified then the active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: Boolean overwrite - if True, it will overwrite the item on ArcGIS Online/Enterprise; default False.

show_results(rows=5, **kwargs)

Displays the results of a trained model on a part of the validation set.

rows

Optional int. Number of rows of results to be displayed.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Object Classification Models

FeatureClassifier

class arcgis.learn.FeatureClassifier(data, backbone='resnet34', pretrained_path=None, mixup=False, oversample=False, backend='pytorch', *args, **kwargs)

Creates an image classifier to classify the area occupied by a geographical feature based on the imagery it overlaps with.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet34 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where a pre-trained model is saved.

mixup

Optional boolean. If set to True, it creates new training images by randomly mixing training set images.

The default is set to False.

oversample

Optional boolean. If set to True, it oversamples unbalanced classes of the dataset during training. Not supported with MultiLabel dataset.

backend

Optional string. Controls the backend framework to be used for this model, which is 'pytorch' by default.

Valid options are "pytorch" and "tensorflow".

wavelengths

Optional list. A list of central wavelengths corresponding to each data band (in micrometers).

Returns:

FeatureClassifier Object
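
A minimal training sketch is shown below; the export folder is hypothetical and is assumed to contain Labeled_Tiles chips whose dataset type is inferred from map.txt.

from arcgis.learn import prepare_data, FeatureClassifier

data = prepare_data(r"C:\data\damaged_structures")   # hypothetical Labeled_Tiles export
model = FeatureClassifier(data, backbone="resnet34", mixup=True)
model.fit(epochs=10)
model.save("structure_damage_classifier")            # stored at the pre-defined location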

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

static backbones()

Supported list of backbones for this model.

categorize_features(feature_layer, raster=None, class_value_field='class_val', class_name_field='prediction', confidence_field='confidence', cell_size=1, coordinate_system=None, predict_function=None, batch_size=64, overwrite=False)

Deprecated since version 1.7.1: Please use classify_objects() instead.

Categorizes each feature by classifying its attachments or an image of its geographical area (using the provided Imagery Layer) and updates the feature layer with the prediction results in the output_label_field. Deprecated: use the Classify Objects Using Deep Learning tool or classify_objects().

Parameter

Description

feature_layer

Required. Public FeatureLayer or path of a local feature class for classification, with read, write, edit permissions.

raster

Optional. ImageryLayer or path of a local raster to be used for exporting image chips. (Requires arcpy)

class_value_field

Required string. Output field to be added in the layer, containing class value of predictions.

class_name_field

Required string. Output field to be added in the layer, containing class name of predictions.

confidence_field

Optional string. Output column name to be added in the layer which contains the confidence score.

cell_size

Optional float. Cell size to be used for exporting the image chips.

coordinate_system

Optional. Cartographic Coordinate System to be used for exporting the image chips.

predict_function

Optional list of tuples. Used for calculation of the final prediction result when each feature has more than one attachment. The predict_function takes as input a list of tuples. Each tuple has the class predicted as its first element and the confidence score as its second element. The function should return the final tuple classifying the feature and its confidence.

batch_size

Optional integer. The number of images or tiles to process in a single go.

The default value is 64.

overwrite

Optional boolean. If set to True the output fields will be overwritten by new values.

The default value is False.

Returns:

Boolean : True if operation is successful, False otherwise

classify_features(feature_layer, labeled_tiles_directory, input_label_field, output_label_field, confidence_field=None, predict_function=None)

Deprecated in ArcGIS version 1.9.1 and later: Use the Classify Objects Using Deep Learning tool or classify_objects().

Classifies the exported images and updates the feature layer with the prediction results in the output_label_field. Works with RGB images only.

Parameter

Description

feature_layer

Required. FeatureLayer for classification.

labeled_tiles_directory

Required. Folder structure containing images and labels folders. The chips should have been generated using the export training data tool in the Labeled Tiles format, and the labels should contain the OBJECTIDs of the features to be classified.

input_label_field

Required. Value field name which created the labeled tiles. This field should contain the OBJECTIDs of the features to be classified. In the case of attachments this field is not used.

output_label_field

Required. Output column name to be added in the layer which contains predictions.

confidence_field

Optional. Output column name to be added in the layer which contains the confidence score.

predict_function

Optional. Used for calculation of the final prediction result when each feature has more than one attachment. The predict_function takes as input a list of tuples. Each tuple has the class predicted as its first element and the confidence score as its second element. The function should return the final tuple classifying the feature and its confidence.

Returns:

Boolean : True/False if operation is successful

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', mixed_precision=False, **kwargs)

Train the model for the specified number of epochs and using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to 'True', training will stop if the monitor value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to 'True', the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics that is displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.

static foundation_model_backbones()

Supported list of foundation model backbones for this model.

classmethod from_model(emd_path, data=None)

Creates a Feature classifier from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Required fastai Databunch or None. Returned data object from the prepare_data() function, or None for inferencing.

Returns:

FeatureClassifier Object

load(name_or_path, **kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys in the file's state dict match the keys returned by the model's Module.state_dict().

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.

plot_confusion_matrix(**kwargs)

Plots a confusion matrix of the model predictions to evaluate accuracy.

kwargs

Parameter

Description

thresh

Confidence score threshold for multilabel predictions. Defaults to 0.5.

plot_hard_examples(num_examples)

Plots the hard examples with their heatmaps.

Parameter

Description

num_examples

Number of hard examples to plot.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path,visualize=False,gradcam=False)

Runs prediction on an Image. Works with RGB images only.

Parameter

Description

img_path

Required. Path to the image file to make the predictions on.

visualize

Optional. Set this parameter to True to visualize the image being predicted.

gradcam

Optional. Set this parameter to True to get a Grad-CAM visualization to help with explainability of the prediction. If set to True, the visualize parameter must also be set to True.

Returns:

prediction label and confidence
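An illustrative call, assuming the (label, confidence) return described above; the image path is hypothetical:

    label, confidence = model.predict(r"chips/sample_chip.jpg", visualize=True, gradcam=True)
    print(label, confidence)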

predict_folder_and_create_layer(folder,feature_layer_name,gis=None,prediction_field='predict',confidence_field='confidence')

Predicts on images present in the given folder and creates a feature layer. The images stored in the folder should contain GPS information as part of their EXIF metadata. Works with RGB images only.

Parameter

Description

folder

Required String. Folder containing images to inference on.

feature_layer_name

Required String. The name of the feature layer used to publish.

gis

Optional GIS object, the GIS on which this tool runs. If not specified, the active GIS is used.

prediction_field

Optional String. The field name to use to add predictions.

confidence_field

Optional String. The field name to use to add confidence.

Returns:

FeatureCollection Object
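A publishing sketch; the folder, layer name and GIS connection are illustrative (GIS("home") assumes an ArcGIS notebook or an active connection, otherwise connect with a URL and credentials):

    from arcgis.gis import GIS

    gis = GIS("home")
    feature_collection = model.predict_folder_and_create_layer(
        folder=r"field_photos",
        feature_layer_name="photo_classification_results",
        gis=gis,
        prediction_field="predict",
        confidence_field="confidence",
    )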

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,gradcam=False,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores the model at the pre-defined location. If a path is passed, it stores the model at the specified path, with the model name as the directory name, and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

gradcam

Optional boolean. Setting this to True for labeled tiles will enable the 'explainability_map' parameter in the Classify Objects Using Deep Learning tool in ArcGIS Pro/Online. The explainability_map parameter can be used to visualize the Grad-CAM from the tool. Setting this to True will also save the explainability map in the saved folder. Default is set to False. This feature works only with RGB images.

kwargs

Optional Parameters.
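A save-and-publish sketch; the model name is illustrative and gis can be omitted to use the active GIS:

    # save with the default PyTorch framework and publish the DLPK as an item
    model.save("building_damage_classifier", publish=True, gis=gis, gradcam=True)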

show_results(rows=5,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

statictransformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Object Detection Models

FasterRCNN

classarcgis.learn.FasterRCNN(data,backbone='resnet50',pretrained_path=None,**kwargs)

Model architecture from https://arxiv.org/abs/1506.01497. Creates a FasterRCNN object detection model, based on https://github.com/pytorch/vision/blob/master/torchvision/models/detection/faster_rcnn.py.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where the pre-trained model is saved.

kwargs

Returns:

FasterRCNN Object
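A construction sketch, assuming training chips exported in a bounding-box metadata format such as PASCAL_VOC_rectangles; the data path and batch size are illustrative:

    from arcgis.learn import prepare_data, FasterRCNN

    data = prepare_data(r"exported_pascal_voc_chips", batch_size=8)
    model = FasterRCNN(data, backbone="resnet50")
    model.fit(epochs=10)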

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False, returns class-wise average precision, otherwise returns mean average precision.

Returns:

dict if mean is False, otherwise float
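An evaluation sketch on the validation set; the threshold values are illustrative:

    # class-wise average precision as a dictionary
    ap_per_class = model.average_precision_score(detect_thresh=0.5, iou_thresh=0.5)

    # mean average precision as a single float
    mean_ap = model.average_precision_score(detect_thresh=0.5, iou_thresh=0.5, mean=True)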

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the value of the monitor parameter stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is 'False'.

classmethodfrom_model(emd_path,data=None)

Creates a FasterRCNN object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from the prepare_data() function, or None for inferencing.

Returns:

FasterRCNN Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys in the file's state dict match the keys returned by the model's Module.state_dict().

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is 'False'.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=False,visualize=False,resize=False)

Runs prediction on an Image. This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make the predictions on.

threshold

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

return_scores

Optional boolean. Will return the probability scores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image with predicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size (the chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if the resize_to parameter was used in prepare_data, the image is resized to that size instead.

By default, this parameter is False and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).

Returns:

Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
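An illustrative detection call that unpacks the return value described above; the image path and thresholds are hypothetical:

    predictions, labels, scores = model.predict(
        r"images/scene.jpg", threshold=0.6, nms_overlap=0.1, return_scores=True
    )
    for (xmin, ymin, width, height), label, score in zip(predictions, labels, scores):
        print(label, round(score, 3), xmin, ymin, width, height)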

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions to the metadata file. This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make the predictions on.

metadata_file

Required. Path to the metadata CSV file where the predictions will be saved in VMTI format.

threshold

Optional float. The probability above which a detection will be considered.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

track

Optional bool. Set this parameter to True to enable object tracking.

visualize

Optional boolean. If True, a video is saved with the prediction results.

output_file_path

Optional path. Path of the final video to be saved. If not supplied, the video will be saved at the input_video_path appended with _prediction.avi. Supports only AVI and MP4 formats.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved. By default, a new file with the _multiplex.MOV extension is saved in the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking. The assignment_iou_thrd parameter is used to assign the threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent for it to be considered vanished, and detect_frames is the number of frames an object should be detected in to track it.

visual_options

Optional dictionary. Set different parameters for visualization: show_scores boolean, to view scores on predictions; show_labels boolean, to view labels on predictions; thickness integer, to set the thickness level of the box; fontface integer, a fontface value from OpenCV; color tuple (B, G, R), a tuple containing values between 0-255.

resize

Optional boolean. Resizes the video frames to the same size (the chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if the resize_to parameter was used in prepare_data, the video frames are resized to that size instead.

By default, this parameter is False and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
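A video inferencing sketch with tracking enabled; the file paths and threshold are hypothetical:

    model.predict_video(
        input_video_path=r"videos/survey.mp4",
        metadata_file=r"videos/survey_metadata.csv",
        threshold=0.6,
        track=True,
        visualize=True,
        output_file_path=r"videos/survey_prediction.avi",
        tracker_options={"assignment_iou_thrd": 0.3, "detect_frames": 10, "vanish_frames": 40},
    )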

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores the model at the pre-defined location. If a path is passed, it stores the model at the specified path, with the model name as the directory name, and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

statictransformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

RetinaNet

classarcgis.learn.RetinaNet(data,scales=None,ratios=None,backbone=None,pretrained_path=None,*args,**kwargs)

Creates a RetinaNet Object Detector with the specified zoom scales and aspect ratios. Based on the Fast.ai notebook.

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data() function.

scales

Optional list of float values. Zoom scales of anchor boxes.

ratios

Optional list of float values. Aspect ratios of anchor boxes.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

RetinaNet Object
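A construction sketch with custom anchor settings; the scales, ratios and data path are illustrative:

    from arcgis.learn import prepare_data, RetinaNet

    data = prepare_data(r"exported_chips", batch_size=8)
    model = RetinaNet(data, scales=[0.5, 1.0, 2.0], ratios=[0.5, 1.0, 2.0], backbone="resnet50")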

MIN_BATCH_VAL_AMP=8
propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

average_precision_score(detect_thresh=0.5,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above whicha detection will be considered for computingaverage precision.

iou_thresh

Optional float. The intersection over unionthreshold with the ground truth labels, abovewhich a predicted bounding box will beconsidered a true positive.

mean

Optional bool. If False returns class-wiseaverage precision otherwise returns meanaverage precision.

Returns:

dict if mean is False otherwisefloat

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a RetinaNet Object Detector from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

RetinaNet Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys in the file's state dict match the keys returned by the model's Module.state_dict().

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=True,visualize=False,resize=False,batch_size=1)

Predicts and displays the results of a trained model on a single image.This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make thepredictions on.

thresh

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean.Will return the probability scores of thebounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

batch_size

Optional int. Batch size to be usedduring tiled inferencing. Default value 1.

Returns:

‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above whicha detection will be considered.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.avi. Supports only AVI and MP4 formats.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracking_options

Optional dictionary. Set different parameters forobject tracking. assignment_iou_thrd parameter is usedto assign threshold for assignment of trackers,vanish_frames is the number of frames the object shouldbe absent to consider it as vanished, detect_framesis the number of frames an object should be detectedto track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the video frames to the same size(chip_size parameter in prepare_data) that the model wastrained on, before detecting objects. Note that ifresize_to parameter was used in prepare_data,the video frames are resized to that size instead.

By default, this parameter is false and the detectionsare run in a sliding window fashion by applying themodel on cropped sections of the frame (of the samesize as the model was trained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

thresh

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictransformer_backbones()
unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

YOLOv3

classarcgis.learn.YOLOv3(data=None,pretrained_path=None,**kwargs)

Creates a YOLOv3 object detector.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function. YOLOv3 only supports image sizes in multiples of 32 (e.g. 256, 416, etc.).

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

YOLOv3 Object
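A construction sketch; note that the chip size passed to prepare_data() should be a multiple of 32 for YOLOv3 (the path and batch size are illustrative):

    from arcgis.learn import prepare_data, YOLOv3

    data = prepare_data(r"exported_chips", chip_size=416, batch_size=8)
    model = YOLOv3(data)
    model.fit(epochs=10)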

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

average_precision_score(detect_thresh=0.1,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision. Defaults to 0.1. To be modified according to the dataset and training.

iou_thresh

Optional float. The intersection over unionthreshold with the ground truth labels, abovewhich a predicted bounding box will beconsidered a true positive.

mean

Optional bool. If False returns class-wiseaverage precision otherwise returns meanaverage precision.

Returns:

dict if mean is False otherwisefloat

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a YOLOv3 Object Detector from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

YOLOv3 Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys in the file's state dict match the keys returned by the model's Module.state_dict().

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.1,nms_overlap=0.1,return_scores=True,visualize=False,resize=False,batch_size=1)

Predicts and displays the results of a trained model on a single image. The image size should be at least 416x416 px if using COCO pretrained weights. This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make thepredictions on.

threshold

Optional float. The probability above which a detection will be considered valid. Defaults to 0.1. To be modified according to the dataset and training.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean.Will return the probability scores of thebounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

batch_size

Optional int. Batch size to be usedduring tiled inferencing. Default value 1.

Returns:

‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image

predict_video(input_video_path,metadata_file,threshold=0.1,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above which a detection will be considered. Defaults to 0.1. To be modified according to the dataset and training.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.avi. Supports only AVI and MP4 formats.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracking_options

Optional dictionary. Set different parameters forobject tracking. assignment_iou_thrd parameter is usedto assign threshold for assignment of trackers,vanish_frames is the number of frames the object shouldbe absent to consider it as vanished, detect_framesis the number of frames an object should be detectedto track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the video frames to the same size(chip_size parameter in prepare_data) that the model wastrained on, before detecting objects. Note that ifresize_to parameter was used in prepare_data,the video frames are resized to that size instead.

By default, this parameter is false and the detectionsare run in a sliding window fashion by applying themodel on cropped sections of the frame (of the samesize as the model was trained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.1,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

thresh

Optional float. The probability above which a detection will be considered valid. Defaults to 0.1. To be modified according to the dataset and training.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

propertysupported_backbones

Supported backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

SingleShotDetector

classarcgis.learn.SingleShotDetector(data,grids=None,zooms=[1.0],ratios=[[1.0,1.0]],backbone=None,drop=0.3,bias=-4.0,focal_loss=False,pretrained_path=None,location_loss_factor=None,ssd_version=2,backend='pytorch',*args,**kwargs)

Creates a Single Shot Detector with the specified grid sizes, zoom scales and aspect ratios. Based on Fast.ai MOOC Version 2, Lesson 9.

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data() function.

grids

Required list. Grid sizes used for creating anchor boxes.

zooms

Optional list. Zooms of anchor boxes.

ratios

Optional list of tuples. Aspect ratios of anchor boxes.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet34 by default. Supported backbones: ResNet, DenseNet, VGG families and specified Timm models (experimental support) from backbones().

dropout

Optional float. Dropout probability. Increase it to reduce overfitting.

bias

Optional float. Bias for SSD head.

focal_loss

Optional boolean. Uses Focal Loss if True.

pretrained_path

Optional string. Path where the pre-trained model is saved.

location_loss_factor

Optional float. Sets the weight of the bounding box loss. This should be strictly between 0 and 1. The default is None, which gives equal weight to both location and classification loss. This factor adjusts the focus of the model on the location of the bounding box.

ssd_version

Optional int within [1,2]. Use version=1 for arcgis v1.6.2 or earlier

backend

Optional string. Controls the backend framework to be used for this model, which is 'pytorch' by default.

Valid options are 'pytorch' and 'tensorflow'.

wavelengths

Optional list. A list of central wavelengths corresponding to each data band (in micrometers).

Returns:

SingleShotDetector Object
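A construction sketch with explicit grids, zooms and ratios; the values and data path are illustrative:

    from arcgis.learn import prepare_data, SingleShotDetector

    data = prepare_data(r"exported_chips", batch_size=8)
    model = SingleShotDetector(
        data,
        grids=[4],
        zooms=[0.7, 1.0, 1.3],
        ratios=[[1.0, 1.0], [1.0, 0.5], [0.5, 1.0]],
    )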

MIN_BATCH_VAL_AMP=8
propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above whicha detection will be considered for computingaverage precision.

iou_thresh

Optional float. The intersection over unionthreshold with the ground truth labels, abovewhich a predicted bounding box will beconsidered a true positive.

mean

Optional bool. If False returns class-wiseaverage precision otherwise returns meanaverage precision.

Returns:

dict if mean is False otherwisefloat

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_emd(data,emd_path)

Creates a Single Shot Detector from an Esri Model Definition (EMD) file.

Parameter

Description

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

emd_path

Required string. Path to Esri Model Definitionfile.

Returns:

SingleShotDetector Object

classmethodfrom_model(emd_path,data=None)

Creates a Single Shot Detector from an Esri Model Definition (EMD) file.

Note: Only supported for Pytorch models.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

SingleShotDetector Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys in the file's state dict match the keys returned by the model's Module.state_dict().

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=False,visualize=False,resize=False,batch_size=1)

Runs prediction on an Image.This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make thepredictions on.

threshold

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean. Will return the probabilityscores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

batch_size

Optional int. Batch size to be usedduring tiled inferencing. Default value 1.

Returns:

‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above whicha detection will be considered.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.avi. Supports only AVI and MP4 formats.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracking_options

Optional dictionary. Set different parameters forobject tracking. assignment_iou_thrd parameter is usedto assign threshold for assignment of trackers,vanish_frames is the number of frames the object shouldbe absent to consider it as vanished, detect_framesis the number of frames an object should be detectedto track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

thresh

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

statictransformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

RTDetrV2

classarcgis.learn.RTDetrV2(data,backbone='resnet18',pretrained_path=None,**kwargs)

Model architecture from https://arxiv.org/pdf/2407.17140. Creates an RTDetrV2 object detection model, based on https://github.com/lyuwenyu/RT-DETR/tree/main.

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet18 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

RTDetrV2 Object
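A construction sketch using the default backbone; the data path and epoch count are illustrative:

    from arcgis.learn import prepare_data, RTDetrV2

    data = prepare_data(r"exported_chips", batch_size=8)
    model = RTDetrV2(data, backbone="resnet18")
    model.fit(epochs=10)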

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above whicha detection will be considered for computingaverage precision.

iou_thresh

Optional float. The intersection over unionthreshold with the ground truth labels, abovewhich a predicted bounding box will beconsidered a true positive.

mean

Optional bool. If False returns class-wiseaverage precision otherwise returns meanaverage precision.

Returns:

dict if mean is False otherwisefloat

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates. A minimal usage sketch follows the parameter descriptions below.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitor value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is False.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.
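Example (an illustrative sketch of a typical training run, assuming model was created from a prepare_data() databunch as shown earlier):

# optionally inspect the learning rate curve first
model.lr_find()

# lr=None lets an optimal learning rate be deduced automatically
model.fit(epochs=20, early_stopping=True, checkpoint=True)

# compare training and validation losses
model.plot_losses()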

classmethodfrom_model(emd_path,data=None)

Creates an RTDetrV2 object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

RTDetrV2 Object
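Example (a minimal sketch; the .dlpk path is a placeholder):

from arcgis.learn import RTDetrV2

# data=None is sufficient when the model is only used for inferencing
model = RTDetrV2.from_model('path/to/saved_model.dlpk')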

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=False,visualize=False,resize=False)

Runs prediction on an Image. This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make the predictions on.

threshold

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

return_scores

Optional boolean. Will return the probability scores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image with predicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if the resize_to parameter was used in prepare_data, the image is resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).

Returns:

Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
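Example (an illustrative sketch of single-image inference; the image path is a placeholder):

boxes, labels, scores = model.predict('path/to/scene.jpg',
                                      threshold=0.5,
                                      return_scores=True,
                                      visualize=True)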

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above whicha detection will be considered.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking. The assignment_iou_thrd parameter is used to assign the threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, and detect_frames is the number of frames an object should be detected to track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the video frames to the same size(chip_size parameter in prepare_data) that the model wastrained on, before detecting objects. Note that ifresize_to parameter was used in prepare_data,the video frames are resized to that size instead.

By default, this parameter is false and the detectionsare run in a sliding window fashion by applying themodel on cropped sections of the frame (of the samesize as the model was trained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', or 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.
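Example (a minimal sketch; the model name below is illustrative):

# saves the weights and creates the EMD and DLPK files; a plain name
# stores them at the pre-defined location, a path stores them there
model.save('rtdetrv2_detector', framework='PyTorch', publish=False)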

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MaskRCNN

classarcgis.learn.MaskRCNN(data,backbone=None,pretrained_path=None,pointrend=False,*args,**kwargs)
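Example (an illustrative sketch of creating the model; the chip folder path is a placeholder for data exported with the RCNN_Masks metadata format):

from arcgis.learn import prepare_data, MaskRCNN

data = prepare_data('path/to/rcnn_masks_chips', batch_size=4)
model = MaskRCNN(data)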
propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.5,iou_thresh=0.5,mean=False,show_progress=True,tta_prediction=False)

Computes average precision on the validation set for each class.

Returns:

dict if mean is False, otherwise float

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None,**kwargs)

Creates a MaskRCNN instance segmentation object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

MaskRCNN Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=True,visualize=False,resize=False,tta_prediction=False,**kwargs)

Predicts and displays the results of a trained model on a single image.This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make thepredictions on.

threshold

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean.Will return the probability scores of thebounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

tta_prediction

Optional bool. Perform test time augmentationwhile predicting

kwargs

Parameter

Description

batch_size

Optional int. Batch size to be usedduring tiled inferencing

min_obj_size

Optional int. Minimum object sizeto be detected.

Returns:

‘List’ of xmin, ymin, width, height, labels, scores, of predicted bounding boxes on the given image
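Example (an illustrative sketch; the image path is a placeholder):

# returns a list describing the predicted boxes, labels and scores
results = model.predict('path/to/scene.jpg', threshold=0.5, return_scores=True)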

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', or 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=4,mode='mask',mask_threshold=0.5,box_threshold=0.7,tta_prediction=False,imsize=5,index=0,alpha=0.5,cmap='tab20',**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

mode

Optional string; one of 'bbox', 'mask', or 'bbox_mask'.
  • bbox - For visualizing only bounding boxes.

  • mask - For visualizing only masks.

  • bbox_mask - For visualizing both masks and bounding boxes.

mask_threshold

Optional float. The probability above which a pixel will be considered part of the mask.

box_threshold

Optional float. The probability above which a detection will be considered valid.

tta_prediction

Optional bool. Perform test time augmentation while predicting.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

statictransformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MMDetection

classarcgis.learn.MMDetection(data,model,model_weight=False,pretrained_path=None,**kwargs)

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

model

Required model name or path to the configuration file from the MMDetection repository. The list of supported models can be queried using supported_models.

model_weight

Optional path of the model weight from the MMDetection repository.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

MMDetection Object
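Example (a minimal sketch; the chip folder path is a placeholder and 'fcos' is one of the names listed in supported_models):

from arcgis.learn import prepare_data, MMDetection

print(MMDetection.supported_models)        # query the supported model names
data = prepare_data('path/to/exported_chips', batch_size=8)
model = MMDetection(data, model='fcos')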

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.

Returns:

dict if mean is False, otherwise float

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates an MMDetection object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

MMDetection Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=False,visualize=False,resize=False)

Runs prediction on an Image. This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make thepredictions on.

threshold

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean. Will return the probabilityscores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size(chip_size parameter in prepare_data) that the model was trained on,before detecting objects.Note that if resize_to parameter was used in prepare_data,the image is resized to that size instead.

By default, this parameter is false and the detections are runin a sliding window fashion by applying the model on cropped sectionsof the image (of the same size as the model was trained on).

Returns:

Returns a tuple with predictions, labels and optionally confidence scoresif return_scores=True. The predicted bounding boxes are returned as a listof lists containing the xmin, ymin, width and height of each predicted objectin each image. The labels are returned as a list of class values and theconfidence scores are returned as a list of floats indicating the confidenceof each prediction.

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above whicha detection will be considered.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.avi. Supports only AVI and MP4 formats.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking. The assignment_iou_thrd parameter is used to assign the threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, and detect_frames is the number of frames an object should be detected to track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the video frames to the same size(chip_size parameter in prepare_data) that the model was trained on,before detecting objects.Note that if resize_to parameter was used in prepare_data,the video frames are resized to that size instead.

By default, this parameter is false and the detections are runin a sliding window fashion by applying the model on cropped sectionsof the frame (of the same size as the model was trained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', or 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

propertysupported_datasets

Supported dataset types for this model.

supported_models=['atss','carafe','cascade_rcnn','cascade_rpn','dcn','detectors','dino','double_heads','dynamic_rcnn','empirical_attention','fcos','foveabox','fsaf','ghm','hrnet','libra_rcnn','nas_fcos','pafpn','pisa','regnet','reppoints','res2net','sabl','vfnet']

List of models supported by this class.

supported_transformer_models=['dino']

List of transformer models supported by this class.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

DETReg

classarcgis.learn.DETReg(data,backbone='resnet50',pretrained_path=None,**kwargs)

Model architecture from https://arxiv.org/abs/2106.04550. Creates a DETReg object detection model, based on https://github.com/amirbar/DETReg.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction. resnet50 is the only backbone that is currently supported and is used by default.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

DETReg Object
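Example (a minimal sketch; the chip folder path is a placeholder):

from arcgis.learn import prepare_data, DETReg

data = prepare_data('path/to/exported_chips', batch_size=4)
model = DETReg(data, backbone='resnet50')   # resnet50 is the only supported backbone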

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,mean=False,show_progress=True)

Computes average precision on the validation set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.

Returns:

dict if mean is False, otherwise float

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a DETReg object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

DETReg Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=False,visualize=False,resize=False)

Runs prediction on an Image. This method is only supported for RGB images.

Parameter

Description

image_path

Required. Path to the image file to make thepredictions on.

threshold

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean. Will return the probabilityscores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

Returns:

Returns a tuple with predictions, labels and optionally confidence scoresif return_scores=True. The predicted bounding boxes are returned as a listof lists containing the xmin, ymin, width and height of each predicted objectin each image. The labels are returned as a list of class values and theconfidence scores are returned as a list of floats indicating the confidenceof each prediction.

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Parameter

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above whicha detection will be considered.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking. The assignment_iou_thrd parameter is used to assign the threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, and detect_frames is the number of frames an object should be detected to track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the video frames to the same size(chip_size parameter in prepare_data) that the model wastrained on, before detecting objects. Note that ifresize_to parameter was used in prepare_data,the video frames are resized to that size instead.

By default, this parameter is false and the detectionsare run in a sliding window fashion by applying themodel on cropped sections of the frame (of the samesize as the model was trained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', or 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

EfficientDet

classarcgis.learn.EfficientDet(data,backbone=None,pretrained_path=None,*args,**kwargs)

Creates an EfficientDet model for object detection. Supports RGB JPEG imagery. Based on TFLite Model Maker.

Argument

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function. Only the (JPEG + PASCAL_VOC_rectangles) format is supported.

backbone

Optional String. Backbone convolutional neural network model used for EfficientDet, which is efficientdet_lite0 by default.

pretrained_path

Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Returns:

EfficientDet Object
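Example (an illustrative sketch; the chip folder path is a placeholder for JPEG chips with PASCAL_VOC_rectangles labels):

from arcgis.learn import prepare_data, EfficientDet

data = prepare_data('path/to/jpeg_voc_chips', batch_size=8)
model = EfficientDet(data)                  # uses the efficientdet_lite0 backbone by default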

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

average_precision_score(mean=False)

Computes average precision on the validation set for each class.

Argument

Description

mean

Optional bool. If False, returns class-wise average precision; otherwise returns mean average precision.

Returns:

dict if mean is False, otherwise float

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',**kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.Recommended to set to False.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

classmethodfrom_model(emd_path,data=None)

Creates an EfficientDet object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

EfficientDet Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path,threshold=0.5,nms_overlap=0.1,return_scores=True,visualize=False,resize=False,**kwargs)

Predicts and displays the results of a trained model on a single image.This method is only supported for RGB images.

Argument

Description

image_path

Required. Path to the image file to make thepredictions on.

threshold

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

return_scores

Optional boolean.Will return the probability scores of thebounding box predictions if True.

visualize

Optional boolean. Displays the image withpredicted bounding boxes if True.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

Returns:

‘List’ of xmin, ymin, width, height, labels, scores, of predicted bounding boxes on the given image

predict_video(input_video_path,metadata_file,threshold=0.5,nms_overlap=0.1,track=False,visualize=False,output_file_path=None,multiplex=False,multiplex_file_path=None,tracker_options={'assignment_iou_thrd':0.3,'detect_frames':10,'vanish_frames':40},visual_options={'color':(255,255,255),'fontface':0,'show_labels':True,'show_scores':True,'thickness':2},resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.This method is only supported for RGB images.

Argument

Description

input_video_path

Required. Path to the video file to make thepredictions on.

metadata_file

Required. Path to the metadata csv file wherethe predictions will be saved in VMTI format.

threshold

Optional float. The probability above whicha detection will be considered.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

track

Optional bool. Set this parameter as True toenable object tracking.

visualize

Optional boolean. If True a video is savedwith prediction results.

output_file_path

Optional path. Path of the final video to be saved.If not supplied, video will be saved at path input_video_pathappended with _prediction.avi. Supports only AVI and MP4 formats.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved.By default a new file with _multiplex.MOV extension is savedin the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking. The assignment_iou_thrd parameter is used to assign the threshold for assignment of trackers, vanish_frames is the number of frames the object should be absent to consider it as vanished, and detect_frames is the number of frames an object should be detected to track it.

visual_options

Optional dictionary. Set different parameters forvisualization.show_scores boolean, to view scores on predictions,show_labels boolean, to view labels on predictions,thickness integer, to set the thickness level of box,fontface integer, fontface value from opencv values,color tuple (B, G, R), tuple containing values between0-255.

resize

Optional boolean. Resizes the image to thesame size (chip_size parameter in prepare_data)that the model was trained on, before detectingobjects. Note that if resize_to parameter wasused in prepare_data, the image is resized tothat size instead.

By default, this parameter is false and thedetections are run in a sliding window fashionby applying the model on cropped sections ofthe image (of the same size as the model wastrained on).

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', or 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

propertysupported_backbones

Supported torchvision backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Pixel Classification Models

UnetClassifier

classarcgis.learn.UnetClassifier(data,backbone=None,pretrained_path=None,backend='pytorch',*args,**kwargs)

Creates a Unet like classifier based on given pretrained encoder.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet34 by default. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where the pre-trained model is saved.

backend

Optional string. Controls the backend framework to be used for this model, which is 'pytorch' by default.

Valid options are 'pytorch' and 'tensorflow'.

kwargs

Parameter

Description

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inversely to the frequency of pixels per class. Default: False.

mixup

Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False.

focal_loss

Optional boolean. If True, it will use focal loss. Default: False.

dice_loss_fraction

Optional float. Min_val=0, Max_val=1. If > 0, the model will use a combination of the default or focal (if focal_loss=True) loss with the specified fraction of dice loss. E.g. for dice = 0.3, loss = (1-0.3)*default loss + 0.3*dice. Default: 0.

dice_loss_average

Optional str. micro: the micro dice coefficient will be used for loss calculation. macro: the macro dice coefficient will be used for loss calculation. A macro-average will compute the metric independently for each class and then take the average (hence treating all classes equally), whereas a micro-average will aggregate the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of other classes). Default: 'micro'.

ignore_classes

Optional list. It will contain the list of class values on which the model will not incur loss. Default: [].

Returns:

UnetClassifier Object
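Example (an illustrative sketch; the chip folder path is a placeholder for data exported as classified tiles):

from arcgis.learn import prepare_data, UnetClassifier

data = prepare_data('path/to/classified_tiles', batch_size=8)
model = UnetClassifier(data,
                       backbone='resnet34',
                       class_balancing=True)   # weight the loss by inverse class frequency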

accuracy()

Computes per pixel accuracy on validation set.

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.
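
As an illustration of these parameters, a typical training call might look like the sketch below; the epoch count and settings are arbitrary, and model refers to any of the models documented here.

    # Train for 20 epochs with an automatically deduced learning rate,
    # stop early if valid_loss stops improving, and keep the best checkpoint.
    model.fit(epochs=20, lr=None, early_stopping=True, checkpoint=True, monitor="valid_loss")
    model.plot_losses()  # inspect training vs. validation loss curves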

classmethod from_emd(data, emd_path)

Creates a Unet-like classifier from an Esri Model Definition (EMD) file.

Parameter

Description

data

Required fastai Databunch or None. Returned data object from prepare_data() function or None for inferencing.

emd_path

Required string. Path to Esri Model Definition file.

Returns:

UnetClassifier Object

classmethod from_model(emd_path, data=None)

Creates a Unet-like classifier from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data() function or None for inferencing.

Returns:

UnetClassifier Object
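
For instance, a saved classifier can be re-created for inferencing as sketched below; the .emd path is hypothetical.

    from arcgis.learn import UnetClassifier

    # data defaults to None, which is sufficient for inferencing
    model = UnetClassifier.from_model(r"C:\models\land_cover_unet\land_cover_unet.emd")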

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is ‘False’.

mIOU(mean=False,show_progress=True)

Computes mean IOU on the validation set for each class.

Parameter

Description

mean

Optional bool. If False, returns class-wise mean IOU; otherwise returns the mean IOU of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns:

dict if mean is False, otherwise float

per_class_metrics(ignore_classes=[])

Computes per class precision, recall and f1-score on the validation set.

Parameter

Description

self

segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]

ignore_classes

Optional list. It will contain the list of class values on which the model will not incur loss. Default: [].

Returns per class precision, recall and f1 scores.
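
The sketch below shows how these evaluation helpers are typically combined on a trained classifier; the values returned depend entirely on your data.

    print(model.accuracy())            # overall per-pixel accuracy on the validation set
    print(model.mIOU(mean=False))      # dict of class-wise IoU values
    print(model.mIOU(mean=True))       # single float, mean IoU over all classes
    print(model.per_class_metrics())   # per class precision, recall and f1-score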

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed, then it stores at the specified path with the model name as the directory name and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, then the active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.
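
A minimal save sketch, assuming a trained model held in model; the model name is hypothetical, and publish=True additionally requires a signed-in GIS.

    # Writes the model weights, an .emd file and a .dlpk package
    model.save("land_cover_unet", framework="PyTorch", compute_metrics=True, publish=False)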

show_results(rows=5,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

property supported_backbones

Supported list of backbones for this model.

property supported_datasets

Supported dataset types for this model.

static torchgeo_backbones()

Supported list of torchgeo backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

PSPNetClassifier

class arcgis.learn.PSPNetClassifier(data,backbone=None,use_unet=True,pyramid_sizes=[1,2,3,6],pretrained_path=None,unet_aux_loss=False,pointrend=False,*args,**kwargs)

Model architecture from https://arxiv.org/abs/1612.01105. Creates a PSPNet Image Segmentation / Pixel Classification model.

Parameter

Description

data

Required fastai Databunch. Returned data object from prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet50 by default. Supported backbones: ResNet, DenseNet, VGG families and specified Timm models (experimental support) from backbones().

use_unet

Optional Bool. Specify whether to use the Unet-Decoder or not. Default: True.

pyramid_sizes

Optional List. The sizes at which the feature map is pooled. Currently set to the best set reported in the paper, i.e. (1, 2, 3, 6).

pretrained

Optional Bool. If True, use the pretrained backbone.

pretrained_path

Optional string. Path where pre-trained PSPNet model is saved.

unet_aux_loss

Optional Bool. If True, will use auxiliary loss for PSUnet. Default set to False. This flag is applicable only when use_unet is True.

pointrend

Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.

kwargs

Parameter

Description

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.

mixup

Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False.

focal_loss

Optional boolean. If True, it will use focal loss. Default: False.

dice_loss_fraction

Optional float. Min_val=0, Max_val=1. If > 0, the model will use a combination of the default or focal (if focal=True) loss with the specified fraction of dice loss.

Example:

for dice = 0.3, loss = (1-0.3)*default loss + 0.3*dice

Default: 0

dice_loss_average

Optional str.

  • micro: Micro dice coefficient will be used for loss calculation.

  • macro: Macro dice coefficient will be used for loss calculation.

A macro-average will compute the metric independently for each class and then take the average (hence treating all classes equally), whereas a micro-average will aggregate the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of other classes). Default: ‘micro’.

ignore_classes

Optional list. It will contain the list of class values on which the model will not incur loss. Default: [].

keep_dilation

Optional boolean. When PointRend architecture is used, keep_dilation=True can potentially improve accuracy at the cost of memory consumption. Default: False.

Returns:

PSPNetClassifier Object
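
A minimal construction sketch, with an illustrative chip folder path:

    from arcgis.learn import prepare_data, PSPNetClassifier

    data = prepare_data(r"C:\data\land_cover_chips", batch_size=4)  # hypothetical path

    # use_unet enables the Unet-style decoder; pointrend is off by default
    psp = PSPNetClassifier(data, backbone="resnet50", use_unet=True, pointrend=False)
    psp.fit(10)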

accuracy(input=None,target=None,void_code=0,class_mapping=None)

Computes per pixel accuracy.

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

freeze()

Freezes the pretrained backbone.

classmethodfrom_model(emd_path,data=None)

Creates a PSPNet classifier from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

PSPNetClassifier Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

mIOU(mean=False,show_progress=True)

Computes mean IOU on the validation set for each class.

Parameter

Description

mean

Optional bool. If False returns class-wisemean IOU, otherwise returns mean iou of allclasses combined.

show_progress

Optional bool. Displays the progress bar ifTrue.

Returns:

dict if mean is False otherwisefloat

per_class_metrics(ignore_classes=[])

Computes per class precision, recall and f1-score on the validation set.

Parameter

Description

self

segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]

ignore_classes

Optional list. It will contain the list of classvalues on which model will not incur loss.Default: []

Returns per class precision, recall and f1 scores

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

DeepLab

class arcgis.learn.DeepLab(data,backbone=None,pretrained_path=None,pointrend=False,*args,**kwargs)

Model architecture from https://arxiv.org/abs/1706.05587. Creates a DeepLab Image Segmentation / Pixel Classification model, based on https://github.com/pytorch/vision/tree/master/torchvision/models/segmentation.

Parameter

Description

data

Required fastai Databunch. Returned data object from prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is resnet101 by default since it is pretrained in torchvision. Supported backbones: ResNet, DenseNet, VGG family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where pre-trained model is saved.

pointrend

Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.

kwargs

Parameter

Description

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.

mixup

Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False.

focal_loss

Optional boolean. If True, it will use focal loss. Default: False.

dice_loss_fraction

Optional float. Min_val=0, Max_val=1. If > 0, the model will use a combination of the default or focal (if focal=True) loss with the specified fraction of dice loss. E.g. for dice = 0.3, loss = (1-0.3)*default loss + 0.3*dice. Default: 0.

dice_loss_average

Optional str.

  • micro: Micro dice coefficient will be used for loss calculation.

  • macro: Macro dice coefficient will be used for loss calculation.

A macro-average will compute the metric independently for each class and then take the average (hence treating all classes equally), whereas a micro-average will aggregate the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e. you may have many more examples of one class than of other classes). Default: ‘micro’.

ignore_classes

Optional list. It will contain the list of class values on which the model will not incur loss. Default: [].

keep_dilation

Optional boolean. When PointRend architecture is used, keep_dilation=True can potentially improve accuracy at the cost of memory consumption. Default: False.

wavelengths

Optional list. A list of central wavelengths corresponding to each data band (in micrometers).

Returns:

DeepLab Object
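
For example, a DeepLab model using the class_balancing option documented above could be created as sketched below; the path and settings are illustrative.

    from arcgis.learn import prepare_data, DeepLab

    data = prepare_data(r"C:\data\land_cover_chips", batch_size=4)  # hypothetical path
    deeplab = DeepLab(data, backbone="resnet101", class_balancing=True)
    deeplab.lr_find()   # suggest an optimal learning rate
    deeplab.fit(15)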

accuracy()

Computes per pixel accuracy on validation set.

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates aDeepLab semantic segmentation object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

DeepLab Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

mIOU(mean=False,show_progress=True)

Computes mean IOU on the validation set for each class.

Parameter

Description

mean

Optional bool. If False returns class-wisemean IOU, otherwise returns mean iou of allclasses combined.

show_progress

Optional bool. Displays the progress bar ifTrue.

Returns:

dict if mean is False otherwisefloat

per_class_metrics(ignore_classes=[])

Computes per class precision, recall and f1-score on the validation set.

Parameter

Description

self

segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]

ignore_classes

Optional list. It will contain the list of classvalues on which model will not incur loss.Default: []

Returns per class precision, recall and f1 scores

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

statictransformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

BDCNEdgeDetector

class arcgis.learn.BDCNEdgeDetector(data,backbone='vgg19',pretrained_path=None)

Model architecture from https://arxiv.org/pdf/1902.10903.pdf. Creates a BDCNEdgeDetector model.

Parameter

Description

data

Required fastai Databunch. Returned data object from prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is vgg19 by default. Supported backbones: ResNet, VGG family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns:

BDCNEdgeDetector Object
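
A minimal sketch of the edge-detection workflow, assuming chips exported with binary edge labels; the path is hypothetical.

    from arcgis.learn import prepare_data, BDCNEdgeDetector

    data = prepare_data(r"C:\data\edge_chips", batch_size=4)
    bdcn = BDCNEdgeDetector(data, backbone="vgg19")
    bdcn.fit(10)
    print(bdcn.compute_precision_recall(thresh=0.5, buffer=3))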

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

staticbackbones()

Supported list of backbones for this model.

compute_precision_recall(thresh=0.5,buffer=3,show_progress=True)

Computes precision, recall and f1 score on validation set.

Parameter

Description

thresh

Optional float. The probability above which a detection will be considered an edge pixel.

buffer

Optional int. Number of pixels in the neighborhood to consider for a true detection.

Returns:

dict

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates aBDCNEdgeDetector object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

BDCNEdgeDetector Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,thinning=True,**kwargs)

Displays the results of a trained model on a part of the validation set.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

static transformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

HEDEdgeDetector

class arcgis.learn.HEDEdgeDetector(data,backbone='vgg19',pretrained_path=None,**kwargs)

Model architecture from https://arxiv.org/pdf/1504.06375.pdf. Creates a HEDEdgeDetector model.

Parameter

Description

data

Required fastai Databunch. Returned data object from prepare_data() function.

backbone

Optional string. Backbone convolutional neural network model used for feature extraction, which is vgg19 by default. Supported backbones: ResNet, VGG family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional string. Path where pre-trained model is saved.

wavelengths

Optional list. A list of central wavelengths corresponding to each data band (in micrometers).

Returns:

HEDEdgeDetector Object
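
The workflow mirrors BDCNEdgeDetector; a minimal sketch with an illustrative path:

    from arcgis.learn import prepare_data, HEDEdgeDetector

    data = prepare_data(r"C:\data\edge_chips", batch_size=4)
    hed = HEDEdgeDetector(data, backbone="vgg19")
    hed.fit(10)
    hed.show_results(rows=4, thresh=0.5, thinning=True)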

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

staticbackbones()

Supported list of backbones for this model.

compute_precision_recall(thresh=0.5,buffer=3,show_progress=True)

Computes precision, recall and f1 score on validation set.

Parameter

Description

thresh

Optional float. The probability above which a detection will be considered an edge pixel.

buffer

Optional int. Number of pixels in the neighborhood to consider for a true detection.

Returns:

dict

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates aHEDEdgeDetector object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

HEDEdgeDetector Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,thinning=True,**kwargs)

Displays the results of a trained model on a part of the validation set.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

statictransformer_backbones()

Supported list of transformer backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MultiTaskRoadExtractor

class arcgis.learn.MultiTaskRoadExtractor(data,backbone=None,pretrained_path=None,*args,**kwargs)

Creates a Multi-Task Learning model for binary segmentation of roads. Supports RGB and Multispectral Imagery. Implementation based on https://doi.org/10.1109/CVPR.2019.01063.

Parameter

Description

data

Required fastai Databunch. Returned data object from prepare_data() function.

backbone

Optional String. Backbone convolutional neural network model used for feature extraction. If hourglass is chosen as the mtl_model (architecture), then this parameter is ignored, as hourglass uses a special customised architecture. This parameter is used with the linknet model. Default: ‘resnet34’. Supported backbones: ResNet family and specified Timm models (experimental support) from backbones().

pretrained_path

Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

kwargs

Parameter

Description

mtl_model

Optional String. It is used to create the model from linknet or hourglass based neural architectures. Supported: ‘linknet’, ‘hourglass’. Default: ‘hourglass’.

gaussian_thresh

Optional float. Sets the gaussian threshold, which allows setting the required road width. Range: 0.0 to 1.0. Default: 0.76.

orient_bin_size

Optional Int. Sets the bin size for orientation angles. Default: 20.

orient_theta

Optional Int. Sets the width of the orientation mask. Default: 8.

Returns:

MultiTaskRoadExtractor Object
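
A minimal sketch using the kwargs documented above; the chip folder is hypothetical and is assumed to contain binary road labels.

    from arcgis.learn import prepare_data, MultiTaskRoadExtractor

    data = prepare_data(r"C:\data\road_chips", batch_size=4)
    roads = MultiTaskRoadExtractor(data, mtl_model="hourglass", gaussian_thresh=0.76)
    roads.fit(20)
    print(roads.mIOU())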

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a Multi-Task Learning model for binary segmentation from aDeep Learning Package(DLPK) or Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

MultiTaskRoadExtractor Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

mIOU(mean=False,show_progress=True)

Computes mean IOU on the validation set for each class.

Parameter

Description

mean

Optional bool. If False returns class-wisemean IOU, otherwise returns mean iou of allclasses combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns:

dict if mean is False otherwisefloat

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Shows the ground truth and predictions of model side by side.

kwargs

Parameter

Description

rows

Number of rows of data to be displayed. If the batch size is smaller, then the rows will display the value provided for batch size.

alpha

Optional Float. Opacity parameter for label overlay on image. Float [0.0 - 1.0]. Default: 0.6.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ConnectNet

class arcgis.learn.ConnectNet(data,backbone=None,pretrained_path=None,*args,**kwargs)

Creates a ConnectNet model for binary segmentation of linear features. Supports RGB and Multispectral Imagery. Implementation based on https://doi.org/10.1109/CVPR.2019.01063.

Parameter

Description

data

Required fastai Databunch. Returned data object from prepare_data() function.

backbone

Optional String. Backbone CNN model to be used for creating the base. If hourglass is chosen as the mtl_model (architecture), then this parameter is ignored, as hourglass uses a special customised architecture. This parameter is to be used with the linknet architecture. Default: ‘resnet34’.

Use the supported_backbones property to get the list of all the supported backbones.

pretrained_path

Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

kwargs

Parameter

Description

mtl_model

Optional String. It is used to create the model from linknet or hourglass based neural architectures. Supported: ‘linknet’, ‘hourglass’. Default: ‘hourglass’.

gaussian_thresh

Optional float. Sets the gaussian threshold, which allows setting the required width of the linear feature. Range: 0.0 to 1.0. Default: 0.76.

orient_bin_size

Optional Int. Sets the bin size for orientation angles. Default: 20.

orient_theta

Optional Int. Sets the width of the orientation mask. Default: 8.

Returns:

ConnectNet Object
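
A minimal sketch; ConnectNet accepts the same mtl_model and orientation kwargs described above, and the path is hypothetical.

    from arcgis.learn import prepare_data, ConnectNet

    data = prepare_data(r"C:\data\linear_feature_chips", batch_size=4)
    cnet = ConnectNet(data, mtl_model="linknet", backbone="resnet34")
    cnet.fit(20)
    cnet.show_results(rows=2, alpha=0.6)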

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

staticbackbones()

Supported list of backbones for this model.

fit(epochs=10,lr=None,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a Multi-Task Learning model for binary segmentation from aDeep Learning Package(DLPK) or Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

ConnectNet Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

mIOU(mean=False,show_progress=True)

Computes mean IOU on the validation set for each class.

Parameter

Description

mean

Optional bool. If False returns class-wisemean IOU, otherwise returns mean iou of allclasses combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns:

dict if mean is False otherwisefloat

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in the specified framework format (‘PyTorch’, ‘tflite’, ‘torchscript’, and ‘TF-ONNX’ (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the ‘torch_scripts’ folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Shows the ground truth and predictions of model side by side.

kwargs

Parameter

Description

rows

Number of rows of data to be displayed. If the batch size is smaller, then the rows will display the value provided for batch size.

alpha

Optional Float. Opacity parameter for label overlay on image. Float [0.0 - 1.0]. Default: 0.6.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

statictorchgeo_backbones()

Supported list of torchgeo backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ChangeDetector

classarcgis.learn.ChangeDetector(data,backbone=None,attention_type='PAM',pretrained_path=None,**kwargs)

Creates a Change Detection model.

A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection - https://www.mdpi.com/2072-4292/12/10/1662

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional function. Backbone CNN model to be used for creating the encoder of the ChangeDetector, which is resnet18 by default. It supports the ResNet family of backbones.

attention_type

Optional string. Its value can either be “PAM” (Pyramid Attention Module) or “BAM” (Basic Attention Module). Defaults to “PAM”.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

ChangeDetector object
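
A minimal end-to-end sketch of creating and training a ChangeDetector (the training-data path is illustrative, and any prepare_data keyword arguments specific to how the change-detection chips were exported are omitted):

from arcgis.learn import prepare_data, ChangeDetector

data = prepare_data(r"/path/to/change_detection_chips")  # illustrative path

cd = ChangeDetector(data, attention_type="PAM")  # resnet18 encoder by default
cd.lr_find()                                     # optionally inspect a good learning rate
cd.fit(epochs=10)                                # lr=None deduces an optimal learning rate
cd.save("change_detector_model")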

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing is turned off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a ChangeDetector model from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Optional fastai Databunch. Returned data object from the prepare_data() function, or None for inferencing.

Returns:

ChangeDetector Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

precision_recall_score()

Computes precision, recall and f1 score.

predict(before_image,after_image,**kwargs)

Predict on a pair of images.

Parameter

Description

before_image

Required string. Path to the earlier (before) image.

after_image

Required string. Path to the later (after) image.

Kwargs

Parameter

Description

crop_predict

Optional Boolean. If True, it will predict using a sliding window strategy. Typically used when the image size is larger than the chip_size the model was trained on. Default False.

visualize

Optional Boolean. If True, it will plot the predictions in the notebook. Default False.

save

Optional Boolean. If True, it will write the prediction file to disk. Default False.

Returns:

PyTorch Tensor of the change mask.
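
For inference on a before/after image pair, a hedged sketch (file paths are illustrative and `cd` is assumed to be a trained ChangeDetector):

mask = cd.predict(r"/imagery/before.tif",
                  r"/imagery/after.tif",
                  crop_predict=True,  # sliding-window prediction for images larger than the chip_size
                  visualize=True)     # also plot the prediction in the notebook
# `mask` is a PyTorch tensor containing the predicted change mask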

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=4,**kwargs)

Displays the results of a trained model on the validation set.

propertysupported_backbones

Supported torchvision backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MMSegmentation

classarcgis.learn.MMSegmentation(data,model,model_weight=False,pretrained_path=None,**kwargs)

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

model

Required model name or path to the configuration file from the MMSegmentation repository. The list of the supported models can be queried using supported_models.

model_weight

Optional path of the model weight from the MMSegmentation repository.

pretrained_path

Optional string. Path where the pre-trained model is saved.

kwargs

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inversely to the frequency of pixels per class. Default: False.

ignore_classes

Optional list. It will contain the list of class values on which the model will not incur loss. Default: []

seq_len

Optional int. Number of timestamp bands. Applicable to the prithvi100m model only. Default: 1

Returns:

MMSegmentation Object
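
A minimal sketch of constructing one of the supported MMSegmentation models (the data path is illustrative; the model name must be one of supported_models):

from arcgis.learn import prepare_data, MMSegmentation

data = prepare_data(r"/path/to/classified_tiles")   # illustrative path
print(MMSegmentation.supported_models)              # e.g. 'deeplabv3plus', 'hrnet', ...
mmseg = MMSegmentation(data, model="deeplabv3plus")
mmseg.fit(epochs=10)
mmseg.save("mmseg_deeplabv3plus")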

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates an MMSegmentation object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

MMSegmentation Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,thresh=0.5,thinning=True,**kwargs)

Displays the results of a trained model on a part of the validation set.

propertysupported_datasets

Supported dataset types for this model.

supported_models=['ann','apcnet','ccnet','cgnet','deeplabv3','deeplabv3plus','dmnet','dnlnet','emanet','fastscnn','fcn','gcnet','hrnet','mask2former','mobilenet_v2','nonlocal_net','ocrnet','prithvi100m','psanet','pspnet','resnest','sem_fpn','unet','upernet']

List of models supported by this class.

supported_transformer_models=['mask2former','prithvi100m']

List of transformer based models supported by this class.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MaXDeepLab

classarcgis.learn.MaXDeepLab(data,backbone=None,pretrained_path=None,**kwargs)

Creates aMaXDeepLab panoptic segmentation model.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function. MaXDeepLab only supports image sizes in multiples of 16 (e.g. 256, 416, etc.).

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

MaXDeepLab Object
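
A minimal sketch for the panoptic segmentation model (the path is illustrative; the chip size passed to prepare_data must be a multiple of 16):

from arcgis.learn import prepare_data, MaXDeepLab

data = prepare_data(r"/path/to/panoptic_chips", chip_size=256)  # chip size must be a multiple of 16
maxdl = MaXDeepLab(data)
maxdl.compute_n_masks()     # optional: maximum number of labels/masks per chip in the dataset
maxdl.fit(epochs=10)
maxdl.save("maxdeeplab_model")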

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

compute_n_masks()

Computes the maximum number of class labels and masks in any chip in the entire dataset. Note: it might take a long time for larger datasets.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a MaXDeepLab panoptic segmentation object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

MaXDeepLab Panoptic Segmentation Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

propertysupported_backbones

Supported backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

SamLoRA

classarcgis.learn.SamLoRA(data,backbone='vit_b',pretrained_path=None,**kwargs)

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional string. Default: vit_b. Backbone model architecture. Supported backbones: Vision Transformers (huge, large, and base) pretrained by Meta. Use the supported_backbones property to get the list of all the supported backbones.

pretrained_path

Optional string. Path where the pre-trained model is saved.

kwargs

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inversely to the frequency of pixels per class. Default: False.

ignore_classes

Optional list. It will contain the list of class values on which the model will not incur loss. Default: []

Returns:

SamLoRA Object
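
A minimal sketch of fine-tuning SAM with LoRA on exported segmentation chips (the path is illustrative):

from arcgis.learn import prepare_data, SamLoRA

data = prepare_data(r"/path/to/classified_tiles")   # illustrative path
sam = SamLoRA(data, backbone="vit_b")               # default backbone; see supported_backbones for the larger ViT variants
sam.lr_find()
sam.fit(epochs=10)
sam.save("samlora_model")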

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a SamLoRA object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data()function or None for inferencing.

Returns:

SamLoRA Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional Integer. Number of rows of results to be displayed.

kwargs

Parameter

Description

alpha

Optional Float. Default value is 0.5. Opacity of the labels for the corresponding images. Values range between 0 and 1, where 1 means opaque.

propertysupported_backbones

Supported list of backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Image Translation Models

CycleGAN

classarcgis.learn.CycleGAN(data,pretrained_path=None,gen_blocks=9,lsgan=True,*args,**kwargs)

Creates a model object which generates images of type A from type B or type B from type A.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

pretrained_path

Optional string. Path where the pre-trained model is saved.

gen_blocks

Optional integer. Number of ResNet blocks to use in the generator.

lsgan

Optional boolean. If True, it will use Mean Squared Error; otherwise it will use Binary Cross Entropy.

Returns:

CycleGAN Object
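
A minimal training sketch (the path is illustrative and assumes chips exported with imagery from both domains):

from arcgis.learn import prepare_data, CycleGAN

data = prepare_data(r"/path/to/cyclegan_chips")   # illustrative path
cgan = CycleGAN(data, gen_blocks=9, lsgan=True)
cgan.fit(epochs=10)
cgan.compute_metrics()                            # FID on the validation set
cgan.save("cyclegan_model")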

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

compute_metrics()

Computes Frechet Inception Distance (FID) on validation set.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a CycleGAN object from an Esri Model Definition (EMD) file.

Parameter

Description

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

Returns:

CycleGAN Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path,convert_to)

Predicts and displays the image.

Parameter

Description

img_path

Required path of an image.

convert_to

‘A’ if we want to generate an image of type ‘A’ from type ‘B’, or ‘B’ if we want to generate an image of type ‘B’ from type ‘A’, where A and B are the domain specifications that were used while training.
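
For example, a hedged sketch of translating single images between the two training domains (paths are illustrative and `cgan` is assumed to be a trained CycleGAN):

# Generate a domain-B style image from a domain-A input
cgan.predict(r"/imagery/domain_a_image.tif", convert_to="B")

# ... and the reverse direction
cgan.predict(r"/imagery/domain_b_image.tif", convert_to="A")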

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

kwargs

rgb_bands

Optional list of integers (band numbers) to be considered for RGB visualization.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Pix2Pix

classarcgis.learn.Pix2Pix(data,pretrained_path=None,backbone=None,perceptual_loss=False,*args,**kwargs)

Creates a model object which generates fake images of type B from type A.

Parameter

Description

data

Required fastai Databunch with image chip sizes in multiples of 256. Returned data object from the prepare_data() function.

pretrained_path

Optional string. Path where the pre-trained model is saved.

backbone

Optional function. Backbone CNN model to be used for creating the base of the Pix2Pix, which is a UNet with a vanilla encoder by default. Compatible backbones as encoder: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’, ‘resnext50_32x4d’, ‘wide_resnet50_2’.

perceptual_loss

Optional boolean. True when perceptual loss is used. Default set to False.

Returns:

Pix2Pix Object
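
A minimal training sketch (the path is illustrative; chip sizes must be multiples of 256):

from arcgis.learn import prepare_data, Pix2Pix

data = prepare_data(r"/path/to/pix2pix_chips")   # illustrative path
p2p = Pix2Pix(data)                              # UNet with a vanilla encoder by default; a ResNet encoder can be set via backbone
p2p.fit(epochs=10)
p2p.compute_metrics()                            # PSNR and SSIM (plus FID for RGB imagery)
p2p.show_results(rows=2)
p2p.save("pix2pix_model")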

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

compute_metrics(show_progress=True)

Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the validation set. Additionally, computes Frechet Inception Distance (FID) for RGB imagery only.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a Pix2Pix object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

Pix2Pix Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(path)

Predicts and displays the image.

Parameter

Description

img_path

Required string. Path of the input image.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

kwargs

rgb_bands

Optional list of integers (band numbers) to be considered for RGB visualization.

propertysupported_backbones

Supported backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Pix2PixHD

classarcgis.learn.Pix2PixHD(data,pretrained_path=None,*args,**kwargs)

Creates a model object which generates fake images of type B from type A.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

pretrained_path

Optional string. Path where the pre-trained model is saved.

kwargs

n_gen_filters

Optional int. Number of generator filters in the first conv layer. Default: 64

gen_network

Optional string (global/local). Selects the model to use for the generator. Use ‘global’ if GPU memory is limited. Default: “local”

n_downsample_global

Optional int. Number of downsampling layers in gen_network. Default: 4

n_blocks_global

Optional int. Number of residual blocks in the global generator network. Default: 9

n_local_enhancers

Optional int. Number of local enhancers to use. Default: 1

n_blocks_local

Optional int. Number of residual blocks in the local enhancer network. Default: 3

norm

Optional string. Instance normalization or batch normalization. Default: “instance”

lsgan

Optional bool. If True, use a least squares GAN; otherwise use a vanilla GAN. Default: True

n_dscr_filters

Optional int. Number of discriminator filters in the first conv layer. Default: 64

n_layers_dscr

Optional int. Only used if which_model_net_dscr==n_layers. Default: 3

n_dscr

Optional int. Number of discriminators to use. Default: 2

feat_loss

Optional bool. If ‘True’, use the discriminator feature matching loss. Default: True

vgg_loss

Optional bool. If ‘True’, use the VGG feature matching loss. Default: True (supported for 3 band imagery only).

lambda_feat

Optional int. Weight for the feature matching loss. Default: 10

lambda_l1

Optional int. Weight for the L1 loss. Default: 100 (not supported for 3 band imagery)

Returns:

Pix2PixHD Object
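
A minimal training sketch (the path is illustrative; generator settings shown are optional keyword arguments):

from arcgis.learn import prepare_data, Pix2PixHD

data = prepare_data(r"/path/to/pix2pixhd_chips")   # illustrative path
p2phd = Pix2PixHD(data, gen_network="global")      # 'global' generator needs less GPU memory than 'local'
p2phd.fit(epochs=10)
p2phd.save("pix2pixhd_model")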

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

compute_metrics(accuracy=True,show_progress=True)

Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the validation set. Additionally, computes Frechet Inception Distance (FID) for RGB imagery only.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a Pix2PixHD object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

Pix2PixHD Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(path)

Predicts and displays the image.

Parameter

Description

img_path

Required string. Path of the input image.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

kwargs

rgb_bands

Optional list of integers (band numbers) to be considered for RGB visualization.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

WNet_cGAN

classarcgis.learn.WNet_cGAN(data,pretrained_path=None,*args,**kwargs)

Creates a model object which generates images of type C from type A and type B.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

WNet_cGAN Object
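
A minimal training sketch (the path is illustrative and assumes chips exported with the type A, type B and target type C imagery):

from arcgis.learn import prepare_data, WNet_cGAN

data = prepare_data(r"/path/to/wnet_cgan_chips")   # illustrative path
wnet = WNet_cGAN(data)
wnet.fit(epochs=10)
wnet.compute_metrics()                             # PSNR and SSIM on the validation set
wnet.save("wnet_cgan_model")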

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these while calling the fit method.

compute_metrics(accuracy=True,show_progress=True)

Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the validation set.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates a WNet_cGAN object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

WNet_cGAN Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file’s state dict match the keys of the model’s Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path1,img_path2)

Predicts and displays the image. This method is only supported for RGB images.

Parameter

Description

img_path1

Required string. Path of the first input image.

img_path2

Required string. Path of the second input image.
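
For example, a hedged inference sketch (paths are illustrative and `wnet` is assumed to be a trained WNet_cGAN):

# Generate a type-C prediction from a type-A and a type-B input (RGB imagery only)
wnet.predict(r"/imagery/input_type_a.tif", r"/imagery/input_type_b.tif")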

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and a Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

SuperResolution

classarcgis.learn.SuperResolution(data,backbone=None,pretrained_path=None,*args,**kwargs)

Creates a model object which increases the resolution and improves the quality of images. Based on Fast.ai MOOC Lesson 7 and https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional string. Backbone CNN model to be used for creating the base of the SuperResolution model, which is resnet34 by default. Compatible backbones: 'SR3', 'SR3_UViT', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152'.

pretrained_path

Optional string. Path where the pre-trained model is saved.

In addition to the explicitly named parameters, the SuperResolution model with the 'SR3' backbone supports the following optional keyword arguments (a construction sketch using a few of them follows the Returns entry below):

kwargs

Parameter

Description

inner_channel

Optional int. Channel dimension. Default: 64.

norm_groups

Optional int. Number of groups for group normalization. Default: 32.

channel_mults

Optional list. Depth or channel multipliers. Default: [1, 2, 4, 4, 8, 8].

attn_res

Optional int. Number of attention layers in residual blocks. Default: 16.

res_blocks

Optional int. Number of resnet blocks. Default: 3.

dropout

Optional float. Dropout. Default: 0.

schedule

Optional string. Type of noise schedule. Available types are 'linear', 'warmup10', 'warmup50', 'const', 'jsd' and 'cosine'. Default: 'linear'.

n_timestep

Optional int. Number of time-steps. Default: 1000.

linear_start

Optional float. Schedule start. Default: 1e-06.

linear_end

Optional float. Schedule end. Default: 1e-02.

With the 'SR3_UViT' backbone, the following optional keyword arguments are supported:

patch_size

Optional int. Patch size for generating patch embeddings. Default: 16.

embed_dim

Optional int. Dimension of embeddings. Default: 768.

depth

Optional int. Depth of the model. Default: 17.

num_heads

Optional int. Number of attention heads. Default: 12.

mlp_ratio

Optional float. Ratio of the MLP. Default: 4.0.

qkv_bias

Optional bool. Addition of bias in the QKV vectors. Default: False.

Returns:

SuperResolution Object
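A minimal construction sketch for the 'SR3' backbone, using a few of the keyword arguments listed above; the exported-data path is a hypothetical placeholder and the values shown are illustrative, not recommendations.

# Create a diffusion-based SuperResolution model with the SR3 backbone.
from arcgis.learn import prepare_data, SuperResolution

data = prepare_data(r"path/to/exported_chips")  # hypothetical path; other prepare_data arguments omitted
sr3 = SuperResolution(
    data,
    backbone="SR3",
    inner_channel=64,                  # channel dimension
    channel_mults=[1, 2, 4, 4, 8, 8],  # depth / channel multipliers
    n_timestep=1000,                   # number of diffusion time-steps
    schedule="linear",                 # noise schedule
)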

propertyavailable_metrics

List of available metrics that are displayed in the training table. Set the monitor value to one of these when calling the fit method.

compute_metrics(accuracy=True,show_progress=True,**kwargs)

Computes the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the validation set.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is False.
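A minimal training sketch continuing the SR3 example above; the epoch count and monitored metric are illustrative choices, not recommendations.

# Inspect learning rates, then train with early stopping and checkpointing.
sr3.lr_find()              # plots losses against learning rates and marks a suggested value
sr3.fit(
    epochs=20,
    lr=None,               # None lets the API deduce an optimal learning rate
    early_stopping=True,   # stop if the monitored metric does not improve for 5 epochs
    checkpoint=True,       # keep the best model according to `monitor`
    monitor="valid_loss",
)
sr3.plot_losses()          # plot training and validation losses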

classmethodfrom_emd(data,emd_path)

Creates a SuperResolution object from an Esri Model Definition (EMD) file.

Parameter

Description

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

emd_path

Required string. Path to the Esri Model Definition file.

Returns:

SuperResolution Object

classmethodfrom_model(emd_path,data=None)

Creates a SuperResolution object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

SuperResolution Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path)

Predicts and displays the image.

Parameter

Description

img_path

Required string. Path to the input image.
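A minimal inference sketch, assuming a SuperResolution model was saved earlier; the model and image paths are hypothetical placeholders.

# Load a saved SuperResolution model and upscale a single image.
from arcgis.learn import SuperResolution

model = SuperResolution.from_model(r"models/super_res/super_res.emd")  # hypothetical path
model.predict(r"data/low_res_tile.jpg")  # displays the predicted high-resolution image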

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=None,**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

kwargs

sampling_type

Optional string. Type of sampling. Default: 'ddim'. These keyword arguments are applicable to the SR3 model type only.

n_timestep

Optional int. Number of time-steps for the sampling process. Default: 200.
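A minimal visualization sketch continuing the SR3 example; the sampling keyword arguments apply to the SR3 model type only, as noted above, and the values are illustrative.

# Display validation results with DDIM sampling.
sr3.show_results(rows=4, sampling_type="ddim", n_timestep=200)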

propertysupported_backbones

Supported backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ImageCaptioner

classarcgis.learn.ImageCaptioner(data,backbone=None,pretrained_path=None,**kwargs)

Creates an Image Captioning model.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

backbone

Optional function. Backbone CNN model to be used for creating the encoder of the ImageCaptioner, which is resnet34 by default. It supports the ResNet family of backbones.

pretrained_path

Optional string. Path where the pre-trained model is saved.

kwargs

Parameter

Description

decoder_params

Optional dictionary. The keys of the dictionary are embed_size, hidden_size, attention_size, teacher_forcing, dropout and pretrained_emb.

Default values:

decoder_params = {
    'embed_size': 100,
    'hidden_size': 100,
    'attention_size': 100,
    'teacher_forcing': 1,
    'dropout': 0.1,
    'pretrained_emb': False
}

Parameter Explanation:

  • ‘embed_size’: Size of embedding to be used during training.

  • ‘hidden_size’: Size of hidden layer.

  • ‘attention_size’: Size of intermediate attention layer.

  • ‘teacher_forcing’: Probability of teacher forcing.

  • ‘dropout’: Dropout probability.

  • ‘pretrained_emb’: If true, it will use fasttext embeddings.

Returns:

ImageCaptioner Object
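A minimal construction sketch using the decoder_params keyword argument described above; data is assumed to come from a prior prepare_data() call and the values mirror the documented defaults.

# Create an ImageCaptioner with an explicit decoder configuration.
from arcgis.learn import ImageCaptioner

captioner = ImageCaptioner(
    data,
    decoder_params={
        "embed_size": 100,
        "hidden_size": 100,
        "attention_size": 100,
        "teacher_forcing": 1,     # probability of teacher forcing
        "dropout": 0.1,
        "pretrained_emb": False,  # set True to use fasttext embeddings
    },
)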

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

bleu_score(**kwargs)

Computes the BLEU score over the validation set.

kwargs

Parameter

Description

beam_width

Optional int. The size of the beam to be used during beam search decoding. Default is 5.

max_len

Optional int. The maximum length of the sentence to be decoded. Default is 20.
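A minimal evaluation sketch continuing the ImageCaptioner example; the beam width and maximum length are illustrative.

# Compute the BLEU score on the validation set with a wider beam search.
score = captioner.bleu_score(beam_width=5, max_len=20)
print(score)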

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethodfrom_model(emd_path,data=None)

Creates an ImageCaptioner model from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Optional fastai Databunch. Returned data object from the prepare_data() function, or None for inferencing.

Returns:

ImageCaptioner Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(path,visualize=True,**kwargs)

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=4,**kwargs)

Shows the ground truth and predictions of the model side by side.

kwargs

Parameter

Description

beam_width

Optional int. The size of the beam to be used during beam search decoding. Default is 3.

max_len

Optional int. The maximum length of the sentence to be decoded. Default is 15.

propertysupported_backbones

Supported torchvision backbones for this model.

propertysupported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

3D Models

PointCNN

classarcgis.learn.PointCNN(data,pretrained_path=None,*args,**kwargs)

Model architecture from https://arxiv.org/abs/1801.07791. Creates a point cloud classification model.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

pretrained_path

Optional string. Path where the pre-trained model is saved.

kwargs

Parameter

Description

encoder_params

Optional dictionary. The keys of the dictionary are out_channels, P, K, D and m.

Examples:

{'out_channels': [16, 32, 64, 96],
'P': [-1, 768, 384, 128],
'K': [12, 16, 16, 16],
'D': [1, 1, 2, 2],
'm': 8
}

The lengths of out_channels, P, K and D should be the same; the length denotes the number of layers in the encoder. A construction sketch using these keys follows the Returns entry below.

Parameter Explanation

  • ‘out_channels’: Number of channels produced by each layer,

  • ‘P’: Number of points in each layer,

  • ‘K’: Number of K-nearest neighbor in each layer,

  • ‘D’: Dilation in each layer,

  • ‘m’: Multiplier which is multiplied by each element of out_channel.

dropout

Optional float. This parameter controls overfitting. The range of this parameter is [0, 1).

sample_point_num

Optional integer. The number of points that the model will actually process.

focal_loss

Optional boolean. If True, it will use focal loss.Default: False

Returns:

PointCNN Object
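A minimal construction sketch mirroring the encoder_params example above; data is assumed to come from prepare_data() on an exported point cloud dataset and the values are illustrative.

# Create a PointCNN classifier with a custom encoder.
from arcgis.learn import PointCNN

pc_model = PointCNN(
    data,
    encoder_params={
        "out_channels": [16, 32, 64, 96],  # channels produced by each encoder layer
        "P": [-1, 768, 384, 128],          # points in each layer
        "K": [12, 16, 16, 16],             # K-nearest neighbors per layer
        "D": [1, 1, 2, 2],                 # dilation per layer
        "m": 8,                            # multiplier applied to out_channels
    },
    dropout=0.2,                           # helps control overfitting
    focal_loss=False,
)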

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

compute_precision_recall()

Computes precision, recall and f1-score on the validation sets.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,mixed_precision=False,**kwargs)

Trains the model for the specified number of epochs using the specified learning rates. The precision, recall and F1 scores shown in the training table are macro-averaged over all classes.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.The default value is ‘False’.

kwargs

Parameter

Description

iters_per_epoch

Optional integer. The number of iterations to run during the training phase.

classmethodfrom_model(emd_path,data=None)

Creates a PointCNN model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

PointCNN Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path,output_path=None,**kwargs)

This method is used for inferencing using HDF files.

Parameter

Description

path

Required string. The path to the folder containing the HDF files to be predicted.

output_path

Optional string. The path to the folder where the resulting HDF files will be written. Defaults to a results folder in the input path.

kwargs

Parameter

Description

batch_size

Optional integer. The number of blocks to process in one batch. Default is set to 1.

Returns:

Path where files are dumped.
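A minimal inference sketch continuing the PointCNN example; the input and output folders are hypothetical placeholders.

# Run inference on a folder of exported HDF blocks.
results_path = pc_model.predict_h5(
    path=r"data/exported_h5_blocks",        # hypothetical folder of HDF files to predict
    output_path=r"data/predicted_blocks",   # hypothetical output folder
    batch_size=2,                           # number of blocks processed per batch
)
print(results_path)  # folder where the predicted HDF files were written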

predict_las(path,output_path=None,print_metrics=False,**kwargs)

Note: This method has been deprecated starting from ArcGIS API for Python version 1.9.0. Use the Classify Points Using Trained Model tool, available in the 3D Analyst extension from ArcGIS Pro 2.8 onwards.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results from your model on the validation set, with ground truth on the left and predictions on the right. Visualization of data exported in a geographic coordinate system is not yet supported.

Parameter

Description

rows

Optional int. Number of rows to show. The default value is 2, and the maximum value is the batch_size passed in prepare_data().

kwargs

Parameter

Description

color_mapping

Optional dictionary. Mapping from class values to RGB values. Default value example: {0: [220, 220, 220], 2: [255, 0, 0], 6: [0, 255, 0]}.

mask_class

Optional list of integers. Array containing class values to mask. Use this parameter to display the classes of interest. Default value is []. Example: if all the classes are [0, 1, 2], to display only class 0 set the mask_class parameter to [1, 2]. The list of all classes can be accessed from the data.classes attribute, where data is the Databunch object returned by the prepare_data() function.

width

Optional integer. Width of the plot. Default value is 750.

height

Optional integer. Height of the plot. Default value is 512.

max_display_point

Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total number of points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points; once set, it will be used for future calls.
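A minimal visualization sketch continuing the PointCNN example; the class values, colors and mask are illustrative.

# Display validation results, masking one class and coloring the rest.
pc_model.show_results(
    rows=2,
    color_mapping={0: [220, 220, 220], 2: [255, 0, 0], 6: [0, 255, 0]},
    mask_class=[2],            # hide class 2 to focus on the classes of interest
    max_display_point=20000,
)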

unfreeze()

Not implemented for this model, as none of the layers are frozen by default.

RandLANet

classarcgis.learn.RandLANet(data,pretrained_path=None,*args,**kwargs)

Model architecture from https://arxiv.org/pdf/1911.11236v3.pdf. Creates a RandLANet point cloud segmentation model.

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data function.

pretrained_path

Optional String. Path where pre-trained modelis saved.

kwargs

Parameter

Description

encoder_params

Optional dictionary. The keys of the dictionary are out_channels, sub_sampling_ratio and k_n.

Examples:

{'out_channels': [16, 64, 128, 256], 'sub_sampling_ratio': [4, 4, 4, 4], 'k_n': 16}

The lengths of out_channels and sub_sampling_ratio should be the same; the length denotes the number of layers in the encoder.

Parameter Explanation
  • ‘out_channels’: Number of channels produced by each layer,

  • ‘sub_sampling_ratio’: Sampling ratio of random sampling at each layer,

  • ‘k_n’: Number of K-nearest neighbor for a point.

focal_loss

Optional boolean. If True, it will use focal loss.Default: False

Returns:

RandLANet Object
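A minimal construction sketch mirroring the encoder_params example above; data is assumed to come from prepare_data() on an exported point cloud dataset.

# Create a RandLANet segmentation model with a custom encoder.
from arcgis.learn import RandLANet

rl_model = RandLANet(
    data,
    encoder_params={
        "out_channels": [16, 64, 128, 256],  # channels produced by each layer
        "sub_sampling_ratio": [4, 4, 4, 4],  # random-sampling ratio per layer
        "k_n": 16,                           # K-nearest neighbors per point
    },
    focal_loss=True,                         # use focal loss
)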

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

compute_precision_recall()

Computes precision, recall and f1-score on the validation sets.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates. The precision, recall and f1 scoresshown in the training table are macro averaged over all classes.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.The default value is ‘False’.

kwargs

Parameter

Description

iters_per_epoch

Optional integer. The number of iterationsto run during the training phase.

classmethodfrom_model(emd_path,data=None)

Creates a RandLANet model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

RandLANet Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path,output_path=None,**kwargs)

This method is used for inferencing using HDF files.

Parameter

Description

path

Required string. The path to folder where the HDFfiles which needs to be predicted are present.

output_path

Optional string. The path to folder where to dumpthe resulting HDF files. Defaults toresultsfolder in input path.

kwargs

Parameter

Description

batch_size

Optional integer. The number of blocks to processin one batch. Default is set to 1.

Returns:

Path where files are dumped.

predict_las(path,output_path=None,print_metrics=False,**kwargs)

Note: This method has been deprecated starting fromArcGIS API forPython version 1.9.0.UseClassify Points Using Trained Model tool available in 3D Analystextension from ArcGIS Pro 2.8 onwards.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results from your model on the validation setwith ground truth on the left and predictions on the right.Visualization of data, exported in a geographic coordinate systemis not yet supported.

Parameter

Description

rows

Optional rows. Number of rows to show. Defaultvalue is 2 and maximum value is thebatch_sizepassed inprepare_data().

kwargs

Parameter

Description

color_mapping

Optional dictionary. Mapping from class valueto RGB values. Default value example:{0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.

mask_class

Optional list of integers. Array containingclass values to mask. Use this parameter todisplay the classes of interest.Default value is [].Example: All the classes are in [0, 1, 2]to display only class0 set the mask classparameter to be [1, 2]. List of all classescan be accessed fromdata.classes attributewheredata is theDatabunch object returnedbyprepare_data() function.

width

Optional integer. Width of the plot. Defaultvalue is 750.

height

Optional integer. Height of the plot. Defaultvalue is 512.

max_display_point

Optional integer. Maximum number of pointsto display. Default is 20000. A warning willbe raised if the total points to display exceedsthis parameter. Setting this parameter willrandomly sample the specified number of pointsand once set, it will be used for future uses.

unfreeze()

Not implemented for this model asnone of the layers are frozen by default.

SQNSeg

classarcgis.learn.SQNSeg(data,pretrained_path=None,*args,**kwargs)

Model architecture from https://arxiv.org/pdf/2104.04891.pdf. Creates an SQNSeg point cloud segmentation model.

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data function.

pretrained_path

Optional String. Path where pre-trained modelis saved.

kwargs

Parameter

Description

encoder_params

Optional dictionary. The keys of the dictionary areout_channels,sub_sampling_ratio,k_n.

Examples:

{‘out_channels’:[16, 64, 128, 256],‘sub_sampling_ratio’:[4, 4, 4, 4],‘k_n’:16}

Length ofout_channels andsub_sampling_ratio should be same.The length denotes the number of layers in encoder.

Parameter Explanation
  • ‘out_channels’: Number of channels produced by each layer,

  • ‘sub_sampling_ratio’: Sampling ratio of random sampling at each layer,

  • ‘k_n’: Number of K-nearest neighbor for a point.

focal_loss

Optional boolean. If True, it will use focal loss.Default: False

Returns:

SQNSeg Object

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

compute_precision_recall()

Computes precision, recall and f1-score on the validation sets.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates. The precision, recall and f1 scoresshown in the training table are macro averaged over all classes.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.The default value is ‘False’.

kwargs

Parameter

Description

iters_per_epoch

Optional integer. The number of iterationsto run during the training phase.

classmethodfrom_model(emd_path,data=None)

Creates an SQNSeg model object from a Deep Learning Package(DLPK)or Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

SQNSeg Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path,output_path=None,**kwargs)

This method is used for inferencing using HDF files.

Parameter

Description

path

Required string. The path to folder where the HDFfiles which needs to be predicted are present.

output_path

Optional string. The path to folder where to dumpthe resulting HDF files. Defaults toresultsfolder in input path.

kwargs

Parameter

Description

batch_size

Optional integer. The number of blocks to processin one batch. Default is set to 1.

Returns:

Path where files are dumped.

predict_las(path,output_path=None,print_metrics=False,**kwargs)

Note: This method has been deprecated starting fromArcGIS API forPython version 1.9.0.UseClassify Points Using Trained Model tool available in 3D Analystextension from ArcGIS Pro 2.8 onwards.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results from your model on the validation setwith ground truth on the left and predictions on the right.Visualization of data, exported in a geographic coordinate systemis not yet supported.

Parameter

Description

rows

Optional rows. Number of rows to show. Defaultvalue is 2 and maximum value is thebatch_sizepassed inprepare_data().

kwargs

Parameter

Description

color_mapping

Optional dictionary. Mapping from class valueto RGB values. Default value example:{0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.

mask_class

Optional list of integers. Array containingclass values to mask. Use this parameter todisplay the classes of interest.Default value is [].Example: All the classes are in [0, 1, 2]to display only class0 set the mask classparameter to be [1, 2]. List of all classescan be accessed fromdata.classes attributewheredata is theDatabunch object returnedbyprepare_data() function.

width

Optional integer. Width of the plot. Defaultvalue is 750.

height

Optional integer. Height of the plot. Defaultvalue is 512.

max_display_point

Optional integer. Maximum number of pointsto display. Default is 20000. A warning willbe raised if the total points to display exceedsthis parameter. Setting this parameter willrandomly sample the specified number of pointsand once set, it will be used for future uses.

unfreeze()

Not implemented for this model asnone of the layers are frozen by default.

PTv3Seg

classarcgis.learn.PTv3Seg(data,pretrained_path=None,*args,**kwargs)

Model architecture from https://arxiv.org/pdf/2312.10035. Creates a PTv3Seg point cloud segmentation model.

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data function.

pretrained_path

Optional String. Path where pre-trained modelis saved.

kwargs

Parameter

Description

sub_sampling_ratio

Optional int. Sampling ratio of points in each layer. Default: 2.

seq_len

Optional int. Sequence length for the transformer. Default: 1024.

voxel_size

Optional float. Defines the size of voxels in meters for a block. Default: 0.02.

focal_loss

Optional boolean. If True, focal loss will be used. Default: False.

Returns:

PTv3Seg Object
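A minimal construction sketch using the optional keyword arguments listed above; data is assumed to come from prepare_data() and the values mirror the documented defaults.

# Create a PTv3Seg point cloud segmentation model.
from arcgis.learn import PTv3Seg

ptv3 = PTv3Seg(
    data,
    sub_sampling_ratio=2,  # sampling ratio of points in each layer
    seq_len=1024,          # transformer sequence length
    voxel_size=0.02,       # voxel size in meters for a block
    focal_loss=False,
)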

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

compute_precision_recall()

Computes precision, recall and f1-score on the validation sets.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates. The precision, recall and f1 scoresshown in the training table are macro averaged over all classes.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.The default value is ‘False’.

kwargs

Parameter

Description

iters_per_epoch

Optional integer. The number of iterationsto run during the training phase.

classmethodfrom_model(emd_path,data=None)

Creates a PTv3Seg model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

PTv3Seg Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path,output_path=None,**kwargs)

This method is used for inferencing using HDF files.

Parameter

Description

path

Required string. The path to folder where the HDFfiles which needs to be predicted are present.

output_path

Optional string. The path to folder where to dumpthe resulting HDF files. Defaults toresultsfolder in input path.

kwargs

Parameter

Description

batch_size

Optional integer. The number of blocks to processin one batch. Default is set to 1.

Returns:

Path where files are dumped.

predict_las(path,output_path=None,print_metrics=False,**kwargs)

Note: This method has been deprecated starting fromArcGIS API forPython version 1.9.0.UseClassify Points Using Trained Model tool available in 3D Analystextension from ArcGIS Pro 2.8 onwards.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,**kwargs)

Displays the results from your model on the validation setwith ground truth on the left and predictions on the right.Visualization of data, exported in a geographic coordinate systemis not yet supported.

Parameter

Description

rows

Optional rows. Number of rows to show. Defaultvalue is 2 and maximum value is thebatch_sizepassed inprepare_data().

kwargs

Parameter

Description

color_mapping

Optional dictionary. Mapping from class valueto RGB values. Default value example:{0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.

mask_class

Optional list of integers. Array containingclass values to mask. Use this parameter todisplay the classes of interest.Default value is [].Example: All the classes are in [0, 1, 2]to display only class0 set the mask classparameter to be [1, 2]. List of all classescan be accessed fromdata.classes attributewheredata is theDatabunch object returnedbyprepare_data() function.

width

Optional integer. Width of the plot. Defaultvalue is 750.

height

Optional integer. Height of the plot. Defaultvalue is 512.

max_display_point

Optional integer. Maximum number of pointsto display. Default is 20000. A warning willbe raised if the total points to display exceedsthis parameter. Setting this parameter willrandomly sample the specified number of pointsand once set, it will be used for future uses.

unfreeze()

Not implemented for this model asnone of the layers are frozen by default.

MMDetection3D

classarcgis.learn.MMDetection3D(data,model='SECOND',pretrained_path=None,**kwargs)

Parameter

Description

data

Required fastai Databunch. Returned data object fromprepare_data() function.

model

Required model name or path to the configuration file from the MMDetection3D repository. The list of supported models can be queried using supported_models.

pretrained_path

Optional string. Path where pre-trained model issaved.

kwargs

Parameter

Description

voxel_parms

Optional dictionary. The keys of the dictionary are voxel_size, voxel_points, and max_voxels. The default values of voxel_size, voxel_points, and max_voxels are automatically calculated based on the 'block size', 'object size' and 'average number of points per block' of the exported data.

Example:
{'voxel_size': [0.05, 0.05, 0.1],
'voxel_points': 10,
'max_voxels': [20000, 40000]
}

Parameter Explanation:

  • 'voxel_size': List of voxel dimensions in meters [x, y, z],

  • 'voxel_points': An int that decides the maximum number of points per voxel,

  • 'max_voxels': List of the maximum number of voxels in [training, validation].

Default: None.

Returns:

MMDetection3D Object
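A minimal construction sketch mirroring the voxel_parms example above; data is assumed to come from prepare_data() on an exported 3D object detection dataset and the values are illustrative.

# Create a SECOND-based 3D detector with explicit voxel parameters.
from arcgis.learn import MMDetection3D

detector = MMDetection3D(
    data,
    model="SECOND",
    voxel_parms={
        "voxel_size": [0.05, 0.05, 0.1],  # voxel dimensions in meters [x, y, z]
        "voxel_points": 10,               # maximum points per voxel
        "max_voxels": [20000, 40000],     # maximum voxels for [training, validation]
    },
)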

propertyavailable_metrics

List of available metrics that are displayed in the trainingtable. Setmonitor value to be one of these while callingthefit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,nms_overlap=0.2,mean=False,**kwargs)

Computes average precision on the validation/train set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision. Default: 0.3.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive. Default: 0.1.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive. Default: 0.01.

mean

Optional bool. If False, returns class-wise average precision; otherwise, returns mean average precision. Default: False.

kwargs

Parameter

Description

view_type

Optional string. Dataset type to display the results.

  • valid - For validation set.

  • train - For training set.

Default: ‘valid’.

Returns:

dict if mean is False, otherwise float.
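A minimal evaluation sketch continuing the MMDetection3D example; the thresholds shown are the defaults from the signature above.

# Class-wise and mean average precision on the validation set.
per_class_ap = detector.average_precision_score(detect_thresh=0.2, iou_thresh=0.1, nms_overlap=0.2)
mean_ap = detector.average_precision_score(detect_thresh=0.2, iou_thresh=0.1, nms_overlap=0.2, mean=True)
print(per_class_ap, mean_ap)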

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using thespecified learning rates

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitor value stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save a checkpoint during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is 'False'.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch-based models are supported. This feature is experimental. The default value is 'False'.
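
Continuing the sketch above, a typical training call looks like the following; the epoch count is illustrative, and it assumes lr_find returns a suggested rate (otherwise leave lr=None and let fit deduce one).

# Find a learning rate, then train with checkpointing and early stopping.
lr = model.lr_find()
model.fit(epochs=20, lr=lr, early_stopping=True, checkpoint=True, monitor="valid_loss")
model.plot_losses()                # training vs. validation loss curves
print(model.available_metrics)     # metrics that can be used as the monitor value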

classmethod from_model(emd_path, data=None)

Creates an MMDetection3D object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Required fastai Databunch or None. Returned data object from the prepare_data() function, or None for inferencing.

Returns:

MMDetection3D Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name of or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,**kwargs)

Runs the Learning Rate Finder. Helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is 'True'.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path,output_path=None,**kwargs)

This method is used for inferencing using HDF files.

Parameter

Description

path

Required string. The path to the folder containing the HDF files to be predicted.

output_path

Optional string. The path to the folder where the resulting HDF files are written. Defaults to a 'results' folder in the input path.

kwargs

Parameter

Description

batch_size

Optional integer. The number of blocks to process in one batch. Default is set to 1.

detect_thresh

Optional float. The probability above which a detection will be considered valid. Default: 0.1.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive. Default: 0.6.

Returns:

Path where files are dumped.
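
A usage sketch with hypothetical folder paths and illustrative threshold values:

# Run inference on a folder of HDF blocks exported for prediction.
out_dir = model.predict_h5(
    r"C:\data\pointcloud_chips\h5_blocks",   # folder containing the HDF files
    output_path=r"C:\data\predictions",      # if omitted, a 'results' folder in the input path is used
    batch_size=2,
    detect_thresh=0.3,
    nms_overlap=0.5,
)
print(out_dir)   # path where the resulting HDF files were written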

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed, the model is stored at the specified path, with the model name as the directory name, and all intermediate directories are created.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (tensorflow backend only) and RetinaNet (tensorflow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.
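
A save/reload sketch; the model name and EMD path are illustrative.

from arcgis.learn import MMDetection3D

# Save weights plus the EMD/DLPK for deployment.
model.save("mmdet3d_trees", framework="PyTorch", publish=False)

# Later, recreate the model from the saved definition for inferencing.
restored = MMDetection3D.from_model(r"models/mmdet3d_trees/mmdet3d_trees.emd", data=None)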

show_results(rows=2,detect_thresh=0.3,nms_overlap=0.01,**kwargs)

Displays the results of the trained model on a part of the validation/train set. Colors of the point cloud are only used for better visualization and do not depict the actual classcode colors. Visualization of data exported in a geographic coordinate system is not yet supported.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

detect_thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

kwargs

Parameter

Description

color_mapping

Optional dictionary. Mapping from object id to RGB values. Colors of the point cloud via color_mapping are only used for better visualization and do not depict the actual classcode colors. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.

max_display_point

Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total number of points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points, and once set, it will be used for future uses.

view_type

Optional string. Dataset type to display theresults.

  • valid - For validation set.

  • train - For training set.

Default: ‘valid’.

supported_models=['SECOND']

List of models supported by this class.

unfreeze()

Not implemented for this model, as none of the layers are frozen by default.

PTv3Det

class arcgis.learn.PTv3Det(data, pretrained_path=None, **kwargs)

Model architecture from https://arxiv.org/pdf/2312.10035. Creates a PTv3Det point cloud detection model.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data function.

pretrained_path

Optional string. Path where a pre-trained model is saved.

kwargs

Parameter

Description

voxel_parms

Optional dictionary. The keys of the dictionary are voxel_size, voxel_points, and max_voxels. The default values of voxel_size, voxel_points, and max_voxels are automatically calculated based on the 'block size', 'object size' and 'average number of points per block' of the exported data.

Example:
{'voxel_size': [0.05, 0.05, 0.1],
'voxel_points': 10,
'max_voxels': [20000, 40000],
}

Parameter Explanation:

  • 'voxel_size': List of voxel dimensions in meters [x, y, z].

  • 'voxel_points': An int that sets the maximum number of points per voxel.

  • 'max_voxels': List of the maximum number of voxels for [training, validation].

Default: None.

seq_len

Optional int. Sequence length for the transformer. Default: 1024.

Returns:

PTv3Det Object
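
A minimal sketch, reusing a data object prepared for point cloud detection; the seq_len value, epoch count, and model name are illustrative.

from arcgis.learn import PTv3Det

model = PTv3Det(data, seq_len=1024)   # seq_len: transformer sequence length
model.fit(epochs=20)
model.save("ptv3det_model")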

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.2,iou_thresh=0.1,nms_overlap=0.2,mean=False,**kwargs)

Computes average precision on the validation/train set for each class.

Parameter

Description

detect_thresh

Optional float. The probability above whicha detection will be considered for computingaverage precision.Default: 0.3.

iou_thresh

Optional float. The intersection over unionthreshold with the ground truth labels, abovewhich a predicted bounding box will beconsidered a true positive.Default: 0.1.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.Default: 0.01.

mean

Optional bool. If False returns class-wiseaverage precision otherwise returns meanaverage precision.Default: False.

kwargs

Parameter

Description

view_type

Optional string. Dataset type to display theresults.

  • valid - For validation set.

  • train - For training set.

Default: ‘valid’.

Returns:

dict if mean is False, otherwise float

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethod from_model(emd_path, data=None)

Creates a PTv3Det object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

PTv3Det Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path,output_path=None,**kwargs)

This method is used for inferencing using HDF files.

Parameter

Description

path

Required string. The path to folder where the HDFfiles which needs to be predicted are present.

output_path

Optional string. The path to folder where to dumpthe resulting HDF files. Defaults toresultsfolder in input path.

kwargs

Parameter

Description

batch_size

Optional integer. The number of blocks to processin one batch. Default is set to 1.

detect_thresh

Optional float. The probability above whicha detection will be considered valid.Default: 0.1.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.Default: 0.6.

Returns:

Path where files are dumped.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=2,detect_thresh=0.3,nms_overlap=0.01,**kwargs)

Displays the results of the trained model on a part of validation/train set.Colors of the PointCloud are only used for better visualization, and it doesnot depict the actual classcode colors. Visualization of data, exported in ageographic coordinate system is not yet supported.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

detect_thresh

Optional float. The probability above whicha detection will be considered valid.

nms_overlap

Optional float. The intersection over unionthreshold with other predicted boundingboxes, above which the box with the highestscore will be considered a true positive.

kwargs

Parameter

Description

color_mapping

Optional dictionary. Mapping from object idto RGB values. Colors of the PointCloud viacolor_mapping are only used for bettervisualization, and it does not depict theactual classcode colors. Default value example:{0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.

max_display_point

Optional integer. Maximum number of pointsto display. Default is 20000. A warning willbe raised if the total points to display exceedsthis parameter. Setting this parameter willrandomly sample the specified number of pointsand once set, it will be used for future uses.

view_type

Optional string. Dataset type to display theresults.

  • valid - For validation set.

  • train - For training set.

Default: ‘valid’.

unfreeze()

Not implemented for this model, as none of the layers are frozen by default.

Object Tracking Models

SiamMask

class arcgis.learn.SiamMask(data=None, **kwargs)

Creates a SiamMask object.

Parameter

Description

data

Optional fastai Databunch. Returned data object from the prepare_data() function with dataset_type as 'ObjectTracking' and data format as 'YouTube-VOS'. Default value is None.

Returns:

SiamMask Object
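
A minimal training sketch; the folder path, batch size, and epoch count are illustrative, while the dataset_type and data format follow the description above.

from arcgis.learn import prepare_data, SiamMask

data = prepare_data(r"C:\data\vos_training", dataset_type="ObjectTracking", batch_size=4)

model = SiamMask(data)
model.fit(epochs=10)
model.save("siammask_vehicles")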

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

compute_metrics(iou_thres=0.2)

Computes mean IOU and f-measure on validation set.

Parameter

Description

iou_thresh

Optional float. The intersection over union threshold with the ground truth mask, above which a predicted mask will be considered a true positive.

Returns:

dict with mean IOU and F-Measure

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

freeze()

Freezes the pretrained backbone.

classmethod from_model(emd_path, data=None)

Creates a SiamMask object tracker from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

SiamMask Object

init(frame,detections,labels=None,reset=True,**kwargs)

Initializes the position of the object in the frame/Image using detections.

Parameter

Description

frame

Required numpy array. frame is used to initialize the objects to track.

detections

Required list. A list of bounding boxes.

labels

Optional list. A list of labels corresponding to the bounding boxes.

Returns:

Track list

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch-based models are supported. The default value is 'False'.

plot_losses()

Plot validation and training losses after fitting the model.

remove(track_ids)

Removes the tracks from the track list using track_ids

Parameter

Description

track_ids

Required list. List of track ids to be removed from the track list.

Returns:

Updated track list

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5)

Displays the results of a trained model on a part of the validation set

Parameter

Description

rows

Optional int. Number of rows to display.

property supported_backbones

Supported torchvision backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

update(frame,**kwargs)

Tracks the position of the object in the frame/Image

Parameter

Description

frame

Required numpy array. frame is used to update the object track.

kwargs

Parameter

Description

detections

Optional list. A list of bounding boxes.

labels

Optional list. A list of labels.

Returns:

Updated track list
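
An inference sketch that reads video frames with OpenCV; the video path, EMD path, and the [x, y, width, height] bounding-box format are assumptions.

import cv2
from arcgis.learn import SiamMask

tracker = SiamMask.from_model(r"models/siammask_vehicles/siammask_vehicles.emd")

cap = cv2.VideoCapture("input_video.mp4")
ok, frame = cap.read()

# Seed the tracker with one or more bounding boxes on the first frame.
tracks = tracker.init(frame, detections=[[150, 200, 80, 60]], labels=["vehicle"])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    tracks = tracker.update(frame)   # updated track list for this frame

cap.release()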

DeepSort

class arcgis.learn.DeepSort(data, **kwargs)

Creates a DeepSort object.

Parameter

Description

data

Fastai Databunch. Returned data object from the prepare_data() function with dataset_type=Imagenet. Default value is None. DeepSort only supports an image size of (3, 128, 64).

Returns:

DeepSort Object
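
A minimal training sketch; the chip folder, batch size, and epoch count are illustrative, while the dataset_type and chip size follow the description above.

from arcgis.learn import prepare_data, DeepSort

# DeepSort trains a re-identification network on chips of size (3, 128, 64).
data = prepare_data(r"C:\data\reid_chips", dataset_type="Imagenet", batch_size=64)

model = DeepSort(data)
model.fit(epochs=30)
model.save("deepsort_reid")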

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethod from_model(emd_path, data=None)

Creates a DeepSort Object tracker from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_data() function or None forinferencing.

Returns:

DeepSort Object

init(frame,detections=None,labels=None,scores=None,**kwargs)

Initializes the DeepSort tracker for inference.

Parameter

Description

frame

Required numpy array. Frame is used to initialize the tracker.

detections

Required list. A list of bounding boxes corresponding to the detections.

labels

Optional list. A list of labels corresponding to the detections.

scores

Optional list. A list of scores corresponding to the detections.

Returns:

Track list

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

remove(track_ids)

Removes the tracks from the track list using track_ids.

Parameter

Description

track_ids

Required list. List of track ids to be removed from the track list.

Returns:

Updated track list

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. Itstores it at the pre-defined location. If pathis passed then it stores at the specified pathwith model name as directory name and createsall the intermediate directories.

framework

Optional string. Exports the model in thespecified framework format (‘PyTorch’, ‘tflite’‘torchscript’, and ‘TF-ONXX’ (deprecated)).Only models saved with the default framework(PyTorch) can be loaded usingfrom_model.tflite framework (experimental support) issupported bySingleShotDetector- tensorflow backend only,RetinaNet - tensorflowbackend only.``torchscript`` format is supported bySiamMask,MaskRCNN,SingleShotDetector,YOLOv3 andRetinaNet.For usage of SiamMask model in ArcGIS Pro >= 2.8,load thePyTorch framework saved modeland export it withtorchscript frameworkusing ArcGIS API for Python >= v1.8.5.For usage of SiamMask model in ArcGIS Pro 2.9,set framework totorchscript and use themodel files additionally generated inside‘torch_scripts’ folder.If framework isTF-ONNX (Only supported forSingleShotDetector),batch_size can be passed as an optionalkeyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object.Used for publishing the item. If not specifiedthen active gis user is taken.

compute_metrics

Optional boolean. Used for computing modelmetrics.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference filealong with the model.If False, the model will not work with ArcGIS Pro 2.6or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of resultsto be displayed.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

update(frame,detections=None,labels=None,scores=None,**kwargs)

Updates the DeepSort tracker.

Parameter

Description

frame

Required numpy array. Frame is used to update the tracker.

detections

Required list. A list of bounding boxes corresponding to the detections. Bounding box format: [xmin, ymin, width, height].

labels

Optional list. A list of labels corresponding to the detections.

scores

Optional list. A list of scores corresponding to the detections.

Returns:

Track list

ObjectTracker

class arcgis.learn.ObjectTracker(tracker, detector=None, tracker_options={'detect_fail_interval': 5, 'detect_track_failure': True, 'detection_interval': 5, 'detection_threshold': 0.3, 'enable_post_processing': True, 'knn_distance_ratio': 0.75, 'min_obj_size': 10, 'recover_conf_threshold': 0.1, 'recover_iou_threshold': 0.1, 'recover_track': True, 'search_period': 60, 'stab_period': 6, 'status_fail_threshold': 0.6, 'status_history': 60, 'template_history': 25})

Creates an ObjectTracker object.

Parameter

Description

tracker

Required. Returned tracker object from the from_model API of object tracking models.

detector

Optional. Returned detector object from the from_model API of object detection models.

tracker_options

Optional dictionary. A dictionary with keys as parameter names and values as parameter values.

  • 'enable_post_processing' - refers to the flag which enables/disables post-processing of tracks internal to the ObjectTracker module. For DeepSort, it's recommended to keep this flag as False. Default - True

  • 'detection_interval' - refers to the interval in frames at which the detector is invoked. It should be >= 1

  • 'detection_threshold' - refers to the lower threshold for selecting the detections.

  • 'detect_track_failure' - refers to the flag which enables/disables the logic to detect whether the object appearance has changed detection.

  • 'recover_track' - refers to the flag which enables/disables track recovery post failure.

  • 'stab_period' - refers to the number of frames after which post processing starts.

  • 'detect_fail_interval' - refers to the number of frames after which to detect track failure.

  • 'min_obj_size' - refers to the size in pixels below which tracking is assumed to have failed.

  • 'template_history' - refers to the number of frames before the current frame at which the template image is fetched.

  • 'status_history' - refers to the number of frames over which the status of the track is used to detect track failure.

  • 'status_fail_threshold' - refers to the threshold for the ratio between the number of frames for which the object is searched for and the total number of frames, which needs to be crossed for track failure detection.

  • 'search_period' - refers to the number of frames for which the object is searched for before declaring that the object is lost.

  • 'knn_distance_ratio' - refers to the threshold for the ratio of the distances between the template descriptor and the two best matched detection descriptors, used for filtering best matches.

  • 'recover_conf_threshold' - refers to the minimum confidence value above which recovery logic is enabled.

  • 'recover_iou_threshold' - refers to the minimum overlap between template and detection for successful recovery.

Returns:

ObjectTracker Object

init(frame,detections=None,labels=None,reset=True)

Initializes tracks based on detections returned by the detector or manually fed to the function.

Parameter

Description

frame

Required numpy array. frame is used to initialize the objects to track.

detections

Optional list. A list of bounding boxes to initialize the tracks.

labels

Optional list. A list of labels corresponding to the detections.

reset

Optional flag. Indicates whether to reset the tracker and remove all existing tracks before initialization.

Returns:

list of active track objects

remove(tracks_ids)

Removes the tracks corresponding to track_ids parameter.

Parameter

Description

tracks_ids

Required list. List of track ids to be removed.

update(frame)

Tracks the position of the object in the frame/Image.

Parameter

Description

frame

Required numpy array. frame is the current frame to be used to track the objects.

Returns:

list of active track objects
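
A sketch that combines a detector and a tracker; the model paths, the video path, and the choice of YOLOv3 as the detector are illustrative, and any tracker_options not supplied keep their defaults.

import cv2
from arcgis.learn import SiamMask, YOLOv3, ObjectTracker

detector = YOLOv3.from_model(r"models/detector/detector.emd")
tracker = SiamMask.from_model(r"models/siammask/siammask.emd")

object_tracker = ObjectTracker(
    tracker,
    detector=detector,
    tracker_options={"detection_interval": 5, "detection_threshold": 0.5},
)

cap = cv2.VideoCapture("input_video.mp4")
ok, frame = cap.read()
tracks = object_tracker.init(frame)          # detections come from the detector

while True:
    ok, frame = cap.read()
    if not ok:
        break
    tracks = object_tracker.update(frame)    # list of active Track objects

cap.release()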

Track

class arcgis.learn.Track(id, label, bbox, mask)

Creates a Track object, used to maintain the state of a track.

Parameter

Description

id

Required int. ID for each initialized track.

label

Required string. Label/class name of the track.

bbox

Required list. Bounding box of the track.

mask

Required numpy array. Mask for the track.

Returns:

Track Object

Scanned Maps

ScannedMapDigitizer

class arcgis.learn.ScannedMapDigitizer(input_folder, output_folder)

Creates an object of the ScannedMapDigitizer class.

Parameter

Description

input_folder

Path to the folder that contains the extracted maps.

output_folder

Path to the folder where intermediate results are generated.

classmethod create_mask(color_list, color_delta=60, kernel_size=None, kernel_type='rect', show_result=True)

Generates the binary masked images.

Parameter

Description

color_list

A list containing different color inputs in list/tuple format [(r, g, b)]. For example: [[110, 10, 200], [210, 108, 11]].

color_delta

A value which defines the range around the threshold value for a specific color used for creating the mask images. Default value is 60.

kernel_size

A list of 2 integers corresponding to the sizes of the morphological filter operations, closing and opening respectively.

kernel_type

A string value defining the type/shape of the kernel. The kernel type can be "rect", "elliptical" or "cross". Default value is "rect".

show_result

A boolean value. Set to "True" to visualize results and set to "False" otherwise.

classmethod create_template_image(color, color_delta=10, kernel_size=2, show_result=True)

This method generates templates and color masks from scanned maps, which are used in the subsequent step of template matching.

Parameter

Description

color

A list containing the r, g, b values representing the land color. The color parameter is required for extracting the land region and generating the binary mask.

color_delta

A value which defines the range around the threshold value for a specific color used for creating the mask images. Default value is 60.

kernel_size

An integer corresponding to the size of the kernel used for dilation (a morphological operation).

show_result

A Boolean value. Set to "True" to visualize results and set to "False" otherwise.

classmethod digitize_image(show_result=True)

This method is the final step in the pipeline: it maps the species regions onto the search image using the computed transformations. It also generates the shapefiles for the species regions, which can be visualized using ArcGIS Pro and further edited.

Parameter

Description

show_result

A Boolean value. Set to “True” to visualizeresults and set to “False” otherwise.

classmethod georeference_image(padding_param, show_result=True)

This method estimates the control point pairs by traversing the contours of the template image and finding the corresponding matches on the search region ROI image.

Parameter

Description

padding_param

A tuple that contains the x-padding and y-padding at the 0th and 1st index respectively.

show_result

A Boolean value. Set to "True" to visualize results and set to "False" otherwise.

classmethod get_search_region_extent()

Getter function for the search region extent.

classmethod match_template_multiscale(min_scale, max_scale, num_scales, show_result=True)

This method finds the location of the best match of a smaller image (template) in a larger image (search image), assuming it exists in the larger image.

Parameter

Description

min_scale

An integer representing the minimum scale at which template matching is performed.

max_scale

An integer representing the maximum scale at which template matching is performed.

num_scales

An integer representing the number of scales at which template matching is performed.

show_result

A Boolean value. Set to "True" to visualize results and set to "False" otherwise.

classmethod prepare_search_region(search_image, color, extent, image_height, image_width, show_result=True)

This method prepares the search region in which the prepared templates are to be searched.

Parameter

Description

search_image

Path to the bigger image/shapefile.

color

A list containing the r, g, b values representing the water color. For example: [173, 217, 219].

extent

Extent defines the extreme longitude/latitude of the search region.

image_height

Height of the search region.

image_width

Width of the search region.

show_result

A boolean value. Set to “True” to visualizeresults and set to “False” otherwise.

classmethod set_search_region_extent(extent)

Setter function for the search region extent.

Parameter

Description

extent

Extent defines the extreme longitude/latitude of the search region.
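
One possible end-to-end sketch of the digitization pipeline; every path, color, extent value, and parameter setting below is illustrative, and the exact ordering and extent format should be checked against your own data.

from arcgis.learn import ScannedMapDigitizer

digitizer = ScannedMapDigitizer(r"C:\data\scanned_maps", r"C:\data\digitizer_output")

# 1. Build a template and color mask from the land color, then mask the regions of interest.
digitizer.create_template_image(color=[225, 225, 200], color_delta=10, show_result=False)
digitizer.create_mask(color_list=[[110, 10, 200]], color_delta=60, show_result=False)

# 2. Prepare the search region from a reference image/shapefile and its extent.
digitizer.prepare_search_region(
    search_image=r"C:\data\reference.shp",
    color=[173, 217, 219],
    extent={"xmin": -180, "ymin": -90, "xmax": 180, "ymax": 90},   # extent format is an assumption
    image_height=2000,
    image_width=4000,
    show_result=False,
)

# 3. Locate the template at multiple scales, georeference, and digitize.
digitizer.match_template_multiscale(min_scale=1, max_scale=4, num_scales=10, show_result=False)
digitizer.georeference_image(padding_param=(100, 100), show_result=False)
digitizer.digitize_image(show_result=False)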

Feature, Tabular and Timeseries models

FullyConnectedNetwork

class arcgis.learn.FullyConnectedNetwork(data, layers=None, emb_szs=None, **kwargs)

Creates a FullyConnectedNetwork object, based on fast.ai's Tabular Learner.

Parameter

Description

data

Required TabularDataObject. Returned data object from the prepare_tabulardata function.

layers

Optional list specifying the number of nodes in each layer. Default: [500, 100] is used, i.e. 2 layers with 500 and 100 nodes respectively.

emb_szs

Optional dict mapping variable names to embedding sizes for categorical variables. If not specified, the sizes are calculated using fastai.

Returns:

FullyConnectedNetwork Object
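
A minimal sketch; the feature layer and field names are purely illustrative, and the (name, True) convention for marking categorical variables is assumed from prepare_tabulardata.

from arcgis.learn import prepare_tabulardata, FullyConnectedNetwork

data = prepare_tabulardata(
    input_features=houses_layer,                 # hypothetical FeatureLayer or spatially enabled dataframe
    variable_predict="SalePrice",
    explanatory_variables=["SqFt", "YearBuilt", ("Neighborhood", True)],
)

model = FullyConnectedNetwork(data, layers=[500, 100])
model.fit(epochs=50)
print(model.score())        # R2 for regression, accuracy for classification
model.show_results(rows=5)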

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

property feature_importances_
Returns:

the global feature importance summary plot from SHAP. This feature is temporarily disabled.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethod from_model(emd_path, data=None)

Creates a FullyConnectedNetwork object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_tabulardata function or None forinferencing.

Returns:

FullyConnectedNetwork Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(input_features=None,explanatory_rasters=None,datefield=None,distance_features=None,output_layer_name='PredictionLayer',gis=None,prediction_type='features',output_raster_path=None,match_field_names=None,explain=False,explain_index=None)

Predict on data from a feature layer, dataframe, and/or raster data.

Parameter

Description

input_features

Optional FeatureLayer or spatially enabled dataframe. Required if prediction_type='features'. Contains features with location and some or all fields required to infer the dependent variable value.

explanatory_rasters

Optional list of Raster objects. If prediction_type='raster', it must contain all rasters required to make predictions.

datefield

Optional string. Field name from the feature layer that contains the date/time for the input features. Same as prepare_tabulardata().

distance_features

Optional list of FeatureLayer objects. These layers are used for the calculation of the fields "NEAR_DIST_1", "NEAR_DIST_2", etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().

output_layer_name

Optional string. Used for publishing the output layer.

gis

Optional GIS Object. Used for publishing the item. If not specified, the active gis user is taken.

prediction_type

Optional string. Set 'features' or 'dataframe' to make output feature layer predictions; with this, the input_features argument is required.

Set 'raster' to make a prediction raster; with this, rasters must be specified.

output_raster_path

Optional path. Required when prediction_type='raster'; saves the output raster to this path.

match_field_names

Optional dictionary. Specify the mapping of field names from the prediction set to the training set. For example:

{
“Field_Name_1”: “Field_1”,
“Field_Name_2”: “Field_2”
}

explain

Optional bool. Setting this parameter to True generates a prediction explanation plot. The plot is generated using the model interpretability library SHAP (https://github.com/slundberg/shap). This feature is temporarily disabled.

explain_index

Optional int. The index of the dataframe passed to the predict function for which model interpretability is desired. If the parameter is not passed and the explain parameter is set to True, the SHAP plot will be generated for a random index of the dataframe.

Returns:

Feature Layer if prediction_type='features', dataframe for prediction_type='dataframe', else creates an output raster.
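
A prediction sketch; the layer, field names, and output name are illustrative.

prediction_layer = model.predict(
    input_features=new_houses_layer,            # hypothetical layer with the same explanatory fields
    output_layer_name="house_price_predictions",
    prediction_type="features",
    match_field_names={"Sq_Ft": "SqFt"},        # map prediction-set field names to training-set names
)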

save(name_or_path,framework='PyTorch',publish=False,gis=None,save_optimizer=False,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Folder path to save the model.

framework

Optional string. Defines the framework of the model. (Only supported by SingleShotDetector, currently.) If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.

Framework choice: 'PyTorch' and 'TF-ONNX'

publish

Optional boolean. Publishes the DLPK as an item.

gis

OptionalGIS Object. Used for publishing the item.If not specified then active gis user is taken.

save_optimizer

Optional boolean. Used for saving the model-optimizerstate along with the model. Default is set to False

kwargs

Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise; default False.

score()
Returns:

R2 score for regression model and Accuracy for classification model.

show_results(rows=5)

Prints the rows of the dataframe with target and prediction columns.

Parameter

Description

rows

Optional Integer.Number of rows to print.

Returns:

dataframe

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MLModel

class arcgis.learn.MLModel(data, model_type, fairness_args=None, **kwargs)

Creates a machine learning model based on its implementation from scikit-learn, xgboost, lightgbm, or catboost. For supervised learning, refer to scikit-learn, xgboost, lightgbm, catboost.

For unsupervised learning: 1. Clustering models 2. Gaussian mixture models 3. Novelty and outlier detection. Refer to https://scikit-learn.org/stable/unsupervised_learning.html

Returns:

MLModel Object
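
A minimal sketch; the layer, field names, and the estimator identifier string are assumptions, so pass the scikit-learn estimator reference in the form your MLModel version expects.

from arcgis.learn import prepare_tabulardata, MLModel

data = prepare_tabulardata(
    input_features=wells_layer,                  # hypothetical FeatureLayer or dataframe
    variable_predict="Yield",
    explanatory_variables=["Depth", "Aquifer_Thickness"],
)

# Extra keyword arguments are forwarded to the underlying estimator.
model = MLModel(data, "sklearn.ensemble.RandomForestRegressor", n_estimators=200, random_state=43)
model.fit()
print(model.score())
model.show_results(rows=5)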

decision_function()
Returns:

output from scikit-learn’s model.decision_function()

fairness_score(sensitive_feature,fairness_metrics=None,visualize=False)

Shows sample fairness score and plots for the model.

Returns:

dataframe

property feature_importances_
Returns:

the global feature importance summary plot from SHAP. Most of the sklearn models are supported by this method.

fit()
classmethod from_model(emd_path, data=None)

Creates an MLModel object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Esri Model Definitionfile.

data

Required TabularDataObject or None. Returned dataobject fromprepare_tabulardata function or None forinferencing.

Returns:

MLModel Object

initialize_fair_model(fairness_args)
kneighbors(X=None,n_neighbors=None,return_distance=True)
Returns:

output from scikit-learn’s model.kneighbors()

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name of or path to an Esri Model Definition (EMD) file.

mahalanobis()
Returns:

output from scikit-learn’s model.mahalanobis()

predict(input_features=None,explanatory_rasters=None,datefield=None,distance_features=None,output_layer_name=None,gis=None,prediction_type='features',output_raster_path=None,match_field_names=None,explain=False,explain_index=None)

Predict on data from feature layer, dataframe and or raster data.

Parameter

Description

input_features

OptionalFeatureLayer or spatial dataframe. Required if prediction_type=’features’.Contains features with location andsome or all fields required to infer the dependent variable value.

explanatory_rasters

Optional list. Required if prediction_type=’raster’.Contains a list of raster objects containingsome or all fields required to infer the dependent variable value.

datefield

Optional string. Field name from feature layerthat contains the date, time for the input features.Same asprepare_tabulardata() .

distance_features

Optional List ofFeatureLayer objects.These layers are used for calculation of field “NEAR_DIST_1”,“NEAR_DIST_2” etc in the output dataframe.These fields contain the nearest feature distancefrom the input_features.Same asprepare_tabulardata() .

output_layer_name

Optional string. Used for publishing the output layer.

gis

OptionalGIS Object. Used for publishing the item.If not specified then active gis user is taken.

prediction_type

Optional String.Set ‘features’ or ‘dataframe’ to make output feature layer predictions.With this feature_layer argument is required.

Set ‘raster’, to make prediction raster.With this rasters must be specified.

output_raster_path

Optional path.Required when prediction_type=’raster’, savesthe output raster to this path.

match_field_names

Optional dictionary.Specify mapping of field names from prediction setto training set.For example:

{
“Field_Name_1”: “Field_1”,
“Field_Name_2”: “Field_2”
}

explain

Optional Bool.Setting this parameter to true generates prediction explanation plot.Plot is generated using model interpretability library called SHAP.(https://github.com/slundberg/shap)

explain_index

Optional Int.The index of the dataframe passed to the predict function for which modelinterpretability is desired. If the parameter is not passed and if theexplain parameter is set to true, the SHAP plot will be generated for arandom index of the dataframe.

Returns:

FeatureLayer if prediction_type=’features’, dataframe for prediction_type=’dataframe’ else creates an output raster.

predict_proba()
Returns:

output from scikit-learn’s model.predict_proba()

save(name_or_path,publish=False,gis=None,**kwargs)

Saves the model, creates an Esri Model Definition. Uses pickle to save the model.Using protocol level 2. Protocol level is backward compatible.

Returns:

dataframe

score()
Returns:

output from scikit-learn’s model.score(), R2 score in case of regression and Accuracy in case of classification.

For KMeans returns Opposite of the value of X on the K-means objective.

show_results(rows=5)

Shows sample results for the model.

Returns:

dataframe

TimeSeriesModel

class arcgis.learn.TimeSeriesModel(data, seq_len, model_arch='InceptionTime', location_var=None, multistep=False, **kwargs)

Creates a TimeSeriesModel object, based on fast.ai's https://github.com/timeseriesAI/timeseriesAI

Parameter

Description

data

Required TabularDataObject. Returned data object from the prepare_tabulardata function.

seq_len

Required integer. Sequence length for the series. In the raster-only case, seq_len = number of rasters; any other passed value will be ignored.

model_arch

Optional string. Model architecture. Allowed: "InceptionTime", "ResCNN", "Resnet", "FCN", "TimeSeriesTransformer", "LSTM". "LSTM" supports both "LSTM" and "Bi-LSTM"; "Bi-LSTM" is enabled by passing bidirectional=True in kwargs.

location_var

Optional string. Location variable in case of a NetCDF dataset.

multistep

Optional boolean. It sets the model to generate more than one time step as output in the multivariate scenario. Compared to the current auto-regressive fashion, it generates multi-step output in a single pass. This option is only applicable in the multivariate scenario; the univariate implementation will ignore this flag. Default value is False

**kwargs

Optional kwargs.

Returns:

TimeSeriesModel Object
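
A minimal univariate sketch; the dataframe, field name, and hyperparameters are illustrative, and depending on your data prepare_tabulardata may need additional arguments (for example a date or index field).

from arcgis.learn import prepare_tabulardata, TimeSeriesModel

data = prepare_tabulardata(sales_df, variable_predict="monthly_sales")   # hypothetical dataframe

model = TimeSeriesModel(data, seq_len=12, model_arch="InceptionTime")
model.fit(epochs=30)
model.show_results(rows=5)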

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of trainingon the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rateto be used for training the model. Iflr=None,an optimal learning rate is automatically deducedfor training the model.

one_cycle

Optional boolean. Parameter to select 1cyclelearning rate schedule. If set toFalse nolearning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping.If set to ‘True’ training will stop if parametermonitor value stops improving for 5 epochs.A minimum difference of 0.001 is required forit to be considered an improvement.

checkpoint

Optional boolean or string.Parameter to save checkpoint during training.If set toTrue the best modelbased onmonitor will be saved duringtraining. If set to ‘all’, all checkpointsare saved. If set to False, checkpointing willbe off. Setting this parameter loads the bestmodel at the end of training.

tensorboard

Optional boolean. Parameter to write the training log.If set to ‘True’ the log will be saved at<dataset-path>/training_log which can be visualized intensorboard. Required tensorboardx version=2.1

The default value is ‘False’.

Note

Not applicable for Text Models

monitor

Optional string. Parameter specifieswhich metric to monitor while checkpointingand early stopping. Defaults to ‘valid_loss’. Valueshould be one of the metric that is displayed inthe training table. Use{model_name}.available_metricsto list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precisiontraining. If set toTrue, model training will be done inmixed precision mode. OnlyPytorch based models are supported.This feature is experimental.The default value is ‘False’.

classmethod from_model(emd_path, data=None, **kwargs)

Creates a TimeSeriesModel object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to Deep Learning Package(DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned dataobject fromprepare_tabulardata function or None forinferencing.

Returns:

TimeSeriesModel Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name or Path toDeep Learning Package (DLPK) orEsri Model Definition(EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True.Whether to strictly enforce the keys offile`s state dict match with the model`Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder. Helps in choosing theoptimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of lossesagainst the learning rates and mark the optimalvalue of the learning rate on the plot.The default value is ‘True’.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision.If set toTrue, optimum learning rate will be derived in mixed precision mode.OnlyPytorch based models are supported.The default value is ‘False’.

plot_losses()

Plot validation and training losses after fitting the model.

predict(input_features=None,explanatory_rasters=None,datefield=None,distance_features=None,output_layer_name='PredictionLayer',gis=None,prediction_type='features',output_raster_path=None,match_field_names=None,number_of_predictions=None)

Predict on data from a feature layer and/or raster data.

Parameter

Description

input_features

Optional FeatureLayer or spatially enabled dataframe. Contains features with the location of the input data. Required if prediction_type is 'features' or 'dataframe'.

explanatory_rasters

Optional list of Raster objects. Required if prediction_type is 'rasters'.

datefield

Optional field name. This field contains the date in the input_features. The field type can be a string or date time field. If specified, the field will be split into year, month, week, day, dayofweek, dayofyear, is_month_end, is_month_start, is_quarter_end, is_quarter_start, is_year_end, is_year_start, hour, minute, second, elapsed, and these will be added to the prepared data as columns. All fields other than elapsed and dayofyear are treated as categorical.

distance_features

Optional list of FeatureLayer objects. These layers are used for calculation of the fields "NEAR_DIST_1", "NEAR_DIST_2", etc. in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().

output_layer_name

Optional string. Used for publishing the output layer.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

prediction_type

Optional string. Set 'features' or 'dataframe' to make output predictions.

output_raster_path

Optional path. Required when prediction_type='raster'; saves the output raster to this path.

match_field_names

Optional string. Specify the mapping of the original training set with the prediction set.

number_of_predictions

Optional int for univariate time series. Specify the number of predictions to make; adds new rows to the dataframe. For multivariate series, or if None, it expects the dataframe to have empty rows. If multi-step is set to True during training, then it does not need empty rows. If multi-step is set to False, then the dataframe needs to have rows with NA values in the variable to predict and non-NA values in the explanatory variables. For prediction_type='raster', a new raster is created.

Returns:

FeatureLayer/dataframe if prediction_type='features'/'dataframe', else returns True and saves the output raster at the specified path.
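
A minimal sketch of a univariate forecast, assuming ts_model is an already trained TimeSeriesModel and sdf is the spatially enabled dataframe used to prepare the data (both assumed names):

# Forecast the next 12 steps and return them as a dataframe (illustrative).
forecast_df = ts_model.predict(
    input_features=sdf,
    prediction_type="dataframe",
    number_of_predictions=12,
)
print(forecast_df.tail(12))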

save(name_or_path,framework='PyTorch',publish=False,gis=None,save_optimizer=False,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Folder path to save the model.

framework

Optional string. Defines the framework of the model. (Only supported by SingleShotDetector, currently.) If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.

Framework choice: 'PyTorch' and 'TF-ONNX'

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.

kwargs

Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise; default False.

score()
Returns:

R2 score for regression model and Accuracy for classification model.

show_results(rows=5)

Prints the graph with predictions.

Experimental support for multivariate timeseries.

Parameter

Description

rows

Optional integer. Number of rows to print.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Raster Time Series Models

PSETAE

class arcgis.learn.PSETAE(data,pretrained_path=None,*args,**kwargs)

Creates a Pixel-Set encoder + Temporal Attention Encoder sequence classifier.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data function.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Keyword Arguments

Parameter

Description

mlp1

Optional list. Dimensions of the successive feature spaces of MLP1. Default set to [32, 64].

pooling

Optional string. Pixel-embedding pooling strategy; can be chosen from ('mean', 'std', 'max', 'min'). Default set to 'mean'.

mlp2

Optional list. Dimensions of the successive feature spaces of MLP2. Default set to [128, 128].

n_head

Optional integer. Number of attention heads. Default set to 4.

d_k

Optional integer. Dimension of the key and query vectors. Default set to 32.

dropout

Optional float. Dropout rate. Default set to 0.2.

T

Optional integer. Period to use for the positional encoding. Default set to 1000.

mlp4

Optional list. Dimensions of the decoder MLP. Default set to [64, 32].

Returns:

PSETAE Object
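
A hedged example of constructing and training the model with the keyword arguments listed above; the export path and batch size are placeholders for your own exported training data.

# Illustrative PSETAE construction (paths and values are placeholders).
from arcgis.learn import prepare_data, PSETAE

data = prepare_data(r"/path/to/exported_training_chips", batch_size=8)
model = PSETAE(data, mlp1=[32, 64], pooling="mean", n_head=4, d_k=32, dropout=0.2)
model.fit(epochs=20, lr=None)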

accuracy()

Computes overall accuracy (OA) on validation set.

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

compute_metrics()

Computes mean intersection over union (mIOU) and overall accuracy (OA) on the validation set.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.

classmethod from_model(emd_path,data=None)

Creates a PSETAE object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Required fastai Databunch or None. Returned data object from the prepare_data function, or None for inferencing.

Returns:

PSETAE Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name of or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder, which helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.

mIOU()

Computes mean intersection over union (mIOU) on validation set.

per_class_metrics()

Computes IoU, Precision, Recall, F1-score for all classes.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed, then it stores it at the specified path, with the model name as the directory name, and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (TensorFlow backend only) and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=20,**kwargs)

Displays the results of a trained model on a part of the validation set.

kwargs

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ClimaX

class arcgis.learn.ClimaX(data,backbone=None,pretrained_path=None,*args,**kwargs)

Creates a ClimaX model object: a foundational model for weather and climate forecasting tasks.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data function.

backbone

Optional string. Pretrained foundational models as backbone. Compatible backbones: '5.625deg', '1.40625deg'. Default set to '5.625deg'.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Keyword Arguments

Parameter

Description

patch_size

Optional int. Patch size for generating patch embeddings. Default: 4

embed_dim

Optional int. Dimension of embeddings. Default: 1024

depth

Optional int. Depth of the model. Default: 8

num_heads

Optional int. Number of attention heads. Default: 16

mlp_ratio

Optional float. Ratio of the MLP. Default: 4.0

decoder_depth

Optional int. Depth of the decoder. Default: 2

drop_path

Optional float. Stochastic depth; randomly drops entire layers. Default: 0.1

drop_rate

Optional float. Randomly drops neurons. Default: 0.1

parallel_patch_embed

Optional bool. Parallel embedding of patches. Default: True

Returns:

ClimaX Object
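
A hedged construction example using the keyword arguments above; the training data path and batch size are placeholders and the exact prepare_data inputs for a ClimaX workflow may differ.

# Illustrative ClimaX construction (paths and values are placeholders).
from arcgis.learn import prepare_data, ClimaX

data = prepare_data(r"/path/to/weather_training_data", batch_size=4)
model = ClimaX(data, backbone="5.625deg", embed_dim=1024, depth=8, num_heads=16)
model.fit(epochs=10, lr=None)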

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

compute_metrics()

Computes the latitude-weighted root mean squared error on the validation set.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.

classmethod from_model(emd_path,data=None)

Creates a ClimaX object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Required fastai Databunch or None. Returned data object from the prepare_data function, or None for inferencing.

Returns:

ClimaX Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name of or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder, which helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed, then it stores it at the specified path, with the model name as the directory name, and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (TensorFlow backend only) and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

show_results(rows=5,variable='',**kwargs)

Displays the results of a trained model on a part of the validation set.

Parameter

Description

rows

Optional int. Number of rows of results to be displayed.

total_sample_size

Optional int. Number of rows of results to be displayed.

variable_no

Optional int. Variable count to be displayed.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Unstructured Text Models

arcgis.learn.text module

Inferencing Methods

detect_objects

arcgis.learn.detect_objects(input_raster,model,model_arguments=None,output_name=None,run_nms=False,confidence_score_field=None,class_value_field=None,max_overlap_ratio=0,context=None,process_all_raster_items=False,*,gis=None,future=False,estimate=False,**kwargs)

Function can be used to generate a feature service that contains polygons on detected objects found in the imagery data using the designated deep learning model. Note that the deep learning library needs to be installed separately, in addition to the server's built-in Python 3.x library.

Note

This function is supported with ArcGIS Enterprise (Image Server) and ArcGIS Image for ArcGIS Online.

Parameter

Description

input_raster

Required. Raster layer that contains the objects that need to be detected.

model

Required Model object.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

output_name

Optional. If not provided, a FeatureLayer is created by the method and used as the output. You can pass in an existing Feature Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Feature Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists.

run_nms

Optional bool. Default value is False. If set to True, runs the Non Maximum Suppression tool.

confidence_score_field

Optional string. The field in the feature class that contains the confidence scores as output by the object detection method. This parameter is required when run_nms is set to True.

class_value_field

Optional string. The class value field in the input feature class. If not specified, the function will use the standard class value fields Classvalue and Value. If these fields do not exist, all features will be treated as the same object class. Set only if run_nms is set to True.

max_overlap_ratio

Optional integer. The maximum overlap ratio for two overlapping features, defined as the ratio of intersection area over union area. Set only if run_nms is set to True.

context

Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • mask: Only cells that fall within the analysis mask will be considered in the operation.

Eg: {“mask”: {“url”: “<feature_service_url>”}}

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“processorType” : “CPU”}

Setting the context parameter will override the values set using the arcgis.env variable for this particular function.

process_all_raster_items

Optional bool. Specifies how all raster items in an image service will be processed.

  • False : all raster items in the image service will be mosaicked together and processed. This is the default.

  • True : all raster items in the image service will be processed as separate images.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

estimate

Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float.Available only on ArcGIS Online.

Returns:

The output feature layer item containing the detected objects
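
A hedged usage sketch follows; the portal search string, layer variable, model arguments, and confidence field name are placeholders rather than values prescribed by this reference.

# Illustrative detect_objects call (names and arguments are placeholders).
from arcgis.gis import GIS
from arcgis.learn import Model, detect_objects

gis = GIS("home")
dlpk_item = gis.content.search("tree detection", item_type="Deep Learning Package")[0]
detection_model = Model(dlpk_item)

detected_trees = detect_objects(input_raster=imagery_layer,          # assumed ImageryLayer
                                model=detection_model,
                                model_arguments={"padding": "0", "threshold": "0.5"},
                                output_name="detected_trees",
                                run_nms=True,
                                confidence_score_field="Confidence",
                                gis=gis)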

classify_objects

arcgis.learn.classify_objects(input_raster,model,model_arguments=None,input_features=None,class_label_field=None,process_all_raster_items=False,output_name=None,context=None,*,gis=None,future=False,estimate=False,**kwargs)

Function can be used to output a feature service with an assigned class label for each feature, based on information from the overlapped imagery data, using the designated deep learning model.

Note

This function is supported with ArcGIS Enterprise (Image Server) and ArcGIS Image for ArcGIS Online.

Parameter

Description

input_raster

Required. Raster layer that contains the objects that need to be classified.

model

Required Model object.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

input_features

Optional FeatureLayer. The point, line, or polygon input feature layer that identifies the location of each object to be classified and labelled. Each row in the input feature layer represents a single object.

If no input feature layer is specified, the function assumes that each input image contains a single object to be classified. If the input image or images use a spatial reference, the output from the function is a feature layer, where the extent of each image is used as the bounding geometry for each labelled feature layer. If the input image or images are not spatially referenced, the output from the function is a table containing the image ID values and the class labels for each image.

class_label_field

Optional str. The name of the field that will contain the classification label in the output feature layer.

If no field name is specified, a new field called ClassLabel will be generated in the output feature layer.

Example:

“ClassLabel”

process_all_raster_items

Optional bool.

  • If set to False, all raster items in the image service will be mosaicked together and processed. This is the default.

  • If set to True, all raster items in the image service will be processed as separate images.

output_name

Optional. If not provided, a FeatureLayer is created by the method and used as the output. You can pass in an existing Feature Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Feature Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists.

context

Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“processorType” : “CPU”}

Setting the context parameter will override the values set using the arcgis.env variable for this particular function.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

estimate

Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float.Available only on ArcGIS Online

Returns:

The output feature layer item containing the classified objects
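
A hedged usage sketch; the imagery layer, model, and feature layer variables below are placeholders for your own inputs.

# Illustrative classify_objects call (names are placeholders).
classified_buildings = classify_objects(input_raster=post_event_imagery,     # assumed ImageryLayer
                                        model=damage_classification_model,   # assumed arcgis.learn.Model
                                        input_features=building_footprints,  # assumed FeatureLayer
                                        class_label_field="ClassLabel",
                                        output_name="classified_buildings",
                                        gis=gis)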

classify_pixels

arcgis.learn.classify_pixels(input_raster,model,model_arguments=None,output_name=None,context=None,process_all_raster_items=False,*,gis=None,future=False,estimate=False,**kwargs)

Function to classify input imagery data using a deep learning model. Note that the deep learning library needs to be installed separately, in addition to the server's built-in Python 3.x library.

Note

This function is supported with ArcGIS Enterprise (Image Server) and ArcGIS Image for ArcGIS Online.

Parameter

Description

input_raster

Required. Raster layer that needs to be classified.

model

RequiredModel object.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

output_name

Optional. If not provided, an imagery layer is created by the method and used as the output. You can pass in an existing Image Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Image Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists.

context

Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • outSR - (Output Spatial Reference) Saves the result in the specified spatial reference

  • snapRaster - The function will adjust the extent of output rasters so that they match the cell alignment of the specified snap raster.

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

    Example:

    {“outSR” : {spatial reference}}

Setting the context parameter will override the values set using the arcgis.env variable for this particular function.

process_all_raster_items

Optional bool. Specifies how all raster items in an image service will be processed.

  • False : all raster items in the image service will be mosaicked together and processed. This is the default.

  • True : all raster items in the image service will be processed as separate images.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

estimate

Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float.Available only on ArcGIS Online.

tiles_only

Keyword only parameter. Optional boolean. In ArcGIS Online, the default output image service for this function would be a Tiled Imagery Layer. To create a Dynamic Imagery Layer as output in ArcGIS Online, set the tiles_only parameter to False.

The function will not honor the tiles_only parameter in ArcGIS Enterprise and will generate a Dynamic Imagery Layer by default.

Returns:

The classified imagery layer item
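
A hedged usage sketch; the imagery layer and model variables, and the context values, are placeholders.

# Illustrative classify_pixels call (names and context values are placeholders).
land_cover = classify_pixels(input_raster=imagery_layer,                # assumed ImageryLayer
                             model=land_cover_model,                    # assumed arcgis.learn.Model
                             output_name="land_cover_classification",
                             context={"processorType": "GPU",
                                      "parallelProcessingFactor": "2"},
                             gis=gis)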

compute_accuracy_for_object_detection

arcgis.learn.compute_accuracy_for_object_detection(detected_features,ground_truth_features,detected_class_value_field=None,ground_truth_class_value_field=None,min_iou=None,mask_features=None,out_accuracy_table_name=None,out_accuracy_report_name=None,context=None,*,gis=None,future=False,estimate=False,**kwargs)

Function can be used to calculate the accuracy of a deep learning model by comparing the detected objects from the detect_objects function to ground truth data. Function available in ArcGIS Image Server 10.9 and higher (not available in ArcGIS Online).

Parameter

Description

detected_features

Required. The input polygon feature layer containing the objects detected from the detect_objects function.

ground_truth_features

Required. The polygon feature layer containing ground truth data.

detected_class_value_field

Optional string. The field in the detected objects feature class that contains the class names or class values.

If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.

The class values or class names must match those in the ground truth feature class exactly.

Syntax: A string describing the detected class value field.

Example: “class”

ground_truth_class_value_field

The field in the ground truth feature class that contains the class names or class values.

If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.

The class values or class names must match those in the detected objects feature class exactly.

Example: “class”

min_iou

The Intersection over Union (IoU) ratio to use as a threshold to evaluate the accuracy of the object-detection model. The numerator is the area of overlap between the predicted bounding box and the ground truth bounding box. The denominator is the area of union, or the area encompassed by both bounding boxes.

The min_iou value should be in the range 0 to 1, i.e. [0, 1].

Example:

0.5

mask_features

Optional FeatureLayer. A polygon feature service layer that delineates the area where accuracy will be computed. Only the image area that falls completely within the polygons will be assessed for accuracy.

out_accuracy_table_name

Optional. Name of the output accuracy table item to be created. If not provided, a random name is generated by the method and used as the output name.

out_accuracy_report_name

Optional. The accuracy report can either be added as an item to the portal or written to a datastore. To add it as an item, specify the name of the output report item (PDF item) to be created. Example:

"accuracyReport"

In order to write the accuracy report to a datastore, specify the datastore path as the value of the uri key.

Example:

"/fileShares/yourFileShareFolderName/accuracyReport"

context

Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“processorType” : “CPU”}

Setting the context parameter will override the values set using the arcgis.env variable for this particular function.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

estimate

Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float.Available only on ArcGIS Online

Returns:

The output accuracy table item and/or accuracy report item (or the datastore path to the accuracy report).

# Usage Example: This example generates an accuracy table for a specified minimum IoU value.

compute_accuracy_op = compute_accuracy_for_object_detection(detected_features=detected_features,
                                                            ground_truth_features=ground_truth_features,
                                                            detected_class_value_field="ClassValue",
                                                            ground_truth_class_value_field="Class",
                                                            min_iou=0.5,
                                                            mask_features=None,
                                                            out_accuracy_table_name="accuracy_table",
                                                            out_accuracy_report_name="accuracy_report",
                                                            gis=gis)

detect_change_using_deep_learning

arcgis.learn.detect_change_using_deep_learning(from_raster,to_raster,model,output_classified_raster=None,model_arguments=None,context=None,*,gis=None,future=False,estimate=False,**kwargs)

Runs a trained deep learning model to detect change between two rasters. Function available in ArcGIS Image Server 11.1 and higher.

Argument

Description

from_raster

Required ImageryLayer object. The previous raster to use for change detection.

to_raster

Required ImageryLayer object. The recent raster to use for change detection.

model

Required. The deep learning model to be used for the change detection. It can be passed as a dlpk portal item, a datastore path to the Esri Model Definition (EMD) file, or the EMD JSON string.

output_classified_raster

Optional string. If not provided, an Image Service is created by the method and used as the output raster. You can pass in an existing Image Service Item from your GIS to use that instead.

Alternatively, you can pass in the name of the output Image Service that should be created by this method to be used as the output for the tool.

A RuntimeError is raised if a service by that name already exists.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

context

Context contains additional settings that affect task execution.

The context parameter overwrites values set through the arcgis.env parameter.

This function has the following settings:

  • Cell size (cellSize) - Set the output raster cell size, or resolution

  • Output Spatial Reference (outSR): The output raster will be projected into the output spatial reference.

Example:

{“outSR”: {spatial reference}}

  • Extent (extent): A bounding box that defines the analysis area.

Example:

{“extent”: {“xmin”: -122.68,“ymin”: 45.53,“xmax”: -122.45,“ymax”: 45.6,“spatialReference”: {“wkid”: 4326}}}

  • Parallel Processing Factor (parallelProcessingFactor): controls Raster Processing (CPU) service instances.

Example:

Syntax example with a specified number of processing instances:

{“parallelProcessingFactor”: “2”}

Syntax example with a specified percentage of total processing instances:

{“parallelProcessingFactor”: “60%”}

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional Boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

estimate

Keyword only parameter. Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float.Available only on ArcGIS Online

folder

Keyword only parameter. Optional str or dict. Creates a folder in the portal, if it does not exist, with the given folder name and persists the output in this folder. The dictionary returned by gis.content.create_folder() can also be passed in as input.

Example:

{‘username’: ‘user1’, ‘id’: ‘6a3b77c187514ef7873ba73338cf1af8’, ‘title’: ‘trial’}

Returns:

The output imagery layer item

# Usage Example 1:

from_raster = gis.content.search("from_raster", item_type="Imagery Layer")[0].layers[0]
to_raster = gis.content.search("to_raster", item_type="Imagery Layer")[0].layers[0]
change_detection_model = gis.content.search("my_detection_model")[0]

detect_change_op = detect_change_using_deep_learning(from_raster=from_raster,
                                                     to_raster=to_raster,
                                                     model=change_detection_model,
                                                     gis=gis)

Embeddings

class arcgis.learn.Embeddings(dataset_type='image',backbone=None,**kwargs)

Creates an Embeddings object. This object is capable of giving embeddings for text as well as images. The image embeddings are currently supported for RGB images only.

Parameter

Description

dataset_type

Required string. The type of data for which we would like to get the embedding vectors. Valid values are text and image. Default is set to image.

Note

The image embeddings are currently supported for RGB images only.

backbone

Optional string. Specify the backbone/model-name to be used to get the embedding vectors. The default backbone for the image dataset_type is resnet34 and for the text dataset_type is sentence-transformers/distilbert-base-nli-stsb-mean-tokens.

To learn more about the available models for getting text embeddings, visit https://huggingface.co/sentence-transformers

kwargs

Parameter

Description

working_dir

Optional str. Path to a directory on the local filesystem. If the directory is not present, it will be created. This directory is used as the location to save the model.

Returns:

Embeddings Object
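
A hedged sketch of creating an Embeddings object and extracting image embeddings; the working directory and chip folder paths are placeholders.

# Illustrative image-embedding extraction (paths are placeholders).
from arcgis.learn import Embeddings

emb = Embeddings(dataset_type="image", backbone="resnet34",
                 working_dir=r"/tmp/embeddings")
h5_path = emb.get(r"/data/imagery_chips",
                  batch_size=32,
                  file_extensions=["jpg", "png"])
print(h5_path)   # path of the H5 file where items and embeddings are saved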

get(text_or_list,batch_size=32,show_progress=True,return_embeddings=False,**kwargs)

Method to get the embedding vectors for the image/text items.

Parameter

Description

text_or_list

Required string or list. String containing a directory path, or a list of directory paths, where the image/text files are present for which the user wants to get the embedding vectors.

batch_size

Optional integer. The number of items to process in one batch. Default is set to 32.

show_progress

Optional boolean. If set to True, will display a progress bar depicting the items processed so far. Default is set to True.

return_embeddings

Optional boolean. If set to True, a dataframe containing the embeddings will be returned. If set to False, they will be saved in an H5 file. Default is set to False.

kwargs

Parameter

Description

normalize

Optional boolean. If set to True, will normalize the image with imagenet-stats (mean and std-deviation for each color channel in an RGB image). This argument is valid only for the image dataset_type. Default is set to True.

file_extensions

Optional string or list. The file extension(s) for which the user wishes to get embedding vectors. Allowed values for the image dataset_type are ['png', 'jpg', 'jpeg', 'tiff', 'tif', 'bmp']. Allowed values for the text dataset_type are ['csv', 'txt', 'json'].

Note

For JSON files with nested structures, text will be extracted only from the 1st level.

chip_size

Optional integer. Resize the image to chip_size x chip_size pixels. This argument is valid only for the image dataset_type. Default is set to 224.

encoding

Optional string. The encoding to read the text/csv/json file. Applicable only for the text dataset_type. Default is UTF-8.

text_column

Optional string. The column that will be used to get the text content from csv or json file types. This argument is valid only for the text dataset_type. Default is set to text.

remove_urls

Optional boolean. If True, removes URLs from text. This argument is valid only for the text dataset_type. Default value is False.

remove_html_tags

Optional boolean. If True, removes HTML tags from text. This argument is valid only for the text dataset_type. Default value is False.

pooling_strategy

Optional string. The transformer model gives embeddings for each word/token present in the text. This is the type of pooling to be done on those word/token vectors in order to form the text embeddings. Allowed values are ['mean', 'max', 'first']. This argument is valid only for the text dataset_type. Default value is mean.

Returns:

The path of the H5 file where items & corresponding embeddings are saved.

load(file_path,load_to_memory=True)

Loads the extracted embeddings from the H5 file.

Parameter

Description

file_path

Required string. The path to the H5 file that gets auto-generated after the call to the get method of the Embeddings class.

load_to_memory

Optional bool. Whether or not to load the entire content of the H5 file to memory. Loading very large H5 files into memory takes up a lot of RAM space. Use this parameter with caution for large H5 files. Default is set to True.

Returns:

When the load_to_memory param is True: a 2-item tuple containing the numpy arrays of extracted embeddings and items. When the load_to_memory param is False: a 3-item tuple containing the H5 file handler and 2 H5 dataset objects of extracted embeddings and items.
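
A hedged continuation of the earlier sketch, assuming emb and h5_path come from a previous call to get.

# Load the embeddings produced by get() back into memory (illustrative).
embeddings, items = emb.load(h5_path, load_to_memory=True)
print(embeddings.shape, len(items))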

classmethod supported_backbones(dataset_type='image')

Get the available backbones/model-names for the given dataset_type.

Parameter

Description

dataset_type

Required string. The type of data for which we would like to get the embedding vectors. Valid values are text and image. Default is set to image.

Returns:

A list containing the available models for the given dataset_type.

visualize(file_path,visualize_with_items=True,n_clusters=5,dimensions=3)

Method to visualize the embedding vectors for the image/text items. This method uses the K-Means clustering algorithm to partition the embedding vectors into n clusters. This requires loading the entire content of the H5 file to RAM. Loading very large H5 files into memory takes up a lot of RAM space. Use this method with caution for large H5 files.

Parameter

Description

file_path

Required string. The path to the H5 file that gets auto-generated after the call to the get method of the Embeddings class.

visualize_with_items

Optional bool. Whether or not to visualize the embeddings with items. Default is set to True.

n_clusters

Optional integer. The number of clusters to create for the embedding vectors. This value will be passed to the KMeans algorithm to generate the clusters. Default is set to 5.

dimensions

Optional integer. The number of dimensions to project the embedding vectors to for visualization purposes. Allowed values are 2 and 3. Default is set to 3.

Model Management

Model

class arcgis.learn.Model(model=None)

from_json(model)

Function is used to initialize a Model object from a model definition JSON.

# Usage example

>>> model = Model()
>>> model.from_json({"Framework": "TensorFlow",
...                  "ModelConfiguration": "DeepLab",
...                  "InferenceFunction": "[functions]System\DeepLearning\ImageClassifier.py",
...                  "ModelFile": "\\folder_path_of_pb_file\frozen_inference_graph.pb",
...                  "ExtractBands": [0, 1, 2],
...                  "ImageWidth": 513,
...                  "ImageHeight": 513,
...                  "Classes": [{"Value": 0, "Name": "Evergreen Forest", "Color": [0, 51, 0]},
...                              {"Value": 1, "Name": "Grassland/Herbaceous", "Color": [241, 185, 137]},
...                              {"Value": 2, "Name": "Bare Land", "Color": [236, 236, 0]},
...                              {"Value": 3, "Name": "Open Water", "Color": [0, 0, 117]},
...                              {"Value": 4, "Name": "Scrub/Shrub", "Color": [102, 102, 0]},
...                              {"Value": 5, "Name": "Impervious Surface", "Color": [236, 236, 236]}]})
from_model_path(model)

Function is used to initialize a Model object from the URL of a model package or the path of a model definition file.

# Usage Example #1:

>>> model = Model()
>>> model.from_model_path("https://xxxportal.esri.com/sharing/rest/content/items/<itemId>")

# Usage Example #2:

>>> model = Model()
>>> model.from_model_path("\\sharedstorage\sharefolder\findtrees.emd")
install(*,gis=None,future=False,**kwargs)

Function is used to install the uploaded model package (*.dlpk). Optionally, after inferencing the necessary information using the model, the model can be uninstalled by uninstall_model().

Parameter

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns:

Path where model is installed
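
A hedged sketch of registering a dlpk portal item and installing it on the raster analytics server; the portal URL, credentials, and search string are placeholders.

# Illustrative install workflow (connection details and item names are placeholders).
from arcgis.gis import GIS
from arcgis.learn import Model

gis = GIS("https://example.com/portal", "username", "password")
dlpk_item = gis.content.search("my_model", item_type="Deep Learning Package")[0]
model = Model(dlpk_item)

install_path = model.install(gis=gis)
print(install_path)
print(model.query_info(gis=gis))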

query_info(*,gis=None,future=False,**kwargs)

Function is used to extract the deep learning model specific settings from the model package item or model definition file.

Parameter

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns:

The key model information, in dictionary format, that describes which settings are essential for this type of deep learning model.

uninstall(*,gis=None,future=False,**kwargs)

Function is used to uninstall the uploaded model package that was installed using install_model(). This function will delete the named deep learning model from the server but not the portal item.

Parameter

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns:

itemId of the uninstalled model package item

ModelExtension

class arcgis.learn.ModelExtension(data,model_conf,backbone=None,pretrained_path=None,**kwargs)

Creates a ModelExtension object to train a model for object detection, semantic segmentation, or edge detection.

Parameter

Description

data

Required fastai Databunch. Returned data object from the prepare_data() function.

model_conf

A class definition that contains the following methods:

  • get_model(self,data,backbone=None,**kwargs): for model definition,

  • on_batch_begin(self,learn,model_input_batch,model_target_batch,**kwargs): for feeding input to the model during training,

  • transform_input(self,xb): for feeding input to the model during inferencing/validation,

  • transform_input_multispectral(self,xb): for feeding input to the model during inferencing/validation in case of multispectral data,

  • loss(self,model_output,*model_target): to return loss value of the model

  • post_process(self,pred,nms_overlap,thres,chip_size,device): to post-process the output of the object-detection model.

  • post_process(self,pred,thres): to post-process the output of the segmentation model.

backbone

Optional function. If custom model requires any backbone.

pretrained_path

Optional string. Path where the pre-trained model is saved.

Returns:

ModelExtension Object
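
The skeleton below is a hedged illustration of what a model_conf class can look like; the class name is hypothetical and the method bodies are placeholders that a real configuration would fill in for its chosen framework and task.

# Hypothetical skeleton of a model configuration class for ModelExtension.
class MyDetectorConfig:
    def get_model(self, data, backbone=None, **kwargs):
        # Return the model (e.g. a torch.nn.Module) built for `data`.
        raise NotImplementedError

    def on_batch_begin(self, learn, model_input_batch, model_target_batch, **kwargs):
        # Rearrange the batch into the (input, target) form the model expects during training.
        return model_input_batch, model_target_batch

    def transform_input(self, xb):
        # Prepare a validation/inference batch for the model.
        return xb

    def transform_input_multispectral(self, xb):
        # Same as transform_input, for multispectral imagery.
        return xb

    def loss(self, model_output, *model_target):
        # Compute and return the loss value of the model.
        raise NotImplementedError

    def post_process(self, pred, nms_overlap, thres, chip_size, device):
        # Convert raw predictions into per-chip detections (boxes, labels, scores).
        raise NotImplementedError

# model = ModelExtension(data, MyDetectorConfig())   # assumed usage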

property available_metrics

List of available metrics that are displayed in the training table. Set the monitor value to be one of these while calling the fit method.

fit(epochs=10,lr=None,one_cycle=True,early_stopping=False,checkpoint=True,tensorboard=False,monitor='valid_loss',mixed_precision=False,**kwargs)

Train the model for the specified number of epochs and using the specified learning rates.

Parameter

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select the 1cycle learning rate schedule. If set to False, no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs. A minimum difference of 0.001 is required for it to be considered an improvement.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on monitor will be saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False.

Note

Not applicable for Text Models

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics to set here.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision training. If set to True, model training will be done in mixed precision mode. Only PyTorch based models are supported. This feature is experimental. The default value is False.

classmethod from_model(emd_path,data=None)

Creates a ModelExtension object from an Esri Model Definition (EMD) file.

Parameter

Description

emd_path

Required string. Path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

data

Required fastai Databunch or None. Returned data object from the prepare_data() function, or None for inferencing.

Returns:

ModelExtension Object

load(name_or_path,**kwargs)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Parameter

Description

name_or_path

Required string. Name of or path to a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Keyword Arguments

Parameter

Description

strict

Optional boolean, default True. Whether to strictly enforce that the keys of the file's state dict match the keys of the model's Module.state_dict.

lr_find(allow_plot=True,mixed_precision=False,**kwargs)

Runs the Learning Rate Finder, which helps in choosing the optimum learning rate for training the model.

Parameter

Description

allow_plot

Optional boolean. Display the plot of losses against the learning rates and mark the optimal value of the learning rate on the plot. The default value is True.

mixed_precision

Optional boolean. Parameter to enable/disable mixed precision. If set to True, the optimum learning rate will be derived in mixed precision mode. Only PyTorch based models are supported. The default value is False.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path,framework='PyTorch',publish=False,gis=None,compute_metrics=True,save_optimizer=False,save_inference_file=True,**kwargs)

Saves the model weights, creates an Esri Model Definition and DeepLearning Package zip for deployment to Image Server or ArcGIS Pro.

Parameter

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed, then it stores it at the specified path, with the model name as the directory name, and creates all the intermediate directories.

framework

Optional string. Exports the model in the specified framework format ('PyTorch', 'tflite', 'torchscript', and 'TF-ONNX' (deprecated)). Only models saved with the default framework (PyTorch) can be loaded using from_model. The tflite framework (experimental support) is supported by SingleShotDetector (TensorFlow backend only) and RetinaNet (TensorFlow backend only). The torchscript format is supported by SiamMask, MaskRCNN, SingleShotDetector, YOLOv3 and RetinaNet. For usage of the SiamMask model in ArcGIS Pro >= 2.8, load the PyTorch framework saved model and export it with the torchscript framework using ArcGIS API for Python >= v1.8.5. For usage of the SiamMask model in ArcGIS Pro 2.9, set framework to torchscript and use the model files additionally generated inside the 'torch_scripts' folder. If framework is TF-ONNX (only supported for SingleShotDetector), batch_size can be passed as an optional keyword argument.

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional Parameters.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

list_models

arcgis.learn.list_models(*,gis=None,future=False,**kwargs)

Function is used to list all the installed deep learning models.

Note

This function is supported with ArcGIS Enterprise (Image Server)

Parameter

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns:

list of deep learning models installed
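
A hedged usage sketch; the portal URL and credentials are placeholders for your own ArcGIS Enterprise connection.

# Illustrative listing of installed models (connection details are placeholders).
from arcgis.gis import GIS
from arcgis.learn import list_models

gis = GIS("https://example.com/portal", "username", "password")
for model in list_models(gis=gis):
    print(model)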

train_model

arcgis.learn.train_model(input_folder,model_type,model_arguments=None,batch_size=2,max_epochs=None,learning_rate=None,backbone_model=None,validation_percent=None,pretrained_model=None,stop_training=True,freeze_model=True,overwrite_model=False,output_name=None,context=None,*,gis=None,future=False,**kwargs)

Function can be used to train a deep learning model using the output from the export_training_data function. It generates the deep learning model package (*.dlpk) and adds it to your enterprise portal. The train_model function performs the training using the Raster Analytics server.

Note

This function is supported with ArcGIS Enterprise (Image Server)

Parameter

Description

input_folder

Required string or list. This is the input location for the training sample data. It can be the path of the output location on the file share raster data store or a shared file system path. The training sample data folder needs to be the output of the export_training_data function, containing "images" and "labels" folders, as well as the JSON model definition file written out together by the function.

File share raster store and datastore path examples:
  • /rasterStores/yourRasterStoreFolderName/trainingSampleData

  • /fileShares/yourFileShareFolderName/trainingSampleData

Shared path example:
  • \\serverName\deepLearning\trainingSampleData

The function also supports multiple input folders. In this case, specify the list of input folders.

list of file share raster store and datastore path examples:
  • [“/rasterStores/yourRasterStoreFolderName/trainingSampleDataA”, “/rasterStores/yourRasterStoreFolderName/trainingSampleDataB”]

  • [“/fileShares/yourFileShareFolderName/trainingSampleDataA”, “/fileShares/yourFileShareFolderName/trainingSampleDataB”]

list of shared path example:
  • ["\\serverName\deepLearning\trainingSampleDataA", "\\serverName\deepLearning\trainingSampleDataB"]

Multiple input folders are supported when all the following conditions are met:

  • The metadata format must be one of the following types: Classified_Tiles, Labeled_Tiles, Multi-labeled Tiles, PASCAL_VOC_rectangles, or RCNN_Masks.

  • All training data must have the same metadata format.

  • All training data must have the same number of bands.

  • All training data must have the same tile size.

model_type

Required string. The model type to use for training the deep learning model.Possible values:

  • SSD - The Single Shot Detector (SSD) is used for object detection.

  • UNET - U-Net is used for pixel classification.

  • FEATURE_CLASSIFIER - The Feature Classifier is used for object classification.

  • PSPNET - The Pyramid Scene Parsing Network (PSPNET) is used for pixel classification.

  • RETINANET - The RetinaNet is used for object detection.

  • MASKRCNN - The MaskRCNN is used for object detection.

  • YOLOV3 - The YOLOv3 approach will be used to train the model. YOLOv3 is used for object detection.

  • DeepLabV3 - The DeepLabV3 approach will be used to train the model. DeepLab is used for pixel classification.

  • FASTERRCNN - The FasterRCNN approach will be used to train the model. FasterRCNN is used for object detection.

  • BDCN_EDGEDETECTOR - The Bi-Directional Cascade Network (BDCN) architecture will be used to train the model.The BDCN Edge Detector is used for pixel classification. This approach is useful to improve edge detection for objects at different scales.

  • HED_EDGEDETECTOR - The Holistically-Nested Edge Detection (HED) architecture will be used to train the model. The HED Edge Detector is used for pixel classification. This approach is useful in edge and object boundary detection.

  • MULTITASK_ROADEXTRACTOR - The Multi Task Road Extractor architecture will be used to train the model.The Multi Task Road Extractor is used for pixel classification. This approach is useful for road network extraction from satellite imagery.

  • CONNECTNET - The ConnectNet architecture will be used to train the model. ConnectNet is used for pixel classification.This approach is useful for road network extraction from satellite imagery.

  • PIX2PIX - The Pix2Pix approach will be used to train the model. Pix2Pix is used for image-to-image translation. This approach creates a model object that generates images of one type to another. The input training data for this model type uses the Export Tiles metadata format.

  • CYCLEGAN - The CycleGAN approach will be used to train the model. CycleGAN is used for image-to-image translation. This approach creates a model object that generates images of one type to another. This approach is unique in that the images to be trained do not need to overlap. The input training data for this model type uses the CycleGAN metadata format.

  • SUPERRESOLUTION - The Super-resolution approach will be used to train the model. Super-resolution is used for image-to-image translation. This approach creates a model object that increases the resolution and improves the quality of images. The input training data for this model type uses the Export Tiles metadata format.

  • CHANGEDETECTOR - The Change detector approach will be used to train the model. Change detector is used for pixel classification. This approach creates a model object that uses two spatial-temporal images to create a classified raster of the change. The input training data for this model type uses the Classified Tiles metadata format.

  • IMAGECAPTIONER - The Image captioner approach will be used to train the model. Image captioner is used for image-to-text translation. This approach creates a model that generates text captions for an image.

  • SIAMMASK - The Siam Mask approach will be used to train the model. Siam Mask is used for object detection in videos. The model is trained using frames of the video and detects the classes and bounding boxes of the objects in each frame. The input training data for this model type uses the MaskRCNN metadata format.

  • MMDETECTION - The MMDetection approach will be used to train the model. MMDetection is used for object detection. The supported metadata formats are PASCAL Visual Object Class rectangles and KITTI rectangles.

  • MMSEGMENTATION - The MMSegmentation approach will be used to train the model. MMSegmentation is used for pixel classification. The supported metadata format is Classified Tiles.

  • DEEPSORT - The Deep Sort approach will be used to train the model. Deep Sort is used for object detection in videos. The model is trained using frames of the video and detects the classes and bounding boxes of the objects in each frame. The input training data for this model type uses the Imagenet metadata format. Where Siam Mask is useful for tracking a single object, Deep Sort is useful for training a model to track multiple objects.

  • PIX2PIXHD - The Pix2PixHD approach will be used to train the model. Pix2PixHD is used for image-to-image translation. This approach creates a model object that generates images of one type to another. The input training data for this model type uses the Export Tiles metadata format.

  • MAXDEEPLAB - The MAXDEEPLAB approach will be used to train the model. It is used for Panoptic Segmentation.

model_arguments

Optional dictionary. Name-value pairs of model arguments that can be customized by the client.

Example:

{“name1”:”value1”, “name2”: “value2”}

batch_size

Optional int. The number of training samples to be processed for training at one time. If the server has a powerful GPU, this number can be increased to 16, 36, 64, and so on.

Example:

4

max_epochs

Optional int. The maximum number of epochs for which the model will be trained. One epoch means the whole training dataset will be passed forward and backward through the deep neural network once.

Example:

20

learning_rate

Optional float. The rate at which the weights are updated during the training. It is a small positive value in the range between 0.0 and 1.0. If the learning rate is set to 0, the optimal learning rate will be extracted from the learning curve during the training process.

Example:

0.0

backbone_model

Optional string. Specifies the preconfigured neural network to be used as the architecture for training the new model. Possible values: DENSENET121, DENSENET161, DENSENET169, DENSENET201, MOBILENET_V2, RESNET18, RESNET34, RESNET50, RESNET101, RESNET152, VGG11, VGG11_BN, VGG13, VGG13_BN, VGG16, VGG16_BN, VGG19, VGG19_BN, DARKNET53, REID_V1, REID_V2

Example:

RESNET34

validation_percent

Optional float. The percentage (in %) of training sample data that will be used for validating the model.

Example:

10

pretrained_model

Optional dlpk portal item.

The pretrained model to be used for fine-tuning the new model. It is a deep learning model package (.dlpk) portal item.

stop_training

Optional bool. Specifies whether early stopping will be implemented.

  • True - The model training will stop when the model is no longer improving, regardless of the maximum epochs specified. This is the default.

  • False - The model training will continue until the maximum epochs is reached.

freeze_model

Optional bool. Specifies whether to freeze the backbone layers in the pretrained model, so that the weights and biases in the backbone layers remain unchanged.

  • True - The predefined weights and biases will not be altered in the backbone_model. This is the default.

  • False - The weights and biases of the backbone_model may be altered to better fit your training samples. This may take more time to process but usually produces better results.

overwrite_model

Optional bool. Overwrites an existing deep learning model package (.dlpk) portal item with the same name.

If the output_name parameter uses the file share data store path, this overwrite_model parameter is not applied.

  • True - The portal .dlpk item will be overwritten.

  • False - The portal .dlpk item will not be overwritten. This is the default.

output_name

Optional string. The trained deep learning model package can either be added as an item to the portal or written to a data store.

To add as an item, specify the name of the output deep learning model package (item) to be created.

Example -

“trainedModel”

To write the dlpk to a file share data store, specify the data store path.

Example -

“/fileShares/filesharename/folder”

context

Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • cellSize - Sets the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Example -

{“processorType” : “CPU”}

Setting the context parameter will override the values set using the arcgis.env variable for this particular function.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

Returns:

Returns the dlpk portal item that has properties for title, type, filename, file, id and folderId.
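
The parameters above combine into a single call. A minimal end-to-end sketch, again assuming the function documented here is arcgis.learn.train_model (not named in this excerpt); the portal URL, credentials, folder path, and argument values are illustrative only:

from arcgis.gis import GIS
from arcgis.learn import train_model

gis = GIS("https://yourportal.domain.com/portal", "username", "password")

model_item = train_model(
    input_folder="/rasterStores/yourRasterStoreFolderName/trainingSampleData",
    model_type="PSPNET",                             # pixel classification
    model_arguments={"PYRAMID_SIZES": "[1,2,3,6]"},  # hypothetical name-value pair
    batch_size=4,
    max_epochs=20,
    learning_rate=0.0,                               # 0 lets the tool extract an optimal rate
    backbone_model="RESNET34",
    validation_percent=10,
    stop_training=True,                              # stop early when no longer improving
    freeze_model=True,                               # keep backbone weights unchanged
    overwrite_model=False,
    output_name="trainedModel",                      # or a data store path such as "/fileShares/filesharename/folder"
    context={"processorType": "GPU", "parallelProcessingFactor": "80%"},
    gis=gis,
)

# The returned .dlpk portal item exposes properties such as title, type, id, and folderId.
print(model_item.id)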

AIServiceConnection

class arcgis.learn.AIServiceConnection(connection_file_path)

Provides helper methods to read and access AI Service Connection Files.

Parameter

Description

connection_file_path

Required String. Path to the AI Service Connection File.

Returns:

AIServiceConnection Object

get_dict()

Returns a dictionary representation of the object with all the connection properties.
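
A minimal sketch of reading a connection file and inspecting its properties; the file path below is a placeholder:

from arcgis.learn import AIServiceConnection

# Open an AI Service Connection File saved on disk (placeholder path).
conn = AIServiceConnection(r"C:\connections\my_ai_service_connection")

# Inspect all connection properties as a plain dictionary.
props = conn.get_dict()
print(props)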
