Tensorflow implementations of ConvNeXt V1 + V2 models w/ weights, including conversion and evaluation scripts.

# convnext-v2_tensorflow


The ConvNeXt family of models is a series of convolutional neural networks (ConvNets) that achieve state-of-the-art results on image classification benchmarks and can readily be applied to many other image-based tasks. For more information on these models, refer to the original ConvNeXt V1 and V2 papers.

This repo contains unofficial implementations of both versions of ConvNeXt in TensorFlow. They are close ports of the official PyTorch implementations (ConvNeXt V1 & ConvNeXt V2), with minor changes to suit Keras' functional API. Of note, these implementations:

  • Can accept custom input shapes and input tensors
  • Are capable of mixed precision use
  • Include model configurations (`atto`, `femto`, `pico`, `nano`) for ConvNeXt V1 matching those found in the official `pytorch-image-models` library.
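Mixed precision use relies on Keras' global dtype policy, which must be set before a model is constructed. A minimal sketch of how that works (the `Dense` layer below is a stand-in for illustration, not part of this repo; a `convnext_tf` model built after the policy is set would behave the same way):

```python
from tensorflow import keras

# Enable mixed precision globally: layers compute in float16
# while keeping their variables in float32 for numerical stability.
keras.mixed_precision.set_global_policy('mixed_float16')

# Any model built after this point (e.g. convnext_v1.convnext_tiny(...))
# picks up the policy. A stand-in layer to illustrate:
layer = keras.layers.Dense(8)
print(layer.compute_dtype)  # float16
print(layer.dtype)          # float32 (variable dtype)
```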

Weights are currently available for the official ConvNeXt V1 configurations; ConvNeXt V2 weights have not yet been converted.

## Usage

For installation of TensorFlow and Keras, please refer to the official TensorFlow installation guide. Once TensorFlow is installed, you can use the models like so:

```python
# Construct 'tiny' model with standard ImageNet resolution
from convnext_tf import convnext_v1

model = convnext_v1.convnext_tiny(input_shape=(224, 224, 3))
```
```python
# Construct the model with pre-processing layers
from tensorflow import keras
from convnext_tf import convnext_v1

inputs = keras.Input((224, 224, 3))
input_tensor = keras.layers.Rescaling(1 / 255)(inputs)  # Rescale inputs to 0 - 1
cxn_tiny = convnext_v1.convnext_tiny(input_tensor=input_tensor)
model = keras.Model(inputs, cxn_tiny.output)
```
```python
# Construct 'tiny' model with custom resolution
from convnext_tf import convnext_v1

model = convnext_v1.convnext_tiny(input_shape=(512, 512, 3))
```
```python
# Use imagenet weights to load model
from convnext_tf import convnext_v1

# Load weights trained on ImageNet-1k dataset
model = convnext_v1.convnext_tiny(weights='imagenet_1k')

# Load weights trained on ImageNet-22k dataset and fine-tuned on ImageNet-1k dataset
model = convnext_v1.convnext_tiny(weights='imagenet_22k')
```
```python
# Use custom classification head
from convnext_tf import convnext_v1
from tensorflow import keras

inputs = keras.Input((224, 224, 3))
cxn_tiny = convnext_v1.convnext_tiny(input_tensor=inputs, include_top=False)
x = cxn_tiny.output
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dense(128, activation='relu')(x)
x = keras.layers.Dense(32, activation='relu')(x)
outputs = keras.layers.Dense(4, activation='sigmoid')(x)
model = keras.Model(inputs, outputs)
```

## Conversion and Evaluation

Conversion scripts are `convert_weights_v1.py` and `convert_weights_v2.py` for ConvNeXt V1 and V2 models respectively. Evaluation code can be found in `eval_1k.py`. Note that the official validation ground-truth labels found in the dev kit on ImageNet's website are incorrect; instead, download the bounding-box validation dataset from their website. The XML files in that dataset include the correct ground-truth labels, which can be used for evaluation.
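Extracting a label from one of those XML files only takes a few lines with the standard library. This is a sketch, not the code in `eval_1k.py`; it assumes the ILSVRC bounding-box annotation layout, where the root `<annotation>` element contains `<object>` children whose `<name>` holds the WordNet synset ID:

```python
import xml.etree.ElementTree as ET

def label_from_annotation(xml_text):
    """Return the ground-truth synset ID from one ILSVRC validation XML file.

    Assumes the bounding-box annotation layout: an <annotation> root
    containing <object> elements whose <name> child is the synset ID.
    """
    root = ET.fromstring(xml_text)
    obj = root.find('object')
    if obj is None:
        raise ValueError('no <object> element in annotation')
    return obj.findtext('name')

# Minimal example annotation with the assumed structure
sample = """
<annotation>
  <filename>ILSVRC2012_val_00000001</filename>
  <object><name>n01751748</name></object>
</annotation>
"""
print(label_from_annotation(sample))  # → n01751748
```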

## Results and pre-trained models

These are comparison results between the PyTorch and TensorFlow implementations, evaluated on the ImageNet validation dataset. Converted weights for use in TensorFlow are linked in the `model` columns.
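The acc@1 figures below are standard top-1 accuracy: a prediction counts as correct when the highest-scoring class matches the ground-truth label. A minimal illustration of the metric (not the actual computation in `eval_1k.py`):

```python
def top1_accuracy(logits, labels):
    """logits: list of per-class score lists; labels: list of int class IDs."""
    correct = sum(
        max(range(len(scores)), key=scores.__getitem__) == label
        for scores, label in zip(logits, labels)
    )
    return correct / len(labels)

# Three predictions whose argmax classes are 1, 0, 2 — all match the labels
logits = [[0.1, 0.7, 0.2], [0.9, 0.05, 0.05], [0.2, 0.3, 0.5]]
labels = [1, 0, 2]
print(top1_accuracy(logits, labels))  # → 1.0
```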

### ImageNet-1K trained models

| name | resolution | PyTorch acc@1 | TensorFlow acc@1 | #params | FLOPs | model |
| --- | --- | --- | --- | --- | --- | --- |
| ConvNeXt-T | 224x224 | 82.1 | 81.3 | 28M | 4.5G | model |
| ConvNeXt-S | 224x224 | 83.1 | 82.4 | 50M | 8.7G | model |
| ConvNeXt-B | 224x224 | 83.8 | 83.3 | 89M | 15.4G | model |
| ConvNeXt-B | 384x384 | 85.1 | 84.9 | 89M | 45.0G | model |
| ConvNeXt-L | 224x224 | 84.3 | 83.9 | 198M | 34.4G | model |
| ConvNeXt-L | 384x384 | 85.5 | 85.4 | 198M | 101.0G | model |
### ImageNet-22K trained models

| name | resolution | PyTorch acc@1 | TensorFlow acc@1 | #params | FLOPs | model |
| --- | --- | --- | --- | --- | --- | --- |
| ConvNeXt-T | 224x224 | 82.9 | 82.3 | 29M | 4.5G | model |
| ConvNeXt-T | 384x384 | 84.1 | 84.0 | 29M | 13.1G | model |
| ConvNeXt-S | 224x224 | 84.6 | 84.1 | 50M | 8.7G | model |
| ConvNeXt-S | 384x384 | 85.8 | 85.8 | 50M | 25.5G | model |
| ConvNeXt-B | 224x224 | 85.8 | 85.4 | 89M | 15.4G | model |
| ConvNeXt-B | 384x384 | 86.8 | 86.8 | 89M | 47.0G | model |
| ConvNeXt-L | 224x224 | 86.6 | 86.4 | 198M | 34.4G | model |
| ConvNeXt-L | 384x384 | 87.5 | 87.5 | 198M | 101.0G | model |
| ConvNeXt-XL | 224x224 | 87.0 | 86.8 | 350M | 60.9G | model |
| ConvNeXt-XL | 384x384 | 87.8 | 87.7 | 350M | 179.0G | model |

## Acknowledgements and References

