Releases: huggingface/pytorch-image-models

Release v1.0.17

July 7, 2025

  • MobileNet-v5 backbone tweaks for improved Google Gemma 3n behaviour (to pair with updated official weights)
    • Add stem bias (zeroed in updated weights; a compatibility break with old weights)
    • Switch GELU to GELU (tanh approximation), a minor change to more closely match JAX behaviour
  • Add two arguments to layer-decay support: a min scale clamp and a 'no optimization' scale threshold
  • Add 'Fp32' LayerNorm, RMSNorm, and SimpleNorm variants that can be enabled to force computation of the norm in float32
  • Some typing and argument cleanup for norm and norm+act layers, done alongside the above
  • Support Naver ROPE-ViT (https://github.com/naver-ai/rope-vit) in eva.py, add RotaryEmbeddingMixed module for mixed mode, weights on the HuggingFace Hub (see the usage sketch after this list)
| model | img_size | top1 | top5 | param_count (M) |
|---|---|---|---|---|
| vit_large_patch16_rope_mixed_ape_224.naver_in1k | 224 | 84.84 | 97.122 | 304.4 |
| vit_large_patch16_rope_mixed_224.naver_in1k | 224 | 84.828 | 97.116 | 304.2 |
| vit_large_patch16_rope_ape_224.naver_in1k | 224 | 84.65 | 97.154 | 304.37 |
| vit_large_patch16_rope_224.naver_in1k | 224 | 84.648 | 97.122 | 304.17 |
| vit_base_patch16_rope_mixed_ape_224.naver_in1k | 224 | 83.894 | 96.754 | 86.59 |
| vit_base_patch16_rope_mixed_224.naver_in1k | 224 | 83.804 | 96.712 | 86.44 |
| vit_base_patch16_rope_ape_224.naver_in1k | 224 | 83.782 | 96.61 | 86.59 |
| vit_base_patch16_rope_224.naver_in1k | 224 | 83.718 | 96.672 | 86.43 |
| vit_small_patch16_rope_224.naver_in1k | 224 | 81.23 | 95.022 | 21.98 |
| vit_small_patch16_rope_mixed_224.naver_in1k | 224 | 81.216 | 95.022 | 21.99 |
| vit_small_patch16_rope_ape_224.naver_in1k | 224 | 81.004 | 95.016 | 22.06 |
| vit_small_patch16_rope_mixed_ape_224.naver_in1k | 224 | 80.986 | 94.976 | 22.06 |
  • Some cleanup of ROPE modules, helpers, and FX tracing leaf registration
  • Preparing version 1.0.17 release
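A minimal usage sketch for the ROPE-ViT weights above: the model name comes from the table, everything else is standard timm factory and data-config API.

```python
# Minimal sketch: load one of the Naver ROPE-ViT checkpoints from the table
# above via the standard timm factory and run a dummy forward pass.
import timm
import torch

model = timm.create_model('vit_small_patch16_rope_mixed_224.naver_in1k', pretrained=True)
model.eval()

# The checkpoint's preprocessing (input size, mean/std) resolves as usual:
data_cfg = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_cfg, is_training=False)

with torch.inference_mode():
    logits = model(torch.randn(1, 3, 224, 224))  # stand-in for a transformed image
print(logits.shape)  # torch.Size([1, 1000]) for the ImageNet-1k head
```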


Full Changelog: v1.0.16...v1.0.17

Contributors

  • @RyanMullins
  • @rwightman
  • @GuillaumeErhard
  • @robin-ede

Release v1.0.16


June 26, 2025

  • MobileNetV5 backbone (w/ encoder-only variant) for Gemma 3n image encoder
  • Version 1.0.16 released

June 23, 2025

  • Add F.grid_sample based 2D and factorized pos embed resize to NaFlexViT. Faster when many different image sizes are in use (based on an example by https://github.com/stas-sl).
  • Further speed up patch embed resample by replacing vmap with matmul (based on a snippet by https://github.com/stas-sl).
  • Add 3 initial native-aspect NaFlexViT checkpoints created while testing, trained on ImageNet-1k with 3 different pos embed configs and otherwise identical hparams (see the loading sketch after this list).
| Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len |
|---|---|---|---|---|
| naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 |
| naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 |
| naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 |
  • Support gradient checkpointing for forward_intermediates and fix some checkpointing bugs. Thanks https://github.com/brianhou0208
  • Add 'corrected weight decay' (https://arxiv.org/abs/2506.02285) as an option to the AdamW (legacy), Adopt, Kron, Adafactor (BV), Lamb, LaProp, Lion, NadamW, RmsPropTF, and SGDW optimizers
  • Switch PE (Perception Encoder) ViT models to use native timm weights instead of remapping on the fly
  • Fix CUDA stream bug in the prefetch loader
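A loading sketch for one of the native NaFlexViT checkpoints listed above; it assumes plain 4D image tensors work outside the NaFlex dict pipeline, with the input size taken from the resolved data config rather than guessed.

```python
# Sketch: load one of the native NaFlexViT checkpoints from the table above.
import timm
import torch

model = timm.create_model('naflexvit_base_patch16_par_gap.e300_s576_in1k', pretrained=True)
model.eval()

data_cfg = timm.data.resolve_model_data_config(model)
_, h, w = data_cfg['input_size']
with torch.inference_mode():
    logits = model(torch.randn(1, 3, h, w))  # plain image tensor input
print(logits.shape)
```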

June 5, 2025

  • Initial NaFlexVit model code. NaFlexVit is a Vision Transformer with:
    1. Encapsulated embedding and position encoding in a single module
    2. Support for nn.Linear patch embedding on pre-patchified (dictionary) inputs
    3. Support for NaFlex variable aspect, variable resolution (SigLip-2: https://arxiv.org/abs/2502.14786)
    4. Support for FlexiViT variable patch size (https://arxiv.org/abs/2212.08013)
    5. Support for NaViT fractional/factorized position embedding (https://arxiv.org/abs/2307.06304)
  • Existing vit models in vision_transformer.py can be loaded into the NaFlexVit model by adding the use_naflex=True flag to create_model (see the sketch after this list)
    • Some native weights coming soon
  • A full NaFlex data pipeline is available that allows training / fine-tuning / evaluating with variable aspect / size images
    • To enable in train.py and validate.py, add the --naflex-loader arg; it must be used with a NaFlexVit
  • To evaluate an existing (classic) ViT loaded into the NaFlexVit model w/ the NaFlex data pipe:
    • python validate.py /imagenet --amp -j 8 --model vit_base_patch16_224 --model-kwargs use_naflex=True --naflex-loader --naflex-max-seq-len 256
  • Training has some extra args worth noting:
    • The --naflex-train-seq-lens argument specifies which sequence lengths to randomly pick from per batch during training
    • The --naflex-max-seq-len argument sets the target sequence length for validation
    • Adding --model-kwargs enable_patch_interpolator=True --naflex-patch-sizes 12 16 24 will enable random patch size selection per batch w/ interpolation
    • The --naflex-loss-scale arg changes the loss scaling mode per batch relative to the batch size; timm NaFlex loading changes the batch size for each seq len
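In Python, the classic-ViT-in-NaFlexVit loading used by the validate.py command above looks roughly like this; only the use_naflex flag comes from the notes, the rest is standard timm usage.

```python
# Sketch: load an existing classic ViT's weights into the NaFlexVit
# implementation via the use_naflex=True create_model flag described above.
import timm

model = timm.create_model(
    'vit_base_patch16_224',
    pretrained=True,
    use_naflex=True,  # route construction through NaFlexVit
)
print(type(model).__name__)
```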


Full Changelog: v1.0.15...v1.0.16

Contributors

  • @rwightman
  • @amorehead
  • @brianhou0208
  • @yutong-xiang-97
  • @emmanuel-ferdman
  • @sddongxh
  • @atharva-pathak
  • @ryan-caesar-ramos

Release v1.0.15


Feb 21, 2025

  • SigLIP 2 ViT image encoders added (https://huggingface.co/collections/timm/siglip-2-67b8e72ba08b09dd97aecaf9)
    • Variable resolution / aspect NaFlex versions are a WIP
  • Add 'SO150M2' ViT weights trained with SBB recipes; great results, a better fit for ImageNet than the previous attempt, with less training.
    • vit_so150m2_patch16_reg1_gap_448.sbb_e200_in12k_ft_in1k - 88.1% top-1
    • vit_so150m2_patch16_reg1_gap_384.sbb_e200_in12k_ft_in1k - 87.9% top-1
    • vit_so150m2_patch16_reg1_gap_256.sbb_e200_in12k_ft_in1k - 87.3% top-1
    • vit_so150m2_patch16_reg4_gap_256.sbb_e200_in12k
  • Updated InternViT-300M '2.5' weights
  • Release 1.0.15

Feb 1, 2025

  • FYI PyTorch 2.6 & Python 3.13 are tested and working w/ current main and the released version of timm


Full Changelog: v1.0.14...v1.0.15

Contributors

  • @collinmccarthy
  • @rwightman
  • @adamjstewart
  • @brianhou0208
  • @ClashLuke
  • @JosuaRieder

Release v1.0.14


Jan 19, 2025

  • Fix loading of LeViT safetensor weights, remove conversion code which should have been deactivated
  • Add 'SO150M' ViT weights trained with SBB recipes, decent results, but not optimal shape for ImageNet-12k/1k pretrain/ft
    • vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k_ft_in1k - 86.7% top-1
    • vit_so150m_patch16_reg4_gap_384.sbb_e250_in12k_ft_in1k - 87.4% top-1
    • vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k
  • Misc typing, typo, etc. cleanup
  • 1.0.14 release to get the above LeViT fix out


Full Changelog: v1.0.13...v1.0.14

Contributors

  • @rwightman
  • @adamjstewart
  • @JosuaRieder

Release v1.0.13


Jan 9, 2025

  • Add support to train and validate in pure bfloat16 or float16
  • wandb project name arg added by https://github.com/caojiaolong, use args.experiment for the run name
  • Fix old issue w/ checkpoint saving not working on filesystems w/o hard-link support (e.g. FUSE fs mounts)
  • 1.0.13 release

Jan 6, 2025

  • Add torch.utils.checkpoint.checkpoint() wrapper in timm.models that defaults use_reentrant=False, unless TIMM_REENTRANT_CKPT=1 is set in env.
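A minimal sketch of the described behavior (illustrative only, not the actual timm.models helper):

```python
# Illustrative sketch: a checkpoint() wrapper that defaults
# use_reentrant=False unless TIMM_REENTRANT_CKPT=1 is set in the environment.
import os
import torch.utils.checkpoint as torch_ckpt

_USE_REENTRANT = os.environ.get('TIMM_REENTRANT_CKPT', '0') == '1'

def checkpoint(function, *args, **kwargs):
    kwargs.setdefault('use_reentrant', _USE_REENTRANT)
    return torch_ckpt.checkpoint(function, *args, **kwargs)
```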


Full Changelog: v1.0.12...v1.0.13

Contributors

  • @rwightman
  • @grodino
  • @laclouis5
  • @brianhou0208
  • @ruidazeng
  • @ariG23498

Release v1.0.12


Nov 12, 2024

  • Optimizer factory refactor
    • New factory works by registering optimizers using an OptimInfo dataclass w/ some key traits
    • Add list_optimizers, get_optimizer_class, get_optimizer_info alongside the reworked create_optimizer_v2 fn to explore optimizers, get info or class (see the sketch after this list)
    • Deprecate optim.optim_factory, move fns to optim/_optim_factory.py and optim/_param_groups.py, and encourage import via timm.optim
  • Add Adopt (https://github.com/iShohei220/adopt) optimizer
  • Add 'Big Vision' variant of Adafactor (https://github.com/google-research/big_vision/blob/main/big_vision/optax.py) optimizer
  • Fix original Adafactor to pick better factorization dims for convolutions
  • Tweak LAMB optimizer, taking advantage of improvements in torch.where since the original implementation; refactor clipping a bit
  • Improve dynamic img size support in vit, deit, eva to support resizing from non-square patch grids, thanks https://github.com/wojtke
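A short sketch of the new factory surface named above; the function names come from the notes, the arguments are standard create_optimizer_v2 usage.

```python
# Sketch: explore registered optimizers and build one via the reworked factory.
import torch.nn as nn
from timm.optim import list_optimizers, get_optimizer_class, create_optimizer_v2

print(list_optimizers()[:10])        # registered optimizer names
print(get_optimizer_class('adopt'))  # class for the newly added Adopt optimizer

model = nn.Linear(16, 16)
optimizer = create_optimizer_v2(model, opt='adamw', lr=1e-3, weight_decay=0.05)
```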

Oct 31, 2024

Add a set of new very well trained ResNet & ResNet-V2 18/34 (basic block) weights. See https://huggingface.co/blog/rwightman/resnet-trick-or-treat

Oct 19, 2024

  • Cleanup torch amp usage to avoid cuda-specific calls, merge support for Ascend (NPU) devices from MengqingCao that should work now in PyTorch 2.5 w/ the new device extension autoloading feature. Tested Intel Arc (XPU) in PyTorch 2.5 too and it (mostly) worked.
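The device-agnostic pattern this points at, roughly (plain PyTorch, not timm internals):

```python
# Sketch of device-agnostic AMP: torch.autocast with an explicit device_type
# instead of cuda-specific torch.cuda.amp calls.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
amp_dtype = torch.bfloat16 if device.type == 'cpu' else torch.float16
with torch.autocast(device_type=device.type, dtype=amp_dtype):
    y = torch.randn(8, 8, device=device) @ torch.randn(8, 8, device=device)
print(y.dtype)
```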


Full Changelog: v1.0.11...v1.0.12

Contributors

  • @rwightman
  • @grodino
  • @JohannesTheo
  • @antoinebrl
  • @sinahmr
  • @mrT23
  • @AlinaImtiaz018
  • @NightMachinery
  • @MengqingCao
  • @wojtke
  • @JosuaRieder

v1.0.11 Release


Quick turnaround from 1.0.10 to fix an error impacting 3rd party packages that still import through a deprecated path that isn't tested.

Oct 14, 2024

  • Pre-activation (ResNetV2) version of 18/18d/34/34d ResNet model defs added by request (weights pending)
  • Release 1.0.10

Oct 11, 2024

  • MambaOut (https://github.com/yuweihao/MambaOut) model & weights added. A cheeky take on SSM vision models w/o the SSM (essentially ConvNeXt w/ gating). A mix of original weights + custom variations & weights (see the loading sketch after the table).
| model | img_size | top1 | top5 | param_count (M) |
|---|---|---|---|---|
| mambaout_base_plus_rw.sw_e150_r384_in12k_ft_in1k | 384 | 87.506 | 98.428 | 101.66 |
| mambaout_base_plus_rw.sw_e150_in12k_ft_in1k | 288 | 86.912 | 98.236 | 101.66 |
| mambaout_base_plus_rw.sw_e150_in12k_ft_in1k | 224 | 86.632 | 98.156 | 101.66 |
| mambaout_base_tall_rw.sw_e500_in1k | 288 | 84.974 | 97.332 | 86.48 |
| mambaout_base_wide_rw.sw_e500_in1k | 288 | 84.962 | 97.208 | 94.45 |
| mambaout_base_short_rw.sw_e500_in1k | 288 | 84.832 | 97.27 | 88.83 |
| mambaout_base.in1k | 288 | 84.72 | 96.93 | 84.81 |
| mambaout_small_rw.sw_e450_in1k | 288 | 84.598 | 97.098 | 48.5 |
| mambaout_small.in1k | 288 | 84.5 | 96.974 | 48.49 |
| mambaout_base_wide_rw.sw_e500_in1k | 224 | 84.454 | 96.864 | 94.45 |
| mambaout_base_tall_rw.sw_e500_in1k | 224 | 84.434 | 96.958 | 86.48 |
| mambaout_base_short_rw.sw_e500_in1k | 224 | 84.362 | 96.952 | 88.83 |
| mambaout_base.in1k | 224 | 84.168 | 96.68 | 84.81 |
| mambaout_small.in1k | 224 | 84.086 | 96.63 | 48.49 |
| mambaout_small_rw.sw_e450_in1k | 224 | 84.024 | 96.752 | 48.5 |
| mambaout_tiny.in1k | 288 | 83.448 | 96.538 | 26.55 |
| mambaout_tiny.in1k | 224 | 82.736 | 96.1 | 26.55 |
| mambaout_kobe.in1k | 288 | 81.054 | 95.718 | 9.14 |
| mambaout_kobe.in1k | 224 | 79.986 | 94.986 | 9.14 |
| mambaout_femto.in1k | 288 | 79.848 | 95.14 | 7.3 |
| mambaout_femto.in1k | 224 | 78.87 | 94.408 | 7.3 |
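Loading sketch for the weights above, using the standard timm factory; any model.tag name from the table works.

```python
# Sketch: list MambaOut variants with pretrained weights and load one.
import timm

print(timm.list_models('mambaout*', pretrained=True))
model = timm.create_model('mambaout_small.in1k', pretrained=True)
```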


Release v1.0.10


The notes for this tag (Oct 14 / Oct 11, 2024) are the same as those included in the v1.0.11 entry above.

Release v1.0.9


Aug 21, 2024

  • Updated SBB ViT models trained on ImageNet-12k and fine-tuned on ImageNet-1k, challenging quite a number of much larger, slower models
| model | top1 | top5 | param_count (M) | img_size |
|---|---|---|---|---|
| vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k | 87.438 | 98.256 | 64.11 | 384 |
| vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k | 86.608 | 97.934 | 64.11 | 256 |
| vit_betwixt_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k | 86.594 | 98.02 | 60.4 | 384 |
| vit_betwixt_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k | 85.734 | 97.61 | 60.4 | 256 |
  • MobileNet-V1 1.25, EfficientNet-B1, & ResNet50-D weights w/ MNV4 baseline challenge recipe
| model | top1 | top5 | param_count (M) | img_size |
|---|---|---|---|---|
| resnet50d.ra4_e3600_r224_in1k | 81.838 | 95.922 | 25.58 | 288 |
| efficientnet_b1.ra4_e3600_r240_in1k | 81.440 | 95.700 | 7.79 | 288 |
| resnet50d.ra4_e3600_r224_in1k | 80.952 | 95.384 | 25.58 | 224 |
| efficientnet_b1.ra4_e3600_r240_in1k | 80.406 | 95.152 | 7.79 | 240 |
| mobilenetv1_125.ra4_e3600_r224_in1k | 77.600 | 93.804 | 6.27 | 256 |
| mobilenetv1_125.ra4_e3600_r224_in1k | 76.924 | 93.234 | 6.27 | 224 |
  • Add SAM2 (HieraDet) backbone arch & weight loading support

  • Add Hiera Small weights trained w/ abswin pos embed on in12k & fine-tuned on 1k

| model | top1 | top5 | param_count (M) |
|---|---|---|---|
| hiera_small_abswin_256.sbb2_e200_in12k_ft_in1k | 84.912 | 97.260 | 35.01 |
| hiera_small_abswin_256.sbb2_pd_e200_in12k_ft_in1k | 84.560 | 97.106 | 35.01 |


Release v1.0.8


July 28, 2024

  • Add mobilenet_edgetpu_v2_m weights w/ ra4 mnv4-small based recipe. 80.1% top-1 @ 224 and 80.7% @ 256.
  • Release 1.0.8

July 26, 2024

  • More MobileNet-v4 weights, ImageNet-12k pretrain w/ fine-tunes, and anti-aliased ConvLarge models
| model | top1 | top1_err | top5 | top5_err | param_count (M) | img_size |
|---|---|---|---|---|---|---|
| mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.99 | 15.01 | 97.294 | 2.706 | 32.59 | 544 |
| mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.772 | 15.228 | 97.344 | 2.656 | 32.59 | 480 |
| mobilenetv4_conv_aa_large.e230_r448_in12k_ft_in1k | 84.64 | 15.36 | 97.114 | 2.886 | 32.59 | 448 |
| mobilenetv4_conv_aa_large.e230_r384_in12k_ft_in1k | 84.314 | 15.686 | 97.102 | 2.898 | 32.59 | 384 |
| mobilenetv4_conv_aa_large.e600_r384_in1k | 83.824 | 16.176 | 96.734 | 3.266 | 32.59 | 480 |
| mobilenetv4_conv_aa_large.e600_r384_in1k | 83.244 | 16.756 | 96.392 | 3.608 | 32.59 | 384 |
| mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.99 | 17.01 | 96.67 | 3.33 | 11.07 | 320 |
| mobilenetv4_hybrid_medium.e200_r256_in12k_ft_in1k | 82.364 | 17.636 | 96.256 | 3.744 | 11.07 | 256 |

| model | top1 | top1_err | top5 | top5_err | param_count (M) | img_size |
|---|---|---|---|---|---|---|
| efficientnet_b0.ra4_e3600_r224_in1k | 79.364 | 20.636 | 94.754 | 5.246 | 5.29 | 256 |
| efficientnet_b0.ra4_e3600_r224_in1k | 78.584 | 21.416 | 94.338 | 5.662 | 5.29 | 224 |
| mobilenetv1_100h.ra4_e3600_r224_in1k | 76.596 | 23.404 | 93.272 | 6.728 | 5.28 | 256 |
| mobilenetv1_100.ra4_e3600_r224_in1k | 76.094 | 23.906 | 93.004 | 6.996 | 4.23 | 256 |
| mobilenetv1_100h.ra4_e3600_r224_in1k | 75.662 | 24.338 | 92.504 | 7.496 | 5.28 | 224 |
| mobilenetv1_100.ra4_e3600_r224_in1k | 75.382 | 24.618 | 92.312 | 7.688 | 4.23 | 224 |
  • Prototype of set_input_size() added to vit and swin v1/v2 models to allow changing image size, patch size, window size after model creation (see the sketch after this list)
  • Improved support in swin for different size handling; in addition to set_input_size, always_partition and strict_img_size args have been added to __init__ to allow more flexible input size constraints
  • Fix out-of-order indices info for intermediate 'Getter' feature wrapper, check out-of-range indices for the same
  • Add several tiny < .5M param models for testing that are actually trained on ImageNet-1k
| model | top1 | top1_err | top5 | top5_err | param_count (M) | img_size | crop_pct |
|---|---|---|---|---|---|---|---|
| test_efficientnet.r160_in1k | 47.156 | 52.844 | 71.726 | 28.274 | 0.36 | 192 | 1.0 |
| test_byobnet.r160_in1k | 46.698 | 53.302 | 71.674 | 28.326 | 0.46 | 192 | 1.0 |
| test_efficientnet.r160_in1k | 46.426 | 53.574 | 70.928 | 29.072 | 0.36 | 160 | 0.875 |
| test_byobnet.r160_in1k | 45.378 | 54.622 | 70.572 | 29.428 | 0.46 | 160 | 0.875 |
| test_vit.r160_in1k | 42.0 | 58.0 | 68.664 | 31.336 | 0.37 | 192 | 1.0 |
| test_vit.r160_in1k | 40.822 | 59.178 | 67.212 | 32.788 | 0.37 | 160 | 0.875 |
  • Fix vit reg token init, thanks Promisery
  • Other misc fixes
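A rough sketch of the set_input_size() prototype mentioned above; the keyword name is an assumption, so treat it as illustrative rather than the exact API.

```python
# Sketch: change input size after creation via the set_input_size() prototype.
# The img_size keyword is an assumption; check the model defs for exact args.
import timm
import torch

model = timm.create_model('vit_base_patch16_224', pretrained=True)
model.set_input_size(img_size=320)  # re-build pos embed / patch grid for 320x320
with torch.inference_mode():
    logits = model(torch.randn(1, 3, 320, 320))
print(logits.shape)
```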

June 24, 2024

  • 3 more MobileNetV4 hybrid weights with a different MQA weight init scheme
| model | top1 | top1_err | top5 | top5_err | param_count (M) | img_size |
|---|---|---|---|---|---|---|
| mobilenetv4_hybrid_large.ix_e600_r384_in1k | 84.356 | 15.644 | 96.892 | 3.108 | 37.76 | 448 |
| mobilenetv4_hybrid_large.ix_e600_r384_in1k | 83.990 | 16.010 | 96.702 | 3.298 | 37.76 | 384 |
| mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 83.394 | 16.606 | 96.760 | 3.240 | 11.07 | 448 |
| mobilenetv4_hybrid_medium.ix_e550_r384_in1k | 82.968 | 17.032 | 96.474 | 3.526 | 11.07 | 384 |
| mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 82.492 | 17.508 | 96.278 | 3.722 | 11.07 | 320 |
| mobilenetv4_hybrid_medium.ix_e550_r256_in1k | 81.446 | 18.554 | 95.704 | 4.296 | 11.07 | 256 |
  • Florence2 weight loading in DaViT model