New Models
DPT
The DPT model adapts the Vision Transformer (ViT) architecture for dense prediction tasks like semantic segmentation. It uses a ViT as a powerful backbone, processing image information with a global receptive field at each stage. The key innovation lies in its decoder, which reassembles token representations from various transformer stages into image-like feature maps at different resolutions. These are progressively combined using convolutional PSP and FPN blocks to produce full-resolution, high-detail predictions.
The model in `smp` can be used with a wide variety of transformer-based encoders:
```python
import segmentation_models_pytorch as smp

# initialize with your own pretrained encoder
model = smp.DPT("tu-mobilevitv2_175.cvnets_in1k", classes=2)

# load fully-pretrained on ADE20K
model = smp.from_pretrained("smp-hub/dpt-large-ade20k")

# load the same checkpoint for finetuning
model = smp.from_pretrained("smp-hub/dpt-large-ade20k", classes=1, strict=False)
```
The full table of DPT's supported `timm` encoders can be found here.
- Adding DPT by @vedantdalimkar in #1079
Models export
A lot of work was done to add support for `torch.jit.script`, `torch.compile` (without graph breaks: `fullgraph=True`) and `torch.export` features in all encoders and models.
This provides several advantages:
- `torch.jit.script`: serializes models into a static graph format, enabling deployment in environments without a Python interpreter and allowing graph-based optimizations.
- `torch.compile` (with `fullgraph=True`): leverages Just-In-Time (JIT) compilation (e.g., via the Triton or Inductor backends) to generate optimized kernels, reducing Python overhead and enabling significant performance improvements through techniques like operator fusion, especially on GPU hardware. `fullgraph=True` minimizes graph breaks, maximizing the scope of these optimizations.
- `torch.export`: produces a standardized Ahead-Of-Time (AOT) graph representation, simplifying export to various inference backends and edge devices (e.g., through ExecuTorch) while preserving model dynamism where possible.
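All three paths work on any SMP model. A minimal sketch (the `Unet`/`resnet34` combination, input shape, and variable names are illustrative choices, not part of the release):

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet("resnet34", classes=2).eval()
sample = torch.rand(1, 3, 256, 256)

# TorchScript: serialize the model to a static graph for Python-free deployment
scripted = torch.jit.script(model)

# torch.compile: JIT-compile the whole model as a single graph (no graph breaks)
compiled = torch.compile(model, fullgraph=True)

# torch.export: produce an AOT graph for downstream inference backends
exported = torch.export.export(model, (sample,))
```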
PRs:
- Fix torch compile, script, export by @qubvel in #1031
- Fix Efficientnet encoder for torchscript by @qubvel in #1037
Core
All encoders from third-party libraries such as `efficientnet-pytorch` and `pretrainedmodels.pytorch` are now vendored by SMP. This means we have copied and refactored the underlying code and moved all checkpoints to the smp-hub. As a result, you will have fewer additional dependencies when installing `smp` and get much faster weights downloads.
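Nothing changes on the user side: a previously third-party encoder now resolves entirely within `smp`. A quick sketch (encoder name and weights tag are illustrative):

```python
import segmentation_models_pytorch as smp

# The efficientnet-b0 encoder now ships with smp itself: no
# efficientnet-pytorch dependency, and weights download from smp-hub
model = smp.Unet("efficientnet-b0", encoder_weights="imagenet", classes=2)
```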
- Move encoders weights to HF-Hub by @qubvel in #1035
- Vendor pretrainedmodels by @adamjstewart in #1039
- Vendor efficientnet-pytorch by @adamjstewart in #1036
🚨🚨🚨 Breaking changes
The UperNet model was significantly changed to reflect the original implementation and to bring pretrained checkpoints into SMP. Unfortunately, UperNet model weights trained with v0.4.0 will not be compatible with SMP v0.5.0.
While the high-level API for modeling should be backward compatible with v0.4.0, internal modules (such as encoders, decoders, blocks) might have changed initialization and forward interfaces.
`timm-` prefixed encoders are deprecated; `tu-` variants are now the recommended way to use encoders from the `timm` library. Most of the `timm-` encoders are internally switched to their `tu-` equivalents with state_dict re-mapping (backward-compatible), but this support will be dropped in upcoming versions.
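Migration is usually a one-line change. A hedged sketch (encoder ids are illustrative; look up the exact `timm` model name you need):

```python
import segmentation_models_pytorch as smp

# Deprecated: timm- prefixed encoder, still remapped internally for now
model = smp.Unet("timm-efficientnet-b0", classes=2)

# Recommended: tu- prefix, resolved directly through the timm library
model = smp.Unet("tu-efficientnet_b0", classes=2)
```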
Other changes
- Enable any resolution for Unet by @qubvel in #1029
- Update README.md by @qubvel in #1046
- Add binary segmentation example using cpu by @omidvarnia in #1057
- Load model with mismatched sizes by @qubvel in #1107
- Deprecate use_batchnorm in favor of generalized use_norm parameter by @GuillaumeErhard in #1095
- Extend usage of interpolation_mode to MAnet / UnetPlusPlus / FPN and align PAN by @GuillaumeErhard in #1108
- Fix cls token slicing for DPT by @qubvel in #1121
- Add upsampling parameter #1106 by @DCalhas in #1123
- Fix #1125 by @Fede1995 in #1126
New Contributors
- @omidvarnia made their first contribution in #1057
- @GuillaumeErhard made their first contribution in #1095
- @kocabiyik made their first contribution in #1113
- @vedantdalimkar made their first contribution in #1079
- @DCalhas made their first contribution in #1123
- @Fede1995 made their first contribution in #1126
Full Changelog: v0.4.0...v0.5.0