| EfficientNet | |
|---|---|
| Developer | Google AI |
| Initial release | May 2019 |
| Repository | github |
| Written in | Python |
| License | Apache License 2.0 |
| Website | Google AI Blog |
EfficientNet is a family of convolutional neural networks (CNNs) for computer vision published by researchers at Google AI in 2019.[1] Its key innovation is compound scaling, which uniformly scales network depth, width, and input resolution using a single coefficient.
EfficientNet models have been adopted in various computer vision tasks, including image classification, object detection, and segmentation.
EfficientNet introduces compound scaling, which, instead of scaling one dimension of the network at a time, such as depth (number of layers), width (number of channels), or resolution (input image size), uses a compound coefficient $\phi$ to scale all three dimensions simultaneously. Specifically, given a baseline network, the depth, width, and resolution are scaled according to the following equations:[1]

$$d = \alpha^{\phi}, \quad w = \beta^{\phi}, \quad r = \gamma^{\phi}$$

subject to $\alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2$ and $\alpha \geq 1,\ \beta \geq 1,\ \gamma \geq 1$. The condition is such that increasing $\phi$ by $k$ would increase the total FLOPs of running the network on an image approximately $2^{k}$ times. The hyperparameters $\alpha$, $\beta$, and $\gamma$ are determined by a small grid search. The original paper suggested 1.2, 1.1, and 1.15, respectively.
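The scaling rule above can be checked numerically. A minimal sketch, using the paper's grid-searched values for $\alpha$, $\beta$, and $\gamma$ (the function names here are illustrative, not from Google's code):

```python
# Compound scaling with the EfficientNet paper's grid-searched values:
# alpha = 1.2 (depth), beta = 1.1 (width), gamma = 1.15 (resolution).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

def flops_multiplier(phi):
    """FLOPs of a CNN grow roughly as depth * width^2 * resolution^2."""
    d, w, r = compound_scale(phi)
    return d * w * w * r * r

# The constraint alpha * beta^2 * gamma^2 ~= 2 means each unit increase
# in phi roughly doubles FLOPs.
print(ALPHA * BETA ** 2 * GAMMA ** 2)   # ~1.92, close to 2
print(flops_multiplier(2))              # ~ (2^2) = ~3.69 with these values
```

Because FLOPs scale multiplicatively in each dimension, the per-unit FLOPs growth is exactly $(\alpha \beta^2 \gamma^2)^{\phi}$, which the constraint pins near $2^{\phi}$.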
Architecturally, the researchers optimized the choice of modules by neural architecture search (NAS), and found that the inverted bottleneck convolution (which they called MBConv), used in MobileNet, worked well.
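The inverted bottleneck first expands the channel count, applies a cheap depthwise convolution, then projects back down. A rough shape-tracing sketch (parameter names and the helper itself are illustrative, not Google's implementation):

```python
# Illustrative shape trace through an MBConv (inverted bottleneck) block:
# 1x1 expansion -> kxk depthwise conv -> 1x1 projection, with a residual
# connection when stride is 1 and input/output channels match.

def mbconv_shapes(c_in, c_out, t=6, k=3, s=1, hw=112):
    """Return the (name, channels, spatial size) after each stage,
    plus whether a residual (skip) connection applies."""
    stages = []
    c_mid = c_in * t                               # 1x1 conv expands channels
    stages.append(("expand_1x1", c_mid, hw))
    hw_out = hw // s                               # depthwise kxk conv, per channel
    stages.append((f"depthwise_{k}x{k}", c_mid, hw_out))
    stages.append(("project_1x1", c_out, hw_out))  # 1x1 conv projects back down
    residual = (s == 1 and c_in == c_out)          # skip only if shapes match
    return stages, residual

print(mbconv_shapes(16, 16))  # expansion to 96 channels, residual applies
```

The expansion factor `t` (commonly 6 in MobileNetV2-style blocks) is why the block is called "inverted": the wide layer sits in the middle rather than at the ends.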
The EfficientNet family is a stack of MBConv layers, with shapes determined by the compound scaling. The original publication consisted of 8 models, from EfficientNet-B0 to EfficientNet-B7, with increasing model size and accuracy. EfficientNet-B0 is the baseline network, and subsequent models are obtained by scaling the baseline network with increasing $\phi$.
EfficientNet has been adapted via NAS for fast inference on edge TPUs[2] and on centralized TPU or GPU clusters.[3]
EfficientNet V2 was published in June 2021. The architecture was improved by further NAS with more types of convolutional layers.[4] It also introduced a training method that progressively increases image size during training, and uses regularization techniques like dropout, RandAugment,[5] and Mixup.[6] The authors claim this approach mitigates accuracy drops often associated with progressive resizing.
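The idea of progressive learning is that both image size and regularization strength ramp up together over training. A minimal sketch of such a schedule, with illustrative (not the paper's exact) stage counts and ranges:

```python
# Sketch of an EfficientNetV2-style progressive schedule: image size and
# regularization (here, a dropout rate) increase linearly across training
# stages. The ranges and stage count below are illustrative assumptions.

def progressive_schedule(stage, n_stages=4,
                         size_range=(128, 300),
                         dropout_range=(0.1, 0.3)):
    """Return (image_size, dropout_rate) for training stage `stage`."""
    frac = stage / (n_stages - 1)          # 0.0 at first stage, 1.0 at last
    lo_s, hi_s = size_range
    lo_d, hi_d = dropout_range
    image_size = int(lo_s + frac * (hi_s - lo_s))
    dropout = lo_d + frac * (hi_d - lo_d)
    return image_size, dropout

for stage in range(4):
    print(stage, progressive_schedule(stage))
```

Pairing small images with weak regularization early, and large images with strong regularization late, is what the V2 authors argue avoids the accuracy drop of naively resizing mid-training.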