The fastai deep learning library
You can use fastai without any installation by using Google Colab. In fact, every page of this documentation is also available as an interactive notebook - click “Open in colab” at the top of any page to open it (be sure to change the Colab runtime to “GPU” to have it run fast!). See the fast.ai documentation on Using Colab for more information.
You can install fastai on your own machines with:

pip install fastai
If you plan to develop fastai yourself, or want to be on the cutting edge, you can use an editable install (if you do this, you should also use an editable install of fastcore to go with it). First install PyTorch, and then:
git clone https://github.com/fastai/fastai
pip install -e "fastai[dev]"

The best way to get started with fastai (and deep learning) is to read the book, and complete the free course.
To see what’s possible with fastai, take a look at the Quick Start, which shows how to use around 5 lines of code to build an image classifier, an image segmentation model, a text sentiment model, a recommendation system, and a tabular model. For each of the applications, the code is much the same.
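For example, an image classifier in the spirit of the Quick Start looks roughly like this (a sketch only: the dataset, labelling function, and hyperparameters here are illustrative, not the exact Quick Start code):

```python
from fastai.vision.all import *

# In the Oxford-IIIT Pets dataset, cat images have filenames starting
# with an uppercase letter, so the label can be read off the filename.
def is_cat(x): return x[0].isupper()

path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```

Switching to segmentation, text, collaborative filtering, or tabular data mostly means swapping the DataLoaders class and the learner constructor; the training calls stay much the same.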
Read through the Tutorials to learn how to train your own models on your own datasets. Use the navigation sidebar to look through the fastai documentation. Every class, function, and method is documented here.
To learn about the design and motivation of the library, read the peer-reviewed paper.
fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes:
- A new type dispatch system for Python along with a semantic type hierarchy for tensors
- A GPU-optimized computer vision library which can be extended in pure Python
- An optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code
- A novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training (see the sketch after this list)
- A new data block API
- And much more…
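As an illustration of the callback system, here is a minimal sketch (the event name and the attributes reached through the Learner follow fastai's Callback API; the callback itself is made up for this example):

```python
from fastai.callback.core import Callback

class PrintLoss(Callback):
    "Print the training loss after every training batch."
    def after_batch(self):
        # A callback sees the Learner's state (data, model, optimizer,
        # loss, ...) and could also modify it at this point.
        if self.training:
            print(f"epoch {self.epoch}, iter {self.iter}: loss {self.loss.item():.4f}")
```

It would be passed to any training call, e.g. learn.fit(1, cbs=PrintLoss()).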
fastai is organized around two main design goals: to be approachable and rapidly productive, while also being deeply hackable and configurable. It is built on top of a hierarchy of lower-level APIs which provide composable building blocks. This way, a user wanting to rewrite part of the high-level API or add particular behavior to suit their needs does not have to learn how to use the lowest level.
It’s very easy to migrate from plain PyTorch, Ignite, or any other PyTorch-based library, or even to use fastai in conjunction with other libraries. Generally, you’ll be able to use all your existing data processing code, but will be able to reduce the amount of code you require for training, and more easily take advantage of modern best practices. Here are migration guides from some popular libraries to help you on your way:
Due to Python multiprocessing issues on Jupyter and Windows, num_workers of DataLoader is reset to 0 automatically to avoid Jupyter hanging. This makes tasks such as computer vision in Jupyter on Windows many times slower than on Linux. This limitation doesn’t exist if you use fastai from a script.
See this example to fully leverage the fastai API on Windows.
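The general pattern when running fastai as a script on Windows is to put the training code behind the standard multiprocessing main guard; a rough sketch (the dataset and model here are arbitrary choices, not taken from the linked example):

```python
from fastai.vision.all import *

def main():
    path = untar_data(URLs.MNIST_SAMPLE)
    dls = ImageDataLoaders.from_folder(path, num_workers=2)
    learn = vision_learner(dls, resnet18, metrics=accuracy)
    learn.fit_one_cycle(1)

# On Windows, DataLoader worker processes re-import the script,
# so the code that starts training must sit inside this guard.
if __name__ == '__main__':
    main()
```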
We recommend using Windows Subsystem for Linux (WSL) instead – if you do that, you can use the regular Linux installation approach, and you won’t have any issues with num_workers.
To run the tests in parallel, launch:
nbdev_test
For all the tests to pass, you’ll need to install the dependencies specified as part of dev_requirements in settings.ini:
pip install -e .[dev]
Tests are written using nbdev; for example, see the documentation for test_eq.
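test_eq comes from fastcore and is just an equality assertion with a readable failure message, so a test is nothing more than a notebook cell along these lines (a trivial sketch):

```python
from fastcore.test import test_eq

# Passes silently; raises AssertionError showing both values if they differ.
test_eq(2 + 2, 4)
test_eq([i * 2 for i in range(3)], [0, 2, 4])
```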
After you clone this repository, make sure you have run nbdev_install_hooks in your terminal. This installs Jupyter and git hooks to automatically clean, trust, and fix merge conflicts in notebooks.
After making changes in the repo, you should run nbdev_prepare and make additional and necessary changes in order to pass all the tests.
For those interested in official docker containers for this project, they can be found here.