# CortexNet
This repo contains the PyTorch implementation of CortexNet.
Check the project website for further information.
The project consists of the following folders and files:
- `data/`: contains Bash scripts and a Python class definition for video data loading;
- `image-pretraining/`: hosts the code for pre-training TempoNet's discriminative branch;
- `model/`: stores several network architectures, including PredNet, an additive feedback `Model01`, and a modulatory feedback `Model02` (CortexNet); an illustrative sketch of the two feedback types follows this list;
- `notebook/`: collection of Jupyter Notebooks for data exploration and results visualisation;
- `utils/`: scripts for
  - (current or former) training error plotting,
  - experiments `diff`,
  - multi-node synchronisation,
  - generative predictions visualisation,
  - network architecture graphing;
- `results@`: link to the location where experimental results will be saved within 3-digit folders;
- `new_experiment.sh*`: creates a new experiment folder, updates `last@`, prints a memo about last used settings;
- `last@`: symbolic link pointing to a new results sub-directory created by `new_experiment.sh`;
- `main.py`: training script for CortexNet in MatchNet or TempoNet configuration.
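For intuition only, here is a minimal PyTorch sketch contrasting additive and modulatory (multiplicative) feedback. It is not the repository's `Model01` / `Model02` code; the projection layer and the sigmoid gating are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class FeedbackBlock(nn.Module):
    """Illustrative only: combine bottom-up features with top-down feedback.

    mode='additive'   -> x + feedback           (Model01-style, assumption)
    mode='modulatory' -> x * sigmoid(feedback)  (Model02 / CortexNet-style, assumption)
    """

    def __init__(self, channels: int, mode: str = 'modulatory'):
        super().__init__()
        self.mode = mode
        # project the feedback onto the same number of channels as the bottom-up input
        self.project = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
        f = self.project(feedback)
        if self.mode == 'additive':
            return x + f
        return x * torch.sigmoid(f)  # multiplicative (modulatory) gating


# quick smoke test on random tensors
block = FeedbackBlock(channels=16)
x = torch.randn(1, 16, 32, 32)
fb = torch.randn(1, 16, 32, 32)
print(block(x, fb).shape)  # torch.Size([1, 16, 32, 32])
```

The only point the sketch tries to convey is that a modulatory path gates (scales) bottom-up activity instead of simply adding to it.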
The main dependencies are:

- scikit-video: accessing images / videos

```bash
pip install sk-video
```

- tqdm: progress bar

```bash
conda config --add channels conda-forge
conda update --all
conda install tqdm
```
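As a quick sanity check that both packages import and work together, you can read a clip with scikit-video and wrap the frame loop in a tqdm progress bar; the file name below is just a placeholder.

```python
import skvideo.io   # provided by the sk-video package
from tqdm import tqdm

# load a video as a (frames, height, width, channels) uint8 array
video = skvideo.io.vread('clip.mp4')  # placeholder file name

# iterate over frames with a progress bar
for frame in tqdm(video, desc='frames'):
    pass  # per-frame processing would go here
```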
This project has been realised with PyCharm by JetBrains and the Vim editor. Grip has also been fundamental for crafting decent documentation locally.
Once you've determined where you'd like to save your experimental results (let's call this directory `<my saving location>`), run the following commands from the project's root directory:
```bash
ln -s <my saving location> results                 # replace <my saving location>
mkdir results/000 && touch results/000/train.log   # init. placeholder
ln -s results/000 last                              # create pointer to the most recent result
```
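Optionally, a couple of lines of Python can confirm that the links resolve as intended; this is only a convenience check and assumes the exact layout created by the commands above.

```python
from pathlib import Path

for link in (Path('results'), Path('last')):
    # both should resolve to existing directories
    print(f'{link} -> {link.resolve()} (exists: {link.exists()})')

assert (Path('last') / 'train.log').exists(), 'placeholder train.log is missing'
```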
Ready to run your first experiment? Type the following:

```bash
./new_experiment.sh
```
Let's say your machine has N GPUs. You can choose to use any of these by specifying the index n = 0, ..., N-1. Therefore, type `CUDA_VISIBLE_DEVICES=n` just before `python ...` in the following sections.
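If you are unsure how many GPUs are visible (i.e. what N is on your machine), PyTorch can tell you; this is only a convenience check and is not part of the training scripts.

```python
import torch

n = torch.cuda.device_count()  # this is N
print(f'{n} GPU(s) detected; valid indices for CUDA_VISIBLE_DEVICES are 0..{n - 1}')
```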
To train CortexNet in the MatchNet configuration:

- Download e-VDS35 (e.g. `e-VDS35-May17.tar`) from here.
- Use `data/resize_and_split.sh` to prepare your (video) data for training. It resizes videos present in folders of folders (i.e. a directory of classes) and may split them into training and validation sets. It may also skip short videos and trim longer ones. Check `data/README.md` for more details. (A quick layout check is sketched after the command below.)
- Run the `main.py` script to start training. Use `-h` to print the command line interface (CLI) arguments help.

```bash
python -u main.py --mode MatchNet <CLI arguments> | tee last/train.log
```
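Before launching a long run, it can help to confirm that the data directory really is a folder of class folders containing videos, which is the layout `data/resize_and_split.sh` expects. The sketch below is only a convenience check; the root path and video extensions are placeholders, and it is not part of the repository.

```python
from pathlib import Path

data_root = Path('path/to/e-VDS35')            # placeholder: directory of class folders
video_exts = {'.mp4', '.avi', '.mov', '.m4v'}  # assumed extensions

# print how many videos each class folder contains
for class_dir in sorted(p for p in data_root.iterdir() if p.is_dir()):
    videos = [v for v in class_dir.iterdir() if v.suffix.lower() in video_exts]
    print(f'{class_dir.name}: {len(videos)} video(s)')
```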
To train CortexNet in the TempoNet configuration:

- Download e-VDS35 (e.g. `e-VDS35-May17.tar`) from here.
- Pre-train the forward branch (see `image-pretraining/`) on an image data set (e.g. `33-image-set.tar` from here). (A generic weight-loading sketch follows the command below.)
- Use `data/resize_and_sample.sh` to prepare your (video) data for training. It resizes videos present in folders of folders (i.e. a directory of classes) and samples them. Videos are then distributed across training and validation sets. It may also skip short videos and trim longer ones. Check `data/README.md` for more details.
- Run the `main.py` script to start training. Use `-h` to print the CLI arguments help.

```bash
python -u main.py --mode TempoNet --pre-trained <path> <CLI args> | tee last/train.log
```
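For reference, loading pre-trained weights in PyTorch generally follows the pattern below; the checkpoint layout and keys used by `main.py --pre-trained` may differ, so treat this as a generic sketch rather than the repository's code.

```python
import torch

# placeholder path; map_location='cpu' keeps the load device-agnostic
checkpoint = torch.load('path/to/pretrained.pth', map_location='cpu')

# a checkpoint is often either a bare state_dict or a dict wrapping one
state_dict = checkpoint.get('state_dict', checkpoint) if isinstance(checkpoint, dict) else checkpoint
print(f'{len(state_dict)} parameter tensors found')
# model.load_state_dict(state_dict)  # with the matching architecture instantiated first
```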
To run on a specific GPU, say n, type `CUDA_VISIBLE_DEVICES=n` just before `python ...`.