# KeyPointDataPreprocess4DeepLabCut
A short script to re-format key point coordinate data exported from ImageJ (as `txt`) into the `hdf5` format required by DeepLabCut to build a training dataset.
DeepLabCut is a powerful tool for automatic key point detection in video, and thus a game-changing tool for behavior quantification. However, the GUI in DeepLabCut is currently not very stable, so this script enables a workaround for the frame extraction and frame annotation procedure: we can do frame extraction and annotation in `ImageJ`, using a plugin called `Point Picker`, and then use this script to reformat the data into the `hdf5` format required by DeepLabCut for training set construction.
## Tools
- `DeepLabCut`
- `ImageJ`: traditional software for scientific image analysis. It has a robust, easy-to-use GUI and many plugins. It can transform an `.avi` movie into frame sequences in many formats, and can also take screenshots and save them as image files. Thus, it is well suited for **Frame Extraction**.
- `Point Picker`: an ImageJ plugin developed by EPFL scientists that can pick up to 1024 points in an image and handles image sequences well. Thus, it is well suited to **Label Frames**.
- Interface (data format transformation) script (the only part I wrote): transforms the output file of `Point Picker` into the format required by `deeplabcut.create_training_dataset()`. The `DataFrame` in `Pandas` is the intermediate data structure used for the transformation.
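The target format can be sketched directly with `Pandas`. The snippet below is a hedged illustration, not the repository's actual script: the scorer name, body parts, image paths, and coordinate values are all placeholders, and the `df_with_missing` HDF key follows DeepLabCut's convention for labeled data.

```python
import numpy as np
import pandas as pd

# Illustration (placeholder names/values): DeepLabCut stores labels as a
# DataFrame with a 3-level column MultiIndex (scorer, bodyparts, coords).
scorer = "Experimenter"
bodyparts = ["nose", "tailbase"]
frames = ["labeled-data/video1/img000.png",
          "labeled-data/video1/img001.png"]

columns = pd.MultiIndex.from_product(
    [[scorer], bodyparts, ["x", "y"]],
    names=["scorer", "bodyparts", "coords"])

coords = np.array([[10.0, 20.0, 30.0, 40.0],
                   [11.0, 21.0, 31.0, 41.0]])
df = pd.DataFrame(coords, index=frames, columns=columns)

df.to_csv("CollectedData_%s.csv" % scorer)
try:
    # Writing HDF5 requires the optional PyTables package.
    df.to_hdf("CollectedData_%s.h5" % scorer, key="df_with_missing", mode="w")
except ImportError:
    pass
```

The rows are indexed by the relative paths of the labeled images, so the same structure works for frames collected from several videos.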
## Workflow

Here is the complete workflow / protocol to follow when starting a behavioral quantification project.
1. **Video recording**
2. **Create a project** in `DeepLabCut`: `config_path = deeplabcut.create_new_project('Name of the project', 'Name of the experimenter', ['Full path of video 1', 'Full path of video 2'], working_directory='Full path of the working directory', copy_videos=True/False)`
3. **Frame Extraction**:
   - We can use the automatic extraction tool in `deeplabcut` as a baseline (it is relatively robust): `deeplabcut.extract_frames(config_path, 'automatic', 'kmeans', crop=True, checkcropping=True)`
   - Then supplement it with manual selection in `ImageJ`: open the video in `ImageJ`, slide through it, select the key frames, and save them as `png` files.
   - Note: the training set can consist of frames of different sizes, cropped and uncropped mixed together.
4. **Label the Frames**:
   - We can use the GUI tool in `deeplabcut`: `deeplabcut.label_frames(config_path)`
   - Or use the `Point Picker` plugin in `ImageJ`:
     - `Import > Image Sequence`, then select the folder of the extracted frames.
     - Open `Point Picker` and mark the key points in sequence, starting from the first image.
     - Export the coordinate data with the `Import/Export` tool; save the `data.txt` file.
     - Convert the `txt` data into `csv` and `hdf5` (this is what this repository's script does).
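The conversion step can be sketched as below. This is an assumption-laden illustration rather than the repository's script: it assumes the exported `data.txt` contains whitespace-separated columns (point index, x, y, slice number) and that points were clicked in the same body-part order on every frame; the sample text, scorer name, body-part names, and image paths are placeholders. Check the column layout against your actual `Point Picker` export.

```python
import io
import pandas as pd

scorer = "Experimenter"            # placeholder scorer name
bodyparts = ["nose", "tailbase"]   # placeholder, in clicking order

# Stand-in for the exported data.txt (assumed columns: point, x, y, slice).
sample = """0 10.0 20.0 1
1 30.0 40.0 1
0 11.0 21.0 2
1 31.0 41.0 2
"""
raw = pd.read_csv(io.StringIO(sample), sep=r"\s+",
                  names=["point", "x", "y", "slice"])

rows, index = [], []
for sl, grp in raw.groupby("slice"):
    # One row per frame: (x, y) pairs in body-part order.
    rows.append(grp.sort_values("point")[["x", "y"]].to_numpy().ravel())
    index.append("labeled-data/video1/img%03d.png" % sl)

columns = pd.MultiIndex.from_product(
    [[scorer], bodyparts, ["x", "y"]],
    names=["scorer", "bodyparts", "coords"])
df = pd.DataFrame(rows, index=index, columns=columns)
df.to_csv("CollectedData_%s.csv" % scorer)
# df.to_hdf(...) produces the hdf5 variant (requires PyTables).
```

In a real run, `pd.read_csv` would point at the saved `data.txt` instead of the in-memory sample.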
5. **Check the annotation**: `deeplabcut.check_labels(config_path)`
6. **Create Dataset**: `deeplabcut.create_training_dataset(config_path, num_shuffles=1)`
7. **Train Network**: `deeplabcut.train_network(config_path, shuffle=1)`
8. **Evaluate Network** (see the training loss and test loss): `deeplabcut.evaluate_network(config_path, shuffle=[1], plotting=True)`
9. **Video Analysis**:
   - `deeplabcut.analyze_videos(config_path, ['/analysis/project/videos/reachingvideo1.avi'], shuffle=1, save_as_csv=True)`
   - `deeplabcut.create_labeled_video(config_path, ['/analysis/project/videos/reachingvideo1.avi', '/analysis/project/videos/reachingvideo2.avi'])`
   - `deeplabcut.plot_trajectories(config_path, ['/analysis/project/videos/reachingvideo1.avi'])`
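The predictions from `analyze_videos` are saved next to each video as `hdf5` (and `csv` when `save_as_csv=True`) and can be loaded back with `Pandas` for custom post-processing. The sketch below filters low-confidence detections; the DataFrame is a synthetic stand-in for the real output (whose columns are a scorer/bodyparts/coords MultiIndex with `x`, `y`, `likelihood`), and the scorer string is a placeholder. In practice you would start from `pd.read_hdf(path_to_output_h5)`.

```python
import pandas as pd

# Synthetic stand-in for the analyze_videos output DataFrame.
scorer = "DLC_resnet50_shuffle1"   # placeholder scorer string
cols = pd.MultiIndex.from_product(
    [[scorer], ["nose"], ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"])
df = pd.DataFrame([[10.0, 20.0, 0.95],
                   [11.0, 21.0, 0.30]], columns=cols)

# Keep only high-confidence detections before plotting trajectories.
xy = df[scorer]["nose"]
reliable = xy[xy["likelihood"] > 0.9]
```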