Learning Templates

Learning Templates - Executables


Download Binaries (MacOSX, Linux)


Overview

The main scripts are in the root directory. Execute a script without any arguments to see how it is used. The most useful scripts are listed below; visit the data page for the output file formats.
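For example, a small helper like the one below (a sketch, not part of the distribution) prints the usage message of every top-level script by running each one without arguments; it relies only on the behavior described above:

    import glob
    import subprocess
    import sys

    # Print the usage message of every top-level script by running it with
    # no arguments (each script prints its usage in that case, as noted above).
    # This helper is illustrative and is not part of the distribution.
    for script in sorted(glob.glob("*.py")):
        print("===", script, "===")
        result = subprocess.run([sys.executable, script],
                                capture_output=True, text=True)
        print(result.stdout or result.stderr)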

1. Analyze dataset with existing template

fitAll.py - match final templates to a collection of shapes (the final templates can be found in the templates/final directory).
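A hypothetical way to drive fitAll.py from another Python script is sketched below; the template path, shape directory, and argument order are assumptions, so check the real usage by running fitAll.py without arguments first:

    import subprocess
    import sys

    # Hypothetical batch driver for fitAll.py. The argument order and paths
    # below are assumptions -- run fitAll.py without arguments to see the
    # actual usage before adapting this sketch.
    final_template = "templates/final/chair"  # assumed: a final template
    shape_collection = "data/my_chairs"       # assumed: directory of input shapes

    subprocess.check_call([sys.executable, "fitAll.py",
                           final_template, shape_collection])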

2. Learn a template

2.1 From manual initialization

createInitTmplt.py - manually create an initial template. Press '`' during execution to see some hotkeys.

learnTmplt.py - learn a template (starting from the initial template); the output is the final set of templates for the given collection. Note that the working directory (./scratch by default) will contain the directory "workDir/Templates/workDir_GenFinal", which includes the fits of all models to the final template (the same output as fitAll.py).
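After a run finishes, you can locate those per-model fits programmatically; the sketch below encodes only the directory layout described above (a *_GenFinal directory under the working directory's Templates folder) and assumes nothing about the file contents:

    import glob
    import os

    # Locate the fits of all models to the final template after a learnTmplt.py
    # run. Only the layout described above is assumed; adjust the pattern if
    # your working directory is laid out differently.
    work_dir = "./scratch"
    matches = glob.glob(os.path.join(work_dir, "*", "Templates", "*_GenFinal"))

    for directory in matches:
        print(directory, "->", len(os.listdir(directory)), "entries")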

2.2 From automatic initialization

autoLearnTmplt.py - learn a template from several candidate automatic segmentations.

3. Results format, visualization, pre-analyzed datasets

Pre-analyzed datasets and the format description can be found here.

templates - This directory contains initial and final templates. Note that the CoSeg templates were learned on the Co-Segmentation Benchmark dataset and the FC templates on the Fuzzy Correspondence dataset. The "chair", "plane", "bike", and "helicopter" templates were learned on large datasets from the 3D Warehouse (the *_gtonly templates were learned on a 100-model subset with ground truth); see the paper for details.

./scripts/web/viewtemplates.php - visualizes the analysis results (note that you will need a web server with PHP support, and you will need to edit $dataDir to point to the directory that contains the subdirectories with results).

exportResults.py - exports the analysis results in a simpler format.

correspondences - there is no script that explicitly computes correspondences (though App/EvalTemplate.cpp contains this code). You can compute correspondences from the nearest neighbors of co-aligned points (produced with exportResults.py), as sketched below. Alternatively, download the MATLAB figures for the error rates on the correspondence benchmark presented in the paper.
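A minimal sketch of that nearest-neighbor approach, assuming you have already parsed two co-aligned point sets into N x 3 NumPy arrays (the exportResults.py output format is described on the data page):

    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_neighbor_correspondences(points_a, points_b):
        # Map each point of shape A to its nearest co-aligned point on shape B.
        # points_a: (N, 3) array, points_b: (M, 3) array of co-aligned points,
        # e.g. parsed from the exportResults.py output.
        tree = cKDTree(points_b)
        _, indices = tree.query(points_a)
        return indices

    # Example with synthetic data (replace with your parsed co-aligned points):
    a = np.random.rand(100, 3)
    b = np.random.rand(120, 3)
    print(nearest_neighbor_correspondences(a, b)[:10])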

4. Parallel processing

All results in our paper were produced on a Beowulf cluster using the "qsub" command. If qsub is detected, this distribution will execute jobs in parallel. Refer to scripts/largescale/mydb.py for details, especially the ExecuteNoWait function.
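If you need to adapt that pattern to your own setup, it looks roughly like the sketch below; this is not the actual ExecuteNoWait implementation, and the qsub options shown are generic ones that may need adjusting for your scheduler:

    import shutil
    import subprocess

    def execute_no_wait(command, job_name, log_file):
        # Illustrative only -- see the real ExecuteNoWait in
        # scripts/largescale/mydb.py for the behavior used for the paper.
        if shutil.which("qsub"):
            # Submit the command to the cluster scheduler; the shell snippet
            # is passed on stdin, -N names the job, -o sets the log file.
            subprocess.run(["qsub", "-N", job_name, "-o", log_file],
                           input=command, text=True, check=True)
        else:
            # No scheduler detected: run the command locally in the background.
            with open(log_file, "w") as log:
                subprocess.Popen(command, shell=True, stdout=log,
                                 stderr=subprocess.STDOUT)

    # Placeholder command; in practice this would be one fitting job.
    execute_no_wait("python fitAll.py", "fit_example", "fit_example.log")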