Deep Convolutional Neural Network

Your job is to improve the performance of the existing code by altering the code in sol/dcnnsol.hpp. (You may also write some code in sol/dcnnsol.cpp; however, all of the existing code is templated and must remain in the header file.)
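
For illustration only, the following sketch shows the kind of in-header change that is possible. The function forward_fc, the Float template parameter, and the flat array layout are assumptions made for this example, not the actual interface of sol/dcnnsol.hpp; the point is that the code stays templated in the header while its innermost loop is vectorized with OpenMP SIMD, leaving thread-level parallelism over minibatches to the test harness.

// Hypothetical sketch -- forward_fc and its signature are placeholders,
// not the real interface of sol/dcnnsol.hpp.
#include <cstddef>

template <typename Float>
void forward_fc(const Float* __restrict in, const Float* __restrict weights,
                Float* __restrict out, std::size_t n_in, std::size_t n_out)
{
    for (std::size_t o = 0; o < n_out; ++o) {
        Float acc = Float(0);
        // Contiguous inner loop; OpenMP SIMD lets the compiler vectorize it.
        #pragma omp simd reduction(+ : acc)
        for (std::size_t i = 0; i < n_in; ++i)
            acc += weights[o * n_in + i] * in[i];
        out[o] = acc;
    }
}

Changes of this kind (loop restructuring, vectorization, avoiding temporary allocations) can be made entirely inside the header, which is what the restriction above requires.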

Input data

The folder containing input data is set by the command-line parameter --data-folder.

The input data are already available on parlab, in the folder /home/_teaching/hiperf/dcnndata. The program shall therefore be invoked as:

srun -p mpi-homo-short -n 1 -c 64 ./dcnn --data-folder=/home/_teaching/hiperf/dcnndata

The input data may also be downloaded from parlab via scp, or as a compressed archive from here:

Test parameters

data-folder - the folder containing the input data files. Default: data.

minibatch - the number of images in a testing minibatch (processed in one call to the forward functions). Default: 16.

superbatch - the number of minibatches in a testing superbatch (each minibatch is assigned to a different thread). Default: 8 (1 in Debug mode).

total - the total number of images submitted for testing; it must be divisible by minibatch*superbatch (see the sketch below). Default: 2048 (16 in Debug mode).
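
To make the batching terminology above concrete, the sketch below shows how one superbatch could be dispatched. The real test driver is part of the provided framework; process_minibatch and the std::thread dispatch here are assumptions used only to illustrate how minibatch, superbatch, and total relate to each other.

#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for the per-minibatch forward pass performed by the real harness.
static void process_minibatch(std::size_t first_image, std::size_t count)
{
    std::printf("minibatch: images [%zu, %zu)\n", first_image, first_image + count);
}

void run_test(std::size_t total, std::size_t minibatch, std::size_t superbatch)
{
    const std::size_t per_superbatch = minibatch * superbatch;   // images per superbatch
    for (std::size_t base = 0; base < total; base += per_superbatch) {
        std::vector<std::thread> workers;
        // Each minibatch of the superbatch is handled by its own thread.
        for (std::size_t m = 0; m < superbatch; ++m)
            workers.emplace_back(process_minibatch, base + m * minibatch, minibatch);
        for (auto& t : workers)
            t.join();   // wait for the whole superbatch before starting the next one
    }
}

int main() { run_test(2048, 16, 8); }   // default parameters: 2048 / (16 * 8) = 16 superbatches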

Credits

The DCNN architecture was taken from [Hasanpour 2016]. The original implementation used the Caffe framework and was later converted to PyTorch.

Both the pretrained weights and the test images were converted from publicly available data:

References

[Hasanpour 2016] Hasanpour, Seyyed Hossein, et al. Lets keep it simple, using simple architectures to outperform deeper and more complex architectures. arXiv:1608.06037.