
Commit d153a6a

README
1 parent 202029d commit d153a6a

File tree: 3 files changed, +293 −236 lines


README.md

Lines changed: 44 additions & 0 deletions
## PyTorch implementation of [\[1611.06440 Pruning Convolutional Neural Networks for Resource Efficient Inference\]](https://arxiv.org/abs/1611.06440) ##

This demonstrates pruning a VGG16-based classifier that classifies a small dog/cat dataset.

This reduced the CPU runtime by 3x and the model size by 4x.
For more details you can read the [blog post](https://jacobgil.github.io/deeplearning/pruning-deep-learning).

At each pruning step, 512 filters are removed from the network.
Usage
-----
This repository uses the PyTorch ImageFolder loader, so it assumes that the images are in a different directory for each category:

```
Train/
    dogs/
    cats/
Test/
    dogs/
    cats/
```
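ImageFolder infers the class labels from the subdirectory names, so the layout above is all the loader needs. As a minimal sketch, the helper below (hypothetical, not part of this repository) builds that layout with plain `os` calls; `torchvision.datasets.ImageFolder("Train")` would then assign each subdirectory an alphabetical class index.

```python
import os
import tempfile

# Hypothetical helper (not in this repo): create the directory layout
# that torchvision's ImageFolder loader expects, one subdirectory per class.
def make_layout(root, splits=("Train", "Test"), classes=("dogs", "cats")):
    paths = []
    for split in splits:
        for cls in classes:
            path = os.path.join(root, split, cls)
            os.makedirs(path, exist_ok=True)
            paths.append(path)
    return paths

root = tempfile.mkdtemp()
created = make_layout(root)
# ImageFolder("Train") would then map the subdirectories to class
# indices alphabetically: cats -> 0, dogs -> 1.
```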
The images were taken from [here](https://www.kaggle.com/c/dogs-vs-cats) but you should try training this on your own data and see if it works!

Training:
`python finetune.py --train`

Pruning:
`python finetune.py --prune`
TBD
---

- Change the pruning to be done in one pass. Currently each of the 512 filters is pruned sequentially. This is inefficient, since allocating new layers, especially fully connected layers with lots of parameters, is slow.
In principle this can be done in a single pass.

```python
for layer_index, filter_index in prune_targets:
    model = prune_vgg16_conv_layer(model, layer_index, filter_index)
```
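A first step toward a single pass could be to group the prune targets by layer, so each layer (and the fully connected layers after it) is rebuilt only once. The sketch below uses a hypothetical `group_prune_targets` helper, not a function from this repository:

```python
from collections import defaultdict

# Hypothetical sketch (not in this repo): collect all filter indices
# per layer so each layer can be rebuilt in a single allocation.
def group_prune_targets(prune_targets):
    by_layer = defaultdict(list)
    for layer_index, filter_index in prune_targets:
        by_layer[layer_index].append(filter_index)
    # Sort descending so removing one filter does not shift the
    # indices of the filters still left to remove in that layer.
    return {layer: sorted(idxs, reverse=True) for layer, idxs in by_layer.items()}

targets = [(0, 3), (2, 1), (0, 7), (2, 5)]
grouped = group_prune_targets(targets)  # {0: [7, 3], 2: [5, 1]}
```

Each layer's filters could then be dropped in one call instead of 512 separate reallocations.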
- Change prune_vgg16_conv_layer to support additional architectures.
The most immediate one would be VGG with batch norm.
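For VGG with batch norm, removing a conv filter also means dropping the matching channel from the BatchNorm2d layer that follows it: its weight, bias, and running statistics all have one entry per channel. A minimal sketch of that adjustment, using a hypothetical `prune_batchnorm` helper (not part of this repository):

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not in this repo): when a conv filter is removed,
# the following BatchNorm2d must drop the matching channel from its
# weight, bias, running_mean, and running_var.
def prune_batchnorm(bn, filter_index):
    keep = [i for i in range(bn.num_features) if i != filter_index]
    new_bn = nn.BatchNorm2d(bn.num_features - 1)
    with torch.no_grad():
        new_bn.weight.copy_(bn.weight[keep])
        new_bn.bias.copy_(bn.bias[keep])
        new_bn.running_mean.copy_(bn.running_mean[keep])
        new_bn.running_var.copy_(bn.running_var[keep])
    return new_bn

bn = nn.BatchNorm2d(8)
pruned = prune_batchnorm(bn, filter_index=3)
```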
