Texture Synthesis with Convolutional Neural Networks

Here we present a number of textures synthesised with deep Convolutional Neural Networks, as described in the accompanying paper.
The source textures are taken from the CG texture database and down-sampled so that the total number of pixels equals 256^2. This down-sampling matches the scale of the images on which the network was trained and reduces computational cost. At present, generating one texture takes about 10 minutes on an NVIDIA K40 GPU.
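For concreteness, the down-sampling step might look like the following sketch. The use of the Pillow library and the file names are assumptions for illustration; the actual pipeline need not have used this code.

```python
# Sketch only: rescale a source texture so its total pixel count is about
# 256^2, preserving the aspect ratio. Pillow and file names are assumptions.
from PIL import Image

TARGET_PIXELS = 256 ** 2

def downsample_to_pixel_count(img, target_pixels=TARGET_PIXELS):
    """Rescale img so that width * height is approximately target_pixels."""
    w, h = img.size
    scale = (target_pixels / (w * h)) ** 0.5
    new_size = (max(1, round(w * scale)), max(1, round(h * scale)))
    return img.resize(new_size, Image.LANCZOS)

source = Image.open("source_texture.jpg")          # hypothetical input file
downsample_to_pixel_count(source).save("source_small.jpg")
```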
The textures were generated by matching the correlations between feature maps in layers 'pool4', 'pool3', 'pool2', 'pool1', 'conv1_1' of a normalised version of the 19-layer VGG network described by Simonyan and Zisserman. The weights in the normalised network are scaled such that the mean activation of each filter over images and positions equals one. Such re-scaling can always be done without changing the output of the network as long as its non-linearities are rectified linear units. The normalised network can be downloaded here. The synthesis was carried out using the Berkeley Vision Caffe framework.
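To make the correlation-matching concrete, here is a minimal sketch of the Gram-matrix texture loss. It is written against PyTorch/torchvision rather than the Caffe setup actually used, torchvision's stock VGG-19 stands in for the normalised network, and the layer indices are our own mapping of 'conv1_1' and 'pool1'-'pool4' onto torchvision's numbering, so treat all of those details as assumptions.

```python
# Sketch only: Gram-matrix texture loss over selected VGG-19 layers.
import torch
import torchvision.models as models

def gram_matrix(features):
    """Correlations between feature maps; features has shape (C, H, W)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (h * w)

# Stock torchvision VGG-19 as a stand-in for the normalised network.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Assumed mapping of conv1_1, pool1, pool2, pool3, pool4 to layer indices.
TEXTURE_LAYERS = {0, 4, 9, 18, 27}

def texture_loss(synth, source):
    """Sum of squared Gram-matrix differences; inputs are (1, 3, H, W)."""
    loss, xs, xt = 0.0, synth, source
    for i, layer in enumerate(vgg):
        xs, xt = layer(xs), layer(xt)
        if i in TEXTURE_LAYERS:
            loss = loss + ((gram_matrix(xs[0]) - gram_matrix(xt[0])) ** 2).sum()
    return loss
```

In the synthesis itself, a white-noise image with requires_grad=True is optimised to minimise this loss with respect to the source texture; the original work used L-BFGS for that optimisation.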
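The claim that the re-scaling leaves the network's function unchanged follows from the positive homogeneity of the rectifier, relu(a*x) = a*relu(x) for a > 0: dividing one layer's weights and bias by a constant can be absorbed by multiplying the next layer's weights by the same constant, and max-pooling also commutes with positive scaling. A minimal sketch of this pass, assuming a plain conv/ReLU stack and a precomputed list of measured mean activations per conv layer (both assumptions):

```python
# Sketch only: set each conv layer's mean activation to one without changing
# the function computed by a conv/ReLU (and max-pool) stack.
# `convs` and `mean_act` are assumed inputs, not part of the original code.
import torch

@torch.no_grad()
def normalise(convs, mean_act):
    prev_m = 1.0                     # scale carried over from the layer below
    for conv, m in zip(convs, mean_act):
        conv.weight *= prev_m / m    # absorb previous scale, divide by own mean
        if conv.bias is not None:
            conv.bias /= m           # bias is unaffected by the input scale
        prev_m = m                   # the next layer must multiply by m
```

After this pass the last conv layer's output is still scaled by a constant; in a full classifier the first fully-connected layer would absorb that factor in the same way.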