DeepLearnToolbox – a Matlab/Octave toolbox for deep learning. Includes Deep Belief Nets, Stacked Auto-Encoders, Convolutional Neural Nets, Convolutional Auto-Encoders and vanilla Neural Nets.
- Deep Learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data.
- For a more informal introduction, see the following videos by Geoffrey Hinton and Andrew Ng.
- If you use this toolbox in your research, please cite "Prediction as a candidate for learning deep hierarchical models of data" (R. B. Palm, 2012).
Run the following commands in a pcDuino Ubuntu terminal:
$ sudo apt-get install git
$ git clone https://github.com/rasmusbergpalm/DeepLearnToolbox
Directories included in the toolbox
NN/
– A library for Feedforward Backpropagation Neural Networks
CNN/
– A library for Convolutional Neural Networks
DBN/
– A library for Deep Belief Networks
SAE/
– A library for Stacked Auto-Encoders
CAE/
– A library for Convolutional Auto-Encoders
util/
– Utility functions used by the libraries
data/
– Data used by the examples
tests/
– Unit tests to verify the toolbox is working
For references for each library, see REFS.md.
Setup
- Download the toolbox (or clone it with git as shown above).
- In Matlab/Octave, run addpath(genpath('DeepLearnToolbox'));
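To avoid re-running addpath in every session, the same line can go in Octave's startup file (~/.octaverc; Matlab uses startup.m instead). The sketch below assumes the toolbox was cloned into the home directory; adjust the path if you cloned it elsewhere.

```matlab
% ~/.octaverc — executed by Octave at startup.
% Assumes the toolbox was cloned to $HOME/DeepLearnToolbox (an assumption;
% change the path to match your clone location).
addpath(genpath([getenv('HOME') '/DeepLearnToolbox']));
```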
Known errors
test_cnn_gradients_are_numerically_correct
fails on Octave because of a bug in Octave’s convn implementation. See http://savannah.gnu.org/bugs/?39314
test_example_CNN
fails in Octave for the same reason.
Example: Deep Belief Network
function test_example_DBN
load mnist_uint8;
train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);

%% ex1: train a 100 hidden unit RBM and visualize its weights
rand('state', 0)
dbn.sizes = [100];
opts.numepochs = 1;
opts.batchsize = 100;
opts.momentum  = 0;
opts.alpha     = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
figure; visualize(dbn.rbm{1}.W');   % Visualize the RBM weights

%% ex2: train a 100-100 hidden unit DBN and use its weights to initialize a NN
rand('state', 0)
% train dbn
dbn.sizes = [100 100];
opts.numepochs = 1;
opts.batchsize = 100;
opts.momentum  = 0;
opts.alpha     = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);

% unfold dbn to nn
nn = dbnunfoldtonn(dbn, 10);
nn.activation_function = 'sigm';

% train nn
opts.numepochs = 1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.10, 'Too big error');
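The NN library can also be used on its own, without DBN pretraining. The sketch below is based on the toolbox's NN examples but is not verbatim from the repository: it assumes nnsetup takes a vector of layer sizes ([input hidden output]), and the hyperparameter values are illustrative.

```matlab
% Plain feedforward network on MNIST, no pretraining (a sketch).
% Assumes DeepLearnToolbox is on the path and mnist_uint8.mat is available.
load mnist_uint8;
train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);

rand('state', 0)
nn = nnsetup([784 100 10]);   % layer sizes: input, hidden, output
opts.numepochs = 1;           % one pass through the training data
opts.batchsize = 100;         % mini-batches of 100 examples
nn = nntrain(nn, train_x, train_y, opts);

[er, bad] = nntest(nn, test_x, test_y);
fprintf('error rate: %.4f\n', er);
```

Comparing this error rate with the DBN-initialized network above gives a quick sense of what the unsupervised pretraining contributes.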