Input parameters for a DBN with 256-Neuron RBM layers

We will initialize various parameters that are needed by the DBN class defined earlier:

import tensorflow as tf

rbm_layers = [256]               # one RBM layer with 256 hidden units
do_pretrain = True               # greedy layer-wise pretraining of the RBM stack
name = 'dbn'

# RBM (pretraining) parameters
rbm_learning_rate = [0.001]
rbm_num_epochs = [1]
rbm_gibbs_k = [1]                # Gibbs sampling steps for contrastive divergence
rbm_stddev = 0.1
rbm_gauss_visible = False        # binary, not Gaussian, visible units
momentum = 0.5
rbm_batch_size = [32]

# Supervised fine-tuning parameters
finetune_act_func = tf.nn.sigmoid
finetune_learning_rate = 0.01
finetune_num_epochs = 1
finetune_batch_size = 32
finetune_opt = 'momentum'
finetune_loss_func = 'softmax_cross_entropy'
finetune_dropout = 1             # keep probability of 1, that is, no dropout

Once the parameters are defined, let's run the DBN on the MNIST dataset:

# The dbn module provides DeepBeliefNetwork; the exact import path
# varies across yadlt releases.
from yadlt.models import dbn

# Keyword names below assume the yadlt-style DeepBeliefNetwork
# constructor; they simply forward the parameters defined above.
srbm = dbn.DeepBeliefNetwork(
    name=name, do_pretrain=do_pretrain,
    rbm_layers=rbm_layers,
    finetune_act_func=finetune_act_func,
    rbm_learning_rate=rbm_learning_rate,
    rbm_num_epochs=rbm_num_epochs,
    rbm_gibbs_k=rbm_gibbs_k,
    rbm_gauss_visible=rbm_gauss_visible,
    rbm_stddev=rbm_stddev,
    momentum=momentum,
    rbm_batch_size=rbm_batch_size,
    finetune_learning_rate=finetune_learning_rate,
    finetune_num_epochs=finetune_num_epochs,
    finetune_batch_size=finetune_batch_size,
    finetune_opt=finetune_opt,
    finetune_loss_func=finetune_loss_func,
    finetune_dropout=finetune_dropout)
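With the network constructed, training might proceed as in the sketch below. It assumes the yadlt-style pretrain, fit, and score methods and the TensorFlow 1.x MNIST helper, neither of which is shown in this excerpt:

from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST; the validation split is used to monitor both
# pretraining and fine-tuning.
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
trX, trY = mnist.train.images, mnist.train.labels
vlX, vlY = mnist.validation.images, mnist.validation.labels
teX, teY = mnist.test.images, mnist.test.labels

# Greedy layer-wise pretraining of the RBM stack (do_pretrain = True).
srbm.pretrain(trX, vlX)

# Supervised fine-tuning of the full network.
srbm.fit(trX, trY, vlX, vlY)

# Accuracy on the held-out test set.
print('Test set accuracy: {}'.format(srbm.score(teX, teY)))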
