First, we load the train and test datasets, X_train and X_test:
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def get_random_block_from_data(data, batch_size):
    start_index = np.random.randint(0, len(data) - batch_size)
    return data[start_index:(start_index + batch_size)]

X_train = mnist.train.images
X_test = mnist.test.images
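The helper above samples a random contiguous block of rows, which is how mini-batches are drawn during training. A minimal, self-contained sketch of its behavior, using a synthetic NumPy array in place of the MNIST images (the array shape here is an assumption for illustration):

```python
import numpy as np

def get_random_block_from_data(data, batch_size):
    # Choose a random start offset, then slice batch_size consecutive rows.
    start_index = np.random.randint(0, len(data) - batch_size)
    return data[start_index:(start_index + batch_size)]

# Hypothetical stand-in for X_train: 1,000 samples of 784 features each.
data = np.random.rand(1000, 784)
batch = get_random_block_from_data(data, batch_size=128)
print(batch.shape)  # (128, 784)
```

Because the block is contiguous rather than a random permutation, successive batches may overlap; this is a cheap approximation of shuffled mini-batching.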
Define the variables for the number of samples (n_samples), the number of training epochs (training_epochs), the batch_size used in each training iteration, and the display_step:
n_samples = int(mnist.train.num_examples)
training_epochs = 2
batch_size = 128
display_step = 1
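These variables drive the standard epoch/batch training skeleton: each epoch iterates over `n_samples // batch_size` batches, and `display_step` controls how often the running cost is printed. A hedged sketch of that loop, with stand-in values (the real `n_samples` comes from `mnist.train.num_examples`, and the dummy cost stands in for the autoencoder's per-batch training cost, which is defined later):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in values for the variables defined above.
n_samples = 55000
training_epochs = 2
batch_size = 128
display_step = 1

total_batch = int(n_samples / batch_size)  # full batches per epoch
for epoch in range(training_epochs):
    avg_cost = 0.0
    for _ in range(total_batch):
        # In the real loop this would be the autoencoder's training cost on
        # a random block of X_train; a dummy value keeps the sketch runnable.
        cost = rng.random()
        avg_cost += cost / n_samples * batch_size
    if epoch % display_step == 0:
        print('Epoch:', epoch + 1, 'cost =', avg_cost)
```

Averaging the per-batch cost by `cost / n_samples * batch_size` yields a per-sample cost estimate for the epoch.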
Instantiate the autoencoder and the optimizer. The autoencoder has 200 hidden units and uses the sigmoid as its transfer_function ...