Appendix D

MATLAB® Programs for Neural Systems

D.1.1 Defining Feedforward Network Architecture

Feedforward networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn both nonlinear and linear relationships between input and output vectors. The function newff() creates a feedforward backpropagation network architecture with the desired number of layers and neurons. The general form of the function is given below; it returns an N-layer feedforward backpropagation network object:

net = newff(PN, [S1 S2…SN], {TF1 TF2…TFN}, BTF, LF, PF);

where the first input PN is an R × 2 matrix of minimum and maximum values for the R input elements (R, the number of inputs, is independent of the number of layers N). S1 S2…SN are the sizes (numbers of neurons) of the N layers of the network architecture. TFi is the transfer function of the ith layer; it can be any differentiable transfer function such as tansig, logsig or purelin, and the default is ‘tansig’. BTF is the backpropagation network training function; the default is ‘trainlm’. Different training functions with their features are described in Section 4.7.2 of Chapter 4. LF is the backpropagation weight/bias learning function, such as the gradient-descent rules ‘learngd’ and ‘learngdm’; the default is ‘learngdm’. The function ‘learngdm’ calculates the weight change dW for a given neuron from the neuron's input P and error E. Learning occurs according to the learning rate lr and the momentum constant mc, with the weight change computed as dW = mc·dWprev + (1 − mc)·lr·gW, where dWprev is the previous weight change and gW is the gradient of the performance function with respect to the weights. PF is the network performance function; the default is ‘mse’ (mean squared error).
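As a concrete illustration, the following call is a minimal sketch of the usage described above; the input ranges, layer sizes and transfer functions are illustrative choices, not values prescribed by the text. It creates a two-layer network with two inputs ranging over [−1, 1], a hidden layer of five tansig neurons and a single purelin output neuron, trained with ‘trainlm’ and ‘learngdm’:

PN  = [-1 1; -1 1];          % 2 x 2 matrix of min/max values for the 2 inputs
net = newff(PN, [5 1], {'tansig','purelin'}, 'trainlm', 'learngdm', 'mse');

The returned network object net can then be trained with train(net, P, T), where P and T are the input and target matrices, and evaluated on new inputs with sim(net, P).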
