back-propagation

(Or "backpropagation") A learning algorithm for modifying afeed-forward neural network which minimises a continuous"error function" or "objective function."Back-propagation is a "gradient descent" method of trainingin that it uses gradient information to modify the networkweights to decrease the value of the error function onsubsequent tests of the inputs. Other gradient-based methodsfrom numerical analysis can be used to train networks moreefficiently.

Back-propagation makes use of a mathematical trick when the network is simulated on a digital computer, yielding in just two traversals of the network (once forward, and once back) both the difference between the desired and actual output, and the derivatives of this difference with respect to the connection weights.
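
As an illustrative sketch of those two traversals, assuming a network with one tanh hidden layer, a linear output layer and a squared-error objective (the layer structure and variable names are assumptions for illustration):

	import numpy as np

	def forward_backward(x, target, W1, W2):
	    # Forward pass: compute the network's actual output.
	    h = np.tanh(W1 @ x)               # hidden activations
	    y = W2 @ h                        # linear output layer
	    error = y - target                # actual minus desired output

	    # Backward pass: apply the chain rule once, from output back
	    # to input, giving the derivatives of the squared-error
	    # objective with respect to every connection weight.
	    dW2 = np.outer(error, h)          # output-layer weight gradients
	    dh = (W2.T @ error) * (1 - h**2)  # back through the tanh layer
	    dW1 = np.outer(dh, x)             # hidden-layer weight gradients
	    return 0.5 * np.sum(error**2), dW1, dW2

The two returned gradient arrays can then be fed to a gradient-descent step such as the one sketched above.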