
Backprop neural net: what is it?


A backpropagation neural network is an artificial neural network that uses backpropagation to learn by example. It can solve complex problems and adapt its structure based on the information it receives. The concept was refined over the years and recognized as a breakthrough in 1974.

In the world of programming, computers, and artificial intelligence, a backpropagation neural network is simply a kind of artificial neural network (ANN) that is trained with backpropagation. Backpropagation is a key and commonly used algorithm that teaches an ANN how to perform a certain task. Although the concept can seem confusing, and the equations involved may look completely foreign at first, the idea behind it, and behind the neural network as a whole, is easy enough to understand.

For those unfamiliar with neural networks, an ANN, or simply NN for "neural network," is a mathematical model inspired by real-life neural networks such as those found in living things. The human brain is the most sophisticated neural network we know of, and its workings provide clues on how to improve the structure and functioning of artificial NNs. Like a very rudimentary brain, an ANN consists of a network of interconnected artificial neurons that process information.
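To make the idea of an artificial neuron concrete, here is a minimal sketch in Python of a single neuron: it computes a weighted sum of its inputs and passes the result through a nonlinear activation. The specific weights, bias, and sigmoid activation below are illustrative choices, not details taken from this article.

    # A minimal sketch of one artificial neuron: it weights its inputs,
    # sums them, and squashes the total through a nonlinear activation.
    # The numbers used here are purely illustrative.
    import math

    def sigmoid(x):
        """Logistic activation: maps any real number into the range (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def neuron_output(inputs, weights, bias):
        """Weighted sum of the inputs plus a bias, passed through the activation."""
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return sigmoid(total)

    # Example: a neuron with two inputs.
    print(neuron_output([0.5, 0.9], weights=[0.4, -0.6], bias=0.1))

A full network is simply many such neurons wired together, with the output of one layer serving as the input to the next.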

The fascinating thing is that an ANN can adapt and modify its structure when necessary, based on the information it receives from the environment and from within the network itself. It is a sophisticated computational model that relies on nonlinear statistical data analysis and can capture complex relationships between inputs and outputs. It can solve problems that traditional computational methods cannot.

The idea for a backpropagation neural network first arose in 1969 from the work of Arthur E. Bryson and Yu-Chi Ho. Over the following years, other programmers and scientists refined the idea, and since 1974 the backpropagation neural network has been recognized as an innovative breakthrough in the study and creation of artificial neural networks.

Learning is an important task within an ANN: it ensures the network keeps processing data correctly and thus keeps performing its function. A backpropagation neural network uses a generalized form of the delta rule to learn. That is, it relies on a "teacher" that supplies the desired outputs for particular inputs presented to the network.
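The basic (single-layer) delta rule that backpropagation generalizes can be written down compactly: each weight is nudged in proportion to the error between the teacher's desired output and the network's actual output, and in proportion to the input feeding that weight. The sketch below assumes an illustrative learning rate and example values.

    # A sketch of the simple delta rule, which backpropagation generalizes
    # to networks with hidden layers. Values and learning rate are illustrative.
    def delta_rule_update(weights, inputs, target, output, learning_rate=0.1):
        """Nudge each weight toward reducing the error (target - output)."""
        error = target - output
        return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

    # One update step for a unit that produced 0.2 when the teacher said 1.0.
    new_weights = delta_rule_update([0.4, -0.6], inputs=[0.5, 0.9], target=1.0, output=0.2)
    print(new_weights)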

In other words, a backpropagation neural network learns by example. The programmer supplies a training model that shows what the correct output should be for a specific set of inputs. These input-output examples are the teacher, or model, against which the network adjusts its subsequent computations.
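In practice the "teacher" can be nothing more than a list of input-output pairs. The XOR task below is a standard textbook illustration, chosen here as an example rather than taken from the article:

    # An illustrative "teacher": input-output examples supplied by the programmer.
    # The task is the XOR function, a classic case that a single neuron cannot
    # learn but a small backpropagation network can.
    training_examples = [
        # (inputs,      desired output)
        ([0.0, 0.0], 0.0),
        ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0),
        ([1.0, 1.0], 0.0),
    ]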

The whole process proceeds methodically in measured steps. Given a defined set of inputs, the ANN applies what it has learned so far to produce an initial output. It then compares this output with the known, expected output and makes any necessary adjustments. An error value is calculated along the way and propagated backward through the network to update the weights, and the cycle repeats until the best possible output is obtained.
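That full cycle, a forward pass, a comparison with the expected output, an error calculation, and a backward propagation of the error, can be sketched in a few lines of Python using NumPy. The snippet below trains a tiny network on the XOR examples shown earlier; the layer sizes, learning rate, and iteration count are illustrative assumptions, not figures from this article.

    # A compact sketch of the training cycle described above.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Training data: the XOR examples used earlier.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    # A tiny network with one hidden layer of 4 neurons and a single output.
    W1 = rng.normal(size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))
    b2 = np.zeros(1)

    learning_rate = 0.5
    for step in range(10000):
        # Forward pass: compute the network's current output for every input.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Compare with the desired output and form the error signal.
        error = output - T
        delta_out = error * output * (1 - output)                   # output-layer delta
        delta_hidden = (delta_out @ W2.T) * hidden * (1 - hidden)   # error propagated back

        # Adjust the weights in proportion to the propagated error.
        W2 -= learning_rate * hidden.T @ delta_out
        b2 -= learning_rate * delta_out.sum(axis=0)
        W1 -= learning_rate * X.T @ delta_hidden
        b1 -= learning_rate * delta_hidden.sum(axis=0)

    print(np.round(output, 2))  # should approach [[0], [1], [1], [0]]

The key design point is the backward step: the output-layer error is pushed back through the same weights that produced the output, giving each hidden neuron its own share of the blame to correct.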
