In a typical computer, built according to the so-called Von Neumann architecture, the memory banks live in an isolated module. A single processor fetches instructions and data from that memory and works through them one at a time, in a serial fashion. A different approach to computing is the neural network. A neural network, made up of thousands or even millions of individual “neurons” or “nodes,” does all of its processing in a highly parallel and distributed way. Its “memories” are stored within the complex interconnections and weightings between the nodes.
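To make the contrast concrete, here is a minimal Python sketch, with purely illustrative values, of the two styles of memory: an addressed lookup in a memory bank versus a pattern stored across the weights of a simple Hebbian associative network.

```python
import numpy as np

# Von Neumann style: a datum lives at an explicit address in a memory bank
# and is retrieved by one serial lookup.
memory_bank = {0x10: "pattern A"}
print(memory_bank[0x10])

# Neural style: the "memory" is distributed across connection weights.
# A one-shot Hebbian associative memory stores a pattern as an outer product.
pattern = np.array([1, -1, 1, -1])
weights = np.outer(pattern, pattern)   # learning: strengthen co-active links
cue = np.array([1, -1, 1, 1])          # a corrupted version of the pattern
recalled = np.sign(weights @ cue)      # recall: one parallel weighted sum
print(recalled)                        # -> [ 1 -1  1 -1], the stored pattern
```

Notice that the corrupted cue still recalls the full stored pattern, a small instance of the noise tolerance discussed next.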
The neural network is the type of architecture used by animal brains in nature. This is not necessarily because the neural network is an inherently superior mode of processing to serial computing, but because a brain based on serial computing would be much more difficult to evolve incrementally. Neural networks also tend to handle “noisy” data better than serial computers.
In a feedforward neural network, an “input layer” of specialized nodes acquires information from the outside world and passes a signal to a second layer. This signal is usually a binary “yes or no.” Often, for a node to switch from “no” to “yes,” its stimulation must cross a certain threshold.
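A single threshold node can be sketched in a few lines of Python; the weights and threshold below are purely illustrative.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) only if weighted stimulation crosses the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two input signals; the node stays at "no" until stimulation is strong enough.
print(neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0 -- below threshold
print(neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1 -- threshold crossed
```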
Data moves from the input level through secondary and tertiary levels, and so on, until it reaches a final “output level,” where the results can be read off and analyzed. The human retina works on the basis of neural networks. Its first layers of nodes detect simple geometric features in the field of view, such as colors, lines, and edges, while deeper layers abstract more sophisticated features, such as motion, texture, and depth. The final “output” is what our consciousness registers when we look at a visual scene. The initial input is just a complex arrangement of photons that would mean little without the neurological hardware to make sense of it in terms of meaningful qualities, such as the idea of an enduring object.
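That layered flow can be sketched as a forward pass through successive weight matrices. The layer sizes and random weights below are placeholders, since an untrained network computes nothing meaningful; the point is only the level-by-level movement of binary signals.

```python
import numpy as np

def step(x):
    return (x >= 0).astype(int)        # binary "yes/no" signal per node

rng = np.random.default_rng(0)
# Illustrative layer sizes: 8 input nodes -> 4 secondary -> 2 output nodes.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(2, 4))

def forward(stimulus):
    hidden = step(W1 @ stimulus)       # secondary level: simple features
    output = step(W2 @ hidden)         # output level: the result we inspect
    return output

print(forward(rng.normal(size=8)))
```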
In neural networks with feedback connections, sometimes loosely called back propagation, outputs from later layers can be sent back to earlier layers to limit further signals. Most of our senses work this way. The initial data may suggest an informal guess about the final result, and subsequent data is then analyzed in the context of that educated guess. In optical illusions, our senses make educated guesses that turn out to be wrong.
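One way to sketch this feedback idea is top-down inhibition: a higher layer forms a guess from the bottom-up signal, then feeds that guess back to dampen the lower-layer activity it already accounts for. All weights and activities below are illustrative.

```python
import numpy as np

lower = np.array([0.9, 0.8, 0.1, 0.7])       # bottom-up activity, early layer

# Feedforward pass: the higher layer forms an initial guess from the input.
W_up = np.full((1, 4), 0.4)
guess = (W_up @ lower >= 0.5).astype(float)  # a binary "educated guess"

# Feedback pass: the guess is sent back down to suppress the lower-layer
# signals it already explains, so only surprising activity keeps propagating.
W_down = np.full((4, 1), 0.6)
dampened = np.clip(lower - (W_down @ guess), 0.0, None)
print(dampened)                              # activity after top-down inhibition
```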
Instead of programming a neural network algorithmically, programmers must set one up by training it, gradually tuning the connections between neurons. For example, training a neural network to recognize faces requires many training sessions in which different “face-like” and “non-face” objects are shown to the network, accompanied by positive or negative feedback that pushes the network toward better recognition.
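A classic minimal version of this feedback-driven training is the perceptron rule, sketched below on a toy set of hypothetical “face-like” feature vectors. Real face recognition would need far richer inputs; the sketch only shows how repeated positive and negative feedback nudges the weights.

```python
import numpy as np

def predict(w, x):
    return 1 if w @ x >= 0 else 0           # 1 = "face-like", 0 = "non-face"

# Hypothetical feature vectors (e.g. "eye-like blobs", "symmetry"), each with
# a trailing 1 acting as a bias input; labels mark face vs. non-face examples.
examples = np.array([[1.0, 0.9, 1], [0.9, 1.0, 1],   # face-like
                     [0.1, 0.2, 1], [0.2, 0.1, 1]])  # non-face
labels = [1, 1, 0, 0]

w = np.zeros(3)
for _ in range(20):                          # repeated training sessions
    for x, target in zip(examples, labels):
        error = target - predict(w, x)       # positive or negative feedback
        w += 0.1 * error * x                 # nudge weights toward the feedback

print([predict(w, x) for x in examples])     # -> [1, 1, 0, 0]
```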