NEURAL NETWORK

The term neural network refers to any of several forms of alternative computer technology. The basic idea behind all neural networks is to mimic the workings of the human brain.

Assets

Compared with digital computers, neural networks are fast; they can reach conclusions more rapidly than digital machines can.

Neural networks are good at tasks such as object recognition and speech recognition. They can take small bits of information about an object, sound, or other complex thing and fill in the gaps to reconstruct the whole. This was vividly demonstrated when an early neural network took an incomplete (20 percent) radar image of a jet plane and, from that data alone, produced a complete graphic of the type of aircraft that had caused the echoes.
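
To make the gap-filling idea concrete, here is a minimal sketch in Python of a Hopfield-style associative memory, one classic way such recall has been implemented. It is not the radar system described above, which is not specified in enough detail to reproduce; the pattern, weights, and update rule are illustrative assumptions.

import numpy as np

# Hopfield-style associative memory: store a pattern with the Hebbian
# outer-product rule, then recall it from a partial cue. Values are +1/-1.

def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)               # no self-connections
    return W / len(patterns)

def recall(W, cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)  # each "neuron" either fires or not
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])   # the complete pattern
W = train(stored)

partial = stored[0].copy()
partial[3:] = 1                  # most of the pattern is missing or wrong
print(recall(W, partial))        # the stored pattern is reconstructed

With a single stored pattern, one thresholded update is usually enough to restore the corrupted elements; practical systems store many, and far larger, patterns.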

Neural networks can learn from their mistakes, improving their performance after repeating a task many times. They also exhibit graceful degradation, so that if part of the system is destroyed, the rest can keep things going, albeit at a slower speed and/or with less accuracy.
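
The sketch below illustrates learning from mistakes using the classic perceptron rule on the logical AND function; the task, learning rate, and number of passes are arbitrary illustrative choices, not a description of any particular system mentioned above.

import numpy as np

# A perceptron learns the logical AND function by correcting its own
# mistakes: the weights change only when the output is wrong.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)
b = 0.0
rate = 0.1

for epoch in range(20):                        # repeat the task many times
    for xi, target in zip(X, y):
        out = 1.0 if xi @ w + b > 0 else 0.0
        error = target - out                   # nonzero only on a mistake
        w += rate * error * xi                 # adjust toward the right answer
        b += rate * error

print([1.0 if xi @ w + b > 0 else 0.0 for xi in X])   # [0.0, 0.0, 0.0, 1.0]

Each pass over the task corrects some of the remaining errors, which is the sense in which performance improves with repetition.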

Limitations

Neural networks are imprecise. If you ask one to balance your checkbook, it will come close, but it will not give an exact answer. Neural networks are not designed to do calculations of the sort a digital computer can carry out. A $5.00 calculator will outperform even the most complex neural network at basic arithmetic. In that sense, neural network technology resembles analog computer technology.
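
As a small illustration of that imprecision, the sketch below assumes a single linear neuron trained by gradient descent to approximate addition. After a finite amount of training it comes close to the correct sums, but a calculator still wins on exactness.

import numpy as np

# Train one linear "neuron" to approximate addition. The result is close
# to the true sum, but (unlike a calculator) not exact.

rng = np.random.default_rng(0)
X = rng.random((200, 2))            # pairs of numbers to add
y = X.sum(axis=1)                   # the exact sums

w = np.zeros(2)
for _ in range(100):                # a finite amount of training
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(X)
    w -= 0.1 * grad

a, b = 3.10, 4.25
print("network:", a * w[0] + b * w[1])   # close to 7.35, but not exact
print("exact:  ", a + b)                 # 7.35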

Another weakness of neural networks arises from the fact that they inevitably make mistakes as they zero in on their conclusions. Digital machines break problems down into minuscule pieces, meticulously grinding out a solution to a level of exactness limited only by the number of transistors that can be fabricated onto a silicon chip. Neural networks tackle problems as a whole, modifying their outlook until the results satisfy certain conditions.
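
The sketch below, using an arbitrary one-weight model, shows that style of working: rather than computing the answer outright, the network keeps adjusting until the error meets a chosen condition.

import numpy as np

# Keep modifying a single weight until the error satisfies a condition
# (here, mean squared error below a small tolerance).

X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X                               # the relationship to be captured

w = 0.0
while np.mean((w * X - y) ** 2) > 1e-4:   # the condition to satisfy
    grad = np.mean(2 * (w * X - y) * X)   # direction of increasing error
    w -= 0.01 * grad                      # small corrective step

print(w)    # close to 2.0, reached by successive approximation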

One might make the generalization that digital computers are analytical, while neural networks are intuitive.

Fuzzy logic

Digital machines recognize, at the fundamental level, two conditions or states: logic 1 and logic 0. These two logic states can be specified in terms of high/low, true/false, plus/minus, yes/no, red/green, up/down, front/back, or any other clear-cut dichotomy. The human brain is made up of neurons connected by synapses in a huge network, in which each neuron can communicate with a vast number of others. In a neural network, “neurons” and “synapses” are the processing elements and the data paths between them. The earliest neural-network enthusiasts postulated that the human brain works like a huge digital machine, its neurons either “firing” or “staying quiet.” Later, it was learned that things are more complicated than that.
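
In that early, digital-style picture, an artificial neuron sums its weighted inputs and either fires or stays quiet. A minimal sketch, with made-up weights and threshold:

import numpy as np

# A two-state "neuron": it either fires (1) or stays quiet (0).

def binary_neuron(inputs, weights, threshold):
    return 1 if np.dot(inputs, weights) >= threshold else 0

print(binary_neuron([1, 0, 1], [0.5, 0.5, 0.5], 1.0))   # fires: 1
print(binary_neuron([1, 0, 0], [0.5, 0.5, 0.5], 1.0))   # stays quiet: 0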

In some neural networks, the neurons can send only two different types of signals, modeling the brain as it was theorized to work in the 1950s. However, the results can be modified by giving some neurons and/or synapses more importance, or weight, than others. This creates fuzzy logic, in which truth and falsity exist with varying degrees of validity.
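
One common way to realize this graded behavior, shown here as an illustrative assumption rather than a full fuzzy-logic system, is to weight the inputs and squash the weighted sum into a value between 0 and 1, read as a degree of truth:

import numpy as np

# Weighted inputs and a squashing (sigmoid) function give a degree of
# truth between 0 and 1 instead of a hard yes/no.

def graded_neuron(inputs, weights, bias=0.0):
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

# The same evidence, weighted differently, yields different degrees of truth.
print(graded_neuron([1, 1, 0], [2.0, 0.5, 0.5]))   # about 0.92: strongly true
print(graded_neuron([1, 1, 0], [0.2, 0.5, 0.5]))   # about 0.67: weakly true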

Neural networks and artificial intelligence

Some researchers suggest that the ultimate goal of AI can be reached by a “marriage” of digital and neural-network technologies. Others think neural networks represent a dead end, and that digital technology has clearly proven itself to be the best way to build a computer. Neural-network research has gone through boom-and-bust cycles, partly as a result of such differences of opinion.

Psychologists are interested in this technology because it might help them answer questions about the human brain. However, no neural network has come close to the complexity of a real brain. Even the biggest neural networks conceived, with billions of neurons and trillions of synapses, would be less intelligent than a cat or a dog.
