Neural networks are empirical mathematical models that emulate the processes people use to recognize patterns, learn tasks, and solve problems. The development of neural networks was originally motivated by investigating learning in simple mammalian brains – in particular, reinforcement learning from sensory feedback, and the idea that a neuron in a mammalian brain “fires,” activating the neurons it connects to, only after some (electrical) threshold is exceeded. Neural networks are usually characterized in terms of the number and types of connections between individual artificial neurons, called processing elements, and the learning rules used when data is presented to the network. Every artificial neuron has a transfer function, typically non-linear, that generates a single output value from all of the input values applied to it. Every connection has a weight that is applied to the value passing along that connection. A particular organization of artificial neurons and connections is often referred to as a neural network architecture.
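The behavior of a single processing element can be sketched in a few lines: each input is scaled by its connection weight, the weighted sum is offset by a bias (a soft threshold), and a non-linear transfer function produces the single output value. This is a generic illustration, not any particular vendor's implementation; the sigmoid transfer function and the specific weights are illustrative choices.

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid transfer function.

    The neuron's output rises sharply once the weighted sum exceeds the
    soft threshold set by the bias.
    """
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashes to (0, 1)

# Example: two inputs, each scaled by its connection weight
out = neuron_output([0.5, 0.8], [0.4, -0.2], bias=0.1)
print(round(out, 4))
```

Because the sigmoid is smooth rather than a hard step, the "firing" is graded, which is what makes the function differentiable and hence trainable.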
The power of neural networks comes from their ability to learn from experience (that is, from empirical data collected in some problem domain). A neural network learns how to identify patterns by adjusting its weights in response to input data. The learning that occurs in a neural network can be supervised or unsupervised. With supervised learning, every training sample has an associated known output value. The difference between the known output value and the neural network output value is used during training to adjust the connection weights in the network. With unsupervised learning, the neural network identifies clusters in the input data that are “close” to each other based on some mathematical definition of distance. In either case, after a neural network has been trained, it can be deployed within an application and used to make decisions or perform actions when new data is presented. Neural networks and allied techniques such as genetic algorithms and fuzzy logic are among the most powerful tools available for detecting and describing subtle relationships in massive quantities of seemingly unrelated data.
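The unsupervised notion of “closeness” can be made concrete with a distance-based assignment step, the building block of clustering methods such as k-means. This is a minimal sketch assuming Euclidean distance and two illustrative cluster centers; the centroids and sample points are invented for the example.

```python
import math

def nearest_centroid(point, centroids):
    """Assign a sample to the cluster whose centroid is closest
    under Euclidean distance -- one mathematical definition of "close"."""
    dists = [math.dist(point, c) for c in centroids]
    return dists.index(min(dists))

centroids = [(0.0, 0.0), (5.0, 5.0)]   # illustrative cluster centers
samples = [(0.5, 0.2), (4.8, 5.1), (0.1, 1.0)]
labels = [nearest_centroid(p, centroids) for p in samples]
print(labels)  # each sample labeled with the index of its nearest cluster
```

In a full unsupervised method the centroids themselves would also be updated from the data; here they are fixed to keep the distance idea isolated.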
“Training” a neural network is the process of adjusting its “weights” over multiple iterations through some (empirically collected) dataset. Weights in a neural network, though calculated dynamically, are analogous to coefficients in, for example, a linear regression model. At the end of each iteration through a dataset, a neural network has a specific structure (values for weights, number of weight connections, etc.) and thus it can compute an output, which can be compared to the “target” output in the training dataset. The difference between the network output and the target output is the error. During training the objective is to minimize this error; the back-propagation algorithm, in essence, attributes a portion of the error to every weight, so that each weight can be adjusted in proportion to its contribution. This process is how a neural network “learns.” In effect, a feed-forward neural network is an equation that maps a multi-dimensional interval of input values to an interval of output values.
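The error-driven weight adjustment described above can be shown in miniature with a single-weight model and gradient descent, the principle underlying back-propagation. This is a sketch, not the full algorithm: the dataset (targets following y = 2x), the learning rate, and the iteration count are all illustrative assumptions.

```python
# Minimal sketch of error-driven weight adjustment (gradient descent)
# on a one-weight model: output = w * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target relationship y = 2x
w = 0.0            # weight starts far from the value that fits the data
lr = 0.05          # learning rate: how large a correction each error causes

for _ in range(200):                  # repeated iterations through the dataset
    for x, target in data:
        output = w * x                # network output for this sample
        error = output - target      # signed difference from the target
        w -= lr * error * x           # adjust the weight to shrink the error

print(round(w, 3))
```

After repeated passes the weight converges to 2.0, the coefficient that minimizes the error over the dataset; in a multi-layer network, back-propagation performs this same correction on every weight simultaneously using the chain rule.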
While a number of named neural network paradigms can be found in the literature, by far the most common is a feed-forward neural network (also referred to as a multi-layer perceptron) trained using a back-propagation algorithm. A feed-forward network comprises some number of layers – an input layer, one or more hidden layers (each with some number of hidden “units” or “processing elements”), and an output layer. Hidden units each contain a nonlinear, differentiable transfer function (typically a sigmoid or hyperbolic tangent), and are connected by weights (whose values are initially set to random numbers, typically between 0 and 1).
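A forward pass through such a network is compact enough to write out directly: the inputs feed a layer of sigmoid hidden units, whose outputs in turn feed the output layer, with every connection carrying a randomly initialized weight in [0, 1) as described above. The layer sizes and input vector are illustrative assumptions; a real output layer often has its own transfer function, omitted here for brevity.

```python
import math
import random

def sigmoid(z):
    """Nonlinear, differentiable transfer function used by the hidden units."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    """One forward pass: input layer -> sigmoid hidden layer -> output."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

random.seed(0)  # fixed seed so the illustration is repeatable
n_in, n_hidden = 3, 4
# Connection weights initially set to random values between 0 and 1
w_hidden = [[random.random() for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [random.random() for _ in range(n_hidden)]

y = forward([0.2, 0.7, 0.1], w_hidden, w_out)
print(y)  # a single output value computed from the input vector
```

Training then consists of back-propagating the error through these same layers to adjust `w_hidden` and `w_out`.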
Neural networks are particularly well-suited to identifying (mapping) non-linear relationships that cannot be captured by standard statistical metrics or linear modeling approaches. Neural networks typically generalize very well when training data is appropriately distributed in the problem space, and they are very tolerant of noise in data (though as with any mathematical modeling, over-fitting is always an issue to be considered).
In today’s increasingly interconnected world, commercial and social relationships generate a wealth of product, service, and transaction data that is transferred, manipulated, and archived in electronic form. There is a critical need for better ways to find, organize, and exploit this abundance of data. Neural networks replace or enhance conventional methods that rely on traditional statistics to perform pattern recognition, classification, curve fitting, and prediction. Neural network-based technologies permit rapid identification of patterns in data that represent new knowledge. New knowledge significantly increases the value and quality of decisions based on the data. As knowledge accumulates, insights and strategies emerge to drive innovations that lead to improved organizational responsiveness, more efficient use of resources, and ultimately greater profitability.
Neural network software developed by NeuralWare offers sophisticated classification, analysis, and prediction technology that can be embedded in products and services and then deployed using the Internet or incorporated within the Internet infrastructure. NeuralWare solutions are found in a wide variety of industry applications.
In short, neural networks are useful in any application domain in which large quantities of data are generated, and where important relationships among variables represented by the data are not known and cannot be determined from traditional statistical analysis or first-principles models.