Neural Network Basics
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
Neural Network Definition
Work on artificial neural networks, commonly referred to as “neural networks,” has been motivated right from its inception by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, nonlinear, and parallel computer (information-processing system). It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations (e.g., pattern recognition, perception, and motor control) many times faster than the fastest digital computer in existence today. Consider, for example, human vision, which is an information-processing task. It is the function of the visual system to provide a representation of the environment around us and, more important, to supply the information we need to interact with the environment. To be specific, the brain routinely accomplishes perceptual recognition tasks (e.g., recognizing a familiar face embedded in an unfamiliar scene) in approximately 100–200 ms, whereas tasks of much lesser complexity take a great deal longer on a powerful computer.
More specifically, “A neural network is an interconnected assembly of simple processing elements, units or nodes, whose functionality is loosely based on the animal neuron. The processing ability of the network is stored in the inter-unit connection strengths, or weights, obtained by a process of adaptation to, or learning from, a set of training patterns.”
Artificial neural networks are an attempt at modeling the information processing capabilities of nervous systems. Thus, first of all, we need to consider the essential properties of biological neural networks from the viewpoint of information processing. This will allow us to design abstract models of artificial neural networks, which can then be simulated and analyzed. The human brain is capable of processing information and making near-instantaneous decisions. Many researchers have shown that the brain performs computations in a radically different manner from that of binary computers: it is a massive network of parallel and distributed computing elements. Over the last few decades, scientists have worked to build computational systems modeled on it, called neural networks or connectionist models. A neural network is composed of a set of parallel and distributed processing units, called nodes or neurons, which are arranged in layers and interconnected by unidirectional or bidirectional links.
Figure 1: Basic Structure of Neural Network
The basic unit of a neural network is the neuron. A neuron receives N inputs, represented by \( x_1, x_2, \ldots, x_N \), and each input is multiplied by a connection weight, represented by \( w_1, w_2, \ldots, w_N \). The weighted inputs are summed and fed through a transfer function (activation function) to generate the result (output): \( y = f\left(\sum_{i=1}^{N} w_i x_i\right) \).
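The weighted-sum-and-activation computation just described can be sketched in a few lines of Python. The sigmoid activation, the function name `neuron`, and the example input values are illustrative choices, not taken from the text:

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Single artificial neuron: activation applied to the
    weighted sum of inputs, sum_i(w_i * x_i) + bias."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-s))

# Example with N = 2 inputs and arbitrary weights.
out = neuron([1.0, 0.5], [0.4, -0.2])
```

With a zero weighted sum the sigmoid returns exactly 0.5; any real-valued sum is mapped into (0, 1), which is what makes the unit's response nonlinear.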
Benefits of Neural Networks
It is apparent that a neural network derives its computing power through, first, its massively parallel distributed structure and, second, its ability to learn and therefore generalize. Generalization refers to the neural network’s production of reasonable outputs for inputs not encountered during training (learning). These two information processing capabilities make it possible for neural networks to find good approximate solutions to complex (large-scale) problems that are intractable. Neural networks offer the following useful properties and capabilities:
- Non-linearity: An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear. Moreover, the nonlinearity is of a special kind in the sense that it is distributed throughout the network. Nonlinearity is a highly important property, particularly if the underlying physical mechanism responsible for generation of the input signal (e.g., speech signal) is inherently nonlinear.
- Input–Output Mapping: A popular paradigm of learning, called learning with a teacher or supervised learning, involves modification of the synaptic weights of a neural network by applying a set of labeled training examples, or task examples. Each example consists of a unique input signal and a corresponding desired (target) response. The network is presented with an example picked at random from the set, and the synaptic weights (free parameters) of the network are modified to minimize the difference between the desired response and the actual response of the network produced by the input signal in accordance with an appropriate statistical criterion. The training of the network is repeated for many examples in the set, until the network reaches a steady state where there are no further significant changes in the synaptic weights. The previously applied training examples may be reapplied during the training session, but in a different order. Thus the network learns from the examples by constructing an input–output mapping for the problem at hand.
- Evidential Response: In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.
- Contextual Information: Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally by a neural network.
- VLSI Implementability: The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using very-large-scale-integrated (VLSI) technology. One particularly beneficial virtue of VLSI is that it provides a means of capturing truly complex behavior in a highly hierarchical fashion.
- Uniformity of Analysis and Design: Basically, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks. This feature manifests itself in different ways:
- Neurons, in one form or another, represent an ingredient common to all neural networks.
- This commonality makes it possible to share theories and learning algorithms in different applications of neural networks.
- Modular networks can be built through a seamless integration of modules.
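The supervised-learning process described under Input–Output Mapping can be illustrated with a minimal sketch: a single linear neuron whose weight is repeatedly adjusted to reduce the difference between the desired and actual response (here via the delta rule). The learning rate, toy data, and epoch count are arbitrary assumptions for illustration:

```python
def train(samples, lr=0.1, epochs=50):
    """Fit one linear neuron y = w * x to (input, target) pairs by
    repeatedly nudging w to shrink the desired-actual difference."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = w * x                   # actual response of the network
            w += lr * (target - y) * x  # delta-rule weight adjustment
    return w

# Toy training set for the mapping y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (-1.0, -2.0)]
w = train(data)
```

After enough passes over the examples the weight stops changing significantly (here it settles near 2.0), which is the "steady state" the text refers to.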
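The Evidential Response property can likewise be sketched. One common way to obtain a confidence measure (a hypothetical choice here, not prescribed by the text) is to pass class scores through a softmax and reject the decision when the top probability falls below a threshold:

```python
import math

def classify_with_reject(scores, threshold=0.6):
    """Return the index of the winning class, or None if the
    softmax confidence is too low (ambiguous pattern rejected)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None  # confidence too low: reject the pattern
    return best

confident = classify_with_reject([4.0, 0.5, 0.1])   # one clear winner
ambiguous = classify_with_reject([1.0, 0.9, 0.95])  # near-tie: rejected
```

Rejecting near-ties instead of forcing a choice is exactly the mechanism by which reporting confidence can improve classification performance.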
 Haykin, Simon S. Neural Networks and Learning Machines, 3rd ed., Upper Saddle River, NJ: Pearson, 2009.
 Hiregoudar, Shruti B., K. Manjunath, and K. S. Patil. “A Survey: Research Summary on Neural Networks,” International Journal of Research in Engineering and Technology, vol. 03, special issue 03, May 2014.
 Rojas, Raúl. Neural Networks: A Systematic Introduction, Springer Science & Business Media, 2013.