How Neural Networks Work: A Simple Example

November 11, 2017

The simplest definition of a neural network, more properly referred to as an ‘artificial’ neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:




“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of the mammalian cerebral cortex, but on much smaller scales. A large ANN might have hundreds or thousands of processing units, whereas a mammalian brain has billions of neurons, with a corresponding increase in the magnitude of their overall interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks accurately resemble biological systems, some have pursued biological fidelity. For example, researchers have accurately simulated the function of the retina and modeled the eye rather well.

Figure 1: Neural Network Example

Although the mathematics involved in neural networks is not trivial, a user can rather easily gain at least an operational understanding of their structure and function.




Neural networks are typically organized in layers. Layers are made up of a number of interconnected ‘nodes’ which contain an ‘activation function’. Patterns are presented to the network via the ‘input layer’, which communicates to one or more ‘hidden layers’ where the actual processing is done via a system of weighted ‘connections’. The hidden layers then link to an ‘output layer’. Most ANNs contain some form of ‘learning rule’ which modifies the weights of the connections according to the input patterns that it is presented with. In a sense, ANNs learn by example as do their biological counterparts; a child learns to recognize dogs from examples of dogs.
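To make the layered structure concrete, here is a minimal sketch in Python. The weights, the 0.5 node threshold, and the input pattern are all made-up illustrative values, not anything prescribed above:

```python
# Minimal sketch of a layered network: input -> hidden -> output.
# All weights and the input pattern are made-up illustrative values.

def step(x, threshold=0.5):
    """Simple threshold activation: fire (1) if input exceeds threshold."""
    return 1 if x > threshold else 0

def layer(inputs, weights):
    """Each node computes a weighted sum of all inputs, then activates."""
    return [step(sum(w * i for w, i in zip(node_w, inputs)))
            for node_w in weights]

# 2 inputs -> 2 hidden nodes -> 1 output node
hidden_weights = [[0.4, 0.6],   # weights into hidden node 1
                  [0.9, 0.1]]   # weights into hidden node 2
output_weights = [[0.5, 0.5]]   # weights into the single output node

pattern = [1, 0]                       # presented at the input layer
hidden = layer(pattern, hidden_weights)
output = layer(hidden, output_weights)
print(hidden, output)                  # [0, 1] [0]
```

The point of the sketch is only the flow of information: a pattern enters at the input layer, each hidden node forms its own weighted sum and activation, and the hidden activities are in turn weighted and summed at the output.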

Although there are many different kinds of learning rules used by neural networks, this demonstration is concerned with only one: the delta rule. The delta rule is often utilized by the most common class of ANNs, called ‘backpropagational neural networks’ (BPNNs). Backpropagation is an abbreviation for the backwards propagation of error.
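In its simplest form, the delta rule adjusts each weight in proportion to the difference between the target output \( t \) and the actual output \( o \), scaled by the input \( I_i \) on that connection and a learning rate \( \eta \). This is the standard textbook form; the text above does not spell it out:

\[ \Delta w_i = \eta \, (t - o) \, I_i \]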

Neural Network Example:

Figure 2: A simple two-layer network applied to the AND problem

Consider a simple neural network made up of two inputs connected to a single output unit (Figure 2). The output of the network is determined by calculating a weighted sum of its two inputs and comparing this value with a threshold \( \theta \). If the net input \( net \) is greater than the threshold, the output is 1; otherwise it is 0. Mathematically, we can summarize the computation performed by the output unit as follows:



\[ net = w_1 I_1 + w_2 I_2 \]

\[ o = \begin{cases} 1 & \text{if } net > \theta \\ 0 & \text{otherwise} \end{cases} \]
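These two equations translate almost directly into code. A minimal sketch follows; the function name output_unit and its argument names are ours:

```python
# Direct translation of the two equations above.
# Weight and threshold values are left as parameters, to be chosen or learned.

def output_unit(i1, i2, w1, w2, theta):
    net = w1 * i1 + w2 * i2      # weighted sum of the two inputs
    return 1 if net > theta else 0
```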

Suppose that the output unit performs a logical AND operation on its two inputs (shown in Figure 2). One way to think about the AND operation is that it is a classification decision. We can imagine that all Jets and Sharks gang members can be identified on the basis of two characteristics: their marital status (single or married) and their occupation (pusher or bookie). We can present this information to our simple network as a 2-dimensional binary input vector where the first element of the vector indicates marital status (single = 0 / married = 1) and the second element indicates occupation (pusher = 0 / bookie = 1). At the output, the Jets gang members comprise “class 0” and the Sharks gang members comprise “class 1”. By applying the AND operator to the inputs, we classify an individual as a member of the Sharks gang only if they are both married AND a bookie; i.e., the output is 1 only when both of the inputs are 1.
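For concreteness, the encoding step might look like this (a trivial sketch; the dictionaries and the encode helper are made up for illustration):

```python
# Encode a gang member's two attributes as the network's binary input vector.
marital = {'single': 0, 'married': 1}
occupation = {'pusher': 0, 'bookie': 1}

def encode(status, job):
    return [marital[status], occupation[job]]

print(encode('married', 'bookie'))   # [1, 1] -> AND outputs 1 -> class 1 (Sharks)
```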

The AND function is easy to implement in our simple network. Based on the network equations, there are four inequalities that must be satisfied:

\[ w_1 \cdot 0 + w_2 \cdot 0 < \theta \]

\[ w_1 \cdot 0 + w_2 \cdot 1 < \theta \]

\[ w_1 \cdot 1 + w_2 \cdot 0 < \theta \]

\[ w_1 \cdot 1 + w_2 \cdot 1 > \theta \]

Here’s one possible solution. If both weights are set to 1 and the threshold is set to 1.5, then

\[ (1)(0) + (1)(0) = 0 < 1.5 \;\Rightarrow\; o = 0 \]

\[ (1)(0) + (1)(1) = 1 < 1.5 \;\Rightarrow\; o = 0 \]

\[ (1)(1) + (1)(0) = 1 < 1.5 \;\Rightarrow\; o = 0 \]

\[ (1)(1) + (1)(1) = 2 > 1.5 \;\Rightarrow\; o = 1 \]
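Reusing the output_unit sketch from above, we can verify this proposed solution against all four input patterns in one loop:

```python
# Check the hand-picked solution w1 = w2 = 1, theta = 1.5 on every input pair.
for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, '->', output_unit(i1, i2, w1=1, w2=1, theta=1.5))
# 0 0 -> 0
# 0 1 -> 0
# 1 0 -> 0
# 1 1 -> 1
```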

Although it is straightforward to explicitly calculate a solution to the AND problem, an obvious question concerns how the network might learn such a solution. That is, given random initial values for the weights, can we define an incremental procedure that will converge to a set of weights implementing AND?
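Returning to the delta rule mentioned earlier, here is a minimal sketch of such an incremental procedure. Treating the threshold \( \theta \) as learnable alongside the weights is our assumption here, not something specified above:

```python
import random

# Delta-rule sketch for learning AND from random starting weights.
# Nudging theta along with w1 and w2 is our addition for this sketch.
random.seed(0)                             # fixed seed for reproducibility
w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
theta = random.uniform(-1, 1)
eta = 0.1                                  # learning rate

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for epoch in range(100):
    errors = 0
    for (i1, i2), t in examples:
        o = 1 if w1 * i1 + w2 * i2 > theta else 0
        if o != t:                         # only wrong answers change weights
            w1 += eta * (t - o) * i1
            w2 += eta * (t - o) * i2
            theta -= eta * (t - o)         # raising theta makes firing harder
            errors += 1
    if errors == 0:                        # all four patterns correct: done
        break

print(f"w1={w1:.2f} w2={w2:.2f} theta={theta:.2f} after {epoch + 1} epochs")
```

Because AND is linearly separable, this kind of procedure is guaranteed to converge; the final weights and threshold will satisfy the four inequalities listed above.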

