# Introduction to the Hopfield Neural Network

December 22, 2017

Human beings have long wondered about the reasons for human capabilities and incapabilities. Successful attempts have been made to design and develop systems that emulate human capabilities or help overcome human incapabilities. The human brain, which has taken millions of years to evolve to its present architecture, excels at tasks such as vision, speech, information retrieval and complex pattern recognition, all of which are extremely difficult for conventional computers. A number of mechanisms have been identified that seem to enable the human brain to handle such problems; these include association, generalization and self-organization. A brain-inspired computational technique, namely the Hopfield Neural Network, is explained here.

### Working of the Hopfield Neural Network

A neural network (or, more formally, an artificial neural network) is a mathematical or computational model inspired by the structure and functional aspects of biological neural networks. It consists of an interconnected group of artificial neurons. The original inspiration for the term artificial neural network came from examination of central nervous systems and their neurons, axons, dendrites and synapses, which constitute the processing elements of biological neural networks. One of the milestones for the current renaissance in the field of neural networks was the associative model proposed by Hopfield at the beginning of the 1980s.

The Hopfield Neural Network is an example of a network with feedback (a so-called recurrent network), in which the output of every neuron is connected to the inputs of all the other neurons by means of appropriate weights. Of course, there are also external inputs that provide the neurons with the components of the test vector.

A Hopfield network is a recurrent neural network in which every neuron is both an input and an output unit, and

• Each neuron ​$$i$$​ is a perceptron with the binary threshold activation function;
• Any pair of neurons ​$$(i,j)$$​ is connected by two weighted links ​$$w_{ij}$$​ and ​$$w_{ji}$$​.

Formally speaking, a neuron is characterized by

$v_i (t)=\text{sign}(u_i (t))$

$v_i (t) = \begin{cases} 1 & \quad \text{if } u_i (t) \geq 0\\ -1 & \quad \text{otherwise} \end{cases}$

where ​$$u_i (t)$$​, the total input at time ​$$t$$​, is computed by

$u_i (t)= \sum_{j \neq i} v_j (t-1)\, w_{ji} + \theta_i$

A neuron ​$$i$$​ is called stable at time ​$$t$$​ if ​$$v_i (t)= v_i (t-1)=\text{sign}(u_i (t-1))$$​. A Hopfield net is called stable if all of its neurons are stable. Hereafter, we drop the time ​$$t$$​ and assume that all the biases are 0; i.e., we use ​$$u_i$$​ instead of ​$$u_i (t)$$​, and ​$$\theta_i = 0$$​ for all ​$$i$$​.
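Under these definitions, a single neuron update can be sketched in Python as follows. This is a minimal illustration; the function names and the NumPy representation of the state vector ​$$v$$​, weight matrix ​$$W$$​ and biases ​$$\theta$$​ are my own choices, not from the original text.

```python
import numpy as np

def sign(x):
    # Binary threshold activation: +1 if x >= 0, -1 otherwise
    return 1 if x >= 0 else -1

def total_input(i, v, W, theta):
    # u_i = sum over j != i of v_j * w_ji + theta_i
    n = len(v)
    return sum(W[j, i] * v[j] for j in range(n) if j != i) + theta[i]

def update_neuron(i, v, W, theta):
    # v_i <- sign(u_i): recompute neuron i's output from the other outputs
    v = v.copy()
    v[i] = sign(total_input(i, v, W, theta))
    return v
```

For instance, with two neurons joined by weight 1 and zero biases, updating neuron 0 simply copies the sign of neuron 1's current output.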

### Architecture of the Hopfield Neural Network

Following are some important points to keep in mind about the discrete Hopfield network –

Figure 1: Hopfield Network

• This model consists of neurons with one inverting and one non-inverting output.
• The output of each neuron is fed as input to the other neurons, but not back to itself.
• Weight/connection strength is represented by ​$$w_{ij}$$​.
• Connections can be excitatory as well as inhibitory: a connection is excitatory if the output of the neuron is the same as the input it drives, and inhibitory otherwise.
• Weights should be symmetrical, i.e. ​$$w_{ij}= w_{ji}$$​.

The outputs from ​$$Y_1$$​ going to ​$$Y_2$$​, ​$$Y_i$$​ and ​$$Y_n$$​ have the weights ​$$w_{12}$$​, ​$$w_{1i}$$​ and ​$$w_{1n}$$​ respectively. Similarly, the other arcs have their weights.

### Properties of the Hopfield network

• A recurrent network with all nodes connected to all other nodes
• Nodes have binary outputs (either 0,1 or -1,1)
• Weights between the nodes are symmetric ​$$w_{ij}= w_{ji}$$
• No connection from a node to itself is allowed
• Nodes are updated asynchronously (i.e. nodes are selected at random)
• The network has no “hidden” nodes or layers
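Putting these properties together, the sketch below builds a complete network in Python and runs asynchronous updates until the state settles. The Hebbian (outer-product) storage rule used here is an assumption on my part — the text above does not specify how the weights are obtained — but it produces exactly the symmetric, zero-diagonal weight matrix the properties require.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    # Hebbian outer-product rule (assumed; not described in the text).
    # Yields symmetric weights w_ij = w_ji with no self-connections.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no connection from a node to itself
    return W / n

def recall(W, v, steps=100):
    # Asynchronous updates: pick a neuron at random and recompute
    # its output v_i = sign(u_i), with all biases theta_i = 0.
    v = v.copy()
    n = len(v)
    for _ in range(steps):
        i = rng.integers(n)
        u = W[:, i] @ v          # diagonal is zero, so j = i contributes nothing
        v[i] = 1 if u >= 0 else -1
    return v

# Store one pattern, then recall it from a probe with one flipped bit.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = train_hebbian(pattern[None, :])
probe = pattern.copy()
probe[0] = -1
print(recall(W, probe))
```

With a single stored pattern, the corrupted bit is pulled back to its stored value as soon as that neuron is selected for update, after which every neuron is stable and the state no longer changes.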
