Throughout history, we humans have found new tools and ideas inspired by nature, using our skills of imitation and inference.

“In all things of nature there is something of the marvelous.”

Aristotle

For example, we observed mosquitoes and learned how to design better injector needles. We drew on shark skin when designing swimsuits, and on humpback whale fins when designing turbine blades [1]. What we have learned from nature has helped to advance humankind and make life easier.

“Nature is a volume of which God is the author”

William Harvey

It is true that nature is a wonderful book, but just as remarkable is people's ability to read it. We can read this book and learn from it. It is just amazing!

Scientists have also studied this ability itself and tried to imitate it. McCulloch and Pitts worked in this direction and presented the first formal artificial neuron model [2]. Their work laid the foundation for the progress of artificial neural networks (ANNs) in the years that followed [3].

The Perceptron model is another pioneering study of artificial neural systems. These neuron-like elements, named Perceptrons, were presented in 1958 by Frank Rosenblatt [3][4].

Perceptrons had the ability to learn: by adjusting its connections, a perceptron can be trained to recognize certain patterns [3].

The idea of the Perceptron was developed further in the following years, and it provided the ground for some machine learning methods still used today [3].

Examining the Perceptron algorithm is a nice way to start understanding artificial neural networks.

How does the Perceptron algorithm work?

Let’s answer this question with the help of Zurada’s Artificial Neural Systems book [3]:

Single Layer Discrete Perceptron Algorithm

In the Perceptron learning rule, the signals at the input are weighted by the connection weights (w). The result is then passed through a threshold function, which gives the neuron output.

In the Perceptron learning rule there is also a learning signal value, which plays a role similar to a loss function.

This learning signal is multiplied by a constant coefficient and by the input values, producing weight adjustments. Finally, the weights and these adjustments are summed to obtain the new connection weight values.

Let’s examine the Perceptron method, which we have briefly summarized above, step by step.

Before the explanation, let’s define the variables that we will use when expressing the equations:

  • x  : Input signals of the network.
  • w  : Weight values of the inputs.
  • Δw : Weight adjustment values.
  • r  : Learning signal.
  • d  : Desired output response for the given inputs.
  • o  : Actual output response of the neuron.
  • c  : Learning constant (a positive coefficient).
  • i  : Index of the input sample (x1, x2, ...).
  • j  : Index of the parameter within an input sample.

First, to calculate the output from the input signals, each input value is multiplied by its connection weight and the results are summed. This can be computed as the inner product of the weight vector and the input signal vector.

NET = w^T x_i

The resulting “NET” value is given as input to a threshold function named “Signum”. The output of the Signum function is the neuron output.

o_i = sgn(NET) = sgn(w^T x_i)

In a single-layer and single-neuron system, the output of the neuron is also the output of the network.

By the way, “Signum” is an activation function that returns 1 if its input is greater than 0 and -1 if it is less than 0.
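As a minimal sketch of this step, assuming NumPy; the sample values and initial weights below are made up purely for illustration:

```python
import numpy as np

def sgn(net):
    # Signum activation: +1 if net > 0, otherwise -1
    # (the value at exactly 0 is a convention; -1 is assumed here).
    return 1.0 if net > 0 else -1.0

# Hypothetical sample: three input signals and an arbitrary weight vector.
x = np.array([0.5, -1.0, 1.0])   # input signals x
w = np.array([0.2, 0.4, -0.1])   # connection weights w

net = w @ x       # NET = w^T x (a scalar for a single neuron)
o = sgn(net)      # neuron output, +1 or -1
print(net, o)     # -0.4, -1.0
```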

To calculate the learning signal, the difference between the desired output and the output of the network is taken. Since both outputs are ±1, the learning signal value will be 2, 0, or -2.

r = d_i - o_i
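Continuing the sketch above, the learning signal is just this difference; the desired output d used here is an assumed value:

```python
d = 1.0      # desired output for this sample (assumed)
r = d - o    # learning signal
# d = +1, o = -1  ->  r = +2  (output too low)
# d = o           ->  r =  0  (correct, no adjustment)
# d = -1, o = +1  ->  r = -2  (output too high)
```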

Using the learning signal, the weight adjustment values are calculated by multiplying it by the learning constant c and the input vector.

\Delta w = c r x_i = c [d_i - sgn(w^T x_i)] x_i

Finally, the weights and the weight adjustment values are summed and the new weights are obtained.

w = w + \Delta w
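Continuing the same sketch, one full weight update step looks like this; the learning constant c = 0.1 is an assumed value:

```python
c = 0.1                 # learning constant (assumed value)
delta_w = c * r * x     # weight adjustment values
w = w + delta_w         # new connection weights
```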

These calculations are repeated until the network error becomes zero. In this way, weight values that correctly separate the patterns are found.

Perceptron Learning Algorithm Code

You can use the following link to view sample code for the Single Layer Discrete Perceptron algorithm.

SLDP Code
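In case the link is unavailable, here is a minimal, self-contained NumPy sketch of the algorithm described above. The function name, the zero initial weights, the learning constant, and the AND-pattern training data are all illustrative assumptions, not the original sample code:

```python
import numpy as np

def sgn(net):
    # Signum activation: +1 if net > 0, otherwise -1 (convention at 0 assumed).
    return 1.0 if net > 0 else -1.0

def train_perceptron(X, d, c=0.1, max_epochs=100):
    """Single Layer Discrete Perceptron training (sketch).

    X : (n_samples, n_params) inputs, with a constant bias input appended.
    d : (n_samples,) desired outputs in {-1, +1}.
    c : learning constant.
    """
    w = np.zeros(X.shape[1])            # initial weights (assumed zeros)
    for _ in range(max_epochs):
        errors = 0
        for x_i, d_i in zip(X, d):
            o_i = sgn(w @ x_i)          # o_i = sgn(w^T x_i)
            r = d_i - o_i               # learning signal: 2, 0, or -2
            if r != 0:
                w = w + c * r * x_i     # w = w + Δw
                errors += 1
        if errors == 0:                 # network error is zero: stop
            break
    return w

# Illustrative usage: the logical AND pattern with inputs/outputs coded
# as -1/+1 and a constant bias input of 1 as the last column.
X = np.array([[-1, -1, 1],
              [-1,  1, 1],
              [ 1, -1, 1],
              [ 1,  1, 1]], dtype=float)
d = np.array([-1, -1, -1, 1], dtype=float)

w = train_perceptron(X, d)
print(w)   # weights that correctly separate the AND pattern
```

Because the AND pattern is linearly separable, the loop reaches zero error after a few epochs and stops, just as the stopping criterion described above requires.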