Friday, May 5, 2017

Artificial Neural Networks - Introduction

Author: Marek Libra

Posted under creative commons from Knol

The artificial neural network (NN hereafter) has been a popular subject of study in recent years. It has been successfully applied in a wide range of problem domains such as finance, engineering, medicine, geology, physics and control.

    Neural networks are especially useful for solving problems of prediction, classification and control. They are also a good alternative to classical statistical approaches such as regression analysis.

    Artificial neural networks try to model some properties of biological neural networks, which form the nervous system of biological organisms. This inspiration is a commonly known fact and is mentioned in most publications on neural networks.

    An NN is built from a large number of simple processing units called artificial neurons (just neurons hereafter).

    The interface of an artificial neuron consists of n numeric inputs and one numeric output. Some neuron models also include one special extra input called the bias. Each input is weighted by a numeric weight. The neuron can perform two operations: compute and adapt.

    The compute operation transforms inputs to the output. It takes the numeric inputs and computes their weighted sum, then applies a so-called activation function (a mathematical transformation) to this sum. The result of the activation function is set as the value of the output.
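The compute operation can be sketched in a few lines of Python. This is a minimal illustration, not from the original article; the function name and the choice of a sigmoid activation are assumptions for the example.

```python
import math

def neuron_compute(inputs, weights, bias=0.0):
    """One neuron's compute operation: weighted sum of the inputs
    plus the bias, passed through an activation function (sigmoid here)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

# With zero weighted sum the sigmoid returns its midpoint, 0.5:
print(neuron_compute([0.0, 0.0], [1.0, 1.0]))
# A nonzero weighted sum shifts the output toward 0 or 1:
print(neuron_compute([1.0, 0.5], [0.4, -0.2], bias=0.1))  # ≈ 0.5987
```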

    The adapt operation tunes the weights of an NN, based on pairs of inputs and expected outputs specified by the user, so that the computed output better approximates the expected output for the considered input.
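As an illustration of the adapt operation, here is a sketch of the perceptron rule (one of several adaptation algorithms; the article does not prescribe a specific one). The function name, learning rate and threshold activation are assumptions for the example.

```python
def neuron_adapt(inputs, weights, bias, expected, rate=0.1):
    """Perceptron-style adapt operation: nudge each weight and the bias
    so the computed output moves toward the expected output."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    output = 1.0 if s >= 0.0 else 0.0          # discrete threshold activation
    error = expected - output                  # 0 when the output is correct
    new_weights = [w + rate * error * x for x, w in zip(inputs, weights)]
    new_bias = bias + rate * error
    return new_weights, new_bias

# An incorrect output (expected 0, computed 1) pushes the weights down:
w, b = neuron_adapt([1.0, 1.0], [0.0, 0.0], 0.0, expected=0.0)
print(w, b)
```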

    The neurons in an NN are ordered and numbered (with numbers from N) according to that order.
    Many models of NNs are known. These models differ from each other in

    • the domain of the numeric inputs, outputs and weights (real, integer, or a finite set such as {0, 1}),

    • the presence of a bias (yes or no),

    • the definition of the activation function (sigmoid, hyperbolic tangent, discrete threshold etc.),

    • the topology of interconnected neurons (feed-forward or recurrent),

    • the ability to change the number of neurons or the network topology during the lifetime of
      the network,

    • the algorithm of the computation flow through the network over neurons,

    • the simulation time (discrete or continuous) or

    • the adaptation algorithm (none, back propagation, perceptron rule, genetic, simulated annealing etc.).
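To make the activation-function point concrete, here are three of the functions named above side by side. This is an illustrative sketch, not code from the article.

```python
import math

def sigmoid(s):
    """Smooth S-shaped function mapping any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def tanh(s):
    """Hyperbolic tangent: like the sigmoid, but mapping into (-1, 1)."""
    return math.tanh(s)

def threshold(s):
    """Discrete threshold: outputs only 0 or 1."""
    return 1.0 if s >= 0.0 else 0.0

# The same weighted sum gives different outputs under each activation:
for f in (sigmoid, tanh, threshold):
    print(f.__name__, f(-1.0), f(0.0), f(1.0))
```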

    A good taxonomy of NN models can be found, e.g., in [1].

    More detailed general descriptions, both formal and readable, can be found in [2].

Further Reading



  • [1] J. Šíma and P. Orponen. General purpose computation with neural networks: A survey of complexity theoretic results. Neural Computation, 2003.
  • [2] David M. Skapura. Building Neural Networks. Addison-Wesley, 1995.

Source Knol:
Knol Nrao - 5193

More detailed reading

Artificial Neural Networks: Mathematics of Backpropagation (Part 4). October 28, 2014, in ml primers, neural networks.

Updated 8 May 2017, 28 April 2012.
