Tuesday, November 20, 2018

Artificial Neural Networks - Introduction



Author: Marek Libra

Posted under Creative Commons from Knol

The artificial neural network (NN hereafter) is a technique from the field of artificial intelligence. It has been successfully applied in a wide range of problem domains such as finance, engineering, medicine, geology, physics and control.

Neural networks are especially useful for solving problems of prediction, classification and control. They are also a good alternative to classical statistical approaches such as regression analysis.

Artificial neural network techniques were developed as a model of biological neural networks, which form the basis of the nervous system of living organisms. This inspiration is well known and is mentioned in most neural network publications.

An NN is built from a large number of simple processing units called artificial neurons (simply neurons hereafter).

The interface of an artificial neuron consists of n numeric inputs and one numeric output. Some neuron models add one special extra input called the bias. Each input is weighted by a numeric weight. The neuron can perform two operations: compute and adapt.


The compute operation transforms inputs into the output. It takes the numeric inputs and computes their weighted sum, then applies a so-called activation function (a mathematical transformation) to this sum. The result of the activation function becomes the value of the output.
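
As a concrete sketch of the compute operation, here is a minimal Python example. The sigmoid activation and the helper names (sigmoid, neuron_compute) are illustrative choices, not prescribed by the text:

    import math

    def sigmoid(x):
        # One common activation function: squashes any real number into (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    def neuron_compute(inputs, weights, bias=0.0):
        # Weighted sum of the numeric inputs, optionally shifted by the bias.
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        # The activation function transforms the sum into the neuron's output.
        return sigmoid(s)

    # Example: a neuron with three inputs.
    print(neuron_compute([1.0, 0.5, -1.0], [0.2, 0.8, 0.1], bias=0.5))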

The adapt operation, based on pairs of inputs and expected outputs supplied by the user, tunes the weights of the NN so that the computed output better approximates the expected output for the given input.
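
A minimal sketch of one such adaptation step, assuming the simple delta (perceptron-style) rule with a learning rate; the article only says that weights are tuned, so the concrete rule below is just one possible choice (it reuses neuron_compute from the sketch above):

    def adapt_step(inputs, weights, expected, learning_rate=0.1, bias=0.0):
        # Compute the current output using the compute operation sketched above.
        output = neuron_compute(inputs, weights, bias)
        # Error between the expected output and the computed output.
        error = expected - output
        # Move each weight slightly in the direction that reduces the error.
        return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

    # Repeated application over many (input, expected output) pairs tunes the weights.
    weights = [0.2, 0.8, 0.1]
    weights = adapt_step([1.0, 0.5, -1.0], weights, expected=0.0)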

The neurons in an NN are ordered and indexed by natural numbers according to this order.

Many models of NNs are known. These models differ from one another in:

    • the domain of numeric input, output and weights (real, integer or finite set like {0,1}),

    • the presence of bias (yes or no),

    • the definition of the activation function (sigmoid, hyperbolic tangent, discrete threshold, etc.; see the sketch after this list),

    • the topology of interconnected neurons (feed-forward or recurrent),

    • the ability to change the number of neurons or the network topology during the lifetime of
      the network,

    • the algorithm of the computation flow through the network over neurons,

    • the simulation time (discrete or continuous) or

    • the adaptation algorithm (none, back propagation, perceptron rule, genetic, simulated annealing etc.).
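
To make the activation-function choices listed above concrete, here is a small illustrative comparison; the formulas are standard, but the particular selection shown is only an example:

    import math

    def sigmoid(x):
        # Smooth, output in (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    def hyperbolic_tangent(x):
        # Smooth, output in (-1, 1).
        return math.tanh(x)

    def discrete_threshold(x):
        # Output restricted to the finite set {0, 1}.
        return 1 if x >= 0 else 0

    for f in (sigmoid, hyperbolic_tangent, discrete_threshold):
        print(f.__name__, f(0.5))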

A good taxonomy of NN models can be found, e.g., in [1].

More detailed general descriptions, which are formal yet quite readable, can be found in [2].

References

  • [1] J. Šíma and P. Orponen. General-purpose computation with neural networks: A survey of complexity-theoretic results. Neural Computation, 2003.
  • [2] David M. Skapura. Building Neural Networks. Addison-Wesley, 1995.

Source Knol: /knol.google.com/   marek-libra/artificial-neural-networks/5rqq7q8930m0/12#
Knol Nrao - 5193


Further Reading

Simple mathematical steps in Neural Network problem solving



More detailed reading

Artificial Neural Networks: Mathematics of Backpropagation (Part 4)
October 28, 2014 in ml primers, neural networks
http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4


But what is a neural network? | Chapter 1, Deep learning
(video, 5 Oct 2017)

Further Reading

Knols

  • Feed-Forward Neural Networks
  • Adaptation of Feed-Forward Neural Networks
  • The Perceptron Rule
  • The Back Propagation


Updated 21 November 2018,  8 June 2017, 8 May 2017, 28 April 2012.
