A BASIC INTRODUCTION TO NEURAL NETWORKS

WHAT IS A NEURAL NETWORK?

                   The simplest definition of a neural network is "a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."

BASICS OF NEURAL NETWORKS

                   Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates with one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output.

WHY STUDY NEURAL NETWORKS?

                             This question is pertinent here because, depending on one's motive, the study of connectionist networks can take place from differing perspectives. It also helps to know what questions we are trying to answer, in order to avoid the kind of religious wars that sometimes break out when the words "connectionism" or "neural network" are mentioned.


TYPES OF NEURAL NETWORKS BASED ON ARCHITECTURE 
                           

The different types of neural networks are based on their architecture.

1. SINGLE-LAYER PERCEPTRON

       A neuron receives inputs from multiple neurons and outputs a value based on an activation function. The perceptron is one of the simplest structures for the study of neural networks. The perceptron models a neuron's behaviour in the following way: it receives several input values, computes a weighted sum of them, and applies the activation function to produce its output.
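
The behaviour described above can be sketched in a few lines of Python. The weights, bias, and step activation below are illustrative choices, not part of any particular library:

```python
# A minimal perceptron sketch: weighted sum of inputs passed through a
# step activation function. Weights and bias here are illustrative.
def perceptron(inputs, weights, bias):
    # weighted sum of inputs plus bias
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # step activation: fire (1) if the sum is non-negative, else 0
    return 1 if total >= 0 else 0

# Example: with hand-picked weights this perceptron computes logical AND.
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0
```

With these weights the unit only fires when both inputs are 1, which is why the perceptron is often introduced via logic gates.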

2. RADIAL BASIS NETWORKS

      Radial basis function (RBF) networks are a commonly used type of artificial neural network for function approximation problems. An RBF network is a type of feedforward neural network composed of three layers, namely the input layer, the hidden layer and the output layer.
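
A sketch of the three-layer idea: each hidden unit responds with a Gaussian of the distance between the input and its centre, and the output layer takes a weighted sum. The centres, width, and output weights below are illustrative:

```python
import math

# RBF forward pass sketch for a 1-D input: Gaussian hidden units,
# linear output layer. All parameters here are illustrative.
def rbf_forward(x, centres, width, out_weights):
    # hidden activations: closer to a centre -> closer to 1
    hidden = [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centres]
    # output: weighted sum of the hidden activations
    return sum(h * w for h, w in zip(hidden, out_weights))

# Two hidden units centred at 0 and 1.
y = rbf_forward(0.5, centres=[0.0, 1.0], width=0.5, out_weights=[1.0, -1.0])
```

Because the hidden responses fall off with distance, each unit specialises in a local region of the input space, which is what makes RBF networks natural function approximators.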
 
3. MULTILAYER PERCEPTRON
       
         A multilayer feedforward neural network is an interconnection of perceptrons in which data and calculations flow in a single direction, from the input data to the outputs. The number of layers in a neural network is the number of layers of perceptrons.
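
The one-way flow from inputs to outputs can be sketched as a loop over layers. The layer sizes, weights, and sigmoid activation below are illustrative assumptions:

```python
import math

# Multilayer feedforward sketch: each layer multiplies its input by a
# weight matrix, adds biases, and applies a sigmoid. Data flows one way.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # one output per row of the weight matrix
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def mlp(x, layers):
    # pass the activations forward through each (weights, biases) pair
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# 2 inputs -> 2 hidden units -> 1 output, with illustrative weights
hidden = ([[0.5, -0.5], [0.3, 0.8]], [0.0, -0.1])
output = ([[1.0, -1.0]], [0.2])
y = mlp([1.0, 0.5], [hidden, output])
```

Counting the `(weights, biases)` pairs in the list gives the number of layers of perceptrons, matching the definition above.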

4. RECURRENT NEURAL NETWORK
           
         A recurrent neural network (RNN) is a type of artificial neural network commonly used in speech recognition and natural language processing (NLP). RNNs are designed to recognize the sequential characteristics of data and use these patterns to predict the next likely scenario.
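
The key idea, in a sketch: the hidden state is updated from both the current input and the previous hidden state, so earlier items in the sequence influence later outputs. The scalar weights below are illustrative:

```python
import math

# One recurrent step: new state depends on the current input x AND the
# previous state h_prev. Weights are illustrative scalars.
def rnn_step(x, h_prev, w_x=0.5, w_h=0.9, b=0.0):
    return math.tanh(w_x * x + w_h * h_prev + b)

# Process a sequence one item at a time, carrying the state forward.
h = 0.0
for x in [1.0, 0.5, -1.0]:
    h = rnn_step(x, h)
```

The carried-forward state `h` is what lets the network remember the sequence seen so far.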

5. LONG SHORT-TERM MEMORY

   Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. LSTMs were developed to deal with the vanishing gradient problem that can occur when training traditional RNNs.
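
A scalar sketch of the gating idea: forget, input, and output gates decide what the cell state keeps, adds, and exposes at each step. All weights below are illustrative; a real LSTM uses learned weight matrices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One LSTM cell step with illustrative scalar weights.
def lstm_step(x, h_prev, c_prev):
    f = sigmoid(0.4 * x + 0.6 * h_prev)      # forget gate: keep old memory?
    i = sigmoid(0.7 * x + 0.2 * h_prev)      # input gate: accept new memory?
    o = sigmoid(0.5 * x + 0.5 * h_prev)      # output gate: expose memory?
    c_tilde = math.tanh(0.5 * x + 0.5 * h_prev)  # candidate cell state
    c = f * c_prev + i * c_tilde             # blend old and new memory
    h = o * math.tanh(c)                     # exposed hidden state
    return h, c

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c)
```

Because the cell state `c` is updated additively through the forget gate rather than squashed at every step, gradients survive over longer sequences than in a plain RNN.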
6. HOPFIELD NEURAL NETWORK

           A Hopfield network is a special kind of neural network whose response is different from that of other neural networks. Its output is calculated by a converging iterative process. It has just one layer of neurons, whose number corresponds to the size of the input and output, which must be the same.
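
The converging iterative process can be sketched as follows, with a single stored pattern learned by the Hebbian rule (the pattern and noisy probe are illustrative):

```python
# Hopfield sketch: one layer of +/-1 units with symmetric weights.
def train(pattern):
    n = len(pattern)
    # Hebbian outer product of the stored pattern, zero diagonal
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    # iterate the threshold update until the state settles
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

stored = [1, -1, 1, -1]
weights = train(stored)
noisy = [1, 1, 1, -1]          # one flipped unit
print(recall(weights, noisy))  # recovers [1, -1, 1, -1]
```

Note that the input and output have the same size as the single layer of neurons, exactly as described above, and the repeated update converges to the stored pattern.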
 
7. BOLTZMANN MACHINE

           It is a network of symmetrically connected, neuron-like units that make stochastic decisions about whether to be on or off. Boltzmann machines have a simple learning algorithm that allows them to discover interesting features in datasets composed of binary vectors.
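
The stochastic on/off decision can be sketched as follows: a unit turns on with a probability given by the logistic function of its total input, rather than deterministically. The weights and states below are illustrative:

```python
import math
import random

# Probability that a Boltzmann-machine unit turns on, given the states
# of the units it is connected to. Weights and bias are illustrative.
def unit_on_probability(states, weights, bias):
    total_input = sum(s * w for s, w in zip(states, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total_input))

def sample_unit(states, weights, bias, rng=random.random):
    # stochastic decision: on (1) with the probability above, else off (0)
    return 1 if rng() < unit_on_probability(states, weights, bias) else 0

p = unit_on_probability([1, 0, 1], [0.5, -0.3, 0.8], bias=-0.2)
state = sample_unit([1, 0, 1], [0.5, -0.3, 0.8], bias=-0.2)
```

The randomness is the point: it lets the network explore many configurations instead of getting stuck in one, which underlies the learning algorithm mentioned above.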
     

                                                      *****
