Part 2 of 'All About Neural Networks'


There is no doubt that the field of AI has evolved rapidly in the past decade or so, and its applications are now part of our daily lives. Handling such a wide range of applications requires different mechanisms. Fortunately, there are different types of neural networks, each with a specific structure or way of processing information that makes it well suited to a certain class of problems.


Types of Neural Networks


There are many types of neural networks, with new variants still under development. These types are differentiated from each other by factors such as structure, how data flows through the network, the depth of the network and its activation functions, the density of neurons used, and so on.


Some Types of Neural Networks

1. Feed-Forward Neural Networks


They are considered the simplest form of neural network because input data travels in one direction only, passing through the artificial neural nodes and exiting through the output nodes. They can be further classified as single-layer or multi-layer feed-forward neural networks.

The number of layers depends on the complexity of the function being modeled; more layers allow more complexity. A feed-forward neural network is unidirectional, always processing data in a forward direction (hence the name), with no backward propagation of signals.

Weights are static during processing: each neuron multiplies its inputs by the weights of the connections between neurons, and the sum is fed into an activation function. With a simple threshold activation, the neuron is activated if its computed value is above a certain threshold (usually 0), and it then produces 1 as an output.

In contrast, if the value is below the threshold, the neuron is not activated and produces an output of -1. These neural networks are fairly simple to maintain and are equipped to deal with data that contains a lot of noise.
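
To make the threshold rule concrete, here is a minimal sketch in Python with NumPy; the inputs, weights, and threshold value are made up purely for illustration:

    import numpy as np

    def threshold_neuron(inputs, weights, threshold=0.0):
        """Fire (+1) if the weighted sum of inputs exceeds the threshold, else -1."""
        weighted_sum = np.dot(inputs, weights)  # each input times its connection weight
        return 1 if weighted_sum > threshold else -1

    # Hypothetical example: two inputs with hand-picked weights.
    x = np.array([0.5, -0.2])
    w = np.array([0.8, 0.4])
    print(threshold_neuron(x, w))  # weighted sum = 0.32 > 0, so the neuron outputs 1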

Applications of feed-forward neural networks are found in computer vision and speech recognition, where the target classes are complicated to separate.


2. Recurrent Neural Network (RNN) – Long Short-Term Memory


The notion behind this neural network is that it saves the output of a layer and feeds it back to the input, helping to predict the layer's outcome at the next step.

The first layer is typically a feed-forward neural network, followed by a recurrent layer in which information from the previous step is retained by a memory function. Because the neurons fire sequentially, each neuron acts like a memory cell while performing its computations.

In this process, the neural network performs forward propagation while remembering the information it will need later. If the prediction is wrong, the network uses the learning rate to apply small changes during back-propagation, so that it gradually works towards making the right prediction.
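
A minimal sketch of this feedback loop, using a plain (vanilla) recurrent step in Python with NumPy; the random weights and the dimensions chosen here are stand-ins for learned parameters, not a real trained model:

    import numpy as np

    def rnn_step(x_t, h_prev, W_x, W_h, b):
        """One recurrent step: the previous hidden state h_prev is fed back
        alongside the current input x_t to produce the new hidden state."""
        return np.tanh(W_x @ x_t + W_h @ h_prev + b)

    # Hypothetical dimensions: 3 input features, 2 hidden units.
    rng = np.random.default_rng(0)
    W_x = rng.normal(size=(2, 3))
    W_h = rng.normal(size=(2, 2))
    b = np.zeros(2)

    h = np.zeros(2)                      # the memory starts out empty
    for x_t in rng.normal(size=(5, 3)):  # a sequence of 5 time steps
        h = rnn_step(x_t, h, W_x, W_h, b)  # h carries information forward
    print(h)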


3. Radial Basis Function Neural Networks (RBF)


A Radial Basis Function network comprises an input vector, followed by a layer of RBF neurons, and an output layer with one node per category. Classification is performed by measuring the input's similarity to data points from the training set: every RBF neuron stores a prototype, which is one of the examples from the training set.

When a new input vector needs to be classified, every neuron computes the Euclidean distance between the input and its prototype. For instance, if we have two classes, A and B, and the new input is closer to the class A prototypes than to class B's, it is labeled as class A.

Each RBF neuron compares the input vector to its prototype and outputs a similarity value between 0 and 1. If the input exactly matches the prototype, the neuron's output is 1; as the distance between the input and the prototype grows, the response falls off exponentially towards 0.

Plotted against the distance, the neuron's response traces a typical bell curve. The output layer consists of a set of neurons (one per category).
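
Here is a minimal sketch of an RBF neuron's response and a nearest-prototype classification in Python with NumPy, assuming a Gaussian kernel; the prototypes and the width parameter beta are made up for illustration:

    import numpy as np

    def rbf_activation(x, prototype, beta=1.0):
        """Similarity between 0 and 1: exactly 1 when x equals the prototype,
        decaying exponentially as the Euclidean distance grows."""
        dist = np.linalg.norm(x - prototype)
        return np.exp(-beta * dist ** 2)

    # Hypothetical 2-D prototypes for classes A and B.
    proto_a = np.array([0.0, 0.0])
    proto_b = np.array([3.0, 3.0])
    x = np.array([0.5, 0.2])  # new input to classify

    scores = {"A": rbf_activation(x, proto_a), "B": rbf_activation(x, proto_b)}
    print(max(scores, key=scores.get))  # "A": the input is nearer the class A prototype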

Radial Basis Function networks have been applied in power restoration systems. Power systems have grown in size and complexity, and both factors increase the risk of significant power outages. After a blackout, power should be restored as quickly as possible, and this is where RBF networks come into play.


4. Modular Neural Network

A Modular Neural Network (MNN) is a collection of independent neural networks working together towards a combined output. Each network receives a set of inputs that is unique to it and performs its own sub-task. The networks do not connect with or signal each other while accomplishing their tasks.

The main advantage of a modular neural network is that it breaks a large computational procedure into smaller components, reducing complexity. This breakdown decreases the number of connections and eliminates interaction between the networks, which increases overall computational speed. The total processing time, however, still depends on the number of neurons and their involvement in computing the results.
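
A minimal sketch of the modular idea in Python with NumPy; the module sizes, random weights, and the averaging combiner here are illustrative assumptions, not a fixed recipe:

    import numpy as np

    def subnet(x, W):
        """One independent module: a tiny feed-forward network on its own inputs."""
        return np.tanh(W @ x)

    rng = np.random.default_rng(1)
    # Hypothetical setup: three modules, each with its own inputs and weights.
    inputs = [rng.normal(size=4) for _ in range(3)]
    weights = [rng.normal(size=(2, 4)) for _ in range(3)]

    # The modules never interact; their outputs are simply combined at the end.
    outputs = [subnet(x, W) for x, W in zip(inputs, weights)]
    combined = np.mean(outputs, axis=0)  # one simple way to merge sub-task results
    print(combined)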

Examples of modular neural network applications include stock market prediction systems, adaptive MNNs for character recognition, and compression of high-level input data.



- Written by Eyad Aoun (EMN Community Member From Egypt)

- Edited by Mridul Goyal (EMN Community Member From New Delhi, India)