A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, used for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1.
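The weighted sum and activation function described above can be sketched in a few lines of Python. This is a minimal illustration of a single artificial neuron; the input values, weights, and bias below are made-up illustrative numbers, and the sigmoid is one common choice of activation that keeps the output between 0 and 1:

```python
import math

def sigmoid(x):
    # activation function: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # linear combination: each input is modified by its weight and summed
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    # the activation function controls the amplitude of the output
    return sigmoid(z)

# a positive weight reflects an excitatory connection,
# a negative weight an inhibitory one
inputs  = [0.5, 0.3, 0.9]
weights = [0.8, -0.2, 0.4]
print(neuron_output(inputs, weights, bias=0.1))  # a value between 0 and 1
```

Swapping sigmoid for tanh would give outputs between −1 and 1 instead, matching the other output range mentioned above.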
The idea behind Artificial Neural Networks (ANNs) is to reproduce the working of the human brain, which solves problems by making the right connections, using silicon and wires in place of living neurons and dendrites. The human brain is composed of roughly 86 billion nerve cells (neurons), each connected to thousands of other cells by axons. Dendrites accept inputs from the sensory organs, creating electric impulses that travel through the network; to handle different tasks, one neuron sends messages to another. An artificial neural network imitates this structure with nodes and weighted connections.
Machine Learning in ANNs
There are many machine learning strategies; let's look at them one by one:
a. Supervised Learning
Generally, in this kind of learning a "teacher" is present who already knows the answers. The teacher feeds the network example data along with those known answers, and the network learns to reproduce them.
b. Unsupervised Learning
We need this learning technique when no labeled data set is available: the network must discover structure in the inputs on its own.
c. Reinforcement Learning
This machine learning technique is based on observation of feedback. If the feedback is negative, the network adjusts its weights so that it is able to make a different, better decision the next time.
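As a minimal sketch of the supervised case above, here is a perceptron trained on example data whose answers the "teacher" already knows. The logical AND function is an assumed toy task, and the learning rate and epoch count are arbitrary illustrative values:

```python
def perceptron_predict(w, b, x):
    # hard-threshold output: fire (1) if the weighted sum is positive
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def perceptron_train(examples, epochs=10, lr=0.1):
    """Supervised learning: the 'teacher' supplies inputs with known answers."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            # compare the prediction with the teacher's answer
            error = target - perceptron_predict(w, b, x)
            # negative or positive error nudges the weights toward the answer
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

# the "teacher" already knows the answers: logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

After training, `perceptron_predict(w, b, x)` reproduces the teacher's answers for all four examples.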
How do Neural Networks work
As we have seen Artificial Neural Networks are made up of a number of different layers. Each layer houses artificial neurons called units. These artificial neurons allow the layers to process, categorize, and sort information. Alongside the layers are processing nodes. Each node has its own specific piece of knowledge. This knowledge includes the rules that the system was originally programmed with. It also includes any rules the system has learned for itself. This makeup allows the network to learn and react to both structured and unstructured information and data sets. Almost all artificial neural networks are fully connected throughout these layers. Each connection is weighted. The heavier the weight, or the higher the number, the greater the influence that the unit has on another unit.
The first layer is the input layer. This takes on the information in various forms. This information then progresses through the hidden layers where it is analyzed and processed. By processing data in this way, the network learns more and more about the information. Eventually, the data reaches the end of the network, the output layer. Here the network works out how to respond to the input data. This response is based on the information it has learned throughout the process. Here the processing nodes allow the information to be presented in a useful way.
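The progression from the input layer through the hidden layers to the output layer can be sketched as a forward pass. The layer sizes and random weights here are made-up illustrative values, not any particular trained network:

```python
import math
import random

def layer_forward(inputs, weights, biases):
    # each unit computes a weighted sum of the previous layer's outputs,
    # then passes it through an activation function
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

def forward_pass(x, layers):
    # information progresses layer by layer: input -> hidden -> output
    for weights, biases in layers:
        x = layer_forward(x, weights, biases)
    return x

def random_layer(n_in, n_out):
    # fully connected: every unit is weighted to every unit in the next layer
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
# 3 input units -> 4 hidden units -> 2 output units
network = [random_layer(3, 4), random_layer(4, 2)]
output = forward_pass([0.5, -0.1, 0.8], network)
```

The heavier a weight, the more one unit's output influences the unit it feeds into, which is exactly the weighted-connection behavior described above.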
Types of Artificial Neural Networks — FeedForward ANN
In this network the flow of information is unidirectional: a unit sends information to other units but receives none back. No feedback loops are present. FeedForward ANNs are used in pattern recognition, as they work with fixed inputs and outputs.
Types of Artificial Neural Networks — FeedBack ANN
This particular Artificial Neural Network allows feedback loops. It is used in content-addressable memories.
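A feedback network acting as a content-addressable memory can be sketched with a tiny Hopfield-style network. The two stored patterns below are arbitrary examples, and the network sizes are kept deliberately small:

```python
def train_hopfield(patterns):
    # Hebbian rule: strengthen the connection between units that agree
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # feedback loop: each unit's output is fed back in until the state settles
    for _ in range(steps):
        state = [1 if sum(wij * s for wij, s in zip(row, state)) >= 0 else -1
                 for row in w]
    return state

p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
w = train_hopfield([p1, p2])
noisy = [-1] + p1[1:]          # corrupt the first bit of a stored pattern
print(recall(w, noisy) == p1)  # prints True: the memory completes the pattern
```

Addressing the memory by content, a partial or corrupted pattern, is what distinguishes this from an ordinary lookup by index.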
When it was released back in 2017 for iOS and Android devices, the AI-based face-changing/photo-editing app FaceApp took the internet by storm. With its mind-boggling results, it also created a major kerfuffle on social media platforms like Facebook, Twitter, and Instagram, where everybody, including celebrities, was using the app to transform their selfies and see their "older version." At its inception the app was known for its ethnicity filters, but it later evolved to add many more features.
How FaceApp uses Neural Networks
FaceApp is an image manipulation app that allows its users to alter and transform their images using filters. For this, the app works on image recognition technology, which is key for facial recognition systems, and utilizes deep learning for recognizing the key features, like eyelids, cheekbones, jawline, nose bridge, etc. of the human face to create those transformations.
Like any other machine learning model, this app also works on sample data that is usually gathered from the users’ mobile. Once the sample data, which includes pictures of the users, family members, friends, and everything else, is collected, the system then provides the data to the deep neural networks of the app which helps the system to learn all the nitty-gritty of the human face. Considering deep learning models work on sample data, the more the users share the images as sample data, the more accurate the system is going to predict those transformed images. This predictive ability allows the app to simulate wrinkles, enhance receding hairline, and shrink the skin at a realistic level.
However, this app is entirely different from Instagram filters, as it uses AI to modify the face itself. To create these images, the app uses deep generative convolutional neural networks, a powerful group of networks typically used for high-fidelity natural image synthesis, data augmentation, and improved image compression. These GANs are an algorithmic structure in which two conflicting neural networks contest against each other to produce new information.
Designed to recognize patterns, these GANs are usually unsupervised and learn on their own to imitate images according to the data fed into them. The app takes facial information from one image and applies it to another, and in doing so it works on the huge database it gathers from the millions of photos of its users.
However, when a plain GAN is used for facial feature alteration, it usually loses the real information and produces entirely new information for users. To avoid this problem, the app uses a conditional GAN with an age parameter, which focuses on preserving the information of the face, since the programmer can now control the output of the generator. Conditions such as age and gender can be applied to both the generator and discriminator networks while predicting the outcome. A conditional GAN also allows generating multi-modal models under varied conditions like age and gender. So, for an age-specific filter, the app adds a condition to the GAN and uses data that is labeled only by age; similarly, for gender-specific filters, the app uses gender-labeled data.
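In practice, the "added condition" is commonly implemented by concatenating an encoding of the label onto the inputs of both the generator and the discriminator. A minimal sketch, assuming a one-hot age-group label (the vector sizes and values are invented for illustration):

```python
def one_hot(index, size):
    # encode a condition label, e.g. an age group or gender, as a vector
    v = [0.0] * size
    v[index] = 1.0
    return v

def condition(vector, label_index, n_labels):
    # a conditional GAN supplies the condition to both the generator and
    # the discriminator by appending it to their input vectors
    return vector + one_hot(label_index, n_labels)

noise = [0.2, -0.7, 0.5]                                 # generator's latent input
gen_input = condition(noise, label_index=2, n_labels=4)  # e.g. age group 2
print(gen_input)  # [0.2, -0.7, 0.5, 0.0, 0.0, 1.0, 0.0]
```

Because the discriminator also sees the label, the generator is pushed to produce faces that are plausible for that specific age group, which is what lets the programmer control the output.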
The architecture of a conditional GAN
To make the prediction, the data goes through multiple layers of neurons. Each neuron in the system has features, which are functions of the sample data; when the data is processed, the output is sent to the next neuron with more complex functions, until the last neuron defines the output of the data, namely the features of the face. Once the facial features have been identified, the system uses a generative adversarial network along with TensorFlow to apply the filters of age and gender.
Neural networks and deep learning are extremely complicated subjects. I am still early in the process of learning about them. This blog was written as much to develop my own understanding as it was to help you, the reader.