A: Steps involved in training the ANN (Artificial Neural Network) with Stochastic Gradient Descent

STEP 1: Randomly initialise the weights to small numbers close to 0 (but not 0).

STEP 2: Input the first observation of your dataset into the input layer, one feature per input node.

STEP 3: Forward-Propagation: from left to right, the neurons are activated in such a way that each neuron's impact is limited by the weights. Propagate the activations until you get the predicted result y.

STEP 4: Compare the predicted result to the actual result and measure the generated error.

STEP 5: Back-Propagation: from right to left, the error is back-propagated. Update the weights according to how much each of them is responsible for the error. The learning rate decides by how much we update the weights.

STEP 6: Repeat Steps 2 to 5 and update the weights after each observation (stochastic/online learning), or repeat Steps 2 to 5 but update the weights only after a batch of observations (batch learning).

STEP 7: When the whole training set has passed through the ANN, that makes one epoch. Run more epochs.

B: Steps in Convolutional Neural Networks

STEP 1(a): Convolution
We pass the input image through a 'Filter' (or 'Feature Detector') to get a Feature Map. This step reduces the size of the input image, making it easier to process, while recording where the filter's feature appears. We have multiple Feature Detectors, and this results in multiple Feature Maps.

STEP 1(b): ReLU Layer
We apply ReLU to introduce non-linearity into our network, since images contain a lot of non-linear features.

STEP 2: Max Pooling
In this step, we introduce 'spatial invariance'. We are also discarding information here, which makes it harder for the algorithm to overfit. In 'Max Pooling', we take the maximum value inside each window of the 'filter'. Similar variants exist, such as 'average (mean) pooling' and 'min pooling'. The pooling step is also called 'Downsampling' in the ML literature.
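The convolution, ReLU, and max pooling steps above can be sketched in plain NumPy. This is a minimal illustration, not a production implementation: the 6x6 image, the 2x2 filter, and the 2x2 pool size are all toy assumptions, and (as is conventional in deep learning libraries) "convolution" here is implemented without flipping the kernel, i.e. as cross-correlation.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution (no padding, no kernel flip, as in CNNs):
    slide the filter over the image and take the product-sum at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """STEP 1(b): introduce non-linearity -- negative activations become 0."""
    return np.maximum(0, x)

def max_pool(feature_map, size=2):
    """STEP 2: keep only the largest value in each size-by-size window
    (downsampling / spatial invariance)."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = feature_map[i * size:(i + 1) * size,
                                    j * size:(j + 1) * size].max()
    return out

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
kernel = np.array([[-1., 0.], [0., 1.]])          # toy 2x2 feature detector
fmap = relu(convolve2d(image, kernel))            # 5x5 feature map
pooled = max_pool(fmap)                           # 2x2 pooled map
```

Note how each stage shrinks the representation: the 6x6 input becomes a 5x5 feature map after convolution and a 2x2 map after pooling.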
STEP 3: Flattening
Take the numbers left to right, row by row, from each matrix and put them into a single column vector.

STEP 4: Full Connection
Now we add a fully connected neural network on top of these features.
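The flattening and full-connection steps can be sketched as follows. The shapes (three 2x2 pooled maps, four hidden neurons) and the sigmoid output are illustrative assumptions; the small random initial weights follow STEP 1 of section A.

```python
import numpy as np

rng = np.random.default_rng(0)

# STEP 3: Flattening -- read the pooled feature maps row by row
# and stack the values into one long vector.
pooled_maps = rng.random((3, 2, 2))         # 3 toy 2x2 pooled feature maps
flat = pooled_maps.reshape(-1)              # 12-element vector

# STEP 4: Full Connection -- a small dense network on top of the vector,
# initialised with weights close to 0 (but not 0).
W = rng.normal(scale=0.01, size=(12, 4))    # input -> hidden weights
w_out = rng.normal(scale=0.01, size=4)      # hidden -> output weights

hidden = np.maximum(0, flat @ W)            # ReLU hidden layer
y_pred = 1 / (1 + np.exp(-hidden @ w_out))  # sigmoid output: predicted y
```

From here, training proceeds exactly as in section A: compare `y_pred` to the actual label, back-propagate the error, and update `W` and `w_out`.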