Some CNN architectures are working, others are not

So I'm using a dataset with 400 images at the moment (looking to add more in the near future), but meanwhile I was trying to find which CNN architecture is the best among the pretrained networks from the Deep Learning Toolbox.

So I ran some tests to compare them with the same parameters, and for example, after 10 epochs I have over 95% validation accuracy for DenseNet, Inception-v3, or Xception, but under 20% for DarkNet, VGG, or GoogLeNet. Why is there so much of a difference? Is it because my dataset doesn't have enough images? Not enough epochs?

Aditya Patil on 11 May 2021

As some of the models are working but others are not, the issue is likely with the training options used during transfer learning. You might also want to increase the number of samples, especially if the required classes are not well represented in the ImageNet dataset. These models were trained on that dataset, so if the required classes are unrepresented or underrepresented, that could be an issue.

A few things to try out:

- Try to overfit the model on a few samples. If it can, the model is properly set up for training.
- If only the validation accuracy is low, the model is overfitting on the training dataset. Decrease the number of epochs, or use regularization / dropout layers.
- If you notice that the loss was still decreasing at the last epoch, increase the number of epochs.
- Check the accuracy of the pretrained model. If the accuracy is lower after transfer learning, you can try setting the learning rate to a much smaller value.

If the issue persists, feel free to create a bug report.
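
The advice above can be sketched as a minimal transfer-learning setup in MATLAB. This is only a sketch, not a definitive recipe: it assumes a hypothetical `imageDatastore` named `imds` labeled by folder names, and it uses GoogLeNet's actual final layer names (`'loss3-classifier'` and `'output'`); the small `InitialLearnRate` reflects the last bullet.

```matlab
% Transfer-learning sketch (assumes Deep Learning Toolbox plus the
% GoogLeNet support package; imds is a hypothetical labeled imageDatastore).
net = googlenet;
inputSize = net.Layers(1).InputSize;

% Split data and resize images to the network's expected input size.
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, 'randomized');
augTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain);
augVal   = augmentedImageDatastore(inputSize(1:2), imdsVal);

numClasses = numel(categories(imdsTrain.Labels));

% Replace the final layers so the network predicts the new classes.
lgraph = layerGraph(net);
newFC = fullyConnectedLayer(numClasses, 'Name', 'new_fc', ...
    'WeightLearnRateFactor', 10, 'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'loss3-classifier', newFC);
lgraph = replaceLayer(lgraph, 'output', classificationLayer('Name', 'new_output'));

% Much smaller learning rate than the default 0.01, per the last bullet.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', 32, ...
    'ValidationData', augVal, ...
    'Verbose', false, ...
    'Plots', 'training-progress');

trainedNet = trainNetwork(augTrain, lgraph, options);
```

To run the overfitting sanity check from the first bullet, point `imds` at a handful of images per class and confirm training accuracy reaches ~100%; if it doesn't, the setup itself is likely at fault rather than the dataset size.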