This function implements the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for minimizing continuous multi-objective functions. The implementation is lightweight, computationally cheap, and self-contained (the algorithm requires only one file: NSGAII.m). An 'example.m' script is provided to help users get started, and the code is heavily commented for ease of understanding. This implementation is based on the paper of Deb et al. (2002), "A fast and elitist multiobjective genetic algorithm: NSGA-II".
Víctor Martínez-Cagigal (2019). Non Sorting Genetic Algorithm II (NSGA-II) (https://www.mathworks.com/matlabcentral/fileexchange/65494-non-sorting-genetic-algorithm-ii-nsga-ii), MATLAB Central File Exchange.
Glad to be of help! And very happy to see an active contributor :).
@Alexander Hagg, Thank you for that ultra-useful comment! I was not aware of the Gaussian distribution approach until this moment. I have also tested your suggestion, and not only does it converge faster, it also reaches better results, especially in functions like ZDT1. So I have now implemented your suggestion for the mutation operator and updated the code (citing your comment, of course).
I appreciate it! :)
Thanks for your submission. Using it for some experiments right now. Always great when people upload their code.
I noticed a mistake in your implementation, compared to the original formulation of NSGA-II and most evolutionary algorithms. The mutation operator is supposed to take the old population and perturb it. This is usually done by drawing from a normal distribution and adding the mutation on top of the old population. However, you assign completely random vectors to the population when mutation occurs:
Q = parent; % Children start as copies of the parents
mutatedPop = repmat((var_max-var_min)',N,1).*rand(N,nVar) + repmat(var_min',N,1); % Fresh uniform samples in [var_min, var_max]
mut_idx = rand(N,nVar) < pm; % Genes selected for mutation
Q(mut_idx) = mutatedPop(mut_idx); % Mutated genes are replaced by random values, not perturbed
This works on the Kursawe function used by default in the example, but it takes significantly longer to converge than a correct mutation would. Here is a suggestion for improving your implementation: perturb the original population by adding a normally distributed perturbation on top of it. I have added a mutation strength, the standard deviation of the normal distribution, to allow parameterizing the strength of the mutation, which is often domain-dependent (although 2-10% of the variable range should work fine). (I have not forgotten crossover; I just left it out to keep this short.)
mutStrength = 0.05; % Std. dev. of the Gaussian, as a fraction of each variable's range
mutatedPop = Q + mutStrength*repmat((var_max-var_min)',N,1).*randn(N,nVar); % Perturb the children rather than replacing them
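To make the difference between the two operators concrete, here is a minimal pure-Python sketch (Python rather than MATLAB so the snippet runs stand-alone; all function names are illustrative, and the clamping to the variable bounds is an addition not present in the MATLAB snippets above):

```python
import random

def random_reset_mutation(parent, var_min, var_max, pm, rng):
    # Original operator: with probability pm, a gene is *replaced*
    # by a fresh uniform sample in [var_min, var_max].
    child = []
    for x in parent:
        if rng.random() < pm:
            child.append(var_min + (var_max - var_min) * rng.random())
        else:
            child.append(x)
    return child

def gaussian_mutation(parent, var_min, var_max, mut_strength, rng):
    # Suggested operator: every gene is *perturbed* by Gaussian noise
    # whose std. dev. is a fraction (mut_strength) of the variable range.
    # Clamping keeps mutated genes inside the bounds (an extra safeguard).
    sigma = mut_strength * (var_max - var_min)
    return [min(var_max, max(var_min, x + rng.gauss(0.0, sigma)))
            for x in parent]
```

With a small mutation strength (e.g. 0.05), the Gaussian operator keeps children close to their parents, so improvements found so far are refined rather than discarded; the random-reset operator effectively restarts each mutated gene, which behaves like random search.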
I've compared the performance of this mutation operator against your original implementation on the Kursawe function, running both for 50 iterations and measuring the average distance to the ground-truth Pareto front. Here is the figure: https://drive.google.com/open?id=1nxExexi3yqWfel-zje8btFnN1_OKJ5iG
In case the link ever goes dead: your implementation takes about 40 iterations to converge, mine about 15. This difference will be much larger in high-dimensional parameter/genetic spaces, as your "random search" will struggle to find improvements in those vast search spaces.
Why does the algorithm not approach the optimal Pareto front of solutions?
Works well and is easy to understand and modify.
Why should it reach better results? They are complementary methods, and they also have different parameters to adjust.
Why does MOPSO generate better results than NSGA-II? On the ZDT functions, NSGA-II does not approach the optimal Pareto front of solutions.
The old mutation operator has been replaced by a range-scaled Gaussian perturbation approach, as suggested by Alexander Hagg, which converges faster.
True Pareto fronts for the examples have now been uploaded.