For others reading this: it appears that C (a constant) was originally the binary image segmentation threshold.
I have modified my version of the code as follows:
This now uses the adaptive (mean) filter to highlight image features (i.e. mIM - IM) and then Otsu's threshold to segment the result and generate a binary image.
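In case it helps others, here is a minimal numpy sketch of that pipeline as I understand it. The window size, the replicate border padding, and the 256-bin Otsu implementation are my own assumptions, not the original code:

```python
import numpy as np

def local_mean(im, w):
    """Mean over a w x w window (w odd), with replicate padding at the borders."""
    r = w // 2
    p = np.pad(im.astype(float), r, mode='edge')
    out = np.zeros(im.shape, dtype=float)
    for dy in range(w):                 # sum the w*w shifted copies of the image
        for dx in range(w):
            out += p[dy:dy + im.shape[0], dx:dx + im.shape[1]]
    return out / (w * w)

def otsu_threshold(x):
    """Otsu's method: the cut that maximises between-class variance."""
    hist, edges = np.histogram(x.ravel(), bins=256)
    mids = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                # pixels at or below each candidate cut
    w1 = w0[-1] - w0                    # pixels above it
    m = np.cumsum(hist * mids)
    mu0 = m / np.maximum(w0, 1)         # class means (guarding empty classes)
    mu1 = (m[-1] - m) / np.maximum(w1, 1)
    return mids[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]

# synthetic example: a dark 10x10 patch on a bright, uneven background
im = np.tile(np.linspace(100.0, 200.0, 64), (64, 1))
im[20:30, 20:30] -= 80.0

diff = local_mean(im, 15) - im       # mIM - IM: positive where a pixel is
bw = diff > otsu_threshold(diff)     # darker than its surroundings
```

The gradient background ends up near zero in `diff`, so Otsu cleanly separates the dark patch from the uneven illumination.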
This works quite well for my application, but can anyone explain exactly what the algorithm is doing?
I understand that the algorithm generates a local-mean-filtered image by iterating over each pixel with the user-specified window size, but what is this line doing:
This subtracts the original image and a constant, C, from the local-mean-filtered image. What is C, and why do this?
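My reading of that line (an assumption on my part, going only from the snippet): the binary output is effectively `bw = (mIM - IM - C) > 0`, so C acts as a sensitivity margin — a pixel is marked foreground only when it is darker than its local mean by more than C, which stops noise in flat regions from flipping pixels. A small numpy sketch of that effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# flat grey image with mild noise, plus one genuinely dark 4x4 spot
im = np.full((32, 32), 128.0) + rng.normal(0.0, 3.0, (32, 32))
im[10:14, 10:14] -= 60.0

def local_mean(im, w):
    """Mean over a w x w window, replicate-padded at the borders."""
    r = w // 2
    p = np.pad(im, r, mode='edge')
    out = np.zeros(im.shape)
    for dy in range(w):
        for dx in range(w):
            out += p[dy:dy + im.shape[0], dx:dx + im.shape[1]]
    return out / (w * w)

diff = local_mean(im, 11) - im   # mIM - IM

bw_c0  = diff > 0.0    # C = 0: noise alone flips roughly half the flat pixels
bw_c10 = diff > 10.0   # C = 10: only pixels well below their local mean remain
```

With C = 0 the flat background binarises to salt-and-pepper noise; with C comfortably above the noise level only the genuinely dark spot survives.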
I'd suggest sending your questions to my email rather than commenting here.
Regarding your question: you should use more than one histogram element. This is called a "feature vector". In some cases you can reduce the feature vector's length via PCA or other methods. In the case of LBP, it has been shown that some histogram elements carry more information than others, such as the rotation-invariant patterns. I suggest reading the relevant publications.
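To make the suggestion concrete, here is a small numpy sketch (my own illustration, not from any particular paper): the basic 8-neighbour LBP codes of an image are binned into a 256-element histogram — the feature vector — and a set of such vectors is then reduced with PCA via the SVD:

```python
import numpy as np

def lbp_codes(im):
    """Basic 8-neighbour LBP: one 0-255 code per interior pixel."""
    h, w = im.shape
    c = im[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = im[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour plane
        code |= (n >= c).astype(np.int32) << bit      # one bit per neighbour
    return code

def lbp_histogram(im):
    """256-bin normalised histogram of LBP codes: the feature vector."""
    hist = np.bincount(lbp_codes(im).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def pca_reduce(X, k):
    """Project the rows of X (one feature vector each) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# six random "texture" patches -> 6 x 256 feature matrix -> 6 x 2 after PCA
rng = np.random.default_rng(1)
X = np.stack([lbp_histogram(rng.random((24, 24))) for _ in range(6)])
Z = pca_reduce(X, 2)
```

In practice you would compare the full (or PCA-reduced) histograms between images with a histogram distance, rather than comparing single bins.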