I've been using semanticseg to apply a previously trained DAG network model to segment a test set of images. I've noticed that when I repeatedly apply the same model to exactly the same image (with the same random seed, unless I'm somehow setting it improperly), the output segmentation can differ very slightly. Using [~,~,allScores] = semanticseg(image, network) to inspect the per-pixel, per-class softmax probabilities output by the network, I can see that these values can change each time the network is applied. The difference is tiny (the values I've inspected agree to at least 5 decimal places between runs), but I'm curious where it comes from, since I expected the inference procedure to be entirely deterministic. Could the use of the Parallel Computing Toolbox influence the values in this way?
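
For reference, this is a minimal sketch of the reproducibility check I'm running; "net" stands for the trained DAG network and "I" for one test image, and both names are placeholders:

rng(0);                                 % fix the global random seed
[~, ~, scores1] = semanticseg(I, net);  % per-pixel, per-class softmax scores
rng(0);                                 % reset to the same seed before the second run
[~, ~, scores2] = semanticseg(I, net);
max(abs(scores1(:) - scores2(:)))       % small, but occasionally nonzero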
Thank you for any help you can provide; I hope I haven't overlooked an obvious answer. Fortunately, the difference in probability score is only large enough to shift the final pixel classification very rarely, on the order of 1 in 1,000,000,000 pixels, and then only where the output class probabilities are practically tied to begin with. I've included one such rare example below, in which the maximum-likelihood class for the pixel shifts in response to this variation.
Considering 9 classes, note the first and the last class in each run:
Run 1: [0.499402880668640, 8.857092470861971e-04, 9.613345497427872e-08, 1.140553695933022e-08, 3.467669529300110e-08, 2.951414890262072e-09, 6.419115834432887e-07, 3.072153194807470e-04, 0.499403357505798]
Run 2: [0.499403357505798, 8.857083739712834e-04, 9.613336970915043e-08, 1.140553695933022e-08, 3.467669529300110e-08, 2.951414890262072e-09, 6.419103897314926e-07, 3.072150284424424e-04, 0.499402880668640]
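
To make the tie-breaking concrete, here is a small MATLAB check using the two score vectors above (the variable names run1 and run2 are just my labels for the two runs):

run1 = [0.499402880668640, 8.857092470861971e-04, 9.613345497427872e-08, ...
        1.140553695933022e-08, 3.467669529300110e-08, 2.951414890262072e-09, ...
        6.419115834432887e-07, 3.072153194807470e-04, 0.499403357505798];
run2 = [0.499403357505798, 8.857083739712834e-04, 9.613336970915043e-08, ...
        1.140553695933022e-08, 3.467669529300110e-08, 2.951414890262072e-09, ...
        6.419103897314926e-07, 3.072150284424424e-04, 0.499402880668640];
[~, c1] = max(run1)    % returns 9: the last class wins in the first run
[~, c2] = max(run2)    % returns 1: the first class wins in the second run
max(abs(run1 - run2))  % about 4.8e-07, so the scores agree to ~6 decimal places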