This discussion has absolutely nothing to do with bias weights.
Theoretically, the training, validation, and test data are all assumed to be random samples drawn from the same probability distribution.
Both the training and validation data are used to design the net, with the goal of minimizing a performance function (such as mse for fitnet or crossentropy for patternnet) on ANY data, seen or unseen, drawn from that same distribution.
Because both sets influence the design, performance on either of them cannot be used as an unbiased (i.e., honest) estimate of performance on unseen data.
On the other hand, the test data is in no way involved in the design of the net. Therefore it is a valid representative of "unseen" data, and its performance is considered an unbiased estimate of net performance on unseen data.
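To see the bias concretely, here is a hedged sketch (in Python rather than MATLAB, and with made-up candidate counts, noise levels, and sample sizes): when several candidate nets all have the same true error and the validation set is used to pick the "best" one, the winning validation score is optimistically biased low, while a held-out test score of the chosen net is not.

```python
import random

random.seed(0)

def noisy_mse(true_mse, n_samples):
    # Mean of n_samples noisy error observations centered on the true error.
    return sum(random.gauss(true_mse, 0.5) for _ in range(n_samples)) / n_samples

TRIALS, CANDIDATES, TRUE_ERR = 2000, 10, 1.0
val_gap = test_gap = 0.0
for _ in range(TRIALS):
    # Validation data is used to select the best of several candidate nets...
    val_scores = [noisy_mse(TRUE_ERR, 30) for _ in range(CANDIDATES)]
    best_val = min(val_scores)
    # ...but the test data plays no part in that selection.
    test_score = noisy_mse(TRUE_ERR, 30)
    val_gap += best_val - TRUE_ERR
    test_gap += test_score - TRUE_ERR

val_bias = val_gap / TRIALS    # comes out clearly negative (optimistic)
test_bias = test_gap / TRIALS  # hovers near zero (honest)
print(f"validation bias: {val_bias:+.3f}  test bias: {test_bias:+.3f}")
```

The selected net's validation score systematically underestimates its true error, which is exactly why only the untouched test set gives an honest estimate.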
Hope this helps.
PS Not sure why I never saw this post before.