Here is a simple example of the issue I'm running into:
tt = [1 8; 2 7; 3 6; 4 5; 5 4; 6 3];  % 6 points, all on a line with slope -1
labels = [1 1 1 1 -1 1];               % the fifth point is the lone minority-class point
c = [0 1; 2 0];                        % intended: misclassifying the minority class costs twice as much
mod = fitcsvm(tt, labels, 'Cost', c);  % the call that produces the model discussed below
This spits out mod.Beta = [0 0] with a mod.Bias of 1, so predict() returns 1 for every point x. In other words, the model ignores the minority class entirely, which is a common failure mode with unbalanced classes. However, the cost matrix is supposed to fix exactly that: I impose a cost twice as high for misclassifying the minority class, so the SVM should draw a dividing line with slope 1 (i.e., a Beta proportional to [1, -1]). That line would correctly classify the minority point and misclassify only a single majority point, instead of misclassifying the single minority point as it does now.

I've also tried switching the cost matrix to [0 2; 1 0], in case I had the row/column convention backwards, to no avail. Moreover, the returned model has mod.Cost = [0 1; 1 0]. It's as if my cost matrix input is being completely ignored. What is going on here? Any help is greatly appreciated.
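
For reference, here is how I'm inspecting the fitted model (Beta, Bias, and Cost are the documented properties of the ClassificationSVM object that fitcsvm returns; the commented values are what I actually observe):

mod.Beta             % [0; 0]  -- no separating direction at all
mod.Bias             % 1
predict(mod, [5 4])  % 1, even for the minority-class point itself
mod.Cost             % [0 1; 1 0] -- the default, not the c I passed in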