These results are entirely consistent with how classification trees work. Simply rescaling each of the inputs by multiplying it by a different coefficient should have no effect on the tree.
For exactly why this is, I'd recommend Breiman's book (which is referenced in the doc), but the short answer is that trees sort each predictor's observations and try a candidate split within each of the gaps. The tree will then select the split that gives the "best" splitting criterion (and that's an entirely different discussion). Scaling the predictor only serves to scale this process, but it doesn't fundamentally change the results.
As an example: suppose we have a simple set of observations where the predictor has been measured at 1, 2, 4, and 10. The tree will try splits at 1.5, 3, and 7. Let's say that the "best" split is at 7.
Now we go ahead and rescale this input-- multiply it by 100 or some other coefficient. Now, the tree tries splits at 150, 300, and 700, and it will still select the split at 700. Rescaling doesn't change anything.
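You can check this invariance yourself with a quick sketch (the class labels and the 'splitmin' setting below are made up for illustration; CLASSREGTREE is in Statistics Toolbox):

```matlab
% Four observations of a single predictor, with made-up class labels:
x = [1; 2; 4; 10];
y = {'A'; 'A'; 'B'; 'B'};
t1 = classregtree(x,     y, 'splitmin', 2);  % tree on the original scale
t2 = classregtree(100*x, y, 'splitmin', 2);  % tree on the rescaled input
% t2's cut points are exactly 100 times t1's, and the two trees
% classify every observation identically.
```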
Now, if we were to cleverly create _new_ predictors out of a well-chosen combination (linear or otherwise) of our existing predictors, then that certainly would change the tree's performance. For instance, make a 6th predictor in your X from Altman's coefficients times your original X-- then you might get some interesting results.
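A hypothetical sketch of what I mean (the weights below are the classic Altman Z-score coefficients, but treat them as placeholders for whatever combination you choose):

```matlab
% Assume X is an n-by-5 matrix of the original predictors.
coeffs = [1.2 1.4 3.3 0.6 0.999];   % Altman-style weights (illustrative)
X(:,6) = X(:,1:5) * coeffs';        % new composite predictor
```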
I'm afraid that I don't understand what histograms will do for you in this case. One typically matches the "actual" outputs to the model's "predicted" outputs and compares the difference between them in some way to assess the model's performance. Confusion matrices, ROC curves, and other techniques are commonly used for this. Histograms are not, because they can hide a lot of information. Consider this simple case of classifying data that can take values of either "1" or "2":
% The "actual" data:
Y = [1 2 1 2 2 1];
% The "predicted" data, arrived at through some model:
Y_Pred = [2 1 2 1 1 2];
Most would argue that this "model" is terrible: it has a 100% misclassification rate! In spite of this, hist(Y) and hist(Y_Pred) give the exact same plot.
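A confusion matrix makes the failure obvious where the histograms hide it (CONFUSIONMAT is in Statistics Toolbox):

```matlab
Y      = [1 2 1 2 2 1];
Y_Pred = [2 1 2 1 1 2];
C = confusionmat(Y, Y_Pred)
% C =
%      0     3
%      3     0
% Every "1" was predicted as "2" and vice versa: nothing on the diagonal.
```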
That visualization concern aside, I think there is some confusion about how Bagging works. In this example, the "actual" ratings are Y, and every observation is used for training in some way. So, Y also represents the training ratings. One of the strengths of ensemble methods like Bagging is that it's not necessary to manually split the data into training and validation sets: you can have your proverbial cake and eat it, too. The out-of-bag errors in this case, though, have a special significance that is too much to explain here-- you should check the doc or (better yet) Breiman's original article for more details on that. Suffice it to say, you should not use the OOB errors in the way that you seem to be using them here. If you're looking for the ensemble's predicted ratings, they are simply found by
Y_Pred = predict(b,X);
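As a sketch of the distinction (assuming b was grown along the lines of b = TreeBagger(nTrees, X, Y)):

```matlab
% Predicted ratings for the training observations:
Y_Pred = predict(b, X);   % cell array of predicted class labels
% The out-of-bag error serves a different purpose: it estimates the
% generalization error as a function of the number of grown trees.
% (Requires the ensemble to have been grown with 'oobpred' set to 'on'.)
oobErr = oobError(b);
```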
Philip: On the top of this page we list the required products to run this code. R2010b of MATLAB should be fine as long as you have all of the needed toolboxes as well.
Within this package is a README file that provides step-by-step instructions on how to define the data sources (which is what seems to be going wrong in your error message above), how to get a copy of the MCR, and when you might need to recompile the code. I'm always looking for ways to improve those instructions, so let me know where you find them lacking.
Kostas: You're basically correct. PRBYZERO quotes clean prices; to account for the partial coupon period, you'd need to use a dirty price convention or (equivalently) calculate the accrued interest. The ACCRFRAC function is useful in this case, as is the (more robust) BONDBYZERO function within Financial Derivatives Toolbox.
You're also correct that our choice of clean or dirty prices doesn't really affect the result as long as we're consistent: if present, the accrued interest from the original bond prices and the simulated bond prices just ends up cancelling out.
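To make that concrete, here's a hedged sketch of the clean-to-dirty conversion (all of the bond terms and the clean price below are placeholders):

```matlab
% Placeholder bond terms:
Settle     = datenum('15-Jun-2011');
Maturity   = datenum('15-Jan-2015');
CouponRate = 0.05;    % 5% annual coupon
Period     = 2;       % semiannual payments
Face       = 100;
CleanPrice = 98.50;   % e.g., a PRBYZERO output

frac = accrfrac(Settle, Maturity);          % fraction of the coupon period elapsed
AI   = frac * (CouponRate/Period) * Face;   % accrued interest
DirtyPrice = CleanPrice + AI;
% AI is the same for the original and simulated prices at a given
% settlement date, so it cancels when you take the difference.
```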
I wanted to say that the problem in my post from December 8th is solved. The problem was indeed the 32-bit version of Microsoft Access, which is not supported by the 64-bit version of MATLAB R2010b. A friend who had the 64-bit version of Access installed could easily define the ODBC data source and run the scripts "Credit_Rating" and "TransitionProbabilities".
Unfortunately, we discovered another problem: When the script "Credit_VaR" is run, the following error occurs:
??? Error using ==> dtstr2dtnummx
Failed on converting date string to date number.

Error in ==> datenum at 182

Error in ==> Credit_VaR at 52
BondData.Maturity = datenum(BondData.Maturity, 'mm/dd/yyyy');
Have you got a suggestion to solve the problem?
Again, thanks a lot.
I am not able to follow the first step in the readme file which reads:
"Define the database in the “Data” folder as an ODBC data source".
I therefore tried to follow the steps presented by:
Although I have Microsoft Office installed, the "Microsoft Access driver" is not displayed in the list of "Step 6" and thus, I am not able to establish a connection to the "HistoricalCreditRatings" database. Maybe this is due to the 32-bit version of Microsoft Access 2007, which conflicts with my 64-bit MATLAB; however, this would not make sense to me, since MATLAB should be backward compatible.
Is there an alternative way to import the "HistoricalCreditRatings"? What steps would you recommend for the pre-work task "Define the database in the “Data” folder as an ODBC data source"?
Any help is highly appreciated.
Many thanks for part1_intro - of great help for my master's thesis research! Question: in what way does "Confirm that Maximum Sharpe Ratio is a Maximum" fulfill its purpose, given that the Sharpe ratio generated there has different values (risk, return) from the one generated in "Maximize the Sharpe Ratio"?