MATLAB Examples

System Objects for Classification and Code Generation

Products used: Statistics and Machine Learning Toolbox™, MATLAB® Coder™, Simulink®, and Computer Vision Toolbox™.

This example shows how to generate C code from a MATLAB® System object™ that classifies images of digits using a trained classification model. This example also shows how to use the System object for classification in Simulink. A benefit of using System objects over MATLAB functions is that System objects are more appropriate for processing large amounts of streaming data. For more details, see docid:matlab_prog.btpzufx-1.

This example is based on C Code Generation for Image Classifier, which is an alternative workflow to Digit Classification Using HOG Features.

Load Data

Load the digitimages data set from the matlabroot/examples/stats directory.

load(fullfile(matlabroot,'examples','stats','digitimages.mat'))

images is a 28-by-28-by-3000 array of uint16 integers. Each page is a raster image of a digit. Each element is a pixel intensity. Corresponding labels are in the 3000-by-1 numeric vector Y. For more details, enter Description at the command line.
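
To get a feel for the data, you can display a few of the digit images. This optional sketch uses base MATLAB graphics functions (imagesc and colormap); the particular pages shown are arbitrary and this step is not part of the original workflow.

figure
for k = 1:4
    subplot(2,2,k)
    imagesc(images(:,:,500*k)) % Display an arbitrary page of the array
    colormap gray
    axis image
    axis off
    title(sprintf('Label: %d',Y(500*k)))
end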

Store the number of observations and number of predictor variables. Create a data partition that specifies to hold out 20% of the data. Extract training and test set indices from the data partition.

rng(1); % For reproducibility
n = size(images,3);
p = numel(images(:,:,1));
cvp = cvpartition(n,'Holdout',0.20);
idxTrn = training(cvp);
idxTest = test(cvp);

Rescale Data

Rescale the pixel intensities so that they range in the interval [0,1] within each image. Specifically, suppose $p_{ij}$ is pixel intensity $j$ within image $i$. For image $i$, rescale all of its pixel intensities using this formula:

$$\hat p_{ij} = \frac{p_{ij} - \min_j(p_{ij})}{\max_j(p_{ij}) - \min_j(p_{ij})}.$$

X = double(images);

for i = 1:n
    minX = min(min(X(:,:,i)));
    maxX = max(max(X(:,:,i)));
    X(:,:,i) = (X(:,:,i) - minX)/(maxX - minX);
end
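
Alternatively, you can perform the same per-image rescaling without the loop by operating on a reshaped copy of the data. The following sketch is equivalent to the loop above and assumes MATLAB R2016b or later for implicit expansion; Xalt is a hypothetical working copy, and the loop result X is what the rest of the example uses.

Xalt = double(images);
colX = reshape(Xalt,[],n);   % Each column holds the pixels of one image
mins = min(colX,[],1);       % Per-image minima (1-by-n)
maxs = max(colX,[],1);       % Per-image maxima (1-by-n)
Xalt = reshape((colX - mins)./(maxs - mins),size(Xalt));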

Reshape Data

For code generation, the predictor data for training must be in a table of numeric variables or a numeric matrix.

Reshape the data to a matrix such that predictor variables correspond to columns and images to rows. Because reshape takes elements columnwise, transpose its result.

X = reshape(X,[p,n])';

Train and Optimize Classification Models

Cross-validate an ECOC model of SVM binary learners and a random forest based on the training observations. Use 5-fold cross-validation.

For the ECOC model, specify predictor standardization and optimize classification error over the ECOC coding design and the SVM box constraint. Explore all combinations of these values:

  • For the ECOC coding design, use one-versus-one and one-versus-all.
  • For the SVM box constraint, use three logarithmically spaced values from 0.1 to 100.

For all models, store the 5-fold cross-validated misclassification rates.
coding = {'onevsone' 'onevsall'};
boxconstraint = logspace(-1,2,3);
cvLossECOC = nan(numel(coding),numel(boxconstraint)); % For preallocation

for i = 1:numel(coding)
    for j = 1:numel(boxconstraint)
        t = templateSVM('BoxConstraint',boxconstraint(j),'Standardize',true);
        CVMdl = fitcecoc(X(idxTrn,:),Y(idxTrn),'Learners',t,'KFold',5,...
            'Coding',coding{i});
        cvLossECOC(i,j) = kfoldLoss(CVMdl);
        fprintf('cvLossECOC = %f for model using %s coding and box constraint=%f\n',...
            cvLossECOC(i,j),coding{i},boxconstraint(j))
    end
end
cvLossECOC = 0.058333 for model using onevsone coding and box constraint=0.100000
cvLossECOC = 0.057083 for model using onevsone coding and box constraint=3.162278
cvLossECOC = 0.050000 for model using onevsone coding and box constraint=100.000000
cvLossECOC = 0.120417 for model using onevsall coding and box constraint=0.100000
cvLossECOC = 0.121667 for model using onevsall coding and box constraint=3.162278
cvLossECOC = 0.127917 for model using onevsall coding and box constraint=100.000000

For the random forest, vary the maximum number of splits using the values in the sequence $\{3^2, 3^3,\ldots,3^m\}$, where $m$ is such that $3^m$ is no greater than $n - 1$.

n = size(X,1);
m = floor(log(n - 1)/log(3));
maxNumSplits = 3.^(2:m);
cvLossRF = nan(numel(maxNumSplits),1); % For preallocation
for i = 1:numel(maxNumSplits)
    t = templateTree('MaxNumSplits',maxNumSplits(i));
    CVMdl = fitcensemble(X(idxTrn,:),Y(idxTrn),'Method','bag','Learners',t,...
        'KFold',5);
    cvLossRF(i) = kfoldLoss(CVMdl);
    fprintf('cvLossRF = %f for model using %d as the maximum number of splits\n',...
        cvLossRF(i),maxNumSplits(i))
end
cvLossRF = 0.323750 for model using 9 as the maximum number of splits
cvLossRF = 0.198333 for model using 27 as the maximum number of splits
cvLossRF = 0.075417 for model using 81 as the maximum number of splits
cvLossRF = 0.017083 for model using 243 as the maximum number of splits
cvLossRF = 0.012083 for model using 729 as the maximum number of splits
cvLossRF = 0.012083 for model using 2187 as the maximum number of splits

For each algorithm, determine the hyperparameter indices that yield the minimal misclassification rates.

minCVLossECOC = min(cvLossECOC(:))
linIdx = find(cvLossECOC == minCVLossECOC,1);
[bestI,bestJ] = ind2sub(size(cvLossECOC),linIdx);
bestCoding = coding{bestI}
bestBoxConstraint = boxconstraint(bestJ)

minCVLossRF = min(cvLossRF(:))
linIdx = find(cvLossRF == minCVLossRF,1);
[bestI,bestJ] = ind2sub(size(cvLossRF),linIdx);
bestMNS = maxNumSplits(bestI)
minCVLossECOC =

    0.0500


bestCoding =

    'onevsone'


bestBoxConstraint =

   100


minCVLossRF =

    0.0121


bestMNS =

   729

The random forest achieves a smaller cross-validated misclassification rate.

Train an ECOC model and a random forest using the training data. Supply the optimal hyperparameter combinations.

t = templateSVM('BoxConstraint',bestBoxConstraint,'Standardize',true);
MdlECOC = fitcecoc(X(idxTrn,:),Y(idxTrn),'Learners',t,'Coding',bestCoding);
t = templateTree('MaxNumSplits',bestMNS);
MdlRF = fitcensemble(X(idxTrn,:),Y(idxTrn),'Method','bag','Learners',t);

Create a variable for the test sample images, and use the trained models to predict test sample labels.

testImages = X(idxTest,:);
testLabelsECOC = predict(MdlECOC,testImages);
testLabelsRF = predict(MdlRF,testImages);
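
Optionally, you can compare the two models on the held-out data. This check is not part of the original workflow; it reports the fraction of correctly classified test-set images for each model.

testLabels = Y(idxTest);                     % True labels of the test set
accECOC = mean(testLabelsECOC == testLabels) % ECOC test-set accuracy
accRF = mean(testLabelsRF == testLabels)     % Random forest test-set accuracy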

Save Classification Model to Disk

MdlECOC and MdlRF are predictive classification models, but you must prepare them for code generation. Save MdlECOC and MdlRF to your present working directory using saveCompactModel.

saveCompactModel(MdlECOC,'DigitImagesECOC');
saveCompactModel(MdlRF,'DigitImagesRF');
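
Optionally, you can confirm at the command line that the saved models load correctly before writing the System objects. loadCompactModel returns the compact version of the saved model; tmpMdl is a hypothetical variable used only for this check.

tmpMdl = loadCompactModel('DigitImagesECOC');
predict(tmpMdl,testImages(1,:)) % Label predicted for the first test image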

Create System Object for Prediction

Create two System objects, one for the ECOC model and the other for the random forest, that:

  • Load the previously saved trained model using loadCompactModel.
  • Make sequential predictions using the step method.
  • Enforce no size changes to the input data.
  • Enforce double-precision, scalar output.
classdef ECOCClassifier < matlab.System
    % ECOCCLASSIFIER Predict image labels from trained ECOC model
    %
    % ECOCCLASSIFIER loads the trained ECOC model from
    % |'DigitImagesECOC.mat'|, and predicts labels for new observations
    % based on the trained model.  The ECOC model in
    % |'DigitImagesECOC.mat'| was cross-validated using the training data
    % in the sample data |digitimages.mat|.

    properties(Access = private)
        CompactMdl % The compacted, trained ECOC model
    end
        
    methods(Access = protected)
        
        function setupImpl(obj)
            % Load ECOC model from file
            obj.CompactMdl = loadCompactModel('DigitImagesECOC');
        end
        
        function y = stepImpl(obj,u)
            y = predict(obj.CompactMdl,u);
        end
        
        function flag = isInputSizeLockedImpl(obj,index)
            % Return true if input size is not allowed to change while
            % system is running
            flag = true;
        end
        
        function dataout = getOutputDataTypeImpl(~)
            dataout = 'double';
        end
        
        function sizeout = getOutputSizeImpl(~)
            sizeout = [1 1];
        end
    end
end

classdef RFClassifier < matlab.System
    % RFCLASSIFIER Predict image labels from trained random forest
    %
    % RFCLASSIFIER loads the trained random forest from
    % |'DigitImagesRF.mat'|, and predicts labels for new observations based
    % on the trained model.  The random forest in |'DigitImagesRF.mat'|
    % was cross-validated using the training data in the sample data
    % |digitimages.mat|.

    properties(Access = private)
        CompactMdl % The compacted, trained random forest
    end
        
    methods(Access = protected)
        
        function setupImpl(obj)
            % Load random forest from file
            obj.CompactMdl = loadCompactModel('DigitImagesRF');
        end
        
        function y = stepImpl(obj,u)
            y = predict(obj.CompactMdl,u);
        end
        
        function flag = isInputSizeLockedImpl(obj,index)
            % Return true if input size is not allowed to change while
            % system is running
            flag = true;
        end
        
        function dataout = getOutputDataTypeImpl(~)
            dataout = 'double';
        end
        
        function sizeout = getOutputSizeImpl(~)
            sizeout = [1 1];
        end
    end
end

For System object basic requirements, see docid:matlab_prog.bs4mxcb-1.

Declare Prediction Functions for Code Generation

Declare two MATLAB functions called predictDigitECOCSO.m and predictDigitRFSO.m. The functions should:

  • Include the code generation directive %#codegen.
  • Accept image data commensurate with X.
  • Predict labels using the ECOCClassifier and RFClassifier System objects, respectively.
  • Return predicted labels.
function label = predictDigitECOCSO(X) %#codegen
%PREDICTDIGITECOCSO Classify digit in image using ECOC Model System object
%   PREDICTDIGITECOCSO classifies the 28-by-28 images in the rows of X
%   using the compact ECOC model in the System object ECOCClassifier, and
%   then returns class labels in label.
classifier = ECOCClassifier;
label = step(classifier,X); 
end

function label = predictDigitRFSO(X) %#codegen
%PREDICTDIGITRFSO Classify digit in image using RF Model System object
%   PREDICTDIGITRFSO classifies the 28-by-28 images in the rows of X
%   using the compact random forest in the System object RFClassifier, and
%   then returns class labels in label.
classifier = RFClassifier;
label = step(classifier,X); 
end
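
Before generating code, you can optionally call these functions directly in MATLAB to confirm that they run and reproduce the predictions from the trained models.

isequal(predictDigitECOCSO(testImages),testLabelsECOC) % Expected to return true (1)
isequal(predictDigitRFSO(testImages),testLabelsRF)     % Expected to return true (1)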

Compile MATLAB Function to MEX File

Compile the prediction function for the model that achieves the lower cross-validated misclassification rate to a MEX file using codegen. Specify the test-set images using the -args argument.

if(minCVLossECOC <= minCVLossRF)
    codegen predictDigitECOCSO -args testImages
else
    codegen predictDigitRFSO -args testImages
end
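
Optionally, add the -report option to the codegen command to generate a code generation report with links to the generated C source files. For example, for the random forest function, which achieves the lower cross-validated loss in this example:

codegen predictDigitRFSO -args testImages -report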

Verify that the generated MEX file produces the same predictions as the MATLAB function.

if(minCVLossECOC <= minCVLossRF)
    mexLabels = predictDigitECOCSO_mex(testImages);
    verifyMEX = sum(mexLabels == testLabelsECOC) == numel(testLabelsECOC)
else
    mexLabels = predictDigitRFSO_mex(testImages);
    verifyMEX = sum(mexLabels == testLabelsRF) == numel(testLabelsRF)
end
verifyMEX =

  logical

   1

verifyMEX is 1, which indicates that the predictions made by the generated MEX file and the corresponding MATLAB function are the same.

Predict Labels Using System Objects in Simulink

Create a video file that displays the test-set images frame by frame.

v = VideoWriter('testImages.avi','Uncompressed AVI');
v.FrameRate = 1;
open(v);
dim = sqrt(p)*[1 1];
for j = 1:size(testImages,1)
    writeVideo(v,reshape(testImages(j,:),dim));
end
close(v);
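
You can optionally confirm that the video contains one frame per test-set image by reading it back with VideoReader. This sketch counts the frames using hasFrame and readFrame.

vr = VideoReader('testImages.avi');
numFrames = 0;
while hasFrame(vr)
    readFrame(vr);
    numFrames = numFrames + 1;
end
numFrames % Should equal size(testImages,1)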

Declare a function called scalePixelIntensities.m that converts RGB images to grayscale, and then scales the resulting pixel intensities so that their values are in the interval [0,1].

function x = scalePixelIntensities(imdat)
%SCALEPIXELINTENSITIES Scales image pixel intensities
%   SCALEPIXELINTENSITIES scales the pixel intensities of the image such
%   that the result x is a row vector of values in the interval [0,1].

imdat = rgb2gray(imdat);

minimdat = min(min(imdat));
maximdat = max(max(imdat));
x = (imdat - minimdat)/(maximdat - minimdat);
x = x(:)'; % Return the scaled intensities as a row vector

end


Load the Simulink® model slexClassifyAndDisplayDigitImages.slx located in mlr/examples/stats, where mlr is the MATLAB root folder.

mlr = matlabroot;
SimMdlName = 'slexClassifyAndDisplayDigitImages';
pathToModel = fullfile(mlr,'examples','stats',SimMdlName);
open_system(pathToModel);

The figure displays the Simulink® model. At the beginning of simulation, the From Multimedia File block loads the video file of the test-set images. For each image in the video:

  1. The From Multimedia File block converts and outputs the image to a 28-by-28 matrix of pixel intensities.
  2. The Process Data block scales the pixel intensities using scalePixelIntensities.m, and outputs a 1-by-784 vector of scaled intensities.
  3. The Classification Subsystem block predicts labels given the processed image data. The block chooses the System object that minimizes classification error. In this case, the block chooses the random forest. The block outputs a double-precision scalar label.
  4. The Data Type Conversion block converts the label to an int32 scalar.
  5. The Insert Text block embeds the predicted label on the current frame.
  6. The To Video Display block displays the annotated frame.

Simulate the model.

sim(SimMdlName);

The model quickly displays all 600 test-set images and their predicted labels; the last image remains in the video display. To generate predictions and display them with the corresponding images one at a time, press the Step Forward button instead.
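
When you finish experimenting, you can close the model programmatically without saving changes. This cleanup step is not part of the original example.

close_system(SimMdlName,0);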