Image Classification on ARM CPU: SqueezeNet on Raspberry Pi

Hi, I am Ram Cherukuri, product manager here at MathWorks, and welcome to another edition of deep learning on Raspberry Pi, this time for image classification using SqueezeNet.

In this video, I hope to show how easily you can take your MATLAB algorithm, test and validate it using live I/O within MATLAB, verify it on the target Raspberry Pi using processor-in-the-loop (PIL) simulation, and then deploy it as a standalone application, all without needing to write any additional C or C++ code.

I decided to pick image classification as an example of a machine learning and deep learning application for a couple of reasons:

  • It’s one of the fundamental video and image processing tasks, found in applications from video surveillance to automated driving and so on.
  • And it’s very relevant to embedded deployment, which means it should work in real time on a target processor.

You can find many more resources on machine learning and deep learning in MATLAB in the links below.

Speaking of embedded processors, I chose the Raspberry Pi for a reason beyond it being fun and accessible: it is based on an Arm Cortex-A CPU, similar to most other vision-oriented embedded processors out there.

MATLAB Coder enables you to generate code and deploy your application to any Arm Cortex-A based processor that supports Neon SIMD instructions.

You get optimal performance because the generated code calls into Arm’s Compute Library, which provides low-level functions optimized for Arm’s CPU and GPU platforms.

Please refer to the link below to learn more about the Compute Library.

In previous videos, we covered deployment aspects with examples such as pedestrian detection; in this video, we will focus on hardware-in-the-loop testing and validation.

Here is our MATLAB algorithm: it takes in an input image, does some resizing as a preprocessing step, uses the pretrained SqueezeNet network for inference, and then performs post-processing to identify and display the top five classifications.
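The algorithm described above might look roughly like the following sketch. This is not the actual code from the video; the function and variable names are my own, and it assumes Deep Learning Toolbox with the SqueezeNet support package installed.

```matlab
function classifyTopFive(img)
% Hypothetical sketch of the classification algorithm described above.
net = squeezenet;                            % pretrained SqueezeNet
inputSize = net.Layers(1).InputSize;         % network input size, e.g. 227x227x3
imgResized = imresize(img, inputSize(1:2));  % preprocessing: resize the image
[~, scores] = classify(net, imgResized);     % run inference, get class scores
[topScores, idx] = maxk(scores, 5);          % pick the five highest scores
classNames = net.Layers(end).Classes;        % ImageNet class labels
for k = 1:5                                  % display the top five results
    fprintf('%s: %.3f\n', string(classNames(idx(k))), topScores(k));
end
end
```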

Here is my test script that I will use to run through the example.

Let’s first run this section of code to see what the algorithm does on the input image within MATLAB. You can see that it gives us the top five classifications for the things in our input image.

Now, I want to test and validate my algorithm with some live data. Here I am setting up a connection to a Raspberry Pi and I can use the webcam attached to it to get the live feed from the camera and run inference on it in MATLAB – pretty straightforward.
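Setting up that connection might look like the sketch below; it assumes the MATLAB Support Package for Raspberry Pi Hardware, and the address and credentials are placeholders for your own board.

```matlab
% Hypothetical connection sketch; address and credentials are placeholders.
r = raspi('raspberrypi', 'pi', 'raspberry');   % connect to the board
cam = webcam(r);                               % USB webcam attached to the Pi
for k = 1:100                                  % process a stream of frames
    img = snapshot(cam);                       % grab a live frame from the camera
    % ... run the MATLAB classification algorithm on img here ...
end
```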

Note that you will need to download the free MATLAB Support Package for Raspberry Pi Hardware to try this out.

In addition, if you have MATLAB Coder, you can also generate code and deploy it on the Raspberry Pi.

How about we verify the generated code with processor-in-the-loop simulation? That way we can use MATLAB as our test bench: pass the input to the application running on the target, and bring the result back into MATLAB for comparison.

Here we are defining the code generation configuration, specifying the Arm Compute Library version and the PIL verification mode. And let’s generate the PIL interface.
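A configuration along these lines is a reasonable sketch of that step. It assumes MATLAB Coder, Embedded Coder, and the Raspberry Pi support package; the entry-point name and version number are placeholders, not the exact values from the video.

```matlab
% Hypothetical PIL configuration sketch; names and versions are placeholders.
cfg = coder.config('lib', 'ecoder', true);       % library build with Embedded Coder
cfg.VerificationMode = 'PIL';                    % processor-in-the-loop mode
cfg.TargetLang = 'C++';
dlcfg = coder.DeepLearningConfig('arm-compute'); % target the Arm Compute Library
dlcfg.ArmArchitecture = 'armv7';
dlcfg.ArmComputeVersion = '19.05';               % placeholder version
cfg.DeepLearningConfig = dlcfg;
cfg.Hardware = coder.hardware('Raspberry Pi');   % target hardware settings
codegen -config cfg myClassifier -args {ones(227,227,3,'uint8')}
```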

Once code generation is complete, we get a PIL MEX file that runs the application on the Raspberry Pi. Using the same test input, we run the image classification on the Raspberry Pi and get the classification results back. You can do more detailed verification by comparing the outputs and so on, but you get the point.
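That comparison could look like this sketch, assuming an entry-point function named `myClassifier` whose generated PIL MEX is conventionally named `myClassifier_pil` (both names are placeholders):

```matlab
% Hypothetical verification sketch; function names are placeholders.
refOut = myClassifier(testImg);       % reference result, computed in MATLAB
pilOut = myClassifier_pil(testImg);   % same input, executed on the Raspberry Pi
assert(isequal(refOut, pilOut))       % compare target output against MATLAB
clear myClassifier_pil                % terminate the PIL session on the Pi
```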

Throughout the example we did not have to write any C or C++ code. However, if you would like to use any custom libraries such as OpenCV, you can always manually integrate the generated code and write a custom main file to compile into a bigger application.
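For that manual-integration workflow, a code-only build is one way to start; the sketch below assumes the same placeholder entry-point name as before.

```matlab
% Hypothetical code-only generation sketch for manual integration.
cfg = coder.config('lib');    % generate a static library configuration
cfg.GenCodeOnly = true;       % emit C/C++ sources only; compile them yourself
codegen -config cfg myClassifier -args {ones(227,227,3,'uint8')}
% The sources under codegen/lib/myClassifier can then be compiled with a
% custom main file and linked against libraries such as OpenCV.
```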

Please refer to the links below to try out this example for yourself and to download the necessary support packages.