The ONNX model exported by exportONNXNetwork() gives different results in OpenCV than in MATLAB?
For example, I use the pretrained googlenet model to classify images. Following the official example, I test in OpenCV 4.1 and classify "peppers.png", but the recognition result is not bell pepper. No matter how I set the input image mean, normalization, etc., it always fails.
My MATLAB program is:
net = googlenet;
exportONNXNetwork(net,'mygoogleNet.onnx','OpsetVersion',9); % or 6, 7, 8
My OpenCV program is as follows; "synset_words.txt" is in the attachment:
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <fstream>
#include <iostream>
using namespace cv; using namespace cv::dnn; using namespace std;
int main()
{
    Mat img = imread("C:\\Program Files\\MATLAB\\R2019a\\examples\\deeplearning_shared\\peppers.png");
    String onnx_path = "mygoogleNet.onnx"; // this is the ONNX file exported from MATLAB googlenet
    std::string file = "synset_words.txt";
    // Read class names.
    std::vector<std::string> classes;
    std::ifstream ifs(file.c_str());
    if (!ifs.is_open())
        CV_Error(Error::StsError, "File " + file + " not found");
    std::string line;
    while (std::getline(ifs, line))
        classes.push_back(line);
    // Read net.
    Net net = readNetFromONNX(onnx_path);
    if (net.empty())
        cout << "net is empty!" << endl;
    int net_size = 224; // googlenet net input size
    img = img(Rect(0, 0, net_size, net_size)); // keep the same image crop as in MATLAB
    Mat image = img.clone();
    Mat blob;
    blobFromImage(image, blob, 1.0/255, Size(net_size, net_size), Scalar(122.6789, 116.6686, 104.0069), true); // set params
    //! [Set input blob]
    net.setInput(blob);
    Mat prob = net.forward();
    Point classIdPoint;
    double confidence;
    minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classIdPoint);
    int classId = classIdPoint.x;
    //! show result
    resize(image, image, Size(500, 500));
    // Put efficiency information.
    std::vector<double> layersTimes;
    double freq = getTickFrequency() / 1000;
    double t = net.getPerfProfile(layersTimes) / freq;
    std::string label = format("Inference time: %.2f ms", t);
    putText(image, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));
    // Print predicted class.
    label = format("%s: %.4f", (classes.empty() ? format("Class #%d", classId).c_str() : classes[classId].c_str()), confidence);
    putText(image, label, Point(0, 40), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));
    return 0;
}
Why is the result not correct? Does anyone know?
Don Mathis on 29 May 2019
Edited: Don Mathis on 29 May 2019
Could it be that you're multiplying the test image by 1.0/255 before passing it to your imported network? Notice in the MATLAB example that the network was passed an image with pixels in the range [0 255]. It looks like you're normalizing it to [0 1]?
Also, does OpenCV import images as BGR? If so, you'll need to change the image to RGB, because the network expects that. Maybe both of these problems are occurring?
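If those are the issues, a minimal sketch of the corrected preprocessing could look like the following. The key change is scalefactor = 1.0 instead of 1.0/255, keeping swapRB = true for the BGR-to-RGB conversion; the per-channel mean values are copied from the question and are an assumption here, so verify them against what your exported input layer actually expects.
// Sketch only: keep pixels in the [0, 255] range (scalefactor = 1.0) instead of dividing by 255,
// and keep swapRB = true so OpenCV's BGR image is converted to the RGB order the network expects.
// The Scalar means below are taken from the question, not verified.
Mat blob;
blobFromImage(image, blob, 1.0, Size(224, 224), Scalar(122.6789, 116.6686, 104.0069), true);
net.setInput(blob);
Mat prob = net.forward();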
KAAN AYKUT KABAKÇI on 6 Aug 2020
In my environment the problem was entirely about the OpenCV version. When I used OpenCV 4.2.0, I was getting different results between MATLAB and Python. After downgrading OpenCV to 4.0.0, the problem disappeared. I am using the following blobFromImage configuration:
blob = cv2.dnn.blobFromImage(input_image, 1, (512,512), (0,0,0), True, False)
The shape of my images is (512, 512, 3).