I segmented a text page into lines and built my own dataset, with ground truth created by cropping the lines from the original page. The problem is that the ground-truth images and the images produced by my algorithm have different dimensions. When I try to compute PSNR, I get an error because the two images do not have the same dimensions. How can I compare the two and measure the accuracy of my segmented line images?
In this case, which parameters of a line can I use for computing accuracy?
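One workaround worth illustrating: since PSNR is only defined for images of equal shape, both crops can first be zero-padded onto a common canvas (bottom/right padding with background pixels), after which PSNR, or a pixel-overlap score such as foreground IoU, can be computed. The sketch below is a minimal example with NumPy; the function names are my own, not from any particular library, and it assumes grayscale images where nonzero pixels are text foreground:

```python
import numpy as np

def pad_to_common(a, b):
    """Zero-pad both 2-D arrays (background = 0) to the same shape."""
    h = max(a.shape[0], b.shape[0])
    w = max(a.shape[1], b.shape[1])
    pa = np.zeros((h, w), dtype=a.dtype)
    pb = np.zeros((h, w), dtype=b.dtype)
    pa[:a.shape[0], :a.shape[1]] = a
    pb[:b.shape[0], :b.shape[1]] = b
    return pa, pb

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-shaped images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def foreground_iou(a, b):
    """Intersection-over-union of foreground (nonzero) pixels."""
    fa, fb = a > 0, b > 0
    union = np.logical_or(fa, fb).sum()
    return np.logical_and(fa, fb).sum() / union if union else 1.0
```

Padding keeps every original pixel intact, unlike resizing, which would distort the strokes and bias PSNR; whether padding or a geometric measure (line bounding-box overlap) is the fairer comparison depends on how the line boundaries were cropped.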