Image quality can degrade due to distortions during image acquisition and processing. Examples of distortion include noise, blurring, ringing, and compression artifacts.
Efforts have been made to create objective measures of quality. For many applications, a valuable quality metric correlates well with the subjective perception of quality by a human observer. Quality metrics can also track unperceived errors as they propagate through an image processing pipeline, and can be used to compare image processing algorithms.
If an image without distortion is available, you can use it as a reference to measure the quality of other images. For example, when evaluating the quality of compressed images, an uncompressed version of the image provides a useful reference. In these cases, you can use full-reference quality metrics to directly compare the target image and the reference image.
If a reference image without distortion is not available, you can use a no-reference image quality metric instead. These metrics compute quality scores based on expected image statistics.
Full-reference algorithms compare the input image against a pristine reference image with no distortion. These algorithms include:
immse — Mean-squared error (MSE). MSE measures the average squared difference between actual and ideal pixel values. This metric is simple to calculate but might not align well with the human perception of quality.
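The MSE calculation can be sketched in a few lines of Python. This is a minimal illustration of the formula, not the immse implementation; the function name and the flat-list image representation are assumptions made for the example.

```python
def mse(reference, target):
    """Mean-squared error between two equal-sized images.

    Both images are given as flat lists of pixel values.
    """
    if len(reference) != len(target):
        raise ValueError("images must have the same number of pixels")
    return sum((r - t) ** 2 for r, t in zip(reference, target)) / len(reference)

# A distorted 2x2 image that differs from the reference by 1 at every pixel:
ref = [10, 20, 30, 40]
dist = [11, 21, 31, 41]
print(mse(ref, dist))  # -> 1.0
```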
psnr — Peak signal-to-noise ratio (pSNR). pSNR is derived from the mean-squared error, and indicates the ratio of the maximum pixel intensity to the power of the distortion. Like MSE, the pSNR metric is simple to calculate but might not align well with perceived quality.
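Because pSNR is derived from MSE, it takes only one extra step to compute. The sketch below shows the standard decibel formula, 10·log10(peak²/MSE); the function name and list-based image representation are assumptions for illustration, not the psnr implementation.

```python
import math

def psnr_db(reference, target, peak=255):
    """Peak signal-to-noise ratio in decibels.

    peak is the maximum possible pixel value (255 for 8-bit images).
    """
    mse = sum((r - t) ** 2 for r, t in zip(reference, target)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no distortion power
    return 10 * math.log10(peak ** 2 / mse)

ref = [10, 20, 30, 40]
dist = [11, 21, 31, 41]
# MSE = 1, so pSNR = 10 * log10(255^2) ≈ 48.13 dB
print(round(psnr_db(ref, dist), 2))
```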
ssim — Structural Similarity (SSIM) Index. The SSIM metric combines local image structure, luminance, and contrast into a single local quality score. In this metric, structures are patterns of pixel intensities, especially among neighboring pixels, after normalizing for luminance and contrast. Because the human visual system is good at perceiving structure, the SSIM quality metric agrees more closely with the subjective quality score. Because structural similarity is computed locally, the metric can generate a map of quality over the image.
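The SSIM comparison of luminance, contrast, and structure can be sketched for a single window. The real metric applies this formula over many local (typically Gaussian-weighted) windows to produce the quality map; the sketch below treats the whole image as one window, and the constants follow the common choices C1 = (0.01·L)² and C2 = (0.03·L)² for dynamic range L.

```python
def ssim_global(x, y, data_range=255):
    """Single-window SSIM between two images given as flat pixel lists.

    Compares mean intensity (luminance), variance (contrast), and
    covariance (structure) in one combined score.
    """
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / (n - 1)
    var_y = sum((v - mu_y) ** 2 for v in y) / (n - 1)
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = [10, 20, 30, 40]
print(ssim_global(img, img))  # identical images -> 1.0
```

An image compared with itself scores 1.0; any distortion lowers the score, and an inverted image (structure reversed) can score below zero.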
No-reference algorithms use statistical features of the input image to evaluate the image quality. These no-reference algorithms include:
brisque — Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). A BRISQUE model is trained on a database of images with known distortions, and BRISQUE is limited to evaluating the quality of images with the same type of distortion. BRISQUE is opinion-aware, which means subjective quality scores accompany the training images.
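The statistical features BRISQUE builds on are mean-subtracted contrast-normalized (MSCN) coefficients: each pixel is normalized by the mean and standard deviation of its neighborhood. The sketch below is a simplification, not the brisque implementation: it uses a 3x3 box neighborhood instead of BRISQUE's Gaussian-weighted window, and an assumed stabilizing constant eps; the full algorithm goes on to fit distributions to these coefficients and feed the fitted parameters to a trained regression model.

```python
def mscn(image, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a 2-D image.

    image is a 2-D list of pixel values. Each output coefficient is
    (pixel - local mean) / (local std + eps) over a 3x3 neighborhood.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [image[a][b]
                     for a in range(max(0, i - 1), min(h, i + 2))
                     for b in range(max(0, j - 1), min(w, j + 2))]
            mu = sum(patch) / len(patch)
            sigma = (sum((p - mu) ** 2 for p in patch) / len(patch)) ** 0.5
            out[i][j] = (image[i][j] - mu) / (sigma + eps)
    return out

# A flat, structure-free patch yields all-zero MSCN coefficients:
flat = [[128] * 4 for _ in range(4)]
print(mscn(flat))
```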
niqe — Natural Image Quality Evaluator (NIQE). Although a NIQE model is trained on a database of pristine images, NIQE can measure the quality of images with arbitrary distortion. NIQE is opinion-unaware, and does not use subjective quality scores. The tradeoff is that the NIQE score of an image might not correlate as well as the BRISQUE score with human perception of quality.
piqe — Perception based Image Quality Evaluator (PIQE). The PIQE algorithm is opinion-unaware and unsupervised, which means it does not require a trained model. PIQE can measure the quality of images with arbitrary distortion and in most cases performs similarly to NIQE. PIQE estimates block-wise distortion and measures the local variance of perceptibly distorted blocks to compute the quality score.
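The block-wise statistics underlying a PIQE-style score can be sketched as follows. This is only the first step of the algorithm and a simplification: the real PIQE divides the MSCN coefficient map into 16x16 blocks and classifies blocks as distorted or uniform before scoring, whereas this sketch just splits a raw 2-D image into blocks (an assumed block size of 4 keeps the example small) and reports each block's pixel variance.

```python
def block_variances(image, block=4):
    """Pixel variance of each non-overlapping block x block tile of a 2-D image.

    Returns a dict mapping the (row, column) of each block's top-left
    corner to that block's variance.
    """
    h, w = len(image), len(image[0])
    variances = {}
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            vals = [image[i][j]
                    for i in range(bi, min(bi + block, h))
                    for j in range(bj, min(bj + block, w))]
            mu = sum(vals) / len(vals)
            variances[(bi, bj)] = sum((v - mu) ** 2 for v in vals) / len(vals)
    return variances

# Left block is uniform (zero variance); right block is a noisy checkerboard:
img = [[100, 100, 100, 100, 0, 255, 0, 255],
       [100, 100, 100, 100, 255, 0, 255, 0],
       [100, 100, 100, 100, 0, 255, 0, 255],
       [100, 100, 100, 100, 255, 0, 255, 0]]
v = block_variances(img)
print(v[(0, 0)], v[(0, 4)])  # -> 0.0 16256.25
```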
The BRISQUE and the NIQE algorithms calculate the quality score of an image with computational efficiency after the model is trained. PIQE is less computationally efficient, but it provides local measures of quality in addition to a global quality score. All no-reference quality metrics usually perform worse than full-reference metrics in terms of agreement with a subjective human quality score.