  • “Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment”
  • Paper link: arxiv.org/pdf/1612.01…

0 review

This article, together with RankIQA from the previous post, forms a good framework for handling the NR-IQA task. Let’s study the essence of this paper.

1 related work

The related work of this article lists many previous NR-IQA models:

  • [18]
    • DIIVINE: first identifies the type of distortion in the image, then selects the regression model for that distortion type to produce the final quality score;
  • [20]
    • BRISQUE: models images in the spatial domain with an asymmetric generalized Gaussian distribution, using differences between spatial neighbors as features;
  • [21]
    • NIQE: extracts features with a multivariate Gaussian model and relates them to quality in an unsupervised way;
  • [22]
    • FRIQUEE: feeds hand-crafted feature maps into a 4-layer deep belief network, outputs a feature vector, and uses an SVM for prediction;
  • [24]
    • CORNIA: one of the first purely data-driven models for the NR-IQA problem. K-means clustering is applied to patches whose brightness and contrast have been normalized, and then soft-assignment encoding distances are extracted from the data to predict the quality score.
  • [28]
    • BIECON: first estimates local quality scores with a CNN on normalized patches (this model is pre-trained using an existing FR-IQA metric on FR datasets), then regresses the final score using the mean and variance of the patch scores as features.

Frankly, after going through all of these, most are quite old hand-crafted-feature methods, and they no longer hold up well.

2 the details

2.1 FR-IQA

This paper uses the same Siamese-network structure as RankIQA in the previous article, and it was this paper that first proposed the FR-IQA framework. In this framework the image is cut into 32×32 patches, and the feature extractor is a VGG19-style network containing 5 max-pool layers; that is to say, after the feature extractor each patch becomes a feature of shape (512, 1, 1).
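The shape arithmetic above can be checked with a minimal PyTorch sketch (a hypothetical reimplementation, not the authors' code): five 3×3 conv blocks, each ending in a max-pool, halve the spatial size 32 → 16 → 8 → 4 → 2 → 1 while the channel count grows to 512.

```python
# Hypothetical sketch of the per-patch feature extractor: a VGG-style
# stack of 3x3 convolutions with 5 max-pool layers, mapping a 32x32
# patch to a (512, 1, 1) feature.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, n_convs=2):
    # n_convs 3x3 convolutions followed by one 2x2 max-pool
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return layers

class PatchFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            *conv_block(3, 32),     # 32x32 -> 16x16
            *conv_block(32, 64),    # 16x16 -> 8x8
            *conv_block(64, 128),   # 8x8   -> 4x4
            *conv_block(128, 256),  # 4x4   -> 2x2
            *conv_block(256, 512),  # 2x2   -> 1x1
        )

    def forward(self, x):           # x: (B, 3, 32, 32)
        return self.features(x)     # -> (B, 512, 1, 1)

extractor = PatchFeatureExtractor()
f = extractor(torch.randn(4, 3, 32, 32))
print(tuple(f.shape))
```

The exact channel widths per block are an assumption here; the point is that 5 pooling stages collapse a 32×32 patch to a single 512-channel spatial position.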

For the FR-IQA problem, the reference patch and the distorted patch each pass through the feature extractor to obtain a 512-d vector, and in the fusion stage the two are spliced together by concatenation: concat(f_r, f_d, f_r − f_d).
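The fusion step is just tensor concatenation; a minimal sketch, assuming f_r and f_d are the flattened 512-d features of the reference and distorted patches:

```python
# Sketch of the fusion stage: concat(f_r, f_d, f_r - f_d) yields a
# 3 * 512 = 1536-d fused feature per patch.
import torch

f_r = torch.randn(4, 512)   # reference-patch features (batch of 4)
f_d = torch.randn(4, 512)   # distorted-patch features
fused = torch.cat([f_r, f_d, f_r - f_d], dim=1)
print(tuple(fused.shape))
```

Including the explicit difference f_r − f_d hands the network a distortion-sensitive signal directly instead of forcing it to learn subtraction.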

Two branches follow the fused feature vector: one for regression and one for weights. As for how to obtain the quality score of the whole image from its many patches (the patches are non-overlapping samples from the image), the author provides two methods:

  1. Simple average.

With this averaging method every patch has the same influence on the whole image, so the loss function is simply set to the MAE between the averaged score and the ground truth.
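A minimal sketch of this aggregation, with hypothetical numbers: the image score is the plain mean of the patch scores, and the loss is the absolute error against the ground-truth score.

```python
# Simple-average aggregation with an MAE loss (hypothetical values).
import torch

patch_scores = torch.tensor([2.0, 3.0, 4.0])  # regressed per-patch scores
mos = torch.tensor(3.5)                        # ground-truth image score

image_score = patch_scores.mean()              # every patch counts equally
loss = torch.abs(image_score - mos)            # MAE on the image score
print(image_score.item(), loss.item())         # 3.0 0.5
```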

  2. Weighted average.

As shown in the structure above, after feature fusion one branch regresses the quality score of the patch, while the other branch outputs the weight of this patch within the whole picture. The weight parameter is constrained to be greater than 0.

2.2 NR-IQA

The NR-IQA variant simply removes the reference branch and skips the feature fusion; everything else stays the same.
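Concretely, the 512-d feature of the distorted patch alone feeds both heads. A minimal sketch (the 512-unit hidden layers are an assumption):

```python
# NR-IQA head sketch: no reference, no fusion -- the distorted patch's
# feature goes straight into the regression and weight branches.
import torch
import torch.nn as nn

feat = torch.randn(4, 512)  # distorted-patch features only
regress = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1))
weight = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1))

score, w = regress(feat), weight(feat)
print(tuple(score.shape), tuple(w.shape))  # (4, 1) (4, 1)
```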

3 summary

This is a basic framework and way of thinking for using CNNs to handle quality assessment, and it is a good framework to learn from as an introduction.