helios.metrics.functional

Functions

calculate_psnr(→ float)

Calculate PSNR (Peak Signal-to-Noise Ratio).

calculate_psnr_torch(→ float)

Calculate PSNR (Peak Signal-to-Noise Ratio).

calculate_ssim(→ float)

Calculate SSIM (structural similarity).

calculate_ssim_torch(→ float)

Calculate SSIM (structural similarity).

calculate_mAP(→ float)

Calculate the mAP (Mean Average Precision).

calculate_mae(→ float)

Compute the MAE (Mean Absolute Error) score.

calculate_mae_torch(→ float)

Compute the MAE (Mean Absolute Error) score.

Module Contents

helios.metrics.functional.calculate_psnr(img: numpy.typing.NDArray, img2: numpy.typing.NDArray, crop_border: int, input_order: str = 'HWC', test_y_channel: bool = False) float[source]

Calculate PSNR (Peak Signal-to-Noise Ratio).

Implementation follows: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio

Parameters:
  • img – Images with range \([0, 255]\).

  • img2 – Images with range \([0, 255]\).

  • crop_border – Cropped pixels in each edge of an image. These pixels are not involved in the calculation.

  • input_order – Whether the input order is “HWC” or “CHW”. Defaults to “HWC”.

  • test_y_channel – Test on Y channel of YCbCr. Defaults to False.

Returns:

PSNR value.
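The following is a minimal numpy sketch of the PSNR formula the function documents, for images in the range \([0, 255]\). It is illustrative only: the helios implementation additionally supports border cropping (crop_border), channel ordering (input_order), and Y-channel evaluation, none of which are reproduced here.

```python
import numpy as np

def psnr_sketch(img: np.ndarray, img2: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR = 10 * log10(MAX^2 / MSE) for images in [0, max_val]."""
    mse = np.mean((img.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images have infinite PSNR
    return float(10.0 * np.log10(max_val**2 / mse))

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)  # constant error of 10 -> MSE = 100
print(psnr_sketch(a, b))    # 10 * log10(255^2 / 100) ≈ 28.13 dB
```

The torch variant computes the same quantity on tensors; only the array library differs.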

helios.metrics.functional.calculate_psnr_torch(img: torch.Tensor, img2: torch.Tensor, crop_border: int, test_y_channel: bool = False) float[source]

Calculate PSNR (Peak Signal-to-Noise Ratio).

Implementation follows: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio

Parameters:
  • img – Images with range \([0, 255]\).

  • img2 – Images with range \([0, 255]\).

  • crop_border – Cropped pixels in each edge of an image. These pixels are not involved in the calculation.

  • test_y_channel – Test on Y channel of YCbCr. Defaults to False.

Returns:

PSNR value.

helios.metrics.functional.calculate_ssim(img: numpy.typing.NDArray, img2: numpy.typing.NDArray, crop_border: int, input_order: str = 'HWC', test_y_channel: bool = False) float[source]

Calculate SSIM (structural similarity).

Implementation follows: ‘Image quality assessment: From error visibility to structural similarity’. Results are identical to those of the official MATLAB code in https://ece.uwaterloo.ca/~z70wang/research/ssim/. For three-channel images, SSIM is calculated for each channel and then averaged.

Parameters:
  • img – Images with range \([0, 255]\).

  • img2 – Images with range \([0, 255]\).

  • crop_border – Cropped pixels in each edge of an image. These pixels are not involved in the calculation.

  • input_order – Whether the input order is “HWC” or “CHW”. Defaults to “HWC”.

  • test_y_channel – Test on Y channel of YCbCr. Defaults to False.

Returns:

SSIM.
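As a formula illustration only, the sketch below evaluates SSIM with a single global window over the whole image. The referenced MATLAB/helios implementation applies an 11×11 Gaussian window per pixel and averages the resulting map, so its values will differ; this sketch only shows the structure of the SSIM expression.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """SSIM computed once over the whole image (no Gaussian windowing)."""
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return float(num / den)

img = np.arange(64, dtype=np.float64).reshape(8, 8)
print(ssim_global(img, img))  # identical images -> 1.0
```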

helios.metrics.functional.calculate_ssim_torch(img: torch.Tensor, img2: torch.Tensor, crop_border: int, test_y_channel: bool = False) float[source]

Calculate SSIM (structural similarity).

Implementation follows: ‘Image quality assessment: From error visibility to structural similarity’. Results are identical to those of the official MATLAB code in https://ece.uwaterloo.ca/~z70wang/research/ssim/. For three-channel images, SSIM is calculated for each channel and then averaged.

Parameters:
  • img – Images with range \([0, 255]\).

  • img2 – Images with range \([0, 255]\).

  • crop_border – Cropped pixels in each edge of an image. These pixels are not involved in the calculation.

  • test_y_channel – Test on Y channel of YCbCr. Defaults to False.

Returns:

SSIM.

helios.metrics.functional.calculate_mAP(targs: numpy.typing.NDArray, preds: numpy.typing.NDArray) float[source]

Calculate the mAP (Mean Average Precision).

Implementation follows: https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision

Parameters:
  • targs – target (ground-truth) labels in range \([0, 1]\).

  • preds – predicted labels in range \([0, 1]\).

Returns:

The mAP score.
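A minimal numpy sketch of the standard mAP computation for multi-label data: per class, average precision is the mean of the precision values at each rank where a true positive occurs, and mAP averages that over classes (columns). The argument names mirror the documented signature, but the helios internals may differ; this sketch also assumes each class has at least one positive label.

```python
import numpy as np

def average_precision(targ: np.ndarray, pred: np.ndarray) -> float:
    """AP for one class: mean precision at each true-positive rank."""
    order = np.argsort(-pred)            # sort samples by descending score
    targ = targ[order]
    hits = np.cumsum(targ)               # true positives seen up to each rank
    ranks = np.arange(1, len(targ) + 1)
    precision_at_hit = (hits / ranks)[targ.astype(bool)]
    return float(precision_at_hit.mean())

def mean_ap(targs: np.ndarray, preds: np.ndarray) -> float:
    """Average AP over classes (columns of the label matrix)."""
    return float(np.mean([average_precision(targs[:, c], preds[:, c])
                          for c in range(targs.shape[1])]))

targs = np.array([[1, 0], [0, 1], [1, 0]])
preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
print(mean_ap(targs, preds))  # perfect ranking per class -> 1.0
```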

helios.metrics.functional.calculate_mae(pred: numpy.typing.NDArray, gt: numpy.typing.NDArray, scale: float = 1.0) float[source]

Compute the MAE (Mean Absolute Error) score.

Implementation follows: https://en.wikipedia.org/wiki/Mean_absolute_error. The scale argument is used in the event that the input arrays are not in the range \([0, 1]\) but instead have been scaled to be in the range \([0, N]\), where \(N\) is the scaling factor. For example, if the arrays are images in the range \([0, 255]\), then the scaling factor should be set to 255. If the arrays are already in the range \([0, 1]\), then the scale can be omitted.

Parameters:
  • pred – predicted (inferred) array

  • gt – ground-truth array

  • scale – scaling factor that was used on the input arrays. Defaults to 1.

Returns:

The MAE score.
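The following sketch shows one plausible reading of the scale parameter: both arrays are divided by scale before the mean absolute difference is taken, so inputs in \([0, 255]\) are compared as if in \([0, 1]\) when scale=255. This is an assumption about the helios internals, not a verified reproduction.

```python
import numpy as np

def mae_sketch(pred: np.ndarray, gt: np.ndarray, scale: float = 1.0) -> float:
    """Mean absolute error after undoing the input scaling (assumed behavior)."""
    return float(np.mean(np.abs(pred / scale - gt / scale)))

p = np.array([10.0, 20.0, 30.0])
g = np.array([0.0, 20.0, 60.0])
print(mae_sketch(p, g))               # (10 + 0 + 30) / 3 ≈ 13.333
print(mae_sketch(p, g, scale=255.0))  # same error expressed in [0, 1] units
```

The torch variant below applies the same formula to tensors.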

helios.metrics.functional.calculate_mae_torch(pred: torch.Tensor, gt: torch.Tensor, scale: float = 1.0) float[source]

Compute the MAE (Mean Absolute Error) score.

Implementation follows: https://en.wikipedia.org/wiki/Mean_absolute_error. The scale argument is used in the event that the input tensors are not in the range \([0, 1]\) but instead have been scaled to be in the range \([0, N]\), where \(N\) is the scaling factor. For example, if the tensors are images in the range \([0, 255]\), then the scaling factor should be set to 255. If the tensors are already in the range \([0, 1]\), then the scale can be omitted.

Parameters:
  • pred – predicted (inferred) tensor

  • gt – ground-truth tensor

  • scale – scaling factor that was used on the input tensors. Defaults to 1.

Returns:

The MAE score.