helios.metrics.metrics

Attributes

METRICS_REGISTRY

Global instance of the registry for metric functions.

Classes

CalculatePSNR

Calculate PSNR (Peak Signal-to-Noise Ratio).

CalculateSSIM

Calculate SSIM (structural similarity).

CalculateMAP

Calculate the mAP (Mean Average Precision).

CalculateMAE

Compute the MAE (Mean Absolute Error) score.

Functions

create_metric(→ torch.nn.Module)

Create the metric function for the given type.

Module Contents

helios.metrics.metrics.METRICS_REGISTRY

Global instance of the registry for metric functions.

Example

import torch

import helios.metrics as hlm

# This automatically registers your metric function.
@hlm.METRICS_REGISTRY.register
class MyMetric(torch.nn.Module):
    ...

# Alternatively, you can manually register a metric function like this:
hlm.METRICS_REGISTRY.register(MyMetric)
helios.metrics.metrics.create_metric(type_name: str, *args: Any, **kwargs: Any) → torch.nn.Module

Create the metric function for the given type.

Parameters:
  • type_name – the type of the metric to create.

  • args – positional arguments to pass into the metric.

  • kwargs – keyword arguments to pass into the metric.

Returns:

The metric function.
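
For illustration, a minimal sketch of creating a metric by name. It assumes the registry key is the class name (as with the CalculatePSNR class documented below); any extra arguments are forwarded to the metric's constructor.

from helios.metrics.metrics import create_metric

# Assumed: registered metrics are looked up by class name. crop_border and
# test_y_channel are forwarded to CalculatePSNR's constructor.
psnr = create_metric("CalculatePSNR", crop_border=4, test_y_channel=True)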

class helios.metrics.metrics.CalculatePSNR(crop_border: int, input_order: str = 'HWC', test_y_channel: bool = False)

Bases: torch.nn.Module

Calculate PSNR (Peak Signal-to-Noise Ratio).

Implementation follows: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio. Note that the input_order is only needed if you plan to evaluate Numpy images. It can be left as default otherwise.

Parameters:
  • crop_border – Cropped pixels in each edge of an image. These pixels are not involved in the calculation.

  • input_order – Whether the input order is “HWC” or “CHW”. Defaults to “HWC”.

  • test_y_channel – Test on Y channel of YCbCr. Defaults to false.

forward(img: numpy.typing.NDArray | torch.Tensor, img2: numpy.typing.NDArray | torch.Tensor) → float

Calculate the PSNR metric.

Parameters:
  • img – Images with range \([0, 255]\).

  • img2 – Images with range \([0, 255]\).

Returns:

PSNR value.
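
A short usage sketch, assuming NumPy images in HWC order with values in \([0, 255]\); random data stands in for real images here.

import numpy as np

from helios.metrics.metrics import CalculatePSNR

psnr = CalculatePSNR(crop_border=4, input_order="HWC")

# Two HWC images in [0, 255]; random data is used purely for illustration.
img = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)
img2 = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)

value = psnr(img, img2)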

class helios.metrics.metrics.CalculateSSIM(crop_border: int, input_order: str = 'HWC', test_y_channel: bool = False)

Bases: torch.nn.Module

Calculate SSIM (structural similarity).

Implementation follows: ‘Image quality assessment: From error visibility to structural similarity’. Results are identical to those of the official MATLAB code in https://ece.uwaterloo.ca/~z70wang/research/ssim/. For three-channel images, SSIM is calculated for each channel and then averaged.

Parameters:
  • crop_border – Cropped pixels in each edge of an image. These pixels are not involved in the calculation.

  • input_order – Whether the input order is “HWC” or “CHW”. Defaults to “HWC”.

  • test_y_channel – Test on Y channel of YCbCr. Defaults to false.

forward(img: numpy.typing.NDArray | torch.Tensor, img2: numpy.typing.NDArray | torch.Tensor) → float

Calculate the SSIM metric.

Parameters:
  • img – Images with range \([0, 255]\).

  • img2 – Images with range \([0, 255]\).

Returns:

SSIM value.
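
As with PSNR, a minimal sketch using NumPy images in HWC order with values in \([0, 255]\).

import numpy as np

from helios.metrics.metrics import CalculateSSIM

ssim = CalculateSSIM(crop_border=4)

# Three-channel HWC images in [0, 255]; SSIM is computed per channel and
# averaged, as described above.
img = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)
img2 = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)

value = ssim(img, img2)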

class helios.metrics.metrics.CalculateMAP

Bases: torch.nn.Module

Calculate the mAP (Mean Average Precision).

Implementation follows: https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision.

forward(targs: numpy.typing.NDArray, preds: numpy.typing.NDArray) → float

Calculate the mAP (Mean Average Precision).

Parameters:
  • targs – target (ground-truth) labels in range \([0, 1]\).

  • preds – predicted (inferred) labels in range \([0, 1]\).

Returns:

The mAP score.
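
A hedged sketch assuming a multi-label setup where targs holds binary target labels and preds holds per-class scores, both shaped (num_samples, num_classes); the values below are made up for illustration.

import numpy as np

from helios.metrics.metrics import CalculateMAP

mean_ap = CalculateMAP()

# Hypothetical multi-label example: 4 samples, 3 classes.
targs = np.array(
    [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=np.float32
)
preds = np.array(
    [[0.9, 0.2, 0.7], [0.1, 0.8, 0.3], [0.6, 0.7, 0.2], [0.2, 0.1, 0.9]],
    dtype=np.float32,
)

score = mean_ap(targs, preds)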

class helios.metrics.metrics.CalculateMAE(scale: float = 1)

Bases: torch.nn.Module

Compute the MAE (Mean Absolute Error) score.

Implementation follows: https://en.wikipedia.org/wiki/Mean_absolute_error. The scale argument is used when the input arrays are not in the range \([0, 1]\) but have instead been scaled to the range \([0, N]\), where \(N\) is the scaling factor. For example, if the arrays are images in the range \([0, 255]\), then the scaling factor should be set to 255. If the arrays are already in the range \([0, 1]\), the scale can be omitted.

Parameters:

scale – scaling factor that was used on the input tensors. Defaults to 1.

forward(pred: numpy.typing.NDArray | torch.Tensor, gt: numpy.typing.NDArray | torch.Tensor) → float

Calculate the MAE metric.

Parameters:
  • pred – predicted (inferred) data.

  • gt – ground-truth data.

Returns:

The MAE score.
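
A minimal sketch for image data in \([0, 255]\), passing scale=255 as described above; random data stands in for real predictions and ground truth.

import numpy as np

from helios.metrics.metrics import CalculateMAE

mae = CalculateMAE(scale=255)

# Prediction and ground truth in [0, 255]; scale tells the metric the
# factor that was applied to the original [0, 1] data.
pred = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)
gt = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float64)

score = mae(pred, gt)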