


Computes the mean relative error by normalizing with the given values. Type: polymorphic.



Input parameters


y_pred : array, predicted values.
y_true : array, true values.
normalizer : array, normalizer values with the same shape as the predictions.


Output parameters


mean_relative_error : float, result.

Use cases

Mean Relative Error (MRE) is an error measure often used in machine learning for regression problems. It measures how far the model’s predictions deviate from the ground truth, relative to a reference value. It is similar to Mean Absolute Percentage Error (MAPE), but instead of always dividing each absolute error by the true value, it divides by a user-supplied normalizer, which makes it more flexible when the true values are small or vary widely in scale.

Here are some specific areas where MRE is commonly used:

    • Sales forecasting: in sales forecasting problems, MRE can be used to assess how far a model’s sales predictions differ from actual sales in percentage terms.
    • Demand forecasting: in demand forecasting problems, such as electricity demand forecasting, MRE can be used to assess the percentage error, which can help to understand the error in terms of capacity or total demand.
    • Finance: in problems involving the prediction of share prices or other financial values, MRE can be used to assess how much the model’s predictions differ from the actual price in percentage terms.

One advantage of the MRE is that it expresses error in relative terms, which can be easier to interpret than absolute errors, particularly when the magnitude of the variable being predicted varies widely. However, it also has disadvantages: it can be sensitive to outliers, and it is undefined wherever the normalizer (often the ground truth itself) is zero.


The Mean Relative Error is a measure of the prediction error relative to a normalization value. For each prediction, we first calculate the absolute error (the difference between the prediction y_pred and the true value y_true). This error is then normalized by dividing it by a normalizer, producing a relative error. Finally, we calculate the average of these relative errors over all the samples. A smaller average relative error indicates that, on average, the prediction errors are small compared to the normalizer value.
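The steps above can be sketched in a few lines of NumPy. This is a hypothetical re-implementation for illustration only, not the HAIBAL VI itself (which is polymorphic LabVIEW code); the function name and signature mirror the parameters documented above.

```python
import numpy as np

def mean_relative_error(y_pred, y_true, normalizer):
    """Illustrative sketch: mean of |y_pred - y_true| / normalizer.

    y_pred, y_true and normalizer are arrays of the same shape,
    matching the input parameters described in this page.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    normalizer = np.asarray(normalizer, dtype=float)
    # Absolute error per sample, divided element-wise by the normalizer,
    # then averaged over all samples.
    return float(np.mean(np.abs(y_pred - y_true) / normalizer))

# Example: per-sample relative errors are 1.0, 0.2 and 0.25,
# so the mean relative error is their average.
print(mean_relative_error([2.0, 4.0, 6.0],   # predictions
                          [1.0, 5.0, 8.0],   # ground truth
                          [1.0, 5.0, 8.0]))  # normalizer (here: the truth)
```

Passing the ground truth itself as the normalizer, as in the example, reduces MRE to a MAPE-style percentage error; any other strictly positive reference (e.g. total capacity in demand forecasting) works the same way.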



All the examples above are provided as PNG snippets: drop a snippet onto your block diagram and the depicted code is added to your VI (do not forget to install the HAIBAL library to run it).
