Computation of the Ranked Probability Score (RPS) – Accuracy


The ranked probability score is an extension of the Brier score:
it measures the accuracy of probability forecasts when there are more than two categories.
Suppose we wish to know the accuracy of probability forecasts of precipitation amount in three categories:
less than 0.2 mm, 0.3 to 4.4 mm, and more than 4.4 mm.
Here we have divided the quantitative precipitation forecast into three categories,
"no rain", "some rain" and "significant rain".
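The categorization described above can be sketched as a small function. This is an illustrative sketch, not part of the original material; the thresholds come from the text, and the gap between 0.2 and 0.3 mm is assumed to reflect measurement resolution, so the boundary is treated as a single cutoff.

```python
def precip_category(amount_mm):
    """Map a precipitation amount (mm) to one of three ordered categories.

    Thresholds follow the text: below 0.2 mm is "no rain",
    up to 4.4 mm is "some rain", and above 4.4 mm is "significant rain".
    The 0.2-0.3 mm gap in the text is assumed to be measurement resolution.
    """
    if amount_mm < 0.2:
        return "no rain"
    elif amount_mm <= 4.4:
        return "some rain"
    else:
        return "significant rain"
```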
Probabilities are forecast for each of these categories, as in this example of 15 days of Tampere data,
and the observed category can be tallied for each observation, as in the third column.
The RPS is the squared difference between the cumulative forecast probability and
the cumulative observation for each category, summed over all categories.
Here is the Tampere data converted into cumulative observations and forecasts with the RPS for each event of the sample.
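The per-event computation just described can be sketched in a few lines. This is a hedged illustration of the standard procedure, not the exact routine used for the Tampere table: the forecast is a list of probabilities over the ordered categories, and the observation is the index of the category that occurred.

```python
def rps(forecast_probs, observed_category):
    """Ranked probability score for a single event.

    forecast_probs: probabilities for each of K ordered categories (summing to 1).
    observed_category: 0-based index of the category that occurred.
    Returns the sum over categories of the squared difference between
    the cumulative forecast probability and the cumulative observation.
    """
    cum_forecast = 0.0
    cum_obs = 0.0
    score = 0.0
    for k, prob in enumerate(forecast_probs):
        cum_forecast += prob
        if k == observed_category:
            cum_obs = 1.0  # cumulative observation steps from 0 to 1 here
        score += (cum_forecast - cum_obs) ** 2
    return score
```

A perfect categorical forecast, e.g. `rps([0, 1, 0], 1)`, scores 0, and the score grows as the forecast distribution moves further from the observed category.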
Perhaps you have guessed by now, but the RPS really measures the difference between the distribution
of forecasts and the distribution of observations over the categories,
shown schematically below for 12 categories of a variable X.
Oh yes, and the formula for the RPS is as follows.
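The formula itself appeared on the accompanying slide; a standard way of writing it, consistent with the cumulative quantities defined above, is:

```latex
\mathrm{RPS} = \sum_{k=1}^{K}
  \left( \mathrm{CDF}_{\mathrm{fcst},\,k} - \mathrm{CDF}_{\mathrm{obs},\,k} \right)^{2}
```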
In the formula, K is the number of categories and CDF denotes the cumulative distribution of the forecast and of the observation, as illustrated above.
Once again, all this data manipulation is normally handled by computer, so we will move on to the subject of interpretation now.