ELO rating
The ELO rating is a numerical rating system used to assess the relative strength of players in games such as chess, Go, backgammon, and poker. Each player has their own ELO rating, which starts at a set initial value and changes based on the results of their games.
The main idea behind the ELO rating is that players gain or lose rating points based on the strength of their opponent and the outcome of the match. If a stronger player defeats a weaker one, they gain fewer points than they would for beating an opponent stronger than themselves; conversely, a weaker player who defeats a stronger one gains more points than they would for beating another weak player.
The coefficient K determines how much a player's rating changes after each match. Its value can differ between games and may depend on the player's level, the number of games played, and other factors.
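For reference, a minimal Python sketch of the classic chess-style Elo update illustrates this mechanism (this is not the prediction-based formula introduced below; the function name classic_elo_update, the 400-point scale, and the default K = 32 are standard chess conventions used here only as an assumption for illustration):

def classic_elo_update(rating_a, rating_b, score_a, k=32):
    # Classic Elo update for player A: score_a is 1 for a win,
    # 0.5 for a draw, 0 for a loss.
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)

# A 1400-rated player gains more for beating a 1600-rated opponent
# than for beating a 1200-rated one.
print(round(classic_elo_update(1400, 1600, 1), 1))  # ~1424.3
print(round(classic_elo_update(1400, 1200, 1), 1))  # ~1407.7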
Overall, the ELO rating system allows players to compare their strength against other players, and it is widely used in both sports and computer games.
The formula for calculating a player's rating can be as follows:
Rn = Ro + K * (W - L) * log10(N + 1)
where:
Rn - the new rating of the player
Ro - the old rating of the player
K - a coefficient that determines the rate of rating change and can be chosen based on the desired level of variability (for example, 32 for chess, 20 for poker, etc.)
W - the number of correct predictions made by the player
L - the number of incorrect predictions made by the player (but not less than 0)
N - the total number of predictions made by the player (both correct and incorrect)
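A minimal Python sketch of this formula, assuming N = W + L as defined above (the function name new_rating is chosen here only for illustration):

import math

def new_rating(old_rating, correct, incorrect, k):
    # Rn = Ro + K * (W - L) * log10(N + 1)
    # correct = W, incorrect = L, and N = correct + incorrect
    total = correct + incorrect
    return old_rating + k * (correct - incorrect) * math.log10(total + 1)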
Thus, the formula takes into account the number of correct predictions, the number of incorrect predictions, and the total number of predictions made. The log10(N+1) multiplier makes the rating change faster for players who make more predictions.
log10(N+1) is the base-10 logarithm of N+1, where N is the total number of predictions made by the player. Adding one keeps the argument of the logarithm positive, so the formula remains defined even when the player has not made any predictions (log10(0) is undefined, while log10(0 + 1) = 0).
For example, if a player made 100 predictions, then log10(100+1) = log10(101) = 2.004. If a player made 10 predictions, then log10(10+1) = log10(11) = 1.041. The more predictions a player makes, the larger this multiplier, so the rating changes faster for a given difference between correct and incorrect predictions.
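A quick check of the log10(N + 1) multiplier for a few values of N, using Python's math.log10:

import math

for n in (0, 10, 30, 100):
    print(n, round(math.log10(n + 1), 3))
# prints: 0 0.0, 10 1.041, 30 1.491, 100 2.004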
The final result of the rating calculation depends on the initial rating, the numbers of correct and incorrect predictions, the total number of predictions made, and the chosen coefficient K. Here are examples of the calculation for two players with different input data:
Example 1:
Player A has an initial rating Ro = 1000. Player A made 30 predictions, of which 20 were correct and 10 were incorrect. Using K = 32 to determine the rate of rating change, the new rating for Player A is:
Rn(A) = Ro + K * (W - L) * log10(N + 1)
Rn(A) = 1000 + 32 * (20 - 10) * log10(30 + 1)
Rn(A) ≈ 1000 + 320 * 1.4914
Rn(A) ≈ 1477.24
Thus, the new rating for Player A will be approximately 1477.24.
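Using the new_rating sketch from above, the same calculation for Player A can be reproduced as:

print(round(new_rating(1000, correct=20, incorrect=10, k=32), 2))  # 1477.24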
Example 2:
Player B has an initial rating Ro = 1200. Player B made 10 predictions, of which 6 were correct and 4 were incorrect. Using K = 20 to determine the rate of rating change, the new rating for Player B is:
Rn(B) = Ro + K * (W - L) * log10(N + 1)
Rn(B) = 1200 + 20 * (6 - 4) * log10(10 + 1)
Rn(B) ≈ 1200 + 40 * 1.0414
Rn(B) ≈ 1241.66
Thus, the new rating for Player B will be approximately 1241.66.
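And with Player B's numbers, the same sketch gives:

print(round(new_rating(1200, correct=6, incorrect=4, k=20), 2))  # 1241.66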