
Disclaimer: I am an amateur at statistics and I'm trying to learn on my own. Please bear with me if the question below sounds like utter nonsense to you.

In an experiment where you have Team A playing a series of matches (any sport) against Team B, both teams had their odds of winning calculated by a bookmaker, shown as decimals in the spreadsheet below:

Odds Sheet

My intention is to find out by how much the odds of winning are being overrated or underrated for Team A. The initial idea was to do the following calculation:

Total # of games won minus the sum of all odds of Team A winning. In other words: the # of matches won or lost beyond the number of matches the team should have won or lost, given the odds of winning.

Results are expressed as 0 or 1, where 0 is a loss and 1 is a win.

For Match 1, for example: Team A was supposed to win only 35% of the time, but it won the match. So, the correct probability should have been 100%. Therefore, the odds were wrong by 65%: 1 - 0.35 = 0.65

For Match 2: the team was supposed to win 52% of the time, but it lost: 0 - 0.52 = -0.52

...and so on.

I assumed that:

  • If the final result is NEGATIVE, it should mean that the team lost more than the odds indicated it should have. Therefore, the probabilities are OVERRATED (higher than they should be).

  • If the final result is POSITIVE, it should mean that the team won more than the odds indicated it should have. Therefore, the probabilities are UNDERRATED (lower than they should be).

  • If the final result is ZERO, it should mean that the odds are fair and completely correct.

After doing this calculation for all the matches: 8 - 7.41 = 0.59

Meaning that, in this set, the team won 0.59 matches more than the odds indicated it should have. The odds are, therefore, underrated. If I divide the result (0.59) by the # of games, I assume I should be getting the average of how much the odds were wrong (positively or negatively) per game.
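
To make the arithmetic concrete, here is a minimal Python sketch of this calculation. The first two (probability, result) pairs are Matches 1 and 2 as described above; the remaining entries are purely hypothetical placeholders standing in for the rest of the spreadsheet data.

```python
# Each pair is (bookmaker's win probability for Team A, actual result: 1 = win, 0 = loss).
# The first two entries are Matches 1 and 2 from the question; the others are made up
# only to illustrate the arithmetic -- replace them with the real spreadsheet values.
matches = [
    (0.35, 1),
    (0.52, 0),
    (0.60, 1),  # hypothetical
    (0.45, 1),  # hypothetical
]

total_wins = sum(result for _, result in matches)        # actual number of wins
expected_wins = sum(prob for prob, _ in matches)         # wins implied by the probabilities

difference = total_wins - expected_wins                  # wins beyond (or below) expectation
average_per_game = difference / len(matches)             # average "miss" per match

print(f"Actual wins:   {total_wins}")
print(f"Expected wins: {expected_wins:.2f}")
print(f"Difference:    {difference:.2f}")
print(f"Per game:      {average_per_game:.2f}")
```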

It feels to me like I am making a lot of dangerous assumptions without any strong arguments as to why they work, since I don't have the necessary knowledge to validate these ideas. I am not sure if this calculation I am experimenting with holds any ground mathematically speaking.

Also, after doing more research, I found out about the Brier score / mean squared error, which apparently can also be used to calculate this - but I am not sure whether these methods apply here either (they seem to measure something like the variance of the set - hence, the number cannot be negative. It seems to me that what I am calculating is different from that).
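
For comparison, this is a sketch of what the Brier score would look like on the same partly hypothetical data as above: it squares each per-match difference before averaging, so positive and negative misses cannot cancel out.

```python
# Brier score: the mean of squared differences between the forecast probability
# and the outcome (1 = win, 0 = loss). Squaring keeps every error non-negative.
# First two pairs are Matches 1 and 2 from the question; the rest are hypothetical.
matches = [
    (0.35, 1),
    (0.52, 0),
    (0.60, 1),  # hypothetical
    (0.45, 1),  # hypothetical
]

brier = sum((prob - result) ** 2 for prob, result in matches) / len(matches)
print(f"Brier score: {brier:.3f}")  # 0 = perfect forecasts, 1 = worst possible
```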

Could anyone please help me with (in as much of an ELI5 way you could possibly get):

  • a) telling me whether what I am measuring with this calculation really is "by how much the odds of the team winning are wrong";
  • b) pointing out mistakes in this thought process;
  • c) offering better suggestions for how to proceed with this idea.

Thank you in advance and pardon me for any mistakes.

  • Hello, I don't have a complete answer, just a few thoughts: what you have here are not odds, they are probabilities. You can google the difference. Further, it is not true that if team n wins game x, then team n had a 100% chance of winning that game. What you have seems reasonable, however. Hopefully someone smarter than I can give you some better insight. – John Madden Aug 09 '15 at 15:04
  • Thank you for the comment, John Madden. You are absolutely right about the odds - but what if the given odds are treated as being exactly the estimated probabilities? Couldn't they be the same thing in this case? – sharpbounce Aug 09 '15 at 16:16
