Naturally, with such a phenomenon, armchair hockey fans will often claim a player is doing poorly because he faces overly difficult opposition, or doing well because his opposition is easy. You will often hear fans cite "difficult minutes" as a determining factor in raising or lowering a player's value. I will attempt to figure out to what extent opposition affects players' performance. Personally, I would like to see results where opposition is irrelevant, as that would make hockey studies a lot simpler; however, I expect to see some players being protected and others being "abused".
A few examples of people discussing strength of opposition:
- “Donovan's recent three year average is 1.5 ESP/hr. So Friesen's an upgrade if you assume they have comparable GAA and strength of opposition. However, that rate upgrade costs a marginal $975K. I'm not sure that's worth it, unless Friesen can do that against tougher opposition or if he can keep down the GA at a better rate than Donovan.” (RiversQ).
- "Horcoff appeared to have considerably more difficult strength of opposition" (RiversQ).
- “All this number crunching is nice, but it's not clear that the arbitrators are looking at things like strength of opposition” (
- "MacT creates a Samsonov-Stoll-Hemsky line that gets all the easy minutes." (Dennis).
- “Well, he has been getting lot of easy minutes compared to Salei and Vish, i.e. he plays more against opponents 3rd and 4th lines.” (Pepper).
- “That PPV game was worth every nickel. It was Pronger's signature game this year, IMO. He played a ton of difficult minutes in a place the Oilers hadn't done well, and was, even by his standards, a calming presence” (
My methodology is probably not rigorous enough, but it should be simple enough for most to understand. I rated offense via shooting percentage and defense using expected goals against per minute (given an average goalie). I rescaled these values onto a 0 to 100 scale, with 50 as average and a standard deviation of approximately 20 (so 68% of players fall between 30 and 70). I would expect the most protected player to see opposition offense of about 25 and defense near 50. I then processed every second of every player's ice time (which took 43,000 seconds) and scored his opposition on a linear scale: the sum of the five opposition skaters' scores divided by five (the goalie is not counted). An offensive example: a line of Ohlund (56), Baumgartner (58), Morrison (67), Naslund (60) and Bertuzzi (68) would score (56 + 58 + 67 + 60 + 68) / 5 = 61.8, so if you only played against this line you would have an opposition offense score of about 62. To find a player's overall score, sum the opposition scores for every second he was on the ice and divide by his total ice time. The method for defense is identical. Note: I will use the short forms offensive opposition score (OOS) and defensive opposition score (DOS) for the rest of this article.
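The scoring described above can be sketched in a few lines of code. This is just an illustration of the arithmetic, not my actual processing script; the shift data in the second example is made up for demonstration.

```python
def line_score(ratings):
    """Average the five opposing skaters' ratings (goalie excluded)."""
    return sum(ratings) / len(ratings)

# The example line from the text:
# Ohlund 56, Baumgartner 58, Morrison 67, Naslund 60, Bertuzzi 68
print(line_score([56, 58, 67, 60, 68]))  # 61.8

def opposition_score(shifts):
    """shifts: list of (seconds on ice, opposing line_score) pairs.
    Returns the time-weighted average opposition score."""
    total_time = sum(sec for sec, _ in shifts)
    return sum(sec * score for sec, score in shifts) / total_time

# Hypothetical player: 30 s against the 61.8 line, 24 s against a 50.0 line
print(round(opposition_score([(30, 61.8), (24, 50.0)]), 1))  # 56.6
```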
After spending hours processing the data, the results were pretty innocuous. I got distributions with much smaller standard deviations (1.5) than the players' individual scores (20). This says that players play against a diverse group of opponents, some good and some bad; in other words, most strength of opposition averages out to nothing significant. There are two very informative graphs: the first is OOS vs. even-strength ice time; the second is DOS vs. even-strength ice time. These graphs show the spread collapsing as ice time grows, and there is basically not much, if anything, to be gained here.
With the differences mentioned above showing little if anything useful about strength of opposition, I needed to find out how much of the spread could be due to chance alone. Assuming a shift length of 54 seconds (the average), I ran a script that played a player against random opponents for 500 shifts, a thousand times over. The standard deviation of the resulting scores was 0.4, compared to 1.5 for OOS and 2.3 for DOS. With this in mind, I can only explain approximately 30% of the OOS and 80% of the DOS; the rest is random error.
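The random-opposition baseline can be reproduced with a short Monte Carlo sketch. This is my reconstruction of the check described above, under the assumptions that opponent ratings are normally distributed with mean 50 and standard deviation 20 (the scale defined earlier) and that each shift faces five fresh random skaters; the original script may have sampled real rosters instead.

```python
import random
import statistics

random.seed(1)  # for reproducibility of this sketch

def season_score(shifts=500, line_size=5, mu=50.0, sigma=20.0):
    """Average opposition score over one simulated run of equal-length shifts,
    each against line_size opponents drawn from N(mu, sigma)."""
    per_shift = []
    for _ in range(shifts):
        line = [random.gauss(mu, sigma) for _ in range(line_size)]
        per_shift.append(sum(line) / line_size)
    return sum(per_shift) / shifts

scores = [season_score() for _ in range(300)]
sd = statistics.pstdev(scores)
print(round(sd, 2))  # roughly 0.4, i.e. sigma / sqrt(5 * 500)
```

Analytically the expected spread is 20 / sqrt(5 × 500) ≈ 0.4, which matches the simulated figure quoted above.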
Looking at some individuals, many Phoenix players were given low DOSs. This reminded me that results in this year's NHL were largely driven by the teams players played against: over 60% of the variability in defensive opposition can be attributed to which division a player is in, and the rest may depend on the teams played in the other conference or on injuries. The regression can be improved by adding plus-minus and/or the individual's defensive score, but this is likely just the OOS and DOS affecting things like plus-minus. This simplifies strength-of-opposition analysis a lot, because you can just look at the caliber of the teams a player faced to determine whether he is under- or overrated.
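One simple way to check a claim like "over 60% of the variability is attributable to division" is a between-group variance share (the R² you would get from regressing DOS on division dummies). The sketch below uses made-up DOS values, not my actual data, purely to show the calculation.

```python
import statistics

def variance_explained(groups):
    """Share of total variance explained by group membership:
    between-group sum of squares divided by total sum of squares."""
    all_vals = [v for g in groups.values() for v in g]
    grand = statistics.fmean(all_vals)
    total = sum((v - grand) ** 2 for v in all_vals)
    between = sum(len(g) * (statistics.fmean(g) - grand) ** 2
                  for g in groups.values())
    return between / total

# Toy DOS values grouped by (hypothetical) division:
toy = {
    "Pacific":   [48.0, 48.5, 47.8, 48.2],
    "Northwest": [50.1, 49.9, 50.4, 50.0],
    "Central":   [51.8, 52.2, 51.9, 52.1],
}
print(round(variance_explained(toy), 2))  # 0.98 for this toy data
```

In this artificial example the divisions are very tightly clustered, so almost all the variance is between divisions; real data would be noisier.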
Doing the same for OOS is much less effective (7% of variability explained), and using more or fewer variables doesn't help. It would appear regression may be the wrong technique here; there must be a better way to look at this. When you remove the team factor, the error drops substantially: if you include only players who played more than 80 games on the same team, you will see the error match that of randomly generated player data.
This alone likely won't convince you, but I should add that I ran a number of regressions and there were a few "notable" variables; in general, it appeared to be the team you played against that mattered, not the players.
Players do face varied opposition, but this variability stems from the teams they play, not from being protected by a coach. The data show no significant coaching effect on difficulty of opposition, either offensively or defensively. This means the concept of "strength of opposition" relates only to the teams a player faces and is not affected by playing on a different line. Coaches don't protect players through line matching (though they may line match for other purposes).
In the future, I hope to further analyze whether coaches "protect" players by using other players on their own team to create "balance".