August 31, 2006

Strength of Opposition II

In my previous strength of opposition article I incorrectly concluded that there is no strength of opposition effect; this article will attempt to explain the phenomenon. The mistakes in the previous article were not the result of poor statistical evaluation of the data, but rather poor data: a bug in my scripts (documented in that article) compared the third period of one player to the first period of another (conclude there is no correlation – go figure…).

Introduction

Strength of opposition is extremely hard to measure, for several reasons. You must separate offense from defense, because players are matched against a different type of opposition depending on their strengths. Players must then somehow be rated at some value that evaluates how good a player is (preferably not subjectively), and this rating should not itself be significantly affected by opposition. One can simply look at ice time as a measure of quality (the coach is all-knowing) and work from there; that is the best way to find out who plays against the “top” lines. To find out whether a player’s statistics are skewed by their opposition, one needs a rating that looks at the quality of the lines they play against (first, second, third and fourth).

Lines

The line of a player is reasonably easy to guess; I previously did a poor job predicting line with a linear-regression style approach (more ice time implies a better line). Before I go into the analysis I should explain the statistic. Line score is a number from 0-3 for forwards or 0-2 for defensemen that rates how much time they are on the ice per game. So 18 minutes of even strength time per game would make a defenseman a 0 for line score; 8 minutes would get you a 2 or so. The exact breakpoints don’t have to be perfect because everything scales linearly. Forwards spend less time on the ice in general (with a few exceptions) and as such have a different scale. When I look at who each player is playing against, I can take the average of their opponents’ line scores and approximate the “line of opposition”. I do so separately for forwards and defensemen (hopefully teams always use 3 forwards and 2 defensemen during 5-on-5 situations; otherwise my results are wrong). Once you have the average over a season you can compare players within a team (comparing between teams is a little more complicated and I don’t plan on doing it here).
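As a rough sketch of the idea: map even strength minutes per game onto the 0-3 (forwards) or 0-2 (defensemen) scale, then average the line scores of everyone a player faced. The 18- and 8-minute defensive breakpoints come from the text; the forward breakpoints and the linear interpolation between them are my assumptions.

```python
def line_score(es_minutes_per_game, is_forward):
    """Linear scale: more ice time means a lower (better) line number."""
    if is_forward:
        top, bottom, worst = 16.0, 6.0, 3.0   # assumed forward scale
    else:
        top, bottom, worst = 18.0, 8.0, 2.0   # 18 min -> 0, 8 min -> 2
    score = (top - es_minutes_per_game) / (top - bottom) * worst
    return max(0.0, min(worst, score))        # clamp to the valid range

def opposition_line_score(opponent_minutes, opponents_are_forwards=True):
    """Average line score of the players someone was matched against."""
    scores = [line_score(m, opponents_are_forwards) for m in opponent_minutes]
    return sum(scores) / len(scores)

print(line_score(18, False))  # 0.0: a top-pairing defenseman
print(line_score(8, False))   # 2.0: a bottom-pairing defenseman
```

Since everything is linear, the exact breakpoints only shift and stretch the scale; comparisons within a team are unaffected.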

To start, I listed two defensemen from every team who face the top forward lines (it is interesting how different the scores are between divisions: Ovechkin’s and Crosby’s ice time is affecting the results). This list is here mainly to demonstrate the effectiveness of such tools and to show that the expected results come out. Based on these lists, easy defense opposition and easy forward opposition, it would appear that coaches have an easier time making defensive-style matches than offensive matches (it is easier to get Ohlund out vs. Sakic than Sakic out vs. Brookbank), given that no stars play against the worst defensemen. And it would appear that, in general, the strategy in the NHL is best vs. best (although the small differences in line scores indicate otherwise). This is likely because you try to get Sakic out as much as you possibly can (required rest, then back on the ice), whereas Brookbank can go out whenever, so you make sure he’s out when it won’t cost you the game. For interest’s sake I made the opposite defensive list: defensemen who were protected.

Strength of Opposition and Statistics

So now the real question: what does this all mean? In general, what we really want to know is how opposition affects the statistics of players. Would Naslund get more goals per minute if he faced easier opposition (given the same supporting cast)? The answer is most likely yes, but then the question is: how many more? The problem with measuring this is that players are so entrenched in their roles that it’s hard to look at things in a before-and-after setting. One would have to break all players into categories of line (per game/year) and compare how their stats were hurt by playing tougher opposition, but one would also have to scale out age and developmental issues (injuries). In summary: it’s impossible to figure out the “cost” of being on a different line (from my perspective at least), but I can do something. As I have analyzed before, I can look at players in terms of “line shooting percentage” for offense and “expected goals against per hour” for defense. The defensive measure is independent of goaltending, and the offensive measure captures many aspects of scoring while removing the defensive aspects (possession), so good players on bad teams can be measured accurately. Once I know a player’s “rating” on the two scales listed above, I can look at how “difficult” these lines actually were (some teams have bad top lines); in fact I would argue some coaches cannot determine talent, as there are likely a number of 2nd lines that are more effective than their fellow 1st lines, but I’ll save that discussion for another day. In order to make my comparisons “nice” I converted the above statistics into scores with an average of 50 and a standard deviation of 20, capped at a maximum of 100 and a minimum of 0, so that players who fell outside the spectrum due to amazing “skills” would not skew the statistics too extremely (extremes are generally caused by low ice time or error).
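The rescaling step can be sketched as follows; this is a minimal version of the mean-50, sd-20 conversion with the 0-100 caps, and the input values are purely illustrative.

```python
def normalize(values):
    """Rescale raw ratings to mean 50, sd 20, clamped to [0, 100]."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    # Standardize, stretch to sd 20 around 50, and cap the extremes
    # so low-ice-time outliers can't dominate the scale.
    return [min(100.0, max(0.0, 50 + 20 * (v - mean) / sd)) for v in values]

print(normalize([0, 10]))  # [30.0, 70.0]
```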
Once I have both scores I can compare how offensive and defensive their respective opposition is, using the same averaging technique mentioned above; however, I no longer need to separate defensemen from forwards, because they are on the same scale.

So what was the question again? How does opposition affect a player’s statistics; more specifically: how does opposition affect expected goals against and line shooting percentage?

Defensemen:

Because coaches do not appear to match players against defensemen, but do seem to match players against forwards (a regression I did agreed with this theory), I will only compare offensive opposition line scores. If you do a quick regression, using ice time as a weight to scale out errors, you get an interesting relationship: players’ offensive scores increased with harder opposition (which may just say that top lines score more than third lines and top lines play against top lines); however, playing against easier offense resulted in better defensive scores, by a factor of 15.7 for every 1-unit increase in line score. The scores are a statistic with a standard deviation of around 20, and the line scores have a standard deviation of around 0.1; 15.7 × 0.1 = 1.57, or 8% of 20, so I could tentatively conclude that 8% of a player’s defensive score is the result of opposition.
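The back-of-envelope effect size works out as follows (the slope and the standard deviations are the figures quoted above):

```python
# Effect of opposition on a defenseman's defensive score, using the
# regression slope and spreads quoted in the text.
slope = 15.7      # defensive-score change per unit of opposition line score
sd_line = 0.1     # typical spread of opposition line scores
sd_score = 20.0   # the scores were scaled to a standard deviation of 20
effect = slope * sd_line / sd_score
print(f"{effect:.0%}")  # 8% of the defensive score
```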

Forwards:

It should be no surprise that forwards have similar results to defensemen, most importantly in line quality; however, a forward’s defensive score is even more dependent on opposition than a defenseman’s (this may be another good indicator of how worthless the minus statistics are). Similar to their defensive scores, forwards saw a marked decrease in scoring when they faced easier opposition (checking lines?); as such I will disregard these results, as they provide no useful information on the quality of the forward in question. As for the defensive scores, a regression shows a factor of 23.1 for forward line score, which works out to 2.31 when multiplied by 0.1 (same procedure as above), or 12%, so one can similarly conclude that 12% of a forward’s defensive results are due to opposition.

Looking at the regression results, there’s enough error that I can combine the forwards and defensemen into one group; this results in 10% of the variability of the defensive scores coming from opposition (a nice number to work with, too).

Conclusion

Of course the above conclusions are by no means statistically sound. The data in this study has a “tendency” to stay around the average, and team effects play a large role, so the data is likely showing less variability as a function of the above variables and “reducing” the score; the true number is likely larger (but this is just a guess). However, as lower bounds and reasonable starting guesses, I will go on to rescale the players’ scores based on these assumptions. How different are the rescaled results? Well, it’s not perfect, but most players change by only one or two points, meaning my scoring (from 0 to 100) is reasonably accurate even when accounting for opposition. That doesn’t necessarily imply that plus-minus or “goals for” isn’t related to opposition; it only states that, relatively speaking, “expected goals against” is not statistically significantly affected by opposition, and that, interestingly, “line shooting percentage” decreases against “easier lines”.

August 25, 2006

Goaltending

Goaltending is one of the hardest positions to evaluate and compare, largely because of the extreme multicollinearity between the goaltender and the team. Any goaltender’s results are largely due to what the team is doing, whether it is a goals against average inflated by a poor defense or a large quantity of shots. Even save percentage is affected by the team, via the types of shots the goalie sees, the amount of time spent killing penalties, different score situations, or just chance. Brodeur has had close to the most wins for many years largely because he has always played on a great New Jersey team. Just as a goaltender’s results are a function of the team, one also has to remember that the goaltender affects the team’s results as well; it’s a two-way street. If the goaltender can’t stop anything, then you’ll lose every game no matter how good you are. The obvious question then is: how on earth can one evaluate goaltending?

Goaltenders have traditionally been evaluated on goals against average, save percentage, shutouts, wins and how well their team does. None of these are true measures of goaltending, but they are the best one can do at the present time. Much of goaltending scouting is probably opinion, because of the limitations of the data. This article basically summarizes how poor the data is.

Wins

The most important job for a goaltender is to win the game: if the score is 10-9 and he wins, he wins! The fact that he allowed 9 goals with a save percentage around 0.7 doesn’t matter, because the game was won. The problem is that the goalie wins with the team and the team wins with the goaltender. For example, Legace was considered to have an inflated win statistic because he played for Detroit, while Cloutier (who has always been considered average in Vancouver) was able to capture three 30-win seasons in a row.

In order to analyze wins, I broke them into goaltender factors and team factors. The team controls shot quality and quantity, so I can follow the shot quality model from the Hockey Analytics study, using data from 2002-2006. In order to understand a goaltender’s ability to win games I first have to understand the “non-goaltender” probability of winning games, so I assume both goaltenders are identically average and that the team with the higher expected goals wins. I calculate the probability of winning using a Pythagorean winning percentage on the expected goals, and with a binary logistic regression I can see how similar these results are to the actual results. The regression shows that, without considering goaltending, one can predict game outcomes from shot quality 84.6% of the time.
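The Pythagorean step can be sketched as follows; the exponent of 2 is the classic choice, since the article doesn’t say which exponent it used.

```python
def pythagorean_win_prob(eg_for, eg_against, exponent=2.0):
    """Share of wins implied by expected goals for and against."""
    return eg_for ** exponent / (eg_for ** exponent + eg_against ** exponent)

# With average goaltending on both ends, a team expected to score 3.2
# and allow 2.5 should win roughly 62% of the time.
print(round(pythagorean_win_prob(3.2, 2.5), 3))
```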

To continue, I take the expected goals for each team and rescale them to account for the quality of the goaltending at the other end. Using these results I can see whether the goaltender in question wins or loses, and how that compares to how often the goaltender should be winning, since I’ve removed (to a certain extent) every factor the goaltender cannot affect.

Once you get the results, you have to compare them to random results. Due to small sample sizes I broke the NHL into 4 groups — 0-30%, 30-50%, 50-70% and 70-100% — where the percentages are probabilities of winning a game. If you look at the errors, you will find that in the top group (70-100%) random error explains 95% of the variability, so only 5% can be attributed to goaltender quality (or error); even the second group (50-70%) has 90% random error. The bottom group has approximately 70% random error, meaning that approximately 30% could be attributed to goaltender quality. These results show that goaltending plays a small factor in determining whether a game is won or lost. This would also make a losing team (which relies heavily on goaltending to get wins) make a goaltender appear better than he is, because every win requires stellar goaltending. This basically says that goaltenders (in general) don’t cause losses, but can win the odd game for the team (a well understood phenomenon). One can conclude that one cannot build a winning team around a goaltender.
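The error comparison can be sketched like this: within a bucket, a pure-chance model already produces binomial variance p(1-p) per game, and only the leftover variability can be credited to goaltending. The observed-variance numbers below are illustrative values chosen to reproduce the 95% and 90% splits quoted above, not the article’s actual data.

```python
def random_error_share(p, observed_variance):
    """Fraction of a bucket's variability a coin-flip model explains."""
    return min(1.0, p * (1 - p) / observed_variance)

# Illustrative buckets: (predicted win probability, observed variance)
for p, obs_var in [(0.85, 0.134), (0.60, 0.267)]:
    share = random_error_share(p, obs_var)
    print(f"p={p:.2f}: ~{share:.0%} random error")
```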

It would appear good strategy is to build a good team; then you can throw almost any goaltender in front without too much worry, so long as the goaltender is at least average. This was likely the circumstance in Carolina’s run to the finals. If you have a bad team, a good goaltender is a great way to get wins (that is, if you’re able to figure out which goaltender will get those wins). What does this say about the worth of a goaltender? A goalie should be somewhere between 5-30% of salary; typically teams spend about 10% on goaltending (the correct amount if you’re planning on being around a 50% team). It would appear that Detroit is playing smart by spending very little on goaltending and spending on defense instead; Boston and Thomas are following similar logic. Of course this is all from the perspective of wins. What this says about the value of a goaltender is hard to pin down, because a goaltender is worth more on a bad team and less on a good team. The best way to find out how good a goaltender really is: throw him on a bad team. Take Lalime: hugely successful in Ottawa (a team that doesn’t need a stellar goaltender), but he failed miserably in St. Louis.

How does this help the binary logistic regression? It helps a little: the “0-30% winning percentage” variables for the home and away goaltenders get the regression up marginally, to 85.2%, and both variables are significant to 4 standard deviations. It’s nothing to write home about, but it agrees with the fact that there’s a lot of error in these results, which makes wins a poor measurement.

Save Percentage

What is the real job of a goaltender, though? Most would say they’re not responsible for winning or losing games, but simply for stopping as many pucks as possible (preferably at key times). This is commonly measured by save percentage, or the newer shot quality neutral save percentage. There’s not much I can add here, except to say that this measurement is inadequate, due to a number of complexities. For example, if a goaltender goes on a losing streak giving up a lot of goals in a few games and then wins ten in a row stopping almost everything, he may end up with a bad save percentage, but he will have won 10 games out of 13. Also, one thing people often forget about save percentage is that the error is significant, even for goaltenders who play a lot. For example, a goaltender who faces 500 shots has approximately 2.5% error: a save percentage of 0.908 ± 0.025, or a 95% confidence interval of (.883, .933). Luongo, with approximately 2500 shots, is at (.903, .925), which in terms of quality is a huge difference; in fact that range covers the top 20 goaltenders of 2005-2006. When you consider this, it can explain why you see a new goaltender on top every year (Huet, Roloson/Kiprusoff, Turco, Theodore, Dunham). With this in mind, you can learn all about shot quality neutral save percentages and marginal goaltending at Hockey Analytics.
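The error figures come straight from the binomial formula; a small sketch:

```python
def save_pct_ci(save_pct, shots, z=1.96):
    """95% binomial confidence interval on a save percentage."""
    se = (save_pct * (1 - save_pct) / shots) ** 0.5
    return (save_pct - z * se, save_pct + z * se)

print(save_pct_ci(0.908, 500))   # ~(.883, .933): a +/- 2.5% spread
print(save_pct_ci(0.914, 2500))  # ~(.903, .925), the Luongo-sized sample
```

Five times the shots only halves the interval (error shrinks with the square root of the sample), which is why even a full season of save percentage stays noisy.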

Goals against Average

Since I am talking about goaltending I should mention goals against average. If you’re interested in this statistic, it’s basically save percentage multiplied by shots against average. Now, if someone can explain how shots against average has anything to do with the goaltender, that would be great; otherwise goals against average is a worthless metric, as it provides nothing on top of the existing save percentage data.

Shutouts

Shutouts are hard to analyze simply because they contain so much error and so many team components, and are always small in quantity. Auld was highly criticized this season for not making the playoffs and not getting a single shutout (the Canucks’ first season without one). The simplest way to calculate expected shutouts is to assume a goaltender faces 30 shots; the probability of a shutout is then simply save percentage to the power of 30, or 0.908^30 = 5.5%, so an average goaltender should get 3 ± 3 shutouts (95% confidence interval) — anywhere from 0 to 6 (this sounds about right). However, if you consider 25 shots with an above-average goaltender, (0.920)^25 gives 12.4% (over double), or 9 ± 6. So shutouts can measure quality, but more importantly there’s an exponential decay in the chance of a shutout with each additional shot against. Auld, who averaged 29 shots against with a .902 save percentage, should have 3 ± 3 shutouts. Kiprusoff had a 12.5% chance of a shutout per game and was expected to have 9.2 (he had 10). So shutouts are basically functions of your other statistics plus some error; there is little additional information one can get from them.
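The shutout arithmetic is easy to reproduce; the 60-game season length used for the confidence interval below is my assumption.

```python
def shutout_prob(save_pct, shots):
    """Chance of stopping every shot in a game."""
    return save_pct ** shots

def expected_shutouts(save_pct, shots_per_game, games=60):
    """Expected shutouts and a 95% half-width over a season."""
    p = shutout_prob(save_pct, shots_per_game)
    mean = games * p
    half_width = 1.96 * (games * p * (1 - p)) ** 0.5
    return mean, half_width

print(round(shutout_prob(0.908, 30), 3))  # ~0.055: the 5.5% in the text
print(round(shutout_prob(0.920, 25), 3))  # ~0.124: over double
```

The exponential decay is visible directly: each extra shot against multiplies the shutout chance by the save percentage.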

Conclusion

So now, who is the best goaltender? After doing this study I realized there is technically no good way to determine what makes a good, average or bad goaltender; each measure has its strengths and weaknesses, but there is far too much error. Goaltenders who spend a lot of time on great teams will appear amazing when they could be average or well below average; there’s almost no good method for quantifying their skills. One could likely generalize goaltenders into groups of good, average and bad, but claiming you have the best goaltender, or even one of the top 3, is extremely hard to support, and even these groups, as you will soon see, are completely inadequate.

So here’s where I get my head chewed off here’s my good, average and bad goaltenders of the last 3 years, I realize these goaltenders don’t agree to the norms or current goaltender measure, but that’s ok.

Rankings

Good: Legace, Roy, Ward, Markkanen, Aebischer, Raycroft, Burke, Roloson, Nurminen, Shields, Hasek, Brodeur, Turco.
Average: Vokoun, Kiprusoff, Gerber, Miller, Auld, Cloutier, Toskala, Luongo, Joseph, Denis, Grahame, Johnson, Kolzig, Giguere, Fernandez, Huet, Biron, Snow, Niittymaki, Belfour, Esche, Lundqvist, Hackett, Hedberg, Cechmanek, Garon, Khabibulin.
Below average: Boucher, Thibault, Noronen, Dunham, Conklin, Caron, Dipietro, Salo, Prusek, Weekes, Turek, Osgood, Theodore, Potvin, Fleury, Nabokov, Mclennan, Anderson, Aubin, Emery, Lalime.

Note this list is based on winning percentage and no subjective analysis; goaltenders are ordered by their score, so Vokoun and Kiprusoff are borderline cases, as are Cechmanek, Garon and Khabibulin. What does this tell me? In terms of goaltending, Vancouver has done nothing to help the Canucks win games. If you don’t like these groups you can always check out: LCS Player Ratings: Goaltenders.

2007 will be the year of the goaltenders. With so much movement and change, you can almost count the number of teams with the same starting goaltender on one hand. We’ll see who sinks and who swims; it will be interesting in the end, because it’s always hard to predict which goaltender will do well.

I have a dream

I’ve been processing hockey statistics since 2002 and have gotten better as time progresses. Not only have I gotten better, but so has the reporting: adding hits, blocked and missed shots, and ice time sheets. There is still a long way to go, so here is my hockey statistics wish list, in order of importance and usefulness (of course the order isn’t perfect and I made a few up quickly to finish this one off), starting with the least important, #10:

10. Position of every player at every second. This is the most ambitious goal and won’t happen until after I’m dead (or later). It would allow for detailed analysis of when players shoot successfully, what causes icing, and effective power-play and penalty-kill techniques (what works, what doesn’t — does chasing the point man help or hurt the penalty kill?).

9. More accurate times — hockey is all about time. Even one extra digit would make a remarkable difference in inferences. Rebounds, for example, are approximated to around 3 seconds, but a 1-second rebound is harder to stop than a 3-second rebound, and a 0.5-second rebound harder still.

8. Subjective indicators, such as pressure, a tough save, anything that might be interesting.

7. More details about face-offs: not only who wins the face-off, but also the battles that often occur afterwards.

6. One thing that would be neat and not too challenging is a tool to measure the speed of shots on goal; this would show how speed affects save percentage and which players shoot the fastest.

5. One thing that is never recorded but easily could be is the offended player when a penalty occurs — for example, the guy who was high-sticked. Some players are better at drawing penalties, and fans and prospective employers should be able to find out who.

4. Possession, as in soccer, is an important aspect of hockey. Face-offs, puck battles and giveaways all measure it, but the NHL does not record the most important of these events: puck battles. Most hockey fans and commentators agree that losing the puck battles often means losing the game. The NHL is about hard work, and that’s where a lot of it occurs; the NHL should record puck battles (start time, end time, winner, loser). This would fill the gaps on the score sheets about who has the puck at a given time and allow better possession analysis.

3. Shots should mention where on the net the shot was headed. This would be difficult to measure, but it could be as simple as breaking the net into 9 regions. It would show individual goaltenders’ weaknesses and shooters’ strengths, and would also make rating shot difficulty easier.

2. This ranks high on my importance list, but the chance of it happening is extremely low. Passing plays a central role in hockey, and one cannot fully analyze the game without knowing how many passes are made and when. For example, I suspect a shot that occurs a split second after a pass is harder to stop. In order to record a pass, off-stick and on-stick times should be noted (to measure speed), and the zone of play should be recorded.

1. Topping the list is better shot data; currently the NHL provides just distance (from the backboards?). I need an exact location (x and y) on the ice. One would be able to make a 3D graph representing scoring probabilities (it would look really cool), but more importantly it would make the shot statistics much more accurate, and goaltender quality measures would likely become useful. Preferably, shots would be recorded as a radius and an angle from the net, although x and y works as well.

Many of these events could easily be accounted for with the use of GPS sensors on players, sticks and the puck, recorded to a database at some given interval.

I do have a nice long post in the works; I’m editing it tonight...

August 14, 2006

Icing

The NHL made one change to the icing rules in order to decrease the number of icings. The rule was quite simple: if you ice the puck you cannot change the players on the ice, but the opposing team can change lines (so a tired line faces a fresh line). The concept was good, but it was made less effective by the fact that a number of icings include a commercial break, giving both sides a two-minute rest. The results were stunning, however: the new rule decreased icings by 50% (from 10 per game to 5 per game).

In order to understand scoring after an icing, one should first understand the offensive zone face-off. There’s a 55% chance of winning an offensive zone face-off. From there, there’s around a 66% chance of getting a shot off, with a 70% chance of hitting the net. If the puck does hit the net, it has around a 10% chance of going in. This all multiplies out to around a 2.5% chance of scoring off an offensive zone face-off.
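Chaining the stage probabilities above reproduces the figure:

```python
# Stage probabilities for an offensive zone face-off, from the text.
p_win_draw = 0.55          # win the face-off
p_shot = 0.66              # get a shot attempt off
p_on_net = 0.70            # the shot hits the net
p_goal = 0.10              # an on-net shot goes in
p_score = p_win_draw * p_shot * p_on_net * p_goal
print(f"{p_score:.1%}")    # ~2.5% chance of a goal off the draw
```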

Was this decrease in icings warranted? That question is hard to answer, because players in disadvantageous situations (stuck on the ice for two minutes) would avoid icing the puck. However, if you look at goals shortly after icings (30 seconds or less), there were 215 in 2005-2006 vs. 304 in 2003-2004 — only about 30% fewer, despite half as many icings. As a scoring rate per icing it was 3.1% vs. 2.5% (with a standard deviation of about 0.2%), a roughly 25% increase over years past. In other words, scoring per icing increased by 25% while teams decreased their icings by 50%. Whether you would be better off icing or not icing is the tricky question that I can’t answer.

August 12, 2006

Penalties – The bigger picture

Earl Sleek asked a number of good questions in regard to the new season; one could summarize the penalty questions as “what changed?” and “how has it changed?”. First, what changed in 2005-2006: it should be no surprise that interference calls were up, and hooking was three times higher, accounting for most of the difference. High sticking was statistically lower (I suspect that either referees were too busy looking for hooking to see high sticks, or they didn’t call them because they had already called too many penalties). Tripping and holding were up 50%, and roughing and cross checking were down around 50%. Goaltender interference was up slightly.

So how did it change?

Power-plays accounted for one third of the offense in 2005-2006, so some might ask: what percentage of power-plays are the result of referees’ conscious choices? Referees have a number of things on their minds. Arguably they want to appear fair and unbiased (the best way is to call penalties when you see them and not think about what time it is). Referees all know that if they appeared to favor a certain team they would likely lose their job (a six figure salary as well). Management likely has some set rules for measuring bias that may not actually measure bias, and these may contribute to the results of this study. Power-plays are interesting mostly because the rules of hockey are extremely subjective. This subjectivity allows the referees to mask almost anything, from unjustified penalties to cheating and favoritism.


In hockey there are two types of scores with regard to penalties: the actual score of the hockey game (goals for each team), and the penalty score (the number of penalties against each team). In general, if the score is uneven, the losing team might be more willing to take risks to get a goal; similarly, a winning team takes less risk (at worst the game will be tied) when weighing a penalty.


Score Differential


The simplest way to account for these differences is to run a regression on the mentioned factors; this is reasonably straightforward, and one can answer all the questions at once with one equation. There is one significant problem with the data: some cells have a lot of games in them (regular season tie games), while others have very few (playoff games up or down by two goals). I don’t want the regression chasing the cells with very few results, so I approximated the error with the binomial distribution (not quite accurate, because two penalties can be called at the same time) and used one over the standard deviation as a weight, so that cells with small standard deviations count more. I also grouped scores of +2 with -2, and +1 with -1 and 0, as I felt it would increase the amount of data without changing the results (at least from what I’ve seen of hockey and penalties). I considered a number of variables — score differential, year, division, home or away, west or east, and period — to predict the number of penalties per hour while up or down a goal. For each factor there are two things of interest: the constant and the slope, since the model is essentially one variable (score differential). One can cross any two variables to produce (a lot of) interaction terms; I considered a few, including every variable individually crossed with score differential. This is not a completely scientific process; with more time and effort one could likely find more variables that correlate, but this is the most useful model I found:


penalties/hour = 5.17
+ 0.297 × score differential
+ 0.443 × away
- 0.306 × east
- 1.71 × period 3
+ 1.40 × 2005-2006 season
- 0.949 × playoffs
- 0.158 × score differential × away
Virtually none of the cross products were statistically significant, except for the away term (refs are less likely to help the away team than the home team). The only difference in the 2005-2006 season is that you see 1.4 more penalties per hour; the referees apply the same techniques to balance the score (this shouldn’t be a shock). The away team gets about half a penalty more, or 0.44 × 0.17 × 1230 = 92 goals per season, which is 3 goals per team, or approximately 18 extra home wins distributed among the 30 teams. The interesting thing is that there does not appear to be any real home ice advantage in the NHL beyond this (7 games above .500 across 3 seasons). I found it interesting that the east term is larger than the west; I don’t have an explanation at this time (although that term has the most error). What shocked me was the extent to which period 3 had fewer penalties: referees call 1.7 fewer penalties per hour in the third period (and that’s on top of the playoff and new-rules effects).

Penalty Differential

However, as mentioned in the comments to my last piece, the scoreboard isn’t the only score that matters; most fans know by now that referees keep track of the balance of power-plays and try to even them up as well (in order to appear fair). Using the same variables as before, I came up with this equation:

penalties/hour = 5.74
- 0.600 × penalty differential
+ 0.786 × away
- 0.350 × east
+ 1.37 × 2005-2006 season
- 0.733 × playoffs

You should note there are no cross products; in other words, none were statistically significant, so referees have been using the same rules and simply giving out more penalties in each given situation: 2005-2006 is the same as 2003-2004 in regards to penalty balance. However, one should notice the significantly larger coefficient on the main term, penalty differential: the power-play score plays a much larger role than the scoreboard in determining who gets the next power-play, which should not be a surprise.

Comebacks

The way the game is refereed has an effect on the outcomes of games, as it makes it easier to come back, but this has not changed over the last few seasons. Interestingly, if one compares against 2003-2004 one might be able to claim a small change in the ability to come back; however, the comeback rate has remained relatively stable over three years, so comebacks have not changed (it’s a figment of your imagination). The shootout may be affecting comeback rates (I did not include overtime in this statistic, as that has changed too much). Higher scoring and more penalties have not affected the rate at which teams come back: it has remained steady at around 28% (the team that scores first loses, in a non-overtime game, 28% of the time). If you include games where the trailing team ties to reach overtime (not considering what happens there), you get the same stability (around 46%). In other words, while the NHL would like you to believe they’ve made the game less predictable by increasing comebacks, the rate has in fact remained steady. Using the Poisson toolbox, I approximated the comeback rate from the first goal at around 40% to tie and 25% to win, so observed comebacks are inflated by about 2.5%, or 30 extra comeback wins per season (one per team) compared to Poisson predictions. One should consider that the better team is more likely to score first, and it’s harder to come back against better opposition, but this shouldn’t affect the results substantially. I should note: I predicted 18 extra wins as a result of referees favoring the home team, which is pretty close to the 30 extra comeback wins (maybe the other 12 comebacks occur away from home).
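The Poisson baseline can be checked with a quick simulation. This is a hedged sketch with an assumed league-ish rate of 2.7 goals per team per game; it uses the fact that, with i.i.d. uniform goal times, the first goal belongs to a team with probability proportional to its goal total.

```python
import math
import random

def comeback_rate(lam=2.7, games=100000, seed=1):
    """Fraction of decided regulation games lost by the first scorer."""
    rng = random.Random(seed)

    def poisson(l):
        # Knuth's method; fine for small rates like goals per game.
        threshold, k, p = math.exp(-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    comebacks = decided = 0
    for _ in range(games):
        a, b = poisson(lam), poisson(lam)
        if a == b:                       # scoreless or tied: skip
            continue
        decided += 1
        loser = min(a, b)
        # Probability the eventual loser scored the game's first goal.
        if rng.random() < loser / (a + b):
            comebacks += 1
    return comebacks / decided

print(round(comeback_rate(), 2))  # roughly 0.27 under these assumptions
```

With symmetric teams this lands near the ~25% comeback-to-win figure quoted above; adding a quality gap between the teams would pull it down somewhat.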

In summary, it would appear that all the NHL has done differently in 2005-2006 is call more penalties, resulting in more goals; they have not changed the structure of the calls. And it has not helped even strength production significantly.

August 10, 2006

Hubacek -5



I've always been frustrated by the NHL's poor (public) data collection, presentation and consistency. Things like French score sheets for Montreal home games have almost no benefit, but make processing a huge challenge (publish both if you have to). Adding features midway through a season is also frustrating. But when I was going through the 2002-2006 data I bumped into potentially the worst sort of problem, on the play-by-play of game 266 of the 2002-2003 season. Look here to see the negative shirt number listed in the on-ice lists:
81 2 05:44 SHOT NSH EV -5 HUBACEK, Snap, 16 ft
and
146 3 11:05 GOAL CHI SH 26 SULLIVAN, A: 13 ZHAMNOV, Snap, 13 ft
CHI: 26 SULLIVAN , 2 MIRONOV , 41 THIBAULT , 13 ZHAMNOV , 8 POAPST

NSH: 1 DUNHAM , -5 HUBACEK , 21 JOHANSSON , 18 HALL , 11 LEGWAND , 4 EATON

First off, who is Hubacek? According to hockeydb, he hasn't played a game for Nashville in his life.
Then there's this: "01-11-02-- Nashville Predators traded Yves Sarault and a conditional draft selection in 2003 to the Philadephia Flyers for Jason Beckett and Petr Hubacek." Secondly, I want his jersey: negative 5!
He had a pretty good game for "not playing": a good shot (16' snap), although a shorthanded goal was scored against him (what on earth was this guy doing on the powerplay?).

What I suspect, based on the shift charts, is that Haydar was a late scratch (he didn't play a single minute) and was replaced with Hubacek (how the scoresheet people couldn't get his name listed properly is beyond my understanding). This would mean Nashville played the game with seven defensemen.

August 9, 2006

wanna make a million dollars?

These are the results for 2005-2006 only. I will collect the other data too, but I’d rather study this first. Before going further: I removed two types of powerplays from these results, roughing and puck over the netting.

  1. How much more likely is a team down by one going to get a power play than take a penalty?

They are 14% more likely to get a powerplay (7.12 PP/Hr vs 6.24 PP/Hr) (This is also statistically significant…)

  2. Same question, but only on the road.

It doesn’t seem to make a difference [compared to question 1], as it’s 13% (less, but not statistically significantly less). (7.28 vs 6.44) They just get more penalties in general.

  3. Same question, but only Eastern Conference teams.

Same as 1 or 2? I’ll go with 2, which is 16% or (7.23 PP/Hr vs 6.25 PP/Hr).

  4. Same question, but only in the 3rd period.

In the third period it jumps to 18% (relative to question 1, not 2 or 3…), or (5.68 PP/Hr vs 4.80 PP/Hr - notice how much lower both rates are as well…). This is significant.

  5. How long into a power play is the average PPG scored?

This is reasonably easy to approximate: take total powerplay time and divide by powerplay opportunities (over the league). In other words, someone else can figure out the exact answer. The quickest approximation: 8103576 seconds of powerplays over 14394 powerplays works out to about 1:34. Now, if you were to look at only powerplays where goals are scored, this would be more complicated…
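The arithmetic is just total power-play time over opportunities, formatted as minutes and seconds; a tiny sketch (the totals here are made-up round numbers, not the league figures above):

```python
def avg_powerplay_length(total_seconds, opportunities):
    """Average power-play length (total PP time / opportunities), as m:ss."""
    secs = total_seconds / opportunities
    return "%d:%02d" % (secs // 60, round(secs % 60))

# Hypothetical totals: 1,440,000 seconds of PP time over 14,400 opportunities.
avg = avg_powerplay_length(1_440_000, 14_400)  # "1:40"
```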

  6. How long after a faceoff is the average goal scored? Is there a cluster shortly after faceoffs?

Basically, goals after face-offs occur in an exponentially decaying pattern (with an initial lag, since scoring in the first couple of seconds is nearly impossible), as seen below; this is what you would expect when scoring follows a Poisson distribution.
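A quick simulation of that claim (with a hypothetical combined scoring rate of about 5.4 goals per 60 minutes): if goals arrive as a Poisson process, the wait from a faceoff to the next goal is exponential, so each successive time bucket should hold fewer goals than the last:

```python
import random

def waits_after_faceoff(n=100_000, rate_per_min=5.4 / 60, seed=7):
    """Simulated waits (in minutes) from a faceoff to the next goal,
    assuming goals arrive as a Poisson process."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_per_min) for _ in range(n)]

waits = waits_after_faceoff()
# Count goals in the 0-5, 5-10 and 10-15 minute buckets after the faceoff.
buckets = [sum(1 for w in waits if lo <= w < lo + 5) for lo in (0, 5, 10)]
# Each bucket is smaller than the previous one: exponential decay.
```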

  7. Same question, but only after icings.

One has to remember that in hockey, an offensive zone win or loss just changes the amount of time it takes to get to the net, so it should not affect the distribution significantly, just the average. It appears to take about 7 seconds to get into the offensive zone from the neutral zone. You can see this graph has a few extra “humps”. Much of the “humpiness” comes from the fact that there were only 540 goals after icings, compared to 7426 goals overall (ok, I’m missing a few).

  8. Given two specific teams and a tied score, who is likely to score first?

The better team.

  9. How likely is it that the other team scores next?

Depends on how good each team is; this is answered at Hockey Analytics in their in-depth article, the Poisson Toolbox.

  10. Compare pre-lockout to post-lockout, regular season to postseason, etc.

I’ll add it to my to-do list. I should add: studying the postseason is extremely difficult, as teams play the same team four to seven times, and a lot of hockey is determined by opponents.

Anaheim

Due to “popular” demand I threw together the Anaheim tables. I always find it interesting when goaltenders appear to perform worse offensively with all players; seriously, this can’t be error, can it? A few players worth mentioning:

Lupul, traded to Edmonton, is nothing to get excited about; his line mates appear to do all the work. A question for the Anaheim coach, however: why was Lupul played with Getzlaf on the power-play, but not at even strength?

Selanne: no question that he was an important part of Anaheim’s offense this year; Selanne helped every player he played with (the only exception is Marchant, who spent almost no time on the power-play). You didn’t need a complicated table to find that out.

McDonald was also an offensive force to be reckoned with, but he spent over 70% of his time with Selanne over the course of the season, so their scores are highly correlated; determining who is better using this method would be challenging. That being said, both performed better as a tandem, so they complemented each other.

Salei is an interesting player, helping almost everyone offensively at even strength and hurting everyone on the power-play; he doesn’t appear to have a set partner I could look at.

Sykora won’t be missed; he provided no “extra” offense at either even strength or on the power-play. Maybe that’s why he’s still unsigned.

August 8, 2006

Strength of Opposition: The Question of Existence

Hockey sets itself apart from many sports in that coaches can choose, to a reasonable extent, which players play against which opponents. In baseball you can’t pick your pitcher; in basketball you can’t pick your opponents, as you don’t have shifts. In soccer the same 11 players play for almost the entire game. In individual sports your opponent’s strength is determined by how far you advance, or by seeding. In football there are no lines, just offense and defense, and the players on either side stay relatively constant (ok, I don’t know much about football). In fact, I’d be hard-pressed to think of a sport with similar dynamics in terms of shifts and coaches determining the strength of opposition their players face. But is this really the case: can coaches really determine opponents, or do matchups occur as a random mixture?

Naturally, with such a phenomenon, armchair hockey fans will often claim a player is doing poorly because they face too difficult opposition, or doing well because their opposition is easy. You will often hear fans cite “difficult minutes” as a factor raising or lowering a player’s value. I will attempt to figure out to what extent opposition affects players’ performance. Personally, I would like to see results where opposition is irrelevant, as it would make hockey studies a lot simpler; however, I expect to see some players being protected and others being “abused”.

A few examples of people discussing strength of opposition:

  • “Donovan's recent three year average is 1.5 ESP/hr. So Friesen's an upgrade if you assume they have comparable GAA and strength of opposition. However, that rate upgrade costs a marginal $975K. I'm not sure that's worth it, unless Friesen can do that against tougher opposition or if he can keep down the GA at a better rate than Donovan.” (RiversQ).
  • “Horcoff appeared to have considerably more difficult strength of opposition” (RiversQ).
  • “All this number crunching is nice, but it's not clear that the arbitrators are looking at things like strength of opposition” (Robert Cleave).
  • “MacT creates a Samsonov-Stoll-Hemsky line that gets all the easy minutes.” (Dennis).
  • “Well, he has been getting lot of easy minutes compared to Salei and Vish, i.e. he plays more against opponents 3rd and 4th lines.” (Pepper).
  • “That PPV game was worth every nickel. It was Pronger's signature game this year, IMO. He played a ton of difficult minutes in a place the Oilers hadn't done well, and was, even by his standards, a calming presence” (Robert Cleave).

My methodology is probably not rigorous enough, but it should be simple enough for most to understand. I rated offense via shooting percentage and defense via expected goals against per minute (given an average goalie). I redistributed these values on a 0 to 100 scale, using 50 as average with a standard deviation of approximately 20 (so 68% of players fall between 30 and 70). This rating should not be significantly affected by opposition. I would expect the most protected player to see opposition offense of about 25 and defense near 50. So I processed every second of every player’s ice time (it took 43000 seconds) and scored their opposition on a linear scale: the sum of the five opposing skaters’ scores divided by five (goalie not counted). An offense example: Ohlund (56), Baumgartner (58), Morrison (67), Naslund (60), Bertuzzi (68) would score 61.8, so if you only played against this line your offensive opposition score would be around 62. To find a player’s score you sum the opposition scores for every second on the ice and divide by total ice time. The method for defense is identical. Note: I will use the short forms offensive opposition score (OOS) and defensive opposition score (DOS) for the rest of this article.
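The scoring scheme above can be sketched in a few lines (a simplified illustration; the single 45-second shift is hypothetical, and the ratings are the ones quoted in the example):

```python
def opposition_score(shifts, ratings):
    """Time-weighted average of the opposing skaters' ratings.

    shifts: list of (seconds, [opposing skaters on the ice]).
    ratings: player -> 0-100 rating, 50 = league average.
    The goalie is simply never included in the opponent lists.
    """
    weighted = total = 0.0
    for seconds, opponents in shifts:
        weighted += seconds * sum(ratings[p] for p in opponents) / len(opponents)
        total += seconds
    return weighted / total

ratings = {"Ohlund": 56, "Baumgartner": 58, "Morrison": 67,
           "Naslund": 60, "Bertuzzi": 68}
# One hypothetical 45-second shift against that five-man unit:
oos = opposition_score([(45, list(ratings))], ratings)  # 61.8
```

A full season is just many such shifts against many different units, which is why the averages end up so tightly clustered.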

After spending hours processing the data, the results were pretty innocuous. I got distributions with much smaller standard deviations (1.5) than the players’ individual scores (20). This basically says that players play against a diverse group of opponents, some good and some bad; in other words, most strength of opposition averages out to nothing significant. There are two informative graphs: the first is OOS vs. even-strength ice time; the second is DOS vs. even-strength ice time. These graphs show the error collapsing, and there is basically not much, if anything, to be gained here.


With the differences mentioned above showing little, if anything, useful about strength of opposition, I needed to find out how much error was reasonable for this problem. Assuming a shift length of 54 seconds (the average shift length), I ran a script that played a player against random opponents for 500 shifts, a thousand times over. The standard deviation was 0.4, compared to 1.5 for OOS and 2.3 for DOS. With this in mind, I can only explain approximately 30% of OOS and 80% of DOS (the rest is going to be random error).
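The random-opponent baseline can be reproduced with a short Monte Carlo (a sketch under the article's stated assumptions: 500 equal-length shifts, five opposing skaters per shift, and a hypothetical league whose ratings are drawn from a normal distribution with mean 50 and SD 20):

```python
import random
import statistics

def random_opposition_sd(ratings, shifts=500, seasons=1000, seed=42):
    """SD of a season-long opposition score when each shift's five
    opponents are drawn at random from the league."""
    rng = random.Random(seed)
    season_means = []
    for _ in range(seasons):
        shift_scores = [statistics.mean(rng.sample(ratings, 5))
                        for _ in range(shifts)]
        season_means.append(statistics.mean(shift_scores))
    return statistics.pstdev(season_means)

rng = random.Random(0)
league = [rng.gauss(50, 20) for _ in range(600)]  # hypothetical league
sd = random_opposition_sd(league)
# Comes out near 0.4: the per-shift SD of 20/sqrt(5) shrinks by a further
# factor of sqrt(500) once you average over a season of shifts.
```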

Looking at some individuals: many Phoenix players were given low DOSs, which reminded me that results in this year’s NHL were largely driven by which teams a player faced. Over 60% of the variability in DOS can be attributed to which division you are in; the rest may depend on the teams you played against in the other conference, or on injuries. The regression can be improved by adding plus-minus and/or the individual’s defensive score, but this is likely just the OOS and DOS affecting things like plus-minus. This simplifies strength of opposition analysis a lot, because you can just look at the caliber of the teams played against to determine if a player is under- or over-rated.

Doing the same for OOS is much less effective (7% of variability explained); using more or fewer variables doesn’t help. It would appear regression may be the wrong technique here; there must be a better way to look at this. When you remove the team factor you’ll quickly notice the error drops substantially. If you only include players who played more than 80 games on the same team, you will see the error match that of the randomly generated player data.

This likely won't convince you, but I should add that I ran a number of regressions and there were a few "notable" variables; in general, it appeared to be the team you played against that mattered, and not the players.

Players do face varied opposition, but this variability stems from the teams they play, not from being protected by a coach. There is no data showing a significant coaching effect on difficulty of opposition, either offensively or defensively. This means the concept of “strength of opposition” can only relate to the team a player is facing, and is not affected by playing on a different line. Coaches don’t protect players via line matching (they may line match for other purposes).

In the future, I hope to further analyze whether coaches "protect" players by using other players on their own team to create "balance".

August 7, 2006

Power-play Analysis

In order to conclude my research on shots for and against, I must include the power-play, which is critical in determining a team’s success. As discovered in the even-strength offense article “Does the sniper exist?”, shots are not generated by players per se, but are rather simply part of the game; the question is what a player does with the shots they get.

I’ve found it more difficult to look at offense than defense, as players have influence over more of the factors that determine goal production, but none of them make great statistical tools for understanding offense. Goals represent a small sample over a large selection of time. Shots only measure how often a player shoots, not how good those shots are. You can look at how good the shots a player takes are; however, this isn’t worth anything if they can’t score. You can also look at shooting percentage, which is just as statistically noisy as goals. Or you can look at how well players perform on a given shot compared to their peers. One can also extend all of this to include all shots by line mates (increasing the shot sample roughly five-fold). All of these things combined could likely separate good from bad; however, as individual statistics they are almost useless. The other question is how to combine them, and at this point I have no clue. All that said, I will attempt to measure offense via a few of these factors.

When analyzing the power-play, the first question is one of shots per minute. Each player was compared to their team’s shooting rate, and the resulting error was well within binomial error; as such, players shoot based on how often their team shoots, though what that means is hard to say. One can therefore conclude that players shoot more or less based on coaching (team factors). That doesn’t say whether shooting more or less is good or bad coaching. However, when one compares expected goals vs. actual goals one sees a distinct pattern, similar to even strength, of positive error growth: coaches choose players for the power-play who score more than average, based on shooting percentage. You can see how much better players do than expected by looking at shot quality neutral shooting percentage [1].

Since I scaled out the team factor, I should at least note a few facts about team shooting percentage. If you do a regression of goals per hour against shots per hour, you get a very linear relationship between the two. If you look at shooting percentage, you will find there is no reason to assume shooting more affects your shooting percentage. In other words, it would be hard to argue for shooting less. That being said, there are a number of teams with very successful power-plays that shoot very little, and unsuccessful power-plays that shoot a lot. These results are strange in my opinion, because they basically mean teams should shoot more no matter what. Of course each team has different players, but players relative to their teams don’t shoot more or less. The only cost of a shot for is the loss of puck possession (and what’s that worth?).

Since we know that the only thing that matters is shooting percentage, here’s a list of the top players’ shooting percentages on the power-play. I have a few other lists as well, including shot quality neutral shooting percentage and a shot quality for list (this method has a lot less error).

[1] SQN%, or shot quality neutral shooting percentage, is a measure of offense. It's calculated as: (goals for/shots for)/(expected goals for/(league average shooting percentage*shots for)), or equivalently (goals for * league average shooting percentage/expected goals for). I don't expect this to necessarily make sense, so you can go to Hockey Analytics and read their Shot Quality article.
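The footnote's formula translates to one line of code; the numbers below are hypothetical, just to show the scaling:

```python
def sqn_shooting_pct(goals_for, expected_goals_for, league_avg_sh_pct):
    """Shot quality neutral shooting %: (goals for * league average
    shooting %) / expected goals for, per the definition above."""
    return goals_for * league_avg_sh_pct / expected_goals_for

# Hypothetical: 12 goals on shots the quality model expected 10 goals from,
# with a 9% league-average shooting percentage. The result, 10.8%, says this
# player converted 20% better than an average shooter on the same shots.
sqn = sqn_shooting_pct(12, 10.0, 0.09)  # 0.108
```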

August 4, 2006

Does the sniper exist?

When studying defense, it didn’t take long to realize that defensemen are unable to make a statistically difficult shot easier or harder through anything within their control; the obvious question that follows: can a forward do anything beyond getting into the right position and taking a good shot? The obvious answer should be yes; players have different aim and skills. If a player can’t hit the net to save his life, then he is obviously doomed. I should note first that I am looking at plus ratings, and as such I’m looking at all shots by all players while that person is on the ice. This is beneficial because it measures passing and shooting as well as positioning, etc. It also reduces the error, as each player has about five times more shots attributed to them. The other question pertaining to shooters is their ability to get shots: who or what is responsible for shooting more?

The problem with snipers is choosing who is one and who is not. The question of the sniper is one of quality, and does not depend at all on quantity. So if a player has a better shooting percentage than expected, we have a sniper. Hockey Analytics derived a shot quality neutral save percentage; the same can be done for shooters, call it the shot quality neutral shooting percentage (SQN%) [1]. What this does is compare a player’s shooting percentage to the league’s shooting percentage on the same shots. When you look at a list arranged by SQN%, you will see a lot of good players on top. The question is one of error: what level of shooting percentage makes sense for what quantity of shots? How many snipers does each team have? How many snipers are there in the NHL? I’ll use a 95% confidence rate. There are 120 players in this group, 50 of which are included due to error (you pick the 50…). This obviously isn’t perfect, but you can see this list; red indicates the opposite of snipers (statistically, if snipers exist the anti-sniper must also, and this list might be more interesting to some). When you think of the red, think of players who get chances but don’t score; these players are frustrating to watch. The sniper scores with very few chances (or scores a lot with a lot of chances). As with the plus statistic there is the problem of line mates; however, you will notice that an entire line does not show up, and often it is one key player (thanks to lines changing periodically). Just because a player is not on this list does not necessarily make them good or bad: if they can get more shots on net they are more valuable, and fewer shots makes them less useful.
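The 95% cutoff above amounts to a binomial significance question: given this many shots, is this conversion rate surprising for an average shooter? Here is one way to draw that line (an exact one-sided binomial test, which may differ from the method actually used; the player numbers are hypothetical):

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def looks_like_sniper(goals, shots, expected_pct, alpha=0.05):
    """True if scoring this often on these shots would be surprising,
    at the 95% level, for a league-average shooter on the same shots."""
    return binom_sf(goals, shots, expected_pct) < alpha

# Hypothetical: 18 goals on 120 shots whose quality implies a 9% expectation
# clears the bar; 12 goals on the same shots is indistinguishable from average.
```

The anti-sniper test is the mirror image: the lower tail P(X <= k) under the same binomial model.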

So then the question of shots for comes into the mix. If you can’t score well with a few shots, why not shoot more? This strategy isn’t that bad, and considering that about 90% of players are in this group, it would make sense to shoot a lot. But can players control how much they shoot? Again the answer is yes, as the error is larger than expected. A similar list can be made for these players; players have more control over quantity than they do over quality. The 95% group contains 188 players, over 20% of all players, much higher than the 12% for shooting quality. That being said, if you scale out the coaching (team) factor, the results appear very similar to shots in terms of distribution; you can see the list of players beyond 95%. The question, of course, is where the rest of the error comes from. The two lists (green and red) don’t necessarily contain the most skilled players.

So, for a further analysis of shots, I compared each player to their team’s rate of shots for and against simultaneously. This basically means that if you account for different coaching, teams will get the same net shots during a shift; a player in the second red list was playing poorly defensively and allowing a lot more chances in their own zone, while the green players are better defensively. Another interesting conclusion one can draw from this is that the best defense (preventing shots) is a good offense. This makes shots a measure of defense, and as such one can measure offense purely on how well players do with the shots they get: shot quality neutral shooting percentage. This doesn’t mean shots for don’t matter (plus per minute is an excellent measure of a player); however, SQN% is an excellent way to see who the best scorers are. I’m not sure which players affect shots for and against (defensemen or forwards; does defense determine which zone you spend your time in?), but this is certainly an interesting insight into the game. A player with a good SQN% can make up for a poor offense, because they can score more with fewer shots.

I have separated shots into two components: shots for, a measure of offense and defense, and shot quality neutral shooting percentage, a measure of a player’s ability to get a better-than-average shot off from a given location. What if a player just gets into good positions and shoots at an average rate? That should be just as good. So one can look at plain shooting percentage (which is already available), though this doesn’t always measure a player’s ability to score, or look at a line’s shooting percentage alongside this list.

[1] SQN%, or shot quality neutral shooting percentage, is a measure of offense. It's calculated as: (goals for/shots for)/(expected goals for/(league average shooting percentage*shots for)), or equivalently (goals for * league average shooting percentage/expected goals for). I don't expect this to necessarily make sense, so you can go to Hockey Analytics and read their Shot Quality article.