golfaddict1 Posted December 29, 2018 (Author)

Taking it one more step: 40% weight for avg, 40% for pts, and 20% for total games vs. the top 350. Composite top-25 teams only are included in these lists.
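As a rough illustration of the 40/40/20 weighting above, here is a minimal Python sketch. The component names, the 0-100 scaling of the top-350 term, and the example numbers are assumptions for illustration, not the actual spreadsheet formula.

```python
# Hypothetical sketch of the 40/40/20 composite described above.
# Component names and scaling are assumptions, not the actual spreadsheet.

def composite_score(avg_rating, points_score, games_vs_top350, max_top350_games=10):
    """Blend three components: 40% average rating, 40% points score,
    20% share of games played against top-350 opponents."""
    top350_component = min(games_vs_top350, max_top350_games) / max_top350_games * 100
    return 0.40 * avg_rating + 0.40 * points_score + 0.20 * top350_component

# Example: a team with a 92.5 average, an 88.0 points score, and 7 top-350 games
print(round(composite_score(92.5, 88.0, 7), 2))  # 86.2
```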
golfaddict1 Posted June 23, 2019 (Author)

@badrouter here's my five-minute creation from one evening last year (after my ELO attempt failed due to the time commitment). Fisher's algorithm started from a Massey Ratings foundation, with criteria added and tinkering as the years go by. I start with a CP and Massey foundation and use a points system with specific criteria, plus a little wiggle room in some cases. At times I decided to give more points for a loss than for any win all season; IMG losing to MD last year is one of the very few examples. Otherwise it's by the criteria, to the fraction. No cards up my sleeve. Teams also receive negative points for bad losses; that's a tinkering-in-progress area for 2019. I'll start this up in October, most likely.
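A minimal sketch of the points-per-game idea described above, with negative points for bad losses. The thresholds, divisors, and example ratings are invented for illustration; the actual criteria are not published in the thread.

```python
# Hypothetical sketch of a points system where wins earn points scaled by
# opponent quality and a loss to a weak opponent costs points.
# All thresholds and point values here are assumptions.

def game_points(won, opponent_rating, bad_loss_threshold=15.0):
    """Award points scaled by opponent quality; a loss to a team rated
    below the threshold costs points instead of merely earning none."""
    if won:
        return max(opponent_rating, 0.0) / 10.0                 # better wins earn more
    if opponent_rating < bad_loss_threshold:
        return -(bad_loss_threshold - opponent_rating) / 10.0   # bad-loss penalty
    return opponent_rating / 20.0                               # quality loss still earns a little

season = [(True, 42.0), (True, 28.5), (False, 8.0)]  # (won?, opponent rating)
print(round(sum(game_points(w, r) for w, r in season), 2))  # 6.35
```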
Eddyr2 Posted June 23, 2019

No clue what that means, but MD is #1 in most of those spreadsheets, so...
golfaddict1 Posted June 23, 2019 (Author)

Edit: Last two entries were final tinkering decisions.
golfaddict1 Posted June 23, 2019 (Author)

1 minute ago, GardenStateBaller said: Thx for your efforts. Look fwd to comparing your results to @HSFBA's throughout the season. Can you get Fisher to provide his top 1000 ratings weekly?

It would be great to add his algorithm into the mix and get three opinions vs. two, magnified at the top 350 level (and for negative points on a loss, higher ratings would be needed). I'll take whatever he can offer; starting in October would be nice. For the schools with some larger variances (Warren Central last year, for example), a third set of data points might help smooth out the differential.
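A small sketch of what blending a third rating source could look like for the variance problem mentioned above. The source names and numbers are placeholders, and the variance cutoff is an arbitrary assumption.

```python
# Hypothetical sketch: blend three rating sources and flag teams whose
# ratings disagree sharply (the Warren Central example above).

from statistics import mean, pstdev

ratings = {
    "Warren Central": {"calpreps": 38.2, "massey": 52.1, "fisher": 45.0},
}

for team, sources in ratings.items():
    values = list(sources.values())
    blended = mean(values)
    spread = pstdev(values)
    flag = "  <-- high variance, worth a closer look" if spread > 5 else ""
    print(f"{team}: blended {blended:.1f}, spread {spread:.1f}{flag}")
```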
Sammyswordsman Posted June 23, 2019

Any category that has the devil ranked in the Top 25 needs to be relooked at and tweaked.
Extremely Humble Posted June 24, 2019

4 hours ago, Sammyswordsman said: Any category that has the devil ranked in the Top 25 needs to be relooked at and tweaked.

https://docs.google.com/spreadsheets/d/e/2PACX-1vTfMyz4JiUv8IYLAjMnqT_E-KAGPxHUkHEu5HvEhSY8UZ9nM3As8r8PzFxQ4OB9Svvw0TcJVtqpCTT_/pubhtml

you love to see it
badrouter Posted June 24, 2019

23 hours ago, golfaddict1 said: @badrouter here's my five-minute creation from one evening last year...

None of this addresses the fundamental problems: incorrect rosters/impact players, incorrect scores, the profound lack of common opponents among teams at a national level, or the subjective nature of trying to rate states and teams prior to the season.
golfaddict1 Posted June 24, 2019 (Author)

1 hour ago, badrouter said: None of this addresses the fundamental problems: incorrect rosters/impact players, incorrect scores...

If teams play competitive schedules, the algorithm will work fine. The ratings change every week. If you earn a high rating by beating a highly rated team in week one, and that opponent goes on to lose more games and underperform all year, that initial week-one credit declines, and it can keep declining week by week. It can also trend up: a big win, or some playoff wins after a weak regular season, will give a school a nice chutes-and-ladders jump in the rankings, especially a marquee win in a power-state final. Most schools we discuss on the forum have either a marquee win or a marquee loss by the end of September. SFA's 2018 rating was still heavily 2017-based, for example. Mater Dei would be ahead of UCLA in 2018.

Incorrect scores can be addressed with one email to the site's admin. Sure, MaxPreps can be incorrect; CP pulls from their data.

The states are rated from on-the-field performance. Some states don't play many OOS games, if any at all. Nebraska is favored by Massey, while Freeman favors HI, and neither feels the other state is strong overall. But beyond a handful of states, I believe the three main algorithm rating systems do a good job overall, and once again, a strong SOS will make the algorithm work that much better. You can't remove subjectivity in state scaling, but quality regular-season schedules are a good start at removing preseason data.

I begin with their data and trim the fat. Top-350 opponents (which change weekly) are magnified. Some outliers, sure, but have you ever looked at human top 50 and 100 polls? I like the risk/reward boxes and the negative-points-for-bad-losses theme. I won't change a thing for this coming season. St. Edward last season was my last tinker: for their one low-rated loss, I played with the minus points and eventually reduced the negative.
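A minimal sketch of the "chutes and ladders" effect described above, assuming credit for a win is re-evaluated each week against the opponent's current rating. The function, the divisor, and the sample ratings are all illustrative assumptions.

```python
# Hypothetical sketch: a week-1 win loses value as the opponent's rating
# slides over the season, since credit tracks the opponent's *current* rating.

def win_credit(opponent_current_rating):
    """Credit tied to the opponent's present rating, not the rating
    they carried when the game was played."""
    return max(opponent_current_rating, 0.0) / 10.0

# Opponent looked elite in week 1 (rated 45) but faded all season.
opponent_rating_by_week = [45.0, 41.0, 36.0, 30.0, 24.0]
for week, rating in enumerate(opponent_rating_by_week, start=1):
    print(f"Week {week}: that win is now worth {win_credit(rating):.1f} points")
```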
Sammyswordsman Posted June 24, 2019

1 hour ago, golfaddict1 said: If teams play competitive schedules, the algorithm will work fine...

@golfaddict1 What about a negative point withdrawal for every week a team schedules/plays an opponent with a CalPreps rating under 20.0? (Subtract the difference from 20 points.) For example:

Week 1: schedules vs. Jesuit (10.3) = -9.7 points
Week 2: schedules vs. Antelope (9.4) = -10.6 points
Week 7: schedules vs. Whitney (-4.2) = -24.2 points
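A short sketch of the penalty proposed above: any opponent rated below a 20.0 cutoff costs the scheduling team the difference from 20. The opponent ratings are the ones quoted in the post; the function name and cutoff constant are just illustrative choices.

```python
# Sketch of the proposed weak-opponent penalty: opponents rated under 20.0
# cost the scheduling team the shortfall from 20 points.

CUTOFF = 20.0

def weak_opponent_penalty(opponent_rating, cutoff=CUTOFF):
    """Return 0 for opponents at or above the cutoff, otherwise the
    (negative) difference between the opponent's rating and the cutoff."""
    return 0.0 if opponent_rating >= cutoff else opponent_rating - cutoff

schedule = [("Jesuit", 10.3), ("Antelope", 9.4), ("Whitney", -4.2)]
for name, rating in schedule:
    print(f"vs. {name} ({rating}): {weak_opponent_penalty(rating):+.1f} points")
# vs. Jesuit (10.3): -9.7 points
# vs. Antelope (9.4): -10.6 points
# vs. Whitney (-4.2): -24.2 points
```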
Cat_Scratch Posted June 25, 2019

golfaddict1 I like it. Seems to be fair at first glance and easy enough to understand. Keep us posted, and thanks for the effort.
badrouter Posted June 25, 2019

Replying inline to golfaddict1's post from 5 hours earlier:

golfaddict1: "If teams play competitive schedules, the algorithm will work fine."
badrouter: Define "competitive" in objective terms. For my entertainment only, of course.

golfaddict1: "Incorrect scores can be addressed with one email to the site's admin."
badrouter: LOL. So it really is up to the fans to provide the scores they want provided. So maybe instead of telling Ned he got the score of the state title game between Lakeland and STA wrong and that it was really 33-20, maybe I'll email him and tell him it was 77-0 🤡.

golfaddict1: "Nebraska is favored by Massey, while Freeman favors HI, and neither feels the other state is strong overall."
badrouter: "Feels" is the appropriate word here, because this all really just comes down to how these guys "feel".

golfaddict1: "You can't remove subjectivity in state scaling..."
badrouter: Which is why it's b.s. to see these ratings as anything other than what some guy feels.
golfaddict1 Posted June 28, 2019 (Author)

On 6/24/2019 at 8:39 PM, Cat_Scratch said: golfaddict1 I like it...

Thanks mate.