
Power ratings - end of season look


golfaddict1


  • 5 months later...

@badrouter here’s my five-minute creation one evening last year (after my Elo attempt failed due to the time commitment).  :)

Fisher’s algorithm started with a Massey Ratings foundation, with criteria added and tinkering done as the years go by.

I start with a CalPreps (CP) and Massey foundation and use a points system with specific criteria, plus a little wiggle room in some cases. At times I’ve decided to give more points for a loss than for any win all season.

IMG losing to MD last year is one of the very few examples. Otherwise it’s by the criteria, to the fraction. No cards up my sleeve.

Teams also receive negative points for bad losses. That’s a tinkering-in-progress area for 2019. I’ll start this up in October, most likely.

 

 


1 minute ago, GardenStateBaller said:

Thx for your efforts. Look fwd to comparing your results to @HSFBA’s throughout the season.

Can you get Fisher to provide his top-1000 ratings weekly?  :)  It would be great to add his algorithm into the mix and get three opinions vs. two, magnified at the top-350 level (and the negative-points-on-losses idea would need ratings that go deeper). I’ll take whatever he can offer. Starting in October would be nice. For the schools with some larger variances (Warren Central last year, for example), a third set of data points might help smooth out the differential.


23 hours ago, golfaddict1 said:

@badrouter here’s my five-minute creation one evening last year (after my Elo attempt failed due to the time commitment).  :)  …

 

 

None of this addresses the fundamental problems: incorrect rosters/impact players, incorrect scores, the profound lack of common opponents among teams at the national level, or the subjective nature of trying to rate states and teams before the season.


 

1 hour ago, badrouter said:

None of this addresses the fundamental problems: incorrect rosters/impact players, incorrect scores, the profound lack of common opponents among teams at the national level, or the subjective nature of trying to rate states and teams before the season.

If teams play competitive schedules, the algorithm works fine. The ratings change every week. If you earn a high rating by beating a highly rated team in week one, and that opponent then loses more games and underperforms all year, that initial week-one boost declines, and it can keep trending downward weekly. It can also trend up: a big win, or some playoff wins after a weak regular season, will give a school a nice chutes-and-ladders roll up the rankings, especially a marquee win in a power-state final. Most schools we discuss on this forum have a marquee win or a marquee loss by the end of September. SFA’s 2018 rating was still heavily 2017-based, for example.
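A toy sketch of that week-one dynamic (purely illustrative, not golfaddict1’s actual formula): if the credit for a win is tied to the opponent’s *current* rating and recomputed weekly, then a week-one win over a team that later collapses loses value as the season goes on.

```python
def win_credit(opponent_rating: float, margin_bonus: float = 0.0) -> float:
    """Hypothetical: credit for a win scales with the opponent's rating now.

    Negative-rated opponents are floored at zero credit so a win is never
    itself a penalty.
    """
    return max(opponent_rating, 0.0) + margin_bonus

# Week 1: the opponent is rated 35.0, so the win is worth a lot.
print(win_credit(35.0))  # 35.0

# Week 8: the same opponent has slumped to 18.0, so the same win
# is now worth less, and the winner's rating drifts down with it.
print(win_credit(18.0))  # 18.0
```

The ratings shown (35.0, 18.0) are made-up numbers; the point is only that the credit is a function of the opponent’s current rating, not a fixed week-one snapshot.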

Mater Dei would be ahead of UCLA in 2018 :)

Incorrect scores can be addressed with one email to the site’s admin. Sure, MaxPreps can be incorrect; CP pulls from their data.

The states are rated on on-field performance. Some states don’t play many OOS games, if any. Massey favors Nebraska while Freeman favors HI, and neither feels the other state is strong overall. But beyond a handful of states, I believe the three main algorithmic rating systems do a good job overall, and once again, a strong SOS will make the algorithm work that much better.

You can’t remove subjectivity from state scaling, but quality regular-season schedules are a good start at removing reliance on preseason data.

I begin with their data and trim the fat. Top-350 opponents (which change weekly) are magnified. Some outliers, sure, but have you ever looked at human top-50 and top-100 polls?

I like the risk/reward boxes and the negative-points-for-bad-losses theme. I won’t change a thing for this coming season. St. Edward last season was my last tinker: for their one low-rated loss I played with the minus points and eventually reduced the negative.


1 hour ago, golfaddict1 said:

 

If teams play competitive schedules, the algorithm will work fine. …

@golfaddict1 What about a negative-point withdrawal for every week a team schedules/plays an opponent with a CalPreps rating under 20.0? (Subtract the difference from 20 points.)

e.g.:

Weak 1 schedules vs. Jesuit (10.3) = −9.7 points

Weak 2 schedules vs. Antelope (9.4) = −10.6 points

Weak 7 schedules vs. Whitney (−4.2) = −24.2 points
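The tongue-in-cheek penalty above is concrete enough to sketch. This is just one hypothetical reading of the proposal (the 20.0 threshold and the three opponent ratings come from the post; everything else is assumed):

```python
THRESHOLD = 20.0  # proposed CalPreps-rating cutoff from the post

def weak_opponent_penalty(opponent_rating: float) -> float:
    """Return the (negative) point adjustment for scheduling this opponent.

    Opponents at or above the threshold cost nothing; below it, the team
    loses the difference between the threshold and the opponent's rating.
    """
    if opponent_rating >= THRESHOLD:
        return 0.0
    return -(THRESHOLD - opponent_rating)

# The three examples from the post:
print(weak_opponent_penalty(10.3))   # Jesuit:   20 - 10.3  -> -9.7
print(weak_opponent_penalty(9.4))    # Antelope: 20 - 9.4   -> -10.6
print(weak_opponent_penalty(-4.2))   # Whitney:  20 - (-4.2) -> -24.2
```

Note the last case: a negative-rated opponent costs more than the full 20 points, since the difference from the threshold keeps growing below zero.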

 


5 hours ago, golfaddict1 said:

“If teams play competitive schedules, the algorithm will work fine.”

Define “competitive” in objective terms. For my entertainment only, of course.

“Incorrect scores can be addressed with one email to the site’s admin.”

LOL. So it really is up to the fans to provide the scores they want provided. Maybe instead of telling Ned he got the score of the Lakeland–STA state title game wrong and that it was really 33-20, I’ll email him and tell him it was 77-0 🤡

“Nebraska is favored by Massey, while Freeman favors HI and both don’t feel the other state is strong overall.”

“Feel” is the appropriate word here, because this all really just comes down to how these guys “feel.”

“You can’t remove subjectivity in state scaling…”

Which is why it’s b.s. to see these ratings as anything other than what some guy feels.

 

