
Mater Dei 52 Los Alamitos 0



20 minutes ago, Cal 14 said:

If you don't want to get the fact that there are probably dozens of scenarios where a nice A>B>C doesn't work well, then it seems that I've given you far too much credit.

You're in change-the-subject mode.

Understandable.

So strawmen and subject changes. Anything else you want to add?


4 hours ago, Atticus Finch said:

Just to drive this home.

In the calpreps era, there are 24 California teams rated ahead of the highest rated non-IMG Florida team.

Including 14 Mater Dei and St. John Bosco teams.

Even the 2016 Mater Dei team that played zero OOS games.

They are so far ahead of everyone else that they likely don't need to play any OOS games to maintain their grip on the top spots.

Are you trying to go all the way back to 2003, when they did their first national list?  There were tons of flaws back then.  Ned's had to revise the algorithm a bunch of times since then.  In 2002, you had teams like Fresno Garces and Modesto Central Catholic over Corona Centennial.

He realized that in areas where there is a limited number of teams, ratings tended to get inflated really fast.  I don't think they really started getting more reasonable until somewhere around 2010-2014.  Even then, there have been changes after that.


8 minutes ago, Cal 14 said:

Ok, then Calpreps is continuing to generate non-MoV ratings when someone is willing to pay for them.  We're also seeing the issue when it is removed.  In the normal ratings, TCA is much lower.

I was surprised by this.  I recall mentioning to @Atticus Finch that I read Ned won’t agree to allowing his system to be used without MoV… I guess the key component was “pay,” and then he lets them do their thing.
 

I don’t even want to understand how TCA finished ahead of Chaminade-Madonna.   That’s just wrong on so many “levels”, pun intended.  :) 


22 minutes ago, Cal 14 said:

For that game, yes.  For the entire season, which is what the ratings are derived from, not as much.  At most, that game accounts for 10-15% of either team's rating.  Everything that happened after that impacts it a great deal more.

And this is where we are claiming there is a problem. How those other teams on each school’s schedule are rated is off. Huntington Beach Edison is rated six points higher than Cardinal Gibbons, 24 points higher than Western, and so on. I don’t think Edison is better than either. But, for calpreps ratings, Edison is counted as the much stronger opponent. 


6 minutes ago, Atticus Finch said:

I have to assume that you're Ned Freeman because you don't seem to understand football either.

Of course records matter. Having a lot of losses has *something* to do with how good you are.

Records matter when it comes to the ratings.  They do not matter if teams get to meet in a rematch.


4 minutes ago, Atticus Finch said:

STA was ahead of Edgewater in calpreps until this week.

They were behind Edgewater in the *Maxpreps* ratings all year.

Keep flailing.

Probably DBP’s differential over Seton Hall Prep plus the 2.1x bonus, along with Edgewater’s 4-pt W over TB Tech, made the diff.  Edge picked up some good rating points even with the close W.


6 minutes ago, Atticus Finch said:

Excuse me, what?

So the score doesn't matter for the season much?

Is that your opinion or is that how calpreps actually works?

At first, I was a little puzzled why you respond to comments separately the way you do.  Now I understand it's your means to omit pertinent information.


3 minutes ago, Cal 14 said:

Are you trying to go all the way back to 2003, when they did their first national list?  There were tons of flaws back then.  Ned's had to revise the algorithm a bunch of times since then.  In 2002, you had teams like Fresno Garces and Modesto Central Catholic over Corona Centennial.

Ned claims to have gone back and tweaked the ratings so that every season is on the same scale. Hence this exists:

http://calpreps.com/National_all_all-time.htm

All the Mater Dei and St. John Bosco teams are since 2013.

Sorry.


4 minutes ago, Cal 14 said:

Records matter when it comes to the ratings.  They do not matter if teams get to meet in a rematch.

The system doesn’t even recognize a rematch, correct?  It’s just another game to the algorithm, amongst the others, with no added emphasis on having played before.

 


7 minutes ago, Cal 14 said:

He realized that in areas where there is a limited number of teams, ratings tended to get inflated really fast.  I don't think they really started getting more reasonable until somewhere around 2010-2014.  Even then, there have been changes after that.

I guess I shouldn't be surprised that calpreps toadies don't know this stuff, but Ned now uses the same scale for all years.


11 minutes ago, Atticus Finch said:

Ned claims to have gone back and tweaked the ratings so that every season is on the same scale. Hence this exists:

http://calpreps.com/National_all_all-time.htm

All the Mater Dei and St. John Bosco teams are since 2013.

Sorry.

I enjoy the annual "what rating does 2010 Camden County have now" Easter egg hunt.  :)   It seemed to get higher with each calibration/tinkering adjustment.
 

Up through 2012 you can see the Freeman ratings as they were before the later adjustments.   The comp poll goes through 2012.


1 hour ago, golfaddict1 said:

I was surprised by this.  I recall mentioning to @Atticus Finch that I read Ned won’t agree to allowing his system to be used without MoV… I guess the key component was “pay,” and then he lets them do their thing.

MaxPreps computer rankings are based on weekly snapshots of the standard (MOV included) CalPreps ratings. CalPreps ratings "finalize" each week early on Tuesday, which is when CalPreps locks in their final projections for the games coming up that week and when MaxPreps updates their computer rankings. If you go to MaxPreps right now and look at the computer rankings, you will see "Last update: 11/22/2022". The CalPreps ratings are constantly updated throughout the week, so they can begin to diverge from the MaxPreps computer rankings until the next weekly snapshot is taken and updated on MaxPreps.
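Roughly, the relationship works like this; a toy sketch with made-up numbers, just to illustrate the snapshot-vs-live divergence (none of these ratings are real):

from copy import deepcopy

# Made-up ratings purely for illustration, not actual CalPreps values.
calpreps_live = {"Venice": 55.2, "St. Thomas Aquinas": 54.8}

# Early Tuesday: MaxPreps grabs its weekly snapshot of the standard (MOV) ratings.
maxpreps_snapshot = deepcopy(calpreps_live)

# During the week, CalPreps keeps adjusting as results are added or corrected.
calpreps_live["St. Thomas Aquinas"] += 1.3  # hypothetical in-week adjustment

# Until next Tuesday's snapshot, the two sites can disagree.
for team, live in calpreps_live.items():
    print(f"{team}: MaxPreps {maxpreps_snapshot[team]} vs CalPreps {live}")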


3 hours ago, Atticus Finch said:

He's clueless.

The 30-point cap is the 63rd percentile of all margins in the country.

He never explains why that number is relevant.

Maybe it's set to be close to the 35-point running clock rule.  But, most of the computer rating systems set it somewhere.  He just defines it for everyone to see.
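For what it's worth, a percentile claim like that is easy to check if you have the margins; a quick sketch (the margins below are made up, the real number would use every game in the country):

# Hypothetical final margins; the real calculation would use every reported game.
margins = [3, 6, 7, 10, 13, 14, 17, 21, 24, 28, 31, 35, 42, 49]

cap = 30
pct = 100 * sum(m <= cap for m in margins) / len(margins)
print(f"{pct:.0f}% of these margins are at or below the {cap}-point cap")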


3 hours ago, golfaddict1 said:

The system doesn’t even recognize a rematch, correct?  It’s just another game to the algorithm, amongst the others, with no added emphasis on having played before.

 

Correct.  It's just another data point.


6 hours ago, badrouter said:

And this is where we are claiming there is a problem. How those other teams on each school’s schedule are rated is off. Huntington Beach Edison is rated six points higher than Cardinal Gibbons, 24 points higher than Western, and so on. I don’t think Edison is better than either. But, for calpreps ratings, Edison is counted as the much stronger opponent. 

HB Edison benefited from the 15-point rule twice, while being dinged by it once.  But Orange Lutheran had 4 games, including their playoff win over Edison that got the boost.  Because the Chargers played them twice, that carries more weight for them than usual.

As for who was better between them and Cardinal Gibbons, this is why the AP poll doesn't have only one voter.  You have your opinion, someone else might have a different one.  I've seen Edison, but not CG, so I don't have an opinion.  It does look like they may have been held back a little by the 30-point rule a couple of times, though.

But, you guys have to stop looking at these things in a linear fashion.  Statistics very rarely works that way.  Normally, it's a generalization of scattered data.  These ratings work very much like a Gaussian curve (a bell curve). 

[image: bell curve]

At the extremities, there is very little overlap (i.e., very few teams can realistically compete with a Mater Dei or a Miami Central, which would be the G range, if they had labeled it as such).  But, as you move towards the center, there is quite a bit of overlap.  At the center, the data will be all over the place.  The system is not designed for individual situations.  It's designed to evaluate 16,000 teams.  Los Al and AH probably reside in the F range, where there is a little more variability.  Edison and Gibbons are probably in the lower part of the E range or upper part of F.

Another thing to understand is that the ratings are not necessarily predictive.  They're merely a collection and evaluation of data up to that point.  They're a moment in time, waiting for additional data, but there is always going to be a margin of error associated, as well. 
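If you'd rather see the overlap idea in numbers than in a picture, here's a quick simulation sketch; the 6-point game-to-game spread is my own assumption for illustration, not anything Ned publishes:

import random

def upset_rate(rating_a, rating_b, spread=6.0, trials=100_000):
    # Treat each team's single-game performance as its rating plus Gaussian noise,
    # then count how often the lower-rated team comes out on top.
    wins_b = sum(
        random.gauss(rating_b, spread) > random.gauss(rating_a, spread)
        for _ in range(trials)
    )
    return wins_b / trials

print(upset_rate(20.1, 13.8))  # mid-pack teams: plenty of overlap, roughly 20-25%
print(upset_rate(75.0, 30.0))  # a Mater Dei vs. an average team: essentially zero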


6 minutes ago, Cal 14 said:

HB Edison benefited from the 15-point rule twice, while being dinged by it once.  But Orange Lutheran had 4 games, including their playoff win over Edison that got the boost.  Because the Chargers played them twice, that carries more weight for them than usual.

As for who was better between them and Cardinal Gibbons, this is why the AP poll doesn't have only one voter.  You have your opinion, someone else might have a different one.  I've seen Edison, but not CG, so I don't have an opinion.  It does look like they may have been held back a little by the 30-point rule a couple of times, though.

But, you guys have to stop looking at these things in a linear fashion.  Statistics very rarely works that way.  Normally, it's a generalization of scattered data.  These ratings work very much like a Gaussian curve (a bell curve). 

[image: bell curve]

At the extremities, there is very little overlap (i.e., very few teams can realistically compete with a Mater Dei or a Miami Central, which would be the G range, if they had labeled it as such).  But, as you move towards the center, there is quite a bit of overlap.  At the center, the data will be all over the place.  The system is not designed for individual situations.  It's designed to evaluate 16,000 teams.  Los Al and AH probably reside in the F range, where there is a little more variability.  Edison and Gibbons are probably in the lower part of the E range or upper part of F.

Another thing to understand is that the ratings are not necessarily predictive.  They're merely a collection and evaluation of data up to that point.  They're a moment in time, waiting for additional data, but there is always going to be a margin of error associated, as well. 

You are a very silly guy....


15 hours ago, Cal 14 said:

HB Edison benefited from the 15-point rule twice, while being dinged by it once.  But Orange Lutheran had 4 games, including their playoff win over Edison that got the boost.  Because the Chargers played them twice, that carries more weight for them than usual.

As for who was better between them and Cardinal Gibbons, this is why the AP poll doesn't have only one voter.  You have your opinion, someone else might have a different one.  I've seen Edison, but not CG, so I don't have an opinion.  It does look like they may have been held back a little by the 30-point rule a couple of times, though.

But, you guys have to stop looking at these things in a linear fashion.  Statistics very rarely works that way.  Normally, it's a generalization of scattered data.  These ratings work very much like a Gaussian curve (a bell curve). 

[image: bell curve]

At the extremities, there is very little overlap (i.e., very few teams can realistically compete with a Mater Dei or a Miami Central, which would be the G range, if they had labeled it as such).  But, as you move towards the center, there is quite a bit of overlap.  At the center, the data will be all over the place.  The system is not designed for individual situations.  It's designed to evaluate 16,000 teams.  Los Al and AH probably reside in the F range, where there is a little more variability.  Edison and Gibbons are probably in the lower part of the E range or upper part of F.

Another thing to understand is that the ratings are not necessarily predictive.  They're merely a collection and evaluation of data up to that point.  They're a moment in time, waiting for additional data, but there is always going to be a margin of error associated, as well. 

I think if you had teams classified merely by a letter, like with the bell curve in a "tiers" listing, it would be much easier to accept and move on. But, when one sees Team A at #24 and Team B at #25, the natural, obvious inclination is to assume a claim is being made that Team A is (if only slightly) better than Team B.


3 hours ago, badrouter said:

I think if you had teams classified merely by a letter, like with the bell curve in a "tiers" listing, it would be much easier to accept and move on. But, when one sees Team A at #24 and Team B at #25, the natural, obvious inclination is to assume a claim is being made that Team A is (if only slightly) better than Team B.

A little bit about my background... I'm a chemist, professionally.  You know those annual water reports that the city sends to you?  I used to work in a lab that generated data like that.  There were three situations that we dreaded having to explain to laypeople:

1.  A water sample had a nitrate value of 22 milligrams per liter (mg/L, danger limit is 45, btw).  Customer wants you to retest (for whatever reason).  The retest gives a value of 21 mg/L.  "See?  The number went down!"

No, the number is exactly the same because the method allows for a margin of error of ±10%.  The 21 confirms the 22.  If it went down to 15 or something, yes, but not 21.

2.  Result for perchlorate (which is sort of an explosive propellant, usually found near munition sites) is 2.1 micrograms per liter (ug/L).  The reporting limit (RL, the value at which we can say with ~90% certainty that the contaminant exists at all) is 2 ug/L.  Customer asks for a retest and the value comes back as 1.9 ug/L, which we would normally report as 'not detected' (ND).  This method allows for ±20% at the RL.  "Oh, see, it's really ND!"

No, again, the 1.9 confirms the 2.1.  It says something is in the sample, but it's just really, really low. 

3.  Customer asks for something called a "J-value".  This is the region between the RL and a theoretical method detection limit (MDL, calculated, but not typically confirmed).  Let's say for perchlorate, the MDL is 0.45 ug/L.  We could probably see that number, but it would be really faint and the number just isn't reliable (could just be a blip in the baseline).  Sample comes back as 0.5 ug/L.  "Oh, you made a mistake!  There's no way I have that in my water!"

No, the MDL is only theoretical and is, in no way, reliable.  The "J-flag" we put on the data specifically states that this number should not be taken as real.

Explaining data like the above to laypeople is always a challenge, but they're things that scientists deal with every single day.  Calpreps is merely a collection of data fed into an algorithm, in which there is a margin for error.  You know how when political polling is reported, they always include a margin of error (usually ±3 or 4%)?  That's just a collection of data.
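All three retest stories above boil down to one question: does the second number fall inside the first number's error band?  A toy version of that check (the ±10% is the nitrate allowance from example 1):

def confirms(original, retest, rel_error=0.10):
    # True if the retest lands within the original value's error band.
    low, high = original * (1 - rel_error), original * (1 + rel_error)
    return low <= retest <= high

print(confirms(22, 21))  # True  -> the 21 mg/L retest confirms the 22
print(confirms(22, 15))  # False -> that would be a real change, not noise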

It's for this reason that I always say that these are ratings, not rankings.  Going from Miami Central (79.1) to Chaminade (71.8) currently represents 2 spots in Florida (1 vs 3).  But, that's a difference of 7.3 rating points.  Despite this, these teams are probably reasonably close.  That same gap applied elsewhere looks a great deal more drastic.  Let's say Sumner at 20.1 (#172 in FL) vs Pine Crest at 13.8 (#227).  Suddenly, we're talking about a difference of 55 spots.  But, in truth, it's essentially the same gap.  A game between those teams might go somewhat similarly to Central and Chaminade (albeit, probably a little slower).  Cardinal Mooney at 13.3 and North Miami at 13.1?  Same freakin' number.

Ned doesn't publish his margin of error data, but I personally use about 3 rating points.  If there is a gap within 3, then I view the teams as probably about the same.  But, that does not mean that a team that is 5 or 6 points higher is the one I would predict to win, particularly if they benefited a bunch from the 15-point rule and the other didn't.

Bottom line is that it's not as simple as viewing this in black and white.  A lot of CIF sections and other states are using them as pure rankings and I have mixed feelings about that.  While it's neat that they're being used, they're not really being used in the correct way.  But, at the same time, no human could reasonably rank all 575 Florida schools, so you have to find a reasonable way to approximate that.  A computer system is the only way.
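To put the "ratings, not rankings" point in concrete terms, this is roughly how I read a matchup, using the Florida numbers above and my personal ~3-point cushion (nothing official about that threshold):

# Ratings quoted above (Florida, at the time of this post).
ratings = {
    "Miami Central": 79.1,
    "Chaminade-Madonna": 71.8,
    "Sumner": 20.1,
    "Pine Crest": 13.8,
    "Cardinal Mooney": 13.3,
    "North Miami": 13.1,
}

def read_matchup(team_a, team_b, cushion=3.0):
    gap = ratings[team_a] - ratings[team_b]
    verdict = "effectively a toss-up" if abs(gap) <= cushion else "a meaningful gap"
    return f"{team_a} vs {team_b}: {gap:+.1f} points, {verdict}"

print(read_matchup("Miami Central", "Chaminade-Madonna"))  # +7.3, only 2 spots apart
print(read_matchup("Sumner", "Pine Crest"))                # +6.3, 55 spots apart
print(read_matchup("Cardinal Mooney", "North Miami"))      # +0.2, same number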

 


23 hours ago, Cal 14 said:

A little bit about my background... I'm a chemist, professionally.  You know those annual water reports that the city sends to you?  I used to work in a lab that generated data like that.  There were three situations that we dreaded having to explain to laypeople:

1.  A water sample had a nitrate value of 22 milligrams per liter (mg/L, danger limit is 45, btw).  Customer wants you to retest (for whatever reason).  The retest gives a value of 21 mg/L.  "See?  The number went down!"

No, the number is exactly the same because the method allows for a margin of error of ±10%.  The 21 confirms the 22.  If it went down to 15 or something, yes, but not 21.

2.  Result for perchlorate (which is sort of an explosive propellant, usually found near munition sites) is 2.1 micrograms per liter (ug/L).  The reporting limit (RL, the value at which we can say with ~90% certainty that the contaminant exists at all) is 2 ug/L.  Customer asks for a retest and the value comes back as 1.9 ug/L, which we would normally report as 'not detected' (ND).  This method allows for ±20% at the RL.  "Oh, see, it's really ND!"

No, again, the 1.9 confirms the 2.1.  It says something is in the sample, but it's just really, really low. 

3.  Customer asks for something called a "J-value".  This is the region between the RL and a theoretical method detection limit (MDL, calculated, but not typically confirmed).  Let's say for perchlorate, the MDL is 0.45 ug/L.  We could probably see that number, but it would be really faint and the number just isn't reliable (could just be a blip in the baseline).  Sample comes back as 0.5 ug/L.  "Oh, you made a mistake!  There's no way I have that in my water!"

No, the MDL is only theoretical and is, in no way, reliable.  The "J-flag" we put on the data specifically states that this number should not be taken as real.

Explaining data like the above to laypeople is always a challenge, but they're things that scientists deal with every single day.  Calpreps is merely a collection of data fed into an algorithm, in which there is a margin for error.  You know how when political polling is reported, they always include a margin of error (usually ±3 or 4%)?  That's just a collection of data.

It's for this reason that I always say that these are ratings, not rankings.  Going from Miami Central (79.1) to Chaminade (71.8) currently represents 2 spots in Florida (1 vs 3).  But, that's a difference of 7.3 rating points.  Despite this, these teams are probably reasonably close.  That same gap applied elsewhere looks a great deal more drastic.  Let's say Sumner at 20.1 (#172 in FL) vs Pine Crest at 13.8 (#227).  Suddenly, we're talking about a difference of 55 spots.  But, in truth, it's essentially the same gap.  A game between those teams might go somewhat similarly to Central and Chaminade (albeit, probably a little slower).  Cardinal Mooney at 13.3 and North Miami at 13.1?  Same freakin' number.

Ned doesn't publish his margin of error data, but I personally use about 3 rating points.  If there is a gap within 3, then I view the teams as probably about the same.  But, that does not mean that a team that is 5 or 6 points higher is the one I would predict to win, particularly if they benefited a bunch from the 15-point rule and the other didn't.

Bottom line is that it's not as simple as viewing this in black and white.  A lot of CIF sections and other states are using them as pure rankings and I have mixed feelings about that.  While it's neat that they're being used, they're not really being used in the correct way.  But, at the same time, no human could reasonably rank all 575 Florida schools, so you have to find a reasonable way to approximate that.  A computer system is the only way.

 

The computer ratings figure to do a much better job within smaller geographic regions, where there is much more overlap in opponents.

I also think that the true margin of all games should be the data input. A one-point win over a team really is exactly two points better than a one-point loss. A 50-point win really is 20 points better than a 30-point win (though differences in state running-clock rules make this area difficult to apply, another reason national ratings aren't as trustworthy). 
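To make that concrete, a 30-point cap throws away exactly the information I'm talking about; a toy sketch (the 30 is just the cap figure discussed earlier in the thread):

def capped_margin(points_for, points_against, cap=30):
    # Clamp the true margin to +/- the cap before it goes into the rating.
    margin = points_for - points_against
    return max(-cap, min(cap, margin))

print(capped_margin(50, 0))   # 30 -- a 50-point win...
print(capped_margin(30, 0))   # 30 -- ...goes in exactly like a 30-point win
print(capped_margin(21, 20))  # 1  -- small margins pass through untouched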

But, the bottom line, as you do acknowledge, is that there are people who are using these ratings in ways in which they are not intended to be used. The possibility exists for these ratings to get things terribly wrong; that they are (ostensibly) derived from a computer does not rule that possibility out. And I think Florida is a state that tends to be very underrated in computer ratings, for whatever reason(s). And when calpreps predicted Clearwater Academy to beat Lakeland, and the actual result was Lakeland destroying them 44-6, my confidence is not bolstered, to say the least.

If people altered their perception of calpreps, and it was understood that it's a work in progress with abundant limitations and is not to be taken as the gospel, it would be fine.  

