A lot of this reminds me (again) of Moneyball. Football "experts" gauge high school kids the same way that old-style baseball scouts gauged baseball players: do they have "tools"? Usually they are working with a much smaller set of games to evaluate that and tend to rely on film to do it. Then they bring kids into camps and try to get some basic measurables out of them, just like the scouts did with Billy Beane. Sometimes this works as an evaluation tool because some athletes are conspicuously more talented. Beane was often compared to Darryl Strawberry, and both were star athletes with comparable "tools". There isn't any comparison between the two in terms of their actual playing ability, however. That's why baseball teams rely so much more on stats these days. It is also why the pros, who have a much more substantial number of games against much better competition to work with (NCAA = NFL minor leagues) and a much more standard set of requirements, generally do a better job of evaluating talent.
So what makes the difference? A sheer lack of will to develop the stats that would allow better evaluation of high school players is a big part of it. There are people working on this and, with literally thousands of high school games to draw on, you would think they would do better. Of course, the football Darryl Strawberries stand out at once and would under any circumstances. But … even when you get down to the 4-stars, the evaluations are a lot less useful, imho. One reason we have so often out-performed our recruiting ratings is that, like the pros, Tech has a standard set of requirements for its offense, and in finding players who fit we are often competing with fewer schools for the talent we want. This doesn't apply to the D side, and that, imho, helps explain why our success there has been more limited.
Well, enough.