Putting the top prospect rankings in context

What is Future Value? What does being ranked in the Top 100 mean for a player’s actual future production?


After a strong 2020 debut, Ryan Jeffers is showing up on the annual top 100 prospects lists
Photo by Brace Hemmelgarn/Minnesota Twins/Getty Images

One of the things I most look forward to in the run-up to Spring Training every year is the release of the Top 100 prospects lists. Many outlets release versions of this list, including MLB.com, FanGraphs, Baseball America ($), Baseball Prospectus ($), ESPN ($), and The Athletic ($), among many others.

I enjoy looking at the lists to learn the experts’ thinking about the Twins’ prospects. The write-ups and projections that accompany the numeric rankings make it easy to get excited and dream about those players making their way to Target Field in the (hopefully) not too distant future. A lot of the current Twins core — namely, Max Kepler, José Berríos, Byron Buxton, and Miguel Sanó — is made up of players who found themselves on these kinds of lists in the 2010s.

The excitement of the rankings also has a way of inflating our expectations. It’s easy to lose sight of the bigger picture and assume a ranking on a top 100 list portends a future star. If a player is thought of so highly, they must be a sure thing!

But is that really true?

Today, I want to dig into the data behind projecting prospects to explore that question. In particular, let’s explore what the rankings mean, how well highly ranked prospects have actually performed, and how those insights might shape our expectations for the Twins’ top prospects.

Scouting & Prospects 101

All the lists ostensibly have the same goal — to identify and rank the best prospects in baseball — but they go about that task in different ways, valuing different things with different weights, and bringing differing points of view to each list. Some are primarily made by individuals, like Keith Law for The Athletic, Kiley McDaniel for ESPN, and Eric Longenhagen for FanGraphs. Others are done by small teams and committees. All of them include discussions with professional scouts as a key input. Regardless of the process used, they are all broadly rooted in the same fundamental concepts.

The projection of prospects is usually summarized with the concept of Future Value (FV), which distills each player’s scouting evaluation into a single expression. Broadly stated, FV is a grade on the 20-80 scale that maps to anticipated annual WAR production during the player’s first six years of service. FV attempts to combine a prospect’s potential (a reasonable ceiling and floor) with his chance of realizing it (including injury-related risks and proximity to the majors) into one tidy, value-based number.

(Note: for those really interested in this concept and the process for assessing prospects, McDaniel and Longenhagen have written a book, Future Value, that goes deep into this subject.)

Below is a handy chart that describes those 20-80 numbers by player role and translates them to expected single-season WAR:

Table 1: 20-80 Scouting Scale by Player Role
Source: FanGraphs, Kiley McDaniel, September 2014: Link

Like a lot of things in baseball, the invention of this scale is credited to Branch Rickey. It mirrors many scientific scales, in which each ten-point increment represents one standard deviation from average (50). In theory, 20-80 covers three standard deviations above and below average, which, in a normal distribution, includes 99.7% of the sample.
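To make that concrete, here’s a minimal sketch of how a 20-80 grade maps to a percentile, assuming a true normal distribution (which real scouting grades only approximate):

```python
from math import erf, sqrt

def grade_percentile(grade, mean=50.0, sd=10.0):
    # Normal CDF: share of the population at or below this grade,
    # with 50 as average and each 10 points as one standard deviation.
    z = (grade - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2)))

print(f"{grade_percentile(60):.4f}")  # ~0.8413: one SD above average
print(f"{grade_percentile(80):.4f}")  # ~0.9987: three SDs above average
```

In other words, an 80 grade describes a skill or player better than roughly 99.9% of the population being graded.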

Despite the scale’s apparent precision, using it to project prospects across all the different minor league levels, positions, and ages (some of them five or six years away from the big leagues) is a decidedly subjective exercise. Nonetheless, it’s a useful communication tool and common framework that lets us quickly digest and compare across a lot of different variables.

Work by former FanGraphs writer Craig Edwards approximated how players at each future value tier distribute across the annual top 100 prospects lists. Loosely based on the Baseball America prospects lists from 1996-2010, Edwards, with help from Longenhagen and McDaniel, came up with this:

Table 2: Notional Future Value distribution of Top 100 prospects
Source: FanGraphs, Craig Edwards, November 2018: Link

To characterize that distribution, we can look at Longenhagen’s 2021 list for FanGraphs. It has two players with future values of 70 or higher (Tampa Bay shortstop Wander Franco, 80; and San Diego left-hander MacKenzie Gore, 70). Prospects ranked #3 through #24 are 60 FV or 65 FV. Prospects #25 to #50 are 55 FV, and the remainder of the list are 50 FV. Longenhagen also grades the rest of the 50 FV prospects beyond the top 100, which takes his full list to 133 players. McDaniel’s most recent list over at ESPN generally tracks with this same distribution.

Referring back to Table 1 above, this distribution of grades implies an expectation that every top 100 prospect becomes at least a league-average contributor (50 FV, ~2 WAR/season).

But we know that’s not how it plays out in reality. For every Joe Mauer and Justin Morneau — top-ranked prospects who “made it” — there are as many or more who didn’t: David McCarty, Michael Restovich, Adam Johnson, J.D. Durbin, Kohl Stewart, and Alex Meyer, just to name a few from Twins history.

We know that, despite our (and the experts’) expectations, prospects fail. But how often? Let’s turn to the hard data to see what past rankings and subsequent actual results tell us about top-ranked prospects’ likelihood of success.

Analyzing Prospect Performance

A 2011 study by Scott McKinney, published originally as a FanPost at Royals Review, used Baseball America prospect list data from 1990-2003 to calculate that about 70% of Top 100 prospects fail to become average major leaguers. McKinney conservatively defined average as at least 1.5 fWAR per season over a player’s six cost-controlled years.

His study also showed that prospects ranked higher in the top 100 have higher success rates. About 50% of the prospects ranked in the top 20 succeeded against the benchmark, compared to about 30% for those ranked #21-#40, and just a shade over 20% for those ranked #41-#100. McKinney plotted the average single-season fWAR for each rank position in the top 100:

Average season fWAR by Top 100 Prospect rank, 1990-2003
Source: Royals Review, Scott McKinney, February 2011: Link

As you would probably expect, the highest production tends to come from the players at the top of the lists.

McKinney’s findings were largely confirmed in a 2014 update to the study by Matt Perez for Camden Depot, which added three more years of data to the sample. Both studies also showed that position player prospects are a slightly less risky bet than pitchers. Again using the 1.5 fWAR/season benchmark, Perez found that about 77% of top 100 pitching prospects fail, compared to about 66% of position player prospects. He also showed that about 15% of all prospects on the top 100 lists never make it to the major leagues at all, with pitchers again having a higher failure rate than position players.

In 2018, another piece of research, with a slightly updated methodology and data through 2013, was published by Shaun Newkirk at Royals Review. Building on McKinney’s foundation, Newkirk made the definition of success relative. Instead of holding the full top 100 to an arbitrary benchmark like 1.5 fWAR per season, his approach asked whether a prospect met the average value produced by similarly ranked prospects. This tracks with the logic that we don’t have the same expectations for all top 100 prospects. We have higher expectations for the higher ranked prospects — the level of production needed for us to consider Byron Buxton a success is different than the level we’d need to see from Joe Benson — so making the bar of “success” relative to the peer group reflects that.
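In code, that relative benchmark is simple. Here’s a minimal sketch; the function name and inputs are mine for illustration, not Newkirk’s actual implementation:

```python
def met_peer_benchmark(player_fwar, peer_group_fwars):
    # Newkirk-style relative success: did this prospect produce at least
    # the average fWAR of prospects who were ranked similarly?
    benchmark = sum(peer_group_fwars) / len(peer_group_fwars)
    return player_fwar >= benchmark
```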

By Newkirk’s approach, 60% failed to reach the average production of their peer group. The other takeaways held: higher-ranked prospects were more likely to succeed than lower-ranked prospects — notably, those ranked #1-#10 accumulated about 35% of the sample’s total fWAR — and position players succeeded more frequently than pitchers.

Regardless of the method, the research makes it clear: projecting prospects’ future performance is highly uncertain (perhaps even more than we’d expect), and being ranked in the top 100 is no guarantee of major league success. That doesn’t mean it’s a futile exercise. The best players do tend to come from the top prospect lists (especially from the top of those lists). But this kind of forecasting, like most forecasting, comes with large error bars.

So, what might all of this mean for the Twins’ top prospects?

Twins on the 2021 Top 100 Lists

To explore that question, I compiled the most prominent outlets’ 2021 rankings of the Twins’ top prospects into a composite list, shown below:

Twins Prospects on 2021 Top 100 Prospect Lists

Six different Twins prospects were named in the Top 100 of at least one of these rankings. FanGraphs looked most favorably on the Minnesota organization, including all six on its list. Alex Kirilloff garnered the highest single ranking, with his 7th overall spot on Keith Law’s list for The Athletic. Kirilloff and shortstop Royce Lewis were the only two Twins ranked in the top 100 by all six publications. The experts are split about which of the two is the better prospect, though — three have Lewis on top and three have Kirilloff. I was surprised that the range of opinions on Kirilloff, at least in terms of numeric rankings, was wider than it was for Lewis. Baseball Prospectus’ 71st ranking of Kirilloff might be an outlier — the other five lists all had him among their top 26 prospects. All six lists had Lewis inside their top 50.

Outfield prospect Trevor Larnach is next, showing up on five of the six rankings, between the mid-30s and low-80s. Catcher Ryan Jeffers, who debuted successfully last season with 55 at-bats for the Twins, was on four lists, between the high-50s and high-90s. He’s followed by two right-handed pitching prospects, Jordan Balazovic (three lists) and Jhoan Duran (two), in the latter halves of those rankings.

ESPN and FanGraphs both went beyond the Top 100 to complete their rankings. Had I included those additional data, right-hander Matt Canterino would have appeared on both (#101 at ESPN, #128 at FanGraphs), and Duran would have been included at #107 by ESPN.

An Exercise

Because of the inherent uncertainty discussed above, I like to use all of the rankings instead of relying on the opinion and judgment of a single list. We can get a consensus numeric ranking for each of these prospects by averaging them together. To do that, I’ll need to make an assumption to account for the handful of “Not Ranked” (NR) data points in the table above. I’ll give those an arbitrary value of 125 for this exercise (except for Duran’s 107 from ESPN’s extended list). Averaging the numbers and adding placeholders for the not-ranked data points might not be the most scientific approach, but it’s quick and simple.
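Here’s a minimal sketch of that averaging step. The rankings shown are hypothetical placeholders for illustration; the real numbers live in the table above, and None stands in for NR:

```python
from statistics import mean

NR_PLACEHOLDER = 125  # arbitrary stand-in for "Not Ranked", per the exercise above

def consensus_rank(rankings):
    # Average one player's rankings across the outlets, substituting the
    # placeholder wherever the player was not ranked (None).
    return mean(NR_PLACEHOLDER if r is None else r for r in rankings.values())

# Hypothetical example values, for illustration only (ESPN's 107 is real).
duran = {"MLB": None, "FanGraphs": 90, "BA": None, "BP": None, "ESPN": 107, "Athletic": None}
print(round(consensus_rank(duran), 1))  # -> 116.2 with these made-up inputs
```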

The order of the Twins’ top prospects stays the same, with Kirilloff and Lewis (improbably) tied again:

Taking it a step further, we can combine that average rank with the typical distribution of future value in the top 100 lists from Edwards (described in Table 2 above) and assign each player an expected future value based on their consensus ranking.

The averaging adds a little conservatism to the rankings — the expected FVs from the average ranks are a half grade lower than the FVs assessed by ESPN and FanGraphs analysts for Kirilloff, Lewis, and Larnach. It’s worth noting that if Baseball Prospectus’ outlying #71 ranking for Kirilloff is tossed out, his average would be 18, placing him squarely in the 60 FV tier.
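As a rough sketch of the rank-to-FV step, here’s a mapping that borrows the 2021 FanGraphs tier boundaries described earlier (ranks 1-2 at 70+, 3-24 at 60 or 65, 25-50 at 55, 51-100 at 50) as a stand-in for Edwards’ Table 2, whose exact cutoffs may differ:

```python
def expected_fv(avg_rank):
    # Map a consensus ranking to an expected Future Value tier, using the
    # 2021 FanGraphs distribution described above as an approximation.
    if avg_rank <= 2:
        return 70
    if avg_rank <= 24:
        return 60  # 60 or 65 in practice; use the tier floor
    if avg_rank <= 50:
        return 55
    if avg_rank <= 100:
        return 50
    return 45  # outside the top 100

print(expected_fv(18))  # -> 60, Kirilloff's tier without the BP outlier
```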

We can then use these rankings and expected future value approximations to get some perspective on these players’ likelihood of future big league success. Below is Shaun Newkirk’s table of actual results for the Baseball America top 100 prospects from 1993-2013. Remember that his approach compared individual results to the average results of similarly ranked peer prospects.

Table 3: Actual Results, Baseball America Top 100 Prospects 1993-2013
Source: Royals Review, Shaun Newkirk, March 2018: Link

Using this, we can quickly scan for where the Twins prospects fit and get a sense for how similarly ranked prospects in the past have fared once they reached the majors. We can also return to Edwards’ work, which calculated a “star rate” (defined as accumulating more than 10 present-day fWAR in their controllable years) and a “bust rate” (defined as accumulating less than 1 present-day fWAR) for each future value tier.

For Lewis and Kirilloff, previous hitter prospects ranked between #21 and #40 averaged 8 fWAR over their six controllable seasons, and 30.5% of them met or exceeded that benchmark. Hitters with 55 FV grades turned into stars 21.9% of the time and busted 36.5% of the time.

For Larnach, prospects ranked between #61 and #80 produced an average of 4 fWAR, and about 25% of them met or exceeded that figure. For Jeffers, the same figures for prospects ranked #81 to #100 are 5 fWAR and 30.5%. Position players with 50 FV grades turned into stars 9.9% of the time and had a bit more than a coin flip’s chance of busting (51.5%).

We can fudge the average rankings a little to include Balazovic and Duran (and Canterino) in the last tier of the pitcher table, where the average was 5 fWAR and the success rate 29%. Pitchers with 50 FV grades became stars 5.7% of the time and busted nearly 60% of the time.
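To keep those base rates in one place, here’s a small lookup table collecting the Edwards star/bust figures quoted above; this is a convenience for the exercise, not part of any of the cited studies:

```python
# Star/bust rates by Future Value tier, from Edwards' FanGraphs research cited
# above: "star" = >10 present-day fWAR in controllable years, "bust" = <1 fWAR.
BASE_RATES = {
    ("hitter", 55): {"star": 0.219, "bust": 0.365},
    ("hitter", 50): {"star": 0.099, "bust": 0.515},
    ("pitcher", 50): {"star": 0.057, "bust": 0.60},  # bust rate of nearly 60%
}

for (role, fv), rates in BASE_RATES.items():
    print(f"{fv} FV {role}: {rates['star']:.1%} star, {rates['bust']:.1%} bust")
```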

Full disclosure: I admit I’m comparing apples and oranges here, since the studies were done at different times, used different datasets, and took different approaches. But, at a high level, these findings give us additional context and a better sense of the range and likelihood of potential outcomes for the prospects.

Along those same lines, FanGraphs has started including with each ranked prospect a chart reflecting his estimated probability of achieving different future value outcomes, like this one for Royce Lewis:

Source: FanGraphs, Eric Longenhagen, February 2021: Link

Conclusion

I’m one of the first in line to get excited about the Twins’ highly ranked prospects. I know better, but I still let myself become confident they are the next can’t-miss thing. The reality is much more complex. The data above make it clear that projecting and developing talent is an inherently risky proposition. Even the consensus top prospects have high failure rates. We would all be wise to adopt more of a probability mindset and remember these base rates when putting expectations on our favorite prospects. Despite all that, it remains true that the best players are most likely to come from the top prospect lists.

But they also come from unexpected places. FanGraphs’ Jeff Sullivan has for a number of years tracked the production of players who never appeared on a top 100 prospect list. He found that around 30% of the players who produce 3+ WAR in a given season (which equates to 60 FV, per Table 1 above) were never ranked on a top 100 prospect list, and that about 40% of the league’s total WAR in any given year comes from players who were unranked as prospects.

As baseball author Roger Kahn wrote, “it is dangerous to spring to obvious conclusions about baseball or, for that matter, ball players. Baseball is not an obvious game.”


John is a contributor to Twinkie Town with an emphasis on analytics. He is a lifelong Twins fan and former college pitcher. You can follow him on Twitter @JohnFoley_21.