I use Morningstar’s star ratings to help me select mutual funds. I only invest in funds that have 4 or 5 stars.
Is this a good measure of success? Is there a better method to use?
Russell Kinnel wrote an article for Morningstar entitled “How Expense Ratios and Star Ratings Predict Success” in which he compared whether Morningstar stars or lower expense ratios were a better predictor of future returns. His conclusion:
How often did it pay to heed expense ratios? Every time. How often did it pay to heed the star rating? Most of the time, with a few exceptions.
The analysis used “three key measures”: (1) Success Ratio (whether the fund survived the period rather than being liquidated or merged away), (2) Total Returns, and (3) Subsequent Star Ratings.
Using subsequent star ratings as a success measure is somewhat circular, because it presumes that star ratings bear some correlation to returns when the whole question is whether they’re correlated at all. Regardless, low expense ratios were the best predictor of a fund’s bright future. As Kinnel wrote:
If there’s anything in the whole world of mutual funds that you can take to the bank, it’s that expense ratios help you make a better decision. In every single time period and data point tested, low-cost funds beat high-cost funds. …
Expense ratios are strong predictors of performance. In every asset class over every time period, the cheapest quintile produced higher total returns than the most expensive quintile.
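The mechanism behind that finding is simple compounding. Here is a quick back-of-the-envelope sketch (my own illustrative numbers, not Kinnel’s data) of what a fee gap does over thirty years when two funds earn the same gross return:

```python
# Illustrative sketch, not Kinnel's data: two funds earn the same 7%
# gross return; the only difference is the annual expense ratio.
def final_value(gross_return, expense_ratio, years, start=10_000):
    """Grow `start` for `years` years, deducting the expense ratio each year."""
    return start * (1 + gross_return - expense_ratio) ** years

cheap = final_value(0.07, 0.001, years=30)   # 0.10% expense ratio
pricey = final_value(0.07, 0.010, years=30)  # 1.00% expense ratio
print(f"cheap fund:  ${cheap:,.0f}")
print(f"pricey fund: ${pricey:,.0f}")
print(f"fee drag:    ${cheap - pricey:,.0f}")
```

Even a 0.9-point fee gap, left to compound, eats a meaningful share of the final balance. That mechanical drag applies in every market environment, which is why low-cost funds keep beating high-cost funds.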
Given this analysis from Morningstar themselves you might be tempted to think that no one would pay attention to Morningstar stars. Unfortunately that is not the case.
Morningstar stars are still one of the primary marketing devices funds use to tout their wares and justify higher-than-normal expense ratios. User arhant made an insightful comment on Kinnel’s article which read:
Much is made of the M* star ratings! What value do they provide other than attracting the investors’ money to those funds[?] A great service to the funds themselves. Academic studies have proved that past performance is poor predictor of future performance. And M* ratings are based on past performance, aren’t they?
Here is a quote from p.181 of the book “Unconventional Success” by David Swensen of Yale University, “Because Morningstar fails to realize that information on past performance provides precious little advantage in the hunt for superior future performance, the firm’s regular attempts to tweak its ratings system hold no promise for success”. Can this be disproved with factual data?
Arhant is correct: Morningstar relies on past performance to bestow stars. Also, those funds with strong past performance are more likely to have had lower fees and expenses. Funds with lower fees and expenses are more likely to have greater success in the future. Therefore, funds with lower fees and expenses are likely to have a better Morningstar star ranking.
This logic provides a correlation between Morningstar stars and future performance only because there is a correlation between Morningstar stars and lower fees and expenses, not because Morningstar really knows how to pick winners. They’re using the same logic you could yourself, except that sometimes they trip themselves up by trying “more sophisticated” ranking methods.
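That circularity can be illustrated with a toy simulation (my own sketch, not Morningstar’s methodology): give every fund purely random gross returns, so past performance carries no skill signal at all, and let fees be the only persistent difference between funds. Ranking on past net returns then “predicts” future returns only because it quietly sorts funds by fees:

```python
import random

# Toy world: no fund has any repeatable skill; only fees persist.
random.seed(42)
N = 10_000
fees = [random.uniform(0.001, 0.02) for _ in range(N)]  # persistent expense ratios
past = [random.gauss(0.07, 0.02) - f for f in fees]     # past net returns (noise - fee)
future = [random.gauss(0.07, 0.02) - f for f in fees]   # future net returns (new noise - fee)

def avg(values):
    values = list(values)
    return sum(values) / len(values)

# Crude stand-in for stars: rank every fund by its past net return.
ranked = sorted(range(N), key=lambda i: past[i], reverse=True)
top, bottom = ranked[:2000], ranked[-2000:]

# Top past performers do go on to beat bottom past performers on average...
print("future return edge:", avg(future[i] for i in top) - avg(future[i] for i in bottom))
# ...but only because ranking on past returns indirectly selected lower fees.
print("fee gap (top - bottom):", avg(fees[i] for i in top) - avg(fees[i] for i in bottom))
```

In this toy world there is nothing to “pick”: all of the ranking’s predictive power comes from the fee gap it happens to capture, which is exactly what you could screen for directly.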
So when did Morningstar stars fail to point investors toward success?

Take for example the 3-year gross returns for International Equity ending in March of 2010. Funds in the cheapest quartile of expense ratios had returns a full 1.54% higher than those in the priciest quartile. Meanwhile, funds rated five stars actually underperformed funds with only one Morningstar star by 0.54%. Simply sticking to low expense ratios would have served investors better. Remember, “In every single time period and data point tested, low-cost funds beat high-cost funds.”
Stars are assigned based on comparative past performance within a Morningstar category. Because they’re based on relative performance, not actual performance, when a fund is re-categorized, its star rating can change without anything actually being different.
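A minimal sketch of that mechanism (the returns below are made up, and the percentile rank is a crude stand-in for, not a reproduction of, Morningstar’s rating formula): the very same fund return lands at a very different rank depending on which peers it is compared against.

```python
# Hypothetical illustration: a star-like rating is a rank *within a
# category*, so re-categorizing a fund changes its rating even though
# the fund's own return is unchanged.
def percentile_rank(fund_return, peer_returns):
    """Fraction of peer funds this fund beat."""
    beaten = sum(1 for r in peer_returns if fund_return > r)
    return beaten / len(peer_returns)

gold_fund = 0.03                               # our fund's (made-up) return
all_us_funds = [0.12, 0.15, 0.10, 0.18, 0.08]  # broad, tech-heavy category
metals_funds = [0.01, -0.02, 0.02, 0.00]       # sector-peer category

print(percentile_rank(gold_fund, all_us_funds))  # 0.0 -> bottom of category
print(percentile_rank(gold_fund, metals_funds))  # 1.0 -> top of category
```

Same fund, same return; only the comparison set changed.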
Currently there are about 50 different Morningstar categories, but this was not always the case. In 2000, all United States funds were lumped together. Because of the technology bubble of the late 1990s, the only five-star funds were large-cap US technology funds. At that time some investors thought they were diversified when they owned five different technology funds, each of which had Cisco as its largest holding. Even a “conservative” choice, an S&P 500 index fund, had Cisco as its largest holding. Some investors had over 30% of their portfolio in Cisco simply because they invested only in funds given five stars by Morningstar.
Meanwhile, funds like Vanguard Gold and Precious Metals (VGPMX) had only a single Morningstar star simply because all U.S. funds were lumped into the same category. Small allocations to different sectors are a good way to diversify, and VGPMX has a very low expense ratio.
In 2000, as the markets were about to change, there was a very strong inverse correlation between Morningstar stars and future performance. Because different categories prosper for years at a time, at the end of March 2000, 5-star funds were poised to do the worst and 1-star funds were often poised to do the best. Here is what happened to Cisco and VGPMX:
Back then, Morningstar did not have enough categories to fairly compare the past returns of funds. The very same VGPMX that had one star in 2002 now, after re-categorization, has four.
Vanguard Precious Metals and Mining, as it is now called, is currently in the Morningstar Category “Equity Precious Metals.” Breaking funds up into different categories has helped make Morningstar stars more relevant again, but there still exists the problem of different types of funds being lumped into the same Morningstar category.
VGPMX is currently lumped in with funds that are pure gold mining funds. VGPMX changed its name because it focuses not only on gold but also on silver, platinum, diamonds, and other precious and rare metals or minerals such as iridium, rhodium, rhenium, and palladium.
As a result, when gold does well, funds invested exclusively in gold get the most Morningstar stars in the Equity Precious Metals category. But when gold does poorly, VGPMX gets the most stars in the category. This flip-flopping on account of different investment mixes doesn’t help an investor pick the best fund. You may want VGPMX as a more diversified fund even when gold has done better recently and VGPMX has fewer Morningstar stars, simply because it is being compared against exclusively gold mining funds.
Other similar problems exist with the Morningstar categories.
For example, mutual funds in the “Balanced Fund” category that hold 60% stocks and 40% bonds will, on average, underperform 75%/25% funds and outperform 50%/50% funds, because stocks have higher average returns even though they are also more volatile. Since all these funds are compared against one another for stars, funds holding more stocks will tend to earn better star ratings than those holding more bonds – the stars largely just reflect the percentage allocated to stocks. And if the markets have done poorly for three years, stock-heavy funds will have fewer stars at precisely the time when the markets may be about to reverse and do better again.
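The arithmetic behind that claim is a simple weighted average. Using assumed long-run returns of 8% for stocks and 4% for bonds (illustrative numbers, not historical data), the category’s internal ranking is fixed by the stock percentage alone:

```python
# Assumed long-run average returns (illustrative, not historical data).
STOCK_RET, BOND_RET = 0.08, 0.04

def blended_return(stock_pct):
    """Expected return of a stock/bond mix, ignoring rebalancing and fees."""
    return stock_pct * STOCK_RET + (1 - stock_pct) * BOND_RET

for stock_pct in (0.75, 0.60, 0.50):
    print(f"{stock_pct:.0%}/{1 - stock_pct:.0%} mix: {blended_return(stock_pct):.2%}")
# Prints:
# 75%/25% mix: 7.00%
# 60%/40% mix: 6.40%
# 50%/50% mix: 6.00%
```

With these assumptions the 75/25 fund beats the 60/40 fund in an average year by construction, so a category-relative rating rewards the allocation choice, not the manager.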
Other categories need work as well.
iShares MSCI Australia (EWA), iShares MSCI Singapore (EWS), and iShares MSCI New Zealand Capped (ENZL) are all in a Morningstar category called “Miscellaneous Region” and iShares MSCI Hong Kong (EWH) is in “China Region.” Meanwhile, iShares MSCI Pacific ex Japan (EPP) is categorized as “Pacific/Asia ex-Japan Stk” despite 100% of its holdings being invested in the four funds just mentioned.
We have several factors that we use for crafting an asset allocation, but Morningstar stars are not one of them. Just look for low-cost funds. Morningstar would do well to remember that advice too.
Photo used here under Flickr Creative Commons.