What’s in a Ranking?

If you’ve ever perused our CTA rankings database, you’ll notice that every program in it is given a flag ranking between 1 flag and 5 flags, with 5 being the highest ranking awarded. These rankings are based on a proprietary algorithm that analyzes risk and performance across multiple time frames, alongside a variety of other factors, to provide what we believe is a solid reflection of program quality, particularly since our database has been thoroughly scrubbed to exclude programs that are unregistered, don’t offer managed accounts, or pose significant operational risk (you can see our most recent rankings here). We even rely on these rankings in-house to guide our portfolio construction for clients, though they are by no means the final word in the allocation process.
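To make the general idea concrete (and to be clear, this is a purely hypothetical sketch, not our proprietary algorithm), a multi-time-frame, risk-adjusted flag ranking could look something like the Python below. The function name `flag_ranking`, the look-back windows, the risk-adjustment formula, and the flag cutoffs are all illustrative assumptions, not anything drawn from our actual methodology.

```python
import numpy as np

def flag_ranking(monthly_returns, windows=(12, 36, 60)):
    """Toy 1-to-5 flag ranking. Hypothetical illustration only --
    NOT the proprietary algorithm described above.

    Scores risk-adjusted performance over several look-back windows
    (in months) and maps the blended score onto a 1-5 flag scale.
    """
    scores = []
    for window in windows:
        if len(monthly_returns) < window:
            continue  # skip windows longer than the track record
        recent = np.asarray(monthly_returns[-window:])
        # Annualized return and volatility over this window
        ann_return = (1 + recent).prod() ** (12 / window) - 1
        ann_vol = recent.std(ddof=1) * np.sqrt(12)
        # Worst peak-to-valley drawdown over the window
        equity = (1 + recent).cumprod()
        drawdown = (equity / np.maximum.accumulate(equity) - 1).min()
        # Simple risk-adjusted score: reward return, penalize
        # volatility and drawdown
        scores.append(ann_return / (ann_vol + abs(drawdown) + 1e-9))
    if not scores:
        return 1  # too little history to rank meaningfully
    blended = float(np.mean(scores))
    # Map the blended score onto 1-5 flags via fixed, illustrative cutoffs
    cutoffs = [0.0, 0.25, 0.5, 1.0]
    return 1 + sum(blended > c for c in cutoffs)
```

The design point the sketch is meant to show: penalizing volatility and drawdowns alongside rewarding returns means a program can post great numbers and still rank poorly, which is exactly the gap between a returns-only list and a risk-aware one. A real process would layer qualitative due diligence (registration status, account structure, operational review) on top of anything quantitative.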

That’s how we do it, at least.

We are definitely not the only ones who rank managed futures programs; a variety of rankings and awards get distributed every year in the space. But that doesn’t mean all rankings are created equal. Many use incomplete or inaccurate databases to create their lists, some base their recognition solely on returns, and others fail to conduct any form of due diligence on the managers they’re evaluating, leading to some very questionable choices, in our minds. But to each their own, right?

Except… sometimes the problems with these rankings and awards can be pretty egregious. A client of ours was recently reviewing a set of rankings in the managed futures space and, upon seeing one of the names on the list, couldn’t help but balk.

“I was excited to see the awards and winners…. Up until I saw [a program] I believe… are falsely reporting numbers. It didn’t bother me until I saw they were getting an award for it.”

It’s not the awards and rankings themselves that we take issue with. Regardless of any methodology differences we may have with those bestowing the honors, the recognition, in and of itself, isn’t dangerous. In fact, we applaud the curators’ efforts to recognize programs in the managed futures space, and the industrious investors who use these lists as a starting point for research.

Typically, two kinds of investors view these lists. The first will consider the rankings, question the methodology behind them, and, depending on how they feel about that basis, make calculated decisions about which of the listed programs to investigate further. The second will take one look at the rankings and assume the programs in question are obviously high quality and worthy of consideration. Some in this group will make an allocation based on the recognition alone. Unfortunately, these investors wind up chasing hype around investment opportunities carrying far more risk than the conferred accolades would ever imply, like a program with a highly suspect track record.

The point? As with any piece of investment news or educational resource (or really, any allocation consideration), we advise investors to question everything. Rankings and awards can be a great starting point for your research, but no matter the source of the recognition, it’s important to dig deeper and get all the facts before making any decisions.