ThinkAdvisor


THE GLUCK REPORT: Optimizing Optimizers


Independent advisors who think they are doing a good job for their clients ought to know about Mark Kritzman’s rigorous approach to asset allocation. Kritzman’s firm, Windham Capital Management Boston LLC, advises large institutional investors, managing over $6 billion in currency strategies while also providing asset allocation advice to large pension funds, endowments, foundations, and public institutions. Kritzman is also research director of the Research Foundation of the CFA Institute (formerly the Association for Investment Management and Research) and has written dozens of articles on portfolio management in scholarly journals. It is precisely because Kritzman’s methods are not widely applied among independent advisors that he deserves a hearing. In the interview that follows, I focused my questions on optimizers, an investment tool on which Kritzman is an expert and from which advisors have generally moved away.

What do you think of advisors who are not using optimizers? Let’s say there are two advisors who agree on a forecast of returns, standard deviations, and correlations. Let’s say their forecasts are reasonably good, and that one of them uses an optimizer. Suppose each has a client who wants to select the portfolio that has no more than a 5% chance of depreciating by 20% in any given year. If you are advising that client ad hoc, how can you answer that question? How do you figure out which portfolio is going to give the highest return for the amount of risk the client is willing to take? You can’t do that unless you have an optimizer. You can’t work out that calculation in your head, and rules of thumb are not going to help. Second, without using mathematics, how do you estimate the likelihood of a loss? How do you know in your head what that probability ought to be? It is like preparing your taxes without a calculator. I think there are advisors who don’t really understand what the contribution of an optimizer is. As with a lot of things people don’t understand, it is easier to reject the tool than to admit you don’t understand it. If you are an advisor to individuals and you don’t understand the technology out there, you may not admit that; you are just going to say it doesn’t work. I’m not saying that this is what everybody does, but I suspect that is what is going on to a large extent.
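The 5%-chance-of-a-20%-loss question Kritzman poses can be framed as a tail-probability calculation. A minimal sketch, assuming (purely for illustration, not as Kritzman’s model) normally distributed annual returns and made-up inputs:

```python
# Hypothetical sketch: probability of losing more than 20% in a year,
# assuming normally distributed annual returns (an illustrative assumption).
from statistics import NormalDist

def prob_of_loss(exp_return: float, std_dev: float,
                 loss_threshold: float = -0.20) -> float:
    """P(annual return < loss_threshold) under a normal assumption."""
    return NormalDist(mu=exp_return, sigma=std_dev).cdf(loss_threshold)

# A portfolio with an assumed 8% expected return and 15% volatility:
p = prob_of_loss(0.08, 0.15)   # roughly a 3% chance, within the 5% tolerance
```

The client’s 5% tolerance then becomes a constraint: among candidate portfolios satisfying `prob_of_loss(mu, sigma) <= 0.05`, the optimizer selects the one with the highest expected return.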

Optimizers were popular in the 1990s among high-end RIAs working with the mass affluent, but came to be viewed as a rear-view-mirror tool for building portfolios. You want to come up with a portfolio that is efficient. For a particular level of risk, you want to identify the combination of assets that is going to give you the highest expected return. Or, for a particular expected return, you want to find the portfolio that has the least amount of risk. The optimizer will isolate all of those portfolios, the ones offering the highest expected return at each level of risk. The optimizer’s solution is based on estimates you make for each asset’s expected return and standard deviation, and for how their returns are correlated with one another. Sure, there are going to be mistakes in your inputs.
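As a concrete, deliberately simplified illustration of what an optimizer does, here is a two-asset sketch with assumed inputs: scan stock/bond mixes and keep the highest expected return whose volatility stays within a 10% risk budget.

```python
# Toy mean-variance search over stock/bond mixes (illustrative inputs only).
import numpy as np

mu = np.array([0.08, 0.04])               # assumed expected returns: stocks, bonds
sigma = np.array([0.15, 0.05])            # assumed standard deviations
corr = 0.2                                # assumed stock/bond correlation
cov = np.array([
    [sigma[0] ** 2,               corr * sigma[0] * sigma[1]],
    [corr * sigma[0] * sigma[1],  sigma[1] ** 2],
])

risk_budget = 0.10                        # client tolerates 10% volatility
best_w, best_ret = None, -np.inf
for w in np.linspace(0.0, 1.0, 101):      # weight in stocks; remainder in bonds
    weights = np.array([w, 1.0 - w])
    vol = float(np.sqrt(weights @ cov @ weights))
    ret = float(weights @ mu)
    if vol <= risk_budget and ret > best_ret:
        best_w, best_ret = weights, ret
```

A real optimizer solves this as a quadratic program across many assets rather than by brute-force scanning, but the objective, the highest expected return within a risk budget, is the same.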

Let’s talk about what you call garbage inputs. Garbage inputs are naïve, hard to defend, or simple-minded. An optimizer is just a sophisticated calculator, and not using one just compounds problems. If you get the inputs right, the optimizer will give you the right answer. If you get the inputs wrong, the optimizer will give you the wrong answer. If you get the inputs right and you don’t use the optimizer, in other words, you come up with the correct estimates for expected returns, volatilities, and correlations and then try to figure out the best portfolio through some kind of seat-of-the-pants approach, you have little hope of getting the right answer. So optimization is a necessary condition for coming up with the right portfolio. But it is not sufficient. To be sufficient, you would also need pretty good estimates. Either way, you have little hope of coming up with the right answer unless you go through some kind of optimization process.

Is using historical returns in your optimizer a garbage input? It does depend on how long a history you have, but if you are using returns over the last five or 10 years, then I would say those are garbage inputs. More data is better. There are longer time series back to 1926 or even longer. However, if conditions are much different today than they were 50 or 100 years ago, then the distant past is less relevant.

But blindly relying on history is a major mistake, right? Yes. That is one of the reasons that optimizers have been criticized. People will put in historical numbers that make no sense and then they will do the optimization. It will give them some answer that either looks very strange or turns out to be not very good, and then they say optimization does not work. Well, that is not what happened. What happened is that they put in garbage for inputs.

And Harry Markowitz never intended it to be that way, right? If you read Markowitz’s classic article, “Portfolio Selection,” he says right at the outset that there are two steps in the portfolio selection process: forming beliefs, and constructing portfolios from those beliefs. His article, the seminal work for which he won the Nobel Prize, addresses the second step, how you take beliefs and build portfolios from them, rather than how you form those beliefs. So the problem with the industry is that people form bad beliefs and then criticize the optimization process as a consequence. That is just silly.

What is your approach to coming up with your own inputs? My approach is based on theory and history. The capital asset pricing model says that an asset’s return should be proportional to its contribution to the broad market’s systematic risk. There are two kinds of risk in the market: systematic risk, which is a function of broad, pervasive economic factors that affect all assets, and specific risk, which is based on factors specific to individual securities or asset classes. The model Bill Sharpe won the Nobel Prize for says you can divide risk into systematic risk and specific risk, and Sharpe points out that specific risk can be diversified away by holding a broad market portfolio. Since it can be diversified away, you should not get any compensation for bearing specific risk.

So how does that get you to your return inputs? The expected returns of different assets should be proportional to how much they contribute to the broad market’s systematic risk. That’s another way of saying that returns ought to be proportional to their betas with respect to the broad market. If assets are fairly priced and markets are reasonably integrated, those are the returns to expect; to the extent you perceive misvaluations, you can trade to exploit them.

That’s your framework. But what do you do to make predictions about returns? I start out with a broadly diversified proxy portfolio for the market and define what premium makes sense for that portfolio versus a riskless investment. Then, I estimate the returns of the individual asset classes based not on history but on their risk relationship with the broad market. First, I define what the broad market is. Then I estimate a premium for the broad market for stocks, bonds, etc. I assess how much of a premium the market should offer over a riskless asset. That becomes my expected return for the broad market. That can be based in part on the historical premiums. At that point, I will look at each asset’s contribution to the systematic risk of the broad market I have defined. Once I have that, based on the historical volatilities and correlations of the assets, I can calculate each asset’s beta with respect to this market portfolio. I call this my reference portfolio. Then I calculate the returns for the asset classes that are proportional to their betas, and those become my default assumptions. I assume in all of this that the market is perfectly integrated, that everything is fairly priced, that equilibrium prevails. In those circumstances of equilibrium, the beta-related returns are those that I expect. That is the basis of my inputs, even if they are quite different from the historical returns, because the historical returns will be specific to whatever period you used to measure them.

So you are starting with a diversified portfolio of different asset classes, right? Yes, like a stock index, a bond index; it depends on whether you want to think globally or domestically. For example, say I assume a 60/40 mix of global stocks and bonds as my reference portfolio. These are liquid assets, reasonably well integrated. I estimate what risk premium makes sense for that broad portfolio versus a riskless asset. So I need two inputs: the riskless return and the expected return, or the risk premium, of that 60/40 mix. Then, I examine the betas of the individual assets, for example, U.S. stocks, with respect to this broad world portfolio. I make each asset’s expected return proportional to its beta, the amount it contributes to the portfolio’s systematic risk. That is my departure point. It starts from the premise that if assets are fairly priced, equilibrium prevails, and world markets are integrated, then these are the returns that make sense based on how much risk each asset contributes to the total world portfolio. There is Nobel Prize-winning theory to support that approach.
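The procedure described, define a reference portfolio, pick a premium over the riskless rate, and set each asset’s return proportional to its beta, can be sketched as follows. All numbers here are illustrative assumptions, not Windham’s actual estimates.

```python
# Sketch of beta-proportional equilibrium returns (assumed inputs throughout).
import numpy as np

assets = ["US stocks", "intl stocks", "bonds"]    # hypothetical asset classes
cov = np.array([                                  # assumed covariance matrix
    [0.0225, 0.0180, 0.0015],
    [0.0180, 0.0256, 0.0012],
    [0.0015, 0.0012, 0.0025],
])
w_ref = np.array([0.40, 0.20, 0.40])              # reference mix (60% stocks / 40% bonds)

rf = 0.03                                         # assumed riskless return
premium = 0.035                                   # assumed premium for the reference mix

ref_var = float(w_ref @ cov @ w_ref)              # reference portfolio variance
betas = (cov @ w_ref) / ref_var                   # each asset's beta vs. the reference
equilibrium_returns = rf + betas * premium        # returns proportional to beta
```

A useful sanity check on this construction: the betas, weighted by the reference weights, sum to one, so the reference portfolio itself earns exactly the riskless rate plus the assumed premium.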

Why is integration a key for you in determining an asset’s forecasted return? Rather than basing my expectation on some finite sample, I would rather have some theory, some logic for understanding what returns ought to be. And the theory I would start out with is that returns should be proportional to risk and, in particular, they ought to be proportional to the asset’s contribution to the market’s systematic risk. That’s the equilibrium return. If the markets were in equilibrium, this is the return that would cause the assets to be fairly priced. Sometimes returns are explained more by an asset’s individual volatility than the amount it contributes to the world’s volatility. So I estimate the degree of integration of each asset class. To the extent an asset is perfectly integrated, I make its returns proportional to its contributions to the systematic risk. This is an equilibrium return. However, to the extent an asset class is segmented–that is, not integrated–I make its returns proportional to its own risk.

Take us to the next step: coming up with the inputs for an optimizer. The next step is to determine whether a particular asset class is overvalued or undervalued. To the extent you have a view about that, our software allows you to blend your view with the equilibrium return. You can also express how much confidence you have in your view versus an asset’s equilibrium return.

You can also blend your view of the future for an asset class with the equilibrium return? I blend returns based on an asset’s degree of integration versus segmentation. Beyond that, I personally would stop, because I don’t feel that I have any ability to forecast returns beyond that. Others may feel that they have superior abilities to forecast returns based on fundamental analysis or other information they have. If you have a view like that, then the extent to which you depart from the equilibrium return ought to be proportional to how much confidence you have in your view versus this theoretically justifiable return. So, for example, if the S&P 500 has an equilibrium return of 7.5% but you think it ought to be 10% going forward, and you are equally confident in the equilibrium 7.5% versus your 10% view, then we will blend those returns 50/50. If you are twice as confident in your view as you are in the equilibrium return, then you can give your return a two-thirds weighting and the theoretically based return a one-third weighting.
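The 50/50 and two-thirds/one-third blends described above amount to simple confidence weighting. A minimal sketch (the function name and interface are hypothetical, not Windham’s software):

```python
def blend(view: float, equilibrium: float, confidence_ratio: float) -> float:
    """Blend a personal return view with the equilibrium return.

    confidence_ratio is the confidence in the view relative to the
    equilibrium: 1.0 means equally confident (a 50/50 blend), 2.0 weights
    the view two-thirds. (Hypothetical helper for illustration.)
    """
    w_view = confidence_ratio / (1.0 + confidence_ratio)
    return w_view * view + (1.0 - w_view) * equilibrium

fifty_fifty = blend(0.10, 0.075, 1.0)   # 8.75%
two_thirds = blend(0.10, 0.075, 2.0)    # about 9.17%
```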

This calculation, your view of the world versus the equilibrium view, comes built in with some optimizers? Yes. It is called Bayesian analysis. You can use Bayesian analysis to blend your view of an asset class with historical returns or with equilibrium returns. It is just saying that I have various sources of information and I want to combine them in a statistically sensible way. And there are optimizers that come ready to make that calculation for you; ours does, and others do as well. Our software allows you to change your degree of confidence for the different assets you are considering. So you can say, “I think I know a lot about U.S. stocks but not very much about Japanese stocks,” and weight your view twice as much as the equilibrium return for U.S. stocks but go with just the equilibrium rate of return for Japanese stocks.

My guess is that advisors don’t know that optimizers may have this built in, giving them the ability to combine quantitative analysis with their own fundamental analysis. Other add-ons to optimizers introduced in recent years help advisors in other ways, specifically with sensitivity to return and standard deviation inputs, or estimation error. Is there a good way of dealing with the problem advisors have of optimized portfolios being dominated by the best-performing asset? There are a lot of ways of doing it. Say that you want to predict the distribution of the batting averages of major league players at the end of the season, and you’re doing it today, in an early part of the season. You could say simply that their batting averages at the end of the season will be whatever they are today. The problem with that approach is that the sample is not very large; we are not that far into the baseball season. What would be better is to say, “Let’s take each individual player’s batting average, and let’s take the average of all their averages to arrive at a grand average. Then, we will blend each individual average with this grand average, and that will be our forecast of what everybody’s average is going to be at the end of the season.” The idea is that for small samples you have estimation error, and a way of dealing with that is to blend the individual observations with some grand average. You have this very broad dispersion early in the season that gradually attenuates over time, until you get the normal dispersion by the end of the season. By averaging in the grand average, you can weight it depending on how far into the season you are. The further into the season you are, the less you weight the grand average and the more you weight the individual batting averages, and vice versa.

You make up for the fact you don’t have enough data? Yes, and there is a specific reason why this is important when using an optimizer. Optimizers are sometimes cynically referred to as error maximizers because they load up on mistakes. They overweight assets whose returns you have overestimated, and they overweight assets whose risk you have underestimated. And the problem does not go away as you add more assets. So you can do the same thing as in my baseball example. You basically calculate the average of all the expected returns and then blend that grand average with each individual expected return. That reduces the sensitivity to estimation error. Technically, you would blend an asset’s return with the return of the minimum-risk portfolio, but it is almost the same idea as the batting-average example I gave you. You want to compress the returns toward a narrower range than what you actually believe them to be. By compressing them, the mistakes you are making will have less of a harmful impact, reducing estimation error. It recognizes that optimizers load up on errors and addresses how to reduce that problem.
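The batting-average remedy is a shrinkage estimator: each raw estimate is pulled toward the grand average, with the pull strongest when samples are small. A toy sketch with assumed numbers:

```python
def shrink(estimates, weight_on_grand_mean):
    """Blend each estimate with the grand average of all estimates.

    weight_on_grand_mean near 1.0 suits small samples (pull hard toward
    the average); near 0.0 suits large samples (trust the raw estimates).
    """
    grand = sum(estimates) / len(estimates)
    return [weight_on_grand_mean * grand + (1.0 - weight_on_grand_mean) * e
            for e in estimates]

raw = [0.12, 0.07, 0.02]        # assumed raw expected returns
shrunk = shrink(raw, 0.5)       # roughly [0.095, 0.07, 0.045]
```

Compressing the estimates narrows the dispersion the optimizer sees, so errors in any one input have less chance to dominate the solution.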

You offer an optimizer to institutions? It allows you to do asset allocation, to allocate across asset classes, managers, and styles. It does about everything except manage individual securities. The key innovation versus what a lot of other programs do is that our program allows you to estimate what your exposed loss is throughout your investment horizon, as opposed to just at the end of it. Most risk measures deal with the distribution of returns at the end of the year, or at the end of 10 years. Our software tells you what can happen all throughout the horizon. We show you how bad it can get today, and at any point between now and the end of your horizon, not just at the end point. There is an article about the mismeasurement of risk on our site at www.wcmbllc.com that explains this.
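The within-horizon idea can be illustrated with a Monte Carlo sketch (this is not Windham’s implementation): simulate monthly portfolio paths and count those that breach a 20% loss at any checkpoint during the horizon, not just at its end. All parameters are assumptions for illustration.

```python
# Monte Carlo sketch of within-horizon risk: the probability of being down
# 20% or more at ANY monthly checkpoint during the horizon. Illustrative only.
import math
import random

def within_horizon_loss_prob(mu=0.07, sigma=0.15, years=10,
                             steps_per_year=12, barrier=-0.20,
                             n_paths=5000, seed=0):
    rng = random.Random(seed)
    dt = 1.0 / steps_per_year
    hits = 0
    for _ in range(n_paths):
        log_wealth = 0.0
        for _ in range(years * steps_per_year):
            # lognormal wealth path with assumed drift and volatility
            log_wealth += ((mu - 0.5 * sigma ** 2) * dt
                           + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            if math.exp(log_wealth) - 1.0 <= barrier:   # breached the loss barrier
                hits += 1
                break
    return hits / n_paths

p_within = within_horizon_loss_prob()
```

Because a path can dip below the barrier and later recover, this within-horizon probability is substantially larger than the probability of being down 20% at the end point alone, which is exactly the mismeasurement Kritzman describes.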



© 2024 ALM Global, LLC, All Rights Reserved. Request academic re-use from www.copyright.com. All other uses, submit a request to [email protected]. For more information visit Asset & Logo Licensing.