ON THE ECONOMY: VOLATILITY – THE COMING ECONOMIC LANDSCAPE
Guest column by Michael Stojsavljevich,
Managing Partner at Episteme Advisory Group and former Chief Strategy Officer at the U.S. Mint
Economic theory has been hotly debated for the past several years, primarily because fiscal and monetary interventions have been the central tools used by countries to kick-start the global economy in the wake of the 2008–2009 financial crisis.
The central issue then was US housing market values and their impact on the credit derivatives market that was tied to mortgages. We know how that ended.
Today the credit derivatives market is again in the spotlight, but the main worry now is sovereign debt defaults, possibly Greece and Spain or maybe Italy.
While most people don’t spend a lot of time thinking about complex financial instruments, one central point should be made: global economic output, or Gross Domestic Product (GDP), is roughly $70 trillion, while the notional value of the global derivatives market is estimated to be between $230 trillion and $1 quadrillion.
That’s not a misprint: possibly $1 quadrillion. This leverage makes any hiccup or single default potentially destructive in magnitude.
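To put that leverage in perspective, here is a quick back-of-the-envelope calculation using the rough figures cited above (the true derivatives total is unknown; the range is an estimate):

```python
# Rough figures cited above, in trillions of US dollars.
# The derivatives range is an estimate, not a measured value.
global_gdp = 70
derivatives_low, derivatives_high = 230, 1000  # $230 trillion to $1 quadrillion

# Outstanding derivatives as a multiple of annual global output
low_multiple = derivatives_low / global_gdp
high_multiple = derivatives_high / global_gdp

print(f"Derivatives are roughly {low_multiple:.1f}x to {high_multiple:.1f}x global GDP")
```

Even at the low end, the derivatives market dwarfs a full year of global output several times over, which is why a single large default can cascade.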
Why so large and how haven’t you heard about it?
Things changed a lot in the late ’90s and early 2000s as the derivatives industry sprang up in the US and the UK and spread into Europe and the rest of the world. The notion of hedging changed: instead of having one asset offset another, firms created all sorts of financial products and then sold off a leveraged revenue stream (usually based on credit) to another buyer, with a third party acting as insurer.
Think AIG and mortgage derivatives. AIG was bailed out so they could pay out insurance claims to companies like Goldman Sachs. Otherwise things at Goldman would have ended very badly.
Today these complex financial instruments are not carried on the books of most banks, or governments for that matter, leading to a lot of confusion. But make no mistake: if a country defaults on its bond payments, the insurer will be called on to make the holder of the bond or revenue stream whole, whether that holder is a bank, insurer, retirement fund, university endowment, etc.
Because many of these derivatives insurers are US or European entities, and the size of the sovereign debt exposure in question is potentially several hundred trillion dollars, things can get very bad very quickly again.
The first-order solution is to have central banks coordinate with swap lines and quantitative easing programs that are direct, as in the US, or indirect, as with the ECB’s LTRO program. This gives the countries in question money to pay their bondholders and avoid default. It also gives political leaders 10 more months until the cycle repeats itself.
This year more debt will need to be purchased via quantitative easing programs to keep countries like Greece, and now Spain and Italy, fiscally sound for the remainder of 2012, as the European recession has created larger funding gaps.
Some give and take between fiscal austerity and monetary intervention is occurring now and is squarely front and center in the financial and political headlines, but some solution will probably arise at the last moment, much as it did in 2011.
If not, much may unravel. Literally.
What does all of this fiscal spending and monetary intervention mean?
Well, it means a lot of volatility in commodity prices and stock values year after year as we continue to waver between fiscal brinkmanship and ultra-easy monetary policies. It also means less corporate visibility and therefore a stagnant jobs market. A sort of negative feedback loop, again.
As revenue-maximizing pricing strategies and discounting become equal parts of every corporate strategy each year, corporate leaders and planners, especially at consumer products companies, will need to be ultra-sensitive in planning their pricing structures and new product pipeline introductions based on varying input prices and consumers’ willingness to pay at any moment.
Volatility will also make tools such as polling and consumer focus groups less meaningful: the lagging nature of such tests in a volatile market will force a more proactive effort, which is difficult for most companies to implement quickly.
Bottom line: those able to understand this landscape and nimble enough to implement effective strategies will gain a competitive edge.
I can’t express how much I think the legislative process that all government affairs professionals participate in is like a football game. Both sides on a given issue put their team on the field – lobbyists, lawyers, specialists, partners, etc. – and they fight it out, trying to move the ball (or the bill) to their goal line. I believe that usually the best team wins; not always the best side of the issue, mind you, but the best team in the game. Often, as in football, teams trade players back and forth, and they may come up against each other more than once in a legislative season. Because of this, winning at any cost is not always the objective, for if a side in the game cheats or loses its credibility, it may win one game, but it won’t ever win the Super Bowl.
This issue of credibility is very important, particularly in the work that specialists (be they economists, doctors, statisticians or other “ologists”) do for their clients. Studies are part of the legislative battle plan, and often the credibility of these studies gets called into question. But while it is important to question the results of a given study or report, the fact that it is used in a legislative battle, or commissioned by supporters of one side of an issue, does not mean it is not credible. A good specialist will be able to ensure the soundness of their research while at the same time supporting their team.
So how can someone determine if a study is credible? I suggest seven simple rules.
1. Ensure that a detailed methodology is available.
This does not have to be a complicated methodology with lots of Greek letters and fancy sounding statistical tests, but something that is easy to read and understand. The methodology should say how the study was done and what the data sources were. It should be readily available to anybody who wants it.
2. Ensure that the data used in the study is obtainable.
If public data sources are used in the study, are they cited in the methodology along with a link showing where to obtain them? If private data sources are used (for example IRI, Dun & Bradstreet), are they cited? Often a company cannot actually make purchased data available to others since contracts do not allow it, but is enough information provided just in case someone wants to purchase the data themselves? Survey data often cannot be disclosed due to confidentiality limitations, but does the methodology describe the statistical moments of the data and how it was obtained?
3. Determine if the study was published.
Publication in and of itself does not ensure that a study is credible, since even many peer-reviewed journals (Tobacco Control being an obvious example) are by their very nature biased; but I know from personal experience that the peer review process has always helped to make my research better.
4. Determine if the methods used in the study are sound.
Studies based on survey research are generally of lower quality than those based on statistical models; however, an issue often may not have good data available, and surveys (what we call primary research) are the only way to gather data. If a survey is used, is it fully documented (see item 2)? If statistical tools like regression analysis, linear programming or ANOVA (analysis of variance) techniques are used in the model, are the indicators of statistical significance published? These include t-statistics, F-statistics, p-values, an R-squared statistic, etc. Simply because a statistical method sounds complicated does not mean that a study is credible. Complicated statistical methods may be necessary, but they can also hide poor research.
5. Determine if the results are sensible.
Ask a number of questions when reading the report. Are the results internally consistent? For example, if a state-by-state analysis is conducted, do the results from the individual states add up to the result for the country as a whole? Are economic multipliers too large? Multipliers over 2 or 2.5 should be pulled out of a study for further analysis and explanation. Would a reasonable, neutral person versed in the issue think the study made sense?
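These two sanity checks can be applied mechanically. The sketch below uses invented state figures and multipliers, purely to show the shape of the test:

```python
# Hypothetical study figures (e.g., jobs supported, in thousands),
# invented for illustration only.
state_results = {"CA": 120.0, "TX": 95.0, "NY": 80.0, "FL": 55.0}
reported_national_total = 350.0

# Check 1: do the individual states add up to the national figure?
computed_total = sum(state_results.values())
consistent = abs(computed_total - reported_national_total) < 0.5
print(f"States sum to {computed_total}; internally consistent: {consistent}")

# Check 2: flag any economic multiplier over 2.5 for further scrutiny.
multipliers = {"direct": 1.0, "indirect": 1.8, "induced": 2.9}
flagged = [name for name, m in multipliers.items() if m > 2.5]
print(f"Multipliers needing explanation: {flagged}")
```

Neither check proves a study right, but failing either one is a clear signal to dig deeper.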
6. Determine if the study can be reproduced.
It can be expensive to reproduce a study, but the methodology should provide enough information to do so.
7. Determine if the prose is hostile or particularly biased.
In a legislative battle, we all use studies and talking points to make our case; however, a study should not BE a talking point. Talking points represent the results of a study in order to make a case or an argument. They are by their nature biased and often can be quite shrill. But the study itself should be objective. It may support or discredit a particular point of view but it should do so honestly and without spite. If the language in a study or its methodology suggests bias, then the study is almost guaranteed to be hiding something.
Using these simple rules can help determine the credibility of a given study or author much more so than whether or not the author is a professor, a bureaucrat or a partner in a big accounting firm.