Metric-Suppression is “Muzzling the Baby”

Technically, you can stop a baby from crying by duct-taping the poor child’s mouth, but who would be monstrous enough to think of this as a solution?

In Nassim Taleb’s excellent book Antifragile: Things That Gain from Disorder, he devotes a chapter to lamenting modern civilization’s poor track record in what he calls “recursive thinking”, or thinking in second orders… essentially, thinking steps ahead, and admitting when we cannot, choosing inaction over his aptly named “naive interventionism”.

“Naive empiricism” is the belief that we can rely on statistical significance to justify violently superimposing our loose narratives onto social, medical, and other essentially organic phenomena because we’ve made the math work. It comes coupled with a similar fallacy, “naive interventionism”: the idea that if something is the matter, we nevertheless must do something, whatever the likelihood of a natural resolution, or of exponentially worsening things by intervening.

The case is beautifully laid out for the reader with plenty of examples.

Inflammation-fighters like Celebrex and Vioxx led to horrible side effects while eliminating all the benefits nature embedded in the body’s responses. Drugs meant to reduce morning sickness led to birth defects… we now know that greater morning sickness often accompanies healthier babies.

As we’ll argue time and again, while the economy has many man-made parts, its composite whole is an organic, natural system– layered in complexity, and like any ecology, it is therefore allergic or hostile to mechanistic treatment. The key distinction one more time: non-complex + mechanistic vs. complex + organic systems.

We currently suffer under a chronic naive-interventionist culture in the West because of confusion between these two. We have also desperately confused comfort with health, and we use drugs, laws, and artificial liquidity to stave off any momentary discomfort. Perhaps because our economy requires our active work and participation, we’ve ceased to see ourselves as component parts of an organic whole, and decided that we are component parts of a “well-oiled machine” that must be actively oiled, run, and rebuilt upon any “breakdown”. But the metaphor doesn’t carry over.

For one, the economy isn’t something any human designed; its laws are each discoveries, and usually loose, if sound, analogies and metaphors. A poor metaphor, like comparing debt-liquidity to fuel, or comparing our national economy to a vehicle, leads our research and theory desperately astray. Many of our best minds are consumed with which input to alter, to what degree, to optimize outputs… as with a machine.
A more modest/honest view would acknowledge that even our successes in economic apprehension don’t capture what’s really going on, but are simply useful attitudes and heuristics.

So we aren’t good enough at recursive thinking to achieve much beyond damage: we can’t see the second- and third-order effects well enough to intervene centrally with the precision or comprehensiveness needed to avoid catastrophe.

So we treat symptoms. We suppress the uncomfortable metric. The baby cries, and instead of trying to understand and respond to the signal with some nourishment, we muzzle the noise… and we blame all the dumbest things as the baby slowly dies.

The US economy, circa 2008, was a great muzzled baby. But in this case it badly needed to shift in its crib and to vomit out all its caustic assets and insolvent firms… its poison and its literal parasites (GM, for instance).

Bernanke’s Fed, arm in arm with Paulson’s Treasury, buried this baby in so much muzzle that it’ll be a miracle if we don’t smother.


“Does Cheap Oil Cause Recession?”

[Excerpt from the January Austrian Investing Monthly Newsletter. Download free at St Onge Research.]

Does cheap oil cause recession?

Not when Oil Supply is Soaring.

For the past year, the price of oil has been plunging, spooking markets going into year-end. Oil is one of Wall Street’s favorite recession indicators. So are they right to worry?

In short, no. Falling oil prices today are supply-driven. Meaning that today’s cheap oil is a boom indicator, not a recession indicator. Because cheap credit subsidizes investment, and oil is one of the most capital-intensive industries out there.

It’s actually better than that: falling oil prices are great for the rest of the economy. Suggesting that cheap oil itself contributes to keeping the boom going.

First, the data: from the International Energy Agency, oil prices have fallen by more than half since June, 2014, from $107 on benchmark WTI to under $49 today.


[Chart: World Oil Prices]

When you see a price move the first thing to ask is whether it’s coming from supply or demand. Is oil falling because there’s more oil, or is it falling because people aren’t buying as much?

The answer, so far, is supply: the IEA estimates that global oil demand rose by 1.4 million barrels per day, while supply rose by 1.7 million barrels per day, between Q1 2014 and Q3 2014. The rise isn’t just Dakota frackers; OPEC’s output rose in line with overall supply.

Zeroing in on the US, the story is even more dramatic: healthy demand vs even healthier supply. Check out the charts below: demand is “liquid products supplied” and supply is production. The other usual suspects, China and Europe, are both using oil normally as well (see charts next page).

The bottom line is healthy but not soaring worldwide demand, consistent with the later boom stage of the cycle. Paired with soaring supply, this gives us lower prices.

[Chart: US fuels production (supply)]

So if demand is stable, why is Wall Street so worried about cheap oil?

Two reasons. First, statistically, oil prices correlate with recessions. And, second, they worry that cheap oil might hurt the economy in other ways.

Let’s unpack these concerns. First, the correlations. Wall Street analysts run on statistics, not theory. Statistics are a beautiful thing, but one of their biggest strengths is also a big weakness. In particular, statistics cancel out noise. Meaning that one-off events disappear from statistical analysis. So if you’ve got frequent replicable events — say, rainy days and umbrellas — then statistics are your friend. On the other hand, if you’ve got rare events — say, fracking booms — then statistics will give you the runaround.

[Chart: US fuels supplied (demand)]

This means Wall Street is running with the correlations, which come from the most common event affecting oil prices — drops in demand. Drops in demand for oil absolutely correlate with recessions, since a drop in demand suggests businesses are using less oil.

On the other hand, supply disturbances, while common in oil, are each unique. Libyan rebels, the Suez closing, subsidies paid for exploration, fracking. Each is unique. And correlations are actually intended to eliminate unique events as “noise.”

So what’s happening today is a rare (in a statistical sense) event, which is a secular rise in oil supply.

Now, “rare” doesn’t mean “noise” that we can just ignore. Indeed, this rise in supply is a very useful cycle indicator, just not in the way Wall Street thinks. It’s an indicator of cheap money. Oil exploration and development takes scads of money, and cheap money subsidizes that investment. Meaning that cheap money increases the supply of oil.

[Chart: Europe crude demand]

This means that a falling price due to rising supply is actually a boom indicator. Specifically, it’s a late-boom indicator, telling us that money’s been cheap long enough to subsidize lots of new oil projects to fruition.

Now, this isn’t exactly news. We already know the boom is late-stage simply because we can see how low rates have been. The Fed’s been feeding this boom for a good 6 years now. Oil prices are giving us no new information — they’re simply a confirmation. They’re simply what happens when you feed the world cheap money for 6 years — you get oversupply of capital-intensive goods.

[Chart: China oil imports]

Now let’s consider the second question: whether cheap oil will hurt the rest of the economy. It’s actually a bizarre question to even ask; I guess analysts who don’t know theory just read off the correlations and wave their hands at what happens in between. Why bizarre? Because cheap stuff makes the economy grow. Cheap stuff means there’s more to spend on other stuff. So cheap oil means more money going to iPads, movies, vacations, whatever. It’s an unmitigated good for the rest of the economy.

So if the price of oil today isn’t important as a recession indicator, what should you watch instead? Demand. In particular, domestic demand for oil. If this starts falling, then you’ve got a genuine recession indicator. And that’s when to fret.

What about the future price of oil? Oil is a very noisy indicator – remember those frequent “unique” events. So the best we can say is what the cycle will do. And the cycle trend is weak (i.e. it won’t overcome noise) for the short term, until either Europe or China gets worse, or until the US economy starts to turn down. At which point oil really starts getting its demand-driven price drop to go along with that supply-driven drop.

At that point, when the recession is coming, will prices go down? Again, with a price as volatile as oil, cycles only give the trend. So the price will tend to fall, but the trend is no guarantee. For example, if a US downturn shutters fracking massively, or makes it hard to finance new oil exploration, then the price could actually rise in a recession. So we want to be careful to know what’s driving prices: the supply side, the demand side, and any outside noise.

(This, by the way, is why commodities investment is not for beginners. You really do have to live and breathe a commodity to invest competently. There’s always something you missed — oh, I forgot Qaddafi might get shot and Libyan refineries closed for a year.)

Finally, whither fracking? The old-hand investors in natural resources know this, but resources is no place for widows and orphans to invest. Fracking today is levitating on a flood of easy money. And, when that money cuts off, many fracking investments will be under-water. Expect a bust when the smoke clears, intensifying after interest rates start rising. For now, if you’re running a $5,000-a-spot trailer park in North Dakota, enjoy the boom but do put something aside for the bust.

Dr. Taleb and the Fallacy of Scale

Dr. Nassim Taleb is a thinker absolutely to be reckoned with, especially in the realm of statistical meaning and market data. The following is a short bit from his blog, Opacity.

140 Why Did Communism Fail?

The common interpretation is that communism failed because it did not line-up to human nature, disregarded incentives, free-market matters etc. But I have not heard any commentary attributing a share of the failure to the top-down implementation by gigantic states & the necessity of a large state for that –making nonlinearities & second order effects dominate.

The large state is qualitatively different from the very small municipal state, one in which people have visual contact with those implementing public policy. The large state brings fragility, the small municipality brings robustness. Just as there is a fallacy of aggregation, I believe in the fallacy of scale (because of concavities). Properties change with scale

“The Empiricism Myth” (from Paul Sztorc’s truthcoin mythbusting)

my note: Paul Sztorc is a statistician and PhD student at Yale, with whom I hope soon to be in touch. His work on Prediction Markets (“PMs” in the article), a subject we’ll feature quite a lot here, will be absolutely critical to the future of political economy. I can’t recommend highly enough his work, along with that of Dr. Robin Hanson, on the subject of PMs, Truthcoin, and Futarchy.
For our purposes here, Prediction Markets are a unique example of the proper application of empiricism to social phenomena. I add emphasis to highlight this.

Myth #2: The Empiricism Myth

The belief that certain phenomena count as evidence against the accuracy of PMs. Examples:
- “This PM was predicting that something would happen, but it didn’t happen. Therefore, PMs are inaccurate.”
- “Source X (individual, research paper, statistical model, etc.) published a forecast that this event would occur, and did so before the PM reached that consensus. Therefore PMs are slower than Source X, which tracks the true probability more accurately.”
- “I do not believe PMs are accurate because Source Y investigated PMs and concluded that they…”

Pre-outcome, the claim that one forecast method is better than another is epistemologically impossible. Phrases such as ‘true probability’ and ‘most accurate forecast’ are laughable. Post-outcome, such claims are possible, but likely more difficult than the layperson may suspect.

For a start, the front-runners (probability > 50%) should not always win. In fact, if they did, that would indicate that they were consistently underpriced, and be evidence against the accuracy of PMs. PMs are unique in the forecasting world in that their operation and methodology are completely transparent and reproducible. Moreover, only PMs provide a publicly available forecast at each moment of their existence, in contrast to a poll or research paper whose results are published one time on a single date. As such, PMs are immune to publication bias, as they can neither censor nor cherry-pick their methodology or results. This immunity is highly significant, as publication bias causes roughly 60% to 90% (or more) of the university-grade research findings claimed to be true to actually be false.
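Sztorc’s calibration point is easy to check numerically. Below is a minimal simulation of my own (the function name and parameters are mine, not from the article): if market prices equal true probabilities, a front-runner priced at 60% should win roughly 60% of the time, and winning far more often than its price would itself be evidence of mispricing.

```python
import random

def front_runner_win_rate(price, trials=100_000, seed=42):
    """Simulate a perfectly calibrated market: an event priced at
    `price` actually occurs with probability `price`. Return the
    observed fraction of wins across many such events."""
    rng = random.Random(seed)
    wins = sum(rng.random() < price for _ in range(trials))
    return wins / trials

# A 60% front-runner should win about 60% of the time; if it won
# nearly 100% of the time, the market would be badly underpricing it.
rate = front_runner_win_rate(0.60)
```

The same check run against real PM data is the standard way calibration is assessed: bucket contracts by price, and compare each bucket’s price to its realized win frequency.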

For individual bloggers, TV pundits, or journalists, or other info-prostitutes, who lack scientific training and the controls imposed by peer review, the effects of selective-publication can only be even more detrimental. PMs are not accurate because they have a track record of accuracy. They are accurate because of qualities inherent to their definition as an incentive-compatible meta-tool. The accuracy is not a mysterious result, which “for some reason” we continue to observe empirically, and ultimately generalize by way of induction. The accuracy of a PM is produced by way of information-aggregation, in a completely clear and atomically understood process. If PMs are ever to be discredited, it can only be on the grounds that they fail to efficiently integrate some existing knowledge.




““Big Data” and Austrian Economics”

my note: Dr. Peter St. Onge is an avid student of the business cycle and a member of that rarefied class whose knowledge translates into real results. He is a frequent contributor to the website of LvMI, one of the finest economics and political-economy institutions on the planet, and an editor at Profits of Chaos. He “is an Austrian-school behavioral economist. His research focuses on asset valuation, business strategy and business cycles.”
In this piece, he references Dr. Nassim Taleb and his emphasis on the distinction between “fat-” and “thin-tailed” statistical phenomena. I encourage everyone in finance or econ, not least the aspiring student, to familiarize herself with that work (link below).

[Excerpted from the November Austrian Investment Monthly. Download a full copy at St Onge Research]

“Big Data,” the latest and greatest data fad, makes the Austrian approach even more important to investors. How can savvy investors take advantage?

One key advantage of “Austrian” investing is using theory to guide your choice of data. This advantage is growing massively as Wall Street falls in love with the “Big Data” fad.

In “Big Data,” you toss the kitchen sink into a huge correlation, run your supercomputer, and out pop your recommendations.

What could possibly go wrong? Lots.

Best-selling financial author and professional gadfly Nassim Nicholas Taleb has been on the warpath against Big Data. In an article last year in Wired magazine, Taleb laid out his complaint: the bigger the data, the more likely it will generate noise masquerading as causation.

To see why, let’s back up and ask how statistical studies are born. Like the proverbial sausage machine, the reality is pretty ugly.

The way it’s supposed to work is that a researcher asks an important question, seeks out the best data to answer it, then runs an equation called a regression. The regression spits out an association between A and B — how often they occur together — along with telling you how big the association is, and how likely it is to be accidental noise. The standard in academia is to reject anything that has a greater than 5% chance of being noise, meaning that 1 out of 20 studies will still be noise (remember that “19-times-out-of-20” you always hear in opinion polls? That’s referring to this noise cut-off).

So what are some of the problems here?

First, regressions only tell you associations, not causation. So if band aids are associated with scraped knees, the data doesn’t tell you which caused which. Perhaps band aids cause the scrapes. This may not be a problem when the causation is obvious, but interesting questions are rarely obvious. For example, rising oil prices are associated with economic booms, but do they cause booms or vice versa?

The standard “fix” to figure out causation is lagged data — one data set is earlier than the other. Even here, you can be in a bind. Again with oil: high oil prices are associated with a lagged recession. But again, did the oil cause the recession, or did they both come from some third factor, such as a boom?

Problem number two is the noise problem that Taleb complained about. And Big Data indeed represents a worsening of this problem. Simply because Big Data represents the cheapening of regressions.

Cheap regressions are a problem because it means theory drops out as a quality control. In the old days, when it took weeks of hard-core and tedious calculation to run a regression by hand, you wanted to be pretty sure you’d actually find an association. Meaning the very cost of regression forced researchers to use theory to “weed out” stupid ideas.

Today, however, it would take you literally 3 minutes on a home computer to go grab some data from the Census or UN, toss it into a correlation program like Stata, and spit it out.

Automate this process, which is what Big Data’s about, and you could literally spit out 1,000 regressions all day, every day. Going back to that 5% cut-off on noise: if you run 1,000 regressions on pure noise, about 5% of them are statistically expected to clear the bar anyway. Meaning you’re generating around 50 false associations every single day.
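That arithmetic can be sketched as a quick simulation (my own illustration, not from the newsletter). Under the null hypothesis a regression’s p-value is uniformly distributed on [0, 1], so each pure-noise regression has about a 5% chance of looking “significant”:

```python
import random

def false_positives(n_regressions=1000, alpha=0.05, seed=0):
    """Count how many pure-noise regressions clear the significance
    bar. Under the null, each p-value is uniform on [0, 1], so each
    regression 'succeeds' with probability alpha."""
    rng = random.Random(seed)
    return sum(rng.random() < alpha for _ in range(n_regressions))

hits = false_positives()  # roughly 1000 * 0.05 = ~50 spurious "findings"
```

Each of those hits is a publishable-looking association with nothing behind it, which is exactly the cherry-picker’s raw material.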

Now, those false associations will be on anything under the sun. So what does a PR-minded researcher, or a lying institution do? You just cherry-pick from those 50 anything that confirms what you want to say. Then you publish it. It’s statistically valid — you did use the 5% cut-off, you used the correct data. Of course it’s noise, but the noise is coming from the fact that you originally ran 1,000 regressions. And you don’t have to tell anybody how many you ran.

This hustle is actually a cousin of the classic newsletter fraud, in which I’d launch 32 newsletters predicting rising stocks next month, and another 32 predicting falling stocks. Shut down the 32 letters that were wrong, repeat a few times, and eventually I get a single newsletter that’s been right 6 times in a row: a perfect track record that I can promote the heck out of. Never mind the other 63 letters I quietly shut down.
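The survivorship arithmetic is worth making explicit. A tiny sketch (function name mine): whatever the market does each month, exactly half of the surviving letters are right, so 64 letters are guaranteed to yield one letter with a perfect 6-month record.

```python
def surviving_perfect_records(newsletters=64, rounds=6):
    """Each round, half the surviving letters predict 'up' and half
    'down'; whichever way the market moves, half are right. Shut the
    wrong half down and repeat."""
    for _ in range(rounds):
        newsletters //= 2
    return newsletters

left = surviving_perfect_records()  # 64 // 2**6 == 1 flawless letter
```

No skill required: the “track record” is produced by the shutdowns, not the predictions.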

In the same way, anybody who runs enough correlations will get enough noise to spend a career selling false correlations that, nevertheless, were produced using sterling statistical techniques.

How should a savvy investor react to these data games? Either you can reject all research, knowing that perhaps 90% of it is noise or cherry-picking. Or, my recommendation, you do the theory filter that researchers should have done. Meaning you ask whether it’s a plausible causation. If it’s not, then you toss the correlation. If it is plausible, then you at least keep an open mind that it might, indeed, be true.

Either way, the key here is protecting yourself from the false confidence from the rise of Big Data. Things are going to get much noisier from here, with all those cheap correlations, and the gap in performance between theory-informed evaluations and blind acceptance of Big Data is going to continue to widen.

Empiricism Done Right: “The Truth Behind Truthcoin”

“Some people say money is the root of all evil. Others say it’s a necessary evil. Most can agree that it’s an important human innovation.

But it hasn’t been widely acknowledged that the human desire for money can actually be used to peer into the future.

A Prediction Market (PM) is a place where people can bet on the outcome of an event. If they guess correctly, they win money. If they guess incorrectly, they lose money.

People care about their money, so the market price of an event gives us a good answer to the question “Will this outcome happen or not?”

It’s not magic. It’s the economics version of the “wisdom of the crowd”.
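The incentive logic can be made concrete with one line of arithmetic (my own illustration; I assume a binary contract that pays 1 if the event happens and 0 otherwise): a share bought at price p has expected profit equal to the true probability minus p, so profitable bets exist exactly when the price and the crowd’s probability estimate disagree.

```python
def expected_profit(price, true_prob, payout=1.0):
    """Expected profit from buying one binary share at `price` that
    pays `payout` if the event occurs and 0 otherwise."""
    return true_prob * payout - price

# Fairly priced when the price equals the true probability:
assert abs(expected_profit(0.70, 0.70)) < 1e-9
# Underpriced events are profitable to buy, which pushes the price up:
assert expected_profit(0.60, 0.70) > 0
```

This is why the market price converges on a probability: any gap between price and belief is free money to whoever spots it first.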

Intrade became the first provider of this form of market online. It was closed down last year.

Paul Sztorc has begun to develop Truthcoin, a prediction market based on the blockchain.

Part 1 of Paul’s 5 part introduction to prediction markets highlights some of the problems of traditional Prediction Markets:

“There are several problems with PM’s: mainly the fact that when persuading others of something complex, [people tend] to highlight true statements when they support their own argument, and hide them away when they support an alternative argument.

[Also], If you make a bet with someone, you have to trust them to pay up. Tradable-Predictions, defined as “assets with a definite future value based solely on their future accuracy” have never existed. Instead, the value of PM-Predictions depended substantially on the behavior of the counterparty (ie, the guy holding the money). You can’t “own” a prediction, only a paper claim to money held by the PM administrator. The PM administrator has proven to be unambitious at best (accepting only a few bet-topics) and unreliable at worst (losing funds and/or going out of business, see Appendix). PM-admins rely on trust (as they hold their customer’s money) yet are prevented from accessing trust-forming institutions (law-enforcement, brands/advertising) because of their regulatory/legal/awareness challenges.”

He goes on to say:

“Bitcoin operates independently of a nation’s legal framework, and might avoid closure or regulatory interference. If so, competing “Bitcoin InTrades” would appear to fulfill market demand. Unfortunately, PMs require a way to store up money and pay it out based on a real world outcome, which implies trusting a third-party with your money. Use of supra-national Bitcoin would prevent the use of any legal guarantee (to justify this trust).”

Bitcoin demonstrates that a blockchain can provide scalable, censorship-resistant, and trustless solutions to value-transfer problems. Blockchain solutions also generate efficiency by cutting out middlemen and avoiding overhead costs (no brick-and-mortar, compliance, administration, etc.). They are egalitarian and immortal. “

Paul expanded on these ideas and told me a little more about himself by answering these questions I had for him.

What is Truthcoin?

A marketplace for the creation and trading of ‘event derivatives’, which have a final value based only on the-state-of-the-world (such as election results or stock prices) and nothing else.

Truthcoin’s markets might resemble “smart contracts”, where the focus is not on “performing the math calculation of the contract”, but instead on “getting accurate reports from people”. Where a user would ask Ethereum to solve an equation using some algorithm, a user might ask Truthcoin to honestly-uncover “what was the solution to that equation” from users. Although users would be free to lie about what that solution was, an incentive mechanism discourages this. The emphasis is on “the solution itself”, not on “the process of solving”.

How was it formed?

InTrade was possibly my favorite website on the internet, but, tragically, it was forced to close for a variety of reasons. I felt that the closure of InTrade resembled the closure of Liberty Dollar and e-gold. Just as the latter inspired Bitcoin, the former inspired me to try to do something similar for Prediction Markets. I thought about it for a while, and wrote down some code and a whitepaper at the beginning of this year.

What is your professional background?

I double-majored in Economics and Psychology (undergrad) and then dual-degreed (graduate) in Operations and Finance at CWRU. I’ve worked at Interactive Brokers and GE/NBC in technical/programming roles, worked on Six Sigma operations consulting, and Healthcare IT consulting. I currently work as a ‘Statistician’ or ‘Visiting Scholar’ doing grant-supported research (unrelated to Truthcoin) at the Yale Department of Economics.

What are you currently busy with?

Currently, a few people want to raise money for Truthcoin, or work as volunteers. Figuring out who is a good fit, exactly what I can reasonably promise to investors, what I should do with people who have already put in work, who I can trust to do a good job, and how to reward all of those people, are questions that consume my time. I also have a communication problem where most people (despite the whitepaper and code) don’t “get” the project. Right now I would like to make more demos, slides, infographics or videos.

What is your vision for Truthcoin?

The short term dream would be that people who know C++ Bitcoin very well would find Truthcoin, decide how to combine the pieces (the existing Bitcoin code + the new parts which I’ve coded), and help me release and maintain a version for discussion.

The medium term dream would be widespread discussion of the costs and benefits of the core idea. Can the risks be mitigated (with sidechains/treechains, some kind of firewall, a multi-round test process)? Are the benefits substantial? Ideally, this would lead to the question: do enough people feel that it is sufficiently-valuable to actually switch from Bitcoin to this (or transform Bitcoin into this)? Currently, few have discovered Truthcoin at all, so such an ambitious question can’t even be asked.

The long term dream is nothing short of a second Scientific Revolution restoring the virtue of empiricism to the public discourse. A world with optimally-accurate forecasts (“Will X be a problem in the future?”), optimal advice (“Which of X would produce more Y?”), stable-value cryptoassets (“BitUSD”), a world where CEOs and politicians have to work competitively for a living, where organizations of all kinds are unable to lie to the public, where smart contracts are widely available, where Public Goods can be financed quickly and at low cost, and where anyone with an internet connection has access to the combined intellectual powers of all mankind.”

“Does Management Research Need to Become More Empirical?”

| Nicolai Foss |

Or, to put it more precisely: does management research (i.e., the journals) need to become more empirical in the specific sense of allowing for research that is pre-theoretic, but addresses an issue of relevance to organizational stakeholders or detects a pattern, that is, identifies a potentially important stylized fact?

In two SO!APBOX Editorial Essays in the May issue of Strategic Organization, Danny Miller and Constance Helfat both argue, in Miller’s words, that “the current institutional setting within which administrative studies develop has evolved to de-legitimize [this] type of research” (p. 177). As examples of the benefits of pre-paradigmatic, atheoretic research, Miller points to Fleming’s discovery of penicillin, and, in management, to Woodward’s discoveries of the impact of technology on organization structure, the Hawthorne experiments, Tushman’s work on firm trajectories, etc. Helfat points to the Phillips curve and the learning curve.

Miller and Helfat may be quite right that descriptions of stylized facts that are not somehow informed by theorizing are seldom, if ever, seen in the leading management journals (of course, to a hardcore economist — all management “theorizing” is essentially the kind of work that Miller and Helfat want to be done ;-)). However,

  • Are there any known cases of this kind of research being killed by the journals to the detriment of the management field?
  • Isn’t it — contrary to what is implied by Miller and Helfat — the case that sometimes harm may have been done by atheoretical work on empirical regularities? Think of the PIMS.
  • What is necessarily so bad about requesting that authors try to come up with a theoretical rationalization of an observed regularity? Is this such a barrier that it will stop potential authors from publishing their results? Theorizing comes in many forms, and some kinds of theorizing are much harder to do than others. What is wrong with requesting that, as a minimum, authors provide some verbal account of the possible underlying mechanisms that may produce an observed regularity?

(MISES DAILY) “Econometrics: A Strange Process” by Robert P. Murphy

JULY 15, 2002

Until recently, most macroeconomic forecasters, assisted by mathematical models, were predicting economic recovery and rising stock indices. But the market has reminded us that reality doesn’t always correspond to the predictions of those who claim the mantle of “science.” As is so often the case, those economists who were more humble in their pretensions to knowledge avoided such embarrassment.

The Methodological Divide

The Austrian School of economics is known for its aversion to mathematical modeling of human behavior. The neoclassical mainstream, on the other hand, is quite fond of this approach, and uses the mathematical method for just about any problem. I think it is fair to say that most mainstream economists would prefer the precision of a false formal model, versus the generality of a true verbal proposition.

This misplaced reliance on the power of mathematical tools for economic analysis is epitomized in the field of econometrics, which employs statistical techniques in the study of empirical data concerning economic phenomena. Unlike their mainstream colleagues in game theory—who are notorious for criticizing human “players” when their actions fail to correspond to the strategies employed in a particular game’s equilibrium state—the econometricians believe they are exempt from the biases of a priori theorizing. The true believer in econometrics takes no particular stand on doctrinal questions, and rather thinks that the facts will “speak for themselves.”

Ludwig von Mises exposed the fallacy in this supposedly atheoretical method:

It is true the empiricists reject [a priori] theory; they pretend that they aim to learn only from historical experience. However, they contradict their own principles as soon as they pass beyond the unadulterated recording of individual single prices and begin to construct series and to compute averages. A datum of experience and a statistical fact is only a price paid at a definite time and a definite place for a definite quantity of a certain commodity. The arrangement of various price data in groups and the computation of averages are guided by theoretical deliberations which are logically and temporally antecedent. The extent to which certain attending features and circumstantial contingencies of the price data concerned are taken or not taken into consideration depends on theoretical reasoning of the same kind. Nobody is so bold as to maintain that a rise of a per cent in the supply of any commodity must always—in every country and at any time—result in a fall of b per cent in its price. But as no quantitative economist ever ventured to define precisely on the ground of statistical experience the special conditions producing a definite deviation from the ratio a : b, the futility of his endeavors is manifest. (Human Action p. 351)

Although the student of Austrian economics may share Mises’s opinions about the dubiousness of econometrics, the fact is that he or she must take classes and exams in this field in order to receive a degree from most programs in the United States. In an attempt to help such students “keep hope alive,” I will now share my impressions and an anecdote gleaned from my experience in a mandatory course in macroeconometrics.

Market Process?

Austrian economists, especially those of a Hayekian bent, stress that the market is a process. Ironically, econometricians use the same term, but they mean by it something completely different.

For example, when he wishes to model the price of a particular stock, the econometrician may say, “Assume p(t) follows a random walk process.” What he means is that the price at any time t equals the price at time t – 1, plus a completely random “shock.” The shock is modeled as a random variable with mean zero and a certain variance.
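To make the econometrician's meaning concrete, here is a minimal sketch of a random-walk price series. The function name, starting price, and noise parameters are my own illustration, not anything from the original article:

```python
import random

def random_walk(p0, steps, sigma=1.0, seed=42):
    """Simulate p(t) = p(t-1) + shock, where each shock is drawn
    from a normal distribution with mean zero and std dev sigma."""
    rng = random.Random(seed)
    prices = [p0]
    for _ in range(steps):
        prices.append(prices[-1] + rng.gauss(0.0, sigma))
    return prices

path = random_walk(100.0, 5)
print(path)  # six prices: the starting price plus five "shocked" steps
```

Note that nothing in this model refers to investors, earnings, or valuations; each step is just the previous number plus noise, which is exactly the author's complaint.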

Notice already that this approach has given up on trying to explain how real-world prices are actually formed. In reality, today’s prices have no causal connection with tomorrow’s prices. Every day, the price of a stock is formed afresh by decisions on the part of investors to buy or sell. The stock price today seems to be partially “dependent” on the stock price yesterday only because the underlying factors that caused yesterday’s price are largely the same today. The case of a stock price is completely different from, say, the balance of one’s bank account, which does remain constant from day to day, except for “autoregressive” changes due to interest compounding, or “shocks” due to deposits and withdrawals.

The econometric approach to stock price movements is analogous to a meteorologist who looks for correlations between various measurements of atmospheric conditions. For example, he might find that the temperature on any given day is a very good predictor of the temperature on the following day. But no meteorologist would believe that the reading on the thermometer one day somehow caused the reading the next day; he knows that the correlation is due to the fact that the true causal factors—such as the angle of the earth relative to its orbital plane around the sun—do not change much from one day to the next.

Unfortunately, this distinction between causation and correlation is not stressed in econometrics. Indeed, for economists truly committed to the positive method, there can be no such distinction. Although the econometric pioneers may understand why certain assumptions are made and can offer a priori justifications such as “rational expectations” for the details of a particular model, the students of such pioneers are often caught up in the mathematical technicalities and lose sight of the true causes of economic phenomena.

A Case in Point

Lest the reader feel I am speaking in broad generalities, let me offer as an example a question that was on one of my exams. The question epitomizes the problems with the econometric approach of stipulating a particular “process” that generates the observed levels of some variable:

Suppose we have T observations on the time series x(t), which has mean μ. Suppose also that d(t), the deviation of x(t) from its sample average s, which is defined as d(t) ≡ x(t) – s, follows an AR(1) process, that is, d(t) = ρd(t – 1) + e(t). What is the variance of the sample average, s?

As I sat staring at this question, I was absolutely befuddled, since I believe it makes no sense. My problem was not that such a question was of little use in understanding the business cycle or the stock market; my problem was that I believe its propositions are contradictory.

The question assumes that there is some variable x(t), the true mean of which is μ. That is, if we took the mean of all realizations of x(t) from t = 1 to t = ∞, the result would be μ. In practice, however, we never have an infinite number of realizations to analyze, but only a finite number T of sample observations. Although we can’t know the true mean μ, we can calculate s, which is the sample average, or mean of the observations from x(1) to x(T).
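The gap between the true mean μ and the sample average s can be seen in a few lines. This is my own sketch: the "true" mean is known here only because we construct the process ourselves, which is precisely what the analyst of real data cannot do:

```python
import random

rng = random.Random(1)
mu = 5.0   # the true mean, known only because we built the process
T = 50     # a finite number of observations, as in the exam question
xs = [rng.gauss(mu, 2.0) for _ in range(T)]
s = sum(xs) / T   # the sample average: all the analyst can actually compute
print(mu, s)      # s approximates mu but will generally differ from it
```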

Now, the exam question above wasn’t intended to be “deep”; I suspect that talking about an autoregressive (AR) process concerning the variable d(t) was an indirect way to get the student to assume that x(t) itself followed an AR(1) process, and to then apply a standard formula to “compute the sample variance of the mean of T realizations from an autocorrelated time series process” (quoted from the solution later given by my professor).

An autoregressive process is one in which the value at time t depends on some fraction of the value at time t – 1, plus a random “error” term with mean zero. For example, we might have x(t) = .5 * x(t – 1) + e(t), which means that the value of x at time t is equal to one-half its value at time t – 1, plus some random error term e(t) that on average will equal zero.
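The AR(1) recipe just described can be sketched directly. Again the function and its parameters are illustrative choices of mine, using ρ = 0.5 from the example above:

```python
import random

def ar1(rho, steps, x0=0.0, sigma=1.0, seed=0):
    """Simulate x(t) = rho * x(t-1) + e(t), with e(t) ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(steps):
        xs.append(rho * xs[-1] + rng.gauss(0.0, sigma))
    return xs

series = ar1(0.5, 1000)
mean = sum(series) / len(series)
print(mean)  # hovers near zero, since the error terms average out
```

Because |ρ| < 1, the process is mean-reverting: each value keeps only half of the previous one, so the long-run average of a large sample sits near zero.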

It makes sense to say that x(t) in the above question follows such an AR(1) process. However, the question said that the deviation of x(t) from its sample average s follows an AR(1) process, and this I believe is nonsensical. This is because, unlike the infinitely long x(t) process—in which the deviations of x(t) from its mean μ can in principle sum to any number (though we expect in the long run this sum to be zero)—for a finite sample of size T, the deviations d(t) by definition must sum to zero. So when my professor—following the standard econometric practice—stipulated that the series d(t) followed a particular process, he stipulated the impossible.

Let’s illustrate the problem with a sample of size T = 3. Suppose that the observed values of x are 1, 2, and 3. The sample average s is thus 2. The value of d(1) is -1; that is, x(1) – s = -1. The value of d(2) is 0, and the value of d(3) is 1. As must be the case, the sum of the deviations of x(t) from the sample mean is zero; i.e., -1 + 0 + 1 = 0.

Now notice that this makes it impossible for the variable d(t) to follow an AR process. This is because the values of d(1) and d(2) completely determine the value of d(3). Given that d(1) is -1 and d(2) is 0, d(3) must be 1 to render the entire sum zero.

But if this is the case, then the stipulated formula for d(t)—that is, d(t) = ρd(t – 1) + e(t)—cannot be true. For we know that d(3) is not some function of d(2) plus a completely random error term e(3), which in principle can take any value. So to reiterate, it’s not merely that the question is irrelevant to a true understanding of economics; it’s rather that even on purely mathematical terms, the question makes no sense.
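The zero-sum constraint at the heart of this argument is easy to verify numerically. The sketch below is mine; it checks both the article's concrete 1, 2, 3 example and a random sample, showing that the last deviation is always pinned down by the others:

```python
import random

# Deviations from the SAMPLE average always sum to exactly zero,
# so the final deviation is determined by all the preceding ones.
rng = random.Random(7)
xs = [rng.random() for _ in range(10)]
s = sum(xs) / len(xs)
d = [x - s for x in xs]

assert abs(sum(d)) < 1e-9                # deviations sum to (numerically) zero
assert abs(d[-1] + sum(d[:-1])) < 1e-9   # d(T) = -(d(1) + ... + d(T-1))

# With the article's concrete numbers: x = 1, 2, 3 gives s = 2,
# so the deviations are forced to be -1, 0, and 1.
d3 = [x - 2.0 for x in [1.0, 2.0, 3.0]]
print(d3)  # [-1.0, 0.0, 1.0]
```

Since d(T) is an exact arithmetic consequence of the earlier deviations, it cannot also equal ρd(T – 1) plus a genuinely free random error term, which is the contradiction the author identifies.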

The Econometrician’s Response

I emailed my concerns to my professor* and his teaching assistant. They told me that I was reading too much into the question, and that my problem was of a very “philosophical” nature. Rather than pondering what the question “meant,” I should simply have recognized the relevant formula from the information given, and applied it to get the answer.

I believe their stance is typical of the mainstream approach. It would be one thing if all of the formal rigor of modeling were followed through to the deepest foundations of economic science. But unfortunately, I believe that in day-to-day practice, the mainstream economist relies on certain assumptions and techniques to address a particular problem, since he knows “how to solve” the question when it is asked in this way.

But surely there is something fundamentally wrong when he persists in this method, even when the “question” so posed is internally contradictory.


* In fairness, I want to point out that my professor was a very good one. He always responded to questions, and in fact went out of his way one lecture to explain the dangers of confusing correlation with causation. I also should disclose that I was inadequately prepared for the exam in question; I do not claim that I completely “understand” the field of macroeconometrics, nor that I have fully surveyed it and found it wanting.

Modern Empiricism: an absurd “you-can’t-be-sure-about-someone-until-you’ve-slept-with-them” version of science

I had a seventh-grade teacher who expressed a sentiment that perfectly sums up everything that sucks about modern academic inquiry, and that sucks hardest in the social sciences.

She went way out of her way to explain to a classroom of thirteen-year-olds how unrealistic and irresponsible pre-marital abstinence is.  She said that sex is absolutely something in which compatibility must be established beyond the shadow of a doubt before any commitment would be prudent*.

Sex, for her, dominated the question of fitness for companionship.  In the same way, I argue, modern social science leans entirely on quantitative data, even meaningless data, instead of getting down to the essential ethical attributes at work in human behavioral phenomena. Data is their sexual gratification: even when devoid of any real meaning, emptied of its real purpose, give me only data, data, data.

So my wife and I enjoyed a sort of sad chuckle when it occurred to us that this is what most empirical science has devolved into: just obstinate, incurious, and eternal skepticism about the reality around us; like the sexual partner we’ll never commit to until they’ve sufficiently shown that they will please us regularly… and even then, “Who knows how things’ll go!? Marriage is risky!” (I imagine it is, with that attitude.)

But they’ll keep this meaningless relationship with data going forever, unless checked by budget cuts, or administrators who demand real results…  a sort of lover’s ultimatum, I suppose.

In the meantime, all necessary knowledge, like the effects of various government market interventions, is dismissed as tautology or linguistic convention.

I say, “If you tax a population, there will be less money to spend and invest in a higher-demand direction.”
The empiricist: “You can’t know that until you’ve tested that hypothesis, and even then, you may only say, ‘such and such hasn’t been falsified… yet.’  All else is unscientific!”
Me: “But it’s true by definition! It’s logically necessary!”
Empiricist: “Nothing is logically necessary. Your proposition must be tested and tested ad infinitum because science.”

Or, if you like:

Me: “A ball can’t be simultaneously blue and red all over”
Empiricist: “You can only say that it hasn’t been possible yet.”
Me: “Uh. The words blue, red, all-over, and simultaneously preclude any possible occurrence of this ever…”
Empiricist: “Have you tested this hypothesis in Australia?”
Me: “What? Why would…”
Empiricist: “Well how would you know if the rules stay consistent across the globe? What about across time? It might change tomorrow…”
Me: “What?! Your definition of red or blue might change tomorrow!?”
Empiricist: “As scientists, we never claim to know capital-T truth. We only worry about what works.”
Me: “What works for what?! For whom?!”
Empiricist: “What works for…”

There’s no answer beyond what pays, or what lends prestige or furthers technology.  Outside the context of what is cavalierly dismissed as capital-T truth, understood by the empiricist as only vague ramblings about the unknowable, all inquiry rapidly loses meaning.  Just a succession of contributions to the new Tower of Babel… It’s a positive nihilism factory.

Ever since the seventh grade, whenever this radical skepticism has been touted as the right scientific attitude, all I have heard are excuses for unethical behaviour. Or, in academia, vast landfills of mediocre (at best) research.

“Aren’t you worried about this person’s devotion and character before taking on all the risks and vulnerability of a sexual relationship?”
“Nah, I really don’t care. I just wanna be sure they’re good in the sack.”
“What about meaningful companionship or mutual admiration or common goals?! Don’t you realize that she’s a completely different person after you’ve committed for life for better or for worse?! Don’t you see that the risks you’re trying to avoid change the experiment completely?”

“I don’t really care about all that. You never know about something until you’ve had sex with it.”

And so descends the validity and respectability of our new illogical, selfish, whorish science.

*Thankfully, if not pleasantly, this teacher’s awful, unaccountable, dysfunctional, unprofessional life, to which my English class were horrified witnesses daily, was the perfect backdrop to inoculate us against catching her attitude.