Bubble trouble: all information is not equal

By Lorenzo

This post is partly prompted by this comment and this paper (pdf) (via) on the US housing price bubbles and busts, and (greatly) extends a comment of my own. It is also a response to the work of mathematician-turned-historian Andrew Odlyzko.

In a previous post, I argued that easy monetary policy was not to blame for the asset booms of the C19th, interwar and Great Moderation periods. While tight monetary policy can create busts and downturns, asset booms and busts are not a monetary policy phenomenon in any simple sense. Indeed, stable monetary policy coupled with rising incomes and accompanying increased demand for assets can lead to asset booms which can then bust without any help from monetary policy. To understand why, we need to look at the dynamics of asset markets.

About assets

Assets are items that produce income or retain value across time periods. Gold is a pure store-of-value asset, as it produces no income. Bonds are pure income assets, as they have no value apart from the income they produce–they are best thought of as a congealed money stream, their value being set by that money stream. Other assets–such as houses–can operate as both sources of income and stores of value. Particularly if they (or at least the land they are on; houses themselves being large decaying physical objects) are positional goods, a state that can be created by regulation (or protected by it).
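To make the “congealed money stream” point concrete, here is the standard present-value relation for a coupon bond (a sketch using conventional symbols of my choosing, not drawn from any source cited here: C for the coupon payment, F for the face value, r for the discount rate, T for maturity):

```latex
% Price P of a bond as the present value of its money stream:
% coupon C each period, face value F at maturity T, discount rate r.
P \;=\; \sum_{t=1}^{T} \frac{C}{(1+r)^{t}} \;+\; \frac{F}{(1+r)^{T}}
```

Raise r and P falls: the same money stream congeals into a smaller present value when money now is dearer.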

So, asset prices are all about expectations of future income and/or capital gains, since there is no information from the future. And we form expectations based on current information, past experience and our aspirations (since the last, at the very least, determine what we focus on).

Market information

Commentary often dismisses the efficient market hypothesis (EMH) as a classic case of unrealistic theorising by out-of-touch economists. In fact, it grew out of empirical results–that stock market prices over time operated as random walks (or very close to such). EMH attempted to explain why this was so. Eugene Fama‘s classic 1969 paper/1970 article (pdf) sets out the history and reasoning clearly. (A nice visual presentation of random walks and the US stock market is provided in this lecture from minute 53; Prof. Shiller is a noted sceptic about EMH but he holds that there is still a great deal to it.) EMH comes in three forms based on tests of levels of market efficiency against available data. As Fama writes in his 1970 article:
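A minimal sketch of what the random-walk result means (synthetic data only; this illustrates the statistical claim, it is not a test on real prices): if log prices follow a random walk, past returns carry no exploitable information about future returns, which shows up as near-zero autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "efficient market": log price is a random walk,
# i.e. returns are independent draws.
returns = rng.normal(loc=0.0, scale=0.01, size=10_000)
log_price = np.cumsum(returns)

# Lag-1 autocorrelation of returns: near zero for a random walk,
# so yesterday's return says (almost) nothing about today's.
autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 return autocorrelation: {autocorr:.4f}")  # ~0.0
```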

weak form tests, in which the information set is just historical prices … semi-strong tests, in which the concern is whether prices efficiently adjust to other information that is obviously publicly available (e.g. announcements of annual earnings, stock splits, etc). … strong form tests concerned with whether given investors or groups have monopolistic access to any information relevant for price formation.

Can people reliably outperform the market? The EMH says that, without some very specific circumstances, no. This, btw, is also the conclusion of behavioural finance, widely seen as refuting EMH.

The various asset price bubbles–the dot.com bubble (aka the internet bubble or IT bubble), the related telecoms bubble, the housing price bubbles and, above all, the GFC–are taken by many to have refuted EMH in all its forms. The problem is defining what one means by bubble and what market efficiency and market rationality do, and do not, imply. There is, however, a difference between being sceptical of the ability to reliably identify bubbles–especially at the time–and being sceptical about their existence, as this well-known paper (pdf) demonstrates.

What do you mean by “bubble”?

No form of EMH implies that asset prices will not be volatile, for example; merely that this volatility will not be systematic in a way that is reliably exploitable. (Charging people fees to buy and sell on their behalf is not quite the same thing; if markets are not strong-form efficient, recurring advantage in new information may be exploitable and if asset prices have some tendency to revert to some underlying trend, that may also be exploitable–if you wait long enough.)

If asset price bubbles represent surges and collapses without any change to any underlying behaviour of asset prices, then they represent mass failures to “beat” the market. Which is what the EMH predicts. As Fama says in his 1991 article (pdf):

a ubiquitous problem in time-series tests of market efficiency, with no clear solution, is that irrational bubbles in stock prices are indistinguishable from rational time-varying expected returns (p.1581). … deciding whether return predictability is the result of rational variation in expected returns or irrational bubbles is never clearcut (p.1585).

In that article Fama reconstrues the test for the weakest form of market efficiency as tests for return predictability (i.e. the forecast power of past returns) and relabels the others as event studies and tests for private information. This need not detain us.

Commentary often seems to presume that EMH, or notions of market rationality generally, provide some implicit or explicit guarantee that current prices will be sustained, which is false. No guarantee against asset price volatility follows from either.

Jan Brueghel the Younger on the 1630s tulip mania

A feature of investment manias, however, is widespread discounting of downside risk. Indeed, the assumption that market information efficiency guarantees prices will be sustained is a common view during investment manias–while critics of EMH, after prices have collapsed, imply that EMH provided just such a guarantee.

One of the points of dispute is the consistency of market information efficiency. Strong supporters of EMH argue that asset markets are volatile, but consistently information efficient (for some level of efficiency), so there are no bubbles in any analytically useful sense. Bubble proponents typically argue that there are bubbles and this demonstrates that asset markets are endemically information inefficient. It is, however, possible that asset markets are generally efficient (or are so to some close approximation) but bubbles occur when specific circumstances lead to a (temporary) departure from efficiency. (So the price collapse marks a return to efficiency.) We should always be wary of inferring general characteristics from unusual events.

A working definition of a bubble is an upward surge in asset prices beyond normal volatility which then reverts to the pre-existing behaviour. (Though how stable any longer-term trend is, is precisely what is in question during the surge.) If we take before and after prices as reflecting “fundamentals” (i.e. the present value of expected dividends, in the case of stocks), this covers both lay and economic notions of bubbles.
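As a rough illustration, one could operationalise “surge beyond normal volatility” along these lines (a toy sketch; the window and threshold are arbitrary choices of mine and, per the point above, nothing here identifies a bubble in real time):

```python
import numpy as np

def flag_surges(prices, window=250, k=3.0):
    """Mark times when price sits more than k trailing standard
    deviations above its trailing mean: a crude 'beyond normal
    volatility' marker, not a real-time bubble detector."""
    flags = []
    for t in range(window, len(prices)):
        hist = prices[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and prices[t] > mu + k * sigma:
            flags.append(t)
    return flags

# Whether a flagged surge was a "bubble" is only knowable after
# the fact: the definition requires the later reversion as well.
```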

So, what has to be explained is precisely the widespread, abnormal discounting of downside risk creating the surge beyond normal volatility. Even if asset markets are efficient in one or more EMH sense, it remains of interest whether such abnormal surges have common patterns or causes. It is also possible that factors leading to significant (though temporary) departures from more normal levels of market information efficiency might be in play.

Reasoning

Which leads us to how human reasoning works. Reasoning can be thought of as having three axes or dimensions–truth (connection to how things are), relevance (connections between propositions or concepts) and ascription (connection to emotions or purposes).

Formal and symbolic logic treats falsity as lexically dominant over truth in order to make logical analysis more tractable. That is, it works on the “bucket of shit” theory of truth–any amount of falsity in a statement makes it false, creating a tractable true-or-false dichotomy. (On the same principle that if you have a bucket of shit and you add a tablespoon of wine, you have a bucket of shit; if you have a bucket of wine and you add in a tablespoon of shit, you have a bucket of shit.) In reality, we deal with partial and approximate truth all the time. (Hence fuzzy logic.)
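A minimal illustration of the contrast, using the standard fuzzy-logic min/max operators (the truth degrees are invented for the example):

```python
# Classical conjunction: one false conjunct poisons the whole --
# the "bucket of shit" principle.
classical = True and False   # False

# Fuzzy conjunction (the standard min operator): partial truths
# combine into a partial truth rather than collapsing to falsity.
mostly_true, slightly_true = 0.9, 0.2
fuzzy_and = min(mostly_true, slightly_true)  # 0.2: weak, but not 0
fuzzy_or = max(mostly_true, slightly_true)   # 0.9
print(classical, fuzzy_and, fuzzy_or)
```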

Relevance is dealt with by such logical connections as contradiction, independence, antinomy, identity, etc. Even identity is mediated through concepts or propositions. Consider the “the evening star is the morning star” philosophical chestnut. It does not arise from any characteristic of the second planet of our Solar System beyond its interrupted visibility from Earth–that is, from Venus being observable from Earth, but not continuously so, without being so regular and dominant a phenomenon as the Sun. Hence it is a discovery that the “star” seen in mornings is the same as the “star” seen in evenings. That “the morning star is the evening star” tells us we have labelled the same thing two different ways without originally realising it. (I.e. it is an epistemic problem rather than a metaphysical one.)

We do not, however, reason for the hell of it. We do so for purposes. As philosopher David Hume put it:

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them (A Treatise of Human Nature, 2.3.3).

Even rationality at its most instrumental is still purposive, while more substantive rationality is about balancing and managing our purposes, our emotions.

Not only do our purposes, our emotions, direct our reasoning; it is easy for them to overwhelm it to a greater or lesser extent. Abstract reasoning was the last thing to evolve; the wonder is not that we do it badly at times, the wonder is that we do it as well as we do.

Cognitive scarcity

It is particularly easy for emotions to overwhelm our reasoning for reasoning is like everything else we do–we economise on (cognitive) effort. We do not have infinite knowledge, time or processing power. (Hence bounded rationality.) So, we use habits, routines, prejudices and other cognitive shortcuts to manage. We tend to display considerable cognitive inertia–continuing to go with what appears to work until we have some clear-to-us reason not to. The more important-to-us the working assumption to be changed, the stronger cognitive inertia is likely to be, since the more cognitive adjustment we would have to undertake. (The less significant in direct personal consequences possible error is, the greater cognitive inertia is also likely to be, as the easier contrary information is to ignore or re-construe: hence ideological inertia is typically pronounced.)

So, individual use of information may well display significant rigidities, including in our selection and registering of information–hence the decline effect in scientific studies (the tendency of early results to fade away in later studies). And this is without considering more specific features (pdf) of human cognition (pdf).

EMH implies that these individual rigidities in use of information either cancel out, or are overwhelmed by hope of gain and fear of loss, or both. If, however, such information rigidities become widely aligned together, and intensified thereby, there will be no such “canceling out”. On the contrary, there will develop strong common framing of information.
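A toy illustration of the “canceling out” point (synthetic numbers, not a market model): independent individual errors wash out in the aggregate, while a shared framing acts like a common error component that aggregation cannot remove.

```python
import numpy as np

rng = np.random.default_rng(1)
fundamental = 100.0
n_traders = 10_000

# Independent idiosyncratic errors: the average valuation stays
# close to the fundamental -- individual rigidities "cancel out".
independent = fundamental + rng.normal(0, 10, n_traders)
print(independent.mean())   # ~100

# A common framing acts like a shared error component: averaging
# over traders no longer removes it, so the aggregate drifts.
shared_bias = 15.0          # illustrative common optimism
aligned = fundamental + shared_bias + rng.normal(0, 10, n_traders)
print(aligned.mean())       # ~115: the bias survives aggregation
```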

How things are framed gives us a cognitive map, a pattern of placement, a way of cognitively simplifying and sorting. It makes some connections salient, obscures others. That framing affects our responses is hardly surprising. Or that better framings improve our responses. (One of the ways Alan Greenspan seems to have steered Fed policy is by speaking first at meetings of the Federal Open Market Committee [FOMC], allowing him to set the framing for discussion; Ben Bernanke, by contrast, tends to speak last, making him more at the mercy of the framing established by other participants.)

Investment manias appear to be marked by strong common framings that act to block contrary information–to increase “information viscosity” (pdf). If sufficiently common, such blocking of information-being-acted-on would decrease market information efficiency.

Also with gratuitous Castle reference–or perhaps it was the other way round?

Nor is all information equal. Information that comes from experience is much “richer”, much more intense–i.e. much more connected to other bits of information and emotions–than information that does not. The intensity of resonance varies with the extent of connection to our emotions; the density of resonance varies with how many bits of information within one’s stock of information it connects to and accords with. The more information resonates, the more it motivates.

Information which has such connected resonance is more powerful, for example, than distilled stats in a graph from things that happened to someone else years ago. (Unless you are an outsider to whom [pdf] a market is just a market and the long term trends are what you habitually look at.) So the experience of years of rising house prices has patently been very powerful for those involved in the pattern; particularly when associated with future hopes. Particularly when there is no information from the future. Particularly when reinforced by lots of other people tied into the shared framing. Just as lingering memories of searing experiences tend to affect outlooks until the memories fade or are allowed to die. (The “Depression effect” in US stock prices is likely, at least in part, to be an example of this.)

What do we mean by ‘information’?

Part of the difficulty is: what do we mean by information? We cannot simply insist that only correct information “should” count, because trying to work out what is correct is precisely the difficulty–particularly when there is no information from the future. As Andrew Odlyzko–who has amassed a great deal of information on the British railway manias of the 1830s and 1840s, comparing them to the dot.com and other bubbles of our time–notes, a recurring feature of critics of investment manias is that (pdf):

they are almost all too early, and so wrong for a long time, and they often emphasize the wrong points (p.87).

Being wrong for a long time makes it easy to discount them at the time. Even after the event, if they emphasize the wrong points, are they not simply (eventually) correct by happenstance? As I have noted before, bubbles (in the sense of price surges and collapse) occur precisely because we cannot reliably predict turning points. And saying that one will occur at some unspecified time in the future lacks resonance. Nor is saying that there is downside risk all that informative–how much is the operative question. As Odlyzko also notes (pdf) (p.29), there are always pessimists and doomsayers, while people who pick a big event correctly often have generally poor records as prognosticators.

Moreover, picking out who was correct in hindsight is not helpful, especially if it uses information not available at the time. Even considering information that was available, the question arises how likely people would be to pick up the correct prognostications out of a mass of commentary and information. As Odlyzko observes (pdf) of the critics of the 1840s Railway Mania:

… the skeptics overwhelmingly concentrated on the wrong problem, namely level of investment. They overlooked the real problem, namely the inevitable lack of profits at the end of the burst of investment (p.96).

Inevitable but not realised, even by skeptics. Not precisely an available form of inevitability, then–at least, not in the sense of being part of the flow of information. The failure of markets to incorporate information not available is not a failure of market information efficiency. The more interesting question is about existing information that gets overlooked.

Taking cues

We are social beings. We habitually take cues from other people, given that we have limited access to information, limited knowledge, limited processing capacity, many claims on our attention. The more information saturated and technologically complex our society, the more that is so. The possibility for information cascades–with positive feedback effects–is patently real. Indeed, it is a feature of investment manias. The real delusion in such investment manias is that, to a significant degree, volatility has been conquered; that there are no significant downside risks.

The striking thing about bubbles is that it is precisely the going past previous surges, previous price levels–whether for that asset or, if new, analogous assets–which helps generate the belief that some fundamental constraint has been breached such that volatility has been conquered. Perhaps the most famous example of that is economist Irving Fisher (he of the Fisher equation, the Fisher hypothesis, the international Fisher effect, the Fisher separation theorem, the Fisher index, the equation of exchange, money illusion, an early observer of the Phillips Curve and developer of the debt-deflation theory of depressions) announcing, a few days before the 1929 stock market crash, that:

Stock prices have reached what looks like a permanently high plateau.

Even if you do not believe said constraint has been breached for all assets of the relevant category, you might believe it will prove so for some of said assets–particularly the assets that you have bought. Not only is the common experience of rising prices self-reinforcing, but it leads to common patterns of information framing, accentuating the self-reinforcing dynamics. Which sucks more people in, taking their cue from the mass of reinforcing cues.

Housing and commercial property are excellent prospects for such asset booms if supply is constrained. Rising demand with constrained supply generates rising prices, which generates expectations of capital gains, which sucks more people and money in, and away we go with feedback loops operating. They are also dominated by people who make few trades–and experimental economics results suggest that markets strongly dominated by inexperienced traders (so with much narrower experience and more reliant on cues from others) are more likely to (pdf) sustain bubbles. As the then Governor of the Reserve Bank of Australia (RBA) said to a Parliamentary committee in 2002 (pdf):

… the market works, but with long lags during which people are encouraged to take decisions based on little more than optimistic extrapolation of what happened in the past. Developers will continue to put up new apartment blocks while there are investors willing to precommit to buy. These are the investors who turn up at seminars where they are told by the developers how they can become very rich if they highly gear themselves and buy an apartment (p.28).
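A toy model of the extrapolative dynamic the Governor describes (all parameters are illustrative inventions; a sketch of extrapolative expectations under constrained supply, not a calibrated housing model):

```python
def simulate(supply_elasticity, extrapolation=1.8, steps=40):
    """Toy price path: buyers extrapolate recent gains; builders
    respond to price with the given elasticity. Purely illustrative."""
    last, price = 100.0, 102.0      # small initial demand shock
    path = [price]
    for _ in range(steps):
        gain = extrapolation * (price - last)   # expected capital gain
        excess_demand = gain - supply_elasticity * (price - 100.0)
        last, price = price, price + 0.5 * excess_demand
        path.append(price)
    return path

# With near-inelastic supply the initial shock is amplified into a
# surge before reverting; elastic supply damps it almost at once.
constrained = simulate(supply_elasticity=0.25)
elastic = simulate(supply_elasticity=2.0)
print(max(constrained), max(elastic))   # surge vs. no surge
```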

Constrained supply keeps the feedback loops operating. Hence the US property markets with the biggest housing price rises and collapses tended to be constrained-supply markets (pdf), with the booms in constrained-supply markets lasting significantly longer on average before the “correcting” price collapse. The paper on the foreclosure crisis (pdf), which helped prompt this post, provides an example of rampant optimism defying past experience:

The answer to why investors purchased subprime securities is contained in the third column of the same Lehman analysis cited above, which lists the probabilities that were assigned to each of the various house price scenarios. It indicates that the adverse price scenarios received very little weight. In particular, the meltdown scenario—the only scenario generating losses that threatened repayment of any AAA-rated tranche—was assigned only a 5 percent probability. The more benign pessimistic scenario received only a 15 percent probability. By contrast, the top two price scenarios, each of which assumes at least 8 percent annual growth in house prices over the next several years, receive probabilities that sum to 30 percent. In other words, the authors of the Lehman report were bullish about subprime investments not because they believed that borrowers had some “moral obligation” to repay mortgages, or because they didn’t realize that the lenders had not fully verified borrower incomes. The authors were not concerned about losses because they thought that house prices would continue to rise, and that steady increases in the value of the collateral backing the loans would cover any losses generated by borrowers who would not or could not repay.

Relative to historical experience, even the baseline forecast was optimistic, and the two stronger scenarios were almost euphoric. A widely circulated calculation by Shiller (2005) showed that real house price appreciation over the period from 1890 to 2004 was less than 1 percent per year. A cursory look at the FHFA national price index gives slightly higher real house price appreciation—more than 1 percent—from 1975 to 2000, but still offers nothing to justify 5 percent nominal annual price appreciation, let alone 8 or 11 percent. Further, even sustained periods of elevated price appreciation are rare.

The optimism was not unique to the Lehman report. Table 3, based on reports from analysts at JPMorgan, shows that optimism reigned even in 2006, after house prices had crested and begun to fall. Well into 2007, the analysts were convinced that the decline would prove transitory and that prices would soon resume their upward march (pp. 17-18).
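The arithmetic is easy to reconstruct in rough form. In the sketch below, the 5 percent, 15 percent and combined 30 percent weights come from the quoted passage; the residual weight, its split, and the scenario growth rates are hypothetical fill-ins, since the underlying table is not reproduced here.

```python
# Probability-weighted expected annual house price growth.
# Weights for "meltdown" (5%), "pessimistic" (15%) and the two
# strongest scenarios (30% combined) are from the quoted passage;
# the baseline weight, the 15/15 split and the growth rates are
# hypothetical illustrations.
scenarios = {
    "meltdown":    (0.05, -0.05),  # severe decline (illustrative rate)
    "pessimistic": (0.15,  0.00),  # flat prices (illustrative rate)
    "baseline":    (0.50,  0.05),  # hypothetical residual weight
    "strong":      (0.15,  0.08),  # "at least 8 percent" growth
    "euphoric":    (0.15,  0.11),  # 11 percent growth
}
assert abs(sum(w for w, _ in scenarios.values()) - 1.0) < 1e-9

expected_growth = sum(w * g for w, g in scenarios.values())
print(f"expected annual growth: {expected_growth:.1%}")
# ~5.1%: heavy weight on benign scenarios makes the downside almost
# invisible in the expectation -- the serial discounting of downside
# risk the post describes.
```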

As the authors suggest, it is the alignment of beliefs, not incentives, which fueled the US housing price surges. A pattern that seems to be general to investment manias. New technology, with its inherent uncertainties onto which (pdf) positive framings can be projected, and constrained-supply assets–which can more easily sustain feedback loops–are particularly prone to such alignments.

All of which can lead directly to major financial crises, as collective risk profiles become seriously unbalanced.

Complexities all round

Railways did really change things in very new ways

Though, even here there are complexities. Odlyzko repeatedly observes that private infrastructure investments in the late C18th and C19th were often disasters for the investors while providing great social benefits for the wider society in the longer term. That the private return to infrastructure is often greatly below the social return is a regular feature of such investment (since infrastructure regularly generates significant positive externalities) and accounts for both its history (many private losses, frequent public provision) and its public policy complexities. The investment illusions of the investors turned out to be socially beneficial. Investment manias are not necessarily all bad.

Odlyzko even shows that the 1830s Railway Mania proved (in the longer term) to be profitable (pdf). That time it really was different. The subsequent 1840s Railway Mania, not so much.

Regarding the perennial propensity to serious cost overruns in major projects, one of the Rothschilds–who made much of their money financing railways, though generally not British ones–is supposed to have opined (pdf):

There are three roads to ruin: gambling, women — and engineers. The first two are the most pleasant, but the last is the most certain (p.97).

Budgets and techies; not a natural pairing. In any period.

But whether bubbles–due to people taking their cues from others and using common information framings, where the strength of the price rise seems to “demonstrate” that some previous constraint has been profitably breached–demonstrate market inefficiency is a rather more complicated question.

If notable exceeding of previous prices is taken as a signal that previous constraints have been overcome, is that not a departure from past information and so a breach of market efficiency? Not necessarily, since it is the divergence from past patterns which is being taken as a positive sign. (Complicated, isn’t it? Note that this is a different question from failure to incorporate existing information.)

Hindsight may be 20/20, but using information from their future to “demonstrate” irrationality is not any sort of proof. Even identifying some basic recurring innumeracy in estimates is not definitive, since it may only be clear after the fact that there was not some counterbalancing error in the other direction–it is common for people whose estimates turn out to be correct to be so because they made counterbalancing errors, not because all aspects of their estimates turned out to be correct. Profits can occur despite calculation errors–how can you know at the time which figures being wrong will be crucial?

Odlyzko argues for (pdf) measures of “gullibility” and “information viscosity” to warn about developing investment manias. The latter idea may well be worth pursuing, but gullibility does not seem quite the right term. There is an implication of deliberate manipulation which misses the point of problematic common framings. Moreover, bubbles are not a general, across-all-assets phenomenon. Odlyzko also classes football barracking as gullibility (p.32), when that is just tribalism (chosen pseudo-tribes, but still tribes). Finally, “gullibility” smacks of the “knowing, all-wise observer”, which is always a doubtful role to take in social science.

Odlyzko repeatedly observes that information was available during all the [various] investment manias which cast very strong doubt on the profit projections for the asset type involved in the mania. Either by working from available data or taking note of such analysis already available.

As the paper on the foreclosures crisis notes (pdf):

In hindsight, it is hard to see how two groups of analysts could work in close proximity at the same financial institution and not notice the colossal dissonance implied by their respective analyses. For example, during the peak of the mortgage boom, mortgage analysts at UBS published reports showing that even a small decline in house prices would lead to losses that would wipe out the BBB-rated securities of subprime deals … . At the same time, UBS was both an issuer of and a major investor in ABS CDOs, which would be nearly worthless if this decline occurred. Why didn’t the mortgage analysts tell their coworkers how sensitive the CDOs would be to a price decline? This question goes to the heart of why the financial crisis occurred. The answer may well involve the information and incentive structures present inside Wall Street firms. Employees who could recognize the iceberg looming in front of the ship may not have been listened to, or they may not have had the right incentives to speak up. If so, then the information and incentive problems giving rise to the crisis would not have existed between mortgage industry insiders and outsiders, as the inside job story suggests. Rather, these problems would have existed between different floors of the same Wall Street firm (p.25).

Accepting that the information casting doubt on key beliefs pertaining to the various investment manias could reasonably have been seen at the time to have the implications Odlyzko draws from it, the evidence suggests it was intense common framing specific to the assets involved which led to such information being ignored or discounted. So, the issue is intense common framings which develop and then suddenly collapse. Even if we take the intense common framings–a widespread aligning and intensifying of individual rigidities in use of information–as a departure from market information efficiency (as seems reasonable), and the collapse as a return to it, we will still need a dynamic model of information selection and action. A collapse in “gullibility” is part of what has to be explained and does not seem a fruitful form of analytical framing. (Asset markets as capable of developing short-term information rigidities but efficient in the long run–sound vaguely familiar?)

Risk and Uncertainty

Economist Frank Knight famously differentiated between risk (which can be calculated) and uncertainty (which cannot). Skepticlawyer posted a briefing on the GFC which used the notion nicely. I have suggested that Keynes’s “animal spirits” is how we frame uncertainty.

Putting the distinction between risk and uncertainty slightly differently than Knight does: ordinary risk is where there is sufficient confidence in available information about the structure of identified possibilities that a pattern of expected risks can be derived from it–so that, even if specific values cannot be calculated, general rankings and ranges can reasonably be derived, even if only of the “x is greater than y” form. Uncertainty is where there is insufficient confidence in information about how the likely outcomes are structured, frustrating calculation even in general terms: the likely outcomes cannot be expressed mathematically in any useful sense (taking mathematics to be the science of pattern and structure), because available information provides insufficient pattern or structure within which likely outcomes can be assessed.

If uncertainty increases so that it is no longer possible to reasonably price risk for a range of financial assets, then a financial crisis can ensue (see SL’s posted briefing). Conversely, if downside risks are seriously and serially discounted (i.e. uncertainty is both viewed positively and the risks seriously underestimated), then an investment mania can ensue–a profit-seeking response to accepting intense positive common framings. The subsequent crash in prices when the mania busts can simply be the return to more accurate risk assessment, with minimal financial disruption–factors such as bankruptcy laws and loan-to-value ratios will affect this–or include a serious expansion of negative uncertainty, leading to financial crisis.

One can also wonder how much role calculation plays if positive framing of uncertainty is part of the mania. Especially if it is based on mutually reinforcing cues; which would weaken the effect of pointing out problems with underlying calculations or numeric assumptions–can you calculate people out of framings they were not entirely calculated into?

The so-called Greenspan put (an implicit promise to expand liquidity to keep the financial system operating after a major asset price collapse) was the Fed operating to ensure that any asset price collapse did not lead to a more general financial crisis. Since asset market volatility has also occurred during periods when no such policy was operating (the policy began as a response to the 1987 stock market crash, after all), and the “put” provided no guarantees of profit from specific assets, I am sceptical it had more than a marginal effect on asset volatility. Especially as technological innovation has an inherent tendency (pdf) to boom-and-bust cycles.

Given that monetary policy during the Great Moderation delivered low inflation, generally falling unemployment and solid economic growth, indicting it for asset booms it had dubious connections with is to ignore the significant macroeconomic costs (pdf) any putative anti-asset boom policy would likely have required–especially as the record of “bubble-popping” by central banks is so fraught.

More specific policy responses are much less likely to involve significant macroeconomic costs. Given that a feature of investment manias is that the authorities are often drawn in (i.e. share the common framings), prudential regulation of the risk exposures of financial firms which does not rely on discretionary triggers seems preferable to discretionary regulation, which regulators may (1) judge to be unnecessary, (2) be reluctant to invoke because of the negative signals involved, or (3) swamp the market with–tarring the prudent with the exposed, but simplifying regulatory action and minimising fraught regulatory judgements.

There is also a plausible role for macroprudential actions, such as those the RBA and Australian Prudential Regulation Authority (APRA) undertook at the height of the Oz housing boom–the RBA’s careful and explicit managing of expectations again outperforming the Fed’s gnomic communications policy, as APRA’s attentive pragmatism did the US regulators’. A utilitarian political culture can be something of an advantage–all part of what makes us the happiest industrialised country (via). (It is worth noting that RBA and APRA policy followed the lines suggested in the aforementioned 1999 paper [pdf] co-authored by one Ben Bernanke.) It is also worth noting that the concern focused particularly on risk profiles–something that is a legitimate target for effective management of the financial system and which does not require prior identification of asset price surges as bubbles.

Common, serial discounting of downside risks; the development of intense common framings and information blockages from an alignment of rigidities in the use of information; such would seem to be what to concentrate analytical attention on, to both attempt to measure any departures from market information efficiency and to make prudential regulation work more effectively by being more resistant to intense common framings about asset prices and so more sceptically attentive to risk profiles.

ADDENDA For a post (and comments) expressing robust scepticism about the notion of bubbles, see here. The problem of prediction is a serious one–that prices eventually fall is not proof that accusations of a specific bubble were correct, not least because it depends on what happens next. Also, a very thoughtful post on the macroeconomics of rising and falling asset prices.

One Comment

  1. Posted May 29, 2013 at 9:32 am

    Interesting post. I find Odlyzko’s idea of a gullibility index interesting. I’ll have to read through that paper when I get a chance. I imagine it would try to measure how much people are relying on consensus to make their investment decisions, compared to how much they are relying on their own research. This wouldn’t necessarily require any assessment of the validity of the consensus views, rather it would measure its procedural quality.

    My main problem with the EMH is the discrepancy between the test and the conclusions drawn from it. The test illustrates that the market handles information better than any one individual (or small group). This does not imply that it handles such information rationally. Nor does it imply that the market handles information better (producing a more efficient outcome) than other society wide systems could.
