A few months ago I discussed the failings of econophysics and, more generally, of the economic paradigm that treats people like computers and views economic dynamics like physics. The natural follow-up question is, “What can you say that is constructive?” The answer is an emerging approach to behavioral economics.
Over the past few decades it has dawned on some researchers that we don’t make decisions the way most economists think we should, and as a result behavioral economics has become a burgeoning field of study. Initially, the bulk of this field consisted of cataloging behavior deemed aberrant and anomalous. That is, the underlying assumption was that the economic view of decision making is the correct one, and economists only needed to see where people get it wrong. Thus we had descriptions of behavioral economics such as “exploring limited rationality” and developing models for the “systematic imperfections in human rationality.” When inconsistencies between behavior and theory were demonstrated, the most charitable response from the neoclassical school was that maybe there was a missing factor; the theory was correct but not well parameterized. Unlike similar fields in psychology and biology, little time was spent on understanding how people think, why they think the way they do, and the ways the bedrock assumptions of economics, based on mathematical methods and axioms of behavior, might be off the mark.
And they probably are off the mark because, after all, neoclassical economics is missing half the story. It has left out any consideration of the context in which people make decisions, how that relates to people’s varied experience, environment, and the uncertainty they harbor about how the world might change in unanticipated ways — ways that cannot be captured through an enumeration of the probabilities of the possible states of nature. One field that does take this important (for humans) context into account is called behavioral ecology. It is not as well known in economics as it is in biological and psychological studies of behavior. Now, behavioral economics is incorporating this psychological realm.
This new approach is a quiet revolution that may transform the way we look at economic behavior. The era of mathematical, axiomatic views of human behavior will give way to approaches that start with how people actually approach decision making, seek to understand why they do so, and then ask why that approach might have arisen evolutionarily and how it, rather than the utility maximization that has dominated the field for two generations, moves us closer to reality.
Following is a critique of the neoclassical approach, and the initial and perhaps still dominant approach of what might be called Behavioral Economics 1.0, within the context of behavioral ecology. A key proponent of behavioral ecology is Gerd Gigerenzer. I rely on his writings, including his book Rationality for Mortals, in much of the discussion below.
Assumption: We are Logicians
The seminal work on which behavioral economics 1.0 rests is that of Kahneman and Tversky. Using carefully posed questions, they plumb the ways people fail as rational beings, where rational means making decisions in a way consistent with the rules of logic. They find that the same question posed in different but logically equivalent ways leads to different results. They catalog these aberrations as demonstrating human tendencies toward heuristics, biases, frames, and other devices.
The notion here, which was then embraced by the first wave of behavioral economists, is that if nothing else, a rational human should act logically. The problem with this is that for humans logic cannot be considered apart from context, such as the usage and norms of language. For example, does anyone really think that when Mick Jagger sings “I can’t get no satisfaction” he actually means he can get satisfaction? If you are parsing like a logician, that is what you think, because you are operating in the absence of context, namely how people use language. Language usage and the mode of conversation are among the clearest examples of how context and norms matter. If someone says “I’m not going to invite anyone but my friends and relatives,” does anyone really think that means he will only invite that subset of people who are both his friends and also his relatives? Again, that will be the takeaway for someone parsing like a logician. These two examples are simplistic, but they are fairly illustrative of the work used to establish the failure of logic and the inconsistencies attributed to framing and the like.
The bedrock of much of behavioral economics assumes that we should follow the rules of logic, and when we don’t, that it is suggestive of a behavioral bias or anomaly; the axioms are right, and we are flawed. The objective is to uncover those flaws. A classic example of the problems that come from this assumption is shown by this question posed by Kahneman and Tversky, and critiqued by Gigerenzer:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.
Which of two alternatives is more probable:
A. Linda is a bank teller.
B. Linda is a bank teller and is active in the feminist movement.
The vast majority of U.S. college students who were given this question picked B, thus scoring an F for logical thinking. But consider the context. People are told in detail about Linda, and everything points to her being a feminist. In the real world, this provides the context for any follow-up. We don’t suddenly shift gears, going from normal discourse based on our day-to-day experience into the parsing of logic problems. Unless you are a logician or have Asperger’s, the term “probable” is going to be taken as “given what I just described, what is your best guess of the sort of person Linda is.” Given the course of the question, the bank teller is extraneous information, and in the real world, where we have a context for knowing what is extraneous, we filter that information out.
Demonstrating our failures to operate within the framework of formal logic is more a manifestation of logic not being reconciled to context than of people not being logical. Much of Kahneman and Tversky’s work could just as well have been directed toward the failures of formal logic as a practical device as toward the failures of people to think with logical rationality.
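The formal rule behind the Linda problem is the conjunction rule: for any events A and B, P(A and B) can never exceed P(A), because the people satisfying both conditions are a subset of those satisfying either one alone. A toy sketch, with an invented population (the counts here are mine, purely for illustration, not data from Kahneman and Tversky):

```python
# Conjunction rule sketch: P(teller AND feminist) <= P(teller).
# The five "people" below are invented for illustration only.
population = [
    {"teller": True,  "feminist": True},
    {"teller": True,  "feminist": False},
    {"teller": False, "feminist": True},
    {"teller": False, "feminist": True},
    {"teller": False, "feminist": False},
]

p_teller = sum(p["teller"] for p in population) / len(population)
p_teller_and_feminist = sum(
    p["teller"] and p["feminist"] for p in population
) / len(population)

# The conjunction picks out a subset of the bank tellers, so its
# probability can only be smaller or equal.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)  # 0.4 0.2
```

However the invented counts are varied, the inequality holds, which is why option B can never be more probable than option A under a strict probabilistic reading of the question.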
Assumption: We are Mathematicians
In going up against the neoclassical paradigm, behavioral economics sets itself against mathematical structure. A mathematician entering the world of economics begins with a set of axioms. That is just the way mathematics works. And one of those axioms is that people think like mathematicians. In starting this way, they fail to consider how people actually think, much less how that thinking is intertwined with their environment and the context of their decisions.
The mathematical approach is to assume that, absent constraints on cognitive ability, people will solve the same sort of problem a mathematician will solve in decision making: one of optimization. Then, recognizing that people cannot always do so, they step back to concede that people will solve the optimization problem subject to constraints, such as limited time, information, and computational power. Of course, if computational power is an issue, then moving into a constrained optimization is moving in the wrong direction, because the new problem may be even more difficult than the unconstrained one. But given the axioms, what else can you do?
It doesn’t take much familiarity with humans – even human mathematicians – to realize we don’t actually solve these complex, and often unsolvable, problems. So the optimization school moves into “as if” mode. “We don’t know how people really think (and we don’t care to know) but we will adjust our axioms to assume they act ‘as if’ they are optimizing. So if we solve the problem, we will understand the way people behave, even if we don’t know how people’s mental processes operate in generating their behavior.”
Behavioral economics 1.0 does not fully get away from the gravitational pull of this mathematical paradigm. Decision making is compared to the constrained optimization, but then the deviations are deemed to be anomalies. Perhaps this was a necessity at the time, given the dominance of the neoclassical paradigm. But academic politics aside, it might be better to ask if the axioms that would fit for a mathematician are wrong for reality. After all, I could start a new field of economics where I assert as an axiom that people make decisions based on astrology, and then enumerate the ways they deviate from the astrological solution. Of course, people will throw stones at such an axiom, but I do have evidence that there are people who operate this way, which is more, as far as I can tell, than the optimization school has.
Behavioral economics of the 2.0 variety, patterned after the context-laden methods of behavioral ecology, does not take mathematical optimization as its frame, so to speak. And the more it delves into how people actually think – work that naturally originated in psychology rather than economics – the more we find that people employ heuristics: rules of thumb that do not look at all like optimization.
Assumption: We are Probability Theorists
Behavioral economics recognizes that we operate in an uncertain world, and so assumes people not only act “as if” they optimize, but do so under uncertainty. Things then get really complicated, because we have not only added constraints but also made the problem stochastic.
Heuristics take a different approach to this problem; they overcome the uncertainty by applying coarse and robust rules. They do not try to capture all of the nuances of the possible states and their probabilities. They operate in a different way, unrelated to optimization. They use simple approaches that are robust to changes in states that might randomly occur.
This turns out to be better because it recognizes an important aspect of our environment that cannot be captured even in a model of constrained optimization under uncertainty: there are things that can happen which we cannot anticipate, much less assign a probability to. In such an environment, the best solution is one that is coarse. And being coarse and robust leads to another anomaly for those who are looking through the optimization lens: in a robust and coarse rule, we will ignore some information, even if it is costless to employ. (This is a point of a paper I co-authored years ago in the Journal of Theoretical Biology, one that, like much of the argument in this post, has been embraced in behavioral ecology while being passed over in behavioral economics.)
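One well-known heuristic of this type is Gigerenzer’s “take-the-best”: to decide which of two options scores higher on some criterion, check cues one at a time in descending order of validity, decide on the first cue that discriminates, and ignore everything else. A minimal sketch, using the city-size comparison task from Gigerenzer’s work; the cue names and values below are invented for illustration:

```python
# Sketch of the "take-the-best" heuristic: compare two options cue by
# cue, in descending order of cue validity, and stop at the first cue
# that discriminates. All remaining information is deliberately ignored,
# with no weighting or optimization.

def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first discriminating cue,
    or None if no cue discriminates."""
    for cue in cues:  # cues assumed pre-sorted, most valid first
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return option_a if a > b else option_b
    return None

# Which of two cities is larger? Binary cues, checked best-first.
cues = ["is_capital", "has_airport", "has_university"]
city_a = {"name": "A", "is_capital": 0, "has_airport": 1, "has_university": 1}
city_b = {"name": "B", "is_capital": 0, "has_airport": 0, "has_university": 1}

choice = take_the_best(city_a, city_b, cues)
print(choice["name"])  # "has_airport" is the first cue that differs -> A
```

Note that the university cue is never consulted: once a higher-validity cue discriminates, the rest of the information is discarded even though it is costless, which is exactly the behavior that looks anomalous through the optimization lens.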
Let’s consider environmental context again to see why apparently rational appeals based on the application of probability theory might be off the mark. At Caltech, Antonio Rangel is looking at how the brain lights up when various problems are posed to subjects. It turns out that problems involving large losses engage different parts of the brain than problems that, from a probability standpoint, are nothing more than mirror images looking at the potential for large gains. This might provide physiological evidence to support the irrationality observed by many behavioral economists. Or it might demonstrate that these apparent biases were wired deep in our evolutionary past, and that they might be what is rational given that past.
Today it is not hard to envision a windfall gain that is similar in magnitude to a large loss. We can hit the lottery; we can build up wealth to last our lifetime. We can do that because of relatively new social and economic structures that allow us to save our wealth, and a legal structure backed up by a police force that gives us confidence that we and our possessions will be around long enough for us to enjoy them.
If we go back far enough, and not so far in terms of evolutionary time, the only good thing that could happen was capturing a large animal, or rebuffing the most recent tribal raid. Anything good was short-term and could easily be reversed. On the other hand, the negative tail was long and ominous. Even short of the not insubstantial risk of losing one’s life or that of one’s family (and with it one’s future support), there was the risk of crippling injury, floods, and any number of other calamities. Add to these a gnawing realization that there were calamities that could not even be envisioned. In that world, it is not surprising that the brain circuitry would be wired differently for gains and losses. In that world, mapping gains and losses with any notion of symmetry is what would be irrational.
This use of robust and information-sparse heuristics again stems from context. We make our decisions in the context of our environment at the time, and our experience with how the world works. In that world, we have to ignore information because much of it is likely to be irrelevant.
Mathematical optimization can be correct in its purified world and we can be rational in our world, without optimization as the benchmark. It is a truism that if we inhabit a world that fully meets the assumptions of the mathematical problem, it is irrational to deviate from the solution of the mathematical optimization. So either we catalog our irrationality and biases, or we ask why the model is wrong. The invocations of information costs, limited computational ability, and missing risk factors are all continually shaving off the edges of the square peg to jam it into the round hole. Maybe the issue is not that we are almost there, and that with a little tweaking we can get the optimization approach to work. Rather, logical models may not be the right approach for studying and predicting human behavior.
It deserves repeating that the use of heuristics and the deliberate limits on the use of information as employed in the Gigerenzer worldview are not part of an attempt at optimization, real or “as if”. It is not a matter of starting with optimization and, in some way, determining how to achieve something close to the mathematically optimal solution. It is a different route toward decision making, one that, unfortunately for economists and mathematicians, is most likely the way people actually operate.
Logic, math, and probability are all context independent. That is where their power lies; they will work as well on Mars as on Earth. But heuristics can take into account context and norms, an awareness of the environment, and our innate understanding that the world may shift in unanticipated ways. As with many new paradigms, the new route to behavioral economics adds a critical part of the world that the old one ignored. Perhaps it was ignored for the same reason physics assumes a perfect vacuum. Or perhaps because the field became overrun with mathematicians, and as Kuhn has said, a new paradigm such as this will only successfully assert itself once the older generation dies off.
Originally published at Rick Bookstaber’s Blog and reproduced here with permission.
Opinions and comments on RGE EconoMonitors do not necessarily reflect the views of Roubini Global Economics, LLC, which encourages a free-ranging debate among its own analysts and our EconoMonitor community. RGE takes no responsibility for verifying the accuracy of any opinions expressed by outside contributors. We encourage cross-linking but must insist that no forwarding, reprinting, republication or any other redistribution of RGE content is permissible without expressed consent of RGE.