What’s wrong with economic models?

As linked this morning, Dr Steve Keen is getting a bit of attention from Paul Krugman after he wrote a primer article on Minsky in which he called out Krugman, among other neo-classical economists. This was also covered by Philip Pilkington over at Naked Capitalism.

I’m not going to address the discussion directly, but essentially Dr Keen is making the point that neo-classical economists ignore private sector debt, to their detriment. Until quite recently, however, Dr Keen himself also appeared to ignore vertical transactions, that is, transactions between a fiat-issuing state and its private sector.

While I have time for Dr Keen’s analysis of debt dynamics, which has been very valuable, I do think he made some key mistakes in his projections for Australian private sector credit. When the GFC hit Australia, he didn’t appear to understand the ability of counter-cyclical monetary and fiscal policy to support both private sector wealth and a re-ignition of private sector credit demand.

Since that time it appears Dr Keen has ventured towards the realm of Modern Monetary Theory, possibly under the guidance of Randall Wray and/or Bill Mitchell, two major advocates of MMT. I am obviously speculating here, as I don’t follow him that closely, but from what I have seen of him over the last few years that is certainly my impression. I am, of course, happy to be corrected if I have got any of this wrong.

That however isn’t what I want to talk about in this post.

What this exchange actually got me thinking about was how trapped economists are in their models, some of which were built on economic theory that pre-dates modern features such as fiat currencies and floating exchange rates. The fact that economists use models may not seem like a big issue, so let me try to give you an analogy from a non-economic area to explain why I consider this to be such a problem.

Although I can’t claim to be an expert in this area, the realm of computing offers a good analogy in Object-Relational Mapping (ORM). For those of you who aren’t information technology professionals, the basic idea of these tools is that they allow software developers to represent data in the form of objects, such as a person or a house, and by doing so they abstract away exactly how those objects are going to be stored in the underlying database.

These tools are extremely useful because they allow software developers, who may not have particular skills in the area of database technologies, to very quickly build models of the data they wish to store as part of their software. These ORMs then automatically generate all of the necessary software components to do the actual work of storing the data.
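As a rough illustration (not any particular ORM’s API; the `Person` class and `TinyMapper` are invented for this sketch), the developer works with plain objects while the mapper generates the SQL behind the scenes, here in Python with the built-in sqlite3 module:

```python
import sqlite3

class Person:
    """A plain object the developer works with; the database is hidden."""
    def __init__(self, name, city):
        self.name = name
        self.city = city

class TinyMapper:
    """Toy ORM: generates the SQL needed to store and load Person objects."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS person (name TEXT, city TEXT)")

    def save(self, p):
        self.conn.execute("INSERT INTO person VALUES (?, ?)", (p.name, p.city))

    def all(self):
        rows = self.conn.execute("SELECT name, city FROM person")
        return [Person(name, city) for name, city in rows]

conn = sqlite3.connect(":memory:")
mapper = TinyMapper(conn)
mapper.save(Person("Alice", "Sydney"))
people = mapper.all()
print(people[0].name)  # the developer never wrote a line of SQL themselves
```

The developer only ever touches `Person` objects; whether the data lands in SQLite, Postgres, or anything else is, to them, invisible.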

This allows software developers to build an abstraction layer between the domain they understand, software, and the domain they may not, databases. This process works very well, and is a very common software engineering practice these days because, in most cases, it significantly reduces the time it takes to build a piece of software.

There are, however, a couple of problems with using ORMs, and a major one stems from the fact that the software developer doesn’t have to understand how the underlying database system actually works. He or she can simply add data through their model (an abstraction) and the data is persisted in the database. That is, for many software developers, especially younger ones, the database system is a ‘black box’.

Because the software developer doesn’t have functional knowledge of how the database system stores information, the software tends to work very well until a certain limit is reached. At this point, let’s say around 1 million records in the database, the system starts to perform outside the boundaries of how it is ‘expected to’. And now we have a serious problem. The software developer doesn’t understand how the underlying storage mechanism works, because he or she is viewing the system through a model, an abstraction that hides away the important complexities of how the system actually functions.
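A concrete example of this kind of hidden cliff is the classic “N+1 query” pattern, where the abstraction quietly issues one query per object instead of one query for everything. A minimal sketch (the tables and data are invented for illustration), again using Python’s built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
""")
conn.execute("INSERT INTO author VALUES (1, 'A'), (2, 'B')")
conn.executemany("INSERT INTO book (author_id, title) VALUES (?, ?)",
                 [(1, 'X'), (1, 'Y'), (2, 'Z')])

queries = 0
def run(sql, args=()):
    """Execute a query while counting how many round-trips we make."""
    global queries
    queries += 1
    return conn.execute(sql, args).fetchall()

# What a naive abstraction does behind the model: one query per object ("N+1")
authors = run("SELECT id, name FROM author")
naive = {name: [t for (t,) in run("SELECT title FROM book WHERE author_id = ?", (aid,))]
         for aid, name in authors}
naive_queries = queries

# The same result with a single JOIN, which the developer never sees the need for
queries = 0
joined = {}
for name, title in run("SELECT a.name, b.title FROM author a "
                       "JOIN book b ON b.author_id = a.id"):
    joined.setdefault(name, []).append(title)
print(naive_queries, queries)  # query count grows with the data vs stays constant
```

With two authors the naive version already issues three queries to the JOIN’s one; with a million records it issues a million and one, which is exactly the point at which the ‘black box’ stops behaving as expected.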

It is usually at this point that the software developer goes and finds the database administrator. This, in many cases, is the old guy with the beard who sits in some dark corner of the office grumbling about better days when computers were simple and there were far fewer idiots to deal with. The database administrator starts shouting at the developer that his software is generating masses of inefficient queries that are taking an excessive amount of time to execute. Had the developer only bothered to learn about the important complexities of the database system itself, that is, how the system actually works instead of modelling it, they never would have designed the system in a way that would inevitably lead to failure.

After a couple of hours’ work the database administrator returns with a new version of the query which, although it produces the same results, takes a fiftieth of the time to execute. The software developer puts this “special code” into their model and goes back to viewing the database as a ‘black box’.
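One common version of such a fix is simply an index the model never knew it needed. A hypothetical sketch with Python’s built-in sqlite3 (the table and index names are my own), using EXPLAIN QUERY PLAN to show how the database’s strategy changes while the results stay identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO record VALUES (?, ?)",
                 ((i, "x") for i in range(10000)))

def plan(sql):
    # Ask SQLite how it intends to execute the query
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT payload FROM record WHERE id = 4242"
before = plan(query)   # full table scan: every row is examined

conn.execute("CREATE INDEX idx_record_id ON record(id)")
after = plan(query)    # index lookup: only a handful of rows are touched

print(before)
print(after)
```

The query text and its results are unchanged; only the access path differs, which is precisely the kind of detail the abstraction hides from the developer.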

So now you may understand my concern. Models are wonderful things because they allow you to simplify complexity, but in doing so they also abstract away things that are fundamentally important to understand. That is, they work well right up until they don’t.

This is why I believe it is fundamentally important that economists, just like computer programmers, learn the functional mechanics of the economy before attempting to provide abstractions of how it works.



  1. Good analogy

    except for the difference in distance and nature between layers – a database is a way of representing a model in a mathematically rigorous structure (that can still be broken of course) and a mapping between an object and a DB is simply a transformation of some form – it’s not much different to, say, translating between geographic coordinates and a map grid – might be funny around the edges but it’s quite mathematical

    an economic model on the other hand maps between something like an object layer and something that is a quagmire of innumerable moving and changing states (millions of people interacting in trillions of ways and impacted upon by zillions of influences ranging from flapping butterfly wings all the way to 15 meter tsunami)

    An economic model is more akin to someone doing a Fourier analysis of a single wave in an ocean and being able to predict the wave will break sometime soon

    the rest of the ocean is assumed to be irrelevant


    • as the physicist says: let us assume a spherical cow, now to model the milk production we will assume….

  2. Those models don’t look too economic. Probably wouldn’t get out of bed for less than $5,000 a day. As for the reverse …

    • dumb_non_economist

      I’d agree, very uneconomical to run I’d say and I doubt any modelling would be possible. I bet you could input the same data a number of times and get different results each time!

  3. I know what you’re on about here in one sense, as one of my first jobs after uni was to program an algorithm in C for an actuary, and I had no idea what it really did in the system, but the unit testing treated it as a black box, and I bounds trashed it, time, etc. The run on a high-spec multi-CPU PC took twelve hours, and the previous system would need to run for days, I was told. Regardless of the DB, if you don’t code the query correctly and populate the fields then you will get rubbish out. There might also be rubbish in there in the first place.

    Thanks for explaining this topic as it’s a bit of a mystery to me. Also, these models can’t account for all stimuli … can they?

    • In modelling one will always simplify. Even more, most phenomena found in real life are described by non-linear differential equations. We don’t really know how to solve them, as they don’t have closed-form solutions. Instead, solutions can be APPROXIMATED using numerical methods. So we simplify and approximate, and the errors usually don’t cancel each other; they almost always compound.
      The result of the modelling can be horrific. Then we go back and try to fix it so it resembles the real system. The problem with that is that the coefficients we use and the changes we make to the model will give the simulation seemingly correct behaviour over a certain range. Outside that range the model will break down. But we wouldn’t be able to know that, as we have never had those conditions before to check the model against and validate its behaviour.
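      A tiny sketch of that error behaviour (an invented example for illustration, not the commenter’s own): forward Euler applied to the logistic equation y' = y(1 − y), comparing a coarse and a fine step size against the known closed-form solution.

```python
import math

def euler_logistic(y0, h, n_steps):
    """Forward-Euler approximation of y' = y*(1 - y) after n_steps of size h."""
    y = y0
    for _ in range(n_steps):
        y += h * y * (1.0 - y)
    return y

# Closed-form solution of the logistic equation at t = 2 for y(0) = 0.1
exact = 1.0 / (1.0 + 9.0 * math.exp(-2.0))

coarse = euler_logistic(0.1, 1.0, 2)    # two big steps to t = 2: errors compound
fine = euler_logistic(0.1, 0.01, 200)   # two hundred small steps to t = 2

print(abs(coarse - exact), abs(fine - exact))
```

      The coarse approximation drifts visibly from the true value while the fine one stays close, and nothing inside the model itself warns you which regime you are in.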

      • Thanks vonZetty. I’ve worked on control systems and had some experience coding to take care of non-linear behaviour. In the boundary conditions very small changes induce instability. I think of the economy in those terms. I’m trying to write an essay on this topic. In economics I’m not sure you can model it that well given the political inputs and constant changes to monetary policy, but I might be way out. Also, we’re affected by global changes over which we have no control. Anyway, I continue to research.

  4. We would probably need to distinguish between builders of these models and users. Builders of the ORM abstraction layer would need quite detailed understanding of the low level persistence engine. Versus developers who are just users of the ORM. In similar fashion builders of economic models would probably need a detailed understanding of the economy.

  5. russellsmith55

    I still am a programmer (at a finance service provider), and I’m very interested in learning the functional mechanics of how an economy works 🙂

    I’m doing a masters in finance at the moment, but so far it has focused more on things like actual financial instruments as opposed to the factors that you would want to use those instruments to trade on / hedge against.

    What do MB bloggers/readers recommend as the ‘must read’ literature?

    • Actually, in terms of functional mechanics, one of the better ones I have found is a booklet produced by the Chicago Fed (well, at least it claims it was).

      I have scribd’d it here


      Update: Actually, having read it again, it isn’t so good because it focuses too much on reserve requirements and multiplier effects. There is no mention of capital requirements, which are very important.

      I’ll leave the link up though.

    • Some practical Econometrics books

      – Modeling Financial Time Series with S-PLUS
      – Statistical Analysis of Financial Data in S-PLUS
      – Modern Applied Statistics with S

      Google, Master reading list for Quants.
      Join the QuantNet forum.

  6. Mate, the models used by Wall Street to price risk in Mortgage-Backed-Security Derivative transactions would have to be the ultimate example so far of the perils of computer modelling.

    None of them included any possibility of falls in house prices…….! DUH?

  7. Ronin8317

    The reason why economists focus on equilibrium is simple: that’s the limit of their tools. ‘Dynamic models’ which focus on changes are far too complicated to model mathematically. It can however be done in computing, and I suspect that is where the next advancement in economic theory will come from.

    I believe Dr Krugman found a hole in Dr Keen’s argument which has not yet been addressed. Namely, in order to lend, somebody must borrow; therefore demand has to exist before a bank can lend. Furthermore, banks cannot create money from ‘thin air’. What they actually did was far worse: they stole people’s savings in exchange for worthless pieces of paper. In fact, it would be better overall if banks did print money from thin air.

    • I think Steve Keen’s point is that the bank creates credit out of thin air. They don’t need to steal anyone’s savings but rather just dilute what is already out there, which can be argued as a form of stealing.

  8. How the Economic Machine Works and How it is Reflected Now

    A paper by Ray Dalio

    At the end he says:
    “I wrote this paper just to give you my simple-minded explanation of how I believe
    the economic machine works at a fundamental level so that you can assess it for yourself. However, those who are inclined to learn more about how the deleveraging process and long term cycles work should refer to my papers: “An In-Depth Look at Deleveragings” and “Why Countries Succeed and Fail Economically”.”


  9. I’d be interested to see what proportion of the MB audience works in which sector – I get the feeling there’s a fair chunk of us IT-in-finance geeks reading along…

  10. Targeting models is what climate change sceptics do as well; it’s a cheap trick. For many complex phenomena there is absolutely no way to get to the ‘functional mechanics’ without them.

    The example isn’t persuasive either; it could easily be extended almost indefinitely. Like the physicist or engineer coming to the DBA and saying “hey, you may know how to build a database, but you don’t have a bloody clue about how to build the hardware it all sits on, if you did…” etc.