Doing Economics with Computers, an EES Silver Symposium Series Videotape (Stanford: Engineering-Economic Systems, Stanford University, March 25, 1993).



EES Silver Symposium


Jim Sweeney: I'd like to again introduce one of my earlier students, Robert Marks. Bob was involved in some of the earlier work with me in economics and natural resources in his dissertation, looking at the dynamics of adjustment in natural resource markets. He has gone back to his home country, Australia, where he is a faculty member in the Australian Graduate School of Management, University of New South Wales.

We are fortunate that he has flown up here for just this presentation; he has managed to pad it with a little additional time, that is, he has managed to take a whole sabbatical year here visiting at the Graduate School of Business at Stanford, but I understand that the only reason he has taken that sabbatical year is so he can be here as part of our Symposium, although he has never told me that. Bob's been doing some very interesting work in using genetic algorithms to create strategies that can play games for game-theoretic solutions, and more generally he has been doing a lot of work on the use of computers for economic applications. And I would like at this point to turn it over to Bob Marks, who will be talking about doing economics with computers. Bob Marks.

Robert Marks: I remember it was a few days before first term and I got a phone call from Jim and he said, "Bob, there is an emergency. Will you teach EES110?" And so with about three days' notice I stepped in and taught EES110. This year it was about two weeks, I think, before today that Jim rang me up; the difference is that my name is in the program, so I can't complain. Actually, when Jim asked me to talk about some of my work, it seemed to me that I had three possibilities: I could have talked about resource and environmental economics that I had been doing, which followed on from my thesis with Jim; I could have talked about work I'd done on the economics of drug use and drug policy, which followed on from a tutorial I did with Ron Howard as a graduate student; but instead I chose to talk about something which doesn't directly follow from my time at EES -- using computers to do economics.


In 1971, when I arrived at Stanford, I can't remember whether the Santa Clara Valley was known yet as Silicon Valley, and those of us who remember back to that time realise that a revolution has occurred, a revolution in terms of the development of the Valley and in terms of the development of the computer. And over those twenty years we have seen a rise in desk-top computing and a rise in networking, and a fall, appropriately, in the cost of doing both those things. I might say that when I go to my office at the Business School during the sabbatical, I log in immediately to a machine in Sydney which has all my files and programs and word-processing software, and I edit the files and I process them and I download the PostScript file which I produce in Sydney to a printer at the end of the hall here at Stanford. Such is the transformation which has occurred, which enables me to continue working and to continue having the same email address as I have when I'm in Sydney; I can even talk to my research assistant over the computer about debugging programs and what we do next and so on. Now I guess that many of you have similar experiences, but I note, after looking through the EES biography book, that I think I was the only person who listed an email address, although that may be because, living in Australia, email is so convenient that I go to great lengths to say how people can get in touch with me if they want to.

Well, we have seen a revolution in the use of computers; what about academic economics and academic economists -- have they changed, and if so how? A brief history: if we look back at the classical and neo-classical economists up to Paul Samuelson, we note that these included some strong mathematicians, and Samuelson himself of course got the Nobel Prize for opening up economics to new techniques of mathematics and new applications of these techniques. The crowning achievement of economics indeed has been mathematical economics applied not just to the whole market or to abstract models but also to financial markets: Darrell Duffie in the Business School, who has a connection with EES, is someone who has extended the use of mathematics to these things. The strength of mathematics, of course, following Bill Linvill's comment about structure, is the consistency that it imposes upon our reasoning, a consistency imposed upon our modelling, and it allows us to extrapolate beyond our experience and be confident that we are imposing consistency. Mathematical economics for the most part has focused on existence, uniqueness, and stability -- the triumvirate -- and recall Bill Linvill's comments on the tape yesterday about equilibrium problems in mathematics. But economic systems are increasingly complex and there is increasing concern with disequilibrium, with the phenomena of institutions, organisations, households, and individuals trading and exchanging in situations which are far from the equilibrium situations that are characterised by mathematical economists. Indeed, as Jim mentioned in his introduction, my thesis was involved with disequilibrium phenomena of adjustment in natural resource markets and the relationship of those to the macroeconomy.

What have computers contributed to economics? Well, in the early 'fifties computers were seen as very fast calculators, very good for number crunching, and statisticians and the devotees of the emerging discipline of econometrics used computers to crunch numbers; the fall in computing costs and the fall in the cost of statistical packages has meant that now anyone can have a computer on the desk-top with a statistical package which enables them to do econometric studies. I might add that this has led to an increase in those who publish or try to publish papers which confuse causality and correlation, and statistics colleagues of mine in Sydney sometimes blanch when we receive submissions to the Australian Journal of Management from someone who has obviously pressed a few buttons on his PC and come up with something which is not evidence of understanding of the underlying statistical theory.

Parenthetically I might add that EES is a multidisciplinary department, and I flew from there to another multidisciplinary department in Sydney, and indeed I am visiting the Business School here, another multidisciplinary school. I seem to be an economist who wants to find others apart from other economists.

What about the application of computers to economics proper? Yesterday we heard from Shmuel Oren talking about market engineering, about what I might call microeconomic mechanism design, and a question I should have asked him at the end but didn't was to what extent he might see the application of what he was doing in electricity markets and elsewhere to, say, the emerging economies of eastern Europe, economies which have only had small or illegal markets and which are now trying, in most cases anyway, to catch up with western market economies. But, following Samuelson, a lot of what I did in my thesis, once I had developed a model, was symbol manipulation, and it is now possible to get computer programs to assist in symbolic manipulation. The first of these, as I understand it, was Macsyma at MIT, a mainframe program, but now Maple and Mathematica are available for those desk-top PCs that I was talking about before, and recently this book (Economic and Financial Modeling with Mathematica, edited by Hal R. Varian, Springer-Verlag, 1993) has been published, an anthology of chapters related to the use of Mathematica in economic and financial modelling. Chapters in this book include symbolic optimisation, designing an incentive-compatible contract -- which is related to the market engineering that Shmuel was talking about yesterday -- economic dynamics, perturbation and growth models, general equilibrium models, long-range dynamics and symbolic algebra programming, finding Nash equilibria -- again something that Shmuel mentioned yesterday in passing, an application of Mathematica to game theory -- co-operative games, diffusion, stochastic calculus, stochastic processes, option valuation, asset-pricing models, econometric packages, Bayesian packages, time-series models and, last but not least, decision analysis: a chapter by Robert Korsan, who in fact is an ex-SRI colleague, I would guess, of Ron Howard's. But interesting though the use of Mathematica is to economics, as evidenced by the contents of this book, it's still not what I'm talking about.

Consider: the standard mode of theorising in economics is deductive. We model economic agents as deductive agents -- as people, individuals, actors who are capable of perfect rationality. Agents derive their conclusions by logical processes from complete, consistent and well defined premises; that is what we mean by deductive processes.

An example of this might be the problem of a society which is composed entirely of mathematicians, people who are blind to the errors in their own theorems but, eagle-eyed as they are, see any error in other mathematicians' theorems. So proud are they as mathematicians that the same day that they realise by deductive process that their theorems in fact have errors -- at least one -- they shoot themselves. One day someone -- maybe it's the wizard -- comes to this society of mathematicians, looks at the theorems, calls all the mathematicians together, and announces that there are errors in the theorems. Forty days later some number of mathematicians shoot themselves. Three questions: how many mathematicians shoot themselves, why on the fortieth day, and what did the wizard tell them that they didn't already know?

We can solve this problem by deductive reasoning, and indeed that must have been what each of the mathematicians did, in order for what we observed to have happened, or at least in order for us to have confidence in the answers. Without going through the whole process, let me just say that if I were the only mathematician who had an error in his theorems, then I would see no other errors, and I couldn't see my own. The wizard would be telling me something that I didn't already know, and in that case I would deduce that, seeing that no one else had any errors that I could see, and knowing that I could see all other errors, it must be my errors that he noticed, so I should shoot myself that first day. If I could see one other mathematician with an error who doesn't shoot himself the first day, that must be because the situation wasn't as I described in the first place but rather because there are two people who have errors, and I am the second, so we both shoot ourselves on the second day. And so on and so forth: through a process of logical deduction, forty mathematicians each decide that they can see thirty-nine others and that they must be the fortieth. Nothing happens on the thirty-ninth day, and so forty mathematicians shoot themselves on the fortieth day. It takes forty days for this deductive process to work through. Most interestingly, what the wizard told them that they didn't already know was common knowledge: he gave them a basis on which to start their deductive process. I give this because I think it's an interesting little problem, but I think it's also one which shows that you can get answers using deductive processes and the logic is clean and relentless. It's self-consistent, but of course you have to make strong assumptions not just about your own ability to calculate, your ability to deduce, but also about the ability of everyone else in the group. Common knowledge is very important for field communications in time of war; it's important for e-mail if you want to be certain that the person you sent the message to got it and that you know he got it and so on. And it's also important in game theory, and indeed I first heard a problem very similar to the mathematicians' problem in a seminar by Bob Aumann in the Economics department in 1976 on this campus.
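To make the induction concrete, here is a minimal sketch in Python -- purely illustrative, with a recursion standing in for the mathematicians' reasoning -- of why the process takes exactly as many days as there are flawed theorem-provers:

```python
def shooting_day(k):
    """Day on which k flawed mathematicians shoot themselves, by induction:
    a lone flawed mathematician shoots on day 1, because the wizard has told
    him something new; each of k flawed mathematicians sees k - 1 errors,
    waits to see whether those k - 1 shoot on day shooting_day(k - 1), and
    when nothing happens concludes the next day that he is flawed too."""
    if k == 1:
        return 1
    return shooting_day(k - 1) + 1

print(shooting_day(40))   # 40 -- it takes forty days for the deduction to unwind
```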

But consider the process of deduction: if you pose this problem to your friends, you'll find that a lot of people spend a lot of time scratching their heads and can't carry out the sort of deductive process that I briefly outlined. Indeed, when problems get more complicated, the psychologists tell us that people may reach a complexity limit, beyond which the mathematical economists' assumption of a rational economic actor who can carry out these sorts of deductions is unrealistic, because it requires full knowledge of the problem, a perfect ability to compute the solution, and common knowledge that others also have full knowledge and have the ability to solve the problem perfectly. In other words, as problems get more complex, deductive solutions to them get brittle.

An example of a problem that people have relished for hundreds, maybe thousands, of years, which perhaps could be solved deductively but which hasn't been, is chess; and even in a simpler game, such as checkers, which might be amenable to deductive solution, it is still the case that computers have not yet beaten at least one person: the world champion is still a human being rather than a computer.

These thoughts in fact come from a paper (On Learning and Adaptation in the Economy, Santa Fe Institute Working Paper 92-07-038, 1992) by Brian Arthur, who is an economist at the Food Research Institute, who has been interested in the limits to deductive reasoning and their application to economics, and who is also associated with the Santa Fe Institute, of which I'll say more later. The question that Arthur poses is how we can construct models in economics which, if they don't have deductive rigour, because we feel that's beyond realism, at least have behavioural rigour, and he argues that if computational abilities are exceeded, so that the assumption of deductive rationality cannot hold, then we need to move to inductive reasoning. In fact we see that people make decisions even when they can't solve problems by a deductive process, and he argues that people do this inductively: people look for patterns, for analogies; they accept feedback from their experience of the real world. In short, they are adaptive and they learn. Adaptation and learning are not something which mathematical economists have paid much attention to in the past twenty-odd years, but they are something that I argue economists are going to spend more time thinking about -- at least a subset of economists is -- because learning and adaptation are now phenomena that we can model using computers in a way which is productive.

Deductive reasoning is essentially static, but inductive reasoning can be dynamic. Although learning and adaptation have not been used much in economics -- mathematical economics anyway -- in the last twenty years, there is a tradition of learning, adaptation and indeed borrowing from a discipline where dynamic processes are the rule and where deduction is completely alien. That discipline is biology.

In 1950 Armen Alchian had the insight that it wasn't necessary that firms be deductive profit-maximising entities, that they be rational in their processes of behaviour. He pointed out that to generate verifiable predictions about the visible characteristics of firms' behaviour, all that was sufficient was that firms exhibit adoption, or environmental selection; in other words, those firms that didn't, by some process of trial and error, learn rules which meant that they behaved as though they were profit-maximising would eventually disappear: they would either go broke or they would be taken over. And he pointed out that, following from the biological paradigm, it wasn't necessary that firms deductively profit-maximise, even though we might believe that most of them do and are very sensitive to this. Rather, a process of natural selection would winnow out those firms which didn't behave like that.

There was a flurry of papers in the early 'fifties, but not much happened really for another twenty-five years, when Sidney Winter borrowed from biology the idea of an inherited element -- what the biologists call a genotype, or what we now know as the genome through the genome program of decoding DNA. The genome we could think of as the underlying decision-making process of the firm, the firm's decision rule, and we distinguish that from the firm's actual behaviour in any situation, which the biologist would call the phenotype. In practice of course we can observe behaviour, but it's much harder to observe underlying rules, and, as I'll argue, computers allow us to run experiments where we model firms or individuals through their underlying rules and then observe the emergent behaviour of the individuals and of the aggregate as the computer experiment continues. So, by using the biological evolutionary analogy we can model economic actors which are not deductive or even conscious but which learn to adapt in their niches, given the pressures of selection.

Recently John Holland and John Miller, two people who are associated, amongst other places, with the Santa Fe Institute, have argued that the economic environment that most firms and individuals find themselves in is a complex adaptive system: complex in the sense that there is a network of interacting agents from which dynamic aggregate behaviour emerges, and that this aggregate behaviour can be described without detailed knowledge of the individual decision rules of this network of individual actors. (Holland J.H., Miller J.H. (1991), Artificial adaptive agents in economic theory, American Economic Review: Papers and Proceedings, 81(2): 365-370.) They would argue that firms are existing and competing in a novel and evolving world with technological innovation and strategic learning, and that the individual actors -- firms and individuals -- earn value, profits in the case of firms, utility in the case of individuals, and act to maximise this. They argue that these complex adaptive systems may be far from optimal, far from equilibrium, and indeed that there may be levels of hierarchy in this environment, with many degrees of aggregation, many degrees of organisation, and many degrees of interaction, each with its own time scale and each with its own characteristic behaviour. They argue that the best way to analyse the behaviour of economic agents in these complex systems is by what they call artificial adaptive agents, but which I prefer to call artificially intelligent adaptive economic agents: "artificially intelligent" because we can use machine learning, which is a branch of artificial intelligence, to study them; "adaptive" because they learn -- they are rewarded for behaviour which is good, and vice versa; "economic" because they are interested in maximising their rewards through the selection process.

The emergent behaviour which can occur from this is in fact something that another Santa Fe author, David Lane, a statistician, has described (Artificial Worlds and Economics, Santa Fe Institute Working Paper 92-09-048, 1992) in terms of aggregate-level constructs, without reference to attributes of the specific individual entities, as I was saying before. He argues that the aggregate properties persist much longer than the time scale of the individual actors -- the firms or the households or the individuals -- and that explaining these emergent behaviours in terms of the built-in micro-properties of the firms or the households or the individuals is difficult.

An example of this might be a situation you set up with a whole lot of firms or individuals exchanging goods with each other, but where the rules that you set up in the computer originally don't single out any one of these goods as being special. And yet, if it occurs that one of these goods does emerge to have the properties of money, as we understand them -- that it comes to be generally accepted in exchange -- then we have moved from a barter economy to a money economy.

In fact Tom Sargent, when he was at the Hoover Institution, did model such an economy and found that money did emerge. (Marimon R., McGrattan E., Sargent T.J., Money as a medium of exchange in an economy with artificially intelligent agents, J. of Econ. Dynamics and Control, 14: 329-373, 1990.) Now, I'm not arguing that complex adaptive systems and artificially intelligent adaptive economic agents should supplant the traditional tools of the mathematical economist. Rather, they should complement them, and wherever possible we should use these computer models to verify the results that have been obtained by closed-form solutions in mathematical economics. A paper of mine which looks at the behaviour of agents playing repeated games was one where I could show that the computer models were verified by the mathematical results. (Marks, R.E. (1992) Breeding optimal strategies: optimal behaviour for oligopolists, Journal of Evolutionary Economics, 2: 17-38.) But I was also able to go beyond situations amenable to solution using traditional techniques.

The Santa Fe Institute does have a connection with EES, I'm pleased to tell you, given the links that we are trying to make. The economics program at the Santa Fe Institute started with a workshop in 1987, which has been published as a book, The Economy as an Evolving Complex System -- a complex adaptive system, as I've been calling them. (Edited by Philip W. Anderson, Kenneth J. Arrow and David Pines, Addison-Wesley, 1988.) One of the godfathers of this workshop, and therefore of the economics stream at the Santa Fe Institute, was Ken Arrow. We saw yesterday that Ken was also involved in the early days, thirty years ago, of advising Bill Linvill and others about what Engineering-Economic Systems should include in its economics program.

Artificially intelligent adaptive economic agents are flexible because of the program that underlies them, and they are then able to adapt and to learn, unlike the mathematical tools of mathematical economists. They are consistent, however, again because of the computer language, unlike the verbal models which people have fallen back on in the absence of mathematical tools to tackle the messy, complex, adaptive economic environments which real-world economists have been looking at. So I would argue that these computer models retain the best of both: that is to say, they are flexible and yet consistent.

Moreover, we can control the experiments -- we can seed our computer worlds, our computer environments, with particular characteristics, particular profiles, of our economic agents; we can be explicit about their utility functions or their profit functions, about the information and knowledge they have, about expectations, and about their ability and speed of learning. We can run experiments to see, for instance, to what extent the behaviour of the evolving complex system depends upon these characteristics at the micro level. Unlike the laboratory experiments which some economists are now doing with human subjects, we can not just observe the behaviour -- the phenotype, to use the biologists' term -- we can also observe the genotype, the underlying decision rules, and we can change these and see how changing them alters the emergent behaviour.

Well, I've been talking in general terms; let me now talk in the time remaining more specifically about some of these computer experiments, computer simulations of economic phenomena. John Holland, whom I mentioned before, has been interested for a long time now in taking the biological analogy further than Alchian or Winter have done -- in modelling adaptive behaviour on the computer by borrowing from evolutionary biology. In particular he has developed a computer program which he calls the genetic algorithm. This can be used in what David Lane calls an artificial world to consider a whole lot of economic phenomena, but the one I want to focus on is that of repeated strategic interactions.

Strategic interactions are of interest because, unlike perfect competition, where firms and individuals are price-takers and where they can't by their individual behaviour affect the price and therefore the rewards of others -- in other words, where we can separate individual behaviour from aggregate behaviour -- in a strategic interaction what happens to you depends not just on what you do, but also on what others do. We can have co-operative interaction, such as the battle of the sexes, where you and your spouse have both forgotten where you said you were going to meet tonight, whether it's the concert or the theatre, but you'd like to be together and you don't want to be separated. Or it can be competitive. The archetypal model of competitive interaction is the Prisoner's Dilemma, but the environmentalists are also concerned about the Tragedy of the Commons, and economists have been interested in oligopolies.

Political scientist Bob Axelrod in the early 'eighties used computers to run exhaustive tournaments of the repeated Prisoner's Dilemma, tournaments where human beings submitted programs which were strategies in the repeated Prisoner's Dilemma. Remember, the Prisoner's Dilemma in its once-off version is a game in which you will jointly defect: the logic says defect, even though if you could somehow have reached a co-operative solution with the other guy you could both be better off. So the Nash solution of the once-off game is not the best solution and is not Pareto optimal.

Theory tells us that the repeated Prisoner's Dilemma can support a co-operative solution, but the question is: are there strategies which are robust and which do very well? Bob Axelrod was interested in looking at this; he was using the computer to do something which could have been done by hand but which was much faster with the computer. But he wasn't using the computer to generate strategies; rather, he was using it to test strategies which had been submitted by humans, and those of you who have read this work will know that one simple strategy -- Tit for Tat, which says start off co-operating and then just do what the other guy did in the previous round -- was pretty robust. You can't say it was the best, because the PD is a positive-sum game and therefore how well a strategy does depends upon the environment of competing strategies, but it was pretty good.
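As a minimal illustration of the kind of strategy being tested -- this is not Axelrod's tournament code, and the payoff matrix is chosen only for the example -- Tit for Tat can be written and played against itself and against Always Defect in a few lines:

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}   # illustrative payoffs only

def tit_for_tat(my_history, their_history):
    """Start by co-operating, then copy the other player's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs to each strategy over a repeated Prisoner's Dilemma."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual co-operation
print(play(tit_for_tat, always_defect))  # (99, 104): Tit for Tat loses only round one
```

Against itself it co-operates throughout; against Always Defect it loses only the first round -- the sort of robustness Axelrod observed.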

In 1986 the Marketing Department at the MIT Sloan School decided to see whether they could run a similar computer tournament, asking people around the country, and around the world, to submit strategies for a more complex game: first, a three-person game, and, second, one in which strategies didn't just choose between two actions -- co-operate or defect -- but had to set a price. If everyone priced high, then you might get a collusive solution in which you split the monopoly profits, but there is always a temptation to price low -- perhaps increasing your market share if the others are pricing high, but at the expense of unit profit, and risking a price war. So, a much more complex environment: three players and many actions. You know your profit; you know what the other two players have done over the past history of the game. The only instrument in that tournament was price. The question: what is your best strategy?

You know your opening price, you know everyone's prices up to the previous period, and you choose actions from a continuum. The question they were looking at: can Tit for Tat be generalised to a three-person game? Note that we should be focusing on profits rather than market share: although in the short run firms may of course be interested in market share, in the long run they shouldn't be, and if they are, they may find they are no longer separate entities.

There is some structure in this game beyond the fact that you price low or price high. You can talk about the non-co-operative Nash equilibrium price, the low price, the price which maximises the firm's one-shot profit regardless of others' prices. You can talk about the Pareto-optimal or collusive price, which maximises the total profits of all three if they collude implicitly (explicit collusion was deliberately excluded). Then there are two other prices which give structure to this space of pricing: there is the two-person coalition price, which maximises the one-shot profits of two implicitly colluding firms when the third firm doesn't co-operate, and finally the envious price, which maximises the difference between a firm's own profits and its competitors' total one-shot profits -- which means pricing very low, in fact. So this theory tells us there is some structure here. Well, what happened?
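A minimal sketch of these four benchmark prices, assuming a hypothetical symmetric linear demand with invented parameters (the actual tournament's profit function was different), just to show the ordering they impose on the price space:

```python
import numpy as np

# Hypothetical symmetric linear demand: q_i = a - b*p_i + c*(p_j + p_k), zero cost.
a, b, c = 10.0, 2.0, 0.5
grid = np.linspace(0.0, 10.0, 2001)            # candidate prices

def profit(p_own, p1, p2):
    """One-shot profit of a firm charging p_own against the other two prices."""
    return p_own * (a - b * p_own + c * (p1 + p2))

# Non-co-operative (Nash) price: symmetric fixed point of the best-response map.
p_nash = 1.0
for _ in range(50):
    p_nash = grid[np.argmax([profit(p, p_nash, p_nash) for p in grid])]

# Collusive (Pareto-optimal) price: common price maximising total profit of all three.
p_coll = grid[np.argmax([3 * profit(p, p, p) for p in grid])]

# Two-person coalition price: common price of two firms maximising their joint
# one-shot profit, taking the third firm's Nash price as given.
p_coal = grid[np.argmax([2 * profit(p, p, p_nash) for p in grid])]

# Envious price: maximises own profit minus the competitors' total one-shot profit
# (competitors held at the Nash price); it comes out very low.
p_envy = grid[np.argmax([profit(p, p_nash, p_nash)
                         - 2 * profit(p_nash, p, p_nash) for p in grid])]

print(p_envy, p_nash, p_coal, p_coll)
```

With these made-up numbers the ordering is envious < Nash < two-person coalition < collusive, which is the structure just described.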

Fader and Hauser ran two tournaments. In the first tournament those prices that I just described were, because of the profit function, not as interesting; in other words, there was a separability which they hadn't realised at first. In the second tournament there was no separability, and those prices that I have just shown you on the overhead were in fact functions of all three prices. I had heard about this over the Internet, and from Sydney I submitted some strategies written as computer algorithms, in which I used an average of the two-person coalition prices I mentioned, plus a strategy which was slightly more generous, and this strategy won.

I can't claim to have been completely aware of what I was doing -- there was an element of luck there -- but having seen the success of that particular strategy, the people in the Marketing Department at MIT were able to generalise my winning strategy and come up with a strategy which was in fact self-aware, and not just aware of the other two guys' prices, and which did even better. (Fader P.S., Hauser J.R. (1988) Implicit coalitions in a generalised Prisoner's Dilemma, Journal of Conflict Resolution, 32: 553-582.) This started my interest in trying to understand what it was I had done; people would ask me and I would tell them what I have just said now, but it wasn't a particularly good answer: there was an element of luck.

So I was interested when I heard about the use of machine learning in repeated strategic interactions. A bit of theory: we can talk about strategies in repeated games using the theory of finite automata. That is to say, we want to choose an action from a set of actions which may vary player by player, and the problem is strategic: others' behaviour matters, unlike, say, pure competition. We define the state at any time to include all of the previous moves, one's own and the others', and we assume that the players have complete information about that, so that we have a memory of actual moves and actions -- a summary statistic which is updated after each round. Then we can say in general terms that we can model a strategic automaton which is playing this repeated game as one in which the action of the machine is a function of the state it finds itself in. Remember that the state is a function of its own moves and the moves of all the others. We can define the extent to which it remembers; in other words, we can build it with a large memory or a small memory. Think of Tit for Tat: Tit for Tat only has to remember one round back. But I might say that the economist's intuition would be that there is a payoff to larger memory, that there is a payoff to complexity, although there may also be a cost -- a cost in terms of designing the machine to begin with, of using it to play the game, and of maintaining it -- so that there may in fact be diminishing returns to complexity in terms of the size of memory. The move of the machine at any time is given by its action function, and we also need an updating function.
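In code, a minimal sketch of such a machine might look like this -- one round of memory, an action function as a lookup table, and an updating function; the class and names are only an illustration:

```python
class StrategyAutomaton:
    """One strategy machine: a state (here one round of remembered moves),
    an action function (a lookup table from state to move), and an updating
    function which refreshes the state after every round."""

    def __init__(self, action_table, initial_state):
        self.action_table = action_table      # state -> action
        self.state = initial_state            # assumed moves before round one

    def act(self):
        return self.action_table[self.state]

    def update(self, others_move, own_move):
        # With one round of memory, the new state is just last round's moves.
        self.state = (others_move, own_move)

# Tit for Tat as such a machine: copy the other player's last move.
tit_for_tat = StrategyAutomaton(
    action_table={('C', 'C'): 'C', ('C', 'D'): 'C',
                  ('D', 'C'): 'D', ('D', 'D'): 'D'},   # state = (other, me)
    initial_state=('C', 'C'))                 # pretend both co-operated before round 1
```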

In the real world we need to define our states; we need to define the threshold of perception of the machine: is it going to have a different state for every penny per pound of the price, or is it going to take bands of prices and therefore simplify the amount of memory it needs? Having defined the state and solved those particular problems, we are then concerned with the search for the best action function, mapping state to action.
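For instance, a tiny sketch of that banding idea, with band edges invented purely for illustration:

```python
BAND_EDGES = [1.50, 2.00, 2.50, 3.00]      # dollars per pound; edges are invented

def price_band(price):
    """Map a continuous price into one of a handful of discrete bands (0..4),
    so the automaton's state space stays manageable."""
    return sum(price > edge for edge in BAND_EDGES)

print(price_band(1.99), price_band(2.49), price_band(3.10))   # 1 2 4
```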

There are different ways of modelling these automata, but the one I've been using follows some work that Bob Axelrod did in 1987 (Axelrod R. (1987) The evolution of strategies in the iterated Prisoner's Dilemma, in Genetic Algorithms and Simulated Annealing, L. Davis (ed.), London: Pitman, pp. 32-41): we limit the actual memory to 1, 2 or 3 rounds, and in general we get enough complexity just by looking one round back, as Tit for Tat does. The state is the past actions of all players; the number of states is the number of past possibilities, and we keep that manageable by reducing the memory that the machine has -- the number of rounds it looks back -- as you can see from this formula. In a simple two-person Prisoner's Dilemma with one-round memory there are indeed four states, depending upon the four possibilities of the previous round: we both co-operated, we both defected, I defected and you co-operated, I co-operated and you defected. And so the action function then becomes a mapping from state to action.

Here are three examples of simple mappings. The string 0011 maps co-operation on the part of the other player in the last round to co-operation for me in the next round, and it maps defection on the part of the other player in the last round to defection for me; indeed that string, which really just looks at what the other guy did and ignores what I did, is Tit for Tat, although we also need an initial state for it. A string which is all ones, where 1 maps to defect, is simply Always Defect. Or we could have a situation where if we both co-operated then I co-operate, but otherwise next round I defect. In fact there are 16 strings, 16 possibilities; I just give you these to give you a simple understanding of how we might map these action functions from past state to action in the next round.
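A minimal sketch of that coding (the ordering of the four states is my own choice; the talk doesn't fix one) shows how a 4-bit string becomes an action function, and that there are just sixteen of them:

```python
from itertools import product

# State = (other player's move last round, my move last round).
STATES = [('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]

def decode(bits):
    """Turn a 4-bit string such as '0011' into an action function (state -> move),
    with 0 meaning co-operate and 1 meaning defect."""
    table = {state: ('C' if b == '0' else 'D') for state, b in zip(STATES, bits)}
    return lambda state: table[state]

tit_for_tat   = decode('0011')   # copy whatever the other player did last round
always_defect = decode('1111')   # defect whatever happened

# All sixteen one-round-memory strategies for the two-person Prisoner's Dilemma:
all_strings = [''.join(bits) for bits in product('01', repeat=4)]
print(len(all_strings))          # 16
print(tit_for_tat(('D', 'C')))   # the other player defected last round, so: 'D'
```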

How might we use this biological analogy from evolution in our models? Genetic algorithms provide a robust and effective means to determine effective strategies. They use parallelism; they have a population of strings, a population of solutions to the mapping from state to action; and they use principles of natural selection and recombinant genetics to generate and to test possible strings.

This is done by coding a function which maps from previous states of a game to future actions as a bit-string. I gave you three examples of 4-bit strings before. We can think of these as chromosomes, or genotypes, which are the underlying rules of the players in this repeated strategic interaction, and then we observe their behaviour, which occurs given the history and therefore the states of the game. The strings are then scored in terms of their success in the repeated game: play a series of repetitions of the game, then come up with a score from the payoff matrix or the profit function. And those strings which do best become the parents of the new generation: we breed them with each other, and selection occurs through scoring in the repeated interaction.

To be a bit more precise: we choose an initial population of strings, of solutions to this mapping problem from past state to action in the repeated game; given any population, we test each individual string by playing the repeated game; then we choose the best strings for mating, using the biological analogy; then we pair them, using crossover and mutation, and this results in the next generation. Genetic algorithms exhibit explicit parallelism, and one of the best things that they can do is to optimise without assumptions of continuity, smoothness, or differentiability. So these mappings that we have are not in any way amenable to description through closed-form solutions.
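Putting those steps together, here is a minimal, self-contained sketch of the loop -- a population of 4-bit, one-round-memory Prisoner's Dilemma strategies, scored by round-robin play of the repeated game, with the best strings selected as parents and recombined by crossover and mutation; the population size, number of rounds, and mutation rate are illustrative choices only:

```python
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
STATES = [('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]   # (other's move, my move)

def play(bits_a, bits_b, rounds=50):
    """Total payoffs when two 4-bit strategy strings play the repeated game."""
    table_a = {s: 'CD'[int(x)] for s, x in zip(STATES, bits_a)}
    table_b = {s: 'CD'[int(x)] for s, x in zip(STATES, bits_b)}
    state_a = state_b = ('C', 'C')                  # assumed co-operative start
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = table_a[state_a], table_b[state_b]
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        state_a, state_b = (move_b, move_a), (move_a, move_b)
    return score_a, score_b

def fitness(bits, population):
    """Score a string by round-robin play against every string in the population."""
    return sum(play(bits, other)[0] for other in population)

def breed(parent1, parent2, mutation=0.02):
    """One-point crossover of two parent strings, with occasional bit mutation."""
    cut = random.randint(1, 3)
    child = parent1[:cut] + parent2[cut:]
    return ''.join(x if random.random() > mutation else random.choice('01')
                   for x in child)

population = [''.join(random.choice('01') for _ in range(4)) for _ in range(20)]
for generation in range(50):
    ranked = sorted(population, key=lambda s: fitness(s, population), reverse=True)
    parents = ranked[:10]                           # selection: the fittest half mate
    population = [breed(random.choice(parents), random.choice(parents))
                  for _ in range(20)]
print(sorted(population))                           # the surviving strategy strings
```

Note that fitness here is relative to the current population, so the environment each string faces is itself evolving as the experiment continues.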

In fact, going back to my earlier comments, we have a population of artificially intelligent adaptive economic agents which play the repeated game, and we always know the genotype -- we always know the structure of these agents -- and we observe their behaviour at each round and the rewards that they get through the payoffs of the repeated game. And this is indeed the advantage of computer experiments: that we can always observe the underlying structure and the underlying decision rules, as well as the behaviour. And of course, as I said, we can seed: we can start off with two populations, or with many populations; we can have some artificially intelligent agents which follow those strings that I was talking about before, or we could include in our population some agents which use rules of thumb that we have decided to put in, to see how well these do against our artificially intelligent population. So we can include, say, exponential-smoothing rules or some sort of learning rules, as well as those individual strings.

Emergent behaviour occurs in situations apart from repeated strategic interactions. One line of work has led to a very recently published volume on the double auction market, which is in fact related in some way to what Shmuel mentioned yesterday. (The Double Auction Market: Institutions, Theories, and Evidence, edited by Daniel Friedman and John Rust, Addison-Wesley, 1993.) Double auction markets -- stock markets and various other sorts of commodity exchanges -- are economic institutions which have evolved over the past several hundred years; they work well, but in a lot of cases we don't understand quite why it is that they work well. If we change the rules in stock markets, how will this affect aggregate phenomena such as price and volume? How will it affect volatility? These are questions that are very hard to answer using existing theory, but which are important, and this volume of papers uses, in some cases theory and in other cases computer experiments such as I've been describing, to try to understand more about why this particular economic institution has evolved as it has and how sensitive the emergent behaviour of these markets is to the sorts of rules under which they operate.

Very briefly, let me talk through some work that I have been doing with some marketing colleagues on coffee price wars, which applies this repeated strategic interaction that I talked about a moment ago to a particular market where we have data. It's a mature market, so we hope that the underlying characteristics of the market are not changing drastically from one week to the next or over a period of weeks. We are looking at 52-week strings of data, a situation where we have observations of price and quantity and where one of the three of us (Cooper at UCLA) has derived a market model which, from the prices of all the players and the underlying demand structure, predicts the payoffs -- the profits which the firms get each week. We are looking at US data on a market where there are nine players, names which are familiar to you from your supermarket shelves, where we have data aggregated by supermarket chain, and where the firms week by week can choose price and three other marketing instruments: newspaper advertising features, in-store displays, and the use of coupons. It's more complex than the simple repeated Prisoner's Dilemma: we have nine players, although, as I'll argue, not all of them are strategic, and we focus on the top three as the strategic players; they have not only a continuous price but also some other instruments which they can operate. We have data on prices, on the marketing instruments, and on sales over 52 weeks, and, as I said, we have a market model. Well, what does this market look like?

These are prices in dollars per pound over a 52-week period for one chain for three of the players: Folgers, Maxwell House, and Chock Full o'Nuts. We see there is a general pattern: they keep their prices at a high level, the so-called shelf price, and every so often -- there seems to be some sort of implicit co-ordination here, except maybe around this period -- there is discounting; in fact the other chains look similar to this, although not identical. We can also look here at the effect of this discounting on the sales of these three brands, and, although I can't easily overlay them, these spikes in the pounds sold -- and we are up to 3,000 pounds per week across this chain -- correspond, as you might expect, to discounting: low prices plus in-store displays, coupons, and advertising features. Indeed, we see that brand loyalty is not very strong: the proportion of sales which they make when they are at their high shelf prices is very low. So, if you haven't been taking advantage of those coupons, a lot of other people have, as you can see from this.

If we extend the data to the full set of nine -- I have now plotted those focal three strategic players in solid lines; the dotted prices are the other six players -- we see that the other six are not particularly strategic: there is some discounting, but it is not very strong. So we have six reasonably non-strategic players and three strategic players. Typical behaviour of some brands is to cut the price, together with newspaper advertising features and in-store displays, after a period of high prices. It also turns out that the bulk of the profit -- up to 80% of the profits -- is earned not when prices are high and volumes are low, but when prices are low and volumes are high. In other words, they are engaging in discounting but not in cut-throat competition: over time discount prices have evolved so that they sell a lot and their unit profits are still reasonable. Their total profits are pretty healthy; the situation is similar to an N-person Prisoner's Dilemma.

I won't go through the sorts of experiments that we have run and are continuing to run with these data. I show them as an example of something where we have in fact been able to evolve artificially intelligent adaptive economic agents which exhibit behaviour very similar to the price patterns and, given the demand, the volume patterns that I showed in that slide. We are continuing to see whether we can improve on the behaviour of, say, one of those strategic players by playing artificially intelligent agents against the history of behaviour in the oligopoly, and we are continuing with that work.

To finish up, what can I say about this line of work and the use of computers to do economics in general, and about EES? Well, I can say several things. First of all, the simulation techniques -- and the genetic algorithm is only one of these; there are others with biological analogies of various sorts -- are not particularly well understood. We can look at the outcomes and we can benchmark the outcomes against actual behaviour using historical data, and, as I say, in those cases where we have them we can verify the results that these sorts of procedures, these computer experiments, come up with against what mathematical economics tells us should happen. What we need are better mathematical descriptions of these complex adaptive systems: mathematics which is involved with combinatorial techniques, which is involved with algorithmic techniques. And we continue to borrow from biology, although if you read your Scientific American every month you see that it's a moving target too: the latest issue is talking about organisms swapping DNA, parts of DNA, in ways which were once denied and which seem positively Lamarckian to people who were brought up with biology in the 'sixties.

Well, to finish up, let me say that a couple of groups of computer scientists have modelled genetic algorithms as Markov processes, so it may be, Ron, that the frogs on the lily pads have life in them yet for describing some of these techniques which are used in computer experiments in doing economics on computers.

March 25, 1993.


bobm@agsm.edu.au
Last altered on February 29, 2000.