Volume 27 Number 1 June 2002

Editor - Robert Marks



Simulating Economics


After a lacuna of several years (a consequence of the death of my wife) I recently returned to active simulation of the interactions among players in an oligopoly, work which dates back to 1988, in order to prepare a new paper with my co-authors, David Midgley and Lee Cooper. My erstwhile research assistant had been compiling and executing the C code on an IBM Unix workstation bought for the purpose six years ago. Rather rusty about both coding and compiling, I first followed in his footsteps, but thought I'd see how my desktop Mac, running the new Unix-based OS X, would go. No trouble compiling, and execution went well too. After I'd optimised the program, I was surprised to find that a simulation that had taken six-and-a-half hours on the AIX workstation was taking 65 minutes on the Mac desktop, itself about two years old.

So machines are getting faster. Moore's law (popularly rendered as computers doubling in speed every 18 months) has held for many years now. (Indeed, a six-fold speed-up is roughly what doubling every 18 months over the four-year gap between the two machines' vintages would predict: 2^(4/1.5) is about 6.3.) I remark on this because the use of computers continues to grow in the social sciences, partly spurred by this exponential growth in speed (I can hardly wait to take delivery of a new Mac desktop, perhaps another six times faster than my existing machine, in September) and partly as a result of the increasingly imaginative ways that researchers are using them.

Although the earlier availability of statistical packages led to the doubtful observation that too many researchers had, with the push of a button, as it were, become badly trained statisticians, I don't believe that the increased use of simulation is necessarily a bad thing. For instance, simulation is still a roll-your-own world - no plugging data into an existing package with little thought of the underlying relationships among the phenomena being explored. Instead, the relationships must be modelled explicitly, and the very act of having to describe the interrelationships will clarify understanding. Ken Judd's 1998 book has provided a touchstone for future simulation, at least in economics, and the quality of work discussed at the recent Lake Arrowhead U.C.L.A. conference suggests that, in political science, sociology, and other social science disciplines, similar guides to simulation methodology will also appear.

Not that simulating is always on the side of the angels - it is possible to fall back on simulation when closed-form analysis, although tractable in principle, is judged too demanding. And simulation can provide only sufficiency, not necessity, which may not be as valuable. And the problems of getting the simulation up and running, generating numbers, and displaying the output in an intelligible form can distract the simulator from adequately considering just how the phenomenon to be explored could best be simulated, or, indeed, whether it is the pertinent phenomenon, or relationship, in the first place. But simulating means that metaphor yields to explicit modelling, which in management thought and writing can be a positive boon.

The idea of computation as more than number-crunching has been growing recently. Since Alan Turing's eponymous test for progress in simulating human behaviour with computers was published (Turing 1950), there have been several attempts to model behaviour as the outcome of computations. The most recent contribution is Stephen Wolfram's A New Kind of Science, just published in the U.S., which argues, apparently, that all phenomena can be explained as the outcome of some computation. Since I have not yet seen a copy of the book, I shall reserve judgement, although early reviews at amazon.com are not encouraging.


In the past ten years or so, agent-based simulations have become popular in the social sciences. By 'agent-based' I mean simulation from the bottom up, in which algorithms, 'agents', are given simple rules of interaction, from which macro phenomena emerge. A computer is useful but not necessary - an early example of agent-based emergence is Adam Smith's 'invisible hand', in which the primarily selfish behaviour of firms selling goods and services to households, and of households renting or selling or lending land, labour, and capital to firms, results in an outcome which is efficient and, as we now know, also embodies a dynamic efficiency, in which the pursuit of profits results in continual invention and reduction in costs.
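To make 'bottom up' concrete, here is a minimal sketch in C (the language of the simulations mentioned above, though the model itself is not drawn from that work): agents on a ring each follow one simple local rule, and clustering emerges at the macro level. The rule and all parameters are illustrative assumptions only.

/* A minimal sketch of agent-based emergence: agents of two types sit on a
 * ring; an agent is 'content' if at least one of its two neighbours shares
 * its type; two discontented agents of opposite types swap places
 * (equivalently, swap types).  No agent seeks clustering, yet clusters of
 * like neighbours tend to emerge at the macro level. */
#include <stdio.h>
#include <stdlib.h>

#define N     60      /* number of agents on the ring   */
#define STEPS 2000    /* number of pairwise swaps tried */

static int type[N];

static int content(int i)
{
    int left  = type[(i + N - 1) % N];
    int right = type[(i + 1) % N];
    return (left == type[i]) || (right == type[i]);
}

static void show(const char *label)
{
    int i;
    printf("%s: ", label);
    for (i = 0; i < N; i++)
        putchar(type[i] ? '#' : '.');
    putchar('\n');
}

int main(void)
{
    int i, s;
    srand(1);                          /* fixed seed: reproducible run    */
    for (i = 0; i < N; i++)            /* random initial mix of two types */
        type[i] = rand() % 2;
    show("before");

    for (s = 0; s < STEPS; s++) {
        int a = rand() % N, b = rand() % N;
        if (type[a] != type[b] && !content(a) && !content(b)) {
            int t = type[a]; type[a] = type[b]; type[b] = t;   /* swap */
        }
    }
    show("after ");                    /* local clusters have emerged */
    return 0;
}

Comparing the 'before' and 'after' lines shows the macro pattern that no individual rule prescribes - the essence of agent-based emergence.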



Fourteen years ago, in testimony to the power of the nascent Internet to demolish the tyranny of distance, having read about a new technique of simulated evolution which had been used to examine the Repeated Prisoner's Dilemma, I was able to find (from the Usenet news), download (using ftp, from the U.S. Naval Research Laboratories), and compile (the source was in C) a freeware version of the Genetic Algorithm in twenty minutes. I went on to become the first economist to present a paper publicly using the GA, at the Econometric Society Congress in Canberra in August 1988. (See Marks 2002 for more information.) At the recent Lake Arrowhead U.C.L.A. Conference, I was noted as one of the two pioneers in economics using these techniques - the other, John Miller, was at the University of Michigan, the home of John Holland, originator of the GA.
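For readers who have not met the technique, the sketch below gives the bare logic of a Genetic Algorithm - a population of bit-strings improved by selection, crossover, and mutation. It is written in C for continuity with the text, but it is not the freeware code referred to above; the toy fitness function (counting 1-bits) and all parameters are illustrative assumptions. In the oligopoly work, a bit-string would instead encode a pricing strategy and fitness would be simulated profit.

/* A minimal Genetic Algorithm loop: tournament selection, one-point
 * crossover, per-bit mutation.  The fitness function is the toy 'OneMax'
 * (count the 1-bits), chosen only so the sketch is self-contained. */
#include <stdio.h>
#include <stdlib.h>

#define POP  40       /* population size        */
#define LEN  32       /* bits per chromosome    */
#define GENS 60       /* generations to run     */
#define PMUT 0.01     /* per-bit mutation rate  */

static int pop[POP][LEN], nextgen[POP][LEN];

static int fitness(const int *c)      /* count the 1-bits */
{
    int i, f = 0;
    for (i = 0; i < LEN; i++) f += c[i];
    return f;
}

static int tournament(void)           /* pick the fitter of two random parents */
{
    int a = rand() % POP, b = rand() % POP;
    return fitness(pop[a]) >= fitness(pop[b]) ? a : b;
}

int main(void)
{
    int g, i, j;
    srand(1988);                      /* fixed seed for a reproducible run */
    for (i = 0; i < POP; i++)         /* random initial population         */
        for (j = 0; j < LEN; j++)
            pop[i][j] = rand() % 2;

    for (g = 0; g < GENS; g++) {
        int best = 0;
        for (i = 0; i < POP; i++)
            if (fitness(pop[i]) > best) best = fitness(pop[i]);
        printf("generation %2d: best fitness %d/%d\n", g, best, LEN);

        for (i = 0; i < POP; i++) {   /* breed the next generation */
            int p1 = tournament(), p2 = tournament();
            int cut = rand() % LEN;   /* one-point crossover */
            for (j = 0; j < LEN; j++) {
                nextgen[i][j] = (j < cut) ? pop[p1][j] : pop[p2][j];
                if ((double)rand() / RAND_MAX < PMUT)     /* mutation */
                    nextgen[i][j] = 1 - nextgen[i][j];
            }
        }
        for (i = 0; i < POP; i++)
            for (j = 0; j < LEN; j++)
                pop[i][j] = nextgen[i][j];
    }
    return 0;
}

Run as-is, the best fitness climbs towards 32 within a few dozen generations; the interesting economics lies entirely in what the bit-string encodes and how fitness is evaluated.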

Looking back, I realise it must have been a fortuitous convergence for me: thanks to Philip Brown and Ian Johnstone, the AGSM had been running Unix machines since 1976; thanks to Bob Wood, I read of Bob Axelrod's work with GAs in examining the Repeated Prisoner's Dilemma before it was published (and Axelrod was also at Michigan); thanks to my innate curiosity, I had been reading and contributing to the Usenet news groups on the Internet since 1986. Sydney was, finally, not so far from Ann Arbor.



This Issue's Papers

Why do firms decide to use financial derivatives? Having decided to do so, to what extent do they do so? Financial derivative instruments, including options, swaps, futures, and forwards (Klein & Lederman 1996), are increasingly used by firms to reduce their risk exposure to fluctuations in such prices as interest rates, foreign-exchange rates, and commodity prices. The total notional value of outstanding derivatives contracts is estimated to have grown by over three-and-a-half times to US$70,000,000,000,000 (that's $70 trillion) in the four years to 1998.

Whereas we have known for over 40 years that, so long as markets are perfect, the use of derivatives cannot add value, markets of course are not perfect - taxes, bankruptcy costs, agency costs, and so on mean that hedging using derivatives can be value-enhancing.



Nguyen and Faff point out that managers may influence the firm's use of derivatives - a principal-agent relationship in which the managers, as agents, have a large and non-diversifiable stake in the firm and, moreover, make the day-to-day corporate decisions. They report some evidence of a relationship between firms' hedging and the amount of stock and options held by their managers.

Using the Notes from the financial reports of the 500 largest Australian companies listed on the Australian Stock Exchange (ASX) for the financial years 1999 and 2000, Nguyen and Faff determine the proportion of firms in 23 industries that use derivatives: swaps, futures/forwards, and options. They find that all firms in six of these 23 industries used derivatives in this period; the lowest rate of use was in Telecommunications.

They conclude that leverage (proxying the role of financial distress costs), firm size, and liquidity are the most important factors underlying a firm's decision to use derivatives, and that leverage is the most important determinant of the extent to which a firm uses them. They also find evidence of managerial influence behind the derivatives decision. This matters because hedging is costly, and so solely managerially driven hedging would not be in the shareholders' interests. They conclude, however, that this potential principal-agent problem is apparently eliminated by competition in the market for managers.

Professionals think of their jargon in much the same way that a fish might think of water, that is, not much at all. Their professional jargon is usually seen as an aid to communication, not a hindrance, and this must be true when the communication is among professionals in a discipline. As professors of disparate disciplines in management schools, we are probably more aware than other professionals that our jargon may, on the other hand, obscure our discipline from others' understanding or even mislead them - if there are common English words which are also used technically in the profession. An example might be profit, which to an economist does not include a return to the owners of capital, but to an accountant includes this return.

Just how effective is the use of technical terms (jargon) in instilling a perception of expertise and trustworthiness in the hearts and minds of customers of financial advisors? It might be thought that a grand display of professional expertise - or at least a display of professional jargon - would be persuasive in influencing lay people to become customers. Or is it a turn-off?



In an interesting paper, Joiner, Leveson and Langfield-Smith report an experimental study in which subjects viewed one of two videotapes of a presentation on personal financial planning - one using technical language, the other using non-technical language.

Their results, using Australian undergraduates as subjects, show that the use of technical financial language in the provision of financial planning advice reduces the perceived understandability of the advice. The lower the perceived understandability of the advice, the lower the clients' perceptions of two attributes: the planners' expertise and trustworthiness. And the lower the clients' perception of the planners' trustworthiness, the lower the clients' intentions to seek the planners' advice - a relationship found to be much stronger than that between perceived expertise and intention to seek advice. Moral: don't attempt to dazzle potential clients with flash language, not if you're a financial planning advisor.

If the editor might be permitted an anecdotal observation, I wonder whether there mightn't be cultural differences between countries in the response of the subjects. In my experience, there are apparently significant differences between Australians and Americans in their responses to displays of technical expertise: in Australia one must earn respect, while in America it appears that respect might more easily be ceded to someone who can walk the walk and talk the talk. This observation is relevant to Giana Eckhardt's review, in this issue, of the second edition of Geert Hofstede's influential book, Culture's Consequences.

'Don't put all your eggs in one basket' could be translated into 'don't marry a fellow employee', since if the company folds the household has lost both incomes at once. Diversification in share markets has led to the growth of share-market indices, so that investors can easily diversify by owning index-linked shares, and so that there is a well-defined measure of performance across the market. These indices can be closed-end, with a fixed number of firms, or open-ended, where the number of firms is not fixed and a firm's inclusion is automatic, provided it meets certain criteria, such as capitalisation and liquidity.

Chan and Howard report that stocks added to the closed-end S&P 500 Index in the U.S. experience abnormal returns that average about plus 3% when the inclusion is announced, and about minus 1.5% when exclusion is announced, and that in the longer term these impacts can be as high as plus 7% and minus 14%. To what extent has inclusion in or exclusion from the open-ended Australian All Ordinaries Share Price Index (AOI) been accompanied by abnormal returns? This is not simply an academic question: the authors report that litigation against the ASX was contemplated by several companies which had been excluded during 1998, when there was a change in the rules determining the composition of the AOI, and whose share prices had fallen.



Could astute investors have earned abnormal returns by following these inclusions and exclusions? Chan and Howard find that significant positive (negative) abnormal returns occurred immediately before inclusion in (exclusion from) the AOI, which is consistent with evidence from closed-end indices. Moreover, changes to the AOI are associated with extended periods (up to 60 days after) of elevated trading activity, perhaps because changes to the AOI can be predicted successfully. Even after controlling for the selection biases of inclusion and exclusion, the authors find significant positive returns over the sixteen-week period before inclusion. An arbitrage opportunity?
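For readers unfamiliar with the event-study arithmetic behind 'abnormal returns', the sketch below (in C, to match the earlier examples) shows one simple market-adjusted version: the stock's return less the index's return each day, cumulated over the event window. The daily figures are invented purely for illustration and are not Chan and Howard's data.

/* A minimal sketch of abnormal-return arithmetic under a market-adjusted
 * model: AR_t = stock return_t - index return_t, cumulated into a CAR
 * over the event window.  Returns (in %) are hypothetical. */
#include <stdio.h>

int main(void)
{
    /* invented daily returns for the five days around an index inclusion */
    double stock[] = { 0.4, 1.1, 2.0, 0.6, -0.2 };
    double index[] = { 0.3, 0.2, 0.1, 0.4,  0.1 };
    double car = 0.0;
    int t;

    for (t = 0; t < 5; t++) {
        double ar = stock[t] - index[t];   /* market-adjusted abnormal return */
        car += ar;                         /* cumulative abnormal return      */
        printf("day %+d: AR = %5.2f%%  CAR = %5.2f%%\n", t - 2, ar, car);
    }
    return 0;
}

Studies such as Chan and Howard's use more careful benchmarks (and control for selection bias), but the logic of measuring returns relative to an expected benchmark is the same.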

Many countries have moved from a situation where there was a single telephone company, mostly government-owned, to an industry where the government telco has been privatised and there are many competing telcos. Nevertheless, it is often the case, as in Australia, that, perhaps because of mistakes made by earlier governments (such as not separating the switched telephone network, especially the 'last mile' loop between the exchange and the subscriber's phone, from the rest of the company), there is a dominant player in the industry and a (more or less) competitive fringe, with a degree of regulation.

One particular bone of contention between the telcos and the regulator has often been interconnection between telcos, and especially termination charges. These cover the costs borne by the telco whose subscriber receives the call. Under the standard caller-pays principle of charging, the caller is charged for both origination and termination services, so the terminating telco must apply to the originating telco for a fee to cover its costs. At what price? Determination of this price, a marginal cost to the originating telco, will affect the price charged to the caller.
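As a toy illustration of the caller-pays arithmetic (the cents-per-minute figures below are invented, not from the paper): the originating telco's retail price to the caller must recover its own origination cost, the termination fee it pays the terminating telco, and a margin, so any change in the regulated termination fee flows directly into the caller's price.

/* A toy decomposition of a retail call price under caller-pays charging.
 * All cents-per-minute figures are hypothetical, for illustration only. */
#include <stdio.h>

int main(void)
{
    double origination_cost = 8.0;   /* originating telco's own cost, c/min */
    double termination_fee  = 6.0;   /* fee paid to the terminating telco   */
    double margin           = 4.0;   /* originating telco's mark-up         */

    double call_price = origination_cost + termination_fee + margin;
    printf("retail call price: %.1f c/min (of which %.1f c/min is termination)\n",
           call_price, termination_fee);
    return 0;
}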

With a single telco, such as AT&T before its court-ordered break-up, or Telstra before Optus, these costs (except for international calls) are internal to the company. Allowing competition for calls inevitably results in such termination charges becoming contentious: in the absence of regulation, such interconnection charges might be used by an incumbent telco either to prevent rival networks from becoming effective competitors or to facilitate collusion. Enter the regulator.

Should the regulator target the dominant telco's charges, biasing costs and business towards new entrants? If the termination charges of new entrants are reduced through regulation, then fewer may enter.



Gans and King argue that regulating only the dominant telco's charges may well reduce call prices, but that the unregulated termination charges of other telcos' networks will rise somewhat. They argue, however, that extending such regulation to non-dominant telcos' networks will have an equivocal effect on call prices. They conclude that as the market share of a non-dominant telco's network grows, so does the desirability (from a public-policy, efficiency perspective) of regulating its termination charges: there will be upwards pressure on its call price, but the regulation will put downwards pressure on the dominant telco's call prices. On average, call prices may fall. The authors discuss extending their analysis to the case where customers are better informed about the identity of the carrier that terminates a specific call - at the moment, a mobile number might, to a diligent customer, provide this information, but number portability will eliminate it. Without such information, competition between telcos will be less effective in producing an efficient, minimum-cost outcome.



Housekeeping

The E. Yetton Awards are made to the best paper and its runner-up in the previous volume of the Journal, as voted for by the Area Editors of the Journal. For Volume 26, 2001, the winning paper is 'Underpricing of Privatised IPOs: The Australian Experience,' by Ning Gong and Chander Shekhar. The runner-up is 'The Effects of Rater Sex and Ratee Sex on Managerial Performance Evaluation,' by Janne Chung. A list of previous winners can be seen at the Journal web site.

It was only last December that I welcomed Chris Kirby as the new Finance Area Editor. Chris is returning to UT Dallas soon, and will also be standing down as Area Editor when he leaves the AGSM. Incidentally, following from the editorial above, Chris has used simulations in his finance research (see Fleming, Kirby & Ostdiek 2000). Thank you, Chris, and farewell. Associate General Editor Garry Twite and Doug Foster of the AGSM will jointly take over the Area Editorship in Finance. Welcome, Doug. Note that Chris and Garry are the joint editors of the forthcoming special issue on Mergers and Acquisitions.

References

Chung, J. 2001, 'The effects of rater sex and ratee sex on managerial performance evaluation', Australian Journal of Management, vol. 26, no. 2, pp. 147-162.

Fleming, J., Kirby, C. & Ostdiek, B. 2000, 'Does volatility timing matter?', in Computational Finance 1999, eds Abu-Mostafa, Y.S., LeBaron, B., Lo, A.W. & Weigend, A.S., MIT Press, Cambridge.

Gong, N. & Shekhar, C. 2001, 'Underpricing of privatised IPOs: The Australian experience', Australian Journal of Management, vol. 26, no. 2, pp. 91-106.

Hofstede, G. 2001, Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations Across Nations, 2nd ed., Sage, Thousand Oaks, California.

Judd, K. 1998, Numerical Methods in Economics, MIT Press, Cambridge.

Klein, R.A. & Lederman, J. 1996, Derivatives Risk and Responsibility: The Complete Guide to Effective Derivatives Management and Decision Making, Irwin, Chicago.

Marks, R.E. 2002, 'Playing games with Genetic Algorithms', in Evolutionary Computation in Economics and Finance, ed. Shu-Heng Chen, Springer, New York.

Turing, A.M. 1950, 'Computing machinery and intelligence', Mind, vol. 59, no. 236, pp. 433-460.

U.C.L.A. First Lake Arrowhead Conference on Agent-Based Modelling in the Social Sciences www.ucla.edu/lake-arrowhead-2002

Wolfram, S. 2002, A New Kind of Science, Wolfram Media, Champaign, Ill. www.wolframscience.com


