Volume 28 Number 1 June 2003

Editor - Robert Marks



Models Rule


As a simulator for fifteen years, I have tried in the past to anticipate criticisms of my chosen methodology, as well, of course, as marshalling the advantages of simulation. One of the most telling criticisms is that simulations can only determine sufficiency: if you set these parameters so, then you'll observe the following outcome. With a closed-form mathematical characterisation, it is possible, at least in principle, to determine necessity as well as sufficiency: to observe a specific outcome requires (necessitates) one of the following combinations of parameters - no other combinations will do. Such a conclusion is not generally available to the simulator, who can demonstrate that an outcome exists, but not that it is unattainable unless the parameters fall into particular sets of combinations (which may be null - the desired outcome is then unattainable).

As a result, simulation is accepted, but hardly acclaimed - even though, where the mathematical formulation is intractable, it is the only way to get any results at all, even if these results are merely sufficient conditions, a subset of the underivable necessary conditions.

Recently I was reading about the discovery of the structure of DNA by Watson and Crick fifty years ago (Richards 2003). I hadn't properly registered the fact that, following Linus Pauling, they were building physical models of the mysterious molecule. Pauling had rushed into publication with his own model, a three-chain helix (Pauling & Corey 1953). This model, however, contained an elementary error: chemically it could not be an acid - remember, deoxyribonucleic acid!

Crick and Watson had already been tinkering with models cobbled together out of sheet metal plates and brass rods, all propped up by retort stands and clamps. That is, they were simulating the molecule's structure, given whatever information was available: the chemical composition of DNA, the relative sizes and charges of the atoms, the chemical properties, and the potential biological properties of the molecule. With X-ray photos from Rosalind Franklin at King's College, they redoubled their efforts at cracking the structure.

On 28 February 1953, regulars at one of my old Cambridge haunts, The Eagle, a small pub at the end of a cobbled courtyard off Bene't Street, became the first to learn the news that the secret of the procreation of life had been cracked. Using simulation! Models rule!


Of course, as Pauling learnt to his embarrassment, these were models of the unknown structure with few degrees of freedom: physics, chemistry, and biology each imposed restrictions on the arrangement of the atoms and sub-molecules in the DNA structure. Pauling's triple helix had earlier been considered and then abandoned by Crick and Watson, after advice from Franklin at King's.



Model-building ('stereo-chemical arguments' in Watson & Crick's 1953 phrase) could not clinch the structure until greater congruence between the model and the observed structure of the actual molecule was shown to exist, as the future Nobel laureates emphasised in their 1953 paper. And any negative results would mean returning to the drawing board, or in this case the brass rods and sheet metal.

Modelling the geometry of molecules has taken on an added immediacy with the emergence of transmissible spongiform encephalopathies, such as Mad Cow Disease and Creutzfeldt-Jakob disease (CJD) in humans. 'For proteins to function properly, they need to fold into the correct shape, and protein misfolding is thought to underlie many diseases, including Alzheimer's disease, Parkinson's disease, new-variant CJD, and type II diabetes' (Horizon Symposium 2002). Understanding how proteins fold - and it was Pauling who solved the first simple protein structures in the late 1940s - has been obtained from computer models (in silico) and from laboratory experiments (in vitro), but in the living cell (in vivo) the situation is fundamentally more complex, and computer simulations cannot yet solve the 'great mystery' of the correct protein structure (Pietzsch 2002).

In general, simulation to explore the emergent behaviour of systems can be seen as a process of induction - inferring general principles from the observation of many particular instances - as opposed to the process of deduction - deriving particular properties from more general principles, or asking what conditions are necessary to obtain particular properties of a system. (Incidentally, pedagogic research suggests that moving from the specific case to the general principle is a much more effective teaching method than the opposite: stating a general rule and then applying it to a specific instance.)



Induction was first claimed as a means of scientific advance by Francis Bacon, in his Novum Organum in 1620. He wrote of four pitfalls (or 'idols of the mind') in its use: first, the tendency we have to distort our perception: seeing order when there is none (or vice versa, such as expecting a succession of randomly chosen numbers to have no perceivable patterns), or being overly influenced by outlying datapoints (Salsburg 2002). Second, personal idiosyncrasies in perception. Third, errors from the use of language, where, for instance, the language's syntax imposes a structure on the speaker's thoughts and perceptions. Fourth, the effects of philosophical dogma or ideologies, which can blind their adherents to new, possibly disconfirming, observations.

To work, induction must be properly applied: generalisations, Bacon argued, had to be grounded in relevant observation. One negative or false instance will always undermine a host of positives: the particular is stronger than the general.

For the simulator, with many degrees of freedom, this focus underlines the importance of the kind of sensitivity analysis known as Monte Carlo (after the casino in Monaco), whereby the simulation is performed many times with different parameters. As Judd (1998) discusses at length in his Chapters 8 and 9, we cannot obtain truly random samples to initiate our simulations with, but at best pseudo-random numbers. Instead, Judd suggests that so-called quasi-Monte Carlo methods (that do not rely on probabilistic ideas and pseudo-random sequences for constructing an initial sample and analysing the outcome) might be used and, suitably constructed, even outperform true Monte Carlo methods.
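The distinction can be illustrated with a toy integration problem (a sketch only: the base-2 van der Corput sequence below is one of the simplest quasi-Monte Carlo constructions, and is chosen here for brevity, not necessarily the method Judd recommends):

```python
import random

def van_der_corput(n, base=2):
    """First n terms of the base-b van der Corput low-discrepancy sequence."""
    seq = []
    for i in range(1, n + 1):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, digit = divmod(i, base)
            x += digit / denom          # radical-inverse of i in the given base
        seq.append(x)
    return seq

def estimate(f, points):
    """Average of f over the sample points: a crude integral over [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                     # true integral over [0, 1] is 1/3
n = 2000
random.seed(1)
mc  = estimate(f, [random.random() for _ in range(n)])  # pseudo-random (Monte Carlo)
qmc = estimate(f, van_der_corput(n))                    # deterministic (quasi-Monte Carlo)
print(abs(mc - 1/3), abs(qmc - 1/3))    # the quasi-Monte Carlo error is typically far smaller
```

The quasi-Monte Carlo points are spread evenly by construction rather than by chance, which is why, suitably constructed, such sequences can outperform pseudo-random sampling.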

Whatever the details of the simulations, or models, it is useful to remember that fifty years ago a simple physical model was the key to the most important discovery in biology for the past century and a half.



The Papers

The five papers in this issue focus on finance and accounting. They examine the pricing of a type of call option, how to measure the gains to shareholders of a bidding company in a takeover, why deferred tax accounting entered generally agreed accounting practice thirty years ago in Australia, a further look at geared equity investment, and short-term autocorrelation in the stock returns of Australian equities.

John Handley is interested in the pricing of a specific call option. A call option, remember, is a contract that gives the holder the right to buy a certain number of shares from the option's writer at a specified (strike) price, up to a specified expiration date. When the contract is written not on a fixed number of shares but on a stochastic number of shares - most simply, a number specified at the time of issue by a formula that is a function of the market price at maturity - the call option is known as a Variable Purchase Option (VPO). Such contracts can be seen as an innovative capital-raising tool for firms: they give investors the right to participate in a future capital raising at a fixed discount, as well as serving as a hedge.
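To see the distinction, compare the payoffs at expiry (a sketch only: the conversion formula and discount below are assumed for illustration, not Handley's specification):

```python
def call_payoff(s_t, strike, shares=1):
    """Payoff at expiry of an ordinary call over a fixed number of shares."""
    return shares * max(s_t - strike, 0.0)

def vpo_payoff(s_t, strike, discount=0.10):
    """Hypothetical VPO: the number of shares delivered is set at maturity so
    that the holder effectively buys at a fixed discount to the market price.
    Here n(S) = strike / ((1 - discount) * S), an assumed illustrative formula."""
    n_shares = strike / ((1.0 - discount) * s_t)
    return n_shares * s_t - strike      # value of shares received minus price paid

print(call_payoff(12.0, 10.0))          # 2.0: depends on the terminal price
print(vpo_payoff(12.0, 10.0))           # a discount-driven gain, independent of S
```

Under this illustrative formula the VPO's payoff is locked to the discount rather than to the terminal share price, which is why such contracts can be read as the right to participate in a capital raising at a fixed discount.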



Handley (2000) derived an arbitrage-free pricing model of VPOs within a continuous-time framework, and here (1) tests a theoretical bound on VPO prices, (2) examines the performance of the pricing model, and (3) examines the probability of rational exercise of a VPO, using ASX data on six VPO issues over the period May 1992 to May 1998. Apart from a five-month period, when one contract violated the model's lower bound, all prices of all contracts were well-behaved. After adjustments, the observed prices were consistent with the model. The author found, as expected, a very high ex-ante probability of rational exercise of the VPOs.

A pressing issue for financial advisors and regulators is the acquisition benefits to bidding shareholders in corporate takeovers. David Simmonds examined two widely accepted models for estimating these benefits, only to find that one - the so-called (0,1) market model - shows significant wealth gains, while the other - the market model - shows no such gains. He argues that this is because the reference groups of the two models differ: the former models the expected returns as proportional to the returns from a comparable group, usually the whole market, while the latter focuses on company-specific expected returns, a general case of the CAPM, which apparently picks up some of the expected gains from the takeover beforehand. Simmonds also shows that traditional tests of the models are unreliable, especially those of the market model, but that the use of bootstrapping tests (Efron 1982) greatly improves reliability.
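The contrast between the two models can be sketched in a few lines (the return series here are made-up numbers, and the fitting is plain ordinary least squares - an illustration of the general idea, not of Simmonds's tests):

```python
# Toy daily returns (assumed numbers, for illustration only)
market = [0.010, -0.004, 0.006, 0.002, -0.008, 0.012, 0.001, -0.003]
stock  = [0.014, -0.002, 0.009, 0.001, -0.010, 0.020, 0.003, -0.001]

def abnormal_01(stock, market):
    """(0,1) market model: the expected return IS the market return (alpha=0, beta=1)."""
    return [s - m for s, m in zip(stock, market)]

def abnormal_market_model(stock, market):
    """Market model: expected return = alpha + beta * market, with alpha and beta
    fitted by ordinary least squares over the estimation window."""
    n = len(market)
    mbar = sum(market) / n
    sbar = sum(stock) / n
    beta = (sum((m - mbar) * (s - sbar) for m, s in zip(market, stock))
            / sum((m - mbar) ** 2 for m in market))
    alpha = sbar - beta * mbar
    return [s - (alpha + beta * m) for s, m in zip(stock, market)]

print(sum(abnormal_01(stock, market)))            # can be non-zero
print(sum(abnormal_market_model(stock, market)))  # OLS residuals sum to zero in-sample
```

The fitted alpha and beta absorb any company-specific drift over the estimation window - a mechanical analogue of the market model 'picking up' some of the expected gains beforehand.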

The paper gives me the opportunity to give a plug for a forthcoming special issue of the Journal, on mergers and acquisitions, due out later this year.



Thirty years ago - a time in Australia when returns were high and company taxes low, and when the Whitlam Labor government was upsetting established ways in many spheres - almost a third of listed companies voluntarily adopted deferred tax accounting, which thereupon entered generally accepted accounting practice (GAAP) in Australia. Why then, given that it had been available for the previous ten years? Sidhu and Whittred argue that politically exposed companies (large, with low effective tax rates, particularly in the mining industry) saw an opportunity to reduce their exposure where deferred tax accounting would increase their reported tax expense, and took it. Other companies, for whom adoption would reduce reported tax expense, sometimes deferred adopting the new practice, or adopted it but took care not to stand out with particularly low tax rates. Indeed, the authors find that mining companies that adopted were likely to experience higher tax rates in consequence.

In a 1999 issue of the Journal, Gray and Whaley (1999) found that the strike-price reset feature of Macquarie Bank's Geared Equity Investment (GEI) contracts was under-priced. The Bank has since discontinued it: coincidence, or the influence of the Journal? As an academic who professes to be concerned with and involved in the real world, I should hope the latter.



The paper by Corrado and Cheung in this issue examines GEI contracts (sans reset) from a different point of view: to assess the value of the GEI to each of its three stakeholders: the individual investor, the Bank, and the tax office. Specifically, to what degree does the GEI contract support tax arbitrage by the investor and the Bank (the issuer)? Using the contingent-claims methods (option pricing theory) pioneered by Black, Scholes, and Merton thirty years ago, the authors found that the tax office (whether in Australia or New Zealand) provides significant subsidies to the other two stakeholders via tax-shield benefits from investment interest expenses. The Bank, the authors conclude, absorbs the greater share of these tax-office subsidies.
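The Black-Scholes-Merton formula at the heart of such contingent-claims methods can be computed in a few lines (a textbook sketch for a European call on a non-dividend-paying stock, not the authors' GEI valuation):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes-Merton price of a European call (no dividends).
    s: spot price, k: strike, t: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, one year, 5% rate, 20% volatility
print(round(bs_call(100.0, 100.0, 1.0, 0.05, 0.20), 2))  # 10.45
```

Valuing each stakeholder's claim on a structured product such as a GEI amounts to pricing a portfolio of such contingent claims, with the tax treatment of each leg altering who captures the value.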

The final paper in this issue, by Gaunt and Gray, finds large negative first-order autocorrelation in individual Australian stock returns. Could this give rise to the possibility of arbitrage that generates large risk-adjusted returns? If so, it would undermine Fama's efficient market hypothesis. Their preliminary results suggested that two simple trading strategies could yield large returns, but it turned out that this required the inclusion of small-capitalisation and low-priced stocks which are vulnerable to market-microstructure problems. After the database was revised, virtually no opportunities for arbitrage returns survived. Fama lives.
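First-order autocorrelation itself is straightforward to compute (a minimal sketch; the strictly alternating series below is an artificial example of the strong negative autocorrelation at issue, not real stock data):

```python
def lag1_autocorr(returns):
    """First-order (lag-1) autocorrelation of a return series."""
    n = len(returns)
    mean = sum(returns) / n
    # Covariance of the series with itself shifted by one period
    cov = sum((returns[i] - mean) * (returns[i - 1] - mean) for i in range(1, n))
    var = sum((r - mean) ** 2 for r in returns)
    return cov / var

# A strictly alternating series reverses sign every period:
# its lag-1 autocorrelation is close to -1
print(lag1_autocorr([0.01, -0.01] * 10))  # -0.95
```

A large negative value says that yesterday's winners tend to be today's losers, which is exactly the pattern a contrarian trading strategy would try to exploit.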



Housekeeping

Although he had no direct contact with the Journal, Egon Kalotay, who died in April after a short illness, had for many years been a research associate with the AGSM's Centre for Research in Finance. The Centre maintains databases on market returns in Australia, which have served as a primary data source for many finance papers in the Journal over the past twenty-eight years. The School has established a prize in his memory for the best performance in corporate finance. Vale, Egon.

Meanwhile, in Brisbane the Area Editor in Organisational Behaviour, Sharon Parker, has given birth to twin girls. All are doing well. Congratulations, Sharon.

After several years as Area Editor in Economics, Joshua Gans of the Melbourne Business School stood down at the end of April, to be replaced by Chongwoo Choe at the AGSM. I'd like to thank Joshua for his work for the Journal over the years, and to welcome Chongwoo to the team.



Finally, congratulations to Associate Professor Garry Twite, Deputy Editor of the Journal, for his recent appointment.

References

Bacon, F. 2000, Novum Organum, ed. by L. Jardine and M. Silverthorne, Cambridge University Press, Cambridge.
Efron, B. 1982, The Jackknife, the Bootstrap and Other Resampling Plans, SIAM, Philadelphia.

Gray, S.F. & Whaley, R.E. 1999, 'Reset put options: valuation, risk characteristics, and an application', Australian Journal of Management, vol. 24, pp. 1-20.

Handley, J.C. 2000, 'Variable purchase options', Review of Derivatives Research, vol. 4, pp. 219-30.

Horizon Symposium: Protein Folding and Disease, October 2002, www.nature.com/horizon/proteinfolding/index.html Accessed on 2003/06/01

Judd, K.L. 1998, Numerical Methods in Economics, M.I.T. Press, Cambridge.

Judson, H.F. 1979, The Eighth Day of Creation: Makers of the Revolution in Biology, Simon and Schuster, New York.

'Linus Pauling and the Race for DNA: A Documentary History,' Special Collections, Oregon State University, http://osulibrary.orst.edu/specialcollections/coll/pauling/dna/index.html Accessed on 2003/05/21

Pauling, L. & Corey, R.B. 1953, 'A proposed structure for the nucleic acids', Proceedings of the National Academy of Sciences, vol. 39, no. 2, pp.84-97.

Pietzsch, J. 2002, 'The importance of protein folding', www.nature.com/horizon/proteinfolding/background/importance.html Accessed on 2003/06/01

Richards, P. 2003, 'Life class', Cam: Cambridge Alumni Magazine, no. 38, Lent Term, pp. 10-13.

Salsburg, D. 2002, The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century, Owl Books, New York.

Strathern, P. 2000, Mendeleyev's Dream: The Quest for the Elements, Hamish Hamilton, London.

Watson, J.D. & Crick, F.H.C. 1953, 'Molecular structure of nucleic acids: a structure for deoxyribose nucleic acid', Nature, vol. 171, no. 4356, 25 April, pp. 737-8. http://osulibrary.oregonstate.edu/specialcollections/coll/pauling/dna/papers/msna.html



Copyright © The Australian Graduate School of Management