Presented to the XIIIth
DHSDLMPS Joint Conference on "Scientific Models: Their
Historical and Philosophical Relevance", Zurich, 19-22
October 2000
©2000 Michael S. Mahoney
Recent developments in computational modeling, especially simulations run on massively parallel computers and using genetic algorithms, have raised fundamental questions about the nature and purpose of scientific models. As such models have grown ever more sophisticated, they have become in some cases surrogates for nature, rather than representations of it. Most notably in Artificial Life, they have become nature "as it might be", rather than as it is, exhibiting in the computer forms of phenomena and of "life" that exist nowhere outside it. Among the many ontological and epistemological problems arising from such simulations is the question of the nature of computation itself and of our understanding of it. Computer models are programs, sets of instructions for a finite-state machine with a finite, albeit large, memory. What does one, or indeed can one, know about the dynamic behavior of such programs? What kind of knowledge is it, and how is it related to knowledge about the world? What does a computer model explain, both about its own behavior and about the behavior in the world that it somehow represents or instantiates, and how?
In addressing these questions, it may help to take an historical perspective. In a real sense computers came into existence for the sake of modeling. Analog computers were physical models of computations that could not be carried out analytically. Digital computers implemented numerical models of the same sorts of problems. One may say in short that they did the mathematics. However, two subtle issues lurk beneath that statement. One is how they did the mathematics; the other is what mathematics they did. Those issues assumed particular importance as computers made the transition from number crunchers to symbol processors, and from tools for calculating models to vehicles for computational modeling. What began as an aid to mathematics now verges on replacing mathematics, while yet employing a machine that most agree is nothing if not a mathematical device.
In what follows, I should like to offer a schematic review of mathematical modeling with an eye toward the relation of the mathematics to the model and to the system being modeled. Through the work of John von Neumann in particular, I want to look at the shifts that occurred as the computer became first the tool and then the vehicle of modeling. I shall end up in the present and in the quandary that computer modeling now finds itself. I have no solution to offer, but rather only the historian's perspective.
Perhaps the earliest example of a geometrical model of the
heavens is found in the Timaeus, where Plato accounted for
the motion of the sun by placing it on the equator of a sphere,
embedding the poles of that sphere in the surface of another inclined
to it at an angle of 23°, and setting the spheres in uniform
rotation in contrary directions.^{(1)}
Rotating the outer sphere once a day east to west and the other once a
year west to east moves a point corresponding to the sun from one
observed (line-of-sight) position to another. Line-of-sight
measurements map the heavens to the model, and the geometry of the
rotating spheres models the motion of the sun. Schematically, 

This two-sphere model of the sun's motion established two essential features of models that have since purported to provide a mathematical account or explanation of physical phenomena. First, entities of interest in the world are mapped into elements of the model, the operations of which then cause those elements to behave as do the entities of the world. Second, among the derived relations of the mathematical model should be some that link previously unconnected phenomena. That is how the model exhibits explanatory or heuristic power.
Looking more closely at the first feature, once the correspondence between the main elements of the physical system and those of the model has been defined, the model's subsequent properties and relations (or behavior) are consequences of its mathematical structure alone, yet are expected to match those of the original physical system at corresponding points. In modern terminology, the model is in some sense "homomorphic" to the original: the combinatory relations of the mathematical model preserve the structure and operations of the system being modeled. In general:
The use of arrows in the diagrams here constitutes a bit of "meta-modeling" along the lines of category theory.^{(2)} Vertically, how the world works and how the model works may each be viewed as a function, f_{W} or f_{M}, transforming one state of the world or one configuration of the model into another. Horizontally, how the model corresponds to the world is a function, or in most cases a theory of measurement, (partially) mapping (a subset of) elements of the world into (a subset of) elements of the model. How well the model fits the world depends on whether the diagram commutes, that is, whether f_{M}(φ(S_{W})) = φ(f_{W}(S_{W})). If so, we can move from one state of the world to another either by the way the world works or the way the model works.
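In the notation of the text, the commuting condition can be set out as a square (a reconstruction of the schematic diagrams, which are not reproduced in this copy):

```latex
\[
\begin{array}{ccc}
S_W & \xrightarrow{\;\varphi\;} & \varphi(S_W) \\[6pt]
\big\downarrow {\scriptstyle f_W} & & \big\downarrow {\scriptstyle f_M} \\[6pt]
f_W(S_W) & \xrightarrow{\;\varphi\;} & \varphi(f_W(S_W)) \stackrel{?}{=} f_M(\varphi(S_W))
\end{array}
\]
```

Reading down the left side and then across the bottom follows the world and then measures; reading across the top and then down the right side measures and then runs the model. The model fits the world insofar as the two paths agree.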
What does one gain from the model? In general, one knows how the model works, either because one has constructed it oneself or because it is accessible to direct manipulation or examination. To the extent that the horizontal relationship is more or less continuous along the vertical, i.e. the diagram fits over an ever finer mesh, one has increasing confidence that the working of the model offers some sort of explanation of the world. The model works the way the world does, and an ever closer fit suggests that the world may work as the model does. The model may even aspire to a metaphysics, as in the case of the Newtonian world of matter in motion, or indeed current claims that nature is an information-processing system.
Confidence grows in particular through the second
feature, namely when the model fits the world in ways not
originally built into the model.^{(2a)}
Plato's two-sphere universe
primarily offered mathematical confirmation of the (natural)
philosophical principle that the irregular motions of the heavens
are apparent only; in reality they are regular and circular. The
model accounted for the sun's varying declination over the year
and hence for the changing length of the day. Subsequently, the
model drew added strength from an unanticipated corollary. It
accounts for the curve that is traced over the course of the day
and over the course of the seasons by the shadow made by a pole
(gnomon) stuck in the ground. More precisely, it specifies the
changing position and curvature of the shadow over the year,
including the two times when the normal hyperbola becomes a
straight line.
At the hands of Plato's younger colleague, Eudoxus, and then a line of astronomers down to Ptolemy, the number of spheres increased to include all the planets and all the irregularities of their movements. The introduction of eccentrics and epicycles as means of reducing non-uniform line-of-sight progress on a path across the celestial sphere to uniform motion on circles compounded to form an orbit within it tightened the fit between observed and calculated positions. But it loosened the correspondence between the workings of the model and those of the heavens, which had no room for physical analogs of eccentrics or epicycles. Or rather, it pressed the imagination to conceive of a world which did accommodate them. The extent to which the mathematical model described a real celestial mechanism was a matter of debate throughout the Middle Ages, as titles such as De reprobatione eccentricorum et epicyclorum bear witness.
As other portions of the Timaeus make clear, Plato's geometrical model represented an armillary sphere, which functions as a sundial, marking the sun's motion over the ecliptic during the year while also following its daily progress through the sky. Ptolemy's models may be considered as a family of such two-sphere systems, complicated by the addition of epicycles, which are not concentric. In the Middle Ages, these models (theorica planetarum) took physical form in the equatorium, a variety of analog calculator of planetary positions, with a separate plate for each planet. Except for being centered on the earth, the individual modules had no structural relationship to one another; that is, they did not form a system. That is what Copernicus' new model provided: a rearrangement of the elements and their relations that made the whole a system, redefining its relation to the world by making formerly independent phenomena consequences of the structure of the model.^{(3)} Yet, whatever the changes in the elements of the model, Copernicus clearly followed Ptolemy's lead in the mathematics itself. It is less clear whether Copernicus meant to redefine the ontological relationship between the model and the system, that is, whether his orbits and, in particular, the epicycles he was forced to retain to assure empirical fit were any more "real" than Ptolemy's.
Kepler did mean to capture the physical structure of the heavens. When he first undertook to revise the model of Mars based on Tycho Brahe's observations, he began with the standard model of uniform motion on an eccentric circle with equant. He mapped the data to the model at the quadrants and then used the intermediate positions to test the model against the planet. The positions calculated from the model failed to match the observations by 8' of arc, well in excess of the 2' accuracy of Tycho's measurements. "These eight minutes," Kepler noted in his Astronomia Nova, "led the way to the reform of all of astronomy." During the long struggle recounted in that work, the model changed to an ellipse on which the planet moved at a speed varying inversely as its distance from the sun at one focus, its radius sweeping out equal areas in equal times.
Throughout the process, the world remained fixed in Tycho's measurements. What changed was the model and, hence, the way those measurements were mapped to it. In the Copernican model, intervals of longitude were mapped to angles about the equant point; in the Keplerian, they were mapped to the areas of sectors about a focus. In addition, measurements of distances were added to the mapping. As for the deductive structure of the model itself, the geometry of the circle was replaced by that of the ellipse. Substantial portions of Book I of Newton's Principia were devoted to articulating that geometry and the mapping of astronomical measurements to it.
In the seventeenth century, this mode of modeling in astronomy carried over into new areas such as optics, with changes only in the sorts of mappings and mechanisms it involved. In the process that E.J. Dijksterhuis so aptly termed "the mechanization of the world picture", natural philosophers increasingly thought about natural phenomena in terms of mechanical models, at the same time that mechanics was becoming a mathematical science. One modeled the world mechanically, but one reasoned about the model mathematically. Reduced to a geometrical configuration, Descartes' tennis-ball model of his theory of light as a pressure in a medium (also a mechanical model) employed his laws of motion to derive the laws of reflection and refraction, including as a corollary the critical angle at which refraction becomes total internal reflection. Similarly, Newton's models based on central force mechanics provided a unified account of Kepler's laws of planetary motion, embedding them in a more general deductive structure.
For there, from the phenomena of the heavens, by means of the propositions demonstrated mathematically in the earlier books, are derived the forces of gravity by which bodies tend toward the sun and the individual planets. Then, from these forces, again by mathematical propositions, are deduced the motions of planets, comets, the moon, and the sea. Would that the rest of the phenomena of nature could be derived from mechanical principles by the same sort of argument! For many things lead me to suspect somewhat that all of them might depend on certain forces by which the particles of bodies, by causes not yet known, either are driven mutually toward each other and cohere in regular configurations, or flee and recede from one another. Up to now philosophers have beseeched nature in vain for these forces. I hope that the principles set forth here will throw some light either on this mode of philosophizing or on another, truer one.
Reinforced by similar statements in his Opticks, Newton's "suspicion" became the basis of a program of central-force physics culminating in the work of Laplace.
Second, that process of abstraction occasioned a change of mathematical tools. Until the 17^{th} century, the geometry lay close to the physical model; it was, so to speak, the frame of the machine stripped of its matter. As Newton pointed out in his Preface, geometry accomplished with accuracy what mechanics (in the form of mechanisms for drawing) did imprecisely. One could see the mechanism in the geometrical configuration. As I have described elsewhere, this close fit between physical mechanism and geometrical configuration grew more complex as parameters of motion and force superimposed different spaces on the same configuration.^{(5)} Newton had to contend with this problem of multidimensionality in the Principia, and it shaped the structure of his argument.^{(6)} It was a problem to which symbolic algebra and the calculus provided a solution. Algebra and mechanics developed hand-in-hand over the course of the seventeenth century. Algebra did for mathematics what mechanics did for nature. Each made it possible to take things apart (analysis) to see how they fit together and then to reassemble them (synthesis).^{(7)}
Even if Newton himself hewed to the traditional geometric mode,^{(8)} his readers on the Continent embraced the possibilities for expression offered by the new "ordinary and infinitesimal analysis". Questions of expression notwithstanding, the abstract machine Newton described mathematically in the 1680s has its counterpart in the field equations into which Maxwell translated Faraday's lines of force in the 1870s. In each case the mathematics describes a physical mechanism, up to the point where mathematical coherence reaches the limit of physical intuition. As Newton had famously asserted the existence of gravity while "framing no hypotheses" to explain it, so too did Maxwell insist that "we may regard Faraday's conception of a state of stress in the electromagnetic field as a method of explaining action at a distance by means of the continuous transmission of force, even though we do not know how the state of stress is produced."^{(9)} And it was at that point that each man provoked critical response, Newton for positing a force without a mechanical explanation, Maxwell for positing an ether that was (according to William Thomson, at least) physically incomprehensible.^{(10)}
Third, as an approach to modeling
nature, the Principia set a pattern that lasted far
longer than the central-force paradigm. In mapping the forces of
nature to mathematical relations, it refocused the attention of
the physicist from mechanisms to mathematics and led to another
level of modeling. Proposition 41 shows how. It is important to
grasp the profound implication of the condition in the statement
of the problem (NB it is a problem, not a theorem):
In this proposition Newton maps the motion of an orbiting body on the left onto a graph of motion at the "atomic" level on the right. The orbit and the position of the body on it at any given time are thus captured mathematically, provided that one can determine the area under the curves abzv and dcxw.
As later commentators (perhaps most notably Euler) pointed out, Newton's mathematical style worked against the mission inherent in that condition. Carrying out quadrature geometrically hid the structural relations that made the solution of one problem a guide for others. As the rapid translation of the geometry of the Principia into Leibniz's calculus demonstrates, mathematicians on the Continent believed that algebraic analysis, both finite (ordinary) and infinitesimal, was much better suited to the mathematics of Newton's model. Thus Pierre Varignon transformed Newton's scheme into the two basic "rules", velocity v = ds/dt and force y = dv/dt = dds/dt^{2}, where x is measured along the axis AC from A and s is measured along the curve VIK from V.
As to how these two rules are to be used, I say for now that, being given any two of the seven curves noted above [curves relating distance, time, force, and velocity in various combinations], that is to say, the equations of two taken at will, one will always be able to find the five others, supposing the required integrations and the solution of the equations that may be encountered [emphasis added]. ^{(11)}
The condition linked the success of mathematical physics to that of the calculus. It was the job of the calculus to secure those integrations and solutions, and that is where its practitioners directed their efforts over the next centuries. In the 18^{th} century, analytic mechanics was considered a branch of mathematics rather than of physics. Justifying the absence of diagrams in his Mécanique analitique (1788), Lagrange noted:
One will find no diagrams in this work. The methods I set out there require neither constructions nor geometric or mechanical arguments, but only algebraic operations subject to a regular and uniform process. Those who love analysis will take pleasure in seeing mechanics become a new branch of it and will be grateful to me for having thus extended its domain.^{(12)}
The equations of the infinitesimal calculus had become the sole vehicle of mechanics, the unchallenged means of mechanical thought. The intellectual satisfaction derived from reductionist explanations depended on the capacity of the mathematics to carry out the integration that provided the reduction, in the sense of showing that the behavior at the reduced level did produce or correspond to the behavior at the observable level.
As already noted, the situation did not change with the shift from central-force physics to other models of physical action in the early nineteenth century. Once couched in the terms of the calculus, the effectiveness of the physical model and its capacity to convey understanding depended on the capacity of the calculus to provide a solution to the differential equations that resulted from analysis. Even within Newton's mechanics, but especially in the extension of central-force mechanics to other classes of phenomena, the classical model had to be extended to allow for the limitations of analytical solution of the differential equations involved. In some cases, it was a matter of calculating, as in expansion into series and term-by-term integration. In essence, a model of the mathematical system was adjoined to the mathematical model of the physical system. Concern for rigorous proofs of convergence and continuity reflected the need to maintain the correspondence between the numerical and the analytical model.
Despite the successes of analysis, it became increasingly clear that in many cases, for example the n-body problem, the move from differential equation to finite form could be accomplished only by numerical calculation, that is by reducing the analytical expressions to explicit summations iterated over small intervals. One could do that by hand, but it was clearly a job suited more for a machine. The story of the development of mechanical computing devices, both analog and digital, during the nineteenth and early twentieth centuries has been recounted many times, and I do not want to retrace that story here.^{(13)} What is important is that, as far as a mathematical understanding of the world is concerned, the turn to mechanical calculation has from the outset been a matter of faute de mieux. A numerical solution may produce from the basic relations specific values to be matched against measurements, but it generally brings very little insight into how those values reflect the working of the underlying relationships. One may, of course, experiment with various initial values and try to discern how the outcome changes, but doing so does not bring insights of the sort provided by relating work to energy by way of force and momentum.^{(14)} Numerical solutions do not reveal how the system works because they hide precisely the intermediate (mediating) relationships that lead from the behavior of the parts to that of the whole.
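The "explicit summations iterated over small intervals" can be made concrete with a minimal modern sketch: Euler's method applied to the equation dx/dt = -x. The example, and every name in the code, is my own illustration, not anything drawn from the source.

```python
# Reduce dx/dt = -x, x(0) = 1, to a summation over small steps dt
# (Euler's method). The analytic solution is x(t) = exp(-t).
import math

def euler(f, x0, t_end, n_steps):
    """Iterate x <- x + f(x) * dt over n_steps small intervals."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += f(x) * dt
    return x

x_numeric = euler(lambda x: -x, 1.0, 1.0, 10_000)
x_exact = math.exp(-1.0)
# The numbers match ever more closely as n_steps grows, but the
# summation itself offers no insight into why: the mediating
# relationships are hidden inside the iteration.
print(abs(x_numeric - x_exact))
```

The point of the passage above survives in the code: the loop produces a number to be matched against measurement, not an understanding of the relationship it approximates.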
Even before the advent of high-speed computation, that extended model faced difficulties in maintaining the correspondence between the analytic and numerical models. Computing machines exacerbated those difficulties. Whether analog or digital, errors overrode tolerances. In the case of digital computation, truncation produced its own phenomena, and seemingly simple relationships revealed unexpectedly complex behavior when iterated a sufficient number of times. That complex behavior meant that one could no longer be sure that the numerical model produced its result in a way that in fact corresponded to the workings of the analytical model, and it thereby undermined the claim that the analytical model provided an explanation of the physical system being modeled.
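A standard modern illustration of simple relationships turning complex under iteration, not one drawn from the source, is the logistic map: two starting values differing by one part in 10^12, well below a typical truncation error, soon yield trajectories that bear no relation to one another.

```python
# A seemingly simple relationship, x -> r*x*(1 - x), iterated many times.
def logistic_orbit(x, r=3.9, n=200):
    """Iterate the logistic map, returning the whole trajectory."""
    orbit = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)   # perturbed below truncation level
divergence = max(abs(p - q) for p, q in zip(a, b))
# After enough iterations the two trajectories disagree completely,
# although the map itself is a one-line polynomial.
print(divergence)
```

If truncation alone can carry the computed result from one trajectory to an entirely different one, the correspondence between numerical and analytical model is no longer something one can simply assume.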
Through much of the 19^{th} century, mathematics modeled the world indirectly as an abstraction of a mechanical model, even if, as in the case of Maxwell, the abstraction began to take on a life of its own. As Bruna Ingrao and Giorgio Israel show, the question of the extent to which a mathematical description needed a physical (phenomenal) referent arose among economists with Léon Walras' equilibrium theory. By the 1920s the mathematicians had asserted themselves, as Ingrao and Israel note with specific reference to John von Neumann:
... one went from the theory of individual functions to the study of the "collectivity" of functions (or functional spaces); from the classical analysis based on differential equations to abstract functional analysis, whose techniques referred above all to algebra and to topology, a new branch of modern geometry. The mathematics of time, which had originated in the Newtonian revolution and developed in symbiosis with classical infinitesimal calculus, was defeated by a static and atemporal mathematics. A rough and approximate description of the ideal of the new mathematics would be "fewer differential equations and more inequalities". In fact, the "new" mathematics, of which von Neumann was one of the leading authors, was focused entirely upon techniques of functional analysis, measurement theory, convex analysis, topology, and the use of fixed-point theorems. One of von Neumann's greatest achievements was unquestionably that of having grasped the central role within modern mathematics of fixed-point theorems, which were to play such an important part in economic equilibrium theory.^{(15)}
The developments in economics match those in physics, as mathematics became the conceptual anchor for a quantum physics loosed from its intuitive physical moorings. There, too, von Neumann played a seminal role, as in his collaboration with Garrett Birkhoff on the lattice theory of quantum mechanics.
In the mid to late 1940s, as von Neumann was helping to lay out the architecture of high-speed computation that would bring to light the problems of numerical calculation, he also challenged the classical view of modeling by arguing against the need for a physical mechanism to mediate between nature and a mathematical model. Mathematical structures themselves sufficed to give insight into the world, both physical and social. The job of the scientist was to build models that matched the phenomena, without concern for whether the model was "true" in any other sense.
To begin with, we must emphasize a statement which I am sure you have heard before, but which must be repeated again and again. It is that the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work, that is, correctly to describe phenomena from a reasonably wide area. Furthermore, it must satisfy certain esthetic criteria, that is, in relation to how much it describes, it must be rather simple. I think it is worth while insisting on these vague terms, for instance, on the use of the word rather. One cannot tell exactly how "simple" simple is. Some of the theories that we have adopted, some of the models with which we are very happy and of which we are very proud would probably not impress someone exposed to them for the first time as being particularly simple.^{(16)}
But even then von Neumann assumed that the mathematical structure of the model would be accessible to analysis and that the researcher would understand how the model worked.
However, the current state of mathematics offered little insight into the problems of interest at the time, a class of problems exemplified by hydrodynamics which, he noted in 1945, was "the prototype for anything involving nonlinear partial differential equations, particularly those of the hyperbolic or the mixed type, hydrodynamics being a major physical guide in this important field, which is clearly too difficult at present from the purely mathematical point of view."^{(17)} "The advance of analysis," he remarked elsewhere, "is, at this moment, stagnant along the entire front of nonlinear problems."
That is what made the computer so attractive. In the absence of analytic solutions, it could at least provide numerical results and, more importantly, produce them quickly enough to make the mathematics useful as a descriptive model. Beginning with von Neumann's own project on numerical meteorology, the computer became a site of scientific investigation in which simulation gradually took the place of analysis. What began as the modeling of the mathematics gradually shifted to a modeling of the phenomenon. With the development of programming languages to support symbolic reasoning, modeling moved beyond calculating numbers where analytic solutions are not possible and extended to defining the local interactions of a large number of elements of a system and then letting the system evolve computationally. For example, rather than seeking a numerical approximation to the nonlinear partial differential equations of fluid flow, one modeled the interaction of neighboring particles and displayed the result graphically. Instead of a mathematical function, what emerged was a picture of the evolving system; an analytical solution was replaced by the stages of a time series, a mathematical model by a computational one.
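A deliberately minimal sketch of this style of modeling, defining only local interactions and letting the system evolve, is an elementary cellular automaton. The particular rule and the code are illustrative assumptions of mine, not the fluid-flow models just described.

```python
# Each cell interacts only with its two neighbours; the global pattern
# is not solved for analytically but simply allowed to evolve.
def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule to one generation."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((rule >> neighbourhood) & 1)
    return out

cells = [0] * 30
cells[15] = 1                      # a single "seed" in the middle
history = [cells]
for _ in range(15):                # the time series replaces the solution
    history.append(step(history[-1]))
for row in history:
    print("".join(".#"[c] for c in row))
```

What the program yields is exactly what the text describes: not a function, but a picture of the evolving system, one generation per line.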
That transition reflected the nature of computing, whether in building auxiliary models of mathematical equations or models of the phenomena directly. The seminal paper on programming by von Neumann and Goldstine lays out the computation of an integral in a flow diagram, a representation borrowed from industrial engineering.^{(18)} It models the computation as if it were an industrial process: the result takes shape as it moves through a sequence of operations. However, what the computer models is not the world but a model of the world expressed as an integral, i.e. the solution of a differential equation. To the extent that one claims any explanatory power for the model, it lies in the analytical expression, not in the computational scheme, i.e. in the flow chart or in the program itself.
At the same time that von Neumann saw the promise of using the computer to bypass the analytical hurdles of nonlinear systems, he also pointed to a fundamental problem posed by the use of the computer as a means of thinking about the world, and indeed about thinking itself. To the extent that science sought mathematical understanding, that is, understanding with the certainty and analytical transparency of mathematics, one needed a mathematical understanding of the computer. That is, referring again to our basic diagram, we need to understand how the computer works. As of the early 1950s, no such mathematical theory of the computer existed, and von Neumann could only vaguely discern its likely shape:
There exists today a very elaborate system of formal logic, and, specifically, of logic as applied to mathematics. This is a discipline with many good sides, but also with certain serious weaknesses. This is not the occasion to enlarge upon the good sides, which I certainly have no intention to belittle. About the inadequacies, however, this may be said: Everybody who has worked in formal logic will confirm that it is one of the technically most refractory parts of mathematics. The reason for this is that it deals with rigid, all-or-none concepts, and has very little contact with the continuous concept of the real or of the complex number, that is, with mathematical analysis. Yet analysis is the technically most successful and best-elaborated part of mathematics. Thus formal logic is, by the nature of its approach, cut off from the best cultivated portions of mathematics, and forced onto the most difficult part of the mathematical terrain, into combinatorics.
The theory of automata, of the digital, all-or-none type, as discussed up to now, is certainly a chapter in formal logic. It will have to be, from the mathematical point of view, combinatory rather than analytical.^{(19)}
Neither here nor in later lectures did von Neumann elaborate on the nature of that combinatory mathematics, nor suggest from what areas of current mathematical research it might be drawn.
In the early 1960s John McCarthy set out, by reference to an historical prototype, the proper goal of a mathematical theory of computation and then articulated it in terms of the computer. "In a mathematical science," he wrote, "it is possible to deduce from the basic assumptions, the important properties of the entities treated by the science. Thus, from Newton's law of gravitation and his laws of motion, one can deduce that the planetary orbits obey Kepler's laws."^{(20)} The important entities and properties of computer science were data spaces and their representation in memory and procedures and their representation in programs. A mathematical science would make it possible, among other things, to prove that one computational process is equivalent to another, to express algorithms in a way that "accommodate[d] significant changes in behavior by simple changes in the symbolic expressions", and to represent formally both computers and computations. Such a science would, in short, deal with the mathematical structures of computation.
For present purposes, it is enough to say that, despite some elegant results, computer science has not yet realized McCarthy's goal. Over the two decades following von Neumann's work on automata, researchers from a variety of disciplines converged on a mathematical theory of computation, composed of three main branches: the theory of automata and formal languages, the theory of algorithms and computational complexity, and formal semantics.^{(21)} The core of the first field came to lie in the correlation between four classes of finite automata ranging from the sequential circuit to the Turing machine and the four classes of phrase structure grammars set forth by Noam Chomsky in his classic paper of 1959.^{(22)} With each class goes a particular body of mathematical structures and techniques.
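The lowest rung of that correspondence can be made concrete: a finite automaton recognizes exactly the languages generated by some regular (type-3) grammar. A minimal sketch, with a language of my own choosing rather than one from the source:

```python
# A deterministic finite automaton for the regular language (ab)*,
# the language of the type-3 grammar  S -> aB | empty,  B -> bS.
def accepts(word):
    transitions = {("S", "a"): "B", ("B", "b"): "S"}
    state = "S"
    for symbol in word:
        state = transitions.get((state, symbol))
        if state is None:          # no transition defined: reject
            return False
    return state == "S"            # "S" is the accepting state

print([w for w in ["", "ab", "abab", "aba", "ba"] if accepts(w)])
```

The automaton's states mirror the grammar's nonterminals and its transitions mirror the productions; climbing the hierarchy toward the Turing machine trades this transparency for expressive power.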
Three features of the mathematics warrant particular attention. First, as the study of sequences of symbols and of the transformations carried out on them, theoretical computer science became a field of application for the most abstract structures of modern algebra: semigroups, lattices, finite Boolean algebras, categories. Indeed, it soon gave rise to what otherwise might have seemed the faintly contradictory notion of "applied abstract algebra".^{(23)} Second, as the computer became a point of convergence for a variety of scientific interests, among them mathematics and logic, electrical engineering, artificial intelligence, neurophysiology, linguistics, and computer programming, algebra served to reveal the abstract structures common to these enterprises. Once established, the mathematics of computation then became a means of thinking about the sciences, in particular about questions that have resisted traditional reductionist approaches. Two examples in biology are Aristide Lindenmayer's L-systems, an application of formal language theory to patterns of growth, and, more recently, Walter Fontana's and Leo Buss's theory of biological organization based on the model of the lambda calculus.^{(24)} Third, for all the elegance and sophistication of mathematical computer science, none of the structures mentioned above has sufficed as a model of computation adequate to provide analytical insight into the dynamic behavior of working computer programs, to do for computers what Plato did for the sun's motion, or Newton for the planets. As Christopher Langton put it in 1989:
Langton’s point pertains to the vertical arrow of our diagram. We map horizontally into the structure of the machine, but we have as yet no means of following the behavior of the structure as it unfolds. As von Neumann lamented of mathematical logic, computer science continues to suffer from its lack of "contact with the continuous concept of the real or of the complex number". As computational models increase in size and power, they present all the difficulties of the infinite with none of the compensating virtues of the continuous, the virtues that classical analysis had so effectively exploited.
The computer has become essential to new approaches to biology, as it is to the application of cellular automata to a range of physical, biological, ecological, and economic investigations.^{(25)} It is not a matter of calculating numbers where analytic solutions are not possible, but rather of defining the local interactions of a large number of elements of a system and then letting the system evolve computationally, because we have neither the time nor the brain capacity to derive its behavior analytically. In other applications, the results may include new elements or new forms of interaction among them. In particular, the system as a whole may acquire new properties, which emerge when the interactions among the elements reach a certain level of complexity. Precisely because the properties are a product of complexity, that is, of the system itself, they cannot be reduced analytically to the properties of the constituent elements. The current state of mathematics does not suffice to gain analytical insight into the structures of such systems. Hence, although the computer is by its nature a mathematical device, we have no means of understanding its mathematics; or rather, the computation does not afford mathematical understanding, certainly not in the sense of Newton's Principia.
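A minimal example may make the procedure concrete. The sketch below (Python, for illustration only; elementary rule 110 and the single-cell starting row are arbitrary choices) defines a purely local update rule and then simply runs it; nothing short of the computation itself yields the global pattern:

```python
RULE = 110  # the local update rule, encoded as an 8-bit lookup table

def step(cells):
    """Apply the rule simultaneously to every cell (periodic boundary)."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1          # a single "live" cell in the middle
for t in range(15):  # evolve and display; no formula predicts row t from row 0
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Each cell consults only itself and its two neighbors, yet the rows that scroll past exhibit structure that no analysis of the eight-entry rule table would have disclosed in advance.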
But there is a problem here. In what way, precisely, is carbon-based life an instance of computational life? To understand the issue, it may help to consider the changes our diagram undergoes with the historical shift from mathematical to computational models. Initially, the computer served merely as the agent for the numerical model. Even in that modest role, its speed and its restriction of numbers to finite representation posed significant difficulties and brought out unanticipated behavior in seemingly simple systems. However, if trouble loomed on the right-hand side of the diagram, in the numerical modeling of the analytical system, the analytical model retained its relationship to the system being modeled. If the deductive structure of the model became unclear, at least its correspondence with the world seemed unchanged.
The shift to direct computer modeling, or simulation, seemingly simplifies the right-hand side of the diagram on the left, replacing the double analytical-numerical model with a single computational model (although there may still be an intervening mathematical model as in the diagram on the right). The basis for understanding how the model works shifts from mathematical structures to computational structures. As just noted, knowledge of the latter is still quite limited. We lack the tools for capturing the dynamic behavior of a running program in mathematical structures that render it accessible to analysis. But that is not an entirely new situation. Computers are quintessentially nonlinear systems, and, as von Neumann lamented, nonlinear systems were intractable to begin with.
However, the shift from mathematical to computational model also raises questions about how the model corresponds to the system being modeled. In some physical models, for example fluid flow or finite-element analysis, it remains relatively straightforward. The elements of the model correspond to particles of the real system, and the computational operations correspond to the forces and motions to which those particles are subject. In other models, however, in particular those of artificial intelligence and artificial life, the mapping between the elements and relations operative in the real world and those constituting the computational model is left unspecified. As Robert Rosen, a pioneer in mathematical biology and among the first to think about the relation of computation to biology, put it in one of his last articles,
These considerations show how dangerous it can be to extrapolate unrestrictedly from formal systems to material ones. The danger arises precisely from the fact that computation involves only simulation, which allows the establishment of no congruence between causal processes in material systems and inferential processes in the simulator. We therefore lack precisely those essential features of encoding and decoding which are required for such extrapolations. Thus, although formal simulators can be of great practical and heuristic value, their theoretical significance is very sharply circumscribed, and they must be used with the greatest caution.^{(27)}
Not only is it unclear how the model combines and recombines its basic elements to produce its behavior, but in many cases the unspecified relation of the operative constituents of the model to those of the physical system raises the question of what the model could tell us about the system, even if we did know how the model worked. "Because CAS [complex adaptive systems] models tend to strip away many details," notes a recent evaluation, "it is often impossible to say what any component of one of these models corresponds to in the real world. ... As a consequence of this correspondence problem, it is not always clear what scientific questions are being addressed by CAS models."^{(28)} Using context-free grammars and suitable images to give the terminal symbols graphical form, Lindenmayer systems generate "lifelike" plants. But they do not tell us where in the genetic code of plants one may find the sequences corresponding to context-free grammars, the mechanisms by which those sequences are read, or the semantics by which the sequences become branches and leaves of a living plant.
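What such a generative grammar does is easily shown. The sketch below (Python, illustrative only) implements Lindenmayer's simplest textbook case, the two-symbol "algae" system, rather than a full plant model; each step rewrites every symbol of the string simultaneously and without regard to context:

```python
RULES = {"A": "AB", "B": "A"}  # Lindenmayer's "algae" productions

def rewrite(word, rules):
    """Apply every production simultaneously; symbols without a rule pass through."""
    return "".join(rules.get(symbol, symbol) for symbol in word)

word = "A"  # the axiom
for generation in range(6):
    print(generation, word)
    word = rewrite(word, RULES)
```

In the plant models, a richer alphabet of terminal symbols is then interpreted graphically as stems, branches, and leaves; the growth pattern is entirely a product of the rewriting, which is just the point at issue: the grammar generates the form without saying anything about the biochemistry that produces it in a living organism.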
Enthusiasts of simulation do not view Rosen's concern as debilitating. As noted above, Langton treats lack of correspondence with the empirical world as a virtue:
Thus, for example, Artificial Life involves attempts to (1) synthesize the process of evolution (2) in computers, and (3) will be interested in whatever emerges from the process, even if the results have no analogues in the 'natural' world. It is certainly of scientific interest to know what kinds of things can evolve in principle whether or not they happened to do so here on earth.^{(29)}
In short, if the model does not fit the world, so much the worse for the world. In After Thought: The Computer Challenge to Human Intelligence, James Bailey, formerly of Thinking Machines Corporation, argues that understanding such as Rosen seeks belongs to the past and that we are entering an age in which massively parallel computers will replace our mathematical ways of thinking about the world with what he calls "intermaths", the mathematics of which a computer is solely capable. In pointing to the need for cultural change to accommodate the new notion of science immanent in parallel computation, Bailey juxtaposes (at three different places) Laplace's famous description of his omniscient Intelligence with a passage from an interview with French geophysicist Albert Tarantola:
Large parallel computers, with large amounts of memory, may allow us to develop an entirely new sort of physics where, instead of reducing the facts to equations, we can just store in the computer the facts. Then we can extrapolate and we can predict. That's what physics is about: extrapolating and predicting.
At the second use of the passage, where he omits the concluding sentence, Bailey notes that "[t]raditionally, we have not even accorded the raw-data approach the dignity of being considered science at all," and he points to the longstanding debate over whether credit for the law of refraction should go to René Descartes or Willebrord Snel.
Nowhere do the history books concede that the relationship is really Ptolemy's law. More than a thousand years earlier, Ptolemy had listed out all the light ray angles he cared about and next to them listed the resulting angles of refraction.^{(30)}
But historians here take their cue from Snel and Descartes, not to mention Bailey's own hero for other purposes, Johannes Kepler. These thinkers knew what Ptolemy had done, and it did not meet their demand for a law of refraction. His table of values did not explain how light behaved at an interface as a function of the angle of incidence and of the relative densities of the media. Moreover, as it turned out, Ptolemy extrapolated his table beyond the angle of total internal reflection, so as to posit refraction where there was none. Even if the seventeenth-century philosophers agreed with Tarantola, they would have found Ptolemy's tables severely lacking in their capacity to extrapolate and predict.
But they did not agree with Tarantola's position. To someone interested in a science of lenses, as was Descartes, or in a rule for adjusting celestial observations to account for the distortion caused by the earth's atmosphere, as was Kepler, Ptolemy's tables had no value, even if they had been accurate. Ptolemy's tables do not solve the anaclastic problem, nor do they lend themselves to a theory of caustics. Descartes's (or Snel's) sine law does precisely that. It is considered theory because it does more than solve the immediate problem for which the measurements were taken. It abstracts from the phenomena to find a pattern that links them to other phenomena and thus shows that the seemingly different phenomena stem from a common cause. It conveys understanding of the physical world. As Michael Heidelberger has put it:
An explanation better fulfills its purpose the more it unifies the domain of phenomena in question. Facts that appeared to have no relation to one another before the explanation appear after the explanation to be of the same type.^{(31)}
The model of such an explanation is precisely the mechanical world picture as Newton expressed it and unified it. Two mutually independent worlds became a universe in which pendulums and moons moved according to the same laws.
Although the new sciences of complexity may seem to have given up on mathematical explanation, a look at the work of people like John Holland and Stephen Wolfram, to cite just two of the (generations of) proponents of complexity, shows that the new field is no less committed to closed mathematical models than are the classical mathematical sciences. In the concluding chapter of Hidden Order: How Adaptation Builds Complexity, Holland looks "Toward Theory" and "the general principles that will deepen our understanding of all complex adaptive systems [cas]". As a point of departure he insists that:
Mathematics is our sine qua non on this part of the journey. Fortunately, we need not delve into the details to describe the form of the mathematics and what it can contribute; the details will probably change anyhow, as we close in on our destination. Mathematics has a critical role because it alone enables us to formulate rigorous generalizations, or principles. Neither physical experiments nor computer-based experiments, on their own, can provide such generalizations. Physical experiments usually are limited to supplying input and constraints for rigorous models, because the experiments themselves are rarely described in a language that permits deductive exploration. Computer-based experiments have rigorous descriptions, but they deal only in specifics. A well-designed mathematical model, on the other hand, generalizes the particulars revealed by physical experiments, computer-based models, and interdisciplinary comparisons. Furthermore, the tools of mathematics provide rigorous derivations and predictions applicable to all cas. Only mathematics can take us the full distance.^{(32)}
In the absence of mathematical structures that allow abstraction and generalization, computational models do not say much. Nor do they function as models traditionally have done in providing an understanding of nature on the basis of which we can test our knowledge by making things happen in the world. Details aside, Holland's goal, with which he associates his colleagues at the Santa Fe Institute, reflects a vision of mathematics that he and they share with mathematicians from Descartes to von Neumann.
1. /*Re: armillary sphere and construction described in Timaeus.*/
2. The diagrams first occurred to me several years ago while teaching a seminar on mathematization. Thus aware of them, I began to find versions of them already in the literature, most notably in articles by Robert Rosen, a pioneer theoretical biologist; see his "Church's Thesis and Its Relation to the Concept of Realizability in Biology and Physics", Bull. Math. Biophysics 24(1962), 375-93, and "Effective Processes and Natural Law", in The Universal Turing Machine, ed. R. Herken, 485-98. A variation of the diagram can be found in R.I.G. Hughes's discussions of models (e.g. his "Models and Representation", Philosophy of Science 64 Suppl. [PSA 1996], Part II (1997), S325-S336). His "DDI" account labels the top arrow "denotation", the right "demonstration", and the bottom (pointing in the opposite direction) "interpretation". The account omits the left arrow, eschewing any notion that the model might have something essential to do with the phenomenon being modeled.
3. The relationship between synodic periods and second anomalies (stations and retrogressions) followed in Copernicus's system from the explanation of the latter as the apparent result of the relative motion of the earth and the planet. See Clark Glymour, /*cite*/
4. Isaac Newton, Philosophiae naturalis principia mathematica (London, 1687), 45.
5. See, in particular, my "Huygens and the Pendulum: From Device to Mathematical Relation", in H. Breger and E. Grosholz (eds.), The Growth of Mathematical Knowledge (Amsterdam: Kluwer Academic Publishers, 2000), 17-39, where Huygens' diagram contains (interactively) a physical trajectory, a graph of velocity vs. distance, and a mathematical construction.
6. See my "Algebraic vs. Geometrical Techniques in Newton's Determination of Planetary Orbits", in Paul Theerman and Adele F. Seeff (eds.), Action and Reaction: Proceedings of a Symposium to Commemorate the Tercentenary of Newton's Principia (Newark: University of Delaware Press; London and Toronto: Associated University Presses, 1993), 183-205.
7. See, inter alia, my "The Mathematical Realm of Nature", in Daniel Garber and Michael Ayers (eds.), Cambridge History of Seventeenth-Century Philosophy (Cambridge: Cambridge University Press, 1998), Vol. I, 702-55. The idea pervaded scientific thought in the 18^{th} century; see, for example, Condillac, quoted by Wise.
8. Close examination shows that the geometry of the Principia was far from traditional.
9. J.C. Maxwell, "On Action at a Distance", /*date?*/ Works, II, 341.
10. "Maxwell's electromagnetic theory solved a problem long outstanding in elastic solid theories of light, for no longitudinal waves seemed to accompany the transverse waves of light. But this success came only at the expense of violating another of [William] Thomson's basic proscriptions: no physical entity should be allowed in a theory if no definite (mechanical) idea could be formed of it. The two violations together, an undetectable displacement in an unimaginable ether, constituted an epistemological abomination, however successful the theory might be in describing phenomena mathematically." M. Norton Wise, "Mediating Machines", Science in Context 2(1988), 96.
11. Pierre Varignon, "Du mouvement en général par toutes sortes de courbes; & des forces centrales, tant centrifuges que centripètes, nécessaires aux corps qui les décrivent", Mémoires de l'Académie Royale des Sciences [1700] (Paris, 1727), ???
12. "On ne trouvera point de Figures dans cet Ouvrage. Les méthodes que j'y expose ne demandent ni constructions, ni raisonnemens géométriques ou mécaniques, mais seulement des opérations algébriques, assujetties à une marche régulière et uniforme. Ceux qui aiment l'Analyse, verront avec plaisir la Mécanique en devenir une nouvelle branche, et me sauront gré d'avoir étendu ainsi le domaine." Avertissement. ["No figures will be found in this work. The methods I set out require neither constructions nor geometrical or mechanical reasonings, but only algebraic operations, subject to a regular and uniform procedure. Those who love Analysis will see with pleasure Mechanics become a new branch of it, and will be grateful to me for having thus extended its domain."]
13. See, for example, William Aspray (ed.), Computing Before Computers (Ames: Iowa State University Press, 1990).
14. For the importance of those relations and the capacity to express them analytically, see M. Norton Wise, ...
15. Bruna Ingrao and Giorgio Israel, The Invisible Hand: Economic Equilibrium in the History of Science (trans. Ian McGilvray, Cambridge, MA: MIT Press, 1990), 186-87. As will become clear, fixed-point theorems played a central role in mathematical computer science, too, but the mathematical framework to which they belonged did not match that of the main body of computer scientists, at least not in the United States. McCarthy's invocation of classical physics shows, on the one hand, where their thinking lay and thus why a theoretical computer science seemingly divorced from practical application did not attract their attention. McCarthy's own sense of the entities of computer science, on the other hand, fit well with the new orientation: "A number of operations are known for constructing new data spaces from simpler ones, but there is as yet no general theory of representable data spaces comparable to the theory of computable functions." (McCarthy, "Towards a mathematical science", 21.)
16. John von Neumann, "Method in the Physical Sciences", in The Unity of Knowledge, ed. L. Leary (Doubleday, 1955); repr. in JvN, Works, VI, 492. Von Neumann went on to observe that "simple" is a relative term.
17. J. von Neumann to Oswald Veblen, 3/26/45; in JvN, Works, VI, 357. /*work Wolfram in here*/
18. Herman H. Goldstine and John von Neumann, "Planning and Coding Problems for an Electronic Computing Instrument", in William Aspray and Arthur Burks (eds.), Papers of John von Neumann (Cambridge, MA: MIT Press, 1987), 151-306.
19. John von Neumann, "On a logical and general theory of automata", in Cerebral Mechanisms in Behavior: The Hixon Symposium, ed. L.A. Jeffress (New York: Wiley, 1951), 1-31; repr. in Papers of John von Neumann on Computing and Computer Theory, ed. William Aspray and Arthur Burks (Cambridge, MA/London: MIT Press; Los Angeles/San Francisco: Tomash Publishers, 1987), 391-431; at 406.
20. John McCarthy, "Towards a mathematical science of computation", Proc. IFIP Congress 62 (Amsterdam: North-Holland, 1963), 21-28; at 21.
21. For more detail see my "Computers and Mathematics: The Search for a Discipline of Computer Science", in J. Echeverría, A. Ibarra and T. Mormann (eds.), The Space of Mathematics (Berlin/New York: De Gruyter, 1992), 347-61, and "Computer Science: The Search for a Mathematical Theory", in John Krige and Dominique Pestre (eds.), Science in the 20th Century (Amsterdam: Harwood Academic Publishers, 1997), Chap. 31.
22. Noam Chomsky, "On certain formal properties of grammars", Information and Control 2,2(1959), 137-67.
23. See, for example, Garrett Birkhoff and Thomas C. Bartee, Modern Applied Algebra (New York: McGraw-Hill, 1970) and Rudolf Lidl and Gunter Pilz, Applied Abstract Algebra (NY: Springer, 1984).
24. Aristide Lindenmayer, "Mathematical models for cellular interactions in development", Journal of Theoretical Biology 18(1968), 280-99, 300-15. W. Fontana and Leo W. Buss, "The barrier of objects: From dynamical systems to bounded organizations", in J. Casti and A. Karlqvist (eds.), Boundaries and Barriers (Reading, MA: Addison-Wesley, 1996), 56-116.
24a. Christopher G. Langton, "Artificial Life" [1989], in Margaret Boden, ed., Philosophy of Artificial Life, 47.
25. For a general view, see Gary William Flake, The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation (Cambridge, Mass: MIT Press, 1998). It is perhaps worth recalling here that the founding document of cellular automata, von Neumann's "General and Logical Theory of Automata", is based on the analogy of computers as artificial organisms. Faced with the problems of building reliable computers out of unreliable components and with the problem of ever-expanding computations, von Neumann looked to the main features of natural organisms. They are homeostatic, i.e. they maintain functionality by compensating for disruptions to the system; they grow by replication; and they increase their capabilities by evolution. The solution to reliability, then, was not perfect components, but adaptive systems. Computers should repair themselves, and they should grow with the computations they are carrying out.
26. Christopher G. Langton, "Artificial Life", in Margaret A. Boden (ed.), The Philosophy of Artificial Life (Oxford: Oxford U.P., 1996), 39-94; at 40. Emphasis in the original.
27. Robert Rosen, "Effective Processes and Natural Law", in R. Herken (ed.), The Universal Turing Machine, 497.
28. Peter T. Hraber, Terry Jones, and Stephanie Forrest, "The Ecology of Echo", Artificial Life 3(1997), 165-90; at 186. The authors define CAS as "a system with the following properties: a collection of primitive components, called 'agents'; nonlinear interactions among agents and between agents and their environment; unanticipated global behaviors that result from the interactions; agents that adapt their behavior to other agents and environmental constraints, causing system components and behaviors to evolve over time." (Ibid., 165) Echo was created by John Holland (see below).
29. Langton, "Artificial Life", 40. Langton seems to temper this strong claim by later asserting that "AL has not adopted the computational paradigm as its underlying methodology of behaviour generation, nor does it attempt to 'explain' life as a kind of computer program." (Ibid., 50) However, since in the absence of natural analogues the computer model (program) is the only basis for asserting what can evolve in principle, it is hard to see how "life-as-it-can-be" might rest on anything other than a computational paradigm.
30. James Bailey, After Thought: The Computer Challenge to Human Intelligence (NY: Basic Books, 1996), p. 144. As source for Tarantola's statement, also quoted on pp. 29 and 201, Bailey cites an unpublished interview.
31. Michael Heidelberger, "Was erklärt uns die Informatik: Versuch einer wissenschaftstheoretischen Standortsbestimmung", in Informatik und Philosophie, ed. P. Schefe et al. (Mannheim/Leipzig: B. I. Wissenschaftsverlag, 1993), 13-30; at 21. "Eine Erklärung erfüllt ihren Zweck desto besser, je mehr sie den in Frage stehenden Phänomenenbereich vereinheitlicht. Tatsachen, die vor der Erklärung nichts miteinander zu tun zu haben schienen, scheinen nach der Erklärung vom selben Typ zu sein." Statistician David Freedman offers a hypothetical vision of where curve-fitting can go wrong: "I sometimes have a nightmare about Kepler. Suppose a few of us were transported back in time to the year 1600, and were invited by the Emperor Rudolph II to set up an Imperial Department of Statistics in the court at Prague. Despairing of those circular orbits, Kepler enrolls in our department. We teach him the general linear model, least squares, dummy variables, everything. He goes back to work, fits the best circular orbit for Mars by least squares, puts in a dummy variable for the exceptional observation, and publishes. And that's the end, right there in Prague at the beginning of the 17^{th} century." (D. Freedman, "Statistics and the Scientific Method", in W. Mason and S. Fienberg, eds., Cohort Analysis in Social Research: Beyond the Identification Problem, p. 359; quoted by Paul Humphreys, Extending Ourselves, 133.)
32. John H. Holland, Hidden Order: How Adaptation Builds Complexity (Reading, MA: Addison-Wesley, 1995), 161-2.