
From the Bending of Beams to the Problem of Free Will

Mark Wilson, University of Pittsburgh

[S]hape involves something imaginary and no other sword can sever the knots we tie for ourselves by a poor understanding of the composition of the continuum.

—Leibniz1

I

Early travelers often appreciate the charms of a landscape more vividly than the settlers of later years, who gaze upon the encircling splendors with a dull and acclimated eye. Success in science frequently relies upon subtle forms of explanatory structure that exploit data drawn from different scale levels in surprising ways, yet we moderns overlook the oddities of these procedures through inattentive familiarity. G.W. Leibniz, among his many singular accomplishments, was one of the first scientists to attempt physical modeling in what we shall call a ‘mixed level’ mode and was acutely aware of the methodological challenges that such accounts pose. In particular, he pursued such a course in his 1684 essay on the elastic response of loaded beams (an important scientific subject that Leibniz pioneered2) and many of the strangest features of his developed metaphysics directly relate to considerations that arise in such work. The mathematician J.E. Littlewood once published a wry essay entitled ‘From Fermat’s Last Theorem to the Abolition of Capital Punishment’ which traced an improbable path between prosaic worries about mathematical functions and weighty moral matters. Just so: the present essay will follow Leibniz’ analogous journey from worries about bending beams to astonishing conclusions with respect to free will.

Of course, both Littlewood and Leibniz overreached in their argumentation, illustrating Bertrand Russell’s famous aperçu:

The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.3

However, if we halt our pursuit somewhat shy of Leibniz’ final marks, we shall learn some perfectly credible philosophy of science along the way, for looking at the ‘mixed level’ techniques of everyday science through his pioneer eyes will help us better appreciate the sophisticated structures they display. At the same time, our appreciation for the remarkable depth of Leibniz’ thought should increase as well, for modern commentators rarely link Leibniz’ metaphysics with concrete engineering technique in the manner outlined here. Accordingly, this essay will pursue historical and philosophy of science goals in tandem, attempting to extract pertinent conclusions with respect to both Leibniz scholarship and the methodology of science. At paper’s end, we shall follow Leibniz completely to the free will problem, not because these terminal conclusions will aid our core philosophical objectives much, but simply because they complete the full Littlewoodian arc in an amusing fashion, in the slightly comedic manner that truly great philosophy often evinces.

Anyone familiar with Leibniz’ writings realizes that his views about the constitution of matter sound—let’s not mince words—quite crazy, for he contends that the material universe is somehow constituted of a densely packed and nested array of ‘monads’ that don’t truly live in space and time, yet they control everything we see. These ‘monads’ behave like ‘little animals’ in possessing desires, perceptions and actions that aim at furthering such ambitions. Furthermore, the entire material world—including the rocks, the iron girders and water, as well as organic stuff such as wood, mosquitos and human beings—is controlled by these animal-like things, which congregate in great colonies ordered under obscure master/slave relationships. All in all, as a college teacher, this is not the sort of thing you want your students reporting to the folks back home: ‘In your philosophy class, you learned what...?’

However, Leibniz’ notions are not as strange as they appear, for there are sound structural reasons why the vocabulary of desire, perception and action adapts itself naturally to the behaviors of wood, rock and iron girders. If you open any modern practical primer on materials science, you will find its author describing everyday substances in a similarly anthropomorphized vocabulary of ‘memory,’ ‘desire’ and ‘perception’ (true, no ‘little animals’ are mentioned, but even these have their formal analogs within engineering technique). But before we consider these issues in greater depth, let’s begin with a motivating query. Every engineer knows that it is far safer, at the present time, to model most forms of macroscopic material (e.g., wood or iron) as continua—that is, as smooth expanses of homogeneous material—rather than describing their features more accurately as assemblages of molecules or other discrete units. Why does this inaccurate mode prove safer? Why is nature most effectively captured within a netting woven from intentional misdescription? A full response would prove quite complicated, but any adequate diagnosis must stress the following factors:

  1. Many behaviors of commonplace materials can be projected downward to smaller scales from macroscopic scale measurements far more reliably than these same behaviors can be accurately augured through any ‘bottom up’ molecular modeling of which we’re presently capable.4 This ‘downward projection’ advantage generally arises because sundry processes of randomization make large assemblies more predictable than the smaller aggregates that make them up, in the same fashion that the gambling or voting behaviors of large hordes are easier to anticipate than the individual psychologies that comprise those populations (see the sketch following this list).

  2. Macroscopic experimentation also suggests fruitful ways in which the dominant behaviors of complex structures can be highlighted through a judicious choice of ‘element’ segmentation.

  3. The behaviors of these little ‘elements’ can often be accurately gauged through heavy reliance upon the notion of constrained equilibrium. Mixing data betwixt (1), (2) and (3) results in what we shall dub a mixed level explanatory strategy.

  4. Mathematically, the resulting dominant patterns are most effectively extracted through taking limits and other forms of asymptotic approximation.
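
Factor (1)’s appeal to randomization can be made vivid with a toy computation. The following sketch is my own illustration, not drawn from the engineering literature; the lognormal scatter and the series-spring arrangement are hypothetical choices made purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# The effective stiffness of N micro-springs joined in series is the harmonic
# mean of the individual stiffnesses. Each unit scatters widely, yet the
# aggregate becomes steadily more predictable as N grows.
for N in (10, 100, 10_000):
    chains = [N / np.sum(1.0 / rng.lognormal(mean=0.0, sigma=0.5, size=N))
              for _ in range(200)]           # 200 independently sampled chains
    print(f"N = {N:>6}: mean stiffness {np.mean(chains):.4f}, "
          f"spread {np.std(chains):.4f}")    # spread shrinks roughly as 1/sqrt(N)
```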

In the sequel we shall amplify upon all of these factors, as well as indicating how they arise within Leibniz’ own thought. Considered in their totality, such techniques practice sagacious physics avoidance in the sense that difficult or tedious aspects of modeling should be eschewed if we already know what their outcome will be within adequate bounds. I believe that philosophers of science haven’t yet paid enough attention to the ways in which commonplace forms of scientific explanation are structured by the underlying topography of what we already know and what we don’t yet know.

More general philosophical purposes can be assisted through a better appreciation of mixed level schemes as well, for many stereotypes prevalent in current metaphysical discussion rely tacitly upon simplistic and wholly ‘bottom up’ pictures of explanation, at least within the alleged orbit of ‘fundamental physics.’ The manner in which such issues adversely affect contemporary thinking about counterfactuals will become important later in the paper.

In contrast to our own deficiencies, it is remarkable that Leibniz, operating simultaneously as philosopher and as physicist, enjoyed a crisp grasp of the basic ‘mixed level’ organization required, although he was unable to work out many examples with optimal clarity (which is wholly understandable, given the undeveloped state of the mathematics required, in which he again served as a great pioneer). Seeing these ideas at work within a grandly metaphysical context will help us appreciate their insight and depth better, even if we no longer subscribe to the full array of philosophical ambitions they fulfilled in Leibniz’ eyes. Indeed, as I’ve already suggested, it’s rather droll to witness such an extravagant edifice erected largely on the frame of dry engineering precept.

Like many good jokes, the path leading to ‘free will’ requires a lengthy windup, involving a sizable collection of moving parts. I would advise technicality-averse readers to trip lightly over the mechanical machinery on first perusal, returning only as final results warrant. It’s best to gain an initial impression of how a variety of unexpected themes hang together within Leibniz’ thinking.

II

Let’s return to the issue of why engineers commonly credit solid materials with rudimentary forms of ‘memory,’ ‘desire’ and ‘perception.’ Essentially, it is because such materials commonly possess natural equilibrium states to which they strive to return whenever possible. Thus a bent 4x4 beam loaded with rocks will normally struggle to regain its unloaded straight state; in lieu of that, it settles for a condition called constrained equilibrium. However, if the material is afflicted with a so-called ‘fading memory’ (as many woods and plastics are), its ability to ‘remember’ its original state of molding diminishes over time and it only regains a compromised and weakly curved end-state intermediate between its current loaded-with-rocks condition and its erstwhile straight state. There are even some unusual nickel-titanium alloys (popularly called ‘smart materials’) utilized in antennas intended for outer space use that can ‘remember’ two or more natural rest states. Such apparatus will rest docilely in a folded up condition while riding to its destination in a rocket, but as soon as the gizmo is released into the interplanetary void, it senses the heightened cold, which jogs its ‘memory’ of the prior occasion when it had been molded into a stretched out configuration under chilly conditions. Thus our smart antenna ‘perceives’ the cold, which awakens a ‘striving’ to return to a different rest state than it had ‘desired’ while it felt warmer. Watching one of these antennas unpack itself is rather unsettling, for it looks like some creepy insect slowly bestirring itself. These anthropomorphic classifications do not represent mere terminological whimsies, for modern continuum mechanics organizes its sundry materials according to their varying capacities for ‘memory,’ ‘perception’ and ‘action.’ As such, the subject depends heavily upon the basic teleology of ‘returning to natural equilibrium state’ or what Leibniz himself calls ‘explanation in terms of final causes or entelechy.’ However, the real genius of the approach does not lie in its brute teleology alone, but in the skillful manner in which such considerations are interwoven with an artificial decomposition into ‘elements.’ Indeed, the basic purpose of this paper is to understand the strategic logic of this entanglement.
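
To make the ‘fading memory’ talk concrete, here is a deliberately crude sketch (a toy relaxation rule of my own devising, not a real constitutive law for any wood or plastic) in which a rod’s remembered rest length drifts toward whatever shape it is held in:

```python
class FadingMemoryRod:
    """Toy one-dimensional material with 'fading memory': the rest state it
    'desires' to regain slowly drifts toward the shape it is being held in."""

    def __init__(self, rest_length=1.0, fade_rate=0.05):
        self.rest = rest_length   # the 'remembered' state of original molding
        self.fade = fade_rate     # how quickly that memory fades

    def hold_at(self, length, steps):
        # Each time step, the remembered rest state creeps toward the imposed
        # configuration, so the original molding is gradually forgotten.
        for _ in range(steps):
            self.rest += self.fade * (length - self.rest)
        return self.rest

rod = FadingMemoryRod()
print(rod.hold_at(1.3, steps=5))    # brief loading: still 'remembers' roughly 1.0
print(rod.hold_at(1.3, steps=100))  # prolonged loading: settles near 1.3
```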

We are familiar with Plato’s famous injunction to ‘carve nature at the joints’ (as a squeamish person, I find this comparison rather grisly, but we’ll pass on). This venerable metaphor is commonly interpreted as indicating that scientists should be on the lookout for nature’s most central qualities—its ‘fundamental laws and properties,’ say. But science must also locate ‘nature’s joints’ in a more literal manner: viz., those locales where hunks of matter are glued, pinned, hinged or otherwise attached together (in mathematics, these requirements qualify as ‘boundary or interfacial conditions’). In philosophy of science, such ‘boundary condition’ joinings are often neglected as comparatively insignificant in comparison to the ‘laws’ that correlate with the active differential equations. But many fundamental mistakes trace to this source, for it is precisely at the joining surfaces of materials that the most complex physical processes commonly occur in a macroscopic material. Nor should these boundary events be regarded as falling outside the dominion of ‘law’: typically, a much larger number of law-driven processes are intensely active within these narrow regions. In truth, the main distinguishing feature of a ‘joint’ is that, when we approach such regions descriptively, we can often crush its local complexities into some simple rule of thumb that crudely approximates how the two regions on either side of the joint affect one another across the divide. In adopting this truncated mode of description, we practice the ‘physics avoidance’ mentioned above: our boundary condition rule of thumb replaces a lot of very difficult physics. As methodologists, we should be on the lookout for physics avoidance of this ilk, for the overall structure of a physical explanation can become quite significantly altered by the topics we elect to suppress through benign neglect.

In this essay, we shall exploit the narrowness of ‘boundary regions’ in a rather different manner, for we shall artificially insert a lot of non-existent ‘boundary joinings’ into the interior of our beam despite the fact that it shows no evidence whatever of being pieced together in such a manner (from a continuum point of view, our beam is completely homogeneous everywhere). Specifically, let us imaginatively decompose our beam in two ways: first, into long narrow fibers that run horizontally from one end to the other. When the beam sits in its relaxed, unloaded state, these ‘fibers’ will all be of the same length, but as the beam bends, the top fibers will contract and the bottom fibers will lengthen (the central strands that bend without significant stretching comprise the so-called ‘neutral axis’ of the beam). Secondly, we also want to slice our beam into short vertical slices of length ∆L that we shall call ‘elements’ (these ingredients will dominate our discussion for the time being). Observe that each ‘element’ will therefore contain a short cross-section of every ‘fiber’ in the beam. In crediting our beam with these fictitious joinings, we are secretly exploiting foreknowledge of the primary directions in which normal beams distort under a load—if our strut had been made of putty or ice rather than wood, we would have carved it into different ‘fibers’ and ‘elements.’ There are many subtleties connected with such ‘element’ assignments.5 As such, a non-trivial methodological question suggests itself: what possible utility can come from pretending that a beam contains joints that are wholly imaginary in their allocation? As it happens, wood and iron reveal considerable granularity when examined under a microscope, but our ‘element’ and ‘fiber’ carvings bear no relation whatsoever to these real-life interfaces. So no obvious correspondence between our ‘joints’ and real life material structure explains the significant descriptive advantages that such artificial segmentation offers. In fact, a proper resolution of our puzzle turns instead upon the surprising manner in which our fictitious elements facilitate a judicious blending of data drawn from different scale lengths, in a manner that also amplifies the trustworthiness of the explanations offered. We’ll find that Leibniz’ monads reflect his deep appreciation of this ‘mixed scale’ architecture as well.

At this point we shall concentrate upon ascertaining the beam’s states of constrained equilibrium; that is, the configurations in which the wood will remain at rest under a fixed loading of weights W (predictions for how it will move will piggyback in crucial ways upon the answers provided for constrained equilibrium). Let’s initially assume that the ‘elements’ into which our beam is fictitiously carved are all of equal length ∆L. We also ask that these elements possess well-defined curvatures at their centers and join to one another with smooth tangents.6 Soon we shall insert sundry springs (corresponding to our ‘fibers’) inside these units that will allow them to resist bending. As a result, our newly segmented beam consists of a large number of short, springy units of length ∆L welded together in a chain. If we consider all of the possible ways in which such a jointed assembly might sag between two fixed piers, we obtain a ‘space’ of ‘statical beam possibilities.’ As we look over these ‘possibilities,’ we seek the special configuration C which can support its allotment of weights W with the least amount of collective bending (beams store stress energy whenever they bend, so we want this minimized). In searching for this optimized configuration C, we commonly select an arbitrary beam possibility C0 as an initial guess and compute its overall curvature budget. We then adjust its linkages by trial and error to see if we can lessen the averaged curvature thereby. If we uncover an altered ‘beam possibility’ C1 that supports W with less overall bending, we conclude, ‘Obviously, C0 can’t truly represent the correct equilibrium configuration under W because any beam shaped like C0 will quickly eliminate its excess bending by relaxing into C1.’ We then abandon C0 as our operative guess and explore whether C1 can be likewise improved through further twiddling. We reiterate this procedure until we (hopefully) approach some final, optimally low tension configuration C. Computing a proper chain of improvements in this fashion is not always easy, because straightening out the joints between, e.g., elements E3 and E4 can force an increase in the faraway curvature between elements E27 and E28 (it represents a global feature of a configuration C1 that it stores less strain energy than C0, rather than a local characteristic). Nonetheless, in happy circumstances we can ‘walk’ through our space of jointed possibilities until we find the sagging shape C that best represents the configuration that a ∆L-segmented beam would assume under the loading W (picture ourselves as Diogenes searching for a fully relaxed beam). Indeed, the old-fashioned term for our specific collection of ‘beam possibilities’ is ‘relaxation space’: we look for the special ‘possibility’ (or ‘possibilities’—occasionally, the equilibrium configuration will not prove unique) that proves optimally ‘relaxed.’
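
The search procedure just described can be mimicked in a few lines of code. The sketch below is my own minimal rendering, assuming a uniform distributed load and the simple Hookean bending-energy rule that will be motivated in later sections; it ‘walks’ from the straight guess C0 through ever more relaxed configurations by trial-and-error twiddling:

```python
import numpy as np

def relaxation_walk(n_elements=40, length=1.0, EI=1.0, load=1.0,
                    step=1e-3, sweeps=5000):
    """Trial-and-error 'walk' through a space of ∆L-segmented beam possibilities.

    A 'possibility' is the vector h of nodal deflections (measured downward,
    endpoints pinned on the piers). We keep any one-node twiddle that lowers
    total energy = stored bending energy - work done by the sagging load."""
    dL = length / n_elements
    h = np.zeros(n_elements + 1)        # initial guess C0: the straight beam

    def energy(h):
        curv = (h[:-2] - 2 * h[1:-1] + h[2:]) / dL**2  # midpoint curvatures
        bending = 0.5 * EI * np.sum(curv**2) * dL      # strain energy in the 'springs'
        work = load * np.sum(h) * dL                   # work done by the load
        return bending - work

    E = energy(h)
    for _ in range(sweeps):
        improved = False
        for i in range(1, n_elements):                 # endpoints stay fixed
            for delta in (step, -step):
                trial = h.copy()
                trial[i] += delta
                E_trial = energy(trial)
                if E_trial < E:                        # a more 'relaxed' possibility C1
                    h, E, improved = trial, E_trial, True
        if not improved:                               # no twiddle helps: optimally relaxed C
            break
    return h, E
```

Note that the final shape is a global compromise: as remarked above, relieving one joint can increase curvature far away, which is why the walk must repeatedly sweep the whole chain.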

In performing this search, we rely upon a natural norm of ‘possibility closeness’ ultimately dependent upon the manner in which their internal springs store tension (two beam ‘possibilities’ lie close to one another if they display similar amounts of bending).7

Thus far we have only searched for the relaxed state of a beam divided into segments of a fixed length ∆L, but we quickly realize that our beam’s ‘relaxation’ can be further improved if we split it into elements of shorter and varied segment lengths ∆L*. So let’s enlarge our relaxation space to include all of these ∆L*-segmented possibilities as well, for every length choice ∆L*. Once these supplements have been allowed, we can ‘complete our space’ by tolerating continuous beam possibilities without any segmentation or internal springs at all (their ersatz ‘springiness’ becomes measured by the ‘springiness’ in the segmented possibilities to which they lie near in our norm). In real life, the optimally relaxed configurations we witness are (normally) one of these smooth ‘limiting case’ specimens, for real life beams minimize their stored tensions by bending smoothly at every scale length. I will continue to use ‘relaxation space’ for our completed space, although the less evocative ‘Sobolev space’ is the standard nomenclature for a collection of this sort.

I’ve described these techniques in modern vocabulary, but it should be observed that Leibniz conceptualized the foundations of the calculus in similar terms: we should choose a particular ‘progression of the variable’ suited to our problem (i.e., select an appropriate pattern of element segmentation) and extrapolate our conclusions to asymptotic formulas as ∆L → 0.8 We shall later see how such a ‘top down’ diagnosis proves central to his conception of monads.

Notice the odd role that ‘completion in a norm’ plays within these procedures, as this fact will prove crucial in what follows. In section (iii), we will supply the needed rule ℜ for computing the stress energy stored within the segments of a ∆L-divided ‘beam possibility.’ But this evaluation rule will only apply directly to bits of beam that contain certain ‘spring and block’ mechanisms—i.e., constructions that can only live inside a segmented ‘element.’ As such, our ℜ rule will not make direct sense for a truly continuous beam, which lacks such artificial divisions and structures. As indicated above, we utilize our space’s ‘norm of closeness’ to induce an appropriate ‘springiness’ measure upon our continuous possibilities C* by stating that each C* stores exactly the energy obtained in the norm’s limit as a chain of segmented approximations C0, C1, … ‘walks up’ to C* inside the relaxation space. In other words, we witness an odd situation where we must piggyback upon our artificial ‘possibilities’ before we can make sense of the ‘energy storage’ within a more realistic beam. In this peculiar bootstrapping procedure, we witness a mathematical symptom of the great paradoxes of continuous matter that created so many headaches for the past philosophers who worried about matter in a classical context: real world materials appear to be smooth and thoroughly flexible in their qualities yet we humans can’t obtain a workable handle upon their governing physics without first pretending that such materials decompose into artificially kinked and less pliant ‘elements.’ A suitable label for these worries might be the problem of the physical infinitesimal, for reasons that will become clear later. Much of Leibniz’ strange insistence that spatial and temporal descriptions of matter are inherently ‘false’ and ‘idealized’ correlates tightly with these fundamental concerns. Indeed, many of his views on ‘possible worlds’ trace to the odd requirements on ‘artificial possibilities’ witnessed here.

In most philosophies of science developed before the quantum era, our problem of the physical infinitesimal played a historically significant role in motivating the doctrine of essential idealization: scientists must intentionally misdescribe the world before it will submit to a tractable mathematical treatment.

Indeed, modern ‘anti-realism’ historically traces to a group of nineteenth century elasticians (Duhem, Mach, Pearson) who were explicitly motivated by such concerns. Some of us believe that Kant’s Critical Philosophy is partially motivated by similar theses, in a fashion allied to the Leibnizian story to be developed here.9

Unfortunately, the depths of our ‘physical infinitesimal’ problem seem to be rarely appreciated within modern commentary upon these figures. It is important to recognize that our relaxation spaces are different from other kinds of ‘possibility space’ considered in physics (every applied mathematician knows this, but few philosophers have observed the distinction). In particular, the membership of a relaxation space is quite unlike those that philosophers like David Lewis or Saul Kripke usually have in mind when they ponder counterfactuals. As I understand the view, such thinkers generally conceive their ‘possibilities’ in the mode of the bottom up ‘trajectory’ or ‘phase spaces’ utilized within, e.g., statistical mechanics, where each ‘possibility’ represents a distinct realization of a fixed physical system starting from an initial state and evolving over time.

However, our relaxation spaces do not contain dynamically evolving states at all (although they can often be extended to form such a space10) and virtually none of their ‘possibilities’ represent isolated system possibilities in the ‘possible trajectory’ sense. As noted, the only ‘beam possibilities’ capable of plausible physical realization shouldn’t contain springy ‘elements’ at all. The proper motives for considering ‘beam possibilities’ in relaxation space terms trace to the underlying mixed level strategies that we have yet to explicate. These crucial differences in types of ‘possibility space’ should become clearer as we explore the contours of such explanations further. When an engineer ponders beam counterfactuals in the current vein, she will generally think about them in relaxation space terms: ‘If this beam were placed in configuration C, it would store too much energy and quickly move toward configuration C*.’ As such, her implicit reliance upon our ‘space’ and its natural ‘norm’ conforms to Lewis-Stalnaker accounts of counterfactuals, insofar as they go. The trouble is: such stories don’t go nearly far enough. One of the basic mysteries that an adequate approach to counterfactuals must address is: why is it useful to traffic in such fictive concerns at all? In the case at hand, an answer must explain why the natural arena for ‘weighing beam possibilities’ occurs within a relaxation space, rather than within a trajectory space. But I don’t see how this last issue can be adequately addressed without delving into the ‘mixed level’ strategic considerations to be traced here. Until that subterranean architecture has become properly plumbed, I don’t believe that we should claim that we understand how counterfactual evaluation operates.11

This is not to say that all counterfactuals rely upon the same sort of strategic underpinnings; it is quite clear that they don’t. Nonetheless, I doubt that we can properly explain the true utility of any of them unless we pay greater attention to the background policies of physics avoidance et al. that rationalize why certain physical data should be actively brought to bear upon a specific descriptive task and why other sorts of pertinent data should be set aside or ‘ignored.’ The policy gambits that scientists employ in profitably dealing with loaded beams supply beautifully crisp illustrations of the data sifting required. As indicated above, most applied mathematicians appreciate these issues, but the distinctions they draw rarely make their way into philosophy. This prompts me to a brief methodological sermon. Twentieth century philosophy of science relied excessively upon logical tools alone for its diagnostic work. In its proper place, logic performs many useful offices, but if we hope to unravel problems like how counterfactuals operate within a scientific context, we should expect to borrow richer tools from modern applied mathematics (e.g., its sundry types of ‘possibility space’). My main criticism of the Stalnaker-Lewis approach is that it works with logical tools alone and, in contexts like this, such implements are too blunt for the tasks at hand.12

I was surprised to discover, in investigating how Leibniz approached the mechanical tasks we shall now survey, that he plainly viewed his ‘possibilities’ in a relaxation space mode, for entirely cogent reasons. The original ‘father of possible worlds’ would not have recognized David Lewis’ ‘worlds’ as among his progeny.

III

Returning to our main themes, our distinction between ‘trajectory’ and ‘relaxation’ spaces is implicitly important to Leibniz, because his critics, in effect, evaluate ‘best possible world’ within the wrong ‘space.’ In Candide, Voltaire inspects the sorry trajectory that comprises human history and complains: ‘Look at all those earthquakes, plagues and wars: how could this path be optimal?’ However, Leibniz considers his ‘optimization’ in relaxation mode: at any moment, the world is comprised of a lot of interlaced elements that strive to achieve their own localized desires in the same manner as a beam seeks its optimal equilibrium state. How can these sundry ambitions be mutually accommodated in a maximal fashion? The ‘best possible world’ reached under this scheme may scarcely prove ‘optimal’ in a Panglossian sense: there are lots of undeserving layabouts with rotten desires who figure equally in the optimization! As we’ll see, this kind of optimization is closely linked to the problem of free will: God strives to maximize everyone’s freedom to choose, including the bad people and the lesser ‘desires’ displayed in the constrained equilibrium strivings of wood, iron and rocks.

At this early stage my suggested reading of Leibniz on optimality probably strikes the reader as too ‘cute’ to be historically appropriate. After several more turns of the screw, we’ll make closer contact with his actual remarks on necessity and contingency.

To this end, we must look inside our little elements. Here a second surprise awaits, because we must model their inner workings in a patently artificial manner as well. In fact, this internal fakery is required to block a vicious foundational regress that otherwise arises with flexible continua: the ‘problem of the physical infinitesimal’ mentioned earlier. But let us first examine the ‘cure’ before we consider the ‘disease’ it addresses. In his essay on beams Leibniz decomposed each element into a variety of pieces, consisting of a central rigid block (to supply the element with inertia) bound to its neighboring blocks by a set of springs (to supply the unit with flexibility and an ability to return to its relaxed rest state). However, Leibniz didn’t realize that the ‘springs’ (his term for the ∆L length segments of ‘fiber’ passing through our element) on the top side of the beam will contract as the unit bows under an applied load, allowing the unit as a whole to turn about its center of gravity.13 If we correct this mistake, we obtain the so-called ‘Bernoulli-Euler element’ which remains to this day the most common engineering model for a loaded beam. If we ignore, for the moment, how its neighbors will inhibit its movements, each local element will respond to a local weight W in the manner of a one-armed jumping jack, compressing or stretching its internal springs according to whether they lie above or below the turning point.

Of course, Leibniz did not believe that wooden beams were actually comprised of springy mechanisms like this. Their true workings usually lay concealed at a microscopic level (of which more anon), so Leibniz intended his ‘element’ diagnosis as merely a functionally equivalent mechanism whose parts embody the diverse kinds of ‘force’ that Leibniz believed were essential in physics. Specifically, the element’s ‘passive force’ is represented by the moment of inertia of the rigid block that opposes rotation because of the mass distributed throughout the element, as well as the resistance of the springs to compression. The element’s ‘active force’ is supplied in the springs’ capacities to restore themselves back to their natural rest configurations.14 Here we should observe that, although the strengths of both responses are governed by the same ‘spring constant’ E within a simple Hookean spring, this is not true in most materials, which commonly resist initial compression stoutly yet supply only a fairly feeble outward ‘push’ as they slowly creep back to their natural rest states. Because we are presently worrying only about how our beam behaves when it has been forced into a constrained equilibrium rest state under a static burden of applied weights, we are considering our ‘forces’ only in the guise of what Leibniz calls ‘solicitations’ or ‘dead forces.’ Using more modern terminology, his static discriminations correspond fairly closely to a standard identification of the main sites of ‘virtual work’ active on an element in constrained equilibrium (I’ll explain the ‘virtual work’ vocabulary a few paragraphs hence). If our beam is not in equilibrium—it is presently moving due to a load of rocks that it cannot keep in check—Leibniz claims that our ‘dead force solicitations’ will blossom into ‘living forces’ in a manner we’ll ignore for the moment. In adopting this ‘locate X’s equilibrium behavior first; treat X’s non-equilibrium behavior later’ recipe, Leibniz anticipates a division of diagnostic labor that later became canonized in clear variational terms within Lagrange’s Mécanique Analytique.15 As such, this technique is crucial to the mixed level descriptive strategy that we will now apply to our beam.

Although it is not our main concern here, the vagaries of Leibniz’ physical terminology can often be helpfully clarified through the consideration of concrete continuum physics constructions such as our Bernoulli-Euler model. Many commentators have experienced interpretative difficulties because they have tried to locate a home for Leibniz’ distinctions within the simpler world of Newtonian point mass mechanics.

Notice the odd idealization implicit in these procedures. Leibniz over and over insists, for sound physical and conceptual reasons, that no material in nature can ever act in a truly rigid manner, for every collision between bodies must result in some temporary contraction and re-expansion in their shapes (‘Nature does not make leaps,’ he famously contends). Indeed, the normal materials of continuum mechanics are inherently expected to remain continuously flexible at all size scales. Yet we have just artificially introduced an ersatz quasi-rigidity into our Bernoulli-Euler element at size scale ∆L (and Leibniz plainly follows the same course in the more primitive beam element he discussed). Why must we insert this patent artificiality into our descriptive practices?

The traditional response to this question throughout the entire classical mechanics era has been: otherwise we will be unable to get a handle on the beam’s operative physics. Why is this so? Ultimately, the problem traces to the circumstance that continuum physics is severely complicated by the simple fact that two types of force can act upon a flexible body, viz. contact and body forces. If we consider a slice of a beam, the gravitational force supplied by the load W directly pulls on each point within the element body (hence the terminology: ‘body’ or ‘volume’ force). In contrast, the tensions transmitted through the beam as it expands and contracts act upon the boundary surfaces where our element joins its neighbors (hence: ‘contact force’).16 This distinction sounds prima facie insignificant, yet these two ‘forces’ act upon dimensionally incongruent locales: gravitation pulling on points and tensions pulling upon surfaces. Ipso facto, our two ‘forces’ must be of different grades of infinitesimal smallness, as Leibniz fully recognized and as we now enshrine in the different measure theoretic ‘densities’ we assign to points and surfaces. But to obtain a workable physics for a continuous body, we must persuade these dimensionally incompatible critters to work in harness. This, in a nub, is the ‘physical infinitesimal problem’ mentioned earlier; it rests upon worries about the coordination of ‘forces’ that reach considerably beyond simple Cauchy/Weierstrass concerns with mathematical infinitesimals.
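
In modern measure-theoretic dress (a standard textbook rendering, not Leibniz’ notation), the incongruity can be displayed in a single equilibrium statement for an element occupying volume V with boundary surface S:

```latex
% Constrained equilibrium of an element occupying volume V with boundary S:
% the body force integrates against a volume measure dV, the contact
% tractions against a surface measure dS, so the two 'forces' enter at
% different grades of infinitesimal smallness.
\int_V \rho\,\mathbf{g}\; dV \;+\; \oint_S \mathbf{t}\; dS \;=\; \mathbf{0},
\qquad dV \sim (\Delta L)^3, \qquad dS \sim (\Delta L)^2 .
```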

Unfortunately, as long as our beam remains completely flexible at all size scales, there is no evident way to address the question. Physics, according to the old adage, should become ‘simpler in the small.’ We began our descriptive task with a complete wooden beam pulled in a pointwise manner by gravity and in a surface-like manner by the tensions induced at our beam’s endpoints by two fixed piers. When we inspect any smaller section of the beam, what do we see? Simply a shorter beam.17 No matter how thinly we slice, gravity and endpoint tensions act in different locales and we gain no mathematical simplicity through shifting to smaller scales. That is, any straightforward approach to our element merely reproduces in microcosm the same physical problematic with which we began, an annoying consequence of the strong scaling symmetries that characterize continua. By introducing the artificially rigidified ingredients of the Bernoulli-Euler element into our picture, this unhelpful regress is halted, for standard mechanical considerations of a ‘virtual work’ nature can then resolve our contact/body force combination problem.

Anyone interested in classical theories of matter should appreciate this central ‘physical infinitesimal’ dilemma: some form of artificial rigidification at a smallish scale appears required before a workable hold upon the behaviors of fully flexible materials can be gained. This theme repeats itself throughout the entire classical era in a bewildering variety of forms, often motivating some associated philosophical ‘essential idealization’ thesis along the way. Accordingly, approaching the ‘classical physics’ of historical writers in an excessively ‘Newtonian’ (= point mass) vein often results in interpretative disaster, for it entirely misses the body/contact force incongruities that create such deep problems for true continua. Observe, by the way, the curious fact that our rigidification artificialities also represent ephemeral idealizations, in the sense that their faulty properties eventually evaporate in the limit as ∆L→0, thanks to the ‘completeness’ of our relaxation space. As we noted, the finalized configurations we attribute to real life beams are normally completely smooth and flexible throughout. Yet we can’t reach that happy descriptive situation without evoking the rigidified stuff inside our relaxation space en route.

Much of Leibniz’ thinking about monads is generated by a fundamental concern with what represents an accurate portrait of mechanical behavior and what merely represents a convenient and smoothed over mathematical approximation to that behavior. Soon we shall arrange his strange remarks about monadic behavior into close alignment with the various forms of ‘essential idealization’ that we have been tracking over the past several pages. Having partially rigidified our element’s innards, how do we determine its constrained equilibrium if it is asked to support a local weight W? Here Leibniz (and modern engineers to this day) appeals to notions evolved from Greek statics.

In that venerable mode, let’s first consider a stationary teeter-totter loaded with two children, Jack and Jill. Jill can balance Jack’s weight if and only if she locates herself in exactly the spot where any slight upward movement on her part will be exactly compensated by Jack’s moving slightly in a downward arc (in informal terms, the teeter-totter’s equilibrium configuration is characterized by the fact that the ‘virtual work’ W1 · δa1 created by Jill’s weight W1 rising through an infinitesimal arc δa1 exactly opposes the ‘virtual work’ W2 · δa2 provided by Jack’s weight W2 falling through the infinitesimal arc δa2). To adapt this traditional understanding to our beam element, let us postulate that the total ‘force’ τ supplied by the internal springs (it integrates into a torque) must play Jill’s role in balancing the downward gravitational pull of the weight W (serving in Jack’s capacity).18 The only novelty beyond Greek statics is that we now require a further rule ℜ to indicate how strongly the springs will resist the block’s turning, expressed as a function of their angular displacement θ. That is, ℜ should be a formula of the form τ = f(θ), delineating the resistive torque τ supplied whenever the block has become twisted through an angle θ. In approaching springy ‘elements’ in this fashion, Leibniz serves as an important pioneer in extending an ancient policy for calculating the equilibrium of rigid bodies into a ‘virtual work’ scheme suitable to continua as well. I won’t delve into the sometimes tricky details of how we properly integrate the masses up and down our element to obtain a ‘moment of inertia’ and how we likewise sum the spring forces into a torque or ‘bending moment,’ but these techniques rely upon the blockish ‘rigidification’ we have introduced into our Bernoulli-Euler modeling. As this is done, our fundamental body/contact force combination problem becomes silently resolved as well.19
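
For readers who want the teeter-totter balance spelled out, here is the standard textbook derivation (d1 and d2, the children’s distances from the pivot, are symbols introduced here for illustration):

```latex
% A virtual rotation \delta\theta raises Jill through \delta a_1 = d_1\,\delta\theta
% and lowers Jack through \delta a_2 = d_2\,\delta\theta. Equilibrium demands
% that the two contributions of 'virtual work' cancel:
W_1\,\delta a_1 = W_2\,\delta a_2
\quad\Longrightarrow\quad
W_1 d_1\,\delta\theta = W_2 d_2\,\delta\theta
\quad\Longrightarrow\quad
W_1 d_1 = W_2 d_2 \qquad \text{(the classical law of the lever)}.
```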

As we shall see in the next section, the mixed level strategy here lurking in the background becomes particularly evident when we consider how a specific τ = f(θ) rule gets chosen and justified in practice. The simplest relationship of this nature is a Hooke’s law behavior where the resistive torque τ is proportional to the angle opening θ (i.e., τ = EIθ, where I supplies the block’s moment of inertia). Furthermore, if our element is small, θ can be nicely approximated by the midpoint curvature d2h/dx2. Employing these ingredients as our ℜ rule and shrinking the element width ∆L to zero, we obtain the standard Bernoulli-Euler beam equation in its ‘weak’ form: ∫ EI (d2h/dx2)(d2δh/dx2) dx = ∫ W δh dx.20
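
As a numerical check on this limit, the sketch below is my own finite-difference rendering; it assumes a uniform load W per unit length and simply supported ends (h = 0 and d2h/dx2 = 0 at both piers), neither assumption being fixed by the discussion above. It solves the strong form EI d4h/dx4 = W that the weak equation implies:

```python
import numpy as np

def bernoulli_euler_deflection(n=200, L=1.0, EI=1.0, W=1.0):
    """Finite-difference solution of EI d4h/dx4 = W with simply supported
    ends (h = 0 and d2h/dx2 = 0 at x = 0 and x = L)."""
    dx = L / n
    m = n - 1                               # interior grid nodes
    A = np.zeros((m, m))
    stencil = [1.0, -4.0, 6.0, -4.0, 1.0]   # fourth-difference weights
    for i in range(m):
        for k, c in zip(range(i - 2, i + 3), stencil):
            if 0 <= k < m:
                A[i, k] += c
    A[0, 0] -= 1.0     # ghost node h(-dx) = -h(dx) enforces d2h/dx2 = 0 at x = 0
    A[-1, -1] -= 1.0   # mirror condition at x = L
    h_interior = np.linalg.solve(EI * A / dx**4, np.full(m, W))
    return np.concatenate(([0.0], h_interior, [0.0]))

h = bernoulli_euler_deflection()
# Midpoint sag versus the classical closed form 5WL^4/(384 EI):
print(f"midpoint sag {h[len(h) // 2]:.6f} vs closed form {5 / 384:.6f}")
```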

Before we consider the justification of such a τ = f(θ) choice, we should recall that we have only discussed how our loaded-with-block-and-springs element will respond if it operates freely of its neighbors. But a real beam element can’t act in such a way, for our slices are connected in a series and the final equilibrium position of each element will be greatly affected by how its neighbors have decided to behave. It represents a strong additional posit to demand, as Leibniz and the engineers do, that a global optimization must be reached across the full beam where each local element supports its local burden W and bends as little as possible at its joints, compatibly with every other element performing exactly the same tasks. Our relaxation space construction enforces these requirements upon our beam’s imaginary joints and these entanglements influence the character of the full beam’s constrained equilibrium preferences considerably. In doing so, we implicitly assume that the same spring rule (τ = f(θ)) continues to govern the internal production of ‘force’ within each element even after they are collectively chained together in enforced fraternity.

Although developing a dynamic description in a satisfactory manner lay beyond his technical means, Leibniz clearly anticipated that beams in motion could be treated by the methods later arranged under the heading of ‘d’Alembert’s principle.’ It maintains that if the ‘dead force’ solicitations exerted by contacting bodies become unbalanced, the excess will gradually convert itself into ‘living force’ motions (vis viva) that lead the stronger body to expand or otherwise move against its environment. In our beam, we will witness waves of expansion and contraction passing back and forth along its length as a result. But such piggybacking dynamical issues are not important to our concerns in this essay.

IV

To complete our treatment of the beam, we lack only a rule ℜ of the form τ = f(θ) to determine how much collective resistive torque τ becomes excited within our element’s springs as the unit twists by θ from its normal rest configuration.

This ℜ rule must differ from material to material, for the differences between stiff iron, stout oak and feeble willow reflect the vim (= the strength of the f(θ) production) with which their internal springs supply a restorative torque when displaced from their unloaded state. Indeed, it is natural to wax anthropomorphic here: their distinct f(θ) rules reflect the ‘personality’ differences between iron, oak and willow, viz., how resolutely they recall and regain their preferred natural states. As indicated above, engineers still speak about materials in this ‘teleological’ vein and the very subject of modern continuum mechanics is taxonomized according to the varying ‘personality’ rules ℜ that diverse substances obey. A Bernoulli-Euler beam is defined by its simple ‘Hooke’s law personality,’ although Leibniz was aware that in wood τ usually relates to θ in a more complex manner. But how do these ‘personality rules’ get justified? Here we utilize patently ‘top down’ information in a rather clever way. Theoretically, we could attempt to model the small scale behavior of a beam in a direct ‘bottom up’ manner, relying upon some microscopic model of how the molecules in the wood are arranged. But we won’t do that, for such modeling attempts generally supply very unreliable results. Instead, we should return to our beam considered in its macroscopic entirety and place it on a workbench free of any burden.21 We then pull and push on its far ends using a range of forces. How does this help? In these special circumstances of ‘pure tension’ and ‘pure compression,’ every element/fiber segment within the entire beam is affected by its neighbors in exactly the same way, unlike in our original loaded beam where each local strand is pulled by different tensions. This consideration suggests a trick that provides the local τ = f(θ) rule we seek: we must merely graph how the entire beam length responds to our ‘pure tension’ pushes and pulls in our workbench tests and then scale the resulting relationships down to the ∆L level. So if our beam stretches and contracts in a nicely Hookean manner on the workbench, our graph will plot length against force in a tidy straight line with slope E. We then transfer this E to our element’s ‘springs’ and, after integrating over the cross section, we obtain our desired ℜ as τ = EIθ. This ‘scaling down’ procedure is reasonable because, if ∆L is short enough, then its ‘springs’ (= our element/fiber segments) should respond to their end point pushes and pulls in pretty much the same manner as the full beam does within our workbench tests (indeed, our beam would splinter apart if this agreement betwixt large and small scale stretching did not maintain itself). In short, we cleverly exploit the very homogeneity across size scales that occasioned our ‘physical infinitesimal’ headaches earlier, for we can now reliably transfer a well-tested ‘personality’ rule ℜ from the macroscopic level down to the short ∆L scale we require. In doing so, we neatly skirt the substantive modeling problems we would face if we tried to calculate our ‘personality’ rule in a purely ‘bottom up’ way. And this, of course, is why our treatment proves ‘mixed level’ in character: workbench data from the macroscopic level is blended with general mechanical assumptions (our ‘virtual work’ considerations) that maintain their correctness down to scale lengths of a half millimeter or so.
We have completely avoided saying anything at all about what occurs within the beam at a truly microscopic scale, although it can appear, from the infinitesimal character of our final differential equation, as if we had done so. But to think that is to misunderstand the mixed level rationale that supplies our equation with its great practical reliability. As we shall soon learn, Leibniz recognized all of these considerations in his own thinking.
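
The ‘scaling down’ recipe itself is a one-screen computation. In the sketch below the workbench numbers are wholly hypothetical (a wood-like stiffness and a 0.1 m square cross-section invented for illustration); the point is only the flow of information from macroscopic test to element-level rule ℜ:

```python
import numpy as np

# Hypothetical workbench data: axial stress (force / cross-sectional area, Pa)
# recorded against fractional extension (strain) in pure tension tests.
strain = np.array([0.0, 1e-4, 2e-4, 3e-4, 4e-4])
stress = np.array([0.0, 1.1e6, 2.0e6, 3.1e6, 3.9e6])

# The 'tidy straight line' of a Hookean material: its slope is E.
E = np.polyfit(strain, stress, 1)[0]

# Integrate over a rectangular cross-section (breadth b, depth d) to obtain
# the second moment I, then transfer E down to the element's internal 'springs.'
b, d = 0.10, 0.10
I = b * d**3 / 12

def R(theta):
    return E * I * theta    # the element-level 'personality' rule: tau = EI*theta

print(f"E ≈ {E:.2e} Pa, EI ≈ {E * I:.0f} N·m², tau(0.01 rad) ≈ {R(0.01):.0f} N·m")
```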

As a philosopher of science, I am sometimes asked why variational principles are so important in physics (‘virtual work’ represents the most generally applicable of such principles within a classical setting). The question does not seem to enjoy an invariant answer across all disciplines, but, within a classical engineering context, such principles, in combination with our relaxation space procedures, provide an accommodating frame that allows us to mix data drawn from different scale levels effectively. In our own case, such advantages are tacitly evoked to exploit the macroscopically observed fact that the primary tensions in a loaded beam get transmitted as bending moments, a behavior for which constructing a satisfactory molecular mechanism would prove rather difficult. ‘Virtual work’ allows us to ‘walk around’ these modeling difficulties through deftly exploiting our well established experimental knowledge of how beam elements respond to simple tractions and loadings. We already know these things; why must we construct a complex and unreliable ‘bottom up’ model whose only contributing role is to tell us what we already know (and to possibly incur some uncontrolled risks in the bargain)? Indeed, the explanatory topography that underlies our Bernoulli-Euler labors is largely shaped by the significant bogs and swamps of difficult physics that we have skillfully avoided, courtesy of our deft exploitation of ‘mixed level’ data drawn from several sources. Appeals to the mild teleology of ‘desire for a natural rest state’ are central to all of these clever ‘walk arounds.’ Surely, large stands of fir trees might have avoided conversion to paper pulp if philosophers of human action had appreciated Leibniz’ humble observation that beams of wood behave, in certain respects, like ‘little animals.’22

For our schemes to work, a material such as wood or iron must display a relatively homogeneous behavior down to some relatively minute level. However, neither Leibniz nor any modern materials scientist would claim that wood or iron truly behaves identically at all size scales. If we inspect such materials under a microscope, we find that they are comprised of a maze of cells or minute grains that individually stretch and dilate according to far more complicated rules than reveal themselves within larger hunks of the material (the simpler behaviors at large scales largely arise through ‘law of large numbers’ randomization). Leibniz’ own thinking was frequently inspired by contemporaneous discoveries in microscopy: he knew, from Hooke’s drawings, that wood was composed of tiny cells, for example. Despite the fact that the simple stretching behaviors we assign to a Bernoulli-Euler beam fail once its ‘little elements’ fall below some critical dimension ∆LC, Leibniz correctly recognized that we should nonetheless push ∆L all the way to zero in extracting our finished differential equation formula, for a tractable simplicity emerges in that asymptotic limit (just as probabilities simplify in infinite populations). If we work backwards from this zero length equation, we find that its predictions will prove quite lousy with respect to very short spans of wood, but its successes improve dramatically when we reach larger scale lengths. In consequence, our model beam equation should not be viewed as a formula that captures ‘what really happens’ in the material at a ‘length zero’ scale, but rather as a shorthand formula that generates sound results when applied to sufficiently long scale lengths. In other words, our beam differential equation represents a downwardly projected expression of a simplified ‘personality’ that the true material manifests only at reasonably long scale lengths. This basic theme—calculus formulas represent simplified, downwardly projected generators from larger scale behaviors—is central to Leibniz’ musings upon the ‘metaphysics of the calculus’: differential equations merely represent shorthand, asymptotic formulas that capture a material’s behavior properly only down to some indefinite choice of scale size. We moderns normally view physics’ most fundamental differential equations in a more favorable light as directly capturing nature’s workings at an infinitesimal level. However, when we consider the standard equations for the classical continua of everyday life (beams, strings, fluids, et al.), we tacitly adopt Leibniz’ point of view, for their validity can only be understood as resulting from a downward projection of homogeneous patterns witnessed at large-to-middling scale lengths. Many of Leibniz’ strangest pronouncements about space and time trace to insightful observations such as this.
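
The asymptotic claim can be watched numerically. Reusing the bernoulli_euler_deflection sketch given earlier (so this snippet assumes that function is in scope), coarse segmentations misjudge the sag while refined ones converge on the smooth limiting answer:

```python
# Midpoint sag of the ∆L-segmented model versus the smooth continuum value
# 5WL^4/(384 EI) ≈ 0.013021: crude at coarse segmentation, convergent as ∆L → 0.
for n in (4, 8, 16, 64, 256):
    h = bernoulli_euler_deflection(n=n)
    print(f"n = {n:>3} (∆L = {1.0 / n:.4f}): midpoint sag {h[len(h) // 2]:.6f}")
```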

V

We must add one final and insignificant-looking datum to our stack for Leibniz’ metaphysical views to lock together in impressive unity. It is simply this: Leibniz embraced Descartes’ ‘air sponge’ theory of elastic rebound. Consider an elastic material such as wood or iron: from whence does it derive its ‘spring’? According to Descartes, its bulk is riddled with penetrating pores through which an ambient super-mundane ‘air’ continually circulates. When we squish the wood, we drive this ‘air’ from the pores and, when we stretch it, the passages enlarge and an excess of ‘air’ rushes in. As this happens, a corrective ‘air’ flow will push the pore walls back to their original configurations, rather as a dried sponge regains its shape when we allow water to seep into its crevices. Through this ‘air’ assisted mechanism, the wood retains a ‘memory’ of its natural equilibrium state through what Leibniz calls ‘efficient causation’ mechanisms alone (i.e., non-teleological stories that rely upon geometry and conservation principles alone). Here is one of the many passages where Leibniz endorses this account of the underpinnings of elastic rebound:

I hold all the bodies of the universe to be elastic, not though in themselves, but because of the fluids flowing between them, which on the other hand consist of elastic parts, and this state of affairs proceeds in infinitum.23

As such, Descartes’ story is physically ridiculous: bagpipes don’t blow themselves back up after their wind is evacuated because air pressure normally acts equally in all directions (sponges can restore themselves only because capillary forces draw the water inward). Indeed, the only way that ‘air’ movements could produce the springiness Descartes demands is if some Maxwell’s Demon were to providentially direct the motions of the ambient ‘air’ in exactly the right way that they can collectively knock the elastic material back to its original condition.

As I understand him, this is precisely what Leibniz believes occurs, for a provident God plays precisely the role of such a benevolent Demon in supporting the satisfaction of our human desires. Throughout most of this essay, we have explained our beam’s behavior through teleological appeal to what Leibniz considers as ‘final causes’: the equilibrium states wood and iron strive to regain, with different degrees of vim according to their specific τ = f(θ) ‘personalities.’ God takes the ‘final cause desires’ of our beam into consideration along with the ‘desires’ of everything else in the macroscopic world and optimizes their satisfaction in the democratic manner we modeled for a specific beam within our relaxation space. Once this grand optimization has been settled, God then fills in the full physical universe at the microscopic level by directing the ambient ‘air’ molecules in exactly the right way that the ‘final state’ desires of the middle level objects will be maximally accommodated:

It is therefore infinitely more reasonable and more worthy of God to suppose that, from the beginning, he created the machinery of the world in such a way that... it happens that the springs in bodies are ready to act of themselves as they should at precisely the moment the soul has a suitable volition or thought; the soul, in turn, has this volition or thought only in conformity with the previous states of the body.24

Accordingly, if we probe our wood at a scale level below the critical length ∆LC where it stops behaving in a homogeneous fashion, we will observe ‘air’ bumping into pore walls within the wood along the exact ‘efficient causation’ trajectories required to knock the distorted beam back to its relaxed shape. Viewed from this efficient causation perspective, we don’t witness any springy teleology in play, for God’s provident planning has supplied the beam with a micro-mechanism that allows it to regain its rest shape through ‘air’ contact action alone.25 This, I believe, represents the proper physical reading of Leibniz’s famous ‘preestablished harmony’: behaviors that can be explained in a top down manner according to ‘final causation’ narratives can also be addressed ‘from below’ with impeccable ‘efficient causation’ accounts.

Nonetheless, these efficient causation mechanisms shouldn’t persuade us to become rank materialists, for none of this providential pushing and pulling could have happened if Someone Swell hadn’t designed the lower scale world for the benefit of the wood:

All in all,... not only efficient causes, but also final causes, are to be treated in physics, just as a house would be badly explained if we were to describe only the arrangement of its parts, but not its use.26

And thus we appreciate how Leibniz’ two wondrous ‘kingdoms’ of explanation mesh together:

I have shown that everything in bodies takes place through shape and motion, everything in souls through perception and appetite; that in the latter there is a kingdom of final causes, in the former a kingdom of efficient causes, which two kingdoms are virtually independent of one another, but nevertheless are harmonious.27

In other words, molecular physics at a lower efficient cause level is incredibly complicated and its structure is partially determined, through God’s preplanning, by the ‘final states’ that macroscopic level ‘souls’ strive to reach. In short, God has planned the world largely for the sake of the monads in the middle ranks and then arranges the rest of the stuff around them.

In this connection, Leibniz sounds the precise theme with which our essay opened: mixed level explanations of a downwardly projected character are often more trustworthy than unconstrained speculations about efficient causation mechanisms active at scales we cannot readily observe:

However I find that the way of efficient causes, which is in fact deeper and in some sense more immediate and a priori, is, at the same time, quite difficult when it comes to details, and I believe that, for the most part, our philosophers are still far from it. But the way of final causes is easier and is not infrequently of use in divining important and useful truths which one would be a long time in seeking by the other, more physical way; anatomy can provide significant examples of this.28

Indeed, for Leibniz descriptive success in physics requires that, at some point, we artificially smooth over the microscopic grain found in real materials to create a pathway to tractable formulas, just as we pushed our Bernoulli-Euler analysis fully to a ∆L → 0 limit. It was, of course, rare in Leibniz’ time to actually witness the crossover levels ∆LC below which everyday materials stop behaving homogeneously. Because of that lack of microscopic experience, Leibniz suggests that our limited everyday perceptions make us erroneously presume that material objects genuinely possess wholly objective spatial shapes:

It is the imperfection of our senses that makes us conceive of physical things as Mathematical Beings, in which there is indeterminacy. It can be demonstrated that there is no line or shape in nature that gives exactly and keeps uniformly for the least space and time the properties of a straight or circular line, or of any other line whose definition a finite mind can comprehend.29

In truth, every attribution of a geometrical characteristic to a material object merely represents a false-but-useful downward projection based upon its homogeneous larger size scale behaviors. Thus:

[Matter] has not even the exact and fixed qualities which could make it pass for a determined being... because in nature even the figure which is the essence of an extended and bounded mass is never exact or rigorously fixed on account of the actual division of the parts of matter to the infinite. There is never a globe without irregularities or a straight line without intermingled curves or a curve of a finite nature without being mixed with some other, and this in its small parts as in its large; so that far from being constitutive of a body, figure is not even an entirely real quality outside of thought. One can never assign a definite and precise surface to any body as could be done if there were atoms. I can say the same thing about magnitude and motion...30

In my view, such unexceptionable considerations lie at the core of Leibniz’ strange insistence that ‘space’ and ‘time’ are ‘merely ideal’: he is merely contending, quite correctly, that every practicable description of everyday matter utilizing geometrical vocabulary secretly incorporates a fair degree of fictitious projection to unwarranted size scales. With this feigned homogeneity comes the presumption that a thoroughly continuous material can be potentially divided at every scale length ∆L. As such, the doctrine overreaches, yet these same fictitious ‘possibilities of division’ get exploited whenever we follow a ‘relaxation space’ path to equations such as our Bernoulli-Euler prototype. Although denying the ‘reality of extension’ sounds very strange, we moderns wholly agree with Leibniz when we consider the standard engineering formulas that describe the familiar materials of everyday life with amazing practical success. In attributing a continuous extension to an object, we ipso facto assign it non-existent structures that can be deftly exploited in constructing the very formulas that afford our most trustworthy predictions upon a macroscopic scale. Such ‘fictional projections’ should appear entirely benign once the mixed level strategies that underlie their usage have become properly diagnosed.

And this is where the monads come in. Leibniz believes that material behaviors require firm underpinnings within exterior reality (he is no phenomenalist), but we’ll never provide a wholly stable answer as long as we insist upon describing matter in spatially dominated terms (although this is the descriptive mode in which a successful continuum physics must operate). Instead, he maintains that a substance’s powers to resist alteration and to seek goals (e.g., the manner in which wood strives to maintain its natural rest state) represent more fundamental descriptive characteristics than geometrical extension (he usually arranges such ‘power’-related strivings under the heading of ‘entelechy’). In our τ = f(θ) rule, the bending moment τ arises from the stored pressures or stresses present within the body, whereas the geometrical θ captures its current distortion or strains. τ is thus a characterization of power; θ, a characterization of extension. In physics, where the notions of ‘position’ and ‘velocity’ rule as primary, a material’s ‘personality’ rule will usually be expressed in a functional format that privileges extension. Thus we write Hooke’s law in the form τ = EIθ in the course of constructing our Bernoulli-Euler formula. But within the real, monadic order of things, such dependencies operate obversely, better encapsulated in the guise θ = F(τ). The opening angle θ we ‘see’ reflects the ‘power’ fact that the ‘upper’ fibers in the beam oppose a further increase in their compressive stresses (which Leibniz construes as a ‘passive’ resistance to change in state), whereas the ‘lower’ fibers endeavor to reduce their excessive tensions (they ‘actively’ pull on their neighbors in attempting to regain their natural equilibrium configurations). Thus, an object’s visible ‘size’ merely represents a downwardly projected expression of the varying powers that its component parts display in controlling the large scale behavior of their surroundings. More generally, an object will ‘look large’ if it possesses the capacity to make large quantities of light alter their inertial conditions (i.e., to recoil from the vicinity of the target object, which, left to its own desires, the light would not do). Accordingly, the ‘sizes’ we witness in nature ultimately derive from the comparative stresses (or ‘powers’) active within the sundry monadic clusters near us (which is why Leibniz so often compares normal vision to the perception of rainbows). He writes:

[I]n unraveling the notion of extension, I noticed that it is relative to something that must be spread out and that it signifies a diffusion or repetition of a certain nature... [which is] the diffusion of resistance.31

Hence θ = F(τ) represents the proper way to express how the relationship between ‘power’ and ‘extension’ truly arises in nature.
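For readers who find these two descriptive directions easier to grasp in computational dress, here is a minimal sketch (in Python; the stiffness value and the nonlinear ‘personality’ rule are invented for illustration and come from nothing in Leibniz’ text) of how the extension-privileging format τ = f(θ) and the power-privileging format θ = F(τ) operate as mutual inverses:

```python
from scipy.optimize import brentq

EI = 2.0e4  # illustrative flexural rigidity; the numerical value is invented

def tau_hooke(theta):
    """Extension-privileging reading: moment from opening angle (Hooke: tau = EI * theta)."""
    return EI * theta

def theta_hooke(tau):
    """The obverse, power-privileging reading: opening angle from the stored moment."""
    return tau / EI

def tau_stiffening(theta):
    """A hypothetical nonlinear 'personality': the material stiffens as it opens."""
    return EI * theta * (1.0 + 10.0 * theta ** 2)

def theta_stiffening(tau):
    """Numerical inversion theta = F(tau); works because the rule is monotone on the bracket."""
    return brentq(lambda th: tau_stiffening(th) - tau, 0.0, 1.0)

theta = 0.05                                                # a small opening angle (radians)
assert abs(theta_hooke(tau_hooke(theta)) - theta) < 1e-12   # Hooke's rule, read both ways
tau = tau_stiffening(theta)      # read the visible distortion off as stored 'power'
print(theta_stiffening(tau))     # recovers ~0.05: extension as an expression of power
```

The linear pair is simply Hooke’s law τ = EIθ read in both directions; the nonlinear pair merely dramatizes that the monadic inversion θ = F(τ) survives whatever ‘personality’ a material displays, so long as its rule remains monotone.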

Leibniz conceived the hierarchical relationships required amongst these background ‘power centers’ in intriguing terms that he adopted from the developmental biology of his day.32 Specifically, it was commonly believed, on microscopic evidence, that complex organisms originate from ‘seeds with all of their organs intact,’ albeit greatly shrunken in size. Normal biological development consists largely in these primordial organs taking on enough food to eventually assume adult proportions (for Leibniz, upon death these same parts relinquish their hold upon fleshy matter and shrink back to their original, minuscule proportions). To make sense of this, the monadic units corresponding to each bodily part must belong within a ‘master and slave’ hierarchy where the teleological requirements of the entire body determine the needs of the heart, which in turn fix the ambitions of the left ventricle. And so on, in classic ‘fleas feeding upon other fleas ad infinitum’ manner:

[A] natural machine has the great advantage over an artificial machine, that, displaying the mark of an infinite creator, it is made up of an infinity of entangled organs. And thus, a natural machine can never be absolutely destroyed just as it can never absolutely begin, but it only decreases or increases, enfolds or unfolds, always preserving, to a certain extent, the very substance itself and, however transformed, preserving in itself some degree of life or, if you prefer, some degree of primitive activity. For whatever one says about living things must also be said, analogously, about things which are not animals, properly speaking.33

Indeed, an allied hierarchy is required to keep the nested teleological ambitions of a simple continuous body like a plank of wood coherent, where each component hunk above the critical ∆LC scale must cooperate in a slavish fashion with the ‘natural state’ desires of the beam as a whole. It is striking, moreover (although I’ve found no place where Leibniz argues thus), that a small arc a of a wooden ring A molded under tension will agreeably cooperate with the ‘natural state’ desires of the whole ring A as long as a remains enslaved to A. But once a is ‘liberated’ from its chains (e.g., we cut a free from A), it will often display a fresh set of ‘natural state’ desires.

Although this biological analogy now strikes us as fanciful, the underlying recognition that the physics of continua must be controlled through a tight pattern of downwardly directed integration between scale sizes is not: it is fully enshrined within the rigorous foundations of the subject. We have already highlighted the profound physical strength that lies concealed within the simple requirement that, in the absence of rupture or fusing, the pieces of a continuous body must normally remain firmly attached throughout all of their local distortions. To enforce this condition, we must demand an organized integration of behaviors through all scale sizes in the ‘top-down’ manner characteristic of our relaxation space (and modern measure theory more generally). As we observed, this complete scale invariance is only apparent, as other processes become secretly active below ∆LC. It is fortunate that master and slave monads cooperate so agreeably in their larger scale teleological ambitions, for that upper level solidarity allows us to ‘see’ them as comprising continuous bodies. Indeed, God has kindly arranged things so, because positing complete scale invariance represents a very productive descriptive ploy for limited intellects such as ours, who could never forge a reliable path to useful physical rules such as our Bernoulli-Euler equation without it. As merely finite calculators, it is best that we don’t usually ‘see’ the microstructure within materials, for that awareness might only encourage quixotic searches for bottom up modelings when we are better advised to exploit the physics avoidance virtues offered within mixed level teleology. As originally promised, our analysis of the mixed level thinking inherent in most engineering technique indicates that many of Leibniz’ assertions about ‘monadic organization’ are genuinely appropriate to macroscopic materials, as long as we interpret Leibniz’ strange words with a sympathetic charity.

After all, the continuum physics homogeneities we attribute to everyday materials are genuinely engendered through artificially smoothing out the cooperative stress/strain behaviors displayed across a band of size scales larger than some ignorable ∆LC limit. Leibniz is also right to observe that the blurring processes of vision make materials look more continuous than they really are. In fact, it is rather startling to realize that it is largely through quantum physics that we moderns are allowed to hold on to a firmly realistic notion of spacetime, albeit at the cost of (most likely) denying well-defined shapes and trajectories to the ‘particles’ that inhabit the lower tiers within this arena. That is, if we do not invoke intrinsically quantum behaviors involving ‘effective size’ to halt the regress, the complete scale invariance of classical continuous materials tends to pull us into that unending descent into the ‘labyrinth of the continuum’ that Leibniz feared. Quantum physics raises many paradoxes of its own, but at least it rescues us from these flavors of conundrum.

Returning to our earlier reflections on counterfactuals and ‘possibilities,’ what should a material’s ‘possibilities of division’ represent for Leibniz? Two answers suggest themselves: (i) ‘possibilities’ as they pertain to grainy monads of the real universe and (ii) ‘possibilities’ as they apply to the smoothed continua that we attribute to the world for the sake of descriptive utility. Insofar as (i) is concerned, the answer must be secretly determined by the full ‘personality’ rules that determine when one range of monadic influence ‘cooperates’ with another and when not. If I read Leibniz correctly, he imagines that if, per impossibile, we actually learned these rules in their infinitely complex glory, we would discover that only one overall outcome was ‘possible.’ On the other hand, according to (ii), the notion of ‘being divisible into segments of length ∆L,’ for every possible choice of ∆L, represents a fictive projection of our restricted upper length scale knowledge, albeit a form of ‘projection’ vital to effective descriptive procedure within physics. It is this second notion of ‘possibility’ that allows us to declare, ‘Of all its possible configurations, a loaded beam chooses the shape that supports its load W with the least expenditure of internal energy.’ Leibniz explains such distinctions as follows:

[I]n actual things, there is only discrete quantity, namely a multitude of monads or simple substances, indeed, a multitude greater than any number you might choose in every sensible aggregate. That is, in every aggregate corresponding to phenomena. But continuous quantity is something ideal, something that pertains to possibles and to actual things considered as possible. The continuum, of course, contains indeterminate parts. But in actual things nothing is indefinite, indeed, every division that can be made has been made in them.... As long as we seek actual parts in the order of possibles and indeterminate parts in aggregates of actual things, we confuse ideal things with real substances and entangle ourselves in the labyrinth of the continuum and inexplicable contradictions. However, the science of continua, that is, the science of possible things, contains eternal truths, truths which are never violated by actual phenomena, since the difference [between real and ideal] is always less than any given amount that can be specified. And we don’t have, nor should we hope for, any mark of reality in phenomena, but in the fact that they agree with one another and with eternal truths.34

By these lights, a trait qualifies as ‘necessary’ only if it holds within all ‘possibilities’ of our second class, a standard that liberates most human actions from the burden of appearing ‘necessitated.’
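The declaration about the loaded beam cited a moment ago can even be executed. The following minimal sketch (Python with NumPy and SciPy; the grid resolution, load and stiffness values are all invented for illustration) wanders through a discretized ‘relaxation space’ of clamped candidate shapes and lets a standard minimizer select the configuration that supports its load with the least expenditure of energy, exactly the second, fictive-but-useful sense of ‘possibility’:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative parameters; every numerical value here is invented for the sketch.
L, n = 1.0, 41                 # beam length and number of grid points
EI = 1.0                       # flexural rigidity
W = -1.0                       # uniform downward load density
dx = L / (n - 1)

def energy(h_free):
    """Potential energy of a candidate shape: stored bending energy minus work done by the load."""
    h = np.zeros(n)
    h[2:-2] = h_free           # clamped ends: h = 0 and (approximately) h' = 0 at both ends
    curv = (h[:-2] - 2.0 * h[1:-1] + h[2:]) / dx ** 2   # discrete second derivative h''
    bending = 0.5 * EI * np.sum(curv ** 2) * dx
    load_work = W * np.sum(h) * dx
    return bending - load_work

# 'Relaxation': descend from the flat shape to the energy-minimizing configuration,
# whose continuum limit satisfies the Bernoulli-Euler equation EI d4h/dx4 = W.
result = minimize(energy, np.zeros(n - 4), method="L-BFGS-B")
h_eq = np.zeros(n)
h_eq[2:-2] = result.x
print("midpoint deflection:", h_eq[n // 2])   # close to W*L**4/(384*EI) for clamped ends
```

Nothing in the run consults molecular detail below ∆LC; the candidate shapes canvassed are precisely the smoothed, fictive ‘possibilities of division’ described under (ii).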

Plainly, Leibniz’ notion of ‘possible world’ is markedly different from those posited by Saul Kripke and David Lewis. Specifically, it inherently rests upon the ‘mixed level’ strategic techniques that render our ‘relaxation space possibilities’ the natural constructions to consider in working out an effective physics for a loaded beam. As such, Leibniz’ viewpoint accords with my own conviction that a proper understanding of counterfactuals within science requires close attention to background explanatory context and the utilization of richer tools drawn from applied mathematics.

VI

With these props in place, it is but a short step to a remarkable Leibnizian defense of free will. We humans possess a range of personality-driven desires which we can act upon as freely as the middle level constraints of the world permit (which includes the opposing desires of other agents). God optimizes the world with these middle range constraints in view and then fills in with enough providently directed ‘air’ so that the ‘springs of bodies’ act in the fashion that our operative ‘final cause’ demands require. So we are genuinely free to make all sorts of dumb decisions in accordance with our own ‘personalities’; God has merely crafted the lilliputian ‘air’ that puts the ‘spring’ into our freely chosen steps.

Because we normally ‘see’ our surroundings only in macroscopically smoothed over terms, our operative notions of ‘contingency’ and ‘necessity’ ipso facto reflect our middle scale placement within the cosmos. From this point of view, the only available explanation for our normal activities X is that we choose them: we desire Y, X seems a suitable method to reach Y and nothing prevents us from executing X. True, if we could inspect the microscopic workings of our neurons carefully, we would observe preplanned ‘air’ particles shunting their tubular walls about in an impeccable efficient causation manner. But this fact in no way shows that our actions are not, at core, entirely free, for God has cleverly plotted the ‘harmonies’ of the world to optimally fulfill desires working from our size scales outward.

Of course, no one could swallow this remarkable fantasy completely today, yet we have noted that its continuum physics underpinnings have proven sound in virtually every respect. In tracing such matters through, Leibniz has practiced the descriptive philosophy of science in a more acute fashion than we generally manage today, largely because he has paid careful attention to the supportive rationales involved in setting up the canonical differential equations of classical continuum physics. By following Leibniz’ diagnostic steps closely, we have learned the importance of strategically avoiding complicated and unreliable patches of ‘bottom up’ modeling through deft infusions of larger scale data. Through such policies, ‘teleological’ appeals come to play useful roles within all corners of science, including physics, despite the common misapprehension that ‘physics doesn’t care for that kind of thing.’ Broadening this moral, we are likely to remain confused about explanation in general so long as we overlook the widely varying ways in which wise ‘physics avoidance’ can shape the form of an explanatory model.

In all of these regards, Leibniz’ remarkable prescience recalls The Lion’s musical tribute to human sagacity:

Men with unsurpassed brilliancy
Shine in the annuals of history
So let us try to turn our admiration
Into praiseworthy emulation.35

Notes

  1. ‘A Specimen of Discoveries of the Admirable Secrets of Nature in General’ in Richard T.W. Arthur, ed. and trans., The Labyrinth of the Continuum (New Haven: Yale University Press, 2001), p. 315.
  2. ‘New Proofs Concerning the Resistance of Solids,’ Acta Eruditorum, July 1684. I am indebted to Clifford Truesdell’s excellent discussion of this essay and the allied literature in The Rational Mechanics of Flexible or Elastic Bodies 1638–1788 (Basel: Birkhäuser, 1980), pp. 59-64. See also Edoardo Benvenuto, An Introduction to the History of Structural Mechanics, Pt I (New York: Springer-Verlag, 1991), pp. 268-271.
  3. The Philosophy of Logical Atomism (LaSalle: Open Court, 1985), p. 53. As it happened, Littlewood adopted Russell’s incorrect parsing of ‘determinism’ from ‘On the Notion of Cause.’ Littlewood’s essay can be found in A Mathematician’s Miscellany (London: Methuen, 1963).
  4. Only through sophisticated computer techniques has this traditional appraisal of safety begun to shift in a direction favorable to molecular modeling in recent years. Cf. Rob Phillips, Crystals, Defects and Microstructures (Cambridge: Cambridge University Press, 2001) for a reasonably up-to-date discussion.
  5. In theoretical modern continuum mechanics, traditional dissections into ‘fibers’ et al. are usually regarded as geometrical approximations for constitutive relationships that are better introduced in the latter guise (for a clear expression of this point of view, and the costs it entails, see Stuart Antman, ‘The Equations for the Large Vibration of Strings,’ American Mathematical Monthly 87 (1980)). It is rare to see this difficult policy followed in practical work even today and virtually impossible to locate within works written before the twentieth century, where such appeals to geometry also resolve the ‘physical infinitesimal’ problem we shall discuss later. Such ‘geometry instead of constitutive relations’ gambits almost always introduce paradoxical anomalies in their wake and the entire subject of beams is riddled with them (example: the ‘neutral axis’ of our beam is not supposed to become stressed under loading, yet it becomes longer). The sources of these paradoxes are interesting and strike me as suggestive of the manner in which philosophical conundrums also originate (see my ‘Of Whales and Pendulums’ in Philosophy and Phenomenological Research, forthcoming). But these issues will take us too far afield here.
  6. The simplest functions that can meet these demands are cubic polynomials and these comprise the usual ingredients employed in a modern ‘finite element’ approximation scheme for our beam equation. In such a polynomial p(x) = ax³ + bx² + cx + d, the coefficients of its variables can be determined from its endpoint positions and slopes alone and its midpoint curvature can then be obtained by evaluating d²p/dx² at the midpoint (a runnable sketch of this recovery appears after these notes). The only ingredient missing is a rule that determines how this local curvature is affected by the local weight W it must support.
  7. As observed in the previous note, we still require a rule ℜ to settle how much each element must internally curve to support its local weight burden—that ℜ will be developed in section (iii). Our segmentation trick presumes that if we can persuade our elements to support their local W allotments properly and if we also minimize the curvature at their joins with their neighbors, we will obtain a reasonable approximation to the entire beam’s constrained equilibrium under W. Note that the weight distribution W has thereby become tacitly segmented, for its burden now falls only inside the elements and not upon their boundaries. This tacit localization is vital to resolving the ‘physical infinitesimal’ problem to be considered further on.
  8. See H.J.M. Bos, ‘Differentials, Higher-order Differentials and the Derivative in the Leibnizian Calculus,’ Archive for History of Exact Sciences 14, 1974. Chapter 3 of D. Bertoloni Meli’s Equivalence and Priority (Oxford: Clarendon Press, 1993) also contains a useful discussion of these topics. Formulating transparent equations for continua requires partial differential operators, but most early continuum models work in stages with ordinary differential equations by invoking symmetries to reduce dimensions and then employing some form of spatial integration to convert an array of distributed forces into a ‘bending moment’ or allied sum (this is largely the task Leibniz attempted in his original article and failed to carry out entirely correctly). These ‘elements’ are then assembled into a final o.d.e. that describes a one-dimensional displacement of the target object from the x-axis. Only ‘turning on motion’ via d’Alembert’s principle demands partial derivatives. S.B. Engelsman in his Families of Curves and the Origins of Partial Differentiation (Amsterdam: Elsevier Science, 1984) claims that Leibniz and Johann Bernoulli had a p.d.e. equivalent available by 1697, but these represent very technical historical issues that I cannot evaluate competently. Without a doubt, Leibniz had a rough conception of the multivariate calculus even if his execution (understandably) faltered. Justifications for the dimension reductions common within elasticity comprise another aspect of the ‘essential idealization’ tradition, but we won’t pursue these issues here.
  9. Specifically, in unpublished work, by Michael Friedman and Sheldon Smith. I am grateful to Michael and Sheldon for many helpful discussions on issues pertinent to the present essay.
  10. One employs d’Alembert’s principle as sketched below. But such extended spaces still rely centrally upon the extra ‘possibilities’ needed in our static relaxation spaces and differ from ‘trajectory spaces’ accordingly.
  11. For a rather different illustration of how the ‘possibilities actively considered’ depend critically upon the explanatory strategy active in the background, see my ‘Beware of the Blob’ in Dean Zimmerman, ed., Oxford Studies in Metaphysics (Oxford: Oxford University Press, 2008).
  12. One of the characteristic difficulties of the standard Lewisian approach is that appropriate standards for evaluating ‘possibility closeness’ often seem nebulous in practice. Indeed, without further delineation of background explanatory goals, there is no canonical way to measure ‘closeness’ within a trajectory space. In our beam circumstances, an appropriate norm emerges directly from the ‘virtual work’ technique, underwriting my belief that we will be able to properly understand real life counterfactual evaluation only if we probe the details of background explanatory strategy to a greater depth.
  13. In short, Leibniz mistakenly located the neutral axis on the bottom of the beam. I have employed the Bernoulli-Euler modeling because, for our purposes, it is important to witness the intended methodology carried to crisp completion, even if Leibniz himself could not do that as early as 1684. It also makes it easier to locate thorough discussions of the relevant Sobolev spaces et al. in modern texts, for the Bernoulli-Euler beam comprises a stock example within virtually every primer on ‘finite elements.’ I have, however, described the ‘components’ of the Bernoulli-Euler element in Leibniz’ own ‘block and spring’ terms, rather than merely demanding, in the common textbook manner, that our element open like an elastic accordion under an applied moment. I believe that Leibniz’ idiosyncratic ‘element’ decomposition sheds some light on his notoriously fuzzy ‘force’ terminology, in the manner I briefly sketch in note 14.
  14. Or so I would approximately parse the famous distinctions of ‘A Specimen of Dynamics,’ in Philosophical Essays, ed. and trans. by Roger Ariew and Daniel Garber (Indianapolis: Hackett, 1989), p. 120 (subsequent references to this collection will be marked ‘AG’). Leibniz conceives of active force as more intelligently guided than passive resistance, since the former seeks a clear cut teleological goal while the latter dumbly resists a large variety of intrusions in more or less the same way (a response that Leibniz considers ‘confused’). While on these terminological topics, it strikes me that ‘primitive force’ corresponds to the ‘absolute’ unstressed equilibrium condition that our beam inherently seeks, while its ‘derivative forces’ capture its strivings insofar as they emerge within the circumstances of constrained equilibrium.
  15. It may avert potential misunderstanding to observe that mathematicians classify our virtual work/d’Alembert’s principle pairing as a prime example of a ‘variational principle’ even if it looks rather different than the familiar ‘Hamilton’s principle’ of modern physics (our pairing is more widely applicable within traditional mechanics than Hamilton’s formulation). It is worth observing that Leibniz derived Fermat’s allied ‘Principle of Least Time’ in optics through an element-like decomposition into stages—Jeffrey McDonough has a nice discussion of the importance of this alternate form of variational principle for Leibniz’ thinking in his ‘Leibniz on Natural Teleology and the Laws of Optics,’ Philosophy and Phenomenological Research, forthcoming. As I understand it, Hamilton himself derived his ‘principle’ within mechanics by imitating the piecewise deflections of the optical path one witnesses inside a telescope with a lot of mirrors and lenses. Cf. Darryl D. Holm, Geometric Mechanics, Pt 1 (London: Imperial College Press, 2008), chapter 1.
  16. In describing matters thus, I am being somewhat anachronistic with respect to Leibniz’ specific context, for his vortex theory regarded gravitation as properly a pressure transmitted through contact forces. Nonetheless, in his own formal modeling the effects of the gravitational load W directly lower the center of mass of the element, which represents, essentially, a ‘body force’ assumption. Once a decision has been made that elements must be artificially rigidified, then clear distinctions between body and contact forces often become obscured. In any case, Leibniz clearly views the inertial reaction (= Newton’s m.a) in ‘body force’ mode.
  17. In truth, the endpoint conditions have shifted slightly, depending upon how we treat the axial tension (which we have here ignored, because our quasi-rigidification decouples it from our problem). But this detail doesn’t affect the main point at issue.
  18. In mathematical terms, our element’s constrained equilibrium position is fixed by the condition that any slight wiggling δh of W around the equilibrium position hE will induce the springs to supply an exactly compensating ‘virtual work’ in the form of an additional work of turning τ δθ, where τ is the turning moment resistance supplied by the springs when the block is twisted to the equilibrium angle θE. That is, the balance W(hE + δh) = τ(θE + δθ) holds at the displacement/angle opening pairing (hE, θE) that represents the beam’s constrained equilibrium under the W loading (this condition is restated symbolically after these notes).
  19. Although elementary derivations often assume a rectangular cross-section in their beams, I’ve tolerated more complicated shapes like I- and T-beams to help readers charitably appreciate Leibniz’ struggles with how such objects bend under loading, several decades before Euler clarified such issues. Historically, techniques for resolving our contact/body force coordination problem through quasi-rigidification assume a bewildering number of forms, some of which are nicely surveyed in James Casey, ‘The Principle of Rigidification,’ Archive for History of Exact Sciences 43, 4 (1992). To me, one of the striking realizations from the finite element revolution of the 1950s is that, functionally, an ‘element’ modeling must only enforce the required relationships between load, endpoints and central curvature to obtain the correct differential equation in the ∆L → 0 limit, allowing for a huge variety of ‘mechanisms’ that can competently implement the required demands. It strikes me that such observations partially support Pierre Duhem’s distrust of the utility of ‘mechanical explanations’ per se. I will attempt to discuss such issues more fully in my forthcoming ‘Two Cheers for Anti-Atomism.’
  20. Assuming suitable smoothness, this converts to its classical form: EI d⁴h/dx⁴ = W.
  21. Although Leibniz employs Hooke’s law within his own beam modeling, Truesdell observes (p. 63) that he informed Bernoulli that, in general, such principles must come from experiment. For his views on free will, this ‘by a priori principle’/‘from experiment’ distinction proves critical, for the divide marks the limits of what appears necessary and what will appear merely contingent from our macroscopic level point of view, on which scale the efficient causation operations of the ambient ‘air’ remain largely hidden from us (our perception of materials is ‘confused’ because our visual system cannot resolve their fine grain structure into adequately articulated mental representations).
  22. My own musings on teleology began in discussions on the ersatz oppositions of contemporary metaphysics with my brother George. See his The Intensionality of Human Action (Stanford: Stanford University Press, 1989), especially chapter eight.
  23. ‘Letter to Jacob Bernoulli,’ 12/03/1703, quoted in Meli, p. 55.
  24. ‘Letter to Arnauld,’ 4/30/1687, AG, p. 84.
  25. Leibniz’ full views on the exact nature of the efficient causation rules we will observe at this level of analysis are rather sophisticated. At a truly microscopic level, we will not discover the fictitious little ‘springs’ of our Bernoulli-Euler element, but providently directed ‘air’ particles rebounding from the cell walls of the wood. But how do we account for their rebound? In two ways, Leibniz asserts. First, in these special circumstances, we can assume that the elastic rebound of the ‘air’ is close to perfect and we can accordingly invoke conservation rules to handle the bouncing in the standard manner that perfectly elastic scattering is still treated within freshman physics texts to this day (Leibniz correctly observes that conservation of vis viva is critical to this story—cf. Daniel Garber, ‘Leibniz: Physics and Philosophy’ in Nicholas Jolley, ed., The Cambridge Companion to Leibniz (Cambridge: Cambridge University Press, 1995), pp. 316-7). As such, the collisions are handled through efficient causation principles alone, without appeal to teleology. On the other hand, it violates the rational principle that ‘nature does not make jumps’ for bodies to recoil without distorting in the process, so if we scrutinize our ‘air’/wall collisions more closely, we will find that both parties alter their shapes very rapidly, first compressing and then returning to their original geometries. But how do these dumb objects remember their original configurations? We are naturally returned to the explanatory domain of ‘final causes’ (i.e., desire for original shape), now operative at scale sizes far below the conventionally ‘microscopic.’ However, these fresh forms of teleological appeal can be once again supported by efficient causation mechanisms operating at even lower scales, courtesy, as before, of God’s benevolent engineering. And ever downward this explanatory duality alternates, into the bottomless depths of Leibniz’ celebrated ‘labyrinth of the continuum.’ In the Essay on Dynamics, he writes: ‘But bodies can be taken as hard-elastic without denying on that account that the elasticity must always come from a more subtle and penetrating fluid whose motion is disturbed by the tension or the change of elasticity. And as this fluid must in its turn be composed of small solid bodies, themselves elastic, it is seen that this interplay of solids and fluids is continued to infinity.’ (Passage translated by Freda Jacquot in René Dugas, Mechanics in the Seventeenth Century (Neuchâtel: Éditions du Griffon, 1958), p. 478.)
  26. ‘On Body and Force: Against the Cartesians,’ AG, p. 254.
  27. ‘Against Barbaric Physics,’ AG, p. 319.
  28. Discourse on Metaphysics, AG, pp. 54-5.
  29. ‘Letter to Princess Sophie,’ 1705, cited in Samuel Levey, ‘Leibniz on Precise Shapes and the Corporeal World’ in Donald Rutherford and J.A. Cover, eds., Leibniz: Nature and Freedom (Oxford: Oxford University Press, 2005), p. 82. The suggestions in the present essay are quite congruent with Levey’s readings, in opposition to the many commentators who instead view Leibniz as embracing some flavor of phenomenalism.
  30. ‘Letter to Arnauld,’ 10/09/1687, in Philosophical Papers and Letters, Leroy Loemker, trans. (Dordrecht: D. Reidel, 1969), p. 343.
  31. Theodicy, trans. by E.M. Huggard (Eugene: Wipf and Stock, 2001), p. 251.
  32. Richard T.W. Arthur, ‘Animal Generation and Substance in Sennert and Leibniz’ in Justin Smith, ed., The Problem of Animal Generation in Early Modern Philosophy (Cambridge: Cambridge University Press, 2006). In my reading, Leibniz employs the top-down teleology of an animal body as a structural means for enforcing the topological prerequisites required to keep basic continuum quantities such as mass and force distribution coherent between size scales (Chapter 1 of Clifford Truesdell, A First Course in Continuum Mechanics (New York: Academic Press, 1977) provides a good overview of the sorts of measure theoretic control required). That these requirements are not trivial is shown by the fact that additional ‘compatibility equations’ must be imposed to prevent a material that appears well-behaved at a stress-strain level from displaying holes like Swiss cheese in its macroscopic force to distortion relationships. However, it is plainly awkward for Leibniz to presume that a human body, a branch off the old apple tree and an iron rod must all embody the same kind of monadic control, although, insofar as the needs of applied mathematics go, their problems qua continua are identical. At various places Leibniz hints at sundry devices that might allow an iron bar to act for a time as if it were controlled by an animal-like master-and-slave hierarchy, without requiring that the ersatz ‘iron bar master monad’ live on forever in the manner of a true animal’s ‘soul’ (the true monads might somehow act as shepherds tending a flock ‘where the sheep are so tied together that they can only walk with the same step and cannot be touched without the others bleating’). But I’ve not been able to extract any decided opinion from his texts on how that might work.
  33. ‘On Body and Force: Against the Cartesians,’ AG, p. 253.
  34. ‘Letter to de Volder,’ 1/19/1706, AG, pp. 185-6. The influence of Aristotle’s views on potential division is quite evident.
  35. ‘Heroes of Old,’ De 17364. This essay was originally delivered in honor of Hans Reichenbach at UCLA and its calypsonian tribute was intended to cover his great works as well. At core, this essay recommends that we return to formal philosophy of science in a Reichenbachian manner, albeit with a few extra tools from applied mathematics. I am grateful for the thoroughly helpful comments that I’ve received in the several forums where I have presented this material.
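Two small appendices, promised in notes 6 and 18 above. First, a runnable sketch (in Python; the endpoint data are invented) of the recovery described in note 6: the four coefficients of p(x) = ax³ + bx² + cx + d from endpoint positions and slopes alone, followed by the midpoint curvature from d²p/dx²:

```python
import numpy as np

def cubic_from_endpoints(h0, s0, h1, s1, dx=1.0):
    """Coefficients (a, b, c, d) of p(x) = a*x**3 + b*x**2 + c*x + d on [0, dx],
    determined by the endpoint positions h0, h1 and endpoint slopes s0, s1 alone."""
    A = np.array([
        [0.0,         0.0,     0.0, 1.0],   # p(0)   = d  = h0
        [0.0,         0.0,     1.0, 0.0],   # p'(0)  = c  = s0
        [dx ** 3,     dx ** 2, dx,  1.0],   # p(dx)  = h1
        [3 * dx ** 2, 2 * dx,  1.0, 0.0],   # p'(dx) = s1
    ])
    return np.linalg.solve(A, np.array([h0, s0, h1, s1]))

# Invented endpoint data for a single 'element'
a, b, c, d = cubic_from_endpoints(h0=0.0, s0=0.1, h1=0.05, s1=0.0)
midpoint = 0.5
print("midpoint curvature:", 6 * a * midpoint + 2 * b)   # p''(x) = 6*a*x + 2*b
```

Second, the equilibrium condition of note 18 restated symbolically (the stored-energy function U below is my own shorthand for the accumulated work of the springs, not the essay’s notation):

```latex
% Virtual work balance for the loaded element (note 18): an admissible
% wiggle (\delta h, \delta\theta) about the constrained equilibrium
% (h_E, \theta_E) produces no net work,
W\,\delta h = \tau(\theta_E)\,\delta\theta ,
% or, equivalently, equilibrium renders the total potential stationary:
\delta\bigl[\, U(\theta) - W h \,\bigr]_{(h_E,\,\theta_E)} = 0,
\qquad \tau = \frac{dU}{d\theta}.
```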