What Can Contemporary Philosophy Learn from Our Scientific Philosophy Heritage?
Mark Wilson, University of Pittsburgh

[I]t is so much easier to put an empty room tidy than a full one…
Michael Friedman’s The Dynamics of Reason (DofR) and my own Wandering Significance (WS) recommend that analytic philosophy revisit the themes of its ‘scientific philosophy’ roots, maintaining that philosophical progress remains most secure when it has anchored itself firmly in real life dilemmas.2 Through neglect of such ties, we feel that our discipline has slipped into a conceptual complacency from which it should be reawakened. One route to doing so is simply to think again about the methodological worries that troubled the pioneers of scientific philosophy, considered afresh in light of the gallons of discovery that have subsequently washed beneath knowledge’s bridge. This revivalist project comprises a central component within both of our books. But, in surface rhetoric at least, the conclusions we reach diverge substantially. In truth, there is less to these disagreements than meets the eye, but they grow out of somewhat different appraisals of where the true merits within our ‘scientific philosophy’ heritage lie. Since much of contemporary philosophy has been unwittingly shaped by these same half-forgotten episodes, I fancy that a wider audience may find a brisk survey of these issues useful, all in the hope of finding a manner in which we might again act as effective ‘critics of concepts.’ In the main, doing so requires that we think again about the central place that logic currently occupies within our thinking.
DofR stoutly criticizes present day philosophy for its conceptual timidity, observing that it rarely serves as the fierce enemy of ideological complacency that it did back in the glory days of Helmholtz, Mach, Poincaré and Einstein and their positivist followers, Carnap, Schlick and Reichenbach.3 In fact, a fundamental mission within LMS (= his commentary on WS) is to remind us of the strong ‘anti-classical’ credentials of the entire scientific philosophy movement (where ‘anti-classical’ in this context designates any approach to ‘conceptual content’ that opposes traditionalist presumptions that such ‘contents’ can be readily surveyed by mental introspection—WS cites Bertrand Russell’s Problems of Philosophy as a paragon of such ‘classicism’). As Friedman correctly stresses, strong strains of resistance to ‘classical conceptual thinking’ arose far earlier than Russell’s 1912, despite WS’s apparent suggestion that this revolt commenced thereafter. But the latter impression traces merely to my expositional ineptitude—it was not intended (I hadn’t attempted a faithful historical commentary at all). Indeed, it can be plausibly argued that reservations about ‘concepts classically conceived’ lay implicit in the early battles over the suitability of Newton’s form of mechanics. WS cites Russell’s exposition centrally only because I feel that he marked out the vital contours of the classical tradition in an especially admirable way (and, at the same time, indicated how many standard anti-classical gambits could be plausibly coopted under its banner). But most of the vital strands within anti-classicism were fully on the field before then. And I have always viewed the positivists themselves as comrades in the struggle against ‘classical concepts.’4
Nonetheless, these thinkers didn’t make much of an appearance within my narrative because I feel that they had pursued the wrong branch within historical anti-classicism. Specifically, they favored a view of ‘conceptual content’ that relies strongly upon logical structure in the manner that Friedman deftly outlines in LMS. Specifically, they believed that logic (including set theory) offers sufficient constructive tools to provide a general framework for all (anti-classical) conceptual construction, through ‘implicit definability’ in the arms of a strong, logically interlaced ‘theory.’ This presumption, although reasonable at the time, left the positivists ‘barking up the wrong gum tree,’ in Austin’s phrase, and the after-effects of this wrong turning strongly affect analytic philosophy to this day. In contrast, I discern a quieter and gentler anti-classical pattern within the historical record that promises a more satisfactory template for a revived ‘scientific philosophy.’ Within this alternative approach, logic proves decidedly subservient to other forms of inferential procedure.
Before we inspect these two patterns within scientific anti-classicism, let me reiterate a central theme within DofR: the varying recommendations of the scientific philosophy movement grew out of a fundamental quest for greater conceptual liberty within science. By the mid-nineteenth century, a flurry of strong winds had begun reshaping mathematics and physics in radical and unexpected directions.5 A paradigm of such pressures can be found in Friedman’s central example: shifts in attitude and inferential technique within geometry. Plainly, some novel strain of philosophical thinking was required to make sense of it all, for naïve, traditionalist assumptions about ‘conceptual content’ were apt to resist such advances. The watchword of the time was that the ‘free creativity of the scientist’ needs to be warmly respected in our liberal toleration of conceptual ingredients. But what, exactly, might that phrase mean?
We moderns have entirely embraced this ‘free creativity,’ in that protracted experience has taught us to accept whatever strange conceptual obligations Nature lays upon our plates. Indeed, perhaps we have become too complacent in our forbearance, for we are apt to forget that, in real life circumstances, acting in a truly open-minded fashion is not as easy as one might fancy in the armchair (it is much harder to renounce ingrained bigotry in practice than in theory). Repetitious episodes in the history of science remind us of these conceptual roadblocks: crucial notions are commonly greeted with bafflement and resistance when first proposed. Why does this continue to occur? DofR observes that the positivists provided the beginnings of a plausible answer to this puzzle in their treatment of the ‘relativized a priori’ and contends that we should not relinquish this explanatory beachhead lightly in our own thinking. Yet something has gone awry within contemporary philosophy, persuading it to do precisely that: we forget or minimize the conceptual difficulties of real life practice and allow the radical remedies they require to slip from view. In an original and distinctive stretch of diagnosis, Friedman traces much of this amnesia to the untoward influence of Quine’s writings, a theme I will amplify in the next section.
Central to Friedman’s argument are the great geometrical thought experiments suggested by Helmholtz and company, as these provided the crucial semantical pivot that prepared a path for Einstein’s later Relativity revolution. Helmholtz, for example, told a ‘Flatland’ story of ‘what one would see’ living in a world of constant but mild curvature obedient to a natural adaptation of traditional geometrical optics (despite the fact that he knew light was actually a wave phenomenon of some kind).6 But how can thought experiment whimsies play such important roles within conceptual development? The positivists’ response to this mystery, as reconstructed by Friedman, will be the subject of section II. His crucial observation is that, if contemporary philosophy is to resume its proper calling as a ‘critic of concepts,’ it must develop some parallel account that renders justice to the peculiar semantic effectiveness of such ‘thought experiment’ pivots. In fact, if we fail to attend to this chore, then we probably lack a plausible answer as to why philosophy is useful at all. The conceptual transitions facilitated by the musings of Helmholtz et al. should be regarded as emblematic of our fundamental intellectual obligations, as they provide an especially convincing illustration of how overtly ‘philosophical thinking’ aids intellectual progress. But current complacency about the breadth of our intellectual grasp (e.g., in the guise of glib appeals to ‘all logically possible worlds’) fancies that such developmental problems lie entirely in the past and that philosophy has no further role to play in such matters.
I agree with Friedman on all of this, but wonder if the old positivist explanation of conceptual resistance can’t be reworked in a different manner than he suggests. In this essay, I will employ my alternative diagnosis to illustrate some of the key theses about ‘semantics’ advanced within WS.
As noted above, the central impulse driving the rise of scientific philosophy grew from a desire to liberate the ‘free creativity of the scientist’ from what Gauss dubbed the nattering ‘complaints of the Boeotians.’ But what philosophical conception of ‘concept’ can underpin such an ideological liberalism? By the beginning of the twentieth century, it became widely assumed that scientific concepts could be supplied with adequate ‘meanings’ simply through an inclusion within a properly articulated theoretical frame (in the standard jargon, a logically articulated theory will implicitly define its embedded terminology). In its crudest formulations, this ‘organizational’ point of view claims that, in the final analysis, all that’s really vital for a ‘conceptual system’ is that it lay down reasoning pathways able to convey us from correct input measurement numbers to correct output numbers. As Richard Feynman puts the matter:

The whole purpose of physics is to find a number, with decimal points, etc.! Otherwise you haven't done anything.7
From this point of view, traditional worries about conceptual opacity or unintelligibility can be generically dismissed with an airy: ‘Hell, we don’t really need to “understand” nature in the manner you expect; we merely need to predict its courses accurately.’ Hertz, in a celebrated passage from his Principles of Mechanics, articulates the recommended policy of conceptual tolerance more delicately as follows:
The most direct, and in a sense the most important, problem which our conscious knowledge of nature should enable us to solve is the anticipation of future events, so that we arrange our present affairs in accordance with such anticipation... In endeavoring thus to draw inferences as to the future from the past, we always adopt the following process. We form for ourselves images or symbols of external objects; and the form which we give them is such that the necessary consequents of the images in thought are always the images of the necessary consequents in nature of the things pictured... [W]e can [thus] in a short time develop by means of them, as by means of models, the consequences which in the external world only arise in a comparatively long time.8
He correctly observes that many strands commonly utilized within mechanics do not harmonize with one another perfectly from a strict logical point of view:

[W]e have accumulated around the terms ‘force’ and ‘electricity’ more relations than can be completely reconciled amongst themselves. We have an obscure feeling of this and want to have things cleared up. Our confused wish finds expression in the confused question as to the nature of force and electricity. But the answer we want is not really an answer to this question. It is not by finding out more and fresh relations and connections that it can be answered; but by removing the contradictions existing between those already known, and perhaps by reducing their number. When these painful contradictions are removed, the question as to the nature of force will not have been answered; but our minds, no longer vexed, will cease to ask illegitimate questions.9
Note that Hertz’s sole complaint is that too many conceptual stipulations have been laid down in mechanics to cohere with one another logically.10 In his final sentence, he stresses the fact that he is not criticizing the notion of ‘force’ on the traditional grounds that its ‘content’ seems ‘occult’ or ‘non-understandable’; he plainly rejects such classical criticisms as ill-suited to the ‘free creativity’ of science. He sees his task as entirely one of reducing mechanics’ excessive collection of inferential principles to logical consistency without jettisoning important derived principles such as the conservation of energy along the way. It is worth observing that Wittgenstein chose Hertz’s final sentence as the projected motto for his Philosophical Investigations, a point to which I’ll return.11
Few steps are required to pass from Hertz’s point of view to the logic-centered picture of concepts favored by many in the positivist movement of the twentieth century and their sympathizers, where ‘conceptual content’ becomes explicitly elucidated as implicit definability within a self-consistent encompassing structure. The anti-classical approach to ‘conceptual content’ that this doctrine facilitates explains why this school came to regard a well-articulated logical structure in the sense of Principia Mathematica as the sine qua non of the theoretical frame required12 (indeed, a maximized liberalism suggests that these logical benchmarks should represent the only substantive restrictions upon scientific conceptualization). Under the banner of these liberal and deeply anti-classical attitudes, logic came to assume the ‘distinctive role’ that Friedman ably emphasizes in his LMS discussion of our twentieth century forebears. Once the required structural ties have been logically forged, the central task remaining to conceptualization is to indicate how such linguistic provisos can be fruitfully ‘coordinated’ with empirical reality. It then becomes natural to expect that a modest range of structural provisos must be activated before it even makes sense to speak of the ‘empirical predictions’ available within a specific framework. Sundry ‘prime the pump’ preliminaries must be set in place before the formalism can draw empirical water (plainly, prerequisites of this type reflect a deeply Kantian understanding of how ‘concepts’ function). Such framework-supporting presumptions constituted the ‘relativized a priori’ that Carnap and the early Reichenbach substituted for Kant’s more absolutist architectural requirements.
At first blush, it might seem as if one could plausibly retain an ‘implicit definability’ story of conceptual content while simultaneously rejecting any clean relativized a priori, on the grounds that developmental pressures are apt to quickly muddy what qualifies as ‘presupposition’ and what qualifies as ‘empirical aftereffect’ within science. Indeed, this ‘first blush’ alternative exactly represents Quine’s perspective on such matters. But Friedman maintains that this blithe abandonment of the ‘relativized a priori’ comes at a severe intellectual cost, for it cavalierly abandons a sophisticated understanding of philosophy’s proper role as a useful ‘critic of concepts.’ He astutely observes:

[Modern analytic philosophy] cling[s] to the idea of a specially characteristically philosophical enterprise (of ‘mere conceptual possibilities’) inherited from the ... analytic tradition, while simultaneously ignoring completely the distinctive conception of logic which made this tradition possible in the first place.13
Here Friedman captures a theme that WS likewise explores: much contemporary thinking on ‘concepts’ represents a hodge-podge of themes extracted from a philosophical past that was more disciplined within its own thinking. Many readers of WS have come away with the impression that it accuses contemporary figures such as David Lewis, Saul Kripke, Jerry Fodor et al. of being ‘classicists,’ when they plainly resist such a classification along several fronts. In truth, to consider them ‘classicists’ represents a gentler appraisal of their proposals than I actually entertain, for I largely discern an ill-sorted melange of classical and anti-classical themes that have descended, without proper scrutiny, from earlier philosophical traditions. And Friedman clearly believes the same thing.
However, he is concerned with a deeper point as well:
It is especially ironic that what Wilson calls Quine’s ‘hazy holism’ now gives aid and comfort to the conceptual ambitions of contemporary analytic philosophy by allowing us to imagine that our armchair philosophizing merely occupies an especially central and abstract level in our total empirical theory of the world—and, as such, it operates within the very same constraints, of overall ‘simplicity’ and ‘explanatory power,’ governing ordinary empirical theorizing. In this way, the great opponent of the analytic/synthetic distinction unwittingly made room for essentially a priori philosophizing through the back door. 14
The claim is that, in casting aside the positivists’ notion of ‘semantic prerequisite’ with the unwanted bath waters of the analytic/synthetic distinction, modern philosophy has relinquished the vital tools it requires to understand the peculiar difficulties of real life conceptual advance. Without some tangible foundation within a ‘relativized a priori’ (or something like it), all ‘concepts’ begin to look essentially alike, allowing their contemporary student to fancy that she no longer needs to probe the extensive annals of surprising adjustment within science before framing apt conclusions about ‘how concepts work.’ But in such generality folly lies, for one can truly appreciate why we need to worry about concepts only through closely examining issues such as the underlying causes of ideological resistance within science. DofR hopes that the great thought experiments of nineteenth century geometry will inspire us to become genuine conceptual critics again, yet our contemporary scholar sees nothing significantly ‘semantic’ in any of it—just a bunch of guys shifting the meanings of ‘distance’ and ‘time lapse.’ She will dismiss all of Friedman’s historical nitty-gritty with an insouciant, ‘So stuff happens; let’s get back to concentrating upon DOG and DOORKNOB.’ And Quine’s influence, Friedman claims, has laid the groundwork for such detachment from real life struggle.
As previously suggested, Friedman’s rejoinder emphasizes the need to explain why preliminary thought experiments so often prove essential to conceptual toleration: how can patent fictions prove essential prerequisites to proper intellectual grasp? He reminds us that the positivists’ approach to conceptual innovation emerged precisely from their efforts to render the semantic benefits conferred within the great nineteenth century discussions of non-Euclidean geometry into a formal and adequately diagnosed frame. This codification relies upon both the ‘implicit definability’ approach to conceptual content (to allow maximal ‘free creativity’ in science) and the isolation of a ‘relativized a priori’ as presuppositional core (to explain the basic manner in which the new constructs ‘coordinate’ with empirical observations). With respect to the latter, merely grasping the purely mathematical structure of a novel set of notions does not show how they can be successfully arranged around the familiar verities of measurement technique (qua mathematical structure, I understand the complexified projective plane well enough, but I don’t see how I could live in such a joint). Clearly, the great geometrical thought experiments provided this crucial bridging service: they adequately illustrate how the ‘coordinative’ resettlement of familiar experience within a non-Euclidean frame can become accomplished. As instruments of conceptual advance, these patently ‘philosophical’ constructions provided a necessary prerequisite for the eventual development of general relativity, despite the fact that the latter eventually adopted more complex strategies for employing non-Euclidean concepts than were exemplified within the original thought experiments. Without a mollifying thought experiment bridge, conservative parties had no easy pathway to appreciating the novel ‘possibilities’ offered within general relativity’s unfamiliar frame.
For the positivists, their ‘relativized a priori’ merely represented the formalized treatment of the fundamental mathematics-to-experiment ties required to pull non-Euclidean skeptics into richer conceptual pastures.
Accordingly, the positivists’ story provides an attractive portrait of a specific manner in which philosophy can serve as a useful ‘critic of concepts,’ opening up unexpected conceptual terrains in advance of knowing whether Nature will fondly embrace such semantic recalibrations or not.
To press these positivist advantages upon us, Friedman must first rectify a prevailing interpretative injustice. Contrary to Thomas Kuhn’s asseverations in The Structure of Scientific Revolutions, the positivists were fully aware that science is not invariably ‘cumulative’ in character and, as we’ve seen, much of their thinking was strongly shaped by a desire to diagnose the underlying character of typical bumpy resistance to ‘scientific revolutions.’15 Indeed, they appreciated the crucial role that thought experiments play in making the classical-physics-to-relativity transition appear rationally adjudicated, rather than merely representing the crude triumph of one doctrinal prejudice over another in the ‘political’ manner that sociologists of science often suggest. As Friedman perceptively stresses, the ‘relativized a priori’ captures the conceptual prerequisites of coordination that a critic must grasp before she can think of herself as ‘possibly living within a non-Euclidean world.’
Pace Quine and his followers, modern sanguinity with respect to ‘conceptual content’ provides no comparable means for explaining such common semantic patterns within scientific development, despite their ubiquity:
Our best current historiography of science requires us to draw a fundamental distinction between constitutive principles, on the one side, and properly empirical laws formulated against the background of such principles, on the other.16

In forsaking this distinction, recent philosophy has abandoned the diagnostic tools required to operate as effective critics of conceptual complacency in the manner characteristic of the entire ‘scientific philosophy’ movement. Worse yet, we have deluded ourselves that this noble calling ‘no longer represents philosophy’s affair.’ It is as if we were only willing to study meta-ethics and had abandoned all normative concern to the preachers and politicians.
Once again I agree with much of this and share with Friedman an unbounded enthusiasm for the peculiar semantic virtues offered within the great geometrical thought experiments. Nonetheless, I don’t believe that the positivists did a very good job in explicating why such proposals should qualify as ‘semantical,’ and in the next section I shall sketch an alternative diagnosis of their special character. Much of the problem traces to the well known fact that, as soon as one attempts to identify a presuppositional core within any concrete set of scientific doctrines, one quickly becomes stymied, for few of the ‘philosophical’ remarks attached to the sundry thought experiments that inspired Einstein’s advances actually became uncontroversially implemented within his finalized proposals. Nor are the ‘philosophical virtues’ commonly consigned to Einstein’s advances cleanly displayed within his completed theories either. For example, positivist-influenced commentators regularly claim that Special Relativity deftly avoids the extraneous aether/matter ontology that troubled Lorentz. Perhaps, but Einstein’s formal shift encouraged a point of view where matter should be modeled by point particles situated within an electrically active vacuum. As soon as any charged particle of this type accelerates, one runs into the same horrible mathematical divergences that Lorentz attempted to cure by other means (any manner of matter/field pairing is apt to prove just as mathematically troubling as the aether/matter coupling). Likewise, general relativity is often credited with ‘explaining’ gravitational activity as freely falling matter following geodesics.
Yet Einstein’s finalized equations demand that its ‘matter’ become continuously distributed in a manner that makes ‘freely falling matter follows geodesics’ mean scarcely more than ‘geodesics lie on geodesics.’ The resulting melange of mush and avoidance renders the clean ‘semantic prerequisites’ of the positivists vulnerable to Quinean objections.
The generic root of such difficulties traces to the brute fact that science rarely advances as a complete and coherent ‘framework’ in any reasonable sense of the term (it ‘comes at us in sections,’ to quote Fred Astaire in The Band Wagon). At present, there is no happy method for wedding Maxwell’s electromagnetism or general relativity to a fully embodied ‘theory of matter’ within wholly classical circumstances (let alone to quantum theory). Practitioners typically tiptoe around these alignment problems by modeling matter’s influence opportunistically as ‘boundary conditions’ or ‘transmission coefficients’ in manners that plainly cannot capture the general situation. In point of fact, what Einstein actually contributed to science was a motley collection of equations and inferential techniques that clearly ‘have some truth in them’ but don’t cohere with one another perfectly and also leave huge swaths of physical behavior quite unspecified (none of this should be regarded as belittling his great achievements at all; it is simply a comment upon the typical character of physical advance). Of course, the fact that classical mechanics had evolved its ‘doctrines’ in this haphazard and opportunistic manner was the source of the conceptual conflicts that bothered Hertz.
The positivists were forced to freeze assumptions about ordinary matter (e.g. ‘measurement rods’ on a macroscopic scale) into artificial fixity because these prerequisites needed to serve as the stable core around which the ‘free creativity of the scientist’ can liberally range.17 These doctrinal requirements have the unfortunate side effect of locking conceptual advance in celestial mechanics to presumptions about the behaviors of everyday matter on a localized scale. Yet no science yet devised has managed to approach both tasks in a truly harmonious manner.18 Such are the familiar problems attaching to the ‘relativized a priori’ and ‘coordinative definitions’ of the positivists. Friedman is, of course, fully aware of these woes, but he advises us not to abandon our tasks as conceptual critics lightly on this account. Furthermore, in recent work, he has offered an increasingly sophisticated set of proposals for repairing these deficits from an essentially Kantian perspective. Still I am not convinced, partially because I don’t understand the proposals well enough to sort out, at any historical moment, what should qualify as a pre-empirical ‘presupposition’ within a science (Friedman cites the geometry of inertial frames as an example within ‘classical mechanics’ but the messy subject called ‘classical mechanics’ in actual practice contains many strains that do not rest unambiguously upon this foundation).
Of course, the fact that real life science often contains ‘untidy rooms’ stuffed with furniture that stylistically clash with one another represents the deepest source of the Quinean conviction that we can repair its problems only in a gradual, ‘Neurath’s boat’ manner that is unlikely to reveal any standing ‘presuppositions.’19 However, lacking an alternate understanding of the bridging capacities offered within the prototypical thought experiments, it becomes hard to summon philosophy to its critical duties on Friedman’s model, as long as it appears likely that the only ‘semantic prerequisites’ philosophers will plausibly codify are destined to wind up rejected or ignored within the very theories for which they allegedly serve as ‘prerequisites.’
In LMS, Friedman deftly recounts how twentieth century analytic philosophy, from both ‘classical’ and ‘anti-classical’ perspectives, commonly assigned logic and set theory a very central role within their various accounts of ‘concept formation.’ It is a characteristic theme within WS that this logistic emphasis was, in fact, a mistake and that we should instead attend more closely to other inferential invariants in our thinking about the ‘core contents’ of critical concepts (set theory has an important role to play, but it is not that assigned within the story that Friedman recapitulates). Although I had not thought of these issues when I wrote my book, it strikes me that Friedman’s vital insistence upon the semantic utility of the great geometrical thought experiments might be accommodated within WS’s framework without any requirement that we clean up science’s messy stables in the thoroughly hygienic manner of the positivists.
As indicated in an earlier footnote, I generally find histories of science pursued in Kuhn’s manner to be rather one-sided, for they typically ignore the gentler forms of historical adjustment within physics and mathematics that often prove crucial to the ‘semantic’ questions at issue. Here I have in mind the gradual shifting of textbook ‘definitions’ and the highlighting of fresh mathematical tools (‘invariant’, ‘potential’) that occur upon an almost daily basis within a healthy discipline, generally unfolding in a quiet and gradualist manner. Despite the near invisibility of these shifts, they frequently exemplify probing ‘critiques of concepts’ as philosophically meritorious as those exemplified within splashier episodes such as the non-Euclidean revolution. To appreciate the importance of these adjustments, let us look more closely at the inferential difficulties that commonly trouble working science.
In particular, let us attend to the reliability of localized reasoning methods, such as the mathematical techniques that a physicist or engineer employs to extract nuggets of useful conclusion from otherwise recalcitrant equations (to ‘get the numbers’ that Feynman wants). And here we must observe that the most powerful inferential schemes utilized within applied mathematics often prove erratic in their performance: they sometimes work well and sometimes work badly, without displaying evident marks to distinguish the cases. Allied woes are rarely evident within the well-behaved inferential patterns of first order logic, whose atypically cooperative behavior has bewitched many philosophers into overlooking the computational logjams that substantially shape the topography of working science. Indeed, my main complaint about twentieth century positivism is that it inadvertently encouraged this logic-entranced obliviousness to the substantial hazards of real life inference, a distracted state of mind from which contemporary philosophy has not yet recovered.
As a prototypical case of erratic inferential behavior, Cauchy’s worries about ‘rigor’ began in puzzlement as to why widely employed series expansion techniques in celestial mechanics allow us to solve Kepler’s formula M = E - e sin E correctly over certain ranges for e but not over others. What hallmarks must a reasoning principle further display if it is to be justly trusted? Understanding such issues often requires that the mathematician provide what WS calls a ‘picture of our computational place in nature.’ That is, they construct a model that welds together the reasoning pattern R and worldly behaviors C to which the inferential pattern allegedly answers. Our mathematicians then investigate how ably R manages to relate the membership of C within this generic setting. We rarely know much concrete detail about C on an a priori basis: we are, after all, hoping to establish that the conclusions we can reach by applying R to the starting data D will prove trustworthy in any conceivable circumstances in which we imagine R might be applied. So mathematicians frame the relevant possibilities in huge classes C wide enough to model any feasible manner in which the real world might make the data D true.
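Cauchy’s puzzle is easy to reproduce today: the standard power series expansion of E in powers of the eccentricity e breaks down for sufficiently large e, whereas a direct Newton iteration on Kepler’s equation itself behaves reliably for any e < 1. The following is a minimal sketch of the latter; the function name, initial guess, and tolerances are my own illustrative choices, not anything drawn from the text.

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=100):
    """Solve Kepler's equation M = E - e*sin(E) for the unknown E
    by Newton iteration on f(E) = E - e*sin(E) - M.
    f'(E) = 1 - e*cos(E) stays positive whenever 0 <= e < 1."""
    E = M  # crude but serviceable initial guess
    for _ in range(max_iter):
        step = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= step
        if abs(step) < tol:
            return E
    raise RuntimeError("Newton iteration failed to converge")

# Check that the returned E really satisfies M = E - e*sin(E),
# even at a high eccentricity where the series in e is useless:
E = solve_kepler(1.0, 0.9)
residual = abs(E - 0.9 * math.sin(E) - 1.0)
```

The contrast illustrates Wilson’s point: the two reasoning rules target the same equation, yet one works across the whole range of e and the other does not, and nothing on the surface of the series manipulations advertises where they will fail.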
Let me illustrate such a ‘computational portrait’ in a bit more detail.20 Most basic physical processes are governed by differential equations (or something very much like them). As a simple case in point, suppose that an ordinary differential equation (call it ‘E’) of the form dx/dt = f(x) governs how a bead slides along a wire. As such, the bead must travel along some unknown curve C within the general class of continuous curves C. Our formula E states that the velocity of the bead is linked at every temporal moment to its current position by a simple f-rule (whenever the bead reaches the point p, its current velocity must be f(p)). As such, E only fixes how the bead behaves at each infinitesimal moment; it doesn’t directly tell us how the bead will move over any longer time interval. We now hope to find some reasoning rule R that starts with the equation E and p’s location as its starting data D and reasons to reliable conclusions about how the bead will behave at some later time t*. But don’t be misled by the exact solutions one is taught in freshman calculus; such manipulative techniques only rarely work. If we want general purpose reasoning methods, we need to look at so-called ‘numerical schemes’ of the ilk commonly installed within computer programs. A basic prototype of such a policy is Euler’s method. Indeed, its workings are quite intuitive. If the f in our equation is readily computable, we can divide the bead’s temporal development into short time steps Δt and argue that its velocity at the starting time t0 should supply a plausible estimate for where the bead will later land at time t0 + Δt. That is, starting at p and t0, Euler’s method instructs us to draw a straight line leading to the point q where the bead would land if it had maintained a constant f(p) feet/sec velocity over the entire Δt interval.
Applying Euler’s procedure once again at q, we next plot the point to which our bead would relocate at time t0 + 2Δt if it had maintained a constant f(q) velocity after passing point q. Repeating this process, we obtain a broken line graph CE which, if the reasoning method is sound, should lie fairly close to the unknown true solution C, at least for a reasonable span of time.
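The broken-line recipe just described can be sketched in a few lines of code (a hypothetical illustration of my own; the equation dx/dt = x is chosen only because its exact solution e^t lets us measure how far the computed CE sits from the true C):

```python
import math

def euler(f, x0, dt, n):
    """Broken-line (Euler) estimate: n straight steps of slope f(x)."""
    x = x0
    for _ in range(n):
        x += dt * f(x)   # pretend the velocity stays f(x) over the whole step
    return x

f = lambda x: x          # dx/dt = x on [0, 1]; exact solution x(t) = e^t
for dt in (0.1, 0.01, 0.001):
    estimate = euler(f, 1.0, dt, round(1.0 / dt))
    print(f"dt = {dt}: error at t = 1 is {abs(estimate - math.e):.2e}")
```

For this well-behaved f, the error shrinks roughly in proportion to Δt, just as the ‘cocooning envelope’ intuition suggests it should.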
But is Euler’s rule genuinely reliable? Without further restrictions on E, we can immediately see that its reasonings must prove inherently fallible, for suppose that E supplies the bead with a sharp kick inside the first Δt interval (it is easy to invent an f term that will do this). Rather than remaining near the bead’s true C location q, the CE we calculate will mistakenly augur that the bead travels to a completely erroneous position q*. Such discrepancies between predicted and actual landing places generally become more severe as the plotting steps are repeated, possibly eventuating in a broken line plot CE quite unlike the bead’s actual trajectory C. This is no idle worry, for if one is not careful in real life computing, the blind application of even the best computer routines will cheerfully plot alleged ‘solutions’ that bear no resemblance to the true curves carved out in Mathematicsland by the target differential equations.21 Obviously, such difficulties trace to the fact that we finite humans can only inspect how C behaves at staggered Δt intervals, allowing sufficient wiggle room for equation E, working at an infinitesimal scale, to decide to do something utterly different to our bead in the moments when we’re not ‘looking’ (i.e., calculating). We must therefore hope that we can find some way to restrict the E’s to which Euler’s method is applied so that our calculated CE’s will stay true to the curves C in C that meet additional provisos M. As it happens, there are a range of restrictions on the f’s permitted in E that will turn this trick. But establishing firm ‘correctness’ results of this type is often rather hard, and commonly the results fall short of the assurances that practicing physicists would like to see (a substantial amount of scientific reasoning advances in the hazy twilight where the relevant R’s are neither known to be sound nor unsound).
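A ‘sharp kick’ of the sort just described is easy to manufacture (again a toy example of my own, not Wilson’s): with f(x) = -50x the true solution decays almost instantly, but a step size too coarse to notice what E does between our inspections makes the Euler plot explode instead:

```python
import math

def euler(f, x0, dt, n):
    """Broken-line (Euler) estimate: n straight steps of slope f(x)."""
    x = x0
    for _ in range(n):
        x += dt * f(x)
    return x

f = lambda x: -50.0 * x          # exact solution x(t) = e^(-50 t): rapid decay
exact = math.exp(-50.0)          # true x(1), astronomically small

coarse = euler(f, 1.0, 0.1, 10)      # dt = 0.1: each step multiplies x by (1 - 5) = -4
fine   = euler(f, 1.0, 0.001, 1000)  # dt = 0.001: each step multiplies x by 0.95

print(f"exact x(1) = {exact:.2e}; coarse Euler = {coarse:.2e}; fine Euler = {fine:.2e}")
```

The coarse broken line grows by a factor of four per step while the bead it allegedly tracks has long since settled to rest: a CE bearing no resemblance whatsoever to C.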
In weighing the reliability of Euler’s rule within this C-based modeling, I hope that the philosophical reader recognizes that we are considering a non-logical equivalent to what logicians call a soundness question22 with respect to a logical reasoning rule R: beginning with premises Γ (≈ our data D), will the conclusions reached by R remain true in every model M ∈ M (≈ C ∈ C) satisfying Γ? R is said to be semantically justified if it proves ‘sound’ or ‘correct’ under such a generic correlational study. A chief purpose of a ‘semantic portrait’ of descriptive language is to provide the underpinnings for ‘soundness’ evaluations such as this.
Shortly we’ll investigate the manner in which important conceptual shifts are often associated with studies of this kind. However, let us first underscore a fundamental reason why we should expect that most rules of reliable reasoning will require non-trivial correlational scrutiny of this type. Inspecting the workings of Euler’s rule, it is reasonable to feel: ‘Gee, this rule clearly is operating with the right idea.’ But why do we think that? Well, we’d like to presume that, if we could draw our straight-line estimates over increasingly brief time steps Δt, the complete collection of broken line estimates CE we would frame will eventually surround the particle’s true trajectory C as a cocooning envelope. Of course, only a god could truly trace the infinite mesh of estimates required in our hypothetical collection, forcing us, as mortal geometers, to terminate our labors at some finite step size Δt. As we saw, that enforced work stoppage potentially allows our E to supply an unexpected kick that our truncated reasoning process fails to notice. ‘Close, but no cigar,’ we might say, because such premature terminations make real life computers churn out absolutely dreadful inferential conclusions.
This gulf between our bead’s real life curve C and the broken line estimate CE we can concretely calculate provides an emblematic illustration of our general computational position within nature: our actual inferential capacities commonly relate to real world behaviors in a mildly transcendental fashion, in the sense that fully deducing the bead’s true physical circumstances demands the satisfaction of more conditions than we can inferentially check. In this regard, recall Richard Feynman’s claim that ‘The whole purpose of physics is to find a number, with decimal points, etc.’ But our correlational studies warn us that, in the immoral words of Mick Jagger, ‘you can’t always get what you want.’ Feynman may want those numbers, but there may be no computational path that will take him fully there. It is a brute fact that many natural processes run their courses in manners that lie aslant to any computational routine implementable by man or upon any of his swiftest machines, just as a ruler and compass alone cannot successfully trisect an angle. From a predictive point of view, this discrepancy between reasoning capacity and physical behavior is hardly ideal, yet it is surely preferable to living in a world whose unfolding processes relate to our calculations only in some horribly transcendental manner where we never get even close to a correct answer.
In such ‘transcendental’ circumstances, the need to consider inferential practice from a ‘semantical point of view’ becomes practically inevitable. Insofar as logic is concerned, this point of view is defended in influential works such as Michael Dummett’s well-known ‘The Justification of Deduction.’23 However, Dummett is far too absolutist in his point of view—a fault that arises, I think, from concentrating upon logical rules to the exclusion of the wide variety of other common inferential principles such as Euler’s method (these, after all, play a far larger role in the successful prosecution of everyday and scientific enterprises). I explain what I mean by ‘absolutist’ in due course.
Returning to our central concerns, such C-based investigations often disclose that inferential schemes which qualify as unsound when judged by the lights of some received picture will prove correct if the correlational model studied is shifted to another interpretative basis (C*, say). We have noted that physicists commonly employ inferential principles in circumstances that the applied mathematician cannot semantically underwrite. If such successes occur repeatedly, it supplies a strong symptom that the semantic underpinnings of the reasoning method have not been rightly diagnosed. ‘A method which leads to true results must have its logic,’ advised the nineteenth century mathematician Robert Woodhouse, and this hidden ‘logic’ is often uncovered by setting old equations and inferential rules upon altered supportive moorings C* (these shifts represent the quiet dimensions of conceptual reevaluation that gaudy Kuhnians generally ignore). And it is not uncommon for a new reinterpretation to appear in hindsight as if it would have provided a better reading for the affected equations and data all along, if only earlier practitioners had possessed enough mathematical savvy to formulate the supplanting C*. For example, a large number of venerable physical equations dating to the nineteenth century have become reinterpreted within modern practice in quite sophisticated ways, usually through shifting the reading of the derivative operation ‘d/dt’ from its standard δ/ε gloss to a smoothed-over ‘weak’ reading or to a stochastic operator. Both of these replacements require considerably more technical machinery than the standard δ/ε setup, but careful reflection upon physical principle commonly reveals that the adjusted readings better suit the physical circumstances to which the target equations have been long applied.
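As a gloss on the ‘weak’ reading just mentioned (the definition is standard material, though the example is mine): instead of demanding the pointwise δ/ε limit, one declares v to be the weak derivative of u whenever the integration-by-parts identity holds against every smooth, compactly supported test function φ:

```latex
\int u(t)\,\varphi'(t)\,dt \;=\; -\int v(t)\,\varphi(t)\,dt
\qquad \text{for every test function } \varphi \in C^{\infty}_{c}.
```

Thus |t|, which lacks a classical derivative at t = 0, acquires the step function sgn(t) as its weak derivative, and equations long applied to kinked or shock-laden solutions can retain their old notation under the new mooring.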
For such reasons, practitioners often resist the presumption that they have ‘changed the meanings’ of the old equations: ‘No, we didn’t change their meaning; we finally got it right.’ In WS, I employ the evocative history of Oliver Heaviside’s operational calculus to illustrate this tranquil vein of conceptual recalibration, but allied examples can be found everywhere in modern applied mathematics. Hans Lewy (who pioneered several important ‘reinterpretations’ of this ilk) remarked evocatively:
[Mathematical analysis only supplies] hesitant statements... In some ways analysis is more like life than certain other parts of mathematics... There are some fields... where the statements are very clear, the hypotheses are clear, the conclusions are clear and the whole drift of the subject is also quite clear. But not in analysis. To present analysis in this form does violence to the subject, in my opinion. The conclusions are or should be considered temporary, also the conclusions, sort of temporary. As soon as you try to lay down exact conditions, you artificially restrict the subject.24
‘Analysis is like life’—I like that and would add—‘and conceptual developments in every form of life commonly resemble those encountered in analysis.’ Specifically, old words can be supplied novel supportive readings without undue fuss, for we frequently replace an implicit C modeling with a substantially altered C* without feeling that the terms’ ‘core meanings’ have been substantially betrayed.
As such, these quiet episodes of semantic recalibration usually display a ‘Neurath’s boat’ character, in that large blocks of successful inferential practice are maintained in place while the accompanying portrait C of their correlational underpinnings is adjusted, sometimes rather radically and in a manner that often opens fresh dominions to inferential exploitation in unexpected ways. And despite the apparently conservative character of the bootstrapping, the shackles of traditional constraints upon conceptual liberty can be cast off quite as effectively, if less dramatically, as through the blunt advocacy of a novel axiomatic frame. Indeed, ‘definitional readjustments’ in science generally display the same brutal indifference to established conceptions of ‘conceptual content’ as do the axiom supported novelties favored by the positivists; the value of a proffered semantic picture C is adjudicated solely in terms of how ably it manages to capture the de facto correlational circumstances that allowed established usage to work as well as it has, no matter how poorly the prior practitioners of those practices appreciated these word-to-world supports. In such revisionary policies, we witness the same strongly ‘anti-classical’ attitudes to ‘conceptual content’ that the scientific philosophy movement has always championed, but less entangled with the constrictive holism that makes the ‘framework principles’ of the positivists so hard to match with real life conceptual endeavor.
Over the course of the twentieth century, the conceptual conflicts within Newtonian mechanics that troubled Hertz became gradually resolved, not through finding an all-embracing axiomatic frame of the type he anticipated, but rather through atomizing classical physics technique into an accumulation of localized inferential gambits enjoying different semantic readings according to applicational context. The classical physicists in the nineteenth century had followed a variety of inferential signposts in blazing pathways to the celebrated modeling equations that modern engineers employ fruitfully in macroscopic circumstances even today. Figures like Hertz had presumed that these inferential conquests must rest upon some shared, if elusive, semantic foundation susceptible of axiomatic codification, but in making this assumption they had been (understandably) fooled by a common species of semantic mimicry, for rules R and R* can look very much alike, yet their word-to-world supports prove altogether different. Modern studies have shown that virtually all of the standard Victorian reasoning gambits were sound within the localized circumstances C where they were applied, but these supports C do not remain the same when we shift from one inferential success to another. So a proper resolution of the conflicts that troubled Hertz assumes a contextualized form: ‘within applicational circumstances X, the proper setting for adjudicating the correctness of rule R should be C, but in applicational circumstances Y, it should instead be C*.’
In fact (although it is impossible to provide supportive argumentation here25), general theoretical considerations of an applied mathematics character suggest that frequent drifts in underlying semantic support should be commonly anticipated within well adjusted collections of macroscopic variables—viz., smallish clusters of descriptive vocabulary suitable for the very complex systems we encounter in everyday life. Our unavoidable observational and computational limitations force us to traffic in such ‘reduced quantities’ and their characteristic drifts in correlational support warn us not to hold such terms prissily to complete logical rectitude. The main intent of WS is to develop this line of thought in (probably excessive) detail.
Accordingly, my own disinclination to cite the positivists as optimal exemplars of ‘scientific philosophy’ stems from our section II complaints that their excessive ‘implicit definability’ demands require notions such as ‘force’ and ‘rigid body’ to lock together across broad domains when a proper resolution of the historical conflicts requires localized forms of strand-by-strand disentanglement. A symptom of the critical ills induced by such ‘holism’ can be found in the large and almost wholly misleading literature upon ‘the problems of classical mechanics’ spun out in the 1950's under the influence of positivist thinking.26 Virtually every recommendation in such commentaries would have precluded advance in modern continuum mechanics had the practitioners of the subject paid any attention to them. Unfortunately, the same positivist-generated mythologies continue to adversely affect philosophical thinking today, even though the inspirational lights of the original program have largely dimmed.
In any event, it strikes me that the ‘semantically adjusting role’ of the classic geometrical thought experiments might be better accommodated within our gradualist story, which no longer demands that trustworthy ‘framework principles’ must be articulated within some coherent and all embracing theoretical frame. Although a proper analysis would take us too far afield, any investigation of the subtle ways in which inferential principles are actually weighed within real life applied mathematics reveals a number of ways in which patently incomplete or simplified models are commonly employed to ‘try out’ inferential procedures R in a bootstrapping fashion when one can’t otherwise gain a reliable sense of how such reasoning techniques function over their intended domains. Thus applied mathematicians often develop a preliminary sense of how ably reasoning rule R operates over various well understood but inadequate models M* before they apply R to the subject matter in which they are actually interested (such behaviors will be presumably captured within some family of models M, but we can often learn very little about these M-structures until we begin to trust inferential tools like R). But directly establishing the soundness of R over the desired M is often an intractable task, so our applied mathematicians satisfy themselves with checking R’s soundness over the simpler M*.27 Accordingly, gauging the ‘contents’ of key concepts in actual practice often requires that we first investigate how word usage correlates with the features witnessed within unrealistically simplified but easily checked models M*.
It strikes me that the utility of Helmholtz’ ‘thought experiments’ in geometry fits this common bootstrapping pattern rather nicely: specifically, his patently unrealistic postulates with respect to geometrical optics and rigid body movement allow a rough appraisal of ‘what it’s like to live in a non-Euclidean world’ in a manner that makes it prima facie plausible that mathematics of this type might eventually find valid empirical application. As such, we can explain why such thought experiments ‘open up possibilities’ essential to conceptual advance exactly in the manner that Friedman emphasizes, without thereby demanding that our bootstrapping models M* must embody any specific prerequisites operative within the more realistic M that science eventually embraces. But to accept this point of view, one must adopt WS’s milder position that semantics-for-the-sake-of-soundness-proofs should not be viewed as completely capturing in a traditionalist manner the hypothetical ‘established contents’ of the words under examination. Instead, the correlational studies embodied in successful soundness proofs should be primarily regarded as important vehicles for reorienting established usage towards higher standards of performance, exactly in the ‘get me the right numbers’ vein that the ‘scientific philosophy’ tradition has always emphasized. The conceptual utility of the classic geometrical thought experiments reflects the observation that even correlational studies with respect to somewhat unrealistic modeling families M* can serve parallel redirectional purposes.
As we first observed, DofR sounds a vital rallying call to arouse philosophy’s battalions from their complacent slumbers, no longer serving as ‘critics of concepts’ in the great traditions of ‘scientific philosophy.’ Unfortunately, DofR’s adoption of neo-Kantian vocabulary, if left unmollified, is apt to make this resumption of duties appear a rather daunting challenge, for present day academic philosophers, given their less-than-total immersion in the mysteries of nature, are unlikely to suggest ‘framework’ alterations able to assist modern physics directly on its way in the manner of the non-Euclidean revolution. Friedman’s borrowings of positivist terminology like ‘the relativized a priori’ make us wonder, ‘Is he asking us to dream up wrong-headed demands upon physical theory à la Mach or Poincaré in the hopes that someday a proper physicist will be inspired to do something good after reading our crude philosophical follies?’ Such rhetorical difficulties, it strikes me, trace to the fact that the positivists never found an adequate way to accommodate the complex ‘coming at us in sections’ qualities of real life conceptual advance. True, as Friedman emphasizes in his LMS note, they fully recognized that science’s prevailing ‘semantic prerequisites’ are apt to vary from one historical epoch to another, but their heavy reliance upon logic as the chief organ of conceptual clarification impeded an adequate treatment of the partial reorientations of practice highlighted here. Thus their optimistic assumptions that able logicians would surely be able to iron out the ‘conceptual problems of classical mechanics’ through careful axiomatization seem rather comical in retrospect, for the inferential woes trace to much deeper structural difficulties than that (this is a major theme within WS).
But once we no longer require that improving work upon ‘conceptual contents’ must operate at the ‘total framework’ level, then training of a traditional sort can render philosophers particularly skilled in diagnosing the types of tension that inevitably arise when one basin of usage rubs against another, as well as suggesting ‘thought experiment’ arrangements that might improve local strands of usage in unexpected ways. By returning to this familiar ‘analytical’ mode, philosophy might plausibly display the same ‘critic of concepts’ sagacity that one finds in great twentieth century thinkers such as J.L. Austin and Wittgenstein (and, before them, Leibniz, Berkeley, Reid and the ‘scientific philosophers’ of the late nineteenth century). Indeed, through precisely training our attention upon conceptual conflicts as they arise across a wide variety of disciplines, we might offer wary but useful advice in the wandering ways of words to specialists within their particular fields, even if we lack the empirical specifics required to unravel their localized dilemmas fully. In operating thus, philosophy can serve to ‘open up possibilities’ in exactly the manner Friedman suggests without requiring the extraordinary powers of intellectual engineering that the old positivist rhetoric seems to demand. Unfortunately, through the same Quinean shifts in orientation of which LMS complains, present day philosophical training is often dismissive of the traditional diagnostic skills required to resolve subtle conceptual conflicts in this manner.
Since I have just evoked Wittgenstein’s name, it is worth indicating that the overall point of view suggested here is rather different from the one he seems to favor, judging by his evident approval of the final sentence in the Hertz quotation above. For convenience, let me cite it again:

When these painful contradictions are removed, the question as to the nature of force will not have been answered; but our minds, no longer vexed, will cease to ask illegitimate questions.
Like Hertz and myself, Wittgenstein sees language use as naturally evolving into expanding patches that eventually come into tension with one another. Like me but unlike Hertz, he does not expect that these tensions can always be ‘solved’ by discovering an overarching replacement theory. Yet Wittgenstein appears to think that, ultimately, all the philosopher can do is pare back the conflicting patches of practice so that they no longer bind against one another. He then leaves matters there, ‘ceasing to ask illegitimate questions.’ Ultimately, Wittgenstein thinks that there is no explanation for why these local patches exist in the first place: ‘these are our practices,’ etc. Such thinking exemplifies Wittgenstein’s notorious ‘quietist’ streak; it appears to originate in that same raw suspicion of classical conceptual analysis that is expressed in the Hertz quotation.28
But I don’t see our issues in this manner at all. We can eventually ‘understand our practices’ (although the answers often prove surprising) through suitably diagnosing the strategic complications that our awkward placement within nature necessitates. To evoke a parallelism discussed at length in WS, nineteenth century mathematicians eventually uncovered the strategic techniques that allowed James Watt to devise an abundance of excellent improvements to the steam engine, even though the rational basis behind his search techniques remained completely opaque to him and his immediate followers. In providing these rationalizing underpinnings for otherwise mysterious ‘practices,’ we commonly rely upon the full resources of modern science and mathematics to weave a physically convincing story of what our linguistic circumstances are actually like. And developing unexpected correlational pictures of this ilk seems exactly the sort of thing in which philosophy should be engaged.
In fact, our ‘mildly transcendental placement’ in nature demands that we continually reevaluate our ‘practices’ from a correlational point of view. If philosophy becomes unwilling to participate in such projects, well, so much the worse for its intellectual utility. Wittgenstein seems to have fallen into the same trap of one-sided thinking about ‘conceptual content’ as betrayed the positivists: ‘Thinkers like Russell are wrong to fancy that word/world ties can be wholly forged through the direct grasp of “content”; only linguistic practices can turn the trick.’ Well, yes, the sinews of word/world connection are mainly woven through the enlarging entanglements of successful linguistic routine with worldly affairs, not baptismal intent. Nonetheless, a ‘semantical point of view’ upon these ties, such as provided within our correlational pictures, can prove a great boon in helping us enlarge and refine these capacities and for warning us of the applications where such methods should not be trusted. As such, a ‘semantical picture’ usually enjoys a normative edge over accepted practice, in the sense that our inferential practices must be continually hedged and curbed as a side consequence of our ‘mildly transcendental’ placement within nature. A correlational picture, after all, simply represents a mathematized model of what that placement is actually like. For such reasons, classical thinkers like Russell (and Tarski) were right to stress the linguistic importance of the direct correspondences between words and world, as a normative issue above and beyond the nature of our current practices.
On the other hand, Russell et al. erred in fancying that reliable ‘semantic pictures’ can be always constructed through armchair musings upon ‘meanings’ alone, for such correlational portraits, on the present view, should be cherished rather as valuable instrumentalities of conceptual reorientation, commonly in the radical manner that Friedman (and ‘scientific philosophy’ more generally) has identified as essential to intellectual progress. As such, novel ‘semantic pictures’ carry considerable normative or corrective weight vis-à-vis past practices, but they should not be viewed, pace the ‘absolutist’ assumptions of Dummett and others, as thereby bearing the full freight of ‘meaning’ for the terms they diagnose. In fact, ‘meaning,’ as we ordinarily employ the word, should not be regarded as carrying less of a contextualized ‘content’ than most everyday terms of macroscopic classification, whose supportive significance commonly shifts from one occasion to another. As an ersatz totality, the full dimensions of ‘meaning’ remain resolutely lodged within the full array of pragmatic entanglements that successfully stitch word usage with the physical circumstances they address. Successful correlational pictures merely add a partial and fallible corrective adjunct to this distributed bulk.
But once our philosophical treatments of ‘conceptual content’ abandon their improving ambitions and we fancy that we are merely reporting upon ‘what we know when we understand a language,’ the components within a valid ‘semantic picture’ can easily seem vapid and uninformative when applied uncritically to one’s home language, especially if one worries exclusively about the ‘soundness’ of logical rules of inference found there. Thereby engendered are the currently popular tropisms towards ‘deflationism’ with respect to ‘reference’ and ‘truth’ (including those encountered in Wittgenstein’s own writings). The best curative for this undervaluing of correlational studies is to recognize the reorientational assistance that a well-diagnosed ‘semantic picture’ often provides, through implementing the hard-nosed policy of ignoring traditionalist ‘loyalty to conceptual contents’ demands that has always proved a hallmark of the ‘scientific philosophy’ movement. Like Friedman’s thought experiments, such improved word-to-world constructions can provide the crucial semantic pivots that open unanticipated dimensions of exploration to the ‘free creativity’ of the scientist, without pretense that these useful and reorienting assemblies thereby capture ‘absolutely everything essential to meaning.’
As noted above, the role of set theory and logic comes out differently on this account than according to the ‘logic of concept formation’ traditions traced in LMS. It is certainly true that most nineteenth century logicians regarded the setting of proper standards for the ‘construction of concepts’ as comprising a more central aspect of their designated tasks than merely codifying the patterns we now call ‘first order reasoning.’29 Approached from this angle, set theory represents the desired theory of concept formation rendered extensional, which is exactly the manner in which the positivists conceived these issues. But our correlational approach evokes set theoretic ideas rather as a means of articulating the de facto natural relationships that ultimately render a language’s descriptive lexicon useful to its employers, quite independently of whether they have any just inkling of how those strategic correlations actually unfold (as we’ve seen, we commonly picture language’s workings wrongly). Following the lead of the applied mathematicians, we should no longer concern ourselves with establishing an architectural toolkit that any would-be system builder can exploit in framing concepts to suit the ‘free creativity’ demands of science. From the present point of view, it is rather dubious that such a foundationalist project is feasible at all, for devising linguistic strategies that can successfully function within a complex and uncooperative natural environment demands continual syntactic experimentation and mathematical study, whose convoluted contours are very hard to anticipate a priori. To capture the myriad coordinative manners in which language can effectively entwine itself with worldly events, we require the basic notions of set theory—mapping and limit—but not because we are trying to supply a general theory of the ‘internal contents of concepts’ in a traditional manner.
Our point of view is resolutely externalist: we attempt to assess our de facto computational position in nature in the hope of devising more sophisticated stratagems that might work more effectively. So we must learn how our reasonings and computations currently map to the world they address, in the general fashion in which we examined Euler’s rule. And because this placement often proves of a ‘mildly transcendental’ character, the notion of limit becomes vital as well, for our Eulerian computations CE’s only approach their target C’s in that asymptotic fashion. Indeed, a look at the historical record will find set theoretical thinking gradually seeping into mathematical analysis in the mid-nineteenth century for precisely these reasons, which are quite detachable from the ‘logic of concepts’ motivations recounted in LMS.30
While on this topic, it is worth entering a quick complaint with respect to some widespread misapprehensions. Many self-styled ‘physicalists’ presume that they adequately understand the ‘physical inventory’ of the world and that no sets or allied ‘abstract objects’ lie among them (they merely represent ‘mathematical artifacts,’ whatever those might be). Something has plainly gone amiss in such thinking. Surely, the manner in which an Euler-style computation relates, or fails to relate, to the reality it addresses comprises a vital empirical characteristic of the world in which we live. And the notions of map and limit represent the natural vocabulary in which such relationships should be discussed. It cannot represent a plausible requirement upon a reasonable ‘physicalism’ that it should relinquish the very terminology (‘infinite computational set serving as limiting envelope’) one employs to register garden variety forms of computation-to-world relationship.31
In sum, if we no longer demand that science advance in great blocks of coherent framework built upon well articulated hunks of theory-to-measurement presupposition, we can better respect the fact that real life science only ‘comes at us in sections’ while underwriting Friedman’s basic claim that philosophy should resume its former role as ‘critic of concepts,’ in the best traditions of scientific philosophy. As suggested above, some of DofR’s terminological borrowings from the positivist heritage (e.g., ‘relativized a priori’) strike me as less than ideal, for fleshing out an accurate portrait of our ‘computational position in nature’ suggests a chastened scientific realism to me, rather than the modernized Kantianism that Friedman espouses.32 But these divergences may prove more terminological in nature than substantive. However that may be, Friedman and I fully concur that meditating exclusively upon DOG and DOORKNOB as such traits appear in undemanding domesticity will rarely lead the modern student to a proper appreciation of the difficult practical dilemmas of conceptual guidance that have always animated the ‘scientific philosophy’ movement. Such a complacent myopia, we think, is unwise; sound philosophizing requires greater critical grit beneath its wheels than mere DOG and DOORKNOB encourage.
- Lectures and Essays (London: Macmillan and Company, 1879), p. 2.
- This exchange grew from a critique (‘Logic, Mathematical Science and Twentieth Century Philosophy’; henceforth LMS) that Michael Friedman composed for a symposium on my Wandering Significance (Oxford: Oxford University Press, 2006). But the reflections offered in LMS and Friedman’s own Dynamics of Reason (Stanford: CSLI Publications, 2001) raise important questions of historical indebtedness that reach far beyond the contours of my specific work and the editors have kindly allowed us to pursue these issues at greater length here. The other parts of the original symposium (by Robert Brandom and Michael Liston) will appear in Philosophy and Phenomenological Research along with my replies. I am deeply indebted to Friedman, Brandom and Liston, as well as Anil Gupta, A.W. Carus and Penelope Maddy, for their very stimulating comments.
- For want of a better concise term, I shall loosely lump all of these thinkers together as ‘positivists.’
However, a complication on this score needs to be acknowledged, in deference to the historical record. It is one thing to recognize, as the scientific philosophy tradition plainly did, that excessive loyalty to ‘classical conceptual content’ poses barriers to the progress of science and quite another to explain exactly what is wrong with the notion in itself. The historical path of least resistance was simply to declare, ‘Oh, such “content” exists alright, but science isn’t obliged to worry about it.’ This philosophical meekness leads my favorite avatars of ‘scientific philosophy’ (Helmholtz and Hertz) to declarations of the ilk: ‘Science can only know the external world qualitatively up to isomorphism.’ Plainly, such opinions endorse the coherence of a strongly ‘classical’ view of content through their tacit understanding of ‘qualitatively.’ For myself, I concur with vociferous naïve realists in finding such ‘veil of perception’ doctrines repugnant, but I am not willing to hypostatize some mythical ‘common sense world’ simply so that we might readily stay in touch with it. No; the universe in which we actually dwell is a quite strange place and any acceptable ‘realism’ must warmly acknowledge that unhappy fact, I warrant.
In fact, the best critics of over-inflation and rigidification within our naive understanding of ‘content’ are to be found amongst British thinkers such as Thomas Reid and J.L. Austin, although they rarely appreciated the ‘barriers to progress’ worries that correctly pushed the scientific philosophy tradition towards more radicalized conclusions overall. Indeed, the structure of WS is best viewed as a self-conscious effort to follow the lead of Hertz and Helmholtz while employing some of the diagnostic tools pioneered by Reid and Austin. But fulfilling that program is a tall order, for it requires that we mollify allied misapprehensions about ‘perceptual content’ in a plausible manner as well. Doing all of this requires a vast amount of subtle diagnostic work and I do not pretend to have carried out all of the designated tasks in a satisfactory manner. But one of the reasons that WS grew so bulky is that it strived to warn my fellow ‘scientific philosophers’: ‘No, you can’t simply ignore “classical content”; you must also defuse the mythologies of exaggeration upon which it deeply depends.’
- An allied push towards ‘free creativity’ arose within pure mathematics as well. For a brisk survey, see my ‘Frege’s Mathematical Setting’ in Michael Potter, ed., The Cambridge Companion to Frege, forthcoming.
- ‘On the Origin and Significance of the Axioms of Geometry’ in R.S. Cohen and Y. Elkana, eds., Hermann von Helmholtz: Epistemological Writings (Dordrecht: D. Reidel, 1977).
- Yu I. Manin, Mathematics and Physics (Boston: Birkhauser, 1981), p.35. Manin himself comments: ‘This is an overstatement. The principal aim of physical theories is understanding.’ Some of Manin’s reasons for this counterclaim will emerge in part (iii) of this essay. To avoid potential misunderstanding, let me stress that few ‘scientific philosophers’ have fully embraced the crudely operationalist themes that Feynman sounds in this passage (indeed, as indicated in note 4, it is often unclear what their developed ‘semantic’ views might be). I employ this quotation as merely the coarse expression of an inclination that generally assumes subtler forms.
- Heinrich Hertz, The Principles of Mechanics, translated by D.E. Jones and J.T. Walley (New York: Dover, 1952), p. 1.
- Hertz, op cit, p. 8.
- For example, Max Jammer, The Concept of Force (New York: Harper’s, 1962) completely mistakes Hertz’ motivations. For a better account, see Jesper Lützen, Mechanistic Images in Geometrical Form (New York: Oxford University Press, 2005). David Hilbert makes very similar remarks when explaining why he placed ‘axiomatizing physics’ on his famous list of problems that mathematicians should address during the twentieth century. See Felix Browder, ed., Mathematical Developments Arising from Hilbert Problems (Providence: AMS Press, 1983). Hilbert’s thoughts on axiomatization and ‘implicit definability’ influenced the positivists greatly.
- Cf. G.P. Baker and P.M.S. Hacker, Wittgenstein: Understanding and Meaning (Oxford: Wiley-Blackwell, 2005), p. X. I first learned this salient tidbit from Michael Kremer.
- Of course, twentieth century conceptual classicists generally relied upon the same logical formalism as well, although they didn’t employ the framework as a source of ‘distributed normativity’ in the sense of WS. In his more ‘structuralist’ moments, Russell often comes close to an ‘implicit definability’ point of view with respect to the predicates of physics.
- LMS, p. . To avoid confusion, I’ve omitted the adjective ‘classical’ before the phrase ‘analytic tradition’ only because Friedman employs the term differently from me, in that he means to embrace the entire positivist movement within his sweep, whereas I would classify most of it as pursuing anti-classical conceptual goals.
- LMS, p. Of course, the congruent legacy of the British ‘common sense’ school of the 1930's also encouraged a blurring of traditions through an improper minimization of the conceptual challenges presented within real life scientific practice.
- In truth, ‘explaining the utility of the thought experiments’ was just one reed in the positivists’ muddle of motivations; it is the singular merit of DofR to have brought forth this submerged theme clearly. All the same, why did Kuhn so thoroughly mistake the positivists re ‘accumulationism’? I am not certain, but the misreading may derive from the writings of mid-century ‘logical empiricists’ such as Ernest Nagel. The latter’s best work lay in the developmental history of mathematics (Teleology Revisited) and his own essays emphasized some of the same ‘Neurath’s boat’ themes to be highlighted in section III of this essay.
- DofR, p. 43. With the phrase ‘best history,’ I believe Friedman is being too generous, for we should entertain strong reservations about ‘histories’ that regularly omit the nitty gritty details of science (such as the conflicting mathematical entanglements that bothered Hertz) in favor of the attendant political and ideological squabbles. Studies in the sociological manner of Kuhn and his followers have frequently amplified the latter to the point of absurdity, while ignoring the commonplace inferential mysteries that should rightly leave any rational agent perplexed and uncertain where to turn. For philosophy’s purposes, our ‘current historiography of science’ strikes me as far too lopsided to be embraced wholeheartedly, unless the term also encompasses the excellent ‘amateur histories’ regularly penned by retired scientists in their leisure years.
- Actually, the early positivists mostly tried to employ phenomenal verities as the core to be organized, but this position enshrines improper psychological description just as warmly as the ‘measurement rod’ approach locks in physical misdescription. In his ‘Replies’ in the festschrift Discourse on a New Method, ed. by Michael Dickson and Mary Domski (La Salle: Open Court, forthcoming), Friedman appeals to the ‘light principle’ as a means of evading some of these problems, although allied problems attach to ‘light’ insofar as we interpret the latter phrase as signifying an identifiable form of physical emission.
- Such woes remind me of the unhappy fates of the hippie communes of the 1960's that, in striving for maximal liberty along several dimensions of conduct, failed to incorporate the subtle controls and tolerations that rescue more mature societies from the vicissitudes of human character. Indeed, this analogy essentially epitomizes the entire argument of section III in a nutshell!
- Although WS frames conclusions that sound more ‘Quinean’ than the neo-Kantian position espoused within DofR, it reaches those conclusions along decidedly non-Quinean paths. Or so it strikes me. Although I would never wish to diminish his salutary influence upon my thinking, I am troubled that much of Quine’s actual argumentation rests upon the same mythologies of scientific methodology that severely handicap philosophical progress to this day. In this appraisal, I am fully in agreement with Friedman.
- A remark developed more fully in Chap 4, §iv of WS is this: if we convert Hertz’s metaphor about ‘images’ into concrete computational terms, the closest fit we are likely to find is a ‘marching scheme’ of Euler’s method type.
- For example, modern mathematicians have framed an impressive set of conclusions with respect to the ‘chaotic behavior’ of various differential equations, but these conclusions have been generally reached by computer techniques of the fallible sort just sketched. There remains an outside chance that all of these results represent spurious artifacts of the numerical methods utilized.
- Students of numerical reasoning usually dub these as ‘correctness results.’
- In Logic and Other Enigmas (Cambridge: Harvard University Press, 1978).
- Constance Reid, ‘Hans Lewy, 1904–1988’ in P. Hilton, F. Hirzebruch and R. Remmert, eds., Miscellanea Mathematica (Berlin: Springer-Verlag, 1991), p. 264.
- The P&PR reply to Brandom mentioned above (‘Of Whales and Pendulums’) supplies a brisk recapitulation of the ‘theoretical’ side of this argument, but WS also endeavors to make its case through a lengthy array of case studies.
- As noted above, Max Jammer’s long series of ‘The Concept of ....’ primers epitomize these trends.
- In the fullness of time, proper soundness results may be established for M, but their articulation often demands a prior understanding of M-structure behavior obtained from an ‘unfounded’ reliance upon inferential tools such as R.
- Or if there is an ‘explanation,’ it is not philosophy’s job to provide it. I find such attitudes preposterous. For my tentative speculations on Wittgenstein’s attitudes towards science, see ‘Wittgenstein: Physica Sunt, Non Leguntur,’ Philosophical Topics 25 (1997).
- Besides Friedman’s own writings, useful surveys of this background can be found in A.W. Carus, Carnap and Twentieth-Century Thought (Cambridge: Cambridge University Press, 2007), ch. 2-3, and Wolfgang Carl, Frege’s Theory of Sense and Reference (Cambridge: Cambridge University Press, 1994), ch. 1-2.
- As I understand her, Penelope Maddy in Second Philosophy (Oxford: Oxford University Press, 2007) regards such attitudes as paradigmatic of what she calls ‘second philosophizing.’
- Nor should one wish to code such relationships in unnatural ways, à la Hartry Field’s Science Without Numbers (Princeton: Princeton University Press, 1980). In addition, it should be observed that understanding how these mappings operate strategically usually requires that we embed these maps within a richer mathematical setting. For example, Cauchy’s original questions on series convergence require that we study their behavior upon the complex plane (indeed, upon Riemann surfaces) as well. Throughout his career, Robert Batterman has stressed the fact that ‘understanding’ in science commonly requires the interpolating intervention of such ‘abstract’ mathematical structures (cf. his The Devil in the Details (Oxford: Oxford University Press, 2001)).
- To a realist au fond such as myself, it is better to retain a conception of ‘objectivity’ as ‘correlates successfully with the world’ rather than embracing the ersatz ‘objectivity as shared human standards’ that Kantians usually substitute in its stead (see DofR, p. 67, for an expression of the latter inclination).