Brendan Lalor. Southern Journal of Philosophy 36, 215-232, 1998.
Abstract. There is a clash between (i) the intuition that the states of a randomly materialized double of me, Swampman, would have intentional content, and (ii) the best teleosemantical accounts of the metaphysical constitution of content. I argue for a position which is sufficiently liberal about content constitution to allow that Swampman’s states immanently become contentful, but conservative enough to honor what’s essential to good teleosemantics – namely, respect for the following etiological constraint: Content must supervene on structures for whose continued presence there is a function-bestowing causal reason.
1. Swampman Intuitionism and its archenemies
Here I propose, in outline, a resolution of the clash between what I call Swampman intuitionists and certain naturalists about intentionality. Swampman intuitionists (SIs) are philosophers who harbor the intuition that, contrary to what Donald Davidson (1987) acknowledged, contentful states would be had by a randomly materialized entity with a structure physically type-identical to a normal human’s — like that which might result, for instance, when lightning strikes swamp gook and such an entity is spontaneously but coincidentally formed.[1] SIs hold that Swampman’s states are contentful and that any theory of content which disrespects this deliverance of intuition is mistaken. Indeed, hath he[2] not eyes, hands, organs, senses, passions? Is he not fed with the same food, hurt with the same weapons, subject to the same diseases, heal’d by the same means as you and I? If you tickle him, does he not laugh?
Teleosemantical theories like those of Ruth Garrett Millikan (1984), David Papineau (1984, 1993), Fred Dretske (1990), and Karen Neander (1996) are among the targets of the Swampman Intuition since they require content to supervene in part on the proper functions of the structures which produce contentful states, which, according to many people, depend on the histories of those structures.[3] These philosophers deny that for a contentful state to have the content it does it is enough that (i) the organism whose state it is have some particular local structure or properties which supervene strictly on such structure (putative narrow content isn’t enough), or that (ii) nomological relations obtain between the organism with the structure and individuals and properties in the environment (mere internal state broadly conceived plus current embedding context isn’t enough, as, e.g., Fodor (1994) thinks it is).
Rather, these teleosemanticists hold there is an etiological constraint on the constitution of content according to which the continued presence of the structures which produce contentful states must be the result of function-bestowing causal processes (à la, say, natural selection across generations). But this, SIs point out, rules Swampman out of court since he has a bizarrely random etiology. The SI argument is supposed to impugn those who want content to supervene on selectional processes involved in evolutionary as well as learning histories because Swampman has neither ancestry nor learning history. Since the Swampman Intuition opposes this teleosemantical theory, the argument goes, something’s wrong with the latter.
Etiological teleosemanticists have, rightly I think, retorted that their theories of representation aren’t exercises in conceptual analysis, but involve matters empirical. So intuition rather than theory should be suspect here. (See, e.g., Papineau 1996 and Millikan 1984, p. 93.)
But I think both sides to the dispute get things partly right. The teleosemanticists in question are right in holding out for real-world causal connections which make for content. But the SIs are right in recognizing the appeal of the idea that Swampman’s states have content, that he’s no zombie. I will argue that he is able to have, or immanently have, contentful intentional states.
2. Everyone’s wrong, at least partly
2.1. The SIs
In cases in which there is a seemingly perfect match between Swampman and the physical and cultural world in which he materializes — as in cases in which he’s not distinguishable from one of us — SIs want to assign contents straightforwardly. Of course, the antecedent probability of a being having just the right dispositions, and materializing in just the right location, is astoundingly slim. For instance, it is so very coincidental that such a creature happens to have (i) perceptual abilities which exploit the ways light is conducted in the local atmosphere, (ii) means of locomotion suited to local solid, liquid, or gaseous media, (iii) a respiratory system which requires precisely the gases found in the specific proportions in which they’re found in the local air, (iv) speaking skills which ostensibly match those of local competent speakers (skills, for instance, like the ones we have for using the word ‘Buchanan’ as a name, subject to all the syntactic constraints on name use, etc.) — and the improbability is compounded by adding that (v) it materializes right here on Earth of all places.
I believe that the antecedent improbability of Swampman materializing at all, especially here, at least partially accounts for SIs’ intuitions and their misgivings about teleosemantical theory. After all, while the conventional wisdom expressed in the adage, ‘if it looks, walks, and quacks like a duck, it’s a duck,’ won’t often lead us astray, it is contingent on the way the world works — it depends on the improbability that something would look, walk, and quack like a duck without being a duck. Ditto in the case of entities physically type-identical to humans. But to deny that the wisdom holds under the astounding conditions the hypothesis asks us to imagine is not to run roughshod over the wisdom.
Notice that the ‘walks like a duck’ approach begins to break down if the case is altered so there is a mild mismatch between Swampman’s skills and the socio-cultural environment in which he materializes. If, for instance, he ended up on another planet whose inhabitants speak a language with different syntax, or don’t even resemble humans, or have no culture at all, but which is otherwise suitable, SIs would have trouble determining which contents to assign, because at least lots of content is socially infected (cf. Burge 1979). But even in this case I think SIs are right that he will at least have, or immanently have, some contents, namely, about the things in his physical environment. However, if the mismatch is gross — for instance, if none of the natural kinds even seem to correspond to the ones on Earth, the air has no oxygen in it, the light rays are chaotic (and are thus a useless source of information about the layout of the distal environment) — even SIs would incline to assign few or no intentional contents (depending on just how bad the mismatch between skills and environment is).[4]
But — an SI ponders — mightn’t Swampman be able to think contentful thoughts even under these conditions? Perhaps while Swampman can’t have a concept whose content is, say, water, he may have one which expresses a property, like being a watery substance. Joe Levine (1996: 90) argues that this seems to follow from the common claim that when people think they’re imagining a possible world in which water isn’t H2O they’re in fact imagining a world in which a watery substance isn’t H2O. Hence, he argues, people can imagine watery substances without imagining water.
A full response would take us too far afield; but I’ll indicate why I think Levine’s conclusion doesn’t follow. I admit that people might be able to imagine watery substances without imagining water, but not that they can imagine watery substances without being able to refer to something in terms of which the concept might be specified, like water itself, or hydrogen and oxygen, or — if ‘watery substance’ is supposed to designate a more phenomenological concept — some liquid, perhaps a clear one. I think these referential relations will obtain in the way the so-called New (‘causal’) Theory of Reference suggests. Our ability to imagine other worlds containing water-like stuff is contingent on our ability to refer to stuff in the actual world.[5] Ditto for Swampman.
The etiological teleosemanticist’s point about normativity. Consider another sort of ‘walks like a duck’ case, this time the case of an ailing duck. Suppose a blood clot lodges in one of my coronary arteries and I go into myocardial infarction. There is something wrong with me. Imagine that just then a physically type-identical SwampDoppel of me materializes. My double’s movements mirror mine as I grip my chest in pain. But there is nothing wrong with the Swampthing. There are no grounds for saying there’s something its pump-like thing — its ‘heart’ — is supposed to be doing; this Swampthing is sui generis. The only reason we might say it should be pumping the ‘blood’ is that the uncanny physical resemblance to a human’s physiology invites comparison with normal humans. But why should normative notions in terms of which we evaluate ourselves be extended to Swampthing? Why not apply Klingon notions and say all is well with the supposedly clotted artery; it’s other parts of the heart which are malfunctioning (cf. Dretske’s 1996: 78 f. Twin Tercel example)? It’s always possible to tell stories on which beings are evaluated according to different taxonomies.
The central underlying question here is: What, if anything, counts as proper functioning? The etiological teleological analyst’s general answer is that what counts as normal, or proper, is what hearts were selected for (by, say, natural selection), not what any given heart, such as my failing heart, actually does. Since selection-for makes historical reference, the function of something, if any, goes beyond its physical constitution at any given moment. That is why my heart is failing but Swampthing’s isn’t; mine is the kind of thing which was selected for pumping blood, Swampthing’s isn’t.
As it is with circulatory functions, so it is with perceptual and other cognitive functions. Assuming perceptual and cognitive systems were selected in part to help process certain kinds of information, norms apply to them, too. However, since there’s no particular activity Swampman’s visual cortex is supposed to house, such as the production of accurate edge-representations, it has no proper function, and hence cannot occupy the evaluable states occupied by our visual cortices. Without standards of proper functioning, and ultimately, semantic norms, there is no way to count an organism as representing the world correctly rather than incorrectly, representing one thing rather than another, or having true rather than false beliefs. That such norms can apply to Swampman is precisely what many teleosemanticists deny (e.g., Millikan 1996: 110). In short, they argue, validly, but I will contend unsoundly:
(P1) No content without normativity.
(P2) There’s nothing normative about something sui generis, like Swampman.
(C) Swampman has no content.
(Specifically, I’ll be arguing that teleosemanticists who accept the second premiss are mistaken; Swampman may start out sui generis, but in fact his states can come to be subject to normative standards.)
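The validity conceded above can even be checked mechanically. The following Lean sketch is offered purely as an illustration of the inference’s form; the predicate and constant names (`hasContent`, `subjectToNorms`, `swampman`) are my own placeholders, not anything drawn from the literature:

```lean
-- A minimal formalization of the teleosemanticist syllogism.
-- All names here are illustrative placeholders.
variable {Being : Type} (hasContent subjectToNorms : Being → Prop)

-- P1: no content without normativity.
-- P2: nothing normative about Swampman.
-- C follows immediately.
example (swampman : Being)
    (P1 : ∀ b, hasContent b → subjectToNorms b)
    (P2 : ¬ subjectToNorms swampman) :
    ¬ hasContent swampman :=
  fun h => P2 (P1 swampman h)
```

The formal point is modest: the argument is a straightforward modus tollens, so any resistance to its conclusion must target a premiss — which is exactly the strategy pursued against (P2) below.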
Nor do many teleosemanticists think that even learning theory explanations succeed in giving a justified story about the constitution of content in Swampman. For instance, Millikan (1984) and Neander (1996) hold that without the right phylogeny — without an evolutionary history to define primitive perceptual and cognitive dispositions such as those on which, say, certain sensory representational contents, like edge-representations, depend — there’s no way learning can get started and thereby determine nonprimitive contents. Swampman can at best replica-learn, not learn, and at best acquire replica-beliefs and replica-desires (Neander 1996: 126).
The rest of this paper will mostly be a pitch to those who are already generally sympathetic with the teleosemantical outlook on content. I won’t re-argue any more of the basics about what’s wrong with the SIs; so, for the sake of argument suppose Dretske et al. are on the right track about functions in twin cases like that of Swampman.
2.2. The wrongs of the (etiological) teleosemanticists
I think teleosemanticists have been blinded by having put too many eggs in the standard evolutionary- and learning-explanation baskets. It’s important not to forget that these are contingently standard, and we should in principle accept any causal processes which make it the case that certain complex organismic properties come to be or remain correlated in certain ways with certain environmental properties, thus allowing the former to serve as means by which organisms come to be reliably tied to or brought into harmony with the latter. Even Descartes’ view that God’s design brought about such harmony is a way of respecting the etiological constraint; harmony by causation is what matters, not what path that causation takes. In the case of evolutionary selection, the process occurs over generations; in the case of individual learning, it occurs within a single life. What I want to ask is, Why stop there? Why shouldn’t it occur — to some extent — at an even finer-grained level? Say, within single acts?
Herein I will explore the possibility that appropriate correlation-securing processes may be contained in single acts. If I succeed, I will have shown that etiological teleosemanticists heretofore have been far too stingy, in effect, conservative content-hoarders.
Suppose that at t0, the moment Swampman — an accidental Doppelganger of myself — was gratuitously generated at a nearby swamp, I was annoyed by a fly and was about to shoo it away with my hand. At t1, since Swampman and I are physically type-identical, both our hands begin moving in the same manner, physically characterized. Note that since Swampman is just like me physically, he would only keep moving his hand in that manner if stimulated appropriately. So at t2 there are two possibilities. Either there is a fly buzzing around him just like there is around me, or there’s not, in which case Swampman will presumably stop waving his hand, as I would in similar circumstances. In the case in which there is such a fly, Swampman, like me, is disposed to make the sounds I make when I mutter, ‘This darn fly,’ staring at it, following its trajectory with his eyes. My claim is that by this moment, t2, the states of Swampman which control his behavior have fly-aboutness, whether or not they had it before.
At this point someone making the claim I just made might go in either of two theoretical directions, both of which are consistent with a teleosemantical outlook modified in the direction of liberalization. While the first, more liberalized version of the view has some promise (and is perhaps ultimately defensible), I go on to accept the second, more conservative view, which coheres better with theoretical considerations I will divulge soon enough.
3. Two teleosemanticist-compatible paths
3.1. Buying intentionality in bulk
If swampbeings reproduced their kind, evolution would have them to select for; already very well adapted to their environments and reproductively fit, they would be picked. But if that’s so, then teleosemanticists who rely only on evolution for functions would have to hold either that only the second generation of swampmen and beyond could begin to have contentful states, or that no generations of unimproved swampmen could have contentful states. (Likewise, those who rely on learning backgrounds are thought to be in an analogous situation.)
One might think the evolutionary analyst’s best move is to accept the second alternative, since the story on the first would be unusual in the extreme, evolutionarily speaking. The argument might go:
Evolution selects organisms in virtue of specific phenotypical traits, which are usually introduced singly and gradually; so it bestows functions one by one, not a bunch at a time. The various structures in swampmen that control behavior are not each the outcome of specific selectional processes; the lot of them are gratuitously present. It’s hard to imagine how evolution could then bestow multiple specific functions in one fell swoop by selecting for a whole organism, with all its traits and abilities. For instance, consider a version of the usual evolutionary story about how certain types of visual representations come to bear content about edges of worldly objects. Through random mutation, a visual perception module comes to produce visual representations which (better) correlate in regular ways with the edges of distal stimuli. This increases the organism’s fitness. The organisms that have that trait are then favored by evolution, so there is specific function-bestowing selection for it: New representations with edge-aboutness are born. It’s the correlation between the representations and distal stimuli which is the causal reason for the continued presence of the representation-producing mechanism in the species. Likewise if learning bestows functions by favoring certain correlations. But swampmen don’t seem to evolve or learn since the usual selectional pressures never pick their specific properties for their fitness value (or other values). Thus, even second generation swampmen will be bereft of content — and third, and fourth…. So the race of swampmen is condemned to be one of zombies.
This sort of argument is inadequate to show that the evolutionary analyst must hold second generationers to be also bereft of content; one can grant that natural selection for swampmen is unusual, but nevertheless genuine (albeit degenerate) natural selection. Hence, there is successful function-bestowal. The first time evolution gets to work, the full-blown systems of the organism are in place; it picks swampmen on the first try. Successive generations can enjoy contentful states. After all, the correlations between the first-generationers’ states and the world are the causal reasons for there being a second generation at all — without those correlations the first generation would have been unfit, unsuccessful.
But even this isn’t good enough. As SIs would insist, even one generation of zombies is one too many. Luckily, I have a better, and more general solution to the problem of how content is generated. Evolution is but one instance of a more general phenomenon, a fact to which I’ve alluded and upon which I will shortly focus. The reason natural selectional processes are so central to teleosemantical theories of content is that they are means to the end of metaphysically constituting content, sufficient but not necessary. What’s necessary is the robust harmonization of organism-environment relations; and I hasten to add, I mean harmonization in a strong sense, in particular, as implying that the organism is sufficiently, or better, determinately tied down to a particular environment — which, other things being equal, will be its actual environment.[6]
Given this, we might ask, How cheaply can organisms achieve the requisite harmony and thus procure content? If natural selection of forebears is the costly, time-consuming method, what is the quick one? To find the most stripped-down, degenerate case of content constitution, I suggest we start by thrashing about for necessary conditions of content-edness and then build toward sufficiency.
Imagine that the Swampman of my fly-shooing scenario above is miraculously transported from Earth at t0 to Twin Earth1 at t1 to Twin Earth2 at t2 and so on, and that on each planet, corresponding to our flies are creatures of different species, which are nevertheless phenotypically indistinguishable from flies, and which the locals even call by a word sounding just like our ‘flies’. Further imagine that t0, t1, and t2 are mere instants apart so no single perceptual act actually gets completed. Does Swampman have contentful states which vary from place to place depending on which kind of insect is in his visual field? No. He has no contentful states at all, not even demonstrative ones.[7] He’s not tied down to any particular environs. (Of course if I were whizzing from place to place in the universe my thoughts would at least have fly-aboutness, the content they have locally, in the environment I’m tied down to.) So, I suggest, contentful states depend on being anchored by at least some actual contact or connection with an environment (as even the original fly case shows — at t0, before contact, Swampman has no content).
How much contact does it take to get tied down determinately, to specify a ‘home’ environment? Suppose one says that it takes just one anchoring act, an application of an organism’s dispositions to a contingency in its environment (even just an act of perceptual focus); and after that it’s established that all other organismic states are to be evaluated with respect to the originally indexically determined environment. To see that an organism might be tied down determinately by such relatively limited contact with an environment, imagine the following twist on a selectional process. Instead of processes operating in an environment selecting for organisms in that environment, consider a process that selects environments by operating on organisms in a certain way. In particular, suppose that each organism is miraculously transported to a planet to see whether that planet is appropriate to its dispositions. If a creature like Swampman (again, gratuitously generated) — physically type-identical to one of us — were brought to a planet without oxygen in the atmosphere, he would be whisked to another as soon as it was apparent that he’d die in minutes there. If brought to a planet whose atmosphere conducted light so oddly that objects didn’t appear stable to creatures whose perceptual apparatuses mainly exploit light rays to get information about objects, he would be whisked away since he couldn’t even see the fly there (or any other object — he’d be ‘blind’ and vulnerable). If brought to Earth, he’d be let go thanks to the good match between his skills and the environment. Our atmosphere conducts light rays in a way which comports well with how his eyes work, so he’d be able to track the fly. His disposition to make the sound we make when we utter ‘fly’ in fly-situations matches up nicely with existing practice in English-speaking communities. Even the oxygen suits his ‘lungs.’
On this story an environment is selected because of an overall good match with specific organismic dispositional traits, and thus, Swampman’s states might be said to acquire content as a result of this process. The match between organism and environment is the causal reason that the resulting situation obtains. Even if the causal process is a bit weird, and doesn’t, so far as we know, occur in nature, it fits under the most general etiological view, which is satisfied so long as there is a causal reason for the structure(s)’s continued presence. This thought experiment illustrates the intelligibility of a selection process which matches Swampman to his Earth environs in one move. In principle, what matters for content is not how functions of representation-producing structures are determined (e.g., by evolution or learning or, perhaps, this strange process) but that they are, determinately so. Etiology does matter to the constitution of contentful states, as the teleosemanticists under discussion hold — but not essentially in the ways most of them think.
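The shape of this environment-selecting process can be caricatured as a simple filter. The sketch below is a deliberately toy illustration, nothing more: the trait names, the `matches` test, and the planet descriptions are all invented for expository purposes.

```python
# A toy caricature of the environment-selection thought experiment:
# an organism is 'whisked' from world to world until one fits its dispositions.
# All names and traits here are invented for illustration.

def matches(organism, environment):
    """An environment fits only if every disposition finds its match there."""
    return all(environment.get(trait) == value
               for trait, value in organism.items())

def select_environment(organism, environments):
    """Return the first environment the organism's dispositions fit, else None."""
    for env in environments:
        if matches(organism, env):
            return env      # anchored: this becomes the 'home' environment
    return None             # never anchored: no contents constituted

swampman = {"breathes": "oxygen", "sees_by": "stable light"}
worlds = [
    {"breathes": "methane", "sees_by": "stable light"},  # lethal: whisked away
    {"breathes": "oxygen", "sees_by": "chaotic light"},  # 'blind': whisked away
    {"breathes": "oxygen", "sees_by": "stable light"},   # Earth-like: anchored
]
home = select_environment(swampman, worlds)
```

The caricature’s only point is the inversion it makes vivid: selection here operates over environments rather than organisms, and the resulting match is itself the causal reason for the continued presence of the organism-environment pairing.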
Is it enough to say — quite liberally — that the only thing you really need etiology for in the account of content is to tie an organism determinately to a planet-environment? Such a view implies that so long as Earth is specified as the target planet of the attitudes, to have Pat-Buchanan thoughts requires only that Swampman’s relevant states be conditioned determinately by the Earth environment in the sense that, if pressed, he would show his skill and competence as a user of the name ‘Buchanan’ by, for instance, tracking down the man, Buchanan. But this view also involves the consequence, widely recognized as unacceptable, that Swampman need not have ever heard the name from anyone, or indeed had any previous commerce with the man, through the name or otherwise (as the New Theory of Reference requires).[8]
Thus, I think this position is too lax and liberal. One of its main problems is that what precisely counts as an organism’s ‘environment’ is still too vague. First, it’s overly arbitrary to treat the home environment as a planet. On this view, a Swampman materialized alone on a spaceship equidistant between Earth and Twin Earth would seem to have content indeterminacy until he established some kind of ‘commitment’ to one or the other through interactivity with its kinds and individuals. And if we imagine that interplanetary travel is commonplace, it seems odd to think that just having a ‘home planet’ would suffice to determine all the desired contents. For instance, many Earthers might have fly attitudes about the flies on Earth and fly2 attitudes about the things on Twin Earth (and in conversation, the homonyms would be distinguished by contextual features of discourse). But other Earthers may not know which they hold certain attitudes about, and Swampman may be physically type-identical to someone in this epistemic predicament. In such a case, claims about his content seem arbitrary.[9]
While anchoring Swampman to a planet may not be what we’re looking for, perhaps tying him down to a socio-cultural context is.[10] There are good reasons to think this is a necessary condition for the generation of content in cognizers anything like us.[11] Consider, for instance, that researchers in anthropology (e.g., Brent Berlin and Paul Kay (1969)) and psychology (e.g., Roger Brown (1975) and Eleanor Rosch (1973a and 1973b)) have found that different cultures carve up the color spectrum differently. What counts as white is a much broader class in some cultures, such as the Dani, studied by Rosch, than it is in our own. When Swampman materializes in the middle of nowhere, whose color term criteria should count as canonical for him? Can such questions always be settled by reference to his own dispositions? No. For starters, it is always possible to imagine Swampman unsure about the application of certain color predicates — even disposed to rely on social mechanisms to keep his own use in check.
Suppose Swampman materializes in a city, half of whose inhabitants are from culture X, half from culture Y, and they each apply color terms slightly differently. Imagine that Swampman’s dispositions are varied: He’s not sure about the boundaries of the class to which ‘blue’ applies, but he’s inclined to apply ‘red’ just as those in culture X do, and ‘green’ in sync with those in culture Y. I suggest there’s no principled way to evaluate his uses of color terms here as correct or incorrect until his community membership is determinate.[12] Hence, there may be no nondemonstrative color-intentionality prior to that commitment.
If you don’t like this example — if you have qualms about the case of color, as many philosophers do — analogous phenomena can be teased out of a consideration of what it is to entertain the concept, contract. Not only does it vary from one culture to the next, but individuals are often wrong or ignorant about it and rely on cultural mechanisms to help fill in the details on an as-needed basis (see Burge 1979). So even if Earth is determined as the home planet, this doesn’t suffice to individuate contents.
The thought experiments above suggest that the referential apparatus of beings like us supervenes in part on community affiliation, which militates against the view that counterfactuals (not etiology) determine the contents of nondemonstrative thoughts. Fodor (1994) thinks what’s important to content determination are simply an organism’s actual and nearby worlds. But we’ve seen that in one and the same possible world — indeed, even on one and the same planet — an organism’s dispositions alone are insufficient to secure many contents on all reasonable accounts. In the above example, Swampman’s nondemonstrative use of the word ‘blue’ carries the content it does in virtue of community affiliation, not simply the counterfactuals true of him given his dispositions and physical environment.[13] And, as I’ve been saying all along, if you count his dispositions as tying him to the relevant classes of things by allowing the former to run through a specific community and thereby to the latter, then you at least need some way of securing affiliation.
As necessary as it is, I am skeptical that social anchoring is sufficient for content determination. Granted: My SwampDoppel isn’t disposed to remember climbing beech trees in the 1970s (under the conditions in which I would remember) — because he wasn’t around in the 1970s, he fails to meet the epistemic conditions on remembering. But can he, prior to even indirect experience of beeches (or other things experience of which might furnish means of reference to beeches), think to himself he wants to climb a beech now? If so, should we count beech as part of his conceptual repertoire? If intentionality can be bought in bulk, then yes, as soon as he’s anchored he has all of my concepts in stock. But it does seem odd for someone with a theoretical commitment to the New Theory of Reference to assign contents individuated by kinds or individuals of this sort. Since I hold such a commitment, I’d prefer an alternative.
3.2. Buying intentionality little by little
This brings us to the next possibility, on which content is not metaphysically constituted as such until etiological relations obtain between organisms and the right specific properties and objects. On this view, content isn’t assigned all at once after minimal connections determine, say, an environment or even community affiliation. Instead, it proposes a stronger etiological constraint, one that requires of each token thought that it trace not just to the right environment and thereby the right individual or property, but to the right individual or property specifically (albeit perhaps very indirectly). SwampBrendan goes about the world, acting just as I would, except slowly acquiring more intentionality as the right specific organism-environment connections are made.
Making these connections comes very easily to SwampBrendan; he’s swimming in content in no time. Suppose that, upon materializing, he’s facing a tree, and that within a few saccades, its edges come into focus. Presto! Edge-representations — the very sorts of primitive perceptual dispositions we saw Millikan and Neander maintain Swampman lacks. (Recall that they held it is for lack of them and their kin that Swampman is damned to mere replica-learning.) But how do these items come to be edge-representations if not by natural selection? Answer: The correlations between the relevant neural structures’ activities and distal edges favor the continued apportioning of system resources to those structures — thus is the etiological constraint met. Denying this genesis legitimacy strikes me as chauvinistic in that it dogmatically privileges evolutionary-style function bestowal as somehow more fundamental than any other kind of causal process. A genuine case of so-called replica-learning would be one in which there are no causal reasons, but just randomness or luck, underlying the continued presence of the relevant structures. This is not SwampBrendan’s case.
We can now unshroud the mystery of how content is manufactured by stating it in generalized form.
- Metaphorical statement: Using words to think or speak is like using a hammer to pound nails: Both involve skills which mediate between organisms and outcomes.[14] If words are like hammers, SwampBrendan hits the nail on the head on the first try.
- Metaphysical statement: As soon as a state S, tokened in an organism O like us, begins to mediate an outcome (such as O’s climbing a beech), O receives feedback from the environment about S’s ongoing effects. With the onset of such feedback, reinforcement begins, which, in turn, constitutes a causal reason for the continued presence of the relevant structure(s) — etiological constraint met, QED.
SwampBrendan picks up fly-intentionality during the run-in with the fly because the activated states in him, which control his behavior, receive reinforcing feedback. He might acquire Pat Buchanan-intentionality by hearing the name ‘Pat Buchanan’, which promotes the sustenance of others of his states. But what of the apparent desire to climb a beech tree? If SwampBrendan were hitherto innocent of beeches, would this state possess beech-aboutness? Suppose that SwampBrendan utilizes the linguistic division of labor in his community to find a beech — just as I would if I had such a desire. Were he to stop a passerby on the street and ask, ‘Excuse me, where’s the nearest beech tree?,’ the ensuing verbal response would enable him to pick up on the passerby’s reference in the manner celebrated by Saul Kripke (1980). Are these straightforward means of attaining intentionality all that can be hoped for?
Perhaps not. I suggest that even prior to the conversation SwampBrendan’s state might have been genuinely contentful. Activation of the ostensible beech-climbing-desire in effect put SwampBrendan on the lookout for beech clues — people, signs, university botany departments, encyclopedias, WWW sites, the phone book … the world is brimming with sources of information. Just as a properly adjusted telescope allows us to read facts about specific heavenly bodies off facts about light rays which are closer to home, SwampBrendan’s conveniently calibrated gray matter, in conjunction with any of the above closer-at-hand sources of information, composes a device which mediates his contact with beeches. Positive reinforcement of his dispositions begins upon reception of feedback based on his utilization of such sources. SwampBrendan is so sensitive to feedback that if, at any point, counterfactually, he were heading down a blind alley (e.g., asking a lamp post about beeches), he would countermand the unsuccessful strategy. Content may emerge once feedback, however skimpy, funnels in, but not before — that would violate the etiological constraint. Thence even the very first exercise of the relevant dispositions may lend his states beech-aboutness.[ 15 ]
I submit that I’ve now shown how a selectional process occurring within a single (coarse-grained) act is comprehensible. The argument for the intelligibility of a selection process which matches Swampman to his Earth environs in one move (i.e., the environment-selection-process thought experiment) showed that such a process is possible. In fact, the selection process envisioned by that experiment utilizes precisely the kind of feedback here discussed — lots of it starts pouring in immediately, about the air, about the light rays, etc. We can now take that process, not as favoring the bulk-purchase view of intentionality, but rather as an instance of the way in which it’s possible to match Swampman’s dispositions to his environs in installments, as he activates them.
The view I’ve developed is like the learning theory except it’s as if the organism ‘guesses’ how to respond to stimuli appropriately the first time every time, and reinforcement is effortless. The right cognitive habits (computational algorithms, if you like) are already there in Swampman; he’s just learning the uses to which they are to be put. We might distinguish in principle two kinds of acts, (i) calibrational, which bring cognitive structures into specific kinds of harmony with the environment, and (ii) occurrent contentful, which are appropriate activations of such calibrated structures. What happens in the case of Swampman, I submit, is that one and the same token act actually instantiates both types.[ 16 ] In the real-life case of learning to ride a bike, it’s not always easy to say when calibrational activity leaves off and bike riding itself begins. Now suppose someone gets it right the very first time. Do we deny she is truly bike riding (because she has no learning history)? No. It’s just that her learning history was so short that it gave way just about immediately to bike riding itself. On my account, just as a successful bike-riding act performed on the first try may be the causal reason for the continued presence of the relevant dispositional structures, the successful deployment of an organism’s responses to stimuli can count as the causal reason for the continued presence of the structures on which the relevant habits depend. Such causal reasons make it the case that normative standards apply, which, in turn, is sufficient to undergird content.
4. Last rites: Challenges and replies
In spite of this, there may be some zealous Swampman intuitionists out there who will argue as follows: ‘It’s still true that if the instantaneous transitions from Earth to Twin Earth₁ to … Twin Earth∞ were, for Swampman, so smooth that he could not possibly be consciously aware of the transitions, then his experience has unity, and must, at least to him, have some kind of meaning. But on the theory presented in this paper, when he utters, “This darn fly,” under those conditions, his state has neither nondemonstrative nor demonstrative content. Since the theory here considered declares contentless Swampman’s states — which lead to movements and vocal events which are obviously contentful — it must be wrong.’
First, notice that there are two conflated questions here: One about meaning, one about phenomenology — conscious experience. I haven’t said anything in this paper about whether a swampman undergoing instantaneous transportations would have phenomenological experience. For all I’ve said here there might be something it’s like to be Swampman under these conditions even if his states have no content. Although my theory-laden hunch is that a being in those conditions wouldn’t have any phenomenological experience, that is a matter for another day.[ 17 ]
Second, I wouldn’t say the states are clearly contentful. It’s stipulated that even as Swampman is blipping from planet to planet, his states lead to movements which, considered apart from the continual context-shifts, would appear to be content-driven. But this is too bizarre a scenario to embarrass my theory. Intuition is ill-equipped to deliver a reliable verdict about content under such conditions. Theory — not intuition — should decide extreme cases; if conventional wisdom is bound to fail anywhere, it is in cases like that of the blipping Swampman.
Given my earlier suggestion that community affiliation is necessary for referential abilities of beings like us, one might ask what happens when a series of swampthings materializes and forms a ‘community.’ Indeed, imagine a whole ‘Swampcivilization’ randomly blips into existence. Does affiliation with such an entity satisfy the community affiliation condition? While I grant that the community could at best pseudo-remember its ‘past,’ I see no reason why it couldn’t learn-on-the-spot in the way that I suggest Swampman does.
So while a satisfactory etiological constraint is stringent enough to prevent the bulk purchase of intentionality, it nevertheless allows content to be had at a cheaper price than my teleosemanticist teammates think. In particular, reinforcement needn’t require trial and error; Swampman might get it right the first time.[ 18 ]
Allen, C., and M. Bekoff (1995) Cognitive Ethology and the Intentionality of Animal Behavior, Mind and Language 10: 313-328.
Berlin, B., and P. Kay (1969) Basic Color Terms: Their Universality and Evolution, Berkeley: University of California Press.
Bigelow, J., and R. Pargetter (1987) Functions, Journal of Philosophy 84: 181-196.
Boorse, C. (1976) Wright on Functions, Philosophical Review 85: 70-86.
Brown, R. (1975) Reference: In Memorial Tribute to Eric Lenneberg, Cognition 4: 125-53.
Burge, T. (1979) Individualism and the Mental, Midwest Studies in Philosophy 4: 73-121.
Burge, T. (1982) Other Bodies, in A. Woodfield (ed.) Thought and Object: Essays on Intentionality, Oxford: Clarendon Press.
Burge, T. (1986) Individualism and Psychology, Philosophical Review 95: 3-45.
Cherniak, C. (1986) Minimal Rationality, Cambridge, Massachusetts: MIT Press.
Cummins, R. (1975) Functional Analysis, Journal of Philosophy 72: 741-765.
Davidson, D. (1987) Knowing One’s Own Mind, Proceedings and Addresses of the American Philosophical Association 60: 441-458.
Dretske, F. (1990) Does Meaning Matter?, in E. Villanueva (ed.) Information, Semantics, and Epistemology, Oxford: Basil Blackwell.
Dretske, F. (1995) Naturalizing the Mind, Cambridge, Massachusetts: MIT Press.
Fodor, J. (1994) The Elm and the Expert, Cambridge, Massachusetts: MIT Press.
Kripke, S. (1980) Naming and Necessity, Oxford: Basil Blackwell.
Levine, J. (1996) SwampJoe: Mind or Simulation?, Mind and Language 11: 86-91.
McClamrock, R. (1993) Etiology and Functional Analysis, Erkenntnis 38: 249-260.
Millikan, R.G. (1984) Language, Thought, and Other Biological Categories, Cambridge, Massachusetts: MIT Press.
Millikan, R.G. (1996) On Swampkinds, Mind and Language 11: 103-117.
Neander, K. (1996) Swampman Meets Swampcow, Mind and Language 11: 118-129.
Papineau, D. (1984) Representation and Explanation, Philosophy of Science 51.
Papineau, D. (1993) Philosophical Naturalism, Oxford: Basil Blackwell.
Papineau, D. (1996) Doubtful Intuitions, Mind and Language 11: 130-132.
Rosch, E. (1973a) Natural Categories, Cognitive Psychology 4: 328-350.
Rosch, E. (1973b) On the Internal Structure of Perceptual and Semantic Categories, in T.E. Moore (ed.) Cognitive Development and the Acquisition of Language, New York: Academic Press.
Tye, M. (1995) Ten Problems of Consciousness, Cambridge, Massachusetts: MIT Press.
[ 1 ] See Christopher Boorse’s (1976) anticipation of Swampman: ‘Suppose we discovered, for example, that at some point the lion species sprang into existence by an unparalleled saltation. One would not regard this discovery as invalidating all functional claims about lions; it would show that in at least one case an intricate functional organization was created by chance’ (75).
[ 2 ] In this paper when I apply words like ‘he’, ‘him’, and ‘his’ to Swampman, and ‘brain’, ‘eyes’, and ‘arms’, to descriptions of its parts, I am willing to allow that this is only ‘by courtesy’ (to use Karen Neander’s phrase). This is because I do not wish to claim that Swampman is a member of the species Homo sapiens, or that he’s a male, since these biological categories may not properly apply to him even if he does have intentionality. See Millikan (1996: 106-110) for a nice discussion of the reasons for this.
[ 3 ] While Robert Cummins (1975) rejects this view of functions, I think that without a teleological notion of function you don’t get normativity, which we need (see below). See Ron McClamrock (1993) for a pin-pointing of what’s wrong with Cummins’ analysis of function. I shall argue for the superiority of a ‘backward-looking theory’ of teleology, which adverts to etiological considerations, over a non-etiological, ‘forward-looking,’ dispositional theory of teleology, such as John Bigelow and Robert Pargetter’s (1987).
[ 4 ] I want to be cautious here. See Colin Allen and Marc Bekoff (1995) for a nice example of why, and arguments that, apparent mismatches needn’t always defeat the correctness of intentional characterizations of behavior.
[ 6 ] I say ‘other things being equal,’ because, if, for instance, I were transported to Twin Earth my water-thoughts would still be about H₂O, even though it’s XYZ that’s then in my actual environment.
[ 7 ] If it were possible to describe perceptual acts in terms of a taxonomy over narrow states, there would be such an act, not just a simulation or as-if act. But in my (1997b) I argue against the existence of such a taxonomy. However, if we vary the experiment, lengthening the duration between t₀, t₁, and t₂ so that perceptual acts get completed, I think that while Swampman doesn’t have the concept fly, he does have demonstrative content — maybe we could say he has an intention toward a broader class of things, a disjunctive content.
[ 9 ] Due to familiar Kripke-Putnam-type reasons, I find implausible the suggestion that Swampman has a concept which ranges over flies, flies2, flies3, etc. (disjunctive contents), though he’s had nary a contact with any of these (or properties in terms of which they can be picked out). The same goes for the claim that the correlates in him of sensory representations in us can give rise to disjunctive contents. If we suppose that physically type-identical sensory representations represent cracks to Martians and shadows to Twin Martians (cf. Burge 1986), would that physical type represent the disjunction of these in SwampMartian? No.
Michael Tye (1995: 154-155) proposes that certain of Swampman’s internal states immediately have sensory content in virtue of their causal covariation, under optimal conditions, with external states. But, since such states may covary with cracks as much as shadows, the nearest option seems to be the (rejected) disjunctive-contents one. The next best is to prevent disjunctiveness by making stipulations about what count as optimal conditions. Unfortunately, causal covariation theories alone can’t settle this issue. But teleological ones can.
[ 10 ] You may think the lack of explicit criteria for ‘anchoring’ foils my theory. But first, I think the concept is fuzzy, like the boundaries of mid-sized physical objects. Second, I’m hopeful that philosophers and sociologists in concert will produce criteria that prove as workable as the ones we use to specify object boundaries.
[ 11 ] By ‘cognizers like us’ I mean complicated creatures with a robust capacity to refer, but quite constrained cognitive resources and time — what Christopher Cherniak (1986) calls beings with ‘minimal rationality.’
[ 12 ] I take a community to be essentially an entity with a history and some normative standards, but allow that, as a limiting case, it may contain one person. The prevailing norms in a single-Swampman ‘community’ include at least those applying to some sensory states (see section III.2). But upholding norms regarding concepts like arthritis, brisket, beech tree, and aluminum likely requires accomplices who will share the ‘linguistic labor.’
I’m not sure whether the community affiliation condition applies to possible beings with infinite cognitive capacities and time (i.e., with ‘maximal rationality’). I suspect it does.
[ 15 ] Note that my claim is not just that the agreeable beech-climbing outcome reinforces the dispositions on which this piece of behavior supervenes, although it may be true that without this the habits would atrophy.
[ 16 ] I’m assuming here for simplicity that both terminologies (acts as calibrational and as occurrent contentful) can carve the cognitive system at the same temporal and spatial joints. But I don’t think this simplifying presumption obscures anything substantive so long as there can be significant overlap between the two.
Note also that I have not now repented what I said earlier, namely that a thing’s function depends on history, and goes beyond, say, its physical constitution plus current embedding context at any given moment: Specific contact with the properties and individuals is necessary for something to count as (even degenerate) learning on my account, and this goes beyond physical-constitution-plus…-at-an-instant.