Due to the various objections that have already been rebutted since this post was first made, those viewing this thread for the first time may want to read this post instead for a quicker introduction to my version of the EAAN (Evolutionary Argument Against Naturalism) that anticipates some objections this thread has already dealt with.
For the philosophically unacquainted, naturalism is the belief that only the natural world is real and that the supernatural does not exist. The evolutionary argument against naturalism (EAAN), put forth by Christian philosopher Alvin Plantinga, argues that the conjunction of naturalism and evolution is self-defeating. This isn’t an argument against evolution, but rather an argument against naturalism (since if naturalism is true, evolution is the only game in town, and if the conjunction of naturalism and evolution is self-defeating, so much the worse for naturalism).
To define some terms and abbreviations, a defeater is (roughly) something that removes or weakens rational grounds for accepting some belief; in the context of the argument, the defeater is such that one is rationally obligated to withhold the defeated belief (i.e. not believe it, whether by (1) remaining agnostic about it or (2) believing it to be false). Suppose for example I arrive in a city and see what appears to be a barn from fifty meters away. I later learn that last week some eccentric put up fake barns all over the area alongside real ones, and that these fake barns are indistinguishable from real barns when viewed at a distance of thirty meters or more. I now have a defeater for my belief that I had seen a barn. I realize I could have seen a barn, but I don’t have sufficient grounds to accept the belief anymore. The rational thing for me to do is to withhold my belief that I had seen a barn. Suppose, though, I later learn that the eccentric removed all the fake barns prior to my arrival. I would then have something that nullifies the defeating force of the defeater, i.e. a defeater-defeater.
Where N stands for Naturalism, E for Evolution, and R stands for “our cognitive faculties are Reliable,” and Pr(R|N&E) refers to the “Probability that (our cognitive faculties are Reliable given Naturalism and Evolution),” part of the argument is this.
(1) If Pr(R|N&E) is low or inscrutable, then N&E serves as a defeater for R.
(2) Pr(R|N&E) is low or inscrutable.
(3) Therefore, N&E serves as a defeater for R.
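As an illustrative aside (mine, not part of the argument itself): the argument has the form of modus ponens, and its validity can be checked mechanically by brute-forcing the truth table. Here P abbreviates "Pr(R|N&E) is low or inscrutable" and D abbreviates "N&E serves as a defeater for R":

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# P = "Pr(R|N&E) is low or inscrutable"
# D = "N&E serves as a defeater for R"
# Premises: P -> D, and P.  Conclusion: D.
valid = all(
    D  # the conclusion must hold...
    for P, D in product([True, False], repeat=2)
    if implies(P, D) and P  # ...in every row where both premises hold
)
print(valid)  # True: no row makes both premises true and the conclusion false
```

Validity of course says nothing about whether the premises are true; that is what the Defeater Thesis and Probability Thesis arguments below are for.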
Call premise (1) the Defeater Thesis and premise (2) the Probability Thesis (I’ll give arguments for both shortly).
If N&E serves as a defeater for “our cognitive faculties are reliable,” it also produces a defeater for any belief produced by our cognitive faculties, including the conjunction of naturalism and evolution. And since all of one’s beliefs are subject to defeat here, one can’t use any of one’s beliefs to defeat the defeater. We would therefore have an undefeated defeater for R.
Next I’ll turn to the two key premises.
The Probability Thesis
Why think Pr(R|N&E) is low or inscrutable? Ordinarily one might think that true beliefs help us survive. That certainly is the case if beliefs are causally relevant to behavior (e.g. I believe this plant is poisonous so I won’t eat it). But if the truth of our beliefs has no such causal relevance, then such a factor will be invisible to natural selection. The content of our beliefs could be anything, true or not, and it wouldn’t affect our behavior. Whether the belief's content is 2 + 2 = 4, 2 + 2 = 67, or 2 + 2 = 4096 would make no difference to how we behave. If that is true, then Pr(R|N&E) is low.
Enter something called semantic epiphenomenalism. Call the syntax of a belief its neurophysiological properties; the number of neurons involved in a belief, their rate of firing, etc. Call the semantics of a belief its content, e.g. the belief that p is true for some proposition p. Such a proposition might be, for example, “snow is white.” Semantic epiphenomenalism says that while the syntax of our beliefs is causally relevant to behavior, the semantics of our beliefs are not. Under naturalism, is semantic epiphenomenalism true? It would seem to be. If naturalism is true, then materialism with respect to human beings is true (i.e. we are purely physical beings, having no nonphysical minds or souls). Plantinga writes, “it is extremely hard to envisage a way, given materialism, in which the content of a belief could get causally involved in behavior. According to materialism, a belief is a neural structure of some kind--a structure that somehow possesses content. But how can its content get involved in the causal chain leading to behavior? Had a given such structure had a different content, one thinks, its causal contribution to behavior would be the same. Suppose my belief ‘naturalism is all the rage these days’--the neuronal structure that does in fact display that content--had had the same neurophysiological properties but some entirely different content: perhaps ‘nobody believes naturalism nowadays.’ Would that have made any difference to its role in the causation of behavior? It is hard to see how: there would have been the same electrical impulses traveling down the same neural pathways, issuing in the same muscular contractions. It is therefore exceedingly hard to see how semantic epiphenomenalism can be avoided, given N&E.” If materialism with respect to human beings is true, the causal relevance of our beliefs’ semantic content would be as illusory as libertarian free will.
Putting the argument in deductive form and assuming materialism with respect to human beings:
(1) If holding the syntax of beliefs constant while varying the semantics of beliefs would not change behavior, then the semantics of the beliefs are not causally relevant to behavior, even if the syntax of the beliefs are.
(2) Holding the syntax of beliefs constant while varying the semantics would not change behavior.
(3) Therefore, the semantics of the beliefs are not causally relevant to behavior even if the syntax of the beliefs are (follows from 1 and 2).
The above deductive argument is valid, i.e. the conclusion follows logically and inescapably from the premises. So are the premises true? The first premise is true almost by definition; what I mean by semantic properties of belief being causally relevant to behavior is that, all other pertinent factors (including syntactic properties) held constant, sufficiently different semantics would produce different behavioral outcomes. Syntax is a pertinent factor because semantic epiphenomenalism says that beliefs cause behavior by virtue of their syntax and not their semantics.
My justification for the second premise is illustrated with Plantinga’s little thought experiment (holding the syntax constant while varying the semantics). With the same syntactic properties, we have the same physiochemical properties of the human body and therefore get the same behavioral results (muscle contractions etc.); thus whatever semantic content that gets generated from the physiochemical processes doesn’t appear to matter. While the electrochemical reactions in the brain may bring about semantic content, the semantic content itself doesn’t seem to cause anything.
It seems that under naturalism, semantic epiphenomenalism is true. But of course the truth or falsehood of a belief has to do with its semantic content. Under semantic epiphenomenalism, the semantic properties of the belief could be anything and it wouldn’t matter (e.g. if semantics supervenes on syntax, this supervenience relation that determines semantics could yield any semantic belief and it wouldn’t matter) with regard to how we behave. To help guard against bias towards our own species, think not of us but of aliens on some other planet for whom N&E&SE is true. Suppose also for these aliens there is some supervenience relation such that semantics supervenes on syntax. Let RA represent “the cognitive faculties of the aliens are reliable.” Let A represent the following:
A: the semantic properties of the beliefs could literally be anything at all and it wouldn’t matter (e.g. if semantics supervenes on syntax, this supervenience relation could yield any semantic belief and it wouldn’t matter) with regard to how one behaves.
(2.5a) If (N&E&SE entails A), then (Pr(RA|N&E&SE) is low or inscrutable)
(2.5b) If Pr(RA|N&E&SE) is low or inscrutable, then Pr(R|N&E&SE) is low or inscrutable.
(2.5c) N&E&SE entails A.
(2.5) Therefore, Pr(R|N&E&SE) is low or inscrutable.
Argument for (2.5a): In the case of the aliens it seems we don’t know what their semantic beliefs are like, since not even the supervenience relation is deducible from N&E&SE alone. Given that N&E&SE entails A, the supervenience relation could produce any semantic beliefs at all and it wouldn’t matter. For example, whether the aliens believed that 2 + 2 = 3, 2 + 2 = 4, 2 + 2 = 5, or 2 + 2 = 906 would make no difference. Even if the semantic beliefs were “garbage” beliefs unrelated to the external world (as in dreams), the semantics wouldn’t affect behavior at all. Given all that, Pr(RA|N&E&SE) appears low, or at best inscrutable.
Argument for (2.5b): there doesn’t appear to be any relevant difference between our case and that of the aliens for Pr(R|N&E&SE) to be anything other than low or inscrutable. Imagine some intelligent and rational alien learns that humans exist and optimistically assumes R with respect to us, but if the alien were to learn that N&E&SE is true with respect to us, the rational alien would believe that Pr(R|N&E&SE) is low or at best inscrutable, just as we would for Pr(RA|N&E&SE).
Argument for (2.5c): that the semantic belief could be anything without affecting behavior follows inescapably from the definition of semantic epiphenomenalism.
I thus conclude that Pr(R|N&E&SE) is low or inscrutable. For those who reject 2.5 and the above argument, which premise is false and why?
For those who have a knack for analytic philosophy and want a rigorous view of what I’m talking about, here’s exactly what I mean (for those who don’t have such a knack, you may want to skip this paragraph and the fancy symbols below it). Let s be some syntax, and SL() be a supervenience law function such that SL(s) yields some semantic belief B. Let [All]x symbolize “for any x” and [Ex]x symbolize “there exists an x.” Let A be a predicate such that Ax means that x produces adaptive behavior and that x is/was selected by natural selection. Let p □→ q represent the counterfactual conditional, “if p were true then q would be true,” and let p → q symbolize the material conditional. My claim for A is as follows:
[All]s(As → [All]B((SL(s) = B) □→ As))
The argument for Pr(R|N&E) being low/inscrutable can thus go as follows:
(2.1) N&E entails SE.
(2.2) If (N&E entails SE), then N&E entails N&E&SE.
(2.3) N&E entails N&E&SE (follows from 2.1 and 2.2).
(2.4) If Pr(R|N&E&SE) is low or inscrutable, then Pr(R|N&E) is low or inscrutable (follows from 2.3; since N&E entails N&E&SE anyway).
(2.5) Pr(R|N&E&SE) is low or inscrutable.
(2.6) Therefore, Pr(R|N&E) is low or inscrutable (follows from 2.4 and 2.5).
The above argument is deductively valid. Premise (2.1) is justified with the above argument (assuming materialism with respect to human beings, in which the conclusion is "semantics of the belief are not causally relevant to behavior even if the syntax of the beliefs are"). Premise (2.5) is justified by the paragraph preceding the argument (the semantic content of the belief could be anything and it wouldn't matter etc.). The gist of it is that if N&E were true then semantic epiphenomenalism would be true. But the likelihood of R given N&E and semantic epiphenomenalism is low/inscrutable, and so it follows that the probability of R given N&E is low/inscrutable. I contend that the five premises are more plausible than their denials. If this is mistaken, which premise is not more plausible than its denial?
The Defeater Thesis
Plantinga argues that the Defeater Thesis is true by analogy, the idea being that if the analogy holds, so does the Defeater Thesis. To borrow a bit from Plantinga, suppose I initially believe Sam has reliable cognitive faculties. I later learn that Sam has ingested the anti-reliability drug XX, and that the vast majority of those who take the drug no longer have reliable cognitive faculties. This serves as a defeater for my belief that Sam’s cognitive faculties are reliable.
The Clint Scenario. To use another example (my own), suppose I have a friend named Clint and some tragic event has destroyed the reliability of his cognitive faculties. Luckily there is a machine that can rebuild his cognitive faculties such that they function in a reliable fashion if the machine is set on setting #1. The technicians use the machine to help poor Clint. Let RC stand for “Clint’s cognitive faculties are reliable.” I naturally assume that the technicians use trusty setting #1, and I therefore believe that RC is true. Then to my dismay I learn that they accidentally used the not-so-trusty setting #2. If the machine using setting #2 makes the probability of RC low or inscrutable, I have a defeater for my belief that RC is true.
The Alien Scenario. To use another analogy (borrowed a bit from Plantinga), suppose I know that some alien creatures with evolved intelligence (they can think, form beliefs, change their minds, etc.) have formed, and I initially presume that the cognitive faculties of the aliens are reliable (call this RA). Apart from simple optimism, my only reasons for believing RA rely on RA being true. For example, I know that according to the aliens’ own cognitive faculties, there is a huge pile of evidence that their cognitive faculties are reliable. Accepting this as good evidence is like asking a man if he is honest and taking his answer “yes” as proof of his honesty. In any case, I later learn that the evolutionary processes that produced their cognitive faculties are as trusty as setting #2 of the machine in the Clint Scenario. That is, the cognitive faculties of the aliens have evolved in such a way that the probability of RA is low or inscrutable. This would serve as a defeater for my initial presumption that RA is true.
Yet if we evolved in such a fashion, it seems we have a defeater for our own cognitive faculties being reliable. Hence, Pr(R|N&E) being low or inscrutable is a defeater for R. My defense of the Defeater Thesis goes something like this:
(1.1) If RC is defeated in the Clint Scenario, then RA is defeated in the Alien Scenario.
(1.2) If RA is defeated in the Alien Scenario, then R is defeated in the Probability Thesis scenario.
(1.3) RC is defeated in the Clint Scenario.
(1.4) Therefore, R is defeated in the Probability Thesis scenario.
Line (1.4) is of course the Defeater Thesis. The above argument for the Defeater Thesis is deductively valid, i.e. the conclusion logically and inescapably follows from the premises.
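As before, the validity claim (and only the validity claim) can be verified by brute force. Writing C for "RC is defeated in the Clint Scenario", A for "RA is defeated in the Alien Scenario", and R for "R is defeated in the Probability Thesis scenario", this sketch (mine, purely illustrative) searches for a counterexample valuation and finds none:

```python
from itertools import product

def implies(a, b):
    # Material conditional: false only when a is true and b is false
    return (not a) or b

# C = "RC is defeated in the Clint Scenario"
# A = "RA is defeated in the Alien Scenario"
# R = "R is defeated in the Probability Thesis scenario"
# Premises (1.1) C -> A, (1.2) A -> R, (1.3) C; conclusion (1.4) R.
counterexamples = [
    (C, A, R)
    for C, A, R in product([True, False], repeat=3)
    if implies(C, A) and implies(A, R) and C and not R
]
print(counterexamples)  # []: no valuation satisfies all three premises while falsifying R
```

So a challenge to the Defeater Thesis must target a premise, not the inference.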
To avoid misunderstanding, I should note that premises (1.1) and (1.2) are material conditionals. A material conditional takes the form "If p, then q" and is equivalent to "It is not the case that p is true and q is false," where p and q are statements that are either true or false. While the material conditional is a weak-sounding claim, an "if p, then q" material conditional being true is good enough for deductive arguments like this one because if p really is true, then q is true (since q couldn't be false under such a situation).
Since we're dealing with material conditionals, premise (1.1) basically means "It is not the case that both (RC is defeated in the Clint Scenario) and (RA is not defeated in the Alien Scenario)." Stated this way, the first two premises are, I think, more plausible than their denials because the relevant similarity between the Clint Scenario and Alien Scenario in premise (1.1) seems too great (there doesn't appear to be a relevant difference); likewise with Alien Scenario and Probability Thesis scenario of premise (1.2). I therefore maintain that the Defeater thesis is correct. For those who challenge the Defeater Thesis, I kindly ask which premise of the above deductive argument is wrong and the justification for it being wrong, because I can’t think of a successful argument against any premise. If for example one thinks the second premise is false, I ask that a relevant difference be pointed out between the Alien Scenario and the Probability Thesis scenario (note that interspecific chauvinism does not appear to be a good reason to reject premise 1.2).
N&E is in an interesting way self-defeating. As I noted earlier, this isn’t an argument against evolution, though it could be considered an argument against naturalism. If naturalism is true, evolution is the only game in town, and if the conjunction of naturalism and evolution is self-defeating, so much the worse for naturalism. Indeed, I consider this a good argument for naturalism being irrational. That aside, what are people’s thoughts about the argument? Does it successfully show that the conjunction of naturalism and evolution is irrational? If not, why not? If for example the Defeater Thesis is mistaken, which premise of my argument for it is false and why?
This thread is extremely long, so while I apologize for the length of this post, I think it's important for future contributors to not simply regurgitate objections I have already dealt with. So on to some objections:
Objections and rejoinders: The Probability Thesis
Objection: There's only one possible set of semantic beliefs for any given syntax.
Reply: I think it's eminently plausible that by nomic (physical) necessity there's only one possible set of semantics for any given syntax. However, the argument need only claim that it is metaphysically possible for a given syntax to be associated with a different semantics, and that appears no more implausible than the claim that our universe could have had different physical laws. If one wants to claim that the supervenience is so strong that it is metaphysically or logically impossible for a given syntax to have had a different semantics, I think that would require some sort of argument, since the burden of supporting impossibility claims rests on the person who makes them. Furthermore, it might be said that all the argument really requires is a different possible semantics for a given behavior. On that note, consider the following.
Suppose a mad scientist creates a mind-control device that not only controls the person’s bodily actions but also effectively renders the semantic properties of the person’s beliefs an epiphenomenon (it causes a person to have certain semantic beliefs but prevents those semantics from influencing behavior). Let’s call the machine an “artificial semantic epiphenomenal” (ASE) device. For any behavior the ASE device forces its victim to do, it can create just about any semantic belief. For example, suppose the mad scientist implants the ASE device in Bill and the device forces the thirsty Bill to drink a glass of water while simultaneously making him believe that, “Drinking this water will kill me and I don’t want to die.” The ASE device can even produce a semantic belief that has little to do with the forced behavior, such as making Bill believe that “grass is air” or that “1+1=3” at the same time it forces Bill to drink the water. But if it is physically possible for a technological device to do this (and it seems to be), under semantic epiphenomenalism it seems at least metaphysically possible for the moving atoms of neurophysiology to do the same thing. One could thus put forth the following argument:
(1) If the ASE device is physically possible, then under SE it is at least metaphysically possible for just about any semantic belief to be associated with some given behavior (since semantic properties of beliefs are epiphenomenal).
(2) The ASE device is physically possible.
(3) Therefore, under SE it is at least metaphysically possible for just about any semantic belief to be associated with some given behavior.
Argument for (1): if a technological device can do this, there doesn’t appear to be anything special about organic molecules or biological ontology in general that would render it metaphysically impossible for them to do same thing; a different supervenience relation seems conceivable.
Argument for (2): such a device comports with all known physical laws.
Recall that for any behavior the ASE device forces its victim to do, it can create just about any semantic belief associated with it. A similar thing holds for semantic epiphenomenalism: for any behavior B and any semantic belief S, there exists some metaphysically possible neurophysiological process that produces both S and B, which means that for any given behavior our semantic beliefs could be just about anything. But if that’s true, the claim that it’s metaphysically impossible for a syntactic structure to have a different semantic content looks less plausible. What’s more, the ASE device argument provides additional grounds for thinking that Pr(R|N&E&SE) is low.
Objection: Semantic epiphenomenalism is false.
Reply: Certainly I agree that it is false, but it seems like it would be true if naturalism were true. It's hard to see how one's behavior would be different if all the neurophysiological properties were held constant but the semantic properties varied. Perhaps one thinks that the semantics just is the syntax. I do not find that at all plausible; to me it seems that syntax and semantics are as distinct as an electron's mass and electric charge. Still, let's ignore that. It turns out we don't need SE to justify the Probability Thesis; we can construct a scenario similar to the ASE case.
Suppose a mad scientist creates an artificial neurophysiological device (ANPD), a many-tentacled device implanted near Smith’s brainstem that controls both his thoughts and behavior. The mad scientist can remotely control the ANPD’s electrochemical processes to vary Smith’s beliefs and behavior in innumerable and diverse ways. For example, Smith is dehydrated, and the mad scientist, wanting his victim to be in good health, uses the ANPD to force Smith to drink some water while simultaneously making him believe “I am thirsty and this water will quench my thirst.” The second time Smith is dehydrated, the mad scientist uses a different electrochemical setting to make Smith believe “drinking this water will grant me superpowers in the afterlife” while producing the same drinking behavior (and suppose this belief is false). Here, the electrochemical process that produces fitness-enhancing behavior also produces a false belief. The ANPD can even produce “garbage” semantic beliefs that have little to do with the forced behavior, such as making Smith believe that “grass is air” or that “1 + 1 = 3” at the same time it causes Smith to drink the water. The third time Smith is dehydrated the mad scientist does just that, causing Smith to drink the water while also causing him to believe “1 + 1 = 3.” Indeed, the mad scientist can associate just about any belief with the same drinking behavior. Such an artificial neurophysiological device is not only metaphysically possible, but it also seems to be physically possible (given that beliefs and behavior can be brought about by electrochemical means).
One could substitute all sorts of semantic beliefs generated by the mad scientist’s device into the above argument. It would seem then that for just about any semantic belief, there exists some metaphysically possible set of moving atoms that generates both that semantic belief and whatever behavior is desired (such as the thirsty human drinking the water). But if that’s true, the claim that it’s metaphysically impossible for a syntactic structure to have a different semantic content looks less plausible. And even if that impossibility claim were true, we'd still have something very much like SE.
- If the ANPD is physically possible (e.g. to impose the semantic belief that “drinking this water will kill me and I don’t want to die” on the man while also making him drink the water), then it is metaphysically possible for neurophysiological processes to do the same thing the device does.
- The ANPD is physically possible.
- Therefore, it is metaphysically possible for neurophysiological processes to do the same thing the device does.
For example, think of an alien frog whose neurophysiology is very different from an earth frog’s. In light of the ANPD scenario, we can conceive of a neurophysiology that causes the alien frog to snap out its tongue to capture the alien fly while also causing the frog to believe that “eating this fly will kill me and I don't want to die, therefore I should eat it.” We might say the alien frog is being irrational in its behavior, but natural selection does not select for rationality; it selects for advantageous behavior. We can even conceive of the alien frog having semantic content that has nothing to do with the external environment, as in the third case of the ANPD scenario, and yet the neurophysiology still causes the alien frog to eat the alien fly. Even if we accept that syntax and semantics are the same thing, in a purely physical view of the mind the spirit of semantic epiphenomenalism remains: for any given behavior B, there are innumerably many semantic contents C—even C’s wildly unrelated to the external environment—that could be associated with B. One could argue that the relation between semantic content and behavior is in this way functionally equivalent to SE in spite of the falsity of SE. Call this view semantic pseudo-epiphenomenalism (SPE). The ANPD scenario suggests that given naturalism, if semantic epiphenomenalism is not true, then semantic pseudo-epiphenomenalism is, with both giving us SE-like behavior. Both semantic epiphenomenalism and semantic pseudo-epiphenomenalism permit a great divorce between beliefs and behavior (think of the third and final case in the ANPD scenario).
Because SPE is functionally equivalent to SE, and given the enormous variety of diverse beliefs that could be associated with a given behavior (“bachelors are married,” “grass is air,” “2 + 2 = 1,” “2 + 2 = 2,” “2 + 2 = 3,” etc.) an evolving race of alien creatures afflicted with SPE has a low probability of evolving reliable cognitive faculties just as if they were afflicted with SE. In sum, naturalism entails that either SE or SPE is true, and since Pr(R|N&E&SE) and Pr(R|N&E&SPE) are low or at best inscrutable, it follows that Pr(R|N&E) is likewise low/inscrutable.
In response one could put forth the following rebuttal. Even though naturalism unavoidably entails an SE-type problem—whether via semantic epiphenomenalism or semantic pseudo-epiphenomenalism—the fitness-enhancing neurophysiological properties that are most likely to be selected by natural selection (say that a certain neurophysiology is selectable just in case it’s likely to be selected by natural selection) happen to be those that are truth-conducive. The ANPD scenario is contrived and produces certain belief-behavior pairs that are unlikely to obtain in real human physiology. The most selectable and efficient way for neurophysiology to produce advantageous behavior also produces true beliefs. Thus, even though the SE-type situation exists for semantics and behavior, luckily for us the physiological relation between semantics and behavior is such that true beliefs usually obtain.
All that may be true, but as an objection against the Probability Thesis it falls short. A major problem is that even if a favorable physiological relation between beliefs and behavior obtains for our species, such a favorable relation does not appear to be knowable from N&E alone. To illustrate, consider a slightly modified form of the alien scenario, where the neurophysiology of these creatures is quite literally alien to us and radically different from our own (though we don’t know much more about it). Given this, the ANPD scenario, and the SE-like situation for beliefs and behavior, for all we know the most selectable and efficient fitness-enhancing alien neurophysiology available to natural selection has a physiological relation between beliefs and behavior that is wildly different from what human naturalists believe about themselves. So there are possible worlds where the most selectable alien neurophysiology is such that the fitness-enhancing neurophysiology produces mostly false beliefs, as in the ANPD scenario. Of course, there are also possible worlds where the most selectable alien neurophysiology produces mostly true beliefs. But there’s no way to establish on N&E alone that the truth-conducive neurophysiology is more selectable, in part because the alien neurophysiology is too mysterious and too radically different from our own.
Moreover, if we temporarily forget our own beneficial belief-behavior relationship to calculate the likelihood of RA on just N&E and thus without any (further?) background knowledge about what sorts of physiological relations between beliefs and behavior obtain in actual N&E worlds, we would have no reason to suppose Pr(RA|N&E) is high regardless of whether we assume semantic epiphenomenalism or semantic pseudo-epiphenomenalism. Indeed, in light of the ANPD scenario the semantic beliefs of the aliens could (at least in the epistemic sense) be just about anything, and thus Pr(RA|N&E) is low or at best inscrutable. Similarly, Pr(R|N&E) is also low/inscrutable.
One could concede that the probability of R given N&E is low but also claim we know some proposition P (perhaps that the physiological relation between beliefs and behavior happens to be benevolent for our species) such that Pr(R|N&E&P) is high. Therefore, Pr(R|N&E) being low/inscrutable does not defeat R for the evolutionary naturalist. This however would be an objection against the defeater thesis rather than the probability thesis, so it will not be discussed in this section.
Objections and rejoinders: The Defeater Thesis
Objection: There is some P such that Pr(R|N&E&P) is high, and we have excellent evidence for P.
Reply: It doesn't quite work. To illustrate, suppose I ingest drug XX. Those who ingest it have a high probability of the drug rendering their cognitive faculties unreliable, though those so afflicted are incapable of detecting their own unreliability. It seems you and I have a defeater for R with respect to myself. Suppose though I come to believe I have passed an extensive battery of tests to establish my cognitive reliability. Would this provide a defeater-defeater? It would not, because I have come by the belief in the cognitive tests after I have ingested drug XX. Similarly, any belief that P we come to accept is accepted after evolutionary naturalism has already affected our cognitive faculties. As long as the Probability Thesis holds, the "just add P" strategy won't work.
Objection: We have non-propositional warrant that outweighs the Probability Thesis. This objection attacks (1.2); RA is defeated for us in the Alien Scenario but not for the aliens. For the aliens RA is properly basic, and has non-propositional warrant for them such that Pr(RA|N&E) being low doesn't defeat RA for them.
Reply: I don't think this works, but let's ignore that. We can use a different set of scenarios to buttress the Defeater Thesis. Scenarios S1A to S6A below feature drug XX, a drug that renders one’s cognitive faculties unreliable for a high percentage of those who take it, though those so afflicted are incapable of detecting their own cognitive unreliability. A small percentage of people have a gene called “the blocking gene” that produces a protein that blocks the reliability-destroying effects of drug XX, but nobody else is immune to the drug. A few scenarios make reference to the XX-mutation, a mutation that causes one’s body to naturally produce and release drug XX into the body soon after one is born.
Scenario (S1A): I know that my friend Sam has ingested drug XX, a drug that renders one’s cognitive faculties unreliable for a high percentage of those who take it, though those so afflicted are incapable of detecting their own cognitive unreliability. I know also however that Sam later comes to believe that an extensive battery of tests has established his cognitive reliability, though I have no independent reason for thinking this occurred. And since Sam obtained his belief about the cognitive tests long after he ingested drug XX, I conclude that the belief was likely produced by unreliable cognitive faculties, and I have a defeater for my belief that Sam's cognitive faculties are reliable.
Scenario (S2A): I as a three-year-old child ingest drug XX while being aware of its potential effects. I know of no relevant difference that distinguishes my case from Sam's. My earliest memories are of Sam's case, of learning about drug XX, and of ingesting it. Some years after the incident I come to believe I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after I ingested drug XX, I conclude that my belief was likely the product of unreliable cognitive faculties and that I have a defeater for my belief that my cognitive faculties are reliable.
Scenario (S3A): A doctor injected me with drug XX soon after I was born (the doctor mistakenly thought he was injecting an important vaccine), and I come to believe the following. I am a renowned scientist who has built a machine that I know is capable of reliably detecting whether and when drug XX entered a person's bloodstream. I administer the test to myself and the machine reports that drug XX entered my bloodstream at around the time I was born. Later I come to believe that I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after drug XX entered my bloodstream, I conclude that I have a defeater for my belief that my cognitive faculties are reliable.
Scenario (S4A): Naturalistic evolution brought about a mutation that causes my body to naturally produce and release drug XX soon after I am born, and I come to believe the following. I am a renowned scientist who has built a machine that I know is capable of reliably detecting whether and when drug XX entered a person's bloodstream. I administer the test to myself and the machine reports that drug XX entered my bloodstream at around the time I was born. Later I come to believe that I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after drug XX entered my bloodstream, I conclude that I have a defeater for my belief that my cognitive faculties are reliable.
Scenario (S5A): I live on planet XX with a hundred humanoid species. Naturalistic evolution brought about the XX-mutation for some (though not all) races. For the races afflicted with the XX-mutation, an SE-like phenomenon gives them fitness-enhancing behavior and even the ability to create technology. For example, one race is highly skilled in producing hydropower plants yet believes that water consists of submicroscopic strawberries. Only a small percentage of races that have the XX-mutation also have the blocking gene. I come to believe the following. The races of my planet have just begun to discover drug XX, the XX-mutation, and the blocking gene. I initially believe that my species has evolved reliable cognitive faculties and that there is overwhelming evidence for this. I am one of the leading scientists of my species and I discover that drug XX was released into my body as soon as I was born. Not only that, but the same problem holds true for the rest of my race and thirty-nine other races on my planet due to the XX-mutation. A plague erupts on the planet, eradicating all species except for the forty races that I believe have the XX-mutation. I conclude that the probability of my humanoid cognitive faculties being reliable given that I am a product of naturalistic evolution on this planet is low. Later I come to believe that I and other members of my race have taken an extensive battery of tests that establish our cognitive reliability, but since this belief came long after drug XX entered my bloodstream, I conclude that I have a defeater for my belief that my cognitive faculties are reliable.
Scenario (S6A): The only humanoid species on my planet is Homo sapiens, and all of us have the XX-mutation. I come to believe the following. Via a nifty combination of scientific and philosophical argumentation, it is proven beyond all reasonable doubt that naturalistic evolution entails that the XX-mutation is inevitably a part of any humanoid's genetics. Though there is a small chance of a humanoid species also having the blocking gene as part of its normal genetics, no other humanoid species would evolve the blocking gene. I conclude that the probability of my humanoid cognitive faculties being reliable given that I am a product of naturalistic evolution is low. Later I come to believe that there is overwhelming evidence for my cognitive reliability (e.g. I believe credible scientists have told me that we all have the blocking gene), but since this belief came after drug XX entered my bloodstream, I conclude that my belief in the blocking gene etc. was likely produced by unreliable cognitive faculties, and that I have a defeater for my belief that my cognitive faculties are reliable.
Scenario (S7A): The Probability Thesis is true and Pr(R|N&E) is low, but I do not initially believe this and instead think I am the product of a sort of evolution that makes my cognitive reliability very likely. Later, however, I study philosophy and see for myself that the probability of my humanoid cognitive faculties being reliable given that I am a product of naturalistic evolution is low. Afterwards I come to believe I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after N&E had already affected my cognitive faculties, I conclude that I have a defeater for my belief that my cognitive faculties are reliable.
So above we have a slippery slope of scenarios. The idea is that if R is defeated in (S1A), then it is defeated in (S2A), and if it is defeated in (S2A), then it is defeated in (S3A), and so forth. If R is not defeated in (S7A), where does the slippery slope stop and why? Whether naturalistic evolution impairs cognitive faculties through a mutation producing drug XX or through some other physiological process does not seem to matter.
Of course, Pr(R|N&E) being low is only one half of the Probability Thesis. For Pr(R|N&E) being inscrutable, we can construct mirror scenarios (S1B), (S2B), ..., (S7B).
Scenario (S1B): I know that my friend Sam has ingested drug XX, a drug that renders one's cognitive faculties unreliable for some of those who take it (and those so afflicted are incapable of detecting their own cognitive unreliability), but I don't know the percentage. The likelihood that Sam's cognitive faculties are reliable is inscrutable to me--I know only that it's low, high, or somewhere in between. I know also however that Sam later comes to believe that an extensive battery of tests has established his cognitive reliability, though I have no independent reason for thinking this occurred. And since Sam obtained his belief about the cognitive tests long after he ingested drug XX, I conclude that I have a defeater for my belief that Sam's cognitive faculties are reliable.
[The scenarios in between skipped for space]
Scenario (S7B): The Probability Thesis is true and Pr(R|N&E) is inscrutable. Moreover, I study philosophy and see for myself that Pr(R|N&E) is inscrutable; I know only that Pr(R|N&E) is low, high, or somewhere in between. Later I come to believe I have taken an extensive battery of tests that establish my cognitive reliability, but since this belief came long after N&E had already affected my cognitive faculties, I conclude that I have a defeater for my belief that my cognitive faculties are reliable.
The same problem of where the slippery slope stops and why arises.
Edit 2010-11-22-MO: Made some edits to strengthen the justification for the Probability Thesis and the Defeater Thesis.
Edit 2010-11-29-MO: Put in a 6-line semi-rigorous argument for Pr(R|N&E) being low or inscrutable.
Edit 2011-3-27-SU: Used a different format for the sub-arguments that folks have been using recently.
Edit 2011-05-11-WE: A few formatting changes, and some relatively minor wording edits.
Edit 2011-06-03-FR: Minor changes to the justification of 2.5 and the Probability Thesis.
Edit 2011-06-08-WE: Wording change to the justification of 2.5
Edit 2011-07-02-SA: Added an argument for 2.5
Edit 2012-01-25-WE: Added the "Objections" section
Edit 2012-02-20-MO: Modified the "Objections" section
Edit 2012-04-01-SU: Added the "Preface" section