McDowell has a new article out, in the collection Disjunctivism: Perception, Action, Knowledge. The title of the article is "The Disjunctive Conception of Experience as Material for a Transcendental Argument". It was previously delivered as a lecture in 2006. As best I can tell, there's nothing new in this article; McDowell just covers some of the same material he's discussed in his earlier writings about disjunctivism, skepticism, and transcendental arguments.
McDowell has previously been hesitant to endorse any "transcendental arguments" (since there shouldn't be a need for them if we can therapize away the worries instead), and he's generally just as hesitant here; I'm pretty sure all of the material on Stroud and TAs was already covered in "Reading McDowell". The one thing that did leap out at me is p.382, where McDowell claims that Strawson's reading of the Transcendental Deduction, or perhaps the Deduction itself, might serve as a convincing transcendental argument that any potentially self-conscious state or episode is an episode which claims objective purport. So McDowell is slightly less hesitant to endorse "transcendental arguments" than he has been in times past, but only slightly, and he doesn't actually offer any arguments of his own under that banner, but merely points to the work of others.
The meat of the article is devoted to continuing an argument with Crispin Wright about perceptual skepticism. It all read like stuff I've read before.
I've wasted several hours puzzling over Wright's response, rather than sleeping. I now have a headache. 2400 words worth of dreck about Wright and disjunctivism are hidden behind Rena.
I just don't get Crispin Wright's response. It seems to be as wrong as anything can be; he seems to totally miss how the disjunctive view is supposed to answer perceptual skepticism.
What could make someone want to say something like this, from Wright's response to McDowell: "There is surely no plausibility whatever in the thought that the mere perceptual apprehension of the hand in front of my face is all it takes to put me in position rationally to claim knowledge that it is there." (Emphasis his; I'm not sure what the emphasis is there for. Wright seems to have some pretty crazy ideas about what being "rational" demands, or perhaps he means something remarkable by "claim". In any case I don't know why his emphasis stops short of "knowledge"; it doesn't seem like a publisher's error, since I'm going off of the draft, which was presumably from Wright's own fingers.)
Wright doesn't think it's rational to claim that I know I have hands, when I can see my hands in front of me. He claims that Moore's argument is question-begging, but he seems to hold that it isn't even valid.
Wright repeatedly treats the "disjunctive view" as if it meant that the object of one's perception is the disjunction "p or the-mere-appearance-that-p" -- as if this was the information conveyed by a perception that p, on the disjunctive view. But this is just what the disjunctive view denies. On the disjunctive view, the "highest common factor" is not a possible object of perception; only the disjuncts separately are. I can see a barn, or I can see a fake barn; I can't see either-a-barn-or-a-fake-barn. I might, on reflection, withhold judgement about which of the two I saw, but this would not be to judge that I had seen a third thing, the disjunction of the two. I saw the one, or I saw the other; I am warranted in holding that I saw one of them, and have no grounds for holding that I saw the other. To withhold judgement on which is which may or may not be justifiable, depending on what else one holds true. This is unlike the "highest common factor" view, wherein any inference from what I perceive to what is the case is groundless and without warrant.
If I take myself to have seen a barn, then I'm either entitled (non-defeasibly) to the claim that There's a barn over there, or I am wrong about what I take myself to have seen. And the same holds with the fake barn: If I take myself to have seen an illusory barn, then I am either entitled to the claim that I apprehend the mere shade of a barn there or I'm wrong about what I've taken myself to have seen. In either case, if I saw what I took myself to have seen, then it does follow that what I've seen is the case, and Wright's skeptic is refuted. If what I took to be the case is not, in fact, the case, then I was wrong about what I was entitled to hold. But this is not a case of a defeasible claim being abandoned as more information comes to light; if I was wrong about what it was I saw, then I never had any entitlement to the claim I took myself to be entitled to -- not even a defeasible one. If I take myself to have seen a barn, and what I actually saw was something else (or just a hallucination), then I was flat wrong. I never had even defeasible warrant for my claim, and so there's no warrant that I could be abandoning when it comes to light that I was wrong about the barn. But if the skeptic wants to suppose that all of my perceptual judgements are false (and so any entitlements I ascribe to myself are in error), then this clearly begs the question in favor of the skeptic. If I am wrong about practically everything I take myself to know, then I'll grant that I'm also wrong about what I take to be entailed by what I know. But Wright's skeptic gives me no reason to think that I am wrong about everything I take to be the case by way of perception. Hence if I take myself to have seen various things, and I am right about some of them, the skeptic is defeated. If I am wrong about all of them, then the skeptic's conclusion is true, but he has provided no reason to believe that it is so. 
If I withhold judgement on all of my perceptions, refusing to either believe that they are veridical or that they are illusory, then Wright's skeptic is correct: I don't know if I've ever perceived anything. But this is trivially true, since knowledge that p requires belief that p; if I refuse to believe all my purportedly perceptual claims, then I can't know anything that way, either. But no argument is given for why I oughtn't to trust my senses, since on the disjunctive view my senses provide me with opportunities to gain warranted knowledge of how things stand, in some cases. Thus, on the disjunctive picture of perception, whatever the position I take with respect to my purported percepts, Wright's skeptical argument does not go through.
Wright writes: "Direct awareness of states of affairs that make P true is one thing; warranted belief that P, for one fully apprised of what it takes for P to be true, is something else. One plausible additional necessary condition for the latter..." and then goes on to some blather about "responsibility" and "rationality". But this just shows that he rejects the disjunctive picture of experience. For the disjunctivist, the only additional "necessary condition" for warranted belief that p, if I have been made directly aware that p, is that I believe that what I intuit is actually how things stand in the world.
Wright considers a hypothetical in which there is a 50-50 chance he is hallucinating that there is a cricket match going on, and claims that in this scenario he is neither justified in claiming that there is a cricket match going on, nor does he know same, and neither can he say the opposite. But there seems to simply be no argument for this conclusion; it is simply called "irrational" to reason otherwise. But I am inclined to say that Wright would be perfectly justified in claiming (even claiming to know) that there is a cricket match going on in such a scenario; it is just that there are even odds that he would be wrong about this. And I'd say the same about his right to claim that he's hallucinating that there is a cricket match going on. If he wants to withhold judgement on the matter, this also strikes me as rational, given that nothing great rides on whether or not there is a cricket match going on -- in this case he will have failed to gain some bit of knowledge which he was entitled to, but he has also avoided gaining a false belief that he might have been led to hold. None of these three options strikes me as more or less rational than the others, given only the details Wright gives us; he seems to be afraid of gambling, lest he possibly be in error. Wright claims that only the third option is rational, but I can't find an argument for this conclusion, unless it's simply supposed to be that perception rests on a defeasible inference, and in this situation it is so defeased, and so the inference would be a bad one. Which, again, would simply be to reject the disjunctivist picture without argument; for the disjunctivist there is no inference involved in perceiving that p.
Wright occasionally states the disjunctive position correctly, but then immediately goes on to act as if the "phenomenal state" common to the two disjuncts was all that was actually perceived. Which is just to go back to a "highest common factor" view. From the close of Wright's paper: "It is the supposedly possible phenomenology of subjective indistinguishability, rather than a supposed "highest common factor", that is at the base of the problem." Wright seems unaware that treating this "possible phenomenology of subjective indistinguishability" as an object of perception is just what the disjunctivist calls the "highest common factor view"! No doubt, the disjunctive view is not a helpful way to dissolve skepticism, if it doesn't exclude the view it was designed to oppose.
On the disjunctivist picture, in perception I am either given that this is how things are or I am not given anything. Wright occasionally claims that to be justified in holding that I have been given that p, I must be able to justify that I am able to be given things in the manner I regard myself as having been given the fact that p -- if I hold myself to have sensed that p, I must be warranted in trusting my senses. But this inference is trivially available to the disjunctivist: if I am given an indefeasible reason to hold that I perceive that p, then the inference to I can perceive things is banal. Hence Wright's skeptical arguments fail on this front, too. They give the disjunctivist no reason to doubt that he has perceived that many things are the case which are the case, and that his so perceiving them justifies his beliefs that that is how things are.
Wright's reply in this volume is the first piece of his I've actually forced myself through, all the way to the end. His style grates on me, and his arguments just seem bad. I've never found "fake barn country" scenarios to lead my intuitions in the way they are intended to; if I see one barn and am fooled a hundred times in addition, then I want to say I saw one barn, and was fooled a hundred times in addition. If I am then told that I passed a hundred fakes, along with a solitary real barn, then I would be inclined to suspend judgement on which of my many perceptual beliefs vis-a-vis barns were true, and which false. But I would know that one of them was true, if I thought I'd seen a hundred and one barns, and there were only a hundred fakes. The moral I am supposed to draw from this story is that I didn't know that I saw any barns, including the one that I know that I saw. But I did know (in one of the 101 cases) that I saw a barn, before I distanced myself from the judgement. If I reverse this distancing, and hold that all of the things I saw were barns, then I will be wrong many times, and right once. The many errors do not detract from the one success, though they might make it harder to take advantage of the knowledge.
Wright applies his "I am in position rationally to claim that" operator incorrectly on page 12. Line I should become "I am either in position rationally to claim knowledge that I am perceiving a hand in front of my face or I am in position rationally to claim knowledge that I am in some kind of delusional state of that phenomenal character", if it is to represent something a disjunctivist would take themselves to be entitled to if they (seem to) see a hand in front of their face. If Wright had applied his invented operator in this manner, then it would be easy to see how II followed from I: Depending on how the world is, one or the other of the disjuncts will be true. If it's the first then II follows, if it isn't then it doesn't. This consequence relation is a matter which is independent of whatever I know or take myself to know, and so there is no question of whether or not I am warranted in believing it to hold; if I have a hand in front of my face and I perceive that this is so, then my rational credentials are impeccable.
I suspect that all of the above may be the wrong tack to take in handling this issue.
I am mystified by Wright's "Cartesian Dreaming Argument" in his paper's appendix. The conclusion of the argument does not strike me as more paradoxical than its first premise, so I don't see the point of the formalisms at all.
"To get empirical evidence you have to carry out some empirical procedure." This strikes me as false. I do not carry out any "empirical procedure" to perceive things; I simply see them. I don't do anything to see them; experience is "forced upon me". Wright uses this claim, along with the claim that "it is plausible that the evidence produced by such a procedure may not rationally be regarded as any stronger than one's independent evidence that the procedure was carried out competently and with due diligence — and hence that it was carried out at all" to argue that I'm not warranted in believing that I'm not dreaming. I don't know what it would mean to perceive "competently and with due diligence". But that doesn't seem to matter, for the passage continues Dreaming excludes the genuine execution of empirical procedures. Hence the strength of warrant generated by executing such a procedure is rationally limited by the strength of one's independent warrant that one did not dream its execution! Hence no such procedure can generate a warrant that one is not (fully lucidly and coherently) dreaming.
Which is to say, if I want to claim that I saw a hand, I need independent warrant for the claim that I can see. Given that this would apply to every other claim I make to being gained through sight, this strikes me as entirely implausible, unless I can bootstrap my way up by using "I see a chair" to warrant trusting the senses that tell me that I see a desk, and then vice-versa. Which doesn't sound like what Wright is claiming is plausible about warrant. I suspect Wright is trying to rule out responses like Putnam's response to brain-in-a-vat stories: I can see that p; I could not see that p if I were dreaming (or otherwise out of touch with reality); ergo I am not dreaming (or otherwise out of touch with reality). Which strikes me as a perfectly fine way to respond to brain-in-a-vat stories.
I suspect that the only points of substance in this post are my response to "fake barn country" scenarios and my attempt to give a disjunctivist's view of Wright's double blind test. These are the parts of the post I like least.
24 April 2008
Disjunctivism, Skepticism, and Why I Shouldn't Start Writing Things at One AM
Posted by Daniel Lindquist at 6:15 AM 2 comments
18 April 2008
how does i can read last page
Finally getting around to reading Murray's paper on McDowell and "second nature" from a little while back; the one Currence liked so much. It's a draft, and sadly one of the things Murray hadn't gotten around to was including a bibliography.
**edit: I really need to sleep. I somehow failed to scroll all the way to the last page, where there is a bibliography. The quote I couldn't locate is from "Naturalism in the Philosophy of Mind", which I have as being from 2004. I'm leaving this post up anyway, because I went through the trouble of including content at the end and I'm not letting it go to waste!!1**
I'm not having any luck finding where this quotation is from:
She [Millikan] argues that ‘the biological sciences, including physiology and psychology, are distinguished from the physical sciences by their interest not in lawful happenings…but in biologically proper happenings’ (‘White Queen Psychology,’ p. 362). But this does not remove the biological, as she conceives of it, from what I introduced as the realm of law: it is just that the relevant laws are underwritten by considerations about proper function, rather than inductively based on what actually happens. We still have the contrast with the space of reasons. (McDowell 1999:294, fn. 28)
I've checked all of McDowell's work from 1999, according to Wikipedia, and none of it includes a page 294. Millikan's book isn't mentioned in "Having The World In View", so it isn't from there, either. Google is also unable to find it.
Any help?
I'd like to see the context for this quote; it reminds me a lot of Davidson's "Three Varieties of Knowledge", wherein Davidson has to clarify (against Dummett) that the "anomalism of the mental" is different in kind from the "anomalism" of the special sciences. The special sciences, such as geology, can deal in laws only if they include ceteris paribus clauses; these "loose" geological laws are likewise irreducible to strict physical laws. But it hardly makes sense to claim that geological phenomena are "free". But Davidson claims that the anomalism of the mental allows for mental phenomena to be considered free, to be considered autonomous. So Davidson has to clarify how anomalous monism can "make room for freedom" in the way that he claimed it did.
Davidson says that his point was not just that there can be no strict mental laws, but that our knowledge of our own and others' minds, of "I and Thou"*, could not even in principle be replaced with knowledge of lawful happenings, even if the relevant laws contain ceteris paribus clauses. This is not the case with geological phenomena, where shifting from (say) talk of plate tectonics to talk of the various relations of force in which plates stand to one another, from geology-talk to physics-talk, needn't defeat one's purposes in talking about the plates. To try to abandon talk of our own and others' minds for talk of (loosely law-like) psychological phenomena would, however, leave us unable to carry on as we do. Our talk of beliefs, desires, etc is not talk of anything which is amenable to subsumption under laws. (I do not think Rorty ever grasps this aspect of Davidson; see his review of "Mind and World", which takes the form of a commentary on Davidson's "Mental Events" and "Three Varieties of Knowledge", for a clear illustration of this. Rorty's compatibilism seems closer to Hume's than to Davidson's.)
*I'm hoping that the Buber reference will help make clear the sense in which "psychological phenomena" are something treatable by the nomothetical sciences and the sense in which they are not. Davidson does allow for "loose" psychological laws, and even psychophysical laws; he simply thinks that nothing of this sort could replace our ordinary way of talking about the propositional attitudes & their cohorts. When I speak of our knowledge of our own minds and of the minds of others, I mean only this latter sort; there is probably a better way to phrase this, but I am tired and really want to know where that reference is from.
Posted by Daniel Lindquist at 1:01 AM 7 comments
Labels: Davidson, McDowell, naturalism, Rorty
16 April 2008
I like C.I. Lewis
I read "A Pragmatic Conception of the a priori" for the first time the other day, and it's fantastic. If one wants to retain the a priori/a posteriori distinction, I think Lewis does a wonderful job of showing how to do it. (The retained distinction will, of course, be unable to play the (bad) metaphysical role it has been called in to do in times past.) I really find nothing to object to in this paper: so long as one is not troubled by the idea that what is "held true no matter what" might also be rejected at some future point, I think it is a thoroughly splendid essay, especially for something so short.
In the interview which was reprinted at the end of "Problems of Rationality", Davidson says that he thinks Quine and C.I. Lewis were closer than Quine recognized: "The explanation for that is that Quine had no training in philosophy [when he encountered Lewis at Harvard] and so when he took Lewis's course in epistemology, he took it for granted that this was what everyone knows about epistemology.... I don't think he realized any of this, but you can find most of Quine's epistemology in C.I. Lewis minus the analytic-synthetic distinction. Epistemology naturalized was very close to the heart of C.I. Lewis. I don't think Quine knows the extent to which there really is a sequence that starts with Kant and goes through C.I. Lewis and ends with Quine."
I would add that C.I. Lewis does not appear to suffer from Quine's scientism, and he certainly doesn't have the weird fetish for classical logic that Quine does. And at least in "A Pragmatic Conception of the a priori", the analytic/synthetic distinction seems to be doing nothing objectionable (it passes largely without comment, and plays no essential role in the paper). Which makes Quine a lesser version of C.I. Lewis.
I almost don't want to read "Mind and the World Order", just in case it's not as good as this essay was.
Posted by Daniel Lindquist at 6:09 AM 7 comments
13 April 2008
On the very idea of blegging for a paper
Forster has a paper titled "On the Very Idea of Denying the Existence of Radically Different Conceptual Schemes" (Inquiry, Volume 41, Issue 2, June 1998, pages 133–185). I would like to read it, but my UT library account is unable to hook me up with a copy; this isn't the first time something's been published in Inquiry which I should like to read, but am not able to.
It has occurred to me that talk of "different conceptual schemes" can be harmless, if sundered from the scheme-content dualism. If two different people (or groups, or cultures) act differently enough, then there's no reason to deny that they get around in the world in different ways, that they conceive of things differently -- that they have different concepts. Rejection of the scheme-content dualism in no way prohibits this; the problem lies not in "conceptual schemes" but in the way these schemes were supposed to "latch on" to the world; the dualism, and not either pole, was the problem.
I do think there are certain families of concepts which anyone who has the ability to communicate must possess -- notions of self, other, belief, truth, etc. -- for basically Davidsonian reasons. But these needn't be identical concepts for all speakers; so long as something in a speaker's practices is able to play the role of distinguishing truth from (mere) belief, the way things are from the way so-and-so holds them to be, all is in good order. And this is a general enough sort of demand that I should imagine very different sorts of practices might suffice to satisfy it. The Davidsonian demand is an externalist one, not an internalist one -- to be a speaker, one must be able to do certain sorts of things, but one needn't necessarily recognize that what one is doing is (e.g.) distinguishing truth from belief. There is no stock of "pure" concepts which all conceptual schemes must have in common, which anyone must possess to be able to think; practices can vary widely, and concepts will vary along with them. And yet all conceptual schemes will have certain family resemblances -- they will all be concepts used by speakers, thinkers, willing agents who act to make things more agreeable to themselves etc. (I am honestly not satisfied with anything I say about "pure" concepts; I cannot seem to find the right words for what I want to say about them, and this leads me to suspect I'm not sure what I want to say about them in the first place.)
I think that incommensurability-talk can be salvaged to an extent; two speakers might be unable to understand one another, despite both being understood by a third speaker. This would be a relative sort of incommensurability, as opposed to an absolute one; it looks to the conceptual repertoire of a particular speaker at a particular moment, rather than something broader. Without learning to cope with one another differently, the two will be unable to understand one another. If the necessary change in coping seems sufficiently difficult, if the mutual incomprehension seems sufficiently resilient to attempts to dissolve it, then talk of "incommensurable conceptual schemes" starts to seem attractive. Of course they might still come to understand one another; there is nothing prohibiting the dropping of old concepts and the taking up of new, or the revision of old concepts in light of new circumstances. The only thing that can bring the hermeneutical circle to a stop is the death of the hermeneut(s). This is all in keeping with Davidson; he has no objection to schemes which can be "calibrated", sentences which are in principle translatable, though they are presently impenetrable. I take this to be the truth of his claim towards the end of "On The Very Idea of a Conceptual Scheme" that we ought not to "announce the glorious news that all mankind -- all speakers of language, at least -- share a common scheme and ontology." Understanding does not come this easily; but, contra certain sorts of incommensurability-talk, neither is it impossible.
An aside: I am reading Kuhn's "Structure of Scientific Revolutions" for the first time; I'm fifty pages in now. So far, no surprises. One thing that I am keeping an eye open for is how Kuhn demarcates paradigms; as entities, they seem fishy to me. Kuhn's "normal science" is too smooth; it seems idealized. I suspect that a "paradigm" is just an idealization, too -- as it were, the tutelary angel of a normal-scientific practice. Revolutions might then be the same sorts of disputes which occur in normal science, only they are more intense; they are disputes sufficient to make an idealized picture cease to seem applicable (the fall of a paradigm), whereas normal disputes are able to go on without the general picture appearing to shift. (These are all very speculative thoughts, obviously, and I ought to finish the book before I worry too much about these big-picture notions in it. But "normal science" just seems too sanguine to be human.)
Posted by Daniel Lindquist at 7:47 AM 4 comments
Labels: Davidson, interpretation, Kuhn
Chicago and Becoming-Emo
I visited Chicago this weekend for "Campus Days". It was good. It appears that a lot of the people who get accepted into the MAPH program were referred to it after applying to another program at Chicago, so the worries I had about the program were all common worries. I feel a fair bit less uneasy about all this than I did a week ago. This is a good thing.
I always felt that the MAPH folk were very forthcoming about their program. One thing that came out fairly early was that several people each year end up being referred to MAPH after applying to the philosophy PhD program (which is the boat I am in), and the placement record for these people is not very good. For every area that isn't philosophy, MAPH has an excellent placement record. Suffice to say, this was not the sort of thing I was hoping to hear.
This year (06/07) has been particularly bad -- the MAPH program had nearly twice as many philosophy students as they had expected to have, and more of the philosophy faculty was on leave than was expected (IIRC, there were two faculty babies, and one professor was sick for a few weeks, in addition to scheduled sabbaticals etc). For 07/08 the MAPH class has been shrunk dramatically (100, down from 138) because of this. Chicago has also hired some new faculty for the philosophy department, has added a class on "contemporary analytic philosophy" specifically for MAPH students, and has assigned one of the new hires the specific duty of acting as an interface between the philosophy department and MAPH (his name's Ben Callard; we had a nice chat at the lunch the philosophy department held on Monday).
Simply knowing that the issue has been recognized counts for a lot.
From talking to several people, the consensus seems to be that MAPH was admitting a lot of people who wanted to go on to a philosophy PhD, but who had gotten their bachelor's in a Continental department. Chicago is not really a good place for those people. Four years of Derrida & co. does not prepare someone for graduate study in an analytic department; it's just not the same game. And so the students didn't get the support they needed.
Finkelstein thinks they can do a lot to improve this; the MAPH professor I spoke to was less optimistic, but then she's not in the philosophy department so I don't know that I should care what she thinks. Finkelstein's reasoning was that what was usually lacking in PhD applications was a good writing sample, and this program culminates in a thesis which is easy to cannibalize for that purpose; it is also possible to get letters of recommendation from Chicago faculty during the MAPH year, which can't hurt. He also made the point that for those of us who are trying to get into philosophy programs with a non-philosophy BA (such as me), this is a way to add a bunch of philosophy courses to your transcript. I would certainly like to have some graduate-level classes to point to to counterbalance some of my undergraduate grades. (I graduated magna cum laude, but all of my lowest grades were in philosophy, and all but one with the same professor* -- the exception was an Ethics course I took for a gen-ed credit as a sophomore.) When I spoke with Finkelstein, he agreed with me that this year should either leave me well-equipped for applying to PhD programs or should show that I'm just not cut out for this racket. Incidentally, Finkelstein is a very nice man; I tried to buy his book that afternoon, but the store didn't have a copy. (Annoyingly, it comes out in paperback just in time for the fall quarter to start. I may end up reading the whole thing on Amazon.)
The various humanities departments hosted lunches on Monday for prospective students to attend. Conant, Finkelstein, Ford, Jonathan Lear, and Haugeland all showed up, which was nice to see. (Haugeland's beard is much longer and crazier-looking than it is on his faculty page; I didn't recognize him at all. Also I've already forgotten how to pronounce "Haugeland".) Conant handed out a (tentative) curriculum list for the next year. Finkelstein's teaching a graduate seminar on "Late Wittgenstein" in the fall quarter. Pippin's teaching a 3000-level on "Kant's critical philosophy" and a graduate seminar on Hegel's "Lectures on Fine Art", both in the winter quarter. There's a class on contextualism in the spring that looks pretty good; the course description mentions Davidson, Wittgenstein, Austin, and Cavell, and it's focused on contextualism & truth-theoretic semantics and contextualism & the later Wittgenstein. Haugeland's also teaching a course on "Being and Time" at some time or other (heard winter, the paper says fall), which I'll be sure to at least audit. More than enough neat stuff to fill up my schedule. Should be a good year; I look forward to stressing out over writing papers again.
Having heard about the leave problems this year, I was sure to find out who was scheduled to be absent in 07/08. More bad news: Conant will be gone all year. Forster, too, along with three other names I didn't recognize. Less than ideal.
But, Haugeland, Finkelstein, Pippin, Ford, and Kremer will all be there. And the Wittgenstein Workshop looks like it's supposed to continue apace; Kremer's listed as doing it next year. (Conant is also listed, but presumably that's a typo. I figure if the workshop was not going on at all it'd not be listed, but it's on the list three times.) So, still plenty of excellent professors to look forward to having classes with.
The footnotes to this post get pretty emo and are largely about my undergraduate career, so I have hidden them behind the pocket monkey. I advise against reading them if you have something better to do, though it was somewhat therapeutic to write all of that out, and there are some funny bits (or at least a funny picture; it's the hyperlink if you want to just look at that without reading the boring text stuff).
Blogging on topics which are not me should resume shortly.
* In only one case do I know why that professor gave me a 'B+'; I refused to participate in class discussion for a few weeks in "History of Modern Philosophy". The second half of that course was taken up with readings in metaphysics, largely from contemporary figures. I would only speak up when the discussion drifted back to "modern" figures, because that was what the catalog description said the course was supposed to be about. This was a stupid and childish thing to do. But there was nothing in the syllabus about "participation credit"; according to the syllabus, my grade was determined entirely by several short papers and a final exam, plus some credit for attendance (which I had no problems with). I turned in all the papers on time. My grades on them averaged to a 91, if memory serves; I know it was a 90-something. So I figured I was in the clear -- I could protest the way the professor taught the class, and there would be no repercussions for this. This was a very stupid thing to do.
In the two other cases in which that professor gave me less than an 'A', I haven't the foggiest what I did wrong. In his Logic course we never got back our final exams or any of our homework assignments; I got a 98 on the midterm because I misread something in the book**. He said that the homework assignments would be counted as completion grades, since he didn't want to have to grade them all (and people were called on to work them out in class the next day anyway). I didn't turn in any of the homework assignments late or incomplete. I do know that I got a problem wrong on the final exam -- it involved several nested biconditionals, and the shortest proof anyone found for it was something like 70 steps. I'd spent an hour or two scribbling about it and ended up with a transcription error somewhere that bunged it all up; I then threw the whole long mess onto a truth table, showing that the conjunction of the premises with the negation of the conclusion came out false on every row, and claimed that I had proved by reductio ad absurdum that the argument was valid. Which isn't how the reductio rule works in the system we'd been taught, though at 3:30 AM it really seemed like it ought to work. But the problem I got wrong was one of twenty or so on the exam, and the exam itself was worth the same as the midterm; those two scores, together with a percentage for homework and an attendance credit, determined the final grade, according to the syllabus. Again, the syllabus's answer to how my grade was determined seemed to be at variance with the facts. Or else I missed several other questions on the final, despite checking all of my answers twice and never finding any problems. I rather suspect the professor assigned grades at a whim.
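(For the curious: the truth-table check I used is a perfectly good semantic test for validity -- an argument is valid just in case no row makes every premise true and the conclusion false -- even though it isn't a proof in a natural-deduction system. Here's a minimal sketch in Python; the modus ponens example is my own stand-in, not the exam problem, which I no longer remember.)

```python
from itertools import product

def valid(premises, conclusion, variables):
    """Truth-table test: valid iff no assignment makes every premise
    true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample row
    return True

# Stand-in argument: from (P -> Q) and P, infer Q (modus ponens).
premises = [lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]]
conclusion = lambda e: e["Q"]
print(valid(premises, conclusion, ["P", "Q"]))  # True: no counterexample row
```

Brute force over 2^n rows, so hopeless for big formulas, but for a classroom problem it settles validity without a 70-step proof.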
That professor also team-taught a "Philosophy of Science" course with the head of the physics department. The course was horribly misnamed; two-thirds of it was devoted to the history of science (using "The Ascent of Man" as our textbook, with wonderful lectures from the physics professor on whatever topic we'd just read about), with the last few weeks featuring a whirlwind tour through logical positivism, Goodman's "New Riddle of Induction", Kuhn, Feyerabend, van Fraassen, and the other topics I had expected the course to cover. The physics professor, Ed Neuenschwander, is a prince of a man, and I don't regret taking the class because of him.
But my grade was determined by a powerpoint presentation on Galileo and a midterm with various weird questions; there were no other assignments. Here are the questions from the midterm which I responded to (it was one of those "respond to N out of these X questions" deals):
*"Why is agriculture a necessary step in the development of civilization?"
*"Describe the role of foresight, and the importance of creating and manipulating symbols, in the Ascent of Man."
*"The Gothic Cathedral presents an excellent study in the advance of human technology and knowledge even before the rise of what we properly think of as 'science.' Describe the types of social structures, expertise, and technological advances which are necessary to carry out the planning and construction of these wondrous structures."
*"Why is the role of Alchemy historically important in understanding the rise of contemporary science?"
The ones I didn't select were even worse. (One was something like "Why did astronomy and mathematics arise first, historically, among the sciences?", which was addressed neither in our book nor in class; the professor had mentioned that he thought he knew the relation between the two during Neuenschwander's lecture on Pythagoras & friends, but didn't expand on it because he wanted to use the question on the midterm!) I got a B+ of some sort on this midterm (I *think* it was an 88); I received full credit on the first two questions, was docked a few points on the third for using prose which was "too strong" in my conclusion, and was docked several points on the last one for reasons which are not clear to me -- what the book had on the topic, I'd included, and the topic was not discussed much in the lecture on alchemy & medieval science. IIRC, the red ink on that one just said something like "not quite"; I know the red ink wasn't helpful.
Neuenschwander said he liked my Galileo presentation; he came up to me after class to compliment me on how much material I'd covered. I admitted to him that most of my information had come from Wikipedia; we then had a nice little chat about how much information you can find online these days, the contemporary RCC's statements about Galileo, etc. I suspect my citing Wikipedia may have hurt me; I did double-check everything I read on there which I hadn't previously known, though, and it all checked out. Probably should've listed the various academic sites Wiki linked to, rather than Wiki itself.
Wikipedia was not my only source; I also cited our textbook (which had a very nice chapter on Galileo) and a bit from Jaakko Hintikka's autobiography where he has some interesting conjectures about why Galileo was seen as a threat to the Church. (It's in the Open Court "Library of Living Philosophers" volume on Hintikka if you want to read it; just search for Galileo.) I was pretty satisfied with my powerpoint presentation, especially considering I hadn't had to do one in years at this point.
(At least one person (out of the twelve or so in the class) literally copy/pasted their material from Wikipedia. It was obvious that they'd done so; they had trouble reading aloud some of the sentences in their powerpoint presentation, which implies that they likely hadn't even read the stuff before having to present it, and when I asked them about it after class they did not deny it. I guess they were busy with other classes.)
Apart from these assignments, the grade for the class was determined by participation. I don't think there was a class that went by that I didn't speak up; I've never had a problem arguing with professors, and I frequently found opportunities to argue with the philosophy professor,*** or to ask questions of/probe Neuenschwander. I am confident that no one in that class "participated" more than I did. But somehow, I got a 'B'; my only one in all four years.
Incidentally, I fully deserved the B+ I got in my Ethics class; it was at 8 AM, and everyone in the class slacked off hardcore. Nobody needed the course for their major, and the readings were longer and more difficult than anyone'd expected. Nobody did more than skim the readings, and the class discussions were generally not very good (me being no exception). I honestly feel bad about that class; I almost retook it in my last semester, just to apologize to Professor Crutcher for not trying the first time, but I had a scheduling conflict. (Ironically, I didn't even get a gen-ed credit for the class; I'd misread the catalogue requirements. I think I thought it counted for my psych/soc credit. I ended up counting it towards one of my minors anyway, so it was all for the good.)
The only other non-A classes I can recall in my undergrad career were Church History II****, Greek I, and Systematic Theology II. I am fine with those grades because the classes were hard. I took Systematic II before Systematic I (for some reason this was allowed), which made things a lot harder than they needed to be. Oh, and I also got something that wasn't an A (I think it was an A-) in "New Testament Theology", because I hadn't figured out that the study guides had the same questions as were on the exams. I was just studying the textbook before each test for most of the semester. Considering the sheer amount of material covered (600 pages in the textbook, which we read all of, plus supplementary materials and the NT itself), this was nowhere near as easy as studying with the guides. Also I took the course as a freshman. It made a lot of my later classes seem really easy, and none of the reading for any of my later classes seemed bad at all in comparison, until I hit Systematic.
I got 'A's in "Intro to Philosophy" and "Ancient & Medieval Philosophy", both with the professor I dislike. In A&M he returned my second paper on the ontological argument (I wrote one on Anselm's, one on a version he'd given in class) with a blue ribbon on it. Apparently he almost never gives "100"s on essays; I was the first in a few years.
The blue ribbon looked like it was drawn in MSPaint. He had printed it out, trimmed the edges off, and taped it to the front of my essay along with a short letter explaining that he rarely gave "100"s but he thought I had shown an astounding amount of progress in a short time. Here is a picture of the ribbon, taken with my cellphone camera. I only just now noticed that there are nine "10"s; I don't know what's up with that. I guess the "Ten" in the center is the last ten points. You can see that it says "Grade: 100%" at the end of the letter, though.
Two years later, the professor would refuse to write a letter of recommendation for me because I was "not his product"; it was at this point that I decided I'd go to law school rather than study philosophy.
**The question was true/false, and went something like "A premise and a conclusion constitute an argument"; the definition in the book referred to an argument being constituted by "premises and a conclusion" (emphasis mine). It was a take-home exam, and I figured I ought to go with the textbook's definition; I put "false". Whoops.
***I'm not using his name so he doesn't find this post if he vanity-searches. Also, I just tonight found out that he has a blog, which is hilariously bad in places (and which also reveals that he kills time on "Ebaums World", which further justifies me in looking down on him; Ebaums steals content, and also sucks). Look at this: "Since observation locates the mind as somehow in the brain (due to its relationship to the brain).... The mind is in the brain, and the brain is in the world; therefore, the mind is in the world." He did his PhD on the philosophy of mind, as informed by neuroscience, so it's not like he's out of his element, either. Someone I spoke to who'd seen his dissertation claimed that his thesis was that such-and-such a part of the brain constituted "personhood", and speculated that his dissertation was accepted because all of his neuroscience details were incomprehensible to his hearers. I'd prefer not to think such things of OU professors, but honestly that dissertation-topic is bad enough that I don't think it matters if his professors understood him or not.
Ooo, another nice bit: "I can only reply that such a view of 'mind', therefore, means nothing. But s/he will say, 'I'm using the term now, it must have meaning.' I will say, 'It refers to brain function or to nothing.' The use of a word and the reference of a word is a tangled debate in itself." -- this is how he ends a post.
The professor is broadly contemptuous of the later Wittgenstein. His interpretation of the TLP is exactly the one presented in Russell's introduction to that book, and I half-recall him endorsing Russell's criticisms of the later Wittgenstein, too -- we should look to science, and not ordinary language, to see what our words mean. He did the "switch from 'meaning' to 'reference' without noting it" thing in real life, too; he generally tried to shift to talk of "reference" whenever anyone broached a question about "meaning".
If I drank, that post would have driven me to drink. God damn it, SNU, have some standards.
****I put off editing my term paper for this course until the last day, and then slept for like sixteen hours right before it was due. My paper was not coalescing anyway; I had chosen a bad topic. (It turns out that "John Wesley's relationship to the Anglican Church" is the sort of thing our library had several long books on; I was never able to get through enough of the material to pass judgement on the mess.) Managed to ace the exams, at least, which kept my grade in the class respectable. This was also freshman year, as it happens. Learned a lot of useful things that year....
Posted by Daniel Lindquist at 7:43 AM
Labels: junk