Sunday, November 22, 2009

Wartburg: my little homecoming (part 1)

In two days it will have been exactly six months since graduation. Much has changed and much of it for the better. Yet today, as I returned to the Wartburg campus, I experienced a most unexpected feeling.

As I walked into campus from the south, between the CAC and the science center, heading for the Konditorei (campus coffee shop; basically my home), I felt... normal. It's hard to describe, like qualia--what does it feel like to perceive, say, the color "red" in your mind? What's the subjective feeling associated with it, that feeling that only I know in my own head, that you know in your own, and that we can only assume are similar? You must feel it to know it. The best I can do is this: if I were to vocalize my thoughts, I would have said something like "okay, I know this. This is my school. There's that bush. The bike rack. The skywalk shadow has exactly the shape it should this time of day. I know this. No big deal." At times--a second is a very long time in your head--it even felt like "yeah, whatever" or even "so what?" It was all very underwhelming, as if I'd been there a million times before... which, after all, I had... and as if there were nothing special about this one time.

But seconds later it hit me. That I would feel like this is nothing short of amazing. Why? So much has changed that maybe I expected to feel joyous, ecstatic, euphoric, just as I had felt kidnapped when, back in May, I watched the red bricks of the Complex fade in the rearview mirror for the last time. Of course, the joy hit me in full when I set foot inside the K-dit and started seeing people I'd long missed, but the original feeling of utter and complete familiarity stayed with me the whole day. Not for a minute did I feel out of place or like a visitor. Nothing on this campus looks or feels new, old, smaller, bigger, or in any other way different. Everything is just normal; just itself.

In retrospect, now that I'm back at the Love Shack, feet up and a cup of herbal in hand, this is the best thing that could have happened. Had I perceived Wartburg as new and different, as a thing of the past, exciting but far-removed, surely I would have felt more strongly... but, I think, it would not have affected me as deeply. I might have been tempted to ascribe the stronger feeling to "it's been a while, after all," and I might not have grasped the heart of the matter. And the heart of the matter is that I feel as if I have never really left this place. As I walked around today, I didn't remember things; memories didn't come back; I didn't relive past events and places and conversations and smells and sounds. All those things just are again, as if I'd held a wary double-mindedness all this time, a Gollum brain of past and present. It's as if my mind suddenly did its own thing and switched back to Wartburg mode, back to my May self, which was right there, dormant but alive, as if the previous six months had been erased in one stroke. It was rejuvenating in the most literal sense of the term, subtly and penetratingly so.

On a final note, I must say all of this happened only with places and not with people, whom I've been so completely happy to see again and for whom I've felt a genuine surge of "omg lol hi!" every single time. I've loved the attack-hugs and kisses, the familiar smells, the big grins, the awkward tell-me-your-last-six-months-in-two-minutes conversations, and so forth. That was exactly how I'd imagined it would be. It must be that our contact with people is more limited and more meaningful than our contact with places, and thus we cherish it and miss it more, and it evokes a stronger emotional response when we are reminded of it.

Whatever the case may be, I feel like I'm getting the best of both worlds, and I'm loving it. In the end, I was expecting to come here to jump-start myself, to find that je ne sais quoi that for some reason I'm missing at Tech right now... and so far I think I've found it here, in the squirrels and the stones and the steaming mugs.

Oh, and still no wireless for me at the K-dit. See? I never really left!

Peace,
Claudio

PS: much more happened that's worth noting, especially some really meaningful conversations that I'll cherish for a long time... but all that must wait, till at least the morrow...


Wave-town!

I'm in Waverly!

It was heartbreaking not to have been able to come for Homecoming back in October, but this more than makes up for it. At Homecoming it would have been a cramped, 24-hour visit with frantic get-togethers and too little time to process what's going on (yes, I operate slowly; so sue me). But like this I've got five days and plenty of time and opportunities to do everything I would have wanted to do--minus, unfortunately, seeing friends from my graduating class, although several of them are still around.

Flights went well and I got into Waverly in good time. Had gotten up at 3 a.m. to fly out of Roanoke but put off taking a nap once at Steph/Chels/Kate's house (nicknamed the Love Shack), which meant the 9:30 p.m. party totally sneaked up on me. Ended up going to bed at 2, the latest it's been for... uh, ever? But it's all good. This and more for my favorite people on Earth.

Today we hang out with MY MAN Eric G, who's driving down from Decorah (famous, of course, for Sodom and Decorah... har har... Dr. Strickert's RE101 course was four years ago and I still remember this lame joke!), and tonight I'm crashing the SI staff meeting to let them know how awesome they are and how badly TA training sucked at Tech.

As much as I love Blacksburg, I want to move back here right now! How're those matter-energy transporters coming along? Are we there yet?.........


Tuesday, October 20, 2009

Things learned recently (for lack of a better title...)

A month and a half since the last post. Mamma told me there'd be days like this: you won't update your blog, you'll forget about it, etc. Ah, if only I'd listened. Whatever. 'Tis evidence that I'm taking grad school seriously. Another way to put it is that it's sucking the life out of me.

In these two months (August 24 - October 20) I've discovered the following about grad school and life in general:
  • I love teaching. A lot. I look forward to classroom time when I'm on the other side of the desk much more than to anything else. My kids are intelligent and engaged, within the limits of their generation, of course (nonexistent general culture due to abysmal secondary schooling, irritatingly short attention spans, a certain holier-than-thou attitude, etc.). Despite the anti-intellectual world they live in, they seem to take entry-level philosophy decently seriously, and that pleases me. I'd like to believe it's because I'm just that good, but I know better. I'm sure it's because they genuinely like to think, as every healthy kid does, and it pains me that they have been (and will be) pressured not to from almost every side. But that's a lamentation for another time.
  • I don't like my own research so much anymore. It may be for a variety of reasons. (1) I don't have much time for it, what with TAing and the inordinate amounts of assigned class reading, often on the order of 150+ pages a week. (2) My two research seminars, engaging though they are, aren't totally up my alley. "Religion in the Public Sphere" has very high highs when secularism and establishment are discussed and very low lows when... well, anywhen else, even if I just love Simon May's South African accent. "On Darwin's Origin" is schizo: the large lectures are excruciatingly boring forays into the history of biology, which good ol' Dr. Zemke drilled into my brain four years ago, so... catching some zzzs. But the Friday graduate discussion sections are enthralling thanks to Dick Burian, who surely has to be the most knowledgeable living biped (look up R.M. Burian). So even though this stuff is awesome and is helping me a lot in building a solid background, I'd still like to focus my work elsewhere. See below.
  • There's no such thing as "meanwhile." I'm supposed to carry out my own research at the same time as I take seminars, both for submission to conferences and journals and prospectively toward my thesis. But in practice, this is nearly impossible. This term I'm writing three term papers of around 20 pages each (one on Lakatosian falsification in evolution, one on whether a secular state is implicitly a nonreligious establishment, and one about gods only know what)... and in the midst of that, plus teaching, I'm supposed to research epistemic circularity, which is what really turns me on? It's just not happening, at least not right now. They tell me that this is to be expected, though, and that to some extent it gets better. One hopes.
  • I feel, to some degree, unfulfilled. This is probably mostly because I can't work on what I want, even though one seminar this coming spring might give me the chance to. But maybe it's simpler than that, a normal and predictable side effect of having to get used to new living patterns. And who knows if this isn't a co-cause or an effect of my messed-up emotional life right now (see below). Once again, it might be too early to tell. Probably, like many others, I've romanticized grad school as a purely intellectual liberation from the daily-grind uselessness of college... but of course the key word there is "romanticized." Duh, Claudio.
  • Never have a crush when you can't afford it. It's bad, bad, bad. But then again, what's new there. FML, and for once I mean it.
  • VT's philosophy department is a rocking place to study. As per the Leiter report (www.philosophicalgourmet.com), we are among the top five programs in the U.S. offering terminal MA degrees in philosophy. I have no basis for comparison, but the reputation feels well-earned, and I'm not talking about the big names we have here or attract each time we host a conference, though that counts too. My #1 reason is the superb quality of the faculty. I am holding a wary, but pleasant, double-mindedness in my relationships with them. On the one hand, I'm awestruck by their impressive knowledge. Some are authorities in their fields; others are somewhat known and very serious scholars. Not one of them isn't a very respectable expert in something or other (which, given the positions they hold, is no less than I should expect). On the other hand, they feel less like oracles and more like peers. It helps that in our department faculty and grad students are on a first-name basis by default, but it goes deeper than mere lack of formality. Those with whom I've taken the time to converse have taken the time to talk TO me rather than AT me, and to point me in useful directions for my research and my studies in general. Perhaps this department, at this time, is striking a desirable balance between quality of research and quality of teaching, which might reflect the equally good balance of very seasoned and very young professors--roughly, experience and enthusiasm.
That's it. No, I'm sure there's more... but that's it for now. Each of the points made above needs elaboration, but that'll have to be some other time. Bed beckons. Ah, yes, THAT is an important point...
  • If you don't get all the sleep that you need, you're DEAD!

Thursday, September 3, 2009

Choosing Darwin, or, grad school has begun!

Who would've thought that with the start of school I'd've blogged less!... okay, everybody. It's a good sign. Means school is keeping me busy. But is it the "this is awesome-but damn is it challenging-but who cares it's awesome" kind of busy or the "this sucks-I hate it-FML-arrgghhh" kind of busy? Definitely the former.

The biggest problem I faced was the choice of seminars. There are simply too many amazingly interesting ones on offer here at VT, which probably says something about the breadth of my philosophical interests and my lack of a settled preference at this point (except perhaps a slight bias toward epistemology and history/philosophy of science, but those are very broad strokes themselves). So, with the DGS's consent, I signed up for four seminars, while the typical course load is three. I then had two weeks to decide which ones to keep and which one to drop. It was a difficult choice.

The original four seminars were about: (1) religion in the public sphere, focusing mostly on arguments for and against religious tolerance, from Locke to Rawls; (2) the epistemology and metaphysics of Locke and Berkeley, and of the pre-Humean modern period in general; (3) the life and work of Charles Darwin, on the occasion of the 150th anniversary of the publication of On the Origin of Species; and (4) symbolic logic.

Of these, the logic course had to stay, since it's mandatory for all new philosophy MAs. In retrospect, I should have tried to test out of it, since it covers 90% of the material I already studied as an undergraduate... but oh well: this will be a less stressful, more diluted refresher. The religion course wasn't up for discussion either, since it's a one-of-a-kind class, with extremely interesting readings and a well-prepared professor (Simon May), that may not be offered again while I'm at VT. So it was down to Darwin vs. Locke/Berkeley.

After two weeks of attendance, I've decided to drop Locke/Berkeley. I must confess it was intriguing, and the first week's writing assignment was a lot of fun. Never before had it taken me ten hours to write a two-page paper, but it was the most original and best-reasoned paper I've written all year, barring my undergrad thesis, of course. As it turned out, Dr. Ott thought it was "very well done!", so chances are I would have done reasonably well in the course. That, and the fact that it's the most strongly epistemological of all my choices, made me reluctant to let it go. Dr. Ott is fun and a freakin' genius, class discussion was exciting... so nothing really tipped the scale against it.

But something tipped the scale in favor of Darwin. Three nights ago, I lay in bed reading myself to sleep with some of Darwin's field notes from the voyage of the Beagle, a class reading. I fell in love all over again. Some of my fondest memories of college date back to my freshman year, when the glorious Warren Zemke made us read Gould, Mayr, Shermer, Sagan, and of course Darwin himself for a course on history of science and scientific methodology. Those memories came back like a flood, reminding me of why I loved Darwin's lucid thinking, his poetic and yet strikingly accurate observations on geology and zoology, and the beauty of the story of the discovery of natural selection. To have re-read Peter Bowler's Evolution: The History of an Idea in the first week of class surely helped, too, as that had been among my favorite texts last year for a paper on the same topic.

In short, I chose with my heart and not with my brain. Sure, I can rationalize my choice all I want. I can say that the history of science will still be useful to me as a philosophy grad student, or that the course's instructor Dick Burian is both an awesome guy and a great philosopher (I remember him being cited as an authority on adaptationism in my textbooks, for chrissake), or that I will be able to focus not so much on the history of science and instead write a phil.bio term paper if I want to, or that the course will still count toward the philosophy degree as an elective. These are all good reasons, but they didn't tip the scale. The pleasure of reading about natural history did.

Whether that was a poor choice, time will tell. But for now I am confident it was the correct one, partly because there were no wrong choices, not really: when you work and study next door to such excellent philosophers and talented fellow grad students, you're doing the right thing by definition. So yeah, I might as well be wearing the infamous "I'm really excited to be here!" t-shirt. Phail much? Hmm. We'll see.

Reflections on TA-ing and other stuff coming soon... soon-ish... well, eventually!

(end of post)

Tuesday, August 18, 2009

TA training vs. SI training

I'm undergoing GTA (graduate teaching assistant) training at Virginia Tech, and after the first two days--four sessions out of six--my opinion is mixed. The presentations and workshops are well-delivered, entertaining, etc. Top-notch stuff. But they're sorely lacking in two important respects.

One, the presentations are mostly frontal-delivery, one-(wo)man shows with little interaction and no hands-on learning... which is pretty bad, since they're supposed to be shaping teachers. Two, the content of this TA workshop is aimed at people who have never been on the teaching end of a classroom before. The first issue is endemic to academia and I won't discuss it. The second is of more interest to me.

For the last two years of college at Wartburg, I served as a Supplemental Instructor for introductory courses in philosophy. The duties of an SI are quite similar to a TA's, minus the grading (though some SIs do that too). SIs lead discussion, help students with difficult concepts and with putting the material in perspective, assist the professor with scheduling and class communication, and re-lecture as needed. Heck, I even taught two plenary sessions when the professor was sick. In general, SIs use their hybrid more-than-student, less-than-faculty post to serve as a bridge between faculty and students. The duties and image of TAs are much closer to the faculty side than to the student side, but from a practical standpoint their tasks are nearly identical to SIs'. As a result, 95% of the material discussed in these GTA workshops was simply old stuff for me, with the few local differences peculiar to VT accounting for the rest.

I'm a (wannabe-) philosopher, so I name things and affix blame. For one, I blame Virginia Tech for not differentiating between students with previous SI experience and those without. SI is a national program, and more than a few fellow GTAs I've met these days have been SI leaders. They, too, were more or less peeved, depending on the level of SI training they had received at their undergraduate institution. There should have been some sort of pre-screening based on previous SI experience, or advanced sessions for former SIs--or even involvement of former SIs in training non-SI new TAs, just as SI peer mentors carried out a significant portion of SI training at Wartburg. You get my drift. There were countless options, but I feel that the human resources weren't properly utilized. Pity.

More importantly, I praise Wartburg for preparing me so well. The training I received from (among others) my SI supervisors Jeff Beck and Michael Gleason and my SI peer mentor Lia Kampman was amazingly attuned to VT's declared expectations for its TAs, at least based on these workshops. Now, perhaps in a couple of months I'll find out I'm an awful TA, and if so I can blame both Wartburg and VT for all-around insufficient preparation--and/or myself, of course, for being a poor learner and not having done more.

Regardless, my point is that I'm very glad that the knowledge and experience I accumulated in my last two years as an SI, easily the most rewarding part of my college life, are turning out to have been sound, pertinent, and resting on solid pedagogical foundations. Whether or not I'm massively deluded, I'll find out. But for now it feels good.


Friday, August 14, 2009

(OT) Airport meanderings

I love airports. I love the smells, the sounds, and most of all the people. Sure, I'm still basically a kid at heart, and what kid doesn't love those giant dick-shaped flying machines--but there's so much more to it than that. Nothing smells like an airport. It's a mixture of old plastic, new rubber, and human skin. Seriously. When the day is done and you smell your travel clothes, they smell like airport. Unmistakable. And as I said, the people! Nowhere but in a major international airport do you see such a potpourri of nations, colors, habits, and tongues... and I'm not even very much the multicultural kind: my Wartburg friends know I never even attended an I-club meeting, party, or whatever! Haha. But today in Rome, as I was waiting to board, I saw a family of Hasidic Jews holding Torahs, waiting in line for the bathroom, next to three Roman teens video-texting someone and talking football in delightfully untranslatable terms. Nearby, a fat American lady tried to placate a crying toddler who, I hoped, wouldn't be on MY flight (he was), while a Swedish brother and sister, ten-something, play-wrestled on the floor (as only Scandinavian kids can do) and their parents half-laughed and half-scolded. It was one of those epiphanies: we're all basically alike, we're all awesome, and we all suck pretty much the same.


Here in Chicago it was no different. See, I cancelled my flight to Roanoke at the last minute and am staying overnight to catch the first flight out at 7 a.m. And since I'm at the Hilton, which is on airport grounds and within walking distance of most terminals, tonight I just toured the airport. Yep. Seriously, how many times do you get to do that? In an airport, you're either running because the flight's on time and you aren't, or you're waiting because the flight's late or the personnel is incompetent. Neither is a good state of mind for taking in your surroundings. Despite my love for airports, I admit I haven't ever really gotten to enjoy them, because all the time spent there is time spent worrying, running, or generally being very focused on what needs doing. (For a foreign national in a country with severe repercussions for the smallest mistake in your immigration papers, concentration is a must.) Plus, airports are really supposed to be functional, not pretty.

AND YET! Chicago O'Hare is a beauty. I toured all four terminals in a little over an hour of walking, of course limiting myself to the areas before security. The structure is huge and extremely complex, and yet it's impressively simply laid out and easy to follow. There are a dozen ways to get to the same place, and it's nearly impossible to get lost. Everything looks like it could be fully operational at 3 a.m. just as at peak hours, with a million people swarming around. Speaking of people... A mother realized her flight left at 7:45 instead of 8:45, swore loudly in Spanish while her two tween daughters laughed, and then explained to them in broken English that they'd have to run if they wanted to make the flight; the girls spoke perfect English. A businessman insulted his girlfriend (or whatever) on the phone, totally making a fool of himself. A guy had a prosthetic arm with a hook; a frickin' HOOK! A Hispanic TSA guard went out of his way to help a heavily veiled Iranian (I guess) woman who was confused about which gate to go to.

I'm not sure why, but I love all of this. Airports are among the most amazing places you'll ever see, both architecturally and from a human(e) standpoint. They're hubs of multiculturalism and varied humanity, and you can't consider yourself a serious people-watcher if you don't go to your local airport every once in a while and just look around. Okay, lately I've been at Fiumicino, Heathrow, Detroit Metro, and O'Hare, which are all very large and very busy... but you get my drift. :)

Thursday, August 6, 2009

Frustration much?

How frustrating that the only thing I have to read here at home that's even slightly philosophically oriented (and that I haven't already read) is Plantinga's Warranted Christian Belief. I'm not really sure why I bought it in the first place, years ago. I usually appreciate what AP has to say about warrant and epistemic circularity, but I'm not too interested in philosophy of religion anymore, so... ah well. I guess it's back to printing articles from the Stanford Encyclopedia of Philosophy. And yes, I'm whining. As much as I LOVE being with my parents right now, I'm stuck here for another week and I just can't wait to go back and start classes and shit.

(ignore the stuff below: this IS the full post...)

Sunday, August 2, 2009

Asimov's "Foundation" book review

Just got done reading the first book (1951) of Asimov's Foundation saga, which comprises seven novels and dozens of short stories. I'd heard only the highest praise for it: the series won the Hugo Award for Best All-Time Series, and it's often touted as the genre's highest achievement. My father especially loves it and has read the series many times over.

I really enjoyed the first book. It's intelligent, unpredictable, original, and direct: it tells the bare facts, most of the action happens in dialogue, and there's almost no character development--all stuff that usually spells "bad lit," but which is exactly what I like, as I said last week. I have some reservations, of course, but overall my opinion is positive.

Plot recap. In the very far future, four quadrillion humans populate the Milky Way, united under the huge Galactic Empire. A dying psychologist, Hari Seldon, develops the science of psychohistory, which allows the mathematical prediction of the behavior of very large populations. Seldon predicts that the Empire will fall within a few generations, and that 30,000 years of barbarism will follow before a second Empire rises and re-unites the galaxy. The fall is by now inevitable, but Seldon develops a Plan to preserve human knowledge through the collapse, an intellectual ark to shorten the coming middle ages to a mere 1,000 years.

He creates two Foundations of a few thousand settlers at opposite ends of the galaxy. The first Foundation is on Terminus, a small and barren planet on the extreme periphery of the Empire. In the following decades, Terminus endures a series of "Seldon crises," crucial moments when the Foundation must ensure that history follows the Seldon Plan. The first few Seldon crises concern the Four Kingdoms, powerful warmongering planets that threaten to invade Terminus. The Foundation keeps them at bay first with atomic power and then with free trade. Since knowledge of atomic power has already been lost in the outer systems, the Foundation introduces its atom-based technology as a religion: its agents pose as "high priests" who are the sole keepers of the atomic power plants. Terminus traders then establish a clever symbiotic relationship with the Kingdoms, selling them household atomic gadgets in exchange for food and raw materials. At the end of the book, though, the characters realize that the rules are changing fast. Korell, a planet closer to the center of the galaxy than the Kingdoms, has retained some knowledge of atomic power and even has some old Imperial starships and weapons. The Foundation's religion is thus ineffectual, and Korell can only be controlled by pure trade devoid of mysticism. Here Asimov suggests that the Foundation will need to keep reinventing its policies as it expands toward the galaxy's center, where the Empire is still alive and well... for now, anyway.

This plot just begs to be read. No sci-fi before or since has tackled human history on such a large scale. The only way to make this gargantuan story arc readable is to focus on facts and single episodes, so the book does read like a summary, and an abridged one at that... but once again, that's fine by me. A negative side effect is that the characters are limited to stock American white males. They all speak and act like post-WWII, pre-Cold War stereotypes: scornful, sexist, and a bit racist. It's a shame that a novel of this scope should have been written at a time of relative intellectual hegemony, between two major revolutions. Asimov does eschew self-righteous patriotism, though. The Foundation is not Captain America but rather a stalwart of universal (progressive) principles of science and knowledge. If anything, the novel could have been seen as somewhat anti-American upon release: the Foundation prevails thanks to a careful handling of free trade and state-supported religion, two mainstays of the U.S. after WWII. By portraying free trade and religion as little more than organized manipulations of the masses, Asimov is obviously taking jabs at capitalism. It must help that he's a Soviet-turned-American!

A little less forgivable is that he completely ignores the role of women and children, a grave omission in a sociological tale. You write a novel about the preservation of all human knowledge in a time of barbarism and you "forget" to discuss things like procreation, education policies, civil rights, or anything other than the (puerile) cabinet politics of rich white men? Even just in the 1970s he would have been crucified for this. Along the same lines, the novel forgoes any talk of genetic engineering and bioethics, preferring instead the typical, morbid post-WWII infatuation with all things atomic. Asimov can't be fully blamed for all this, though. The genetic turn, for example, was unforeseeable even for an erudite futurist like him at a time when the structure of DNA hadn't even been identified; and the times probably just weren't ripe for more socially oriented considerations. Also, perhaps some of these issues are addressed in later Foundation books, which he kept writing well into the 1980s. We'll see.

When all's said and done, the book's main asset remains that it embodies what science fiction should always be about: a projection of a possible future state of affairs based on historical, sociological, psychological, and philosophical hypotheses. Science fiction is unwritten history, and the masters of the genre interpret it this way, surely influenced by the "Big Three": Isaac Asimov, Arthur Clarke, and Robert Heinlein, to whom I might add Philip Dick (not to mention major authors like Ursula LeGuin and Kurt Vonnegut, who have ventured into sci-fi but not often enough to be properly called "sci-fi authors"). In Foundation, the genius idea of psychohistory follows these canons. It opens the door to philosophical considerations about free will and predestination, which Asimov acknowledges even in this first novel. The idea that psychohistory can predict the future of the masses with great precision but is completely unable to make predictions about any one individual is fascinating. As the Master Trader Hober Mallow explains halfway through the novel, given a large enough population, some individual is likely to come along whose characteristics are necessary and sufficient to fulfill the task at hand and keep the Seldon Plan on track... and that's a fantastic metaphor for the workings of actual, past history.
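
That aggregate-vs-individual asymmetry is, at bottom, the law of large numbers in imperial robes. Here's a minimal toy sketch of the idea in Python--mine, not anything from the novel, and the 0.7 "compliance" probability is entirely arbitrary:

    import random

    random.seed(42)  # fixed seed, so the run is reproducible

    # Each "citizen" makes one unpredictable binary choice (a biased coin flip).
    population = [random.random() < 0.7 for _ in range(1_000_000)]

    # The mass is predictable: the fraction of "yes" choices is ~0.700 every run.
    print(sum(population) / len(population))

    # Any one individual is not: a 70/30 coin, unknowable in advance.
    print(population[0])

Psychohistory, of course, pretends it can derive the distribution itself from first principles; the law of large numbers only guarantees that, whatever the distribution is, the aggregate will hug it.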

As my father wryly reminds me, in the second novel a new menace comes from "the Mule," a humanoid mutant with extrasensory psychic abilities whom Seldon could not predict and who thus seriously endangers the Plan. I look forward to that! For now I just wish, perhaps childishly, that this kind of novel had been written later in the 20th century, when a wider variety of issues could have been tackled and factored into the Seldon Plan. But ah well. :-)

Tuesday, July 28, 2009

End of July

Reflections on Asimov's Foundation coming up. The FINA World Championships are keeping me way too busy for proper philosophizing (a welcome break), and now that Michael Phelps has finally been defeated, things are going to get interesting in the 100m and the freestyle relays!

Meanwhile, check out Sam Harris's op-ed on Obama's choice of a scientific advisor. I have a long-standing love/hate relationship with Mr. H, so I'll soon comment on this too.

Monday, July 27, 2009

We're the dumbest generation. Now what?

English professor Mark Bauerlein (Emory) argues in The Dumbest Generation that the digital age is stupefying the youth of America, who are so ignorant and so blissfully unaware of past and tradition that they're unfit for the pressures and duties of informed citizenship. His evidence is dozens of scholarly studies showing a steep decline in young people's reading habits and critical skills. They (well, "we": at 28, I fit into his "Don't Trust Anyone Under 30" warning...) are less and less able to retain even basic information and perform simple intellectual tasks. The cause? For one, even though high technology and the media offer plenty of informational opportunities, young people are less and less informed, so tech is to blame. But so (and primarily) is the previous generation, his own. In what he calls "the betrayal of the mentors," the X-ers haven't fought the stultification of the Y-ers; instead they have glorified and ennobled it, placing it on a pedestal as a new and untouchable status quo. The millennials are rising, they've got cool gadgets, they're smarter, and they must be left to do as they please. The consequences? Very grim, says Bauerlein, and not at all the bright, multicultural future the indulgent claim. Knowledge and tradition should help young people become informed citizens and fight their battles, but knowledge and traditions are being lost, and battles are being fought on mere hearsay and sheep-like herding. Informed civic engagement is an essential prerequisite for citizenship and a thriving democracy, but very few of tomorrow's leading citizens are informed in any conceivable way, old or new. Consequently, the decline in reading and sound schooling among the youth will result not only in a loss of our common heritage, but possibly in a failure of the democratic system as a whole. When reading ends, so does our way of life.

I'll argue that despite his insufferable style and cheap kid-bashing, Bauerlein is basically correct. His evidence is sound and devoid of major bias. He clearly establishes the situation, points his finger at the right causes, and predicts a sensible set of near-future consequences. Unfortunately, he doesn't offer a solution beyond the obvious "kids should read more," though that was never the book's goal. The goal is to awaken the adults, the mentors, to fight the ever-spreading kid-centered media culture, even at the risk of being branded as old dinosaurs. The author holds some assumptions that aren't as self-evidently true as he claims, and he fails to argue some points as thoroughly as he should, but overall he's right on the mark... unfortunately.

For one, let me get the ad hominem out of the way: Bauerlein is an asshole. His (excellent) writing is pompous and verges on self-righteousness, attitudes largely mirrored by his media appearances. He is not the kind of person with whom I'd pleasantly converse. The most annoying feature of his book is the kid-bashing. In countless instances he describes everything adolescent as "petty," "irrelevant," "silly," "stupid," or downright "mindless." These refer not only to the generation he criticizes, but to the very status of being an adolescent. He spits venom like a cobra on anything that isn't intrinsically intellectually enriching. To boot, he severely downplays many achievements of the Y-generation, which I'll argue later is the book's main miss. So while I endorse the main claim and most of the evidence, I could have done with less fervor and judgment, which is as immature as that which it criticizes.

Now to the meat of it. It takes The Dumbest Generation 163 pages to get to the point. The first four chapters provide the evidence on which Bauerlein's case rests, and it's pretty good evidence. If you're strapped for time, all you need to know about the first two-thirds of the book is that he proves that kids read less and watch too much TV. We knew that, and now he's proved it with a tall stockpile of scholarly evidence, so we're good (though I'll discuss a couple of shortcomings in the evidence later).

In the two concluding chapters, Bauerlein first sums up his evidence, then affixes the blame, and finally spells out the consequences. The first argument is simple: kids in the digital age have isolated themselves in a nexus of high-speed information that inflates the importance of peer validation and transient concerns. In other words, we and our peers matter too much, and we seek quick, cheap thrills. We live in a present that centralizes us and blots out everything else, and tradition and knowledge are the first to go. It's true that everything is social, but it's individualistically and narcissistically social, not culturally so. It's hard to counter this argument, which has been true for quite a while. If anything, I take issue with how Bauerlein downplays some achievements of "my" generation, namely the increased interconnectedness and the spike in volunteer work and community caring. While it's true that older generations were more in touch with tradition and knew more overall, they also promoted violence, segregation, and quasi-theocratic forms of church and state. Our generation is following in the X-ers' footsteps in moving away from those and toward a one-race notion of humankind... but this Bauerlein mentions only in passing and drowns in "but" and "however" phrases, which is a significant and somewhat sectarian misrepresentation of how things really are. The positive and the negative coexist and must both be given their due.

The blame-argument points a finger at the mentors, the X-generation. Adults who should "commend [kids] when they're right and rebuke them when they're wrong" have instead elevated adolescent attitudes to a new status quo, one that mustn't be attacked lest one be accused of being a curmudgeon, a backwards grandpa. This interpretation is also quite correct, and it struck a chord. It reminded me of a point that was dear to Stephen King, who in his memoir On Writing wrote that his generation was largely responsible for the sorry state of the world entering the 21st century: they had the chance to change the world for good but they "preferred free trade, fast-food, and 24/7 supermarkets." In other words, our once-idealistic parents got rich, stopped caring, and raised us in a laissez-faire environment almost completely devoid of tradition and moral fiber, going instantly from one bad extreme to another. This rang another bell. Not 40 years ago, corporal punishment was the accepted standard, but then came the Spockians and everything changed. While I do believe that corporal punishment is barbaric, immoral, and counterproductive, when it went there was nothing to replace it. Families and institutions went from inhumanly strict to unbelievably inept as "don't punish" (a good thing) became "don't do anything at all" (a disaster) in just a few decades. Likewise with culture and tradition: kids are leaving behind the knowledge and methods of their ancestors but aren't replacing them with anything that yields even vaguely comparable results in terms of critical thinking and civic engagement. Yes, it's natural to ditch the old ways, and it must happen for us to evolve, but more efficient new ways must come in their place, or "it's like fucking yourself in the ass" (thanks, Lewis Black).

(As an aside--take the statement I just made, that critical thinking and civic engagement must be preserved. Is that obviously true? Notice that it isn't a cultural statement, but a meta-cultural one. Why are critical thinking and civic engagement so self-evidently good that we don't even feel the need to justify them? Couldn't the new ways, which favor intragenerational connectedness and intergenerational exile, be the new standard? What's so important about having a cultural heritage that we must preserve it at all costs? Compare these questions with my previous observations on transhumanism and cybernetic enhancements: by going that route we'll probably lose our humanity as we know it today, but why is that bad? Seldom do writers seriously look into that question, and yet it always nags me, bordering on absolute nihilism as it does. Absolute nihilism feels like anathema most of the time, but at others it feels inescapable, and as such it is an extremely fascinating concept, both logically and ethically. More on that some other time.)

Bauerlein comes close to answering it with his final argument, that is, what the consequences of the dumbest generation are and will be. Functional democracy requires informed engagement, and informed engagement requires sound schooling. By simple modus tollens, if sound schooling is lacking, democracy falls apart. Here we notice that Bauerlein is far from the Dickensian, Gradgrind-like automaton of the opening chapters, the champion of useless educational utilitarianism. He is a defender of knowledge not in its guise of guardian of tradition, but as an essential requirement for our very modus vivendi, our way of life. His overarching purpose, then, is strictly pragmatic: if kids are allowed to disconnect from our common heritage, democracy will be rendered useless.
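
Spelled out with my own propositional labels (not Bauerlein's), the inference is really a hypothetical syllogism capped by modus tollens:

    D → E    (functional democracy requires informed engagement)
    E → S    (informed engagement requires sound schooling)
    ∴ D → S  (hypothetical syllogism)
    ¬S       (sound schooling is lacking)
    ∴ ¬D     (modus tollens)

Valid as it gets; the whole fight, of course, is over the truth of the two premises.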

This too squares well with a notion that has been nagging me for the last few years: democracy isn't working. It isn't working because the idiot has the same voting power as the genius; the culturally isolated, 18-year-old World-of-Warcrafter counts for as much as the philosopher. Joe the Plumber's opinion counts the same as Michel Foucault's before the law and in the public square. This egalitarian character is democracy's greatest asset, but when the information superhighway leads straight off a cliff, it becomes its weakest spot. This is not to say that democracy is inherently "good" or "bad," or even that some of its mechanisms should or shouldn't be contested. Indeed, a major point in the 1960s culture wars was that the very fact that Foucault could publish his ideas while Joe the Plumber couldn't was reason to believe the system was oppressive and had to be terminated in its present form. But regardless of whether or not that's true, culture wars must be fought with information and knowledge, which is Bauerlein's main point, and a bloody good one. The last 5-10 pages of the final chapter are very telling in this sense. If you're a strong leftist like me, you might be irritated by Bauerlein's clear right-leaning tendencies, but his points are strongly argued and very much in touch with current reality.

I never say this, but... yeah... you owe it to yourself to read this book, and of course also some that attempt to refute it (which is what I'll look for next). It's important.

Saturday, July 25, 2009

It's okay to hate (fiction) books...

Summary - I am seriously impaired when it comes to reading fiction. Most of my reading is poetry and nonfiction, by which I mean not memoirs or self-help but rather essays, papers, and occasionally book-length treatises, mostly in philosophy but also in history, religion, and sometimes social studies. I am also a prolific writer, again of nonfiction, and a poet. So I fall into an unusual category: I spend an awful lot of time reading, but I'm unable to converse about either classic literature (with minor exceptions) or the latest bestseller. When trying to read a novel, I become extremely bored or irritated. Last week I hit rock bottom when I SparkNoted the last three Harry Potter books just so I'd know how the series ends. I consider myself an intelligent person and a fast and avid reader... and yet I despise fiction like an ant does DDT. Upon reflection, my conclusion is that my hatred of fiction is a symptom of a larger problem: a very selective form of ADD that concerns most kinds of entertainment. Enjoyment had better come quickly, or it's not worth it.

I'm trying to remember which piece of fiction I read last, and I mean start-to-finish. Somehow I've earned a college degree in English Writing (in addition to Philosophy), which entails massive reading lists, and yet I've SparkNoted my way through most of those... as, I'm sure, many students do. A sample of works I should have read and haven't: the Odyssey, Othello, Hard Times, Beowulf, Don Quixote, the Ramayana, Candide, To Kill a Mockingbird, The Legend of Sleepy Hollow, The Island of Dr. Moreau, Heart of Darkness, Things Fall Apart, The Metamorphosis, and dozens of short stories in classic and contemporary American and world fiction, including Tolstoy, Hemingway, Hawthorne, Joyce, Poe, Shelley, etc. I wrote my English capstone course term paper on Lolita--without having read the book in ten years. Notice that this list would've been much longer had I been an actual English major and thus required to take more literature courses; yet I feel that even then I would've avoided much of the hassle. (That I've been a straight-A student without ever reading the assigned texts says something either about me or about my professors, and probably about both, but that's a discussion for another time.)

What did I read, then? Well, Siddhartha--it's my all-time favorite and I had no problem sifting through it again for a world religions class, though it is relatively heavy on philosophy and light on narrative. I lived through bits of Dubliners and Dante's Inferno, though I had read the latter in its entirety as a teen. But... I think that's it. The ultra-short Miss Brill... Young Goodman Brown... a lot of graphic novels... the short fiction for my college's literary magazine... period. That I even remember which ones I did read, and often even where I was at the time, is quite telling.

I won't even start listing all the poems (probably well over a thousand), the literary criticism (mostly about poetry, unless mandated for class), and of course the philosophy books and papers (those too numbering in the hundreds). I wrote 47 poems in college and over six hundred pages' worth of papers, essays, reflections, and personal notes, the latter mostly on philosophy. And I don't even keep a journal or blog!... well, until now, anyway. I say this not to brag, but to prove to myself that I'm neither uninspired nor easily bored. I am "just" very selective, in the sense that if it's fiction, I'm about 99.71% less likely to read it... heh.

What's my problem with fiction? On first analysis, it bores me to sobs. I have a very short attention span with novels, for the most puerile of reasons: if they don't get to the point within 10-20 pages, and they almost never do, I grow antsy and they're not worth continuing. Of course, a novel's goal is not to "get to the point," but to tell a story and tell it well. To do so, it needs at the very least setting and characters. I'm not the least bit interested in either. I can appreciate those nuances theoretically and even tell good character development from bad... if forced to, and only because I've been trained to do so. But I just don't care about characters--what they feel, think, or experience. It's difficult to follow a story when I've no connection with its players. The few pieces I have read appealed to me for something other than their protagonists. Siddhartha plucks my spiritual strings: there is no Siddhartha character in my mind, but only the (watered-down) theology of his spiritual journey. Dante is culturally important for Italians in ways that go far beyond the work's (great) literary merits, not to mention that it's poetry and thus up my alley. And with Lolita it was the language that kept me going, Nabokov being the master that he is: I feel nothing for Humbert or Dolly and am not interested in the novel's significance.

However, it's wrong to say I don't care about stories in general, or characters, or people, or even feelings. Graphic novels entertain me on those levels quite well, as do movies and TV series. I'm in fact a huge movie buff. I get into movies, cry at (most) movies, think long and hard about them, and sometimes even write my own little screenplays. Is it then a matter of time? Maybe when I'm studying philosophy, time is not a constraint, but when I'm being told a story I want it done in the shortest possible time, and so a movie is better than a book merely because it's over more quickly. This is sensible, and it squares well with another observation: lately I've been more attracted to TV series (mainly science fiction) than to movies, because watching a new movie sometimes feels like a higher mountain to climb than watching a series episode. So perhaps even movies sometimes feel unnecessarily long, and I fall back on the quicker fix!

I never had attention deficits as a kid: I always sat still at my desk, was quiet, and never was (still am not) good at multitasking. Upon reflection, I'm a good active listener and capable of conversing for hours on end. So might I have an attention problem only with reading? Surely not, given how much stuff I do read. It's true that I can rarely read even philosophy for more than 30-40 minutes at a time without having to stop and think, but that's just healthy: there's so much to think about that the continued intake of new information disturbs my thought processes, and that's much better than reading through a book in one sitting, as some claim they do. They probably aren't thinking hard enough and instead just drink in the information passively.

It follows that I might have an entertainment-related attention disorder, and my relationship with video games substantiates that hypothesis. I love video games... but which kinds do I play? Mostly first-person shooters. I used to love strategy, simulation, and graphic adventures as a kid, but not anymore. Shooters are a quick thrill, a quick fix--and yet how many of even those did I finish last year? Only two: the latest Call of Duty and the bone-chilling Dead Space. Did I buy more? Oh yes... about a dozen, in fact. I re-sold them all a few hours in, including the great Crysis, which I had liked a lot at first. I just very quickly lost interest and became supremely bored with them. To me, this means that even shooters had better grab my attention very hard and hold on to it very dearly, lest they lose it.

Now compare this trend with the fact that I watched perhaps ten new movies last year and only (re-)read Siddhartha. Video gaming, reading, and watching movies are such radically different experiences that I must conclude the problem is not with any one medium but rather across the board: a very selective, entertainment-oriented attention deficit disorder. If I'm to be entertained, it had better be quick, down and dirty, and extremely satisfying in a very short time... or else it's just not going to work. If true, this would mean I'm very utilitarian about myself. What is the purpose of entertainment? To nurture and soothe, to amuse and relax, to challenge or pacify. What is the quickest way to do so? Such and such. Then why should I choose other ways if they take longer to accomplish the same result? Notice that even though I'm spelling this out logically, it doesn't happen consciously in my mind: it's an entirely automatic process that I'm only now starting to lay bare (assuming I'm right).

For many people, reading fiction is pleasurable precisely because it is not very purpose-driven, though it can still be enriching. You can let your mind wander and wonder, both idly and thrillingly (is that even a word?), not worrying about goals of efficacy or even efficiency. I guess I'm just not that kind of person, and I'm fine with that after all: no one said that in order to be a learned person, or even a good scholar, you have to read fiction. Projecting myself into the role of an instructor or even a father, I'd rather have a kid who picks Hume and Locke over Woolf and Joyce any day. But sometimes I wish I could just enjoy that side of literature as well, even if it meant (God forbid) being able to read Michael Crichton or Dan Brown... because right now I can't do that either.

~~~

PS - As an addendum, I feel this isn't yet the full picture. Do I not also enjoy philosophy? Do I not glean not only insight but also personal pleasure from poetry and literary criticism? Perhaps this problem is better framed as a discussion of what entertainment is for me rather than what it "should" be generally. After all, that most people make a clear distinction between "books for fun" and "books for school" doesn't mean that I must. It might be very sensible to hold that all books are "books for fun" for me, and that I simply like certain genres more than others. Food for thought, for another time...

Friday, July 24, 2009

Switching gears... Asimov beckons!

Patternism wasn't worth it: I was right that I should go back to people who actually know something about phil.mind, and I'll have plenty of chances to do that in September. The doomsday stuff wasn't worth it, either: too much statistics to really appreciate the core of the argument, and studying Bayes just for this isn't worth the trouble.

So, assailed by boredom and with not enough FINA World Championships to watch, I attacked the first stack of printed pages I could find that didn't look like it would suck: Asimov's Foundation series. I'd tried reading it many times before, 10+ years ago, prompted by my father, who's a huge buff and re-reads the whole series almost yearly. Back then I was bored stiff by it. I did love the interactive adventure books based on it, though. Now it's finally starting to look interesting. I mean, a story about predicting the mass destiny of the galaxy based on mathematical inference and psychological profiling? Sign me up! I'm still appalled at the form, but at least there's not much character development, which is what really bothers me about fiction, after all. I'm definitely a philosopher-first when it comes to a story. (I will write a note some time to justify my absolute hatred of fiction vs. poetry and nonfiction, because it's not as simple as "stick to the facts and F the rest.")

On a side note, I'm... no, no side notes. New post tomorrow. Not much to say right now anyway.

PS: I've been done with Mark Bauerlein's The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future for days now, and I should write a review asap. It's the best-among-the-worst book I've read all year. Very interesting stuff that turned out to be little more than, well... a mindfuck! Go figure.

Thursday, July 23, 2009

Finishing the book...

I'm just about to finish Susan Schneider's excellent new anthology "Science Fiction and Philosophy" (2009), skipping some articles on ethics and poli-sci in science-fiction that just aren't my cup of tea right now. There's much to say, but I think I'll investigate two areas further.

One is "patternism," not in the biblical sense but rather within philosophy of mind. Schneider tries to reject a version of patternism held by Kurzweil, Bostrom and other transhumanists: that the human mind is definable in terms of a pattern through time, a semi-materialist, emergentist solution to David Chalmers' hard problem of consciousness. I'm finding her rebuttal unconvincing, though so are most transhumanist theses from a philosophical standpoint. I'm afraid this might turn out to be a very superficial and hollow (shallow?) debate and I should just write it off to "this is not their field" and go back to Stephen Stich and Jaegwon Kim.

The other interesting area is the doomsday argument, suggested by Brandon Carter and laid out by John Leslie and (among others) Paul Davies and, again, Nick Bostrom. It claims that the extinction of humankind is likely to happen sooner rather than later on purely statistical grounds. Like all of cosmology, it is far too speculative for me to find interesting for more than a few hours... but the anthropic principle was my "first love," the argument that first led me to philosophy in college, back then framed in a discussion of the (f)utility of SETI and the Drake equation... so I might just pursue this line of thinking a bit more. I'm just not sure what to make of Nick Bostrom. At times he sounds like a perfect idiot, but at others he's quite remarkable. That he holds a teaching post at Oxford sure does weigh in his favor, but... yeah. Suspending judgment. I do that (too) well.
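
For the record, the usual back-of-the-envelope version runs something like this (my own ballpark figures, not Leslie's exact ones). Under the self-sampling assumption, treat your birth rank r as a uniform random draw from the N humans who will ever live:

    P(r ≤ 0.05·N) ≈ 0.05, so with 95% confidence r > 0.05·N, i.e. N < 20r.
    With roughly r ≈ 60 billion births so far: N < ~1.2 trillion humans, ever.

At anything like current birth rates, that upper bound arrives uncomfortably soon--hence "doomsday." All the force of the argument sits in that innocent-looking "treat yourself as a random draw," which is exactly where the Bayesian fighting happens.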

More coming soon. Meanwhile, a picture for your enjoyment. This is SO me. Scarily.


Monday, July 20, 2009

(OT) High school reunions

So I'm back in the motherland (Italy) for a few weeks this summer. Of course my old high-school friends (classes of 1998-9), whom I found on Facebook last year, threw together a cute little social event last night--nothing fancy, just pizza out and laid-back chatting. I had a great time and will gladly do it again in January, when I'm home next.

We'd done something similar back in January, with fewer people, and that too had been special. I've since shared my experiences with current friends and acquaintances, and I've concluded that people hold one of two opinions concerning high school reunions:
  • "like omg that's awesome I should totally do that too!!1!one!"
  • "lulz phail, who wants to see those losers again?"
Surprisingly, the downers outnumber the uppers roughly three to one...

Last year I'd already noticed an overall negative attitude toward this sort of event among my older non-school friends, acquaintances, family, and some online communities. So the question begs to be asked (which is what journalists should say instead of "it begs the question," which philosophers know means something hugely different)... am I normal or are they? Because back in high school I loved almost every moment spent in class, and I would gladly return to those days if I could.

A bit of background. You have to understand that "class" means something different in Italy. The model Americans follow in primary school we retain, at least in part, through secondary school. A "class" is a group of (usually) 20-30 students who share the same classroom and the same courses. Each class is assigned eight to ten teachers, one for each subject. Teachers then walk from classroom to classroom and teach their subjects in one-hour periods. There are no courses to take and drop: there is a fixed, mandatory, nationwide curriculum throughout the five years (Italian grammar/lit, foreign grammar/lit, math, history, and PE) and then another fixed curriculum depending on your emphasis. An emphasis is one of six preferred areas of study/concentration: science, classics, education, languages, art, and music. It's a sort of "mini-major" to prepare you for college work. You choose your emphasis your first year (though you may change it later) and students with the same emphasis study together. We had science and technology, an experimental and now defunct form of the scientific emphasis, meaning we took five years of biology, chemistry, physics, computer science, and technical education, as opposed, for example, to the "classical" emphasis, whose students did five years of Latin and Greek but had no science other than math and some minor biology. Some subjects are absent from some curricula (e.g., no Latin or Greek for us) and some others are shared by all emphases but not for all five years (philosophy, for instance, is three years regardless of your emphasis). The number of weekly hours also varies greatly across emphases, with a whopping 5 weekly hours of math for us as opposed to 2 in the classical.

Keeping this in mind, it follows that when you sit with the same people six hours a day nine months a year, you grow close to them. In our class of 20-25, all were on good terms with all, with exceptions of course, and with varying degrees of out-of-class involvement. But we were still all pretty close, knew each other well, and were there for each other through the tough and the good times, through crushes, breakups, hangovers, school politics (including memorable yelling matches with teachers! yeah you can do that in Italy), and the usual teen drama you'd expect.

How typical is this? It sure was typical in my school. It was a public institute in Rome, a tall downtown building that housed 900+ students, four emphases, and approx. 40 faculty... so neither huge nor tiny, and pretty representative of Italian public secondary schooling. Nothing I can see set us apart from other schools and kids, and yet we seem to have "withstood the test of time" so much better. As is typical in public schools, students came from very different social backgrounds, from the preppies with nice shirts and cologne to those whose families lived pretty much on food stamps (public school is, of course, almost free of charge, as are most textbooks for low-income families).

Perhaps most other kids got along well while in school but entertained no desire to meet up later even when given the chance. In other words, since school thus conceived is a sort of hybrid between a primary and a secondary group, they felt more "secondary" than "primary" and got away from that group as soon as they could. Or perhaps the most vociferous are those who were outcasts and now have no desire to reconnect with their high-school nemeses, while the in-crowds just meet up again gladly and are quieter about it. As I said, though, the "in-crowd vs. outcast" separation wasn't really much of an issue at all: with really minor exceptions, most of us were on good terms, and bullying was practically non-existent in public schools in the mid-1990s. So I really don't know. It puzzles me a great deal.

It took us ten minutes last night to slip back into the old group dynamics. Sure, we're a tad more mature, most have jobs, many have tertiary/professional degrees or are working toward doctorates, etc... but right away we behaved pretty much as we used to ten years ago, laughed at the same dumb stuff, made the same old jokes to each other, and had a blast. I hadn't laughed so hard in years.

Not sure what moral to draw from this, but I remain puzzled as to why popular opinion of high school reunions runs so low around here. Maybe I really did get lucky, but it's hard to accept that things would be in such a sorry state generally while we've had a grand time for years.

Sunday, July 19, 2009

A word of caution on the technological singularity

SUMMARY - Famous futurist Ray Kurzweil thinks the boundaries between "humans" and "machines" will soon be blurred or gone. He also thinks it's good to enhance ourselves beyond our biology and become super-human beings with superior intelligence. Critics contend we will lose our peculiar humanity that way and basically annihilate ourselves. I think that both sides ignore important social and philosophical aspects that we must consider before taking sides. I agree with Kurzweil that we will eventually transcend our current human nature and become superhuman. I also agree that we should. But I am convinced we are not prepared for it, and that if we do it as quickly as he advocates we will effectively destroy ourselves.


Futurist and computer scientist Ray Kurzweil holds (The Age of Spiritual Machines, 1999; The Singularity Is Near, 2005) the transhumanist view that technology will continue to grow until humans and their machines merge. In principle, he's correct. Consider the frequency of paradigm shifts: radical innovations, brief periods of sweeping social, scientific, and technological change. Examples are the discovery of fire, the advent of writing, the rise of democracy, and the first computers. If we chart paradigm shifts, we see they happen exponentially (see picture above).

It took about 3,000 years between the discovery of agriculture and the invention of writing, but in 3,000 more years we went from writing to democracy, a much faster change. Likewise, 500 years passed from the Renaissance to quantum mechanics, and a mere 40 from there to nuclear reactors. This accelerating rate of change is most impressive in computer science. When I was a kid, 486 processors were the big thing. They operated at around 50 megahertz. After only 15 years, today's typical commercial processors run at 4 gigahertz, about 80 times faster. The capacity of integrated circuits doubles every two years, and according to Kurzweil the world's combined computing power doubles each year.
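As a quick sanity check on that anecdote, here's a back-of-the-envelope calculation in Python. The two-year doubling period is my own Moore's-law-style assumption, not a figure from Kurzweil:

    import math

    # Back-of-the-envelope check on the clock-speed anecdote above.
    start_hz = 50e6   # a mid-90s 486, ~50 MHz
    end_hz = 4e9      # a typical modern commercial CPU, ~4 GHz

    speedup = end_hz / start_hz        # 80x
    doublings = math.log2(speedup)     # ~6.3 doublings
    years_per_doubling = 2             # assumed Moore's-law-style pace
    print(f"{speedup:.0f}x = {doublings:.1f} doublings "
          f"~ {doublings * years_per_doubling:.0f} years at one doubling every two years")

It comes out to roughly 13 years, satisfyingly close to the 15 we actually observed: a steady doubling period quietly produces exactly these jaw-dropping multiples.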

Why is this important to understand? Because (Kurzweil claims and I agree) we think about progress as linear and historical, but our estimates are always too conservative. Most of us don't realize just how fast things change. Just think that in 1909 household electricity was rare and commercial flights didn't exist. Only 8 years passed from manned spaceflight to moon landing, and today we have space stations and thousands of satellites. We think that history and progress move slowly and gradually, perhaps because exponential growth looks like linear growth at first, but that's never been the case. Exponential growth soon snowballs. With this in mind, the truth is that in only 30 years we'll see machines that pass the Turing test convincingly, we'll have colonized Mars, we'll be able to replicate virtually any type of solid object from thin air, much of medicine will be handed over to nanotechnology, and we'll be able to upload our minds to a computer and replace any part of our bodies with mechanical implants, thus prolonging our lives almost indefinitely. In other words, Star Trek is here.

Kurzweil's main point is that nonbiological intelligence will inevitably take over the biological mind. Contrary to the popular belief that the human brain is "the best computer," Kurzweil argues that our brain only excels at some tasks. It allows for exceptional pattern recognition (which is why brains have evolved the way they have), but even in the 1990s most computer processors could calculate faster, store more data, and recall it faster and more accurately. Human brains will soon need mechanical implants to keep pace with technological progress, and those implants will in turn accelerate progress, again producing a snowball effect. In short, in less than a half-century "unenhanced" humans will be unable to understand the world around them, let alone live in it.

Now, Kurzweil is obviously an enthusiast. Perhaps his predictions are too optimistic, but the facts are there and I've little doubt that our lifestyle will be radically different by the time I'm 60. What I do doubt and he doesn't is whether any of this is good. In fact, I am quite torn and can't yet pick sides. I'm neither a conservative nor a Luddite; I make extensive use of technology, which facilitates both my work and my relationships. I don't think "life is a miracle" and I sure don't believe in God and the good ol' ways. But the projected rate of progress still worries me. I can also see the good in it, though, and so I'll tackle the conservative objections first and try to defend Kurzweil.

Suppose Kurzweil is right and we basically let intelligent machines take over. Critics say we will have lost our humanity then, but for Kurzweil that will still be human intelligence: we need but expand our definition of "human." I see his point here. When we integrate ourselves with mechanical implants and our creation is virtually indistinguishable from us, aren't those machines also "human," if only because they're the product of our creation? Consider this from the outside, looking in. Why shouldn't we enhance ourselves? Do we have some higher obligation to do otherwise? Nothing of what we have is so sacred that it mustn't be changed. "But we'll no longer be human," says the critic. But "being human" is just whatever we make it out to be. It's not self-evidently true that we must remain true to our original nature, especially since "natural" doesn't equal "good." Yes, we evolved through natural selection, which is a slow and gradual process. But for one, the whole of life on earth also follows the exponential paradigm-shift scale (see chart above); and even if it didn't, nothing says we shouldn't depart from what has been and move into what will be. Of course, nothing says we should just because we can, either. In fact, at this point we should hold a pretty neutral stance about it.

On the other hand, the considerations I've just made hinge on my own metaphysical view. I am a nonreligious materialist who thinks humans are beautiful cosmic accidents. I see no overarching purpose (divine or otherwise) for our species, which is the sole architect of its own future or demise. I am thus open to radical change, even in human nature itself. But what about someone who doesn't share my bias: a person who believes humanity has a "manifest destiny" and must answer to a creator, or simply one for whom our roots are as important as our potential and who feels we are lost as soon as we depart from them? Clearly this person's outlook will be quite different. It is very narrow-minded to judge Kurzweil's predictions without checking your own metaphysics first. For once, abstraction and neutrality may be negative assets in philosophical inquiry.

Kurzweil's own analysis is one-sided and narrow-minded, for it downplays all social considerations. Take, for example, the ambiguous term "we" I've (and he has) been using. Who's "we"? All humans? The 10-15% who make up technologically proficient societies? Only the scientific community? It's true that what scientists pioneer people ultimately adopt, but this step is slow and scrutinized by ethics and politics. Conservative forces are always defeated in the end, but they have a crucial purpose: they restrain us from adopting new things until we're ready for them. Does Kurzweil factor all this into his predictions of perfect androids by 2030? I think it could be done, but I have yet to see someone do it.

Perhaps more importantly, Kurzweil forgets that the vast majority of human beings are a half-century behind in the adoption of technology, and in this case exponential growth backfires: 50 years may not have been much a millennium ago, but it's night-and-day now. How will Turing-worthy AIs affect people who just got used to cell phones and color TVs? It seems as if technological progress thus accelerated will further facilitate Western imperialism in its dominance over third-world countries (a dominance which has always been driven by technology anyway, from Pizarro to the free market).

With that in mind, my previous claims need revision. It may be okay to outdo our own nature and enhance ourselves, but must the price be the annihilation of "lesser" peoples? Kurzweil is right that by 2030 new technologies may eliminate world hunger, but will they actually be allowed to do that? It seems that as long as the new techs are in our Western hands we'll do all that we can to keep them to ourselves and maintain power over the third world. At that point, non-Westerners will be forced to adopt the new hi-tech standard in order to even eat, because if Kurzweil is right, then unenhanced humans will be unable to live in the new world order at all. It's going to be "conform or die out." It's then easy to envision a new type of world war, which Kurzweil himself predicts: third-world, neo-Luddite "bio-humans" on one side and elite "mech-humans" on the other... and there's no doubt how that one will end.

So the logical and philosophical considerations must perforce be matched against the social and ethical ones. I doubt there's anything majorly wrong in enhancing ourselves per se. Sure, let's go ahead and become super-mech-badass-brainmachine-humans... but perhaps we should make dead certain that "we" means all of us and not just an elite. It pains me to say that the problem, as with almost everything in the world, is money. If Kurzweil is really right, new matter-energy conversion technologies (as well as new methods to harvest energy directly from the sun) will render money obsolete and we will enjoy virtually infinite resources. Perhaps we should wait until at least then before becoming trans-humans.

Friday, July 17, 2009

Dissing Daniel Dennett: a pointless (?) classic mindfuck

Summary - I read Daniel Dennett's 1978 classic "Where Am I?" and found it very sub-par and almost shockingly uninteresting. I reflect on whether I'm missing something, or if the article is just plain bad, or perhaps if it simply tells me nothing new and I'm experiencing boredom at reading something below my current level of expertise.

I read a lot of philosophy, and it happens that I come across a sub-par or uninteresting or just plain "meh" article. Often those are helpful, for they help me pick out my areas of interest and reflect on why they interest me more. At times, though, the guilty paper is right up my alley and I just want to shake the author and yell at him/her. So imagine my surprise when I had this reaction with a Daniel Dennett piece. Okay, so Dennett works as much in cognitive science as in philosophy proper these days, but he's made significant contributions to philosophy of mind. And phil. mind isn't exactly my field, though I do read up on occasion about the "hard problem" and reductive vs. nonreductive materialism. Still, I've always held Dennett in high esteem, surely in no small part because of his contributions to the evolution-creationism debate and his militant atheism (not that I endorse militant atheism generally, but he's a pretty damn cool guy).

The guilty paper is his classic 1978 essay "Where Am I?", in which he proposed a metaphysical variant of the brain-in-a-vat mindfuck (yes, that's a technical term) later made famous by Hilary Putnam.

The paper is written as a fictional first-person account, which makes it fun to read but unfocused: it could have taken him 3 pages to say what he did in 10. The summary is as follows:
  1. Dennett's brain, named Yorick, is removed from his body ("Hamlet"), kept alive in a vat, and cabled in to a radio controller.
  2. High-tech, radio-controlled microchips are also implanted in Hamlet's skull, so that Yorick can remote-control Hamlet.
  3. Yorick is in a vat in a lab in Houston while Hamlet travels to Tulsa to perform a dangerous mission for the government.
  4. Hamlet "dies" in Tulsa.
  5. Yorick is copied into software as a computer program, Hubert, and the two brains are sync'ed.
  6. Dennett wakes up with a new body, Fortinbras, which is connected to BOTH Yorick and Hubert.
  7. Dennett can switch between the two at the touch of a button, but he never knows which brain is in use and which one is the spare.
  8. Eventually, the spare brain falls out of sync.
  9. When Dennett hits the switch, the out-of-sync brain "tells" of how nasty it felt to be out of sync with both Fortinbras and the other brain, "as if being possessed."
  10. End of story.
Damn cool story, but what does it mean? First a few important details. Before the spare brain is made, is Dennett in Tulsa or in Houston? He guesses he is in fact in both places, for his "I" is in his brain in Houston but his spatial point of view is definitely located in Tulsa. This he takes to mean that our perception of selfhood is more heavily influenced by our external inputs than by reason alone. Even if we're strict physicalists and think that the mind "just is" the brain, it's still nearly impossible for us to imagine ourselves as disembodied brains-in-vats. Point of view still takes over.

But when Hamlet dies in Tulsa, Yorick has no sensory input, so point of view is no more. Dennett now knows he is only in Houston, even though he can't perceive himself as being really anywhere, since there's no point of view. Ironically enough, he now finds it hard to even project himself back in Tulsa. He is thoroughly disembodied. Hence he reflects (and this is the first key line): "had I not changed locations from Tulsa to Houston at the speed of light . . . without any increase in mass"? That is, he believes that his self has moved from Tulsa to Houston even though no information, or matter, or energy, or anything else physically moved from Tulsa to Houston. This fact he takes to be "an impressive demonstration of the immateriality of the soul" (by which, in common philosophical talk, he means the mind).

I have racked my brain (or was it my mind?...) for days to wring some meaning out of this before I realized the story isn't actually trying to prove anything. Surely a shift in point of view doesn't "prove" we have no mind or soul and we're just brains, for that is among Dennett's premises for the story and he's far too smart to be arguing thus circularly. Was Dennett then trying to draw our attention to how easily we are fooled by spatial concerns when we reflect upon self and personhood? Must be, but I don't see the interest in that. It's a "so what?" idea that I already knew and that probably most people would agree to regardless of philosophical background.

Then I thought that perhaps the latter part of the story would contain some deep moral or teaching, but once again I was in for a disappointment. The only interesting concept is that when both Yorick and Hubert are operational, there are in fact two Dennetts, sharing the one body Fortinbras. This illustrates how first Dennett was in two places at the same time (Yorick in Houston and Hamlet in Tulsa); then how he was pretty much nowhere (Yorick in Houston); and then how there were two of him again located in two places (Yorick-Hubert and Fortinbras).

Okay... once again, so what? Perhaps the overarching lesson to be learned is that selfhood and personhood are but elaborate illusions that we make up for ourselves even if they're not really there. This idea is compatible with a variety of approaches to the mind-body problem and to the concept of mental content. Even though I haven't yet taken sides in the reductive vs. nonreductive materialism debate (as concerns philosophy of mind), I'm pretty sure I'm a physicalist, like Dennett, so I'm generally sympathetic to the idea that self and consciousness are illusions. Perhaps then my problem is that this essay didn't tell me anything I didn't already know and it didn't give me any new information to substantiate or reject my prior knowledge. It was an interesting mindfuck, but sometimes one does try too hard to see insight where none exists, and I'm afraid this might have been the case with this paper.

Too bad. I still love Daniel Dennett and I still think he looks way better than Santa Claus with that schmexy white beard of his.

Thursday, July 16, 2009

So what if we actually live in the Matrix? Skepticism as a metaphysical hypothesis

I've always been interested in the connexion of philosophy and science fiction, two of my greatest passions. Much of science fiction (or at least much of it that's any good) is really philosophical speculation, for it postulates premises that challenge our common assumptions about ourselves and the world and then moves from there. Rob Sawyer, the Hugo-winning author of Hominids, even claims the genre should be renamed "phi-fi." So I was not really surprised to see that quite a lot of scholarly work has been published in this vein.

I bought two books for plane reading on my way home for the summer: "Science Fiction and Philosophy," a collection of essays and papers edited by U-Penn's Susan Schneider, and "Like a Splinter in Your Mind," by Matt Lawrence. The latter is not one of the infamous "~CRAPPY TV SHOW~ and Philosophy" titles, but an equally scholarly take on sf-based philosophizing. A focus on The Matrix is only natural, after all, as it's among the deeper recent mainstream movies, at least conceptually.

I started with Schneider's anthology to get a broader feel for the subject, and that's what I'll be discussing here. I find this field rather promising and quite in tune with my current research interests. The first section contains five articles about epistemology and skepticism in science fiction, focusing on The Matrix. First, three excerpts introduce famous classical skeptical scenarios: Plato's cave, Descartes' methodological skepticism, and Hilary Putnam's brain-in-a-vat thought experiment. I suspect these were mostly included to grab the attention of those who haven't had formal training in philosophy, or perhaps as a refresher for those who have. Regardless, it's always pleasant to see editors include classic primary sources.

Then follows a supremely interesting article by Nick Bostrom. He defends the startling claim that not only should we take the "Matrix hypothesis" seriously, but it is actually more likely that we live in a simulated world than in a "real" one. The simulation argument goes like this: at least one of the following propositions must be true:
  1. Most civilizations die off before achieving technology to simulate sentient minds.
  2. Few civilizations have an interest (artistic, scientific, etc) in simulating minds.
  3. We almost certainly live in a simulation.
Now Bostrom contends that since the first two are likely to be false, (3) is likely to be true. Why would he think that? Because the first two ARE probably false. Contrary to (1), a civilization is unlikely to destroy itself before having developed mass-destruction technologies such as nuclear weapons, and a civilization that survives its own weapons is likely to go on to achieve technologies capable of simulating minds. After all, Bostrom reminds us, even if WE currently can't simulate minds we are well on our way to being able to do so: the obstacles are mostly technological, not conceptual. Claim (2) is also false, assuming most civilizations share the intellectual and artistic curiosity of humans. More on this later.

However, the crux of the argument lies elsewhere. Let me rephrase it by turning those allegedly false premises into positive statements:
  1. There exist many civs with the technological means to produce simulated minds.
  2. Most of these civs are interested in producing simulated minds for scientific, artistic, or other purposes.
  3*. A civ thus technologically mature would be able to run huge numbers of simulated minds: once you have the technology and enough "hard disk space," so to speak, there is no limit to how many sim-minds you can create.
  4. Therefore, it is more likely than not that our minds are simulated.
Notice the paramount importance of premise (3*), previously unstated. In short, Bostrom claims we should expect to be sim-minds merely on a statistical basis. It reminds me of a witty anecdote (maybe by Richard Dawkins?) that since millions of religions have existed and there's no way to know for sure which one is "the true one," all believers should statistically expect to end up in Hell anyway, because chances are astronomically low that THEIR religion turns out to be the true one. Likewise, given the huge number of sim-minds a technologically advanced (and curious) civilization could run, no being in the universe has reason to believe he/she/it isn't one of those sim-minds.
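Since the whole argument turns on a ratio, a toy calculation makes the statistical core vivid. All the numbers below are mine, invented purely for illustration; nothing here comes from Bostrom:

    # Toy sketch of the simulation argument's statistical core.
    # Every count is invented for illustration; only the ratio matters.
    civs = 1_000                  # technologically mature civilizations
    sims_per_civ = 1_000_000      # simulated minds each civ runs
    real_minds_per_civ = 10_000   # biological minds per civilization

    n_sim = civs * sims_per_civ
    n_real = civs * real_minds_per_civ

    # Credence that a randomly chosen mind (you?) is simulated:
    p_sim = n_sim / (n_sim + n_real)
    print(f"P(simulated) = {p_sim:.4f}")  # ~0.99 with these numbers

Tweak the knobs and you see exactly where the dispute lives: the conclusion depends entirely on the ratio of simulated to real minds, which is why the objections that follow focus on how many real minds (and how many disinterested civilizations) there might actually be.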

What to say about the simulation argument? I find it fascinating but inconclusive. The first two premises sound like straw men at first, mere truisms (falsisms?) easily disproved, but even if so it doesn't matter too much, for the key premise is (3*). But the argument is still too simplistic. For one, it is very anthropocentric to think all civilizations will be so like ours that they too would take interest in simulating minds. (To this one might say, though, that a civilization with no scientific interest or intellectual curiosity would never achieve high technology to begin with). Even more importantly, the argument doesn't consider that the number of existing civilizations is also likely to be astronomically huge, which would counter the statistical basis on which the conclusion is drawn: it's true that each sim-mind-producing civ would be able to run billions and billions of sim-minds, but it's also true there are probably billions and billions of real minds to even the count. If so, then the argument has yet another problem: it may "prove too much," so much in fact as to be self-defeating. Assuming it works, then every rational being in the universe should accept it; but if they do, then there will be no being left who considers herself a "real" being--even those who actually are! The moral might be that a statistical inference is insufficient grounds for changing one's ontological view. The argument is promising and mind-bendingly attractive, but I don't find it compelling in its present form.

In the next essay, David Chalmers (with his trademark clarity of thought and exposition) recasts the Matrix hypothesis as a metaphysical hypothesis. He contends, in short, that The Matrix movie does not present a skeptical scenario akin to the brain in a vat or the Cartesian evil demon. Even if we do in fact live in a matrix, it simply does not matter to the reliability of our cognitive faculties (i.e., whether or not our thought processes function correctly). Why? Because all that we used to know about our (simulated) world still holds true even if we were to learn that there's another world beyond. We cannot say we are massively deluded: all we can say is that there was something about reality which we didn't know before, namely the fact that there's another world "one level up" from us. At most we can say that we may never know what the ultimate reality is and how many more "worlds-one-level-up" there are, but that's about it. So the Matrix hypothesis is but "a creation hypothesis for the information age" (says Chalmers), and thus it is a metaphysical and not at all a skeptical scenario. Still, since it does entail suspending judgment on the question of the ultimate nature of reality, I'm tempted to brand it "metaphysical skepticism," which is a little more than simple agnosticism and de facto a form of local skepticism.

Chalmers is hitting very close to (my) home here. Much of my recent research has focused on epistemic circularity and epistemic defeat. What Chalmers is saying, in effect, is that even if we came to believe we are sim-minds in a matrix, that new belief would not undercut other beliefs upon which we think our cognitive faculties are reliable. To borrow some terminology from Alvin Plantinga, it would be a truth defeater (for what we knew about the ultimate nature of the world was wrong) but not really a rationality defeater, for it wouldn't lead us to distrust our own cognitive faculties. While he is in the Matrix, Neo's cognitive faculties are perfectly functional: he thinks logically, he infers, he solves problems, he feels things, etc. When he leaves the Matrix, those faculties stay exactly the same and he has no reason to doubt they were EVER faulty. He has simply learned he was wrong about where the world comes from, but that doesn't imply that his *thinking* itself was ever faulty. An exception is if the machines running the Matrix were interfering with his brain to make him think things he wouldn't think by himself, such as altering his sense of logic or making him go insane. For example, for his first twenty years in the Matrix Neo is a Democrat, but then the machines change his mind to Republican and make him think he was *always* a Republican (and that would explain a LOT about our own world!). But this objection, while true, is valid whether or not we are sim-minds: we could say the same about an interventionist god, or an evil demon, etc. It's a genuinely skeptical objection, but it is not limited to the Matrix hypothesis and it is not enough to call the whole hypothesis a skeptical one.

The article by Chalmers made me recall that both Plantinga and Michael Bergmann had argued along similar lines while discussing Plantinga's evolutionary argument against naturalism. The Matrix example had sprung up in those papers, and both philosophers had regarded it as a truth defeater but not a rationality defeater. A truth defeater is a belief which, once you hold it, makes you lose confidence in another belief you also held. For example, I believe I have 50 cents in my pocket, but when I stick my hand in there I only feel one quarter; hence my newly acquired belief "I have 25 cents" is a truth defeater for my previous belief "I have 50 cents." A rationality defeater, instead, is much more serious: if I acquire one, I will doubt my own cognitive faculties, my very thought processes. For example, I learn that I've ingested a pill that makes people hallucinate and then go totally insane. Rationally, I'd have to assume that perhaps I am already under the pill's influence; maybe I didn't really take the pill after all, but the mere suspicion that I might have is enough to make me lose confidence in my own mind, and it will soon tear me apart. With this in mind, back to Chalmers. Even though he doesn't frame his discussion in terms of epistemic defeat, that is in fact what he is saying: the Matrix scenario is a truth defeater but not a rationality defeater. He then applies the same reasoning to other so-called skeptical scenarios such as Bertrand Russell's five-minute hypothesis and Putnam's own brain-in-a-vat experiment. These simply aren't skeptical scenarios at all: they're metaphysical problems. In this case, philosophy and common sense get along well with each other, because the conclusion is that even if we live in a matrix (or our brains are envatted, or the universe was created five minutes ago, etc.) it really doesn't matter for our present purposes. Even after we learn the truth, we will be the same people we were before, and we will have no reason to believe we were being otherwise deluded.

I'll end with two questions that have nagged me throughout the reading. One concerns Berkeley's idealism, of course: are things there when we can't perceive them? To what extent is the real actually "there" if we don't see all of it at the same time? If we live in a matrix and our world is only in our minds, can we say that we perceive it all and it is thus perfectly real to us even if it is simulated? Chalmers certainly seems to think so. (Of course, one needn't speculate about matrices to appreciate the beauty of Berkeley's idea, revamped as it is by modern-day worries about quantum uncertainty). The other question, closely related, is about scientific realism, viz. the idea that what science describes is the "stuff" that's really out there. How does the Matrix hypothesis--in either Bostrom's probability argument or Chalmers' metaphysical recast--affect scientific realism? If we are in a matrix, to what extent can we say that what our science describes is really there at all? Is the fact that we're perceiving it enough to say it's there?

Sounds awfully Berkeleyan to me....... but then again, most things do!


Readings/references:
  • "Science Fiction and Philosophy" (Wiley-Blackwell 2009, ed. Susan Schneider)
    • David Chalmers, "The Matrix as Metaphysics," 2003.
    • Nick Bostrom, "Are You in a Computer Simulation?" 2003.
  • Michael Bergmann, "Commonsense Naturalism," in Naturalism Defeated? (ed. James Beilby), 2002.
  • Alvin Plantinga, "Reply to Beilby's Cohorts," in Naturalism Defeated? (ed. James Beilby), 2002.

Welcome!

This is Philosoph-ish. This is good shit. Read it. That's all.