Tuesday, July 28, 2009

End of July

Reflections on Asimov's Foundation coming up. The FINA World Championships are keeping me way too busy for proper philosophizing (a welcome break), and now that Michael Phelps is finally defeated, things are going to get interesting in the 100m and the freestyle relays!

Meanwhile, check out Sam Harris's op-ed on Obama's choice of a scientific advisor. I have a long-standing love/hate relationship with Mr. H so I'll soon comment on this too.

Monday, July 27, 2009

We're the dumbest generation. Now what?

English professor Mark Bauerlein (Emory) writes in The Dumbest Generation that the digital age is stupefying the youth of America, who are so ignorant and so blissfully unaware of the past and of tradition that they're unfit for the pressures and duties of informed citizenship. His evidence is dozens of scholarly studies showing a steep decline in youth's reading habits and critical skills. They (well, "we": at 28, I fit into his "Don't Trust Anyone Under 30" warning...) are less and less able to retain even basic information and perform simple intellectual tasks. The cause? For one, even though high technology and the media offer plenty of informational opportunities, young people are less and less informed, and so tech is to blame. But so (and primarily) is the previous generation, his own. In what he calls "the betrayal of the mentors," the X-ers haven't fought the stultification of the Y-ers, but instead have glorified and ennobled it. They've placed it on a pedestal as a new and untouchable status quo. The millennials are rising, they've got cool gadgets, they're smarter, and they need to be left to do as they please. The consequences? Very grim, says Bauerlein, and not at all the bright, multicultural future the indulgents claim. Knowledge and tradition should help young people become informed citizens and fight their battles, but knowledge and tradition are being lost and battles are being fought on hearsay and sheep-like herding. Informed civic engagement is an essential prerequisite for citizenship and a thriving democracy, but very few of tomorrow's leading citizens are informed in any conceivable way, old or new. Consequently, the decline in reading and sound schooling among the youth will result not only in a loss of our common heritage, but possibly in a failure of the democratic system as a whole. When reading ends, so does our way of life.

I'll argue that despite his insufferable style and cheap kid-bashing, Bauerlein is basically correct. His evidence is sound and devoid of major bias. He clearly establishes the situation, points his finger at the right causes, and predicts a sensible set of near-future consequences. Unfortunately he doesn't offer a solution beyond the obvious "kids should read more," though that was never the book's goal. The goal is to awaken the adults, the mentors, to fight the ever-spreading kid-centered media culture, even at the risk of being branded as old dinosaurs. The author holds some assumptions that aren't as self-evidently true as he claims, and he fails to argue some points as thoroughly as he should, but overall he's right on the mark... unfortunately.

For one, let me get the ad hominem out of the way: Bauerlein is an asshole. His (excellent) writing is pompous and verges on self-righteousness, attitudes largely mirrored by his media appearances. He is not the kind of person with whom I'd pleasantly converse. The most annoying feature of his book is the kid-bashing. In countless instances he describes everything adolescent as "petty," "irrelevant," "silly," "stupid," or downright "mindless." These labels refer not only to the generation he criticizes, but to the very status of being an adolescent. He spits venom like a cobra on anything that isn't intrinsically intellectually enriching. To boot, he severely downplays many achievements of the Y-generation, which, as I'll argue later, is the book's main miss. So while I endorse the main claim and most of the evidence, I could have done with less fervor and judgment, which is as immature as that which it criticizes.

Now to the meat of it. It takes The Dumbest Generation 163 pages to get to the point. The first four chapters provide the evidence on which Bauerlein's case rests, and it's pretty good evidence. If you're strapped for time, all you need to know about the first two-thirds of the book is that he proves that kids read less and watch too much TV. We knew that, and now he's proved it with large stockpiles of scholarly evidence, so we're good (but I'll discuss a couple of shortcomings in the evidence later).

In the two concluding chapters Bauerlein first sums up his evidence, then affixes the blame, and finally spells out the consequences. The first argument is simple: kids in the digital age have isolated themselves into a nexus of high-speed information that overblows the importance of peer validation and transient concerns. In other words, we and our peers matter too much and we seek quick, cheap thrills. We live in a present that centralizes us and blots out everything else, and tradition and knowledge are the first to go. It's true that everything is social, but it's individualistically and narcissistically social, not culturally so. It's hard to counter this argument, which has been true for quite a while. If anything, I take issue with how Bauerlein downplays some achievements of "my" generation, namely the increased interconnectedness and the spike in volunteer work and community caring. While it's true that older generations were more in touch with tradition and knew more overall, they also promoted violence, segregation, and quasi-theocratic forms of church and state. Our generation is following in the X-ers' footsteps in moving away from those and toward a one-race notion of humankind... but this Bauerlein mentions only in passing and drowns in "but" and "however" phrases, which is a significant and somewhat sectarian misrepresentation of how things really are. The positive and the negative coexist and must both be given their due.

The blame-argument points a finger at the mentors, the X-generation. Adults who should "commend [kids] when they're right and rebuke them when they're wrong" have instead elevated adolescent attitudes to a new status quo, one that mustn't be attacked lest one be accused of being a curmudgeon, a backwards grandpa. This interpretation is also quite correct, and it struck a chord. It reminded me of a point that was dear to Stephen King, who in his autobiography On Writing wrote that his generation was largely responsible for the sorry state of the world entering the 21st century: they had the chance to change the world for good but they "preferred free trade, fast-food, and 24/7 supermarkets." In other words, our once-idealistic parents got rich, stopped caring, and raised us in a laissez-faire environment almost completely devoid of tradition and moral fiber, going instantly from one bad extreme to another. This rang another bell. Not 40 years ago corporal punishment was the accepted standard, but then came the Spockians and everything changed. While I do believe that c.p. is barbaric, immoral, and counterproductive, when it went there was nothing to replace it. Families and institutions went from inhumanly strict to unbelievably inept as "don't punish" (a good thing) became "don't do anything at all" (a disaster) in just a few decades. So it is with culture and tradition: kids are leaving behind the knowledge and methods of their ancestors but aren't replacing them with anything that yields even vaguely comparable results in terms of critical thinking and civic engagement. Yes, it's natural to ditch the old ways and it must happen for us to evolve, but more efficient new ways must come in their place, or "it's like fucking yourself in the ass" (thanks, Lewis Black).

(As an aside--take the statement I just made, that critical thinking and civic engagement must be preserved. Is that obviously true? Notice how it isn't a cultural statement, but a meta-cultural one. Why are critical thinking and civic engagement so self-evidently good that we don't even feel the need to justify them? Couldn't the new ways, which favor intragenerational connectedness and intergenerational exile, be the new standard? What's so important in having a cultural heritage that we must preserve it at all costs? Compare these questions with my previous observations on transhumanism and cybernetic enhancements: by going that route we'll probably lose our humanity as we know it today, but why is that bad? Seldom do writers seriously look into that question, and yet it always nags me, bordering on absolute nihilism as it does. Absolute nihilism feels like anathema most of the time, but at others it feels inescapable, and as such it is an extremely fascinating concept, both logically and ethically. More on that some other time.)

Bauerlein comes close to answering it with his final argument, namely what the consequences of the dumbest generation are and will be. Functional democracy requires informed engagement, and informed engagement requires sound schooling. By simple modus tollens, if sound schooling is lacking, democracy falls apart. Here we notice that Bauerlein is far from the Dickensian Gradgrind-like automaton of the opening chapters, the champion of useless educational utilitarianism. He is a defender of knowledge not in its guise of guardian of tradition, but as an essential requirement for our very modus vivendi, our way of life. His overarching purpose then is strictly pragmatic: if kids are allowed to disconnect from our common heritage, democracy will be rendered useless.
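For the record, here is that inference spelled out in propositional form; the letters and the compression are mine, not Bauerlein's:

```latex
% D = democracy functions, E = citizens are informed and engaged, S = schooling is sound.
\[
  \{\, D \rightarrow E,\;\; E \rightarrow S,\;\; \neg S \,\} \;\vdash\; \neg D
\]
% Modus tollens applied twice down the chain: from E -> S and ~S we get ~E;
% from D -> E and ~E we get ~D. No sound schooling, no informed engagement;
% no informed engagement, no functional democracy.
```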

This too squares well with a notion that has been nagging me for the last few years: democracy isn't working. It isn't working because the idiot has the same voting power as the genius; the culturally isolated, 18-year-old World-of-Warcrafter counts for as much as the philosopher. Joe the Plumber's opinion counts the same as Michel Foucault's before the law and in the public square. This egalitarian character is democracy's greatest asset, but when the information superhighway leads straight off a cliff, it is also its weakest spot. This is not to say that democracy is inherently "good" or "bad," or even that some of its mechanisms should or shouldn't be contested. Surely a major point of the 1960s culture wars was that the very fact that Foucault can publish his ideas while Joe the Plumber can't is reason to believe the system is oppressive and must be terminated in its present form. But regardless of whether or not this is true, culture wars must be fought with information and knowledge, which is Bauerlein's main point and a bloody good one. The last 5-10 pages of the final chapter are very telling in this sense. If you're a strong leftist like me you might be irritated by Bauerlein's clear right-leaning tendencies, but his points are strongly argued and very much in touch with current reality.

I never say this, but... yeah... you owe it to yourself to read this book, and of course also some that attempt to refute it (which is what I'll look for next). It's important.

Saturday, July 25, 2009

It's okay to hate (fiction) books...

Summary - I am seriously impaired when it comes to reading fiction. Most of my reading is poetry and nonfiction, by which I mean not memoirs or self-help but rather essays, papers, and occasionally book-length treatises, mostly in philosophy but also in history, religion, and sometimes social studies. I am also a prolific writer, again of nonfiction, and a poet. So I fall into an unusual category: I spend an awful lot of time reading, but I'm unable to converse about either classic literature (with minor exceptions) or the latest bestseller. When trying to read a novel, I become extremely bored or irritated. Last week I hit rock-bottom when I SparkNoted the last three Harry Potter books so I'd know how the series ends. I consider myself an intelligent person and a fast and avid reader... and yet I despise fiction like an ant does DDT. Upon reflection, my conclusion is that my hatred for fiction is a symptom of a larger problem: a very selective form of a.d.d. that concerns most kinds of entertainment. Enjoyment had better come quickly, or it's not worth it.

I'm trying to remember which piece of fiction I read last, and I mean start-to-finish. Somehow I've earned a college degree in English Writing (in addition to Philosophy), which entails massive reading lists, and yet I've SparkNoted my way through most of those... as, I'm sure, many students do. Samples of works I've not read and should've: the Odyssey, Othello, Hard Times, Beowulf, Don Quixote, the Ramayana, Candide, To Kill a Mockingbird, The Legend of Sleepy Hollow, The Island of Dr. Moreau, Heart of Darkness, Things Fall Apart, The Metamorphosis, and dozens of short stories in classic and contemporary American and world fiction, including Tolstoj, Hemingway, Hawthorne, Joyce, Poe, Shelley, etc. I wrote my English capstone course term paper on Lolita--without having read the book in ten years. Notice that this list would've been much longer had I been an actual English major and thus required to take more literature courses; yet I feel that even in that case I would've avoided much of the hassle. (That I've been a straight-A student without ever reading the assigned texts says something either about me or about my professors, and probably about both, but that's a discussion for another time).

What did I read then? Well, Siddhartha--it's my all-time favorite and I had no problem sifting through it again for a world religions class, though it is relatively heavy on philosophy and light on narrative. I lived through bits of Dubliners and Dante's Inferno, though I had read the latter in its entirety as a teen. But... I think that's it. The ultra-short Miss Brill... Young Goodman Brown... a lot of graphic novels... the short fiction for my college's literary magazine... period. That I even remember which ones I did read, and often even where I was, is quite telling.

I won't even start listing all the poems (probably well over a thousand), literary criticism (mostly about poetry, unless mandated for class), and of course philosophy books and papers (those too numbering into the hundreds). I wrote 47 poems in college and over six hundred pages' worth of papers, essays, reflections, and personal notes, the latter mostly on philosophy. And I don't even keep a journal or blog!... well, until now anyway. I say this not to brag, but to prove to myself that I'm neither uninspired nor easily bored. I am "just" very selective, in the sense that if it's fiction I'm about 99.71% less likely to read it... heh.

What's my problem with fiction? On first analysis, it bores me to sobs. I have a very short attention span with novels, for the most puerile reason: if they don't get to the point within 10-20 pages, and they almost never do, I grow antsy and they're not worth continuing. Of course, a novel's goal is not to "get to the point," but to tell a story and tell it well. To do so, it needs at the very least setting and characters. I'm not the least bit interested in either. I can appreciate those nuances theoretically and even tell good character development from bad... if forced to, and only because I've been trained to do so. But I just don't care about characters, what they feel, think, or experience. It's difficult to follow a story when I've no connection with its players. The few pieces I've read appealed to me for something other than their protagonists. Siddhartha plucks my spiritual strings: there is no Siddhartha character in my mind, but only the (watered-down) theology of his spiritual journey. Dante's poem is culturally important for Italians and goes far beyond its (great) literary merits, not to mention that it's poetry and thus up my alley. And with Lolita it was the language that kept me going, Nabokov being the master that he is: I feel nothing for Humbert or Dolly and am not interested in the novel's significance.

However, it's wrong to say I don't care about stories in general, or characters, or people, or even feelings. Graphic novels entertain me on those levels quite well, as do movies and TV series. I'm in fact a huge movie buff. I get into movies, cry at (most) movies, think long and hard about them, and sometimes even write my own little screenplays. Is it then a matter of time? Maybe when I'm studying philosophy time is not a constraint, but when I'm being told a story I want it to be done in the shortest possible time, and so a movie is better than a book merely because it's over with more quickly. This is sensible, and it squares well with another observation: lately I've been more attracted to TV series (mainly science fiction) than to movies, because to watch a new movie sometimes feels like a higher mountain to climb than to watch a series episode. So perhaps even movies sometimes feel unnecessarily long and I fall back on the quicker fix!

I never had attention deficits as a kid: always sat still at my desk, was quiet, and never was (still am not) good at multitasking. Upon reflection, I'm a good active listener and capable of conversing for hours on end. So might I have an attention problem only with reading? Surely not, given how much stuff I do read. It's true that I can rarely read even philosophy for more than 30-40 minutes at a time without having to stop and think, but that's just healthy: there's so much to think about that continued intake of new information disturbs my thought processes, and that's much better than to read through a book in one sitting like some claim they do. They probably aren't thinking hard enough and instead just drink in the information passively.

It follows that I might have an entertainment-related attention disorder, and my relationship with video games substantiates that hypothesis. I love video games... but which kinds do I play? Mostly first-person shooters. I used to love strategy, simulation, and graphic adventures as a kid, but not anymore. Shooters are a quick thrill, a quick fix--and yet how many of even those did I finish last year? Only two: the latest Call of Duty and the bone-chilling Dead Space. Did I buy more? Oh yes... about a dozen, in fact. I re-sold them all a few hours in, including the great Crysis, which I had liked a lot at first. I just very quickly lost interest and became supremely bored with them. To me, this means that even shooters had better grab my attention hard and hold on to it dearly, lest they lose it.

Now compare this trend with the fact that I watched perhaps ten new movies last year and only (re-)read Siddhartha. Video gaming, reading, and watching movies are such radically different experiences that I must think the problem is not with any one medium but rather across the board: a very selective, entertainment-oriented attention deficit disorder. If I'm to be entertained, it had better be quick, down and dirty, and extremely satisfying in a very short time... or else it's just not going to work. If true, this would mean I'm very utilitarian about myself. What is the purpose of entertainment? To nurture and soothe, to amuse and relax, to challenge or pacify. What is the quickest way to do so? Such and such. Then why should I choose other ways if they take longer to accomplish the same result? Notice that even though I'm spelling this out logically, it doesn't happen consciously in my mind: it's an entirely automatic process that I'm only now starting to lay bare (assuming I'm right).

For many people, reading fiction is pleasurable precisely because it is not very purpose-driven, though it can still be enriching. You can let your mind wander and wonder, both idly and thrillingly (is that even a word?), not worrying about goals of efficacy or even efficiency. I guess I'm just not that kind of person, and I'm fine with that after all: no one said that in order to be a learned person, or even a good scholar, you have to read fiction. Projecting myself into the role of an instructor or even a father, I'd rather have a kid who picks Hume and Locke over Woolf and Joyce any day. But sometimes I wish I could just enjoy that side of literature as well, even if it meant (God forbid) being able to read Michael Crichton or Dan Brown... because right now I can't do that either.

~~~

PS - As an addendum, I feel this isn't yet the full picture. Do I not also enjoy philosophy? Do I not glean not only insight but also personal pleasure from poetry and literary criticism? Perhaps this problem is better framed as a discussion of what entertainment is for me rather than what it "should" be generally. After all, that most people make a clear distinction between "books for fun" and "books for school" doesn't mean that I must. It might be very sensible to hold that all books are "books for fun" for me, and that I simply like certain genres more than others. Food for thought, for another time...

Friday, July 24, 2009

Switching gears... Asimov beckons!

Patternism wasn't worth it: I was right that I should go back to people who actually know something about phil.mind, and I'll have plenty of chances to do that in September. The doomsday stuff wasn't worth it, either: too much statistics to really appreciate the core of the argument, and studying Bayes just for this isn't worth the trouble.

So, assailed by boredom and with not enough FINA World Championships to watch, I attacked the first stack of printed pages I could find that didn't look like it would suck: Asimov's Foundation series. I'd tried reading it many times before, 10+ years ago, prompted by my father who's a huge buff and re-reads the whole series almost yearly. I had been bored stiff with it. I did love the interactive adventure books based on it, though. Now it's finally starting to look interesting. I mean, a story about predicting the mass destiny of the galaxy based on mathematical inference and psychological profiling? Sign me up! Still appalled at the form, but at least there's not much character development, which is what really bothers me with fiction after all. I'm definitely a philosopher-first when it comes to a story. (I will write a note some time to justify my absolute hatred of fiction vs. poetry and nonfiction, because it's not as simple as "stick to the facts and F the rest").

On a side note, I'm... no, no side notes. New post tomorrow. Not much to say right now anyway.

PS: I've been done with Mark Bauerlein's The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future for days now, and I should write a review asap. It's the best-among-the-worst books I've read all year. Very interesting stuff that turned out to be little more than, well... a mindfuck! Go figure.

Thursday, July 23, 2009

Finishing the book...

I'm just about to finish Susan Schneider's excellent new anthology "Science Fiction and Philosophy" (2009), skipping some articles on ethics and poli-sci in science-fiction that just aren't my cup of tea right now. There's much to say, but I think I'll investigate two areas further.

One is "patternism," not in the biblical sense but rather within philosophy of mind. Schneider tries to reject a version of patternism held by Kurzweil, Bostrom and other transhumanists: that the human mind is definable in terms of a pattern through time, a semi-materialist, emergentist solution to David Chalmers' hard problem of consciousness. I'm finding her rebuttal unconvincing, though so are most transhumanist theses from a philosophical standpoint. I'm afraid this might turn out to be a very superficial and hollow (shallow?) debate and I should just write it off to "this is not their field" and go back to Stephen Stich and Jaegwon Kim.

The other interesting area is the doomsday argument, suggested by Brandon Carter and laid out by John Leslie and (among others) Paul Davies and again Nick Bostrom. It claims that the extinction of humankind is likely to happen sooner rather than later on a merely statistical basis. Like all of cosmology, it is far too speculative for me to find interesting for more than a few hours... but the anthropic principle was my "first love," the argument that first led me to philosophy in college, then framed in a discussion of the (f)utility of SETI and the Drake equation... so I might just pursue this line of thinking a bit more. I'm just not sure what to make of Nick Bostrom. At times he sounds like a perfect idiot, but at others he's quite remarkable. That he holds a teaching post at Oxford sure does weigh in his favor, but... yeah. Suspending judgment. I do that (too) well.

More coming soon. Meanwhile, a picture for your enjoyment. This is SO me. Scarily.


Monday, July 20, 2009

(OT) High school reunions

So I'm back in the motherland (Italy) for a few weeks for the summer. Of course my old high-school friends (classes of 1998-9), whom I found on Facebook last year, threw together a cute little social event last night, nothing fancy, just pizza out and laid-back chatting. I had a great time and will gladly do it again in January when I'm home next.

We'd done something similar back in January, with fewer people, and that too had been special. I've since shared my experiences with current friends and acquaintances, and I've concluded that people hold one of two opinions concerning high school reunions:
  • "like omg that's awesome I should totally do that too!!1!one!"
  • "lulz phail, who wants to see those losers again?"
Surprisingly, the downers outnumber the uppers roughly three to one...

Last year I'd already noticed an overall negative attitude toward this sort of event among my older non-school friends, acquaintances, family, and some online communities. So the question begs to be asked (which is what journalists should say instead of "it begs the question," which philosophers know is hugely different)... am I normal or are they? Because back in high school, I loved almost every moment spent in class and I would gladly return to those days if I could.

Bit of background. You have to understand that "class" means something different in Italy. The model Americans follow in primary school, we retain through secondary school, at least in part. A "class" is a group of (usually) 20-30 students who share the same classroom and the same courses. Each class is assigned eight to ten teachers, one for each subject. Teachers then walk from classroom to classroom and teach their subjects in one-hour periods. There are no courses to take and drop: there is a fixed, mandatory, nationwide curriculum throughout the five years (Italian grammar/lit, foreign grammar/lit, math, history, and PE) and then another fixed curriculum depending on your emphasis. An emphasis is one of six preferred areas of study/concentration: science, classics, education, languages, art, and music. It's a sort of "mini-major" to prepare you for college work. You choose your emphasis your first year (though you may change it later) and students with the same emphasis study together. We had science and technology, an experimental and now defunct form of the scientific emphasis, meaning we took five years of biology, chemistry, physics, computer science, and technical education, as opposed, for example, to the "classical" emphasis, whose students did five years of Latin and Greek but had no science other than math and some minor biology. Some subjects are absent from some curricula (e.g., no Latin or Greek for us) and some others are shared by all emphases but not for all five years (philosophy is three years regardless of your emphasis). The number of weekly hours also varies greatly across emphases, with a whopping 5 weekly hours of math for us as opposed to 2 in the classical.

Keeping this in mind, it follows that when you sit with the same people six hours a day nine months a year, you grow close to them. In our class of 20-25, all were on good terms with all, with exceptions of course, and with varying degrees of out-of-class involvement. But we were still all pretty close, knew each other well, and were there for each other through the tough and the good times, through crushes, breakups, hangovers, school politics (including memorable yelling matches with teachers! yeah you can do that in Italy), and the usual teen drama you'd expect.

How typical is this? It sure was typical in my school. It was a public institute in Rome, a tall downtown building that housed 900+ students, four emphases, and approx. 40 faculty... so neither huge nor tiny, and pretty representative of Italian public secondary schooling. Nothing I can see sets us apart from other schools and kids, and yet we seem to have "withstood the test of time" so much better. As is typical in public schools, most had very different social backgrounds, from the preppies with nice shirts and cologne to those whose families lived pretty much on food stamps (public school is, of course, almost free of charge, as are most textbooks for low-income families).

Perhaps most other kids got along well while in school but entertained no desire to meet up later even when given the chance. In other words, since school thus conceived is a sort of hybrid between a primary and a secondary group, they felt more "secondary" than "primary" and got away from that group as soon as they could. Or perhaps the most vociferous are those who were outcasts and now have no desire to reconnect with their high-school nemeses, while the in-crowds just meet up again gladly and are quieter about it. As I said, though, the "in-crowd vs. outcast" separation wasn't really much of an issue at all: with really minor exceptions, most of us were on good terms, and bullying was practically non-existent in public schools in the mid-1990s. So I really don't know. It puzzles me a great deal.

It took us ten minutes last night to slip back into the old group dynamics. Sure, we're a tad more mature, most have jobs, many have tertiary/professional degrees or are working toward doctorates, etc... but right away we behaved pretty much as we did ten years ago, laughed at the same dumb stuff, made the same old jokes to each other, and had a blast. I hadn't laughed so hard in years.

Not sure what moral to draw from this, but I remain puzzled as to why popular opinion of high school reunions runs so low around here. Maybe I really did get lucky, but it's hard to accept that things would be in such a sorry state generally while we've had a grand time for years.

Sunday, July 19, 2009

A word of caution on the technological singularity

SUMMARY - Famous futurist Ray Kurzweil thinks the boundaries between "humans" and "machines" will soon be blurred or gone. He also thinks it's good to enhance ourselves beyond our biology and become super-human beings with superior intelligence. Critics contend we will lose our peculiar humanity that way and basically annihilate ourselves. I think that both sides ignore important social and philosophical aspects that we must consider before taking sides. I agree with Kurzweil that we will eventually transcend our current human nature and become superhuman. I also agree that we should. But I am convinced we are not prepared for it, and that if we do it as quickly as he advocates we will effectively destroy ourselves.


Futurist and computer scientist Ray Kurzweil holds (The Singularity is Near, 1999-2005) the transhumanist view that technology will continue to grow until humans and their machines merge. In principle, he's correct. Consider the frequency of paradigm shifts: radical innovations, brief periods of sweeping social, scientific, and technological change. Examples are the discovery of fire, the advent of writing, the rise of democracy, and the first computers. If we chart paradigm shifts over time, we see that the intervals between them shrink exponentially (see picture above).

It took roughly 7,000 years to get from the discovery of agriculture to the invention of writing, but fewer than 3,000 more to get from writing to democracy, a much faster change. Likewise, 500 years passed from the Renaissance to quantum mechanics, and a mere 40 from there to nuclear reactors. This accelerating rate of change is most impressive in computer science. When I was a kid, 486 processors were the big thing. They operated at around 50 megahertz. After only 15 years, today's typical commercial processors run at 4 gigahertz, about 80 times faster. The capacity of integrated circuits doubles every two years, and according to Kurzweil the world's combined computing power doubles each year.

Why is this important to understand? Because (Kurzweil claims and I agree) we think about progress as linear and historical, but our estimates are always too conservative. Most of us don't realize just how fast things change. Just think that in 1909 household electricity was rare and commercial flights didn't exist. Only 8 years passed from the first manned spaceflight to the moon landing, and today we have space stations and thousands of satellites. We think that history and progress move slowly and gradually, perhaps because exponential growth looks like linear growth at first, but that has never been the case. Exponential growth soon snowballs. With this in mind, the truth is that in only 30 years we'll see machines that pass the Turing test convincingly, we'll have colonized Mars, we'll be able to replicate virtually any type of solid object from thin air, much of medicine will be handed over to nanotechnology, and we'll be able to upload our minds to a computer and replace any part of our bodies with mechanical implants, thus prolonging our lives almost indefinitely. In other words, Star Trek is here.
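To see why the linear intuition fails, here's a toy calculation of my own (illustrative numbers, not Kurzweil's): a linear projection and a doubling-every-two-years projection of clock speed look almost identical for the first few years, then part ways spectacularly.

```python
# A toy comparison (my numbers, not Kurzweil's): linear vs. exponential growth
# starting from a mid-1990s 486-class clock speed of ~50 MHz, as in the post.

def linear_projection(start, yearly_gain, years):
    """The 'history moves gradually' intuition: add a fixed amount per year."""
    return start + yearly_gain * years

def exponential_projection(start, doubling_time_years, years):
    """Exponential growth: the quantity doubles every `doubling_time_years`."""
    return start * 2 ** (years / doubling_time_years)

start_mhz = 50
for years in (5, 15, 30):
    lin = linear_projection(start_mhz, yearly_gain=50, years=years)
    exp = exponential_projection(start_mhz, doubling_time_years=2, years=years)
    print(f"after {years:2d} years: linear ~{lin:,.0f} MHz, exponential ~{exp:,.0f} MHz")

# Output:
# after  5 years: linear ~300 MHz, exponential ~283 MHz    <- nearly indistinguishable
# after 15 years: linear ~800 MHz, exponential ~9,051 MHz  <- the curves have diverged
# after 30 years: linear ~1,550 MHz, exponential ~1,638,400 MHz
```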

Kurzweil's main point is that nonbiological intelligence will inevitably take over the biological mind. Contrary to the popular belief that the human brain is "the best computer," Kurzweil argues that our brain only excels at some tasks. It allows for exceptional pattern recognition (which is why brains have evolved the way they have), but even in the 1990s most computer processors could calculate faster, store more data, and recall them faster and more accurately. Human brains will soon need mechanical implants to keep pace with technological progress, and those implants will in turn accelerate progress, again producing a snowball effect. In short, in less than half a century "unenhanced" humans will be unable to understand the world around them, let alone live in it.

Now, Kurzweil is obviously an enthusiast. Perhaps his predictions are too optimistic, but the facts are there and I've little doubt that our lifestyle will be radically different by the time I'm 60. What I do doubt, and he doesn't, is whether any of this is good. In fact, I am quite torn and can't yet pick sides. I'm neither a conservative nor a Luddite; I make extensive use of technology, which facilitates both my work and my relationships. I don't think "life is a miracle" and I sure don't believe in God and the good ol' ways. But the projected rate of progress still worries me. I can also see the good in it, though, and so I'll tackle the conservative objections first and try to defend Kurzweil.

Suppose Kurzweil is right and we basically let intelligent machines take over. Critics say we will have lost our humanity then, but for Kurzweil that will still be human intelligence: we need only expand our definition of "human." I see his point here. When we integrate ourselves with mechanical implants and our creation is virtually indistinguishable from us, aren't those machines also "human," if only because they're the product of our creation? Consider this from the outside, looking in. Why shouldn't we enhance ourselves? Do we have some higher obligation to do otherwise? Nothing of what we have is so sacred that it mustn't be changed. "But we'll no longer be human," says the critic. But "being human" is just whatever we make it out to be. It's not self-evidently true that we must remain true to our original nature, especially since "natural" doesn't equal "good." Yes, we evolved through natural selection, which is a slow and gradual process. But for one, the whole of life on earth also follows the exponential paradigm-shift scale (see chart above); and even if it didn't, nothing says we shouldn't depart from what has been and move into what will be. Of course, nothing says we should just because we can, either. In fact, at this point we should have a pretty neutral stance about it.

On the other hand, the considerations I've just made hinge on my own metaphysical view. I am a nonreligious materialist who thinks humans are beautiful cosmic accidents. I see no overarching purpose (divine or otherwise) for our species, which is the sole architect of its own future or demise. I am thus open to radical change, even in human nature itself. But what about a person who doesn't share my bias: one who believes humanity has a "manifest destiny" and must answer to a creator, or simply one for whom our roots are as important as our potential and for whom we are lost as soon as we depart from them? Clearly this person's outlook will be quite different. It is very narrow-minded to judge Kurzweil's predictions without checking your own metaphysics first. For once, abstraction and neutrality may be liabilities in philosophical inquiry.

Kurzweil's own analysis is one-sided and narrow-minded, for it downplays all social considerations. Take, for example, the ambiguous term "we" that I (and he) have been using. Who's "we"? All humans? The 10-15% who make up technologically proficient societies? Only the scientific community? It's true that what scientists pioneer, people ultimately adopt, but this step is slow and scrutinized by ethics and politics. Conservative forces are always defeated in the end, but they have a crucial purpose: they restrain us from adopting new things until we're ready for them. Does Kurzweil factor all this into his predictions of perfect androids by 2030? I think it could be done, but I have yet to see someone do it.

Perhaps more importantly, Kurzweil forgets that the vast majority of human beings are a half-century behind in the adoption of technology, and in this case exponential growth backfires: 50 years may not have been much a millennium ago, but it's night-and-day now. How will Turing-worthy AIs affect people who just got used to cell phones and color TVs? It seems as if technological progress thus accelerated will further facilitate Western imperialism in its dominance over third-world countries (a dominance which has always been driven by technology anyway, from Pizarro to the free market).

With that in mind, my previous claims need revision. It may be okay to outdo our own nature and enhance ourselves, but must the price be the annihilation of "lesser" peoples? Kurzweil is right that by 2030 new technologies may eliminate world hunger, but will they actually be allowed to do that? It seems that as long as the new techs are in our Western hands we'll do all that we can to keep them to ourselves and maintain power over the third world. At that point, non-Westerners will be forced to adopt the new hi-tech standard in order to even eat, because if Kurzweil is right, then unenhanced humans will be unable to live in the new world order at all. It's going to be "conform or die out." It's then easy to envision a new type of world war, which Kurzweil himself predicts: third-world, neo-Luddite "bio-humans" on one side and elite "mech-humans" on the other... and there's no doubt how that one will end.

So the logical and philosophical considerations must perforce be matched against the social and ethical ones. I doubt there's anything majorly wrong with enhancing ourselves per se. Sure, let's go ahead and become super-mech-badass-brainmachine-humans... but perhaps we should make dead certain that "we" means all of us and not just an elite. It pains me to say it, but the problem, as with almost everything in the world, is money. If Kurzweil is really right, new matter-energy conversion technologies (as well as new methods to harvest energy directly from the sun) will render money obsolete and we will enjoy virtually infinite resources. Perhaps we should wait until at least then before becoming trans-humans.

Friday, July 17, 2009

Dissing Daniel Dennett: a pointless (?) classic mindfuck

Summary - I read Daniel Dennett's 1978 classic "Where Am I?" and found it very sub-par and almost shockingly uninteresting. I reflect on whether I'm missing something, or if the article is just plain bad, or perhaps if it simply tells me nothing new and I'm experiencing boredom at reading something below my current level of expertise.

I read a lot of philosophy, and every so often I come across a sub-par or uninteresting or just plain "meh" article. Often those are helpful, for they help me pick out my areas of interest and reflect on why they interest me more. At times, though, the guilty paper is right up my alley and I just want to shake the author and yell at him/her. So imagine my surprise when I had this reaction to a Daniel Dennett piece. Okay, so Dennett isn't exactly a pure philosopher--he's at least half cognitive scientist--but he's made significant contributions to philosophy of mind. And phil.mind isn't exactly my field, though I do read up on occasion about the "hard problem" and reductive vs. nonreductive materialism. Still, I've always held Dennett in high esteem, surely in no small part because of his contributions to the evolution-creationism debate and his militant atheism (not that I endorse militant atheism generally, but he's a pretty damn cool guy).

The guilty paper is his classic 1978 essay "Where Am I?", in which he proposed a metaphysical variant of the brain-in-a-vat mindfuck (yes, that's a technical term) later made famous by Hilary Putnam.

The paper is written as a fictional first-person account, which makes it fun to read but rambling: it could have taken him 3 pages to say what he did in 10. The summary is as follows:
  1. Dennett's brain, named Yorick, is removed from his body ("Hamlet"), kept alive in a vat, and cabled to a radio controller.
  2. High-tech, radio-controlled microchips are also implanted in Hamlet's skull, so that Yorick can remote-control Hamlet.
  3. Yorick is in a vat in a lab in Houston while Hamlet travels to Tulsa to perform a dangerous mission for the government.
  4. Hamlet "dies" in Tulsa.
  5. Yorick is copied into a computer program, Hubert, and the two brains are synced.
  6. Dennett wakes up with a new body, Fortinbras, which is connected to BOTH Yorick and Hubert.
  7. Dennett can switch between the two at the touch of a button, but he never knows which brain is in use and which one is the spare.
  8. Eventually, the spare brain falls out of sync.
  9. When Dennett hits the switch, the out-of-sync brain "tells" of how nasty it felt to be out of sync with both Fortinbras and the other brain, "as if being possessed."
  10. End of story.
Damn cool story, but what does it mean? First a few important details. Before the spare brain is made, is Dennett in Tulsa or in Houston? He guesses he is in fact in both places, for his "I" is in his brain in Houston but his spatial point of view is definitely located in Tulsa. This he takes to mean that our perception of selfhood is more heavily influenced by our external inputs than by reason alone. Even if we're strict physicalists and think that the mind "just is" the brain, it's still nearly impossible for us to imagine ourselves as disembodied brains-in-vats. Point of view still takes over.

But when Hamlet dies in Tulsa, Yorick has no sensory input, so point of view is no more. Dennett now knows he is only in Houston, even though he can't perceive himself as being really anywhere, since there's no point of view. Ironically enough, he now finds it hard to even project himself back in Tulsa. He is thoroughly disembodied. Hence he reflects (and this is the first key line): "had I not changed locations from Tulsa to Houston at the speed of light . . . without any increase in mass"? That is, he believes that his self has moved from Tulsa to Houston even though no information, or matter, or energy, or anything else physically moved from Tulsa to Houston. This fact he takes to be "an impressive demonstration of the immateriality of the soul" (by which, in common philosophical talk, he means the mind).

I have racked my brain (or was it my mind?...) for days to find some meaning in this before realizing the story isn't actually trying to prove anything. Surely a shift in point of view doesn't "prove" we have no mind or soul and are just brains, for that is among Dennett's premises for the story and he's far too smart to argue so circularly. Was Dennett then trying to draw our attention to how easily we are fooled by spatial concerns when we reflect upon self and personhood? Must be, but I don't see the interest in that. It's a "so what?" idea that I already knew and that probably most people would agree with, regardless of philosophical background.

Then I thought that perhaps the latter part of the story would contain some deep moral or teaching, but once again I was in for a disappointment. The only interesting concept is that when both Yorick and Hubert are operational, there are in fact two Dennetts, sharing the one body Fortinbras. This illustrates how first Dennett was in two places at the same time (Yorick in Houston and Hamlet in Tulsa); then how he was pretty much nowhere (Yorick in Houston); and then how there were two of him again located in two places (Yorick-Hubert and Fortinbras).

Okay... once again, so what? Perhaps the overarching lesson to be learned is that selfhood and personhood are but elaborate illusions that we make up for ourselves even though they're not really there. This idea is compatible with a variety of approaches to the mind-body problem and to the concept of mental content. Even though I haven't yet taken a side in the reductive vs. nonreductive materialism debate (as concerns philosophy of mind), I'm pretty sure I'm a physicalist, like Dennett, so I'm generally sympathetic to the idea that self and consciousness are illusions. Perhaps then my problem is that this essay didn't tell me anything I didn't already know and didn't give me any new information to substantiate or reject my prior beliefs. It was an interesting mindfuck, but sometimes one tries too hard to see insight where none exists, and I'm afraid this might have been the case with this paper.

Too bad. I still love Daniel Dennett and I still think he looks way better than Santa Claus with that schmexy white beard of his.

Thursday, July 16, 2009

So what if we actually live in the Matrix? Skepticism as a metaphysical hypothesis

I've always been interested in the connexion of philosophy and science fiction, two of my greatest passions. Much of science fiction (or at least much of it that's any good) is really philosophical speculation, for it postulates premises that challenge our common assumptions about ourselves and the world and then moves from there. Rob Sawyer, the Nebula-winning author of Hominids, even claims the genre should be renamed "phi-fi." So I was not really surprised to see that quite a lot of scholarly work has been published in this sense.

I bought two books for plane reading on my way home for the summer: "Science Fiction and Philosophy," a collection of essays and papers edited by U-Penn's Susan Schneider, and "Like a Splinter in Your Mind," edited by Matt Lawrence. The latter is not one of the infamous "~CRAPPY TV SHOW~ and Philosophy" titles, but an equally scholarly collection of sf-based philosophizing. A focus on The Matrix is only natural, after all, as it's among the deeper recent mainstream movies, at least conceptually.

I started with Schneider's anthology to get a broader feel for the subject, and that's what I'll be discussing here. I find this field rather promising and quite in-tune with my current research interests. The first section contains five articles about epistemology and skepticism in science-fiction, focusing on The Matrix. First, three excerpts introduce famous classical skeptical scenarios: Plato's cave, Descartes' methodological skepticism, and Hilary Putnam's brain-in-the-vat thought experiment. I suspect these were mostly included to grab the attention of those who haven't had formal training in philosophy, or perhaps as a refresher for those who have. Regardless, it's always pleasant to see editors include classic primary sources.

Then follows a supremely interesting article by Nick Bostrom. He defends the startling claim that not only should we take the "Matrix hypothesis" seriously, but it is actually more likely that we live in a simulated world than in a "real" one. The simulation argument goes like this: at least one of the following propositions must be true (they cannot all be false):
  1. Most civilizations die off before achieving technology to simulate sentient minds.
  2. Few civilizations have an interest (artistic, scientific, etc) in simulating minds.
  3. We almost certainly live in a simulation.
Now Bostrom contends that since the first two are likely to be false, (3) is likely to be true. Why would he think that? Because the first two ARE probably false. Contrary to (1), a civilization is unlikely to destroy itself before having developed mass-destruction technologies such as nuclear weapons, and a civilization that survives that stage is likely to go on to achieve technologies capable of simulating minds. After all, Bostrom reminds us, even if WE currently can't simulate minds we are well on our way to being able to do so: the obstacles are mostly technological, not conceptual. Claim (2) is also false, assuming most civilizations share the intellectual and artistic curiosity of humans. More on this later.

However, the crux of the argument lies elsewhere. Let me rephrase it by turning those allegedly false premises into positive statements:
  1. There exist many civs with the technological means to produce simulated minds.
  2. Most of these civs are interested in producing simulated minds for scientific, artistic, or other purposes.
  3*. A civ thus technologically mature would be able to run huge numbers of simulated minds: once you have the technology and enough "hard disk space," so to speak, there is no limit to how many sim-minds you can create.
  4. Therefore, it is more likely than not that our minds are simulated.
Notice the paramount importance of premise (3*), previously unstated. In short, Bostrom claims we should expect to be sim-minds merely on a statistical basis. It reminds me of a witty anecdote (maybe by Richard Dawkins?) that since millions of religions have existed and there's no way to know for sure which one is "the true one," all believers should statistically expect to end up in Hell anyway, because chances are astronomically low that THEIR religion turns out to be the true one. Likewise, given the huge number of sim-minds a technologically advanced (and curious) civilization could run, no being in the universe has reason to believe he/she/it isn't one of those sim-minds.
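Before objecting, let me make the statistical core explicit with a back-of-the-envelope formulation of my own (my notation, not Bostrom's):

```latex
% Let R be the total number of "real," unsimulated minds that ever exist, and suppose
% N simulating civilizations each run on average k simulated minds. Treating yourself
% as a randomly sampled mind, the probability that you are simulated is
\[
  P(\text{simulated}) \;=\; \frac{N k}{N k + R}\,,
\]
% which approaches 1 as soon as the simulated population Nk dwarfs R. My objection in
% the next paragraph is simply that R may itself be astronomically large, so Nk >> R
% is an extra assumption, not something the argument gets for free.
```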

What to make of the simulation argument? I find it fascinating but inconclusive. The first two premises sound like straw men at first, mere truisms (falsisms?) easily disproved, but even if so it doesn't matter too much, for the key premise is (3*). But the argument is still too simplistic. For one, it is very anthropocentric to think all civilizations will be so like ours that they too would take an interest in simulating minds. (To this one might reply, though, that a civilization with no scientific interest or intellectual curiosity would never achieve high technology to begin with.) Even more importantly, the argument doesn't consider that the number of existing civilizations is also likely to be astronomically huge, which would counter the statistical basis on which the conclusion is drawn: it's true that each sim-mind-producing civ would be able to run billions and billions of sim-minds, but it's also true there are probably billions and billions of real minds to even the count. If so, then the argument has yet another problem: it may "prove too much," so much in fact as to be self-defeating. Assuming it works, then every rational being in the universe should accept it; but if they all do, then there will be no being left who considers herself a "real" being--not even those who actually are! The moral might be that a statistical inference is insufficient grounds for changing one's ontological view. The argument is promising and mind-bendingly attractive, but I don't find it compelling in its present form.

In the next essay, David Chalmers (with his trademark clarity of thought and exposition) recasts the Matrix hypothesis as a metaphysical hypothesis. He contends, in short, that The Matrix movie does not present a skeptical scenario akin to the brain in a vat or the Cartesian evil demon. Even if we do in fact live in a matrix, it simply does not matter to the reliability of our cognitive faculties (i.e., whether or not our thought processes function correctly). Why? Because all that we used to know about our (simulated) world still holds true even if we were to learn that there's another world beyond. We cannot say we are massively deluded: all we can say is that there was something about reality which we didn't know before, namely the fact that there's another world "one level up" from us. At most we can say that we may never know what the ultimate reality is and how many more "worlds-one-level-up" there are, but that's about it. So the Matrix hypothesis is but "a creation hypothesis for the information age" (says Chalmers), and thus it is a metaphysical and not at all a skeptical scenario. Still, since it does entail suspending judgment on the question of the ultimate nature of reality, I'm tempted to brand it "metaphysical skepticism," which is a little more than simple agnosticism and de facto a form of local skepticism.

Chalmers is hitting very close to (my) home here. Much of my recent research has focused on epistemic circularity and epistemic defeat. What Chalmers is saying, in effect, is that even if we came to believe we are sim-minds in a matrix, that new belief would not undercut other beliefs upon which we think our cognitive faculties are reliable. To borrow some terminology from Alvin Plantinga, it would be a truth defeater (for what we knew about the ultimate nature of the world was wrong) but not really a rationality defeater, for it wouldn't lead us to distrust our own cognitive faculties. While he is in the Matrix, Neo's cognitive faculties are perfectly functional: he thinks logically, he infers, he solves problems, he feels things, etc. When he leaves the Matrix, those faculties stay exactly the same and he has no reason to doubt they were EVER faulty. He has simply learned he was wrong about where the world comes from, but that doesn't imply that his *thinking* itself was ever faulty. An exception is if the machines running the Matrix were interfering with his brain to make him think things he wouldn't think by himself, such as altering his sense of logic or making him go insane. For example, for his first twenty years in the Matrix Neo is a Democrat, but then the machines change his mind to Republican and make him think he was *always* a Republican (and that would explain a LOT about our own world!). But this objection, while true, is valid whether or not we are sim-minds: we could say the same about an interventionist god, or an evil demon, etc. It's a genuinely skeptical objection, but it is not limited to the Matrix hypothesis and it is not enough to call the whole hypothesis a skeptical one.

The article by Chalmers made me recall that both Plantinga and Michael Bergmann had argued along similar lines while discussing Plantinga's evolutionary argument against naturalism. The Matrix example had sprung up in those papers, and both philosophers had regarded it as a truth defeater but not a rationality defeater. A truth defeater is a belief which, if you hold it, makes you lose confidence in another belief you also held. For example, I believe I have 50 cents in my pocket, but when I stick my hand in there I only feel one quarter; hence my newly-acquired belief "I have 25 cents" is a truth defeater for my previous belief "I have 50 cents." A rationality defeater, instead, is much more serious: if I acquire one, I will doubt my own cognitive faculties, my very thought processes. For example, I learn that I've ingested a pill that makes people hallucinate and then go totally insane. Rationally, I'd have to assume that perhaps I am already under the pill's influence; maybe I didn't really take the pill after all, but the mere suspicion that I might have is enough to make me lose confidence in my own mind and will soon tear me apart. With this in mind, back to Chalmers now. Even though he doesn't frame his discussion in terms of epistemic defeat, that is in fact what he is saying: the Matrix scenario is a truth defeater but not a rationality defeater. He then applies the same reasoning to other so-called skeptical scenarios such as Bertrand Russell's five-minute hypothesis and Putnam's own brain-in-a-vat experiment. These simply aren't skeptical scenarios at all: they're metaphysical problems. In this case, philosophy and common sense get along well with each other, because the conclusion is that even if we live in a matrix (or our brains are envatted, or the universe was created five minutes ago, etc.) it really doesn't matter for our present purposes. Even after we learn the truth, we will be the same people as we were before and we would have no reason to believe we were being otherwise deluded.

I'll end with two questions that have nagged me throughout the reading. One concerns Berkeley's idealism, of course: are things there when we can't perceive them? To what extent is the real actually "there" if we don't see all of it at the same time? If we live in a matrix and our world is only in our minds, can we say that we perceive it all and that it is thus perfectly real to us even if it is simulated? Chalmers certainly seems to think so. (Of course, one needn't speculate about matrices to appreciate the beauty of Berkeley's idea, revamped as it is by modern-day worries about quantum uncertainty.) The other question, closely related, is about scientific realism, viz. the idea that what science describes is the "stuff" that's out there. How does the Matrix hypothesis--in either Bostrom's probability argument or Chalmers' metaphysical recast--affect scientific realism? If we are in a matrix, to what extent can we say that what our science describes is really there at all? Is the fact that we're perceiving it enough to say it's there?

Sounds awfully Berkeleyan to me....... but then again, most things do!


Readings/references:
  • "Science Fiction and Philosophy" (Wiley-Blackwell 2009, ed. Susan Schneider)
    • David Chalmers, "The Matrix as a Metaphysical Hypothesis," 2002.
    • Nick Bostrom, "Are You in a Computer Simulation?" 2003.
  • Michael Bergmann, "Commonsense Naturalism," in Naturalism Defeated?, 2002.
  • Alvin Plantinga, "Reply to Beilby's Cohorts," in Naturalism Defeated?, 2002.

Welcome!

This is Philosoph-ish. This is good shit. Read it. That's all.