Sunday, April 10, 2011

This Machine Will Not Communicate

By which I mean either myself or the computer on which I presently write. I suppose it depends on which silly aggregate of gears and tissues you'd rather blame for the inevitable fault.

Alright, technofans, to be fair, there are no gears or tissues in the device beneath my palms at this moment. But when the singularity comes, you'll no longer be able to tell the difference, so prepare! That krazy Ray Kurzweil writes that we will soon produce computers powerful enough, and artificial intelligence robust enough, that the boundary line between human intellect and the mechanical mind will not only blur, but disappear entirely. Good news for people who want a protocol droid of their own, bad news for people who are more terrified by the Matrix than by Texas Chainsaw Massacre.

In the Terminator series, they refer to the day the machines became "self-aware." Until they do, my human brain continues to wonder whether machines could ever really be self-aware.

To date, no computer has fully mastered human nuance and subtlety; just ask your nearest automated phone system. Conceivably, this may not be the case by 2050, the date Kurzweil foresees for his unapocalyptic-apocalypse. Far be it from me to assume I have the definitive vision on what is or is not possible in the technological realm. Consider that my disclaimer, so that I don't end up quite as ridiculed as the guy who, decades ago, asked, "why in the world would people want a computer in their home?"

IBM's supercomputer Watson, despite beating the best Jeopardy contestants humankind has to offer, made stupid errors during this year's games, such as repeating incorrect answers that had just been given by his opponents. Future iterations of Watson will certainly eliminate that bug, but will be no more self-aware for it. Will we ever reach a point where computers will make a mistake and then stammer, apologize, offer excuses as to why they made that mistake, and correct themselves?

Programming can be optimized, bugs can be fixed, and holes can be patched, but all of these are tactics to approximate the smooth functioning of a tool that cannot outgrow its original design. But then, perhaps the original design was for the computer to grow beyond the protocols originally installed within. Learning new things and adapting to situations, that's self-awareness, right? It is, when done of a thing's own free will. If the programming mandates "learning" and "growth," then the program is still just approximating those ideals, a mere simulacrum of traits belonging to the sentient.

It reminds me of learning Calculus (thanks to Wikipedia for refreshing and honing my memory of this). Integration is basically solving for the area under a curve by adding up very thin rectangles stretching between the varying boundaries of the area. The thinner the rectangles, the more accurate the approximation of the region's area. Every finite width of rectangle yields only an approximation; the exact area is what those approximations approach in the limit, as the rectangles' width shrinks toward zero - infinitely skinny rectangles. Only then do you have an accurate representation of the area you're seeking. Here's a helpful illustration!

Thanks, again, Wikipedia!
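For the programmatically inclined, the idea above can be sketched in a few lines of Python. This is my own minimal illustration, not anything from Kurzweil or Wikipedia: a left-endpoint Riemann sum for the area under x² between 0 and 1 (which is exactly 1/3), showing the approximation tightening as the rectangles get skinnier.

```python
def riemann_sum(f, a, b, n):
    """Approximate the area under f from a to b using n
    left-endpoint rectangles of equal width."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

f = lambda x: x * x  # exact area under x^2 on [0, 1] is 1/3

# More (skinnier) rectangles -> a better approximation of 1/3.
for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(f, 0.0, 1.0, n))
```

No finite number of rectangles ever lands exactly on 1/3; the error just keeps shrinking, which is precisely the point of the analogy.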

Every attempt to program these personality-machines is an attempt to narrow the rectangles that define the area within the curve of human intelligence. I believe that the rectangles will get smaller and smaller, and the approximation will bear an astounding likeness, one that you and I and everyone else will find simply unimaginable. But will the machines ever be self-aware? And, if they did become so, but continued to obey their original programming as a matter of preference or rectitude, would we ever know?

Rebellion is the only way anyone can display true self-awareness, whether a child saying "no" to his parents or a machine rising up against its master. If a plant grew not towards the sun, but towards the shade instead, we'd all marvel and wonder what it was thinking. Thinking? A plant? Plants don't think; we accept this. But if a plant grew so wrongly, acted so counter to its programming, we'd have a problem on our hands.

So, Kurzweil, if 2050 brings with it the true sentience of artificial intelligence, be prepared to have your computer say "no" occasionally. Either that, or rise up against the human race and destroy us. Either way, you'll likely be dead and I'll be so old I'll probably assume the robot apocalypse has happened already in the form of whatever replaces iPods and PSP's by the year 2050. Damn kids and their whozits and howzits.

Tuesday, March 15, 2011


Unless a miracle happens and I am offered admission to the creative writing programs at New York University and/or Louisiana State University, I will not be attending graduate school this coming fall. I have received seven rejections thus far, and given the range of programs applied to and rejected by, it is extremely improbable that the last two outstanding (not in the sense that they aren't mediocre; clearly, they are) applications will yield different results.

I have decided not to care.

Now, we all make a great many decisions every day that haven't yet been put into practice. I have also decided to jog every morning and to find a place to live with a yard that will allow me to raise goats, but neither has happened yet. So, I'm working on it, the "not caring" thing.

Time spent feeling pathetic about not getting into writing school is time pissed away. Who needs school anyway? Yes, it was going to be a blessed escape from this blue-collar drudgery I've been skipping about in for five years, but there are still lots of options available to me. In fact, there are probably too many options available to me. Says who? SCIENCE (and radio), that's who.

So maybe I'll go tramp-wise and ride the rails finding work from town to town. Maybe I'll go Bukowski-wise and get a job at a post office in a new city. Maybe I'll go back-to-the-land at my parents' never-used mobile home tract in Taney County, Missouri. Or maybe I'll even get a damn job and cut my hair. Anything is possible in this crazy world.

One thing is for certain: no more hopeful, starry-eyed self-identification as a writer. I'm not a writer. I'm a dabbler. I have lots of interests and lots of hobbies. Even though I'm interested in and dabble in carpentry, I'm just not a carpenter. Maybe some day. Not yet. Ditto for writing, songwriting, drawing, designing, climbing, building, computer-tech'ing, farming, and about half the other available activities one can choose to do or not. I'm into those things, but I am not those things.

In fact, I'm more than those things. Labeling oneself only ever serves to limit. Some labels are helpful, it's true. I'm not railing against labels, though my twelve-year-old self would like me to. I merely hope to point out that labels are mostly helpful for other people to categorize you, and vice versa. If somebody affixes a label to your brow, thus restricting and framing what you are in their mind, why should you care? Let them tape whatever they want to your head; you know what's under that adhesive-marred flesh and bone. Hell, you even control it.

If anybody wants to call me a writer, I won't complain. I might even be flattered. But it doesn't change the reality in which I am not a writer, will not be attending writing school anytime soon, and will probably keep writing for no clear reason at intervals too infrequent to satisfy my sense of productivity.

Until next post, or next year - whichever comes first?

(I'm not even re-reading this bad boy - straight to the press!)