Bruce Sterling
bruces@well.com

Literary Freeware: Not for Commercial Use
Lifelike Characters
Speech at Lifelike Computer Characters ’95
Snowbird, Utah
September 29, 1995

Thanks for that introduction. Hi, my name’s Bruce Sterling, I write science fiction novels. In the past few days, quite a few people have asked me what the heck I’m doing here.

That’s a good question, and just to get a grip on that, I’d like to poll the audience before we get started…. How many people here actually write fiction? I’m not asking if you sell it, I just want to know if you’re brave enough to publicly admit that you write fiction. I want to see you raise your hand.

Thanks. That’s very interesting. You may or may not know that two of the hottest and most successful science fiction writers today, Neal Stephenson and Greg Egan, are both former programmers. Just a data point for you there.

Very well, to the point then — what is a science fiction writer doing at an artificial character conference? I came here because it’s good material for me, of course. You see, many science fiction critics consider the first true work of modern generic science fiction to be a book whose theme is the artificial creation of a lifelike character. That book is Mary Shelley’s FRANKENSTEIN. So you see, even if there were no such thing as science fiction, conferences like this would require us to invent it.

But ladies and gentlemen, Mary Shelley was a science fiction writer of a different cultural epoch than my own. The Frankenstein property has always been a very hot vaporware notion, but to date we haven’t come up with a true killer app for Dr Frankenstein’s lifelike character. Generally we just have Frankenstein’s monster pursued and ritually butchered by frightened peasants with torches, and it’s all done in a remote fantasy realm for the sake of popular entertainment. This is a long and honored tradition in the social response to technological innovation. It’s very much the sort of thing we see today in US Senate hearings about the Internet.

I have a profound respect for Mrs Shelley’s technosocial acumen. Although I disagree with her about a great many things, I won’t consider that I’ve won the debate until I too have had a science fiction novel in print for a hundred and seventy years.

Nevertheless I do have some profound advantages over Mary Shelley. It’s no particular credit to me as an artist that I have these advantages, but I do have them, and I do use them. Because I had the blind luck to be born long after her, I get to stand on Mary Shelley’s literary tombstone. I get to watch people actually study, invent, and market the kind of stuff she could only imagine and shudder about. And I’ll go further. When I meet these inventive people and I talk to them, I feel a strong sense of solidarity with them and their efforts.

If Mary Shelley were here tonight, and if she could be brought to understand what you’re up to, I think she’d have a very strong negative reaction. I think she would tell you that what you are doing is blasphemous, that it’s an act of hubris that can only end in misery and bring divine retribution. I think she would tell you that you are sinfully attempting to mechanize the human soul. I think she would strongly argue, along with her contemporary William Wordsworth, that when you over-analyze, you kill the rainbow and you murder to dissect. I think she would argue that you are creating a race of soulless golems and that these creatures will be with you on your wedding night.

However, ladies and gentlemen, you’ll get no such firm moral lecturing from me. Quite the contrary. You see, I’m willing to forgive you almost anything, because I think you guys are cute. I think you’re intellectually sexy. I’m enjoying your company very much. I can even steal your ideas and mimic your language. I’ve done that before, and I’m doing it now, and I’ll do it again.

Since I’m also a pop science journalist on alternate Tuesdays, I think I could guarantee that I’d give Dr. Victor Frankenstein a sympathetic hearing and I’d probably even buy him a beer. I admire Mary Shelley and I acknowledge her as a spiritual ancestor of sorts, but speaking personally, I disagree with Mary and don’t have a lot of use for Victor. I identify with the monster.

I think our grandparents were Victor Frankenstein. I basically am the kind of deeply unnatural creature that Mrs Shelley instinctively dreaded. I not only eat her sacred cows but I eat them with ketchup. While I take her point, I think that transgressive monstrosity and tampering with the life force are both a lot more fun than she suspected.

So tonight I plan to tell you a few things about the artificial character biz that you may not have seen in standard corporate PR, but I hope you won’t take this amiss. It’s just speculation, it’s not a sermon. It takes a lot to make me wax judgemental.

After all, we are creatures bred by the same society and we are occupying the same anomalous space. We exist in the gulf between CP Snow’s two cultures. We are living, breathing, and trying to think and feel inside the gulf between art and science. Quite some time ago I learned to ferret out the smell of people who dwell in this area. You have that smell. That’s why I’m here. I think that I’m a person very much like you who happens to be a novelist.

The fact that I generate prose and that I get called an “author” is not of particular relevance to me. What is relevant is the fact that I live through monstrosity, through contradiction, through violation of old cultural definitions, and through transgressions of the natural order. I live through paradox and oxymoron. “Cyberpunk” was an outrageous oxymoron until it became a pop cliche. The term “science fiction” is also an oxymoron, a profound contradiction on its face. People have been using this term “science fiction” for seventy years, but think about it. How can science be fiction? How can fiction be science? Are novels quantifiable? Do novels yield reproducible results? Are novels falsifiable? Do novels offer conjectures which suggest lines of experimental verification? I know scientists, and I’m no scientist.

I know litterateurs, also. I know about plot, character, denouement, climax, rising lines of dramatic tension, emotional impact. But present one of my science fiction novels to someone formally trained in the literary proprieties. Wait a minute — there’s not much of a plot here. It’s more like a picaresque tour through bizarre fields of data. There’s a huge amount of energy expended on background, and relatively little on character. The idea is the hero, the characters are cogwheels. The writer is not true to life, he is not describing what he knows — he’s speculating, he’s describing things that no one can know, things that will never exist. It’s not Chekhovian drama with a tight interplay between fully realized characters. Science fiction is spectacle — the cherry orchard has been set on fire by Martian tripods.

So can you call a science fiction novel a novel? You can if you’re trying to market it — but if you’re honest, you probably shouldn’t. You can just accept that this is a creative work in prose which is monstrous on its face, but it has a specific function and it carries that out. Its function is to bear witness to a society warped beyond recognition by the impact of technology. Science fiction is the native literature of a Chernobylized society. If that damages plot, character and dramatic structure, so much the worse for the literary conventions. I’m not a literary person who chooses to write in the genre of science fiction. I’m a product of high-tech society who happens to express himself in prose.

Well, ladies and gentlemen, I hope these confessions of mine haven’t been too shocking, because I’m about to apply them directly to you.

Your prose sounds quite a lot like my prose. We actually think alike. You see, I come to a gig like this and I hear someone say, “Well, we’ve got a very nice character interface. You can’t really call it true natural language conversation, but we have installed a social interaction engine.” When I hear prose like that, a bell goes off. And I think — man! What a hot rhetorical move for evading the fact that the goddamn thing can’t talk properly. That fine little glow lasts long enough for me to make a note of that for possible future use in a story, or at least in the WIRED magazine jargon file.

That is a very odd rhetoric. That’s the kind of rhetoric that an artist slash technician uses; it’s the language of a poet whose muse is the God that Limps. But then I get to thinking more deeply, because a statement like that is deeply provocative. Do I really want a machine to talk to me in human terms? Do I want to talk to a machine about its bogus humanity? Maybe a lot of people will be happier if their machine can mimic them somehow, if it plays successfully to their own character traits. That’s an extremely interesting notion.

It’s one of my character traits, however, that I want to know the worst and I want it without any candy-wrapping. I don’t mind if a machine talks; I’d rather like it to talk. But an obsequious pseudo-human machine is a deliberate facade. Obviously it’s far more interesting to worm the real truth out of a machine — I want a machine to level with me, I want it to talk to me about its mechanism. It’s a machine, it doesn’t need natural language or natural consciousness — it needs a social interaction engine.

You see, Mary Shelley believed that creating a Frankenstein monster was a supernatural act. A smart machine is a profoundly paradoxical entity, but a paradox is not a supernatural act. A paradox is a symptom of a breakdown in two competing definitional systems.

Alan Turing was wrong when he implied that consciousness is a primal state that comes in units of one. He said that computers should be considered conscious if they can deceive us into believing in their humanity, but that begs the question of what machine processing really is. Human consciousness is probably only one aspect of a much larger set of phenomena.

I don’t want a machine that can talk like Alan Turing. I can’t even have a machine that can talk like Alan Turing. Why? Not because it’s necessarily ontologically impossible — there are people like Searle and Penrose who make those arguments. They may be right. But I wouldn’t bet the farm on that kind of logic-chopping.

It’s impossible because as soon as the chip starts to talk like Alan Turing, someone is going to make ten of them and network them. It’s impossible because Moore’s Law will see to it that a Turing-on-a-chip can talk like Alan Turing for only eighteen months. After that, the next-generation chip will talk like two Alan Turings at the same price.

Which is really more interesting, more provocative, and more likely to affect our society — a faultless Alan Turing mimic, or Alan Turing to the tenth power? It’s rather silly to limit the power of computation by comparing computers to human brains or human psychological structures. It’s like judging an aircraft by how well it lays eggs. It’s like those very early models of the horseless carriage that had a wooden horse on the prow.

You see, once we’ve got our mitts on lifelike characters, we’re not going to leave them inside the ivory tower with a British spook mathematician. If they really work, we’re going to deploy them throughout the fabric of society. Look around yourself — this isn’t Plato’s Republic ruled by philosopher kings. This is a capitalist society. We sometimes research relatively obscure technologies just because they intrigue a class of researchers. But we develop technologies in order to maximize the return on investment.

So let’s consider some down and dirty visions of what happens when the street finds its own uses for lifelike artificial characters. In order to speculate with a bit more rigor, let me define this class of entity a bit more narrowly. I’m not talking about deep and hard artificial intelligence here, I’m talking about high-bandwidth audiovisual human image and conversation mimicry. Something like the little digital fellow with the bow-tie in Apple’s famous promo piece, “Knowledge Navigator” from the 1980s. Is there anyone here who hasn’t been exposed to that Apple video? Please raise your hand if you have no idea what I’m talking about. Nobody? Right.

I like to call these creatures “mooks,” because to my eye they are kind of like moochers, because they hang around in an annoying fashion, and they’re kind of like spooks, because they’re bodiless, ghostlike, and they can spy on you. “Mook” is a portmanteau term. This may not be the best possible neologism for this sort of entity, but I think it’s a good idea on principle to generate neologisms. They are good for us and solidify our thinking. Science fiction writers especially should do a lot of this because we contribute so very little otherwise.

I recently wrote a science fiction story in which mooks and this mook technology are pretty much omnipresent. The mooks are not much liked, but they are accepted without question. People are forced to deploy mooks, because otherwise they are jerked around by other people’s mooks.

You see, there’s a cascading effect here, because the invasion of other people’s autonomy is very strong when their attention is arrested, and their time is demanded, by an artificial entity. To socially harass another human being with a software engine is an extremely aggressive, dominating act. Ignoring one’s inferiors and fobbing them off with a yakking digital dummy would be something akin to class warfare. People who could get away with this would have a great deal of status. Eventually their behavior would be mimicked. Everyone would want and need a mook.

It’s very much like the phenomenon of answering machines. Answering machines were originally invented so that you would never miss calls. These devices are now used as screening devices so that you can deliberately miss calls. Voice mail was invented to capture messages and ensure instant access to data. Voice mail jails capture, detain and discourage unwanted human intruders. This is a process that the science fiction writer Brian Aldiss calls “enantiodromia,” the process of things turning into their opposites. It’s a very common thing once you learn to look for it.

Unless some kind of regulatory framework is installed — and who believes in those any more, especially in computing? — a mook arms race is a very likely outcome. Mine calls yours every fifteen minutes, yours calls mine every twelve. Rich people have smart, well-informed, shiny mooks, but poor people have dirty shareware mooks. I can hide my identity very well behind my mook. In fact, I died six months ago, but my mook’s still alive and covering for me, so you don’t know it.

Let’s suppose you’re called by a mook like the mook in Knowledge Navigator — a mook which is also a web browser and a kind of portable library. This mook knows your psychological profile and your set of interests and hobbies — it’s probably studied your zip code, your demographics and your credit card records.

Let’s say you’re interested in mountain biking. Although you may despise mooks, you can’t help but be interested when the mook starts telling you breaking news and industry gossip and incredibly detailed assessments about mountain biking — hubshocks and gearing ratios and who just won the Iron Man Triathlon. It’s like being collared by a very glossy and perfectly targeted magazine ad. Why does the mook do this for you, or rather to you? The mook does this in order to keep you from hanging up until a human being can catch up to you — in this case, maybe it’s your mother.

Did you happen to notice in the Knowledge Navigator ad that the hero’s mook lies to the hero’s mother? At the end of the video his mother is reduced to blustering aloud that she knows he’s in there. Why should the little mook lie? Because a mook is a total moral vacuum, it will do anything it’s told.

We’re not talking about an intelligent moral actor here. We’re talking about a social interaction engine. This is a device which brilliantly mimics a talking and listening human being, but has absolutely no moral foundation of any kind. It has no ethical base and it does not comprehend the criminal code. In short we have a well-nigh perfect criminal accomplice. This device talks smoothly, convincingly, and unerringly, but it has no sense of guilt or remorse. It has no private life outside its owner’s commands, so it cannot quit in disgust, rat to the cops, or confess everything in pillow talk with a girlfriend. It may have memory storage allotted to it, but it has no permanent memory. It can be programmed to wipe its memory of any criminal evidence after it commits a crime. It cannot testify and it has no culpability.

So what kind of crimes can be committed by a lifelike artificial character? A galaxy of them. Plausible deniability — mafia dons use them to convey direct orders to hit-men. Mooks with spreadsheets can launder money, open bank accounts, do direct electronic deposits. Plausible, fast-talking mooks can run boiler-room operations, defrauding widows and orphans of their life savings by selling nonexistent gold mines, fake collector’s coins and fraudulent stock shares. Mooks can be subverted and used as industrial spies, recording all the victims’ computer operations and then secretly passing that data to their true masters in some distant location on the planet.

Conversation engines would be excellent devices for monitoring huge numbers of phone calls, so there’s a major espionage app. Mooks can engage in libel and slander without recourse, because they don’t exist. They can make endless numbers of harassing phone calls around the clock. Knowledge Navigators would be excellent devices for online stalking of selected victims. And of course, they could impersonate people, including heads of state and corporate executives.

But I don’t see any of these activities as the major killer app for mooks — at least, not in the short term. I saw a presentation here where a text-to-speech unit read a bit of high humanist poetry — the question was then asked, why doesn’t this actor read a line of verse like Laurence Olivier? Well, the answer of course is that this is an actor who has no comprehension of the meaning of the text. It cannot emote. Its reading is lifeless because it is lifeless and its reading is unemotional because it has no emotions. It cannot out-act Laurence Olivier because it has no talent. This actor is a non-intelligent entity. However, it is completely tireless and it will take absolutely any role that its owner offers it.

What is the true market niche for an actor with these qualities? The answer of course is pornography. New media that survive always have a strong initial vogue for pornography. However, lifelike computer characters are so ideally suited to pornography that they may well dominate this niche completely.

Mooks of course are well-nigh perfect for phone sex applications. Even text recognition systems might have a certain niche for obscene MUD applications and hot chat. But 3-D virtual human bodies generated in real-time have enormous potential; they are blow-up sex dolls that can move.

True pornography devotees don’t want people to have humanlike sex. Actual human sex tends to be full of tiring complications like commitment, affection and emotional engagement. Porn devotees want to witness sexual acts which are somewhat humanlike, but mostly they want sex to occur in a conceptual realm that is profoundly divorced from troubling realities, a realm that is the timeless, guilt-free, zipless realm of pornotopia.

Martin Amis suggested in a recent novel that real porn devotees want pornography on their screens even when they’re not there. There’s a strange and powerful truth to that insight. Artificial lifelike characters can offer live, 24-hour, real-time, interactive porn without any limits of metabolism or even physics. Someone’s gonna make money there.

Please don’t think that I’m trying to be judgemental when I point this out about your craft. I’m merely warning you about the likely fate of utterly stupid and fatally attractive images of human beings once they innocently wander out of the safe realm of Silicon Graphics and AT&T Bell Labs. There’s porn on CD-ROMs, porn on BBSes, porn on the Internet, and to me it seems entirely reasonable that lifelike artificial characters will become the apotheosis of digital porn. You should expect this to happen, and you should expect to deal with it, and you should expect trouble from it.

Censorship is one of the few aspects of society that is as powerful and omnipresent as pornography; in fact, they are working models of one another, the yin and the yang. They can even exist in the heart and soul of the same human being, as many of you may know if you have followed the recent career of Martin Rimm.

Well, let me step out of the gutter and fastidiously scrape my feet for a moment. I want to change topic now and talk about some of the more visionary and fundamental implications of the quest for artificial lifelike characters. Earlier I said that I thought Alan Turing was wrong when he implied that there were two states, intelligence and unconsciousness. I also think we are similarly wrong when we imply that entities are living or dead.

I don’t think that computers are alive, and I don’t think that they think. I don’t think that computation is thought. I don’t think that the human brain “computes,” except in some very narrow sense. I don’t think that computation is cognition, either. I think it’s a semantic trick to assert that artificial life systems are alive in a biological sense.

Because we human beings are alive, and because we think, we’ve always considered these to be ontological bedrocks of all possible states of truly complex behavior. I believe that we are now proving that this is not so. It’s my feeling that an Artificial Intelligence cannot be conscious in any human definition of that term; but that scarcely matters if that entity can deceive you, kill you, or make trillions of dollars in the stock market. They might not be human beings, but obviously if such entities existed they would demand our respect and attention.

After all, the United States of America is not an intelligent human being and it can’t pass a Turing test. The city of Paris can’t pass a Turing test either, but people love the city of Paris and will make great sacrifices to be in its vicinity. It might even be said that the city of Paris possesses its own character.

AT&T used to be one entity, and then it became ten entities, and now it’s apparently about to become three more entities, but AT&T is still an entity. It can be recognized, it can be dealt with, it can buy and sell, it can sue and be sued, it can ask for credit cards and thank you for using AT&T with a nice mechanical voice. AT&T speaks in magazines, AT&T gives money to charities, AT&T writes groovy science fictional commercials about the things that YOU WILL do.

The stock market can’t pass a Turing Test, but when the stock market crashes people jump off the sides of buildings. I think that artificial intelligences and machine intelligences are best classed with this sort of complex entity.

The United States of America is not a lifelike artificial character, but Uncle Sam is — he’s exactly that. So are John Bull, and Marianne, and maybe even the Pillsbury Doughboy and Betty Crocker. Computers aren’t the only complex systems with a user-friendly humanoid graphic front-end. I would recommend to you a study of these characters. There might be something useful there.

I think that we get into severe trouble when we try to discuss computation in terms of human thought and vice versa. I don’t think that these definitional systems are fully translatable or transferable.

However, I do rather suspect that they are blendable. I think that people will end up using machine-augmented modes of consciousness. It’s likely to start as prosthesis for those who are tragically injured and who deserve our pity and help; but that will serve as the thin end of the sociocultural wedge for an ongoing process of cyborgization. I think that the beginnings will be halting and ugly and tragic — but then again, much of the human condition has always been halting and ugly and tragic.

Senile dementia is ugly and tragic, the aging process is ugly and tragic. Machine-augmented consciousness offers a workaround for this. People who are technically equipped to interfere with these processes are going to do it. People will object to this on moral grounds — until they or their loved ones are dying. Then they’ll have a change of heart.

I suspect that your successors are going to be very busy in this enterprise, because this is when behavior study and character generation are going to involve an interface with neurons.

People are going to become very cheerfully and deliberately monstrous, and we are going to need ways to diagnose their behavior. We’ll have to deal with new phenomena like Artificial Insanity. Eventually, we’re going to have to deal with people who are our cousins and our parents and our children and who are quite frankly no longer human. We’ll have to invent diagnostic procedures, and legal procedures, and political procedures, to judge just what forms of posthumanity are socially acceptable, and which are beyond the pale (and for how long). We’ll have to look former people right in the eyes, or in the sensors, and tell them why we think they should no longer be allowed to vote, or why they should be institutionalized, or why they should be shut down.

I’m not saying that this is our technological destiny, that there’s no possible way that we can avoid this. But this is very clearly where we are going. We are creating this world and our society is preparing for it across the board. As a society, this is very clearly what we want, whether we can admit this to ourselves or not. The symptoms and the harbingers are all around us. Cosmetic surgery, radical piercing, radical tattooing. Retin-A and baldness cures. Neuroscience, gerontology. Immersive virtual realities. Military man-machine interfaces. Diet fads, contact lenses, crazily elaborate workout machines.

And, of course, this impetus, this dynamic, this imperative is present very strongly in your work. It could not be more obvious. I’ve witnessed this here — in the past few days I have seen the work of people who are methodically chipping away at the walls between mechanical and organic, between intelligence and computation, between the screen and the human face. That is what you are doing.

And why are you doing this? Why are you blurring these lines and merrily smashing these taboos and causing Mary Shelley to spin in her grave? For all sorts of good and practical and reasonable reasons, am I right? Because of national security. Because it will make computers easier to use and to sell. Because Hollywood needs more money. Because your backer wants to do customer service with entities who can talk, but aren’t in unions and don’t get paid. Because computers can be sweet little pets. Because you can teach people foreign languages with a perfectly patient teacher. Because it’s fun to make a puppet run and dance with code. And so on.

Let me suggest ladies and gentlemen that those are rationalizations rather than real reasons. I suggest to you that you are engaged in this effort because you are driven to do it by motives you don’t entirely understand but are compelled to accept. Because you sense in your bones that this is the proper place to be at this historical moment. It’s a paradox to be an artist working through the medium of machinery — but the paradox is where the sons of bitches have hidden all the oxygen. It may be crazy here, but we can breathe in here. Because the old art is no longer working, and the old science is no longer funded, and politics are a debacle, and the culture is at its wits’ end.

Because you are artists slash scientists — with the emphasis on the slash — you are personifying deep powerful drives in culture. Not the culture of 1995, but the culture that is yet to be. And why shouldn’t you do that? I think that you know, and I know, that the only real way out of the trough of the twentieth century is sideways through the wall at high velocity. We are strange people with strange ideas, but our goal is to change society so utterly that we’ll no longer be strange. Same old game of history, but profoundly new players, new rules.

You know, ladies and gentlemen, I’ve been saying this sort of thing for a long time now. I’m not crazy, I’m getting less peculiar by the day. I’m merely bearing witness to the profound strangeness of the epoch that created me. As for the future — the future’s not only weirder than we imagine, it’s weirder than we can imagine. It’s still not weird enough for me, but thanks to the tireless efforts of people just like you, I’m perfectly confident that someday it will be.

I used to think I’d never live to see the day when things were really weird enough for me. I no longer fret about not living to see that day. I think that day is coming plenty soon enough. Thanks for having me in and showing me what you’re up to. It was a privilege.

That’s all I have to say tonight. Thanks a lot for your attention.

http://research.microsoft.com/en-us/um/redmond/events/lcc/lcc95/sterling.htm
