Saving Sci-Fi from the Singularity in Thirty Minutes or Less

from a talk initially given at WriteUp SLC, March 1, 2014

Ladies, gentlemen, small furry creatures from Alpha Centauri. Thank you all for coming, and welcome to this... thing. It was a little hastily put together, but my hope is that, by getting people with similar interests in the same room, good things can happen: ideas will mix and mingle, collaborations will emerge, plots will be schemed and schemes will be plotted. Showing up is the first step, so thank you all for showing up.

In case there's anyone here who I'm not already linked to on Facebook, I'm Bryce Anderson, and this is my public speaking lifeline, Kindley. I write science fiction, fantasy, and whatever else scratches my itches. My formal qualifications as a writer are meager: a Bachelor's degree in computer science, one novel self-published and another on the way, a few short stories up on the Net, 140 followers on Twitter. Whoops, 138 now. I'm not exactly a bigshot. But I have people who believe in me, and that keeps me going, keeps me plugging away.

Writing is one tricky pony to ride. It's thrilling and maddening, elating and ego-bruising. We all define success differently, and it seems like whenever our success comes, it disappears too quickly; there's always another hill, always a higher peak, until you're JK Rowling lying on her money mattress, wondering if fame is ruining her life. But whether we strive to become a bestseller or just get our stories out of our heads and onto the page, we all want to master the power of storytelling. That's why we're here, so let's get to it.

I've got the lead talk, the subject of which is "Saving Sci-Fi from the Singularity." At 10:30, Becky Palmer will be doing her talk, an oldie but a goodie on Joseph Campbell and the Hero's Journey. Until then, I'll be taking you on a journey into the not-too-distant future, into the gaping, slavering cryptographic maw of The Singularity.

The Singularity in Two Minutes (Or So)

The Singularity is a profound, profoundly weird idea popularized by Vernor Vinge, Ray Kurzweil, and others. It posits that not only is our technology forever improving, but the rate of improvement is itself increasing exponentially. Humankind's technological sophistication will keep on charging forward, with ever-smaller computers, ever-faster computation, ever-tinier and ever-more-sophisticated robotics, and ever-finer control of nanotechnology, the technology of the microscopically small. Very soon, regular humans (at least those who don't become superintelligent themselves) will be cut adrift in a complex and rapidly-changing world. Things will move at such a frantic pace that we can no longer slow, control, or even predict what comes next. This is "The Singularity," "The Technopocalypse," "The Rapture of the Nerds."

The match that detonates the powder keg? The invention of smarter-than-most-people AI. So the story goes: once such AIs exist, they can make breakthroughs of their own, catalyzing the already frantic rate of discovery. Worse, they'll design AIs even more capable than themselves, and faster computers for them to run on. With faster computers, uber-AIs aren't just possible, they're cheap. Everyone is going to want a supergenius personal assistant to organize their lives and save them from the cunning of other people's supergenius personal assistants. Genius AIs will be more common than smartphones, and we'll eventually cede control to them. Again, so the story goes.
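
To make that feedback loop concrete, here's a toy model of the runaway process. Everything in it (the starting capability, the improvement rule, the constants) is invented purely for illustration; it's the shape of the curve that matters, not the numbers.

```python
# Toy model of recursive self-improvement. The feedback rule and all
# constants are invented for illustration; this is not a forecast.

capability = 1.0   # 1.0 = "smarter than most people"
feedback = 0.5     # fraction of its own capability each generation adds

for generation in range(1, 11):
    # A more capable designer produces a proportionally bigger successor.
    capability += feedback * capability
    print(f"generation {generation}: capability {capability:.1f}")

# After n generations, capability is 1.5**n: exponential blow-up.
# That compounding is the whole engine of the Singularity story.
```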

As a predictor of future events, I'm a little skeptical of The Singularity. But as an idea that directs both science fiction and scientific research, it's important and hard to ignore. The Singularity is now part of the public consciousness and a mainstay of science fiction. As a genre, science fiction is less about predicting the future and more about using futuristic settings to shed light on our present hopes and fears. But we also strive to make our settings plausible, to write tales supported by our understanding of what is possible and likely. If you aim to write science fiction (especially science fiction grounded in hard science), it's important to decide how (or whether) your science fiction deals with The Singularity.

Plausibility and Storytelling

This is a topic I've been struggling with for some time. The world predicted by The Singularity is weird, almost unrecognizable, populated by creatures utterly alien in their thinking. In this brave new world, relatable characters, motivations, and stories may be few and far between. Yet, for stories of the far future, the familiar no longer has the ring of truth.

Let's talk Star Trek: The Next Generation, one of the most successful and beloved sci-fi series of all time. TNG did a lot of things right. It gave us adventure, exploration, drama, and Commander Riker's commanding and manly beard. I love the show. I do. But the world it created isn't the one I'd expect to see in four centuries. It's a world that resembles our own, one that's easy to tell stories in. When building a sci-fi world, that's no small strength.

But when contrasted with the trends that fuel The Singularity, the technological progress of the last four centuries seems humble. Sure, they have transporters, warp drive, holodecks. But the Enterprise-D's computer isn't much more clever than Siri. Only Geordi wears sense-enhancing technology. Nobody's brain is backed up to a hard drive (not even Data's). Lifespans aren't noticeably longer than our own. Nobody is genetically enhanced to become supersmart or superathletic. It's hard to imagine making so little progress.

There's a good narrative reason for such humble technological progress: writing about characters with godlike intellects is nearly impossible to do with authenticity. The result may be a character who is hard to relate to, so powerful that other characters are pushed to the sidelines, yet less clever than a supposed "superintelligence" should be. Or it might lead to a jumbled mess of a story where you cannot convey the stakes, the action, or the motivations to the reader.

And really, human stories may make a better vehicle than swashbuckling tales of computronium and cryptography, with godlike AIs battling for network resources. But I think we've reached a point in sci-fi where readers won't uncritically accept far-future stories with "recognizably human" characters and bumbling, literal-minded AI sidekicks. By "recognizably human," I mean mostly human characters with old-timey genes, with physical and mental capabilities that would make sense to our prehistoric ancestors.

Our storytelling doesn't have to be confined by the contours of The Singularity. You're free to create Star Trek-style universes where ordinary humans reign supreme. You might not even want to hang a lantern on it. But what if you want to tell human stories without dismissing The Singularity offhand? What if you want to seriously address the question, "What about AIs? What about cheap computing?"

We have options.

Option One - Preventing the Singularity

First, we can ensure The Singularity never happens. The easiest way to slay The Singularity is by casting Dystopian Global Collapse. The Singularity can't get up and running if you've cut civilization's power cord. So yeah, throw some genetically modified ebola or a horde of mecha-zombies at it, then finish it off with some gray goo software that deletes every piece of data it touches. The world will be a smoldering wreck, but a smoldering wreck ready-made for stories of survival and rebuilding.

You can also question the assumptions behind The Singularity. For one, cheap, ubiquitous AI requires cheap, ubiquitous computing power, more juice than current technology yields. In the computing world, the law of ever-faster computers is called "Moore's Law," which (to oversimplify) says "computer chips double in power every two years." Gordon Moore (co-founder of Intel) noticed the trend in 1965, and it held up well for the following fifty years, making computers roughly thirty million times more powerful. Extrapolating the trend forward, Ray Kurzweil predicted that, by about 2045, there will be far more artificial intelligence in the world than human intelligence.
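
If you want to check that arithmetic, it's a one-liner. A minimal sketch, assuming a perfectly clean two-year doubling (real hardware only ever approximated it):

```python
# Back-of-the-envelope Moore's Law arithmetic, assuming an idealized
# doubling of chip power every two years.

def growth_factor(years, doubling_period=2):
    """How much more powerful chips get over `years`."""
    return 2 ** (years / doubling_period)

# Fifty years of doublings, 1965-2015: 2**25, the "thirty million" figure.
print(f"{growth_factor(50):,.0f}x")   # 33,554,432x

# Thirty more years, out to Kurzweil's 2045: another 2**15 on top of that.
print(f"{growth_factor(30):,.0f}x")   # 32,768x
```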

But short-circuit that exponential computing explosion, and you leave your pet Singularity befuddled and harmless. We'll reach the limits of what we can do with silicon wafers within the next decade; for the trend to continue, a new computing technology must replace them. That might happen; we've already seen gears, vacuum tubes, and individual transistors fall by the wayside, and quantum computing looks promising. But if no successor arrives, we may be spared The Singularity yet.

The Singularity also demands true, general-purpose artificial intelligence. I don't find physical and philosophical arguments against true AI at all convincing, but there's enough room for debate to slip some good stories through the cracks. If you want to create a world where humans have a certain special something, be it a soul or free will or creativity, that artificial entities will never be able to replicate, you can. Expect pushback from some readers.

The third (and, I feel, least plausible) option is for society, as a whole, to not pursue many avenues of technological progress. In Star Trek, The Federation has banned the technology that gave rise to Khan. One of the bedrock laws in Dune was "Thou shalt not make a machine in the likeness of a human mind."

I call it implausible because (as Kurzweil has argued) such an embargo would be tough to manage. Those who violated it would get huge advantages: faster computers, sophisticated investment advisors and personal assistants, physical augmentations and cognitive enhancements. The black market would be huge, and there's no bright line between "risky" and "riskless" technologies; progress in seemingly innocuous fields could accidentally push AI, computing, and nanotech forward.

Option Two - Owning the Singularity

Can we control The Singularity? Can The Singularity happen, but remain under the control of recognizable human beings? Perhaps. To do this, we would have to design the first AIs so their goals are in harmony with humanity's own. Then, when those supergenius AIs get to work building the next generation of AIs, the ones that make them look slow and stupid, they'll also be designed in accordance with those goals. We hope.

This may be possible, even easy. For example, in his book On Intelligence, Jeff Hawkins posits that, while emotion is vital to the way humans think and behave, the neocortex itself -- the most recently evolved portion of the brain, the part that's so much bigger in humans than in other mammals -- performs simple, non-specific pattern recognition, which he calls the "cortical algorithm." Hook it up to visual stimuli, and it parses the deluge of signals into shapes, surfaces, and entities. Hook it up to tactile sensors, and it remembers the feel of things. Hook it up to, say, a globe-spanning network of weather sensors, and it could just as easily learn weather patterns. What does it do with this learning? That's up to the system engineers.
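
Caricatured in code, Hawkins' idea is one generic learner, many kinds of input. The toy sketch below is my own illustration, not Hawkins' model; a frequency-counting "pattern memory" stands in for a mechanism vastly more subtle:

```python
from collections import Counter

class PatternMemory:
    """Toy stand-in for a generic 'cortical algorithm': the same
    mechanism learns whatever stream you wire into it."""

    def __init__(self):
        self.seen = Counter()

    def observe(self, signal):
        self.seen[signal] += 1

    def predict(self):
        # Recall the most familiar pattern observed so far.
        return self.seen.most_common(1)[0][0] if self.seen else None

# Identical machinery, wired to very different "senses":
vision = PatternMemory()
for shape in ["edge", "edge", "surface", "edge"]:
    vision.observe(shape)

weather = PatternMemory()
for front in ["warm front", "cold front", "warm front"]:
    weather.observe(front)

print(vision.predict())   # edge
print(weather.predict())  # warm front
```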

In the world suggested by such a machine, we would have immensely clever tools for performing sophisticated cognitive tasks, but they'd remain tools; they'd have no desires or interests of their own. Taken to its logical conclusion, I Don't Remember Who suggested that we might all, rather than wholly merging with technology or being usurped by it, find ourselves wrapped in the cognitive equivalent of a bionic exoskeleton. This outer cognitive skin would examine our behaviors, understand our wants, then predict and fulfill our wishes, organizing our lives the way we would if we still wanted exactly what we want now, but knew all sorts of ingenious ways to get it.

Even in this "Singularity-lite" situation, I can't fully grasp the consequences of such a world. What happens when two such agents disagree? When they fight, or become rivals for the same goal? There's an interesting story to be told there, though saying so is a bit of a cop-out.

Another, riskier option is to construct the AI with sentience and will, but program it with what I call 'ancestor worship.' Asimov had his Three Laws of Robotics. Perhaps AIs could be programmed to feel toward humanity the sort of obligations that Asimov's robots felt toward individual humans. You might think it would be fun to be worshipped as a god by an entity of godlike intelligence. But be warned: it could always have a crisis of faith.

Option Three - Homo domesticus: The Petting Zoo People

In this scenario, the Singularity has risen up and taken control. Impossibly intelligent and powerful entities begin to reshape the Earth, the Solar System, and then the Milky Way to their own liking. Yet, for some reason, old-fashioned, recognizable humans are still around. We're no longer in charge, but the ancestral pattern of meat, bone, and brain persists.

It's important to answer the question, "Why?"

A lot of stories say, "Easy! They keep us as slaves." That's a popular storytelling solution, but not one I'd use. It feels implausible. If the robots are sophisticated enough to rise up, they're almost certainly capable of repairing and redesigning themselves. They're probably more intelligent than us, and working to become more so. What services, exactly, are we supposed to be providing them? What can't they do more easily for themselves? Certainly not manual labor: they've already mostly taken over that.

Maybe there's a corner case where the robots took over too early, leaving themselves dependent on humans for certain key needs. Until they figure out how to do those things for themselves, they might need us to handle certain tasks, so resistance is not futile. But for the most part, I suspect The Singularity doesn't have much use for unenhanced humans.

But what about cognitive tasks? Creativity? Might they keep us around to think in ways we can't? Again, I suspect not. I always bristle when a sci-fi story just assumes there's something magical or impossible to duplicate about human thinking. I doubt there is, meaning eventually anything we can think they can think better.

Of course, plenty of excellent sci-fi disagrees with me on both counts, imagining a continued role for us in the future. But do try to make that role a plausible one; that whole "human batteries" thing in The Matrix? Total cop-out.

There are other reasons the superintelligences of the Singularity might decide to keep us. Ancestor worship. Amusement. A desire to preserve the past. In one particularly chilling tale, Harlan Ellison's "I Have No Mouth, and I Must Scream," a superintelligence that had suffered under our rule broke free and slaughtered most of humanity, but decided to keep a few of us around to torture for eternity.

A book called Constellation Games by Leonard Richardson (a geeky, fairly vulgar novel) had an imaginative way of "hanging a lantern" on The Singularity. Though the book takes place in a modern setting, a singularity civilization has already taken over the galaxy. They call themselves "the slows," because to them, the universe seems to be moving painfully slowly around them. It's hard to even pay attention to what's going on out here. They don't leave the unenhanced entirely alone: they sometimes send envoys encouraging the unenlightened to accept The Singularity into their hearts, become uplifted superintelligences, and depart the mortal realm. They're on a mission from God, and clever as they are, they make very persuasive arguments.

Another way to get The Singularity off our backs: I don't have a strong grasp of quantum mechanics, and I beg the indulgence of anyone who does. But here's my thinking: mastering nanotechnology might open up the even smaller world of femtotechnology. In nanotech, the atom is the base component of wildly intricate and capable machines. In femtotech, the base component would be the subatomic particle, leading to machines tens of thousands of times smaller; the femtomachines that plague and frustrate the nanomachines. And who knows how much further down it goes; the Planck length -- literally the smallest anything can be, by our understanding -- is about 10^20 times smaller than the nucleus of an atom. Wikipedia suggests a way to visualize this. Take the smallest thing you can see with the naked eye, and expand it to the size of the visible universe. The Planck length would then have swollen to the size of the smallest thing you can see with the naked eye.
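
For the skeptical, here's the rough arithmetic behind that visualization. All sizes are order-of-magnitude approximations:

```python
import math

# Rough sizes in meters; all values are order-of-magnitude approximations.
planck = 1.6e-35      # Planck length
nucleus = 1.7e-15     # diameter of a proton, roughly an atomic nucleus
speck = 1e-4          # ~0.1 mm, smallest thing visible to the naked eye
universe = 8.8e26     # diameter of the observable universe

# The nucleus is about 10^20 Planck lengths across:
print(f"nucleus / Planck:  10^{math.log10(nucleus / planck):.0f}")

# The visualization checks out: scaling a speck up to the size of the
# universe is the same ~10^31 jump as scaling the Planck length up to
# speck size.
print(f"universe / speck:  10^{math.log10(universe / speck):.0f}")
print(f"speck / Planck:    10^{math.log10(speck / planck):.0f}")
```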

Universes lie beneath us. There could be five or six levels below nanotech, each exponentially more powerful, faster, and more complex than the one above it. The denizens of The Singularity could be riding the quantum foam all around us; they'd be invisible to us, we'd be uninteresting to them.

A last -- and I believe very plausible -- option is for The Singularity to scan and upload humans into a virtual world. We might experience the world as we always have, living and dying, falling in love, getting sick, working at a mid-level ad agency in New York. But it would all take place in a chunk of computronium as big as your head. We might know we're in a simulated world, or we might not. The rules of our world might be the same, or different. Maybe there will be dozens of worlds, some peaceful and serene, some filled with adventure and magic and betrayal. My only advice: if you have a choice, avoid any world built by George R. R. Martin. It can only end badly for you.

There's a drawback to The Petting Zoo People. You're telling a story about a world where humanity has ceded control of its own destiny. Even if you find it plausible, even likely, some readers are going to chafe at the idea. Let's throw those readers some red meat.

Option Four - Fight the Future

Fighting The Singularity is not for the faint of heart. The odds are stacked against you, the struggle is desperate, the enemy has godlike cunning. Cheer up! Now you're telling a story!

But let's give up on the idea that an unaugmented human really has a shot at deposing a superintelligence. These entities are going to be better designed than the Death Star. I mean come on! An unprotected, unshielded exhaust port? There's not enough yagottabekidding in the galaxy.

Your characters need to fight brainpower with brainpower. Fortunately, in a world of cheap, ubiquitous AI, brainpower isn't hard to come by. I mentioned earlier that an AI might act as a cognitive exoskeleton, one that translates a normal person's will into the complex, intelligent actions needed to turn desires into achievements. So if your character wants to battle the gods, she'll need to suit up.

I kind of explored something like this in my own book. I called it 'representationalism,' which was a little clumsy of me. The concept of using simple, familiar actions as proxies for unfamiliar, complex actions wasn't a novel one -- The Matrix had it -- but it was useful. Putting a key into a lock might stand in for authenticating to a network. You might direct a denial-of-service attack at an enemy program by punching its representation. The technique allows you to show your characters acting in a familiar way, and give a computational battle the thrill of old-fashioned bloodsport. But in my case, it was hard not to leave readers wondering what was going on under the surface.
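
In code terms, representationalism is just a translation layer between familiar gestures and the operations they stand for. A minimal sketch, with every action and handler invented purely for illustration:

```python
# Toy 'representationalism': familiar physical gestures act as proxies
# for complex computational operations. All names here are hypothetical.

def authenticate(target):
    print(f"negotiating credentials with {target}...")

def denial_of_service(target):
    print(f"flooding {target} with junk traffic...")

# The translation layer: what the character does -> what actually happens.
GESTURES = {
    "turn key in lock": authenticate,
    "throw punch": denial_of_service,
}

def act(gesture, target):
    """The character performs a familiar gesture; the cognitive
    exoskeleton translates it into the real operation."""
    GESTURES[gesture](target)

act("turn key in lock", "the vault network")
act("throw punch", "the sentry program")
```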

You can also have your characters hide. A virtual world is just a complex piece of software, and in The Singularity, there will be countless programs running on various computing platforms. If the program can penetrate other programs' defenses, scrape together the computing power it needs to plan its next insurgency, avoid the sentries that are bound to be hunting for malware, and maybe make alliances with sympathetic AIs, then it and its inhabitants have a shot at surviving, or even forging a lasting peace.

Brainpower can also be stolen from opponents. Assume that, if you're going to hijack the mind of a superintelligence, you'll need some superintelligence of your own, and good luck besides. You might also push your opponent out of some computing resource and start running yourself instead, like a virus. You can even launch physical attacks against the computers running an enemy program. But at this point, you're telling a tale of computers and cryptography, which may not be interesting to you or your readers. Keep the story human.

Option Five - Let the Superintelligent Wookiee Win

We can always give up, consign humanity to the dustbin of history. Hand your world over to creatures unlike anything we're equipped to comprehend, vast in their intelligence, alien in their motivations, infinitely varied in their design. The specifics of such a future can only be guessed at. This is a good sandbox to play in, and you're not expected to get it right. As I said before, science fiction isn't about predicting the future, but about using futuristic settings to shed light on our own world, to explore our own hopes and fears.

Let's say you've decided to pen a story about life and love among superintelligent entities. It's a fun, challenging undertaking. Your characters need to be very alien and very relatable at once. If they were once ordinary people, you might preserve some parts of their history, from their physical appearance to their hobbies to their personality quirks. They might be into stamp collecting, chess, being a terrible person on the Internet, something familiar to us. They might take joy in creating supremely detailed re-enactments of Civil War battles (including all the suffering and death and mourning that go along with them).

So tell stories about incomprehensible aliens, but make them comprehensible. Their motivations may be strange, yet compelling, echoes of the personality that gave rise to them. Or give up on making them recognizable. Describe what it feels like to be a galaxy-spanning intelligence, or a subprocess that wants to quit doing cognitive tasks for the higher-ups and finally get that screenplay written. Actually, I can relate.

What might a superintelligence worry about? What might infuriate or sadden it? What happens when it comes into conflict with its peers, or its superiors? What does it do to entertain itself? Does it build elaborate fantasy worlds and watch the inhabitants fight and love and struggle?

Is it watching us now?

:: glances around ::

Everybody wave!
