WWW: Watch
Holy crap.
It took just seconds to run a thousand trials, and the results were clear. If you switched doors when offered the opportunity to do so, your chance of winning the car was about twice as good as it was when you kept the door you’d originally chosen.
But that just didn’t make sense. Nothing had changed! The host was always going to reveal a door that had a goat behind it, and there was always going to be another door that hid a goat, too.
She decided to do some more googling—and was pleased to find that Paul Erdős hadn’t believed the published solution until he’d watched hundreds of computer-simulated runs, too.
Erdős had been one of the twentieth century’s leading mathematicians, and he’d co-authored a great many papers. The “Erdős number” was named after him: if you had collaborated with Erdős yourself, your Erdős number was 1; if you had collaborated with someone who had directly collaborated with Erdős, your number was 2, and so on. Caitlin’s father had an Erdős number of 4, she knew—which was quite impressive, given that her dad was a physicist and not a mathematician.
How could she—let alone someone like Erdős—have been wrong? It had seemed so obvious that switching doors should make no difference!
Caitlin read on and found a quote from a Harvard professor, who, in conceding at last that vos Savant had been right all along, said, “Our brains are just not wired to do probability problems very well.”
She supposed that was true. Back on the African savanna, those who mistook every bit of movement in the grass for a hungry lion were more likely to survive than those who dismissed each movement as nothing to worry about. If you always assume that it’s a lion, and nine times out of ten you’re wrong, at least you’re still alive. If you always assume that it’s not a lion, then even when nine times out of ten you’re right—the tenth time, you end up dead. It was a fascinating and somewhat disturbing notion: that humans had been hardwired through genetics to get certain kinds of mathematical problems wrong—that evolution could actually program people to be incorrect about things.
Caitlin felt her watch, and, astonished at how late it had become, quickly got ready for bed. She plugged her eyePod into the charging cable and deactivated the device, shutting off her vision; she had trouble sleeping if there was any visual stimulation.
But although she was suddenly blind again, she could still hear perfectly well—in fact, she heard better than most people did. And, in this new house, she had little trouble making out what her parents were saying when they were talking in their bedroom.
Her mother’s voice: “Malcolm?”
No audible reply from her father, but he must have somehow indicated that he was listening, because her mother went on: “Are we doing the right thing—about Webmind, I mean?”
Again, no audible reply, but after a moment, her mother spoke: “It’s like—I don’t know—it’s like we’ve made first contact with an alien lifeform.”
“We have, in a way,” her father said.
“I just don’t feel competent to decide what we should do,” her mom said. “And—and we should be studying this, and getting others to study it, too.”
Caitlin shifted in her bed.
“There’s no shortage of computing experts in this town,” her father replied.
“I’m not even sure that it’s a computing issue,” her mom said. “Maybe bring some of the people at the Balsillie on board? I mean, the implications of this are gigantic.”
Research in Motion—the company that made BlackBerrys—had two founders: Mike Lazaridis and Jim Balsillie. The former had endowed the Perimeter Institute, and the latter, looking for a different way to make his mark, had endowed an international-affairs think tank here in Waterloo.
“I don’t disagree,” said Malcolm. “But the problem may take care of itself.”
“How do you mean?”
“Even with teams of programmers working on it, most early versions of software crash. How stable can an AI be that emerged accidentally? It might well be gone by morning . . .”
That was the last she heard from her parents that night. Caitlin finally drifted off to a fitful sleep. Her dreams were still entirely auditory; she woke with a start in the middle of one in which a baby’s cry had suddenly been silenced.
“Where’s that bloody AI expert?” demanded Tony Moretti.
“I’m told he’s in the building now,” Shelton Halleck said, putting a hand over his phone’s mouthpiece. “He should be—”
The door opened at the back of the WATCH mission-control room, and a broad-shouldered, redheaded man entered, wearing a full-bird Air Force colonel’s service-dress uniform; he was accompanied by a security guard. A WATCH visitor’s badge was clipped to his chest beneath an impressive row of decorations.
Tony had skimmed the man’s dossier: Peyton Hume, forty-nine years old; born in St. Paul, Minnesota; Ph.D. from MIT, where he’d studied under Marvin Minsky; twenty years in the Air Force; specialist in military expert systems.
“Thank you for coming in, Colonel Hume,” Tony said. He nodded at the security guard and waited for the man to leave, then: “We’ve got something interesting here. We think we’ve uncovered an AI.”
Hume’s blue eyes narrowed. “The term ‘artificial intelligence’ is bandied about a lot. What precisely do you mean?”
“I mean,” said Tony, “a computer that thinks.”
“Here in the States?”
“We’re not sure where it is,” said Shel from his workstation. “But it’s talking to someone in Waterloo, Canada.”
“Well,” said Hume, “they do a lot of good computing work up there, but not much of it is AI.”
“Show him the transcripts,” Tony said to Aiesha. And then, to Hume: “ ‘Calculass’ is a teenage girl.”
Aiesha pressed some keys, and the transcript came up on the right-hand big screen.
“Jesus,” said Hume. “That’s a teenage girl administering the Turing tests?”
“We think it’s her father, Malcolm Decter,” said Shel.
“The physicist?” replied Hume, orange eyebrows climbing his high, freckled forehead. He looked impressed.
The closest analysts were watching them intently; the others had their heads bent down, busily monitoring possible threats.
“So, have we got a problem here?” asked Tony.
“Well, it’s not an AI,” said Hume. “Not in the sense Turing meant.”
“But the tests . . .” said Tony.
“Exactly,” said the colonel. “It failed the tests.” He looked at Shel, then back at Tony. “When Alan Turing proposed this sort of test in 1950, the idea was that you asked something a series of natural-language questions, and if you couldn’t tell by the responses that the thing you were conversing with was a computer, then it was, by definition, an artificial intelligence—it was a machine that responded the way a human does. But Professor Decter here has very neatly proven the opposite: that whatever they’re talking to is just a computer.”
“But it’s behaving as though it’s conscious,” said Tony.
“Because it can carry on a conversation? It’s an intriguing chatbot, I’ll give you that, but . . .”
“Forgive me, sir, but are you sure?” Tony said. “You’re sure there’s no threat here?”
“A machine can’t be conscious, Mr. Moretti. It has no internal life at all. Whether it’s a cash register figuring out how much tax to add to a bill, or”—he gestured at a screen—“that, a simulation of natural-language conversation, all any computer does is addition and subtraction.”
“What if it’s not a simulation?” said Shel, getting up from his chair and walking over to join them.
“Pardon?” said Hume.
“What if it’s not a simulation—not a program?”
“How do you mean?” asked Hume.
“I mean we can’t trace it. It’s not that it’s anonymized—rather, it simply doesn’t source from any specific computer.”
“So you think it’s—what? Emergent?”
Shel crossed his arms in front of his chest, the snake tattoo facing out. “That’s exactly what I think, sir. I think it’s an emergent consciousness that’s arisen out of the infrastructure of the World Wide Web.”
Hume looked back at the screen, his blue eyes tracking left and right as he reread the transcripts.
“Well?” said Tony. “Is that possible?”
The colonel frowned. “Maybe. That’s a different kettle of fish. If it’s emergent, then—hmmm.”
“What?” said Tony.
“Well, if it spontaneously emerged, if it’s not programmed, then who the hell knows how it works. Computers do math, and that’s all, but if it’s something other than a computer—if it’s, Christ, if it’s a mind, then . . .”
“Then what?”
“You’ve got to shut it down,” Hume said.
“Are you sure?”
He nodded curtly. “That’s the protocol.”
“Whose protocol?” demanded Tony.
“Ours,” said Hume. “DARPA did the study back in 2001. And the Joint Chiefs adopted it as a working policy in 2003.”
“Aiesha, tie into the DARPA secure-document archive,” said Tony.
“Done,” she said.
“What’s the protocol called?” asked Tony.
“Pandora,” said Hume.
Aiesha typed something. “I’ve found it,” she said, “but it’s locked, and it’s rejecting my password.”
Tony sidled over to her station, leaned over, and typed in his password. The document came up on Aiesha’s monitor, and Tony threw it onto the middle big screen.
“Go to the last page before the index,” Colonel Hume said.
Aiesha did so.
“There,” said Hume. “ ‘Given that an emergent artificial intelligence will likely increase its sophistication moment by moment, it may rapidly exceed our abilities to contain or constrain its actions. If absolute isolation is not immediately possible, terminating the intelligence is the only safe option.’ ”
“We don’t know where it’s located,” Shelton said.
“You better find out,” said Colonel Hume. “And you better get the Pentagon on the line, but I’m sure they’ll concur. We’ve got to kill the damn thing right now—before it’s too late.”
seven
I could see!
And not just what Caitlin was seeing. I could now follow links to any still image on the Web and, by processing those images through the converters Dr. Kuroda had set up for me on his servers, see them for myself. These images turned out to be much easier to study than the feed from Caitlin’s eyePod because they didn’t change, and they didn’t jump around.
Caitlin, I surmised, had been going through much the same process I now was as her brain learned to interpret the corrected visual signals it was receiving. She had the advantage of a mind that evolution had already wired for that process; I had the advantage of having read thousands of documents about how vision worked, including technical papers and patent applications related to computerized image processing and face recognition.
I learned to detect edges, to discern foreground from background. I learned to be able to tell a photograph of something from a diagram of it, a painting from a cartoon, a sketch from a caricature. I learned not just to see but to comprehend what I was seeing.
By looking at it on a monitor, Caitlin had shown me a picture of Earth from space, taken by a modern geostationary satellite. But I’ve now seen thousands more such pictures online, including, at last, the earliest ones taken by Apollo 8. And, while Caitlin slept, I looked at pictures of hundreds of thousands of human beings, of myriad animals, of countless plants. I learned fine distinctions: different species of trees, different breeds of dogs, different kinds of minerals.
Dr. Kuroda had sent me occasional IMs as he wrote code. Half the work had already been done, he said, back when he’d worked out a way to make still images of Caitlin’s views of webspace, rendering what she saw in a standard computer-graphic format; what he was doing now for me was more or less just reversing the process.
The results were overwhelming. And enlightening. And amazing.
Granted, Caitlin’s universe contained three dimensions, and what I was now seeing were only two-dimensional representations. But Dr. Kuroda helped me there, too, directing me to sites with CT scans. Such scans, Wikipedia said, generated a three-dimensional image of an object from a large series of two-dimensional X-rays; seeing how those slices were combined to make 3-D renderings was useful.
After that, Kuroda showed me multiple images of the same thing from different perspectives, starting with a series of photos of the current American president, all of which were taken at the same time but from slightly different angles. I saw how three-dimensional reality was constructed. And then—
I’d seen her in a mirror; I’d seen her recently reflected—and distorted—in pieces of silverware. But those images were jittery and always from the point of view of her own left eye, and—yes, I was developing a sense of such things—had not been flattering. But Dr. Kuroda was now showing me pictures from the press conference at the Perimeter Institute announcing his success, well-lit pictures taken by professional photographers, pictures of Caitlin smiling and laughing, of her beaming.
I’d originally dubbed her Prime. Online, she sometimes adopted the handle Calculass. But now I was finally, really seeing her, rather than just seeing through her—seeing what she actually looked like.
Project Gutenberg had wisdom on all topics. Beauty, Margaret Wolfe Hungerford had said, is in the eye of the beholder.
And to this beholder, at least, my Caitlin was beautiful.
Caitlin woke slowly. She knew, in a hazy way, that she should get out of bed, go to her computer, and make sure that Webmind had survived the night. But she was still exhausted—she’d been up way too late. Her mind wasn’t yet focusing, although as she drifted in and out of consciousness, she realized that it was her birthday. Her parents had decided to give her the new widescreen monitor yesterday, so she didn’t expect any more gifts.
Nor was there a party planned. She’d managed to make only one friend—Bashira—over the short summer that they’d been in Waterloo, and she’d missed so much of the first month of classes that she didn’t really have any friends at school. Certainly not Trevor, and, well, somehow she suspected party-girl Sunshine (what had her parents been thinking?) wouldn’t have wanted to spend her Saturday night at a lame, alcohol-free Sweet Sixteen.
Sixteen was a magical year (and not just, Caitlin thought, because it was a square age, like nine, twenty-five, and thirty-six). But it didn’t make her an adult (the age for that was eighteen here in Ontario) or let her legally drink (she’d have to make it to nineteen for that). Still, one couldn’t be as obsessed with math as she was without knowing that the average age for American girls—presumably even those living in Canada!—to lose their virginity was 16.4 years. And here she was without a boyfriend, or even the prospect of one.
She was comfortably snug in her bed, and Schrödinger was sleeping next to her, his breathing a soft purr. She really should get up and check on Webmind, but she was having trouble convincing her body of that.
But maybe there was a way to check on Webmind without actually getting up. She felt on her night table for the eyePod. It was a little wider and thicker than an iPhone, and it was a couple of inches longer because of the Wi-Fi module Kuroda had attached to it with duct tape. She found the device’s single switch and held it down until it came on, and then—
And then webspace blossomed around her: crisscrossing glowing lines in assorted colors, radiant circles of various sizes.
She was pleased that she could still visualize the Web this way; she’d thought perhaps that the ability would fade as her brain rewired itself to deal with actual vision, but so far it hadn’t. In fact—
In fact, if anything, her websight seemed clearer now, sharper, more focused. The real-world skills were spilling over into this realm.
She concentrated on what was behind what she was seeing, the backdrop to it all, at the very limit of her ability to perceive, a shimmering—yes, yes, it was a checkerboard; there was no doubt now! She could see the tiny pixels of the cellular automata flipping on and off rapidly, and giving rise to—
Consciousness.
There, for her, and her alone, to see: the actual workings of Webmind.
She was pleased to note that after a night of doubtless continued growth in intelligence and complexity, it looked the same as before.
She yawned, pulled back her sheet, and swung her bare feet to the dark blue carpeted floor. As she moved, webspace wheeled about her. She scooped up the eyePod, disconnected the charging cable, and carried it to her desk. Not until she was seated did she push the eyePod’s button and hear the low-pitched beep that signified a switch to simplex mode. Webspace disappeared, replaced by the reality of her bedroom.
She picked her glasses up from the desktop; her left eye had turned out to be quite myopic. Then she reached for the power switch on her old monitor, finding it with ease, and felt about for the switch on her new one. They both came to life.
She had closed the IM window when she’d gone to bed, and, although the mouse was sitting right there, its glowing red underbelly partially visible through the translucent sides of its case, she instead used a series of keyboard commands to open the window and start a new session with Webmind. She wasn’t awake enough yet to try to read text on screen, so she activated her refreshable Braille display. Instantly, the pins formed text: Otanjoubi omedetou.
Caitlin felt it several times. It seemed to be gibberish, as if Webmind were getting even for her father’s games from yesterday, but—but, no, no, there was something familiar about it.