Writing Excuses 9.4: Artificial Intelligence with Nancy Fulda
Nancy Fulda, herself a lettered student of artificial intelligence, joins us to talk about writing artificial intelligence believably. We fire questions at her so that you don’t have to!
We talk about what’s current, what’s coming, and what it is that we’re all expecting. We also cover some of the things that writers get wrong (at least insofar as they knock the cognoscenti out of the story).
Liner Notes: Here’s the article Howard mentioned, “Evolving a Conscious Machine,” from the June 1998 Discover. He got the details almost 100% wrong, but the gist of it was still there.
Homework: Go to the Internet and look up Bayesian learning, neural networks, and genetic algorithms. Yes, it’s more of a reading prompt.
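For a small taste of the neural-network part of the homework, here is a minimal sketch (not from the episode, just an illustration): a single perceptron, the simplest neural-network unit, learning the logical AND function by nudging its weights whenever it guesses wrong.

```python
# A single perceptron trained on logical AND with the classic
# error-correction rule: adjust weights in the direction that
# would have reduced the mistake.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

AND is linearly separable, so a single perceptron can learn it; XOR famously cannot be learned this way, which is one reason multi-layer networks exist.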
Thing of the week: Rainbows End, by Vernor Vinge, narrated by Eric Conger (note: Howard got this wrong — no apostrophe at all! And yes, a lantern got hung upon that particular missing bit of punctuation).
Powered by RedCircle
Transcript
Key Points: The social definition of artificial intelligence is things computers can’t do yet, so it is a moving target. But as soon as you know how it is done, it doesn’t seem magical any more. “True” artificial intelligence may depend on having the right hardware. What is the line between AI and uplift? Self-awareness and personality? But how do you judge from the outside? AIs as gods or AIs as anthropomorphic?
[12 seconds of silence]
[Brandon] This is Writing Excuses, artificial intelligence with Nancy Fulda.
[Howard] 15 minutes long, because you’re in a hurry.
[Mary] And we’re not that smart.
[Brandon] I’m Brandon.
[Pause]
[Mary] I’m Mary.
[Howard] [laughter] I’m Howard.
[Brandon] And the part of Dan this week will be played by a kitten slowly being uplifted.
[Meow. Meow. Meow. Meow. E equals MC squared]
[laughter]
[Brandon] So. Nancy, you have a little bit of experience with artificial intelligence.
[Nancy] Yeah. About two years of experience. In fact, my Masters degree was in computer science, and my research area was artificial intelligence. Specifically, cooperative learning agents. So the question was, if you have one little artificially intelligent bot and another little artificially intelligent bot, but you don’t allow them to directly communicate with each other, you force them to be somewhat like human beings who have to infer internal states based on external information like speech or other signals, how do you get those two agents to cooperate to perform a reinforcement task?
[Brandon] Wow.
[Nancy] It turns out to be a very, very difficult problem.
[Howard] I’m sorry, that sounds… That actually sounds like a parenting problem.
[Laughter]
[Nancy] Actually, I have written blog posts about how extremely pertinent my research area was to the act of attempting to raise small children.
[Howard] Oh, wow.
[Brandon] Now, we’re going to probably throw just a bunch of questions at you.
[Mary] Yeah. [Garbled – on this head?]
[Nancy] Sure.
[Brandon] To make a resource for our listeners who may be dealing with artificial intelligence. The first one that as a layman I ask is how close are we?
[Nancy] You know, artificial intelligence is a moving target. By social definition, we tend to define artificial intelligence as things computers can’t do yet. So if you go back about 50 years, the epitome of artificial intelligence was chess playing computers. A very complex task, something difficult, that people thought computers couldn’t do. Of course, we’ve now gone through Deep Blue and a bunch of other situations. Chess is no longer considered a true artificial intelligence. Because a brute force… Computers brute force their way through it instead of working as a natural human chessmaster. When I say no longer considered, I mean in the popular perception, right? Like most people wouldn’t say that a chess playing computer… At least it’s my perception that most people wouldn’t say that a chess playing computer is actually artificially sentient, actually thinking or anything like that. Nevertheless, if you go back far enough in the timeline, people would have said that a computer that could play chess would be. So how do…
[Brandon] Well, if you go far enough back in the timeline, they would be worshiping the talking metal box.
[Nancy] Yeah, that’s right. So asking how far away we are is really useless unless you actually have a firm definition of what task is considered artificial intelligence.
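The “brute force” approach Nancy mentions for chess can be sketched on a toy game small enough to search completely. This is not a chess engine, just an illustration of exhaustive game-tree search: single-pile Nim, where players alternate taking 1–3 stones and whoever takes the last stone wins.

```python
from functools import lru_cache

# Exhaustive game-tree search ("brute force") for single-pile Nim.
# Returns True if the player about to move can force a win.
@lru_cache(maxsize=None)
def player_to_move_wins(stones):
    if stones == 0:
        return False  # previous player took the last stone and already won
    # A position is winning if ANY move leaves the opponent in a losing one.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

The search rediscovers the known theory of this game (positions that are multiples of 4 are losing) without any chess-master-style intuition, which is exactly the distinction being drawn in the conversation.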
[Howard] The Turing test got bandied about a lot during the 80s and 90s when I was reading about these sorts of things. I think it was 2002 when someone took the old Eliza engine, updated the vocabulary, plugged it into AOL Instant Messenger, and told it to go chat with people.
[Nancy] I remember that.
[Howard] Yeah. According to the article, most people had complete conversations with this with no idea that they were talking to a robot. Which passes the Turing test.
[Nancy] Well, as far as I understand, the one thing that they did… At least, I read about the ICQ chat one. They updated the vocabulary to include a lot of profanity and derogatory terms. Which demonstrates that as soon as people start getting insulting on the Internet… Like people would have huge arguments with this thing, and it would call them names back, and they would call it names.
[Laughter]
[Nancy] I find myself very curious whether the one that you saw, Howard…
[Howard] It’s the…
[Nancy] Whether they were using the profanity or whether it was actually passing as a human with people who weren’t…
[Howard] It’s the…
[Nancy] All up on the red scale of anger.
[Howard] It’s the Turing Godwin test.
[Laughter]
[Howard] Oh, my goodness.
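Eliza-style bots like the one Howard and Nancy describe work by shallow keyword pattern matching, not understanding. A minimal sketch of the idea (the rules here are hypothetical, not Eliza’s actual script):

```python
import random
import re

# Hypothetical Eliza-style rules: match a keyword pattern, then echo
# part of the user's input back inside a canned template.
RULES = [
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I),
     ["Tell me more about feeling {0}."]),
]
DEFAULT = ["I see.", "Please go on."]

def respond(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(DEFAULT)
```

Because the bot only reflects the user’s own words back, people readily project a mind onto it, which is why angry arguments with one can pass for conversation.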
[Brandon] Oh, wow. So the question can’t be how close are we. What I want… I mean, if you want a definition, I want to know when we can have Data.
[Nancy] When we can have Data…
[Brandon] That’s what I want. When do we get Data?
[Nancy] When the positronic systems have been developed. We need positronic hardware. It is… In many senses… Okay, this is what I can see…
[Howard] When can we have Jarvis? Jarvis, in Tony Stark’s Iron Man suit? That’s the personal assistant that he just talks to and the personal assistant does stuff and the personal assistant usually knows when to interrupt him and when not to.
[Mary] John Scalzi, I think, tweeted this at one point, that he knew that we were living in the future when he got annoyed with his phone because it didn’t understand him…
[Laughter]
[Mary] And he expected it to. So we’re all… I mean, it’s one of the things that Nancy’s talking about, about the moving target. We already have a lot of things that are artificial intelligence systems, but because they are… It’s like, “Oh, well, that’s just a computer making choices from…” Like the telephone trees that everybody hates so much.
[Nancy] Yeah. It’s very much like stage magicians. As soon as you know how it’s done, it does not seem magical anymore. Artificial intelligence is very much the same way. If you understand that you’re talking to a phone system that’s running through a fairly simple tree of questions and answers, suddenly it does not seem magical anymore. Although someone who did not… Had never spoken to an artificial phone system would probably interpret that a lot differently because of how much they would project their own identities onto the technology that they’re interacting with. But hardware turns out to be really important in systems like this. Because seriously, it comes across like a joke, but if you don’t have a positronic brain that can do the things that Data’s positronic brain can do, you’re going to have a really hard time creating a Data. One of my pet theories about true… I’m making air quotes, but you can’t see… True…
[Howard] They can hear them.
[Nancy] artificial intelligence [chuckle] that’s what I thought. Is that it’s probably not going to be based on electronics. If you look through the literature… I did a literature search on this once, it’s fascinating. If you go back to like the 1950s and 60s, the artificially intelligent systems then were atomic powered. If you go back even further, you find stories about steam powered artificial intelligences. If you go back even further then of course it was statues or clockwork mechanisms. So apparently when humanity rights stories about artificial intelligence, we tend to project the current most modern technology known as the likely hardware for artificial intelligence to evolve upon. I’m thinking we don’t have the right hardware yet. So if artificial intelligence ever really happens, it will probably be based on a system of mechanics or manipulation – data manipulation that we don’t yet have.
[Brandon] That’s…
[Howard] Greg Bear’s Blood Music from the 80s actually postulates the accidental development of artificial intelligence in genetically modified white blood cells.
[Nancy] Cool.
[Howard] Very catastrophic when they start colonizing. But that was the… How then is artificial intelligence different from uplift? Where do you draw the line between “Well, we uplifted the white blood cells to become a brain…” I have no idea where that line is.
[Mary] Yeah. When I deal with it in my own fiction, I divide the line… And it’s completely arbitrary, but I say that I have artificial savants, which are basically the artificial intelligences that we have today…
[Nancy] Siri?
[Mary] Yeah, Siri. And artificial intelligence which is something that’s self-aware and with a personality. But even that… Like how do you judge from the outside when something is self-aware and has a personality versus when it’s just going through the motions?
[Brandon] Right. And how much of what we do is based on our programming?
[Mary] Exactly.
[Brandon] I mean, where is that line? This is what science fiction explores.
[Brandon] Let’s stop for our book of the week, and then we’ll dig into this some more. Howard. You were going to give us our book of the week.
[Howard] I certainly was. And then I changed my mind. No, I still am. Rainbows End… And that’s rainbows with the apostrophe after the s, which is actually a plot point in the story. Rainbows End by Vernor Vinge is a near future science fiction about… Well, among other things, about a horrible man who developed Alzheimer’s late in life, none of his family members like him, he’s been in a home, he’s been decrepit, and right about the time he would die, the technological pieces come together to cure his Alzheimer’s, to restore physical function, and he finds himself in a 25-year Buck Rogers sort of scenario where he is waking up to a future that’s very, very different from the one that he last remembered. It is near future sci-fi that addresses all of these things. It talks about the evolution of things like Siri, about artificial intelligence, about the ubiquity of the electronic devices that we carry. Nowadays you might say… Ask people, “Hey, do you have a smart phone?” Wave at somebody, “Do you have the Internet on you?” The term in this book is, “Are you wearing?” Because your clothing would be smart. Everybody is… Everybody’s wearing. Why aren’t you wearing? What’s wrong with you? Anyway, it’s a wonderful book. Written in 2006. I was very worried that as somebody who is from 2013, I would look back at it and it would not have held up well. It’s held up really, really well. If this is the sort of thing that you’re interested in, I strongly recommend reading Rainbows End by Vernor Vinge, or better yet, having it read to you. Audiblepodcast.com/excuse. Start a 30-day free trial membership and you can get Rainbows End for free.
[Brandon] Vernor Vinge is one of the best writers of science fiction, just hands-down, in my opinion, out there or ever to have been out there.
[Howard] I met him at Conjecture and we had some fun and amazingly fascinating sorts of conversations. I had to shake my fist at him at one point because I said, “I’m writing science fiction that’s set a thousand years in the future, and it’s a future that looks a lot like today because I’m writing satire. Regularly, readers of mine will say, ‘This doesn’t really seem realistic. I mean, you’ve read Vernor Vinge, haven’t you? I mean, what about the singularity?’ Mr. Vinge, you’ve ruined it for all of us.”
[Laughter]
[Brandon] Okay. So, speaking about things like that, I have a question. Are there things that people do with artificial intelligence that as an expert in the field, Nancy, bother you? Are there pitfalls our writers can fall into, and advice you can give them on how to stay away from that?
[Nancy] Oh, that’s a big one. Put me on the spot.
[Brandon] Okay. If you want me…
[Mary] Can I just…
[Brandon] Mary had something, and you can think for a minute.
[Nancy] All right. That’s good.
[Mary] [garbled – but this?] Because this will narrow it down. So one thing that I see people do and some people complain about is the anthropomorphizing of the AIs. One argument goes that AIs will be so vastly different from us that presenting them as human-like in any way, shape, or form is unrealistic. The other school of thought, and I will admit that I am in this camp, is that any system that is intelligent enough to know that it is interacting with humans, will model itself around interacting with humans, and we anthropomorphize toasters and cars. So anthropomorphizing AI in the future seems like that’s just how things would work. Do you have a sense on whether or not… Like how the anthropomorphizing of AI would play out if we actually get something that’s smarter than theory?
[Nancy] I think it would end up being a very… Very similar to the way we interact with people. I don’t know how it is for most people, but for me as a teenager, I had very clear ideas about what was going on in everybody else’s head. I always understood what they really meant. Always. It was magic. The older I got, the more I became aware that there was more than one possible internal state which could be resulting in the actions that I was observing. The older I’ve become, the less confident I feel that I really know what’s going on inside the people I talked to at all. I think artificial intelligence and people’s interactions and anthropomorphizations… I’m sorry, I can’t…
[Howard] That’s one of those words that just grows too long.
[Nancy] I can spell it. I can spell it. I believe that in the early stages, people will think that they understand. Then it will be a process very much like getting to know a person. As you get to know… As you get familiar with artificial intelligence… Artificially intelligent systems, you will begin to realize that there’s a lot more happening there than your anthropomorphization allowed.
[Brandon] I have something that I can say on this that actually is a compliment to Howard. A while back, I was reading Howard’s comic, and he has a lot of AIs. They’re part of the whole story. They are just vastly superior intellect-wise to human beings.
[Nancy] I love Petey.
[Brandon] It’s always been this kind of interesting contrast when they are so smart and yet they act like supersmart people in a lot of ways. I’m like, “Is this how it would really be?” I was interacting with one of my really smart friends who will remain unnamed. But we’re talking genius-level IQ. I noticed that this friend, in talking to me and other normal people, talked like a normal person. He had learned to pass, so to speak. When he interacted with someone else of hyper intelligence, suddenly the conversation ratcheted up, they started talking faster, they stopped finishing sentences, because they saw that the other person understood. They started making really oblique references that were one word, and both laughing. It was like watching people starting to speak German or something like this. I thought, he has learned to underclock himself when interacting with other people. This is exactly what Howard’s AIs do. Howard got it right.
[Chuckles]
[Howard] Well, I made that joke fairly recently, where Para Ventura appears inside of Ennesby’s head. She’s trying to… Trying to give him therapy because he’s got… He’s got a mental issue. He looks at her and says, “How are you even doing that? This environment moves a 1000 times faster than… My clock speed ticks thousands of times faster than…” Para holds up a broken clock and says, “You mean this clock? Welcome to the glacier of meat think.”
[Laughter]
[Howard] What she has done is, she has messed with his hardware and forced him to slow down. I remember looking at it and thinking, “Wow. An AI who is insane. The insanity is progressively degrading his function. How would you cure that? Well, the first thing I would do is ice him. Find a way to slow him down so he’s not thinking himself into a hole.” Did I get that right? I have no idea. I have absolutely no idea because I don’t know what their brains are made of. But I’m pretending that the clock is a thing… Although I am… I’m actually considering footnoting that and saying, “Okay. I know Para is holding a clock because that’s a convenient metaphor. There isn’t a single clock. There’s actually 63 different systems that Para had to tune in order to make this work. Because she’s really good at it, she got it right.” Then I have to find a way to write that footnote so it’s funny and not angry at my readers.
[Nancy] Great, Howard. Well, if artificial intelligence is based on electronics, I think it’s fairly safe to say that it will probably have at least one clock. Because I can’t figure out any way to make electronics work without a clock. So I think you got it right.
[Howard] There was a project done… Gosh, this is 15 years old. Where they built some very small, like four chip, little chips like the old TI 555 timers, very small computers that were hooked up to something that allowed them to program themselves. Okay? I say program themselves… A computer was programming this chip to perform mathematical operations. They ran evolutionary algorithms, where they said, “Okay. Mess with it randomly. If you come up with something that does it faster, great. If it does it slower, that’s okay. If it doesn’t solve the problem at all, throw it out.” What they found, after millions and millions and millions of iterations, is that the little timer chips were solving the mathematical operations in fewer steps than was electronically possible via the logic gates they had access to.
[Hum.]
[Howard] Okay? It was impossible for a human being to program that chip. So they took the code on one chip and tried to run it on another chip. It failed to do any operation at all. The evolutionary programming had determined that it’s not just logic gates… AND, OR, ON, OFF, whatever. There are voltage states in between these switches that vary per chip, and the evolutionary learning algorithm figured out how to exploit that. So when you say, “Well, it might not be electronics,” I think, “It might be electronics, but it might be so hardware specific that if it breaks, it is absolutely impossible for us to fix it. We just have to throw the brain away.”
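The selection-and-mutation loop Howard describes can be sketched in a few lines. This is not the hardware experiment (which evolved real chips); it is the same evolutionary idea on a toy software problem, maximizing the number of 1-bits in a string (often called “OneMax”): mutate candidates at random, keep whatever scores better, repeat.

```python
import random

# A bare-bones evolutionary loop: random mutation plus
# survival-of-the-fittest selection, on the toy "OneMax" problem
# (fitness = number of 1-bits in the candidate).
def evolve(length=32, population=20, generations=200, seed=0):
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]   # selection: keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1  # mutation: flip one random bit
            children.append(child)
        pop = survivors + children           # survivors persist (elitism)
    return max(pop, key=fitness)

best = evolve()
```

Because the best candidates are never discarded, fitness only ever improves, and the loop climbs steadily toward the all-ones string without anyone having “programmed” a solution directly.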
[Brandon] That’s a fantastic idea though. That’s like…
[Mary] That’s the way the AI system in my university is built.
[Howard] Yup.
[Mary] It has to be on a chassis, and it has to be on a chassis specific to that model.
[Brandon] That’s… I mean…
[Howard] And now you know why.
[Mary] I knew why. But thank you.
[Howard] Now everybody else knows why.
[Mary] I can footnote it.
[Nancy] It’s interesting that you brought up…
[Laughter]
[Nancy] It’s interesting that you brought up evolutionary and learning algorithms, because this may actually be interesting for our listeners to talk about in the realm of electronics. What are the basic…
[Brandon] Well, we’re running a little low on time…
[Nancy] Oh, darn it.
[Brandon] We can give you like some final words. We’ve got like 30 seconds left.
[Nancy] Okay. Go to the Internet. Look up Bayesian learning, neural networks, and genetic algorithms.
[Mary] Those words will all be correctly spelled in the liner notes.
[Brandon] Yes.
[Howard] So now Nancy has given you pretty much a writing prompt.
[Brandon] Yes. All right. We’ll take that as a writing prompt.
[Howard] And has given me a writing prompt which is to figure out how to spell those.
[Brandon] This has been Writing Excuses. You’re out of excuses, now go write.