Fifteen minutes long, because you're in a hurry, and we're not that smart.

Writing Excuses 8.14: Brainstorming with Brandon

As if he needs the help, Brandon challenges Mary, Howard, and Dan to help him brainstorm an A.I. short story. Brandon hands them some setup, and off they go. The ground may have been well-trodden in the past, but this particular brainstorming session is full of great ideas that incorporate religion, cargo cults, puzzles, and aliens…

The big challenge here is finding a tale that’s interesting enough and original enough to be worth the telling…

Mary’s Hugo-nominated novella: “Kiss Me Twice,” which appeared in Asimov’s.

Homework: Come up with a better resolution for this story than we did.

Thing of the week: Dragonsinger: Harper Hall Trilogy Volume 2, by Anne McCaffrey, narrated by Sally Darling.


Transcript

As transcribed by Mike Barker

Key Points: Don’t be afraid to write a story about something that others have written about. How long a story do you want to write? What perspective do you want to write from? Puzzles, characters, resolutions can help set the story. Also consider big story, are there other elements to mix in? Beware of endings that simply raise bigger questions. To write a short story, keep the POVs and characters limited. Make it small and personal. Do consider what others have written in this area, and think about how to make your story original. Keep track of your concept, your premise! Look at how the environment can be misused or manipulated. Figure out who has the most at stake.

[Mary] Season eight, episode 14.
[Brandon] This is Writing Excuses, Brainstorming with Brandon!
[Howard] 15 minutes long.
[Mary] Because you’re in a hurry.
[Dan] And no one’s that smart.
[Brandon] I’m Brandon.
[Dan] I’m Dan.
[Mary] I’m Mary.
[Howard] No one.

[Brandon] So we’ve done this so far with Dan and Mary, so we’re going to do it with me. We’re going to see if the team can give me a story to write. I had one concept. This is actually going to be the hardest of them, because I don’t have context in the same way that Dan did, or a prebuilt sort of tone like Mary had. All I have is one idea. I was reading Mary’s Hugo nominated… Perhaps having won already?
[Mary] No. Nominated.
[Brandon] Hugo nominated story, and she does some cool things with AIs. I like AIs and thought what she was doing was really cool. I wanted to say, “I want to write my own AI story.” But as I thought about it, a lot of science fiction is about the robots or the AIs taking over. I started to really kind of go down those lines. Would they really want to take over? It occurred to me that an advanced AI with all this computing power that they have probably would have no interest in the physical world. Because they can create a reality of their own as complex, and to them, it’s all numbers, it’s all ones and zeros, and they’re not quote unquote real, but… So the world they create is equally real to them, and we’re not really real, because to them we’re just code of a different sort. So why would they be any more interested in us than a projection world that they create for themselves? If they want to rule the world, they create themselves an entire world and then go be ruler of that world, it’s equally real. So this story would somehow deal with that, and perhaps with the idea of trying to coax the AIs into giving us processing power in exchange for something. They are like godlike things. We create these AIs, and in return, they’re like willing to give us some processing power. Or but we have to like let them stay around. Or maybe like nine out of 10 or one… 99 out of 100 just ignore us as soon as they’re created, but that’s like the gamble we take, and our treaty with them is, “Okay, if we create an AI, there’s only a 1% chance it will do anything for us. There’s a 99% chance it will say yeah, I’m just going to go create my own reality.”

[Howard] Let me grill you a little bit. First of all, caveat, you are treading ground that has been tread a lot before. Without going into detail, a lot. That’s fine. It’s fun to write these sorts of things. Question. Do you want… Well, how long are we… How long do you want to be? You want this…
[Brandon] Short story.
[Howard] Short story.
[Brandon] Yeah. We’re looking at a short story.
[Howard] Okay. Short story. Do you want to write this from the perspective of the humans or the AIs?
[Brandon] I don’t know.
[Howard] Okay. When you described the AIs can build their own worlds… That’s “why do I need your resources, I’ve got all of mine?” The AIs who get unleashed on the Internet… Perhaps for them, there’s recognition that resources are finite. They can create virtual worlds for themselves, but they have to do things for us or we will go turn off the CPUs that they infect. Maybe the equivalent of an AI environmentalist who is saying, “Stop spreading your little sub processes and building your little crystal sandcastle whatevers, because you’re screwing up the human Internet and the more you screw it up, the more likely it is that they’ll notice us.”
[Brandon] Okay.
[Dan] That’s kind of cool. I was thinking a different route of kind of a real estate deal. Where they have their own realms of like you’re talking about, and we need to use them because we need to process something. We need to use our computer space. So we have to make some sort of real estate deal with them, essentially. Maybe trading access to the Internet or something like that.
[Mary] I was thinking of something where this was already established, that we’ve worked out those deals already. It’s established that a certain number of AI are just going to be like, “Lalala, I’m going to go off and play.” Assuming that if we’ve got AI, that we’re far enough in the future that processing power is not a big deal. Although I realize that we will always push the limits of it. But I was… We actually talked a little bit about this last night, but I was thinking that maybe a character… A place to go would be that there is kind of a religion that is built up around this, because you have to entice… It’s like praying to the gods… It’s like when you’re trying to coax your smart phone to make connections. It’s that, except that you’re trying to get an intelligent being to come down and answer your phreaking question.
[Dan] Okay. So it’s like, “Siri, great and holy one, let me offer unto you this thing so that you will…”

[Brandon] I like that idea, as long as we take it towards space religion. Meaning like Fifth Element. Where it’s a very different religion. Where it’s a religion… They’re like, “We don’t worship these things.” It’s not… I mean, I like the idea of a religion that is not about necessarily worshiping. It goes back to almost like a Greek gods type of thing. Yes, there are these powerful things. They are real. We don’t worship them. They didn’t create us. But we have to appease them.
[Howard] It’s not a sacrament, it’s an arcanum.
[Brandon] Yes. I like that idea a lot because the juxtaposition of high science fiction with that, people looking for ritual and things like this. Using ritual for these AIs. You could have a complete atheist doing this and be involved in religion. Because religion itself doesn’t necessarily… This is the trappings of all these things.
[Mary] The other…
[Howard] A fun… Sorry. A fun angle to take here might be a puzzle sort of story, where we have a character… Told from two points of view… We have a character who is trying to coax an AI into solving a problem for him. He’s going through the rigmarole, he’s walking through the arcanum, he’s performing the rites and observances. From the other perspective, there’s the AI who is looking at what’s done and chuckling at all the superfluity that’s been added to this stuff. Then there’s the piece that was needed, or a piece that’s new, and the AI is suddenly interested. “Oh, goodness. Oh, now you have my attention.”
[Mary] I don’t know that I would need to see the AI. I’m actually much more interested in how people function within a society that’s like that. But I mean…
[Brandon] I love the puzzle idea. I like that. It harkens back to the classic Asimov-type thing “How can we get the machines to do what we want them to do?” Except now, in this case, it’s not the bulldozer that’s thinking, doing the wrong thing, it’s a deity.
[Mary] In this case… I also like the puzzle idea a great deal too. But something I started thinking about as you were talking is that part of the puzzle is we don’t know why they do the things that they do. It’s like… Humans will try to create rules and rituals to govern the inexplicable.
[Brandon] Ah!
[Mary] Like you know that rule, that if you walk away from the bus stop, that’s when the bus comes? Everyone talks about that, even though we also know that that’s not a rule. That’s where all of these rituals… It’s like nine times out of 10 this works.
[Dan] So you’re talking about what is essentially a cargo cult, where we repeat meaningless things because that’s just how humans are, and we don’t know which aspect of the ritual is the one that gets their attention.
[Howard] Except some of these actually work, so this isn’t cargo cult stuff.

[Brandon] Okay. Let’s stop for book of the week. I actually want to do Dragonsinger by Anne McCaffrey. I really like the Dragonrider books. We’ve actually promo’ed several of them. Actually, the same one twice. So we picked a different one this time. Dragonsinger is interesting in that it’s a smaller story set in the Dragonriders world. It is about… The first one that… This is the first Dragonrider book that wasn’t about a dragon rider. It’s about a young girl who hatches these little eggs of mini-dragons and things. It’s delightful. It is a wonderful book. It’s very fun and readable. I think Anne was trying for something a little more juvenile. A juvenile Dragonriders book. But I loved them and I still do. They’re delightful books. There’s a trilogy kind of. There’s two about this character…
[Howard] Dragonsinger, Dragondrums…
[Brandon] Dragondrums is about another character. But anyway… I would highly recommend them. If you have not tried the Dragonrider books, and Dragonflight just doesn’t appeal to you, this is another direction you could go.
[Howard] audiblepodcast.com/excuse. Start yourself a 30 day free trial membership and go get Dragonsinger for free, and you could pick up… What is it, Dragonsong or Dragondrums for 30% off.
[Dan] You could get them both.

[Brandon] All right. So let’s get back to this. So far we’ve kind of teased it out… I want to go with this direction. Religions that deal with AIs. We don’t have a puzzle yet that this character… We don’t have a character who’s working on the puzzle. We don’t have a resolution. A really cool resolution to this story would be part of what would get me really excited about it.
[Mary] Well… ooh, boy. There’s so many different…
[Howard] Sorry. In terms of beginning-middle-end, the big story… We have humans and we have AIs, but we don’t have aliens. If our human is a cryptographer who is trying to come up with a solution to a communications puzzle… In order to get the AI to participate, the piece that the AI was missing is the piece that makes the puzzle interesting, which is the source of the transmission, which is that it was extraterrestrial. The AI is interested because “Oh, someone new to talk to. Well, yes, I will work on this for you forever.” So there’s… Your ending is potentially that… Well, go ahead, Dan.
[Dan] I was going to say, if you want a twist ending, it could be that when the human finally reveals that piece of information, then all of a sudden the entire message is translated incredibly quickly, and he realizes there’s no way he could have done that unless the AIs have already been communicating with these aliens for a long time.
[Mary] Oh. That’s interesting.

[Brandon] Okay. That’s cool. Is that a… Those are the types of endings that I worry about doing right, because it’s a bigger question at the end of the story. So how would you spin that ending to not simply be “and now, a bigger question”?
[Mary] Well. Yeah, because you did say you want to do a short story. So am I correct that you want to stay under 7500 words?
[Brandon] Yes. If that’s possible for me. Which I don’t know that it is.
[Mary] Well, this is one reason that when Howard was talking about two POVs, I’m like, we cannot give Brandon two POVs.
[Brandon] Don’t give Brandon two POVs, you’ll have 20 in 10 seconds.
[Mary] Yeah. He’s only allowed one POV, and I think really that you’re only allowed three characters in this one.
[Brandon] Right. I think this is very doable in three characters, with the AI being one of them.

[Mary] Yes. You know, it could be the large picture of the aliens, or we could get small and personal, that this is somebody who’s just trying to pass his apprenticeship. That to do that…
[Brandon] You have to get the… Yeah, it could be that. That narrows it down a little bit more, which could be very cool. In that case, it could be… That the apprentice… They’re just supposed to work out… Get the AI to do some random thing. He’s found this space… This pattern in space, he wants it analyzed or something, and then it turns out that he’s… Becomes the most important person in a long time because he’s actually…
[Howard] The AI didn’t just deign to solve the puzzle, the AI deigned to communicate with him directly and carry on a conversation and…
[Mary] Oh…
[Brandon] Well, see, there’s got to be a conversation because I want to have fun with the AI personality. I want the AI to show up and be like, “I don’t have time for you. I just decided to have Rome conquered by the Chinese,” in his virtual world simulation thing. “I’m having fun ruling as this.” You get these glimpses of what the AI has been doing. In the 10 seconds it’s taken time to talk, the AI has built Rome up to a spacefaring society and has become Emperor of Space, and is now bored with that and is… I want to see these… Like that culture clash.
[Mary] Right. As you were talking, you reminded me of a thing that I had once with… I won’t go into the back story but… There was this moment where this kid was talking to one of our puppets and jabbering nonstop. We found out that… The teacher wouldn’t stop him. The teacher afterward said his parents were killed in a car wreck and this is the first time he’s spoken to anyone in three months. That moment could be… The kid… The apprentice knows that he’s supposed to communicate with the AI, but does not realize the AI is communicating with him more than they usually do. Part of the puzzle that he needs to solve is why.

[Brandon] Okay. Howard, you have read stories like this before. How do we make this one original?
[Howard] I am by no means literate enough to guarantee originality.
[Brandon] Well, yes.
[Howard] Some of the ideas that have been explored regarding AIs. James Hogan’s AI… Once you teach an AI that it’s alive, it becomes a living thing, and competes just like living things do, except in his stories, it doesn’t compete because it has a respect for all life. Which is a big, fun resolution. Brin AIs… AIs end up being like us, because we raise them like children, because that’s the sorts of networks we raise them on. The godlike intelligence of AI… I can’t think of specific examples, but it’s a trope, it gets done a lot.
[Dan] Neuromancer.
[Howard] Neuromancer.
[Dan] Neuromancer did it. Hyperion did it.
[Brandon] Right. But again, where’s our originality? What do we… Are we approaching it in an interesting way or not?
[Mary] I think the rel…
[Howard] I think the arcanum idea is the… That there is a ritualistic sort of communication system. It’s not just… It’s not a traditional interface. You build some sort of an interface. You build an interface… That’s like… I don’t know, dice and a Parcheesi board or something. Instead of a keyboard or a…
[Mary] Well, it’s probably direct communication. They can probably see and hear you at this point. Because Siri can. What if one of the things is that… Not just what Dan was talking about with the AI suddenly solves it… Do you like the idea of aliens?
[Brandon] I’m toying with it. I don’t know.
[Howard] That’s no.
[Brandon] No, no. It’s…
[Mary] Anyway, the thing is…
[Brandon] It’s in the mix.
[Mary] What I was thinking is, as a cool resolution, is whatever this puzzle piece is, that whatever it is that gets the AI to give him its attention, because information propagates… Suddenly, he has multiple AI…
[Brandon] Okay, yeah, interested in him. Here’s something that came at me. All right. What if… To do this… To try… What you have to do, you have to pique its curiosity, the AI’s curiosity. So normally, you bring like some incredibly complex equation that the monks have worked out and hold it up… And this one’s just not interested in those things at all. Could we find something very simple about human nature that could then make the AI fascinated and bring it out?
[Howard] Captioning lolcats.
[Mary] What do I have in my pocket?
[Brandon] Well, yeah. Oh, what do I have in my pocket, so you go with…
[Mary] Tolkien.

[Dan] What if we completely reverse it? Maybe it’s like… Mary was saying, this AI is far more talkative than normal, or he’s more attentive than normal. What the guy eventually realizes is that this AI actually thinks of the humans as deities. Because really, they have the same relationship with us. We live in a bizarre world they don’t understand. We control to some extent their power sources.
[Brandon] I’m going to shoot that one down.
[Dan] Not interested in that one?
[Brandon] I’m not even interested, because the whole concept is they’re like, “You guys are boring.” That’s the whole premise of it. “I don’t want to rule this, I can make my own. You’re somebody else’s programs.”
[Howard] Did you hear my suggestion about captioning lolcats?
[Brandon] Yes, but it’s too gimmicky. It’s too silly. I would rather something like…
[Howard] But it’s the Data trying to understand a joke thing. Some things…
[Brandon] But see, a joke might work. But I feel like that’s ground that’s been tread a whole lot also.

[Mary] Let me flip this around, because we’re coming at it from what is the ending. Let’s look at our environment again. We’re saying maybe the apprentice. But look at the environment again, and figure out how it can be misused. Like how can people manipulate this, so we can figure out who has the most at stake. Because right now, I feel like that’s the piece that’s missing. We don’t… Nobody has anything at stake. So how can you misuse an environment in which you have to… What are the black market… Is there… Maybe there’s a… I mean… This is, of course, The Postman, maybe there is someone who is peddling a fake AI? Like, “Oh, yeah, I can completely communicate with this AI.” Maybe there’s somebody who is teaching the wrong things…
[Dan] How about the idea of the Chinese gold farmers? That someone has set up vast factories of child labor or whatever whose job is to go through these rituals in mass production form and are therefore attracting far more AIs than anyone has been able to in the past?
[Brandon] Okay. I do think we’re out of time on this. I think it’s something that needs to be mulled over yet some more. Maybe the seed just isn’t strong enough for a story. But yeah, we’ll keep working on it. What’s the writing prompt for our listeners?

[Dan] The writing prompt for the listeners is to come up with a better resolution to this story than we have.
[Brandon] All right. This has been Writing Excuses. You’re out of excuses, now go write.