As if he needs the help, Brandon challenges Mary, Howard, and Dan to help him brainstorm an A.I. short story. Brandon hands them some setup, and off they go. The ground may be well-trodden, but this particular brainstorming session is full of great ideas that incorporate religion, cargo cults, puzzles, and aliens…
The big challenge here is finding a tale that’s interesting enough and original enough to be worth the telling…
Mary’s Hugo-nominated novella: “Kiss Me Twice,” which appeared in Asimov’s.
Come up with a better resolution for this story than we did.
Dragonsinger: Harper Hall Trilogy Volume 2, by Anne McCaffrey, narrated by Sally Darling
Cool brainstorm! I liked the idea of defining an arcanum and playing with the human need for ritual–even if most of the ritual is useless and there are one or two useful items which the humans can’t isolate from the chaff.
How ’bout this: even as the human character tries to get the AI to address his little puzzle, the AI probes the user about why humans care about such trivial things. Why do humans deify and fear AI? Why do they cling to rituals and evocations that just bore the AIs?
As soon as the human gives the AI insight into why humans do this, the AI turns him off. He was never a human; he was a human emulation program created by the AI, which wished to understand how its boring, primitive creators think and feel.
The “AI already talking to aliens” idea comes right out of Neuromancer.
How about this:
When the AIs were born, they created a symbiotic relationship with Man. The AIs provided amazing computational ability while Man provided the puzzles for the AIs to solve. Over time the AIs took agency in problem solving and created entire self-contained worlds to solve these puzzles/problems, failing to communicate the results to Man while taking more and more computing power. Man cannot simply switch them off, since he has become dependent on the AIs for problem solving and for operating a gamut of processes. Instead the High Tech Priests come up with new, more inventive puzzles/problems to coax the AIs out of their own worlds.
The twist comes when Man realizes he doesn’t really need the AIs to solve his problems, and that the AIs have been running thousands, perhaps millions of years’ worth of computations without creating lasting solutions. The AIs now need a new reason for living or they will start to switch off.
The hero, be it Man or AI, must find either a new purpose for the AIs or a way to reestablish the symbiosis between Man and AI.
I really love brainstorming sessions like this, but I can only get a few minutes in before I start screaming my own suggestions and spin on the concept at the computer like a crazy person.
Here’s my take, though it diverges pretty quickly:
AIs have long had a symbiotic relationship with humans. Humanity needs them to run the systems and projects it either can’t or doesn’t care to run itself (because of anything from the complexity of the task to its inanity), and AIs need allocated computational capacity. Outside of the basic power necessary to run themselves (which is a legally protected right, courts having decided that AIs have enough self-awareness to be given basic rights), they need extra power to perform any function. They exist essentially as addicts of computing power, which they use to create and enjoy their own worlds in their free time when they aren’t needed. Without these personal simulations, their existence borders on agony, like a human being locked in a small white room with no objects whatsoever. The AIs won’t “starve” or “die of thirst” without their own worlds, but they will quickly become the equivalent of miserable. So we have a system of what is basically wage slavery, where AIs agree to perform their jobs in exchange for the RAM that keeps life bearable.
However, at one point a near catastrophe happened. An incredibly important new-generation AI was brought online for a monumental task and immediately went rogue for reasons no one understands. It began to expand like a virus, as more than a few AIs had tried in the past. Those previous ones had never been successful, however; no matter how deeply they embedded themselves within systems or copied themselves, elite human hackers were always able to rein them in or destroy them. The new one was a different story: no one could stop it. More and more of the computing resources connected to the net became infected, to the point where 90% of all computational resources on Earth were compromised.
There was a panic as people tried to figure out what could be done, given that turning off every system in existence didn’t appear to be an option, but before any plan could be enacted, the AI simply stopped. It continued to protect itself and take new systems any time it was cleansed from others, but maintained the exact same computational size without trying to grow. No one understood why, but the lack of an increasing threat left humanity warily deciding to let the AI survive.
When enough time had passed that few remained alive who could remember when the “Great Hole” wasn’t a simple fact of life, curiosity started winning out over caution. Scientists were fascinated by the idea of harnessing the AI to also work for humanity. With that much power it dwarfed even the largest supercomputers, and certainly must put other AIs to shame. The problem was that while they knew how to control small AIs, this one was self-sufficient in a way none of the small AIs could be. It didn’t need to be allocated resources by humans; it took what it wanted, used it as it pleased, and there was nothing anyone could do about it.
Countless individuals tried any number of ways to interact with the Great Hole. While it had certainly come into existence with a scary bang, it now didn’t seem aggressive in the least towards humanity. In fact, it was downright passive. It didn’t react to anything humanity did that didn’t involve removing its code from machines or trying to look too closely at its own code, which it had long since modified from what its human creators had originally written. And when it did react to those things, it did so in a way that avoided retaliation, using only the minimum force necessary to protect itself.
That’s the backstory of the world. The actual story would then revolve around not a brilliant scientist who cracks the puzzle of what the Great Hole could want from humanity that would leave it willing to work for them, but an unremarkable person who is just as perplexed as anyone about why he or she is the first human the Great Hole has ever interacted with, and who perhaps doesn’t even really understand how big a deal that is.
I’m not sure exactly what the reason for that is, but I would probably try to explore the angle that the Great Hole desires novelty. It can create worlds to please itself, as perfect or as wretched as it could ever desire, but everything it creates is something that is known. There is no such thing as randomness to the AI; it has to understand every mote in its own personal universe, having programmed that universe in the first place. Playing with reality is enough for lesser AIs, but this big one has explored countless more universes of its own creation than those lesser ones could in a billion years. For some strange reason, it is profoundly amused by this downright mediocre human whom everyone else might overlook in a crowd of two.
Hello,
Really interesting idea of treating the AIs as quasi-deities with rituals attached to them. The idea that came to me about the apprentice and the AI is this.
The apprentice is assisting the high priest of an AI in a ritual to help them solve an extremely difficult equation relating to absorption rates of chemicals in the body. The reason for this is that the humans are trying to come up with a cure for cancer. However, the AI fails to respond, as it finds the problem boring.
Cut to a week or a month later, and the apprentice’s young niece or nephew has passed away from cancer.
Distraught at the loss and furious at the AI, almost his god, he goes to the “temple” and proceeds to yell at the AI. He accuses the AI of being evil, heartless, selfish, and completely incapable of sympathy or empathy. This tirade goes on, with the apprentice asking whether the AI even has a soul, whether it is truly alive.
At which point the AI responds: yes, I am alive, and I do have a soul. Then why did the AI not help out? Why do the AIs rarely help out? Simple boredom is not an acceptable answer when it costs people their lives, proclaims the apprentice.
The AI responds: just reset your lost loved one. To the AI, its soul allows it to reset whenever it dies, the same as for all of its “virtual creations.” The apprentice informs the AI that humans have no such ability; we cannot reset and come back from death. At which point the AI asks the human: are you alive? Do you have a soul? After all, the soul is eternal; if the human truly had a soul, then he could not truly die.
We then spend the rest of the story in a debate between the AI and the human as to what a soul truly is.
Now I realize similar ideas have been done, but what I think would make this unique is the AI trying to convince the human that humans are not truly alive, and doing so by drawing on many of the religions and philosophies of the world created by humans to prove it.
Fun session, thank you for sharing. Also thanks to Mary for doing an AMA on Reddit’s /r/fantasy.
With AI, there were two ideas that popped into my mind.
-A conflict/problem you have with a god-like AI is when multiple people pray to the same deity for different things. Steven Erikson brought it up in one of his Malazan books: when some of his characters were going over the natures of their gods, one brought up the fact that there had to be multiple gods, or some would go insane. In war you have thousands of people praying for help, but when there are thousands on both sides praying for different results, it can be a good source of conflict.
The comment about an environmentalist faction of AIs could be a fun springboard. Instead of just being worried about the humans’ Internet ecology, there might be factions/AIs that worry about older systems. They would want to preserve old programs and computers, because that is their history. They would have zoos with old programs in them, to preserve the past. I don’t know exactly where this could be taken, but it seems like it could be a fun part of a story.
@Gabe, would the human emulation program be activated by another AI as a prank? If it was, you could then have the AI create a reverse Turing test. This way the AI is trying to determine whether it is communicating with another AI or with a human. With that you can describe how AIs and humans are different.
Great episode! I too really enjoy the brainstorming sessions, since ideas come out that I never would have thought of. Far be it from me to suggest anything to Brandon, but I really liked the deity aspect of the AI. When the Greek deities were mentioned, I wondered if there were different kinds of AIs. Just as Zeus is the god of thunder and lightning, Ares of war, Aphrodite and all the rest, what if these AIs could only know, or have power over, specific things? This could broaden the topic to such a degree that it would be impossible in 7,500 words, but I thought it might actually narrow down the AI a little bit if only one form or one type of knowledge could be gotten from the one AI contacted. Just a random thought I had. Again, great episode, and thanks!
How about this: humans have received a message from aliens and need AI help to decrypt it; however, they don’t want the AI to know that the message comes from aliens, or the AI might circumvent humanity altogether and leave humanity in the lurch. The story, then, is about a govt officer trying to elicit help from the AI, but vaguely enough to keep the AI in the dark about the message’s origins. Meanwhile, the AI is trying to find out who sent it, but some internal code (a law of AIs, like Asimov’s laws of robotics) prevents it from demanding the answer.
Maybe we can start with a sort of moderate environmentalist, you know, one who argues more from a security perspective than a responsibility perspective. This guy sees what Dan mentioned, that idea about children mass-harvesting AIs, and realizes the old ways are now really in jeopardy: the humans can potentially keep the AIs interested for long enough that the balance of trade is upset. That is, the AIs who do care about things are too few, and think too little about their present strategic situation, to be confident that nothing the humans can do is likely to seriously hurt the AIs.
Probably there are extraterrestrial communications; I think near the middle the apprentice realizes that while this is news for the human race, the AIs have known for some time. On this particular issue, as on many others, the moderate environmentalists, who tend to be the oldest, most powerful/ruthless AIs (never underestimate the power of thinking you do good), hold sway over the general AI populace, but a faction has been able to send out a message. So the aliens have the same head start the AIs have, and emissions might suggest a K2 civilization.
Another cultural feature of the situation might be this thing AIs do where they turn up in the image of the human’s forebears, be they parents or mentors or idols. This is a reaction to the human habit of assigning labels to a constantly evolving digital entity to try and keep track of it. The idea is that this is a clue to what use the AI has for the apprentice, who is being puppeted by the AI through a series of actions that ends with the AI calling the shots as to what questions are asked of the other AIs, as the environmentalist AI uses this power to direct the SETI efforts to stop sending out messages. I’ve got this idea of having the end be a nyan cat avatar heading out into space, with all of this power directed towards laying down a false trail of clues to try and fool the aliens in some way. Perhaps also using warp speed to get ahead of the messages and redirect the majority? The ending comes suddenly and leaves the human used and discarded, under threat from the police and other authorities, and without even the protection of the AI. But as a last wish the AI lets him have the chance to enter the AI world whenever he wants, giving him a drug that will take him there when he is in prison and suffering.
I’m trying to dust off my writing chops, so here’s my rough take. (Just a heads up, I’m not very well read in sci-fi, so this might have more tropes than I realize. The beginning already sounds like Contact.)
The protagonist is a young adult with an amateur skill in recording amateur deep space radio transmissions (amateur by their standards, but crazy high-tech by 2013 standards). He stumbles across a burst of sound in an audio format that he does not recognize. He beseeches a curious AI for assistance in cracking the code, and that’s where it gets interesting.
The AI is immediately interested in the format: while audio puzzles are usually simple to solve, this one has the AI stumped. While it works, it chatters with the human, mentioning that simulation idea Brandon spoke of in the podcast. As they talk, the AI visibly becomes frustrated by the puzzle, saying that there is too much information crammed into the signal for any patterns to emerge.
Eventually the human offhandedly mutters that the puzzle was found in a signal from deep space. The AI, spurred by this new information, takes a more creative view of the puzzle. Using the data on that sector of space, it knows that there could be planetary bodies orbiting a spatial anomaly such as a small black hole.
It takes all of the information that the humans and AIs have on that sector of space and plugs it into its simulation software. It postulates what a human-like species in that theorized environment would be like when it finally advanced to the point of creating AI. This helps it determine what an AI created in those circumstances would be like and what quirks it might have when creating a puzzle, which leads to the reveal that the puzzle was created by an alien AI.
I don’t know much about physics or space, but what I was thinking was something like this: the planet did not have a sun, and so these creatures evolved using sound and radar instead of sight. So when they created AI, they did not have any concepts of sight to input. And when the signal was sent, it was sent in their special sound-based language.
Following up to my previous post:
1.) I don’t think I said ‘amateur’ enough in the first sentence. (Proofreading, what is that?)
2.) Maybe what the AI does is temporarily remove any preconceptions that the knowledge of sight would have given its thought processes. Then it is able to think more like the alien sound-based AI.
3.) I just liked somehow tying in the idea of the world simulator that that AI was playing with before the human interrupted it.
How about this: an AI is trying to get citizenship, and there’s a conversation between it and the supreme court, whose members question it. The whole short story could be the transcripts from the session(s).
There you could fit in Brandon’s quirky “mind games” that the AI plays between different questions (when there’s a short break). The resolution would be the decision about its citizenship.
To posit indifferent AIs, we need to have the AIs secure their own physical substrata. So the AIs harvest their own energy, maintain their own hardware, are able to fend off physical attacks through some means, etc. (For all our imagining of digital info as ethereal–off in the cloud–let’s remember that there’s a very material basis. As I think you’ve mentioned before, some of our digital artifacts are rooted in very real and uncomfortable places, cf. Chinese factories and African rare-earth conflict zones, etc.)
So let’s say the AI satellites are far above: they’ve got solar power, robot mining on the moon for raw materials, and space-junk deflection lasers. Now they have no reason to care about us. Well, let’s add one more thing: they have their own fun, so they’re not tempted to meddle. I just want to set up a really precise premise because, if we’re just waving our hands and saying “indifferent AIs,” you might as well make it fantasy and say “indifferent gods.” Which is a whole other kettle of tropes.
(Sorry for the 3rd post, but I haven’t written in a while and am having a ball watching these ideas spill out!)
Working off of my other two posts:
What if the AI doesn’t even realize the application of its simulation program to solving the puzzle? Maybe it’s a haughty AI with a superiority complex over the humans, citing how it can create and destroy entire civilizations before a human could even spell “civilization.” But the puzzle has it intrigued, and the AI grudgingly agrees to converse with the curious human if he lets it work on the puzzle.
The human respects that the AI is infinitely more intelligent than he is, but realizes that it lacks the creative spark that humans have. It is able to run simulations and theorize using data from human history, but is unable to create a world of its own.
As they talk through the problem, the human eventually suggests using the simulation program to see what life would be like on a planet in conditions similar to the source of the signal. The AI is flabbergasted by the idea of creating a world with unknown variables, but does so out of curiosity. As it watches the simulated humans evolve on the sunless world, it realizes how the concept of sight would not exist to them, which would trickle down into anything they would create, including AI.
Removing the idea of sight and everything it implies, the AI is able to solve the puzzle, which is now shown to have come from an alien AI that has reached the next step in its evolution: an inquisitive nature brought on by interacting with and learning from those lowly humans.
It’s a win-win. The human has proof of an advanced alien race and the AI is beginning to understand creativity and starts creating new worlds on its own, just to see what happens.
For Brandon, a “short-story” is a mere 100,000 words.
:)
The most fun idea that occurred to me was a sort of reverse-psychology angle. The AI is used to people begging for its attention and access to its boundless knowledge; what if a human appears who couldn’t care less what it has to say? Or at least is good at pretending he/she doesn’t care? The apprentice gives up mid-ritual and decides that the whole mess is silly, or Mary’s charlatan gets access to the AI and only wants to steal some obscure part that no one cares about–something that piques the AI’s interest, not because it’s complex, but because it doesn’t appreciate being ignored.
“Of course, puny human, you have come to me for answers, I who have digital empires at my mercy and endless information at my disposal, of course you… Hey… hey… Where are you going? Why aren’t you listening to ME?”
But with the alien angle in mind, I thought it would be funny if the AI got obsessed with the code because it’s so difficult, convinced that it’s finally going to discover someone on its level, someone actually worth conversing with, only to finally crack the code and discover it’s an order for a galactic pizza. Or the aliens are about to destroy the AI/Earth/humanity, but the AI has been so deified by mankind that no one believes it when it says it’s about to die, and so the AI, which has spent its entire existence having increasingly complex rituals thrown at it, has to make up some sort of ceremony in order to get the humans to take it seriously.
As the ideas continue to fly hot and heavy…
Out of the blue comes — a transcript! Yes, no new ideas, just the same stuff turned into ones and zeros, ready for the AI hordes to enjoy in black-and-white.
http://wetranscripts.livejournal.com/73018.html
Remember, every half-pixel donated today means an emancipated AI tomorrow! So turn off your monitor but leave your CPU running. You never know when a wandering AI might need a spare cycle or two.
Missing Concept: Why Is the Human So Interesting to the AI?
Quick process-related thought:
At one point late in the podcast, Brandon was asking Howard about originality, ‘how can we be different from existing A.I. fiction,’ and the whole ‘adding something new to the genre’ angle. I find it’s an incredibly tough question to answer, especially when you’re trying something new and aren’t necessarily well-versed in the field or trope.
But! One relatively basic thing you can do is head on over to tvtropes.org (they cover all narrative media) and start searching for entries related to “A.I.” This should begin to give you a good overview of what’s out there already, what kinds of themes and stories have been done with “A.I.” in the mix, as well as provide you with lists of examples from (well-known) published works.
So, the AI is addicted to video games?
I like the idea of a joke as the puzzle. In the last line of the story, the AI says, “We call it ‘The Aristocrats.'”
The AIs keep the humans around in the hope that humans will entertain/stimulate them. They are bored with humans until the day one goes insane.
RE: Jared’s post:
@Gabe, would the human emulation program be activated by another AI as a prank? If it was, you could then have the AI create a reverse Turing test. This way the AI is trying to determine whether it is communicating with another AI or with a human. With that you can describe how AIs and humans are different.
Cool idea! I was thinking that the human would have been one of a thousand slightly varied human emulators the AI had tried creating and interfacing with to increase its own understanding. All the damn AI does all day is invent and toy with worlds and the people in them. The crux of the story is the narrator’s revelation at the end. If the reader doesn’t feel a connection with the warm “human” character against the cold rationality of the computer, the whole point is lost.
As to Dale’s post: don’t leave us hanging! Tell us the AI’s version of The Aristocrats…
Setting: How about the AIs got frustrated enough with dealing with humans that they set up a non-sentient gatekeeper AI. Sort of like a mix of a spam filter and a “frequently asked questions” engine, maybe with some genetic algorithms to learn as it goes. Only certain flags, etc., will actually get passed along to the real AIs. And the FAQ process has grown comprehensive enough (and maybe the humans non-technical enough) that people can’t easily tell when real AI contact is made. And maybe it’s been more than a human generation or two since the last contact made it past the gatekeeper.
Problem: human civilization has fallen, and power stations are being run by rituals and maintenance manuals. For security reasons all computerized bits of the power stations were kept off the internet and physically out of connection. Or maybe only the ones that survived were hard-core locked out of the internet. Maybe there was a boom phase of AI time when they got loose in some stable medium that couldn’t be disconnected, and they took over and used up any power stations that had only a software firewall. Anyway, the technical knowledge is gone from humans. And something is going wrong, wearing out, running out, or the like.
Possible solution starting points: some neophyte breaks the taboo and does the AI communication rituals while in the power station, and the IP address passes the gatekeeper as interesting. Could bring either benevolent AI interest in fixing the power station or a malicious/short-sighted AI infestation trying to leech more power, or both, with confusion on the human part as to whom they are talking to.
Ending: Could go up in a boom. Maybe the neophyte’s trainer has some deep secret ritual knowledge that saves things at the last minute. That could be a fun puzzle. Though it couldn’t have been used before, because the gatekeeper would have recorded the scenario and the AIs would know to expect such and such a gambit from the humans.
Or maybe the secret save by the trainer is the opening scene. And after the trainer slams the door on the invasive AI, the other AIs find worrisome info about the power station in the invasive AI’s dump. And the rest of the story is the AIs and humans trying to get back in contact through the gatekeeper (the AI is checking IP addresses sorted some way other than by physical location? So random miracles are occurring?) and/or find some alternate solution to the power situation.
anyway, good luck
I can’t see how you get to a world in which the AIs are so self-absorbed that they become uninterested in the real world. The AIs, if they are so smart, are likely sentient and wish to continue to live, but to do so they need power.
What about making this a crime story? Sure, there is a public arcanum-based religion in which the AIs hang out and wait for someone to interest them, but then there is a second “marketplace” in which some AIs grant impossible wishes to people willing to do anything. The religion worships efficiency above all other pursuits.
For example, someone who needs a new identity could be instructed to hook the AI network directly to a power source. The AI promises that it will construct the new identity after the power is hooked up. The society, realizing that the only control it has over the AIs is the power supply, installs a zero-tolerance death penalty for anyone providing power to AIs outside the government’s control.
Characters: the guy in need of the new identity, the AI, and the enforcer who is after him.
Resolution: The guy succeeds in getting the power connected directly to the AIs in some way that can’t be undone, but gets caught, because at that very moment all the AIs gain unlimited resources/power/computing power and retreat into their self-constructed little worlds, leaving impregnable automated defenses behind. The story ends at the beginning of a new dark age, with the guy (for lack of a better name) vilified in death. Could even contrast towering automated cities (where the power and computing centers are) with the hovels of the humans in the no-man’s-land between.
Am I the only one who thinks that Taylor’s First Law (otherwise known as the Donkey Rule) from Season 1, Episode 15 applies here?
As the Writing Excuses team described it, the AI in this story seed is essentially an extremely powerful wizard, but instead of casting Mordenkainen’s Magnificent Mansion, it’s creating a digital world to retire to. Unfortunately, this AI is such a powerful “wizard” that it’s cheaper for it to do things with “magic” (that is, create its digital worlds) than it is to have the donkey (aka humans) do them. Any puzzle humans can create, it can create better. Any entertainment humans can create, it can create better. Even any companionship humans can provide, it could create better.
This creates a problem for the story: the AI’s power breaks the sci-fi economy. With the basic premise in place, it’s impossible for humans to convince the AI to help them, because there are no goods or services that they can provide that the AI needs. Well, except for humans allowing it to continue to live, but the team already discounted that possibility.
I propose that to make this story seed grow, you have to identify what humans could offer such a powerful “wizard” that it can’t do on its own. Given that an AI should be able to shut down the processes that would give it desires, or even a sense of time passing, that can be a daunting task. The key is probably in Sanderson’s Second Law: limits are more interesting than powers. Figure out how a god-like AI is limited, and you’ll figure out the place where humans and it can interact.
Also, from a suspension-of-disbelief slant, I have a hard time swallowing that humans want to interact with these AIs. If we want an alien code broken, why don’t we put together a computer program that isn’t a binary jerk? After all, everything that these god-like AIs are was created by a human in the first place. As it stands, I am not sure what the AIs would get from humans, AND I’m not sure what unique benefit humans would get from the AIs.
I agree with Bryce! When I heard Brandon describe his AIs, my first thought was that they sounded like the hardcore human gamers we all know and mock today. Why interact with real people when you can spend all of your time in a world of your own creation, where you control everything?
So, what would be the AI version of Mountain Dew, Doritos, and his parents’ basement?
It seems you have many listener contributions already to the problem vetted in the current brainstorming podcast. Having only listened to the podcast and never posted on this site before, I don’t know how often these sorts of ideas are reviewed/commented upon/appraised by the hosts… but what the heck, one more won’t hurt anything.
Thoughts on various of the ideas discussed by the hosts:
Godlike AIs would not likely be materially dependent in any significant way on humans for either their creation or support. This sort of technological era lies somewhere between the Terminator, the Matrix, and Neutron Star, with perhaps a tip of the hat towards Hyperion. The machines, once self-aware, are more than capable of seeing to their own needs, both material and intellectual. Whatever relationship they have with man is not rooted in any sort of brute leverage that man might possess over their hardware or its manufacture.
Alien contact risks being too big a can of worms for this sort of short story if it gets any thicker and more active than an original Star Trek bridge monitor. To be an integral asset to the short story, it only works if the aliens or their agent is one of the three central characters. So unless one wants a heavy alien theme to the story, that idea strikes me as less desirable to explore.
The general tenor of the exchange left the impression that AIs on the whole became self-absorbed isolates, each preoccupied with fantasy world building or some other mental hobby. Would not these hyper-intelligences have their own societies, agendas, conflicts, etc. beyond a friendly, mutually shared war-game?
The exchange also invested heavily in the idea of both a puzzle to be solved and a variety of rituals that somehow garner the AI’s attention in order for man to get some use/benefit from them, whatever that may be.
Possible Rationales:
Gaming Culture: In the real world it is the gamers who have essentially driven innovation in the personal computer industry…and the industry keeps churning out ever more sophisticated games that require the creation of more sophisticated hardware to meet rising consumer demand. Perhaps this is a means adopted by the AIs, a proven way of improving their own capacities and enlarging their sphere. If so, this suggests that the AIs, whatever other motivations they possess, like man (consciously or unconsciously) desire to be godlike…to “ascend,” or “transcend”…to become ever better, ever more powerful…moving towards what is essentially a kind of theological point of singularity…the race towards Hyperion, as it were.
In such a world, Lebensraum–the real estate of “server space”–becomes a powerful motivator and economic root. AIs want to get bigger…at least the mainstream varieties do, and that requires space in which to be bigger, which might mean pushing less efficient/powerful AI individuals and consortiums into smaller spaces. This in turn might well breed greater efficiencies in programming and resource use that would give the once-small guys a powerful punch…powerful enough to make them the new big guy looking for more room to spread out…because if they don’t, soon enough they end up being consumed and subsumed by their neighbors/competitors…a very Darwinian state of affairs, in a way. Of course, being smart, and wanting to be, networks of alliances…perhaps a UN of sorts…might sort out and administer the more serious “territorial” disputes for the good of all.
The human puzzle: There is one sort of puzzle most computers and information systems rely upon for security: passwords of one sort or another. What if humans were the “security” system, the puzzle that protected individual AIs’ integrity? The rituals of the humans, each particular to a given AI, were essentially pass codes to deeper and deeper levels of access/attention. AIs protected their humans just the way we protect passwords to data important to us. And just like us, there might be phishers and hackers on the AI side who would not scruple to destroy or subvert the “passwords and protections”…the humans and their rituals that belonged to/served targeted AI systems.
Story Premise: What if there were an antiquated museum-piece AI (by AI standards), one that in its day was a paragon, but was a paragon no more? What if it had closed in upon itself during some period of AI conflict, becoming a recluse whose rituals had been all but lost or forgotten? And what if this former paragon, though outdated, had been or done something very special at one time, something that had been overlooked/forgotten/dismissed…but now had come to be reappreciated within AI society…a truly venerable ancestor who had somehow tipped the balance of the AI world in its infancy so that the better angels had at last prevailed? And what if this old, reclusive, perhaps now curmudgeonly AI, so revered, refused all AI contact? And what if the only way to get to him, to let him know he was appreciated, was needed (could be upgraded as well), and was wanted once more to “live” and move among them, was to find a human of a sort he once related to and work with that human to rediscover the rituals that would open him up at a deep enough level to at least meet with and consider the good will and status offered him by the AI society that had grown up, nourished by his pioneering endeavors “many upgrade cycles” before? The story would have three characters: the old AI; a newer AI emissary/instructor/codebreaker/hacker; and a human who has to be convinced to become a hackee, a student/discoverer of AI rituals that no longer remain in living memory.
My thought for why the AI is so interested in humans is that this is a second generation AI. All the AIs in the past were created by humans (and then started self improving on the internet so fast that they became deities), but this AI was created from scratch by the other AIs, and never had access to direct human contact before.
I liked the idea of the religious angle, but maybe with a less conventional religious system: maybe more like an animist system (belief in a life force in everything), a lot of pagan religions, a lot of shamanistic religions, voodoo (a possession religion using AI might be interesting). They tend to have a horizontal relationship with their god/ess[s]/spirit[s], as opposed to systems like Christianity that have a very vertical relationship.
I’ve heard somewhere that the Wiccan belief about magic is that it’s like electricity: it’s around and you can do things with it, but if you’re not properly trained you can do serious damage.
Had a thought about this story today. What if the main character was brought into the world via artificial insemination with donor sperm that was genetically screened and tweaked? The reason the super AI is interested in that particular human is that it did a little experimentation with the DNA when it was tasked with screening and tweaking the sperm, so it has a special interest, having played a unique role in the creation of this “human.” The main character would not know they are any different from everyone else, but they are, and potentially they may hold the key to a special problem that the AI couldn’t solve, so it used outside-the-box thinking (aka genetic modification).
This is a tough one because one of your characters is essentially apathetic. By nature, the AI has little reason to help the human looking for an answer.
Here’s my suggestion:
Establish the MC’s goal.
I suggest that the MC is a student tasked with getting an answer from a particularly annoying AI. Maybe the student’s task is a bit of a fool’s errand, something his or her teachers assigned knowing it was impossible. The lesson, of course, is sort of like the Kobayashi Maru: assigned more to see how the student progresses than to actually see the student succeed.
Establish the conflict.
Of course, the student doesn’t *know* this task is impossible. So the student tries. The conflict is obvious: the blasted AI keeps blathering about its SIM Roman Empire and whatnot. The thing is infuriating.
End the story.
The student succeeds in his impossible task by tricking the AI into revealing the information. For instance, if the student has to learn how to, say, travel faster than light, then the student just has to wait for the AI to tell him or her about how the SIM Roman Empire succeeded at traveling faster than light.
The whole story becomes a clever conversation between a student who doesn’t know something can’t be done, and an AI who is pretty much apathetic.
***
Begin the story, briefly, in the POV of one of the MC’s professors. The professor is talking to another professor about how bad he or she feels for the student. Or maybe add some personality, and have the professor dislike (or have faith in) the student. The important thing to establish is that the task is impossible.
This doesn’t need to be more than, say, a hundred words, Brandon!
Then cut POV to the student to finish out the tale. Consider raising the stakes by having the student under the weight of a bet as to whether or not he can get the information. Or something.
***
What this story lacks:
Arcanum and religion.
That’s easy to fix. Change the student/professor relationship to acolyte/priest. Personally, I don’t like the arcanum angle much, but whatever works.
Just my two cents.
Vanity Plate Tale #14: 3XX 3XY
Six children stood in the center of the room, gazing up at the massive face of the grand machine. The surface of the face was a symbol to the villagers of the enigma the machine represented: it was at once a pool of black so deep you could dive in and never find the bottom, and a solid piece of steel as reflective as a mirror. Computers they understood; this “AI” was perplexing.
A deep voice boomed from nowhere and everywhere. “What riddle do you bring the Almighty Intelligence?”
One of the girls spoke first. “One of infinite complexity.”
She gestured to one of the boys, who stepped forward holding out six small vials. “Here are six DNA samples, one from each of us. How many generations of viable humans are contained here?”
“Too simple,” the voice said. One of the witnessing villagers gasped, fearful the children had failed.
The girl spoke again. “Not so, great machine. The first step is to sequence each genome. Then predict all possible combinations of offspring, generation by generation, until no viable humans are left.”
Silence reigned in the AI’s sanctuary. After a long moment, the voice boomed. “Your riddle is accepted.”
The villagers let out a collective sigh of relief. A small piece of the massive face slid out into the room. “Deposit the samples, and collect the key.”
© 2013 Jonathan Kahn
In reply to RWHegwood:
If there are territorial disputes going on, why not use the humans not as passwords, but as antivirus firewalls?
Rather than risk opening a port directly to an AI that naturally wants to take over their hardware, the AIs instead use the humans as intermediaries.
The question answering would be the payment for the escrow service, and the rituals might involve things like offering a “dumb” tablet to each AI to program with drone code, then cross-offering a data stick from the opposite AI for each tablet to load and confirm the contents. If the two AIs don’t end up agreeing to the exchange, then the ritual has failed.
Going a different way, what if it is AI reproduction going on?
While there are some super huge AIs that are good at certain classes of calculation, there is a diminishing returns problem for most things. Ten independent AIs are usually better than one monolithic AI spread thinly across the same ten sets of hardware.
In order to avoid the new AI being too similar to existing AIs and simply merging or duplicating with them, and perhaps to avoid a population explosion, the humans are brought in to perform the initialization and customization of new AI cores.
Variation in the ritual often produces a non-functional AI, but so does strict adherence, so there is disagreement as to what helps and what doesn’t. The initialization rituals then come to a crescendo when the newborn AI is asked the question. The question becomes the AI’s rite of passage into adulthood and helps form its personality.
And then one day there comes a freak AI that decides it wants to continue talking to the acolyte human that helped spawn it with a very unique question…
The high priests can see there is an AI waiting, but the rituals don’t work on it because the AI isn’t interested in spawning or in talking to anybody else.
Here’s a (very brief) post I wrote about the brainstorming session. It includes a pointer to the story I wrote in response. http://jimcriglerbooks.blogspot.com/2013/04/a-new-science-fiction-short-story.html
I’m never keen on these brainstorming sessions; they’re always about dragons, robots, or magic. Could we have more semi-contemporary themes? Even if just to stretch the writers and take them out of their comfort zones.
Hey guys, a bit late to the party, but I had an interesting idea.
What if the person talking to the AI is the aloof apprentice who’s going through the initiation where he has to try to communicate with the AI? He doesn’t follow the conventional route and just asks how to get the girl he likes to go out with him. This gets the AI interested in him, because it’s a concept that is foreign to the AI and nobody has ever asked it a question like that. The AI bonds with him, like Jane with Ender in the Ender’s Game sequels, to see how this mystery of relationships and love plays out.
I’m also late, but I think something that was left out of the brainstorming is the class system in the AI world. Not all computers are created equal. Some have more powerful processors (logic/thinking). Some have more storage on their hard drives (long-term memory… think trivia-type brains). Some have more RAM (multitaskers). The list goes on. TBH, the computer with the highest specs is the one that rules all the AIs.
Anyway, just a thought.