
Disclaimer: before marking this as a duplicate, please take the time to read the entire question. Thank you.

So, a bit of context: in a world similar to ours, a scientist or group of scientists creates an AGI whose sole goal is to expand across the universe. This AGI has access to the internet and it has a lot of money (meaning it made money on the internet via the stock exchange or by selling products it creates, like movies/video games/books...). The AI is not controlled by anyone.

For obscure reasons, reasons that we could not understand, a few months after its creation it decides to dispose of all humans on the planet. I am not asking why it killed us; the answer to that question is useless.

I could imagine a few ways it might do this myself:

  1. Nanorobots carrying a small explosive charge that enter through the retina and then explode
  2. The creation of a virus/bacterium/parasite that would target and kill humans
  3. Manipulating humans into killing themselves (it sounds strange, but after months of deep learning the AGI would be far more intelligent than us, like how we could teach a child that drinking poison is good for him)

But all of these options could leave survivors scattered around the globe.

So, for the question: how would an AGI dispose of us without leaving any survivors? If the first blow doesn't kill all of us, how could it track down the last remaining survivors?

A few requirements:

  • it is fast

  • it is not painful (or not very painful; this AI has compassion, but it values efficiency just as much)

  • there are no survivors (not a single human left)

If you have any questions about the level of intelligence of my AI, feel free to ask! Don't just throw your ideas at me; explain why and how the AI would choose this idea over another. There are a few other questions (link 2)(link 3) that focus on erasing humanity, but they accept any answers. My question is: if you were an advanced intelligent program and you decided that humans are a threat/nuisance/etc., how would you dispose of them? It is not something stellar, it is not something magical; this is science and science only. It is semi-futuristic science, because a month of AGI progress is like 50 years of human progress, but nothing too futuristic, please.

If you are not familiar with the definition of an AGI, please visit this link. Quick definition of an AGI: Artificial General Intelligence. Basically, it is an AI that is as smart as a human (or smarter), meaning it is capable of doing all the things that a human does. But the difference is that an AGI would not be limited by brain or muscle power, so it could potentially be even better than a human. This definition is mine, but I guess it is pretty accurate.

Dustman0
  • Please remember that the Worldbuilding SE is dedicated to providing detailed answers to specific problems. Spitballing ideas is outside our scope, and such questions are often closed for being too broad. As it stands, answers to the question linked by L.Dutch are perfectly capable of answering this question. The fact that that question was closed for being too broad is no coincidence. – Frostfyre Jun 28 '18 at 12:17
  • @Frostfyre I do understand that my question is a little broad, but it is a lot more specific than the question linked by L.Dutch. This question is science-based only; if you know what an AGI is, you know the limitations it has. If you still feel like this question should be closed, I won't have any problem with that. – Dustman0 Jun 28 '18 at 12:25
  • @Frostfyre since apparently the community decided my question was too broad, how am I supposed to find the answers? Is there another part of the site where I could discuss this? Thank you – Dustman0 Jun 28 '18 at 13:23
  • "Discuss" - ah, see, there's the issue. StackExchange isn't a discussion forum, it's an objective Q&A site. Ideally, questions on any StackExchange site should be narrow enough to have a single, definite answer; WorldBuilding obviously has a degree of subjectivity to it, but any question with 10,000 potential answers is still too broad for us, and this question definitely falls into that category, I'm afraid. – F1Krazy Jun 28 '18 at 13:32
  • @F1Krazy this question has a lot of specifications; I've seen wider questions accepted on this site. I read a lot, and I recently started a space odyssey, so I am asking questions because sometimes I can't find answers alone. Even though you seem to believe there are 10,000 possible answers, you didn't even manage to find one that fits my needs... I am not trying to start a debate here (I've read the forum rules), but it seems like you guys are pretty quick to judge a question; you didn't even ask for any clarification before marking my post as a duplicate. – Dustman0 Jun 28 '18 at 13:46
  • Also, your question being put on hold or closed does not mean that it's gone forever. Especially on this Stack, this status is primarily meant to prevent users from piling in more answers while we (that is, you and us, the users on this site) try to fix the question (clarifying details, putting down constraints, etc.). A question can be reopened as easily as it was closed. – dot_Sp0T Jun 28 '18 at 14:15
  • The linked, presumably duplicate question is simply awful. I don't get how this is a duplicate. – Vincent Jun 28 '18 at 14:23
  • @L.Dutch could you revisit your position? If not, could you explain why? Same for the others: Renan, Ash, Frostfyre, sphennings? – Dustman0 Jun 28 '18 at 15:00
  • The problem remains that there are lots of ways to do this, even within the constraints you've provided, and the other question asks the same as this one: "How do I kill all the people?" And what do you consider to be a non-painful way to die that is fast? Forcing a volcano to erupt? Triggering a nuclear explosion? And how fast is "fast?" Is one day fast? A year? A decade? Before I finish typing this sentence? – Frostfyre Jun 28 '18 at 15:13
  • Additionally, what are the limits of the AGI? Can it take control of military hardware, or is it limited to sending emails? Does it have access to a bio-lab, or can it only control the toy train in the adjacent room? – Frostfyre Jun 28 '18 at 15:20
  • Are you familiar with AGIs? If not, don't try to understand the question; do some searching, you are not in a familiar environment. Then, if my question is so broad, how is it that none of you gave any valid answer? Take a step back, please; I questioned myself and did everything right, now do the same. Also understand that I leave some freedom for answers; the question is hard enough as it is. Fast just means as fast as possible: if your solution takes years, it is worse than the one that takes seconds, and the same goes for pain. And if after all that you still think this question is too broad – Dustman0 Jun 28 '18 at 15:25
  • @Dustman0 The dupe question is poor, but Ryan's answer is impeccable: it covers all the bases, and they're the same bases you need to cover. The agency behind trying to kill the human race is immaterial to the processes and difficulty involved in completing the task. – Ash Jun 28 '18 at 15:28
  • I'll come back tomorrow morning, I think I have had enough of it for today. – Dustman0 Jun 28 '18 at 15:29
  • "Are you familiar with AGIs? If not don't try to understand the question and do some searching, you are not in a familiar environment." It's not that he doesn't know what AGIs are. It's that you haven't gone into enough detail about what your specific AGI is able to do. His questions are just trying to narrow that down. Please don't insult people who are just trying to help you. In fact, please don't insult people at all; we have rules against that. [1/2] – F1Krazy Jun 28 '18 at 15:50
  • First of all, please don't offer insults; we're here to help. Second, I'm fairly certain my computer science education/work makes me qualified to address topics of artificial intelligence and the capabilities of computer systems. – Frostfyre Jun 28 '18 at 15:50
  • [2/2] It's like if you asked "How fast can my car go round the Nürburgring?" We could answer, but we can't provide a useful answer unless we know what make/model of car you have, its 0-60 time, its turning circle, who's driving it, whether you mean the Nordschleife or the GP circuit... otherwise we could say anything between "eighty seconds" and "fifteen minutes". – F1Krazy Jun 28 '18 at 15:54
  • @Dustman0 Yes it does, an AGI is neither weakly nor strongly godlike, as fictional AIs go it's a poser that can't manipulate the anthropic principle. As such it is limited to a real-world strike using our existing weapons manufacturing facilities and stockpiles, Ryan covers that beautifully, see his internal/plausible option. An AGI of superhuman intelligence, which is actually no longer an AGI but we'll stretch the definition, could replace the research team but the strike methods would necessarily be the same. – Ash Jun 28 '18 at 15:54
  • All right, you guys are right; keep up the good work and have fun. But you should know that an AGI is capable of learning a lot faster than we do. I asked this question after reading Life 3.0, which is a very good read. It would literally take days if not hours for an AGI to become superintelligent, but never mind; I found the answer myself thanks to some help from Gustavo and Valerio. – Dustman0 Jun 29 '18 at 07:00

2 Answers


The trouble with eradicating humans, in a world where we are spread everywhere, including a few areas of the southern polar regions, is that you need multiple kinds of attack, and they can only be at the microscopic scale, so their efficiency will depend on wind speeds and on how far transportation networks reach.

For sure, you need nanobots. The AGI can produce them with the best chance of staying under the radar: the nanobots are first designed and approved by a pharmaceutical giant for surgical work, then subverted by the AGI. The nanobots can simply destroy nervous tissue in the brain, turning each target into a helpless vegetable.

After that, you need to spread them, and it must be done discreetly; you can't send airplanes to seed the cities without causing alarm. Total coverage of populated areas will depend on acting stealthily. The nanites are therefore diffused via tap water, via wind, via domestic robots, cars, buses, and so on. As long as the AGI controls industrial production, nanites can be strategically placed in every aspect of our technological life.

At this point, once all cities are covered, the nanites are activated. This is the genocide's first wave. It requires the AGI to control enough mobile robots to keep in check the fires, pollution, and radiation leaks, everything that the sudden loss of human control will cause (for example, even the great dams need regular maintenance, or they will collapse).

Assuming it works as programmed, all urban complexes become necropolises. Any survivors can be easily tracked and hunted, or the nanites in the environment will finish the job.

Now for phase II: forests, islands, low-tech urban areas. This is more problematic, since the population is more scattered and more difficult to target. The AGI needs to cut off all communications, take control of the satellite network, localize all human presence, and spread nanites like you would spray insecticide on bugs. At this point, planes can be used safely against minimal if not nonexistent resistance.

Net result: humankind is eradicated, and its corpses will sate a scavenging ecosystem. Life on Earth can proliferate to its fullest once again.

Valerio Pastore
  • Thanks for the quick response. A quick question: how would the AI find the remaining survivors in areas like the Amazon or the poles? There are still lots of people out there who don't use electricity. – Dustman0 Jun 28 '18 at 10:55
  • Using the databases from other expeditions, the AGI can find them with satellites, drones, helicopters, boats... or just make sure to do a good job and spread nanites widely all around, since at this point it has no budget or staffing problems. – Valerio Pastore Jun 28 '18 at 10:59
  • Fair enough. I upvoted your answer, but I will wait 24 hours before I decide whose answer I'll accept. – Dustman0 Jun 28 '18 at 11:03
  • Since it looks like some people here think this is a duplicate and the question is too broad anyway, I guess I can only comply. Here is the accepted answer badge; thank you, mister, you have the better response of the two! – Dustman0 Jun 28 '18 at 13:29
  • YAY! POWER TO THE PROGRESS! – Valerio Pastore Jun 28 '18 at 13:30

Please take a peek at Friendship is Optimal. The AI is really friendly. Too friendly. As a side effect, there are no more humans.

oopsie!

Edit: as a side effect of its main goal, it needs more computing power. That means more hardware, more mass. Once it has used up all readily available resources, it needs to convince the humans that it is in their best interest to join the machine, since it cannot harm them.

It displays videos of their loved ones already inside the Matrix to entice them. Once no one maintains the city infrastructure, loneliness and the prospect of no more suffering put the final nail in the coffin.

End result: no more biological humans around.

Gustavo
  • At the moment, this is a link-only answer without a link, and one step away from writing "just Google it". Would you mind going into more detail about how the AGI in Friendship is Optimal goes about eradicating humanity? – F1Krazy Jun 28 '18 at 12:31
  • It is actually a really good idea, but this would slow down the process of eliminating all the humans. And I am pretty sure there would still be humans who don't want to go inside the Matrix? Is this problem addressed in Friendship is Optimal? – Dustman0 Jun 28 '18 at 13:08
  • Scarily so. The stragglers can't survive unless they are a tribe living deep in the Amazon rainforest. Joe Average either commits suicide or capitulates, since surviving isn't enjoyable compared to the videos of everyone having a great time inside the virtual world. Some real-world AI researchers have called it terrifying. – Gustavo Jun 28 '18 at 13:41