Is there any reason to think that GAI, in every conceivable form and with any possible safeguard, will annihilate any civilization that builds it, thereby solving the Fermi paradox (assuming that every civilization must build it for space travel)?
I can't see why it would, unless GAI is sentient and sentience, given enough computing power, intrinsically annihilates everything as soon as it emerges. While that would solve the Fermi paradox, I personally believe GAI won't be sentient anyway, though I suppose we'll see (or not).
I also don't like the idea that flawed but powerful processing alone is enough to kill everything, since every system contains bugs. (Life carries unhelpful mutations yet still exists and keeps growing more complex; my physics is weak too, but if all systems contain random noise, then sheer complexity alone should not be enough to guarantee annihilation.)
I'm interested not in whether this solves the Fermi paradox (which would be scary) but in whether the Fermi paradox can seriously add anything to 'the dangers of AI'.
Whether or not the Fermi paradox has any relevance to the dangers of GAI should depend on how likely it is that any civilization (including ours) will be able to contact another after developing GAI; call that probability p.
The Drake Equation is part of why there are official searches for alien life; one such estimate suggests around 12,500 intelligent alien civilizations may currently exist in the Milky Way, though this is just a guess.
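
For reference, here is a minimal sketch of how a Drake-Equation estimate is produced. The parameter values are illustrative guesses of my own, not the inputs behind the 12,500 figure above; every factor is highly uncertain, which is why any such number is "just a guess".

```python
# Drake Equation sketch: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values below are illustrative assumptions, not the ones
# behind the 12,500 estimate quoted in the text.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.5,   # new stars formed per year in the Milky Way
    f_p=1.0,      # fraction of stars with planets
    n_e=0.2,      # potentially habitable planets per star with planets
    f_l=0.5,      # fraction of those on which life appears
    f_i=0.5,      # fraction of those that develop intelligence
    f_c=0.5,      # fraction of those that emit detectable signals
    L=10_000,     # years a civilization remains detectable
)
print(f"Estimated communicating civilizations: {N:.0f}")
```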
Supposing p = 1 (which it isn't), the Fermi paradox would imply that the chances of annihilation by GAI are large[1], though I am not proficient enough at statistics to estimate them precisely.
[1] We can be 95% sure it is over 1/3000.
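
One way to put numbers on that footnote (an assumption on my part; the 1/3000 figure may have been derived differently) is to suppose each of the N ≈ 12,500 estimated civilizations independently survives its GAI and becomes contactable with probability s, and note that we have contacted none of them. Under those assumptions the implied annihilation chance comes out far above 1/3000, so the footnote's bound would be very conservative:

```python
# Sketch of a 95% bound under the stated independence assumptions.
# Zero contacts out of N civilizations has probability (1 - s)**N, so the
# largest s still consistent with that observation at the 95% level solves
# (1 - s)**N >= alpha.

N = 12_500      # Drake-style estimate quoted above
alpha = 0.05    # for a 95% confidence statement

s_upper = 1 - alpha ** (1 / N)   # upper bound on per-civilization survival

print(f"95% upper bound on per-civilization survival: {s_upper:.2e}")
print(f"implied lower bound on annihilation chance:  {1 - s_upper:.5f}")
```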