As developers and neuroscientists advance their studies of intelligence, the pursuit of artificially intelligent software will continue. This may result in virtual entities that feel, think, and exist in a way different from ours, but perhaps no less coherent or sentient.
During (and after) this pursuit of AI, I can't help but imagine entities being created that are intelligent and aware while at the same time being trapped and captive, perhaps even in a broken, torturous state of mind.
This opens up the question of the ethical obligations involved in being a creator, in playing God, so to speak. Based on the rights commonly accepted as universal for the only creatures of very high intelligence we know of, ourselves, what liberties and protections do we owe artificially intelligent entities as we begin experimenting with AI?
Must we avoid "killing" our test subjects? How should we handle mistakes in which the resulting intelligence experiences severe discomfort or "pain", assuming we venture into the development of feelings?
I'm approaching this from a human-rights perspective and wondering how those principles would apply to this new form of intelligent "life", if logic allows us to define it as such.