I have not played "The Talos Principle", so all my information came from the Wikipedia page of this game and your question.
According to your question, humans developed an 'evolutionary algorithm' with the goal of developing some entity that could be classified as a "person". According to Wikipedia, an evolutionary algorithm works as follows:
- Generate the initial population of individuals randomly (the first generation).
- Evaluate the fitness of each individual in that population.
- Repeat the following on this generation until termination (time limit, sufficient fitness achieved, etc.):
  - Select the best-fit individuals for reproduction (the parents).
  - Breed new individuals through crossover and mutation operations to give birth to offspring.
  - Evaluate the individual fitness of the new individuals.
  - Replace the least-fit individuals in the population with the new individuals.
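To make the loop above concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not anything from the game or the article). It treats "fitness" as nothing more than the number of puzzles an individual can solve, which is roughly the measure the humans in the game seem to use:

```python
import random

# Hypothetical sketch of the evolutionary loop described above.
# A "genome" is a list of bits; bit i = "can solve puzzle i".
# Fitness = number of puzzles solved, standing in for the game's
# puzzle-based measure of personhood.
N_PUZZLES = 10
POP_SIZE = 20

def random_genome():
    return [random.randint(0, 1) for _ in range(N_PUZZLES)]

def fitness(genome):
    return sum(genome)  # puzzles solved

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, N_PUZZLES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(max_generations=100):
    population = [random_genome() for _ in range(POP_SIZE)]  # first generation
    for generation in range(max_generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == N_PUZZLES:
            # Terminate: an individual solved every puzzle ("personhood").
            return generation, population[0]
        parents = population[:POP_SIZE // 2]   # select the best-fit
        offspring = [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP_SIZE // 2)]
        population = parents + offspring       # replace the least-fit
    return None, max(population, key=fitness)

gen, best = evolve()
```

Note that the loop never asks *what* a person is; it only needs the numeric `fitness` score, which is exactly the point made below about Elohim.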
The key aspect here is the existence of "fitness", a measure of how close an individual is to the desired goal (in this case, being an entity that could be classified as a person). This "fitness" must be a quantifiable number: even if an AI is not "fully" a person, it must still indicate how close to "personhood" the AI is, so that the evolutionary algorithm can decide which AIs to breed and which to 'replace'. In this specific case, the humans have chosen to measure the "fitness" of the AIs through the use of puzzles: if an AI successfully solves all the puzzles and defeats Elohim, then the AI is considered a person.
If the humans already solved the hard question of defining personhood, then why would Elohim be intentionally designed to be a person? Elohim's goal is to supervise the evolutionary algorithm, and you don't need "personhood" for that. You don't even need intelligence for that: if you can come up with a way of measuring fitness, you can run an 'evolutionary algorithm' that maximizes this fitness. The Wikipedia article does not claim that Elohim is able to reprogram itself, so if Elohim was not intentionally designed to be a person, it never had any chance of becoming one.
Now, Tenrec77 argued that the players "have every indication Elohim was a person, just as much as Soma, Milton, or even you and I". That may be true; there's no reason to assume that these future humans' attempts at measuring personhood would be at all sensible. It's possible that the humans built Elohim (the supervisor of the evolutionary algorithm) with several "features", such as the desire for self-preservation and the ability to feel emotions such as fear, claimed that Elohim was only "20% person", and simply used Elohim as a way to build "real, true, 100% persons".
The humans have tried to define a complex topic such as "personhood" by creating an arbitrary boundary between a 'person' and 'not-a-person', and then used an evolutionary algorithm to try to reach that boundary. This idea of setting up arbitrary boundaries is very similar to a possible solution to a classical philosophical problem, the Paradox of the Heap:
> A typical formulation involves a heap of sand, from which grains are individually removed. Under the assumption that removing a single grain does not turn a heap into a non-heap, the paradox is to consider what happens when the process is repeated enough times: is a single remaining grain still a heap? (Or are even no grains at all a heap?) If not, when did it change from a heap to a non-heap?
Replace "heap" with "non-person" and "non-heap" with "person", and you can see the similarities. Now, one way to solve the Paradox of the Heap is to just create an arbitrary definition of "heap" and adhere to that definition ("a heap is a collection containing 10,000 or more grains of wheat"). Wikipedia doesn't seem to like this type of solution:
> However, such solutions are unsatisfactory as there seems little significance to the difference between 9999 grains and 10000 grains. The boundary, wherever it may be set, remains as arbitrary and so its precision is misleading. It is objectionable on both philosophical and linguistic grounds: the former on account of its arbitrariness, and the latter on the ground that it is simply not how we use natural language.
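The arbitrary-boundary approach can be captured in a couple of lines (the 10,000-grain cutoff is the example's own arbitrary number); the objection above is precisely that the sharp edge between 9,999 and 10,000 carries no real significance:

```python
HEAP_THRESHOLD = 10_000  # an arbitrary cutoff, as in the example above

def is_heap(grains: int) -> bool:
    # The definition is perfectly precise, but the precision is misleading:
    # nothing meaningful distinguishes 9,999 grains from 10,000.
    return grains >= HEAP_THRESHOLD
```

The humans' personhood test works the same way: a sharp line ("solved all puzzles") drawn across what is intuitively a continuum.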
On the other hand, without an arbitrary definition of personhood, the evolutionary algorithm would not work: you need some way of measuring how close someone is to "personhood". In addition, the Wikipedia article for "The Talos Principle" does not mention anyone ever questioning the "use" of the evolutionary algorithm, so it's possible that the humans came to a consensus about how to define personhood. We may therefore wish to defer to their consensus. (Wikipedia calls this appeal to consensus the "group consensus" approach to solving the Paradox of the Heap.)
So, back to your question. Is Elohim a person? My response depends on whether you trust the arbitrary definition that humanity has given to personhood.
If we do trust this definition, then: No, Elohim is likely not a person.
If we do not trust this definition, then we will need to come up with our own definition, and then apply that to Elohim to determine whether Elohim is a person.
(As a side note, the fact that the humans believe it is possible to measure personhood by an AI's external responses to puzzles, rather than by philosophizing about the AI's internal state, means that the humans adhere to either Behaviorism or Functionalism. If you don't agree with these two philosophical approaches, then it's possible that this entire question is moot and that it's impossible to build an AI that would ever reach "personhood" status.)