[Please see this answer as well]
> Why was ones-complement integers implemented?
The same question could be asked about why decimal or other forms of representation were implemented - they seemed like a good idea to some developers for various reasons, as each has its advantages and disadvantages. Just consider that early US machines were mostly decimal, while European developments more often preferred binary.
> The wiki article mentions several large brands using ones-complement in their hardware for integer arithmetic into the late 1980's. This is surely for backwards compatibility?
Sure. After all, even when new, one's complement was only used by very few machines, and only heritage lines that survived due to their use in large-scale, mission-critical applications kept it - exactly to preserve the immense investment made over decades of software development.
Unisys is the prime example here. Their machines were never sold in large numbers, but whoever used them in the 1950s/60s certainly had not only an extremely high demand (why else invest incredible amounts of money back then) but also an even higher need to preserve that investment.
As usual, it also takes two - in this case, a manufacturer willing to cater to a closed circle of customers who pay a premium to keep their ecosystem viable.
> Why did ones-complement come to exist in computer hardware in the first place?
It was a viable bet:
- it's not more complicated than two's complement
- it may save some circuitry (quite important early on)
- it can be faster than two's complement at the implementation level
Negation can be implemented extremely simply (a plain bit inversion) and in a way that adds next to no delay. This matters because the main disadvantage of one's complement, the signed zero, can be avoided by using a subtraction instead of an addition after negating the second operand. All the decisions needed can be made with simple single-level logic gates, increasing execution speed.
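To make that concrete, here is a minimal sketch of an 8-bit one's-complement ALU in C. The word size, the helper names (`oc_neg`, `oc_add`, `oc_sub`, `oc_add_via_sub`) and the modelling are my own illustration, not taken from any particular machine; it only shows why a plain adder can produce a negative zero while negate-and-subtract does not.

```c
#include <stdio.h>
#include <stdint.h>

#define MASK 0xFFu  /* hypothetical 8-bit word */

/* Negation is just a bitwise inversion - essentially free in hardware. */
static uint8_t oc_neg(uint8_t a) { return (uint8_t)(~a & MASK); }

/* Plain one's-complement adder: add, then wrap the carry around. */
static uint8_t oc_add(uint8_t a, uint8_t b) {
    unsigned s = (unsigned)a + b;
    if (s > MASK) s = (s & MASK) + 1;   /* end-around carry */
    return (uint8_t)s;
}

/* One's-complement subtractor: subtract, then wrap the borrow around. */
static uint8_t oc_sub(uint8_t a, uint8_t b) {
    unsigned d = ((unsigned)a - b) & MASK;
    if (b > a) d = (d - 1) & MASK;      /* end-around borrow */
    return (uint8_t)d;
}

/* Addition built on the subtractor: a + b computed as a - (-b). */
static uint8_t oc_add_via_sub(uint8_t a, uint8_t b) {
    return oc_sub(a, oc_neg(b));
}

int main(void) {
    uint8_t five = 0x05, minus_five = oc_neg(five);   /* 0xFA */

    /* The plain adder yields negative zero (0xFF)... */
    printf("adder:      5 + (-5) = 0x%02X\n", (unsigned)oc_add(five, minus_five));
    /* ...while negate-and-subtract yields plain zero (0x00). */
    printf("subtractor: 5 + (-5) = 0x%02X\n", (unsigned)oc_add_via_sub(five, minus_five));
    return 0;
}
```

Running this prints 0xFF (negative zero) for the plain adder and 0x00 for the negate-and-subtract path - exactly the behaviour described above.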
> I'm also curious why it was so long lasting.
See above. Real-world applications differ a lot from teaching/scientific environments. For most scientific applications, a change of hardware or OS isn't a big deal, as most programs are only used for a short time, are one-offs, or get reimplemented anyway. In the commercial world the focus is on running existing software. All development investment goes into operation and extension, not rewriting.
Rewriting a financial application is measured in double- or triple-digit man-years - not counting all the reliability problems that a rewrite may bring. At that scale it's literally cheaper to finance the continued development of an 'odd' computer architecture for a single customer than to port its software.