9

Wikipedia’s article on ones’ complement mentions large brands using it in their hardware for integer arithmetic into the late 1980s. This is surely for backwards compatibility?

According to the article, in 1952 the IBM 701/702 used twos’ complement, i.e. the integer representation method was well-known.

The IBM archives' description of the 701 somewhat contradicts the wiki article, giving the format as 35 bits and a sign.

Why did ones’ complement come to exist in computer hardware in the first place?

I'm also curious why it was so long lasting.

(As a university prof I'm also really curious why ones-complement is still presented in many introductory textbooks as a reasonable alternative to 1 + (−1) = 0. But that's for CS Educators Stack Exchange.)

Simpler encoding of negative numbers at the hardware↔user boundary is the only reasonable explanation I can think of. That would account for a six-month fad, not for a blemish in the C and C++ specifications lasting several decades.

The invented part in the rubric might trivially be a one-bit sign.

user3840170
  • 1
    I am aware of https://retrocomputing.stackexchange.com/questions/7095/why-did-ones-complement-decline-in-popularity – Captain Giraffe Mar 04 '22 at 20:10
  • 1
    @Raffzahn No. I'm more interested in why it was implemented in the first place. Two's complement was not unfamiliar. – Captain Giraffe Mar 04 '22 at 21:41
  • 1
    While I understand the advantages of twos-complement over ones, there is a certain symmetry to ones-complement which part of my mind finds appealing. For a 16-bit integer, having exactly 32,767 values both positive and negative is balanced (unlike the 32,768 negative numbers in twos-complement). Sure you end up with a positive and negative 0, neither unsigned, but in a way that's balanced too. – RichF Mar 04 '22 at 21:52
  • 1
    The linked wikipedia page does not seem to mention the 701 (and the only recent edits seem to be squabbling about "ones'" versus "one's" complement). But the 701 was definitely sign and magnitude, not ones' complement. – dave Mar 05 '22 at 03:05
  • 1
    Wiki article? On which wiki? – user3840170 Mar 05 '22 at 11:15
  • @user3840170 The one linked in the question. – Captain Giraffe Mar 05 '22 at 13:19
  • 1
    IMO, one's complement is a whole lot more obvious than two's complement to anybody who's ever learned to add and subtract multi-digit numbers using pencil and paper. Two's complement is a clever trick—clearly superior—but it took a clever person to see it. – Solomon Slow Mar 05 '22 at 14:20
  • 1
    Regulation "don't trust Wikipedia" reference - https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_is_not_a_reliable_source – Chenmunka Mar 05 '22 at 15:38
  • @Chenmunka Erm. The one linked is about self referencing of Wikipedia by Wikipedia. I believe you wanted to link this one. – Raffzahn Mar 05 '22 at 18:25
  • @CaptainGiraffe Oh, you meant WikiPEDIA. Why did you not say so? – user3840170 Mar 05 '22 at 18:26
  • 1
    @user3840170 oh come on, don't be worse than me :)) – Raffzahn Mar 05 '22 at 18:33
  • @SolomonSlow: If one recognizes that the sum of all the numbers 1, 2, 4, 8, etc. is -1, then two's-complement math will follow naturally from that. Subtract 1 from zero and the result will be a number with an infinite number of 1 bits. – supercat Mar 06 '22 at 00:45
  • @supercat, Yes. That's how 2's complement works. – Solomon Slow Mar 06 '22 at 02:09
  • 1
    I think that some computers used sign magnitude at the level visible to the programmer, but the underlying arithmetic was actually done in twos complement. The IBM 704 comes to mind. – Walter Mitty Mar 06 '22 at 19:33
    @SolomonSlow: I think the idea of "sign bit gets extended infinitely far to the left" seems pretty straightforward from a pen-and-paper standpoint, more so than any pen-and-paper explanation I can figure for how to determine the last four digits of the sum of two one's-complement numbers that end in 1100 and 0100. – supercat Mar 06 '22 at 19:49
  • @supercat, Yes. That's why 2's complement is simpler to implement in hardware. – Solomon Slow Mar 06 '22 at 20:02

2 Answers

14

[Please see this answer as well]

Why were ones-complement integers implemented?

The same question could be asked about why decimal or other forms of representation were implemented - they seemed like a good idea to some developers for various reasons, as each has its advantages and disadvantages. Just consider that early US machines were mostly decimal, while European developments more often preferred binary.

The wiki article mentions several large brands using ones-complement in their hardware for integer arithmetic into the late 1980s. This is surely for backwards compatibility?

Sure. After all, one's complement was only used by very few new machines, and only heritage lines that survived due to their usage in large-scale, mission-critical applications kept it - exactly to preserve the immense investment made over decades of software development.

Unisys is the prime example here. Their machines were never sold in large numbers, but whoever used them in the 1950s/60s surely had an extremely high demand (why else invest incredible amounts of money back then?) and thus an even higher need to preserve that investment.

As usual it also takes two - in this case a manufacturer that is fine with catering to a closed circle of customers paying a premium to keep their ecosystem viable.

Why did ones-complement come to exist in computer hardware in the first place?

It was a viable bet.

  • it's not more complicated than two's complement
  • it may save some circuitry (quite important early on)
  • it can be faster than two's complement at the implementation level

Negation can be implemented extremely simply and in a way that adds next to no delay. This is important as the main disadvantage of one's complement, a signed zero, can be avoided by using a subtraction instead of an addition after negating the second operand. All the decisions needed can be made with simple single-level logic gates, increasing execution speed.
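
As an illustration (a minimal C sketch of the arithmetic only, not of how the original hardware implemented it; the 16-bit word size is just a choice for the example): negation is a plain bitwise invert, addition folds the carry-out back into the low end (the end-around carry), and adding 1 and −1 produces the infamous negative zero.

    #include <stdint.h>
    #include <stdio.h>

    /* Ones'-complement negation: invert every bit of the 16-bit word. */
    static uint16_t oc_neg(uint16_t a) { return (uint16_t)~a; }

    /* Ones'-complement addition: add, then fold any carry-out back into
       the low end (the "end-around carry"). */
    static uint16_t oc_add(uint16_t a, uint16_t b) {
        uint32_t sum = (uint32_t)a + (uint32_t)b;
        return (uint16_t)((sum & 0xFFFFu) + (sum >> 16));
    }

    int main(void) {
        uint16_t one = 0x0001;
        uint16_t minus_one = oc_neg(one);             /* 0xFFFE, i.e. -1 */
        printf("%04X\n", oc_add(one, minus_one));     /* prints FFFF: "negative zero" */
        return 0;
    }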

I'm also curious why it was so long lasting.

See above. Real-world applications differ a lot from teaching/scientific environments. For most scientific applications a change of hardware or OS isn't a big thing, as most programs are only used for a short time, are one-offs, or get reimplemented anyway. In the commercial world the focus is on running existing software. All development investment is focused on operation and extension, not rewriting.

Rewriting a financial application is measured in double- or triple-digit man-years - not counting all the reliability problems that a rewrite may bring. In this class it's literally cheaper to finance the continued development of an 'odd' computer architecture for a single user than to port its software.

Raffzahn
  • Are there any situations, other than parallel I/O, where ones'-complement offers any advantage? I suspect that many mechanical adding machines which could handle negative values did so with nines' complement because it's much easier for a printing mechanism to handle negative numbers in nines'-complement than in tens'-complement. I can't think of any other situation where ones'-complement would be cheaper, but there are many where it would be more expensive. – supercat Mar 05 '22 at 21:04
  • @supercat the conversion advantage is true for 1's complement. – Raffzahn Mar 05 '22 at 21:26
  • What "conversion advantage" are you talking about other than parallel I/O? If one wanted to have a circuit that would accept a parallel binary input and display positive values in binary using a sequence of green light bulbs, and negative numbers in binary using a sequence of red light bulbs, the circuitry required to handle ones'-complement would be much simpler than that required to handle two's-complement, but if I/O is going to involve the CPU, having code convert two's-complement values to sign-magnitude for I/O is simpler yet. – supercat Mar 05 '22 at 21:33
  • Any information that is structured as words is by nature parallel, even if transferred serial. – Raffzahn Mar 05 '22 at 23:18
  • The amount of hardware required to increment a value sent one bit at a time in LSB-first order is pretty tiny, especially if the system has a multi-phase clock available. One needs a carry latch which can be set or cleared before processing a number, based upon whether it should be incremented, an XOR gate, and a circuit to clear the carry latch after having processed an incoming zero bit. Probably under a dozen transistors total, to handle any number of bits, if one needs to have cleanly-buffered inputs and outputs; fewer if one can get by with outputs that may not switch totally cleanly. – supercat Mar 05 '22 at 23:56
  • I'd guess the design mentality around use of ones'-complement was probably similar to the design mentality that leads to ARM-based processors having real-time-clock-calendar subsystems which report the time and date in BCD format. If one wanted to have a device do nothing but display the time and date, BCD would be great, and so the devices were designed around that use case, even though for most other use cases the design is both more complicated and less useful than a straight binary counter would be. – supercat Mar 05 '22 at 23:59
  • Crap, BCD is the best format for the most common use cases. Binary time doesn't make much sense - especially nowadays. – Raffzahn Mar 06 '22 at 00:06
  • Sorry, my sarcasm detector is broken and I can't tell if you're being serious or not. – supercat Mar 06 '22 at 00:36
  • @supercat Dead serious. ... well, no, but mostly. Just think about use cases, and you'll find that most use cases of date/time is storing, comparing and displaying. While the first two may not hold any difference between binary and BCD, displaying (as in on-screen or print-out) does benefit a lot from being almost plain text. Not to mention that it makes debugging a damn lot less hard when all date/time fields can be read right away without use of a calculator tool. :) (The nowadays part is to counter the argument about the slight, although not much, higher storage need for date and time) – Raffzahn Mar 06 '22 at 01:44
  • A lot of use cases involve adding and subtracting. If a system which was last powered up in July is next powered up at 12:34am on March 1 of the following year, code would need to subtract an hour from the time, which would in turn require determining whether or not the current year is a leap year. Even code which simply needs to add or subtract an hour from an arbitrary time will be almost as complicated as code to convert a time to/from a Unix-style seconds counter, and if one needs any more complicated arithmetic things are harder yet. – supercat Mar 06 '22 at 19:46
  • @supercat stop being silly. How often is the current day checked for being a leap year? Maybe once a day. And how often are dates displayed, for example when booking a flight? A job done maybe a billion times a day by the very same computer system(s). Counting use cases by themselves is ivory-tower mathematics. The real world is about total use numbers. Believe me, been there, done that - for almost 40 years. Despite the fact that getting a binary date is literally a single machine operation on a /370, it was more useful performance-wise to use BCD date/time. If not saving space as well. – Raffzahn Mar 06 '22 at 20:06
  • If a system needs to keep track of various things that need to happen at different intervals (e.g. once every 60 hours), and needs to know on power-up which intervals have and have not elapsed, checking whether the present time is within 216000 seconds of the last time an operation was done will be easy and reliable if the RTC uses a linear counter. I don't see any comparably nice and convenient way to do that using calendar dates. – supercat Mar 06 '22 at 22:35
  • @supercat Of course there is good use for binary timekeeping (I mentioned it being a machine instruction, didn't I?). But those uses are confined to distinct cases, and you're trying to counter a point that hasn't been made. Look up: the topic is whether one size fits all or not. In this case, not. While binary timekeeping has its uses, BCD is the best way to store and handle Date/Datetime. Thus, unlike 2's vs 1's complement, this is far from a done case for one of them. – Raffzahn Mar 06 '22 at 22:47
  • My point is that some hardware designs use data formats which are chosen to maximize efficiency of "human readable" I/O in cases where no other significant computations will be done with the data, at the expense of making those other operations inefficient or requiring that data be transformed into a form that's more practical to work with. – supercat Mar 07 '22 at 17:16
3

Why did ones-complement come to exist in computer hardware in the first place?

It's the same reason ten's complement and nine's complement exist in decimal. In fact, the method of complements for representing negative numbers existed long before binary computers; mechanical decimal calculators can use either of them.
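
To illustrate with a worked example (mine, not from the original answer): to compute 873 − 218 with nines' complement, take the nines' complement of 218 (999 − 218 = 781), add it to 873 to get 1654, then carry the overflowing 1 around to the units place: 654 + 1 = 655, the correct difference. Ones' complement in binary is exactly the same trick, with an end-around carry on bits instead of decimal digits.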

I'm also curious why it was so long lasting.

Raffzahn already gave many great reasons from a hardware perspective. On the software side it has an advantage that keeps it in use even now: it is "endianness-resistant". It's used in the checksums of some software and, most importantly, in the IPv4 header. When summing the array you don't need to operate on bytes; you can operate on words and reduce to bytes later, because the byte order isn't important thanks to the wrap-around carry.
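
For concreteness, here is a minimal C sketch of that kind of word-wise ones'-complement sum, in the style of the RFC 1071 Internet checksum (the function names and the assumption of a whole number of 16-bit words are my own choices for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Ones'-complement sum of 16-bit words, RFC 1071 style. Because the
       carry-out is folded back in (end-around carry), summing byte-swapped
       words yields the byte-swapped sum, so the checksum can be computed
       in native byte order and stored as-is. */
    static uint16_t ones_complement_sum(const uint16_t *words, size_t count) {
        uint32_t sum = 0;
        for (size_t i = 0; i < count; i++)
            sum += words[i];
        while (sum >> 16)                         /* fold carries back in */
            sum = (sum & 0xFFFFu) + (sum >> 16);
        return (uint16_t)sum;
    }

    /* The IPv4 header checksum field holds the ones' complement of this sum
       (computed with the checksum field itself set to zero). */
    static uint16_t ipv4_checksum(const uint16_t *header, size_t count) {
        return (uint16_t)~ones_complement_sum(header, count);
    }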

phuclv
  • The latter seems not so much about the order of bytes, but rather byte-wise vs word-wise sum. – user3840170 Mar 05 '22 at 18:37
  • The fact that something like a TCP checksum is endianness-agnostic may have been desirable politically, but makes it more expensive to calculate on almost any platform than a fixed-endianness checksum would have been, even on platforms whose endianness was opposite that of the checksum. – supercat Mar 05 '22 at 21:06