54

Considering all those early home computers - the ZX Spectrum, the Galaksija, the VIC-20, the Apple I/II/III, and so on - they all have some 8-bit CPU which does integer arithmetic and no floating point. So these machines have some kind of floating point implementation in software.

My question is, why not use fractions? A fraction is just two numbers which, when divided by each other, yield the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte each for the divisor and dividend. And probably we all remember how to add or subtract fractions from school.

I haven't really calculated it, but it seems intuitive that the advantages over floating point would be:

  • faster addition/subtraction
  • easier to print
  • less code
  • easy to smoosh those values together into an array subscript (for sine tables etc)
  • easier for laypeople to grok
  • x - x == 0

And the disadvantages:

  • probably the range is not as large, but that's easy enough to overcome
  • not as precise with very small numbers
  • marketability?

It seems like such an obvious tradeoff (precision for speed) that I'm surprised I can't find any home computers or BASICs that did arithmetic in this way. What am I missing?

Omar and Lorraine
  • 5
    Not sure I understand exactly what you mean by this, but I see lookup tables, and lookup tables take memory. Memory is something you don't want to waste on a machine that only has 64 kB of contiguous memory. – UncleBod Oct 01 '18 at 09:33
  • @UncleBod is it actually unclear what I mean by the question? It's essentially "why didn't home computers use fractions to implement what Fortran calls real". – Omar and Lorraine Oct 01 '18 at 09:39
  • And of course lookup tables are still handy; Commodore BASIC uses one for its implementation of sin(x) and cos(x). – Omar and Lorraine Oct 01 '18 at 09:41
  • 38
    Actually, pretty much all floating point representations in home (and all other) computers do make use of fractional representations of numbers, but, for obvious reasons, restrict the denominators to powers of two. – tofro Oct 01 '18 at 10:00
  • So, fixed point arithmetic? – Polygnome Oct 01 '18 at 12:29
  • 1
    @Polygnome No, rational arithmetic. – Omar and Lorraine Oct 01 '18 at 14:18
  • 4
    Fixed point arithmetic is calculating in fractions, just with a global implicit denominator. But that's clearly not what the question is asking about. – Tommy Oct 01 '18 at 14:29
  • @Tommy You mean in the same way that an integer is a fraction with an implied denominator 1? – Omar and Lorraine Oct 01 '18 at 14:34
  • @Wilson not exactly the same way, since integers are a well-established number system, and fixed-point numbers aren't. Though if I'm going to be like that about it, quotients are another of the formal number systems, so I'm implicitly admitting: fixed-point numbers aren't really quotients. Either way I'm happy with my original comment: that's clearly not what the question is asking about. – Tommy Oct 01 '18 at 15:31
  • 23
    Rational addition and subtraction are slower than floating-point, not faster. And difficult to get right with limited precision, too (the obvious method tends to overflow). – Toby Speight Oct 01 '18 at 15:36
  • 2
    Some calculators did support simple fractions, for example the Casio fx-82D I had in high school. Note that it was a convenience feature, not a perf tweak. – Jonathan Oct 02 '18 at 07:44
  • 3
    How would such a system deal with numbers which are not fractions, like the square root of 2, or pi? – Peter M. - stands for Monica Oct 02 '18 at 15:20
  • 1
    @P.M Exactly the same way as we deal with it today: Round to the nearest representable value. – pipe Oct 02 '18 at 15:39
  • 1
    A further reason NOT to do this is intermediate expression swell: http://www-troja.fjfi.cvut.cz/~liska/ca/node57.html#SECTION00434000000000000000 Not only does true rational arithmetic require arbitrary-precision integers (which would mostly choke an 8-bit machine); the size of the fractions explodes in many cases. – Oleg Lobachev Oct 02 '18 at 21:38
  • 1
    certainly addition and subtraction of fractions is slower than with floating-point. one must obtain a common denominator, add/subtract the scaled numerators, then simplify the fraction by cancelling factors common to numerator and denominator. that's much more computation than is required to scale the smaller float to have the same exponent as the larger float, adding, and if necessary adjusting the resulting exponent. – robert bristow-johnson Oct 03 '18 at 00:14
  • but i wish that IEEE had done a better job in spec'ing the 754 standard. – robert bristow-johnson Oct 03 '18 at 00:15
  • Floating-point numbers have powers of 2 as denominators, making operations fast since you just need to shift. For example a*2ⁿ + b*2ᵐ = (a + b*2ᵐ⁻ⁿ)*2ⁿ. Similarly fixed-point operations are fast since the denominator is a constant (most often a power of 2 or 10). OTOH a/b + c/d will need 3 multiplications and one addition, (ad + bc)/bd (unless b == d, in which case it becomes simple), not including a lot of slow divisions needed to reduce the result to its simplest form. In short, to be fast you need to limit the set of denominators – phuclv Oct 03 '18 at 03:24
  • 4
    @phuclv Don't answer questions in the comment section. If you think the other answers are wrong, you write a new answer. – pipe Oct 03 '18 at 07:42
  • @robertbristow-johnson As someone who does research in numerical analysis, IEEE754 seems very well thought out to me; do you have any specific complaints? – Federico Poloni Oct 03 '18 at 18:52
  • 2
    @FedericoPoloni, like the DEC PDP-10, negatives should be the twos-complement of the positive value (rather than the sign-magnitude format as is IEEE754) and NaNs and INFs should only be for when the exponent is set to the max. an exponent of 0 should be denorms. and denorms should be considered non-exceptional. this makes the mapping function of the float to the int counterpart having the same bit pattern similar to the strictly increasing arcsinh() function. compare operations done as integers will work for floats (as long as there is no NaN). – robert bristow-johnson Oct 03 '18 at 19:00
  • 2
    "Computer Math is to Math what Computer Music is to Music." –  Oct 03 '18 at 20:10
  • 3
    Your question says "I haven't really calculated it..." I know how to answer your question. Calculate it. Try implementing such a scheme in a modern programming language, and then see how many bytes of code it took for the logic and the representations. – Eric Lippert Oct 03 '18 at 21:53
  • 2
    You make a number of claims as being "obvious" and "easy enough" without evidence. Prove that it's as trivial as you claim and offers better performance characteristics. – jamesdlin Oct 03 '18 at 22:21
  • @jamesdlin I said "It seems" and "intuitive to me". I didn't claim anything. – Omar and Lorraine Oct 04 '18 at 07:44
  • @jdv, i am interested in all four topics you mention. can you explain the relationship? how exactly is computer music to music? and computer math to math? – robert bristow-johnson Oct 04 '18 at 08:36
  • @robertbristow-johnson it's a semi-famous historical quote, though I can't seem to find the source. –  Oct 04 '18 at 14:41
  • But those processors had no floating point. I wrote an optical character recognition system for use in an industrial setting on a Z80 (it would grab a frame in hardware, use a small robot to center the center of a circle in the frame, grab another frame, figure out where the number was (it was along the circumference of a circle), rotate the image, enhance the contrast and then decode the image of 4 digits - in about 5 sec). 32k of code, 32k of RAM. We used fixed "sine" tables, fractions, hand-coded multipliers, everything. What we couldn't use was floating point. Oh, it was mostly in C. – Flydog57 Oct 05 '18 at 16:52
  • @robertbristow-johnson: "certainly addition and subtraction of fractions is slower than with floating-point". Have you ever tried doing any floating point math on a Z80? We never bothered trying to get a general solution to any math anything on a Z80 - memory and time wouldn't allow it. We stripped everything down (for example, we had a much-less-than-10k stripped-down version of the C "standard library" that we used: strcpy, strlen, a rudimentary sprintf, etc.). If we had math to do, it was purpose-built (like writing an optical filter with no multiply instruction). – Flydog57 Oct 05 '18 at 16:59
  • @Flydog57, not much on a Z80, but i did some serious math on a MC6800. the phenomenon we were measuring was tremendously slow and i had a sample rate of 10 Hz. i had my own multiplication and division routines and had 16-bit, 24-bit and 40-bit words and i used the carry bit a lot. with 1 MHz instruction rate, you can crunch a lot of numbers in 1/10 second. – robert bristow-johnson Oct 05 '18 at 18:26
  • still @Flydog57, the problem with this fractional arithmetic is getting the denominators the same to add or subtract numerators. and then simplifying the result by cancelling common factors. – robert bristow-johnson Oct 05 '18 at 18:28
  • I'd prefer to add this as an Answer, but the question was protected before I could. The code for storing floats as fractions is not simple. For instance, the GNU GMP code is here https://gmplib.org/repo/gmp-6.1/file/tip/mpq It uses MPZ as a multiprecision integer. See the mul.c, div.c, aors.c (add or subtract) files for the complexity required. – CSM Oct 06 '18 at 17:46

11 Answers

109

When adding or subtracting fractions, you need to find the least common multiple of the two denominators. That's an expensive operation, much more expensive than adding or subtracting floating-point numbers, which just requires shifts.

Multiplication is also more expensive, because now you need two multiplications (numerator times numerator, denominator times denominator) instead of just one. Similarly for division.

Also, the numerator and denominator of a fraction will eventually grow large, which means you won't be able to store them in the limited memory of an 8-bit system. Floating point deals with this by rounding.

So: It's more expensive, there's no way to limit the memory used for truly general applications, and scientific applications are geared towards using floating point, anyway.
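To make the cost difference concrete, here is a rough Python sketch (illustrative only; an 8-bit machine would of course do all of this in assembly) of exact rational addition, with the operations it needs spelled out:

```python
from math import gcd

def rat_add(n1, d1, n2, d2):
    """Exact rational addition: an LCM (via a GCD), two scaling
    multiplications and an add, then a reduction (more divisions)."""
    lcm = d1 * d2 // gcd(d1, d2)              # least common multiple
    num = n1 * (lcm // d1) + n2 * (lcm // d2) # scale numerators, add
    g = gcd(num, lcm)                         # reduce to lowest terms
    return num // g, lcm // g

# Floating-point addition, by contrast, just shifts the smaller
# mantissa right until the exponents match, then adds once.
assert rat_add(1, 6, 1, 4) == (5, 12)
```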

dirkt
  • 2
    Multiplication of floating point is one multiplication and one addition. – Taemyr Oct 01 '18 at 13:57
  • 1
    @Taemyr: And additions are extremely cheap compared to multiplication. Especially if you just add the exponent, which is short. So in first approximation, you can ignore these additions. – dirkt Oct 01 '18 at 14:06
  • 36
    And the moment you encounter prime number denominators, it can get very large very quickly. It has no predictable upper limit. The unpredictability of memory use and calculation speed makes fraction use very unappealing. In extreme cases, if the rounding errors are too severe, you can switch to fractions to increase accuracy, but those are very specific optimization. Switching everything to fractions is not reliable. – Nelson Oct 01 '18 at 14:27
  • @Nelson: For any given maximum value of starting denominator, there would be an upper bound to the size of denominator required to evaluate any particular sum-of-products expression. Unless expressions are simple or starting denominators are limited to very small values, however, that bound would be beyond the ability of any practical implementation to handle. – supercat Oct 01 '18 at 18:25
  • 13
    Might be worth mentioning that not all values are representable as a rational and need to be approximated anyway. Probably with a fraction with a "complicated" denominator. – Leo Oct 01 '18 at 19:49
  • 1
    I'd expect division to be faster, since it's trivial to invert a fraction (which means division should be just as fast as multiplication, as opposed to floating-points where division is typically an order-of-magnitude slower) – BlueRaja - Danny Pflughoeft Oct 01 '18 at 21:13
  • @supercat For prime numbers under 1000, the 4 largest are 977, 983, 991, and 997. Multiplied together it's already 948,892,238,557. When calculating 1/977 * 1/983 * 1/991 * 1/997, floating point calculation will simply round to 0. Fractional calculator will give you an error with denominator exceeding Integer size. We're assuming simple fractions. Far more complex fractions will give you an unusable answer if either the numerator or the denominator exceeds maximum datatype size, while floating point will always give you something with loss of accuracy. – Nelson Oct 02 '18 at 00:28
  • @Nelson: Given N additions, subtractions, multiplications, or divisions, with no operand having a numerator or denominator exceeding K, the result will be representable without need for any value exceeding (2K)**N. If K and N are large, doing math with values that size may be rather slow and unwieldy, but the values would be finite. – supercat Oct 02 '18 at 01:30
  • 4
    @BlueRaja-DannyPflughoeft: Given an 8-bit CPU like the 6502 etc., division has to use a similar shift-subtract algorithm ("long division" style) as the shift-add algorithm for multiplication, so it's actually in the same ballpark. The order of magnitude difference only shows up in hardware, where fast multiplication is doable, but fast division is much harder. That's why I wrote "similarly for division". I do recommend to look at actual implementations of floating point, for example in the Apple UCSD P-code interpreter. – dirkt Oct 02 '18 at 05:37
  • 12
    "scientific applications are geared towards using floating point" - It's actually exactly the other way round. Floating point formats are designed specifically to meet the needs of scientific computation. – Michael Borgwardt Oct 02 '18 at 09:04
  • 2
    @MichaelBorgwardt Great observation. Scientific computation has been performed with floating point (base 10, aka scientific notation) for centuries before computers as we know them arrived on the scene for real in the fifties. – pipe Oct 02 '18 at 13:54
  • 13
    @dirkt: There are some table-based approaches for multiplication on the 6502 that exploit the fact that A*B == ((A+B)*(A+B)-(A-B)*(A-B))/2. Given a table of (N*N)/2, one can reduce the computation to Table[A+B]-Table[A-B]. If one multiplicand will be used much more than the other and one sets up table pointers properly, sec / lda (ptr1),y / sbc (ptr3),y / sta resultL / lda (ptr3),y / sbc (ptr4),y / sta resultH will multiply the B value used to prepare the pointers by the value in y, yielding a 16-bit result. I don't know how to do that nicely for division. – supercat Oct 02 '18 at 21:48
  • @supercat: yes, you can always trade space for time (at least to some degree). I'd bet there are ways to speed up division using tables, too. Still, existing implementations used the shift-and-add/subtract methods. – dirkt Oct 03 '18 at 04:34
  • 1
    @dirkt you can always trade space for time, true, but the trick in supercat's comment is better than the naive trade-off because it manages to use only O(N) space to tabulate the results of N^2 multiplications. – Federico Poloni Oct 03 '18 at 18:56
54

My question is, why not use fractions?

Quick answer:

  • Too much code needed
  • Dynamic storage needed
  • Long representation even for simple numbers
  • Complex and slow execution

And most prominent:

  • Because floating point is already based on fractions:
    Binary ones, the kind a binary computer handles best.

Long Form:

The mantissa of a floating point number is a sequence of fractions to the base of two:

1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 ...

Each bit holds a zero or one, denoting whether that fraction is present. So displaying 3/8 gives the sequence 0110000...
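This expansion is easy to check mechanically; a small Python sketch (a modern illustration, not period code):

```python
def mantissa_bits(num, den, bits=7):
    """Expand num/den (with 0 <= num/den < 1) into its binary-fraction
    bits: 1/2, 1/4, 1/8, ... each present (1) or absent (0)."""
    out = ""
    for _ in range(bits):
        num *= 2                 # move to the next power-of-two fraction
        if num >= den:
            out += "1"
            num -= den
        else:
            out += "0"
    return out

assert mantissa_bits(3, 8) == "0110000"  # 3/8 = 0/2 + 1/4 + 1/8
```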

A fraction is just two numbers which, when divided by each other, yields the number you are interested in. So picture a struct which holds one integer of whatever size, and one byte for each of divisor and dividend.

That 'whatever size' is eventually the most important argument against. An easy-to-implement system needs a representation as short as possible to save on memory usage - memory was at a premium, especially on early machines - and it should use fixed-size units, so memory management can be as simple as possible.

One byte each is not really enough to represent the needed fractions, resulting in a rather complicated puzzle of normalisation, which needs a rather large number of divisions to handle - a really costly operation (in hardware or software). In addition to the storage problem for the numbers, this would require even more address space to hold the non-trivial handling routines.

Binary (*1) floating point is based on your idea of fractions, but takes it to the logical end. With binary FP there is no need for many complex operations.

  • Turning decimal input into binary FP is just a series of shift and add operations.
  • Returning it to decimal (*2) is again just shifting plus additions.
  • Adding - or subtracting - two numbers only needs a binary integer addition after shifting the lesser one to the right.
  • Multiplying - or dividing - means multiplication - or division - of these two fixed-point integers and addition - or subtraction - of the exponents.

All complex issues get reduced to fixed-length integer operations. Not only the most simple form, but also exactly what binary CPUs can do best. And while that length can be tailored to the job (*3), even rather thrifty formats (size-wise), needing just 4 bytes of storage, will cover most everyday needs. Extending that to 5, 6 or 8 bytes gives a precision rarely needed (*4).
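As a toy illustration of the addition case (a hypothetical (mantissa, exponent) pair, ignoring normalisation and rounding):

```python
def fp_add(m1, e1, m2, e2):
    """Add m1 * 2**e1 and m2 * 2**e2: shift the lesser value's
    mantissa right to align the exponents, then one integer add."""
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    return m1 + (m2 >> (e1 - e2)), e1

# 3*2**4 + 8*2**1 = 48 + 16 = 64 = 4*2**4
assert fp_add(3, 4, 8, 1) == (4, 4)
```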

And probably we all remember how to add or subtract fractions from school.

No, we don't really. To me that was something only mentioned for a short time during third grade. Keep in mind that most of the world moved to (decimal) floating point more than a century ago.


*1 - Or similar systems, like IBM's base-16 floating point used in the /360 series. Here the basic storage unit isn't a bit but a nibble, acknowledging that the memory is byte-orientated and parts of the machine nibble-orientated.

*2 - The least often done operation.

*3 - Even 16-bit floating point can be useful for everyday issues. I even remember an application with an 8-bit float format used to scale priorities.

*4 - Yes, there are use cases where either more precision or a different system is needed for accurate results, but their number is small and special - or already covered by integers :)

Raffzahn
  • By "whatever size", I meant "your choice": you can choose 16 bits, 32 bits, whatever. I didn't mean "variable length". Probably unclear wording on my part – Omar and Lorraine Oct 01 '18 at 12:08
  • @Wilson that would even increase the problem. Reserving like 24 bits (8+16) for each fraction used would for one limit precision while already increasing size requirements compared to FP. Even worse at more usable precision. Going to variable-length encoding would be a quite sensitive thing to do. Then again, the size part is maybe even the least hurdle here. It's just clumsy and hard to handle. – Raffzahn Oct 01 '18 at 12:15
27

There is a mathematical problem with your idea. If you choose to store fractions with a fixed-length numerator and denominator, that works fine until you try to do arithmetic with them. At that point, the numerator and denominator of the result may become much bigger.

Take a simple example: you could easily store 1/1000 and 1/1001 exactly, using 16-bit integers for the numerator and denominator. But 1/1000 - 1/1001 = 1/1001000, and suddenly the denominator is too big to store in 16 bits.
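You can reproduce this blow-up with Python's exact fractions module (a modern stand-in for the hypothetical 16-bit scheme):

```python
from fractions import Fraction

# 1/1000 and 1/1001 each fit easily in 16-bit fields...
diff = Fraction(1, 1000) - Fraction(1, 1001)
assert diff == Fraction(1, 1001000)
# ...but the denominator of their difference no longer does.
assert diff.denominator > 2**16
```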

If you decide to approximate the result somehow to keep the numerator and denominator within a fixed range, you haven't really gained anything over conventional floating point. Floating point only deals with fractions whose denominators are powers of 2, so the problem doesn't arise - but that means you can't store most fractional values exactly, not even "simple" ones like 1/3 or 1/5.

Incidentally, some modern software applications do store fractions exactly, and operate on them exactly - but they store the numerators and denominators using a variable-length representation, which can store any integer exactly - whether it has 10 digits or 10 million digits doesn't matter (except that calculations on 10-million-digit numbers take longer, of course).

alephzero
  • 7
    I'm currently working with one of these 10-million-digit-fraction calculations in a piece of code at home. It's nice because you get good accuracy. It's not nice because every single operation thrashes the CPU cache and even the simplest calculation takes ages. :) But computers are built to calculate stuff so they don't mind. – pipe Oct 01 '18 at 13:34
  • 6
    Also, note that often you have to approximate anyway, not only to keep denominators bounded, but for instance if you want to make computations with sqrt(2) or pi. – Federico Poloni Oct 01 '18 at 13:48
  • 1
    Adding together more than two numbers makes things even worse. Even something as simple as adding a group of fractions whose denominators are all 13 or less can yield a result with a denominator of 360,360, and that number will grow hyper-exponentially with the denominator of the largest fraction to be added. – supercat Oct 01 '18 at 18:20
  • @FedericoPoloni and then computer algebra systems come into play, where instead of approximating and doing arithmetic with the approximations, you do symbolic manipulations to simplify expressions, so that e.g. sqrt(pi^2) becomes pi, and e^2*cos(sqrt(pi^2)) becomes -e^2. – Ruslan Oct 08 '18 at 09:21
20

Floating-point isn't just about representing numbers that have fractional parts. It's also about representing numbers that are very large, or very small, in a way that allows extra range by sacrificing precision.

Consider these examples:

1,000,000,000,000,000,000,000,000,000,000 (1 followed by 30 zeros). This number can be reasonably stored in floating-point format in 8 bytes with some loss of precision (a reduced number of significant digits). To store it as an exact fraction, you need considerably more space.

Similarly, the fraction 1/1,000,000,000,000,000,000,000,000,000,000 has the same problem. It's still 8 bytes in floating-point but much larger as a fraction.

Because floating-point has an exponent that's stored separately from the mantissa, this gives it the ability to represent a larger range of numbers, even if every number within that range cannot be represented exactly. This is a valuable tradeoff because in many applications only a limited amount of precision (significant digits) is needed so this saves memory, which is especially at a premium on small systems (and in the beginning, all systems were small by today's standards).
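A rough size comparison in Python (illustrative; exact byte counts depend on the formats chosen):

```python
import struct

n = 10**30
# Stored exactly as an integer numerator, 10**30 already needs
# 100 bits (13 bytes) - before any denominator is stored at all.
assert n.bit_length() == 100
# As an IEEE double it fits in 8 bytes, with some loss of precision.
assert len(struct.pack("d", float(n))) == 8
```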

Ken Gober
  • 2
    Fun fact: the nearest IEEE double to 1e30 has the value 1000000000000000019884624838656, a rounding error of less than 1 part in 2^55. (The format has 53 significand bits, so the max rounding error is slightly larger.) Or it can be stored exactly in a decimal-float format (most computer hardware only supports binary floats, though). Anyway, great answer, the exponential nature of floating point is important. – Peter Cordes Oct 03 '18 at 09:56
15

Point by point.

  • faster addition/subtraction
    No: 8/15 + 5/7 is evaluated as 131/105 [(8*7 + 15*5)/(7*15)], so 3 multiplications for one single addition/subtraction. Plus possibly reduction
  • easier to print
    No: you have to print a human readable string of digits. So you must transform 131/105 to 1.247... Or are you proposing to simply display the fraction? Not so useful for the user.
    PRINT 12/25 --> RESULT IS 12/25

  • less code
    No: floating-point code is compact; it's mostly just shifting, all in all

  • easy to smoosh those values together into an array subscript (for sine tables etc)
    I don't understand what you mean. Floating point 32 or 64 bit values can be packed together easily

  • easier for laypeople to grok
    Irrelevant, laypeople do not program the BIOS of microcomputers. And statistics tell us that most laypeople do not understand fractions anyway

  • x - x == 0
    The same holds in floating point
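The arithmetic in the first point is easy to verify (a quick Python check):

```python
# 8/15 + 5/7: three multiplications and one addition,
# before any reduction of the result is even attempted.
num = 8 * 7 + 5 * 15   # 56 + 75
den = 15 * 7
assert (num, den) == (131, 105)
```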

edc65
  • Regarding the last point, (at least some) floating point values are not guaranteed to be equal to themselves? – Omar and Lorraine Oct 01 '18 at 13:33
  • 5
    @Wilson, no, the only FP value that is not equal to itself is NaN (not a number). And this is by design. – edc65 Oct 01 '18 at 13:35
  • I think I know where I went wrong then: I'm thinking about expressions. (x*2)*2 might not equal 4*(x*1). Or something. I'm sure there's some kind of funky business like that with floating point. – Omar and Lorraine Oct 01 '18 at 13:39
  • 1
    @Wilson That's not a good example because multiplication by powers of 2 is exact (barring underflow/overflow), it's just a shift in the exponent. But essentially you are correct; since the results of the operations are approximate, identities such as (1/x)*x - 1 == 0 are not guaranteed to hold (try x=49, for instance). – Federico Poloni Oct 01 '18 at 13:44
  • 3
    @Wilson you are thinking about approximation. In binary FP the result of 2/5 can not be stored exactly (like in decimal FP you can not exactly store the value of 1/3: it's not 0.33, not 0.333, not 0.3333 and so on). But even using fractions you need to use either approximation, or variable-size integers for numerator and denominator – edc65 Oct 01 '18 at 13:45
14

Some modern programming languages do have a Rational or Fractional type, so your idea has merit.

However, there are and were several different problems with it. It's worth noting that, even on systems where floating point also needed to be implemented in software, fractional math wasn't widely used as an alternative. Some possible reasons that applied at the time:

  • The 8-bit computers you list used either the Zilog Z80 or MOS 6502 CPU, which had no hardware multiply instruction. (The Motorola 6809, used in the Tandy CoCo, did.)
  • Adding or subtracting rationals requires computing greatest common divisor or least common multiple, which could be done without division but still would have been very slow compared to the shift-and-add of floating-point numbers, and then both multiplication and division (which was even slower).
  • Reducing fractions to their canonical form would also have involved calculating GCD and dividing.
  • Multiplication and division of floating-point is also simpler: multiply mantissas, add exponents.
  • While floating-point math needs only a few extra guard bits of precision, exact multiplication of rationals requires doubling the number of bits, so to be able to compute a/b + c/d where a, b, c and d have 16-bit precision, then find the GCD of ad+bc and bd and divide both the numerator and denominator, you would have needed 32-bit math on an 8-bit ALU with no hardware multiplier or divider.
  • Many values that programmers want to work with are irrational, most famously π and the square root of 2.
  • It wasn't how math had always worked on mainframes and minicomputers, and wouldn't have been acceptable for scientific computing.
  • Fixed-point was a simpler alternative for most use cases. You typically know what an acceptable lowest common denominator for your problem domain is, and then you only need to store the numerators and the math becomes easy.

In the era of 16-bit microcomputers, floating-point coprocessors appeared on the market that were hundreds of times faster, systems that did not have the coprocessor emulated them, and their functionality became IEEE standards, although many games and applications continued to use fixed-point. In the late '80s, floating-point math became integrated into the CPU cores themselves.
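For reference, the division-free GCD alluded to in the second bullet can be done with Stein's binary algorithm, using only shifts, subtractions and tests (a sketch in Python, not taken from any period ROM):

```python
def binary_gcd(a, b):
    """Stein's algorithm: GCD using only shifts, subtraction and
    parity tests - suited to CPUs like the Z80 or 6502, which have
    no divide (or multiply) instruction."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:   # both even: factor out a power of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:         # make a odd
        a >>= 1
    while b != 0:
        while b & 1 == 0:     # strip factors of two from b
            b >>= 1
        if a > b:
            a, b = b, a       # keep a <= b
        b -= a                # gcd(a, b) == gcd(a, b - a)
    return a << shift

assert binary_gcd(81, 421) == 1     # the example from the comments
assert binary_gcd(360, 48) == 24
```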

Davislor
  • In many circumstances, fixed-point could be superior to floating-point with proper language support, but no modern mainstream languages are equipped to efficiently handle them because achieving optimal performance would require supporting a wide range of types. By contrast, when using floating-point math, one can get by with just two types. – supercat Oct 02 '18 at 00:17
  • @supercat And besides, fixed-point is usually represented as an integral numerator, with the denominator implicit in the code. Haskell is one language with a library for rational types. – Davislor Oct 02 '18 at 00:48
  • Depending upon the platform, operations on values with mixed hard-coded denominators may or may not be efficient, but the only languages I can think of with fixed-point support (PL/I and COBOL) use decimal which most machines can't handle very efficiently. – supercat Oct 02 '18 at 01:24
  • 1
    @supercat You don’t really need much in the way of special support, though. For a game like Doom, which came out before most computers had FPUs and used 16-bit fixed-point, addition and subtraction were just integer addition and subtraction. Multiplication (rounded to 16 bits) was integer multiplication followed by a right shift. (If you're representing probabilities or other values between 0 and 1, you don’t even need the bit shift; just take the high word, which on x86 is stored in the DX register, or use UMULH on ARM.) Division was slightly trickier, but not too bad. – Davislor Oct 02 '18 at 03:01
  • Addition and subtraction of fixed-point values that have the same precision is simple integer addition and subtraction, sure. But getting optimal performance with fixed-point math may require having values with differing levels of precision. On a platform that can efficiently shift values by 4 bits, for example, it may be helpful to have position and velocity both stored in signed 7.8 format, while position is stored as 11.4, but on a platform where it's faster to shift by multiples of 8 it might be better to use 7.8 and 15.8 or 23.8. – supercat Oct 02 '18 at 04:42
  • @supercat Yep! It’s probably so rare to see a dedicated fixed-point library because fixed-point is so easy to implement with general-purpose instructions. – Davislor Oct 02 '18 at 06:04
  • at a previous workplace, i saw someone create a fraction class that had a signed long for the numerator (32 bit) and unsigned long for denominator which was always at least 1. all integers had the denominator 1 and all other fraction where supposed to be reduced to the simplest so that each value had a unique representation. that reduction took time, there was a table of primes all the way to 2^16 and i am not sure what the method was to factor the numerator and denominator efficiently without using division or the % operator. – robert bristow-johnson Oct 04 '18 at 19:12
  • @robertbristow-johnson I might have used Stein’s binary GCD algorithm on hardware with no divide instruction, but a table of primes suggests trial division was involved? One modern, open-source implementation anyone who’s curious can look at is the Glasgow Haskell Compiler’s Data.Rato. – Davislor Oct 04 '18 at 20:20
  • it seems to me @Davislor, that one must do trial divisions to figure out, in general, if a particular prime is a factor to a composite number. Other than having a *huge* table where each positive integer has a list of all of its factors, i see no other way of simplifying fractions than dividing out primes until there are no common prime factors to both numerator and denominator. – robert bristow-johnson Oct 05 '18 at 03:48
  • @robertbristow-johnson To simplify fractions to a canonical form, Stein’s algorithm will find the greatest common divisor without any divisions (other than right bit-shifts). Then, you do need to divide both numerator and denominator by their GCD, but only once. Similarly, the denominator of a/b + c/d or a/b - c/d is the least common multiple of b and d, equal to bd/gcd(b,d). – Davislor Oct 05 '18 at 03:52
  • @Davislor, i took a little peek at that haskell code (i don't know haskell, but i thought i might be able to read it) and i cannot see where and how it does it. it's at the bottom of the file. can you point to any C or C++ code that does this Stein's binary GCD? – robert bristow-johnson Oct 05 '18 at 03:56
  • @robertbristow-johnson Sorry for the confusion. That Haskell code actually uses Euclid’s algorithm for gcd. Stein’s would be useful for a machine with no multiply or divide instructions, such as Z80 or 6502 (or SPARC v7). – Davislor Oct 05 '18 at 03:59
  • suppose you had 81/421. both odd, so no right shifts without losing information. how does one figure out that this cannot be simplified without dividing trial primes into numerator and denominator? – robert bristow-johnson Oct 05 '18 at 03:59
  • @robertbristow-johnson To find the Haskell implementation of gcd in the file I linked, by the way, search for gcd ::. – Davislor Oct 05 '18 at 04:00
  • @robertbristow-johnson In that example, you would find that gcd(81, 421) == 1. – Davislor Oct 05 '18 at 04:01
  • i think i know what rem means but i cannot figure out how this Haskell code works. – robert bristow-johnson Oct 05 '18 at 04:04
  • @robertbristow-johnson In Haskell, quot is quotient and rem is remainder. It’s the variant of Euclid’s algorithm that uses modulus rather than subtraction. – Davislor Oct 05 '18 at 04:20
  • okay, so starting with 2, you have to trial divide every prime into the numerator and denominator until one or the other has a non-zero remainder. then move on to the next prime. – robert bristow-johnson Oct 05 '18 at 04:47
  • @robertbristow-johnson You do not need to do that to compute either the gcd or the lcm. There are much better ways. – Davislor Oct 05 '18 at 04:48
  • i don't see the other way that you are alluding to. – robert bristow-johnson Oct 05 '18 at 04:49
  • @robertbristow-johnson You might want to review Euclid’s algorithm for greatest common divisor. – Davislor Oct 05 '18 at 04:50
  • well, i guess i will Google it or wikipedia the topic. – robert bristow-johnson Oct 05 '18 at 04:51
  • @robertbristow-johnson In fact, trying to keep an array of all prime factors less than sqrt(ULONG_MAX) will fail catastrophically as soon as you try to extend it to 64-bit values. – Davislor Oct 05 '18 at 04:52
  • of course. this implementation i saw had 32 bit ints for numerator and denominator. and a big table of primes up to 2^16. i am now reviewing https://en.wikipedia.org/wiki/Euclidean_algorithm#Description . – robert bristow-johnson Oct 05 '18 at 04:53
  • 2
    Worth noting: The ANSI SQL language specification describes fixed-point types DECIMAL(<precision>,<scale>) (or synonymously, NUMERIC), and has for decades. Not that getting the implementation right is easy, but it's part of the language. – Steve Kass Oct 05 '18 at 14:45
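Stein's binary GCD algorithm mentioned in the comment thread above uses only shifts, comparisons, and subtraction, which suits CPUs like the 6502 or Z80 that lack a divide instruction. A minimal C sketch (the function name `binary_gcd` is my own, not from any particular library):

```c
#include <stdint.h>

/* Stein's binary GCD: no division or modulus, only shifts and subtraction */
static uint32_t binary_gcd(uint32_t a, uint32_t b) {
    if (a == 0) return b;
    if (b == 0) return a;

    /* factor out the powers of two common to both operands */
    int shift = 0;
    while (((a | b) & 1) == 0) { a >>= 1; b >>= 1; shift++; }

    /* a is made odd; each pass strips b's factors of two and subtracts */
    while ((a & 1) == 0) a >>= 1;
    while (b != 0) {
        while ((b & 1) == 0) b >>= 1;
        if (a > b) { uint32_t t = a; a = b; b = t; }
        b -= a;                 /* difference of two odds is even */
    }
    return a << shift;          /* restore the common powers of two */
}
```

On a stuck case like 81/421 from the comments, both operands are odd, so the loop proceeds purely by subtract-and-shift until it reports a GCD of 1 (the fraction is already in lowest terms).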
11

In fact, fractions often are used, especially by wise programmers on systems without hardware floating point capability. However, generally this is done where the same denominator can be used for all values to be considered in a particular computation. For a fixed denominator to work, the programmer must start by figuring out the maximum range of values and the required precision, determine a denominator which supports this, and then write the relevant portions of the program in that context. In simpler cases no actual manipulation of the denominator is needed - it's just implicitly assumed all the way through, though when multiplying two fractional values adjustment is of course required. Most often the denominators chosen are powers of two, so this adjustment typically ends up being a simple arithmetic shift - or in the simplest case, the denominator is two to the word width, so the shift is accomplished by not even bothering to compute the parts of the calculation which would produce the discarded part of the result.
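The implicit-denominator scheme described above can be sketched in C. This assumes a hypothetical signed 8.8 format (denominator fixed at 256, never stored); the names `fix8_8`, `add`, and `mul` are illustrative, not from any particular library:

```c
#include <stdint.h>

/* Hypothetical 8.8 fixed point: value = raw / 256; denominator is implicit */
typedef int16_t fix8_8;

/* Addition and subtraction are plain integer ops - denominators already match */
static fix8_8 add(fix8_8 a, fix8_8 b) { return (fix8_8)(a + b); }

/* Multiplication: the raw product carries denominator 256*256, so one
   arithmetic shift by 8 restores the implicit denominator of 256 */
static fix8_8 mul(fix8_8 a, fix8_8 b) {
    return (fix8_8)(((int32_t)a * b) >> 8);
}
```

For example, 1.5 is stored as the raw value 384 and 2.0 as 512; `mul(384, 512)` yields 768, which is 3.0 in this format. Overflow handling is omitted here, which is exactly the kind of range analysis the paragraph above says the programmer must do up front.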

Ultimately, the choice between computation which uses a fixed denominator, versus the variable binary exponents of floating point (when unassisted by hardware), is a software decision, not a hardware one.

Programmers writing efficient, high-performance code for such platforms would use integer or fixed-fraction arithmetic, both then and today. Programmers needing to deal with a wider range of values, or working on a program that would take longer to write than the amount of time users would ever spend waiting for it to run, might find floating point more suitable or more convenient.

If the systems you mentioned had a norm, it was set more by their packaged languages, especially a ROM BASIC. Typically, if someone is writing in BASIC, they want flexibility and approachability more than speed, and so many BASICs made their default variable type floating point (in effect, "hey computer, figure out how to represent this for me"). However, it was not uncommon for a BASIC to also support explicitly declaring a variable as an integer type. And of course some of the smaller/earlier ROM BASICs, such as Apple's Integer BASIC, didn't support floating point to begin with.

Chris Stratton
  • 918
  • 6
  • 12
  • 2
    I was hoping someone would write this answer and save me the trouble. Floating point is, in a way, just an automated version of this, where the computer keeps track of the denominator. But nothing stops the programmer from keeping track of the denominator and coding appropriately as described here. It's more work for the programmer, but less for the CPU. – Wayne Conrad Oct 01 '18 at 16:06
  • 2
    Fixed denominator is called fixed point. It's a totally different beast – edc65 Oct 01 '18 at 22:50
  • An interesting paper regarding such implementations (fixed-slash and floating-slash) is "Finite Precision Rational Arithmetic: An Arithmetic Unit" by Peter Kornerup and David W. Matula from 1983. PDFs available but not linked here for legal reasons. – deamentiaemundi Oct 01 '18 at 22:58
9

One more point on the topic. Floating point was designed so that almost all bit patterns of a number's memory representation are used meaningfully. With the exception of zeros, infinities and NaNs, every bit pattern is a different number. On the other hand, when using fractions, you get 1/2 == 2/4 == 3/6 == … etc. You either keep normalizing fractions (and end up never using the bit patterns corresponding to non-normalized fractions), or you have trouble even comparing numbers. So, in your proposed case of a byte for divisor and a byte for dividend, out of the 2¹⁶ bit patterns available for two bytes:

  • there are at least 127 bit patterns that represent 1/2,

  • there are at least 85 bit patterns that represent 1/3 and 2/3 each,

  • there are at least 63 bit patterns that represent 1/4 and 3/4 each,

  • there are at least 51 bit patterns that represent 1/5, 2/5, 3/5, 4/5 each,

…and for these 9 fractions you're already using almost 1% of all possible bit patterns ("at least" depends on defining corner case semantics, like: what number is represented when the divisor is zero?).

What's more, you're wasting close to half of all possible bit patterns on numbers where divisor > dividend.
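The redundancy can be counted exhaustively: reduce every byte-numerator/byte-denominator pair to canonical form via the GCD and count distinct results. A small C sketch (the function names are my own):

```c
#include <stdbool.h>

/* Euclid's algorithm for the greatest common divisor */
static unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) { unsigned t = a % b; a = b; b = t; }
    return a;
}

/* Count distinct values among all pairs n/d with n in 0..255, d in 1..255 */
static unsigned count_distinct_fractions(void) {
    static bool seen[256][256];        /* indexed by canonical (n, d) */
    unsigned distinct = 0;
    for (unsigned n = 0; n <= 255; n++) {
        for (unsigned d = 1; d <= 255; d++) {
            unsigned g = gcd(n, d);    /* gcd(0, d) == d, so 0/d -> 0/1 */
            if (!seen[n / g][d / g]) {
                seen[n / g][d / g] = true;
                distinct++;
            }
        }
    }
    return distinct;
}
```

There are 256 × 255 = 65280 pairs with a nonzero divisor, and the count of distinct values comes out well below that, confirming that a sizeable share of the encoding space is spent on duplicates.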

liori
  • 199
  • 1
  • 1
    There are representations of fractions that avoid this, for example the Stern-Brocot tree (but that again has implications for how the operations are implemented, making them even more difficult). – dirkt Oct 02 '18 at 11:09
  • 1
    IEEE FP has only one bit pattern each for -INF and +INF. You're right that +-0.0 is mostly redundant (although it does tell you which direction you underflowed from, if you're multiplying by a very small number). +-0.0 compare equal to each other and are equivalent when added to a non-zero number, but otherwise are distinct. NaN is where the vast majority of the not-usefully-distinct bit-patterns lie: there are 2 * 2^(significand_bits-1) - 2 of them. (*2 from +- NaN. -2 from infinities. -1 from the significand bit that indicates signalling or quiet, rest is "payload".) – Peter Cordes Oct 03 '18 at 10:09
  • why is it important to have unique bit patterns? – Magne Nov 22 '23 at 12:41
6

dirkt and alephzero provided definitive general responses. Mine is focused on one element of your question, the structure used to store a fraction. You proposed something on the order of:

struct mixedFrac {
    int           whole;
    unsigned char dividend;   /* numerator of the fractional part */
    unsigned char divisor;    /* denominator of the fractional part */
};

Note that this pre-limits general accuracy; there is only so much precision an 8-bit divisor can be depended on to provide. On the one hand, it could be perfect; 78 2/3 could be represented with no loss, unlike with floating point. Or it could provide great (but not perfect) accuracy, such as pi at 3 16/113 (accurate to 6 decimal places). On the other hand, an infinite quantity of numbers would be far from accurate. Imagine trying to represent 1 1/1024. Within the structure, the closest you could get would be 1 0/255.

The proposed structure could be easily simplified and improved with the use of a different structure:

struct simpFrac {
    long          dividend;   /* signed numerator */
    unsigned long divisor;    /* denominator, at least 1 */
};
Since the divisor is now represented with much more precision, a much wider span of general accuracy is allowed. And now that approximation to pi can be shown in traditional fashion, 355/113.

But as the others have pointed out, as you compute with these perfect-to-very-accurate fractions, you lose precision quickly, plus the overhead of maintaining the "best" divisor for the result is quite costly, especially if a primary goal of using fractions was to keep things more efficient than floating point.
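That overhead is visible even in a single addition: three multiplications, a GCD computation, and two divisions just to keep the result reduced. A hedged C sketch modeled on the struct above, using fixed-width types in place of long/ulong and with overflow checks omitted:

```c
#include <stdint.h>

typedef struct {
    int32_t  dividend;   /* signed numerator */
    uint32_t divisor;    /* denominator, at least 1 */
} frac;

/* Euclid's algorithm for the greatest common divisor */
static uint32_t gcd_u(uint32_t a, uint32_t b) {
    while (b != 0) { uint32_t t = a % b; a = b; b = t; }
    return a;
}

/* a/b + c/d: form the common denominator, then reduce by the GCD.
   Overflow of the intermediate products is ignored in this sketch. */
static frac frac_add(frac x, frac y) {
    int32_t  num = x.dividend * (int32_t)y.divisor
                 + y.dividend * (int32_t)x.divisor;
    uint32_t den = x.divisor * y.divisor;
    uint32_t g   = gcd_u((uint32_t)(num < 0 ? -num : num), den);
    if (g > 1) { num /= (int32_t)g; den /= g; }
    return (frac){ num, den };
}
```

Compare this with fixed point, where the same addition is one integer add - the reduction step is the price of keeping a unique "best" representation.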

RichF
  • 9,006
  • 4
  • 29
  • 55
2

Consider fixpoint arithmetic, that is pretty much what you are looking for:

A 32-bit value can, for example, be split into two 16-bit values with an implicit "binary point" between them. π would then be expressed as

1 * 2 + 1 * 1 + 0 * 1/2 + 0 * 1/4 + 1 * 1/8 + 0 * 1/16 + 0 * 1/32 + 
1 * 1/64 + 0 * 1/128 + 0 * 1/256 + 0 * 1/512 + 0 * 1/1024 +
1 * 1/2048 + 1 * 1/4096 + ....

That is pretty much a representation in fractions. The downside is a relatively small range of values, but if you can live with that, the upside is blazingly fast operations and a relatively good precision within the number range.

tofro
  • 34,832
  • 4
  • 89
  • 170
  • 1
    I don't consider fixed point to be "pretty much" the same thing as rational arithmetic... – Omar and Lorraine Oct 01 '18 at 14:54
  • 3
    @Wilson Then reconsider. Decimal math and the classical representation of fractional numbers don't work any differently. Writing down π as 3.1415926 is the very same thing – tofro Oct 01 '18 at 14:56
  • Please could you clarify why there is "a relatively small range of values"? Are you comparing the range of values of your 32-bit fixed point type with that of a struct type such as RichF suggested, with a total size of 32 bits? Or, if not that type, to what type? – Rosie F Oct 01 '18 at 15:42
  • @RosieF For a 32-bit FP, +/- 32768. For a 64-bit type, +/- 2 and some billions. You can adjust the range vs. precision by moving the fixed point up or down. – tofro Oct 01 '18 at 15:47
  • 4
    No, fixed point arithmetic and rational arithmetic are not the same. Fixed point arithmetic only allows powers of some base as denominator (and often the denominator is implicit). Rational arithmetic allows arbitrary denominators. That's why the actual implementation of arithmetic operations is very different for each. – dirkt Oct 01 '18 at 17:57
  • 1
    If you had written that fixpoint arithmetic is "pretty much" the same thing as floating point, I would agree - both are limited to using a power-of-two denominator. Your expanded example works equally well with floating point numbers. Calling it anything similar to rational arithmetic is just weird. – pipe Oct 02 '18 at 12:48
-1

Old flight simulators used to use a system called binary scaling. This was a system where an imaginary binary point was set on an integer.

What that meant was the numbers above the 'binary point' were ascending powers of two, and those below descending.

It used fractions to represent a number at any scale, to the maximum accuracy the integer could provide!

It meant they could get away without using slow floating point processors. Binary scaling can also mean more accuracy, because some of the bits in a float/double/REAL are used to hold the scaling, whereas binary scaled values used the context to store this information.

See https://en.wikipedia.org/wiki/Binary_scaling

Robin
  • 107
  • 1
  • 4
    That's also called "fixed point arithmetic" (see discussion in the other answers, and in the comments directly under the question). Powers of ten were also used for this in addition to powers of two ("binary scaling"). It's different from what is asked in the questions. – dirkt Oct 02 '18 at 11:11
  • I suppose binary numbers are sums of powers of two, which is similar to adding fractions. Also binary scaling is very useful for storing pseudo float constants in EEPROMs etc. Binary scaling was probably something specific to flight sims, and I agree it does overlap with fixed point arithmetic. – Robin Oct 02 '18 at 11:21
  • 4
    No, it's not similar to adding fractions. Rational arithmetic allows arbitrary denominators. Fixed point arithmetic allows only denominators which are a power of some base. And yes, it's very useful, and no, it's not specific to flight sims - mainframes and micros used fixed point arithmetic long before flight sims. Maineframes often with powers of ten, and a decimal representation, for business calculations. – dirkt Oct 02 '18 at 11:31
  • I only remember it being called binary scaling in flight sims. They also numbered it according to how many bits were considered integer (i.e. a B1 number could hold between -2.0 and 1.999). I know in DSP it is often called fixed point. – Robin Oct 02 '18 at 11:34
  • 2
    Fixed-point isn't always more precise than floating. For very small numbers, the nearest floats are very close together, but the nearest representable fixed-point numbers are the same distance apart regardless of the magnitude of the number. (Floating point has near-constant relative precision, fixed-point has constant absolute precision.) Floating point can handle a huge range of magnitudes, and has the same (relative) precision at any magnitude, until you run out of exponent bits and hit subnormal values very close to zero. – Peter Cordes Oct 03 '18 at 10:14
  • Good point Peter. Yes, especially with integration you had to be careful not to lose bits of precision. Having the scaling factors to hand, though, was useful for choosing the order in which variables were added as well, always starting with the smallest. That is something that can be easily overlooked when using floats. – Robin Oct 03 '18 at 10:21
  • 1
    @user13972. That is the reason most complicated filters in the Z domain are broken down into chains of biquads (2nd-order digital filters). It's also the reason finite element analysis (strength of beams etc.) in mech engineering runs into problems. – Robin Oct 08 '18 at 09:41