Was there some particular design theory or constraint that made a 36 bit word size attractive for early computers? As opposed to the various power-of-2 word sizes which seem to have won out?
-
Related question: https://retrocomputing.stackexchange.com/questions/1621/what-register-size-did-early-computers-use – snips-n-snails Jul 23 '19 at 22:42
-
Back when people were starting to expect 32-bit integers, your Lisp interpreter could store 32-bits worth of immediate data and a 4-bit type code in a single machine word. (Don't ask me how I know!) – Solomon Slow Jul 24 '19 at 01:37
-
There is a specific case of this problem: current 64-bit x86 CPUs have a 32-bit mode, but with 36-bit address lines. That is called PAE (Physical Address Extension). It is very useful - you can combine the smaller RAM requirement of 32-bit processes, with the 64GB maximal physical RAM of newer, hard-core machines. The price is that no individual process will be able to see more than 4GB. – peterh Jul 26 '19 at 19:21
-
IIRC, PAE well pre-dates x86-64 (circa 2000/Pentium-III era?) – Alex Hajnal Apr 19 '21 at 10:39
-
Because 64 bits would have been a stretch. – dave Apr 19 '21 at 12:27
-
@AlexHajnal True, but then again x86-64 only allows up to 56 bit addresses - while no CPU I know of goes past 42 (IIRC) – Raffzahn Apr 19 '21 at 15:47
-
48 bits was a popular choice in the UK - Ferranti Orion, Atlas, English Electric KDF9. And the 24 bit FP 6000 / ICL 1900, which had 48 bit floats. – dave Apr 19 '21 at 22:42
9 Answers
Was there some particular design theory or constraint that made a 36 bit word size attractive for early computers?
Besides integer arithmetic, 36-bit words work quite well with two different byte sizes: six and nine. Six bits was what was needed to store characters of the standard code for data transmission at the time: Baudot code, or more exactly ITA2.
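For concreteness, a small Python sketch of that packing: six 6-bit or four 9-bit characters fit exactly into one 36-bit word. (The character codes below are arbitrary placeholder values, not a real historical code such as Fieldata.)

```python
# Illustrative sketch: packing characters into a 36-bit word.

WORD_BITS = 36

def pack(chars, char_bits):
    """Pack character codes into one 36-bit word, first character in the
    most significant position."""
    per_word = WORD_BITS // char_bits          # 6 chars at 6 bits, 4 at 9 bits
    assert len(chars) <= per_word
    word = 0
    for c in chars:
        assert 0 <= c < (1 << char_bits)
        word = (word << char_bits) | c
    # left-justify: shift past any unused character positions
    word <<= char_bits * (per_word - len(chars))
    return word

def unpack(word, char_bits):
    per_word = WORD_BITS // char_bits
    mask = (1 << char_bits) - 1
    return [(word >> (char_bits * (per_word - 1 - i))) & mask
            for i in range(per_word)]

if __name__ == "__main__":
    w = pack([1, 2, 3, 4, 5, 6], 6)    # six 6-bit "characters", no bits wasted
    print(oct(w), unpack(w, 6))
    w = pack([65, 66, 67, 68], 9)      # four 9-bit "characters", no bits wasted
    print(oct(w), unpack(w, 9))
```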
As opposed to the various power-of-2 word sizes?
There is no inherent benefit of power of two word sizes. Any number can do.
Even more, there were no 'various power-of-two sizes' in the early and not-so-early days. Before the IBM/360 settled on a 32-bit word size with four 8-bit bytes within a word and two nibbles in a byte, power-of-two word sizes were an extreme exception (I can't come up with any besides SAGE and IBM Stretch). The vast majority used word sizes divisible by 3, not least to allow the use of octal representation. Before the IBM /360 with its 8-bit bytes, octal was as common to computer scientists as hex is today - heck, Unix carries this legacy until today, making everyone learn octal at a time when hex is the generally accepted way to display binary data.
Now, the reason why Amdahl chose 8-bit bytes is rather simple: the byte size chosen had to be at least 6 bits to store a character, possibly 7 for the upcoming ASCII, but 8 would give the ability to store two BCD digits within it. Any larger byte size would again waste storage on this important element. Operating in BCD was one main requirement for the /360 design, as it was meant not only to be compatible with, but also to replace, all prior decimal machinery.
What seems today like a 'natural' use of powers of two is just a side effect of enabling a binary computer to handle decimal.
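A minimal sketch of the packed-BCD point, assuming the usual one-digit-per-nibble layout: an 8-bit byte holds exactly two decimal digits.

```python
# Illustrative sketch: two BCD digits packed into one 8-bit byte,
# one digit per 4-bit nibble.

def pack_bcd(tens, ones):
    assert 0 <= tens <= 9 and 0 <= ones <= 9
    return (tens << 4) | ones            # e.g. 4 and 2 -> 0x42

def unpack_bcd(byte):
    return (byte >> 4) & 0xF, byte & 0xF

if __name__ == "__main__":
    b = pack_bcd(4, 2)
    print(hex(b))          # 0x42 -- the hex digits read as the decimal value
    print(unpack_bcd(b))   # (4, 2)
```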
Conclusion: As so often in computing the answer is IBM /360 and the rest is history :)
-
43"There is no inherent benefit of power of two word sizes. " This is the most important part of this answer. Before microprocessors, computers were literally assembled by hand. If you didn't need more bits, you didn't wire them up. – DrSheldon Jul 24 '19 at 05:24
-
Impossible to say for certain of course, but I think the modern obsession with powers of two word sizes stems more from the PDP-11 than the IBM/360. The 8 bit byte does seem a natural size as it will nicely take two BCD digits or one EBCDIC character or one ASCII character including parity. That doesn't necessarily mean you have to have powers of two though. The Burroughs large system architecture had 48 bit words = 6 bytes or 12 BCD digits. – JeremyP Jul 24 '19 at 08:31
-
The PDP-11 came some 6 years after the IBM /360 - at a time when IBM ruled the market. It was rather that DEC had to switch to 8/16 bit (instead of 12 bit) to keep up. EBCDIC was made to fit the 8 bit rather than causing it. For ASCII as a base, a 7 bit word would have been natural, as no one stores error correction in band. That's something the memory controller does. Storing parity in band is a big waste of storage. And yes, there have been other 48 bit machines as well, and 60 and so on, but they are no power of 2. – Raffzahn Jul 24 '19 at 09:02
-
7-bit is a bit impractical; multiples of four give you a nice print out in hexadecimal, which makes manipulating and reading the binary data manually much easier. Of course, both powers of two (except for the first power) and multiples of eight are also multiples of four, but 36-bit fits the "multiple of four" (handy in hex), unlike both multiples of eight and powers of two. – Luaan Jul 24 '19 at 10:19
-
@Luaan nice, but that's retroactive. You might want to take a look at these machines. Hex may work fine with 36 bit, but it totally screws 18 bit, which was quite often a common half word on such machines. At that time octal was the way to go. It covers all commonly used divisors of 36 bit words (18, 9 and 6). Hex was, if anything, a quite exotic way to look at binary. World has changed, hasn't it? – Raffzahn Jul 24 '19 at 10:26
-
I've seen it argued that octal is pretty close to the ideal power-of-2 base for humans to deal with. Hex and larger are really tough to do mental math with, and anything smaller tends to end up with too many digits for a human to hold in their head. – T.E.D. Jul 24 '19 at 15:36
-
"The vast majority used word sizes dividable by 3 not at least to allow the use of octal representation". We managed to use octal quite happily on the 16-bit PDP-11. – dave Jul 24 '19 at 16:47
-
There is some interplay between instruction layout and choice of numeric base. If you position significant fields on multiple-of-3 bit boundaries (as per PDP-11), octal is natural. If multiple-of-4 bit boundaries, (as per VAX), hex is natural. Or, alternatively, you position your fields to match your favourite radix, with of course preference for hex on larger word-lengths due to the reduced digit count. – dave Jul 24 '19 at 16:52
-
4"There is no inherent benefit of power of two word sizes." Is that still true? I had always assumed there was an advantage to being able to address the bits in a word compactly, which may become even more important in modern complex CPUs. But now that I think about it, I'm having trouble coming up with a specific example where it'd help. – Cort Ammon Jul 24 '19 at 19:01
-
@CortAmmon, the advantage comes later, when computers start being assembled from commodity parts. If you've got 8-bit words, you can make your RAM from eight 1-bit chips, four 2-bit chips, two 4-bit chips, or a single 8-bit chip. If your word size isn't a power of 2, you've got fewer ways to mix-and-match parts. – Mark Jul 24 '19 at 20:52
-
@Mark That works as well with word sizes as multiples of three. 1, 3, 6, 9 or 18 would do well. There is no reason to prefer powers of two. In fact, steps of three allow even more combinations :) – Raffzahn Jul 24 '19 at 23:03
-
@CortAmmon: With variable-count shift instructions, x86 for example masks the shift count with `0x1f`, so 32-bit (and 64-bit with `0x3f`) shifts are unable to shift out all the bits, only from 8 or 16-bit regs. This masking is basically free for power-of-2 register widths, but a modulo by a non-power-of-2 would be more expensive. However, this wasn't a thing until 286 according to the manual entry, so it's just taking advantage of power-of-2 widths after the fact. 8086 just shifted 1 position per count for whatever was in CL (up to 255). – Peter Cordes Jul 25 '19 at 01:14
-
@CortAmmon: splitting a bit-index into a bitmap into word : offset components is much more efficient in software with power-of-2 byte or word sizes. On a 36-bit machine you might leave the low 6 bits for an offset and use the higher bits for a word-index, so incrementing has to manually wrap the low 6 bits, but splitting is cheap (shift / mask). – Peter Cordes Jul 25 '19 at 01:18
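A rough sketch of that word : offset split, contrasting a power-of-2 word size (shift and mask suffice) with a 36-bit word (an integer divide/modulo, or the manual wrap described above, is needed):

```python
# Splitting a flat bit index into (word index, bit offset).

def split_pow2(bit_index, word_bits=32):
    # word_bits must be a power of two: shift and mask are enough.
    shift = word_bits.bit_length() - 1       # 32 -> 5
    return bit_index >> shift, bit_index & (word_bits - 1)

def split_general(bit_index, word_bits=36):
    # Any word size works, but needs an integer divide and modulo
    # (or the "wrap the low 6 bits by hand" scheme described above).
    return divmod(bit_index, word_bits)

if __name__ == "__main__":
    print(split_pow2(1000, 32))      # (31, 8)
    print(split_general(1000, 36))   # (27, 28)
```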
-
@PeterCordes Erm, I'm not sure if this is a thing at all. The machines are binary anyway, so all indexing, shifting and all the works are the same. Sure, for a contiguous bitmap, addressing by powers of 2 might help. Then again, the most common use for bitmaps might be graphics. With colour, multiples of 3 work even better. A 36 bit word can hold 36/12/6/3/2/1 pixels at B&W/1/2/3/6/12 bit per colour. Quite handy and no wasted bit or reduced depth for one colour at all :)) – Raffzahn Jul 25 '19 at 10:03
-
@Raffzahn: I meant bitmaps like for a Sieve of Eratosthenes, or allocation bitmaps for disk sectors or memory blocks or whatever, or a bloom filter. In-use-or-not bitmaps aren't always as perf-critical as graphics or used as much, but they're not rare. And they need to be indexed with a bit-index into the whole bitmap. (Although graphics needs that too for drawing functions that take x,y coordinates.) – Peter Cordes Jul 25 '19 at 14:24
-
@PeterCordes You may believe me, I have done one or two bitmaps over the years. Still, they are not really that common a real-world task, neither in number of implementations nor in usage. They barely beat the real-world application of recursive programming - usually my most beloved example of where computer science education goes useless. Their use for file systems is only good for really tight cases. After all, the world's most used file system, FAT, doesn't use bitmaps. And for memory it's also more performant to use bytes (or words) for an allocation table, as that's usually faster. – Raffzahn Jul 25 '19 at 18:10
-
Yeah, they're not widely used, but Cort was looking for any example of a case where a power-of-2 word size helped with anything. Bit-indexing into a contiguous multi-word bitmap is certainly one. Bitmaps can actually be very efficient in the right use-cases, especially with modern short-vector SIMD (SSE2 / altivec / etc) to search forward or backward for the next-set or next-zero bit, then bit-scan to find it within a byte or word. e.g. for a "dense" searchable set of 16-bit integers. – Peter Cordes Jul 25 '19 at 18:27
-
@PeterCordes the point is that even this single example can be challenged and depends on many other factors to even come through, doesn't it? Also, I read his remark rather as meaning it would need to be a worthy example with consequences massive enough to make the case for power of two; otherwise, one can always find some usage, even for the oddest ISA (and yes, that includes 8051 :)) – Raffzahn Jul 25 '19 at 18:42
-
@Raffzahn: If I had a better more convincing example, I would have said that instead. >.< I agree with Cort that it feels like there should be something more important, but it seems there probably isn't any "killer app" for power-of-2 byte/word widths even in modern machines. But my point is that there are certainly some advantages for some data structures which we can take advantage of now that we have switched to power-of-2 sizes. I'm not arguing those were part of the motivation; maybe, maybe not at all for some HW designers. – Peter Cordes Jul 25 '19 at 18:56
-
I learned programming on a computer with 18-bit words, so we used octal when referring to multi-bit groupings. I still have to think a bit when looking at hex values > 9 to figure out which is which. :-} – Bob Jarvis - Слава Україні Jul 26 '19 at 16:39
-
@Raffzahn: Many forms of encryption and data compression both benefit from power-of-two word sizes. Whether it's better to store a table of bits packed one per byte or one per bit depends upon the balance of reads and writes. For a large table that is predominantly read, the reduced cache footprint of a packed table may vastly outweigh any extra cost of address computation and masking. – supercat Jul 26 '19 at 17:04
-
5Re "Unix carries this legacy until today, making everyone learn octal", I presume you are talking about
chmod? If so, you are mistaken. Using the numeric form ofchmoddoesn't require any knowledge of octal since one doesn't need to know the number formed. Each digit is independent of the others. Knowledge of hex would do just as well, as so would memorizing the meaning of 4,5,6,7. – ikegami Jul 26 '19 at 19:47 -
16 bit works great with octal. Numbers are of the form `177777`, so you can easily see the most significant bit. You can see if a number is signed, for starters. Doubly useful when jump-if-positive and jump-if-negative are common operations. And if the system distinguishes direct addressing from indirect addressing using bit 15... – ikegami Jul 26 '19 at 20:16
36 bit word size attractive
Many sizes have been tried, but fundamentally the choice comes down to precision; from Wikipedia on 36-bit:
Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code.
As opposed to the various power-of-2 word sizes?
There was no requirement to conform to pre-existing specifications: for example, there was no internet, and even simple disc files were not easily shared between computers back in those days.
The key point made by Wikipedia seems to be:
Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator....Computers, as the new competitor, had to match that accuracy....
Many early computers did this by storing decimal digits. But when switching to binary:
Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum).
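A quick check of the "35 bits would have been the minimum" claim, assuming a sign bit plus magnitude: ten decimal digits need a magnitude of up to 9,999,999,999, which takes 34 bits, plus one more for the sign.

```python
# Sanity check: how many bits for a signed ten-decimal-digit integer?

max_magnitude = 10**10 - 1                 # 9,999,999,999
magnitude_bits = max_magnitude.bit_length()
print(magnitude_bits)                      # 34, since 2**33 < 10**10 <= 2**34
print(magnitude_bits + 1)                  # 35 with a sign bit -- the minimum
```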
35 bits is obviously a slightly more awkward size than 36 bits anyway, but there are other reasons to choose 36 if your minimum size is 35 bits.
36 bits was on average a bit more efficient when packing characters into a word, especially for the 6-bit character encodings common at the time:
Char size | 35 bit word       | 36 bit word
----------+-------------------+-------------------
6-bit     | 5 + 5 bits unused | 6 + 0 bits unused
7-bit     | 5 + 0 bits unused | 5 + 1 bit unused
8-bit     | 4 + 3 bits unused | 4 + 4 bits unused
If you intend to make smaller computers later, having registers that are exactly divisible by two makes having some level of data interoperability easier, if not perfect. (Numerical data in a single large word can easily be split into two smaller high and low words, and a 6-char x 6-bit word can be split into two 3-char words, but splitting a 36-bit word with packed 7- and 8-bit character data would result in either splitting parts of characters between the smaller words or adding additional smaller words and ending up using more bits than the original larger word.)
-
I think the smaller computers with half the wordsize is an important point. DEC, who made a fair few 36 bit computers, already had the PDP-7 and other 18-bitters on the market for a long time. And of course, 36 also is a multiple of 12, another wordsize they used (PDP-8) – Omar and Lorraine Jul 24 '19 at 15:36
-
@Wilson Actually, I'm starting to doubt that point now: DEC's first computer, the PDP-1, was 18-bit, and they moved up to 36-bit only later. The major 36-bit architecture predating DEC was the IBM 701 and its descendants, but I can't find any evidence that they ever created a smaller 18-bit version of that. (If anything, the PDP-1 was that smaller version!) Still, I suppose they could have planned for that, even if they didn't do it. – cjs Jul 24 '19 at 15:47
-
w.r.t. 7 bit character data and splitting parts of characters between words: The 36-bit PDP-10 architecture had instruction set support for variable-sized "characters" and compilers used that to pack 5 7-bit characters per word - with one bit left over, which didn't hurt addressing at all, but which clever application programs would use for all kinds of things ... Frequently programs used both 6-bit (where only upper case and digits and some punctuation was needed - like symbol tables) and 7-bit (where you wanted a full alphabet plus digits plus punctuation) character representations. – davidbak Jul 25 '19 at 02:24
-
@davidbak That's a clever trick, but rather beside my point. I've tried to clarify that part of the answer. The issue is that with 6-bit chars, a 36-bit word of 6 chars divides nicely into two 18-bit words of 3 chars each, but with 7-bit chars a 36-bit word of 5 chars cannot be divided into two 18-bit words without splitting a character between those two words. Given the various difficulties that would produce, it would make more sense to allocate the 7-bit chars amongst three 18-bit words, but then you are significantly changing both the size and processing of the storage. – cjs Jul 27 '19 at 02:45
-
@CurtJ.Sampson - I did get your point. But we're talking about 36-bit word machines, like the PDP-10. And though you could do halfword stuff on that machine, you didn't. You used it fullword all the time. You couldn't address halfwords.. And I don't have a history in front of me - but the PDP-10 came after 3 generations of Digital Equipment Corp's 18-bit machines, and was intended to be a "mainframe", so they weren't particularly worried about "making smaller computers later" with any kind of compatibility. – davidbak Jul 27 '19 at 03:08
-
@davidbak Yes, and 36-bit word machines like the IBM 701 (April 1952) and the Univac 1103 (October 1953), both released years before PDP-1 design was started in 1959 (and years before DEC was even founded). I too had originally thought that 18-bit machines would have preceded 36-bit machines, but I can't find any 18-bit machines before the late 50s (at which point clearly everyone had knowledge of the IBM 701), and 18-bit machines don't make sense in light of the "ten digit calculator" explanation, anyway. – cjs Jul 27 '19 at 03:58
I'm going to address the power of 2 part of the question.
Keep in mind that before microprocessors, computers were assembled by hand. Increasing the number of bits in a computer was really a big deal. Each time you added one bit to the word size, you would need
- more parts in the register file
- more parts in the ALU
- more wires in the buses
- more cells in memory
- more parts (relays, vacuum tubes, transistors, or small-scale ICs) to make all the above
- more circuit boards
- more time to assemble and solder (or wire-wrap) the parts
- and more cost for the parts and labor of all of the above.
This wasn't just a one-time design cost. Every unit sold had these extra costs for each added bit. If there wasn't a good reason to add another bit, they didn't add it. And rounding up to the next power of 2 was not a good reason.
This wasn't just limited to processor word sizes. Drum memories were not powers of 2. BCDIC, the 6-bit predecessor of EBCDIC, was not a power of 2. ASCII was not a power of 2 (7 bits).
So why are powers of 2 dominant now?
- IC transistors cost practically nothing compared to relays, vacuum tubes, or discrete transistors. You don't have to hire someone to solder them together. So there's little incentive to keep the part count low, and little penalty to round up the number of bits to a power of 2.
- Automated chip design tools make it very easy to add more bits during chip design.
- Doubling the width of registers can often make a new architecture compatible with the old architecture, either as source code or actual executables. There were many 18-bit systems, and some of these architectures went on to become 36-bit systems.
- Intel created the first commercially-available microprocessor as a power of 2: the 4004 was 4 bits. Subsequent architectures doubled the register size, resulting in power-of-2 architectures: the 8008 was 8 bits, 8086 was 16 bits, and 80386 was 32 bits.
- Competition causes different manufacturers to offer something similar to their competitors. There was a time when 18 bits was popular among several manufacturers. Then 36 bits were in vogue. Then 8-bit microprocessors. Followed by the age of 16-bit processors, then 32 bits, and 64 bits today.
- Finally, powers of 2 seem "natural" or "elegant". We are suspicious of a platform that isn't so, even if it is perfectly valid. Would you like to buy this lovely 67-bit processor? No?
-
Actually, 36 bits was "in vogue" before 18 bits; 36 bit machines were common in the 1950s (IBM 700 series, UNIVAC 1103) but 18-bit machines didn't appear until 1960 or so (PDP-1), as far as I can tell. – cjs Jul 24 '19 at 13:36
-
1"Would you like to buy this lovely 67-bit processor? No?" If I had a legitimate need to natively (bignums weren't nearly as easy to work with on the hardware of the day) handle numbers on the order of 2^65 to 2^66, then sure, why not? 2^32 is only about 4.3 billion; barely enough to accurately store the number of humans alive on Earth even at the time, let alone today. If you're doing fixed-point arithmetic, it's even worse. When you're working on a clean slate design and don't have to worry about compatibility with anything else, any word size can at least be a contender, if not a good one. – user Jul 24 '19 at 13:40
-
@aCVn: Yet if I had offered a 128-bit processor, some people would rationalize a need for it. It can do cryptography. It can do vector arithmetic. And many would simply think I need it because it must be better. The point is that there is clearly a difference between how critical our thinking becomes. We want the power of 2 processor to be better, but we look for reasons to reject other word sizes. – DrSheldon Jul 24 '19 at 14:29
-
8Yes, "8008 was 8 bits, 8086 was 16 bits" -- but just look at all the octal-oriented structure in the instruction sets! Three bits was just right to specify one of eight registers, so the instruction values were actually easier to interpret in octal than in hexadecimal. 110 is "load C from B", 103 is "load B from E"... – jeffB Jul 24 '19 at 16:39
-
@jeffB: Given that assemblers, disassemblers, and compilers have existed for 60+ years, there's little reason to make instruction words human-readable. Nor does the number of bits to select a register need to be a power of 2; every number from 1 to 8 has been used as the width of a register select field. – DrSheldon Jul 24 '19 at 17:05
-
@DrSheldon Of course assemblers/disassemblers have been around forever, but people have been patching things by hand or reading raw dumps during that entire time. And of course you can build as many registers as you want, but the 8080 and its immediate descendants did have three-bit register select fields. I assumed that was why my Z80 pocket guides always showed instructions in both hex and octal, even though my tools all strongly preferred hex. – jeffB Jul 24 '19 at 18:01
-
Note that some of those reasons still apply today. For example, AMD64 processors currently only support 52 bit physical addresses. – Jörg W Mittag Jul 25 '19 at 16:49
-
@JörgWMittag - WHAT?!? They only support 4,503,599,627,370,496 bytes of memory?!?!?!?!? IT'S AN OUTRAGE!!!!!! :-) (I joke, but in three years you'll probably be able to get 4 petabytes of storage on your phone. <sigh>) – Bob Jarvis - Слава Україні Jul 26 '19 at 17:06
-
@CurtJ.Sampson The Macchina Ridotta, built in Italy in 1957-1958, had 18-bit words. See http://128.84.21.203/pdf/1904.00944 . It was an early prototype of a 36-bit computer built later. – Federico Poloni Jul 26 '19 at 20:37
-
@FedericoPoloni Right, but that computer in fact directly took ideas from the 36-bit IBM 701 released at least two years before development started on the Macchina Ridotta. From the article: "This solution [to constructing the adder] was published in an article describing the arithmetic unit of the IBM 701. The comparison with the CSCE blueprints leaves no doubt about the sources that inspired the MR designers." – cjs Jul 27 '19 at 02:37
-
@BobJarvis-ReinstateMonica ...and you'll still get "Your phone is running out of space" messages. – Alex Hajnal Apr 19 '21 at 10:44
-
@FedericoPoloni - the computer I learned about programming on (an EDP-18; EDP was a company called something like "Educational Data Products") had an 18-bit word. Interestingly, the main memory was a rotating drum storage device, so you had to wait about 30 seconds for it to come up to speed before you could begin working. :-) – Bob Jarvis - Слава Україні Apr 19 '21 at 12:52
Wiki page 36-bit shows some reasons (all copied from the page):
"This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code. "
And for characters:
- six 5.32-bit DEC Radix-50 characters, plus four spare bits
- six 6-bit Fieldata or IBM BCD characters (ubiquitous in early usage)
- six 6-bit ASCII characters, supporting the upper-case unaccented letters, digits, space, and most ASCII punctuation characters. It was used on the PDP-6 and PDP-10 under the name sixbit.
- five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention, called five-seven ASCII)
- four 8-bit characters (7-bit ASCII plus 1 spare bit, or 8-bit EBCDIC), plus four spare bits
- four 9-bit characters (the Multics convention).
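The fixed-width entries in that list follow directly from integer division; a short sketch reproducing the counts and spare bits:

```python
# Characters per 36-bit word and leftover bits for fixed-width codes.
WORD_BITS = 36
for char_bits in (6, 7, 8, 9):
    count, spare = WORD_BITS // char_bits, WORD_BITS % char_bits
    print(f"{char_bits}-bit characters: {count} per word, {spare} spare bit(s)")
# 6-bit: 6 per word, 0 spare   7-bit: 5 per word, 1 spare
# 8-bit: 4 per word, 4 spare   9-bit: 4 per word, 0 spare
```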
When I was first exposed to this stuff in engineering school in 1978, I was taught that a "byte" could be either six or eight bits; the former were usually represented as two octal digits, and the latter by two hex digits. Most of the computers I used in college (PDP-8s and a CDC 6600) were based on six-bit bytes.
There were quite a few computers using odd word sizes in the '70s; probably there were more different architectures based on 6-bit bytes than 8-bit bytes. The PDP-8 was a 12-bit machine; Harris actually sold a microprocessor compatible with the PDP-8 instruction set.
DEC also made 36-bit machines for a while. The CDC6600 and 7600 were 60-bit machines. I gather that there were quite a few 18-bit machines in military applications, but I've only ever worked with those architectures in emulation (and I'm confident there are still emulators of 18-bit processors being built).
There probably are still 36-bit machines (or at least 36-bit software) running production EDI applications, because General Electric kept using their own computers in their EDI services business long after they'd sold that hardware business off to Honeywell (and in fact after Honeywell sold it to Bull). Although these days I'd guess they're running in emulation on hardware with 8-bit bytes.
From my perspective there was no more rationale than success in the marketplace, and the turning point was Intel's choice of 8 bits for single-chip microprocessors.
-
IMO the turning point was IBM's choice of an 8-bit byte for the System/360, which (along with its successors) became the dominant mainframe computers for the next 30 years. – Bob Jarvis - Слава Україні Jul 26 '19 at 17:11
-
In my experience of two very different machines, octal was used for 8-bit quantities. PDP-11 (16 bit word/8 bit byte). KDF9 (48-bit word/8 bit syllable, "syllabic octal" used when writing a word as six syllables). – dave Jul 26 '19 at 17:19
-
I also occasionally saw 8- and 16-bit words represented in octal. Lot more octal than hex in those days. – jefuf Jul 26 '19 at 17:40
-
I suspect in many cases octal was used for 8- and 16-bit byte/word sizes because octal had already been used extensively in previous machines (where it made more sense) and both existing knowledge and existing code (e.g., when building cross-assemblers or porting other tools) could be re-used. DEC had spent more than ten years producing various 18-, 12- and 36-bit machines by the time they released the PDP-11. – cjs Jul 27 '19 at 01:09
I happened to find an explanation of the 36-bit floating-point advantage over 32-bit in a Wikipedia article, which I found interesting but have been unable to verify. It says:
The 1100 Series has used a 36-bit word with 6-bit characters since 1962. This word and character size was a Department of Defense (DoD) requirement.[citation needed] Since the military needed to be able to calculate accurate trajectories, design bridges, and perform other engineering and scientific calculations, they needed more than 32 bits of precision. A 32-bit floating point number only provided about 6 digits of accuracy while a 36 bit number provided the 8 digits of accuracy that were accepted as the minimum requirement. Since memory and storage space and costs drove the system, going to 64 bits was simply not acceptable in general. These systems use ones' complement arithmetic, which was not unusual at the time. Almost all computer manufacturers of the time delivered 36-bit systems with 6-bit characters including IBM, DEC, General Electric, and Sylvania.
Well, it now seems to me that the big surprise is really IBM's adoption of 32 bits, rather than the other firms sticking with 36 bits into 1996 or even the new century.
PS: As for charset, it's not that surprising.
The 6-bit character set used by the 1100 Series is also a DoD mandated set. It was defined by the Army Signal Corps and called Fieldata (data returned from the field). The 1108 provided a 9-bit character format in order to support ASCII and later the ISO 8-bit sets, but they were not extensively used until the 1980s, again because of space constraints.
Thus I don't think packing five 7-bit ASCII characters into 36 bits was a big deal, or that ASCII justified 8-bit bytes.
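The precision figures in the quote above can be roughly sanity-checked from the mantissa widths, using digits ≈ mantissa bits × log10 2. (The 27-bit mantissa assumed below for the 36-bit format matches the UNIVAC 1100 single-precision layout as best I know; treat it as an assumption.)

```python
# Rough decimal precision from mantissa width: digits ~= bits * log10(2).
from math import log10

def decimal_digits(mantissa_bits):
    return mantissa_bits * log10(2)

# 32-bit IEEE-style float: 24 significand bits (23 stored + 1 implicit).
print(f"32-bit float: ~{decimal_digits(24):.1f} digits")   # ~7.2 (roughly 6-7 safe digits)
# 36-bit single precision assumed as 1 sign + 8 exponent + 27 mantissa bits.
print(f"36-bit float: ~{decimal_digits(27):.1f} digits")   # ~8.1 digits
```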
Computers used to come in all sorts of varying word sizes for varying reasons. When only large businesses could afford a computer, they bought or designed a computer for the reasons they needed. Thus, there were systems with 4, 6, 8, 13, 16, 18, 24, 26, 32, 36, etc. bits.
There have been computers using binary and even ternary (trinary) representations.
Eventually, due to the popularization of Intel CPUs, along with many other RISC chips being 16, 32, or 64 binary bits, these became the standard.
Windows 7 x64 Home only allowed 8 or 16 GB of memory address space in 64-bit mode.
Today, most 64-bit CPUs have a 48-bit memory interface, with 56 bits as an option. Many BIOS/EFI implementations don't expose the full 48 bits, and might only allow 36, 38, 40 or however many bits of memory space. E.g. many systems cannot address more than 16 GB or 64 GB, or whatever. The CPU and OS can use the remainder as swap/page file space.
-
1. This doesn't address the particular reasons that someone looking at any of those bit ranges you mentioned would choose 36. 2. The size of the address bus isn't relevant here; even a number of the 36-bit computers being talked about didn't have a 36-bit address bus. (The IBM 701 had 12-bit addresses, for example.) 3. Address space cannot be used as "swap space." – cjs Jul 27 '19 at 00:53
-
You've sort of just written "varying reasons" where the answer should be. – wizzwizz4 Jul 27 '19 at 11:56
-
The particular reason is that a developer, programmer, manager, or designer chose that number. Same with the other options. Many choices were simply made to meet some need at the time. They may not meet the current or the next need. The choice might become a standard or be replaced. – MikeP Jul 28 '19 at 04:28
The main reason I was taught back in the 70s was that the original binary (vs. decimal) IBM machines (e.g., the IBM 701) could store two 18-bit addresses (about the most memory one could afford at the time) in a single word, which pushed the development of a programming language that took advantage of it. That capability then caused later machines to inherit 36 bits as a minimum word size. The two halves were also named "CAR" and "CDR". Yep, it was Lisp.
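A sketch of how a cons cell's CAR and CDR can share one word, using the two-18-bit-halves simplification from this answer (on the actual IBM hardware the address fields were narrower than 18 bits, but the idea is the same):

```python
# Sketch: a Lisp cons cell as one 36-bit word holding two 18-bit half-word
# "addresses". Which half held which field on the real hardware is not
# modelled here; CAR in the high half is an arbitrary choice for this sketch.

HALF = 18
HALF_MASK = (1 << HALF) - 1

def cons(car_val, cdr_val):
    assert 0 <= car_val <= HALF_MASK and 0 <= cdr_val <= HALF_MASK
    return (car_val << HALF) | cdr_val

def car(cell):
    return (cell >> HALF) & HALF_MASK

def cdr(cell):
    return cell & HALF_MASK

if __name__ == "__main__":
    cell = cons(0o1234, 0o4321)
    print(oct(cell), oct(car(cell)), oct(cdr(cell)))
```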
-
Right, the decimal machines (e.g. using binary coded decimal) represented decimal numbers (i.e. used 4 bits to represent 0-9), and I believe virtually all the machines that preceded the 701 did that. So the representation of 12 was different in a decimal machine than a binary machine. See here: https://en.wikipedia.org/wiki/Decimal_computer – I'm Pliny Mar 22 '23 at 21:57
-
Thanks - thought that's what was intended, but not quite confident to make the edit for you! – Toby Speight Mar 23 '23 at 07:26