
I am old enough to remember computers that were not octet oriented. E.g. the first that I used was an ICL 4120. It had 24 bit words which were, when necessary, divided into four 6-bit characters. There were operations to support extracting the 6 bit characters from the words. There were no corresponding operations to extract three 8 bit sub-units.

I am excluding computers which can address memory in octets even if their word size is larger e.g. 16, 32, 64, etc.

To narrow down the question and exclude specialized chips, I will require that the computer be capable of some level of text I/O to a human, more than just a limited set of fixed messages.

Do any remain in production? If not which was the last? Two interpretations of last are interesting: the last to be launched and the last to remain on sale (as new). I am not asking for the last to be used as that is probably an impossible question.

badjohn
  • There are processors today which have "weirdo" sizes, for example PIC16 has 14-bit program memory. – Omar and Lorraine Oct 11 '18 at 09:39
  • Oh, and at least some modern-day Elbrus chips also have a BESM-6 compatibility mode; the BESM-6 was a 48-bit computer that did not have bytes. – Omar and Lorraine Oct 11 '18 at 09:41
  • @Wilson Thanks. Are they in niche uses? I found a Wikipedia page. I will do some reading later. – badjohn Oct 11 '18 at 09:42
  • Not really, but they're not the CPUs found in desktops or whatever either. For your question to be answerable you may need to define what you feel a "computer" is. – Omar and Lorraine Oct 11 '18 at 09:44
  • A lot of modern DSPs (e.g. TI TMS 32000 series) use "bytes" that have 16 bits: http://processors.wiki.ti.com/index.php/Byte_Accesses_with_the_C28x_CPU# – tofro Oct 11 '18 at 09:59
  • Many Harvard-type CPUs use different sizes for program and data memory, so PICs can be had in 12-, 14- and 16-bit program word sizes. If your question is about von Neumann machines, then we need to separate (logical) byte addressing from the physical interface - for example, modern x86 CPUs have a physical interface 8 or more bytes wide, while on a logical level they operate bytewise. And so on. There's no real answer to that. – Raffzahn Oct 11 '18 at 10:03
  • "byte" and "character" didn't always mean the same thing on word-addressable computers. For example the CDC 6600, 7600, and the early Cyber series word-addressable machines had 60-bit words divided into 5 12-bit bytes, but characters were 6 bits, with 10 to a word. There were machine code instructions that operated on bytes, but accessing a single character needed a sequence of shift and mask instructions. – alephzero Oct 11 '18 at 10:28
  • Chips from GreenArrays, Inc. use 18-bit-wide data and program memory; see http://www.greenarraychips.com/home/documents/greg/DB001-171107-F18A.pdf for example. – Baard Oct 11 '18 at 10:33
  • @alephzero Yes, as I said, my first experience was on a machine with 24 bit words. It is a long time ago but I think that support for 6 bit characters was just a few 6 bit shift and mask instructions. – badjohn Oct 11 '18 at 10:50
  • Did they ever disappear? As far as I know it was only home and office computers that have been 8 bit computers, and they are mostly 64 bit today. – UncleBod Oct 11 '18 at 11:13
  • I'd guess the 8-bit handling on most processors is there because you want to communicate with the user. Most I/O systems you have today are 8 bit. If you don't have that need, or have your own standard for I/O, 8 bits is not necessary (which was the case with the 6-bit handling on old systems). – UncleBod Oct 11 '18 at 13:11
  • The x64 architecture allows bytes to be written and read individually at the user-code level and I think at the bus level as well, since most programming languages specify that non-interlocked accesses to distinct bytes of memory by different threads will operate without interference. It would be possible to use cache-locking to guarantee that in hardware, but at a considerable cost even in the common no-conflict case. – supercat Oct 11 '18 at 15:03
  • @UncleBod Communicating with the user only explains 8 bits if you live somewhere that never writes anything except non-accented western characters (and that excludes most of Eurasia). 16-bit would be a more logical choice now that computers are fast enough to handle fonts with thousands of glyphs. – alephzero Oct 11 '18 at 16:08
  • @badjohn "just a few shift and mask instructions" was a speed-killer for C (and Unix) on the early vector machines because so much of the code base processed files one byte at a time, and had to run in scalar mode until it was completely rewritten. (And the first C compilers for those machines couldn't figure out how to vectorize code that used pointers either, which didn't help the situation!) – alephzero Oct 11 '18 at 16:19
  • 4-bit microcontrollers are still sold today, so arguably the last computer hasn't been made yet. – Ken Gober Oct 11 '18 at 16:25
  • Well, there are (and were) also bit-slicing CPUs out there: https://en.wikipedia.org/wiki/Bit_slicing – Spektre Oct 11 '18 at 16:32
  • Note that defining bytes as 8 bits wasn't always the norm. In languages like C++, byte is defined as the smallest addressable unit of at least 8 bits, possibly more... – PlasmaHH Oct 11 '18 at 16:34
  • I'm surprised this question didn't get a nybble. :) – Don Branson Oct 11 '18 at 17:50
  • A better term for an eight-bit value is "octet" rather than "byte". Standards documents focus on that to avoid misunderstanding. – Oct 12 '18 at 00:52
  • Random historical footnote: The Apollo Guidance Computer helped put men on the Moon and used a 15+1 bit one's complement memory. – Wossname Oct 12 '18 at 06:51
  • Modern x86 CPUs still contain FPUs that can't use 8-bit bytes, but rather operate only on 32-bit, 64-bit, or 80-bit numbers. If you want to look at it that way, the computer you're likely using right now isn't byte-oriented, at least if you look at the FPU portion of its CPU. – mnem Oct 12 '18 at 08:12
  • @mnem A nice example. Indeed 8 bits has no significance to the FPU. – badjohn Oct 12 '18 at 08:14
  • In the future please just update the question without adding "edit:" parts. They make the question more difficult to read (there is already an edit history page that everyone can look at if they want). – pipe Oct 12 '18 at 09:07
  • @pipe I could rephrase the whole question and merge in the updates. My reluctance was that it would invalidate many of the responses. – badjohn Oct 12 '18 at 09:31
  • If we interpret this question as the "size of the addressable unit" being other than 8-bits, then there are architectures which are bit-addressable. – simon.watts Oct 12 '18 at 14:22
  • You can program any pseudo-Turing-complete processor ("pseudo" because they don't need an infinite amount of storage) to do textual I/O. Maybe you can just control a simple LCD via I2C. – 12431234123412341234123 Oct 12 '18 at 14:48
  • @badjohn: I would suggest editing the question so it leads off with the current intention, but adding a footnote describing the earlier version of the question. – supercat Oct 12 '18 at 20:57
  • Re Update 4: the story I was told when I was a CS student was that byte was a term coined by IBM in the 1950s. Octet was used in place of byte in most international and communications standards documents so that it didn't have any association with IBM. – cup Oct 13 '18 at 04:54
  • All networking literature in English uses "octet" AFAIK. I don't think it's "rarely used". It's always used when there is any ambiguity on how big given bytes are, so it seems most fitting here. What about "last computer to use non-octet bytes"? – Agent_L Oct 13 '18 at 09:36
  • I guess it depends where you are. I work in a software house and I never hear octet. – badjohn Oct 13 '18 at 09:37
  • @badjohn I also work in a software house and I also never actually "heard" it. But I'm the "network guy" so I read it a lot. E.g. in the HTTP spec. – Agent_L Oct 13 '18 at 09:40

7 Answers

35

In the early 1990s CDC sold a line of Cyber 180 mainframes. These machines were descendants of the CDC 6600 and supported that machine's 60-bit word size and 6-bit characters. Notably, one of the innovations of the 180 over the 170 was that the 180 added support for 64-bit words and 8-bit characters, and it could run software written for both modes simultaneously. So this is probably towards the tail end of sub-8-bit character-oriented computing, at least at commercial data processing scale.

At the other end of the spectrum, HP calculators used a series of fully custom 4-bit CPUs that shipped as late as the early 2000s. These started out as custom multi-chip devices and evolved into what we'd now consider a system on a chip, with some mixed analog and digital logic on the same die. The CPU characteristics are what you'd expect from a custom chip for this purpose: highly optimized for calculator operations. This means 4-bit data paths and an ALU that has both BCD and binary modes. There were also 64-bit-wide registers (16 digits) for numbers and 20-bit-wide registers for the 1M-nybble address space. Also, there were a number of CPU operations for specific fields of the larger registers (mantissa, exponent, etc.). Around 2002-2003, HP switched to using a commodity ARM part running an emulator of the older custom CPU. (And I believe that was their final calculator architecture.)

mschaef
  • Thanks. This is the type of answer that I was hoping for. – badjohn Oct 12 '18 at 08:22
  • I worked on a Cyber 180 for a while in 1990-1991. They were officially called "super-minis". (BTW, you can see some of them getting blown up in one scene of the original "Die Hard".) – James Curran May 23 '19 at 14:43
28

Unisys continued shipping 36-bit systems far more recently than 1997. The last new 36-bit Dorado - the 800 series - was released in 2011, and superseded, per my recollection, by the Xeon-based, emulation-oriented 8300 series in 2015. (Xeon emulating the Dorado ISA had made up a progressively larger part of the Dorado product line since the late 1990s, but Unisys CPUs persisted at the high end. A similar story happened with Libra, Unisys's name for the Burroughs Large Systems 48-bit architecture.)

Groupe Bull released the last of their 36-bit line in 2004, the DPS-9000/TA300; it was subsequently superseded by Itanium systems running an emulator called V9000.

Lexi
19

Such a question is a bit difficult, or rather impossible, to answer. While it is true that most mainstream computers today use units of 8 bits for bytes and, at least for Latin text, characters, there always have been and still are exceptions. So, the answer to your "the last one" question probably is "there is none".

There are a number of widespread embedded MCUs with Harvard architecture that use 12-, 14- (PIC) or 16-bit wide (AVR) program memory and disallow 8-bit access to this memory. A "byte" in program memory for those MCUs thus has the above width.

The same thing applies to a lot of DSPs - they have byte widths of typically between 16 and 24 bits and very rarely allow (8-bit) byte extraction from this memory. Typical examples would be the TMS 32000 (TI) or the DSP56000 (Motorola/Freescale/NXP).

It is disputable whether systems based on either of these MCUs/DSPs should be considered "computers", but in my book they have to.

In a less strict sense, even relatively modern CPUs like MIPS could be considered to use, in this specific case, 32-bit "bytes". While the MIPS architecture has the concept of "8-bit bytes" in internal registers, MIPS CPUs technically cannot do less than 32-bit data transfers from and to memory. A similar restriction applies to the address registers in the Motorola 68k, and even your trusty PC's FPU will not work with anything that's 8-bit only.

What seems to have evolved as a kind of standard, though, is that register and data bus width on most of today's CPUs typically is a multiple of eight.

tofro
13

Univac 36-bit

The Univac 1100/2200 series used a 36-bit word. Many models had functions to work with a word as four 9-bit "bytes" - so they used the term byte, yet fit your definition of a non-byte computer since those bytes were not 8 bits. I had a few courses on an 1100/80 at the University of Maryland in the early 80s.

The 2200 series appears to have carried on the 36-bit architecture, which means it remained available until at least 1997 according to the Wikipedia article. But the ClearPath series, while compatible with the 1100/2200, also included Xeons, which would, by definition, mean it included plenty of 8-bit byte-oriented instructions.

9

The PDP-8 and its derivatives were 12-bit machines using (at least sometimes) 6-bit characters. While the last actual PDP-8 ceased production in 1980, by that time other systems existed that used microprocessors based on the architecture, and they continued to be produced for many years (the DECmate word processing system, for example, was produced until 1990). Also, as many PDP-8s found their way into industrial control systems and other applications that change infrequently, they have been very long-lived. It's not clear to me whether any are still in actual production use today, but certainly they were in use relatively recently, and this site suggests they were definitely in use as recently as 2000.

Jules
  • You might also mention the PDP-10. It had byte-oriented operators that could operate on bytes as small as 1 bit and as large as 36 bits. But the most convenient byte sizes were 7 bits for ASCII characters or 6 bits for reduced alphabets. You could use 8-bit bytes, but these would be stored 4 to a word, with 4 wasted bits in the word. – Walter Mitty Oct 12 '18 at 15:07
  • The 6120 chip that was produced as a PDP-8 in a DIP hasn't been made since the 1980s. The only sources now are recyclers in China. – scruss Oct 12 '18 at 23:35
6

There are plenty of chips around today that don't use 8-bit bytes. For example, some architectures of the Kalimba DSP used in CSR/Qualcomm chips use 24-bit words and 24-bit bytes.

It is an important distinction that a 'byte' is not 8 bits but at least 8 bits, and is determined by the minimum addressable unit of a processor. That's because there are operators in 'C' that specifically return the answer in bytes, not octets (sizeof being the most obvious).

K. Morgan
  • "a 'byte' is not 8 bits but at least 8 bits" according to the C standard. Outside of C, a byte can be as few as six bits: https://en.wikipedia.org/wiki/Six-bit_character_code – snips-n-snails Oct 12 '18 at 19:23
  • You're mixing byte, word size and addressable unit. These are way different issues for a CPU. – Raffzahn Oct 12 '18 at 21:25
  • Interesting, Raffzahn. Could you please let me know which part of my answer was incorrect? – K. Morgan Oct 14 '18 at 21:38
  • That's true @traal, I added the 'at least' part to avoid 'C' language lawyers but it is not a generic answer. I'll leave it unedited otherwise your comment won't make sense. – K. Morgan Oct 14 '18 at 21:45
3

The Intel 4040 processor may well have been the last commonly used sub-8-bit processor. Intel produced this series between 1974 and 1981. It had a 4-bit architecture (i.e. register size) and generally used Binary Coded Decimal (BCD) arithmetic and I/O. [Intel 4040 architecture diagram, from Wikipedia]

The Intel Intellec MCS 4/40 Microcomputer Development System was based on a 4040 processor, had text I/O based on a TTY interface, and PROM program memory. Note that the program memory was 8-bit, but the data RAM was only 4-bit, as were the registers and ALU. Intel referred to the addressable units of both the 4-bit and 8-bit memories as bytes.

As to last computer in active use, who knows? There are definitely working examples of several of these architectures in places like https://livingcomputers.org

P.S. Other 4-bit processors are still available for purchase today, e.g. Renesas 4508 group.

Burt_Harris
  • Thanks. I didn't ask for last in active use as I realised that it would impossible to answer. I had hoped that the last to be launched or sold would be possible. – badjohn Oct 12 '18 at 08:26
  • I don't know that the Intel 4004 has ever been used in a von Neumann architecture except possibly in a few development systems. Four-bit microcontrollers with ROM built in have been common for a long time. The electronic games Merlin and Simon were both based on four-bit microcontrollers, for example. – supercat Oct 12 '18 at 21:00
  • @supercat, I agree a 4004 was not a von Neumann computer, but used the Harvard architecture. Harvard architecture machines, however, are generally regarded as computers, as they perform calculations from a stored program. – Burt_Harris Oct 12 '18 at 21:10
  • @Burt_Harris: Except in a few development systems, the 4004 would use ROM for its program store. While one could build a 4004-based machine that would allow the user to supply a program (e.g. via pluggable cartridges) I don't think most 4004 machines were wired in such fashion. – supercat Oct 12 '18 at 21:17
  • @supercat, I agree ROM was typical in production applications. But as you say, there were 4040 development systems which supported self-hosted assembly language programming. As I remember, the MCS 4/40 was essentially a modified Harvard architecture with read/write program memory accessible as data. I think this development system meets the OP's recently updated requirements for having text I/O. I've updated my answer to include details. – Burt_Harris Oct 12 '18 at 21:48
  • There are tens if not hundreds of millions of non-4004 related 4-bit micro controllers sold every year even now. – KJ Seefried Oct 15 '18 at 00:38