26

Please don't point out APUs with x86_64 cores used in current-generation game consoles; these are not part of the question

I cannot recall a single arcade system or game console that ever used x86 for its CPU. I'm happy to be corrected in the comments if there were some. Even allowing for such exceptions, which must be incredibly rare, it sure seems that gaming hardware steered away from this otherwise incredibly popular CPU family.

Why is this the case? What tradeoffs in game hardware design likely led engineers away from using x86 for the CPU?

Brian H
  • 60,767
  • 20
  • 200
  • 362
  • 12
    I think Mad Planets, Krull, and Q*Bert were all based on a 16-bit x86 platform. – supercat Apr 18 '19 at 15:29
  • 9
    Adding to supercat’s examples, there were a few x86-based games consoles: the FM Towns Marty (1993) used a 386SX, the WonderSwan (1999) a NEC V30 MZ. The Xbox nearly counts as retro now ;-). – Stephen Kitt Apr 18 '19 at 16:00
  • 3
    Several Irem arcade machines also used NEC V30 CPUs. – Stephen Kitt Apr 18 '19 at 16:08
  • 18
    I'm not sure why you're making a distinction about "discrete" CPUs. The PlayStation 4 and Xbox One have x86 CPUs with integrated graphics, like almost all x86 PCs these days. In any case, you're going to have to come up with different criteria to exclude the original Xbox and its discrete Pentium III. –  Apr 18 '19 at 16:52
  • 5
    Just looking at the list of Sega arcade systems, there are multiple systems using discrete x86 processors: Chihiro, Lindbergh, Europa-R, RingEdge, RingWide, RingEdge 2, Nu, ALLS. Due to the shift towards home gaming causing the decline of arcades, I'd venture to say that you aren't going to find any non-PC based arcade systems anymore. – user71659 Apr 18 '19 at 18:21
  • 1
    The Konix Multisystem had an 8086 shoved into it at a relatively late stage so that it could be sold as 16-bit. Had it ever been sold at all. – Tommy Apr 18 '19 at 18:25
  • 1
    @RossRidge I believe Brian's limitations are meant to restrict this to designs up to like a 386 or 486 - and to explicitly exclude modern incarnations to focus on development before that, isn't it? – Raffzahn Apr 18 '19 at 23:33
  • 2
    @Raffzahn Yes. I was thinking about retro systems, as always for this site. Xbox probably meets people's definition of retro, though. – Brian H Apr 19 '19 at 02:14

6 Answers

53

Video game hardware, whether for home consoles or arcade machines, is designed pretty much from scratch. Hardware designers have largely free rein in choosing a CPU, basing their choice on factors like cost and ease of programming. The Intel 8086, quite frankly, was a poorly designed processor and was never well regarded. While you could argue it made reasonable compromises at the time it was released (1978), these compromises ended up hanging around its neck like an albatross. If IBM hadn't picked the Intel 8088 for its Personal Computer in 1981, you probably wouldn't be asking this question.

We take the x86 architecture for granted today, but before the IBM PC it was fairly obscure, and afterwards it was widely ridiculed. In particular, it compared poorly with the Motorola 68000, which had a flat 24-bit address space, a more orthogonal instruction set and sixteen 32-bit registers. The 8086 used a segmented 20-bit address space, placed more restrictions on how various registers could be used, and only had eight 16-bit registers. It also wasn't particularly cheap, though the 8088 with its 8-bit data bus helped reduce overall costs compared to the 8086.
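
To make the segmentation complaint concrete, here's a minimal sketch in C (purely illustrative, not taken from any real toolchain) of how a real-mode 8086 forms a physical address. Every pointer is really a 16-bit segment paired with a 16-bit offset, and many different pairs alias the same byte:

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode 8086 address translation: the 16-bit segment register is
       shifted left by 4 bits and added to a 16-bit offset, yielding a
       20-bit physical address (1 MiB). No single register spans the space. */
    uint32_t phys_addr(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + offset) & 0xFFFFF; /* 20-bit wrap */
    }

    int main(void)
    {
        /* Two different segment:offset pairs naming the same byte. */
        printf("%05X\n", phys_addr(0x1234, 0x0010)); /* prints 12350 */
        printf("%05X\n", phys_addr(0x1235, 0x0000)); /* prints 12350 */
        return 0;
    }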

During the 70s and the first half of the 80s, 16-bit CPUs like the 8086 and 68000 weren't really much of a consideration. The games of this era didn't demand anything more powerful than an 8-bit Z80 or 6502. While there were Gottlieb/Mylstar arcade games like Q*Bert in the early 80s that used a 5 MHz 8088 CPU, it's not clear what advantage this gave the machines. Performance in games of this era was mostly limited by how fast the CPU could access memory, and because of how the 8086/8 was designed, the 8088 was effectively about as fast as a Z80 or 6502. These Gottlieb/Mylstar arcade games also only had 64k (16-bit) memory maps, so they didn't benefit from the 8088's 20-bit address space.
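
As a rough back-of-envelope sketch (assuming the commonly quoted figures: a 4-clock bus cycle on the 8088, one memory access per clock on the 6502, and roughly 3-4 T-states per access on the Z80), the peak memory bandwidth of these CPUs comes out nearly identical despite their very different clock speeds:

    #include <stdio.h>

    /* Back-of-envelope peak memory bandwidth for typical early-80s game
       CPUs. The cycles-per-access values are the commonly quoted ones;
       real throughput also depends on instruction mix, wait states and
       video memory contention. */
    int main(void)
    {
        double mhz_8088 = 4.77, clk_8088 = 4.0; /* 4-clock bus cycle     */
        double mhz_6502 = 1.00, clk_6502 = 1.0; /* 1 access per clock    */
        double mhz_z80  = 4.00, clk_z80  = 3.5; /* ~3-4 T-states/access  */

        printf("8088: %.2f MB/s\n", mhz_8088 / clk_8088); /* ~1.19 */
        printf("6502: %.2f MB/s\n", mhz_6502 / clk_6502); /* ~1.00 */
        printf("Z80:  %.2f MB/s\n", mhz_z80  / clk_z80);  /* ~1.14 */
        return 0;
    }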

By the mid-80s, games had started moving beyond the capabilities of 8-bit CPUs. While the dominance of the IBM PC in the personal computer market at this point meant there were programmers out there familiar with the 8086, few of them would have been singing its praises. By and large, 68000 CPUs were chosen for new arcade game hardware designs that needed more power than 8-bit CPUs offered. Console hardware, being more cost sensitive, stuck with 8-bit CPUs for the rest of the decade, though most of the next generation went with 16-bit CPUs, either the 68000 or the 65816. It's also worth mentioning that the two major new home computer designs of the mid-80s, the Commodore Amiga and Atari ST, also went with the 68000.

The 80386, introduced in 1985, arguably solved a lot of the 8086's problems, with a more orthogonal instruction set, a 32-bit flat address space and 32-bit registers. But it wasn't until the early 90s that games started demanding the level of performance it offered, and that its price dropped enough to make it competitive in new hardware designs. It's not entirely clear to me why it didn't attract more interest at this point. The early 90s was also about the time that the IBM PC became the premier platform for home computer gaming. The 80386 would've inherited the disdain its predecessors had, but there were some arcade boards designed in the early 90s that used the 8086-compatible NEC V30 family of CPUs. I think the main factor against it at the time was that RISC-based architectures were then considered the future, while CISC-based architectures like the x86 and 68k were considered obsolete. Still, that didn't stop Sega from using the CISC-based NEC V60 CPU in its arcade hardware designs in the early 90s.

For the rest of the 90s, though, RISC-based CPUs like the Hitachi SH and IBM PowerPC dominated arcade hardware designs -- at least at the high-performance end. At the lower-performance end, cheaper 68k- and NEC V30-based designs were still in use. In the home console market, the 5th generation was almost all RISC CPUs, though notably the Japan-only FM Towns Marty used an AMD 386SX CPU. For the most part, this situation continued until around the turn of the century, with both arcade games and the 6th generation of consoles.

A big exception is Microsoft's Xbox. A 6th generation console, released in 2001, it has an Intel Pentium III CPU, much like PCs of the time. It's not surprising that Microsoft, with its long experience with the x86 CPU, made this design choice, but it's only a few years after this that mainstream Intel and AMD CPUs start appearing in arcade hardware. These x86-based arcade machines aren't really new hardware designs, though; they're PC clones running Windows or Linux. The 7th generation of home consoles went exclusively with PowerPC CPUs, but I suspect this had more to do with the prices IBM was offering than with the relative technical merits of the CPUs. Arcade games went increasingly with PC-clone-based hardware.

Today the choice of CPU in current game hardware designs is unremarkable. Home consoles and arcade games use x86 CPUs just like our personal computers do. Handheld consoles use ARM CPUs just like our phones do.

So, in the early days of game hardware design, x86 CPUs weren't chosen simply because there was no good reason to use one except for IBM PC compatibility. Later 32-bit x86 CPUs solved a lot of the architecture's problems, but RISC CPUs were seen as more modern. Today the ubiquity of the x86 architecture, combined with its unrivalled speed, has turned it into the dominant CPU architecture for game hardware that doesn't need to run off a battery.

  • 4
    Certainly x86 is ubiquitous, but the tablet/netbook cores they use in the Xbox One/PS4 are mediocre in performance at best. I think the driving factor is that a major GPU vendor, AMD, had in house SoC IP that they could use and integrate. As Nintendo shows, NVidia+ARM, benefiting from all the game engine work on smartphone gaming, is a viable competitor. – user71659 Apr 18 '19 at 21:57
  • 2
    I think Microsoft even named the X-Box in homage to the DirectX API. – Brian H Apr 18 '19 at 21:58
  • 1
    The 8086 included some design missteps, but it was pretty well designed for a 16-bit processor. No 16-bit processor is going to be able to access large amounts of memory as efficiently as a 32-bit one, but I can't think of any better general design for how to have a 16-bit processor access 1MiB of address space. The 8088 performs poorly, but that's largely because the 8086 was designed to be used in 16-bit systems rather than 8-bit ones. The 80286 performs poorly, but that's because its designers failed to recognize and retain some of the key benefits of the 8086 design. – supercat Apr 18 '19 at 22:20
  • Also you should emphasize the fact that the 8088 (which belongs to the x86 family despite the last digit of its part number) was used in arcade machines. – supercat Apr 18 '19 at 22:21
  • 1
    @supercat Most of my third paragraph deals with the use of the 8088 in arcade games. The 68000 is an example of a much better designed 16-bit CPU. The 8088's performance problems, while made worse by the 8-bit bus, exist in the 8086 too: https://en.wikipedia.org/wiki/Intel_8086#Performance –  Apr 18 '19 at 22:38
  • 14
    @RossRidge: Describing the 68000 as a 16-bit CPU is like describing the Z80 as a 4-bit CPU because of the size of the primary ALU. Very few instructions are limited to operating on 16-bit quantities, stack operations are expanded to 32 bits, and pretty much everything about the architecture is 32 bits other than the fact that many 32-bit operations are automatically performed in two 16-bit chunks. – supercat Apr 18 '19 at 23:15
  • 1
    @supercat Nonetheless the 68000 is often regarded as a 16-bit CPU. Whatever you call it, it was the 68000 most 8086 detractors pointed to as an example of a better designed CPU and it was the 68000 most gaming hardware used when 8-bit CPUs didn't cut it anymore. –  Apr 18 '19 at 23:29
  • 1
    @RossRidge: The 68000 was a much more complicated chip than the 8086. Further, the 8088 does a better job at handling data structures between 32,768 and 65,520 bytes in size. The 68000 requires moving to 32-bit offsets once things pass the 32767-byte barrier, while the 8088 can effectively use 16-bit offsets with objects up to 65,520 bytes (getting the last 16 bytes can sometimes be tricky). For its level of cost and complexity I think the 8086 mostly does pretty well. Not without missteps, but one wouldn't have to add much to it to make it just about perfect given cost constraints. – supercat Apr 19 '19 at 01:20
  • 1
    @supercat Yah, I'm not sure why you're trying to fight old battles here. The x86 architecture won out in the end anyways. –  Apr 19 '19 at 01:29
  • 4
    @supercat Speaking as a former compiler writer: the 8086 had too few registers, and too many of them had hardwired special functions in the ISA - and that's when you consider the 8086 on its own, before comparing it to the 68K. (Given what it had, though, the addressing modes were good.) – davidbak Apr 19 '19 at 16:22
  • 2
    @davidbak: There are only two 8-bit or 16-bit architectures I can think of that I'd consider even remotely nice to write a compiler for when support for recursion is required: the 8x86 and the 6809. The PIC family could be decent for supporting a C-like dialect if one uses storage-class qualifiers to place data into segments. Sure, a 16-bit machine like the 68000 is nicer, but compared to the Z80 the 8088 is pretty darned sweet. – supercat Apr 19 '19 at 20:20
  • @supercat why 32767 bytes given the support for 16-bit offsets? Just because the offsets are signed, so you'd have to put the base 32768 bytes after the start of the object and, if you're a human being, probably then confuse yourself quite a lot with negative offsets? – Tommy Apr 20 '19 at 15:32
  • @Tommy: All of the 68000 instructions or addressing modes that combine a 32-bit address and a 16-bit register sign-extend the register. Code which wanted to use 16-bit offsets between 32768 and 65535 would need to manually zero-extend the 16-bit value before adding it to the address register or using an indexed addressing mode with it. – supercat Apr 20 '19 at 16:31
  • @supercat I was thinking of the (d16, An) addressing mode ("Address Register Indirect with Displacement" per the Programmer's Reference Manual) — 32-bit register plus 16-bit offset. The offset is sign extended, which I guess is just another way of saying that it's signed. So e.g. if A0 = 32768 then you can address the range from 0 to 65535 using that addressing mode. (p2-8 of the real document, which is p49 of the most common PDF) – Tommy Apr 20 '19 at 18:18
  • @Tommy: If one loads an address register with a value which is 32768 higher than the actual base address of the structure one is interested in, and xors the data register with 32768, one could do that, but if one is going to go to all that trouble it would usually be easier to simply sign-extend the data register and then use 32-bit address calculations. – supercat Apr 20 '19 at 21:11
  • @supercat ah, I think I see how I've misunderstood now. I'm imagining struct-type access: an address register plus fixed offsets embedded in each load and store. So no data register involved in address calculation whatsoever. Which is fine for flat data structures, but not for anything more complicated. Apologies for the digression. – Tommy Apr 20 '19 at 21:34
  • 1
    @Tommy: Ah, you were thinking of the immediate-displacement forms. There too, while it would be possible to maintain pointer values that point 32768 bytes above the start of a data structure and then use the base+displacement mode to access the whole thing, that wasn't usually done. Instead, programmers simply lived with a 32767-byte limit. – supercat Apr 20 '19 at 22:33
  • If I recall correctly, the CPU in the Xbox was actually from the Celeron line, not the Pentium III one. It was the sort of thing big OEMs making systems for the budget market would buy. Skimpier on the cache and lower clocks compared to its contemporary P3 counterparts, if memory serves. – Rohan Oct 22 '21 at 12:33
11

The original Xbox was an ~$800 computer sold at a loss, with embedded hardware making it impossible to use as such. To my knowledge, Microsoft was the first company to take that gamble: betting that it couldn't be hacked and used as a home PC, which would have negated the sales of the peripherals and software they got kickbacks on. They took that gamble because they could afford it (they're a computer company, not a gaming company), and they won big because they had competent people design the system, and a marketing strategy to suit.

Mazura
  • 420
  • 2
  • 9
  • It took quite a while before the X-Box was hacked. Apparently some very elaborate tricks were used. – Thorbjørn Ravn Andersen Apr 19 '19 at 20:02
  • 1
    iirc, it took about a decade before anyone even claimed to have hacked it. That's well beyond the service life of a console, and beyond its useful life as a PC even if it ever did become one. And I assume this strategy continues: I built a refurbished PC when Fallout 4 came out for ~$500. Its specs are all half of an Xbox One; thus an X1 is/was a ~$1000 computer. – Mazura Apr 20 '19 at 01:51
9

Konix Multisystem: 6 MHz 8086 (1989).

Sure, it was cancelled just before release, but it got amazing press (I remember Jeff Minter raving about it at Earls Court) and some of it lived on as the (68k based) Atari Jaguar.

scruss
  • 21,585
  • 1
  • 45
  • 113
7

The original 8086 was quickly overshadowed by the Z80, which was somewhat compatible but easier to work with as it required less support hardware. Also, many arcade developers preferred the 6502 and its derivatives, and later the 68000, which was easier to work with on both the hardware and software fronts.

Another issue was that the development machines available for testing code were often 68000-based as well; one prime example was the Sharp X68000. A lot of game programmers of that era were self-taught too, and among hobbyist home computer systems the Z80 and 6502 dominated, with very few using the 8086.

Finally, the 8086 was much more expensive than the Z80, while offering no real advantages over it unless you were expecting to buy millions and sell a line of compatible computers for years to come.

user
  • 15,213
  • 3
  • 35
  • 69
  • 12
    This would make sense if the CPU in question was the 8080, but the 8086? Yes, it was much more expensive than the Z80, but the Z80 wasn’t compatible with it, and the 8086 was much more capable than the Z80... – Stephen Kitt Apr 18 '19 at 16:38
  • 5
    @StephenKitt Not as much as you might think, for games in the 80's. If you compare a 4 MHz Z80, a 1 MHz 6502 and a 4.77 MHz 8-bit 8088, they all accessed memory at about the same speed, which is most of what a game of that era is doing. A 16-bit 8086 could potentially double this speed by accessing two bytes at a time, but only by greatly increasing the cost. If that cost could be justified then it would also justify using a 68000. –  Apr 18 '19 at 17:11
  • @Ross ah, right, yes, I was thinking in terms of registers, register sizes, address space, faster arithmetic etc. – Stephen Kitt Apr 18 '19 at 17:17
  • 1
    @RossRidge: If fulfilling a device's RAM and ROM requirements would require an even number of banks of each, the relative cost advantage of putting them all on one 8-bit bus, versus having half on the upper 8 bits of a 16-bit bus and half on the lower 8 bits, would be relatively slight. The instruction size vs capability tradeoffs for the 8086 were designed so that code fetching and execution could be asynchronous tasks that would happen at about the same speed. The 8088 effectively cuts prefetch speed in half, which means the processor spends most of its time awaiting code fetches. – supercat Apr 18 '19 at 22:30
    @RossRidge The 8086 did have several essential features that would have made it great for console development, not just its 16-bit speed advantage (which the 8088 misses for the most part). Most notable here is a 1 MiB address range without the need for external helpers. But its 40-pin package forced it to use a multiplexed AD bus, needing latches for demultiplexing, so that's somewhat of a draw. Here a (sped-up) Z80 with some banking might give the same result at lower cost. – Raffzahn Apr 18 '19 at 23:32
  • 1
    "The original 8086 was quickly overshadowed by the Z80, which was somewhat compatible" -- compatible how? – Wayne Conrad Apr 19 '19 at 20:31
4

I have no info about gaming consoles, but when I was creating my own computing hardware (for different purposes) I did not use Intel CPUs, as they required additional ICs just to be able to work (this persists to today, as PCs still have chipsets on board...). I preferred the Z80 and/or MCUs (even Intel ones), as they did not need those support chips, and as you know, fewer ICs usually means a PCB that is cheaper and easier to design and manufacture... Once MCUs' computing power matched my needs I stuck with those, and I am still using them in my designs today.

So my bet is that in the early days game console designers used similar reasoning...

Spektre
  • 7,278
  • 16
  • 33
3

SEGA made several arcade boards in the early 2000s that used discrete Intel CPUs paired with Nvidia GPUs, starting with the Chihiro and continuing with later boards.

Robin Whittleton
  • 131
  • 1
  • 1
  • 5