48

Why did manufacturers of home computers avoid using the 6809 CPU? I realize that the Z80 and 6502 had a 3- or 4-year head start in availability. But once it did become available in 1978, I don't understand why designers of new computers didn't choose it. I can only think of the TRS-80 Color Computer (home market) and the Commodore SuperPet (educational market) as mass-produced computers which used it.

It doesn't surprise me that it was not used in any business computers. New = unnecessary risk to many of those potential customers. But most home users didn't care much about having a well-known operating system and large preexisting library of professional applications.

The 6809 had inherent advantages. From the Wikipedia:

The 6809 was, by design, the first microprocessor for which it was possible to write fully position-independent code and fully reentrant code in a simple and straightforward way, without using difficult programming tricks. It was also one of the first microprocessors to implement a hardware multiplication instruction, and it features full 16-bit arithmetic and an especially fast interrupt system.

It seems strange to me that the engineers of manufacturers would not have been attracted to these advantages, and striven to build machines utilizing it.

RichF
  • 4
    The Dragon 32/64 also; almost exactly like the CoCo but I think because both are based on the support chips provided directly by Motorola (and, especially, the 6847) rather than because the one is based on the other. – Tommy Feb 18 '18 at 23:02
  • 17
    "Couldn't run CP/M" was not helping. – Thorbjørn Ravn Andersen Feb 19 '18 at 00:57
  • 4
    It was used a lot. By the French company Thomson, at least. They sold a lot of computers (MO5, TO7), all using the 6809. They were cheap, and found a way into a lot of homes and schools. – dim Feb 19 '18 at 05:17
  • @dim and Tommy, thanks for the info. I had heard of the Dragon, but I didn't think of it when writing my question. The Thomson machines must have gotten very little press in the USA. Europe and Australia seemed to be more flexible in accepting more of a variety of machines. Even American-designed machines such as the Exidy Sorcerer and Commodore Amiga got a good deal of respect. – RichF Feb 19 '18 at 05:46
  • 1
    The very first prototype Mac built by Burrell Smith used a 6809, which makes me wonder now where it came from. Wasn't it used in a laser printer? – No'am Newman Feb 19 '18 at 07:01
  • 3
    It was used in a large number of industrial microcomputers and quite a few office micros. OS/9 was one of the best real-time OSs around at the time - and still in use. – Chenmunka Feb 19 '18 at 10:01
  • 2
    Indeed, the Thomson machines have mainly been sold in France (but had a great deal of success there due to a govt project bringing computers into schools), and a few neighboring countries. Anyway, the 6809 has probably been used a lot in lesser-known computers like these. There was a (French again) company called Goupil, as well, who made a computer that had several processor options, one of which was the 6809. – dim Feb 19 '18 at 11:07
  • 1
    As an aside, the Z80 did support position-independent code. I wrote and sold a hacker toolkit in Z80 assembly in the 1980's that was position-independent and could easily move itself around in memory (self-relocating). – Steve Jones Feb 19 '18 at 15:05
  • 2
    In comparing to the Z80, you have to consider that most Z80 based systems were treated by at least application software as being 8080's - the hardware design became simpler, but software efforts could continue an already proven and supported heritage. – Chris Stratton Feb 19 '18 at 18:17
  • Another thing that I don't see mentioned is dollars and cents. A salesman working on commission would rather spend the time selling the more expensive compatible, which wouldn't require as much hand-holding. – tinstaafl Feb 20 '18 at 03:55
  • @tinstaafl For home computers this would have been a small factor, with the computers on the shelf in stores like Target and Sears. The sales folks generally didn't know a lot, and it didn't matter to them if you bought a computer or a microwave oven. In the "old days" (early to mid-80s), the general chains didn't sell a lot (if any) of business systems. They might encourage you to add a printer, though. – RichF Feb 20 '18 at 07:31
  • 3
    The following is not exactly an answer to your question, but might be of interest. There was a real-time multi-tasking operating system "OS-9" that started with a 6809 version (which gave it its name) and became quite successful later when ported to the 68000 family. Personally, I only used OS-9/68000, so I have to guess a little what the predecessor looked like. OS-9 as a real-time OS needed low interrupt response times, so the 6809 probably was a good match. The OS made use of position-independent, reentrant code, by having the software organized in "modules" containing code and read-only data. – Ralf Kleberhoff Feb 19 '18 at 09:50
  • @RichF - Mind you it did matter a lot to Radio Shack and led to them dropping the CoCo completely – tinstaafl Feb 21 '18 at 03:32
  • @tinstaafl I never worked for Radio Shack, but from what I saw they supported the CoCo until the days of 8-bit computing were coming to an end. They sold the CoCo for 11 years (1980 - 1991), and it had 3 major revisions during its life. Perhaps the third generation was a bit under-designed to make the Tandy 1000 more competitive, but even that machine (a better PCjr clone) was considered more of a home system than business. – RichF Feb 21 '18 at 04:00
  • @SteveJones, supporting PIC means having addressing modes that make it very easy to write PIC with little reduction in execution speed. PIC works with the run-time PC address. That's not the Z80 by any stretch of the imagination :-) Practically all CPUs can have utilities written to relocate their code, that's what linkers do and what DOS/Windows did when loading EXEs. – TonyM Apr 03 '18 at 10:21
  • @TonyM Yes, of course, but I meant PIC, in that it could move around in memory at runtime. It was a dynamic process, so you could get it to move around at will, not via a linker, etc. It was a hacker toolkit for the ZX Spectrum, so it had to be able to move around in memory, depending which game you were investigating. There was no reduction in execution speed, as it was all machine code. Happy days 8^) – Steve Jones Apr 03 '18 at 20:02
  • IIRC, the 6809 did not have an "especially fast" interrupt system. To the 6800's regular IRQ it added "fast" interrupts (FIRQ), but these were still a bit slower than the 6502's. Both FIRQ and 6502 IRQ were so much faster than 6800/6809 IRQ because they pushed just the PC and condition code register onto the stack, whereas 6800/6809 IRQ pushed all the registers. – cjs Jul 23 '19 at 13:31
  • 1
    @Tommy: The Dragon and the CoCo are based on the same reference design by Motorola, which was published to show how the support chips could be used together. They also both got their BASIC from Microsoft, who had little reason to make them different. – John Dallman Feb 17 '20 at 13:19
  • Short routines could also be made position-independent on the 6502. It was even required on the Apple II for ROM code read from slots if you wanted the card to be slot independent. Of course, for routines longer than a page it was not possible without code patching for JSR and JMP instructions. – Patrick Schlüter Oct 16 '20 at 09:07
  • @PatrickSchlüter: I often find myself looking back on 1970s-1980s dev tools and thinking they should have better supported the creation of self-relocating programs, at least for relocation on 256-byte boundaries. On something like the Apple II, an I/O card driver running patched code from RAM near the top of the address space could be much more efficient than one which has to determine which slot it's running from before processing each character. – supercat Jul 22 '21 at 20:36
  • @supercat The high-end operating system OS-9 (the real one, not the Apple one that stole the name) did exactly that. As far as I know, everything written to run on it was position-independent by design. – RichF Jul 23 '21 at 06:18
  • @RichF: Did OS-9 use a patching loader, or simply exploit the fact that code could use register-relative addressing? – supercat Jul 23 '21 at 14:41
  • @supercat to the best of my knowledge, 6809 os-9 languages are written to compile position independent code. The loader simply transfers the code from disk without patching anything. There may be languages or compiler switches to modify this behavior, but for the most part positional independence was just expected behavior. – RichF Jul 24 '21 at 14:39
  • @RichF: An instruction like "lda 1234,pcr" would be inherently relocatable, but take four bytes and nine cycles to execute. An "lda 1234" that was patched by a loader would not be relocatable once patched, but would take three bytes and execute in five cycles. The performance penalty of making code fully relocatable may have been less than on other microprocessors, but was still significant. – supercat Jul 25 '21 at 18:16
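To make the position-independence discussion in the comments above concrete, here is a minimal 6809 sketch (standard Motorola assembler syntax; the labels are placeholders, and the byte/cycle figures are the ones supercat quotes):

          lda   table        ; extended addressing: 3 bytes, 5 cycles, but the
                             ; absolute address of table is fixed at assembly
                             ; time, so the access goes wrong if code and data
                             ; are loaded at a different address
          lda   table,pcr    ; PC-relative indexed (16-bit offset): 4 bytes,
                             ; 9 cycles; the assembler stores the distance from
                             ; this instruction to table, so code and data can
                             ; be loaded anywhere as long as they move together
          leax  msg,pcr      ; the usual idiom for taking the address of local
                             ; data in a position-independent way ...
          lda   ,x+          ; ... and then walking it via X with post-increment

    table fcb   $12          ; placeholder read-only data carried with the code
    msg   fcc   "HELLO"

This is the trade-off supercat describes: full position independence costs an extra byte and a few cycles per access, but, as the other comments note for the Z80 and 6502, it needs no load-time patching or self-relocation tricks.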

10 Answers

55

Why did manufacturers of home computers avoid using the 6809 CPU?

I can't really see that anyone 'avoided' it. There have been many successful machines using the 6809. Besides the already mentioned Tandy CoCo, there were other computers for the general audience, like

  • the Fujitsu FM-8 (1981),
  • the Fujitsu FM-7 (1982), and
  • the later FM-77 series.

(Not exhaustive, there might be many more, as it's just from memory)

They were quite successful in Japan throughout the 1980s and, somewhat strangely, the FM7 also in Portugal. The Fujitsu machines even featured two 6809s, the second one operating as an independent graphics subsystem.

Then there was Thomson as a major French player with the

  • Thomson TO7/8/9 series (starting 1982), and
  • Thomson MO5/6 series, a somewhat low cost version of the TO7(*1).

These machines had good sales and a strong following (still today) in French speaking countries. In other places they were rather rare (*2). The MO6 was also OEM'ed by Olivetti as Prodest PC128.

(For all these machines it might be more useful to read the corresponding French/Italian/Japanese Wiki pages than the English ones :))

Another successful machine was the British Dragon 32/64 series of 1982. They are often described as Tandy clones, but that's rather due to the fact that both use Motorola's SAM chipset. Compared to the CoCo, they offer a better keyboard and an on-board parallel interface.

And let's not forget the Vectrex (1982), and the fact that the Macintosh prototype, developed around the same time, was also 6809-based.

I realize that the Z80 and 6502 had a 3- or 4-year head start in availability. But once it did become available in 1978, I don't understand why designers of new computers didn't choose it.

For one, the above examples do show that it was used, but it takes some time to decide on, design, and market a new machine. So while the raw CPU may have been available in late 1978, the above examples show that it took roughly 3 years for computers using this new CPU to show up. That is comparable to the Atari series, which only showed up 4 years after the 6502 CPU became available.

It doesn't surprise me that it was not used in any business computers. New = unnecessary risk to many of those potential customers. But most home users didn't care much about having a well-known operating system and large pre-existing library of professional applications.

It might be less simple here. Professional users don't care about the machine or its CPU. They care about certain applications. If a manufacturer keeps supporting those applications after switching the CPU, users will gladly buy the new, incompatible machine.

Now, with third-party software it becomes more complicated. If a manufacturer can convince the software vendors of the new system and its future sales, they will support it and users will follow. Otherwise it's safer to make the new machine compatible. Back in the 1970s and early 1980s, professional software was rather closely tied to computer manufacturers, so switching CPUs wasn't uncommon. Their decisions were hardware-driven and supported by good profit margins, allowing them to spend large amounts on software ports.

On the fast-moving home computer market, margins were rather small, and a change of machine design that would require a complete software rewrite was unaffordable. That's why Commodore stayed so long with the 6502. It was less expensive to patch some parts of the Kernal for a new video controller while keeping the same old CPU.

Pagetable has published a nice write-up showing how Commodore kept reusing code in the Kernal.

The 6809 had inherent advantages. From the Wikipedia [...]

I guess that (and, for sure, the usage of the full 64 KiB) were the main reasons for the University of Waterloo Computer Systems Group's development of a 6809 daughterboard for the PET - what later became known as the SuperPET after Commodore bought the design in 1981 (*3).

In fact, the SuperPET was one of many 6809 add-on cards for existing machines, like The Mill for the Apple II or the 6809 Tube module for the BBC.

It seems strange to me that the engineers of manufacturers would not have been attracted to these advantages, and striven to build machines utilizing it.

By the time the 6809 became available, the game was no longer played by some lone engineer starting a new computer, but by bigger companies, and it was driven by much more than just curiosity about a new chip. Still, the wide usage of the 6809 as the CPU in other systems, from knitting machines to street lights and telephone systems, does show that engineers did appreciate its additional abilities.

Also, and maybe even more importantly, 16-bit CPUs (8086, 68k, 32k) became available at about the same time as the 6809. And the Mac is a great example that switching over to 16 bits brought even more advantages, especially in terms of memory, than just using a more advanced 8-bit unit. Kind of a 'too little, too late' case.

Conclusion: I don't think the 6809 was avoided. There have been many successful systems. But it was already too late to successfully compete with upcoming 16-bit systems.


*1 - Thomson somewhat screwed their own success by making the MO5 not fully compatible. While the hardware is quite similar, they mixed up the memory map, thus making exchangeable programs less common.

*2 - Keep in mind, most machines had their home markets, and the respective companies were rather niche players in other parts of the world. For example, Tandy was a big name in the US, but never really a thing in continental Europe. Much like Thomson machines were big in France, Belgium and Italy, but exotic in other parts of Europe. Interestingly, they were somewhat successful in Britain. Similarly, Japan had a completely separate ecosystem.

*3 - Reading the SuperPET history reveals that the original 6809 choice even came from IBM(!) as part of the MICROWAT program they developed for the University of Waterloo CSG.

Raffzahn
  • 2
    Thank you for this well-researched and point-by-point response. With my North American perspective, I was only aware of your list's Dragon computer. I guess the Vectrix was sold in the USA, but I don't remember it, not having much interest in game consoles. I'm glad the rest of the world was more open to a broader spectrum of home/office machines. – RichF Feb 19 '18 at 15:25
  • 1
    I wouldn't call it more open. Just different markets, with companies believing they had a chance. In the case of Thomson, try to imagine if GE had come up with a home computer in 1982 to match some government recommendation for a nationwide school system. That might give a comparable picture. – Raffzahn Feb 19 '18 at 18:44
  • Vectrex. I had one of those. :) – Almo Feb 19 '18 at 20:38
  • Nice answer! Shame the question's closed, but The Mill card would've been a great answer for “What early home computers have more than one CPU, where both could be used by the programmer?”. It looks like it was a fully independent 6809 that could interact with the 6502 in the Apple //. – scruss Feb 20 '18 at 02:03
  • @scruss The 6502 wasn't independent, as the two could only work interleaved. The same mechanics are true for the Z80. Then again, there have been truly independent cards for the Apple II where the additional processors had separate memory and could really work in parallel. – Raffzahn Feb 20 '18 at 06:17
  • If you insist on using a plain hyphen (Divis) where it shouldn't occur, OK. But care to explain why the suggested edit was making "drastic changes"…"deviating" when the whole intent was only to fix grammar and unclear sentence constructions? (e.g. the conclusion: "was successful" … "too late to successfully" is confusing). I learned your hyphen quirks and leave them be; but I would like to save your time – and mine in the future. – LаngLаngС Nov 01 '18 at 23:41
  • @LangLangC First off, I really appreciate your effort (and the effort others invest) to improve my way less than acceptable spelling. No doubt, without it this would be a horrible read. Still, not every change is an improvement. It might be less noticeable to you, but there have been near-fights about spelling in some posts - including multiple changes between US and British spelling and back. Or people changing passages more about taste than content. I usually go along, and enjoy how people think their variation of grammar is the only one to go with. Where I'm not going along is ... – Raffzahn Nov 02 '18 at 00:21
  • ... when people change meaning. In this case, you changed "many successful" to "many relatively successful". That's inserting an unintended notion, and the reason why I chose to single out that item among the unwanted changes. I learned to trust your corrections, and only do a very quick skimming, not expecting any issue, only looking at larger changes. All the more did this surprise me. Again, I very much appreciate your help, and while I would prefer English grammar and spelling over US style, I'm the last to argue about grammar and spelling at all. ... – Raffzahn Nov 02 '18 at 00:22
  • Regarding hyphens of any kind, periods, or even more so the use of HTML, I do actively shun them. I consider them at best unnecessary eye candy, but more often than not harmful. I try to stick to plain ISO 646-IRV, as those characters are guaranteed to work almost everywhere. And yes, I am using non-Unicode and non-8859 codeset machines on a daily basis. Heck, I'm usually not even using German umlauts. Crap like using SUP is even more devastating, as copying the text will drop it, thus possibly changing the meaning. The way I write is impervious to such conversion problems - well proven since Usenet times. – Raffzahn Nov 02 '18 at 00:27
  • 1
    Ah well. I write in British myself and keep US in edits if present (and I notice). Avoiding edit-wars and keeping changes to the minimum in meaning, most similar in style. Only in this case I interpreted the two "successful"s as being on the same level of meaning unintentionally. If they are intentionally the way they are I'd still disagree a bit in style and ease of comprehension, but see what this should accomplish now. Indeed my error, as that's opinion then; and your post. Thx for the explanation, dialing back a notch on interpretation ;) – LаngLаngС Nov 02 '18 at 00:33
  • The hyphen issue is just the one thing I learned by myself from your edits. No problem for me – if communicated – as I like typographic correctness, but would prefer our shared time not to be wasted even more. –– & yeah, I still write 7-bit ASCII in mail subjects, but my professional writing very much benefits from Unicode and visually appealing use of typography, as paper is the medium of choice. Multiple personalities sometimes overlap; that's the only time any confusion arises. – LаngLаngС Nov 02 '18 at 00:39
  • @LangLangC Ah, now I see. I have to admit that I only did a fast check and therefore didn't check the surrounding text for bad style due to double use of the same word. Great job, except in this case I don't know of any way to avoid it, as it's about two different issues, and in my understanding "successful" is the best word in both cases: that the machines themselves were successful (which was part of the question), but that they (in total) were too late to be successful against the upcoming 16-bit machines - which describes the whole dilemma of the 6809 in general. I'm always open for suggestions to improve – Raffzahn Nov 02 '18 at 00:40
  • How about "There have been many successful 6809 systems (in their time). But then it was already too late to successfully compete with the upcoming 16-bit systems."? – LаngLаngС Nov 02 '18 at 00:46
  • @LangLangC Well, I do enjoy good typography, but I have to disagree with the notion that there is a 'correct' one. It's a matter of culture and maybe more so of personal style. Typography is an art to please and can be good in many ways. It starts with the length of the dash used for each purpose. The ones offered here are way too long for my background and look plain ugly. Much the same with the use of 'wrong' quote marks instead of '»...«' :)) ... and many others. Besides being portable, 7-bit encoding also eliminates all possible misuse :)) – Raffzahn Nov 02 '18 at 00:51
  • Still got "successful" twice in it - anyway, got to get some sleep. Got to get up early tomorrow, do some interviews, and finally sign the lease for a nice little exhibition I'm preparing - of course, all about old computers, from punch cards to PCs. – Raffzahn Nov 02 '18 at 00:52
  • 1
    @Almo I still have a Vectrex, although I probably haven't taken it out of the box in over 10 years. I picked it up along with a ton of game cartridges when it was discontinued and went on super-clearance. I got quite good at Mine Storm, progressing to levels that I'm sure were past the design intent because they appeared to have massive bugs. – Mark Ransom Jul 28 '21 at 01:20
  • @MarkRansom I have one too! :D Solar Quest is very good. – Almo Jul 28 '21 at 14:33
32

While I don't know the answer to this, I'll hazard a few guesses:

  • It was quite an expensive CPU. For example, in 1983 the retail price in the UK was £6.50 for the 6809 or £12 for the 68B09, versus £3.20 for the Z80A or £5 for a 6502A.

  • Its performance didn't exactly set the world on fire. With most instructions taking 3-7 cycles at 1MHz, the base 6809 would be somewhat slower than either of those alternative processors, except for applications that used a lot of 16-bit operations. The 68B09 at 2MHz may have been a little faster, but probably not enough faster to be worth the cost, especially given that an 8088 was only another 50% more expensive and gave a much larger leap in performance.

  • The advantages cited in the Wikipedia article weren't generally seen as particularly important for PC/home applications at the time. It was only when more demanding applications started to be used on micros that they became relevant, and by then the 8088 had already achieved a dominant position.

Jules
  • 1
    Also the Motorola 6809 was not a drop-in replacement for the MOS 6502 or even the 6800. Later, the 68000 line experienced similar backwards compatibility issues that Intel's x86 line never had, and I think this contributed to Motorola's demise. – snips-n-snails Feb 19 '18 at 06:28
  • 2
    I just found the 6809 difficult to program. It had these combination commands that required an extra level of thinking, very much like Data General Assembler where you had to do a couple of commands in parallel. – cup Feb 19 '18 at 08:45
  • 3
    well, for electrical engineering students just learning microcomputer hardware and software and DSP, the 6809 was sexy. it was the first microprocessor CPU with a multiply instruction that i am aware of. and the orthogonal addressing modes with post-increment and pre-decrement were also quite cool (see the short sketch after these comments). – robert bristow-johnson Feb 20 '18 at 04:44
  • 8
    @traal The 68000 was wildly successful. It didn't contribute to Motorola's demise in any way. – JeremyP Feb 20 '18 at 09:29
  • 1
    Yes, cost!! Example: a Jameco ad from a September 1984 issue of BYTE magazine lists the 1MHz MC6809E at $14.95, compared with 2MHz MC6502A at $6.95, 4MHz Z80A at $4.49, and P8085A at $4.95. Same ad also lists 5MHz 8086 at $24.95. – Edward Feb 20 '18 at 14:25
  • @JeremyP Amiga Unix can only run on a 68030 CPU because subsequent CPUs were not fully backwards compatible. – snips-n-snails Feb 20 '18 at 19:01
  • @traal By that time, the PC with the x86 architecture had already taken over the world of PCs. – JeremyP Feb 21 '18 at 10:00
  • 1
    @traal That's a deficiency in Amiga Unix, not the processor. The M68K family had a long, healthy life in a number of general-purpose computers (early Suns and Macs) and had considerable success in the embedded space as a standalone CPU. Despite having been discontinued by Motorola in 1994 after a 15-year run, compatible versions continue to be available as VHDL for use in FPGAs. – Blrfl Feb 21 '18 at 14:05
  • @Blrfl Can you think of any software that runs on a 386 but not a 486? – snips-n-snails Feb 21 '18 at 15:21
  • @traal: I can (but it's software I wrote that was intended to figure out whether the 386 it was running on was an Intel or an AMD, so having to update it to work properly on a 486 didn't surprise me). – Jerry Coffin Feb 21 '18 at 17:06
  • 3
    @Blrfl: I think that's over-simplifying quite a bit. Motorola made some pretty major changes in how the MMU worked in the 68040 vs. 68030. Then they changed things again in the 68060. Yes, it was possible to detect the processor and use code that worked on each. It's still a lot different from Intels where code written for a 386 in 1987 will still work just fine on a 2018 Skylake X. – Jerry Coffin Feb 21 '18 at 17:25
  • 1
    @JerryCoffin True. Having been in this since about the time of the 8088's debut, I'm not convinced that the yoke of eternal backward compatibility didn't set the industry back. Motorola wasn't saddled with it, and the improvements they made in later versions were, I thought, positive. The fact that in 2018 the ARM box on my shelf can run the same stuff as my x86 box on my desk or the S/390 in the fishbowl with only recompilation says we've come far enough that it's no longer really necessary. Microsoft's failure to adapt to multiple architectures is, IMHO, their problem. – Blrfl Feb 21 '18 at 17:53
  • @Blrfl: From a hardware viewpoint, Microsoft has adapted to multiple architectures just fine--somewhere in my garage I'm pretty sure I still have discs for Windows NT to run on x86, Itanium, MIPS, DEC Alpha, and PowerPC. I'd say their primary failure was in adapting to differences in marketplaces (e.g., people who used Windows phones liked them a lot--but MS never "got" how to sell into that market). – Jerry Coffin Feb 21 '18 at 18:21
  • 3
    Also worth mentioning: the 6809 architecture has an excellent set of addressing modes and a register layout which ultimately leads to more compact code, even if the cycle count is sometimes greater than a 6502's for equivalent instructions. Superior for pointer-oriented stuff, OS implementation, and the common techniques of compiler-based languages (which was one of several design goals of this CPU). Especially for a stack-based programming language like Forth, the 6809 was regarded as the ideal choice. – Johann Klasek Feb 21 '18 at 23:12
  • 1
    @JerryCoffin "their primary failure was in adapting to differences in marketplaces (e.g., people who used Windows phones liked them a lot--but MS never "got" how to sell into that market)" -- this is true... and here's a bit of history that a lot of people don't spot: the most popular web site for discussions of development on Android phones is xda-developers.com, but almost nobody ever notices that the "XDA" referenced in the site URL was an early Windows CE based smartphone... they had an early lead in developer mindshare, but lost it to Android because of the more open environment there. – Jules Jun 18 '18 at 00:23
  • Some history of 6800 and 6502 worth mentioning: https://en.wikipedia.org/wiki/MOS_Technology_6502#History_and_use – Yuhong Bao Sep 12 '20 at 01:07
  • 1
    @cup I think the 6809 was the most elegant 8-bit processor I had the joy of programming for. Not sure why you found it difficult. – Mark Ransom Jul 28 '21 at 00:59
  • @MarkRansom It is just a different mindset, similar to Data General. I found the indexing difficult because of the auto increment. It is OK if you're coding in it all the time but when you have to switch between 8080, 6802, 6809 and Prime Assemblers, switching mindsets gets difficult. – cup Jul 28 '21 at 07:05
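As a small illustration of the two features praised in the comments above - the hardware multiply and the auto-increment indexed addressing - here are two unrelated 6809 fragments (values and labels are made up for the example):

          lda   #25
          ldb   #10
          mul              ; hardware multiply: unsigned 8x8, 16-bit result
                           ; left in D (A:B), here 250

    ; copy B bytes (1-255) from the buffer at X to the buffer at Y
    copy    lda   ,x+      ; fetch a byte, post-incrementing the source pointer
            sta   ,y+      ; store it, post-incrementing the destination
            decb
            bne   copy

On the 6502 an equivalent copy has to go through zero-page pointers and Y-indexed addressing, with extra bookkeeping for the pointer arithmetic; on the 6809 any 16-bit index register can be used as a pointer directly, with the increment built into the addressing mode.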
5

I think the ultimate reasons were high cost, late introduction, and poor marketing by Motorola (a constant problem with this otherwise technically superior company).

Manufacturers ultimately look at the bottom line of their BOM, and the 6809 cost at least twice as much; Motorola was inherently more expensive and did not offer the same level of volume discounts that were available for the 6502 and Z-80.

By the time the 6809 was readily available, other 8-bit CPUs were already well established and coming down in price. From an ISA perspective the 6809 walked all over any CPU available - but concepts of clean code structure were still minor considerations in the PC world. Remember that FORTH was still a significant language at that time, and the dual hardware stacks made the 6809 an ideal FORTH environment, which could have helped improve the traction of stack-based computing models over register-oriented ones.
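To make the Forth point concrete: the inner interpreter ('NEXT') of an indirect-threaded Forth fits in two instructions on the 6809. This is only a sketch, assuming the register assignment commonly used by 6809 Forths (Y as the Forth instruction pointer, X as the scratch 'W' register, with S and U serving as the two hardware stacks):

    next    ldx   ,y++     ; fetch the address of the next word from the
                           ; thread and step the instruction pointer by 2
            jmp   [,x]     ; jump indirectly through that word's code field

The same inner loop takes on the order of a dozen instructions on a 6502 or Z80, which is a large part of why the 6809 had a reputation as an ideal Forth engine.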

Had Motorola properly demonstrated the development superiority of their ISA, and had the 6809 received the same market penetration as the Z-80, the quality of the systems we could have gotten would have had a profound impact on our industry, advancing it by at least a decade. Imagine if IBM had picked up the 6809 instead of that nasty little 8088. The 6809E was designed specifically to operate in a multi-CPU environment.

Sadly, just as with DEC, the best tech did not win out, because the parent company's mindset was oblivious to where the market was actually going. Had this system gained wide adoption, the entire history of the PC revolution would look completely different, and our system architectures would be more secure and scalable than they are today. Now we're having to rethink and re-engineer all the technical debt built up over the decades, since Moore's law has mostly run its course.

  • It would have been a catastrophe had IBM selected an 8-bit CPU for their XT. The 8088, while less than satisfactory (in comparison with the 68000), was still able to address 1 MB of memory. – lvd Sep 06 '22 at 07:19
5

While the 6809 had (and still has) technical applications, it lost out badly in the consumer market in the “mug's eyeful” department: if you were buying your kid a computer for Christmas and didn't know any different, why buy the Dragon which only had 0.89 megahertz when the ZX Spectrum came with 3½ of them?

While folks here know that raw clock speed isn't remotely comparable across different processors, to people buying and selling computers without a technical background, it mattered. That's why Alan Sugar was so fond of his mug's eyeful, the extra bits that make it look to the buyer like they're getting a lot for their money.

scruss
4

One of the missed opportunities, in the United Kingdom, was the BBC Micro.

In 1979 Acorn was selling 6502-based eurocard kit computers, and quickly brought out a successor based on the 6809 -- often omitted from descriptions of Acorn, but you can see one here http://chrisacorns.computinghistory.org.uk/8bit_Upgrades/Acorn_6809_CPU.html

It was a vastly better board: CAD-designed, while the previous 6502 board was hand-drawn, and the 6809 obviously gave access to the professional operating systems Flex and OS-9.

The specification for the BBC Micro was being drawn up, and my understanding was that the discussions were between Best (68000 or whatever) and Current (Z80/6502), with not much credence for Good (6809). As the 68000 products would take too long, Acorn won with developments on its existing 6502 experience. I have heard it rumoured that the design was done right up to the deadline and the 6809 team at Acorn lost out for very ordinary internal reasons.

If the BBC had chosen a 6809-based design, I think it would have had enormous impact on the wider deployment of this CPU.

jonathanjo
  • The Acorn 6809 card you describe was an optional CPU card for their rack-based System range. It wasn't offered as standard on any of the System 1-4 range, which all shipped with the hand-drawn 6502 card (which, in fairness, was the very first product the company designed). See a contemporary catalogue www.vintagecomputer.net/fjkraan/comp/atom/doc/cu04-05.pdf . Acorn later released a CAD-designed 2MHz 6502A card for the System 5. – Kaz Feb 17 '20 at 05:50
  • 2
    One detail I think you've overlooked from the BBC's point of view is that they wanted a machine that could suit business' needs - i.e. run CP/M software. Acorn's Tube interface for second processors stemmed from internal argument on what CPU to use for their new "Proton", but suited the BBC's desire for an optional Z80 processor. It's notable that while they subsequently made second processors based on the 6502, Z80, 80186, 32016, and ARM, the 6809 never showed up; any internal advocates must have changed their mind or gone elsewhere. – Kaz Feb 17 '20 at 06:04
4

This is a fascinating subject, IMHO, and a fun one to explore by searching Google for a Z80/6502/6809 showdown. I think one of the areas that people here haven't addressed - aside from cost - is the ability to license the 6809 core vs., say, the 6502.

Let's take, for example, Atari. Both Atari Inc. and later Atari Corp. Atari Inc. did end up using the 6809 somewhat in a few of their arcade games, but their consoles and 8-bit computers stuck with the 6502. Atari didn't own their own fab like Commodore did - MOS/CSG - but they worked closely with Synertek, Rockwell, and others. They designed their own 6502 variant, the 6502C SALLY, and had those companies manufacture it for them. Could they have done the same with a 6809? Doubtful. And they would've had to have purchased all of the 6809s from Motorola and possibly later from Hitachi. It's actually a shame Atari didn't take to manufacturing even further 6502 enhancements they came up with in-house, like adding 16-bit instructions to it, back in 1979.

Flash forward all the way to 1988/89, when the successor Atari Corp. was acquiring what became the Atari Lynx from Epyx. Despite its powerful 16-bit graphics chip, it still had a 6502 as its main CPU. Why? Because the 6809 and 68000 weren't available to license for customized cores, according to the development team. For whatever reason, they didn't go with the 65816 from WDC either. But that's a different story...

Jeremy Holloway
2

Don't forget the Intel 8086 came on the scene at the same time; the major difference there was that the memory management chip was built into the 8086. This allowed the development of software tools that utilized extended memory without having to deal with the differences between multiple implementations of memory management. While (IMHO) the 6809 was a superior processor, the easy access to development tools gave a distinct advantage to the 8086 family at a critical time in the development of the industry.

  • 4
    The x86 line had no form of MMU until the 80286. What are you referring to? – user3840170 Jul 25 '21 at 16:20
  • 1
    I suppose he meant the built-in access to 1 MiB via segments; the 6809 would need some external, non-standard circuit to overcome the 64 KiB address limit. While x86 segments were annoying once the 1 MiB limit was reached, they were quite useful and simple for overcoming the 64K limit. – Patrick Schlüter Jul 26 '21 at 07:32
  • @PatrickSchlüter: IMHO, the 8086 segment design is vastly under-appreciated. I think there should have been a processor status bit to control whether the default segment is DS or SS, along with a DS prefix for use in the latter scenario, so that applications where stack and global data total 64K or less could have two "free" segment registers rather than one, and dev tools had some unfortunate limitations (e.g. assemblers could have allowed a programmer to request that a segment register be loaded with a constant, and handled that by generating cs:mov ds,[someAddr] putting... – supercat Jul 27 '21 at 17:15
  • ...the desired constant value somewhere in code, and putting its address in the instruction). If one tries to write code in the style of the "huge" memory model, the 8086 instruction set will be horrible to work with, but if one designs code to work with the design of 8086 segments it's really quite nice. Too bad the designers of the 80286 and 80386 failed to understand what was good about 8086 segments, since otherwise the principles could be extremely useful in frameworks like .NET and Java. – supercat Jul 27 '21 at 17:17
  • The 6809 was a great CPU at the time. However, it was a little late to the party, was too costly when released, and when the price did come down it was shown up by the new 16- and 32-bit CPUs like the 68008 and 68000. Motorola could have made a killing if they had reduced the price to compete with the Z80 and 6502. It was used in many not so popular (in the US anyway) home computers. I was told by a retired NASA EE it was used as a cargo door controller on STS-1 (the Columbia Space Shuttle). I still have a soft spot for the 6809. Not sure if anyone still manufactures it. There are open cores. – user693336 Mar 05 '23 at 18:02
2

Why did manufacturers of home computers avoid using the 6809 CPU?

To add to the excellent responses above, I will add the reason that is likely the most significant: price.

The 6809 had about 9,000 gates, whereas the 6502 had about 3,200. Given an equal fab, the 6809 will cost more than the 6502. Not three times as much, as the packaging is essentially the same, but the overall yield is going to be lower, and that goes right to the bottom line.

Now the 6809 offers more functionality, but that functionality has to justify itself against the price/performance ratio being offered by other platforms. And simply put, it didn't win on either price or performance.

On the price side, when it was introduced single units were $37, while the Z80 was $9 and the 6502 was about $6. These are single-unit prices; with volume purchases the spread would be even larger, because the 6502 had multiple suppliers (consider that Atari got a 6507 and a RIOT for $12 at introduction).

You can see the other side of the equation here. Although it was relatively quick in theory, in practice it was slower than most existing systems. Yes, this bench is in BASIC, but of course, that was what would normally end up being used by these processors in the home computer space. The BASIC in question was, at the runtime level, identical to the one on the PET, but as you can see, the PET outran the Dragon in spite of being five years older.

Motorola started the project at the same time as the 68000, because their customers told them they didn't need 16-bit systems. So they designed an updated 8-bit ISA for a market that, by that time, was moving to dedicated microcontrollers and using standalone 8-bit CPUs largely only in legacy roles. The project seems like it was doomed from the start.

Maury Markowitz
2

As I remember, the 6809 was a great processor in many ways, with a great memory paging system that allowed fast IRQs and many tasks to be swapped efficiently. The merged data/code space (vs. the 8031) made programming simpler, BUT it also resulted in quickly running out of address space, as data, code and peripherals all had to share that little 64K address space. Paged code and/or data space was needed for anything but small embedded 6809 systems, and that created its own software troubles. The 68008 made all the code space issues go away and saved programmers from endless paged-memory problems, at the price of being slower in IRQ performance. A few years' time quickly solved the speed issue, and then 8-bit parts were relegated to low-cost, low-performance roles - and the 6809 was never a low-cost part. A pity, as the 6809 instruction set was good, but there was just never enough address space to make full use of the part's features.

Bob J
2

The MC6809 did have a lot of instructions; however (to stay compatible with the 6800, most one-byte instruction codes were unavailable), accessing them required multiple bytes (multiple instruction fetches), and the internal microcode (Motorola didn't improve the internal microarchitecture of the CPU) took many more CPU cycles to execute those instructions. Basically, for the same clock speed, other processors executed the same program function faster and with less code (memory was costly).

Steve J
  • 1
    That fits with something I remember. Well IIRC that is. What one would think should be one of the simplest and quickest operations, copy register A to B, was actually executed as Push A, Pop B. Apparently the registers had no internal path, so one of them had to be pushed onto the stack and popped into the other register. That said, I still like the assembly language of the 6809, even if the actual implementation of those instructions was less than ideal. – RichF Aug 01 '18 at 05:40
  • @RichF - While you're correct that it's a surprisingly inefficient implementation, the MC6809 datasheet shows that TFR and EXG generate dummy bus cycles rather than stack accesses, suggesting the use of internal temporary storage of some kind. – Jeremy Nov 02 '18 at 10:02
  • 5
    This is mostly wrong. 1) The 6809 did not have "a lot of" instructions, in fact considerably fewer than the 6800 (59 vs. 78). 2) It was not object-code compatible with the 6800. 3) Most opcodes were single-byte, with the usual additional bytes to specify data or addresses. 4) There was no microcode (it was perhaps the last of the combinational logic processors) and this was one of the reasons it often used fewer clock cycles per instruction than competitors. – cjs Jul 23 '19 at 14:12
  • 1
    It did have a lot of addressing modes, and was generous in that most addressing modes could be used with most instructions -- in that sense, it had a lot of instruction+mode combinations. – jonathanjo Feb 17 '20 at 18:36
  • "Basically for the same clock speed other processors executed the same program function faster and with less code"

    That could be true if you directly translated a 6800 program, instruction by instruction. But if you wrote the code FOR the 6809 then you could use fewer instructions. It was also much much easier to write a decent compiler for the 6809 than for the 6800, 6502, or z80. I was involved in writing a BCPL back end for 6809.

    – Bruce Hoult Mar 17 '23 at 00:27