23

The BBC Micro Model B has 32K of memory. An average book, like Mary Shelley's Frankenstein, has about 350,000 characters in it. So you'd need over 10 times the memory to load it in, plus the software to edit it.

If people wanted to use a BBC Micro era computer to write a novel, how did they go about doing it?

Would it be a case of maxing out the expandable memory? The Wikipedia article for the BBC Micro suggests it could support quite a large number of expansions; presumably, if each is 32KB, you would need around 10 of them to hold the entire novel plus the software.

Or did word processing software use say, a floppy disk as a way to store the novel and load portions of it into memory? It looks like disks could be around 200kb for the micro, so multiple disks might work (plus, you'd need to store the novel offline anyway).

Or perhaps there was some kind of clever compression that would let you get more out of the memory?

NibblyPig
  • 341
  • 2
  • 6
  • 62
    It was possible to write a novel on a typewriter, which has 0 bytes of memory. The question is somehow not very sensible. There is no requirement to have the whole book in memory to process it. – Patrick Schlüter Jul 03 '20 at 17:00
  • 4
    I encourage you to look at how many characters are in each chapter of Frankenstein, not the overall length of the book (and ignore double newlines and other formatting stuff along the same lines, on a system like a BBC Micro you're not doing typesetting, you're just writing prose). Very few people who have to write large documents do so as a single file because it's much easier to manage as one file per chapter/section. Even then though the whole chapter doesn't have to fit in RAM. – Austin Hemmelgarn Jul 03 '20 at 23:41
  • 2
    I never used a BBC Micro, but I used a TRS-80, and they seem to have been contemporaneous. It would have been a challenge to write a novel on a TRS-80, but not because of limited memory. The big issues would have been (1) slow and unreliable floppies, and (2) difficulty in getting printouts good enough to send to a publisher. My father was an early adopter of computer word-processing in that era, and it was a major hobby/geek project to build a system that was usable enough to do legal briefs on. Historical flavor: https://www.jerrypournelle.com/chaosmanor/early-days-of-word-processing/ –  Jul 04 '20 at 01:31
  • I was thinking that if anyone had it would be Douglas Adams, but apparently he didn't buy a word processor until as late as 1982 (book 3): https://lowendmac.com/2016/douglas-adams-author-and-mac-user/ and rapidly gave up on his first few purchases. – Steve Jessop Jul 04 '20 at 03:00
  • 11
    typewriters used to have at least one bit of memory: caps-lock :-) – dlatikay Jul 04 '20 at 12:38
  • TeX was around at the time, no idea if it was possible to run it on the machine you name. TeX does facilitate and encourage separate files for separate chapters. – Clumsy cat Jul 04 '20 at 14:12
  • 2
    @dlatikay Technically, shift-lock. It affected the shift from numerals to symbols as well as from lower to upper case letters. – Chromatix Jul 04 '20 at 15:57
  • 1
    @Patrick Schlüter - Not true. The typewriter uses paper as its storage medium. If you simply typed without the paper, you wouldn't end up with much of a novel. – chasly - supports Monica Jul 05 '20 at 11:54
  • 2
    @chaslyfromUK I am not sure where we want to draw the distinction between typewriters and computers, but in the eighties there were electronic typewriters with one line LC displays (you could edit your line before it was actually printed, and even go back a few lines and have mistakes erased) and floppy disk storage. Here in Germany at least they were marketed as a type of "Elektrische Schreibmaschine", so depending on terminology there may have been typewriters with actual storage. –  Jul 05 '20 at 12:04
  • @SteveJessop Terry Pratchett, however, wrote on a CPC464 – Chris H Jul 06 '20 at 08:31
  • 1
    @dlatikay: Most typewriters also had a bit of memory per column, which could be set or cleared by moving to the column and pressing "tab set" or "tab clear". They also included a function to quickly scan through memory for the next set bit, leaving the mechanism at the associated column. – supercat Jul 06 '20 at 22:01
  • A typewriter saves everything you write, and you can read all of the output at any time, so it's not a very good comparison. I suppose it depends on what writing a novel means, these days we'd consider it to be 100% on computer, but I suppose a hybrid approach would be the solution if you wanted to repeatedly refer back and forth without waiting 5 minutes for things to load. – NibblyPig Jan 23 '23 at 14:05

11 Answers

44

You just made a file for each chapter, like sensible people do with current word processing!

It is very unusual to write something lengthy in a single document.

  • 17
    Um, I could poll them more thoroughly to check, if you like, but I'm pretty sure that nowadays, outside of a retro-computing context, published novelists of my acquaintance actually do often work in a single file. They're not highly technical and they are not responsible for typesetting (and neither are their editors), so you really do see 100k-word MS Word documents emailed about the place. How you'd organise an indexed, heavily cross-referenced work in LaTeX is beside the point! – Steve Jessop Jul 04 '20 at 03:03
  • 5
    @SteveJessop The popular Scrivener package, which is specialised for writing large, complex documents such as novels, stores each section of text as a separate file. Optionally it can pack them up into a Zip archive. – Chromatix Jul 04 '20 at 03:33
  • 12
    I could likewise poll how many of them stuck with Scrivener from back whenever it was they tried it. I'm not saying separate documents don't happen, I'm saying the assertion that single-document is "very unusual" seems optimistic to me. There's no upper bound on the proportion of novelists that one can claim are "not sensible": "the way that 97% of novelists do it is not sensible" would still be a coherent criticism. But there is a bound somewhere on what one can claim is "not usual": "the way that 23% of novelists do it is very unusual" would not be coherent! – Steve Jessop Jul 04 '20 at 03:44
  • 6
    I wrote three books, each of them has ca. 450-550 thousand characters, and each book was in a single file. I guess there is no reason to split chapters into files nowadays. Contemporary computers have a lot of memory and are fast enough to keep the workflow seamless. And sometimes you need to copy something from chapter 5 to chapter 7, which is more comfortable with a single file, etc. – Martin Maly Jul 04 '20 at 10:22
  • Contemporary computers manage memory very differently; virtual memory systems basically treat physical RAM as a cache for on-disk blocks of memory. Disk blocks are paged in and out of memory as needed, both by the active process and as the operating system switches context from one process to another. Even before virtual memory became common, I suspect a word processor could have done its own such paging to accommodate large files. – chepner Jul 04 '20 at 13:06
  • Early-mid Microsoft Word had issues that made very large documents balky; for that reason you’d split up. However they’ve fixed those in the last 15 years. Now that we’re in the self-publishing age where book printing companies do epub conversions and/or print layout formatting, for flat rates like $100, they won’t let you hand over 65 Microsoft Word docs. That would be more work than they’re willing to flat-rate. – Harper - Reinstate Monica Jul 04 '20 at 16:03
  • 2
    @Harper-ReinstateMonica, you would not hand over 65 documents. I don't know the proper English terminology used in MS Word, but it basically allowed (and probably still allows) you to create a "parent" document that could include any number of separately created "child" documents (as I remember that was a useful feature, since MS Word had a habit of corrupting large documents, sometimes beyond recovery). –  Jul 04 '20 at 17:19
  • @EikePierstorff I don't know if Word has/had that feature. WordStar did. – manassehkatz-Moving 2 Codidact Jul 05 '20 at 02:28
  • @EikePierstorff: For what it's worth, I wrote my doctoral thesis with Word - about 218 pages - all as one document. There are advantages in doing so - page numbers, automatic table of contents, internal searching (very important when I want to see how I've written something). There were times when I wrote the occasional chapter as a standalone document, but that was because I wasn't sure that I wanted to merge that chapter with the whole document. Of course, I backed up extremely frequently and in three or four disparate locations (Mega, Google docs, work computer, home computer). – No'am Newman Jul 05 '20 at 05:44
  • 1
    @manassehkatz-Moving2Codidact it had and still has (just had a look). In German it's called "Zentraldokument" and "Filialdokument". –  Jul 05 '20 at 05:48
  • @No'amNewman if you backed up on Google Docs, then we are definitively not talking about the same era. I was editing book length documents in MS Word before Google existed as a company. –  Jul 05 '20 at 05:50
  • @EikePierstorff Fascinating. Turns out that is Master Document and Subdocument in English, and apparently accessible only in Outline view - which is OK, but not easy to find. – manassehkatz-Moving 2 Codidact Jul 05 '20 at 05:54
  • When I wrote some novels myself I found that I had to frequently refer back to things that I'd written earlier on. This does of course depend on two main things, one, that you're writing fiction, since writing say a textbook you would probably not have to refer back as chapters would be self-contained, and two, that you're more of a pantser, someone who writes by the seat of their pants basically making it up as they go. Meticulous planners on the other hand would probably rarely have to refer back. I wrote using Word and also Scrivener which both facilitated this. – NibblyPig Jul 05 '20 at 11:56
  • "Referring back" can be and was done with paper printouts. It may well be more convenient with doc-in-RAM, but the question is about possibilities, not comfort. – dave Jul 05 '20 at 15:41
  • 1
    @MartinMaly when writing theses or (e.g. physics) textbooks in LaTeX it's certainly common to use 1 file per chapter. Moving text between chapters is if anything easier as finding your place is easier in smaller files. The difference of course is that with LaTeX you're typesetting and may well want to compile the single chapter you're concentrating on – Chris H Jul 06 '20 at 08:24
  • @manassehkatz-Moving2Codidact Lotus WordPro definitely supported a master-file structure - I used it for undergrad work (and avoided the issue that many fellow students had in Word - the floppy never ate my homework). I think its predecessor AmiPro did too, but I was in high school when I used that - and learnt the value of backing up manually. – Chris H Jul 06 '20 at 08:27
30

It was common to install word processing software as a ROM into one of the spare "sideways ROM" sockets on the BBC Micro, in the same way as the DFS ROM needed to operate a floppy drive. WordWise and Inter-Word were two popular options. This left more RAM available for text than if the software were loaded into RAM from tape or disk; typically enough to write a chapter in.

From late 1984, WordWise Plus gained the ability to use a floppy disk as backing storage for editing a larger document. It also allowed having multiple documents open at once. This would effectively remove any practical limitation on the size of an individual chapter. However, the size of the document would still be limited to the capacity of one side of a floppy disk.

It was reasonably common to fit twin double-sided 80-track floppy drives to a BBC Micro, and these would collectively have enough capacity for a full novel, divided into a file per chapter. Also, the twin drives could be used to make backups.

Text compression algorithms were not in common use on 8-bit micros. About a 3:1 compression ratio could be expected from running DEFLATE on typical English text. However, DEFLATE was not invented until approximately 1990 (Katz' patent is dated 1991) and ran on a contemporary DOS PC. Before then, less effective algorithms such as LZW and basic Huffman were available. On an 8-bit CPU, compression algorithms would have been slow and rather memory-hungry, and thus not obviously worthwhile for a word-processing application.
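
To put rough numbers on this from a modern vantage point, here is a quick Python sketch (mine, not period code): the order-0 entropy of a text approximates the best a simple Huffman coder could do, and zlib gives a DEFLATE figure for comparison. Run it over a full plain-text novel for realistic ratios; the built-in snippet is only a placeholder and compresses worse.

```python
# Rough modern-day check on the ratios discussed above - not period code.
# Order-0 entropy ~ best case for a simple (order-0) Huffman coder;
# zlib.compress gives a DEFLATE figure for comparison.
import math
import sys
import zlib
from collections import Counter

if len(sys.argv) > 1:                       # e.g. a plain-text Frankenstein
    text = open(sys.argv[1], "rb").read()
else:                                       # tiny placeholder sample
    text = (b"It was on a dreary night of November that I beheld the "
            b"accomplishment of my toils.")

n = len(text)
counts = Counter(text)
entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
print(f"order-0 entropy: {entropy:.2f} bits/char "
      f"(~{8 / entropy:.2f}:1 at best for simple Huffman)")

deflated = zlib.compress(text, 9)
print(f"DEFLATE: {n} -> {len(deflated)} bytes (~{n / len(deflated):.2f}:1)")
```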

Chromatix
  • 16,791
  • 1
  • 49
  • 69
  • If the program is expecting (English language) Latin-script, then it can use pre-computed Huffman translations. This would save it from the expense of finding the best compression, at the risk of poor compression if the text doesn't have typical character frequencies. – CSM Jul 04 '20 at 10:31
  • 4
    @CSM Sure, there's definitely things you can do, but most developers at the time were not nearly as familiar with data compression technology as we can be today. No Internet that you can just look stuff up on a whim. So if you're fully engaged in writing a word processor, data compression books probably aren't on your shelf. – Chromatix Jul 04 '20 at 11:13
  • @Chromatix "not nearly as familiar with data compression technology" - I wrote a basic adaptive Huffman encoder/decoder for a summer job as a student in 1985, for the Tandata PA. It got better than 2:1 on short English text notes, with precomputed initial encodings to pre-heat the adaptive parts; it was all bit-twiddling, reasonably quick. This was easily within the grasp of a software dev of the time. A data-compression expert might have done better ito sophistication, but the concept here was pretty obvious, no internet required! Code-size was a factor,anyway. – SusanW Jul 04 '20 at 16:26
14

The BBC Micro Model B has 32K of memory. An average book, like Mary Shelley's Frankenstein, has about 350,000 characters in it. So you'd need over 10 times the memory to load it in, plus the software to edit it.

True, but only if you insist on having all the text in RAM at all times.

If people wanted to use a BBC Micro era computer to write a novel, how did they go about doing it?

Using a text editor and following whatever concept it offered:

  • The simplest would be splitting it into chapters. One is never really working on the whole book anyway. It requires saving your changes each time you switch sections and loading the next one. Chapter size would depend largely on memory and the type of book.

Of course, on a bare BBC the only medium would be cassette, so saving and loading is rather slow and cumbersome. Investing in a floppy drive helps a lot to speed things up, which brings us to the next method:

  • Some editors did offer fast switching between sections and

  • others offered a kind of virtual memory system, loading only the part of the document shown right now (plus maybe a few screens before and after). Think of it as a window onto text which is not in main memory but in second-level storage. Text size was then limited only by the size of the storage medium (a small sketch of this windowing idea follows this list).

  • Some even combined both, allowing a text to be split into sections residing on different drives/floppies, with only part of each loaded at a time.
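
A minimal sketch of that windowing idea in modern Python (illustrative only - no actual BBC editor was written like this): only a small slice of the document around the current position is ever held in memory, and writing modified windows back to disk is the part a real editor has to get right.

```python
# Sketch of the "window into the text" scheme described above - modern Python
# for illustration, not how any BBC-era editor was implemented. Only
# `window_size` bytes of the document ever sit in memory; the rest stays on
# disk. A real editor must also write modified windows back, which is omitted.

class WindowedText:
    def __init__(self, path, window_size=4096):
        self.path = path
        self.window_size = window_size

    def view(self, offset):
        """Return (start, bytes) for the slice of the file around `offset`."""
        start = max(0, offset - self.window_size // 2)
        with open(self.path, "rb") as f:
            f.seek(start)
            return start, f.read(self.window_size)

# Usage (hypothetical file name):
# start, chunk = WindowedText("novel.txt").view(200_000)
# print(chunk[:200].decode("latin-1", errors="replace"))
```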

Would it be a case of maxing out the expandable memory?

More relevant than main memory might be the disk size. After all, it is of no help if one could edit a 300 KiB text but only store 200 KiB of it to floppy.

Or did word processing software use say, a floppy disk as a way to store the novel and load portions of it into memory? It looks like disks could be around 200kb for the micro, so multiple disks might work (plus, you'd need to store the novel offline anyway).

That is exactly what the 'better' editors offered.

Or perhaps there was some kind of clever compression that would let you get more out of the memory?

A few did go that way, but the savings aren't as great as one might think. Processors were slow, so only simple algorithms could be used, ending up with maybe 25% saved - not much of a gain. The Zork engine (Infocom's Z-machine) is a great example of what could be done back then.
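
The Z-machine's trick was to pack three 5-bit character codes into each 16-bit word. A much-simplified sketch of the same idea (reduced alphabet, no shift codes or abbreviations, so this is not the real Z-machine format) shows where the roughly one-quarter to one-third saving comes from - and why text with capitals, digits and unusual punctuation saves less:

```python
# Simplified Z-machine-style packing: three 5-bit codes per 16-bit word.
# NOT the real Z-machine encoding (no shift codes, no abbreviations) -
# just an illustration of how a small alphabet buys you ~30% on ideal text.

ALPHABET = " abcdefghijklmnopqrstuvwxyz.,'?!"   # 32 symbols -> 5 bits each

def pack(text):
    codes = [ALPHABET.index(ch) for ch in text.lower() if ch in ALPHABET]
    while len(codes) % 3:                        # pad to a multiple of three
        codes.append(0)                          # 0 = space
    out = bytearray()
    for i in range(0, len(codes), 3):
        word = (codes[i] << 10) | (codes[i + 1] << 5) | codes[i + 2]
        out += word.to_bytes(2, "big")
    return bytes(out)

sample = "you are standing in an open field west of a white house"
print(len(sample), "->", len(pack(sample)), "bytes")   # 55 -> 38 (~31% saved)
```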


Back then many people did use their machines for serious word processing, all the way from letters to whole books. I remember one guy writing his thesis on a CPC 464 using cassette storage. So yes, they were used in every imaginable way. Today's ease of LibreOffice and the like makes us forget that there was a time when it took dedication and skills well beyond the topic at hand to write a paper.


Raffzahn
  • 222,541
  • 22
  • 631
  • 918
  • 2
    Maybe also worth mentioning the teletext mode.  Unlike most similar machines, the BBC Micro had a screen mode which was not bitmapped — instead, it used 1 byte for each of 40×25 characters; just under 1KB in all.  (This screen mode, MODE 7, used the same character generator chip as the TV teletext service, hence the name.  It could also do some limited colours, graphics, and effects, using control codes.)  Its low memory and high-quality display made it very popular, especially for text-based programs such as word processors. – gidds Jul 04 '20 at 00:01
  • 2
    @gidds True - except for the assumption of it being 'unlike most similar'. Nearly all had a text mode, even including the IBM PC. Bitmap modes large enough for text are what was less common. – Raffzahn Jul 04 '20 at 09:39
  • You mean CPC 464? – fuenfundachtzig Jul 05 '20 at 09:24
  • @fuenfundachtzig Erm ... yes :)) – Raffzahn Jul 05 '20 at 11:01
13

Stephen Fry describes here how he wrote a book on the BBC Micro, saving on cassette:

In 1982 I bought a BBC Acorn for £399. It came complete with a firmware programme called Wordwise which I adored and which, in my fond memory, was the best word processor ever. I used it to write the book (ie story and dialogue) of a stage musical, saving on cassette tape as I went along and finally outputting to a daisywheel printer. The show was enough of a hit to allow me to indulge my passion for computer gadgetry for the rest of my life.

(the BBC Acorn for £399 being the BBC model B)

It should be remembered that documents would still be exchanged (with the publisher etc) largely on paper, so the fact they had to be split up into multiple files on the computer was no problem - you simply collated all your printout pages and bound them together. If the publisher also had an electronic system for editing they could deal with the split files in the same way you did.

Eventually it got collated at press time, but in 1982 there was little computer typesetting so at some point in the process your manuscript got turned into printed form (the 'camera ready') and then photographed by the printing house for duplication.

There was no sharing of the finished version in electronic format (PDF, etc) where it needed to be all together electronically. Paper was king.

user1908704
  • 611
  • 3
  • 6
  • 1
    "in 1982 there was little computer typesetting" -- this is probably correct for books, but most of the newspaper industry transitioned in the late 70s/early 80s, probably because (a) time matters more, (b) typesetting costs are a greater share of total costs, and (c) it's easier to equip a newsroom with terminals and have everyone use the same software and follow the same workflow than convert data arriving on a variety of media in a number of formats from freelance authors. – Michael Graf Jul 04 '20 at 08:15
  • 1
    I worked in the printing industry in the '80s and '90s. At one small company we had some strange old systems from the pre-Mac era where the operator edited in text mode with some kind of markup, so not WYSIWYG. I think they output photographic bromides, so before the laserprinter era and higher quality, but still "computer typesetting". I forget now if the machines used 8" floppies or some kind of proprietary tape or cartridge. – hippietrail Jul 05 '20 at 02:13
  • TeX came out in 1982 and the Linotron 300 in 1980 (?), so a rather nicely typeset mathematical book was entirely possible by 1985. This isn't word processing of the WordStar type, though, and at that time TeX only ran on the DECsystem-10/-20 (it was ported later to UNIX and HP machines). – Stefan Skoglund Jul 08 '20 at 09:57
  • "It should be remembered that documents would still be exchanged (with the publisher etc) largely on paper"... I think that's an important point. It's possible to manipulate arbitrarily large text files with minuscule amounts of processing power: with home micros the bottlenecks were in typing it in (a novel on a ZX Spectrum keyboard? Ugh), editing-in-context (32 columns on a TV screen? Yuk) and printing out the "manuscript" (OK, as long as you stuck with fan-fold paper!). An 80-column display and 80-column printer was weirdly WYSIWYG! – Lou Knee Jul 14 '20 at 22:27
12

Since you're asking not (just) about the BBC Micro, but about computers of that era in general: More sophisticated word processors like WordStar, running on CP/M, were able to swap both code and text between RAM and disk, letting the user edit lengthy documents in the typical 64k of early-80s CP/M systems. This would, of course, be slow, and profit greatly from more RAM.

Nonetheless, as Brian wrote, you'd break down longer texts into separate files. This would continue for long after; MS Word had a special function for that until at least Word 97.

Michael Graf
  • 10,030
  • 2
  • 36
  • 54
9

Computer Concepts (now part of Xara) produced Wordwise Plus in 1984 on a 16 K EPROM. It allowed a document to use the entire space on an attached disk drive as virtual memory.

scruss
  • 21,585
  • 1
  • 45
  • 113
8

The first novel written on a microcomputer was probably Jerry Pournelle's portion of Oath of Fealty (cowritten with Larry Niven). At the time, Pournelle did most of his work on a Cromemco Z-2 with 64KiB RAM and CP/M (the machine is described in more detail in Pournelle's column in the July 1980 issue of Byte).

It's not much of a stretch to imagine that similar work could have been performed using a BBC B - it may only have half the RAM, but it has both OS and (optionally) a word processor in ROM, substantially reducing the amount of RAM that would be unavailable to store text, so the largest editable document would probably be about 75% or so of the size that Pournelle's system would have been able to handle.

The main problem would be convenient storage - the largest disk commonly used with the BBC (80-track 5 1/4") held 400KiB total, which is rather less than the typical size of a novel (somewhere between 600KiB and 800KiB), so during editing you'd likely have to swap disks quite frequently. It's not clear what capacity drives Pournelle used, but there were larger formats available, so he may well not have had this problem.

occipita
  • 2,347
  • 1
  • 9
  • 20
6

This answer is not specific to the BBC Micro, but is generally illustrative of editing technology on systems where document size is likely to exceed available memory. I'd allege that prior to virtual-memory systems, this style was the norm, since there was "never" as much memory as you needed.

I've edited fairly lengthy programs on a system where the core available to the editor (the 'amender') was only a handful of words larger than one disk block - as I recall, 640 words or 3840 characters; this had to hold the input version of the file and the output version at the same time.

You can imagine two separate buffers for input and output, but space was tight. So the input text would be at the top of the buffer and the output at the bottom. You edit the text sequentially, a line at a time. As you proceed through the file, lines are transferred from the one to the other.

When you navigate off the end of the current input, the output is written out and the next block of input is read in.

There are of course complications in the amender for inserting more text than you've got space for, and perhaps if lines span block boundaries (I forget whether the latter was allowed, maybe there were no partial lines in blocks). But the user did not need to deal with it.

If you needed to 'go backwards' in the file then this can be accomplished by writing the rest of the output and starting again with that as the new input file.

So, with one simple restriction -- forward motion only -- you can edit in practically no memory at all (by modern standards).
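
In modern terms the scheme is just a streaming filter over the file. A rough sketch (mine, not the original amender) of the forward-only pass:

```python
# Rough sketch of the forward-only editing pass described above (mine, not
# the original amender): lines stream from input to output, edits are applied
# as each line goes past, and memory use is constant regardless of file size.

def forward_edit(in_path, out_path, edits):
    """`edits` maps 1-based line numbers to replacement text (None = delete)."""
    with open(in_path, "r") as src, open(out_path, "w") as dst:
        for lineno, line in enumerate(src, start=1):
            if lineno in edits:
                if edits[lineno] is not None:
                    dst.write(edits[lineno] + "\n")
            else:
                dst.write(line)

# "Going backwards" = finish this pass, then run another with out_path as the
# new input, exactly as described above.
# forward_edit("chapter1.txt", "chapter1.new", {42: "It was a dark and stormy night."})
```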

A slightly more flexible editing arrangement puts the 'chunk' control into the hands of the user. Some amount of text is read into the available buffer space. The user can edit randomly within the buffer. When he's done with that buffer-full, he can move on to the next. This is sequential through the file, but randomly within a single buffer.

The venerable TECO, on many DEC systems, used this approach.

As usual, you can quickly accommodate working styles to the technological limits of the available tools. All of this was better than using a card punch.

dave
  • 35,301
  • 3
  • 80
  • 160
4

You’ve laid the assumption that one would load the whole novel into memory. That’s a false assumption.

Performance is the #1 reason

If you got anywhere near 16,000 book characters in RAM at one time, the system started to suffer performance issues, which were very annoying and would tend to break workflow.

It can’t be overstated how annoying these would be.

Also, a person working fast and touch-typing puts a burden on the CPU that is often proportional to document size. Now, the BBC has a keyboard buffer, which helps, but even it is finite and can be overrun - especially if every keystroke is creating work that takes longer than an average keystroke! But much more critically, this makes the system balky and inefficient. I’m sure you’ve typed on a modern PC when system task load makes the word processor laggy - but that happens for moments. Now, imagine it all the time.

So your solution was to save files chapter by chapter, and keep chapters sanely small.

Based on your question I’m not sure you understand this, but every system back to the Apple II ca. 1978 had some sort of DASD available for it - typically a floppy disk, sometimes a smart tape drive. So you could store many files on it, and access any of them at will. So it was easy to have a bunch of small files.

You could do it with the basic/low-end storage too, like tape drives, but anybody who sunk real time into writing would’ve ante’d up for the DASD.

Yes, you saw large documents. Classic example would be release notes of software. And you could open the documents and read them, but there was serious lag, to the point where typing even one word would be tedious. And that was fine, because it was largely intended to be read-only. Obviously the publisher didn’t write it as one document, but as several which were merged into one for distribution.

  • 1
    Word processors on the BBC Micro (such as Wordwise and InterWord) allowed documents of 27K or larger, well more than 16,000 characters. Also, keypresses on the Beeb generated a software interrupt, which the 6502 processor was able to respond to with little latency. – Kaz Jul 04 '20 at 19:27
  • 1
    @Kaz Edited keyboard part, but I didn’t say they didn’t allow large documents. My assertion is that large documents impact system performance, and that is distracting/annoying/hard to edit with. – Harper - Reinstate Monica Jul 04 '20 at 20:19
  • Wonder why it got slower. I guess if each paragraph was a string, then an array of paragraphs would make inserting a new one be O(N); and a linked list of them would make scrolling through be O(N). – Dewi Morgan Jul 05 '20 at 05:23
  • 1
    @Harper-ReinstateMonica Edit noted. I'm still unconvinced that a large document (within the RAM capacity of the machine) would slow down and break workflow significantly: you'd only need to iterate through the whole document for find/replace operations or the like: surely performance when scrolling up/down a line/page would be similar irrespective of document size, as you're always loading a screen's worth of data from a calculated location? If a document got larger than available RAM, requiring splitting into smaller files, or swapping onto floppy disc, I agree that'd be a performance hit. – Kaz Jul 05 '20 at 08:45
  • If you insert a character near the start of the text, you need to copy all 27KB down by 1 byte to make space. This would take perhaps 1s with the 6502, completely unacceptable. Some systems were line-based, so you only do the copy when moving from an edited line to an unedited line. This still produced annoying pauses, so systems like Wordwise split the document into three: low memory had the lines of text before the cursor, the middle section of memory was just for the current line, and high memory held lines after the cursor. Insert was quick, but jumping to the start or end was very slow (a sketch of this split-buffer idea follows this comment thread). – jcupitt Jul 05 '20 at 22:10
  • 2
    @jcupitt: Using a reasonably optimized memory-copy loop, copying 32K would take less than half a second. Using a gap-buffer approach would work fine for most purposes, since people don't usually need to bounce back and forth between the top and bottom of a document multiple times per second. – supercat Jul 06 '20 at 00:02
  • “Less than a half second” is still way too much when a touch typist is going at it. You really don’t like it when your devices have half-second lag. The point is, there really isn’t capacity for the kind of performance optimizations you’re thinking of. Anyone remember Applesoft basic locking up for 30 seconds to run garbage collection? Same thing will happen if a word processor uses a segmented approach on a document that nearly fills RAM. It’s hard to “get” unless you lived in that age and worked with those platforms. – Harper - Reinstate Monica Jul 06 '20 at 04:59
  • @Harper-ReinstateMonica: The half-second lag would be in response to a request to move directly from the beginning of the document to the end or vice versa. The reason Applesoft GC was so horrible is that it would scan through all strings to find the one that was located highest in memory but below the new-string pointer, reduce the new-string pointer by the length of that string and copy it above that pointer, and then scan through all strings to fix all references to that string, and then repeat this procedure another time for every string used by the program. – supercat Jul 06 '20 at 22:04
  • What is all this “would be”? All this talk of hypothetically how 1980s word processing software might have worked. It’s not hypothetical, it’s historical. It happened, I was there. Coding was simple enough back then, that you could see why programs had trouble running with documents that nearly filled RAM. I already hit on a few reasons. These were hard problems that we thought a lot about. Heck I even wrote a word processor of my own, so yeah. Remember, no malloc, no heap. Just one expanse of 32KB RAM and you must manage everything inside it. – Harper - Reinstate Monica Jul 07 '20 at 00:25
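
For readers who haven't met the technique jcupitt and supercat describe in the comments above, here is a minimal gap-buffer sketch in modern Python (illustrative only, not Wordwise's actual code): the free space sits at the cursor, so typing never shifts the whole document, and only long cursor moves cost a copy.

```python
# Minimal gap-buffer sketch of the split-buffer approach in the comments above
# (illustrative only - not Wordwise's layout or code, and with no bounds
# checks). Text before the cursor sits at the front of the buffer, text after
# it at the back; the gap in between absorbs typed characters.

class GapBuffer:
    def __init__(self, size=32 * 1024):
        self.buf = bytearray(size)
        self.gap_start = 0          # end of the "before cursor" text
        self.gap_end = size         # start of the "after cursor" text

    def insert(self, ch):
        self.buf[self.gap_start] = ord(ch)
        self.gap_start += 1         # O(1): nothing else moves

    def move_cursor_left(self, n=1):
        for _ in range(n):          # copying happens only as the cursor moves
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]

    def text(self):
        return (self.buf[:self.gap_start] + self.buf[self.gap_end:]).decode("latin-1")

gb = GapBuffer()
for c in "chapter one":
    gb.insert(c)
gb.move_cursor_left(3)
gb.insert("!")                      # inserted before "one" with no big copy
print(gb.text())                    # chapter !one
```
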
2

The problem with the idea of expanding the memory is that the 6502 only had 64K of address space and pretty much all of it was allocated to something. 32K was used for the RAM, 16K for the current "sideways ROM", 15¼K for the OS, ¼K for internal memory mapped IO and ½K for the "1MHz bus" expansion interface. So it was not possible to simply expand the main user memory area. This meant that while memory expansion options existed they were of limited utility.
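
For reference, that breakdown corresponds roughly to the following map (my summary of the usual Model B layout, with region boundaries quoted from memory, so treat the details as approximate):

```python
# Rough sketch of the standard BBC Model B address map summarised above
# (region boundaries from memory - approximate, not taken from any manual).
memory_map = [
    ("user RAM (including screen)",           0x0000, 0x7FFF),  # 32K
    ("sideways ROM (one 16K bank at a time)", 0x8000, 0xBFFF),  # 16K
    ("OS ROM",                                0xC000, 0xFBFF),  # 15K
    ("1MHz bus (FRED, JIM)",                  0xFC00, 0xFDFF),  # 0.5K
    ("internal memory-mapped I/O (SHEILA)",   0xFE00, 0xFEFF),  # 0.25K
    ("OS ROM (top page, CPU vectors)",        0xFF00, 0xFFFF),  # 0.25K
]
assert sum(hi - lo + 1 for _, lo, hi in memory_map) == 64 * 1024
for name, lo, hi in memory_map:
    print(f"{lo:04X}-{hi:04X}  {(hi - lo + 1) / 1024:5.2f}K  {name}")
```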

Sideways ROMs were all banked into the same 16K chunk of address space. The OS provided mechanisms that allowed calls to be directed to sideways ROMs and handled the bank switching. The OS supported up to 16 sideways ROMs, though extra hardware was needed for more than four.

It was common for the word processing software to be implemented as a "language" sideways ROM, meaning that the word processor did not take up space in user memory. So a machine used for word processing might have three sideways ROMs installed: one for BASIC (you could theoretically remove this, but I doubt many people did), one for the word processor and one for the disk (or network) filing system.

"Shadow RAM" boards did exist; these split the screen memory off into a separate bank from user memory, allowing the higher screen resolution modes to be used without eating in to user memory. I'm not sure what, if any, word processor software supported them.

There were also "sideways RAM" boards which banked RAM into the sideways ROM space. As far as I can tell this was mostly used for soft-loading programs that were intended to run from sideways ROMs.

As far as I can tell the way you would work with lots of text on such systems would be to work in multiple files and save and re-load from disk as needed. Initially such saving and reloading would be manual but automated options apparently existed later.

tripleee
  • 134
  • 7
Peter Green
  • 2,884
  • 20
  • 22
0

Of course.

CPU power is not the bottleneck. In fact, some people used a simple computer based on the NES, connected to a printer, to process documents containing Chinese characters, and the NES had only a few KB of memory.

Sideways ROM/RAM with massive banks controlled by an ASIC was very common in 1990s NES games and software. A similar setup is also possible on the BBC Micro.

But, is it an expansion?

wizzwizz4
  • 18,543
  • 10
  • 78
  • 144
Milowork
  • 37
  • 4
  • 2
    It would be great if you could elaborate on the NES setup you mention, as it seems extremely interesting to me. And while you're at it, maybe also explain why you think a computer has a 'political position'? – Raffzahn Jul 05 '20 at 22:04
  • Most of this answer doesn't answer the question. Retrocomputing Stack Exchange is unlike most other forums; please read the [tour] and see [answer] for details. But it would be nice to have elaboration on that NES setup; perhaps you could self-answer a question about it? Something like "I heard there was a computer based on the NES that people used to process documents containing Chinese characters; what was it?" and then posting the answer in the answer box. – wizzwizz4 Jul 06 '20 at 15:13