
I've been reading through The UNIX-HATERS Handbook. It has many, many very valid criticisms. (I'm still raging that terminal escape codes aren't in the terminal driver...)

There is one anomaly though: One of the chapters is complaining that Unix doesn't support "file versioning", when "real operating systems" have had this feature for years.

Now they don't really elaborate on what that means, but it seems to mean a system where each time you save a file, the old version is still kept, and the new revision just gets a new version number.

I am not personally aware of any operating system in the entire history of computing ever having had this feature. Can someone enlighten me as to what these mythical systems were?

Unix was written an extremely long time ago. As I understand it, in Those Days a "large" system might have as much as 2 kilowords of memory and presumably a similarly tiny amount of disk. The "real operating systems" the book alludes to would presumably be even older than Unix, and hence for even more constrained hardware.

I find it very hard to believe that a system with a 4 Kword disk would "waste" disk space by keeping every prior version of every file ever created. That just seems like you'd run out of disk space within ten minutes.

Have I misunderstood what they're talking about? Or were there actually systems that worked like that?

user3840170
MathematicalOrchid
  • Actually, unless every case in that book has correct proofs with references to 'other systems' (and preferably the year when the comparison took place), anything there is not criticism but simply froth. Decades have passed since that book was published and everything has changed heavily since then. Nowadays it is no more than an amusement. – lvd Feb 07 '20 at 20:09
  • @lvd Certainly several of its complaints are long since moot now. Still an interesting read though... – MathematicalOrchid Feb 07 '20 at 20:11
  • When programming with cards, you'd reprogram a 'file' by making a new card. No need to destroy the old card..... so if you kept every version in a filing cabinet, you've kept the history of the program ;) – djsmiley2kStaysInside Feb 08 '20 at 22:49
  • We are pleased that you are enjoying THE UNIX-HATERS HANDBOOK. For those who are interested, it can be downloaded from https://simson.net/ref/ugh.pdf. I've updated your question to add the link. – vy32 Feb 09 '20 at 02:19
  • File versioning doesn't necessarily mean you keep all old versions of a file. They could be mounted elsewhere e.g. tape. Or you could have delta backup schemes with various time-deltas. The most important part of file versioning is surely the audit trail of which user modified what, when, (and by how many bytes added/deleted/changed, if the old version is accessible). – smci Feb 09 '20 at 03:35
  • Not sure about "most important" part. To me, it was simple access to "undo". Example: I'd be making changes to program code but be unsure about my approach, so trying various stuff. These days I save off intermediate copies of the file under different names (they're not "complete enough" to do something more heavyweight like a git commit). On VMS the intermediate copies would be just "there". I'm talking about a timescale of a few hours while I'm in the flow, I certainly wouldn't leave things in flux overnight. And I'd purge down to the final version when I was done. – dave Feb 09 '20 at 13:29
  • For the sort of version tracking you're talking about, I don't want to track every version (every time the user exits an editor), I want a tracking tool -- these days I use git. Inside DEC, we had CMS (Code Management System). – dave Feb 09 '20 at 13:34
  • The UNIX Hater's handbook, from memory, seemed to be a series of rants as to why things were done "wrong" in UNIX. While it was an interesting read, I can't say I agreed with many of its ideas. – paxdiablo Feb 10 '20 at 05:16
  • @vy32 if you were involved making the book, thank you. I have my paper copy I got back in the 90s. When I ordered it I thought it was a joke, because at that time I thought Unix was the bee's knees - and it mostly was compared to contemporary options. Reading the book was an eye opener and probably added to my urge to discover the wisdoms of the ancients. – Lars Brinkhoff Feb 10 '20 at 06:39
  • I'm a rather frequent user of ITS which, as per @another-dave's comment below might sport the first versioned file system. (In which case it probably grew from the MACDMP format for DECtapes, but I digress.) ITS limits the number of files in a directory. This can be annoying but is also a feature because you will be forced to delete old versions. – Lars Brinkhoff Feb 10 '20 at 06:43
  • @LarsBrinkhoff, thanks for your kind words. I'm glad you liked it! The book was really a collaborative project of everyone on the mailing list and cartoonist, John Klossner (http://www.jklossner.com/), – vy32 Feb 10 '20 at 14:33
  • I was using a versioned file system on a Unix 25 years ago. Give me time and I will even remember the vendor. Maybe Silicon Graphics? – user207421 Feb 10 '20 at 20:55
  • @paxdiablo, I think your comment about rants isn't too far off the mark. I believe the book is kind of a distillation made of vitriolic messages to a mailing list with the same name. Doesn't make it less interesting! – Lars Brinkhoff Feb 11 '20 at 06:55
  • You're off by a couple of orders of magnitude on what a "large" system was when the features the UHH is discussing were introduced. One of their comparison systems (IIRC) was Multics, where the kernel alone was 135 KB and the full compiled OS on disk was over 4 MiB. (Per Wikipedia.) – cjs Feb 28 '20 at 00:12
  • Modern file systems like ZFS or Btrfs have versioning features for files. – Patrick Schlüter Mar 03 '20 at 16:23

5 Answers


FILES-11 on DEC minicomputers was a versioned file system -- RSX-11M, IAS (on PDP-11), VMS (on VAX, Alpha).

Version numbers are very user-visible; they are part of the syntax for specifying a file. And programs are designed to behave appropriately for a versioned file system.

When creating a file, the normal way was to not specify a version number, and the system would assign one higher than the highest extant version. This is the "normal" approach for editors and similar file-modifying programs. Much easier than juggling .BAK files, etc.

When opening a file, the normal way was to not specify a version, and the system would open the highest extant version. This is the normal approach for programs that just read files. If the user is typing in the name of the file to be opened, they can specify a version or not, as required.

You could specify a version, which allows you to modify a file in-place if desired (when writing) or read any previous version. That'd be normal for a file that, for example, was used for random access (database, ....)
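The create/open rules above can be sketched in a few lines of Python. This is a toy model for illustration only, not anything DEC shipped; the class name and file names are invented:

```python
# Toy model of Files-11 version semantics (illustration only).
# A "directory" maps each file name to its set of extant versions.

class VersionedDir:
    def __init__(self):
        self.files = {}  # name -> {version number: contents}

    def create(self, name, contents, version=None):
        """Open for write: with no version given, assign one higher
        than the highest extant version (the editor's normal case)."""
        versions = self.files.setdefault(name, {})
        if version is None:
            version = max(versions, default=0) + 1
        versions[version] = contents
        return version

    def open(self, name, version=None):
        """Open for read: with no version given, return the highest
        extant version (the reading program's normal case)."""
        versions = self.files[name]
        if version is None:
            version = max(versions)
        return versions[version]

d = VersionedDir()
d.create("LOGIN.COM", "first draft")   # becomes LOGIN.COM;1
d.create("LOGIN.COM", "second draft")  # becomes LOGIN.COM;2
d.open("LOGIN.COM")                    # -> "second draft" (highest version)
d.open("LOGIN.COM", version=1)         # -> "first draft"
```

Passing an explicit `version` to `create` corresponds to opening an existing version for in-place modification, as described above.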

With respect to "not running out of space" -- two things. Firstly, typical disks of the time could hold many thousands of files with typical file sizes of the time. Think program source files (Macro-11). A large file is, what, 1000 lines? That's surely under 50K bytes, or 100 blocks in PDP-11 terms. An RP04 disk pack, a storage device from around 1974, held about 88MB. For another data point, the RSX-11M-PLUS kernel (exec and drivers) source files occupy around 4.5MB on my PiDP-11 system.

Secondly, people generally tidied up. While doing active program development you'd likely end up with dozens of versions. When happy you didn't need to go backwards, you'd purge down to one version. And the computer operator might very well decide to purge everything down to a couple of versions (if he was nice, with fair warning to the users) if the disks were getting near full. In summary, users were aware they were using a finite resource and behaved accordingly.

In my opinion it's a giant step backwards to not have a versioning file system.

dave
  • "In my opinion it's a giant step backwards to not have a versioning file system." I couldn't agree more. – Raffzahn Feb 08 '20 at 00:52
  • I agree too, with the caveat that I am not finding Apple’s approach to versioning very convenient. But maybe that’s because I have gotten used to not having one—last time was on VMS around 1989. – WGroleau Feb 08 '20 at 07:01
  • Also in re "running out of space", at least VMS allowed you to set per-directory (and I believe per-file) 'max versions' attribute(s), to keep the revision history under control. – Vatine Feb 08 '20 at 08:59
  • @WGroleau - not familiar with Apple's OS, but isn't their approach like the Windows "previous versions" feature (which seems to be getting de-emphasized these days)? I agree, that is clunky. I ascribe the problems to having to do this underneath an existing population of programs that do not expect and do not see file version numbers. – dave Feb 08 '20 at 14:12
  • Time machine backups (stored on a separate disk) use a date/time-based snapshot, with hard-links for files that haven't changed since the previous backup. To recover, there's a finder window that can go forward/backward in time. – Kelvin Sherlock Feb 08 '20 at 17:23
  • OK, so not the same thing? (I cannot, for example, just open "version 17" of a file by typing its name into any random editor) – dave Feb 08 '20 at 17:27
  • VERY different. It's like making a backup once an hour. There is no OS, FS, or application support needed. (Just a daemon that keeps track of which files have changed since the previous backup). – Kelvin Sherlock Feb 08 '20 at 19:17
  • I wasn’t speaking of Time Machine. TextEdit and other Apple programs allow you to examine previous versions. Whether they use Time Machine invisibly for that, I don’t know. – WGroleau Feb 08 '20 at 21:48
  • @Vatine or the sysadmin could have a regular PURGE/KEEP/SINCE=X batch job, if he didn't want to bake the rules into the file system. – RonJohn Feb 09 '20 at 11:53
  • @Vatine, I don't remember whether you could configure maxima on directories or files but I do remember the purge /keep=N <some set of files> command that you could periodically run to ensure file counts were kept under control. – paxdiablo Feb 10 '20 at 05:11
  • "A way-too-big source file is, what, 10,000 lines? That's surely under 50K bytes." Really? So your average line length is < 5 characters? – twalberg Feb 10 '20 at 16:06
  • @twalberg - I write terse code, but probably not that terse. I have no idea what I was thinking when I typed that! I'll clean it up later. Thanks. – dave Feb 10 '20 at 21:57
  • @twalberg - fixed by rewriting – dave Feb 11 '20 at 02:42
  • @WGroleau Those file versions are unrelated to Time Machine. They're stored in a hidden folder. https://eclecticlight.co/2018/02/19/document-versioning/ It's part of Cocoa so well-behaved, document-based, native GUI apps get it for free but it doesn't exist for a command-line tool like vi. – Kelvin Sherlock Feb 12 '20 at 10:34
  • @KelvinSherlock, that’s what I figured. Point is that it’s less convenient in some ways than one like VMS that makes them easy to access by any program. – WGroleau Feb 12 '20 at 17:15
  • If I remember correctly, you could access a specific file version by appending ; and the version number to the filename in RSX-11. But that was a very long time ago and my memory is a bit fuzzy. – Mark Ransom Jul 28 '21 at 01:58
  • @MarkRansom - that's exactly correct. – dave Jul 28 '21 at 01:59
  • Thanks for reassuring me that I still have a few intact brain cells. I was the sysadmin for said RSX-11 system. A few years after I left that company, I got a call asking if I'd consider doing a little contracting on the side for them. When I realized that I couldn't even remember how to log on, I turned them down. – Mark Ransom Jul 28 '21 at 02:12

There were quite a few operating systems that had file versioning in the same era as Unix.

Many file systems that we are familiar with today have just a few components to a file name, such as:

Name.type

They might have a path:

\folder\folder\Name.type

They might have a server (UNC as an example):

\\server.domain\folder\folder\name.type

In many current systems, if you make a copy of the file, or attempt to overwrite with the same name, it might ask if you want to overwrite. If you choose not to, you get a version number appearing:

Name.type(1)
Name.type(2)

So you can experience file versioning on current operating systems.

However, a fully versioned file system would not normally overwrite a file. It makes a new version number every time. This means whenever you use an editor or save a spreadsheet, or other document you get a whole stack of numbered versions of the file. The version number is usually stored at a distinct location in the directory structure and is not part of the name. If you refer to a file by name you are given the latest. You can clean up old versions of the file with specific system commands, like PURGE.
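The clean-up side of this can be sketched in Python. This is a toy illustration of what a PURGE-style command decides, not actual VMS or GEORGE code:

```python
# Toy sketch of PURGE semantics (illustration only):
# given the extant version numbers of one file, pick the
# versions to delete, keeping only the newest `keep` of them.

def purge(versions, keep=1):
    """Return the version numbers a PURGE-like command would remove."""
    survivors = sorted(versions)[-keep:]
    return [v for v in sorted(versions) if v not in survivors]

purge([1, 2, 3, 4, 5], keep=1)  # -> [1, 2, 3, 4]  (like a plain PURGE)
purge([1, 2, 3, 4, 5], keep=2)  # -> [1, 2, 3]     (like PURGE/KEEP=2)
```

A real implementation works per file per directory, but the keep-the-newest-N rule is the whole idea.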

Two example operating systems that used this were VMS (from DEC) and GEORGE III (from ICL). (I can probably add a fair few more when my memory digs them out).

Did it exhaust storage: yes and no. File storage was usually quota'd to users (these systems were always multi-user), and an individual user might exceed their quota and have to tidy up. The other aspect is that most files were just text; there was much less multimedia like images and video than we have today. Also, these computers were huge, not as small as you imagine, and exchangeable disk/tape storage was the common way of having many files - you swapped the disc pack or the tape.

  • How did this not instantly exhaust all available storage? – MathematicalOrchid Feb 07 '20 at 20:07
  • When used with deduplication, this wouldn't exhaust storage quite so fast. – lvd Feb 07 '20 at 20:11
  • @MathematicalOrchid In my experience, it did almost instantly exhaust available storage. When I was using VMS in college, one of the first things we had to learn was how to turn off file versioning. Ironic that VMS is now a faded memory in an era when disk space is abundant and file versioning would be a welcome feature. – rwallace Feb 07 '20 at 20:14
  • It did quickly exhaust all storage, but there was a simple purge command to remove older versions. FWIW, NTFS supports versioned files, but no software uses it. – Erik Eidt Feb 07 '20 at 20:14
  • @rwallace - Windows since NT has so much VMS heritage that it's practically DEC at the core. – scruss Feb 08 '20 at 00:25
  • I disagree with "never permits overwrite". Certainly DEC systems did (just open the current version ;0 for write). I'm pretty certain GEORGE allowed it too (just add (+0) to the entrant description). – dave Feb 08 '20 at 00:26
  • @another-dave I think you are right; and I did wonder about that as I typed it. – Brian Tompsett - 汤莱恩 Feb 08 '20 at 09:56
  • @rwallace not a faded memory. I was using OpenVMS up until 8 years ago when I moved employers. I’m not sure if they’re still using it as they were experimenting with porting to Linux when I left. – Darren Feb 08 '20 at 11:59
  • With GEORGE, of course, the file system was "infinitely large" - files would migrate between disk and tape. In extremis, the backing store unjammer would be called in to free up some storage. Could individual versions be archived and retrieved? As in, version 42 is on disk but versions 41, 40, 39, …, were not. I don't know whether I ever knew that detail, but it seems like an obvious feature the designers would want. – dave Feb 08 '20 at 14:25
  • @another-dave YES and that was a unique feature that other systems did not emulate. Perhaps I should add another paragraph; unless you want to add it to your answer instead... – Brian Tompsett - 汤莱恩 Feb 08 '20 at 15:05
  • Control Data's supercomputers of the 80s did this and they stored deltas. So you had the original file and then a delta stored for every edit to give you the current version. – user2121 Feb 08 '20 at 21:06
  • @MathematicalOrchid Imagine a file that is 4.3 KB of memory is stored on the disk using five 1KB memory blocks. Imagine the file is edited, but only part of the binary data changed, so 3 of the old 1KB memory blocks are still useful. The new "file" can be composed of a linked list of the blocks (which can suffer performance issues, especially on non-SSDs, but may be perfectly tolerable for backup systems where fast writing is more important than fast reading). – Jamin Grey Feb 09 '20 at 01:47
  • Further, only the most recent version of a file actually needs to be instantly accessible: older memory blocks not "active" can be compressed and moved elsewhere (simultaneously passively defragging the drive). If you want to look at a modern filesystem with true file versioning, check out the ZFS filesystem - I forget how ZFS handles the situation, and naturally, memory does still stack up over time, but ZFS does let you prune older snapshots to free memory. – Jamin Grey Feb 09 '20 at 01:50
  • @BrianTompsett-汤莱恩 - go ahead, you've mentioned GEORGE and I haven't. There was no automatic "retrieve when needed" mechanism in the systems I mentioned, and I think that's as important as getting the file off disk in the first place. – dave Feb 09 '20 at 13:45
  • VMS was the first thing I thought of when I read the question. And it was fairly widely deployed for a fair space of time, so not a niche product. – John Bollinger Feb 09 '20 at 21:22
  • @BrianTompsett-汤莱恩, do you remember the year when GEORGE III was introduced? – Lars Brinkhoff Feb 10 '20 at 07:36
  • I recall reading 1969 somewhere recently but do not have a reference handy. GEORGE was named in 1965 but that name refers to a series of different operating systems. – dave Feb 10 '20 at 12:48
  • Found the reference: last line of this page -- April 1969. – dave Feb 11 '20 at 03:18

I am not personally aware of any operating system in the entire history of computing ever having had this feature.

Siemens BS2000 of the early 1970s may be an example here (*1) with a feature they called file generations. A new file could be marked in the catalogue as having generations, setting a base generation number and how many generations are to be held (*2). It was presented by a single entry, and for most purposes handled like any other file.

To address any of the generations, a file name could be suffixed with its generation number. For example, a file named "TEST.FILE" could be defined as holding up to 5 generations, with generations 6..10 existing and generation 10 being the newest (current). Valid names for file operations would be:

  • "TEST.FILE" accesses the current generation (#10)
  • "TEST.FILE(9)" accesses the explicit generation #9
  • "TEST.FILE(-1)" accesses the generation before the current one (also #9)

The current-generation pointer could be moved using shell commands and/or an API, for example to revert to a previous version. If in our example it were set to 8, then

  • "TEST.FILE" accesses the current generation (#8)
  • "TEST.FILE(-1)" accesses the generation before the current one (#7)
  • "TEST.FILE(+1)" accesses the generation after the current one (#9)

This mechanism makes it easy to handle things like logfiles, program versions or databases. Roll-back or roll-forward can be done with a simple command, and unlike any ad-hoc naming scheme, no program has to be modified to work with file generations - unless some special features are to be used, all a program sees is a regular file.
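The name-resolution rule above can be sketched in Python. This is a toy illustration of the addressing scheme only, not BS2000 code, and the function name is invented:

```python
# Toy sketch of BS2000-style generation addressing (illustration only).
# Given the extant generation numbers and the current-generation pointer,
# resolve no suffix, an absolute "(9)", or a relative "(-1)"/"(+1)" reference.

def resolve(ref, generations, current):
    """ref is None (current generation), an absolute number,
    or a signed offset given as a string like "-1" or "+1"."""
    if ref is None:
        return current
    if isinstance(ref, str) and ref[0] in "+-":
        target = current + int(ref)   # relative to the current pointer
    else:
        target = int(ref)             # absolute generation number
    if target not in generations:
        raise FileNotFoundError(f"no generation {target}")
    return target

gens = range(6, 11)      # generations 6..10 exist
resolve(None, gens, 10)  # TEST.FILE     -> 10
resolve(9, gens, 10)     # TEST.FILE(9)  -> 9
resolve("-1", gens, 10)  # TEST.FILE(-1) -> 9
resolve("+1", gens, 8)   # TEST.FILE(+1) -> 9, after the pointer moved to 8
```

Moving the pointer (as in the roll-back example above) changes only the `current` argument; all relative names then shift with it.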

Unix was written an extremely long time ago. As I understand it, in Those Days a "large" system might have as much as 2 kilowords of memory, [...]

Erm, these were the smallest systems. Keep in mind, a PDP of that time was the lowest end of computers available. The upper end, where 'real' OSes were used, was quite different. For example, the database system used for the 1972 summer Olympics used two mainframes with 2 MiB of core memory each and more than 30 disk drives of 77 MiB each (*3). Those were large systems (*4). Not PDPs.

Admittedly, such a configuration is close to the upper limit what was used at the time, but it wasn't a unique installation.

I find it very hard to believe that a system with a 4 Kword disk would "waste" disk space by keeping every prior version of every file ever created. That just seems like you'd run out of disk space within ten minutes.

As with many other features, they are only useful on a capable setup, the same way that subdirectories only make sense with drives large enough to hold them, and so on. Equally important, features only make sense from an application viewpoint. An application needing to hold versions will love an OS that supports it in a consistent way.

Bottom line: Developing an OS's capabilities oriented at the smallest possible configuration doesn't sound like a good idea, or does it?

Have I misunderstood what they're talking about? Or were there actually systems that worked like that?

Quite a lot. In the mid '70s it was seen as a great addition to extend the usability of file systems. At that time, many features we would nowadays request from a database system were provided directly by the OS and file system.

Oh, and it's not only a thing of the past. IBM z/OS, for example, still supports its mechanism of file versions, called a Generation Data Group.


*1 - BS2000 was based on RCA's TSOS, but I'm not sure how much was already present in TSOS.

*2 - This includes tape storage, so generations could be moved to tape for long term storage (and to save disk space). The catalogue would still be used to manage them.

*3 - Yes, that's in total about 2 GiB in 1972 :))

*4 - That view, which users of 'real' computers had of Unix, is reflected in the Unix Hater's Handbook, isn't it?

Raffzahn
  • "data base system used for the 1972 used two mainframes": Are you referring to the computer system used for reporting results at the 1972 summer Olympic games in Munich? I seem to recall it was one of the first wide-area real-time systems (though SAGE, SABRE and the like were earlier). I could be totally mis-remembering though since I can't find any info online beyond a one sentence reference. – Alex Hajnal Feb 08 '20 at 14:24
  • Well, the 'first' part is as so often debatable, but it was for sure one of the first offering access to real-time information to a large audience. The computing center was set up at the press center in Munich, with more than a hundred terminals in-house, a subsystem in Augsburg, and terminals at all events for entering data as well as for journalists to gather information. Events were (where possible) 'wired up' so results were captured automatically and available immediately. Everything else was keyed in on site. – Raffzahn Feb 08 '20 at 15:41
  • @AlexHajnal The mainframes themselves were each set up in a 'T' configuration, where the CPU is the vertical bar and memory the horizontal. That way the delay due to cable length was minimized :) Memory was made of 64 blocks of core with 32 KiB each. Most definitely the upper end of what was possible in 1971/72. It also delivered a perfect testbed for statistics about core reliability (at the time): one non-recoverable error per 32 KiB per month. With this setup it came down to two per day per machine. Like clockwork. Best of all, the software was made in a way to withstand most errors and continue working – Raffzahn Feb 08 '20 at 15:48
  • Fascinating. It sounds extremely advanced for 1972. Do you know where I could find more info on the system? – Alex Hajnal Feb 08 '20 at 16:15
  • @AlexHajnal Not really. My knowledge is mostly from listening to the old guys - my boss during 1980-85 was part of the hardware service team in 1972, so he could tell quite some stories :) – Raffzahn Feb 08 '20 at 19:14
  • The description of the OS resembles ICL GEORGE III in some ways, which seemed interesting since Fujitsu eventually acquired both ICL and Siemens, but I would guess it's just a case of "ideas that were in the air at the time". – dave Feb 10 '20 at 12:45
  • @Raffzahn, so that's 16 Mbit of core memory, correct? The MIT AI lab moby memory from FabriTek was of comparable size: 10 Mbit, installed in 1967. (If someone is checking the numbers, the moby was 40-bit words.) – Lars Brinkhoff Feb 11 '20 at 07:11
  • I'd like to clarify that to write "PDP" might be confusing since there were several distinct families of PDP computers. They ranged from the PDP-8 which was absolutely tiny for its day, to the PDP-10 which was a medium sized computer supporting dozens of timesharing users. In between were the PDP-11 and the 18-bit family. – Lars Brinkhoff Feb 11 '20 at 07:21
  • @LarsBrinkhoff PDP: The OP refers to Unix and machines with 2 KiWords. That's most definitely the lowest end. Unix was done for the PDP-7 and 11 (at that time), which were likewise lower end - after all, being cheap was the main selling point for buying a PDP-11. Moby: I'm not so sure what your point is in referencing a very specific aftermarket solution for research in comparison to an off-the-shelf installation? – Raffzahn Feb 11 '20 at 12:31

In addition to what others wrote: ITS, TENEX, TOPS-20.

In ITS, files are named by two strings of at most six characters each. The second file name can be a number to specify a version. If you open a file for reading, > will access the latest version; when writing, it creates a new version. < refers to the oldest version.
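The > and < rules can be sketched in Python. A toy illustration of the selection logic only, not ITS code, with an invented function name:

```python
# Toy sketch of ITS-style version selection (illustration only).
# The second file name may be a number; ">" selects the newest
# numeric version when reading (or one higher when writing a new
# version), and "<" selects the oldest.

def select(fn2, versions, writing=False):
    """Resolve the second file name fn2 against the extant versions."""
    if fn2 == ">":
        return max(versions) + 1 if writing else max(versions)
    if fn2 == "<":
        return min(versions)
    return int(fn2)  # an explicit numeric version

versions = [1, 2, 5]
select(">", versions)                # reading  -> 5 (latest)
select("<", versions)                # -> 1 (oldest)
select(">", versions, writing=True)  # -> 6 (a new version is created)
```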

Moby edit. Let's make a timeline.

  • ~1965 - Project MAC: MACDMP
  • 1967 - Project MAC: ITS
  • 1969 - BBN: TENEX (→ DEC TOPS-20)
  • 1969 - ICL: GEORGE III
  • 1971 - DEC: RSX-11 (→ IAS, VMS)
  • ~1971? - Siemens: BS2000
  • 1993 - Microsoft: Windows NT
Lars Brinkhoff
  • TOPS-20 (and I assume Tenex) had a double disk-wasting approach :-) As well as the file system supporting versioning, deleted files did not immediately free up disk space - they vanished from normal sight, but could be undeleted. An explicit 'expunge' operation was needed to reclaim space. I think the OS had provision for automatic expunge when free space got too low (I imagine Lars knows all this, just general info). Unlike modern "waste basket" approaches, this happened inside the file system itself. – dave Feb 08 '20 at 14:03
  • Wikipedia claims ITS was "possibly the first" to have a versioning file system. – dave Feb 08 '20 at 14:05
  • This page contains a talk from George Felton (the George behind GEORGE - the OS was named by his team) which indicates design was under way in 1965. My half-guess of 1969 for release is likely in the right ballpark. – dave Feb 10 '20 at 13:00
  • The last line of the page linked in my previous comment gives the release date as April 1969. – dave Feb 11 '20 at 03:22

My experience is with the VAX and VMS. It had versioned files.

Back in the day, it was not uncommon for some programs, like editors, to create a backup copy of the file you were working on. In the end you'd have, for example, file.txt and file.bak.

The versioned file system is simply that concept writ large. Instead of file.txt and file.bak, you had file.txt;2 and file.txt;1, with the lower numbered version being the older one.

You'll note that it's not used for files that are changed in place (notably, things like databases). Rather, it's for files that are rewritten wholesale.

If you open a file for writing that already exists, rather than overwriting the old version, it simply creates a new version. It's a simple mechanic.

On systems like UNIX, applications have to jump through hoops to manage this: .BAK files, adding timestamps to file names, file_2.txt, etc. On versioned file systems, this is unnecessary, and "free" for all applications.

VMS has a PURGE command that goes through and removes all of the older versions.

It should be noted that modern macOS applications implicitly version files today. The OS has built-in application support for this model (note, the file system does not; the application framework does). Edit, for example, a word processor document, and it internally makes new versions and manages that for you.

This is distinct from modern version control systems, which, obviously, also do this but offer a different workflow from versioned file systems. Many developers leverage these version control systems not just for source code, but for many other kinds of files.

I believe the Symbolics Lisp Machines used a versioned file system as well.

Will Hartung
  • When I worked at my university's computer lab, I remember saving the day for numerous people by using the 'purge' command :-) – bjb Feb 11 '20 at 17:51
  • There was also a technique of keeping the last n versions but I can't remember what it is - that was over 30 years ago. – cup Feb 27 '20 at 13:02