
From the names of the operating systems alone, i.e. Unix = Uniplexed Information and Computing Service vs. Multics = Multiplexed Information and Computing Service, I at first had the misconception that the prime difference between Multics and Unix must be that Multics offered multi-access to multiple users via multiprogramming, whereas in Unix each "multi" was replaced by "single". But later on I found that the term Unix was coined simply as a pun on Multics. Actually, both Multics and Unix can be considered evolutions of the early time-sharing systems.

I know that Ken Thompson and Dennis Ritchie started to write a simple system as an alternative to Multics, in order to make it possible to run on more modest hardware, and that this system then became Unix.

I think discussing all the differences between Multics and Unix would be too broad a question. So I would like to know the most significant technical difference between Multics and Unix.


Clarification (considering comments):

  • By Unix, I mean the original Unix of the 1970s, not modern Unix or BSD
  • Yes, the main difference is the simplicity of Unix, and I want to know what makes Unix simpler compared to Multics
  • If differentiating whole systems is too broad, limit the comparison to the system kernels
Pandya
    Still way too broad. These are basically different systems for similar usages, one having superficially peeked at the other. There is no coherent and useful answer, so the question will just attract opinions. – Raffzahn Jul 20 '20 at 13:27
    @Raffzahn how about limiting question to kernels rather than operating systems? – Pandya Jul 20 '20 at 13:27
    By the way, how can technical differences be opinions? They would be facts, I think; I am not asking which one is better. – Pandya Jul 20 '20 at 13:28
    The answers attracted by such questions are usually opinions. (And on a side note, please edit your comments if you want to add something - posting multiple comments within minutes is a bad idea.) Also, no reason to get agitated. – Raffzahn Jul 20 '20 at 13:35
    The basic difference is what you already stated - Multics is complex while Unix is simpler. That complexity slowed down development. – Brian Jul 20 '20 at 13:35
    Good primers on Multics: https://multicians.org/history.html and https://multicians.org/features.html A rounded (if perhaps biased) rebuttal to the myths surrounding Multics: https://multicians.org/myths.html – Jim Nelson Jul 20 '20 at 17:09
    (another link) A paper from unix creators: https://www.bell-labs.com/usr/dmr/www/hist.pdf – lvd Jul 20 '20 at 17:48
  • If I had to pick a single one, it'd probably be use of segments. BTW, there's an emulator, so you can experience differences in usage yourself. – dirkt Jul 20 '20 at 19:11
    To be valid here the "Unix" in question should really be restricted to be the original Unix, not the modern descendants 50+ years later. – davidbak Jul 20 '20 at 19:35
    @Raffzahn, on this site, a great many of the answers are based on opinions or recollections and are therefore subjective. A look through the previous answers shows that, your own very much so, and suggests it is the spirit of the site. – TonyM Jul 21 '20 at 10:38
    (I'm not up to writing an answer at this time, sorry) Multics was intended to be a computing utility - the first/a prototype. As a utility it demanded certain features in an extremely robust way: Two such features were reliability (availability) and security. Considering security: Multics had an extremely robust security model, enforced with special hardware features (rings). This made Multics the only system in its time - or for many years after! - certified by the US govt for running multiple security classification levels simultaneously. Unix was not as secure nor was it a focus. – davidbak Jul 21 '20 at 14:32
  • @davidbak You should read 'Reflections on Trusting Trust' by Ken Thompson before you go on about the security of Multics... Not that that means much with government, but still. See: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf - in it he wrote that he couldn't find the original paper he found the idea in; I sent it to him years ago, though where that is now I don't know. Anyway, it's true that Unix isn't exactly secure, though it has better models than, say, Windows. (Edit: I realise you're comparing the two - I'm just adding to it.) – Pryftan Jul 22 '20 at 17:50
  • @davidbak: I'm intrigued by your "it's true that Unix isn't exactly secure though it has better models than say Windows" comment. I used very early Unix systems (I started using Unix in the pre-System V days) and very early Windows systems (my first was NT 3.51; 16-bit Windows and Win9x had no security, so it's hard to use them as comparisons). From my memory, (very early) NT had a more sophisticated security model (based on Cutler's VMS days) than very early Unix did - of course I may be wrong. – Flydog57 Jul 22 '20 at 23:35
    @Flydog57 - that was Pryftan who said that - but now that you mention it I agree with you. NT had, from the beginning, ACLs for fine-grained discretionary access control, plus auditing, both features missing from Unix. It also has other security/management features such as mandatory policy enforcement, also missing from Unix. Perhaps Pryftan will return and elaborate. – davidbak Jul 22 '20 at 23:50

7 Answers


From this list of Multics features, almost all are recognizable in modern UNIX-style systems in one form or another. Looking for distinctions between the two is made difficult by the longevity of UNIX and the proliferation of its children.

For me, the most interesting distinction between Multics and UNIX (and most operating systems to follow) was Multics' concept of segments. In Multics, all memory belonged to a segment, whether in core or on disk, and segments could be paged in from and out to disk on-demand. Distinctions between files and RAM are less pronounced than in the UNIX model. Individual segments could be named and assigned attributes.

It's often said that in Multics, all files are in effect memory-mapped (as with Unix's mmap) rather than accessed via open/read/write/close. Once a segment was opened or allocated, the program accessed it through ordinary memory addressing. This model worked for both code and data, and was the basis for Multics' dynamic linking.

From the link above:

A basic motivation behind segmentation is the desire to permit information sharing in a more automatic and general manner than provided by non-segmented systems. Sharing must be accomplished without duplication of information and access to the shared information must be controlled not only in secondary memory but also in main memory.

This meant a single permissions model could be employed for sharing code and data, both in memory and on disk. Segments unified a number of mechanisms that later operating systems would "re-invent" for in-memory vs. on-disk blocks.

Whether this is superior to traditional file primitives (open/read/write/close) is a separate question. And certainly UNIX came to support memory-mapped files, demand paging, shared memory, and dynamic linking. But because Multics blurred the differences between files and RAM from the start, and designed for segments from the ground up, segments were not an optional or advanced feature, they were a fundamental storage paradigm.

(Confession: I've never written for Multics, so I'm only speaking from sources I've read.)

Jim Nelson
    re: it's as if all files are memory-mapped Not 'as if' - they are. Storage is addressed through virtual memory. If you really have to, you can build 'read' and 'write' routines on top of that. – dave Jul 20 '20 at 21:14
    re: Whether this is superior to the traditional file-based model … Multics has a file system; in fact it invented the hierarchical file system. The debate is rather between read/write primitives and memory mapping. Memory mapping is attractive, but it seems to me your file size has to fit in your address space. If it won't, then you (the system designer) have to invent some way to move a 'window' of address space through the file, and suddenly it's not as clean as it was. – dave Jul 20 '20 at 21:20
    By "as if" I meant with w/ mmap() or equivalent; I've clarified that. I never said Multics lacked a file system, and the links I provided indicate it was the first with an HFS. By "file-based model" I meant accessing disk via open/read/write/close primitives rather than memory addressing. (I would think that's clear enough considering the context.) "The debate is rather between read/write primitives and memory mapping" is, in my opinion, only rewording what I said above. – Jim Nelson Jul 20 '20 at 22:27
  • On re-reading I understand your point, but that's not what I understood by "file-based model" on first read. I will delete the comment if you'd prefer. – dave Jul 20 '20 at 22:48
  • No need. Let me see if I can clarify my answer. – Jim Nelson Jul 20 '20 at 22:58
    I don't know Multics, but I think I understand the point you're making (interesting). Phrasing suggestion: "all files are memory-mapped (as if via Unix mmap)". The "as if" shouldn't attach to what Multics actually does, because, as @another-dave says, there's no as-if about it; it literally uses hardware support for virtual memory to demand-page that address space, just like Unix mmap. – Peter Cordes Jul 20 '20 at 23:09
    Related: What is the "FS"/"GS" register intended for? laments x86-64 making a segmented memory model (and thus Multics) impossible. – Peter Cordes Jul 20 '20 at 23:16
    @another-dave - Multics' hardware used 36-bit addresses and words, but unfortunately those 36-bit addresses are split into 18-bit segment number and 18-bit offset, which means the segment size limit is 2^18 words or 1.25MiB. As you suggest, this is too small for practical memory mapped operations; if the processor had been designed with segments as a separate part of the address, allowing a full 36 bits of offset, this would have been fine for any plausible application at the time, but it seems like the need to retrofit segmentation onto a preexisting 36-bit processor crippled the design. – occipita Jul 21 '20 at 03:56
    @occipita In those days, the segment size was extremely large. Hardly anybody wrote code using megabyte contiguous data structures: only a handful of big machines had that much memory. – John Doty Jul 21 '20 at 17:55
    Multics has an I/O library, roughly analogous to stdio. One of the things this supports is "multi-segment files". These are directories with a special mode set to treat it as a group, and the library automatically switches between segments to give the illusion of unlimited file sizes. Similarly, database libraries spread the data among multiple segments as necessary. – Barmar Jul 21 '20 at 19:50
    And there's an msf_manager_ library that can be used by other applications. – Barmar Jul 21 '20 at 19:51
    FWIW, a system with "all files are mapped into memory" but not segmented is Tenex on the PDP-10. – dave Jul 22 '20 at 00:31

In Multics, not only was all data mapped into memory, but all binary executables were what we now call DLLs. There was no natural "main program" concept: every binary executable was a compiled function. Processes were extremely "heavy": you got one when you logged in, and everything you ran was a DLL linked into that process. This messed up imported code that assumed static initialization happened every time the "main program" was run.

Unix, with a separate process for each command using standard I/O streams, naturally supported pipelines, an excellent way to factor complex jobs into simple chunks. Pipelines were not technically impossible on Multics, but they were unnatural and difficult to set up.

John Doty
  • not only was there not a 'main function' but any procedure in a segment could be given a name in the file system - then when you used that name at the command prompt that procedure in that segment was run! (I think the procedure 'signature' needed to make sense...) And that's also the way "linking" worked - you just referred to a name - and it was found in a segment - and the OS "linked" it dynamically ... – davidbak Jul 21 '20 at 19:54
    @davidbak Signatures weren't checked: as long as the segment contained linkage information, it would run. Experienced multicians put trailing underscores on names of functions not intended to be called from command level to avoid confusion. This provoked resistance from more casual users: "Why is the function sin_() rather than sin()?" – John Doty Jul 21 '20 at 20:21
  • Did that also mean that something the user ran could ruin the whole userspace, which was a single process? – lvd Jul 22 '20 at 16:04
    @lvd The individual user's userspace, yes. And PL/I tended to encourage sloppy pointer usage. When things seem knackered, type "new_proc" and go make yourself a cup of tea while it builds a clean process for you. – John Doty Jul 22 '20 at 16:15
    "we now call DLLs" Surely you meant "SO". – Déjà vu Feb 09 '21 at 07:40
  • @e2-e4 When I see "DLL" I read "dynamically linked library", not specifically the kind with a ".DLL" suffix. On the machine at my left hand they are ".so", on my right they are sometimes ".dylib". – John Doty Feb 09 '21 at 12:55
  • https://retrocomputing.stackexchange.com/questions/8361/process-model-in-early-unix suggests that Unix was originally closer to what is claimed about Multics here, with one process per terminal. – tripleee Jan 16 '23 at 18:03
  • @tripleee Multics had one process per user. Some users were daemons without terminals, it ran batch processes, and the number of interactive users varied as they logged in and out. The Multics process model was very different from Unix: a process's (virtual) memory footprint included every program that the process had run since its creation. The "top level control by chaining" model was common in batch and single user interactive systems at the time, but Multics was unique. – John Doty Jan 18 '23 at 00:56

A couple other significant differences between Multics and early Unix systems in the security area:

  1. Multics had rings (8 in commercial versions), whereas Unix effectively had only two -- supervisor and user. This allowed privileged subsystems to be created that ran in-process but were protected from tampering by the user via the ring mechanism. This was used for the message system, the mail system, data management, etc. Thanks to the additional rings, it was not necessary to move these privileged subsystems into the "kernel" (ring 0).

  2. Multics had Mandatory Access Control (MAC), in addition to its Discretionary Access Control system based on ACLs. MAC prevented writing "down" from higher privileged levels to lower privileged levels, and reading higher privileged levels from lower ones. It was successfully deployed by the US government to allow users cleared to only Secret to run alongside users cleared to Top Secret, while ensuring that no improper access occurred. It also had a robust system auditing mechanism that ensured audit records for any security-relevant events (such as covert channels, or attempts to write down or read up).

eswenson

Another significant difference between Multics and Unix was the size of the virtual memory accessible to a process.

It is true that each Multics segment was limited to 255K 36-bit words in length. But each process mapped more than 300 such segments into its address space. About half of these segments belonged to the kernel and inner-ring programming environments. [Yes, the kernel was memory-mapped into each user's process, and its pages were shared among all users of the system.]

160 or more segments were therefore available to the user. Each could contain programs or data, and all of this memory space was usable in each process: more than 40 GB of virtual memory as the upper limit on a process's working-set size. Dynamic linking and a flexible segmented pointer-register mechanism allowed programs running in many segments to work together as a large subsystem, and to share huge pools of data as well.

There are many operating systems that cannot match that programmable memory space even today. Certainly early Unix was limited to much less than that (1 or 2 GB or less), as were processes on IBM OS/370 systems.

gdixon
  • But this huge virtual memory space was not backed by much physical memory. In 1970, MIT had two Multics machines, a 384 kword machine for users, and a 256 kword machine for developers. A page was a kword, so if the user machine had 20 people logged in, there were <20 pages per user. 20 pages isn't a lot. Call a function? Well, its code probably isn't in the same page as the caller, and its linkage/static data is elsewhere. It calls another function, same problem. Stack and heap are in different segments. Easy to have >20 pages active. Even small, simple programs thrashed. – John Doty Jul 22 '20 at 16:59
  • @JohnDoty RAM was expensive in those days. In 1980 MIT-Multics was upgraded to 2.5 MW – Barmar Jul 23 '20 at 13:05
    Most systems of the 1970s had little physical memory backing the virtual memory subsystem. However, the Multics virtual memory architecture had the advantage of being able to share publicly read-only and read-execute-only pages among all processes on the system. So pages containing commonly accessed subroutines remained in memory for instant use by any running process. This included not only kernel pages (most of which were not wired into physical memory, but paged in when accessed); but also user-ring routines (Multics shell; shared object code snippets - aka pl1_operators_; terminal I/O). – gdixon Jul 23 '20 at 13:40
  • "In theory, theory and practice are the same. In practice, they're different." Sharing pages only helps if they have a substantial residence time. I knew a guy whose strategy for logging in to Multics was to attempt to log in from every free terminal in the terminal room, thus creating enough demand that the pages involved would actually be in memory. This reduced login time considerably during peak hours. – John Doty Jul 23 '20 at 17:01
  • One revelation came in 1974, when I ported Kildall's PL/M compiler to Multics. It was written in portable Fortran, and not intended for virtual memory or dynamic linking. But once I'd dealt with the static initialization problem, it worked great! A compact, statically linked binary with few calls to DLLs was very quick and responsive. Avoiding page faults and linkage faults was a winning strategy. So, exactly what was virtual memory and dynamic linking doing for the user, at 1970s scale, anyway? – John Doty Jul 23 '20 at 17:11
  • Multics supported statically linked executables: known as bound objects. Dynamic linking allowed user programs to reference routines in these bound objects by name without having to specify location or loading definitions for each such bound object. Virtual memory allowed multiple processes to share pages of such bound objects, thus reducing process swap costs. When a new process ran, it might find many pages of its working set already in-memory. When it ran out of schedule quanta, its pages might remain in-memory being shared by other running processes. – gdixon Jul 23 '20 at 18:13

All these answers accurately describe the most salient features of Multics. One of the main consequences was that it could only run on specialized hardware.

From a programmer's standpoint, dynamic links had a fantastic use: when debugging a program, you could pause on a breakpoint, fix your code, recompile it, update the link and continue execution (if you use an interpreted language, this may sound trivial, but for compiled languages, this was paradise).

The ring-based protection system was also very nice. It allowed multiple organizations to have projects on the same machine, with each organization managing the protection of its own projects. Having said that, UNIX's set-uid bit was a wonderful invention (if I had to rate the nicest features of UNIX, I would certainly place it in the top three).

Stuck
  • Well, yes, "Multics" as it was implemented could only run on one type of processor, but there's nothing fundamental preventing a very similarly designed operating system from being written for the Intel 386; indeed, it would even be theoretically possible to port Multics to a 386. It may even be possible to do something approaching Multics on a flat-address-space machine, especially if it has a sufficiently flexible MMU. – Greg A. Woods Feb 04 '21 at 21:13

Perhaps the best way of thinking about it is that Unix is basically a minimal implementation of Multics (of its ideas, at least), with absolutely everything that was not strictly necessary to bring up the system stripped out. So segmenting and virtual memory are not really needed (at least to get started). Complex permissions, ACLs or protection rings, quotas, mandatory access controls, ... -- all gone. The idea was to have an absolutely minimal system that could run on minimal hardware and be implemented by one person (or a minimal team).

Since a lot of the philosophy behind Unix is the same as that of Multics, over time pretty much all the features of Multics have made their way into Unix, though some of the "minimalist" philosophy has survived. So one can argue that the only difference between Unix and Multics is that minimalist philosophy.

Chris Dodd

Is UNIX really a modified ("mini") Multics at all?

Ironically, people often speak of Unix as 'a descendant of Multics', and some aspects of Unix are clearly copied from Multics (e.g. the hierarchical file system), but Dennis Ritchie wrote, in his paper The UNIX Time-Sharing System: A Retrospective (published in the first BSTJ 'Unix' volume):¹

A good case can be made that UNIX is in essence a modern implementation of MIT's CTSS system. This claim is intended as a compliment to both UNIX and CTSS. Today, more than fifteen years after CTSS was born, few of the interactive systems we know of are superior to it in ease of use; many are inferior in basic design.

I see the truth in DMR's observation; fundamentally, Unix is more like CTSS than Multics - no single-level memory, no dynamic linking, etc. - all key concepts in Multics.

In an interview with Peter Seibel², Ken Thompson stated that "The things that I [Ken Thompson] liked [about Multics] enough to actually take were the hierarchical file system and the shell."

The implication is that not much, if anything, else from Multics was adapted into UNIX.

  1. Ritchie, Dennis M. (1977). The Unix Time-sharing System: A retrospective. Tenth Hawaii International Conference on the System Sciences.

  2. Seibel, Peter (2009). Coders at work : reflections on the craft of programming. New York: Apress. p. 463. ISBN 9781430219491.

Toby Speight
Knickers Brown