
I'm looking for performance data on relay-based computers, and even human-based computers. Performance as in time to perform an addition, a multiplication, etc.

The Nordhaus data is all I have been able to find so far: W. D. Nordhaus, "The Progress of Computing", Cowles Foundation Discussion Paper No. 1324.

[Chart: Cost per million ops by date]

Derek Jones
  • Not an answer, but check the FACOM 128B (there is a YouTube vlog). Some info can be found here: http://museum.ipsj.or.jp/en/computer/dawn/0012.html – ghellquist Aug 09 '23 at 13:46
  • What a beautiful graphic. It's literally beautiful while being both clear and having good information density. – Wayne Conrad Aug 09 '23 at 15:10
  • @WayneConrad Not really. It does not say what currency is used, whether those values are adjusted for inflation or raw, or what each group includes - not to mention that operations of completely different systems can't really be compared: it doesn't lower cost if one machine's operations are priced 10 times lower when it also needs 10 times the number of operations. So, while it looks nice, it doesn't really tell anything on its own. – Raffzahn Aug 09 '23 at 15:21
  • Thanks for the update with the graph (regardless of Raffzahn's comment) – davidbak Aug 09 '23 at 15:25
  • @davidbak ??? I haven't said it isn't helpful. It illustrates quite well the basis the OP is working from. Nonetheless, one should note existing deficits, don't you think? – Raffzahn Aug 09 '23 at 16:01
  • Sure, note it. But in a friendly way. The OP is new to this board, and we'd like to encourage him to come back, because he could be a font of information about software development practices as far back as the retro time period. And he's a data-analysis guru and knows well the limitations of this chart. (You wouldn't know any of that, I totally get it, but even though I'm not exactly on board in general with SE's "welcoming" policy - in this case I definitely am.) Welcome Derek! – davidbak Aug 09 '23 at 16:06
  • BTW Raffzahn your answer is great, TY! I had no idea relay computers were even fast enough to do 40 "clicks" per second - even back in the heyday of the relay. ("Clicks per second" is appropriate for a relay computer, don't you think? Better than Hz.) – davidbak Aug 09 '23 at 16:09
  • @davidbak The 40 clicks are more due to a different internal structure. If I understand it correctly (which I'm not sure of), the Z4 and Z5 switched to a model more like the later four-phase structure. So while the clock rate was (almost) 8-fold higher, the speed required from each relay did not even double - which should be quite within reach for fine new post-war quality relays, compared to the pre-war/wartime scrap material used for the Z3. Much like today: hardware technology changes only slowly; it's better algorithms (or circuit design) that yield higher performance. – Raffzahn Aug 10 '23 at 03:13

1 Answer


Well, a good data point here may be the Zuse Z3 and Z4 computers, not least because their workings are closely related to today's computers: they are clock controlled and use binary floating point arithmetic.

The 1941 Z3 operated at a clock frequency of 5.3 Hz, needing 1 to 20 clocks per instruction (9 to 41 for FP/decimal conversion):

Operation       Clocks   Instr./sec   Instr./h
Load/Store      1        5.3          19,080
Addition        3        1.8          6,360
Subtraction     4–5      1.1–1.3      3,960–4,680
Multiplication  16       0.33         1,192
Division        18       0.3          1,060
Square root     20       0.27         954

All of that with 22-bit binary FP.
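
For anyone who wants to recompute the table: each rate is simply the 5.3 Hz clock divided by the operation's clock count. A minimal Python sketch (operation names and clock counts are taken from the table above; the per-hour column was apparently derived from the rounded per-second rates, so small differences may show up):

```python
# Recompute the Z3 timing table from the 5.3 Hz clock and the
# clocks-per-instruction counts shown above.
CLOCK_HZ = 5.3  # Z3 master clock frequency

# Clocks per instruction, taken from the table above.
OPERATIONS = {
    "Load/Store": 1,
    "Addition": 3,
    "Subtraction": 5,        # worst case of the 4-5 clock range
    "Multiplication": 16,
    "Division": 18,
    "Square root": 20,
}

for name, clocks in OPERATIONS.items():
    per_sec = CLOCK_HZ / clocks    # instructions per second
    per_hour = per_sec * 3600      # instructions per hour
    print(f"{name:<15} {clocks:>2} clocks  {per_sec:5.2f}/s  {per_hour:10,.0f}/h")
```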

The 1948 Z4 follows the same basic structure, but adds a lot of additional instructions - which, speed-wise, still operate in the same region.

The Z4 might also give a practical data point for the above 'Cost per million ops' scale, as it was the first commercially used computer, rented out to paying customers at 0.01 Swiss Francs (CHF). That's about 2.336 USD (of 1948) per 1 million operations, a number ending up somewhere in the middle of all those vacuum tube machines - assuming it is in contemporary USD (*1).
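
As a sanity check on that conversion, here is a sketch; the roughly 4.3 CHF per USD exchange rate of the late 1940s is my assumption, only the 2.336 USD figure is given above:

```python
# Back-of-the-envelope check of the Z4 rental cost figure.
# ASSUMPTION: roughly 4.3 CHF per USD in 1948 (Bretton Woods era rate);
# the answer only states the resulting 2.336 USD per million operations.
CHF_PER_USD_1948 = 4.3
USD_PER_MILLION_OPS = 2.336   # from the text above

chf_per_million_ops = USD_PER_MILLION_OPS * CHF_PER_USD_1948
print(f"{chf_per_million_ops:.1f} CHF per million operations")       # ~10.0
print(f"{chf_per_million_ops / 1000:.3f} CHF per 1,000 operations")  # ~0.010
```

Read that way, the quoted 0.01 CHF would be the price per thousand operations.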

Likewise the 1953 Z5, except that it also cranks up (*2) the clock frequency to 40 Hz. It delivered about 6-7 times the performance of a Z3/Z4 (*3), all while extending the FP format to 36 bit for extended precision.
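
To put the Z5 figures side by side with the Z3, a small sketch using only the numbers quoted above:

```python
# Compare the Z5's clock speedup against its quoted overall speedup.
Z3_CLOCK_HZ = 5.3
Z5_CLOCK_HZ = 40.0
QUOTED_SPEEDUP = (6, 7)   # "about 6-7 times the performance" (see *3)

clock_ratio = Z5_CLOCK_HZ / Z3_CLOCK_HZ
print(f"clock ratio: {clock_ratio:.1f}x")   # ~7.5x
print(f"quoted speedup: {QUOTED_SPEEDUP[0]}-{QUOTED_SPEEDUP[1]}x")
# Delivered performance (6-7x) trails the ~7.5x clock ratio, fitting
# footnote *3's point that memory, not the raw clock, set the limit.
```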


*1 - Well, besides the graph not mentioning what currency is used, it also doesn't note whether those numbers are inflation adjusted or not. It also doesn't show a distinct 'relay' category, so one may assume they are part of the data points for 'tube'.

*2 - Quite literally, as the clock generator was a motor driving a disk that operated contacts for the instruction phases.

*3 - The speed increase is all due to faster relay-based memory. While the Zuses may have been the first example of memory being the main limiter for speed, the phenomenon happened many times over in later generations :))

Raffzahn
  • 16 clocks seems a bit fast for a 22-bit FP multiply - ?? And only two more for division. Seems impressive to me - but I don't know that much about how arithmetic is implemented even today ... – davidbak Aug 09 '23 at 14:38
  • @davidbak It may be helpful to note that Zuse already used single-step carry look-ahead logic in his 1930s mechanical Z1. That substantial speed-up was only rediscovered by IBM, which filed a patent in 1957. Using it allows binary addition/subtraction to be performed extremely fast, as it does not have to wait for carry propagation. Other parts of his designs were ahead of their time as well. There was a whole book about those details written in the late 1990s. – Raffzahn Aug 09 '23 at 14:57
  • Your comment on the clock being a rotating disk reminded me of my research director (in about 1980) saying he had worked with an IBM 700-series system where the persistent memory was on a drum (https://en.wikipedia.org/wiki/IBM_drum_storage). Apparently, programmers were aware of the drum-induced latency during a drum read and would purposely delay their code to match that latency (otherwise, it would need to wait till the drum performed a full rotation). Early high-performance floppy-based systems (pre-caching) performed similar delay gymnastics. – Flydog57 Aug 09 '23 at 22:21
  • @Flydog57 yes, using a drum as main memory, and the resulting timing constraints, were the mainstay of early (1950s) machines, before core became fast (and cheap) enough to take over. There were even machines like the IBM 650 that made that quirk a feature by having each instruction include the address of the next one. A program was spread out along the tracks to allow the best possible access times for data as well as for the next instruction following the data access. That machine didn't even need a program counter :)) – Raffzahn Aug 09 '23 at 23:24
  • @Flydog57, talk of drum memory always brings The Story of Mel to mind for me. – John Bollinger Aug 10 '23 at 00:35
  • @davidbak Here is a nice deep dive into the Z1 and Z3. My understanding is that it only took a single cycle to add or subtract the significand registers, so multiply/divide do scale with the significand size as expected. The add/subtract instructions themselves add some overhead to fiddle with signs etc. that the inner mul/div loops do not need to repeat each time. – WimC Aug 20 '23 at 15:31