24

Computer memory used to be a limited and expensive asset for a long while (think of computers with 16 KiB of RAM or less, compared to the 2 MiB of my first PC, an Intel 486, in 1995, and today's gibibytes).

I guess this was mostly between the '60s and the '80s, if you limit the issue to personal computers or small wardrobe-sized machines, but I'm not sure about the precise period; it could naturally be traced back to the advent of computers.

Secondary memory may be included in the question, but I'm mostly concerned about RAM.

Why was it so expensive?

What exactly were the reasons that prevented higher-capacity RAM from being produced and adopted earlier? Was it more a matter of supply vs. demand? Technology limitations? If so, what were they?

Didn't people living at that time notice sudden reductions in size and jumps in availability as technologies evolved? And if they didn't, why wasn't there a "jump" in memory availability once the technology reached a certain state?

Piovezan
  • This question would be better phrased as "How did memory come to be so cheap?", which is what really happened (and largely the story-behind-the-story of the computer revolution). – RBarryYoung Jan 11 '22 at 15:52
  • @RBarryYoung The question was mostly based on my ignorance about hardware and the wrong assumption that IC production processes didn't change that much since when they were invented, so I asked it wondering why more powerful memory didn't become available earlier. In that sense, I think focusing on expensiveness and scarcity sounds more fitting to the spirit of my uncertainties. – Piovezan Jan 11 '22 at 21:41
  • Fair enough. There has probably been no product category in human history that has changed so much and so many times in such a short time. – RBarryYoung Jan 11 '22 at 21:46
  • In 1953, $7.50 was considered radically cheap for a single, rather crummy, transistor (https://en.wikipedia.org/wiki/CK722). Go figure. – John Doty Jan 11 '22 at 22:30
  • When I was a DEC programmer, savvy engineers used to have a drawer full of parts that they could use in trade. One day I woke up to the realization that the bottom had dropped out of the half-a-megabyte memory market. (Evidently I bought high, sold low). – dave Jan 11 '22 at 22:52
  • Because explosive growth can only be fueled by huge profits. And expensive kit was reserved for really important jobs and was expected to work in harsh environments. A team of coders that had a five-lines-of-code-per-day quota (for the entire team) was still coding faster than memory was being produced. – Phil Sweet Jan 11 '22 at 23:08
  • When I was a DEC intern, I helped test 1 GB memory modules. With the heatsinks they were literally the size of a brick. – stannius Jan 11 '22 at 23:16
  • @RBarryYoung Interesting observation. I'd contend that software in a general sense did, but I think we could say that they pretty much evolved together. Or I might be wrong -- e.g. the principles of functional programming haven't changed since Church/Turing, GUIs have been available for a long time, etc. (or perhaps I'm thinking of software applications). – Piovezan Jan 11 '22 at 23:27
  • If a PC nowadays has a few GiB of RAM, and a consumer PC in 1995 had a few MiB of RAM, is it really that surprising that a computer in the late 70s / early 80s would have a few KiB of RAM? – Vikki Jan 12 '22 at 05:25
  • @Vikki The question asks about the technical / economic specifics which prevented higher capacity memory from being available earlier. I thought I had made that point clear in the original post. I even improved the title a few hours before you commented. No need to imply that the question was dumb :) It might need to become more focused, though, but I like the way it has been asked in general, and the discussions it triggered (which is bad, I know, but it kind of felt like a warm welcome from the community :-). – Piovezan Jan 12 '22 at 12:30
  • Old question title was fine. – user3840170 Jan 12 '22 at 12:30
  • @user3840170 It was not bad, and I liked it, but I think it attracted a few close votes and unfocused answers/comments. It's still open to discussion, as usual. – Piovezan Jan 12 '22 at 12:45
  • I don’t think this had much to do with the title. The closure review concluded in favour of leaving open, for what it’s worth. – user3840170 Jan 12 '22 at 12:58
  • @user3840170 Oh, I wasn't aware of the closure review. Okay then, I'm changing it back. – Piovezan Jan 12 '22 at 13:49
  • You say that in the 70's and 80's the price of RAM was expensive... I seem to recall that, even more recently, in the mid to late 90's I wanted 32 MB of RAM (72 pin DIMM) to max out either a Quadra 650 or a PowerMac and it was ~£600 - which (to me at least) seemed prohibitively expensive. – Greenonline Jan 12 '22 at 15:46
  • @Piovezan The technical/economic specifics would fill a library. There is no small set to point to. Each new device taught its designers, manufacturers, and users new things that could be used to improve the next device. – John Doty Jan 12 '22 at 16:50
  • One factor is that even things that have basically not changed are cheaper. Today people work far fewer hours to buy a refrigerator or a car than forty years ago. – Kaz Jan 13 '22 at 01:03
  • @Kaz Not necessarily. For example, cars: quick search shows average new (US) 1980 $7k, 2020 $46k. Median family income 1980 $21k (3x car cost) vs. 2020 $68k (1.5x car cost). Cars may not be the best example as there is a wide range on the used -> new and small -> large -> luxury spectrum. Plus the modern US car almost always has power steering, power brakes, air conditioning, etc. which were not necessarily standard in 1980 and things like electronic fuel injection, air bags, antilock brakes, etc. and many purely electronic features such as keyless entry, OnStar (or similar), GPS nav., etc. that were simply not available 40 years ago. But in a purely consumer electronics realm - not just computers but TVs and other things - the change has been huge. It gets really interesting looking at major appliances such as refrigerators, washing machines, ovens, etc. where there is (like cars) a mix of basic stuff and new technology. The end result is that some things really have gotten more expensive (relative to change in income), some have gotten hugely cheaper (computers, etc.) and some are nominally cheaper (in inflation-adjusted dollars) but don't, on average, last as long, resulting in a long-term cost much higher than the "good old days". – manassehkatz-Moving 2 Codidact Jan 13 '22 at 18:10–18:16
  • I noticed this snippet in a reminiscence of Atlas (1960s): The Core memory was increased from 16k words to 32k for about £½m. (about $1.2M). Link – dave Jan 17 '22 at 02:32

8 Answers

46

As noted in some of the initial comments (but I feel fine answering, as I had the exact same ideas when I read the question), this is partly the general progression of technology, but there are two very specific factors for RAM:

Core Memory -> Integrated Circuits

While many different, very expensive systems were used in the first computers, including mercury delay lines, CRT storage tubes and magnetic drums, the primary memory technology from roughly 1955 to 1975 was core memory. It still lives on, for some of us, in terminology such as "core dump".

There were no "personal computers" in 1955, at least not based on cost! In the 1960s mainframes and minicomputers both used core memory. Some of the first minicomputers were used in a single-user/single-task/single-operator mode, so arguably they fit the "personal computer" term, though arguably it wasn't until the days of the 8080 and 6502 that there were affordable & functional personal computers.

Because core memory required actual, physically separate wires running through tiny magnetic cores, there were both limits to the achievable density (smaller = cheaper, at least in the cost of materials) and limits set by the speed of the manual labor involved in actually threading the wires through the cores.

Integrated circuits were the next (and still current) major form of RAM. Individual transistors were actually not a major advantage over core, because the physical space needed for discrete transistors was larger, on a bit-for-bit basis, than that needed for core memory. Integrated circuits changed that. For those who don't know: while Intel is most famous for its CPU chips, it started out making memory chips for mainframe computers. Which leads us to Gordon Moore and...

Moore's Law

From Wikipedia:

Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years

Memory chips (static RAM vs. dynamic RAM vs. various types of flash) vary in how many transistors are needed per bit. However, within each type, the improvements described by Moore's law have resulted in consistent gains in both capacity and cost.

In fact, I would argue that Moore's law has had more of an impact on memory than on any other type of chip. Improvements in CPUs now depend as much, if not more, on improved design - pipelining, caching, branch prediction and so many other things - whereas improvements in RAM largely come down to using smaller transistors, and more of them, in each chip.

It was simply impossible to produce today's RAM chips 50 years ago: integrated circuits had transistors on the order of 2,000 times their current linear size (10 µm vs. 5 nm, or roughly 4,000,000 times the die area per transistor). And even if you could have built a gigabit chip in 1971, it would have been physically too large to be useful.
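
To put rough numbers on that shrink, here is a quick back-of-the-envelope sketch in Python (my own arithmetic, not the answerer's; the 50-year span and the 2-year doubling period are round assumptions):

    # Linear shrink, die-area ratio, and the transistor-count growth
    # implied by "doubling about every two years" over ~50 years.
    old_feature_nm = 10_000   # ~10 µm processes, early 1970s
    new_feature_nm = 5        # 5 nm-class processes, early 2020s

    linear_ratio = old_feature_nm / new_feature_nm
    area_ratio = linear_ratio ** 2

    print(f"linear shrink:       {linear_ratio:>12,.0f}x")   # 2,000x
    print(f"area per transistor: {area_ratio:>12,.0f}x")     # 4,000,000x

    doublings = 50 / 2  # one doubling every ~2 years
    print(f"transistor count:    {2 ** doublings:>12,.0f}x") # ~33,554,432x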

  • Given that Moore's law is an "observation", it does not have an impact on memory chip development. – user3481644 Jan 11 '22 at 20:32
  • I've always found it funny that electronics technology was typically stuck waiting for advances in optics technology. Smaller transistors means you have to shrink your stencil down to a smaller size, which means better lenses, new types of lasers, etc. Moore's Law is basically just a description of the speed at which the optics industry evolves. Designing denser, better memory is relatively easy, figuring out how to actually build the thing is the hard part. – bta Jan 11 '22 at 21:17
  • @user3481644 Gravity is also an observation. Of course, Wright's Law has better predictive power, and implies Moore's Law. – Yakk - Adam Nevraumont Jan 11 '22 at 22:37
  • re @bta - figuring out how to actually build the thing is the hard part. Ain't that the truth? – dave Jan 11 '22 at 23:00
  • 10 µm vs. 5 nm means 4 million times the size in die area! – Zac67 Jan 12 '22 at 10:03
  • @user3481644 "Moore's Law" can easily and correctly be read as "the phenomenon described by Moore's Law". No law has any effect on physical behavior, strictly speaking, but the meaning is clear. – fectin Jan 12 '22 at 18:35
27

I'm mostly concerned about RAM.

Why was it so expensive?

It wasn't - at least not once integrated circuit RAM became available in the 1970s. Compared to other chips, RAM was cheaper both per transistor and per package.

Some example prices:

From an advert in Byte magazine issue #1 (Sept 1975)

  • 8080 CPU: (4,500 transistors) $149.95 = $33 per thousand transistors
  • 2102 SRAM: (6,000 transistors) $4.95 = $0.83 per thousand transistors

Byte magazine Vol 5 #1 (Jan 1980)

  • Z80A CPU: (8,500 transistors) $16.95 = $2 per thousand transistors
  • 2114 SRAM: (25,000 transistors) $7.50 = $0.30 per thousand transistors
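
For anyone who wants to check the arithmetic, here is a quick sketch in Python (the transistor counts are the approximate figures quoted above, not exact die statistics):

    # Price per thousand transistors for the parts in the two ads above.
    parts = [
        # (name,       transistors, price in USD)
        ("8080 CPU",   4_500, 149.95),  # Byte #1, Sept 1975
        ("2102 SRAM",  6_000,   4.95),
        ("Z80A CPU",   8_500,  16.95),  # Byte Vol 5 #1, Jan 1980
        ("2114 SRAM", 25_000,   7.50),
    ]
    for name, transistors, price in parts:
        per_k = price / (transistors / 1000)
        print(f"{name}: ${per_k:.2f} per thousand transistors")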

But while computers generally only needed a single CPU, the more RAM that could be installed, the more complex the programs that could be run, so software tended to bloat until it used up all the available RAM - and then you wanted more! A natural limit was reached at the memory addressing range of the CPU (or the memory map, depending on the particular architecture of the machine); for 8-bit machines that was 64 KiB or less. Whether RAM seemed expensive depended on how much was needed to run the available software, which in turn was determined by how much the machine could take.

Back to that 1980 Byte magazine: the Commodore PET 2001 was selling for $995 with 16 KiB, or $1295 maxed out to 32 KiB, so doubling the RAM only increased the total price by 30%. The price of an NEC Spinwriter 5530-P printer for the PET (recommended by Commodore for their Word Processing System) was $2995; even the cheapest printer was $850, much more than a RAM upgrade.
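
Spelled out the same way (prices as quoted above; a trivial sketch, but it makes the point):

    # The 16 KiB -> 32 KiB upgrade premium on the PET 2001.
    pet_16k, pet_32k = 995, 1295
    premium = (pet_32k - pet_16k) / pet_16k
    print(f"Doubling the RAM raised the price by {premium:.0%}")  # ~30%
    # Versus the cheapest printer in the same issue:
    print(f"RAM upgrade: ${pet_32k - pet_16k} vs. cheapest printer: $850")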

Computer memory used to be a limited and expensive asset for a long while (for example, in computers with 16KiB RAM or less, compared to the 2 MiB of my first PC (a 486) in 1995 and current day's gibibytes).

The situation hasn't really improved though, because bloat continues to ensure that however much RAM you have, it is not enough. Modern PCs are sold with a "mere" 4 GiB that gets used up by Windows just getting to the desktop, and browsers need a gibibyte or more just to display a single web page. 16 GiB is recommended for reliable operation of Windows 11, and could cost several hundred dollars. Then in a year or so even this won't be enough, as it becomes the "minimum".

Bruce Abbott
  • For completeness' sake it should probably be mentioned that while you can pay hundreds for 16GiB, you can get it for <$100 if you want. – ojs Jan 10 '22 at 11:04
  • Data expands to fill the space available for storage. -- Corollary to Parkinson's law – ssokolow Jan 10 '22 at 13:25
  • I remember being in an Apple Store when the sales person got a call about the price of a Mac. And apparently the caller asked about the price with maximum available RAM. That was at a time where some users considered a Mac as lots of RAM inside a box with a CPU. I think the same model went from DM 2,000 (it was a long time ago) to DM 10,000 (with 1.5GB of RAM). Same today: A Mac Pro with 1.5TB of RAM costs 4 times more than one with the smallest amount available. – gnasher729 Jan 10 '22 at 14:15
  • @ssokolow - I believe the correct wording is "data expands to exceed the space available for storage" :-) – dave Jan 10 '22 at 17:55
  • @ojs right... except your machine might not take that cheap RAM, or if it does it might not be reliable. More annoying is all the different incompatible types. In the good old days there were only a few types of RAM chip, and machines could often be modified to take larger capacity chips. Now it's a minefield. I have a bag full of DIMMs that won't work in any of my PCs! – Bruce Abbott Jan 10 '22 at 19:22
  • @BruceAbbott I remember a different past where motherboard manuals had long lists of supported configurations of SIMMs with different chips. Just because two modules worked on their own, they wouldn't work at the same time. And yes, there were different types of memory chips even before DIMM was invented. If anything, current stuff is incredibly easy as long as you don't try to mix and match previous generation parts. – ojs Jan 10 '22 at 21:01
  • The memory requirements increases of Windows have slowed in the last decade or so (since Windows 7, released in 2009). My 2008-bought 4GB machine worked acceptably with the most recent Windows 10 (until it started blowing caps). – Jonathan Jan 11 '22 at 14:20
  • In most processor chips of the era, the interconnections between transistors use up a lot more space than the transistors themselves. By contrast, RAM chips can often be laid out to minimize the amount of space needed for interconnects. As for "per package" price, that would depend upon memory capacity. When 32Kx8 SRAM chips became available from Jameco, I think they were about $30, which was much greater than the cost of a 6502. – supercat Jan 11 '22 at 16:57
  • In the mid 80's, I coordinated a purchase of memory chips for a half dozen people. These chips were 256k BIT DIP chips - 32 per megaBYTE - and were $2.00 each, which was considered very cheap. Populating my daughter memory board with 1.5 meg was the talk of the company for a while, even more so than when I over-clocked my AT from 4 to 6 MHz. – user3481644 Jan 11 '22 at 20:37
  • @user3481644: I thought the AT was sold at 6MHz? Did you maybe overclock from 6 to 8? – supercat Jan 11 '22 at 21:06
  • @supercat Yes the HM62256 was initially very expensive compared to 256k DRAMs. The HM65256 pseudo-static RAM was much cheaper and a drop-in replacement on Z80 based systems. I put one in my ZX81. – Bruce Abbott Jan 11 '22 at 21:59
  • @gnasher729 Are you sure you mean RAM? I doubt you can get a Mac with more than 64GB, never mind 1.5TB. Maybe you are thinking of the harddrive? – evildemonic Jan 11 '22 at 22:59
  • @evildemonic Apple website lists the maximum for current desktop Mac Pro as 1.5T as 12 128GiB DIMMs, 24 or 28 core CPU required. Those are expensive. And yes, there are cheaper vendors if you want just the RAM and CPU spec. – ojs Jan 12 '22 at 09:45
  • @BruceAbbott: I once ordered one of the pseudo-SRAM parts, but it turned out not to be usable as a drop-in replacement for my application because it imposes much stricter timing requirements than the 62256; the latter doesn't impose any real timing constraints on read cycles beyond needing some time after inputs stabilize before outputs are guaranteed valid. If I recall, the 62256 has a longer access time after the falling edge of /CS than /OE, so leaving /CS active and gating /OE and R/W allowed the use of a slightly inferior speed grade 62256, but won't work at all with PSRAM. – supercat Jan 12 '22 at 17:02
  • With respect to "bloat" - I'd say the relevant metric for the customer is "function per dollar" rather than "memory used", and given that memory is getting cheaper, and programmer-hours are getting more expensive, if you can get function to the customer sooner by using more memory, that's an optimization. – dave Jan 12 '22 at 22:44
  • @another-dave: Point #9 of Joel Test, use the best computer that you can buy. That's cheaper than the people: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/ – Ángel Jan 13 '22 at 01:42
15

Didn't people living at that time feel sudden reductions in size/availability as technologies evolved? And if they didn't, why wasn't there a "jump" in memory availability when technology reached a certain state?

Yes - except that since it happened on an annual basis, we got used to having jump after jump after jump in technological capability. It is possibly difficult to describe to people who weren't there: you could be sure that in three years' time your computer would be spectacularly obsolete, and that for the same price you would be able to get one with four times as much RAM and processing power. Whole new capabilities like "live 3D graphics" appeared as a result.
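
To connect the anecdote to Moore's law: if price/performance quadrupled every three years, the implied doubling period is 1.5 years. A one-line check in Python (my own arithmetic, not the answerer's):

    import math
    years, factor = 3, 4
    print(f"implied doubling time: {years / math.log2(factor):.1f} years")  # 1.5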

pjc50
  • Yeah, it was fun while it lasted but upgrading a PC every couple years got really tiresome after a while. I distinctly remember having a 2.5 year old computer in 1998 that had just enough processing power to play one MP3, which were just becoming a thing. If you tried to open windows or move them while the MP3 was playing, it would stutter and lag. I replaced it a few months later with a system that had, literally, four times the clock speed (75->300MHz) and four times as much RAM - so yeah, that's not hyperbole. – J... Jan 10 '22 at 18:10
  • @J..., hopefully, things will soon reach the point where people no longer assume my desktop computer is hopelessly obsolete just because it's ten years old. – Mark Jan 10 '22 at 22:40
  • @Mark - I wouldn't assume that now. I forget when I built my current Windows desktop machines. Probably just after Windows 7 was released, maybe in 2010. – dave Jan 10 '22 at 23:31
  • @Mark, ...we'd have to stop getting hardware-level security vulnerabilities before I'd call that a safe class of assumption to make. – Charles Duffy Jan 11 '22 at 21:34
  • @J... A side effect of all this is that back in the day, you were always very excited when you got a new computer because the changes really made a difference. Nowadays it's just yawn, the same but a bit better. Even though the increases in performance etc. continue at a tremendous pace, the effect they have on what you can do seem to follow an asymptotic curve. – jcaron Jan 12 '22 at 15:32
  • @jcaron: These days the perf gains that are still really large are in multi-threaded throughput, from fitting more cores into a package. Or from better SIMD like AVX-512 which most programs don't take advantage of for most interactive things, but does help for number crunching / video encoding. IPC and frequency gains for single-threaded performance are nowhere near as dramatic as the 1997 to 2001, when we were replacing P5 Pentium (in-order dual-issue) with P6-based PII / PIII (triple-issue out-of-order exec) which was truly huge for single-threaded performance. Even more than SnB in 2011. – Peter Cordes Jan 12 '22 at 17:05
  • Also Athlon XP K7 and K8 a couple years later, and huge increases in frequency from early P6 to K8 and Core2, and in memory bandwidth, too. Actually, memory bandwidth is one thing that has continued to grow impressively, although most interactive workloads benefit from L1/L2/L3 caches enough that DRAM bandwidth isn't a major factor. But yes, the other major factor is that a single core of a modern Skylake from 2015 is already fast enough for interactive web browsing, so a new system doesn't feel a lot faster, unless you were cramped for RAM before or didn't have a fast SSD. – Peter Cordes Jan 12 '22 at 17:08
7

On my windowsill, I've got (roughly) 2K bits of core memory in a picture frame.

According to an engineering manager, it was assembled by Filipina seamstresses, and they were able to do the work for about three years before their eyesight failed. He sounded smug rather than horrified.

Each core, representing one bit, has four wires threaded through it. If we assume one cent per wire, that's 25 bits or roughly 3 eight-bit bytes per dollar... and note that that's the manufacturing cost.

So 1Kbyte of memory would cost roughly $330 to manufacture.

Double that to include a diode switching circuit and sense amps, then multiply by ten for sale to the customer: call it several thousand dollars per Kbyte.
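
The whole cost chain, spelled out in Python (all figures are the rough assumptions stated above: one cent per wire, four wires per core, a 2x factor for the support electronics, and a 10x factor for the sale price):

    cents_per_wire, wires_per_core = 1, 4
    cost_per_bit = cents_per_wire * wires_per_core / 100  # $0.04 per bit
    bytes_per_dollar = 1 / (cost_per_bit * 8)             # ~3 bytes per dollar

    manufacture = 1024 / bytes_per_dollar  # cost of the cores per Kbyte
    with_support = manufacture * 2         # add diode switching + sense amps
    sale_price = with_support * 10         # markup to the customer

    print(f"manufacture:      ${manufacture:,.0f}/KB")   # ~$328
    print(f"with electronics: ${with_support:,.0f}/KB")  # ~$655
    print(f"sale price:       ${sale_price:,.0f}/KB")    # ~$6,554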

Now consider that when the next generation of machines was brought out, it was good economics to sell the IC-based memory a bit cheaper: but not /that/ much cheaper.

And that's why memory was expensive and scarce.

Mark Morgan Lloyd
  • Was weaving memory bad for the seamstresses' eyesight? – stannius Jan 11 '22 at 18:16
  • @stannius: It could be due to them losing eyesight precision because of the job, or possibly natural eyesight degradation that led to it being an issue - don't know if I'm looking at the same core memory threading that Mark did, but... It turned out that NASA's core rope memory was very small scale even during the Apollo missions. – Alexander The 1st Jan 11 '22 at 22:20
  • @AlexanderThe1st I read that same article and it says "The core rope memory was nicknamed 'LOL memory', where LOL stood for the 'Little Old Ladies' who assembled it." which (assuming it is accurate) doesn't jibe with the natural eyesight degradation explanation. But nor did I find any indication in that or other articles that the work was dangerous for the seamstresses. Anyways I was just curious about that aspect of the story. – stannius Jan 11 '22 at 23:14
  • I'd like to stress that I was quoting, but one of the interesting things - which I've not seen alluded to elsewhere - was that this had specifically been offshored to the Philippines and since we're talking about 1960s tech it wouldn't be too difficult to check my "finger in air" guess of a cent per wire per core. Apart from that... I don't want to be judgemental, and I believe I have quoted accurately. (To be continued...) – Mark Morgan Lloyd Jan 12 '22 at 19:26
  • (...continued) The second interesting thing is that somewhere there is a photo showing the comparative size of successive generations of magnetic core: I suspect my specimen is very late, the outer diameter of the cores is roughly 1mm, and the inner diameter is... surprisingly large, I'd guess around 0.6 mm (40 and 25 thou, assuming that the cores themselves were American-made). In any event, that's /not/ a job for little old ladies. – Mark Morgan Lloyd Jan 12 '22 at 19:37
  • Relative core sizes from Doug Jones: 1.96mm OD circa 1960, reducing to 0.46mm OD circa 1977 http://homepage.cs.uiowa.edu/~dwjones/core/ . So it turns out that my specimen is not the smallest by any means and might be dated to circa 1965, the threading is also less complex than most since it's "square rigged" rather than having a sense wire running diagonally. – Mark Morgan Lloyd Jan 12 '22 at 19:54
6

The brief answer to the question is that we watched the evolution of computer technology from computers that filled a room, yet couldn't hold a candle to a watch we can wear on our wrist today. You might as well ask why we didn't have cars that could do 200 mph back in 1900.

Part of the answer to your question is R&D; the other part is paying for that R&D.

Remember that traveling between the US and Europe used to take 7-10 days by ship, and now it's only a few hours by air (depending on the route).

user3481644
5

Since the invention of ICs, computer memory size has been gated by Rock's law, or Moore's second law. At any point in time, no one could afford the rapidly growing cost (now many billions of US dollars) of building the more advanced semiconductor fab lines required for smaller-lithography, higher-density memories - until the market grew enough, and technical knowledge advanced enough, to make their manufacture financially feasible.

You can only fit so many transistors on a memory chip, and the earliest IC transistors were gigantic compared to today's (tens of thousands of nanometers in size), because the lithography equipment to go smaller just didn't exist back then, even in the most advanced research labs.

Before semiconductor ICs, basic material and assembly costs (magnetic cores, and the ladies who knitted the wiring) limited memory size. Before that, other technical limitations limited memory size: phosphor dot size (Williams tubes), timing synchronization against acoustic dispersion (mercury delay lines), and magnetic dot size (rotating drum memory used as RAM).

hotpaw2
  • Moore's First and Second (Rock's) Laws are observations, they are not capable of "gating" or limiting computer memory size. – user3481644 Jan 12 '22 at 11:56
  • @user3481644 They don't limit computer memory size, but they explain the cost per memory unit, which is the driver behind affordable memory size. At any given time you can find devices with capacities ranging over many orders of magnitude, they just have different costs. – jcaron Jan 12 '22 at 14:57
4

An observation I have made recently is that only things which involve a lot of material have a natural bottom price. A washing machine weighs 100 pounds, a car 2,000, a house 200,000 (or whatever). People have to dig coal and ore out of the earth, make steel, fell trees, and truck it all to processing plants. No amount of automation or technological progress can change that. Washing machines, cars and houses are much better than they were 50 years ago, but not much cheaper. There is a bottom to their price because of the amount of material involved, even if very few workers are needed, as in modern car factories.

Not so with electronics and data processing. The material cost is negligible; everything else has no bottom price. Automation and miniaturization continue to lower the production cost. Remember when CD blanks cost 20 dollars in the 1990s? 10 years later we liked the free AOL CDs because we wanted their jewel cases. The packaging had become more valuable than the CD.

There is no bottom to the cost of ICs either, RAM or anything else. Producing ICs with an established process costs very little: a Raspberry Pi, whose computing power would have served eight X terminal workplaces in 1995, costs less than 100 dollars.

  • Nonetheless, you'll find that the prices of computers, divided by the weight of those computers, is kind of consistent. The IBM 610 cost $55k and weighed 800 lbs, or $70/lb. The VAX 11/780 was roughly $150k and up to 2000 lbs or about $75/lb. The IBM 5150 ("IBM PC") was $1565 and weighed 21 lbs: $74.52/lb. This starts to fall apart when weight becomes a critical factor though: laptops are much more expensive per pound. :-) – torek Jan 13 '22 at 02:43
  • @torek Interesting observation. Doesn't that support my argument that for mature technologies the material involved is important for the bottom price? Probably with some factor k that stands for complexity of processing (1kg bread is less expensive than 1kg laptop)? – Peter - Reinstate Monica Jan 13 '22 at 09:31
  • Yes, I think it does. It's also kind of fun to think about: go to the grocery store or deli and buy a couple of pounds of computer.... – torek Jan 13 '22 at 09:48
3

For the same reason as:

  • CPUs increasing in frequency from sub-MHz to multi-GHz
  • CPUs increasing in number of cores from a single core to several dozen
  • Hard disk drives increasing in capacity from a few MB to several TB
  • Modems increasing in speed from a few hundred bit/s to gigabits/s
  • Wi-Fi increasing in speed from 2 Mbit/s to gigabits/s
  • Ethernet increasing in speed from 10 Mbit/s to 10 Gbit/s
  • The cost of an Ethernet card going down from over $1000 to less than $10
  • TV resolutions increasing from 576i to 720p to 1080p to 4K

It's all mostly due to:

  • Better and better manufacturing processes, allowing more transistors to be packed on the same surface, and better yields
  • Economies of scale

It's all very incremental. One small improvement here, one small improvement there. In some cases, a later generation depends on previous generations to have existed.

It's not very different from small but constant increases in performance of engines (from cars to aircraft), improving fuel efficiency, range, and reliability.

In aircraft there was one big jump with the introduction of jet engines, but other than that, it's all very incremental.

It's interesting to note that next to Moore's law, which applies to ICs (and thus CPUs and memory), there are similar laws for communications (Ethernet, wireless...), and they don't all have the same shape or slope, which leads to the bottleneck moving from one domain to another over time.
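
As a toy illustration of that last point (entirely made-up numbers, just to show the mechanism): two capabilities that both grow exponentially but with different doubling times will keep trading places as the bottleneck.

    # Two exponential growth curves with different doubling times.
    def capability(base, doubling_years, year):
        return base * 2 ** (year / doubling_years)

    for year in range(0, 21, 5):
        cpu = capability(1.0, 2.0, year)  # doubles every 2 years
        net = capability(4.0, 3.0, year)  # starts ahead, doubles every 3
        bottleneck = "network" if net < cpu else "CPU"
        print(f"year {year:2}: cpu={cpu:7.1f} net={net:6.1f} -> bottleneck: {bottleneck}")

Here the network starts out 4x ahead but is overtaken around year 12, so the bottleneck moves from the CPU to the network.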

jcaron
  • I think I got a bit misled when I read about the IBM System/360 family (which, for completeness, was a mainframe line released in 1964/65, the first one to cover a whole range of processing capabilities and target users). Although not a personal computer family, it made it sound like releasing the latest technology of the time was more of an option than a rule. Come to think of it, the range of memory availability wasn't that wide (although it did span from 8 KB up to 1-8 MB at that time - again, 1965 - according to Wikipedia). – Piovezan Jan 12 '22 at 15:22
  • @Piovezan The driver is cost (and sometimes physical dimensions, or side constraints such as power consumption). Even nowadays, you see systems with only a few KB of memory (possibly even less) while others have TB of memory. Personal computers are mostly in the range of a few GB to a few dozen GB, but IoT sensors and other very limited devices may have very (very, very) little, while large and very expensive systems may have several TB. – jcaron Jan 12 '22 at 15:26
  • @jcaron: Re "possibly even less", for many years in the 1990s, the most popular microcontroller by sales volume was the PIC 16C54, with 25 bytes of RAM, and enough code space for 512 instructions (which took 12 bits each), and since that time Microchip introduced a controller that was even smaller, with 16 bytes of RAM and only 256 instructions. – supercat Jan 12 '22 at 16:50