I've ended up doing quite a lot of research on this question now, and I'm going to try to provide my own answer. This is based on my own research, assumptions and a lot of guesses, not to mention input from other answers and comments. I'd very much welcome corrections. I intend to edit further to include more links to evidence.
The short answer
The short answer, as others have said, is that presumably Linux implemented support for RTS/CTS flow control, and not DTR/DSR flow control, because RTS and CTS are the standard signals used for hardware flow control and DTR and DSR are not. That is both the de facto standard implemented by a very large number of devices, and the de jure standard described by the ITU and the EIA¹.
The very long answer
That prompts a bunch of other questions. If RTS/CTS flow control was the standard, why was DTR/DSR flow control even a thing? And was it really even a thing, or are some RS-232 devices just weird? Why wouldn't they just do RTS/CTS flow control? And if it was a thing, why wouldn't Linux implement it in addition to RTS/CTS flow control?
Many devices use DTR or DTR/DSR flow-control
Just to illustrate that DTR/DSR flow-control is not something I have made up, here are some examples of devices from various well-known manufacturers that used it:
- Wyse WY50 Terminal (1983, subsequent Wyse Terminals too)
- HP 7550A plotter (1985, probably older and newer HP serial plotters too)
- DEC VT520 (1993, curiously older models in the series aren't documented to support hardware flow-control at all)
- Epson Receipt Printers (Present day, although I bet they're continuing a long line)
All of these devices support deasserting the DTR line to show that they are too busy to handle more data, and none of them support using RTS for the same purpose. I bet that there are many, many other examples. I don't know which device was the first, or if it was written down anywhere as some sort of proposed standard. I'd be interested to know.
So why?
Modems are the key to understanding RS-232
JustMe said:

> Different serial devices work differently and how they have worked over time has changed.
> But most common use for any implementation for serial ports have been modems.
and I think this is correct. It seems that to understand the choices made in implementing RS-232 devices, you need to understand what modems did at a specific point in history.
I think the history of modems might usefully be broken into the following phases (note: dates are extremely approximate; I think US technology is most relevant here, so I'm going to focus on that, even though I am not based in the US):
- The half-duplex era (1950s and beyond): RTS and CTS have their original meaning. The first versions of RS-232 would have been designed to support half-duplex modems. I haven't been able to find much information about these, but it does seem that they were not used on the (US) public telephone network; all commercially available modems for that network were full-duplex devices. However I think this style of modem took a very long time to fully die out, if it ever did; instead it was just relegated to niches like radio. This is where the original meanings of RTS and CTS come from. Because only one side could be talking at a time, you would have to request the medium, and then wait for it to become available. RTS (Request to Send) was the "I want to take control of the medium" signal, and CTS (Clear to Send) was the "You have control of the medium" signal. These two lines often seem to have been called "the modem control lines", and that is what they were for: controlling the modem, not flow control. I am unclear which of the other handshaking lines would have been used by these devices, though I suspect that at least DSR was not.
- Bell 103 and 212 compatibles (1970s and 1980s): RTS and CTS are obsolete? The first commercially available modem in the US was the Bell 103, which was pretty rare, but a lot of devices that were compatible with it were eventually introduced. This excellent video shows what these modems were like. Internally they were relatively simple. As they were full-duplex devices without any internal buffering, it would have been necessary to configure the serial link to match the baud rate of the modem (300 or 1200 baud). This period would have been the heyday of devices like serial printers, serial plotters and serial terminals, which could easily have been connected to a remote computer through one of these modems. Devices and software designed in this period would probably have supported using the modem control signals (with their then-current meanings), as half-duplex devices were probably still around, but this "modem mode" was likely to be optional even if you were using a modem. Once these modems were connected, there would be no need to assert RTS or wait for CTS; it should have been possible to use one of these devices with a 3-wire serial cable. Adding the DTR, DSR and DCD signals would let you know whether the modem was on and connected, and enable automatic hang-up. RTS would either have been disconnected and ignored, or looped back to CTS, which would otherwise be linked to DSR.
- Smart-modems accelerate (1980s): CTS returns as flow control. The Hayes Smartmodem, and the myriad similar devices, introduced automatic dialling, and with it an internal microprocessor and command language. Significantly, this enabled modems to have internal buffers, which meant it was now productive to configure the serial link at higher than the modem speed. A lump of data could be sent to the modem, which could then send it out gradually at the speed of the connection. The DTE would not even need to know how fast the connection was, and automatic negotiation became possible! However this meant that flow-control would be necessary. Happily, the CTS signal would work fine for this: systems designed to be aware of half-duplex modems would not send data while CTS was not asserted, and systems not aware of CTS shouldn't be sending fast enough to overflow the buffer, so this was a backwards-compatible change. Data rates increased towards 9,600 baud through the 1980s.
- 9.6kbps and up (late 1980s to 2000s): RTS returns with a new meaning. In this period the speed of the serial link between your computer and your modem finally became important, as modems were fast enough that old PCs with crummy UARTs couldn't keep up with them. As the RTS line had been irrelevant for some time, but would usually be connected in a modem cable, modems optionally started to support repurposing it for hardware flow control from the modem to the computer - which modems hadn't supported until then.
So in the period when I think DTR or DTR/DSR flow-control became common, there was an understanding that the RTS and CTS lines had a meaning - the meaning they had for half-duplex modems - but at the same time RTS and CTS might have been absent from any given serial cable, and ignored by any computer, as they weren't relevant to contemporary modems.
I/O devices of the '80s needed flow control before computers did
While I presume that almost any computer was faster than the modem or peripherals it was connected to, lots of devices like plotters or printers would have occasionally been overwhelmed even at 1,200 baud (I think smart terminals might keep up at these rates though). These devices were implemented as DTEs, meaning that they could be connected to a modem, or to another DTE through a null-modem cable. RS-232 had no standard way of providing out-of-band flow control for DTEs.
Aside: xon/xoff flow control might have worked better than you'd think
We're used to describing xon/xoff flow-control as software flow-control, and we're often warned that it does a poor job of flow-control and causes data corruption. It had one job, but failed twice.
The reason xon/xoff flow-control can cause data corruption is that it uses two bytes (typically 0x13 for xoff and 0x11 for xon, though often configurable) in-band in the communication stream to signal that a buffer is becoming full, and that sending should be slowed or stopped until it's sorted out. These bytes are intercepted rather than delivered with the rest of the data stream. But if the data you are transmitting happens to contain either of these bytes, you've accidentally halted the flow of data, and that byte gets removed from the transfer. That makes it manifestly unsuitable for transferring arbitrary binary data. However many of the peripherals we're describing are character devices, and so these two bytes should usually be perfectly safe. (When terminals use xon/xoff flow-control, Emacs users are prone to complain that Control-Q and Control-S don't work.)
The reason xon/xoff is considered slow is that it is often handled in software. On PCs in particular, systems started using UART chips with increasingly large buffers, and only reading the input data once a buffer had filled - which could be after many bytes had been received, and long after the xoff byte had been sent. However, in researching this I've seen references to minicomputers and servers having hardware-accelerated console ports, which would handle the processing of xon/xoff bytes automatically, and as they arrive. So if the remote machine is properly equipped, xon/xoff should have worked well in that respect too.
It seems that at least some modern USB to RS-232 adaptor chips also do automatic handling of xon/xoff and that this is supported by Linux².
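As a concrete illustration, here is roughly how software flow control is switched on through the termios interface on Linux. This is a minimal sketch, not production code; the device path is a placeholder and error handling is abbreviated.

```c
/* Minimal sketch: enabling xon/xoff software flow control via termios. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY); /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }

    tio.c_iflag |= IXON;        /* pause our output when the peer sends xoff */
    tio.c_iflag |= IXOFF;       /* emit xoff ourselves when our input buffer fills */
    tio.c_cc[VSTART] = 0x11;    /* DC1, the conventional xon */
    tio.c_cc[VSTOP]  = 0x13;    /* DC3, the conventional xoff */

    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }
    close(fd);
    return 0;
}
```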
Hardware flow-control made no sense on the modems of the era, and DTR flow-control would not have been very compatible with them
I suspect (but have no proof) that DTR/DSR flow-control was never supported by modems. But for the modems in use at the time, it would not have helped anyway. Even if the modem had a buffer (which it probably didn't), it would have been tiny, and telling the modem to stop sending data would just have led to the buffer in the modem overflowing instead of the one in the device. However, these dumb modems were transparent to xon/xoff bytes, which would therefore have had the effect of telling the sending system to slow down, hopefully with the desired result.
Dropping the DTR signal on a DTE connected to a modem has the potential to make it drop the connection. This was an optional feature that could be disabled (local laws notwithstanding).
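For instance, on Hayes-compatible modems this behaviour was selected with the &D command (recalled from the Hayes command set; exact defaults varied between models):

```
AT&D0    modem ignores DTR entirely
AT&D2    modem hangs up when DTR is dropped (the usual default)
```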
Overall, enabling DTR or DTR/DSR flow-control on a device connected to a modem is likely to create more problems than it solves.
DTR/DSR flow-control does make sense on null-modems of the era
However, if the device is connected to another DTE directly through a null-modem cable, the serial link is likely to be very much faster, and even terminals are prone to being overwhelmed. And as xon/xoff flow-control can have problems for terminals, and software implementations have issues, it makes sense to make a non-standard but optional, and fairly compatible, extension to the standard (just as the modem manufacturers would also do).
It stands to reason that you shouldn't send data to a device that is not ready - so turning off the Data Terminal Ready signal when your buffer is full makes sense. In terms of sending, it makes sense not to send data to a dataset which is not ready, so using the DSR signal for the other direction makes sense too. It was already common practice for these two signals to be crossed in the numerous null-modem wirings of the era (see the wiring sketch below). If a device configured for DTR/DSR were connected to a fast modem, these signals could potentially cause a connection to be dropped - but only if that feature were enabled on the modem.
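For reference, here is one common full-handshake null-modem wiring (DE-9 pin numbers, DCD connections omitted), showing the DTR-to-DSR cross that this scheme relies on:

```
  pin 3  TxD  ----->  RxD  pin 2
  pin 2  RxD  <-----  TxD  pin 3
  pin 7  RTS  ----->  CTS  pin 8
  pin 8  CTS  <-----  RTS  pin 7
  pin 4  DTR  ----->  DSR  pin 6
  pin 6  DSR  <-----  DTR  pin 4
  pin 5  GND  ------  GND  pin 5
```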
MS-DOS did support DTR/DSR flow-control (perhaps)
The MS-DOS MODE command may have provided the means to enable DTR/DSR handshaking. I have no evidence for this other than some vague things I saw; I cannot find a source that tells me one way or the other, or when it might have been introduced, and I'm not equipped to check for myself. I do believe that modern Windows supports it, based on the online help.
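I can't verify any DOS-era syntax, but the online help for MODE in modern Windows describes handshaking flags along these lines (an illustrative example rather than a tested command; see mode /? on an actual system):

```
MODE COM1: BAUD=9600 PARITY=N DATA=8 STOP=1 DTR=HS ODSR=ON
```

Here DTR=HS asks for DTR handshaking on input, and ODSR=ON gates output on the DSR line.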
Some Unix vendors supported DTR/DSR flow control
POSIX appears to be silent on what hardware flow-control should be supported on Unix, which is not surprising. The Linux man page for termios.h notes that CRTSCTS (the option passed to enable RTS/CTS hardware flow control on an interface) is not in POSIX. Sure enough, the version of the header published by The Open Group includes neither CRTSCTS nor any DTR/DSR equivalent. The (much more detailed) FreeBSD equivalent includes CRTSCTS (but doesn't say whether it's standard).
However, the macOS implementation of that header does include constants for DTR and DSR hardware flow control - whether that is supported by the OS, or is vestigial, I'm not able to say.
System V does seem to have supported DTR/DSR flow control; the equivalent constants were defined in termiox.h. This is the documentation for AIX. Surprisingly though, Ultrix and HP-UX do not seem to support it.
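To make this concrete, here is roughly what enabling the non-POSIX RTS/CTS option looks like through termios on Linux today (a minimal sketch; the device path is a placeholder):

```c
/* Sketch: enabling RTS/CTS hardware flow control with the non-POSIX
 * CRTSCTS flag. Note it lives in c_cflag, unlike the xon/xoff bits. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY); /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }

    tio.c_cflag |= CRTSCTS;
    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

    close(fd);
    return 0;
}
```

The equivalent from the shell is stty -F /dev/ttyS0 crtscts.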
Is it true that Linux does not support DTR/DSR flow control?
A premise of the question is that Linux does not have support for DTR/DSR flow control. This is only sort of true. It's possible to write userspace code that reads the state of the various serial port control lines, and sends data or waits depending on what it finds. However, RTS/CTS flow control is supported by the kernel, and can be enabled for a specific port using the stty command. That is no doubt not only faster and easier, but probably a lot more reliable, and it offers the possibility of taking advantage of hardware support for flow control.
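To sketch what that userspace approach might look like (the buffer management is elided; the ioctls are the standard Linux modem-control calls):

```c
/* Sketch: DTR/DSR flow control done by hand from userspace. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void set_dtr(int fd, int ready)
{
    int bit = TIOCM_DTR;
    /* TIOCMBIS sets modem-control bits, TIOCMBIC clears them */
    ioctl(fd, ready ? TIOCMBIS : TIOCMBIC, &bit);
}

static int dsr_asserted(int fd)
{
    int bits = 0;
    if (ioctl(fd, TIOCMGET, &bits) < 0)
        return 0;
    return (bits & TIOCM_DSR) != 0;
}

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY); /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    /* Inbound direction: drop DTR while we're too busy to accept data. */
    set_dtr(fd, 0);
    /* ... drain our buffer here ... */
    set_dtr(fd, 1);

    /* Outbound direction: only write while the peer asserts DSR. */
    if (dsr_asserted(fd))
        write(fd, "hello\r\n", 7);

    close(fd);
    return 0;
}
```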
Linux partially added support for DTR/DSR flow control... but removed it
At one point a patch from Alan Cox was included to enable the System V style termiox interface. However, the follow-up work to connect this to the TTY and serial drivers seems never to have landed, and the support was later removed.
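For the curious, the interface looked roughly like this. This is reconstructed from memory of the (since removed) linux/termiox.h, so treat the field and flag names as approximate:

```c
/* Approximate reconstruction of the removed <linux/termiox.h>;
 * names and values are from memory and may not be exact. */
#define NFF 5

struct termiox {
    unsigned short x_hflag;       /* hardware flow-control flags */
    unsigned short x_cflag;       /* clock modes */
    unsigned short x_rflag[NFF];  /* reserved */
    unsigned short x_sflag;       /* spare */
};

#define RTSXOFF 0x0001  /* RTS used to pause inbound data */
#define CTSXON  0x0002  /* CTS gates outbound data        */
#define DTRXOFF 0x0004  /* DTR used to pause inbound data */
#define DSRXON  0x0008  /* DSR gates outbound data        */
```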
Conclusion
So it seems there have been (at least) two approaches to hardware flow-control in RS-232 devices, both initially non-standard. Both were supported by various Unixes. Both saw use on the PC platform. The earlier one, adopted by many DTE devices like printers and terminals, was DTR/DSR - but it's not very compatible with the meanings given to those signals by modems. RTS/CTS came later, but it undoubtedly became more widely deployed, and it was standardised in the early '90s.
Since Linux development started in the '90s, when devices implementing RTS/CTS flow control were already widespread (especially fast modems, and who doesn't want a fast modem?), it makes perfect sense that Linux would support this standard.
Before long, non-modem RS-232 devices were unusual; wiring null-modem cables to support old devices is usually possible, and I guess ultimately no one ever cared enough to support the other approach.
¹ Technically I should say RTR/CTS flow control, but every single source I've seen that mentions this distinction uses the word "technically".
² This is good news for me, as it turns out the eBay seller sent me 3-core cable, not 5-core.