In trying to understand the physical limits of computation, I notice that there are two types of limits that constrain the minimum energy a computation requires:
- Limits that constrain the product of energy and time taken, namely Bremermann's limit and the Margolus–Levitin theorem. As far as I can tell these two state essentially the same thing, differing only by a constant factor, which doesn't concern me since only the order of magnitude matters for my purposes.
- Limits that constrain only the energy required for a computation, notably Landauer's principle.
So let's just write these down. I'll use Bremermann's limit since it seems to be the more commonly referenced of the two.
$$ E \ge \frac{ 2 \pi \hbar }{ \Delta t } $$
$$ E \ge k T \ln{(2)} $$
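To get a feel for the scales involved, here is a quick Python sanity check of both bounds. The constants are standard CODATA values; the helper names and the choice of a $1\ \mathrm{ns}$ operation at $300\ \mathrm{K}$ are just my own illustrative picks, not anything from the limits themselves.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s (CODATA)
k_B  = 1.380649e-23     # Boltzmann constant, J/K (CODATA)

def bremermann_bound(dt):
    """Energy-time bound E >= 2*pi*hbar / dt, in joules."""
    return 2 * math.pi * hbar / dt

def landauer_bound(T):
    """Landauer bound E >= k_B * T * ln(2), in joules."""
    return k_B * T * math.log(2)

dt, T = 1e-9, 300.0  # example: a 1 ns operation at room temperature
print(bremermann_bound(dt))  # ~6.6e-25 J
print(landauer_bound(T))     # ~2.9e-21 J -> Landauer dominates at this speed
```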
If you take what these two equations are telling you literally, then obviously there is a time range where the energy-time limit is more restrictive and a range where the energy-only limit is more restrictive. With trivial algebra, setting the two right-hand sides equal gives the computation time at which the crossover happens. I used room temperature ($T \approx 300\ \mathrm{K}$) here; at the roughly $3\ \mathrm{K}$ temperature of deep space the crossover time is about two orders of magnitude longer (slower).
$$ \Delta t = \frac{ 2 \pi \hbar }{ k T \ln{(2)} } \approx 2 \times 10^{-13}\ \mathrm{s} $$
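As a quick numerical check of that figure (and of how it shifts with temperature), here is the same crossover evaluated at room temperature and at roughly $2.7\ \mathrm{K}$; again, the constants are CODATA values and the two temperatures are just my example choices:

```python
import math

hbar = 1.054571817e-34  # J*s
k_B  = 1.380649e-23     # J/K

# Crossover time where the two bounds coincide: dt = 2*pi*hbar / (k_B * T * ln 2)
for T in (300.0, 2.7):
    dt = 2 * math.pi * hbar / (k_B * T * math.log(2))
    print(f"T = {T:5.1f} K: crossover dt ~ {dt:.1e} s, serial rate ~ {1/dt:.1e} Hz")
# T = 300.0 K: crossover dt ~ 2.3e-13 s, serial rate ~ 4.3e+12 Hz
# T =   2.7 K: crossover dt ~ 2.6e-11 s, serial rate ~ 3.9e+10 Hz
```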
If we're considering serial computation, this leaves us in the $\text{THz}$ range for the clock rate ($1/\Delta t \approx 4\ \text{THz}$). It's not anywhere near the Planck time or anything like that, but it also doesn't seem like much of a practical limitation.
Is the energy-time limitation only discussed for academic purposes, with the understanding that it will be swamped by the more restrictive limit? Or is there some deeper reason why it should matter?