
Remark: The answers so far have been insightful and on point, but after receiving public and private feedback from other mathematicians on MathOverflow, I decided to clarify a few notions and add contextual information (8 March 2020).

Motivation:

I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variation. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the possibility that human brains perform reverse-mode automatic differentiation, or what some call backpropagation [7].

Having said this, a large number of computational neuroscientists (even some with math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.

Problem definition:

Might there be an alternative formulation of mathematical physics that does not employ partial derivatives? I suspect this may be a problem in reverse mathematics [6]. But in order to define equivalence, a couple of definitions are required:

Partial Derivative as a linear map:

If the derivative of a differentiable function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ at $x_0 \in \mathbb{R}^n$ is given by the Jacobian $\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_0} \in \mathbb{R}^{m \times n}$, then the partial derivative with respect to the $i$th coordinate, $i \in [n]$, is the $i$th column of $\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_0}$ and may be computed using the $i$th standard basis vector $e_i$:

\begin{equation} \frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_0} = \lim_{k \to \infty} k \cdot \big(f(x_0+\tfrac{1}{k}\, e_i)-f(x_0)\big). \tag{1} \end{equation}

This is the general setting of numerical differentiation [3].
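To make definition (1) concrete, here is a minimal numerical-differentiation sketch in Python; the test function and the finite value of $k$ standing in for the limit are illustrative choices, not part of the definition:

```python
import numpy as np

def partial_derivative(f, x0, i, k=1e6):
    """Approximate the i-th partial derivative of f at x0 via (1),
    using a single large k in place of the limit k -> infinity."""
    e_i = np.zeros_like(x0)
    e_i[i] = 1.0
    return k * (f(x0 + e_i / k) - f(x0))

# Example: f(x, y) = (x*y, x + y), so the partial derivative with
# respect to x at (2, 3) is the column (3, 1).
f = lambda x: np.array([x[0] * x[1], x[0] + x[1]])
print(partial_derivative(f, np.array([2.0, 3.0]), i=0))  # ~[3. 1.]
```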

Partial Derivative as an operator:

Within the setting of automatic differentiation [4], computer scientists construct algorithms $\nabla$ for computing the dual program $\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^{m \times n}$, which corresponds to an operator definition of the partial derivative with respect to the $i$th coordinate:

\begin{equation} \nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2} \end{equation}

\begin{equation} \nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. \tag{3} \end{equation}
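As a sketch of how such a dual program can be constructed without any limits, here is a minimal forward-mode automatic differentiation in Python using dual numbers; the class and function names are mine, and real AD libraries are far more general:

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0; the eps coefficient
    carries the exact derivative through arithmetic operations."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def grad(f, x):
    """Evaluate all partial derivatives of a scalar program f at x
    by seeding each coordinate direction e_i in turn, as in (2)-(3)."""
    n = len(x)
    return [f([Dual(x[j], 1.0 if j == i else 0.0) for j in range(n)]).b
            for i in range(n)]

# Example: f(x, y) = x*y + x has gradient (y + 1, x).
f = lambda v: v[0] * v[1] + v[0]
print(grad(f, [2.0, 3.0]))  # [4.0, 2.0]
```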

Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.

The special case of classical mechanics:

For concreteness, we may consider classical mechanics, as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role, but at present I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?
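For instance, in the Lagrangian formulation the trajectory $q(t)$ satisfies the Euler-Lagrange equations, in which partial derivatives of the Lagrangian $L(q, \dot{q}, t)$ appear explicitly:

\begin{equation} \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0. \end{equation}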

Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian processes, which are provably universal function approximators [5]?
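As a rough sketch of this idea (and not a claim about how brains work), the following Python snippet fits a Gaussian-process posterior mean to samples of a projectile trajectory using only kernel evaluations and linear algebra; the kernel, lengthscale, and jitter are arbitrary choices:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel matrix between 1-D sample arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Toy data: sampled positions of a projectile, x(t) = v*t - (g/2)*t^2.
t_obs = np.linspace(0.0, 1.0, 8)
y_obs = 5.0 * t_obs - 4.9 * t_obs ** 2
t_new = np.linspace(0.0, 1.0, 50)

# GP posterior mean: K(new, obs) @ (K(obs, obs) + jitter)^(-1) @ y_obs.
K = rbf(t_obs, t_obs) + 1e-6 * np.eye(len(t_obs))
y_pred = rbf(t_new, t_obs) @ np.linalg.solve(K, y_obs)

# Compare the derivative-free prediction against the true trajectory.
print(np.abs(y_pred - (5.0 * t_new - 4.9 * t_new ** 2)).max())
```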

Koopman–von Neumann classical mechanics as a candidate solution:

After reflecting upon the answers of Ben Crowell and gmvh, it appears that we require a formulation of classical mechanics where:

  1. Everything is formulated in terms of linear operators.
  2. All problems can then be recast in an algebraic language.

After doing a literature search, it appears that Koopman–von Neumann classical mechanics might be a suitable candidate, as it provides an operator theory on Hilbert space similar to that of quantum mechanics [8,9,10]. That said, I only recently came across this formulation, so there may be important subtleties I am overlooking.
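To sketch why operator-theoretic formulations are computationally attractive, the snippet below approximates a Koopman operator from trajectory snapshots in the style of (extended) dynamic mode decomposition: a pure least-squares fit, with no differentiation anywhere. The toy dynamics and the choice of observables (the coordinates themselves) are illustrative:

```python
import numpy as np

# Toy discrete-time dynamics: a slightly damped rotation of the plane.
theta = 0.1
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Collect snapshot pairs (x_k, x_{k+1}) along one trajectory.
x = np.array([1.0, 0.0])
X, Y = [], []
for _ in range(200):
    x_next = A @ x
    X.append(x)
    Y.append(x_next)
    x = x_next
X, Y = np.array(X).T, np.array(Y).T      # shape (2, 200)

# Least-squares Koopman matrix K with Y ~ K X (observables = coordinates).
K = Y @ np.linalg.pinv(X)
print(np.allclose(K, A, atol=1e-8))      # recovers the linear dynamics
```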

Related problems:

Furthermore, I think it may be worth considering the following related questions:

  1. What would be left of mathematical physics if we could not compute partial derivatives?
  2. Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?
  3. Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?

A historical note:

It is worth noting that more than 1000 years ago, as a result of his profound studies on optics, the mathematician and physicist Ibn al-Haytham (also known as Alhazen) reached the following insight:

Nothing of what is visible, apart from light and color, can be perceived by pure sensation, but only by discernment, inference, and recognition, in addition to sensation. – Alhazen

Today it is known that even color is a construction of the mind, as photons are the only physical objects that reach the retina. However, broadly speaking, neuroscience is only beginning to catch up with Alhazen's understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that, to a first-order approximation, the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems, which includes animal locomotion.

Evidence accumulated over several decades of neuroimaging studies implicates the cerebellum in such internal modelling. This isolates a rather uniform brain region whose circuit-level processes may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].

As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing's motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research suggesting that a single dendritic compartment may compute the XOR function [14].
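As a toy illustration of the finding in [14] (not the authors' biophysical model), a single unit with a non-monotonic activation can compute XOR, something a single monotonic threshold unit cannot do; the thresholds below are arbitrary:

```python
def dendritic_unit(x1, x2):
    """Toy 'dendritic' unit: the output rises and then falls as the summed
    input grows, mimicking the graded dendritic spikes reported in [14]."""
    s = x1 + x2
    return 1 if 0.5 < s < 1.5 else 0   # active for exactly one active input

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, dendritic_unit(*pair))  # reproduces XOR
```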

References:

  1. William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.
  2. L.D. Landau & E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.
  3. Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4: 202–210. doi:10.1137/0704019.
  4. Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software, Environments, and Tools. SIAM. ISBN 978-1-611972-06-1.
  5. Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group Department of Engineering Science University of Oxford. 2007.
  6. Connie Fan. Reverse Mathematics. University of Chicago. 2010.
  7. Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). doi:10.1038/s41593-019-0520-2.
  8. Wikipedia contributors. "Koopman–von Neumann classical mechanics." Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.
  9. Koopman, B. O. (1931). "Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. doi:10.1073/pnas.17.5.315. PMC 1076052. PMID 16577368.
  10. Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a Step Beyond. 2015.
  11. Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.
  12. Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.
  13. Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society, Series 2 (published 1937), 42: 230–265. doi:10.1112/plms/s2-42.1.230. (And Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society.)
  14. Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science. 2020.
Aidan Rocke
  • How about the discrete equivalents to derivatives, i.e., difference equations – Piyush Grover Mar 05 '20 at 18:01
  • There are important discrete physical systems, for example quantum spin systems, which one can formulate without partial derivatives. In general, quantization probably helps - you base your description on an operator algebra, and nobody forces you to cast, e.g., $[x,p]=i$ in the position or momentum representations. Cf. the ladder operator formulation of the harmonic oscillator. – Michael Engelhardt Mar 05 '20 at 18:15
  • Maxwell's equations, and others like them, have integral formulations that eliminate most if not all derivatives (and are more "geometrical"). Do you exclude that just as you exclude Cauchy's formula? – Francois Ziegler Mar 05 '20 at 18:30
  • @PiyushGrover I understand that discrete/continuous approximations are methods for implementing partial differentiation computations. As such they wouldn't count as alternative formulations. – Aidan Rocke Mar 05 '20 at 18:45
  • @FrancoisZiegler That is an interesting counter-argument. However, within a classical framework if we are to simulate the work done by any force wouldn't we need to somehow approximate second-order derivatives of position? – Aidan Rocke Mar 05 '20 at 18:49
  • Do you consider the acceleration in $F=ma$ a partial derivative? Arguably, it is a (total) derivative with respect to time $\frac{d^2x}{dt^2}$, not a partial one $\frac{\partial^2x}{\partial t^2}$. If the answer is no, then it seems to me that you can do all Newtonian physics with it. – Federico Poloni Mar 05 '20 at 19:22
  • @FedericoPoloni Doesn't the total derivative generally require the computation of partial derivatives? Ref: https://mathworld.wolfram.com/TotalDerivative.html Otherwise, the force term emerges in the Euler-Lagrange equations as a partial derivative, which is worth noting since the Lagrangian formulation is computationally efficient. Have I understood you correctly? – Aidan Rocke Mar 05 '20 at 19:30
  • I think the way you set up the question, the answer can only be No. (1) We know that mechanics can be formulated using derivatives. (2) Such models correctly reproduce the behavior of natural phenomena (excluding extreme regimes where quantum effects become important). (3) Any alternative formulation must reproduce the same behavior of natural phenomena. From your comments, it seems that (3) is all you require for your notion of equivalence. By that logic, (1) and (2) imply that any alternative formulation of mechanics will be equivalent to the one with derivatives. – Igor Khavkine Mar 05 '20 at 20:18
  • Hartry Field's Science Without Numbers gives an integral formulation of gravitation, which is worth a look to see if that would satisfy you. – Mar 05 '20 at 20:19
  • @IgorKhavkine To be precise, (3) isn’t sufficient for equivalence or my position would be tautological. If an alien civilisation is able to do (3) without (1) i.e. simulate classical mechanical phenomena without computing derivatives then their formulation doesn’t depend upon continuous/discrete approximations of derivatives. – Aidan Rocke Mar 05 '20 at 20:38
  • @MattF. My understanding of integral formulations is that once you start simulating the work done by that field (gravity/electromagnetic) on an object, you would need an approximation of second-order derivatives. In this case, I think you are referring to Gauss' law? Ref: https://en.wikipedia.org/wiki/Gauss%27s_law_for_gravity – Aidan Rocke Mar 05 '20 at 20:45
  • @AidanRocke I am not suggesting to use Euler-Lagrange equations, just $F=ma$, which should be sufficient for Newtonian mechanics. In that formulation, only functions of one variable, the time $t$, appear. So "total derivative" is a bit improper as a term, and I don't think that partial derivatives are required. Of course, one could argue that also derivatives of a single-variate function are partial derivatives with respect to that lone variable, but then your question should probably have said "without derivatives" in the first place. – Federico Poloni Mar 05 '20 at 20:49
  • @AidanRocke Well given the motivation, it seems like if the organisms can do addition and subtraction, then discrete (difference) equations should do the job of replacing derivatives. If organisms cannot do addition or subtractions, I do not know what physics (if any) can be simulated. – Piyush Grover Mar 05 '20 at 20:50
  • @PiyushGrover If I understand you correctly, finite-difference methods would then be used for solving differential equations? Ref: https://en.wikipedia.org/wiki/Finite_difference_method#Explicit_method – Aidan Rocke Mar 05 '20 at 21:13
  • @AidanRocke This wasn’t so much a counter-argument as a putative example of what you may want, or not. Basically integration by parts, or Stokes’ theorem, allows one to recast laws in an integral form avoiding derivatives although the setting is still “differential” geometry, sorry. For another example, recasting the nominally very “second-order” law that particles move on geodesics, see §§13–15 of Sternberg’s Einstein lecture General Covariance and the Passive Equations of Physics. – Francois Ziegler Mar 05 '20 at 21:18
  • I think your edit is very confusing. Why do you introduce automatic differentiation, and then not use it later? Under which conditions do you claim that it produces exact derivatives? (Spoiler: in the most common setting, IEEE arithmetic on a computer, it doesn't.) Why pseudocode in an unusual language rather than a formula? Why does the comment call it "automatic" when you claim it's numerical? Are you interested in formulating physics without partial derivatives, or in making actual computations? In which model: on a silicon computer, or in "organisms with spatiotemporal sensory input"? – Federico Poloni Mar 06 '20 at 15:38
  • @FedericoPoloni Those are good questions. I think the nature of interdisciplinary problems is that a useful formulation of the problem for a broad audience is half the challenge. Regarding automatic differentiation, the derivatives are exact up to round-off error which is the notion of equivalence I am using in (2). As for formulating physics vs. making actual computations, I think those are fundamentally related questions. Leibniz and Newton did both simultaneously. – Aidan Rocke Mar 06 '20 at 15:43
  • @FedericoPoloni Digital computers are used by most computational neuroscientists and computational biologists as models for biological computation. – Aidan Rocke Mar 06 '20 at 15:48
  • But if all this stuff about computing derivatives is tangential to the actual question, why did you include it in the first place? – Federico Poloni Mar 06 '20 at 15:50
  • @FedericoPoloni How is it tangential? I think the challenge of formulating physics using a set of mathematical operations and methods for performing actual derivative computations are related. The latter allows you to define a notion of equivalence and the former requires you to clarify the alternative formulation you are using. Naturally, this alternative formulation must be computable. – Aidan Rocke Mar 06 '20 at 15:54
  • @FedericoPoloni In fact, all the answers so far are spot on but from the comments it appears that my problem statement could have been improved so this is what I tried to do. In particular, a number of people felt that my notion of equivalence could have been made more explicit. – Aidan Rocke Mar 06 '20 at 15:56
  • Would it be fair to interpret your question as: can one describe physics without differential calculus? – Michael Bächtold Mar 07 '20 at 09:55
  • @MichaelBächtold Without partial derivative computations you wouldn’t have a multivariable differential calculus so I think that’s a very reasonable interpretation. – Aidan Rocke Mar 07 '20 at 11:02
  • I added a link to gmvh's answer, but couldn't find Ben Crowell's. Is it this one? – LSpice Jan 25 '22 at 21:11

6 Answers


As to question 2, there are certainly plenty of non-trivial discrete models in statistical physics, such as the Ising or Potts models, or lattice gauge theories with discrete gauge groups, that require no partial derivatives (or indeed any operations of differential calculus) at all to formulate and simulate.
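For instance, here is a minimal Metropolis simulation of the 2D Ising model in Python: each update needs only neighbour sums, a multiplication, and a comparison, with no derivatives in sight (lattice size, temperature, and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.44                      # lattice size, inverse temperature
spins = rng.choice(np.array([-1, 1]), size=(L, L))

for _ in range(50_000):
    i, j = rng.integers(L, size=2)
    # Energy change for flipping spin (i, j): only neighbour sums needed.
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nb
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1

print("magnetization per site:", spins.mean())
```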

Similarly, quantum mechanics can be formulated entirely in the operator formalism, and an entity incapable of considering derivatives could still contemplate the time-independent Schrödinger equation and solve it algebraically for the harmonic oscillator (using the number operator) or the hydrogen atom (using the Laplace–Runge–Lenz–Pauli vector operator).
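To make the algebraic route concrete, here is a sketch that builds a truncated harmonic-oscillator Hamiltonian from the ladder operators and diagonalizes it; no wavefunctions or derivatives appear anywhere (the truncation dimension is an arbitrary choice):

```python
import numpy as np

N = 20                                       # truncation of the number basis
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
H = a.T @ a + 0.5 * np.eye(N)                # H = a†a + 1/2, in units of ħω

# Diagonalize; any Hamiltonian built from ladder operators works the same way.
print(np.linalg.eigvalsh(H)[:5])             # [0.5 1.5 2.5 3.5 4.5]
```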

So an answer to question 1 might be "at least anything that can be written as a discrete-time Markov chain with a discrete state space, as well as anything that can be recast as an eigenvalue problem", and other problems that can be recast in purely probabilistic or algebraic language should also be safe (although it might be hard to come up with their formulations without using derivatives at some intermediate step).

As to question 3, I personally don't believe that an approach to classical mechanics or field theory can be correct if it isn't equivalent (at least at a sufficiently high level of abstraction) to formulating and solving differential equations. But the level of abstraction could conceivably be quite high -- for an attempt to formulate classical mechanics without explicitly referring to numbers (!) cf. Hartry Field's philosophical treatise "Science without Numbers".

gmvh
  • I believe Hartry Field avoids explicitly referring to numbers by assuming that physical space satisfies Hilbert's axioms for geometry, including the Archimedean and completeness axioms. From this one can derive a structure isomorphic to $\mathbb{R}$, so he actually does assume $\mathbb{R}$, implicitly. – John Stillwell Mar 06 '20 at 11:14
  • As I said, eventually you have to be able to describe differential equations and all of that (which of course includes having $\mathbb{R}$ at your disposal). And I agree that Hartry Field implicitly assumes (the consistency of) $\mathbb{R}$; as far as I can tell, his nominalism is ultimately more a matter of presentation. – gmvh Mar 06 '20 at 12:46
  • See my comment to another answer: linear algebra alone is, in some sense, also equivalent to derivatives. – Federico Poloni Mar 07 '20 at 09:59
  • After reflecting upon your answer, I wonder whether Koopman–von Neumann classical mechanics might be a candidate solution? Ref: https://en.wikipedia.org/wiki/Koopman%E2%80%93von_Neumann_classical_mechanics – Aidan Rocke Mar 07 '20 at 21:08
  • I'm not familiar with KvN mechanics, but from the Wikipedia entry it doesn't really seem to meet your criteria -- note that the Liouville operator contains partial derivatives of the Hamiltonian function, and that simply putting those in as arbitrary operators won't work, since they would have to be related by the integrability condition on the gradient of the Hamiltonian. – gmvh Mar 09 '20 at 15:08

Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?

Yes. An example is the nuclear shell model as formulated by Maria Goeppert Mayer in the 1950s. (The same would also apply to, for example, the interacting boson model.) The way this type of shell model works is that you take a nucleus that is close to a closed shell in both neutrons and protons, and you treat it as an inert core with some number of particles and holes, e.g., $^{41}\text{K}$ (potassium-41) would be treated as one proton hole coupled to two neutrons. There is some vector space of possible states for these three particles, and there is a Hamiltonian that has to be diagonalized. When you diagonalize the Hamiltonian, you have a prediction of the energy levels of the nucleus.

You do have to determine the matrix elements of the Hamiltonian in whatever basis you've chosen. There are various methods for estimating these. (They cannot be determined purely from the theory of quarks and gluons, at least not with the present state of the art.) In many cases, I think these estimates are actually done by some combination of theoretical estimation and empirical fitting of parameters to observed data. If you look at how practitioners have actually estimated them, I'm sure their notebooks do contain lots of calculus, including partial derivatives, or else they are recycling other people's results that were certainly not done in a world where nobody knew about partial derivatives. But that doesn't mean that they really require partial derivatives in order to find them.

As an example, people often use a basis consisting of solutions to the position-space Schrödinger equation for the harmonic oscillator. This is a partial differential equation because it contains the kinetic energy operator, which is basically the Laplacian. But the reality is that the matrix elements of this operator can probably be found without ever explicitly writing down a wavefunction in the position basis and calculating a Laplacian. E.g., there are algebraic methods. And in any case many of the matrix elements in such models are simply fitted to the data.

The interacting boson model (IBM) is probably an even purer example of this, although I know less about it. It's a purely algebraic model. Although its advocates claim that it is in some sense derivable as an approximation to a more fundamental model, I don't think anyone ever actually has succeeded in determining the IBM's parameters for a specific nucleus from first principles. The parameters are simply fitted to the data.

Looking at this from a broader perspective, here is what I think is going on. If you ask a physicist how the laws of physics work, they will probably say that the laws of physics are all wave equations. Wave equations are partial differential equations. However, all of our physical theories except for general relativity fall under the umbrella of quantum mechanics, and quantum mechanics is perfectly linear. There is a no-go theorem by Gisin that says you basically can't get a sensible theory by adding a nonlinearity to quantum mechanics. Because of the perfect linearity, our physical theories can also just be described as exercises in linear algebra, and we can forget about a specific basis, such as the basis consisting of Dirac delta functions in position space.

In terms of linear algebra, there is the problem of determining what is the Hamiltonian. If we don't have any systematic way of determining what is an appropriate Hamiltonian, then we get a theory that lacks predictive power. Even for a finite-dimensional space (such as the shell model), an $n$-dimensional space has $O(n^2)$ unknown matrix elements in its Hamiltonian. Determining these purely by fitting to experimental data would be a vacuous exercise, since typically the number of observations we have available is $O(n)$. One way to determine all these matrix elements is to require that the theory consist of solutions to some differential equation. But there is no edict from God that says this is the only way to do so. There are other methods, such as algebraic methods that exploit symmetries. This is the kind of thing that the models described above do, either partially or exclusively.

References

Gisin, "Weinberg's non-linear quantum mechanics and supraluminal communications," http://dx.doi.org/10.1016/0375-9601(90)90786-N , Physics Letters A 143(1-2):1-2

  • Arguably, linear algebra allows one to compute derivatives: given a rational function (or an analytic function as the limit of its Taylor series), you can evaluate it in the matrix argument $\begin{bmatrix}\lambda & 1 \\ 0 & \lambda\end{bmatrix}$, and the result you obtain is precisely $f\left(\begin{bmatrix}\lambda & 1 \\ 0 & \lambda\end{bmatrix}\right) = \begin{bmatrix}f(\lambda) & f'(\lambda) \\ 0 & f(\lambda)\end{bmatrix}$. This is, essentially, automatic differentiation recast as linear algebra. So matrix algebra is, essentially, equivalent to derivatives. – Federico Poloni Mar 07 '20 at 09:57
  • @FedericoPoloni: If I'm understanding you correctly, then you're assuming that the function has been expressed in the position basis. The point of my answer is that you can work with these models without ever even knowing any wavefunctions in the position basis. In the interacting boson model, nobody knows what the wavefunctions would be in the position basis. – Mar 07 '20 at 14:18
  • No, that is a more general statement that is independent of applications or bases: if you allow matrix algebra among the things that you are allowed to do, then you can use it to compute the derivative of any function that you can compute. – Federico Poloni Mar 07 '20 at 14:56
  • @FedericoPoloni - The irony is that the clever idea of automatic differentiation, and its putative realization through a biological system, become irrelevant if we formulate our physics problem such that it does not require any differentiation anymore, as the OP is asking us to do. That's where the OP lost me - the line of questioning seems completely self-defeating. (You probably had a similar reaction, going by some of your comments to the OP). – Michael Engelhardt Mar 07 '20 at 15:08
  • @MichaelEngelhardt Why would it be self-defeating? I consider automatic differentiation in biological systems to be the most likely scenario, given what we know, but as a scientist I think it is important to carefully consider the alternative possibility. – Aidan Rocke Mar 07 '20 at 16:46
  • @AidanRocke - I don't know how to say it any more clearly than I already did: If my physics problem doesn't require differentiation, then I won't need your nifty biological system to do any differentiations for me, will I? – Michael Engelhardt Mar 07 '20 at 17:05
  • @FedericoPoloni: No, that is a more general statement that is independent of applications or bases: if you allow matrix algebra among the things that you are allowed to do, then you can use it to compute the derivative of any function that you can compute. I think you're misunderstanding. In a model like the interacting boson model, we do not know how to compute any wavefunctions. All we know is the matrix elements of the Hamiltonian. As a concrete example, here is a matrix: $\left(\begin{matrix}1 & 2 \\ 3 & 4\end{matrix}\right)$. Please compute some derivatives for me. – Mar 07 '20 at 17:19
  • [...] Sure, I can "compute" wavefunctions in this model. It's a finite-dimensional vector space. Here's a wavefunction that I computed in some basis: $\left(\begin{matrix}1 \\ 0\end{matrix}\right)$. But I don't think this is what you had in mind as computation. – Mar 07 '20 at 17:37
  • So how do you actually compute stuff in this model? You mentioned diagonalization, so I guess one of the "primitives" that one needs to compute is the eigenvalues and eigenvectors of a given Hermitian matrix. Are there other ones? Do you need to take matrix sums? Products? – Federico Poloni Mar 07 '20 at 17:46

Well, if you take out partial derivatives, at least quantum field theory, and in particular conformal field theory, will survive the massacre. The reason is explained in my MO answer: $p$-adic numbers in physics

One can use random/quantum fields $\phi:\mathbb{Q}_{p}^{d}\rightarrow \mathbb{R}$ as toy models of fields $\phi:\mathbb{R}^d\rightarrow\mathbb{R}$. In this $p$-adic or hierarchical setting, Laplacians and all that are nonlocal and not given by partial derivatives.

Most equations in physics are local and therefore need partial derivatives in order to be formulated. What should remain, in the very hypothetical scenario proposed in the question, is everything pertaining to nonlocal phenomena.


I'd query the contention that organisms or even inorganic matter compute in the sense described.

For example, if I drop a stone on the surface of the earth, it falls in a straight line. To call this 'computing' a straight line seems rather a stretch of the word computation; to my thinking, to compute means that one ought to be conscious that one is carrying out a computation. That is, the person who dropped the stone is computing the straight line, not the stone itself. The stone merely moves in a straight line; we know it moves in a straight line, and hence, by dropping it, we are describing a straight line.

Mozibur Ullah
  • This is an excellent answer, because it finally brings into focus the question of what we mean by computation. One way to think of it is that we, as humans, arrange two physical systems to behave in ways that can be mapped into each other: Say we are trying to predict what system A will do. If we can arrange system B to do the "same" thing, then by observing system B, we can predict A. B could be a traditional general purpose computer, but doesn't have to be. Now, there is no reason for us to hobble ourselves in performing the mapping by, say, outlawing derivatives ... – Michael Engelhardt Mar 08 '20 at 15:01
  • ... it may well be that our understanding of both systems A and B, and therefore the construction of the mapping necessary for computation, hinges on using derivatives, even if system B does not "perform derivatives" in the traditional general purpose computer sense. – Michael Engelhardt Mar 08 '20 at 15:08
  • You may be interested in the historical note that I added to the question as well as this paper on brain computation: https://igi-web.tugraz.at/PDF/LNCS-10000-Theories_006_v1.pdf – Aidan Rocke Mar 08 '20 at 16:38

One way of reformulating all of (classical) mechanics is peridynamics, which does away with spatial derivatives; it is essentially a non-local reformulation, whose basic equation is quoted below.
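For reference, in the bond-based form the peridynamic equation of motion replaces the divergence of the stress tensor with an integral over a neighborhood $H_x$ of each point $x$ (a standard form of the equation; see the review cited below), so that no spatial derivatives appear and only the second time derivative of the displacement $u$ remains:

\begin{equation} \rho(x)\, \ddot{u}(x,t) = \int_{H_x} f\big(u(x',t) - u(x,t),\, x' - x\big)\, dV_{x'} + b(x,t), \end{equation}

where $f$ is the pairwise force function and $b$ is an external body force density.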

Javili, Ali, et al. "Peridynamics review." Mathematics and Mechanics of Solids 24.11 (2019): 3714-3739.


For one example of non-trivial physics without partial derivatives, one can look into Volume 1 of the Feynman Lectures. In Chapter 28, Feynman starts to develop electrodynamics without partial derivatives; they only appear in Volume 2.

Instead of Maxwell's equations, Feynman uses a somewhat complex formula for the field generated by a single moving charge, reproduced below. The formula involves only ordinary time derivatives, but it is a bit unusual in that it uses retarded time: the field at a distant point is determined by the motion of the particle some time earlier, correcting for the finite speed of light.
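For reference, the formula in question (Eq. 28.3 in Volume I; quoted here for convenience, so please check the source) expresses the field of a point charge $q$ in terms of the unit vector $\mathbf{e}_{r'}$ pointing toward the apparent (retarded) position of the charge and the retarded distance $r'$:

\begin{equation} \mathbf{E} = \frac{-q}{4\pi\varepsilon_0} \left[ \frac{\mathbf{e}_{r'}}{r'^2} + \frac{r'}{c} \frac{d}{dt}\left(\frac{\mathbf{e}_{r'}}{r'^2}\right) + \frac{1}{c^2} \frac{d^2}{dt^2}\, \mathbf{e}_{r'} \right], \end{equation}

in which only ordinary time derivatives appear, evaluated at the retarded time.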

rimu