12

The complex inner product $\langle u,v\rangle$ has two different definitions, depending on convention: $\bar{u}^Tv$ or $u^T\bar{v}$. In BLAS, I found the routines cdotu, zdotu and cdotc, zdotc. The former two actually compute $u^Tv$ (a fake inner product!), while the latter two conjugate the first vector of the inner product. Also, under either convention (conjugating $u$ or $v$), swapping the arguments conjugates the result: $\langle u,v\rangle=\overline{\langle v,u\rangle}$. Moreover, as pointed out in a comment, choosing the principal values of multi-valued complex functions can be convention-dependent.
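To make the difference concrete, here is a small C++ sketch (the vectors are just made-up values) that evaluates all three quantities for the same pair of vectors:

```cpp
#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    using C = std::complex<double>;
    std::vector<C> u = {C(1, 2), C(0, -1)};
    std::vector<C> v = {C(3, 0), C(1, 1)};

    C dotu(0), dotc_u(0), dotc_v(0);
    for (std::size_t i = 0; i < u.size(); ++i) {
        dotu   += u[i] * v[i];             // u^T v          (what cdotu/zdotu return)
        dotc_u += std::conj(u[i]) * v[i];  // conj(u)^T v    (what cdotc/zdotc return)
        dotc_v += u[i] * std::conj(v[i]);  // u^T conj(v)    (the other convention)
    }

    std::cout << "u^T v       = " << dotu   << '\n'   // (4,5)
              << "conj(u)^T v = " << dotc_u << '\n'   // (2,-5)
              << "u^T conj(v) = " << dotc_v << '\n';  // (2,5)
    // The last two are complex conjugates of each other; the first is neither.
}
```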

My question is: does this complication pose a real danger for the use of complex arithmetic in scientific computing? This issue is emphasized by the authors of deal.II, who suggest always splitting complex numbers into real and imaginary parts and using real arithmetic only. But I have never found the splitting approach convenient. For example, think about the Perfectly Matched Layer (PML) for the time-harmonic Maxwell's equations.
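To make that example concrete: with the standard complex coordinate stretching (written here for the $e^{-i\omega t}$ time convention; the sign flips for $e^{+i\omega t}$, which is itself another convention trap), the PML replaces

$$\frac{\partial}{\partial x} \;\longrightarrow\; \frac{1}{1 + i\,\sigma(x)/\omega}\,\frac{\partial}{\partial x}, \qquad \sigma \ge 0 \text{ inside the layer},$$

so the PDE coefficients are genuinely complex; splitting every field into real and imaginary parts doubles the unknowns and hides this simple structure.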

It seems this wariness of complex numbers is prevalent in most open-source FEM software, with FreeFem++ and libMesh being exceptions. But even for those two, the complex arithmetic is less tested than the real.

My final question is: shall we just always avoid using complex numbers?

Tyberius
Hui Zhang
    Does anybody really know which root of $-1$ is $i$ and which is $-i$? It would seem that a software developer should include a small set of test examples in their regression suite to guard against incorporating inconsistent conjugations in any lengthy chain of complex arithmetic computations. – hardmath Mar 21 '15 at 21:36
  • @hardmath Thank you! I added it in the question. – Hui Zhang Mar 21 '15 at 22:00
  • @hardmath: "small set of test examples" -- in most libraries that comprehensively implement linear algebra operations, there would likely be dozens or hundreds of places where inner products are taken. It would take hundreds of tests to verify their correctness, likely taking months to implement correctly. It's not impossible, of course, and some libraries have done that. It's just a lot of work and not all library authors are confident that they got it right :-( – Wolfgang Bangerth Mar 22 '15 at 10:56
  • @WolfgangBangerth, maybe you could explain the deal.ii design decision? – Bill Barth Mar 22 '15 at 14:38
  • @WolfgangBangerth: I was thinking about a user of libraries (developing software using one or more linear algebra packages) who trusts the third-party software but is aware that these are inconsistent in their treatment of complex inner products. In such a case one might make an effort to maintain compatibility by calling various libraries through "wrappers" that would conjugate or swap arguments as necessary. A fairly small set of test routines would then suffice to maintain consistency as the "outer" software evolves. – hardmath Mar 22 '15 at 22:54
  • @BillBarth: At the time the statement was written, we simply did not have any kind of support for complex vectors at all. We do now (on a branch) but I think the situation is the same as before: we're not confident that we currently catch all of the places where one has to pay attention to one or the other argument having to be conjugated. – Wolfgang Bangerth Mar 22 '15 at 23:06
  • @hardmath: Sure, but the question was specifically about the statement in deal.II. I don't think that deal.II is internally consistent. – Wolfgang Bangerth Mar 22 '15 at 23:07
  • 4
    Shall we just always avoid using complex numbers? Please, no. I believe every computational scientist needs unsymmetric eigenvalue decompositions, for instance. – Federico Poloni Mar 23 '15 at 11:57
  • @FedericoPoloni That is a good point. But those complex numbers occur only at the end, as results. Also, the results can then be represented by their real and imaginary parts for further applications. – Hui Zhang Mar 23 '15 at 20:57
  • 1
    Please do not confuse flaws in the BLAS interface with mathematical problems. – Jeff Hammond May 12 '15 at 02:49
  • I don't think I have ever seen $u^T\bar{v}$ used in the wild. Sure, when you write a scalar product as $\langle x,y \rangle$ you can choose which argument has to be conjugated, but if one needs to use both notations at the same time the most reasonable thing to do seems swapping the order of the arguments so that the transpose and the conjugate come together. So maybe it is just an issue with the "mathematician's notation" $\langle u,v\rangle$. – Federico Poloni May 23 '15 at 15:22
  • @FedericoPoloni It depends on your field. In mathematics the standard convention is $u^T\overline v$, while in physics it is $\overline u^Tv$. – doetoe Jun 25 '15 at 12:55
  • @doetoe My point is that $u^T\bar{v}=\bar{v}^Tu$, so the issue is really only with the order of the arguments. – Federico Poloni Jun 25 '15 at 13:08
  • Good luck with quantum mechanics in real numbers (it is possible). – Vladimir F Героям слава Jul 26 '17 at 14:06

2 Answers

3

You say that the problem with complex arithmetic is that there are different ways to define the scalar product for complex vectors, compared to just one way in the real case. I think the real problem with the complex scalar product is a different one, which is, however, closely related to your observation.

In complex arithmetic the order of the arguments of the scalar product matters, while in real arithmetic it does not. Many algorithms are essentially the same in complex and real arithmetic, meaning you have to write them only once and can then use the same code for both. (For example, in C++ you can use templates for this purpose.) When you are done writing your code, you usually test it. To uncover a mistake in the ordering of the arguments of some scalar product, however, you have to test your code with a complex-valued test case.
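To illustrate (a minimal sketch, not code from any particular library): a dot product templated over the scalar type is written once and used for both real and complex data. If the author forgets the conjugation, every real-valued test still passes, because conjugation is the identity on the reals; a single complex-valued test case exposes the bug.

```cpp
#include <complex>
#include <cstddef>
#include <iostream>
#include <vector>

// Conjugation that reduces to the identity for real scalars.
inline double conj_scalar(double x) { return x; }
template <typename T>
std::complex<T> conj_scalar(const std::complex<T>& x) { return std::conj(x); }

// Written once for both real and complex scalars. The conj_scalar() on the
// first argument is exactly the detail no real-valued test case can check.
template <typename Scalar>
Scalar dot(const std::vector<Scalar>& u, const std::vector<Scalar>& v) {
    Scalar sum{};
    for (std::size_t i = 0; i < u.size(); ++i)
        sum += conj_scalar(u[i]) * v[i];  // dropping conj_scalar here is invisible for real data
    return sum;
}

int main() {
    std::vector<double> a = {1.0, 2.0}, b = {3.0, 4.0};
    std::cout << dot(a, b) << '\n';       // 11 -- same result with or without the conjugation

    using C = std::complex<double>;
    std::vector<C> u = {C(0, 1)}, v = {C(0, 1)};
    std::cout << dot(u, v) << '\n';       // (1,0) with conj, (-1,0) without: only this test catches the bug
}
```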

Hence, you often get the real-valued code for an algorithm for free once you have a working code for complex-valued problems: when the code has been tested with a complex-valued test case, it is usually also correct for real numbers. Turning a real-valued code into a complex one, however, requires additional work. Therefore, there are simply more codes that work (and are thoroughly tested) for real-valued than for complex-valued problems.

My question is: does this complication cause true danger for use of complex arithmetic in scientific computing?

I would say "yes", in the following sense. When a code is not well tested for complex-valued problems, there is a higher probability of bugs, though this depends on the concrete code you are looking at. When the code is well tested, there is no problem.

My final question is: shall we just always avoid using complex numbers?

As already pointed out in the comments, there are problems that cannot be solved using real numbers alone, for example the computation of the eigenvalues of unsymmetric matrices. Hence, we need complex arithmetic.
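A minimal textbook example: the real, unsymmetric matrix

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \det(A-\lambda I) = \lambda^2 + 1 = 0 \quad\Longrightarrow\quad \lambda = \pm i,$$

has no real eigenvalues, so even purely real input data can force complex arithmetic in the output.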

H. Rittich
-1

This paper by W. Kahan is relevant:

Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit.

http://people.freebsd.org/~das/kahan86branch.pdf

It discusses how the choice of branch cuts and principal values for the complex elementary functions interacts with floating-point details such as the sign of zero, which is precisely the kind of convention dependence raised in the question.

  • 6
    Welcome to SciComp! Perhaps you could explain more about why the paper you link is relevant? A summary would make your answer more valuable, and more likely to be upvoted. We tend to discourage answers that add links without sufficient context. – Geoff Oxberry Mar 23 '15 at 18:59