I'm trying to do a high-precision discrete Fourier transform on a signal. To examine the precision, I use a Gaussian function as the signal, because its Fourier transform is also a Gaussian function.
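(For reference, and assuming the common convention F(ω) = ∫ f(t) e^{-iωt} dt, which may differ from yours by a normalization, the transform pair I'm relying on is:)

```latex
% Fourier transform of a Gaussian, convention F(\omega) = \int f(t)\,e^{-i\omega t}\,dt
f(t) = e^{-t^2/(2\tau^2)}
\quad\Longrightarrow\quad
F(\omega) = \tau\sqrt{2\pi}\; e^{-\tau^2\omega^2/2}
```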

The code is as follows:
#include <stdio.h>
#include <tgmath.h>

typedef long double complex cplx;

long double PI;

void _fft(cplx buf[], cplx out[], int n, int step)
{
    if (step < n) {
        _fft(out, buf, n, step * 2);
        _fft(out + step, buf + step, n, step * 2);
        for (int i = 0; i < n; i += 2 * step) {
            cplx t = cexpl(-I * PI * i / n) * out[i + step];
            buf[i / 2] = out[i] + t;
            buf[(i + n) / 2] = out[i] - t;
        }
    }
}

void fft(cplx buf[], int n)
{
    cplx out[n];
    for (int i = 0; i < n; i++) out[i] = buf[i];
    _fft(buf, out, n, 1);
}

int main()
{
    const int nPoints = pow(2, 12);
    PI = atan2(1, 1) * 4;
    cplx dt = 1e-3;
    cplx dOmega = 1 / (dt * nPoints);
    long double T[nPoints];
    long double DOmega[nPoints];
    cplx At[nPoints];
    cplx tau = 28.4e-3;
    for (int i = 0; i < nPoints; ++i) {
        T[i] = dt * (i - nPoints / 2);
        DOmega[i] = dOmega * (i - nPoints / 2);
        At[i] = cexpl(-T[i] * T[i] / 2 / (tau * tau));
    }
    fft(At, nPoints);
    FILE* fw = fopen("fft_01.txt", "w+");
    for (int i = 0; i < nPoints; ++i) {
        fprintf(fw, "%.15Le, %.15Le\n", DOmega[i], fabs(At[i]));
    }
    return 0;
}
When I change the defined type from float complex to double complex, the result does show an improvement in precision. However, when I change it from double complex to long double complex, there is no improvement, even though the numbers are still far from the smallest representable long double.
I think the two wings of the peak should fall off the way they do in the Mathematica result, but here they seem stuck around 10^(-35) and cannot get any lower.
When I then apply an iDFT to the result of the DFT, it becomes even more apparent that long double precision has been lost. The blue curve is the original signal, with high precision. The green one is the result of iDFT(DFT(x)), which should coincide with the blue one, but its smallest values are much larger.

Can anyone tell me where the problem is? Thanks in advance.
PS: I am on macOS 10.13.1, using gcc as the compiler.
I have checked the numerical limits on my platform:
storage size for float: 4 bytes
minimum float positive value: 1.175494e-38
maximum float positive value: 3.402823e+38
precision value: 6
storage size for double: 8 bytes
minimum double positive value: 2.225074e-308
maximum double positive value: 1.797693e+308
precision value: 15
storage size for long double: 16 bytes
minimum long double positive value: 3.362103e-4932
maximum long double positive value: 1.189731e+4932
precision value: 18



Comments:

- "PI = atan2(1, 1) * 4;" looks like it's always double. Try M_PI. Also, I agree with nicoguaro, the question could be a little easier to read if you made the actual question a bit more explicit. – Kirill Nov 14 '17 at 16:36
- …4*atan2(1.0l, 1.0l) to make sure; the long double answer doesn't look right, because you'd expect the numbers to be smaller than for double. – Kirill Nov 14 '17 at 20:04
- 4*atan2(1.0l, 1.0l) makes just the same as M_PI. And I think right now the result is at the bottom of float (1e-38) instead of double (1e-308). – Roy Liao Nov 14 '17 at 20:18
- …long double as a synonym for double, so one really does need to carefully check everything. – Kirill Nov 14 '17 at 23:37
- …double on my platform is 2.225074e-308, and for float it's 1.175494e-38. Even though it cannot reach the limit for long double, I wonder why it is so far away from double? – Roy Liao Nov 15 '17 at 08:23
- …gcc as the default compiler. Thanks – Roy Liao Nov 15 '17 at 08:25
- …double", but the accuracy with which operations are performed (machine precision), which is 2e-16 for doubles and 2e-34 for long doubles. You should probably stop and learn something more about floating point computations before you go on with your experiments, otherwise there are more surprises awaiting you. – Federico Poloni Nov 15 '17 at 08:49
- 2e-16 is the value for epsilon(1). I do not know the difference, but in Matlab, epsilon(0) is much smaller than epsilon(1). Is the situation just the same here in C? – Roy Liao Nov 15 '17 at 08:58
- 2e-16 is important not because it is epsilon(1), but because it is the accuracy bound of operations (each operation is guaranteed to return a result within a relative error 2e-16 of the true result). DFT is a stable computation in norm, which means that it is going to return a result within a relative normwise error 2e-16 (times some factor that depends polynomially on the dimension) of the true result. – Federico Poloni Nov 15 '17 at 10:00
- …libquadmath, hpa, mapm, ttmath, and gmp as well. Do you have some other suggestion? Thanks a lot. – Roy Liao Nov 16 '17 at 17:50