
As far as I know, for any signal the first $N/2$ frequency samples and the second $N/2$ frequency samples are equal (in magnitude), so (sorry if I made a mistake in the indices): $$X[k]=\sum_{n=0}^{N-1}x[n]e^{-j2 \pi kn/N}, \tag{1}$$

$$X^*[1]=X[N-1],\ X^*[2]=X[N-2],\ \ldots \qquad \text{if} \ x[n]\in \mathbb{R} \ \forall n\in\mathbb{Z} \tag{2}$$

For example (calculating FFT in Python):

```python
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

t = np.arange(0, 16, 1)
N = len(t)
signal = np.sin(2 * np.pi * 2 * t / N)
FFT = np.fft.fft(signal)
freqs = np.fft.fftfreq(len(FFT))

print(FFT[2])
print(FFT[-2])
```

the two printed values are complex conjugates of each other, so their magnitudes are the same (within round-off error); the first print gives (-1.6229664991885371e-15-8.000000000000002j)

Wouldn't it be better to calculate only the first $N/2$ frequency samples, store them, and then "rearrange" them (as in $(2)$) to get the second $N/2$ frequency samples? If we can do so, then it looks like there is no advantage to using the FFT instead of the DFT (because, as I understand from this post, for $N/2$ samples there is no advantage to using the FFT). My question is not about the math; it's about calculation speed.

robert bristow-johnson
Curious

1 Answer


First, there's a pedantic point to get out of the way: it's not FFT *or* DFT -- the FFT is just a specific method of computing the DFT that's advantageous under many circumstances.

Any DFT takes $N$ points and transforms them into $N$ points on the output. In the process, it loses no information (in fact, you can show that the formal definition of a DFT is identical to multiplying the input vector with an $N \times N$ matrix, and then you can show that matrix is not only never singular, but that it is always about as well-conditioned as a matrix can be).
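The matrix view of the DFT can be checked directly in NumPy. This is a small sketch (the variable names `W`, `n`, and `x` are my own, not from the post) showing that the DFT matrix reproduces `np.fft.fft` and that its condition number is as good as it gets:

```python
import numpy as np

N = 8
n = np.arange(N)
# The DFT matrix: W[k, m] = exp(-2j*pi*k*m/N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

# Multiplying the input vector by W is exactly the DFT definition
x = np.random.default_rng(0).standard_normal(N)
assert np.allclose(W @ x, np.fft.fft(x))

# W / sqrt(N) is unitary: every singular value of W equals sqrt(N),
# so the condition number is 1 -- the best possible
print(np.linalg.cond(W))
```

A condition number of 1 is the "about as well-conditioned as a matrix can be" claim made concrete: no direction of the input is amplified more than any other, so no information is lost to round-off asymmetries.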

Because it loses no information, it must return as many points as it takes in -- in general this means $N$ points out for $N$ points in.

As far as I know, for any signal the first $N/2$ frequency samples and second $N/2$ frequency samples are equal (by magnitude)

This is not always true. It's certainly true if the input signal is all real-valued. In the general case where the input signal can be complex-valued, this is not at all true.
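A quick way to see this is to feed the FFT a complex-valued input; the conjugate symmetry of $(2)$ disappears. A minimal sketch (seeded random data, my own construction):

```python
import numpy as np

rng = np.random.default_rng(1)
# Complex-valued input: independent real and imaginary parts
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
X = np.fft.fft(x)

# For real input |X[k]| == |X[N-k]|; for complex input that generally fails
print(np.abs(X[1]), np.abs(X[-1]))  # different magnitudes
```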

Wouldn't it be better to calculate only the first $N/2$ frequency samples, store it and then "rearrange" (as in $(2)$) it for the second $N/2$ frequency samples?

If the input data is guaranteed to be real, there's a slight advantage to be had in doing so. In fact, most FFT packages have a real-input FFT variant (e.g., scipy.fft.rfft). This saves some operations, but not a lot.

Note that this will seemingly violate the "$N$ points out for $N$ points in" rule that I quoted above. It doesn't, because when you do an FFT on real-valued data you put in $N$ real-valued points, and you get out one or two real-valued points plus $\frac N 2 - 1$ or $\frac {N - 1} 2$ complex-valued points. Each of those complex-valued points can be considered to be a pair of two real numbers, so you put in $N$ real numbers, you get out $N$ real numbers, and information is preserved.
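The bookkeeping above can be verified with NumPy's real-input FFT (a sketch for the even-$N$ case; `np.fft.rfft` returns bins $0$ through $N/2$ inclusive):

```python
import numpy as np

N = 16
x = np.random.default_rng(2).standard_normal(N)

full = np.fft.fft(x)    # N complex bins
half = np.fft.rfft(x)   # N//2 + 1 bins, covering bins 0 .. N/2

# rfft returns the non-redundant half of the spectrum
assert len(half) == N // 2 + 1
assert np.allclose(half, full[:N // 2 + 1])

# For even N, bin 0 (DC) and bin N/2 (Nyquist) are purely real;
# 2 real numbers + (N/2 - 1) complex pairs = N real numbers total
assert abs(half[0].imag) < 1e-12
assert abs(half[-1].imag) < 1e-12
```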

If we can do so, then it looks like there is no advantage of using FFT instead of DFT

The naive DFT takes $\mathcal O(N^2)$ operations as $N \to \infty$. The FFT takes $\mathcal O(N \log N)$ operations as $N \to \infty$. This is true even when you're wasting time by computing the FFT of an all-real vector using a plain old FFT. Usually for small $N$ the "plain old" DFT is better, but there is always a crossover point where the FFT starts executing faster.
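To make the comparison concrete, here is a sketch of a naive $\mathcal O(N^2)$ DFT (my own helper, `naive_dft`, not from the post) checked against `np.fft.fft`, along with the rough operation counts that drive the crossover:

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the DFT sum in equation (1)."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

x = np.random.default_rng(3).standard_normal(256)
assert np.allclose(naive_dft(x), np.fft.fft(x))

# Rough operation counts N^2 vs N*log2(N): the gap widens quickly
for N in (16, 256, 4096):
    print(N, N**2, int(N * np.log2(N)))
```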

If you want, you can use the real-input FFT. It usually takes a bit of extra bookkeeping to get right, so you may only want to use it where processing time is more important than a bit of extra work and a lot of extra comments, so that follow-on work gets all the 't's crossed and 'i's dotted correctly.

(because as I understand from [this post][11] for $N/2$ samples there is no advantage of using FFT).

Skimming the answers to that post, I don't see anyone saying that. I could be missing it. But if someone did say that, they were in error -- because the "fast" part of the fast Fourier transform is the fact that the computational complexity is $\mathcal O(N \log N)$ operations as $N \to \infty$, and that will eventually always beat the naive DFT's $\mathcal O(N^2)$ operations as $N \to \infty$.

TimWescott
  • thank you for such a comprehensive answer!) I don't state that anyone in the post I linked wrote it; for this reason I mentioned "as I understand" in my question. I just concluded from that post that if I consider $N/2$ samples, the naive DFT will require $N^2/2$ multiplications and summations and the FFT $N^2/2+N$, which confused me – Curious Mar 04 '23 at 20:25
    If this is the answer then mark it as such (and, in general, do that with other answers that are "the one"). – TimWescott Mar 06 '23 at 02:54