
It seems intuitive to me that, in a digital system, a controller that samples the error signal "too slowly" will fail to stabilize the system.

Is there a theory/set of metrics/equation I can use to represent this in the frequency or time domain?

i.e.

$$
\begin{aligned}
e(t) &: \text{actual error (not the sampled error) as a function of time} \\
E(s) &: \text{Laplace / frequency-domain representation of the error signal} \\
F_s &: \text{rate at which the error is sampled}
\end{aligned}
$$

If $e(t)$ has frequency components much higher than $F_s$ (or, more precisely, if $F_s$ is less than twice the highest frequency component present in $E(s)$, i.e. the Nyquist criterion is violated), is it possible to properly control the system?
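For concreteness, here is a minimal numerical sketch of the effect I have in mind (the plant, gain, and sample periods are made-up numbers, purely for illustration): an unstable first-order plant $\dot{x} = ax + u$ under proportional feedback $u = -Kx$ applied through a zero-order hold every $T = 1/F_s$ seconds. The same gain that stabilizes the plant at a fast sample rate fails at a slow one.

```python
import numpy as np

# Toy model (illustrative numbers only): unstable plant dx/dt = a*x + u, a > 0,
# with sampled proportional feedback u = -K*x held constant over each period T.
# Exact discretization over one sample period:
#   x[k+1] = e^{aT} * x[k] + (e^{aT} - 1)/a * u[k]
a, K = 1.0, 2.0

def closed_loop_pole(T):
    """Discrete closed-loop pole when the error is sampled every T seconds."""
    phi = np.exp(a * T)        # open-loop growth over one sample period
    gamma = (phi - 1.0) / a    # zero-order-hold input coupling
    return phi - K * gamma     # x[k+1] = pole * x[k]

for T in (0.1, 0.5, 1.0, 1.5):
    pole = closed_loop_pole(T)
    print(f"T = {T:.1f} s -> closed-loop pole {pole:+.3f} "
          f"({'stable' if abs(pole) < 1 else 'UNSTABLE'})")
```

What I am looking for is a general frequency- or time-domain criterion that predicts where this breaks down.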

  • Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. – Community Nov 19 '21 at 02:10

1 Answer


If you do not filter out (or at least attenuate) the high-frequency components of $e(t)$, they will be "folded back" into the sampled baseband through aliasing.

Practically speaking, no low-pass filter is ideal, so there will always be some high-frequency content that is under-sampled and aliased back. You can view these undesired components as a noise contribution at the output of your sampler. Depending on their nature, they might merely add white noise that produces small random errors, or they may introduce a bias that you need to account for.
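As a quick numerical illustration of the fold-back (the rates below are arbitrary, chosen only so the alias lands on a clean frequency): a 9 Hz component sampled at $F_s = 10\,\text{Hz}$ reappears at $|9 - 10| = 1\,\text{Hz}$, where it is indistinguishable from a genuine 1 Hz error term.

```python
import numpy as np

# Illustrative numbers only: a 9 Hz component of e(t) sampled at Fs = 10 Hz
# folds back to |9 - 10| = 1 Hz in the sampled spectrum.
Fs = 10.0                                   # sample rate, Hz
f_true = 9.0                                # component above the Nyquist rate Fs/2
n = np.arange(100)                          # 10 s of samples
samples = np.sin(2 * np.pi * f_true * n / Fs)

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / Fs)
print(f"Peak of sampled spectrum: {freqs[np.argmax(spectrum)]:.1f} Hz "
      f"(expected alias at {abs(f_true - Fs):.1f} Hz)")
```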

This paper by Texas Instruments gives a pretty comprehensive treatment of the subject.

guero64
  • Hey thanks for the effort. What I'm actually interested in is how sampling rate improves controller stability and behavior even with a proper anti-aliasing filter. – my name is aj Feb 24 '22 at 16:51