Long answer:
Let's model the information flow from your "hidden" IIR $X$ to your observable output $Y$ as
$$ X \longrightarrow Y$$
Then, we call the amount of information you get per observation the *mutual information* $I(X;Y)$; that information is the reduction of the uncertainty about $X$ that you achieve by observing $Y$.
We call the expected uncertainty of something its *entropy*; in your case, the uncertainty about $X$ is its entropy, typically denoted $H(X)$.
Now, the nice thing about all this is that $H(X|Y)$, i.e. the "uncertainty about $X$ that remains when you know $Y$", is actually just the entropy of $X$ minus the information you get, so:
$$H(X|Y) = H(X) - I(X;Y)\text.\label{equiv}\tag1$$
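A quick numeric sanity check of $\eqref{equiv}$ (a toy example with made-up numbers, in Python): let $X$ be one hidden bit and $Y$ a noisy observation of it through a binary symmetric channel with 10 % crossover.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector or matrix."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution p(x, y): rows are values of X, columns are values of Y.
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

H_x  = entropy(p_xy.sum(axis=1))     # H(X): prior uncertainty about X
H_y  = entropy(p_xy.sum(axis=0))     # H(Y)
H_xy = entropy(p_xy)                 # joint entropy H(X, Y)

H_x_given_y = H_xy - H_y             # chain rule: H(X|Y) = H(X,Y) - H(Y)
I_xy        = H_x + H_y - H_xy       # mutual information I(X;Y)

print(H_x_given_y, H_x - I_xy)       # both ≈ 0.469 bits: equation (1) holds
```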
The attacker's goal is to reduce the uncertainty he still has about $X$ to $0$.
Now, any signal that "excites" all the eigenfunctions of a system can fully characterize it, so an attacker only needs to send the full set of eigenfunctions through your IIR. And since your IIR is an LTI system, those eigenfunctions happen to be the complex exponentials, i.e. oscillations at every representable frequency.
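To make that concrete, here is a minimal sketch (Python with NumPy/SciPy; the 2nd-order Butterworth filter and all numbers are stand-ins, not anything from your question) of how an attacker who controls the input recovers the full frequency response from a single broadband excitation:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# A hypothetical hidden filter standing in for X: a 2nd-order Butterworth low-pass.
b, a = signal.butter(2, 0.2)

# Broadband probe: white noise has energy at every frequency, so it "excites"
# every eigenfunction (complex exponential) of the LTI system at once.
x = rng.standard_normal(2**16)
y = signal.lfilter(b, a, x)                 # the observable output Y

# Classic H1 estimate of the frequency response: H(f) = S_xy(f) / S_xx(f)
f, Sxy = signal.csd(x, y, nperseg=1024)
_, Sxx = signal.welch(x, nperseg=1024)
H_est = Sxy / Sxx

# Compare against the true response of the hidden filter.
_, H_true = signal.freqz(b, a, worN=f, fs=1.0)
print("worst-case estimation error:", np.max(np.abs(H_est - H_true)))
```

White noise is just the cheapest way of hitting all frequencies at once; a swept sine or a multisine does the same job.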
You can reduce the amount of information an attacker can get about your system by artificially inserting noise. Information-theoretically, this increases the irrelevance $H(Y|X)$ (even if you knew $X$ exactly, you wouldn't fully know $Y$, because noise has been added).
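Re-running the same kind of estimate with deliberately injected output noise shows that defense at work: for a fixed amount of observed data, the more noise you add, the worse the attacker's estimate (again a sketch with made-up numbers):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
b, a = signal.butter(2, 0.2)                 # the hidden coefficients again
x = rng.standard_normal(2**16)               # attacker's broadband probe
y_clean = signal.lfilter(b, a, x)

for noise_std in (0.0, 0.1, 1.0):            # how much randomness you inject
    y = y_clean + noise_std * rng.standard_normal(y_clean.size)
    f, Sxy = signal.csd(x, y, nperseg=1024)
    _, Sxx = signal.welch(x, nperseg=1024)
    _, H_true = signal.freqz(b, a, worN=f, fs=1.0)
    err = np.max(np.abs(Sxy / Sxx - H_true))
    print(f"noise_std = {noise_std}: worst-case response error = {err:.4f}")
```

Note that the noise doesn't make the estimate impossible; it only forces the attacker to collect more data for the same accuracy, which is exactly what a smaller $I(X;Y)$ per observation means.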
The mutual information $I(X;Y)$ as used in $\eqref{equiv}$ is symmetric, i.e. $I(X;Y)=I(Y;X)$; hence it follows that
\begin{align}
H(Y|X) &= H(Y) - I(X;Y)\label{irr}\tag2\\
&\overset{\eqref{equiv}}= H(Y) - (H(X)-H(X|Y))\\
&= H(Y) - H(X) + H(X|Y)\\
H(X|Y) &= H(Y|X) + H(X)- H(Y)
\end{align}
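If you want to convince yourself of that last line, it is easy to check numerically for an arbitrary joint distribution (a toy sketch, with the conditional entropies computed directly from their definitions rather than via the chain rule):

```python
import numpy as np

rng = np.random.default_rng(0)
p_xy = rng.random((4, 3))                  # arbitrary joint pmf over X (4 values) and Y (3 values)
p_xy /= p_xy.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

# H(X|Y) = sum_y p(y) H(X | Y=y), and symmetrically for H(Y|X)
H_x_given_y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j]) for j in range(p_xy.shape[1]))
H_y_given_x = sum(p_x[i] * entropy(p_xy[i, :] / p_x[i]) for i in range(p_xy.shape[0]))

# The last line of the derivation above:
print(np.isclose(H_x_given_y, H_y_given_x + entropy(p_x) - entropy(p_y)))   # True
```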
Your objective was to stop a reverse engineer, i.e. to maximize $H(X|Y)$.
Since $H(X)$ is fixed (your coefficients can only take so many values, so it amounts to a fixed number of bits), your only way of tuning this objective function is to increase $H(Y|X)$. And the only way to do so is to insert truly random variations into your output.
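To get a feeling for the numbers (everything below is an illustrative assumption, not taken from your setup): model the injected randomness as additive white Gaussian noise of power $N$ on a useful output of power $P$. Then each output sample can carry at most $\tfrac12\log_2(1+P/N)$ bits about $X$, so the attacker needs at least $H(X)$ divided by that many observations.

```python
import numpy as np

# Hypothetical numbers: 16 coefficients, each quantized to 12 bits -> H(X) <= 192 bits.
H_X = 16 * 12
P, N = 1.0, 0.25                              # signal power vs. injected noise power

bits_per_sample = 0.5 * np.log2(1 + P / N)    # AWGN capacity per observation, ~1.16 bits
print("observations needed, at the very least:", int(np.ceil(H_X / bits_per_sample)))
```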