Residual autocorrelation does not cause bias, but it monkeys with the variance
Because bias concerns only the expected value of an estimator, residual autocorrelation does not cause bias, but it does monkey with the variance of the estimator. This means it still affects the properties of the estimator: if the autocorrelation is ignored, the usual variance formula will under- or over-estimate the true estimator variance, which in turn affects confidence and prediction intervals. To see this, note that for the linear model $\mathbf{Y} = \mathbf{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$ the deviation of the OLS estimator from the true coefficient vector can be written as a linear transformation of the error vector:
$$\begin{align}
\hat{\boldsymbol{\beta}}
&= (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \mathbf{Y} \\[6pt]
&= (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} (\mathbf{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon}) \\[6pt]
&= \boldsymbol{\beta} + (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \boldsymbol{\varepsilon}. \\[6pt]
\end{align}$$
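As a quick numerical sanity check of this identity, here is a minimal sketch in Python (NumPy); the design matrix, true coefficients and error draw below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative design matrix, true coefficients and error draw
n = 50
x = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([2.0, -1.0, 0.5])
eps = rng.normal(size=n)
y = x @ beta + eps

# OLS estimate computed directly from the data
beta_hat = np.linalg.solve(x.T @ x, x.T @ y)

# The same estimate via the identity beta + (x'x)^{-1} x' eps
beta_hat_identity = beta + np.linalg.solve(x.T @ x, x.T @ eps)

print(np.allclose(beta_hat, beta_hat_identity))  # True
```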
If we denote the error mean as $\boldsymbol{\mu}_\mathbf{x} \equiv \mathbb{E}(\boldsymbol{\varepsilon} | \mathbf{x})$ and the error variance as $\boldsymbol{\Sigma}_\mathbf{x} \equiv \mathbb{V}(\boldsymbol{\varepsilon} | \mathbf{x})$ then we get the corresponding estimator moments:
$$\begin{align}
\mathbb{E}(\hat{\boldsymbol{\beta}} | \mathbf{x})
&= \boldsymbol{\beta} + (\mathbf{x}^\text{T} \mathbf{x})^{-1} \mathbf{x}^\text{T} \boldsymbol{\mu}_\mathbf{x}, \\[6pt]
\mathbb{V}(\hat{\boldsymbol{\beta}} | \mathbf{x})
&= (\mathbf{x}^\text{T} \mathbf{x})^{-1} (\mathbf{x}^\text{T} \boldsymbol{\Sigma}_\mathbf{x} \mathbf{x}) (\mathbf{x}^\text{T} \mathbf{x})^{-1}. \\[6pt]
\end{align}$$
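The sandwich form of the variance can be checked by simulation. The sketch below uses an arbitrary positive-definite error covariance chosen purely for illustration, and compares the formula above against the empirical mean and covariance of the OLS estimator over repeated zero-mean error draws.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 40
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])

# An arbitrary positive-definite error covariance, chosen purely for illustration
a = rng.normal(size=(n, n))
sigma_mat = a @ a.T / n + np.eye(n)

# Sandwich formula: (x'x)^{-1} (x' Sigma x) (x'x)^{-1}
xtx_inv = np.linalg.inv(x.T @ x)
var_formula = xtx_inv @ (x.T @ sigma_mat @ x) @ xtx_inv

# Monte Carlo check: repeatedly draw zero-mean errors with covariance Sigma,
# refit OLS, and compare the empirical moments of beta-hat with the formulas
chol = np.linalg.cholesky(sigma_mat)
draws = np.array([
    np.linalg.solve(x.T @ x, x.T @ (x @ beta + chol @ rng.normal(size=n)))
    for _ in range(20000)
])

print(np.round(draws.mean(axis=0), 3))        # close to beta: unbiased
print(np.round(np.cov(draws, rowvar=False), 4))
print(np.round(var_formula, 4))               # close to the empirical covariance
```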
The linearity assumption in the model is that $\mathbb{E}(\boldsymbol{\varepsilon} | \mathbf{x}) = \mathbf{0}$. So long as this assumption holds, we have $\mathbb{E}(\hat{\boldsymbol{\beta}} | \mathbf{x}) = \boldsymbol{\beta}$, so the OLS estimator is unbiased. However, if the error terms have a common variance (equivariance) but non-zero autocorrelation then we get a non-diagonal variance matrix of the form:
$$\begin{align}
\boldsymbol{\Sigma}_\mathbf{x}
&= \sigma^2 \begin{bmatrix}
1 & \rho_{1} & \rho_{2} & \cdots & \rho_{n-3} & \rho_{n-2} & \rho_{n-1} \\
\rho_{1} & 1 & \rho_{1} & \cdots & \rho_{n-4} & \rho_{n-3} & \rho_{n-2} \\
\rho_{2} & \rho_{1} & 1 & \cdots & \rho_{n-5} & \rho_{n-4} & \rho_{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
\rho_{n-3} & \rho_{n-4} & \rho_{n-5} & \cdots & 1 & \rho_{1} & \rho_{2} \\
\rho_{n-2} & \rho_{n-3} & \rho_{n-4} & \cdots & \rho_{1} & 1 & \rho_{1} \\
\rho_{n-1} & \rho_{n-2} & \rho_{n-3} & \cdots & \rho_{2} & \rho_{1} & 1 \\
\end{bmatrix}.
\end{align}$$
This gives a complicated leading term $(\mathbf{x}^\text{T} \mathbf{x})^{-1} (\mathbf{x}^\text{T} \boldsymbol{\Sigma}_\mathbf{x} \mathbf{x})$ in the variance expression. If there is no autocorrelation then $\boldsymbol{\Sigma}_\mathbf{x} = \sigma^2 \mathbf{I}$, so this term reduces down to the constant multiple $\sigma^2 \mathbf{I}$ of the identity matrix, which gives the standard expression $\mathbb{V}(\hat{\boldsymbol{\beta}} | \mathbf{x}) = \sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1}$ for the variance of the OLS estimator.
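To see concretely how this leading term departs from the no-autocorrelation case, the sketch below builds a variance matrix of the Toeplitz form shown above, assuming (purely for illustration) an AR(1)-style decay $\rho_{k} = \rho^{k}$ and using `scipy.linalg.toeplitz`, and compares the resulting sandwich variance with the standard expression $\sigma^2 (\mathbf{x}^\text{T} \mathbf{x})^{-1}$.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)

n, sigma2, rho = 100, 1.0, 0.7
x = np.column_stack([np.ones(n), rng.normal(size=n)])

# Toeplitz autocorrelation matrix; rho_k = rho**k is an illustrative AR(1)-style choice
corr = toeplitz(rho ** np.arange(n))
Sigma = sigma2 * corr

xtx_inv = np.linalg.inv(x.T @ x)
var_autocorr = xtx_inv @ (x.T @ Sigma @ x) @ xtx_inv   # sandwich form
var_naive = sigma2 * xtx_inv                           # no-autocorrelation form

print(np.round(var_autocorr, 4))
# With positive autocorrelation the naive form tends to understate the
# variance of the intercept here, since the regressor itself is not autocorrelated
print(np.round(var_naive, 4))
```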
The result of all this is that the presence of autocorrelation in the error terms affects the variance of the OLS estimator (but not its expected value, so it is still unbiased under the linearity assumption). This has flow-on effects if we want to use the OLS estimator to obtain confidence intervals for the true coefficients in the regression, or to obtain prediction intervals for the response variable for one or more new observations. Generally speaking, to do these things we would have to estimate the autocorrelation under some stipulated form (e.g., an autoregressive structure) and then incorporate this into WLS estimation.
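As one concrete illustration of this kind of procedure (not necessarily the only way to do it), the sketch below simulates a regression with AR(1) errors and fits it with statsmodels' `GLSAR`, which stipulates an autoregressive error form and estimates the autocorrelation iteratively from the residuals; all parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulate a regression with AR(1) errors (illustrative values)
n, rho = 200, 0.7
x = sm.add_constant(rng.normal(size=n))
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + rng.normal()
y = x @ np.array([1.0, 2.0]) + eps

# Plain OLS: unbiased point estimates, but its naive standard errors
# ignore the autocorrelation in the errors
ols = sm.OLS(y, x).fit()

# One way to account for it: stipulate an AR(1) error structure and fit by
# iterated feasible GLS (statsmodels' GLSAR)
glsar = sm.GLSAR(y, x, rho=1).iterative_fit(maxiter=10)

print(ols.bse)    # naive OLS standard errors
print(glsar.bse)  # standard errors from the fitted AR(1) error model
```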