Let $Y_1,\ldots,Y_n$ be a random sample from some distribution $F_\theta$. An estimator $\hat\theta$ for $\theta$ is called unbiased if and only if the bias
$$b(\theta) = E_\theta(\hat\theta)-\theta,$$
equals zero; otherwise, the estimator is called biased.
In many cases $b(\theta)$ is not exactly zero but depends on $n$ in such a way that $\lim_{n\to\infty} b(\theta) = 0$; in that case the estimator is called asymptotically unbiased.
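For example (introducing, for illustration, the sample mean $\bar Y = n^{-1}\sum_{i=1}^n Y_i$ and assuming $F_\theta$ has finite variance $\sigma^2$), the plug-in variance estimator
$$
\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (Y_i-\bar Y)^2, \qquad E_\theta(\hat\sigma^2)=\frac{n-1}{n}\sigma^2,
$$
has bias $b(\sigma^2) = -\sigma^2/n$, which is nonzero for every finite $n$ but tends to zero as $n\to\infty$: the estimator is biased yet asymptotically unbiased.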
On the other hand, an estimator is called consistent if it converges in probability to $\theta$, that is, if for every $\epsilon>0$,
$$
\lim_{n\to\infty}P_\theta(|\hat\theta -\theta|<\epsilon) = 1.
$$
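A standard example, again assuming $F_\theta$ has mean $\theta$ and finite variance $\sigma^2$, is the sample mean $\bar Y$: by Chebyshev's inequality,
$$
P_\theta(|\bar Y-\theta|\ge\epsilon) \le \frac{\text{var}_\theta(\bar Y)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2} \longrightarrow 0,
$$
so $P_\theta(|\bar Y-\theta|<\epsilon)\to 1$ for every $\epsilon>0$, and $\bar Y$ is consistent for $\theta$.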
Consistency is related to unbiasedness: a sufficient (though not necessary) condition for consistency is that both the bias and the variance vanish in the limit,
$$
\lim_{n\to\infty} b(\theta) = 0,\text{ and } \lim_{n\to\infty}\text{var}_\theta(\hat\theta)=0.
$$
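To see why this condition suffices, decompose the mean squared error as $E_\theta[(\hat\theta-\theta)^2]=\text{var}_\theta(\hat\theta)+b(\theta)^2$; the two limits together give $E_\theta[(\hat\theta-\theta)^2]\to 0$, and Markov's inequality then yields
$$
P_\theta(|\hat\theta-\theta|\ge\epsilon)\le \frac{E_\theta[(\hat\theta-\theta)^2]}{\epsilon^2}\longrightarrow 0.
$$
The condition is not necessary, however: an estimator can converge in probability to $\theta$ even when its bias or variance does not converge to zero, since convergence in probability does not imply convergence in mean square.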