Let's approach this from first principles.
An estimator $t$ assigns some guess of $\theta$ to any possible outcome $X.$ Since the possible outcomes are integers, write this guess as $t_i$ when $X=i$ for any integer $i.$
We are hoping to find estimators that tend to be close to $\theta$ when $\theta$ is the parameter. Clearly, when $i$ is observed the possible values of $\theta$ are limited (for sure) to the set $\{i+1,i,i-1\}.$ Thus, a good estimator is likely to guess $\theta$ is close to the observation $i.$ Let us therefore express $t$ in terms of how far it departs from the observation; namely, let
$$t_i = i + \delta_i.$$
When $\theta$ is the parameter, the outcomes $\theta-1,$ $\theta,$ and $\theta+1$ have equal probabilities of $1/3$ and all other integers have zero probability.
Consequently, the expectation of $t$ when the parameter is $\theta=i$ is $$E(t\mid \theta=i) = \frac{1}{3}\left(t_{i-1} + t_i + t_{i+1}\right) = i + \frac{1}{3}\left(\delta_{i-1} + \delta_i + \delta_{i+1}\right).$$ Because $t$ must be unbiased, this quantity equals $i$ no matter what $i$ might be, showing that for all $i,$ $$\delta_{i-1} + \delta_{i} + \delta_{i+1}=0.\tag{1}$$ Already this is a huge restriction, because if we specify (say) $\delta_0$ and $\delta_1,$ this relation recursively requires $\delta_{-1} = \delta_2 = -(\delta_0 + \delta_1),$ *etc.*, thereby completely determining the estimator.
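If you like, you can watch this restriction at work numerically. The following sketch is only an illustration (the helper `make_delta` and the starting values $0.3$ and $-1.2$ are mine, not part of the problem): it extends two chosen values of $\delta$ by the recursion and checks that the resulting estimator is unbiased at every $\theta$ it tries.

```python
# Illustrative check only: build delta_i from two chosen starting values
# using the recursion delta_{i+1} = -(delta_{i-1} + delta_i) implied by (1),
# then verify E(t | theta) = theta on a small range of thetas.
def make_delta(d0, d1, lo=-10, hi=10):
    delta = {0: d0, 1: d1}
    for i in range(1, hi):               # extend to the right
        delta[i + 1] = -(delta[i - 1] + delta[i])
    for i in range(0, lo, -1):           # extend to the left
        delta[i - 1] = -(delta[i] + delta[i + 1])
    return delta

delta = make_delta(d0=0.3, d1=-1.2)        # arbitrary starting choices
t = {i: i + d for i, d in delta.items()}   # the estimator t_i = i + delta_i

for theta in range(-5, 6):
    mean = (t[theta - 1] + t[theta] + t[theta + 1]) / 3
    assert abs(mean - theta) < 1e-12       # unbiased: E(t | theta) = theta
print("unbiased at every theta checked")
```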
The variance of $t$ is $$\operatorname{Var}(t\mid \theta=i) = \frac{1}{3}\left((t_{i-1}-i)^2 + (t_i-i)^2 + (t_{i+1}-i)^2\right) = \frac{1}{3}\left((\delta_{i-1}-1)^2 + \delta_i^2 + (\delta_{i+1}+1)^2\right).\tag{2}$$ A UMVUE must make $(2)$ as small as possible among all unbiased estimators, simultaneously for every $i.$
A UMVUE must have variance no larger than that of every other unbiased estimator at every $\theta;$ in particular, it must minimize the sum of the variances $(2)$ for $\theta = i,$ $i+1,$ and $i+2.$ Because $(1)$ makes the $\delta_i$ repeat with period $3$ (apply $(1)$ at $i$ and at $i+1$ and subtract to find $\delta_{i+2}=\delta_{i-1}$), that sum works out to $\delta_{i-1}^2 + \delta_i^2 + \delta_{i+1}^2 + 2.$ It is a straightforward exercise in algebra (or Calculus, using a Lagrange multiplier) to show that minimizing this subject to the constraint $(1)$ requires $\delta_{i-1}=\delta_i=\delta_{i+1},$ and three equal numbers summing to zero must all be $0.$ Since this holds for every $i,$ all the $\delta_i$ vanish.
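In case it helps, here is a sketch of that Lagrange computation (the multiplier $\lambda$ is notation introduced here, not part of the problem): minimize $\delta_{i-1}^2+\delta_i^2+\delta_{i+1}^2+2$ subject to $\delta_{i-1}+\delta_i+\delta_{i+1}=0.$ Setting the partial derivatives of the Lagrangian to zero gives $$\frac{\partial}{\partial \delta_j}\left[\delta_{i-1}^2+\delta_i^2+\delta_{i+1}^2+2 - \lambda\left(\delta_{i-1}+\delta_i+\delta_{i+1}\right)\right] = 2\delta_j - \lambda = 0,\qquad j\in\{i-1,\,i,\,i+1\},$$ so the three $\delta$'s all equal $\lambda/2;$ the constraint then forces $3\lambda/2=0,$ and because the objective is convex this stationary point is indeed the minimum.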
Consequently, if a UMVUE exists, all of its $\delta_i$ equal $0$ and its variance is the constant given by $(2),$ namely $2/3.$
Unfortunately, there are unbiased estimators that achieve smaller variances for specific values of $\theta.$
For instance, suppose you had a strong belief that $\theta=0.$ You might then adjust your estimator to guess $\theta=0$ whenever an outcome consistent with that guess showed up; that is, you would set $t_{-1}=t_0=t_1=0,$ which is equivalent to $\delta_{-1}=1,$ $\delta_0=0,$ and $\delta_1=-1.$ As we remarked earlier, these initial conditions determine $t$ completely through the recursion $(1).$ Its variance when $\theta=0$ is zero, because every possible outcome then leads to the correct guess. You can't do any better than that! Moreover, $0 \ll 2/3$ is a huge improvement. But compensating for that is a larger variance at certain other values of $\theta.$ For instance, since the recursion gives $\delta_2 = \delta_{-1} = 1,$ when $\theta=1$ the possible outcomes are $0,1,2,$ for which $t$ guesses $0,$ $0,$ and $3,$ respectively, for a variance of
$$\frac{1}{3}\left((0-1)^2 + (0-1)^2 + (3-1)^2\right) = 2 \gg \frac{2}{3}.$$
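For anyone who wants to confirm these two numbers, here is a minimal sketch (the names `believe_zero` and `variance` are mine) that evaluates formula $(2)$ for this estimator and for the all-zero-$\delta$ one:

```python
# Illustration only: the "believe theta = 0" estimator, extended by the
# period-3 pattern (delta_0, delta_1, delta_2) = (0, -1, 1) forced by (1).
def believe_zero(x):
    return x + (0, -1, 1)[x % 3]        # t_{-1} = t_0 = t_1 = 0, t_2 = 3, ...

def variance(t, theta):
    """Var(t | theta): average squared error over the outcomes theta-1, theta,
    theta+1 (equal to the variance because these estimators are unbiased)."""
    return sum((t(x) - theta) ** 2 for x in (theta - 1, theta, theta + 1)) / 3

print(variance(believe_zero, 0))        # 0.0   -- perfect when theta = 0
print(variance(believe_zero, 1))        # 2.0   -- much worse than 2/3 here
print(variance(lambda x: x, 0))         # 0.666...: the all-zero-delta estimator
```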
This contradiction (an unbiased estimator with lower variance at some values of $\theta$ than the only possible candidate) shows that no UMVUE exists.
You might enjoy re-interpreting $\delta$ as an unbiased estimator of $0$ ;-).