First, find the multivariate Yule-Walker representation
$$
\Gamma_j=\begin{cases}
A\Gamma_1^\top+\Sigma&\text{for }j=0\\
A\Gamma_{j-1}&\text{for }j>0
\end{cases}
$$
We then use this to show that
$$
\text{vec}(\Gamma_0)=(I_{n^2}-A\otimes A)^{-1}\text{vec}(\Sigma)
$$
and
$$
\text{vec}(\Gamma_j)=(I_{n}\otimes A)^{j}\text{vec}(\Gamma_0)
$$
and hence
$$
\text{vec}(\Gamma_j)=(I_{n}\otimes A)^{j}(I_{n^2}- A\otimes A)^{-1}\text{vec}(\Sigma)
$$
The following matrix-algebraic fact is useful: $$\text{vec}(ABC)=(C^\top\otimes A)\text{vec}(B)\qquad(*)$$ for conformable matrices $A,B,C$, where $\text{vec}(\cdot)$ stacks the columns of its argument.
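As a quick sanity check, (*) can be verified numerically. The following is a minimal sketch only: the matrices are arbitrary conformable random matrices (not the VAR quantities), NumPy is assumed available, and column-major flattening implements the column-stacking $\text{vec}$.
\begin{verbatim}
# Numerical check of vec(ABC) = (C' (x) A) vec(B) for arbitrary
# conformable matrices; vec stacks columns (column-major order).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

def vec(M):
    return M.flatten(order="F")   # stack the columns of M

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.allclose(lhs, rhs))      # True
\end{verbatim}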
The result for $j=0$ can be shown via
\begin{eqnarray*}
\Gamma_0 &=& E(\mathbf{X}_t - \boldsymbol{\mu})(\mathbf{X}_t - \boldsymbol{\mu})^\top\\
&=& E(\mathbf{X}_t - \boldsymbol{\mu})( A(\mathbf{X}_{t-1}-\boldsymbol{\mu})+\zeta_t)^\top\\
&=& E(\mathbf{X}_t - \boldsymbol{\mu})((\mathbf{X}_{t-1}-\boldsymbol{\mu})^\top A^\top+\zeta_t^\top)\\
&=& E\left[(\mathbf{X}_t - \boldsymbol{\mu})(\mathbf{X}_{t-1}-\boldsymbol{\mu})^\top\right] A^\top + \underbrace{E\left[(\mathbf{X}_t - \boldsymbol{\mu})\zeta_t^\top\right]}_{=E\left[\sum_{j=0}^\infty A^j\zeta_{t-j}\zeta_t^\top\right]=\Sigma}\\
&=& \Gamma_1 A^\top + \Sigma
\end{eqnarray*}
In the underbrace we use the MA($\infty$) representation $\mathbf{X}_t-\boldsymbol{\mu}=\sum_{j=0}^\infty A^j\zeta_{t-j}$; only the $j=0$ term contributes because $\zeta_t$ is uncorrelated with $\zeta_{t-j}$ for $j\geq1$. Since $\Gamma_1 A^\top=\Gamma_0-\Sigma$ is symmetric, this is the same as the representation $\Gamma_0=A\Gamma_1^\top+\Sigma$ stated above.
For $j>0$,
\begin{eqnarray*}
\Gamma_j &=& E(\mathbf{X}_{t} - \boldsymbol{\mu})(\mathbf{X}_{t-j} - \boldsymbol{\mu})^\top\\
&=& E( A(\mathbf{X}_{t-1}-\boldsymbol{\mu})+\zeta_t)(\mathbf{X}_{t-j} - \boldsymbol{\mu})^\top\\
&=& AE(\mathbf{X}_{t-1}-\boldsymbol{\mu})(\mathbf{X}_{t-j} - \boldsymbol{\mu})^\top+ \underbrace{E(\zeta_t(\mathbf{X}_{t-j} - \boldsymbol{\mu})^\top)}_{=0}\\
&=& A \Gamma_{j-1}
\end{eqnarray*}
The second expectation vanishes because $\mathbf{X}_{t-j}-\boldsymbol{\mu}$ depends only on $\zeta_{t-j},\zeta_{t-j-1},\dots$, which are uncorrelated with $\zeta_t$ for $j>0$.
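Both recursions can be checked by simulation. The following is a minimal sketch, assuming an arbitrary stable $2\times2$ coefficient matrix and innovation covariance (illustrative values, not quantities from the text); the sample autocovariances should approximately satisfy the two relations.
\begin{verbatim}
# Simulation check of Gamma_0 = Gamma_1 A' + Sigma and Gamma_j = A Gamma_{j-1}
# for an arbitrary stable bivariate VAR(1) (illustrative values only).
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])          # eigenvalues inside the unit circle
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
L = np.linalg.cholesky(Sigma)
T, n = 200_000, 2

X = np.zeros((T, n))
for t in range(1, T):
    X[t] = A @ X[t - 1] + L @ rng.standard_normal(n)   # zero-mean VAR(1)

Xc = X - X.mean(axis=0)
def gamma_hat(j):                   # sample version of E[(X_t)(X_{t-j})']
    return (Xc[j:].T @ Xc[:T - j]) / T

G0, G1, G2 = gamma_hat(0), gamma_hat(1), gamma_hat(2)
print(np.round(G0 - (G1 @ A.T + Sigma), 2))   # approx. zero matrix
print(np.round(G1 - A @ G0, 2))               # approx. zero matrix
print(np.round(G2 - A @ G1, 2))               # approx. zero matrix
\end{verbatim}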
Inserting $\Gamma_1= A\Gamma_{0}$ into the result for $\Gamma_{0}$ gives
$$
\Gamma_{0}= A\Gamma_0^\top A^\top+\Sigma= A\Gamma_0 A^\top+\Sigma,
$$
where the second equality uses the symmetry of $\Gamma_0$. Since the vec operator is linear, $\text{vec}(\Gamma_{0})=\text{vec}( A\Gamma_0 A^\top)+\text{vec}(\Sigma)$, and applying (*) with $B=\Gamma_0$ and $C=A^\top$ (so that $C^\top\otimes A=A\otimes A$) gives $\text{vec}(\Gamma_{0})=( A\otimes A)\text{vec}(\Gamma_{0})+\text{vec}(\Sigma)$. Hence,
$$
\text{vec}(\Gamma_{0})-( A\otimes A)\text{vec}(\Gamma_{0})=\text{vec}(\Sigma)
$$
or
$$
\text{vec}(\Gamma_0)=(I_{n^2}- A\otimes A)^{-1}\text{vec}(\Sigma),
$$
where the inverse exists because stationarity requires all eigenvalues of $A$, and hence of $A\otimes A$, to lie strictly inside the unit circle.
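The fixed-point equation $\Gamma_0=A\Gamma_0A^\top+\Sigma$ is a discrete Lyapunov (Stein) equation, so the vec solution can be cross-checked against a standard solver. A minimal sketch, reusing the illustrative $A$ and $\Sigma$ from the simulation above and assuming SciPy is available:
\begin{verbatim}
# The vec formula for Gamma_0 agrees with a discrete Lyapunov solver.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
n = A.shape[0]

vec_Sigma = Sigma.flatten(order="F")     # column-stacking vec
vec_Gamma0 = np.linalg.solve(np.eye(n**2) - np.kron(A, A), vec_Sigma)
Gamma0 = vec_Gamma0.reshape((n, n), order="F")

print(np.allclose(Gamma0, A @ Gamma0 @ A.T + Sigma))           # True
print(np.allclose(Gamma0, solve_discrete_lyapunov(A, Sigma)))  # True
\end{verbatim}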
For $\Gamma_j$, iterating $\Gamma_j = A\Gamma_{j-1}$ gives $\Gamma_j = A^j\Gamma_{0}$. Writing this as $\Gamma_j = A^j\Gamma_{0}I_n$ and applying (*) yields
$$
\text{vec}(\Gamma_j)=(I_{n}\otimes A^j)\text{vec}(\Gamma_0)=(I_{n}\otimes A)^j\text{vec}(\Gamma_0),
$$
where the last equality follows from the mixed-product property of the Kronecker product, $I_n\otimes A^j=(I_n\otimes A)^j$.
The combined expression for $\text{vec}(\Gamma_j)$ then follows by inserting the formula for $\text{vec}(\Gamma_0)$ into this result.
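As a final multivariate check, the two routes to $\Gamma_j$ (iterating $\Gamma_j=A\Gamma_{j-1}$ versus the closed-form vec expression) can be compared numerically; again a sketch with the same illustrative $A$ and $\Sigma$ as above:
\begin{verbatim}
# Gamma_j via A^j Gamma_0 versus via (I (x) A)^j (I - A (x) A)^{-1} vec(Sigma).
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
n, j = A.shape[0], 3

vec_Gamma0 = np.linalg.solve(np.eye(n**2) - np.kron(A, A),
                             Sigma.flatten(order="F"))
Gamma0 = vec_Gamma0.reshape((n, n), order="F")

Gamma_j = np.linalg.matrix_power(A, j) @ Gamma0               # A^j Gamma_0
vec_Gamma_j = np.linalg.matrix_power(np.kron(np.eye(n), A), j) @ vec_Gamma0
print(np.allclose(Gamma_j.flatten(order="F"), vec_Gamma_j))   # True
\end{verbatim}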
In the univariate AR(1) case, with $n=1$, $A=a$ and $\Sigma=\sigma^2$, this boils down to the well-known expression:
$$\text{vec}(\Gamma_j)=(I_{n}\otimes A)^{j}(I_{n^2}- A\otimes A)^{-1}\text{vec}(\Sigma)\Rightarrow\gamma_j=(1\otimes a)^{j}(1- a\otimes a)^{-1}\sigma^2$$
or $$\gamma_j=a^{j}(1- a^2)^{-1}\sigma^2.$$
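The scalar case can also be checked directly; a sketch with arbitrary illustrative values of $a$ and $\sigma^2$:
\begin{verbatim}
# With n = 1 the matrix formula reproduces gamma_j = a^j * sigma^2 / (1 - a^2).
import numpy as np

a, sigma2, j = 0.7, 2.0, 4
A = np.array([[a]])
Sigma = np.array([[sigma2]])

vec_gamma_j = (np.linalg.matrix_power(np.kron(np.eye(1), A), j)
               @ np.linalg.solve(np.eye(1) - np.kron(A, A),
                                 Sigma.flatten(order="F")))
print(float(vec_gamma_j[0]), a**j * sigma2 / (1 - a**2))  # the two numbers agree
\end{verbatim}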