I am currently trying to show that the statistic $\sum\limits_{i = 1}^n Y_i^2$ is minimal sufficient for $\mu$, where $Y_1, \dots, Y_n$ is a random sample from $N(\mu,\mu)$ for $\mu > 0$.
The textbook All of Statistics: A Concise Course in Statistical Inference by Larry Wasserman gives the following definitions and theorems:
9.32 Definition. Write $x^n \leftrightarrow y^n$ if $f(x^n; \theta) = cf(y^n; \theta)$ for some constant $c$ that might depend on $x^n$ and $y^n$ but not $\theta$. A statistic $T(x^n)$ is sufficient if $T(x^n) \leftrightarrow T(y^n)$ implies that $x^n \leftrightarrow y^n$.
9.35 Definition. A statistic $T$ is minimal sufficient if (i) it is sufficient; and (ii) it is a function of every other sufficient statistic.
9.36 Theorem. $T$ is minimal sufficient if the following is true: $$T(x^n) = T(y^n) \ \text{if and only if} \ x^n \leftrightarrow y^n.$$
9.40 Theorem (Factorization Theorem). $T$ is sufficient if and only if there are functions $g(t, \theta)$ and $h(x)$ such that $f(x^n; \theta) = g(t(x^n), \theta)h(x^n)$.
I first calculate the likelihood
$$\begin{align} L(\mu; \mathbf{y}) &= \prod_{i = 1}^n L(\mu; y_i) \\ &= \prod_{i = 1}^n \dfrac{1}{\sqrt{2\pi \mu}} \exp{\left\{ -\dfrac{1}{2 \mu}(y_i - \mu)^2 \right\}} \\ &= (2\pi \mu)^{-n/2} \exp{\left\{ -\dfrac{1}{2 \mu} \sum_{i = 1}^n (y_i - \mu)^2 \right\}} \\ &= (2\pi \mu)^{-n/2} \exp{\left\{ -\dfrac{1}{2 \mu} \left( \sum_{i = 1}^n y_i^2 - 2\mu \sum_{i = 1}^n y_i + n\mu^2 \right) \right\}} \\ &= (2\pi \mu)^{-n/2} \exp{\left\{ -\dfrac{1}{2 \mu} \sum_{i = 1}^n y_i^2 + \sum_{i = 1}^n y_i - \dfrac{n\mu}{2} \right\}} \end{align}$$
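To double-check this algebra, here is a quick numerical sketch (the sample values and the grid of $\mu$ values are arbitrary) comparing the closed form above against a direct sum of normal log-densities:

```python
import numpy as np
from scipy.stats import norm

def loglik_direct(y, mu):
    # sum of log N(mu, mu) densities; the scale argument is the standard deviation sqrt(mu)
    return norm.logpdf(y, loc=mu, scale=np.sqrt(mu)).sum()

def loglik_closed(y, mu):
    # -(n/2) log(2*pi*mu) - (1/(2*mu)) * (sum(y^2) - 2*mu*sum(y) + n*mu^2)
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * mu) \
           - (np.sum(y**2) - 2 * mu * np.sum(y) + n * mu**2) / (2 * mu)

y = np.array([0.3, 1.7, 2.2, -0.4])   # arbitrary sample
for mu in [0.5, 1.0, 3.0]:            # arbitrary values of mu > 0
    assert np.isclose(loglik_direct(y, mu), loglik_closed(y, mu))
```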
Using the factorization theorem, I get
$$\begin{align}&T(\mathbf{Y}) = \sum_{i = 1}^n Y_i^2, \\ &g(t, \mu) = (2\pi \mu)^{-n/2} \exp{\left\{ -\dfrac{t}{2 \mu} - \dfrac{n\mu}{2} \right\}},\\ &h(\mathbf{y}) = \exp{\left\{ \sum_{i = 1}^n y_i \right\}}, \end{align}$$
so we get
$$\begin{align} L(\mu; \mathbf{y}) = g(T(\mathbf{y}), \mu) \times h(\mathbf{y}) = (2\pi \mu)^{-n/2} \exp{\left\{ -\dfrac{1}{2 \mu} \sum_{i = 1}^n y_i^2 - \dfrac{n\mu}{2} \right\}} \exp{\left\{ \sum_{i = 1}^n y_i \right\}}, \end{align}$$ where $g$ depends on the data only through $T(\mathbf{y}) = \sum_{i = 1}^n y_i^2$ and $h$ does not involve $\mu$.
I then conclude that the statistic $T(\mathbf{Y}) = \sum_{i = 1}^n Y_i^2$ is sufficient for $\mu$.
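As a sanity check on this factorization, here is a small sketch (arbitrary sample and $\mu$ values) confirming that $g(T(\mathbf{y}), \mu)\, h(\mathbf{y})$ reproduces the likelihood, with `g` depending on the data only through the sum of squares and `h` free of $\mu$:

```python
import numpy as np
from scipy.stats import norm

def lik(y, mu):
    # likelihood of an i.i.d. N(mu, mu) sample
    return np.prod(norm.pdf(y, loc=mu, scale=np.sqrt(mu)))

def g(t, mu, n):
    # depends on the data only through t = sum of squared observations
    return (2 * np.pi * mu) ** (-n / 2) * np.exp(-t / (2 * mu) - n * mu / 2)

def h(y):
    # free of mu
    return np.exp(np.sum(y))

y = np.array([0.3, 1.7, 2.2, -0.4])   # arbitrary sample
for mu in [0.5, 1.0, 3.0]:
    assert np.isclose(lik(y, mu), g(np.sum(y**2), mu, len(y)) * h(y))
```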
For two samples $\mathbf{y}_1$ and $\mathbf{y}_2$, I now have the likelihood ratio
$$\begin{align} \dfrac{L(\mu; \mathbf{y_1})}{L(\mu; \mathbf{y_2})} &= \exp{\left\{ \dfrac{1}{2 \mu} \left( \sum_{i = 1}^n y_{2i}^2 - \sum_{i = 1}^n y_{1i}^2 + 2\mu \left( \sum_{i = 1}^n y_{1i} - \sum_{i = 1}^n y_{2i} \right) \right) \right\}} \\ &= \exp{\left\{ \dfrac{1}{2 \mu} \left( \sum_{i = 1}^n y_{2i}^2 - \sum_{i = 1}^n y_{1i}^2 \right) + \left( \sum_{i = 1}^n y_{1i} - \sum_{i = 1}^n y_{2i} \right) \right\}} \end{align}$$
This likelihood ratio does not depend on the parameter $\mu$ iff $T(\mathbf{y}_1) = \sum_{i = 1}^n y_{1i}^2 = \sum_{i = 1}^n y_{2i}^2 = T(\mathbf{y}_2)$. If $T(\mathbf{y}_1) = T(\mathbf{y}_2)$, the ratio reduces to $$\begin{align} \exp{\left\{ \dfrac{1}{2 \mu} \left( \sum_{i = 1}^n y_{2i}^2 - \sum_{i = 1}^n y_{1i}^2 \right) + \left( \sum_{i = 1}^n y_{1i} - \sum_{i = 1}^n y_{2i} \right) \right\}} = \exp{\left\{ \sum_{i = 1}^n y_{1i} - \sum_{i = 1}^n y_{2i} \right\}}, \end{align}$$ which is free of $\mu$. Conversely, if the ratio does not depend on $\mu$, then the coefficient of $\dfrac{1}{2\mu}$ in the exponent must be zero, which forces $\sum_{i = 1}^n y_{1i}^2 = \sum_{i = 1}^n y_{2i}^2$.
I then conclude, by Theorem 9.36, that the statistic $T(\mathbf{Y}) = \sum_{i = 1}^n Y_i^2$ is minimal sufficient for $\mu$.
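To convince myself numerically, here is a small sketch (the sample values are arbitrary): two samples with the same sum of squares but different sums give a log-likelihood ratio that is constant in $\mu$, while a sample with a different sum of squares does not:

```python
import numpy as np
from scipy.stats import norm

def loglik(y, mu):
    # log-likelihood of an i.i.d. N(mu, mu) sample
    return norm.logpdf(y, loc=mu, scale=np.sqrt(mu)).sum()

# same sum of squares (25), different sums (7 vs 5)
y1 = np.array([3.0, 4.0])
y2 = np.array([5.0, 0.0])
print([loglik(y1, mu) - loglik(y2, mu) for mu in [0.5, 1.0, 2.0, 10.0]])
# constant in mu, equal to sum(y1) - sum(y2) = 2

# different sum of squares (5): the log-ratio now varies with mu
y3 = np.array([1.0, 2.0])
print([loglik(y1, mu) - loglik(y3, mu) for mu in [0.5, 1.0, 2.0, 10.0]])
```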
However, I'm not sure whether I did all of this correctly in accordance with the above definitions and theorems. The problem is that some of them are "if and only if" statements, which require showing the implication in both directions, and I'm not sure I did that correctly. The user microhaus posted this related answer, but I'm not sure whether I've applied it correctly. First, I'm unsure whether my use of the factorization theorem was correct for concluding that the statistic $T(\mathbf{Y}) = \sum_{i = 1}^n Y_i^2$ is sufficient for $\mu$. Second, I'm unsure how one is supposed to show that a statistic is "a function of every other sufficient statistic".
So is my work here correct? Did I use the factorization theorem correctly to conclude that the statistic is sufficient? How does one show that a statistic is "a function of every other sufficient statistic"? If there are any problems here, how can the reasoning be improved?