The question is about the data, not random variables.
Let $X = (x_1, x_2, \ldots, x_n)$ be the data and $Y = (y_1, y_2, \ldots, y_n)$ be additive changes to the data so that the new values are $(x_1+y_1, \ldots, x_n+y_n)$. From
$$\text{Var}(X) = \text{Var}(X+Y) = \text{Var}(X) + 2 \text{Cov}(X,Y) + \text{Var}(Y)$$
we deduce that
$$(*) \quad \text{Var}(Y) + 2 \text{Cov}(X,Y) = 0$$
is necessary for the variance to be unchanged. Add $n-k$ further constraints to zero out all but $k$ of the $y_i$ (there are ${n \choose k}$ ways to do this) and note that, almost everywhere, these $n-k+1$ constraints have linearly independent derivatives. By the Implicit Function Theorem, this defines a manifold of dimension $n - (n-k+1) = k-1$ (plus perhaps a few singular points): those are your degrees of freedom.
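Here is a minimal numerical check of the variance identity above (a Python sketch; the helper names `pvar` and `pcov` are mine, and I use the divide-by-$n$ convention that the formulas below also use):

```python
# Check Var(X+Y) = Var(X) + 2 Cov(X,Y) + Var(Y) for arbitrary data vectors,
# with population (divide-by-n) moments throughout.
import numpy as np

def pvar(a):
    """Population variance: mean squared deviation from the mean."""
    return np.mean((a - a.mean()) ** 2)

def pcov(a, b):
    """Population covariance (divide by n, matching pvar)."""
    return np.mean((a - a.mean()) * (b - b.mean()))

rng = np.random.default_rng(0)
x = rng.normal(size=10)
y = rng.normal(size=10)

lhs = pvar(x + y)
rhs = pvar(x) + 2 * pcov(x, y) + pvar(y)
assert np.isclose(lhs, rhs)  # holds for any x and y
```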
For example, with $X = (2, 3, 4)$ we compute
$$3 \text{Var}(Y) = y_1^2 + y_2^2 + y_3^2 - (y_1+y_2+y_3)^2/3$$
$$3 \text{Cov}(X,Y) = (2 y_1 + 3 y_2 + 4 y_3) - 3(y_1 + y_2 + y_3)$$
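Both formulas can be checked symbolically (a sketch with SymPy, not part of the argument; the variable names are mine). Note in passing that the covariance collapses to $y_3 - y_1$, which is why changing $y_2$ alone cannot work below:

```python
# Symbolic check of the displayed formulas for X = (2, 3, 4),
# using population (divide-by-n) variance and covariance.
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
x, y, n = [2, 3, 4], [y1, y2, y3], 3

xbar = sp.Rational(sum(x), n)
ybar = sum(y) / n
var_y = sum((yi - ybar) ** 2 for yi in y) / n
cov_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / n

claimed_var = y1**2 + y2**2 + y3**2 - (y1 + y2 + y3)**2 / 3
claimed_cov = (2*y1 + 3*y2 + 4*y3) - 3*(y1 + y2 + y3)
assert sp.simplify(3 * var_y - claimed_var) == 0
assert sp.simplify(3 * cov_xy - claimed_cov) == 0
print(sp.expand(3 * cov_xy))  # -y1 + y3: the covariance depends only on the end values
```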
If we set (arbitrarily) $y_2 = y_3 = 0$, the solutions to $(*)$ are $y_1 = 0$ (giving the original data) and $y_1 = 3$ (the posted solution). If instead we require $y_1 = y_3 = 0$, the only solution is $y_2 = 0$: you cannot keep the SD constant by changing $y_2$ alone. Similarly, we can set $y_3 = -3$ while zeroing the other two values. That exhausts the possibilities for $k=1$.
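A quick numeric spot check of these $k=1$ solutions (illustrative Python, not part of the argument):

```python
# Verify that the two nontrivial k = 1 solutions leave Var(2, 3, 4) unchanged.
import numpy as np

def pvar(a):
    return np.mean((np.asarray(a, dtype=float) - np.mean(a)) ** 2)

print(pvar([2, 3, 4]))      # 2/3: the original variance
print(pvar([2 + 3, 3, 4]))  # Y = (3, 0, 0): still 2/3
print(pvar([2, 3, 4 - 3]))  # Y = (0, 0, -3): still 2/3
```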
If we set only $y_3 = 0$ (one of the cases where $k = 2$), then we get the set of solutions
$$y_2^2 - y_1 y_2 + y_1^2 - 3y_1 = 0,$$
which is an ellipse in the $(y_1, y_2)$ plane. Similar solution sets arise for the choices $y_2 = 0$ and $y_1 = 0$.
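To see this one-dimensional solution set concretely, one can solve the quadratic in $y_2$ for each $y_1$ (the discriminant $3y_1(4 - y_1)$ is nonnegative exactly for $0 \le y_1 \le 4$) and confirm the variance is preserved at every point. A sketch:

```python
# Trace the ellipse y2**2 - y1*y2 + y1**2 - 3*y1 = 0 and confirm each point
# leaves the variance of X = (2, 3, 4) unchanged (y3 = 0 throughout).
import numpy as np

def pvar(a):
    return np.mean((a - a.mean()) ** 2)

x = np.array([2.0, 3.0, 4.0])
target = pvar(x)

for y1 in np.linspace(0.0, 4.0, 9):       # discriminant >= 0 on [0, 4]
    disc = 3.0 * y1 * (4.0 - y1)
    for y2 in ((y1 + np.sqrt(disc)) / 2, (y1 - np.sqrt(disc)) / 2):
        y = np.array([y1, y2, 0.0])
        assert np.isclose(pvar(x + y), target)
```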