I'll give a more general answer regarding a linear system $Ax = b$, since this is what you ultimately have to solve. I'll preface this by recalling that a problem is well-posed in the sense of Hadamard if a solution exists, the solution is unique, and the solution depends continuously on the data. Once you discretise your specific problem you get a system of linear equations $Ax = b$, so going forward I will consider only systems of linear equations.
Existence of a Solution
A solution does not exist when $b$ is not in the range of $A$. If this arises in practice, one generally needs to study why the problem has no solution and what needs to be done to rectify it, since this is typically a modelling issue.
A standard "remedy" is to opt for the solution that least violates the equation in the $L_2$ norm: $\min_{u}\|Au-b\|^2_2$, which results in the normal equations: $A^TAu = A^Tb$. Note however that in some applications this modification of the problem may not be satisfactory, so you really ought to check why your model results in an empty solution set before applying the above "remedy".
In the conjugate gradients for the normal equations (CGNR) solver, if the initial guess is set to zero, then one actually recovers $u^* = A^{+}b$ (where $A^{+}$ is the Moore-Penrose pseudoinverse of $A$). If the initial guess is different, then one gets $u^* + v$, where $v$ is the component of the initial guess in the kernel/null space of $A$.
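As a small illustration of this (the matrix and right-hand side below are made up purely for the example, with a rank-deficient $A$ so that $b$ is generally not in its range), one can compare CG applied to the normal equations with a zero initial guess against the explicit pseudoinverse solution:

```python
import numpy as np
from scipy.sparse.linalg import cg

# Made-up rank-deficient example: b is generally not in the range of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]               # make A rank-deficient
b = rng.standard_normal(6)

# CGNR: CG applied to the (symmetric positive semi-definite) normal equations
# A^T A u = A^T b.  With a zero initial guess the iterates stay in range(A^T),
# so the limit is the minimum-norm least-squares solution u* = A^+ b.
u_cgnr, info = cg(A.T @ A, A.T @ b, x0=np.zeros(4))

u_pinv = np.linalg.pinv(A) @ b            # Moore-Penrose solution for comparison
print(info, np.linalg.norm(u_cgnr - u_pinv))   # difference is at solver-tolerance level
```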
Uniqueness of the Solution
The problem does not have a unique solution if $A$ (or $A^TA$ in the $\min_u\|Au-b\|^2_2$ case) is not full rank, since if we assume that $u^*$ is a solution, then $u^*+v$ is also a solution, where $v$ is from the kernel/null space of $A$ (or $A^TA$). However, if the initial guess is zero in a conjugate gradients (CG) solver, then I believe that you will recover the minimum-norm solution $u^* = A^{+}b$.
Continuity of the Solution
The problem is a system of linear equations, so the solution changes continuously with the data $A$ and $b$ (provided that a solution exists, but as noted one can modify the problem to $\min_u \|Au-b\|^2_2$ in that case and solve $A^TAu = A^Tb$ instead).
Compatibility Conditions
Now let's go back to the continuous setting, and let your solution domain be $\Omega$, so that the equation is $\Delta u(a) = f(a)$ for $a \in \Omega$. You have imposed a Neumann boundary condition of the form $\partial_v u(a) = n(a), \, a \in \partial\Omega$, where $v$ is the outward normal to the boundary at point $a$. Using the divergence theorem we find that:
\begin{equation}
\int_{\Omega} f = \int_{\Omega} \Delta u = \int_{\Omega} \nabla \cdot \nabla u = \int_{\partial \Omega} \partial_v u = \int_{\partial\Omega} n.
\end{equation}
So the above compatibility condition must be satisfied. One way to achieve this is to change the mean value of $f$ so that it equals $\frac{\int_{\partial\Omega}n}{|\Omega|}$, which can be done in the following manner:
\begin{equation}
g(a) = f(a) - \frac{\int_{\Omega} f}{|\Omega|} + \frac{\int_{\partial \Omega} n}{|\Omega|}
\\\implies \\
\int_{\Omega}g = \int_{\Omega} f - \int_{\Omega}\frac{\int_{\Omega} f}{|\Omega|} + \int_{\Omega}\frac{\int_{\partial \Omega} n}{|\Omega|} = \int_{\Omega} f - |\Omega|\frac{\int_{\Omega} f}{|\Omega|} + |\Omega|\frac{\int_{\partial \Omega} n}{|\Omega|} = \int_{\partial\Omega} n
\end{equation}
Then formulate the new problem:
\begin{equation}
\Delta u (a) = g(a), \, a\in\Omega \\
\partial_v u (a) = n(a), \, a \in \partial\Omega.
\end{equation}
A solution to the above problem exists (it is unique only up to an additive constant), and it will exist in the discrete case too, provided you use a sensible discretisation.
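To make the adjustment concrete, here is a minimal 1D sketch (the grid, the source $f$ and the Neumann data below are arbitrary choices made only for illustration) of shifting $f$ so that the discrete analogue of $\int_\Omega g = \int_{\partial\Omega} n$ holds:

```python
import numpy as np

# Illustrative setup on Omega = (0, 1): uniform grid, arbitrary source f and
# arbitrary Neumann data at the two boundary points.
N = 101
x, h = np.linspace(0.0, 1.0, N, retstep=True)
f = np.exp(-10.0 * (x - 0.3)**2)
n_left, n_right = -0.2, 0.5               # normal derivatives on the boundary

omega_measure = 1.0                       # |Omega|
boundary_int  = n_left + n_right          # 1D analogue of  int_{dOmega} n

def trapz(v):                             # composite trapezoidal rule on the grid
    return h * (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1])

# g = f - (int_Omega f)/|Omega| + (int_{dOmega} n)/|Omega|  -- the shift above
g = f - trapz(f) / omega_measure + boundary_int / omega_measure

print(trapz(g), boundary_int)             # both agree: the compatibility condition holds
```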
Solvers
Once you have taken this compatibility condition into account, which solver will work on your system depends only on your system matrix $A$. Note that while a discretisation of $\Delta$ generally results in a symmetric matrix, once you add the boundary conditions this does not have to be the case any more. If your matrix is diagonally dominant or symmetric positive-definite then Jacobi, Gauss-Seidel, SOR, and CG should all work. If your matrix is neither diagonally dominant nor symmetric positive (semi-)definite then you may have to rely on a solver such as CGNR (which essentially solves $A^TAu = A^Tb$, whose matrix is symmetric positive (semi-)definite). I do not believe that any of those solvers requires uniqueness of the solution; they only require that a solution exists (i.e. $b$ is in the range of $A$, and for CGNR not even this is required). If your matrix is singular and only positive semi-definite (i.e. you have infinitely many solutions), then which solution you end up with depends on the initial guess (I assume that for most solvers an initial guess of zero should lead you to the Moore-Penrose pseudoinverse solution $u^* = A^{+}b$, though I am not sure whether this holds for all of them or only for CG).
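A rough sketch of that decision in code, under assumptions (the small Neumann-style test matrix and right-hand side are placeholders; in practice $A$ and $b$ come from your discretisation): check symmetry and fall back to CGNR when plain CG is not applicable.

```python
import numpy as np
from scipy.sparse.linalg import cg

def solve_linear(A, b):
    """Pick a Krylov solver based on the structure of A (illustrative only)."""
    if np.allclose(A, A.T):
        # Symmetric (and, for a Neumann Laplacian, positive semi-definite):
        # plain CG with a zero initial guess targets the minimum-norm solution.
        return cg(A, b, x0=np.zeros(len(b)))
    # Not symmetric: fall back to CGNR, i.e. CG on the normal equations
    # A^T A u = A^T b, whose matrix is symmetric positive semi-definite.
    return cg(A.T @ A, A.T @ b, x0=np.zeros(len(b)))

# Example: a small symmetric 1D Neumann-style Laplacian (singular, null space
# of constants) with a right-hand side that sums to zero, hence lies in range(A).
N = 5
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
A[0, 0] = A[-1, -1] = 1.0
b = np.array([1.0, -0.5, 0.0, 0.5, -1.0])
u, info = solve_linear(A, b)
print(info, np.linalg.norm(A @ u - b))    # small residual: a solution was found
```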
General Case
In the general case you may have a constraint that is less straightforward to impose than the above. In that case one usually uses a Lagrange multiplier approach. Assume that your constraints can be written as $Bu = n$, and the discretisation of your PDE without boundary conditions is $Au = f$, where $A$ is symmetric positive (semi-)definite. Then you can form the following quadratic programming problem with linear constraints:
\begin{equation}
\min_u \frac{1}{2}u^TAu - f^Tu, \quad\textrm{s.t.}\quad Bu = n.
\end{equation}
One can use a Lagrange multipliers approach to rewrite this in the following manner:
\begin{equation}
\min_u\max_{\lambda}\left(\frac{1}{2}u^TAu - f^Tu + \lambda^T(Bu-n)\right) \\
\implies \\
\begin{bmatrix}
A & B^T \\
B & 0
\end{bmatrix} \begin{bmatrix} u \\ \lambda \end{bmatrix} = \begin{bmatrix} f \\ n\end{bmatrix}.
\end{equation}
The above system is symmetric but indefinite, which suggests using a solver such as SYMMLQ. However, for this to work you must make sure that a solution exists. A solution will not exist if you have conflicting constraints, i.e. if $n$ is not in the range of $B$. Thus, if you suspect that your constraints may be conflicting, you can compute their projection onto the range of $B$: $m = BB^{+}n$. The part $p = B^{+}n$ can be computed in practice with a CGNR solver initialised with the zero vector applied to the system $B^TBp = B^Tn$, after which you compute $m = Bp$. You then plug in $m$ instead of $n$ in the system resulting from the Lagrange multipliers approach. The solution may also fail to exist if $f$ is not in the range of $A$; then you proceed in a similar manner and replace $f$ with $g = AA^{+}f$.
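Below is a minimal sketch of that whole procedure, under assumptions: $A$, $B$, $f$ and $n$ are small dense placeholders invented for illustration, and since SciPy does not ship SYMMLQ the sketch uses MINRES, which also targets symmetric indefinite systems.

```python
import numpy as np
from scipy.sparse.linalg import cg, minres

rng = np.random.default_rng(1)

# Placeholder data (illustration only): A symmetric positive definite,
# B a rank-deficient constraint matrix, so n is generally NOT in range(B).
m_dim, n_dim = 3, 8
M = rng.standard_normal((n_dim, n_dim))
A = M @ M.T + n_dim * np.eye(n_dim)
B = rng.standard_normal((m_dim, n_dim))
B[2] = B[0] + B[1]                        # redundant/conflicting constraints
f = rng.standard_normal(n_dim)
n = rng.standard_normal(m_dim)

# Project n onto range(B):  p = B^+ n via CGNR (CG on B^T B p = B^T n with a
# zero initial guess), then m = B p = B B^+ n.
p, _ = cg(B.T @ B, B.T @ n, x0=np.zeros(n_dim))
m = B @ p

# Assemble and solve the symmetric indefinite KKT system
# [[A, B^T], [B, 0]] [u; lambda] = [f; m]   (consistent after the projection).
K = np.block([[A, B.T], [B, np.zeros((m_dim, m_dim))]])
rhs = np.concatenate([f, m])
sol, info = minres(K, rhs)
u, lam = sol[:n_dim], sol[n_dim:]
print(info, np.linalg.norm(B @ u - m))    # projected constraints hold up to solver tolerance
```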