If you are a trader and you can get a pseudo-R² of 0.001 in predicting future financial transactions, you're the richest man in the world. If you are a CPU engineer and your design gets a pseudo-R² of 0.999 at telling whether a bit is 0 or 1, you have a useless piece of silicon.
This statement is quoted from the question: *Determine how good an AUC is (Area under the Curve of ROC)*.
Whether a particular value of R² is considered good performance depends on the problem.
Also note that R² is not a goodness-of-fit measure. Its value does not directly tell you whether your model is a good fit. Instead, it tells you how large the noise/randomness is relative to the deterministic part. It is background knowledge about the problem that tells you whether a particular ratio, a particular R², a particular performance, also means the model is a good fit.
Because of the way they are computed, R² and pseudo-R² can never be high for some problems, even for the very best model. They are not measures of goodness of fit that determine whether we are close to the true model of the distribution.
For instance, there can simply be a lot of noise. Cohen's pseudo-R² is a ratio of deviances, $(D_{null}-D_{fitted})/D_{null}$. The value of $D_{fitted}$ does not need to approach zero as the fitted model approaches the perfect model. For binary distributed variables there will always be randomness: we are predicting the population parameters (probabilities), not the binary outcomes themselves.
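A small simulation can make this concrete. Below is a sketch (the coefficient value 0.5 and sample size are arbitrary choices for illustration) where we compute Cohen's pseudo-R² using the *true* probabilities, i.e. the perfect model, and it still comes out far below 1 because the binary outcomes remain random:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-0.5 * x))   # true success probabilities (weak signal)
y = rng.binomial(1, p)           # observed binary outcomes

def deviance(y, p):
    # Binomial deviance: -2 times the log-likelihood
    eps = 1e-12
    return -2 * np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

d_null = deviance(y, np.full(n, y.mean()))  # intercept-only (null) model
d_true = deviance(y, p)                     # the perfect model
pseudo_r2 = (d_null - d_true) / d_null
print(pseudo_r2)  # far below 1, even though the model is exactly right
```

The perfect model cannot drive the deviance to zero because each observation is still a coin flip with probability $p$; the irreducible randomness caps the pseudo-R².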
Instead, goodness of fit can be tested with, for instance, a chi-squared test or G-test (but these require multiple measurements at the same conditions, i.e. the same regressor values).
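As a sketch of such a test (the regressor values, replicate counts, and model probabilities below are invented for illustration), suppose we have replicated binary observations at a handful of distinct conditions and want to check a *specified* model, so no parameters are estimated. A Pearson chi-squared statistic compares the observed success counts with the counts expected under the model:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
# 10 distinct conditions (regressor values), 200 replicate observations each
x = np.linspace(-2, 2, 10)
n_rep = 200
p_model = 1 / (1 + np.exp(-0.5 * x))      # probabilities under the model we test
successes = rng.binomial(n_rep, p_model)  # observed success counts per condition

# Pearson chi-squared: sum of (observed - expected)^2 / variance,
# where for binomial counts the variance is n * p * (1 - p)
expected = n_rep * p_model
x2 = np.sum((successes - expected) ** 2 / (expected * (1 - p_model)))
df = len(x)  # no parameters estimated from the data here
p_value = chi2.sf(x2, df)
print(x2, p_value)
```

If parameters were estimated from the same data (as in a fitted logistic regression), the degrees of freedom would be reduced by the number of estimated parameters. The key requirement is the replication: with only one observation per condition, the per-condition expected counts are too small for the chi-squared approximation.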