(Throughout, I assume the labels are $0$ and $1$, not $\pm 1$.)
Let's look at what $R^2$ means in the setting where we use a linear regression with an intercept. While there are many equivalent definitions in this setting, the one I find to apply most generally compares the performance of our model to the performance of a baseline model that has only an intercept and therefore always predicts the pooled mean of $y$.
$$
R^2 = 1 - \dfrac{
\sum_{i=1}^n\left(
y_i - \hat y_i
\right)^2
}{
\sum_{i=1}^n\left(
y_i - \bar y
\right)^2
}
= 1 - \dfrac{
\sum_{i=1}^n\left(
y_i - \hat y_i
\right)^2
}{
\sum_{i=1}^n\left(
y_i - y_{baseline}
\right)^2
}
$$
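As a quick sanity check, here is a small Python sketch (made-up data, using numpy and scikit-learn) showing that the usual $R^2$ is exactly this comparison against the always-predict-$\bar y$ baseline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Made-up data for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1.5 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=100)

model = LinearRegression().fit(X, y)   # includes an intercept
y_hat = model.predict(X)

# Compare to the baseline that always predicts the pooled mean of y
sse_model    = np.sum((y - y_hat) ** 2)
sse_baseline = np.sum((y - y.mean()) ** 2)
r2_manual = 1 - sse_model / sse_baseline

print(r2_manual, r2_score(y, y_hat))   # the two should agree
```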
When we assume a Gaussian conditional distribution, the numerator is, up to an additive constant and a positive scaling that do not depend on the model's predictions, the negative log-likelihood of our model, and the denominator is the corresponding negative log-likelihood of that baseline model.
$$
\begin{aligned}
R^2 &= 1-\dfrac{-\text{NLL}(\text{model})}{-\text{NLL}(\text{baseline})}
=1-\dfrac{\text{NLL}(\text{model})}{\text{NLL}(\text{baseline})}
= 1 - \dfrac{\sum_{i=1}^n\left(y_i - \hat y_i\right)^2}{\sum_{i=1}^n\left(y_i - y_{baseline}\right)^2}\\
&= 1 - \dfrac{\sum_{i=1}^n\left(y_i - \hat y_i\right)^2}{\sum_{i=1}^n\left(y_i - 0\right)^2}\\
&= 1 - \dfrac{\sum_{i=1}^n\left(y_i - \hat y_i\right)^2}{\sum_{i=1}^n y_i^2}
\end{aligned}
$$
In the last two steps, we used the baseline appropriate to a regression with no intercept: once we get rid of the intercept and also set the remaining coefficients to zero, the baseline model always predicts zero, so $y_{baseline}=0$. Either way, the structure is the same:
$$
R^2 = 1-\dfrac{\text{NLL}(\text{model})}{\text{NLL}(\text{baseline})}
$$
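Here is the corresponding no-intercept computation in Python (again made-up data; I compute the zero-baseline $R^2$ by hand, since `sklearn.metrics.r2_score` always uses the $\bar y$ baseline, while, if I remember correctly, R's `summary.lm` and statsmodels' `OLS` switch to this uncentered version when the intercept is dropped):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data with no true intercept
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=100)

model = LinearRegression(fit_intercept=False).fit(X, y)
y_hat = model.predict(X)

# Baseline for the no-intercept model: all coefficients zero, i.e. always predict 0
sse_model    = np.sum((y - y_hat) ** 2)
sse_baseline = np.sum(y ** 2)            # sum of (y_i - 0)^2
r2_no_intercept = 1 - sse_model / sse_baseline
print(r2_no_intercept)
```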
Now let's turn to logistic regression. The likelihood is different, but we can still compare the negative log-likelihood of our model to the negative log-likelihood of a baseline model that always predicts a log-odds of zero, equivalent to always predicting a probability of $0.5$. The negative log-likelihood, scaled by $1/n$, is the log loss:
$$
-\dfrac{1}{n}\sum_{i=1}^n\bigg[
y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i)
\bigg]
$$
Consequently, a likelihood-based $R^2$ (akin to McFadden's $R^2$ for a model with an intercept) for a no-intercept logistic regression would be:
$$
\begin{aligned}
R^2_{\text{likelihood-based}}
&=1-\dfrac{-\dfrac{1}{n}\sum_{i=1}^n\bigg[y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i)\bigg]}{-\dfrac{1}{n}\sum_{i=1}^n\bigg[y_i\log(0.5) + (1-y_i)\log(0.5)\bigg]}\\
&=1-\dfrac{\sum_{i=1}^n\bigg[y_i\log(\hat y_i) + (1-y_i)\log(1-\hat y_i)\bigg]}{\sum_{i=1}^n\bigg[y_i\log(0.5) + (1-y_i)\log(0.5)\bigg]}
\end{aligned}
$$
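In Python, with made-up data, this might look like the following (I fit with a very large `C` so that scikit-learn's default ridge penalty is effectively turned off; `log_loss` is the mean negative log-likelihood, and the $1/n$ cancels in the ratio):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Made-up data for illustration
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
p_true = 1 / (1 + np.exp(-(X @ np.array([1.0, -2.0]))))
y = rng.binomial(1, p_true)

# No-intercept logistic regression, essentially unpenalized
model = LogisticRegression(fit_intercept=False, C=1e6).fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

nll_model    = log_loss(y, p_hat)                      # mean log loss of the model
nll_baseline = log_loss(y, np.full_like(p_hat, 0.5))   # always predict 0.5: log(2)
r2_likelihood = 1 - nll_model / nll_baseline
print(r2_likelihood)
```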
Alternatively, we can use the regular $R^2$ formula with $y_{baseline}=0.5$. This is equivalent to evaluating the model with the Brier score instead of the likelihood.
$$
R^2_{\text{Brier-based}}= 1 - \dfrac{
\sum_{i=1}^n\left(
y_i - \hat y_i
\right)^2
}{
\sum_{i=1}^n\left(
y_i - y_{baseline}
\right)^2
}= 1 - \dfrac{
\sum_{i=1}^n\left(
y_i - \hat y_i
\right)^2
}{
\sum_{i=1}^n\left(
y_i - 0.5
\right)^2
}
$$
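The Brier-based version is just as easy to compute by hand (the `brier_r2` helper below is mine, not a standard function; for $0/1$ labels the baseline term $(y_i - 0.5)^2$ is always $0.25$):

```python
import numpy as np

def brier_r2(y, p_hat):
    """Brier-score analogue of R^2 with an always-predict-0.5 baseline."""
    bs_model    = np.mean((y - p_hat) ** 2)   # Brier score of the model
    bs_baseline = np.mean((y - 0.5) ** 2)     # equals 0.25 when y is 0/1
    return 1 - bs_model / bs_baseline

# Tiny made-up example
y     = np.array([1, 0, 1, 1, 0])
p_hat = np.array([0.9, 0.2, 0.7, 0.6, 0.3])
print(brier_r2(y, p_hat))   # 1 - 0.078/0.25 = 0.688
```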
EDIT
I don't agree with everything on this page by UCLA, but it is a good reference for $R^2$-style metrics for logistic regression. In particular, I dislike considering classification accuracy ("Count" on that page) to be an $R^2$-style metric, since it makes no comparison to a baseline value. The final metric on that page, adjusted count, does make a comparison to a baseline model, however.