By using an instrumental variable, you are not estimating the coefficients with vanilla ordinary least squares (OLS) linear regression. Consequently, you are no longer guaranteed to achieve a lower sum of squared residuals than you would by predicting the mean value of $y$ every time. When the residual sum of squares exceeds that total sum of squares, the numerator of the fraction below exceeds the denominator, the fraction exceeds one, and the whole formula comes out less than zero.
$$
R^2 = 1 - \left(\dfrac{\sum_{i=1}^{N}\left(y_i - \hat y_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar y\right)^2}\right)
$$
This is a standard way to calculate $R^2$, equal to the squared correlation between predicted and true values in the OLS linear regression case, and equal to the squared correlation between $x$ and $y$ in the simple linear regression case (again, assuming OLS estimation). This is the equation that allows for the "proportion of variance explained" interpretation of $R^2$, too. All of this is to say that such a formula for $R^2$ is totally reasonable and sure seems to be how your software is doing the calculation. (After all, squaring a real correlation between the predictions and true values will not result in a number less than zero, so your software is not squaring a Pearson correlation and must be doing some other calculation.)
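If you want to see that equivalence for yourself, here is a minimal sketch in base R (the simulated data and variable names are mine, purely for illustration, and none of this comes from your software's output):

library(stats)
set.seed(2023)
N <- 10
x <- runif(N)
y <- rnorm(N)
ols <- lm(y ~ x)                                   # vanilla OLS simple linear regression
preds_ols <- predict(ols)
1 - sum((y - preds_ols)^2)/sum((y - mean(y))^2)    # R^2 from the formula above
cor(preds_ols, y)^2                                # squared correlation of predictions with y: same value
cor(x, y)^2                                        # simple regression: also the same value
summary(ols)$r.squared                             # matches the R^2 that lm reports

All four lines print the same number, which is exactly what breaks down once you leave OLS.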
When you do estimation using a method other than OLS, it can happen that the numerator exceeds the denominator. I will demonstrate below using an estimation based on minimizing the sum of absolute residuals, but the idea is the same if you use any other non-OLS estimation technique (such as instrumental variables).
library(quantreg)
set.seed(2023)
N <- 10
x <- runif(N)
y <- rnorm(N)

# Median (quantile) regression: minimizes the sum of absolute residuals, not squared residuals
L <- quantreg::rq(y ~ x, tau = 0.5)
preds <- predict(L)

sum((y - preds)^2)                            # residual sum of squares; I get 7.260747
sum((y - mean(y))^2)                          # total sum of squares; I get 4.731334
1 - sum((y - preds)^2)/sum((y - mean(y))^2)   # consequently, the R^2 is negative: -0.5346087
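The same thing can happen with instrumental variables. Here is a rough sketch using ivreg from the AER package on simulated data (the data-generating process and variable names are mine, just for illustration; whether the result comes out negative depends on the data and the instrument, but nothing forces it to stay nonnegative):

library(AER)
set.seed(2023)
N <- 100
z <- rnorm(N)              # instrument
u <- rnorm(N)              # unobserved confounder
x <- z + u + rnorm(N)      # endogenous regressor
y <- x + u + rnorm(N)      # outcome
iv <- AER::ivreg(y ~ x | z)
preds_iv <- predict(iv)
1 - sum((y - preds_iv)^2)/sum((y - mean(y))^2)   # same formula; not guaranteed to be >= 0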