YES, POSSIBLY
Neural networks have the nice property of being universal approximators of functions. Loosely speaking, this means that a neural network of sufficient size can approximate any reasonably well-behaved function (for instance, a continuous function on a compact set) as closely as the designer wants. A famous proof of this came from George Cybenko in 1989, and variants of the Cybenko Universal Approximation Theorem have been proved since then.
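To make "approximate as closely as you want" concrete, the Cybenko-style statement (for a continuous sigmoidal activation $\sigma$; the notation here is mine) says that for any continuous $f$ on the unit cube $[0,1]^n$ and any tolerance $\varepsilon > 0$, there is a single-hidden-layer network of some finite width $N$ with

$$
\sup_{x \in [0,1]^n} \left| \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(w_j^{\top} x + b_j\right) - f(x) \right| < \varepsilon
$$

for suitable weights $w_j \in \mathbb{R}^n$, biases $b_j \in \mathbb{R}$, and output coefficients $\alpha_j \in \mathbb{R}$.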
Consequently, a neural network can fit weird shapes that you might not know to tell a linear model to look for by explicitly including nonlinear and/or interaction terms. This means the neural network might achieve a much tighter fit to the data, lowering the square loss and, correspondingly, raising the $R^2$, perhaps to $0.9$ or higher.
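As a rough illustration of that point (nothing here comes from your setup: the data-generating process, library, and hyperparameters are all invented for the sketch), here is a comparison of a plain linear model and a small network on data whose true relationship involves an interaction and a squared term:

```python
# Sketch: a linear model with no interaction/nonlinear terms vs. a small neural
# network, on data generated (for illustration only) with exactly such terms.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))
# True relationship: an interaction term plus a squared term plus noise.
y = X[:, 0] * X[:, 1] + X[:, 0] ** 2 + rng.normal(scale=0.3, size=2000)

linear = LinearRegression().fit(X, y)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0).fit(X, y)

print("Linear R^2:", r2_score(y, linear.predict(X)))  # near 0: no purely linear signal to find
print("Network R^2:", r2_score(y, net.predict(X)))    # much higher: the net fits the curvature
```

The linear model is not wrong so much as under-specified: adding $x_1 x_2$ and $x_1^2$ by hand would close most of the gap, but the network finds that shape without being told to look for it.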
What the universal approximation theorems do not comment on is how neural networks perform when they are fit to a finite data set, and overfitting is a real concern. However, if you have plenty of data and expect a very complex relationship between the features and the outcome, a neural network might be a reasonable model.
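A minimal sketch of the usual check for that concern (again with an invented data-generating process): fit on one portion of the data and compare $R^2$ on the held-out portion.

```python
# Sketch: compare in-sample and held-out R^2; a large gap between the two is the
# usual symptom of overfitting. Data-generating process invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(2000, 2))
y = X[:, 0] * X[:, 1] + X[:, 0] ** 2 + rng.normal(scale=0.3, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0).fit(X_train, y_train)

print("Train R^2:", r2_score(y_train, net.predict(X_train)))
print("Test  R^2:", r2_score(y_test, net.predict(X_test)))
```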
Without context, it is difficult to evaluate how strong any measure of performance is, so there is not much to say about your value of $0.6$ beyond noting that your model does better in terms of square loss than a model that always predicts the overall mean.
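For reference, the usual in-sample definition of $R^2$ makes that mean-baseline reading explicit:

$$
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2},
$$

so a model that always predicts the overall mean $\bar{y}$ has $R^2 = 0$, and $R^2 = 0.6$ says your model's sum of squared errors is $60\%$ smaller than that baseline's.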