You can't ignore the R-squared value. R-squared measures how well your model explains the phenomenon, i.e., how well changes in your independent (input) variables predict the actual values of your dependent (outcome) variable. Statistical significance, by contrast, indicates the degree of confidence that your model is capturing real relationships between variables rather than random chance.
In your particular case, the combination of the two results is telling you that the relationship between those two variables is most likely real but not useful for explanatory purposes. My takeaway would be that you are measuring a real effect, but the relationship is too weak to be of any practical use.
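To make that concrete, here is a minimal sketch (using numpy and statsmodels, with simulated data rather than your actual data) of how a real but tiny effect produces a significant p-value alongside a near-zero R-squared once the sample gets large:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)   # real but very weak effect

res = sm.OLS(y, sm.add_constant(x)).fit()
print(f"slope p-value: {res.pvalues[1]:.4g}")   # typically well below 0.05
print(f"R-squared:     {res.rsquared:.4f}")     # typically around 0.002
```

The slope is estimated precisely enough to be distinguished from zero, yet it explains almost none of the variance in y, which is exactly the "real but not useful" situation.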
It is reasonably likely that you have omitted variables that need to be included in order to properly model the phenomenon being studied. (Note: adding variables simply to improve your R-squared and p-values can present an ethical issue if you're adding or removing variables just to optimize your results. But if you have a legitimate theoretical justification for adding variables, that isn't really a problem in and of itself, though it can introduce other potential issues related to multiple regression that are seemingly beyond the scope of this question.)
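As a rough illustration of the omitted-variable point (hypothetical variable names, simulated data), adding a theoretically justified predictor that was previously left out can move R-squared dramatically:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                         # the omitted variable
y = 0.1 * x1 + 1.0 * x2 + rng.normal(size=n)

partial = sm.OLS(y, sm.add_constant(x1)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(f"R-squared without x2: {partial.rsquared:.3f}")  # near zero
print(f"R-squared with x2:    {full.rsquared:.3f}")     # around 0.5
```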
Bottom line: for a model to be a useful explanatory tool, it needs to be statistically significant and have a high R-squared value.
P.S. In multiple regression, the F-test p-value tests the significance of the model as a whole, while the p-values on the coefficients test the significance of each individual variable. And yes, it is possible to have a significant model with one or more insignificant variables. I ran into that problem in a recent analysis: it turned out that three of my variables were mediators of my fourth variable, and I had to account for that and test for mediation.
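Here is a sketch of that pattern (simulated data; the near-collinear pair stands in for the kind of overlapping predictors a mediator creates, rather than being a test for mediation itself), where the overall F-test is highly significant while the individual coefficient t-tests may not be:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # nearly collinear with x1
y = x1 + x2 + rng.normal(size=n)

res = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(f"model F-test p-value: {res.f_pvalue:.3g}")   # highly significant
print(f"coefficient p-values: {res.pvalues[1:]}")    # each may exceed 0.05
```

The model as a whole clearly explains y, but the two predictors share so much information that neither coefficient can be pinned down individually.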