I'd like to start by saying I'm not a statistician - I have stats education at the Master's level, but no specialization or advanced work experience.
I'm currently trying to regress financial return data on a set of 4 independent variables (the previous 78 quarters, no data missing), and I'm running into an issue: when I run a multiple regression with all 4 predictors together, the adjusted R² is high, but several of the "key" variables (those that fundamentally should be significant) have p-values above 0.05.
However, when I regress financial returns on each variable individually (simple regressions), every one of them is significant (p < 0.05).
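Here's a rough sketch of both setups (`X1`-`X4` and `returns` are placeholder column names for my actual variables):

```python
import statsmodels.api as sm

# df: pandas DataFrame with columns X1..X4 and "returns" (78 quarterly rows)
predictors = ["X1", "X2", "X3", "X4"]

# Full multiple regression: high adjusted R², but some key p-values > 0.05
X_full = sm.add_constant(df[predictors])
full_model = sm.OLS(df["returns"], X_full).fit()
print(full_model.summary())

# Each predictor alone: all come out significant (p < 0.05)
for col in predictors:
    X_single = sm.add_constant(df[[col]])
    single_model = sm.OLS(df["returns"], X_single).fit()
    print(col, single_model.pvalues[col])
```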
I've looked around on this forum but haven't been able to find an answer as to why this happens, or how I should proceed. I calculated VIFs for each X and they are all < 5, so multicollinearity doesn't seem to be an issue, and a Breusch-Pagan test suggests heteroskedasticity isn't an issue either.
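For reference, this is roughly how I ran those checks (same placeholder names as above):

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

X = sm.add_constant(df[["X1", "X2", "X3", "X4"]])

# VIF for each predictor (skip the constant at index 0) -- all came out < 5
for i, col in enumerate(X.columns[1:], start=1):
    print(col, variance_inflation_factor(X.values, i))

# Breusch-Pagan test on the residuals of the full model
resid = sm.OLS(df["returns"], X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
print("BP p-value:", lm_pvalue)
```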
The purpose of this analysis is to determine importance (which factors have influenced financial returns the most), not to forecast financial returns.
My questions are:
- Should I include the "insignificant" variables in my analysis?
- If yes, how can I determine the "level of importance" of each variable (e.g., X1 represents 20% of the total explanatory power of the model)? One idea I've come across is sketched after this list.
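The idea for the second question is an LMG/Shapley-style decomposition: average each predictor's incremental R² over all entry orders (with only 4 predictors, the 24 orderings are cheap to enumerate). I'm not sure this is the right approach, which is partly why I'm asking. A sketch, again with placeholder names:

```python
from itertools import permutations

import statsmodels.api as sm

def r2(df, cols):
    """R² of returns regressed on the given predictor columns."""
    if not cols:
        return 0.0
    X = sm.add_constant(df[list(cols)])
    return sm.OLS(df["returns"], X).fit().rsquared

def lmg_importance(df, predictors):
    """Average incremental R² of each predictor over all entry orders."""
    shares = {p: 0.0 for p in predictors}
    orders = list(permutations(predictors))
    for order in orders:
        for i, p in enumerate(order):
            shares[p] += r2(df, order[: i + 1]) - r2(df, order[:i])
    return {p: s / len(orders) for p, s in shares.items()}

shares = lmg_importance(df, ["X1", "X2", "X3", "X4"])
total = sum(shares.values())  # the shares sum to the full model's R²
for p, s in shares.items():
    print(f"{p}: {s / total:.1%} of explained variance")
```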
EDIT: Here is an image of the regression results and correlation tables:
Thank you!

The variables show some correlation, and the overall model is significant (very low Significance F, which I believe is the p-value of the F-test). However, from my understanding, as long as the VIF is < 5 for all variables (which it is), multicollinearity shouldn't be impacting the overall regression results.
Any ideas on whether it's safe to proceed with the model as is?
– Andy Jan 04 '23 at 21:02