I wonder what the relationship between a confidence interval and random chance is. Let me elaborate a bit. Say I have the following linear relationship between two variables $X$ and $Y$:
\begin{align} Y&=\alpha+\beta_1 X_1+ \varepsilon. \end{align} Now suppose I estimate $\beta_1 = 10,$ and it is statistically significant at the $5\%$ level. My understanding is that this means there is a $95\%$ probability that the relationship is real and a $5\%$ probability that it arose by chance. More specifically, in my case, it would mean that with every one-unit increase in $X_1,$ my $Y$ increases by $10$ units.
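To make the setup concrete, here is a minimal sketch of the situation described above: I simulate data from the model with a true slope of $10$ (the data, sample size, and noise level are my own illustrative assumptions, not anything from the question), fit the regression, and read off the estimated coefficient, its p-value, and a $95\%$ confidence interval.

```python
import numpy as np
from scipy import stats

# Simulate data from Y = alpha + beta1 * X1 + noise,
# with assumed values alpha = 2, beta1 = 10, noise sd = 5.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
y = 2 + 10 * x1 + rng.normal(scale=5, size=n)

# Fit the simple linear regression.
res = stats.linregress(x1, y)

# res.slope  -> estimate of beta1 (should land near 10)
# res.pvalue -> p-value for H0: beta1 = 0 (two-sided t-test)
# 95% confidence interval: slope +/- t_crit * standard error
t_crit = stats.t.ppf(0.975, df=n - 2)
ci_low = res.slope - t_crit * res.stderr
ci_high = res.slope + t_crit * res.stderr
print(res.slope, res.pvalue, (ci_low, ci_high))
```

Note that the p-value and the confidence interval here answer a question about the *evidence against $\beta_1 = 0$*, not about the probability that the relationship itself is true, which is the distinction my question is really about.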
Question: Can I interpret my result by saying that, given the coefficient is $10,$ my outcome is $5\%$ more than random chance? Does it even work like this?
So, say, if I instead take a significance level of $10\%$ and my coefficient is hypothetically still $10,$ is my result then basically random and not strong enough to draw any meaningful conclusion from?