does that mean if our alternative hypothesis is true we will call our results "Statistically Significant"?
No. Even if the alternative really is the case, we can still fail to reject the null, resulting in a type II error (a false negative).
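To see how a type II error can happen even when the alternative is true, here is a minimal simulation sketch in Python; the effect size, sample size, and 0.05 threshold are assumptions chosen just for illustration, not anything canonical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

alpha = 0.05        # type I error rate (significance threshold)
true_mean = 0.3     # the alternative is true: the population mean is not 0
n, n_sims = 20, 10_000

failures_to_reject = 0
for _ in range(n_sims):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    # One-sample t-test of H0: mean = 0
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value >= alpha:
        failures_to_reject += 1   # type II error: alternative true, null not rejected

print(f"Estimated type II error rate: {failures_to_reject / n_sims:.2f}")
```

With a small sample and a modest effect, a sizeable fraction of runs fail to reach significance even though the null really is false.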
The mathematical definition of significance is straightforward: when our p-value is less than our chosen type I error rate (typically p < 0.05), we call the result statistically significant. Translating this into an inference about the real world is where the trouble usually arises.
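In code, that mechanical definition is nothing more than a comparison against the chosen threshold. A minimal sketch (the data and the choice of a one-sample t-test are purely illustrative assumptions):

```python
from scipy import stats

alpha = 0.05  # the chosen type I error rate
data = [2.1, 1.8, 2.5, 1.9, 2.3, 2.0, 2.4, 1.7]  # hypothetical measurements

# Test H0: population mean = 2.0 against a two-sided alternative
statistic, p_value = stats.ttest_1samp(data, popmean=2.0)

# "Statistically significant" is just this comparison
print(f"p = {p_value:.3f}, significant: {p_value < alpha}")
```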
I like to think of hypothesis tests as a dilemma. You start with an initial set of assumptions about the world (e.g. that the null truly is the case and that your assumptions about the data-generating process really are true). You perform your test and get a p-value. The interpretation of that p-value is similar to what you have in bold: it is the probability of seeing a result at least as extreme as the one you observed, given that the null is true and your modelling assumptions are true (a small simulation of this interpretation follows the two options below). Now for the dilemma. Assuming the p-value is small enough (and that you have some way of deciding what "small enough" means), you have just observed something quite improbable under the null. So you have two choices:
1. Conclude that you have not observed anything which would falsify your initial beliefs about the world, and accept that you have seen something incredibly rare.
2. Conclude that one of your beliefs about the world must have been wrong, because what you observed would be incredibly rare if those beliefs were true.
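Here is the simulation mentioned above: it demonstrates the "probability of a result at least as extreme, given the null" interpretation by generating the test statistic many times in a world where the null and the modelling assumptions hold, then checking how often it is at least as extreme as the one we observed. The data, sample sizes, and normal model are all assumptions made up for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Observed data (hypothetical) and the test statistic we actually computed
observed = rng.normal(loc=0.4, scale=1.0, size=15)
t_obs, _ = stats.ttest_1samp(observed, popmean=0.0)

# Simulate the world in which the null (mean = 0) and our model (normal data)
# are true, recording the test statistic each time
null_stats = []
for _ in range(10_000):
    null_sample = rng.normal(loc=0.0, scale=1.0, size=15)
    t_null, _ = stats.ttest_1samp(null_sample, popmean=0.0)
    null_stats.append(t_null)

# The p-value: how often, under the null, is the statistic at least as extreme as ours?
p_mc = np.mean(np.abs(null_stats) >= abs(t_obs))
print(f"Monte Carlo p-value under the null: {p_mc:.3f}")
```

A small Monte Carlo p-value here means the observed statistic sits far out in the tails of what the null world produces, which is exactly what puts you in the dilemma.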
Oftentimes we opt for the second, and hence we reject the null. In my opinion, that is what statistical significance means. In a phrase,
"statistical significance is the observation of a test statistic which is sufficiently improbable under the null hypothesis, putting us in the dilemma described above in which we opt to conclude our initial beliefs about the world were in fact incorrect".
This isn't a perfect definition, and I'm open to changing it should anyone care to improve it.