Well, if there are 100,000 coin tosses, the probability of any specific result such as 50,039 heads is very low, so a low probability of whatever we happen to observe tells us nothing about the null hypothesis. By the way, for continuous random variables the probability of any specific observation is 0; that clearly wouldn't be an appropriate "p-value".
The general logic of hypothesis tests is that we reject the null hypothesis if the result falls in a pre-specified rejection region that has low probability under $H_0$, say 5%. These rejection regions are chosen so that observing an event in the rejection region counts as evidence against the null hypothesis and in favour of the alternative. So if, as in your example, we test $H_0:\ q\le 0.5$ against $H_1:\ q>0.5$ with $q=P(\text{Heads})$, a reasonable rejection region has the form $R_\alpha=\{X\ge c\}$, where $X$ is the random variable giving the observed number of heads (or equivalently the observed proportion), and $c$ is chosen so that $P(R_\alpha)=\alpha$ under the boundary case $q=0.5$ (or $P(R_\alpha)$ just smaller than $\alpha$ if exact equality is impossible due to discreteness). This defines a test with a guaranteed performance characteristic: if $H_0$ is true and the test is applied many times, in the long run it will reject in at most a proportion $\alpha$ of cases.
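To make this concrete, here is a small sketch that finds the critical value $c$ for the coin-toss setting, using a normal approximation to the binomial distribution at the boundary $q=0.5$ (the specific choices $n=100{,}000$ and $\alpha=0.05$ are illustrative):

```python
from math import ceil, sqrt
from statistics import NormalDist  # standard library, Python 3.8+

n, alpha = 100_000, 0.05     # number of tosses and test level (illustrative)
mean = n * 0.5               # E[X] under the boundary case q = 0.5
sd = sqrt(n * 0.5 * 0.5)     # standard deviation of X under q = 0.5

# Smallest integer c with P(X >= c) <= alpha, via the normal approximation:
# P(X >= c) <= alpha  iff  (c - mean) / sd >= z_{1 - alpha}.
z = NormalDist().inv_cdf(1 - alpha)
c = ceil(mean + z * sd)
print(c)  # critical value: the test rejects H0 whenever X >= c
```

An exact computation would use the binomial tail directly (e.g. `scipy.stats.binom`), but for $n$ this large the normal approximation is very close.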
Now the p-value $p$ is the probability of the rejection region that would just barely lead to rejection given our observed data, i.e., $p=P(R_p)$ with $R_p=\{X\ge x\}$, where $x$ is the number of heads we actually observed (again with the probability computed under $q=0.5$). Thus, $p$ is the probability of a proper rejection region, namely of a result as far or farther away from what is expected under $H_0$ as the one observed, rather than just the probability of the specific observation itself.
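For instance, sticking with the numbers from the question ($n=100{,}000$ tosses, $x=50{,}039$ heads), the p-value $P(X\ge x)$ under $q=0.5$ can be sketched via the same normal approximation; the subtraction of $0.5$ is the usual continuity correction:

```python
from math import sqrt
from statistics import NormalDist  # standard library, Python 3.8+

n, x = 100_000, 50_039       # tosses and observed number of heads
mean = n * 0.5               # E[X] under the boundary case q = 0.5
sd = sqrt(n * 0.5 * 0.5)     # standard deviation of X under q = 0.5

# p-value P(X >= x), normal approximation with continuity correction:
p = 1 - NormalDist().cdf((x - 0.5 - mean) / sd)
print(round(p, 3))
```

This illustrates the point from the beginning: the probability of the specific result 50,039 is tiny, yet the p-value, the probability of a result at least that extreme, comes out at about 0.40, nowhere near rejection at $\alpha=0.05$.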