A linear correlation and a monotonic correlation are quite different things.
You should not use the same data both to choose which hypothesis to test and to test it. If you do, the significance level (and hence the p-value) no longer has the properties it's supposed to have. In particular, if you calculate p-values without accounting for the effect of choosing your hypothesis based on the data, the calculated p-values (and the nominal significance level) will tend to be lower than they should be.
For example, imagine you were dealing with a population with a weak or null linear relationship and smallish samples. Some samples will look more strongly linear, some will look less clearly linear, and some will instead suggest a curvilinear but monotonic relationship. If you pull out the subset of samples that are more suggestive of a curved monotonic relationship than a linear one (and test those for monotonic association instead), the most "linear-looking" samples (the ones on which you're going to reject H0) make up a larger fraction of the samples that remain. That is, you will reject a larger fraction of the Pearson tests, both when H0 is true and when it's false. This selection is effectively buying power by pushing up the significance level, but in a way that will typically be hidden when it comes to publication. The same argument applies to the subset of samples you pulled out to treat differently. A small simulation below illustrates the effect.
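Here's a minimal simulation sketch of that selection effect (my own illustration, not anything from the question). It uses Pearson and Spearman as the linear and monotonic tests, and "keep whichever p-value is smaller" as a deliberately crude stand-in for "test whichever hypothesis the sample suggests"; under a true null of independence, the shopped-for test rejects more often than the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12345)
n, n_sim, alpha = 30, 20_000, 0.05

reject_prespecified = 0   # always run the Pearson test chosen in advance
reject_shopped = 0        # run whichever test the data make look better
for _ in range(n_sim):
    # H0 is true here: x and y are independent, so any apparent pattern is noise
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    _, p_linear = stats.pearsonr(x, y)       # test of linear correlation
    _, p_monotonic = stats.spearmanr(x, y)   # test of monotonic correlation
    reject_prespecified += p_linear < alpha
    reject_shopped += min(p_linear, p_monotonic) < alpha

print(f"pre-specified Pearson test : {reject_prespecified / n_sim:.3f}")  # close to 0.05
print(f"test chosen after looking  : {reject_shopped / n_sim:.3f}")       # above 0.05
```

How far above 0.05 the second figure lands depends on the sample size and on how the selection is actually done, but it will not stay at the nominal level.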
For more on this issue, see:
https://en.wikipedia.org/wiki/Testing_hypotheses_suggested_by_the_data
This is in effect a form of p-hacking; one might call it "hypothesis shopping", or a version of HARKing. Its impact may be small in some situations and large in others. I don't know quite how large it would be in this particular situation: the outcomes of the tests you're choosing among will be positively dependent to some extent, which limits the inflation, but how much is not going to be clear unless we specify a number of things. In any case, if you're taking a broadly falsificationist approach to a research question (as seems to be the case here), it's clearly not proper scientific practice to choose which statement you're falsifying after seeing the data you intend to use to falsify it.
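On the "positively dependent" point, here's a companion sketch under the same independence null as above: the Pearson and Spearman p-values computed from the same sample track each other closely, which is why shopping between them buys less than a full extra independent test's worth of false rejections (the exact degree of dependence will vary with the setup):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n, n_sim = 30, 10_000
p_linear = np.empty(n_sim)
p_monotonic = np.empty(n_sim)
for i in range(n_sim):
    # same null as before: independent x and y
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    _, p_linear[i] = stats.pearsonr(x, y)
    _, p_monotonic[i] = stats.spearmanr(x, y)

# strong positive dependence between the two p-values under the null
print(np.corrcoef(p_linear, p_monotonic)[0, 1])
```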
You should instead be choosing your specific research hypothesis before you see data (indeed, hopefully before even collecting it; you should know what question you're trying to research at the planning stage, unless it's an exploratory study, in which case, you shouldn't normally be dealing in p-values).
If a monotonic correlation was actually something you were interested in, why were you considering a linear correlation (or, indeed, vice versa)?
The usual approaches to generating scientific research hypotheses would presumably all apply here.
[Indeed, how were you doing power calculations and hence obtaining a useful sample size if you don't even know your actual hypothesis?]
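As an aside on that bracketed point: a sample-size calculation for a correlation test needs a specific hypothesis and a hypothesised effect size before any data are collected. A minimal sketch using the standard Fisher-z approximation for a two-sided test of H0: rho = 0 (the numbers are purely illustrative assumptions):

```python
import math
from scipy import stats

def n_for_pearson(rho, alpha=0.05, power=0.80):
    """Approximate n for a two-sided test of H0: rho = 0, via Fisher's z."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # critical value of the test
    z_power = stats.norm.ppf(power)           # quantile for the desired power
    return math.ceil(((z_alpha + z_power) / math.atanh(rho)) ** 2 + 3)

# e.g. to detect a hypothesised correlation of 0.3 with 80% power at alpha = 0.05
print(n_for_pearson(0.3))   # roughly 85 observations
```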
If you're not in a position to know which hypothesis you want to test, it seems like you're not really in a position to be doing more than exploratory analysis.
This sort of issue appears to be distressingly common; indeed, over time I have gained the strong impression that some areas of research teach students to use very vague hypotheses as a matter of practice. Whether or not that is an unstated policy, I see questions like this in various places on an almost daily basis. This sort of post hoc (after the data are observed) hypothesis selection may well be a substantive contributor to the reproducibility crisis that affects a number of areas of research.