This is going to be a regrettably subjective answer, but I think the question is reasonable.
The short answer is that there is no clear rule for what to do. At the end of the day, for the purposes of causal inference, the modeler (you) has to make a judgment call on whether the results of causal discovery on the observed data (in the case of PC, the implied conditional independencies) override existing expert knowledge in your domain.
To make this more concrete, let's say that in one sub-step of PC, for variables $X, Y, Z$, you eliminate the edge between $X$ and $Y$ because $X \perp Y \mid Z$ holds on your sample, but this contradicts expert knowledge (this is a simpler case than collider vs. confounder, but I think it illustrates the point). Is $X \perp Y \mid Z$ true in general (PC was right)? Or is it a quirk of your sample? How would you answer this question?
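To see how easily a single finite-sample test can flip a decision like this, here is a minimal sketch (not PC itself): Fisher's z test for $X \perp Y \mid Z$ via partial correlation, the standard CI test PC uses under linear-Gaussian assumptions, run on repeated small samples from a world where a weak $X \to Y$ edge truly exists. All effect sizes and the sample size are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fisher_z_indep(x, y, z, alpha=0.05):
    """Fisher's z test of X independent of Y given Z, via partial correlation."""
    zc = np.column_stack([np.ones_like(z), z])
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]  # residualize x on z
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]  # residualize y on z
    r = np.corrcoef(rx, ry)[0, 1]
    z_stat = np.sqrt(len(x) - 4) * np.arctanh(r)  # sqrt(n - |Z| - 3), |Z| = 1
    p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return p_value > alpha  # True => "independent" => PC removes the X-Y edge

n, trials, removed = 100, 1000, 0
for _ in range(trials):
    z = rng.normal(size=n)
    x = z + rng.normal(size=n)
    y = z + 0.15 * x + rng.normal(size=n)  # a weak X -> Y edge truly exists
    removed += fisher_z_indep(x, y, z)

print(f"true edge wrongly removed in {removed / trials:.0%} of samples")
```

With these made-up numbers, the weak-but-real edge gets removed in a substantial fraction of samples. The point is only that "PC dropped the edge" is a statement about one sample, not necessarily about the world.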
I'd be very skeptical of any blanket rule of thumb for which one to prefer, and don't have much to offer here beyond "think carefully" and "consult with domain experts." I'm not aware of any such rule, though it's possible my knowledge is incomplete here.
I'm not sure how (without expert knowledge) one can algorithmically distinguish bias from your particular sample (i.e., bad finite-sample luck) from bias due to a misspecified DAG (i.e., incorrect assumptions about the world), because (philosophically) the DAG itself encodes assumptions. That is, at some point, to perform causal inference, one needs to choose a set of assumptions about the world. In my opinion, this is one of the distinguishing factors between causal inference and predictive modeling (e.g., supervised machine learning).
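To illustrate why data alone can't always settle the structural question, here is a small simulation (with invented coefficients) of two Markov-equivalent structures: a fork $X \leftarrow Z \rightarrow Y$ and a chain $X \rightarrow Z \rightarrow Y$. Both imply $X \not\perp Y$ marginally and $X \perp Y \mid Z$, so they leave the same fingerprint in any CI test, at any sample size.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # large n, so finite-sample noise is not the issue here

def partial_corr(x, y, z):
    """Correlation of x and y after linearly adjusting both for z."""
    zc = np.column_stack([np.ones_like(z), z])
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Fork (Z is a confounder): X <- Z -> Y
z_f = rng.normal(size=n)
x_f = z_f + rng.normal(size=n)
y_f = z_f + rng.normal(size=n)

# Chain (Z is a mediator): X -> Z -> Y
x_c = rng.normal(size=n)
z_c = x_c + rng.normal(size=n)
y_c = z_c + rng.normal(size=n)

for name, (x, y, z) in [("fork ", (x_f, y_f, z_f)), ("chain", (x_c, y_c, z_c))]:
    print(name,
          f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:+.2f},",
          f"pcorr(X, Y | Z) = {partial_corr(x, y, z):+.2f}")
```

The magnitudes differ, but a CI test only sees the zero/nonzero pattern, which is identical under both structures; the choice between them has to come from assumptions, not from the data.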
I think this touches on your main questions (i.e., what do I include? Do I treat $X$ as a confounder vs. a collider?), but let me know if I'm missing something.
In practice, it's probably impossible to control for literally all confounders; you're unlikely to eliminate all unobserved confounding and collider bias. This is no reason to panic, because by choosing a DAG, you make the assumptions about the data-generating process transparent. If you choose to believe the DAG generated by PC over the DAG implied by expert knowledge (or vice versa), then under each DAG you can check whether the backdoor criterion holds and build a model for whatever causal estimand you care about. I don't think it hurts to fit a model under each DAG as a robustness check (a sketch of this is below), but questions of analytic strategy are probably best left to your collaborators.
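To make that robustness check concrete, here is a hedged sketch with invented variables and coefficients: under a DAG where $Z$ confounds $X \to Y$, the backdoor adjustment set is $\{Z\}$; under a DAG where $Z$ is a collider, the adjustment set is empty. Fitting the effect of $X$ on $Y$ under each set shows how much the estimate hinges on the structural assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Invented data-generating process in which Z truly *is* a confounder,
# so the "adjust for Z" DAG happens to be the right one here.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.5 * x + 0.8 * z + rng.normal(size=n)  # true effect of X on Y is 0.5

def coef_on_x(y, covariates):
    """OLS of y on an intercept plus covariates; returns the coefficient on
    the first covariate, which is assumed to be x."""
    design = np.column_stack([np.ones(len(y))] + covariates)
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    return beta[1]

# Expert DAG: Z confounds X -> Y, backdoor set {Z} => regress y ~ x + z
# PC's DAG:   Z is a collider, empty backdoor set  => regress y ~ x
print("adjusted for Z:", round(coef_on_x(y, [x, z]), 3))  # close to 0.5
print("unadjusted:    ", round(coef_on_x(y, [x]), 3))     # biased upward
```

If the two estimates diverge like this, the confounder-vs-collider question genuinely matters for your estimand; if they roughly agree, the structural dispute may be moot for the analysis at hand.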