I consider a general case with $N$ players, $T$ courses and $R$ repetitions of the game. We consider two random variables:
- the $i$-th player's competence, drawn from $p_i \sim N(\mu_{p_i},\sigma_{p_i}^2)$;
- the $t$-th course difficulty (environmental factor), drawn from $g_t \sim N(\mu_{g_t},\sigma_{g_t}^2)$.
As you describe it, the score $S_{it}$ is the sum of the player's competence and the environmental factor:
$$
S_{it} = p_i + g_t
$$
so that the score is also normally distributed (the sum of two independent Gaussians is again Gaussian), $S_{it} \sim N(\mu_{it}, \sigma_{it}^2)$, with $\mu_{it} = \mu_{p_i}+\mu_{g_t}$ and $\sigma_{it}^2 = \sigma_{p_i}^2 + \sigma_{g_t}^2$.
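To make the setup concrete, here is a minimal simulation sketch of this model; all parameter values, variable names and array shapes below are my own illustrative choices, not part of the model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, R = 4, 3, 200                      # players, courses, repetitions (illustrative)
mu_p  = rng.normal(70.0, 5.0, size=N)    # hypothetical player means mu_{p_i}
sig_p = rng.uniform(1.0, 3.0, size=N)    # hypothetical player std devs sigma_{p_i}
mu_g  = rng.normal(0.0, 4.0, size=T)     # hypothetical course means mu_{g_t}
sig_g = rng.uniform(1.0, 2.0, size=T)    # hypothetical course std devs sigma_{g_t}

# S[i, t, r] = p_i + g_t for repetition r, with fresh independent draws of both factors
p = rng.normal(mu_p[:, None, None], sig_p[:, None, None], size=(N, T, R))
g = rng.normal(mu_g[None, :, None], sig_g[None, :, None], size=(N, T, R))
S = p + g                                # scores S_{it,r}, shape (N, T, R)
```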
Estimating the individual means/variances themselves is not possible, since only the sums $\mu_{p_i}+\mu_{g_t}$ and $\sigma_{p_i}^2+\sigma_{g_t}^2$ enter the observed scores. However, to establish who wins it is enough to look at relative scores! Therefore, we define two relative variables from the data $S_{it}$:
- the relative performance of players $i$ and $j$ in game $t$:
$$
P_{ij}^{(t)} = S_{it} - S_{jt}
$$
- and the relative performance of player $i$ between games $t$ and $s$:
$$
G_{ts}^{(i)} = S_{it} - S_{is}
$$
These variables are again Gaussian:
$$
P_{ij}^{(t)} \sim N\!\left(\Delta \mu_{ij}^{P}, \left(\Delta \sigma_{ij}^{P,(t)}\right)^2\right)\\
G_{ts}^{(i)} \sim N\!\left(\Delta \mu_{ts}^{G}, \left(\Delta \sigma_{ts}^{G,(i)}\right)^2\right)
$$
where the means subtract, $\Delta \mu_{ij}^{P} = \mu_{p_i} - \mu_{p_j}$ and $\Delta \mu_{ts}^{G} = \mu_{g_t} - \mu_{g_s}$, while the variances add, $\left(\Delta \sigma_{ij}^{P,(t)}\right)^2 = \sigma_{it}^2 + \sigma_{jt}^2$ and $\left(\Delta \sigma_{ts}^{G,(i)}\right)^2 = \sigma_{it}^2 + \sigma_{is}^2$.
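Continuing the simulation sketch above, both relative variables can be formed directly from the score array; the array layout is my own choice:

```python
# P[i, j, t, r] = S[i, t, r] - S[j, t, r] : player i vs. player j on course t
P = S[:, None, :, :] - S[None, :, :, :]   # shape (N, N, T, R)

# relative performance of player i between courses t and s,
# stored as G[i, t, s, r] = S[i, t, r] - S[i, s, r]
G = S[:, :, None, :] - S[:, None, :, :]   # shape (N, T, T, R)
```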
The crucial part is that these variables are helpful in finding two types of relevant probabilities:
- the probability that player $i$ gets a better score than player $j$ in game $t$:
$$
Prob(P_{ij}^{(t)}>0) = \frac{1}{2} \left ( 1 - \text{erf}\left ( - \frac{\Delta\mu_{ij}^{P}}{\sqrt{2} \Delta \sigma_{ij}^{P,(t)}} \right ) \right )
$$
- and the probability that player $i$ gets a higher score in game $t$ than in game $s$:
$$
Prob(G_{ts}^{(i)}>0) = \frac{1}{2} \left ( 1 - \text{erf}\left ( - \frac{\Delta\mu_{ts}^{G}}{\sqrt{2} \Delta \sigma_{ts}^{G,(i)}} \right ) \right )
$$
where both probabilities are evaluated readily via the Gaussian CDF (equivalently, the error function).
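As a sketch, the pairwise probability can be computed with `math.erf` (or equivalently `scipy.stats.norm.cdf`); the helper name `prob_positive` is mine, and the example reuses the true parameters from the simulation above:

```python
from math import erf, sqrt

def prob_positive(d_mu, d_sigma):
    """P(X > 0) for X ~ N(d_mu, d_sigma^2), via the error function."""
    return 0.5 * (1.0 - erf(-d_mu / (sqrt(2.0) * d_sigma)))

# Prob(P_{01}^{(0)} > 0): player 0 vs. player 1 on course 0
d_mu    = mu_p[0] - mu_p[1]
d_sigma = np.sqrt((sig_p[0]**2 + sig_g[0]**2) + (sig_p[1]**2 + sig_g[0]**2))
print(prob_positive(d_mu, d_sigma))
```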
Now, since the variables are independent, one can answer many questions, such as the probability that player $i$ beats all remaining competitors in the $t$-th game:
$$
P_{\text{best}}(i,t) = \prod_{j \neq i} Prob(P_{ij}^{(t)}>0)
$$
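A short sketch of that product, again with the true parameters of the simulation above (the function name and signature are mine):

```python
def p_best(i, t, mu_p, sig_p, sig_g):
    """Probability that player i beats every other player on course t
    (product of pairwise win probabilities, assuming independence)."""
    total = 1.0
    for j in range(len(mu_p)):
        if j == i:
            continue
        d_mu    = mu_p[i] - mu_p[j]
        d_sigma = np.sqrt((sig_p[i]**2 + sig_g[t]**2) + (sig_p[j]**2 + sig_g[t]**2))
        total  *= prob_positive(d_mu, d_sigma)
    return total

print(p_best(0, 0, mu_p, sig_p, sig_g))
```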
Finally, to make use of these formulas, we must estimate the mean $\Delta\mu_{ij}^{P}$ and the variance $\left(\Delta \sigma_{ij}^{P,(t)}\right)^2$ (the same follows for the quantities related to $G$). These are readily estimated using a standard statistics argument over the $R$ repetitions of each player-game pair:
$$
\left\langle P_{ij}^{(t)} \right\rangle_{\text{repetitions}} = \Delta\mu_{ij}^{P}
$$
so that the estimator reads
$$
\widehat{\Delta\mu}_{ij}^{P} = \frac{1}{R} \sum_{r=1}^R \left ( S_{it,r} - S_{jt,r} \right )
$$
where $S_{it,r}$ denotes the score of the $i$-th golfer in the $r$-th repetition on course $t$. Similarly, for the variance:
$$
\left\langle \left( P_{ij}^{(t)} - \left\langle P_{ij}^{(t)} \right\rangle_{\text{reps}} \right)^2 \right\rangle = \left\langle \left( P_{ij}^{(t)} \right)^2 \right\rangle - \left\langle P_{ij}^{(t)} \right\rangle^2 = \left(\Delta \sigma_{ij}^{P,(t)}\right)^2
$$
so that
$$
\left(\widehat{\Delta\sigma}_{ij}^{P,(t)}\right)^2 = \frac{1}{R} \sum_{r=1}^R \left ( S_{it,r} - S_{jt,r} \right )^2 - \left(\widehat{\Delta\mu}_{ij}^{P}\right)^2
$$
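Putting the two estimators together on the simulated scores from above (a sketch; in practice you would also check that $R$ is large enough for the plug-in estimates to be stable):

```python
# Sample estimates over the R repetitions, one per (i, j, t) triple
d_mu_hat    = P.mean(axis=-1)                         # estimate of Δμ_{ij}^P
d_var_hat   = (P**2).mean(axis=-1) - d_mu_hat**2      # estimate of (Δσ_{ij}^{P,(t)})^2
d_sigma_hat = np.sqrt(d_var_hat)

# Estimated probability that player 0 beats player 1 on course 0,
# to be compared with the exact value computed from the true parameters
print(prob_positive(d_mu_hat[0, 1, 0], d_sigma_hat[0, 1, 0]))
```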