Yes, it is always mathematically guaranteed that $$\sum_i \min_\theta a_{i, \theta} ≤ \min_\theta \sum_i a_{i, \theta} \tag{1}$$ and that $$\sum_i \max_\theta a_{i, \theta} ≥ \max_\theta \sum_i a_{i, \theta} \tag{2}$$ for any values $a_{i,\theta}$ (including $a_{i, \theta} = f_i(x, \theta)$ or $a_{i, \theta} = g_i(x, \theta)$), at least as long as the minima and maxima are all well defined.
Proof of $(1)$: Let $m_i = \min_\theta a_{i,\theta}$. By definition, we have $m_i ≤ a_{i,\theta}$ for all $\theta$. Thus $\sum_i m_i ≤ \sum_i a_{i,\theta}$ for all $\theta$, and thus $\sum_i \min_\theta a_{i, \theta} = \sum_i m_i ≤ \min_\theta \sum_i a_{i,\theta}$, QED. (Inequality $(2)$ can be proven the same way, just by swapping $\max$ for $\min$ and changing the direction of the inequality signs.)
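To see that both inequalities can be strict, here is a small made-up example with two indices $i \in \{1, 2\}$ and two scenarios $\theta \in \{1, 2\}$: take $a_{1,1} = 1$, $a_{1,2} = 3$, $a_{2,1} = 4$ and $a_{2,2} = 2$. Then $$\sum_i \min_\theta a_{i, \theta} = 1 + 2 = 3 ≤ 5 = \min(1 + 4,\ 3 + 2) = \min_\theta \sum_i a_{i, \theta}$$ and $$\sum_i \max_\theta a_{i, \theta} = 3 + 4 = 7 ≥ 5 = \max(1 + 4,\ 3 + 2) = \max_\theta \sum_i a_{i, \theta},$$ with both inequalities strict, because no single $\theta$ attains the componentwise minima (or maxima) simultaneously.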
Of course, it could happen that this tighter constraint is not satisfiable even if the original robust constraint was (and of course the robust constraint itself might not be satisfiable even if, for each individual $\theta$, there exists an $x$ that satisfies the original uncertain constraint for that particular $\theta$). Also, optimizing with the tighter constraint might yield a worse optimal value than using either of the looser constraints. That's the price you pay for simplifying the constraints.
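As a made-up illustration of that gap, pretend for a moment that the $a_{i,\theta}$ from the example above don't depend on $x$, and suppose the uncertain constraint is $\sum_i a_{i,\theta} ≥ 4$. The robust constraint $\min_\theta \sum_i a_{i,\theta} ≥ 4$ is then satisfied, since both scenario sums equal $5$, but the tightened constraint $\sum_i \min_\theta a_{i,\theta} ≥ 4$ fails, since $1 + 2 = 3 < 4$.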
(As for whether there's an established name for the tightened robust constraint, with the minimum/maximum taken componentwise inside the sum, I cannot say; I'm a mathematician, and mostly unfamiliar with OR-specific terminology. Sorry.)
P.S. In fact, the inequalities $(1)$ and $(2)$ still hold, and can be proven the same way, even if we generalize them by replacing $\min$ and $\max$ with $\inf$ and $\sup$. In this general case it's no longer guaranteed that the infimum or supremum is actually attained at any $\theta$, so the inequality $m_i ≤ a_{i,\theta}$ might be strict for every $\theta$, but the proof still works: it only uses the fact that $m_i$ is a lower bound for the $a_{i,\theta}$, not that the bound is attained.
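For example (again with made-up values), let $\theta$ range over the positive integers and take $a_{1,\theta} = \frac1\theta$ and $a_{2,\theta} = 1 - \frac1\theta$. Then $\inf_\theta a_{1,\theta} = 0$ is not attained at any $\theta$, yet $(1)$ still holds in its $\inf$ form: $$\sum_i \inf_\theta a_{i, \theta} = 0 + 0 = 0 ≤ 1 = \inf_\theta \sum_i a_{i, \theta},$$ because $a_{1,\theta} + a_{2,\theta} = 1$ for every $\theta$.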