Pivotal clinical trials are perhaps the canonical example of Neyman-Pearson hypothesis testing, although even there P-values are usually reported rather than just 'critical value exceeded'. That is partly a result of the convenient connection between the P-value and $\alpha$, but there are definitely cases where the magnitude of the P-value was weighed as evidence, so even here the hybrid of the two approaches prevails.
Still, such studies actually prespecify an alternative hypothesis -- the intervention will produce an effect $\Delta$ which is at least some threshold of clinical relevance, or no worse than the current standard, or...
They include sample size justification in the form of power / type II error rate estimation, and they implement strong family-wise type I error (FWER) control over all formal statistical claims that will be tested. Both are mandated by the several regulatory agencies that will authorize drugs for their markets based on these results.
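As a minimal sketch of the power side, here is the standard normal-approximation sample size calculation for a two-arm superiority trial. The effect size, standard deviation, and operating characteristics below are made-up illustrations, not taken from any real trial:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha_one_sided, power):
    """Normal-approximation sample size per arm for a two-arm trial
    testing H0: Delta = 0 against a true effect of size delta.

    n = 2 * sigma^2 * (z_{1-alpha} + z_{1-beta})^2 / delta^2
    """
    z_a = norm.ppf(1 - alpha_one_sided)  # critical value for type I error
    z_b = norm.ppf(power)                # quantile for type II error (1 - beta)
    return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# Illustrative values: detect a half-SD effect at one-sided alpha = 0.025
# with 90% power.
print(n_per_arm(delta=0.5, sigma=1.0, alpha_one_sided=0.025, power=0.90))  # 85
```

This is the textbook z-test approximation; real submissions would account for the chosen test, dropout, and any interim looks, but the structure (prespecified $\Delta$, $\alpha$, and power driving $n$) is the same.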
The fact that FWER is maintained over several tests, including interim looks, means analyses often work primarily on the critical value / $\alpha$ scale, because you no longer have the common 'nice' thresholds of e.g. $\alpha=0.05$. Instead you might use, for example, $z=2.748$ / $\alpha=0.003$ as a stopping boundary in favour of $H_A$ at the first look, followed by $z=2.257$ / $\alpha=0.012$, and so on.
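The two scales are interchangeable via the normal tail probability. A minimal sketch of the conversion, assuming one-sided boundaries (the z-values are illustrative, not from any particular spending function):

```python
from scipy.stats import norm

# Illustrative one-sided stopping boundaries on the z scale.
boundaries_z = [2.748, 2.257]

# z -> alpha: the one-sided tail probability beyond each boundary.
alphas = [norm.sf(z) for z in boundaries_z]

# alpha -> z: the inverse survival function recovers the boundaries.
z_back = [norm.isf(a) for a in alphas]

for z, a in zip(boundaries_z, alphas):
    print(f"z = {z:.3f}  <->  alpha = {a:.3f}")
```

In a real group-sequential design these boundaries would come from a prespecified $\alpha$-spending function (e.g. O'Brien-Fleming or Pocock type), not be chosen freehand; the point here is only the z/$\alpha$ correspondence.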
Taken together, a pivotal clinical study is about the purest example of 'hypothesis testing done right' that I know of. This isn't to advocate for hypothesis testing: there are examples of $H_0: \Delta=0$ being rejected for drugs that turned out to have no meaningful (though possibly non-zero) clinical effect, and likewise the focus on FWER control has killed studies that prespecified $\alpha$ spending but never actually ran the test (you said you would, so the $\alpha$ is 'used up' anyway). The regulations surrounding these studies are driven in no small part by the desire to maintain FWER above all else.