As Peter Flom said, there are many reasonable points of view on what one should do in this situation, and no single correct answer. However, since you are after concrete advice, I can say what I'd do in a comparable situation. I'm not a statistician, though, but a psychology researcher using statistics, so this is just what I typically do, not necessarily what you should do.
As I understand it, you have a dataset with several dependent variables, and you want to run separate linear models with two categorical predictors and their interaction, where the interaction results are what you are mainly interested in. I think you mentioned in another question that you have 36(?) different outcomes. In a situation like this, I 1) run the interaction contrasts in emmeans without any adjustment, and 2) take all the p-values from these interaction contrasts and run them through the Benjamini-Hochberg correction. In R:
# Edited to add: these would be the p-values from healthy vs. patient
# within each time point, i.e. 5 p-values per outcome variable.
# (p.adjust() handles the ordering internally, so you don't need to sort them.)
pvalues <- c(p1, p2, ..., pn)
adjusted <- p.adjust(pvalues, method = "BH")
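To make step 1 concrete, here is a minimal sketch of how the unadjusted p-values could be collected across all outcomes in one go. The data frame dat and the column names (group for healthy vs. patient, time for the five time points, y1 through y36 for the outcomes) are hypothetical placeholders for your own data:

library(emmeans)

# Hypothetical setup: dat has a two-level factor `group`, a five-level
# factor `time`, and outcome columns y1 ... y36.
outcomes <- paste0("y", 1:36)

pvalues <- unlist(lapply(outcomes, function(y) {
  fit <- lm(reformulate("group * time", response = y), data = dat)
  emm <- emmeans(fit, ~ group | time)           # group means within each time point
  summary(pairs(emm, adjust = "none"))$p.value  # 5 unadjusted p-values per outcome
}))

adjusted <- p.adjust(pvalues, method = "BH")    # one BH correction over all 36 x 5 = 180 p-values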
This p-value adjustment is more lenient than Bonferroni but still offers some "guard" against type I errors when you run a lot of tests. I know there are many reasonable counterarguments to p-value adjustment in general. However, I'm more comfortable with BH than with running 36 models and no correction at all (I'm learning the Bayesian way but am not quite there yet). Generally, I try to avoid situations where I have to run that many models within the same research design, but sometimes that is difficult to avoid.
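To illustrate what "more lenient" means in practice, here is a quick comparison on a small set of made-up p-values; every BH-adjusted value is at most its Bonferroni counterpart:

# Made-up p-values purely for illustration
p <- c(0.001, 0.008, 0.020, 0.035, 0.200)
p.adjust(p, method = "BH")          # approx. 0.005 0.020 0.033 0.044 0.200
p.adjust(p, method = "bonferroni")  # 0.005 0.040 0.100 0.175 1.000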