You are correct that the non-independence of the pseudo-observations creates a situation where treating them as independent is misleading. That said, not all is lost: we can still provide reasonable goodness-of-fit diagnostics.
First and foremost, if these "pseudo-observations" are now treated as our actual observations, the first step is to show how well the resulting model fits them. The suggestions in the CV.SE thread Diagnostic plots for count regression therefore carry forward.
I would especially focus on showing the relevant rootograms; they are very informative and, in my opinion, quite underused (see Visualizing Count Data Regressions Using Rootograms (2014) by Kleiber & Zeileis for details). Graphical diagnostics (rootograms, leverage plots, etc.) should be well-behaved irrespective of whether we have "pseudo" or "real" observations. A minimal sketch follows.
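To make the rootogram idea concrete, here is a minimal hanging-rootogram sketch in Python, assuming your pseudo-observations `y` and design matrix `X` feed a Poisson GLM (Kleiber & Zeileis demonstrate the same plot with R's countreg package; the simulated data here are purely illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy.stats import poisson

# Hypothetical stand-in data; replace with your pseudo-observations.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3, -0.2])))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = fit.fittedvalues

counts = np.arange(0, y.max() + 1)
observed = np.array([(y == k).sum() for k in counts])
# Expected frequency of each count: sum of P(Y_i = k | mu_i) over observations.
expected = np.array([poisson.pmf(k, mu).sum() for k in counts])

# Hanging rootogram: sqrt(observed) bars hang from the sqrt(expected) curve,
# so gaps at the zero line show where the model over- or under-predicts.
plt.bar(counts, np.sqrt(observed),
        bottom=np.sqrt(expected) - np.sqrt(observed),
        color="lightgrey", edgecolor="black")
plt.plot(counts, np.sqrt(expected), "r-o")
plt.axhline(0, color="black", linewidth=0.8)
plt.xlabel("count"); plt.ylabel("sqrt(frequency)")
plt.show()
```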
Regarding testing procedures: a correlated sample makes it unclear how the degrees of freedom of such a model should be calculated, i.e. what our effective degrees of freedom are given the "association" induced in our data. The same point arises, for example, when dealing with mixed-effects models.
The obvious solution here is to bootstrap at the person level and re-run the analysis, i.e. treat the creation of the "pseudo"-observations as part of the modelling procedure so that it too is subjected to sampling variation. This should give you a reasonable approximation of the null distribution; a sketch follows.
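A person-level (cluster) bootstrap sketch, assuming long-format data `df` with a person identifier column `id`, and a hypothetical helper `make_pseudo_obs(df)` standing in for *your* problem-specific pseudo-observation step (it should return `(y, X)` ready for the count regression):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_on_pseudo(df):
    y, X = make_pseudo_obs(df)  # hypothetical, problem-specific step
    return sm.GLM(y, X, family=sm.families.Poisson()).fit().params

def person_bootstrap(df, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    ids = df["id"].unique()
    draws = []
    for _ in range(n_boot):
        # Resample whole persons with replacement, keeping all their rows,
        # then redo the pseudo-observation construction *inside* the loop
        # so that it is subjected to sampling variation as well.
        sampled = rng.choice(ids, size=len(ids), replace=True)
        boot_df = pd.concat(
            [df[df["id"] == i].assign(id=new_id)  # relabel duplicated persons
             for new_id, i in enumerate(sampled)],
            ignore_index=True,
        )
        draws.append(fit_on_pseudo(boot_df))
    return pd.DataFrame(draws)  # bootstrap distribution of coefficients

# boot = person_bootstrap(df)
# boot.quantile([0.025, 0.975])  # percentile CIs that respect the clustering
```

The key design choice is that the resampling unit is the person, not the pseudo-observation, so the within-person dependence is carried intact into every bootstrap replicate.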
Side-note: One might also be tempted to be (over-)conservative and run all the usual tests, but with the degrees of freedom implied by the pseudo-observation sample divided by two. This would serve as a crude approximation to the effective degrees of freedom; I wouldn't strongly recommend it, but if it still yields reasonable test statistics it can offer an additional tick-mark (a small illustration follows). I would suggest looking more carefully at the literature on the analysis of hierarchical data.
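A crude illustration of the halved-degrees-of-freedom idea, recomputing coefficient p-values against a t reference with the residual degrees of freedom cut in half. It assumes `fit` is the statsmodels GLM result from the first sketch; the factor 2 is the ad-hoc choice discussed above, not a principled effective-DoF estimate:

```python
import numpy as np
from scipy import stats

t_stats = fit.params / fit.bse    # usual Wald statistics
df_resid = fit.df_resid           # nominal residual degrees of freedom
df_conservative = df_resid / 2    # crude effective-DoF approximation

# GLM inference is usually based on z-statistics; using a t reference with
# finite (halved) DoF only makes the comparison more conservative.
p_usual = 2 * stats.t.sf(np.abs(t_stats), df=df_resid)
p_conservative = 2 * stats.t.sf(np.abs(t_stats), df=df_conservative)
# If conclusions survive p_conservative, that is the extra tick-mark; if they
# flip, the clustering likely matters and the bootstrap above is safer.
```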
To recap: Run the usual graphical diagnostics; those should look good. Consider bootstrapping at the appropriate sampling unit to obtain null distributions. If pressed, use a conservative approximation to the model's degrees of freedom.