I ran an analysis using the approach developed by Imai et al. (2021), which allows matching/weighting techniques to be applied to panel data. I had planned to run a sensitivity analysis using the sensemakr package in R to assess how strong unobserved confounding would need to be to nullify or flip my results.
However, in practice I realized that I cannot do this with the PanelMatch package, since its estimates are stored in a PanelEstimate object, which does not work with sensemakr. In fact, I am unsure whether any matching method works with sensemakr, given that you have to specify a benchmark covariate that appears in a regression equation. With matching, covariates are not specified in a regression equation, so I don't think matching estimates are compatible with sensemakr. (I may be wrong here and would appreciate correction if so; however, this is not the direct focus of my question.)
Given this background, my question concerns how far sensitivity analysis results can be generalized across different causal inference methods. I'll attach an image of a plot showing the sensitivity of an estimate generated from the following model (where I am primarily interested in the sensitivity of the pko estimate, and MPpc is the benchmark covariate):
m1 <- lm(lgdppc ~ pko + lpop + ldeaths + wardur + democracy + MPpc, data = data)
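For context, the attached plot comes from a sensemakr call along these lines (a sketch: the kd = 0.5 value and variable names match my model above, but details may differ from my exact script):

```r
# Assumes m1 is the lm fit above; benchmarks the hypothetical
# confounder against MPpc at 0.5x its strength.
library(sensemakr)

sens <- sensemakr(model = m1,
                  treatment = "pko",
                  benchmark_covariates = "MPpc",
                  kd = 0.5)

summary(sens)  # robustness values and bound on the adjusted estimate
plot(sens)     # contour plot of the pko estimate under confounding
```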
These results suggest that an unobserved confounder at least 0.5 times as strong as the effect of MPpc on lgdppc would flip the estimate of pko if included in the model. Here is my question: assuming covariates were dealt with in a manner other than statistical control/regression adjustment (such as matching), can we make a similar claim that unobserved confounding may be an issue regardless of the method employed (assuming that a confounder 0.5 times as strong as MPpc is plausible)?
I understand that an alternative method for accounting for confounders would produce a different estimate, and that the subsequent sensitivity analysis results would change as well. But how much can one generalize sensitivity analysis results across different methods for dealing with confounding? Are the results from this sensitivity analysis (where confounders are accounted for via statistical control) uninformative about unobserved confounding in an analysis that uses matching instead?
