This is a follow-up to a prior question about the same setup, which is as follows:
We have two "Methods" ("A" and "B") to diagnose a medical condition. We are not trying to determine which one is better (accuracy, sensitivity and specificity). At this point we just want to know whether they result in a different proportion of positive results ("pos"). Like this:
# Cross-tabulation of the two methods' results on the same individuals
two_by_two <- matrix(c(240, 186, 272, 302), nrow = 2, byrow = TRUE)
dimnames(two_by_two) <- list(c("pos", "neg"), c("pos", "neg"))
names(dimnames(two_by_two)) <- c("Method.A", "Method.B")
two_by_two
        Method.B
Method.A pos neg
     pos 240 186
     neg 272 302
So there are 1,000 individuals, each examined with both "Method.A" and "Method.B", and the table cross-tabulates the positive ("pos") and negative ("neg") results for each individual under each method.
I ran McNemar's test on the data (the call is sketched below).
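For reference, the call was essentially just base R's mcnemar.test on the matrix above, with its default continuity correction:

# tests whether the discordant counts (186 vs 272) are balanced,
# i.e. whether the marginal proportions of "pos" differ between the two methods
mcnemar.test(two_by_two)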
Now what I want to know is whether it makes conceptual sense to fit any regression models for matched or paired observations. For instance, Testing paired frequencies for independence is a great post in which @chl introduces some really interesting code for ordinal quasi-symmetry (OQS); unfortunately, my data are not ordinal in any way. Agresti discusses marginal and conditional models for matched pairs, but again, it seems like my data may be too basic to even contemplate a Poisson GLM (see the sketch below).
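To make the "Poisson GLM" part concrete, here is a minimal sketch of what I was picturing (the object names cells, fit_symm, and fit_sat are just my placeholders, not anything from the linked post): a log-linear symmetry model fit to the four cells of the table, compared against the saturated model. As far as I understand, for a 2x2 table this 1-df comparison is just the likelihood-ratio analogue of McNemar's test, which is partly why I wonder whether such models add anything here.

# The four cells of the paired table, laid out in long format
cells <- data.frame(
  count = c(240, 186, 272, 302),
  A     = c("pos", "pos", "neg", "neg"),  # Method.A result
  B     = c("pos", "neg", "pos", "neg"),  # Method.B result
  # symmetry factor: the two discordant cells (pos/neg and neg/pos) share a level
  symm  = c("both.pos", "discordant", "discordant", "both.neg")
)
fit_symm <- glm(count ~ symm,  family = poisson, data = cells)  # symmetry model
fit_sat  <- glm(count ~ A * B, family = poisson, data = cells)  # saturated model
anova(fit_symm, fit_sat, test = "Chisq")                        # 1-df test of symmetry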
Any ideas? Any conceptual clarification (feel free to keep it binary: yes/no)? Any code, or leads to code?