My dependent variable is reading time, and my two predictors are categorical.
I first fitted an lmer model (which assumes a Gaussian response), and the performance package indicated that the response distribution is better described by a Gamma family with an inverse link.

So I fitted a glmer model with the Gamma family and an inverse link (Model 1 below).

And I fitted a glmer model with the Gamma family and an identity link (Model 2 below).

The formula was identical in both models, but the plot from the Gamma-inverse model looks almost like a mirror image (flipped across the x and y axes) of the Gamma-identity plot. Is this a usual thing?
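For context on why the two links can behave oppositely, here is a minimal base-R sketch (no model fitting, just the two link functions): the inverse link models 1/μ, so anything that raises the mean lowers the linear predictor, while the identity link models μ directly.

```r
# The two Gamma links in question: inverse models 1/mu, identity models mu
g_inv <- Gamma(link = "inverse")
g_id  <- Gamma(link = "identity")

eta <- 2                 # an arbitrary linear-predictor value
g_inv$linkinv(eta)       # mean implied by the inverse link: 1/2
g_id$linkinv(eta)        # mean implied by the identity link: 2
```

So on the link scale the same effect on the mean shows up with opposite signs under the two links; whether that translates into mirrored plots depends on which scale the plots are drawn on.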
UPDATE:
I did not transform my data before running the models, but the RTs are the sum of three reaction times.
The data come from measuring button-press reaction times in seconds: each button press revealed the next word of the sentence to the participant. Following previous experiments, the dependent variable is the sum of the reaction times of the last three words of each sentence (the words after the manipulated word). The purpose is to see in which conditions participants read more slowly and in which conditions they read more quickly after the manipulation. (I personally think a GAM predicting reading times from word position and the predictors would be more interesting here, to see whether the tendency is for reading times to increase or decrease in each condition, but I am only a student.)
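As an aside on the summing: if the three per-word RTs were roughly independent Gamma variables sharing a common scale, their sum would itself be Gamma distributed, which is one reason a Gamma family can remain plausible for the summed variable. A small simulation sketch (the shape and scale values here are invented purely for illustration):

```r
set.seed(1)
# Hypothetical per-word RTs: independent Gammas with a common scale of 0.3 s
rt_sum <- rgamma(1e5, shape = 2, scale = 0.3) +
          rgamma(1e5, shape = 3, scale = 0.3) +
          rgamma(1e5, shape = 4, scale = 0.3)
# The sum behaves like Gamma(shape = 2 + 3 + 4, scale = 0.3): mean 9 * 0.3 = 2.7
mean(rt_sum)
```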
Model 1:

library(lme4)          # glmer, glmerControl
library(interactions)  # cat_plot

minv <- glmer(
  RT ~ 1 + pred1 + pred2 + pred1:pred2 +
    (1 | id) + (1 | stimulus_id) + (1 | order),
  data = df,
  family = Gamma(link = "inverse"),
  control = glmerControl(optimizer = "bobyqa", calc.derivs = TRUE)
)
cat_plot(minv, pred = pred1, modx = pred2, geom = "line",
         line.thickness = 2, interval = FALSE)
Model 2:

mid <- glmer(
  RT ~ 1 + pred1 + pred2 + pred1:pred2 +
    (1 | id) + (1 | stimulus_id) + (1 | order),
  data = df,
  family = Gamma(link = "identity"),
  control = glmerControl(optimizer = "bobyqa", calc.derivs = TRUE)
)
cat_plot(mid, pred = pred1, modx = pred2, geom = "line",
         line.thickness = 2, interval = FALSE)
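To see whether the mirroring could be a link-scale artifact, here is a minimal sketch without random effects (plain glm on simulated data; all variable names are invented): the coefficient for a mean-raising predictor comes out negative under the inverse link but positive under the identity link, while the fitted group means on the response scale agree.

```r
set.seed(42)
n  <- 500
x  <- rep(0:1, each = n / 2)
mu <- 1 + 0.5 * x                          # true mean is higher when x = 1
y  <- rgamma(n, shape = 10, scale = mu / 10)

m_inv <- glm(y ~ x, family = Gamma(link = "inverse"))
m_id  <- glm(y ~ x, family = Gamma(link = "identity"))

coef(m_inv)[["x"]]   # negative: x lowers 1/mu
coef(m_id)[["x"]]    # positive: x raises mu

# On the response scale the two models agree on the group means
tapply(fitted(m_inv), x, mean)
tapply(fitted(m_id), x, mean)
```

If both plots are drawn on the response (RT) scale, the lines should therefore not mirror each other; it may be worth checking which scale cat_plot is using (its outcome.scale argument, if I recall the interactions API correctly).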
