There are quite a lot of questions that touch on this issue, but none of them seems to set out general principles for deciding when it is a good idea (or a bad idea, or a pointless but harmless idea) to model a variable as both random and fixed.
There is a question which presently has a similar title, but on closer inspection it focuses mostly on one specific situation. There is another question about the case of a binary variable, where the general impression from the comments is that modelling the variable as both random and fixed is always pointless.
`y ~ fixed.factor + (1|fixed.factor:random.factor)`. I don't think it ever makes sense to have `y ~ fixed.factor + (1|fixed.factor) + (1|random.factor)`, and now I am trying to think of general situations / general principles! In the linked question which cites the Barr paper, I don't think it arises. – Robert Long Jul 08 '20 at 05:25
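To make the two specifications in that comment concrete, here is a sketch in lme4-style R syntax. This is illustrative only: the data frame `d` and the factors are hypothetical stand-ins, not anything from the question itself.

```r
library(lme4)

# Sensible: fixed.factor enters as a fixed effect, and its interaction
# with random.factor gives each fixed.factor-by-random.factor combination
# its own random intercept.
m1 <- lmer(y ~ fixed.factor + (1 | fixed.factor:random.factor), data = d)

# Questionable: fixed.factor appears both as a fixed effect and as a
# random-intercept grouping factor in its own right -- the specification
# the comment argues never makes sense.
m2 <- lmer(y ~ fixed.factor + (1 | fixed.factor) + (1 | random.factor), data = d)
```

In `m2` the same factor is asked to supply both fixed-effect level means and random deviations around a grand mean, which is the double role the question is about.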