The singular fit warning is no accident and is fairly common when fitting complicated interaction models (Meteyard & Davies, 2020). It typically arises when a mixed-effects model is given a random-effects structure that the data cannot support. For example, a random effect may have almost no variance in the data, so its estimate collapses to (nearly) zero, or an estimated correlation lands at ±1; the random-effects covariance matrix then becomes singular (its determinant is zero and it cannot be inverted).
As Dimitris already mentioned, it may help to first fit a random-effects structure that only includes random intercepts. These are typically much easier to fit, and when the data do not support a more complicated structure, the simpler model often has more power (Matuschek et al., 2017). From there, you can try an uncorrelated random-effects structure (slopes and intercepts estimated without a correlation) using the || operator between the random effects, such as (Condition || SentNumb) (see Bates et al., 2015 for more details on the syntax of glmer fits); a sketch of this progression is given below. Probably the hardest part for your model to fit will be the three-way interaction in both your fixed and random effects. You may want to consider whether it is even meaningful/interpretable to include, as such terms are often a major contributor to convergence issues (Meteyard & Davies, 2020).
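As a rough illustration of that progression, a minimal lme4 sketch might look like the following. Only Condition and SentNumb come from your description; invRT, Subject, Group, Block, and dat are hypothetical placeholders for your actual response, grouping factors, predictors, and data frame.

```r
library(lme4)

# Step 1: random intercepts only -- usually the easiest structure to fit.
# invRT is the inverse-transformed RT discussed below.
m_int <- lmer(invRT ~ Condition * Group * Block +
                (1 | Subject) + (1 | SentNumb),
              data = dat)

# Step 2: add a random slope for Condition but drop the slope-intercept
# correlation with the double-bar syntax (note: || only fully decorrelates
# numeric predictors; factors may need to be expanded to indicators first).
m_zcp <- lmer(invRT ~ Condition * Group * Block +
                (1 | Subject) + (Condition || SentNumb),
              data = dat)

# Check whether the larger model produced a singular fit before trusting it.
isSingular(m_zcp)
```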
By the way, I have been part of a project that used GLMMs with reaction time data and learned that you should probably not use lmer on raw RTs unless you transform them first, as they are heavily right-skewed (closer to an inverse Gaussian shape than to a normal distribution). There is an excellent paper on this subject (Brysbaert & Stevens, 2018). Basically, it is ideal to transform the data like so before fitting with lmer:
$$
invRT = \frac{-1000}{RT}
$$
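In R this is a single line before the model call (dat and RT are the same placeholder names as above, with RT assumed to be in milliseconds):

```r
# Inverse-transform RT; the minus sign keeps the direction of effects the
# same as for raw RT (larger invRT = slower response).
dat$invRT <- -1000 / dat$RT
```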
Alternatively, you can fit the raw RTs with glmer using the family = inverse.gaussian argument, but that makes interpreting and troubleshooting your model more difficult.
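For completeness, a hedged sketch of that alternative, using the same placeholder names as above. An identity link is shown so the effects stay on the millisecond scale (the family's default link in R is 1/mu^2):

```r
# Model the raw RTs directly with an inverse Gaussian GLMM.
fit_ig <- glmer(RT ~ Condition + (1 | Subject) + (1 | SentNumb),
                data = dat,
                family = inverse.gaussian(link = "identity"))
```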
Side note: for full disclosure, a paper that predates Brysbaert & Stevens (2018) advised against transforming reaction time data for mixed models and recommended fitting GLMMs with glmer instead (Lo & Andrews, 2015). I did not find their arguments as compelling as Brysbaert and Stevens's, so I would read both and make up your own mind; it is cited below in case you want to look into their argument.
Citations
- Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1). https://doi.org/10.18637/jss.v067.i01
- Brysbaert, M., & Stevens, M. (2018). Power analysis and effect size in mixed effects models: A tutorial. Journal of Cognition, 1(1), 9. https://doi.org/10.5334/joc.10
- Lo, S., & Andrews, S. (2015). To transform or not to transform: Using generalized linear mixed models to analyse reaction time data. Frontiers in Psychology, 6, 1171. https://doi.org/10.3389/fpsyg.2015.01171
- Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305–315. https://doi.org/10.1016/j.jml.2017.01.001
- Meteyard, L., & Davies, R. A. I. (2020). Best practice guidance for linear mixed-effects models in psychological science. Journal of Memory and Language, 112, 104092. https://doi.org/10.1016/j.jml.2020.104092