As in this thread, I have a problem with small probabilities in a likelihood, but I believe that thread does not apply to my case.
The likelihood is a product of probabilities, so the log-likelihood is a sum of log probabilities. In R I use optim to optimize the log-likelihood. For 'bad' choices of the parameters, the likelihood contribution (the probability) of some observation(s) underflows to exactly zero in R, even though it is actually positive by an infinitesimally small amount. Its log is then -Inf, so the sum of the log-likelihood contributions (due to those few observations) is negative infinity. That correctly indicates a bad fit, but optimization is no longer possible.
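A minimal demonstration of the underflow, sketched in Python with SciPy (the data and parameters are made up; in R the analogous fix is `dnorm(..., log = TRUE)`): computing the density first and taking its log gives -Inf, while computing the log-density directly stays finite.

```python
import numpy as np
from scipy import stats

# Hypothetical data; under a 'bad' parameter choice N(0, 1),
# the observation 50 lies extremely far in the tail.
y = np.array([0.0, 1.0, 50.0])
mu, sigma = 0.0, 1.0

# Naive route: density first, then log.
# The true density at 50 is about exp(-1250), far below the
# smallest double (~1e-308), so it underflows to exactly 0.
p = stats.norm.pdf(y, mu, sigma)
naive_loglik = np.sum(np.log(p))      # -inf

# Robust route: compute the log-density directly.
loglik = np.sum(stats.norm.logpdf(y, mu, sigma))  # finite (about -1253.3)
```

Most density functions (in R and SciPy alike) offer a log version for exactly this reason, so the log-likelihood never passes through the underflowing probability.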
What is the standard way of dealing with this problem?
This is discussed at https://stats.stackexchange.com/a/449216/919 (and briefly in the comments beneath that post). One way to conceive of your problem is to express $p(y_i\mid\theta)=\exp(f(y_i\mid\theta))$ for the log probability $f.$ Your case is where $f$ is very negative and you get underflow. By casting your problem in those terms, all the other solutions apply. – whuber Feb 20 '20 at 19:33
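When the likelihood requires summing probabilities rather than multiplying them (e.g. in a mixture model), the same $p = \exp(f)$ framing leads to the log-sum-exp trick: subtract the largest $f$ before exponentiating so nothing underflows. A sketch with made-up log probabilities (SciPy provides this as `scipy.special.logsumexp`):

```python
import numpy as np
from scipy.special import logsumexp

# Hypothetical log probabilities f(y_i | theta), all very negative:
f = np.array([-1000.0, -1001.0, -1002.0])

# Naive: exp(f) underflows to 0, so the log of the sum is -inf.
naive = np.log(np.sum(np.exp(f)))     # -inf

# log-sum-exp: log(sum(exp(f))) = max(f) + log(sum(exp(f - max(f)))),
# which exponentiates only values near 0 and stays finite.
stable = logsumexp(f)                 # about -999.59
```

The identity holds exactly in real arithmetic; shifting by `max(f)` just keeps the exponentials in a representable range.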