The likelihood ratio tells you how much better one model is than the other at predicting the observed result. In that sense it measures the relative support the observations give to each model.
There are a few things that make likelihoods awkward to write and think about. First, a likelihood is calculated by taking the data as fixed and the model as variable. That is roughly backwards from many statistical probability exercises, in which the model is fixed and the probabilities of various potential observations are computed. Second, likelihoods are proportional to probabilities but are not equal to them, and they should not be interpreted as probabilities: they do not obey the standard axioms of probability (for example, likelihoods over a range of parameter values need not sum or integrate to one).
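The data-fixed, parameter-varying view can be sketched with a toy example. Here the observation (a hypothetical 7 heads in 10 coin flips, chosen purely for illustration) is held fixed while the heads-probability parameter p varies:

```python
from math import comb

def binomial_likelihood(p, k=7, n=10):
    """Likelihood of heads-probability p, given the fixed
    (hypothetical) observation of k=7 heads in n=10 flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The data (k, n) stay fixed; only the parameter p varies.
for p in (0.3, 0.5, 0.7, 0.9):
    print(f"L(p={p}) = {binomial_likelihood(p):.4f}")
```

Running this shows the likelihood peaking at p = 0.7 (the observed proportion of heads), which is exactly the reversal described above: instead of fixing the model and asking about possible data, we fix the data and ask how well each candidate parameter value accounts for it.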
There is also a confusing aspect to the use of the word "model". In the Wikipedia article on the likelihood ratio, the two models being compared, the null model and the alternative model, can equally well be thought of as a single model with a null and an alternative parameter value. Serious complications arise when comparing the likelihoods of models with differing numbers of parameters (i.e. simple vs. complex models), but comparing the likelihoods of mutually exclusive parameter values within a single model is straightforward.
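Such a within-model comparison can be sketched with toy binomial data (a hypothetical 7 heads in 10 flips), treating p = 0.5 as the null value and p = 0.7 as the alternative:

```python
from math import comb

def binom_lik(p, k=7, n=10):
    # Likelihood of p for the hypothetical data: 7 heads in 10 flips.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood ratio for two mutually exclusive parameter values
# within the same binomial model: null p=0.5 vs alternative p=0.7.
ratio = binom_lik(0.7) / binom_lik(0.5)
print(f"LR = {ratio:.3f}")  # the comb(n, k) factor cancels in the ratio
```

Note that the binomial coefficient cancels when the ratio is taken, which is why the proportionality constant in a likelihood never matters for these comparisons: only ratios of likelihoods carry the relative support.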