I am following this link to find the sentence-level BLEU score:
NLTK: corpus-level bleu vs sentence-level BLEU score
When I run this example:
import nltk
# hypothesis and reference are token lists
hypothesis = ['This', 'is', 'cat']
reference = ['This', 'is', 'a', 'cat']
# sentence_bleu() expects a list of reference sentences
references = [reference]
nltk.translate.bleu_score.sentence_bleu(references, hypothesis)
Output:
8.987727354491445e-155
The output is effectively 0, and I get these warnings:
/home/mac/.local/lib/python3.6/site-packages/nltk/translate/bleu_score.py:516: UserWarning:
The hypothesis contains 0 counts of 3-gram overlaps.
Therefore the BLEU score evaluates to 0, independently of
how many N-gram overlaps of lower order it contains.
Consider using lower n-gram order or use SmoothingFunction()
warnings.warn(_msg)
/home/mac/.local/lib/python3.6/site-packages/nltk/translate/bleu_score.py:516: UserWarning:
The hypothesis contains 0 counts of 4-gram overlaps.
Therefore the BLEU score evaluates to 0, independently of
how many N-gram overlaps of lower order it contains.
Consider using lower n-gram order or use SmoothingFunction()
warnings.warn(_msg)
How can I run the same function for sentences that only have n-gram overlaps for n < 4?
Also, what parameters can I pass to nltk.translate.bleu_score.sentence_bleu()?
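For context, the warning suggests either using a lower n-gram order or SmoothingFunction(). Below is a minimal sketch of what I understand those two options would look like, assuming the weights and smoothing_function parameters behave as documented; I'm not sure this is the intended usage.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ['This', 'is', 'a', 'cat']
hypothesis = ['This', 'is', 'cat']

# Option 1: restrict BLEU to lower-order n-grams via `weights`
# (here unigrams and bigrams only, weighted equally)
score_bigram = sentence_bleu([reference], hypothesis, weights=(0.5, 0.5))

# Option 2: keep the default 4-gram BLEU but apply a smoothing method
smoothie = SmoothingFunction().method1
score_smoothed = sentence_bleu([reference], hypothesis, smoothing_function=smoothie)

print(score_bigram, score_smoothed)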