It's my understanding that sample weights are used to ensure that each observation used to train a machine learning model is given a weight corresponding to its perceived importance/value to the model. We would normally pass these sample weights to the sample_weight argument of an sklearn estimator's fit() method.
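For concreteness, here is a minimal sketch of what I mean (the data and weights are made up purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = np.random.rand(100, 4), np.random.randint(0, 2, 100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hypothetical per-observation weights, e.g. reflecting how much
# we trust or value each training example
train_weights = np.random.rand(len(y_train))

clf = LogisticRegression()
clf.fit(X_train, y_train, sample_weight=train_weights)  # weights affect fitting only
```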
However, if we then use our model to predict on the unseen data of our test set, our sample weights would be irrelevant, as evidenced by the fact that many estimators in the sklearn library have no sample_weight argument in their predict() methods.
So is there ever any point in passing one's sample weights to the sample_weight argument of a scoring function (e.g. precision_score(), recall_score(), etc.) when scoring a model's predictions on a test set? It seems that doing so would be using insight gained unfairly in hindsight to score the model, and would thus give inflated, overly optimistic estimates of its performance, though maybe there is some utility to doing this that I'm not seeing?
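In other words, continuing the sketch above, is something like the following ever justified (test_weights here is a hypothetical set of weights for the held-out rows):

```python
from sklearn.metrics import precision_score, recall_score

y_pred = clf.predict(X_test)  # note: predict() accepts no sample_weight

# Hypothetical weights for the test observations
test_weights = np.random.rand(len(y_test))

# Does weighting the metric like this inflate the scores?
print(precision_score(y_test, y_pred, sample_weight=test_weights))
print(recall_score(y_test, y_pred, sample_weight=test_weights))
```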