Depending on what other predictors there are in your dataset, and on how they co-occur and interact with this particular predictor in their association with the outcome of interest, "focused data collection" may improve matters, especially if your dataset is so unbalanced in terms of this predictor that some coefficients (or your model's analogue of coefficients) are imprecisely estimated (see Dikran's answer here). Unfortunately, you write that your predictor is balanced, so this increase in parameter precision will likely not make much of a difference; at the very least, it would probably be better to increase the precision of all coefficient estimates through "balanced" data collection.
Or it may not. For instance, assume that the target class is X with probability 0.6 if this predictor value is A or B and 0.9 if the predictor value is C or D, and that there are no other predictors involved. Then you will get the highest accuracy by always outputting a classification of X, completely regardless of the predictor... but the accuracy will only be 60% if the predictor is A or B, and 90% if it is C or D. And no amount of focused data collection will change that. (Related: Why is accuracy not the best measure for assessing classification models?)
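To make this concrete, here is a minimal sketch of the hypothetical example above: the probabilities (0.6 for A/B, 0.9 for C/D) and the balanced predictor are the assumptions stated in the text, and the "classifier" is the trivial one that always outputs X. No matter how many data points we collect, its per-value accuracy stays pinned at those probabilities.

```python
import random

random.seed(0)

# Assumed setup from the example above: the target really is class X with
# probability 0.6 when the predictor is A or B, and 0.9 when it is C or D.
p_x = {"A": 0.6, "B": 0.6, "C": 0.9, "D": 0.9}

def simulate_accuracy(n=100_000):
    """Accuracy of the trivial 'always predict X' classifier, per predictor value."""
    hits = {v: 0 for v in p_x}
    counts = {v: 0 for v in p_x}
    for _ in range(n):
        v = random.choice("ABCD")      # balanced predictor
        y = random.random() < p_x[v]   # True means the target really is X
        counts[v] += 1
        hits[v] += y                   # the prediction is always X, so a hit iff y is X
    return {v: hits[v] / counts[v] for v in p_x}

print(simulate_accuracy())
```

However large `n` gets, the accuracy for A and B hovers around 0.6 and for C and D around 0.9, which is the point: more data of the same kind cannot lift accuracy past the irreducible noise in the outcome.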
If you do so much focused data collection that your training data no longer reflects the underlying population, you may even end up biasing your model. Whether that is good or bad for accuracy is not necessarily clear, again because of the problems with accuracy as a KPI.
Bottom line: we can't tell. However, you could of course try to get a handle on this by simulation. Pretend that you don't have your full dataset, but a much smaller one, obtained by balanced subsampling. Train a model on it and evaluate it. Then simulate this kind of focused data collection by adding back in data points where your predictor is A or B, retrain the model, and check whether accuracy (or whatever better KPI you use) improves. If it helps here, it may also help when extending your original dataset.
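The simulation recipe above could look something like the following sketch. Everything here is an assumption for illustration: a toy population where the class probabilities differ by predictor value, and a deliberately simple "model" that just predicts the majority class per predictor value; with your real data you would subsample your actual dataset and retrain your actual model instead.

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# Assumed ground truth for the toy population: the target is X with
# probability 0.4 for predictor values A/B and 0.9 for C/D, so the best
# per-value prediction differs between the two groups.
p_x = {"A": 0.4, "B": 0.4, "C": 0.9, "D": 0.9}

def draw(n, values="ABCD"):
    """Draw n (predictor, target) pairs from the assumed population."""
    return [(v, random.random() < p_x[v])
            for v in (random.choice(values) for _ in range(n))]

def fit(data):
    """Toy 'training': predict the majority class seen for each predictor value."""
    votes = defaultdict(Counter)
    for v, y in data:
        votes[v][y] += 1
    return {v: c.most_common(1)[0][0] for v, c in votes.items()}

def accuracy(model, test):
    """Fraction of test points where the per-value prediction matches the target."""
    return sum(model.get(v, True) == y for v, y in test) / len(test)

test_set = draw(50_000)             # stands in for your evaluation data
small = draw(200)                   # the small 'pretend' training set
focused = small + draw(200, "AB")   # focused collection: extra A/B points only

print("small:  ", accuracy(fit(small), test_set))
print("focused:", accuracy(fit(focused), test_set))
```

Comparing the two printed accuracies over several random seeds tells you whether the extra A/B points actually move the needle for this kind of model, which is exactly the question you would want answered before spending effort on real focused data collection.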