Here's how it may happen: the AUC-ROC calculation is based on Sensitivity and Specificity, which measure how well each of the two classes is detected, Positive and Negative respectively:
Sensitivity = True Positive Rate = TPos / (TPos + FNeg)
Specificity = True Negative Rate = TNeg / (TNeg + FPos)
Precision and Recall, on the other hand, are both built around the True Positives:
Precision = Positive Predictive Value = TPos / (TPos + FPos)
Recall (same as Sensitivity) = True Positive Rate = TPos / (TPos + FNeg)
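To make the difference concrete, here is a minimal Python sketch computing all of the above from raw confusion-matrix counts (the counts in the example call are made up purely for illustration):

```python
def confusion_metrics(tpos, fneg, tneg, fpos):
    """The quantities defined above, from raw confusion-matrix counts."""
    return {
        "sensitivity/recall": tpos / (tpos + fneg),  # ignores TNeg and FPos
        "specificity":        tneg / (tneg + fpos),  # ignores TPos and FNeg
        "precision":          tpos / (tpos + fpos),  # ignores TNeg and FNeg
    }

# Hypothetical counts, for illustration only
print(confusion_metrics(tpos=60, fneg=20, tneg=850, fpos=70))
# {'sensitivity/recall': 0.75, 'specificity': 0.9239..., 'precision': 0.4615...}
```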
Note that the AUC-ROC metric takes all four values into account (TPos, TNeg, FPos and FNeg).
Precision and Recall, however, use only three of them (TPos, FPos and FNeg); the count of True Negatives never enters the calculation.
This means your second model may have slightly improved its True Positive detection rate while losing considerably more on the True Negative side. Precision and Recall would improve, since they ignore TNeg entirely; but AUC-ROC would suffer, because the small gain in TPos detection is outweighed by the larger loss in TNeg detection.
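If it helps, here is one hand-crafted demonstration of exactly this pattern using scikit-learn. The scores are synthetic and chosen only to exhibit the effect; they are not derived from your models:

```python
from sklearn.metrics import roc_auc_score, precision_score, recall_score

# 4 positives followed by 8 negatives (synthetic labels)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

# Hand-picked scores: model B pushes more positives above the 0.5 threshold,
# but ranks several negatives above most of the positives.
model_a = [0.90, 0.80, 0.45, 0.40,  0.70, 0.65, 0.60, 0.30, 0.20, 0.20, 0.10, 0.10]
model_b = [0.90, 0.80, 0.70, 0.60,  0.95, 0.92, 0.85, 0.82, 0.30, 0.30, 0.30, 0.30]

for name, scores in [("A", model_a), ("B", model_b)]:
    y_pred = [int(s >= 0.5) for s in scores]  # classify at threshold 0.5
    print(name,
          precision_score(y_true, y_pred),
          recall_score(y_true, y_pred),
          roc_auc_score(y_true, scores))
# A 0.4 0.5 0.8125
# B 0.5 1.0 0.5625
```

At the 0.5 threshold, model B has better Precision (0.5 vs 0.4) and Recall (1.0 vs 0.5), but its Specificity drops from 5/8 to 4/8 and its overall ranking of negatives is far worse, which is exactly what AUC-ROC penalizes.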
The fact that your dataset is imbalanced may amplify this effect even further.