I'm working on a case study related to loan approval. My training data has 45 columns, of which 28 are useful.
All the columns in the dataset are int64, and their ranges differ a lot, e.g. 14256 to 168956, 1587 to 3456, 10 to 95, 33456 to 99875, and so on.
Since the columns vary so much from one to another and have very different ranges, will I have to scale every column? Which scaler should I use?
I want to apply XGBoost, Random Forest, logistic regression, SVM, and Naive Bayes to this data.
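Here is a minimal sketch of what I'm currently trying for the scale-sensitive models, using scikit-learn's `StandardScaler` inside a pipeline (the column ranges below mirror my data, but the values themselves are synthetic stand-ins, not my real dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for my data: int64 columns with very different ranges
# (the real dataset has 28 useful columns; these four mimic its ranges).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(14256, 168956, size=500),
    rng.integers(1587, 3456, size=500),
    rng.integers(10, 95, size=500),
    rng.integers(33456, 99875, size=500),
]).astype(np.float64)
y = rng.integers(0, 2, size=500)  # approved / not approved (made up)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Putting the scaler in a pipeline means it is fit only on the
# training split, so the test split does not leak into the scaling.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

Does wrapping the scaler in a pipeline like this make sense for logistic regression and SVM, and can the tree-based models (XGBoost, Random Forest) be run on the raw columns instead?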