I have a technical question about Python vectorization. By definition, vectorization is a technique for implementing array operations without explicit for loops, which reduces execution time. But what about precision?

I applied vectorization to scale a relatively large NumPy matrix (800000 x 1000). First I scaled it with vectorized operations; then I scaled the same matrix with a for loop, which of course takes much longer. In other words, I implemented the same mathematical operation in two ways: with and without vectorization. When I use the scaled data for further processing, I obtain different results: the "non-vectorized" dataset gives better results when fed into the model (PCA).
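To make the comparison concrete, here is a minimal sketch of the two approaches. I am assuming standard (z-score) scaling purely for illustration; my actual scaling formula may differ, and the matrix here is a smaller stand-in for the real one:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10_000, 100))  # smaller stand-in for the real 800000 x 1000 matrix

# Vectorized scaling: one pass over the whole matrix
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_vec = (X - mu) / sigma

# Loop-based scaling: the same formula, column by column
X_loop = np.empty_like(X)
for j in range(X.shape[1]):
    col = X[:, j]
    X_loop[:, j] = (col - col.mean()) / col.std()

# The two results are "equal" only up to floating-point rounding:
# the summation order differs, so the last bits can differ.
print(np.allclose(X_vec, X_loop))    # typically True
print(np.abs(X_vec - X_loop).max())  # tiny, but often nonzero
```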
Is that possible?
Could vectorizing a large dataset gain time but lose precision?