If you check the feature-scaling tag, you'll learn about many benefits of scaling, though not all algorithms need it. To answer whether there are any downsides, consider what scaling actually is: both standardization and normalization amount to subtracting something and dividing by something. Let's discuss those two operations.
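As a minimal sketch (with made-up numbers), this is all those two operations do in plain NumPy: standardization subtracts the mean and divides by the standard deviation, min-max normalization subtracts the minimum and divides by the range.

```python
import numpy as np

x = np.array([3.0, 7.0, 11.0, 19.0])  # hypothetical feature values

# Standardization: subtract the mean, divide by the standard deviation
standardized = (x - x.mean()) / x.std()

# Min-max normalization: subtract the minimum, divide by the range
normalized = (x - x.min()) / (x.max() - x.min())

print(standardized)  # roughly zero mean, unit standard deviation
print(normalized)    # values squeezed into [0, 1]
```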
To calculate a feature such as "number of years since 1995", you would subtract 1995 from the current date. You could alternatively create a different feature, "number of years since 1997", and the two would differ only by what you subtracted. If your algorithm broke depending on whether your baseline was 1995 or 1997, there would be something very wrong with it.
The same applies to division. If your algorithm behaved differently depending on whether your variables were in meters vs kilometers, or minutes vs seconds, it wouldn't be something you could use to solve generic problems.
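To make this concrete, here is a small sketch (again with made-up numbers) showing that after standardization both the arbitrary baseline year and the arbitrary unit drop out: the "since 1995" and "since 1997" versions of the feature become identical, and so do the meters and kilometers versions.

```python
import numpy as np

def standardize(v):
    """Subtract the mean, divide by the standard deviation."""
    return (v - v.mean()) / v.std()

years = np.array([2000.0, 2005.0, 2010.0, 2020.0])
since_1995 = years - 1995
since_1997 = years - 1997

# The arbitrary baseline disappears after standardization
print(np.allclose(standardize(since_1995), standardize(since_1997)))  # True

meters = np.array([1200.0, 3400.0, 560.0, 9800.0])
kilometers = meters / 1000

# The arbitrary unit disappears as well
print(np.allclose(standardize(meters), standardize(kilometers)))  # True
```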
A downside of scaling could be worse interpretability (though in some cases it is the other way around), but in general we don't want algorithms to be sensitive to something like the scale of the features.
Finally, keep in mind that there are models that accept only certain kinds of features (e.g. only binary features in vanilla LCA), where you obviously couldn't use scaled features.
That said, you should not mindlessly apply any feature transformation "as a default", even if it is harmless. If you did, sooner or later you would regret it, because it would add unnecessary complication to the code, accidentally introduce bugs, slow things down, or lead to other unanticipated problems. Every such default behavior in software has a history of GitHub issues or e-mails from angry users where, in their specific case, it led to something they didn't want or expect.