While studying logistic regression, I didn't fully understand its asymptotic behavior, which is described as follows:
Without regularization, the asymptotic nature of logistic regression (the sigmoid never quite reaches 0 and never quite reaches 1) would keep driving the loss towards 0 on all examples without ever getting there, driving the weight for each feature to +infinity or -infinity in high dimensions.
A good example where this holds true:

Imagine that you assign a unique id to each example and map each id to its own feature. If you don't specify a regularization function, the model will become completely overfit.
The above is in reference to the log loss function used by logistic regression:

$$\text{Log Loss} = \sum_{(x,y)\in D} -y\,\log(y') - (1-y)\,\log(1-y')$$
Could anyone help me understand this point with an example, or even just at an intuitive level?
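To check my own understanding, here is a minimal sketch of the unique-id setup (the data, number of examples, and learning rate are all made up for illustration): each example gets its own one-hot id feature, so the feature matrix is the identity and each weight affects exactly one example. Plain gradient descent on log loss, with no regularization:

```python
import numpy as np

# Toy setup: n examples, each with its own unique-id feature,
# so X is the identity matrix and weight w[i] influences only example i.
n = 4
X = np.eye(n)
y = np.array([0.0, 1.0, 0.0, 1.0])
w = np.zeros(n)

lr = 1.0
for _ in range(100_000):
    p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions, always in (0, 1)
    grad = X.T @ (p - y) / n          # gradient of mean log loss (no regularization)
    w -= lr * grad

loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print("loss:", loss)     # keeps shrinking toward 0 but never reaches it
print("weights:", w)     # keep growing toward +/- infinity the longer we train
```

If I run this for more steps, the loss keeps getting smaller and the weight magnitudes keep getting larger without bound, which seems to match the quoted behavior. Is that the right way to think about it?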