I have noticed that the most commonly used activation functions are continuous. Is there a specific reason for this? Work such as this paper has trained networks with discontinuous activations, yet the approach does not seem to have taken off. Does anybody have insight into why, or better yet, an article discussing this?
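For concreteness, here is a minimal sketch of what I mean (my own illustration, assuming PyTorch; the original paper may use a different setup): a step-like discontinuous activation has zero derivative almost everywhere, so plain backpropagation passes no gradient signal through it.

```python
import torch

def step(x: torch.Tensor) -> torch.Tensor:
    # Heaviside-like step built from sign(), whose autograd derivative
    # is zero everywhere, then clamped so the output lies in {0, 1}.
    return torch.sign(x).clamp(min=0.0)

x = torch.randn(5, requires_grad=True)
y = step(x).sum()
y.backward()
print(x.grad)  # all zeros: no useful gradient flows back through the step
```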
- Here and here are two related questions. – nbro Jan 28 '21 at 00:25
- I had seen both before posting, but neither discusses why they are not widely used or why most activations are continuous. – ABIM Jan 28 '21 at 13:58
- Hi @BIM, check this out; it has some interesting thoughts. It's about the step function, though. – mark mark Jan 28 '21 at 15:27