I'm writing a program that compares two numbers and, depending on how different they are, either performs or skips an action. It would make sense to have a single formula that gives me a standardized measure I can use on any pair of numbers.
I've tried each of the three commonly advised methods below, as well as a few other home-brewed variations.
x = (larger - smaller) / larger
x = (larger - smaller) / ((larger + smaller) / 2)
x = smaller / larger
Each of these methods has something in common that creates a problem for me: each comes down to dividing a difference (on the left) by a reference (on the right). When the reference is close to zero, the resulting value x grows without bound and no longer represents what I need it to. Whenever my program applies one of these formulas in such a case, it produces an undesired outcome.
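To make the problem concrete, here is a small sketch of the three formulas above and the blow-up I'm describing. The function names are my own labels, not anything standard:

```python
def rel_to_larger(larger, smaller):
    # x = (larger - smaller) / larger
    return (larger - smaller) / larger

def rel_to_mean(larger, smaller):
    # x = (larger - smaller) / ((larger + smaller) / 2)
    return (larger - smaller) / ((larger + smaller) / 2)

def ratio(larger, smaller):
    # x = smaller / larger
    return smaller / larger

# Well-behaved pair: the reference is far from zero
print(rel_to_larger(100, 90))   # 0.1
print(rel_to_mean(100, 90))     # ~0.105
print(ratio(100, 90))           # 0.9

# Problem pair: the reference (larger) is close to zero
print(rel_to_larger(0.001, -1)) # ~1001, blows up
print(ratio(0.001, -1))         # -1000.0, blows up
```

The second pair shows what I mean: the numbers differ by about 1, yet the "standardized" result is three orders of magnitude larger than in the first case, purely because the divisor is near zero.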
Is my approach to the problem wrong? Do I just need a different formula? Could someone tell me what I am looking at here?