Given a probability density function (pdf),
$$P(x) = \frac{f(x)}{\Sigma_x f(x)}.$$
Is this still a valid pdf if the normalizing constant in the denominator, $\Sigma_x f(x) = \infty$?
The possibility of an infinite sum means that a countable sequence of values $(x_i) = x_1, x_2, \ldots, x_n, \ldots$ is being contemplated. What such a sequence might mean depends on the context, leading to two opposite answers.
Assume the $x_i$ form the support of a discrete distribution $F$. In this case the numbers $f(x_i)$ are intended, after normalization by the sum, to be the probabilities of the $x_i$. (Which, incidentally, indicates the $x_i$ had better be distinct.) But since, by definition, $\sum_{j=1}^\infty f(x_j)$ is the limit of the partial sums $\sum_{j=1}^n f(x_j)$ as $n\to\infty$, and that limit is infinite by hypothesis, it follows that
$${\Pr}_F(x_i) = \lim_{n\to\infty} \frac{f(x_i)}{\sum_{j=1}^n f(x_j)} = 0.$$
Because this assigns zero probability to every $x_i$, and hence to every real $x$, it fails to be a distribution: the axiom that the total probability must be unity is violated.
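For a concrete (purely illustrative) check, take hypothetical weights $f(x_i) = 1/i$, whose sum diverges: the normalized weight of any fixed point shrinks to zero as more terms are included. A short Python sketch:

```python
import numpy as np

# Purely illustrative: weights f(x_i) = 1/i, whose infinite sum (the harmonic
# series) diverges. The normalized weight of the fixed point x_1 then tends
# to zero as more terms enter the denominator.
for n in [10, 10**3, 10**6]:
    weights = 1.0 / np.arange(1, n + 1)       # f(x_1), ..., f(x_n)
    print(n, weights[0] / weights.sum())      # f(x_1) / sum_{j<=n} f(x_j)
```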
Assume the $x_i$ are drawn independently from some distribution $F$, but with weights proportional to $f(x_i)$, all assumed positive. To estimate $F$ we will use the weighted empirical probabilities: it suffices to estimate the value $F$ would assign to any interval of the form $(b, a]$. After drawing $n$ values, we simply look at the proportion of the total weight carried by those $x_i$ lying in $(b, a]$:
$$\widehat{\Pr}_{F; n}((b, a]) = \frac{\sum_{b\lt x \le a} f(x)}{\sum_x f(x)}.$$
(Both sums are only over the $x_i$ for which $1 \le i \le n$. They are finite and nonzero, so there is no question that the fraction makes sense and is finite.)
For any finite $n$ this clearly lies between $0$ and $1$ (because the numerator is a sum of some of the non-negative terms appearing in the denominator, itself a sum of non-negative values). As $n\to\infty$ this ratio has a limit, equal to the probability the parent distribution assigns to the interval, $F\big((b, a]\big)$.
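As an illustration of this behavior (a sketch under assumed choices that are not part of the question: $x_i$ drawn from a standard normal and weight function $f(x) = e^{|x|}$), the weighted proportion of an interval such as $(0, 1]$ stays between $0$ and $1$ and settles down as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical choices (assumptions, not taken from the question): the x_i
# are standard-normal draws and the weight function is f(x) = exp(|x|).
def f(x):
    return np.exp(np.abs(x))

b, a = 0.0, 1.0

x = rng.standard_normal(10**6)
w = f(x)

for n in [10**2, 10**4, 10**6]:
    xn, wn = x[:n], w[:n]
    ratio = wn[(xn > b) & (xn <= a)].sum() / wn.sum()
    print(n, ratio)   # always in [0, 1], and it settles down as n grows
```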
When we choose $a-b$ sufficiently small, then even for large $n$ there may be only a single $x_i,$ $1 \le i \le n,$ for which $b \lt x_i \le a$. The fraction in this estimator then reduces to
$$\frac{f(x_i)}{\sum_x f(x)},$$
which is formally identical to the expression in the question, although here the denominator involves only the $n$ values drawn so far. However, fixing $b$ and $a$, we need to let $n \to \infty$. Typically, more and more of the $x_i$ will fall into the interval $(b, a]$, so as $n$ grows the numerator of the fraction grows too, while the ratio itself stabilizes in the limit. If that limiting value is nonzero, then necessarily infinitely many of the $x_i$ fell in the interval. That is how the apparent contradiction with the first answer is resolved.
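Continuing the same hypothetical simulation with a much smaller interval illustrates the resolution: the number of sample points inside $(b, a]$ keeps growing with $n$, even though the ratio itself stabilizes at a small nonzero value.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.exp(np.abs(x))        # same hypothetical weight as above

b, a = 0.0, 0.01                    # a deliberately tiny interval

x = rng.standard_normal(10**6)
w = f(x)

for n in [10**3, 10**5, 10**6]:
    xn, wn = x[:n], w[:n]
    inside = (xn > b) & (xn <= a)
    # The count of x_i falling in (b, a] keeps growing with n, while the
    # weighted ratio stabilizes at a small nonzero value.
    print(n, inside.sum(), wn[inside].sum() / wn.sum())
```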