My question concerns the notation used in the Value function of a GAN
Does $$x \sim p_{data}$$ mean
$$ \mathbb{E}_{x\sim p_{data}}[\log D(x)] = \sum_{x} \log D(x)\, p_{data}(x) $$ ?
Your notation is a little confusing, but I suspect this is because you're not reading the original equation exactly right. $\mathbb{E}_{x \sim p_{data}(x)}$ means "the expectation over $x$ drawn from the distribution described by $p_{data}(x)$". It looks like you're trying to multiply the expected value of $x$ times $\log(D(x))$ for all $x$, which isn't what's going on here and doesn't quite make sense in this notation.
The way to understand this formula is to relate it to the goal of the GAN paradigm. The first term represents "the expected ability of the discriminator to tell that real data is real". The second term represents "the expected ability of the discriminator to tell that generated data is generated". The goal is to maximize with respect to $D$ and then minimize with respect to $G$; this means "we want our discriminator to be really good at telling fake (generated) data from real data" and then "we want our generator to be really good at fooling that really good discriminator".
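As a concrete illustration (a hypothetical toy sketch, not part of the original answer): if $p_{data}$ is a discrete distribution, $\mathbb{E}_{x \sim p_{data}}[\log D(x)]$ is just the probability-weighted sum $\sum_x p_{data}(x) \log D(x)$, and averaging $\log D(x)$ over samples drawn from $p_{data}$ approximates the same number. The three support points and the sigmoid stand-in for $D$ below are made-up values, chosen only to show the two computations agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete "data distribution" over three points (hypothetical values).
xs = np.array([0.0, 1.0, 2.0])
p_data = np.array([0.2, 0.5, 0.3])

# Stand-in discriminator: any function mapping x to a probability in (0, 1).
def D(x):
    return 1.0 / (1.0 + np.exp(-x))  # a fixed sigmoid, just for illustration

# Exact expectation: sum over x of p_data(x) * log D(x).
exact = np.sum(p_data * np.log(D(xs)))

# Monte Carlo estimate: average log D(x) over samples x ~ p_data.
samples = rng.choice(xs, size=200_000, p=p_data)
estimate = np.mean(np.log(D(samples)))

print(exact, estimate)  # the two values should be close
```

With enough samples, the sample average converges to the weighted sum, which is exactly what the $x \sim p_{data}(x)$ subscript on the expectation is saying.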
'x ~ p_data(x)' means that $x$ is drawn from the distribution $p_{data}$, so
$E_{x \sim p_{data}(x)} [\log D(x)] = \sum_x p_{data}(x) \log D(x)$
where the sum runs over all possible values of $x$ (an integral in the continuous case), $p_{data}(x)$ is the probability of $x$ under the data distribution, and $D(x)$ is the discriminator's output for $x$.