When training a GAN, the generator $G$ strives to fool the discriminator $D$, while $D$ attempts to catch any output generated by $G$ and distinguish it from a real data point. They improve together, training in alternating turns each epoch.
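To make the alternating scheme concrete, here is a minimal toy sketch of my setup in numpy (all names and the 1-D problem are my own invention, not any library's API): real data is drawn from $N(4, 0.5)$, the generator $G(z) = \theta + z$ has a single parameter, and $D$ is a logistic classifier. Each iteration takes one gradient step on $D$, then one on $G$ with $D$ held fixed.

```python
import numpy as np

# Toy 1-D GAN (illustrative only): real data ~ N(4, 0.5);
# generator G(z) = theta + z shifts noise toward the real mean;
# discriminator D(x) = sigmoid(w*x + b) is a logistic classifier.
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

theta = 0.0          # generator parameter
w, b = 0.0, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)
    fake = theta + rng.normal(0.0, 1.0, batch)

    # --- D turn: minimize BCE -[log D(real) + log(1 - D(fake))] ---
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    # gradient of BCE w.r.t. the logit is (D(x) - label)
    g_logit = np.concatenate([d_real - 1.0, d_fake - 0.0])
    x_all = np.concatenate([real, fake])
    w -= lr * np.mean(g_logit * x_all)
    b -= lr * np.mean(g_logit)

    # --- G turn (D held fixed): minimize -log D(G(z)) ---
    fake = theta + rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean((d_fake - 1.0) * w)  # chain rule: dG/dtheta = 1

# theta should drift from 0 toward the real mean, 4.0
print(theta)
```

In this toy the two players visibly "grow together": $w$ first grows so $D$ separates real from fake, which in turn gives $G$ a gradient that pulls $\theta$ toward the real mean, after which $D$'s advantage shrinks again.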
Assume $D$ is already an expert classifier (for example, classifying bird and non-bird images). What will happen if I freeze the weights of $D$ and only train $G$ (for example, to generate high-resolution bird images from low-resolution ones)? Is there a mathematical problem here? Is $D$ so good that the generator will not be able to learn due to a very high initial error? I have, of course, simulated it, and it failed.
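To show what I mean by the "very high initial error" conjecture, here is a small numerical check I did (my own sketch, no framework involved): I compute the gradient the generator would receive through a frozen discriminator's logit $s$, under the original minimax loss $\log(1 - D(G(z)))$ and under the non-saturating alternative $-\log D(G(z))$, when $D$ is extremely confident the sample is fake ($D \approx 0$, i.e. a very negative logit).

```python
import numpy as np

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# d/ds log(1 - sigmoid(s)) = -sigmoid(s):
# the signal G receives through a frozen D's logit s (minimax loss).
def saturating_grad(s):
    return -sigmoid(s)

# d/ds [-log sigmoid(s)] = sigmoid(s) - 1:
# the non-saturating alternative generator loss.
def nonsaturating_grad(s):
    return sigmoid(s) - 1.0

# An expert frozen D rejects early fakes with great confidence:
# D(x) ~ 0 corresponds to a very negative logit.
confident, unsure = -20.0, -1.0
print(abs(saturating_grad(confident)))     # ~2e-9: almost no signal
print(abs(saturating_grad(unsure)))        # ~0.27: usable signal
print(abs(nonsaturating_grad(confident)))  # ~1.0: strong signal survives
```

So at least in this scalar picture, a highly confident frozen $D$ does not give the generator a huge error to descend; with the saturating loss the gradient vanishes instead, which matches the failure I observed in my simulation.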