In a previous question (Entropy of an image) and in various sources on the web, the Shannon entropy of an image is considered to be the entropy of the frequency distribution of the grayscale values.
In this way, a random image whose pixels take the values 0 or 1 with equal probability has the same entropy as a single coin toss: we have 50% white and 50% black, so the entropy is 1 bit. This does not vary with the dimensions of the image.
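To make that convention concrete, here is a minimal Python sketch of the computation I am describing (the helper name `histogram_entropy` is my own):

```python
import numpy as np

def histogram_entropy(img):
    """Shannon entropy (in bits) of the frequency distribution of pixel values."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# A random binary image: each pixel is 0 or 1 with probability 1/2.
rng = np.random.default_rng(0)
img = rng.integers(0, 2, size=(64, 64))
print(histogram_entropy(img))  # ~1.0 bit, whatever the image size
```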
What I am not able to understand is why each pixel is not considered as a separate random variable.
In my opinion, it is more correct in my example to consider an image with $n$ pixels as a sequence of $n$ coin tosses, finally resulting in an entropy of $n$ bits:
$$H(I) = -\sum_{i=1}^{2^n} 2^{-n} \log_2 2^{-n} = -2^n \, 2^{-n} \log_2 2^{-n} = n$$
In the case of an 8-bit image (256 possible pixel values), the final result would be $8n$. This measure of entropy depends on the number of pixels in the image, which makes sense to me. Moreover, it addresses the problem of the spatial correlation of the pixels raised in the previously linked question (Entropy of an image).
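Numerically, for a tiny image, my formula gives (a quick sketch just to check the arithmetic; `n` is the number of pixels):

```python
import numpy as np

n = 4                     # a tiny image of n binary pixels
p = 2.0 ** (-n)           # each of the 2**n possible images is equally likely
H = -sum(p * np.log2(p) for _ in range(2 ** n))
print(H)                  # 4.0, i.e. n bits

# For an 8-bit image there are 256**n equally likely images,
# so use the closed form instead of summing 256**n terms:
p8 = 256.0 ** (-n)
H8 = -(256 ** n) * p8 * np.log2(p8)
print(H8)                 # 32.0, i.e. 8 * n bits
```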
In that example, the number of free pixels would decrease, because each line is completely determined by the value of its first pixel, and this would lower the entropy.
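As a sketch of what I mean (the construction below, where every line repeats its first pixel, is my own illustration of that spatially correlated case):

```python
import numpy as np

rng = np.random.default_rng(1)
first_pixels = rng.integers(0, 2, size=(64, 1))  # one free coin toss per line
img = np.repeat(first_pixels, 64, axis=1)        # every line repeats its first pixel

# The histogram entropy is still ~1 bit: the value distribution is ~50/50.
_, counts = np.unique(img, return_counts=True)
p = counts / counts.sum()
print(-np.sum(p * np.log2(p)))

# Under the per-pixel view, only the 64 first pixels are free,
# so the entropy would be 64 bits instead of 64 * 64.
print(first_pixels.size)                         # 64
```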
What am I doing wrong?

