My understanding of t-SNE is that it optimizes probability distributions so as to preserve Euclidean distances (the default metric), on average, when mapping from the input to the output space. If that is the case, then when mapping from a 2D space to another 2D space, wouldn't the identity transformation be the ideal t-SNE result? I know the t-SNE optimization is a somewhat random process, but I'd still expect distances to be roughly maintained.
Instead, applying t-SNE to the following data produces a very different distribution of distances. Note in particular the distribution of the red dots:
[Input scatter plot]
[Output scatter plot after t-SNE]
This example was produced with (where data is the 2D input array):

from sklearn.manifold import TSNE
embedding = TSNE(random_state=1).fit_transform(data)

See the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html
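Since the original data isn't shown, here is a minimal, self-contained sketch of the situation using synthetic 2D data (two Gaussian blobs as a hypothetical stand-in for my dataset) that reproduces the behavior I'm asking about:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import TSNE

# Synthetic stand-in for the original `data`: two well-separated 2D blobs.
rng = np.random.default_rng(1)
data = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(50, 2)),
    rng.normal(loc=20.0, scale=1.0, size=(50, 2)),
])

# Embed 2D -> 2D with the same dimensionality as the input.
embedding = TSNE(n_components=2, random_state=1, perplexity=30).fit_transform(data)

# Compare the pairwise-distance scales before and after the embedding;
# they typically differ noticeably even though no reduction took place.
print("input  mean pairwise distance:", pdist(data).mean())
print("output mean pairwise distance:", pdist(embedding).mean())
```

Running this shows that the pairwise distances in the 2D output are on a different scale than in the 2D input, i.e. t-SNE does not simply converge to (a rescaled) identity map.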

