One way to address questions like this is to look at extremely simple distributions: that is, distributions supported at a small (finite) number of points.
A binary distribution (supported at two points) won't provide enough flexibility, so let's look at ternary distributions. For simplicity, we construct distributions $G$ supported at $-1$, $0$, and $1$, with zero mean, and suppose they are symmetric. The law of $G$ is then determined by a single parameter $p$, with $\Pr_G(X=-1)=\Pr_G(X=1)=p$, whence $\Pr_G(X=0)=1-2p$ and necessarily $0 \le p \le 1/2$. Because the mean is zero, the variance of this distribution is the expected value of $X^2$, namely $p(-1)^2 + (1-2p)(0)^2 + p(1)^2 = 2p$.
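As a quick numerical check (a minimal sketch; the value $p = 0.3$ is arbitrary), the mean and variance of $G$ can be computed directly:

```python
import numpy as np

p = 0.3                                  # any value in [0, 1/2]
support = np.array([-1, 0, 1])
probs = np.array([p, 1 - 2*p, p])        # the law of G

mean = np.sum(support * probs)           # 0, by symmetry
variance = np.sum(support**2 * probs)    # equals 2p
print(mean, variance)                    # 0.0 0.6
```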
When $X$ and $Y$ both have $G$ for their distribution, bivariate laws for $(X,Y)$ are given by three-by-three contingency tables indexed by the values $(-1,0,1)$. One such family (albeit not the most general one) is
$$\left(
\begin{array}{ccc}
p^2+\frac{\alpha p}{2}+2\gamma p & (1-2p)p-2p\gamma & p^2-\frac{\alpha p}{2} \\
(1-2p)p-2p\gamma & (1-2p)^2+2p\gamma & (1-2p)p \\
p^2-\frac{\alpha p}{2} & (1-2p)p & p^2+\frac{\alpha p}{2}
\end{array}
\right)$$
(That this is a possible family of bivariate distributions with identical marginals $G$ can be seen by computing the row sums and column sums: both equal $(p, 1-2p, p)$, which is the distribution $G$.)
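This marginal check can be automated symbolically, for instance with sympy (a sketch; the matrix `T` merely transcribes the table above):

```python
import sympy as sp

p, a, g = sp.symbols('p alpha gamma', real=True)
T = sp.Matrix([
    [p**2 + a*p/2 + 2*g*p, (1 - 2*p)*p - 2*p*g,  p**2 - a*p/2],
    [(1 - 2*p)*p - 2*p*g,  (1 - 2*p)**2 + 2*p*g, (1 - 2*p)*p ],
    [p**2 - a*p/2,         (1 - 2*p)*p,          p**2 + a*p/2],
])

# Row and column sums both simplify to the marginal (p, 1-2p, p).
print([sp.simplify(sum(T.row(i))) for i in range(3)])  # [p, 1 - 2*p, p]
print([sp.simplify(sum(T.col(j))) for j in range(3)])  # [p, 1 - 2*p, p]
```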
This construction will be meaningful if and only if all nine entries are valid probabilities: that is, they must lie between $0$ and $1$. This restricts $\alpha$ and $\gamma$ in terms of $p$, but nevertheless there are intervals of valid values of both parameters for every $0 \lt p \lt 1/2$. Here are some plots of the valid regions in $(p, \gamma)$ coordinates, with the values of $\alpha$ shown across the top:

[Figure: valid $(p,\gamma)$ regions, one panel for each value of $\alpha$.]
We see that $\gamma$ can range from $-1/4$ up to $1/2$, depending on the values chosen for $p$ and $\alpha$, and that all values $-1 \le \alpha \le 1$ are possible.
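In lieu of reproducing the plots, a brute-force feasibility check (a sketch; the grid resolution and the sample values are arbitrary) recovers the same ranges:

```python
import numpy as np

def gamma_range(p, alpha, n=2001):
    """Interval of gamma for which all nine cell probabilities lie in [0, 1]."""
    valid = []
    for g in np.linspace(-0.5, 0.5, n):
        cells = [p**2 + alpha*p/2 + 2*g*p,   # corner cells
                 p**2 - alpha*p/2,
                 (1 - 2*p)*p - 2*p*g,        # edge cells
                 (1 - 2*p)*p,
                 (1 - 2*p)**2 + 2*p*g]       # center cell
        if all(0 <= c <= 1 for c in cells):
            valid.append(g)
    return (min(valid), max(valid)) if valid else None

print(gamma_range(0.25, 0.0))   # approximately (-0.125, 0.25)
```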
Because both variables have zero mean, the correlation of $X$ and $Y$ is the expectation of $XY$ divided by their common variance $2p$. The following expression omits all the terms that are obviously zero:
$$\text{Cor}(X,Y) = \Big[\big(p^2 + \tfrac{\alpha p}{2} + 2 \gamma p\big)(-1)(-1) + \big(p^2 - \tfrac{\alpha p}{2}\big)(-1)(1) \\ + \big(p^2 - \tfrac{\alpha p}{2}\big)(1)(-1) + \big(p^2 + \tfrac{\alpha p}{2}\big)(1)(1)\Big]\Big/(2p) = \alpha + \gamma.$$
Whence the preceding plots indicate, for each possible value of $p$, the possible range of correlations $\alpha+\gamma$. Because $\alpha+\gamma$ usually determines neither $\alpha$ nor $\gamma$ uniquely, this provides a simple concrete example of how knowledge of the correlation can fail to determine the joint distribution.
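To make the non-uniqueness concrete, here is a sketch (the parameter values are arbitrary, chosen only so that all cells are valid probabilities) exhibiting two distinct joint laws with the same correlation:

```python
import numpy as np

def table(p, alpha, gamma):
    """The 3x3 joint law above, as an array indexed by (-1, 0, 1)."""
    return np.array([
        [p**2 + alpha*p/2 + 2*gamma*p, (1 - 2*p)*p - 2*p*gamma,  p**2 - alpha*p/2],
        [(1 - 2*p)*p - 2*p*gamma,      (1 - 2*p)**2 + 2*p*gamma, (1 - 2*p)*p     ],
        [p**2 - alpha*p/2,             (1 - 2*p)*p,              p**2 + alpha*p/2],
    ])

def correlation(p, alpha, gamma):
    x = np.array([-1, 0, 1])
    exy = x @ table(p, alpha, gamma) @ x   # E[XY]
    return exy / (2*p)                     # divide by the common variance 2p

# Same correlation alpha + gamma = 0.2, but different joint distributions:
print(correlation(0.25, 0.2, 0.0))                                # 0.2
print(correlation(0.25, 0.0, 0.2))                                # 0.2
print(np.allclose(table(0.25, 0.2, 0.0), table(0.25, 0.0, 0.2)))  # False
```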
It is likely that any follow-on questions of a related nature can also be addressed by analyzing this family.
When $X$ is a linear function of $Y$ and, again, $X$ and $Y$ are assumed to have identical distributions, the possibilities are limited. Let the linear function be $X = \beta_0 + \beta_1 Y$. Then
$$\text{Var}(Y) = \text{Var}(X) = \text{Var}(\beta_0+\beta_1 Y) = \beta_1^2 \text{Var}(Y)$$
implies the slope is $\beta_1=\pm 1$, because $X$ and $Y$, having identical distributions, must have the same variance. If $G$ is symmetric this allows both values of $\beta_1$, but if $G$ is not symmetric then $\beta_1=1$. In either case necessarily $\beta_0=0$: equating the (zero) means gives $0 = E[X] = \beta_0 + \beta_1 E[Y] = \beta_0$. Whence
$$\text{Cor}(X,Y) = \frac{E[XY]}{\sqrt{\text{Var}(X)\text{Var}(Y)}} = \frac{\beta_1\,\text{Var}(Y)}{\text{Var}(Y)} = \beta_1 = \pm 1$$
(because $E[XY] = E[(\beta_1 Y)\,Y] = \beta_1 E[Y^2] = \beta_1 \text{Var}(Y)$ when the means are zero), showing that the variables are either perfectly correlated or, in the symmetric case, possibly perfectly anticorrelated.
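A quick simulation illustrates the anticorrelated case (a sketch, assuming the symmetric ternary $G$ from above; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3
# Y ~ G; because G is symmetric, X = -Y has the same distribution as Y.
y = rng.choice([-1, 0, 1], size=100_000, p=[p, 1 - 2*p, p])
x = -y
print(np.corrcoef(x, y)[0, 1])  # -1.0: X is a linear function of Y with slope -1
```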