A good test provides insight as well as a quantification of the apparent difference. A permutation test does both, because you can plot the permutation distribution, and it will show you just how, and to what extent, the groups in your data differ.
A natural test statistic is the vector of mean differences between the points in one group and those in the other -- but with little modification you can apply this approach to any statistic you choose. The test views group membership as arising from a random selection of (say) the red points from the collection of all red and blue points. Each possible sample yields a value of the test statistic (a vector in this case). The permutation distribution is the distribution of all these possible test statistics, each occurring with equal probability.
For small datasets, like that of the question ($N=12$ points with subgroups of $n=5$ and $7$ points), the number of samples is small enough that you can generate them all. For larger datasets, where $\binom{N}{n}$ is impracticably large, you can sample randomly; a few thousand samples will more than suffice. Either way, these distributions of vectors can be plotted in Cartesian coordinates, shown below using one circular shape per outcome for the full permutation distribution ($\binom{12}{5} = 792$ points). This is the null, or reference, distribution for assessing the location of the data's mean difference, shown as a red point with a red vector directed towards it.
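As a quick sanity check on those counts (a small sketch; the values here match the question's $N=12$, $n=5$):

```r
# Number of distinct ways to assign n = 5 of the N = 12 points to one group;
# each such assignment is one permutation sample.
choose(12, 5)                  # 792: small enough to enumerate fully
# For large N, compare on the log scale to avoid overflow:
lchoose(100, 50) > log(2e3)    # TRUE: far too many to enumerate, so sample instead
```

Working with `lchoose` rather than `choose` is what lets the decision between full enumeration and random sampling be made safely even when the binomial coefficient would overflow.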

When this point cloud looks approximately Normal, the squared Mahalanobis distance of the data statistic from the origin will have approximately a chi-squared distribution with $2$ degrees of freedom (one for each coordinate). This yields a p-value for the test, shown in the title of the figure. That is a useful calculation because it (a) quantifies how extreme the arrow appears and (b) can prevent our visual impressions from deceiving us. Here, although the data look extreme--most of the red points are displaced down and to the left of most of the blue points--the p-value of $0.156$ indicates that such an extreme-looking displacement occurs frequently among random groupings of these twelve points, advising us not to conclude there is a significant difference in their locations.
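The chi-squared approximation can be checked in isolation. For an approximately Normal point cloud centered at the origin, squared Mahalanobis distances behave like chi-squared variates with $2$ degrees of freedom, so the resulting p-values should be roughly uniform. A minimal sketch using simulated data (not the permutation distribution itself):

```r
set.seed(1)
Z <- matrix(rnorm(2000), ncol = 2)       # a Normal point cloud centered at (0, 0)
Sigma <- crossprod(Z) / (nrow(Z) - 1)    # covariance estimated about the origin
d2 <- rowSums((Z %*% solve(Sigma)) * Z)  # squared Mahalanobis distances
p <- pchisq(d2, df = 2, lower.tail = FALSE)
# If the chi-squared(2) reference is right, these p-values are roughly Uniform(0, 1).
hist(p)
```

The same construction appears in the test code below, where `S` plays the role of `Z` and the data statistic `s` is the point whose distance is assessed.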
This R code gives the details of the calculations and construction of the figure.
#
# The data, eyeballed.
#
X <- data.frame(x = c(1, 2, 5, 6, 8, 9, 11, 13, 14, 15, 18, 19),
                y = c(0, 1.5, 1, 1.25, 10, 9, 3, 7.5, 8, 4, 10, 11),
                group = factor(c(0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1),
                               levels = c(0, 1), labels = c("Red", "Blue")))
#
# This approach, although inefficient for testing mean differences in location,
# readily generalizes: by precomputing all possible
# vector differences among all the points, any statistic based on differences
# observed in a sample can be easily computed.
#
dX <- with(X, outer(x, x, `-`))
dY <- with(X, outer(y, y, `-`))
#
# Given a vector `i` of indexes of the "red" group, compute the test
# statistic (in this case, a vector of mean differences).
#
stat <- function(i) rowMeans(rbind(c(dX[i, -i]), c(dY[i, -i])))
#
# Conduct the test.
#
N <- nrow(X)
n <- with(X, sum(group == "Red"))
p.max <- 2e3 # Use sampling if the number of permutations exceeds this
# set.seed(17) # Uncomment to make random sampling reproducible
if (lchoose(N, n) <= log(p.max)) {
  P <- combn(seq_len(N), n)  # All possible index sets for the "Red" group
  stitle <- "P-value"
} else {
  P <- sapply(seq_len(p.max), function(i) sample.int(N, n))
  stitle <- "Approximate P-value"
}
S <- t(matrix(apply(P, 2, stat), 2)) # The permutation distribution
s <- stat(which(X$group == "Red")) # The statistic for the data
#
# Compute the squared Mahalanobis distance and its p-value.
# This works because the center of `S` is at (0, 0).
#
delta <- s %*% solve(crossprod(S) / (nrow(S) - 1), s)
p <- pchisq(delta, 2, lower.tail = FALSE)
#
# Plot the reference distribution as a point cloud, then overplot the
# data statistic.
#
plot(S, asp = 1, col = "#00000020", xlab = "dx", ylab = "dy",
     main = bquote(.(stitle) == .(signif(p, 3))))
abline(h = 0, v = 0, lty = 3)
arrows(0, 0, s[1], s[2], length = 0.15, angle = 18,
       lwd = 2, col = "Red")
points(s[1], s[2], pch = 24, bg = "Red", cex = 1.25)