Difference in power
When you split a data set and perform two separate tests, each with a cutoff significance level of $\sqrt{\alpha}$, the overall procedure will be less powerful.
The simulations below demonstrate this for the case of a z-test (it is a bit more difficult to compute for Welch's t-test, but the principle is similar) with a significance level of $\alpha = 0.04$.
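To see why the cutoff $\sqrt{\alpha}$ gives an overall level of $\alpha$: under the null hypothesis the two p-values are independent and uniformly distributed, so

$$P\left(p_1 \leq \sqrt{\alpha} \text{ and } p_2 \leq \sqrt{\alpha} \mid H_0\right) = \sqrt{\alpha} \cdot \sqrt{\alpha} = \alpha$$

With $\alpha = 0.04$ each individual test uses the cutoff $\sqrt{0.04} = 0.2$.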

Left/right panels
On the left side the hypothesis test is based on the z-score of the combined data.
On the right side the hypothesis test is based on the z-scores of the split data.
Top/bottom panels
For the top panels the data are generated assuming that the null hypothesis is correct. This gives a uniform distribution of p-values. We see that both rejection regions cover roughly 4% of the area (the small discrepancies are due to simulation variation).
In the bottom panels the data are generated assuming that the alternative hypothesis is correct, with an effect size of $E[z] = 0.5$. This gives a distribution of p-values that is not uniform. We can see that the two tests have different probabilities of rejecting the null hypothesis. The left side, the test on the combined data, is more powerful (it rejects more often).
The reason for the higher power is that the test on the left covers the region where the alternative hypothesis has the highest density. Among all regions covering 4% of the unit square, it is the one that contains the highest density of cases under the alternative hypothesis (think of the Neyman–Pearson lemma).
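Concretely, for two independent z-scores with a common mean shift $\mu$ under the alternative, the likelihood ratio is monotone in $z_1 + z_2$:

$$\frac{f_1(z_1, z_2)}{f_0(z_1, z_2)} = \frac{\varphi(z_1 - \mu)\,\varphi(z_2 - \mu)}{\varphi(z_1)\,\varphi(z_2)} = \exp\left(\mu (z_1 + z_2) - \mu^2\right)$$

so by the Neyman–Pearson lemma the most powerful level-$\alpha$ test thresholds $z_1 + z_2$ (equivalently the combined z-score $(z_1 + z_2)/\sqrt{2}$), which is exactly the curved region in the left panels. The rectangular region of the split procedure cannot match this.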
Why sacrifice power this way?
If I am going to run 2 identical experiments, how is that different to just running the first t-test with twice as much data?
As explained above, one difference is the power of the test, and that difference is not in favour of splitting. Why then follow this procedure?
The reason is that with split testing you are not always running that second experiment. That saves you time, money and energy. You can see it as an early stopping rule.
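For example, if the null hypothesis is true, the second experiment is only run when the first p-value falls below $\sqrt{\alpha}$, so the expected number of experiments is

$$1 + P\left(p_1 \leq \sqrt{\alpha} \mid H_0\right) = 1 + \sqrt{\alpha} = 1.2 \quad \text{for } \alpha = 0.04$$

i.e. on average 40% fewer experiments than always running both.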
Also, there may be several practical issues like the ones mentioned by Lukas Lohse. The analysis above is a computation for significance tests assuming that the testing procedure is exactly as described by the statistical model. Those assumptions might be wrong (e.g. systematic bias is not captured by the statistical variance of the sample). Performing an additional test can be considered a way to make the testing more robust. Ideally that second test is not identical, but completely independent (to eliminate systematic bias).
Code for image
set.seed(1)
layout(matrix(1:4, 2, byrow = TRUE))
par(mar = c(4,4,4,2))
n = 5*10^4
alpha_t = 0.04
alpha_s = sqrt(alpha_t)
zt = qnorm(alpha_t) # z cutoff for the combined test
p1 = seq(0,1,0.01) # p-value first test
z1 = qnorm(p1) # z-value first test
z2 = zt*sqrt(2)-z1 # z-value second test
p2 = pnorm(z2) # p-value second test
# loop runs the simulation twice:
# i = 0: null hypothesis true, i = 1: alternative hypothesis true
for (i in 0:1) {
sim_z1 = rnorm(n, i*-0.5) # mean 0 under H0 (i = 0), -0.5 under H1 (i = 1)
sim_z2 = rnorm(n, i*-0.5)
sim_p1 = pnorm(sim_z1)
sim_p2 = pnorm(sim_z2)
test_1 = pnorm(sim_z1+sim_z2, 0, sqrt(2)) < alpha_t
test_2 = (sim_p1 < alpha_s) * (sim_p2 < alpha_s)
plot(-10,-10, xlim = c(0,1), ylim = c(0,1), xlab = "first p-value", ylab = "second p-value")
title("test using single data set", line = 0.5, font.main = 1)
lines(p1,p2, lwd = 2)
points(sim_p1, sim_p2, pch = 21, cex = 0.5,
col = rgb(test_1*0.4+0.6, 0.6, 0.6, 0.02),
bg = rgb(test_1*0.4+0.6, 0.6, 0.6, 0.02))
text(0.2,0.3, paste0(round(100*mean(test_1), 2), " %"), col = 2)
plot(-10,-10, xlim = c(0,1), ylim = c(0,1), xlab = "first p-value", ylab = "second p-value")
title("test using split data sets", line = 0.5, font.main = 1)
lines(c(alpha_s,alpha_s,0),c(0,alpha_s,alpha_s), lwd = 2)
points(sim_p1, sim_p2, pch = 21, cex = 0.5,
col = rgb(test_2*0.4+0.6, 0.6, 0.6, 0.02),
bg = rgb(test_2*0.4+0.6, 0.6, 0.6, 0.02))
text(0.2,0.3, paste0(round(100*mean(test_2), 2), " %"), col = 2)
if (i == 0) {
mtext("simulation of p-values when null hypothesis is true", side = 3, line = -2, outer = TRUE, font = 2)
}
if (i == 1) {
mtext("simulation of p-values when alternative hypothesis is true (d = 0.5)", side = 3, line = -23, outer = TRUE, font = 2)
}
}