It's an interesting question. One thing to note is that allowing unequal variances will only change the $t$-statistic if the groups are of unequal size. If the two groups are of equal size (i.e., $n_1=n_2=n$), then Welch's $t$-test (denoted $t_w$) and Student's $t$-test (denoted $t_s$) give the same test statistic, since
$$
t_w=
\frac{\bar{y}_1-\bar{y}_2}{\sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}}=
\frac{\bar{y}_1-\bar{y}_2}{\sqrt{\frac{s^2_1+s^2_2}{n}}}=
\frac{\bar{y}_1-\bar{y}_2}{\sqrt{(\frac{s^2_1+s^2_2}{2})(\frac{2}{n})}}=
t_s
$$
I point this out because the sleep study example you give in your post involves equal group sizes, which is why running your example returns the same $t$-statistic in all cases.
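To see the equivalence numerically, here is a quick sketch (data and variable names are my own, not from the question) confirming that with equal group sizes the two statistics coincide even when the variances differ:

```r
# with equal group sizes (n1 = n2), Student's and Welch's t-statistics coincide
set.seed(1)
x <- rnorm(15)
y <- rnorm(15, sd = 3)  # deliberately unequal variances
t_student <- unname(t.test(x, y, var.equal = TRUE)$statistic)
t_welch   <- unname(t.test(x, y, var.equal = FALSE)$statistic)
all.equal(t_student, t_welch)  # TRUE; only the degrees of freedom differ
```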
Anyway, to answer your question: this can be done in nlme::gls() by using the weights argument together with nlme::varIdent(). Below I generate some data with unequal group sizes and unequal variances, then show how to fit the models with and without the equal-variance assumption, using both t.test() and a regression function (lm() or gls()):
# generate data with unequal group sizes and unequal variances
set.seed(497203)
dat <- data.frame(group = rep.int(c("A", "B"), c(10, 20)),
                  y = rnorm(30, mean = rep.int(c(0, 1), c(10, 20)),
                            sd = rep.int(c(1, 2), c(10, 20))))
# the t-statistic assuming equal variances
t.test(y ~ group, data = dat, var.equal = TRUE)
summary(lm(y ~ group, data = dat))
# the t-statistic not assuming equal variances
t.test(y ~ group, data = dat, var.equal = FALSE)
library(nlme)
summary(gls(y ~ group, data = dat, weights=varIdent(form = ~ 1 | group)))
# a hack to achieve the same thing in lmer
# (lmerControl options are needed to prevent lmer from complaining
# about too many levels of the grouping variable)
dat <- transform(dat,
                 obs = factor(1:nrow(dat)),
                 dummy = as.numeric(group == "B"))
library('lme4')
summary(lmer(y ~ group + (dummy-1|obs), data=dat,
control=lmerControl(check.nobs.vs.nlev = "ignore",
check.nobs.vs.nRE = "ignore")))
You also asked about getting the same degrees of freedom. The degrees of freedom come from the Satterthwaite approximation; t.test() applies it by default because it is part of the solution Welch described, but gls() does not. In principle this could be done, and I believe PROC MIXED in SAS will do it, so you should be able to reproduce the results exactly in PROC MIXED. There is probably some R package that makes it easy to get the Satterthwaite degrees of freedom for general regression models (including those with continuous predictors), but I don't know what it is.
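For reference, the Welch–Satterthwaite degrees of freedom that t.test() computes are
$$
\nu \approx \frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2}{\frac{(s_1^2/n_1)^2}{n_1-1}+\frac{(s_2^2/n_2)^2}{n_2-1}},
$$
which, with equal group sizes and equal sample variances, reduces to the familiar Student degrees of freedom $2(n-1)$.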
Update by @amoeba: The Satterthwaite approximation is the default in the lmerTest package, so to get a $p$-value exactly matching Welch's $t$-test one can run:
library('lmerTest')
summary(lmer(y ~ group + (dummy-1|obs), data=dat,
control=lmerControl(check.nobs.vs.nlev = "ignore",
check.nobs.vs.nRE = "ignore")))
Comment by @amoeba: The lmer hack edited into this answer is due to Ben Bolker: https://stats.stackexchange.com/a/214007/28666.
Comment by @JWalker: contrast(emmeans(m2, specs = "group")) replicates the df and p-value, at least to what looks like rounding error, where m2 is the gls model fitted above.