Yes and no... the statement is almost correct. The power of a 1-tailed test and a 2-tailed test is usually approximately the same. However, the power is technically smaller for the 1-tailed test (not larger).
The significance level determines where the critical value(s) fall on the null distribution. In particular, the critical value of a 1-tailed test with $\alpha$ is the same as the corresponding critical value of a 2-tailed test with $2\alpha$.
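As a quick check of that (assuming a standard normal test statistic, with $\alpha = 0.05$ chosen purely for illustration):

```python
# The upper critical value is the same for a 1-tailed test at alpha
# and a 2-tailed test at 2*alpha (which puts alpha in each tail).
from scipy.stats import norm

alpha = 0.05
z_one = norm.ppf(1 - alpha)            # 1-tailed test at alpha
z_two = norm.ppf(1 - (2 * alpha) / 2)  # 2-tailed test at 2*alpha
print(z_one, z_two)                    # both ~1.645
```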
Assuming the null is actually false (so you want to reject it), you want the test statistic to fall in the critical region (that is, "beyond" the critical values). If you fail to reject in that situation, you make a type II error, and the probability of this happening is $\beta$. Power is defined as $1-\beta$ (the probability of NOT making a type II error).
To calculate $\beta$, you find the probability that the test statistic lands in the non-critical region of the null distribution, computed under the actual distribution (which you estimate using the expected effect size). For a 1-tailed test, this region is (essentially) unbounded, usually something like $(-\infty,\nu_\text{c.v.})$; for a 2-tailed test, it is bounded, usually something like $(-\nu_\text{c.v.},\nu_\text{c.v.})$... and thus smaller. So $\beta$ is smaller for a 2-tailed test than a 1-tailed test... which means the power is bigger for a 2-tailed test than a 1-tailed test.
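A minimal sketch of that calculation, with numbers chosen purely for illustration (a z-test of $H_0: \mu = 0$ with known $\sigma = 1$, $n = 25$, a true mean of $0.5$, and $\alpha = 0.05$, so the 2-tailed test uses $2\alpha = 0.10$):

```python
# Under these assumed numbers, the test statistic is Normal(shift, 1) when the
# alternative is true, with shift = mu_true / (sigma / sqrt(n)) = 2.5.
from math import sqrt
from scipy.stats import norm

alpha, mu_true, sigma, n = 0.05, 0.5, 1.0, 25
shift = mu_true / (sigma / sqrt(n))  # mean of the test statistic under the alternative
z_cv = norm.ppf(1 - alpha)           # shared critical value from above, ~1.645

# beta = P(test statistic falls in the non-critical region | alternative is true)
beta_one = norm.cdf(z_cv, loc=shift)                               # region (-inf, z_cv)
beta_two = norm.cdf(z_cv, loc=shift) - norm.cdf(-z_cv, loc=shift)  # region (-z_cv, z_cv)

print(beta_one, 1 - beta_one)  # ~0.1963 and ~0.8037 (1-tailed)
print(beta_two, 1 - beta_two)  # beta slightly smaller, power slightly larger (2-tailed)
```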
However, the assertion that they are approximately the same is usually reasonable, because the "extra" non-critical region of the 1-tailed test, $(-\infty,-\nu_\text{c.v.})$, usually carries very little probability under the "real" distribution.
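With the same illustrative numbers as above, that extra area under the alternative is:

```python
# Probability, under the alternative (shift = 2.5), of landing in the "extra"
# non-critical region (-inf, -z_cv) that only the 1-tailed test fails to reject in.
from scipy.stats import norm

z_cv, shift = norm.ppf(1 - 0.05), 2.5
print(norm.cdf(-z_cv - shift))  # ~1.7e-05, so beta (and power) differ by almost nothing
```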
Happy to expand the normal-distribution sketches above into a fuller numerical example if this will help clarify.