Your link to the original paper helped clarify the potentially ambiguous description in the quoted excerpt. This was a standard power analysis for what would effectively be a 2-sample t-test.
The design was to look at the change in A1C values (expressed in percentage points) in participants randomly assigned to one of two diets. For this analysis, those changes in A1C are the data being analyzed. The "assumed SD of 1.9%" was the anticipated standard deviation, on the percentage-point scale, of the individual changes within each diet group, perhaps based on prior studies.
The "1.5–percentage point between-group A1C difference" is the size of the difference in the changes between the two groups that they hoped to detect. For example, if diet 1 led to an average drop of 1 point in A1C, then they would want to say that a drop of 2.5 points with diet 2 was statistically different from the result with diet 1 at "the two-sided 5% level." That's the standard choice for the risk of "finding" a difference that isn't really there, called the Type I error.
The "80% chance of detecting" such a difference between the diets is also a standard choice. That means that the authors were willing to accept a 20% chance that they would not detect a true difference of that magnitude, the Type II error.
So the problem is to find how many cases in each group are needed to provide this tradeoff between falsely finding a difference and missing a true difference, based on the 1.5-point assumed difference (in A1C changes) between the groups and a 1.9-point standard deviation of the A1C changes within each group. That's a 2-sample t-test, for which the calculations are based on the (well known, at least to statisticians) non-central t distribution.
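For the mechanics, the usual large-sample (normal) approximation to that calculation is a useful sketch. Writing $\delta$ for the between-group difference, $\sigma$ for the within-group SD, and $z_{1-\alpha/2}$, $z_{1-\beta}$ for the corresponding standard normal quantiles:

$$n \;\approx\; \frac{2\,\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2 \sigma^2}{\delta^2} \;=\; \frac{2\,(1.960 + 0.842)^2\,(1.9)^2}{(1.5)^2} \;\approx\; 25.2,$$

and the exact non-central $t$ calculation adds roughly one more case per group.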
This Cross Validated page shows how the calculations can be done, with a diagram that I have copied here showing the tradeoff between Type I and Type II errors:

*[diagram from the linked page illustrating the tradeoff between Type I and Type II errors]*
For this case, use $\delta = 1.5$, $\sigma = 1.9$, $\alpha = 0.05$, and $\beta = 0.2$, with $n$ being the necessary size of each group. Standard statistical packages perform the calculations. I repeated this calculation with Russ Lenth's Java applet and found that you would need to end up with 26 cases in each group. The authors expected 26% of their participants to drop out of the study, so they needed to start with 34 in each group.
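If the applet isn't handy, the same number comes out of any standard package. Here is a minimal sketch in Python using statsmodels (R's `power.t.test` is the direct equivalent); note that statsmodels parameterizes the test by the standardized effect size $\delta/\sigma$:

```python
from statsmodels.stats.power import TTestIndPower

delta, sigma = 1.5, 1.9   # between-group difference and within-group SD (A1C points)
analysis = TTestIndPower()

# Per-group n for a two-sided, two-sample t-test at alpha = 0.05 with 80% power;
# the calculation uses the non-central t distribution mentioned above.
n = analysis.solve_power(effect_size=delta / sigma, alpha=0.05,
                         power=0.8, alternative='two-sided')
print(n)                  # about 26 per group

# Check: with 26 cases per group, power comes out at essentially 0.80
print(analysis.power(effect_size=delta / sigma, nobs1=26, alpha=0.05))
```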
The complete experimental design was a good deal more complicated, with multiple measurements of weight, A1C, and other variables over the course of more than a year. As the authors note in your quoted excerpt, as the study proceeded they found that their initial estimates of both the between-group difference and the within-group SD were too high, so they added more participants to obtain 80% power with the revised estimates. Some might find this change of study design in midstream to be less than ideal.