I'd say logistic regression isn't a test at all; however, fitting a logistic regression may then lead to no tests at all, or to one or several tests.
You're quite correct that labelling something nonparametric merely because it isn't normal is insufficient. I'd call the exponential family explicitly parametric, so I'd usually regard logistic regression (and Poisson regression, Gamma regression, and so on) as parametric, though there can be circumstances in which I might accept an argument that a particular logistic regression could be regarded as nonparametric (or at least, in a vaguely hand-wavy sense, only quasi-"parametric").
Beware any confusion over the two senses in which a regression may be called nonparametric.
If I fit a Theil linear regression it is nonparametric in the sense that I have left the error distribution undefined (it corresponds to adjusting the regression slope until the Kendall correlation between residuals and $x$ is 0) ... but it is parametric in the sense that I have a fully specified relationship between $y$ and $x$ parameterized by the slope and intercept coefficients.
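To illustrate that characterization of the Theil slope, here's a small sketch (assuming SciPy's `theilslopes` and `kendalltau`; the simulated data are purely illustrative). The Kendall correlation between the Theil residuals and $x$ comes out essentially zero by construction, since the Theil slope is the median of the pairwise slopes:

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
# heavy-tailed (t with 3 df) errors -- no normality assumed
y = 2.0 + 0.7 * x + rng.standard_t(df=3, size=x.size)

# Theil slope: median of all pairwise slopes (y_j - y_i)/(x_j - x_i)
slope, intercept, lo, hi = theilslopes(y, x)
resid = y - (intercept + slope * x)

# Kendall correlation between residuals and x is near 0 at the fitted slope
tau, _ = kendalltau(x, resid)
print(slope, tau)
```

Note that nothing here specifies the error distribution, which is the sense in which the fit is nonparametric.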
If on the other hand I fit a kernel polynomial regression (say a local linear regression), but with normal errors, that is also called nonparametric, but in this case it's the parameterization of the relationship between $y$ and $x$ that's nonparametric (at least potentially infinite-dimensional), not the error distribution.
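A minimal sketch of that second kind, a local linear regression with a Gaussian kernel and normal errors, using nothing beyond NumPy (the bandwidth and simulated data are illustrative choices, not anything from the answer above):

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0: weighted least squares with Gaussian kernel weights."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    # solve the weighted normal equations (X'WX) beta = X'Wy
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]  # the local intercept is the fitted value at x0

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 150))
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)  # normal errors

grid = np.linspace(0.5, 2 * np.pi - 0.5, 50)
fit = np.array([local_linear(x0, x, y, h=0.4) for x0 in grid])
```

Here the error distribution is fully parametric (normal), but the mean function is left essentially unrestricted, so the fit is nonparametric in the second sense.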
Both senses are used, but when it comes to regression, the second kind is actually used more often.
It's also possible to be nonparametric in both senses, but harder (with sufficient data, I could, for example, fit a Theil locally-weighted linear regression).
The two senses aren't quite as distinct as they may seem at first. If we consider the model as specifying the conditional distribution of the response, then the second sense is about modelling the location (typically the mean) of that conditional distribution, while the first sense is about modelling its shape; both relate to aspects of the conditional distribution. Returning to that second sense: if the distribution is otherwise specified (up to a fixed, finite number of parameters), the model might be better described as semiparametric, but that's not the convention in this area.
In the case of GLMs, the second form of nonparametric multiple regression includes GAMs; that second sense is the one in which Hastie is generally operating, and the one under which he's operating in that quote.