This makes sense. The only way to be assured of perfect sensitivity is to classify everything as the positive class, which is equivalent to calling a case positive whenever the predicted value exceeds $-\infty$. Likewise, the only way to be assured of perfect specificity is to classify everything as the negative class, which is equivalent to calling a case negative whenever the predicted value is below $+\infty$. You might achieve perfect sensitivity on a particular data set by setting the threshold just below the lowest prediction, or perfect specificity by setting it just above the highest prediction, but that applies only to that data set and cannot be guaranteed to hold in general. Thus it makes sense that the developers hard-coded $\pm\infty$ in the coords function: those are the only thresholds that guarantee perfect sensitivity or specificity (granted, at the expense of each other).
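As a quick sketch of why a finite threshold offers no such guarantee (the variable names here are mine, for illustration only):

```r
# A finite threshold tuned to one sample gives no guarantee on new data.
set.seed(2023)
train_p <- runif(20, 0.1, 0.9)   # predictions seen so far
thr <- min(train_p) - 1e-6       # "perfect sensitivity" on this sample
new_p <- 0.05                    # a future positive case can score lower
new_p > thr                      # FALSE: this 1 would be missed
new_p > -Inf                     # TRUE: only -Inf always catches it
```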
I even reproduced the Inf thresholds with some simulated data. (That this output also shows perfect sensitivity and specificity at finite thresholds is addressed in EDIT 2 below.)
set.seed(2023)
N <- 20
p <- rbeta(N, 1/2, 1/2) # simulated probability predictions
y <- rbinom(N, 1, p) # simulated true classes
r <- pROC::roc(y, p)
pROC::coords(r) # pROC is never attached, so coords needs the pROC:: prefix
################################################################################
> pROC::coords(r)
threshold specificity sensitivity
1 -Inf 0.0 1.0
2 0.02950050 0.1 1.0
3 0.04154948 0.2 1.0
4 0.07094372 0.3 1.0
5 0.09818741 0.4 1.0
6 0.11496482 0.5 1.0
7 0.13298654 0.6 1.0
8 0.16426112 0.7 1.0
9 0.22426518 0.8 1.0
10 0.41298569 0.8 0.9
11 0.60549653 0.8 0.8
12 0.68211752 0.8 0.7
13 0.75138334 0.8 0.6
14 0.78644297 0.8 0.5
15 0.79864440 0.8 0.4
16 0.81317781 0.8 0.3
17 0.84750176 0.9 0.3
18 0.88146272 1.0 0.3
19 0.92519032 1.0 0.2
20 0.98128671 1.0 0.1
21 Inf 1.0 0.0
EDIT
The comments correctly point out that, in the example above, you can get perfect sensitivity or specificity without resorting to infinite thresholds. That is because my example simulates probability predictions, which are bounded in $(0,1)$. However, the roc function can handle any continuum of values, such as log-odds.
set.seed(2023)
N <- 20
p <- rbeta(N, 1/2, 1/2) # simulated probability predictions
y <- rbinom(N, 1, p) # simulated true classes
r <- pROC::roc(y, log(p/(1-p)))
pROC::coords(r)
As the probabilities get very small or very large, the log-odds approach $-\infty$ and $+\infty$, respectively.
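A one-line check in base R: qlogis is exactly this log-odds transform $\log(p/(1-p))$, and it maps the endpoints of $(0,1)$ to infinite values.

```r
qlogis(0.5)             # 0: even odds
qlogis(c(0.001, 0.999)) # about -6.9 and 6.9
qlogis(c(0, 1))         # -Inf and Inf at the endpoints
```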
EDIT 2
The reason the R output contains perfect sensitivity and specificity values at finite thresholds is that the simulation happens not to produce a $1$ for any of the very low $p$ values, nor a $0$ for any of the very high $p$ values. In theory, though, for any probability in $(0,1)$ (equivalently, any real value of the log-odds), a point can belong to either true category, no matter how remote the chance. This necessitates the $\pm\infty$ thresholds as the only ways of assuring perfect sensitivity (catch all $1$s, no matter how many $0$s are misclassified) or perfect specificity (catch all $0$s, no matter how many $1$s are misclassified).
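To make the guarantee concrete, here is a minimal base-R sketch (the sens and spec helpers are mine, not pROC's) computing both quantities directly at the infinite thresholds. Whatever the data, the guaranteed quantity is always $1$:

```r
y <- c(0, 1, 1, 0, 1)           # hypothetical true classes
p <- c(0.2, 0.7, 0.4, 0.9, 0.6) # hypothetical predictions
sens <- function(thr) mean(p[y == 1] > thr)  # fraction of 1s caught
spec <- function(thr) mean(p[y == 0] <= thr) # fraction of 0s caught
sens(-Inf) # 1: every prediction exceeds -Inf
spec(Inf)  # 1: every prediction is below +Inf
spec(-Inf) # 0: the price of guaranteed sensitivity
sens(Inf)  # 0: the price of guaranteed specificity
```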