A continuous variable, originally measured as a proportion, was categorized into 10 equally spaced intervals. Now we have to use this variable in the CATREG procedure.
No, you don't actually have to bin any continuous variable, and probably shouldn't (see Royston et al., 2006).
As Peter already noted in the comments, there is no need to bin numeric values into a categorical response. There are two reasons why this isn't ideal for your scenario. First, it necessarily comes at an information loss: you squeeze the variance into only ten possible values rather than the full spectrum of observed values, which yields less precise estimates. Second, are these categorizations even meaningful? What does Group 1 say about Group 8? With this many groups, it is hard to interpret the contrasts created by the CATREG procedure in SPSS, or by any categorical regression for that matter.
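To make the contrast problem concrete, here is a small sketch with hypothetical proportion data (the variable name `prop` and its even spacing are made up purely for illustration): cutting one continuous predictor into 10 intervals replaces a single slope with nine dummy-coded contrasts.

```r
# Hypothetical proportion data, evenly spaced for illustration only
prop <- seq(0.01, 0.99, length.out = 100)

# Bin into 10 equally spaced intervals, as in the question
g <- cut(prop, breaks = 10)

nlevels(g)               # 10 groups
ncol(model.matrix(~ g))  # 10 columns: an intercept plus 9 dummy contrasts
```

Each of those nine contrasts needs its own interpretation, where the original continuous predictor needed only one.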
Here is a simulated example of why this matters in terms of information loss, where the continuous predictor $x$ is dichotomized at its mean:
```r
#### Simulate Data ####
library(dplyr)
library(ggplot2)

set.seed(123)
x <- rnorm(100)
y <- rnorm(100, sd = 3) + x
cat.x <- ifelse(x < mean(x), 0, 1)
df <- data.frame(x, y, cat.x)

#### Fit Regressions ####
fit.num <- lm(y ~ x)
fit.cat <- lm(y ~ cat.x)
summary(fit.num)
summary(fit.cat)

#### Save Plots ####
p1 <- df %>%
  ggplot() +
  geom_point(aes(x, y)) +
  theme_bw() +
  labs(title = "Continuous Data",
       subtitle = "Beta: .842, SE: .321, P: .01") +
  geom_abline(
    intercept = coef(fit.num)[1],
    slope = coef(fit.num)[2],
    color = "darkred"
  )

p2 <- df %>%
  ggplot() +
  geom_jitter(
    aes(cat.x, y),
    width = .01,
    height = .01
  ) +
  theme_bw() +
  labs(title = "Binned Data",
       subtitle = "Beta: 1.446, SE: .585, P: .02") +
  geom_abline(
    intercept = coef(fit.cat)[1],
    slope = coef(fit.cat)[2],
    color = "darkred"
  )

#### View Together ####
ggpubr::ggarrange(p1, p2)
```
You can see in the plots below that the binned version comes at an information loss: the standard error is nearly twice what it was, the raw beta estimate changes in meaning (it is now a mean contrast between the two groups rather than a slope), and the $p$-value grows accordingly.
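If you want to read the standard-error inflation off directly rather than scanning the full `summary()` output, the slope standard errors can be pulled from each fit's coefficient table (the simulation is repeated here so the snippet runs on its own):

```r
# Same simulation as above, repeated so this snippet is self-contained
set.seed(123)
x <- rnorm(100)
y <- rnorm(100, sd = 3) + x
cat.x <- ifelse(x < mean(x), 0, 1)

# Extract the slope standard error from each fit's coefficient table
se.num <- summary(lm(y ~ x))$coefficients["x", "Std. Error"]
se.cat <- summary(lm(y ~ cat.x))$coefficients["cat.x", "Std. Error"]

round(c(continuous = se.num, binned = se.cat), 3)
```

The binned fit's standard error comes out markedly larger, matching the values shown in the plot subtitles.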

It would be better to simply run the regression in SPSS with the numeric data in hand and get more precise estimates, as this will capture more variance in the outcomes anyway.
@ttnphns makes several good suggestions, as well.