A related question covers the use of ordinal regression to handle the outcome values.
There's no need to categorize the predictor variables further. Since each takes integer values from 0 to 10, you can treat it as an 11-level factor: convert it to a factor variable and use that factor as the predictor.
If you convert to an unordered factor you get an independently estimated predicted outcome for each of the 11 levels of the predictor. If you consider the predictor levels to be evenly spaced in some sense you might convert to an ordered factor instead, for which R's default contrasts fit orthogonal polynomials. The downside of treating the predictor as a factor is that you end up with 10 regression coefficients to estimate, which can be tricky unless you have a large number of observations.
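To make the factor approach concrete, here is a minimal sketch using polr() from the MASS package for the proportional-odds fit. The data frame df and its columns outcome and predictor are hypothetical stand-ins for your data.

library(MASS)  # polr() fits a proportional-odds ordinal regression model

## hypothetical data: integer 0-10 predictor, 3-level ordinal outcome
set.seed(1)
df <- data.frame(predictor = sample(0:10, 200, replace = TRUE))
df$outcome <- ordered(cut(df$predictor + rnorm(200, sd = 3),
                          breaks = 3,
                          labels = c("low", "medium", "high")))

## unordered factor: one independent coefficient per non-reference level
df$predF <- factor(df$predictor)  # 11 levels -> 10 coefficients
fitUnordered <- polr(outcome ~ predF, data = df)

## ordered factor: R's default contr.poly contrasts fit orthogonal polynomials
df$predOrd <- factor(df$predictor, ordered = TRUE)
fitOrdered <- polr(outcome ~ predOrd, data = df)

summary(fitUnordered)  # compare per-level estimates against fitOrdered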
A potentially better solution, which doesn't involve conversion to a factor, is to let the data tell you the overall shape of the association by modeling the predictor variable flexibly. A regression spline is often used here; a restricted cubic spline with k knots, for example, requires only k - 1 coefficients. You can then plot the predicted outcomes as a smooth function of the predictor values (even though the predictors themselves only take on a few values).
The rms package in R provides the tools you need to fit an ordinal regression model with a regression spline. You would write something like

library(rms)
ordModel <- lrm(outcome ~ rcs(predictor, 4), data = yourData)

where 4 is the number of "knots" around which the restricted cubic spline is built (so just 3 spline coefficients here), and then use the package's Predict() function on the ordModel object to illustrate the results.
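Putting that together, a minimal end-to-end sketch might look like the following. The data frame yourData and its columns are hypothetical; note that rms expects a datadist object to be set before Predict() can choose sensible ranges for plotting.

library(rms)

## hypothetical data: integer 0-10 predictor, 4-level ordinal outcome
set.seed(1)
yourData <- data.frame(predictor = sample(0:10, 300, replace = TRUE))
yourData$outcome <- ordered(cut(yourData$predictor + rnorm(300, sd = 3),
                                breaks = 4,
                                labels = c("none", "mild", "moderate", "severe")))

## rms uses a datadist object to know the predictor's range and typical values
dd <- datadist(yourData)
options(datadist = "dd")

## proportional-odds model with a 4-knot restricted cubic spline (3 coefficients)
ordModel <- lrm(outcome ~ rcs(predictor, 4), data = yourData)

## predicted log odds as a smooth function of the predictor, with confidence bands
preds <- Predict(ordModel, predictor)
plot(preds)  # lattice plot; ggplot(preds) gives a ggplot2 version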
I recommend Frank Harrell's Regression Modeling Strategies (Harrell is also the author of the rms package) as a useful resource once you get into situations that require anything beyond the simplest of linear regression models.