As you seem to be using the rms package in R, you can accomplish what you want directly with that package. There can be something of a learning curve with it. In particular, it works best with data for which a "datadist" has been defined and made available to the software. Take the time to learn it; it's worth it. This Introduction to the "Harrell-verse" might be helpful, as would the many examples in Harrell's Regression Modeling Strategies.
For your specific question, the default knot locations, based on quantiles of the predictor values, typically work well. You might choose other locations if your prior knowledge of the subject matter indicates a region of predictor values where you expect a large change in outcome. Resist the temptation to play around with knot locations based on how well the resulting models fit, as you would then need to account for that use of the outcomes in defining the model. See this answer.
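As a minimal sketch of the two choices (the variable names and knot values here are purely illustrative, not from your data):

```r
library(rms)

## Default: rcs() places 4 knots at its quantile-based locations
form_default <- y ~ rcs(age, 4) + sex

## Only with a prior subject-matter rationale: supply explicit knot locations
form_custom  <- y ~ rcs(age, c(35, 50, 65, 80)) + sex
```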
I prefer the rcs() default knot locations to those in the ns() function mentioned in the answer I just linked, as the outermost knots aren't forced to be at the extreme predictor values. That helps make knot placement less dependent on vagaries of a particular data sample. The rcspline.eval() function of the Hmisc package does the default placement:
If not given, knots will be estimated using default quantiles of x. For 3 knots, the outer quantiles used are 0.10 and 0.90. For 4-6 knots, the outer quantiles used are 0.05 and 0.95. For nk > 6, the outer quantiles are 0.025 and 0.975. The knots are equally spaced between these on the quantile scale. For fewer than 100 non-missing values of x, the outer knots are the 5th smallest and largest x.
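You can inspect those defaults directly; here is a quick check on a simulated predictor (the vector x is hypothetical, just for illustration):

```r
library(Hmisc)

set.seed(1)
x <- rexp(500)  # hypothetical, skewed predictor

## Return only the default knot locations; with nk = 4 the outer
## knots sit at the 0.05 and 0.95 quantiles of x
rcspline.eval(x, nk = 4, knots.only = TRUE)
```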
One of the great strengths of the rms package is its implementation of the anova() function for its model fits, which performs "chunk tests" on all terms involving each predictor, whether spline terms or interactions. That avoids the problem I think you fear: how to interpret a whole set of individual regression coefficients involving a single predictor in a typical model summary. When working in rms, anova(model) will do what you want.
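For example, a minimal sketch of that workflow on simulated data (the names age, sex, and y are placeholders, not from your data):

```r
library(rms)

## Simulated data, purely illustrative
set.seed(2)
d <- data.frame(age = rnorm(300, 50, 10),
                sex = factor(sample(c("F", "M"), 300, replace = TRUE)))
d$y <- 0.02 * (d$age - 50)^2 + (d$sex == "M") + rnorm(300)

## rms works best with a datadist registered via options()
dd <- datadist(d)
options(datadist = "dd")

## Restricted cubic spline in age with 4 default (quantile-based) knots
fit <- ols(y ~ rcs(age, 4) + sex, data = d)

## Chunk tests: all terms involving each predictor are tested together,
## so you get one overall test for age rather than per-coefficient tests
anova(fit)
```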