Many textbooks and online articles introduce cross-validation and bootstrapping together, giving me the impression that they are different methods for achieving the same goal. In the model training and validation scenario, cross-validation (e.g. leave-one-out, k-fold) is easy to understand: it splits the data into training and validation sets, fits the model on the training set, and uses the validation set for validation. However, I couldn't find a good introduction to doing the same thing with bootstrapping. My guess is that after generating a bootstrap sample, I should train the model on the bootstrap sample and validate it on the observations outside that sample, then repeat the process B times. But I am not sure whether this understanding is correct or whether there is more to it.
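For concreteness, here is a short Python sketch of the procedure I'm guessing at (the dataset and the logistic regression classifier are arbitrary placeholders, just to make the example runnable):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
n = len(X)

B = 50  # number of bootstrap repetitions
scores = []
for _ in range(B):
    # draw a bootstrap sample: n indices sampled with replacement
    idx = rng.integers(0, n, size=n)
    # observations not in the bootstrap sample ("out-of-bag")
    oob = np.setdiff1d(np.arange(n), idx)
    if len(oob) == 0:
        continue
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    scores.append(accuracy_score(y[oob], model.predict(X[oob])))

print(np.mean(scores))  # average out-of-bag accuracy over B repetitions
```

Is averaging the out-of-bag scores over the B repetitions, as in the last line, the right way to get the final validation estimate?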
I would appreciate it if anyone could share a clear procedure for using bootstrapping for model validation and model selection, or point me to a document that explains it.