Try googling "Guttman scaling" and "Mokken scaling" to get an idea of how Likert items might be scaled. Then look at "Mokken's AISP" for an automated method of partitioning Likert items (and, if necessary, discarding some) into groups that scale together, on the basis of 'Guttman errors'.
(This is all implemented in the R package mokken, though you'll want to read the vignette / JSS article carefully before applying.)
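To make "Guttman errors" concrete, here is a minimal sketch (pure Python, not the mokken package) of counting them and computing Loevinger's H for a single pair of dichotomous items; the real package handles polytomous items, whole scales, and standard errors. The function name `loevinger_h` and the toy data are my own for illustration.

```python
# A Guttman error for an item pair is endorsing the "hard" (less popular)
# item while failing to endorse the "easy" (more popular) one. Loevinger's
# H compares the observed error count to the count expected if the two
# items were statistically independent.
def loevinger_h(easy, hard):
    """easy, hard: equal-length lists of 0/1 responses; 'easy' is the
    more popular item of the pair."""
    n = len(easy)
    observed = sum(1 for e, h in zip(easy, hard) if h == 1 and e == 0)
    p_fail_easy = sum(1 - e for e in easy) / n
    p_pass_hard = sum(hard) / n
    expected = n * p_fail_easy * p_pass_hard
    return 1 - observed / expected

# A perfect Guttman pattern (everyone who passes the hard item also
# passes the easy one) has no errors, so H = 1.
easy = [1, 1, 1, 1, 0, 0]
hard = [1, 1, 0, 0, 0, 0]
print(loevinger_h(easy, hard))  # -> 1.0
```

AISP builds on pairwise and item-level versions of this H: roughly, it grows clusters of items whose H values stay above a chosen lower bound (0.3 by default in mokken).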
If you find that your A and your B items partition nicely when AISP is applied to them all at once, then you have some evidence that your intuitions about what they measure and what they actually measure are aligned. If you find that AISP groupings mix items from A and B, then you have some evidence of overlap between the two concepts. Finally, if AISP finds multiple groups within A and/or B, then you have some evidence that more than one thing is being measured within your intuitive categories.
These conclusions make strong but often reasonable assumptions about how your concepts relate to your items: principally, that the relationship is one of dominance rather than, say, a class or an unfolding structure. For those, different models are appropriate.
If you are happy to make the dominance assumption but would prefer to use factor analysis (for some reason), then you can do exploratory factor analysis and rotate to identify items that form coherent scales. You'll want to use the matrix of polychoric correlations among your items, not the usual Pearson correlation coefficients. For the actual analysis you'd probably want to fit some form of ordinal IRT model to the data directly.