Firstly, don't undersample unless you have vast amounts of data and need to reduce it for computational reasons.
Rather than resampling, use a different value of the C hyper-parameter for each class. This has much the same effect as resampling, but without throwing away any data (the majority class is merely down-weighted). For details see my paper (with Mrs Marsupial):
G. C. Cawley and N. L. C. Talbot, "Manipulation of prior probabilities in support vector classification," in Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-2001), Washington, D.C., U.S.A., July 15-19 2001, pp. 2433-2438.
There were several other papers written on this topic at about the same time; that one was just the easiest for me to find. Most modern SVM implementations support class-wise C values.
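For illustration, here is a minimal sketch using scikit-learn's SVC, which exposes exactly this mechanism through its class_weight argument (the effective penalty for class k is C * class_weight[k]); the 19:1 weight ratio below is just an assumption chosen to match the 95:5 class split:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic imbalanced problem: roughly 95% negatives, 5% positives.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)

# Slack variables for the minority class (label 1) are penalised 19x more
# heavily, which has much the same effect as oversampling it 19-fold,
# but without duplicating (or discarding) any data.
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 19.0})
clf.fit(X, y)

# class_weight="balanced" computes n_samples / (n_classes * n_k) per class
# automatically, which gives the same 19:1 ratio for a 95:5 split.
clf_auto = SVC(kernel="rbf", C=1.0, class_weight="balanced").fit(X, y)
```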
However, the reason for changing the C parameters (or resampling) is not the class imbalance itself, but that false-positive and false-negative errors have different costs.
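A common heuristic (not the only option) is to make the ratio of the class-wise penalties match the cost ratio, so that each pattern's slack is penalised in proportion to the cost of misclassifying it:

$$\frac{C_{+}}{C_{-}} = \frac{c_{FN}}{c_{FP}},$$

where $c_{FN}$ and $c_{FP}$ are the costs of a false negative and a false positive respectively; with equal costs this reduces to the ordinary unweighted SVM.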
The approach you intend to take is unlikely to work: as soon as you delete some of the support vectors, you change the solution, since the support vectors are precisely the patterns that define the optimal hyperplane. Deleting them invalidates much of the theory on which the SVM is based, so I would advise against it.
I don't think SVMs have any particular difficulty with imbalanced learning tasks, and if yours is assigning everything to the majority class, it is quite likely that this is actually the optimal solution for equal false-positive/false-negative costs (see my related question here).
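To see why, recall the Bayes-optimal decision rule for binary classification with misclassification costs (a general result, not specific to SVMs): predict the positive class only when

$$p(y = +1 \mid x) > \frac{c_{FP}}{c_{FP} + c_{FN}}.$$

With equal costs the threshold is 1/2, so if the minority-class posterior never exceeds 1/2 anywhere (easily the case when, say, only 1% of patterns are positive and the classes overlap), then always predicting the majority class genuinely minimises the expected cost.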