Here's the problem: the effect sizes you used for your power analysis are likely to be very noisy estimates, because they come from a small dataset of 20 subjects. So much so that you probably shouldn't have much faith in either the low estimate for DV1 or the higher estimate for DV2. You may be on more solid ground if you have additional lines of evidence pointing towards those effect sizes, though be aware that published estimates are often inflated by selection for statistical significance (the winner's curse).
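To get a feel for just how noisy, here's a quick simulation. It assumes a two-group comparison with 10 subjects per group and a true standardized effect (Cohen's d) of 0.5; neither of those details comes from your question, so substitute your own design:

```python
# How variable is an effect-size estimate at n = 20 total?
# Assumed design: two groups of 10, true Cohen's d = 0.5.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, true_d, n_reps = 10, 0.5, 10_000

d_hat = np.empty(n_reps)
for i in range(n_reps):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_d, 1.0, n_per_group)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d_hat[i] = (b.mean() - a.mean()) / pooled_sd

print(f"true d = {true_d}; 2.5th-97.5th percentile of estimates: "
      f"{np.percentile(d_hat, 2.5):.2f} to {np.percentile(d_hat, 97.5):.2f}")
```

With these numbers the estimated d ranges from roughly -0.4 to 1.4 across replications, which is the sense in which a pilot of 20 can't really distinguish your "low" and "higher" estimates.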
But you have to make a decision, and using your pilot and power analysis is certainly a good start. What other information can you use? Well, I'd strongly recommend considering what the minimum interesting/meaningful effect size would be, and calculating what sample size would give you 90-95% power for that effect size. The choice of threshold is arbitrary, but 80% power seems low to me. Even if everything else is accurate, that's still a rather high 1 in 5 chance of failing to detect the effect of interest.
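For a standard test that calculation is a one-liner. A minimal sketch with statsmodels, assuming a two-sided two-sample t-test and a minimum interesting effect of d = 0.3 (both placeholders; use your actual test and threshold):

```python
# Sample size needed to detect the smallest effect you'd care about
# at 90% power. Assumes a two-sided two-sample t-test and d = 0.3.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.3, power=0.90,
                                alpha=0.05, alternative='two-sided')
print(f"n per group: {n:.0f}")  # ~235 per group
```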
Be aware that, under reasonable assumptions, estimating interactions reliably may need more than 10x as much data as the main effects: in a 2x2 design the interaction contrast has twice the standard error of a main-effect contrast, so if the interaction is also half the size of the main effect, you need 16x the sample to estimate it with the same precision. So it may make sense to focus on the main effects if you are logistically constrained.
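The back-of-the-envelope version, assuming a 2x2 between-subjects design and an interaction half the size of the main effect (my assumptions, not your design):

```python
# Interaction penalty in a 2x2 between-subjects design, using the
# normal approximation. Assumes the interaction is half the size of
# the main effect, the usual assumption behind the "16x" rule.
import math

z_alpha, z_power = 1.96, 1.2816  # alpha = .05 two-sided, 90% power
d_main = 0.5                     # hypothetical main effect (Cohen's d)
d_int = d_main / 2               # interaction assumed half as large

# Main-effect contrast pools two cells per arm: SE ~ sigma * sqrt(4/N).
# Interaction contrast (A - B) - (C - D) with N/4 per cell:
# SE ~ sigma * sqrt(16/N), i.e. twice as large (N = total sample size).
N_main = 4 * (z_alpha + z_power) ** 2 / d_main ** 2
N_int = 16 * (z_alpha + z_power) ** 2 / d_int ** 2
print(f"total N, main effect: {N_main:.0f}")  # ~168
print(f"total N, interaction: {N_int:.0f}")   # ~2690, i.e. 16x
```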
I can't provide strong advice about whether to drop DV1 entirely, because that depends on your interests, constraints, and the nature of the data (I assume experimental). But based on the limited information you present, it seems reasonable to focus on DV2. Note that, per the points above, it would still be advisable to use a sample size larger than the minimum your power analysis suggests. In general, I think reliable parameter estimates in these kinds of studies require much larger samples than a few tens of individuals, though of course this varies substantially depending on the question.
As for the number of simulations: yes, 100 seems low. With 100 runs, a power estimate near 80% has a Monte Carlo standard error of about 4 percentage points, so the estimate could easily be off by 8 points in either direction. Given that there's little cost to running longer simulations, I would try at least 10-100 times as many.
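The precision of a simulated power estimate scales with the square root of the number of runs, since each run is a Bernoulli trial (reject or not). A small sketch of the Monte Carlo error:

```python
# Monte Carlo error of a simulated power estimate: with n_sim runs and
# true power p, the standard error is sqrt(p * (1 - p) / n_sim).
import math

p = 0.80  # power in the region you're targeting
for n_sim in (100, 1_000, 10_000):
    half_width = 1.96 * math.sqrt(p * (1 - p) / n_sim)
    print(f"n_sim = {n_sim:>6}: power ~ {p:.2f} +/- {half_width:.3f}")
```

At 10,000 runs the 95% interval is already under +/- 1 percentage point, which is usually precise enough for planning purposes.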