One methodology for selecting a subset of your available features for your classifier is to rank them according to a criterion (such as information gain) and then compute the accuracy of your classifier on successively larger subsets of the ranked features.
For example, if your features are A, B, C, D, E, and they are ranked as follows: D, B, C, E, A, then you compute the accuracy using D alone, then D, B, then D, B, C, then D, B, C, E, and so on, until your accuracy starts decreasing. Once it starts decreasing, you stop adding features.
In example 1 (above), you would pick features F, C, D, A and drop the remaining features, as they decrease your accuracy.
That methodology assumes that adding more features increases the accuracy of your classifier up to a certain point, after which adding further features decreases it (as seen in example 1).
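The procedure above can be sketched as a short greedy loop. This is only an illustration, not my actual code: `accuracy_fn` is a hypothetical placeholder for whatever classifier-plus-validation routine you use, and the toy accuracy values are made up to mimic the rise-then-fall shape of example 1.

```python
def select_by_ranking(ranked_features, accuracy_fn):
    """Grow the feature subset in ranked order; stop at the first
    accuracy drop. accuracy_fn maps a feature subset to a validation
    accuracy (placeholder for your classifier)."""
    best_subset = []
    best_acc = float("-inf")
    for feature in ranked_features:
        candidate = best_subset + [feature]
        acc = accuracy_fn(candidate)
        if acc < best_acc:  # first decrease: stop adding features
            break
        best_subset, best_acc = candidate, acc
    return best_subset, best_acc

# Toy accuracy curve (illustrative values only): accuracy rises through
# the first four ranked features F, C, D, A, then falls.
toy_acc = {1: 0.70, 2: 0.74, 3: 0.78, 4: 0.80, 5: 0.77, 6: 0.75}
subset, acc = select_by_ranking(list("FCDAEB"), lambda s: toy_acc[len(s)])
print(subset, acc)  # ['F', 'C', 'D', 'A'] 0.8
```

Note that this stops at the first local maximum, which is exactly why it behaves oddly when the accuracy curve is not unimodal, as described below.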
However, my situation is different. I applied the methodology described above and found that adding more features decreased the accuracy up to a point, after which it increased.
In a scenario like this one, how do you pick your features? Do you pick only F and drop the rest? Do you have any idea why the accuracy would decrease and then increase?

