Several things:
1) When doing hypothesis tests, the decision is the same whether you use p-values or critical values (if it isn't, you did something wrong, or at least inconsistent).
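To see the equivalence concretely, here's a minimal sketch in Python (scipy assumed, data made up): for a two-sided two-sample t-test, the decision from the p-value and the decision from the critical value are the same rule expressed two ways.

```python
from scipy import stats

# Hypothetical samples (any data would illustrate the point)
x = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0]
y = [4.2, 4.8, 5.0, 4.5, 4.9, 4.4]
alpha = 0.05

t, p = stats.ttest_ind(x, y, equal_var=True)   # pooled two-sample t-test
df = len(x) + len(y) - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)        # two-sided critical value

# The two decision rules are logically equivalent:
# p < alpha  if and only if  |t| > t_crit
assert (p < alpha) == (abs(t) > t_crit)
print(f"t = {t:.3f}, p = {p:.4f}, critical value = {t_crit:.3f}")
```

If the assertion ever failed, it would indicate an arithmetic slip somewhere, not a property of the data.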
2) When sample sizes are equal, the t-test (or ANOVA) is considerably less sensitive to differences in variance than it is when sample sizes are unequal.
3) You shouldn't use a formal equality-of-variance test to decide whether or not to assume equal variances; the resulting two-stage procedure for testing equality of means doesn't have the properties you'd likely wish it did. If you're not reasonably comfortable with the equal-variance assumption, don't make it (if you like, assume the variances always differ unless you have some reason to think they'll be fairly close). The t-test (and ANOVA) procedures aren't highly sensitive to small-to-moderate differences in population variance, so with equal (or nearly equal) sample sizes you should be safe whenever you're confident the variances aren't very different.
4) The "usual" F-test for equality of variance is extremely sensitive to non-normality. If you must test equality of variance, using that test wouldn't be my advice.
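Point 4 is easy to see by simulation; a sketch in Python (numpy/scipy assumed, setup hypothetical). Both samples are drawn from the same heavy-tailed distribution, so the population variances are exactly equal, yet the variance-ratio F-test rejects far more often than its nominal 5% level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_sims = 0.05, 30, 2000
rejections = 0

for _ in range(n_sims):
    # Both samples from a t(3) distribution: equal variances, heavy tails
    x = rng.standard_t(df=3, size=n)
    y = rng.standard_t(df=3, size=n)
    # Classic variance-ratio F-test, two-sided
    F = np.var(x, ddof=1) / np.var(y, ddof=1)
    p = 2 * min(stats.f.cdf(F, n - 1, n - 1), stats.f.sf(F, n - 1, n - 1))
    rejections += p < alpha

print(f"Empirical Type I error: {rejections / n_sims:.3f} (nominal {alpha})")
```

The empirical rejection rate comes out several times the nominal level, which is exactly the sensitivity to non-normality being warned about.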
Which is to say: if you're able to do a Welch-type test or something similar, you may be better off just doing so. It will never cost you much, and it may save you a lot. (In your particular situation you're probably safe enough without it, but there's no particular reason not to do it.)
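In Python's scipy, for instance, the Welch version is a single flag; a sketch with made-up data (note scipy's default is the pooled test, the opposite of R's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical groups with different spreads
a = rng.normal(loc=10.0, scale=1.0, size=25)
b = rng.normal(loc=10.5, scale=3.0, size=25)

# Welch's t-test: no equal-variance assumption
t_w, p_w = stats.ttest_ind(a, b, equal_var=False)

# Pooled (equal-variance) version, for comparison
t_p, p_p = stats.ttest_ind(a, b, equal_var=True)

print(f"Welch:  t = {t_w:.3f}, p = {p_w:.4f}")
print(f"Pooled: t = {t_p:.3f}, p = {p_p:.4f}")
```

With equal sample sizes the two test statistics coincide exactly (only the degrees of freedom differ), which is part of why equal n is protective, as in point 2 above.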
I'll note that R uses the Welch test by default when you run a two-sample t-test with t.test; it only does the equal-variance version when you explicitly ask for it (var.equal = TRUE). I think that's the right way around (do the safer thing by default), if only to save us from ourselves.