All of that is perfectly normal.
Boost vs. cut
What our ears do is not really a Fourier transform like in spectrum analysers¹; it's more comparable to an autocorrelation function. For instance, we still recognise a pitch without a problem even when the fundamental frequency is completely missing! This is useful in natural hearing environments, since reflections may in fact cancel out single harmonics completely, yet the autocorrelation is hardly affected by that. And that is exactly why a single band cut from an EQ isn't very audible either, at least as long as you don't sweep it around much (which is essentially how a phaser works). Incidentally, that's also the reason why sharp band cuts are a tool you can happily use to fight feedback: even when those frequencies are then completely missing from the mix, our ears are able to make up for it.
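If you want to see the missing-fundamental effect for yourself, here's a minimal sketch (assuming Python with NumPy; the 100 Hz fundamental and 48 kHz sample rate are just example values): it builds a tone from harmonics 2–8 only and shows that the autocorrelation still peaks at the 100 Hz period.

```python
import numpy as np

fs = 48000                       # sample rate in Hz
f0 = 100.0                       # the "missing" fundamental
n = fs // 5                      # 0.2 s of signal is plenty here
t = np.arange(n) / fs

# Harmonics 2..8 only -- the 100 Hz fundamental itself is absent.
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 9))

# Autocorrelation, positive lags only.
ac = np.correlate(x, x, mode="full")[n - 1:]

# First strong peak after lag 0 (skip lags shorter than ~1 ms).
min_lag = fs // 1000
lag = min_lag + np.argmax(ac[min_lag:fs // 20])
print(f"autocorrelation peak at lag {lag} samples -> {fs / lag:.1f} Hz")
# Reports roughly 100 Hz although there is no 100 Hz component in x at all.
```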
It's quite the opposite for sharp band boosts. Those are naturally associated with actual sound sources, or at least with strong resonances, which is why our ears are specialised in recognising them. A single peak in the Fourier spectrum also shows up prominently in an autocorrelation diagram, unlike a single notch. Practically, this is why you should use notches to fight feedback: the sharp peaks (feedback is essentially an overcritical resonance) are much more audible than sharp notches. Of course, this makes it harder to hear where you need to put the notches; an effective if somewhat brutal practice is to first tune a sharp band boost until it sounds as bad as possible, and then pull the gain below 0 dB to turn it into a barely audible yet effective band cut.
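To make the boost-then-cut trick concrete, here's a rough sketch using a standard peaking-EQ biquad (coefficients as in the widely used Audio EQ Cookbook formulas; SciPy is assumed only for applying the filter, and the 2.5 kHz centre frequency and Q of 8 are made-up example values, not a recipe):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """Biquad coefficients (b, a) for a peaking EQ at f0 Hz with the given Q."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

fs = 48000
f0, q = 2500.0, 8.0              # hypothetical ringing frequency, narrow band

# Step 1: boost and sweep to find the offending frequency (here +12 dB).
b_boost, a_boost = peaking_eq(f0, +12.0, q, fs)
# Step 2: flip the gain into a barely audible but effective cut (-12 dB).
b_cut, a_cut = peaking_eq(f0, -12.0, q, fs)

x = np.random.randn(fs)          # stand-in for the live signal
y = lfilter(b_cut, a_cut, x)     # apply the narrow cut
```

Same band, same Q, only the sign of the gain changes between hunting and fixing.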
Perfect pitch
Recognising a note just by hearing it is quite another issue. There are pretty few people who can do this; it's called absolute pitch and is the subject of a considerable amount of study. Musicians may have it more often than the general population, but most actually don't. It's not necessary for being a good musician or sound engineer; often helpful, but perhaps also distracting at times.
So what do you do when you hear a note and don't know what it is? Well, if you have the score, you just look it up and there it is. Or, if you at least know the harmonic context, you realise "aha, that's the third of a IV chord, so in D major that would be a B".
Seriously though, as a sound engineer you don't need to know the musical pitches that badly, unless you want to communicate something to the musicians. Otherwise you basically just need a frequency value to dial in as some EQ parameter or whatever. And reading out frequencies is best done with a dedicated device.
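For what it's worth, translating a note name into the frequency you actually type into an EQ is a one-liner in equal temperament (A4 = 440 Hz assumed); a little sketch:

```python
# Equal-temperament pitch-to-frequency conversion via MIDI note numbers.
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_freq(name, octave, accidental=0):
    """Frequency in Hz, e.g. note_to_freq('B', 3) or note_to_freq('B', 3, -1) for Bb3."""
    midi = 12 * (octave + 1) + NOTE_OFFSETS[name] + accidental
    return 440.0 * 2 ** ((midi - 69) / 12)

print(f"B3 ~ {note_to_freq('B', 3):.1f} Hz")   # the third of that IV chord in D major
print(f"A4 ~ {note_to_freq('A', 4):.1f} Hz")   # reference pitch
```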
Technical help, analysers
IMO there's nothing wrong at all with using metering tools etc. to make up for this. Many guys will tell you these are evil, that you should rely on your ears alone, that old analogue boxes with unlabelled magic knobs beat everything, etc. etc. Mostly bogus, if you ask me. What's correct is that you shouldn't just apply generic wisdom like "always boost 80 Hz and 7 kHz on a bass drum" or "always put a compressor on vocals" if the material in question doesn't call for it. With enough experience, you will hear when some kind of fix is necessary, and if you don't hear it, then you don't need it either. If you do hear it and have some idea of how it might sound better, then why shouldn't you use any tool available? Spectrum analysers, being directly Fourier-based, can do some things the ear is trained not to do, like sensing notches as well as peaks. They also have perfect pitch for sure, being built on exact mathematical operations. That's a lot of information which can only help, never hurt, in coming up with the right parameter settings for an EQ etc. – and that is obviously much easier when you can jump straight to precise values instead of doing laborious step-by-step tweaks.
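As a small illustration of the notch-spotting point, here's a sketch of what an FFT-based analyser "sees" (NumPy/SciPy assumed; the 1 kHz notch frequency is just an example): a narrow cut that the ear glosses over still shows up as an obvious dip in an averaged magnitude spectrum.

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 48000
x = np.random.randn(8 * fs)                  # broadband test signal (white noise)
b, a = iirnotch(w0=1000.0, Q=30.0, fs=fs)    # narrow cut at 1 kHz
y = lfilter(b, a, x)

# Averaged magnitude spectrum (a crude RTA): window, FFT, average the blocks.
n = 4096
win = np.hanning(n)
blocks = y[: len(y) // n * n].reshape(-1, n) * win
spec = np.abs(np.fft.rfft(blocks, axis=1)).mean(axis=0)
freqs = np.fft.rfftfreq(n, 1 / fs)

dip_bin = 1 + np.argmin(spec[1:])            # skip the DC bin
print(f"deepest dip at ~{freqs[dip_bin]:.0f} Hz")   # should land near the 1 kHz notch
```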
Sure, the more you can do with your ears alone, the better. But it doesn't really matter how you obtain the information, as long as you produce a good end result. Anyone's ears are notoriously prone to the placebo effect; looking at meters is much more objective. Yet it's not as if you're no longer working with your ears. Ears remain terrific at judging the overall sound, at telling which peak broadening is merely due to time confinement and which is actual frequency smearing, and at identifying each of multiple sound sources. It's still your ears that tell you what needs to be done, even if it's your meters that tell you how to do it.
¹ Many spectrum analysers do not work with the usual short-time discrete Fourier transform but with separate bandpass filters; conceptually that's still a Fourier transform, though, since it models the signal by sinusoids.