You haven't said whether you need 2D or 3D positioning, but if you have only 3 beacons you will need to stick to 2D. Let's assume you know the XY position of the beacons and that the robot's sensor and the beacon emitters are at about the same height.
Geometrically, yes it can be done, with some limitations. Take a pair of beacons B1 and B2; suppose your robot sees them as A12 degrees apart. Now imagine that the beacons are pins stuck in a corkboard, and you have a cardboard triangle with one corner A12 degrees wide. You can press the cardboard between the pins and rotate it while keeping both triangle edges against the pins, marking where the corner travels. Interestingly, that corner traces a circular arc between the pins (actually two arcs, one on each side of the line through the beacons).
Now conceptually draw the equivalent with B2 and B3 and angle A23 - more circular arcs, which will intersect the first set. And for good measure use B3 and B1 and angle A31 - it doesn't really add independent information, but it acts as an error check.
You'll find a few places where these arcs intersect. You can reduce the ambiguities using the order in which you see the beacons as you scan (e.g. a clockwise sequence).
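The arc construction and intersection above can be sketched numerically. This is only a sketch under my own assumptions (made-up beacon coordinates, my own function names), not a drop-in implementation:

```python
import math

def locus_circles(b1, b2, angle):
    """Both mirror-image circles through beacons b1 and b2 from which the
    pair subtends 'angle' radians (the inscribed-angle theorem)."""
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    d = math.hypot(dx, dy)                 # chord length between the beacons
    r = d / (2 * math.sin(angle))          # circumradius, via the law of sines
    h = d / (2 * math.tan(angle))          # center offset from the chord midpoint
    mx, my = (b1[0] + b2[0]) / 2, (b1[1] + b2[1]) / 2
    nx, ny = -dy / d, dx / d               # unit normal to the chord
    return [((mx + h * nx, my + h * ny), r), ((mx - h * nx, my - h * ny), r)]

def intersect_circles(c1, r1, c2, r2):
    """Intersection points of two circles (possibly empty)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    px, py = c1[0] + a * dx / d, c1[1] + a * dy / d
    return [(px + h * dy / d, py - h * dx / d),
            (px - h * dy / d, py + h * dx / d)]

def candidate_positions(b1, b2, b3, a12, a23):
    """All intersections of the (b1,b2) arcs with the (b2,b3) arcs;
    the robot sits at one of these."""
    return [p
            for ca, ra in locus_circles(b1, b2, a12)
            for cb, rb in locus_circles(b2, b3, a23)
            for p in intersect_circles(ca, ra, cb, rb)]
```

Note that B2 itself always shows up among the candidates (every arc passes through its own two beacons), so it can be discarded outright, and the scan-order check prunes most of the remaining ambiguities.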
Besides the ambiguities, sometimes the arcs intersect close to right angles and give a sharp intersection point, but at other times they are closer to tangent to each other, which means that small errors in the measured angles produce large errors in position.
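A rough numerical comparison makes this concrete (it repeats the inscribed-angle circle construction so the snippet stands alone; the beacon positions and the noise figure are arbitrary choices of mine):

```python
import math

def locus_circles(b1, b2, angle):
    # Both circles through b1 and b2 whose points see them 'angle' radians apart.
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    d = math.hypot(dx, dy)
    r, h = d / (2 * math.sin(angle)), d / (2 * math.tan(angle))
    mx, my = (b1[0] + b2[0]) / 2, (b1[1] + b2[1]) / 2
    return [((mx - h * dy / d, my + h * dx / d), r),
            ((mx + h * dy / d, my - h * dx / d), r)]

def intersect(c1, r1, c2, r2):
    # Intersection points of two circles (empty list if none).
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    px, py = c1[0] + a * dx / d, c1[1] + a * dy / d
    return [(px + h * dy / d, py - h * dx / d),
            (px - h * dy / d, py + h * dx / d)]

def angle_at(p, b1, b2):
    # Angle between the two beacon sightlines from point p.
    d = abs(math.atan2(b1[1] - p[1], b1[0] - p[0]) -
            math.atan2(b2[1] - p[1], b2[0] - p[0])) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def position_error(true_pos, b1, b2, b3, angle_noise):
    # Triangulate with one measured angle perturbed; return the distance
    # from the true position to the nearest candidate intersection.
    a12 = angle_at(true_pos, b1, b2) + angle_noise
    a23 = angle_at(true_pos, b2, b3)
    cands = [p for ca, ra in locus_circles(b1, b2, a12)
               for cb, rb in locus_circles(b2, b3, a23)
               for p in intersect(ca, ra, cb, rb)]
    return min(math.hypot(x - true_pos[0], y - true_pos[1]) for x, y in cands)
```

With beacons at (-2, 0), (0, 0), (2, 0), the same tiny angle error costs far more position accuracy at (0, 20), where the arcs cross near-tangentially, than at (0, 3), where they cross steeply - well over an order of magnitude in this particular setup.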
However, your proposed implementation is not going to work very well. The antennas on an nRF24L01 will not give you sharp directionality. The radiation patterns will almost certainly have multiple irregular but fairly broad lobes, giving only a very loose sense of direction with additional ambiguities - and multipath reception will make that worse, as will any obstacles that absorb RF.
If you have unobstructed line of sight, you could do better with optical sensors - that's the approach I've played with, on paper. My original inspiration was actually working out where a photo was taken when it shows several towers on hillsides. If you can determine the location of each tower, and you can calibrate the camera's pixels against angles, then it becomes the same problem you raise - but with far better angular resolution.
For robotics in a smaller space, you could use visible or infrared emitters and a camera on the robot; there is some complication in spinning the camera and combining several less-than-360 degree views to get large angles, but the geometry issue is the same.
And if you have more than 3 beacons, you can improve your accuracy: they are less likely to all be obscured, some combinations will more likely have good intersection angles, you can remove many of the ambiguities, and you can average multiple solutions using different triplets to reduce the errors.
I recall working out algebraically, some while ago, the circular arcs traced by the tip of the triangle (I was expecting something more complex and loved that it came out so beautifully simple), but I don't have that proof handy and would have to recreate it. However, if you take it as given that the locus is a circular arc, it's easy to compute the center and radius of the arcs from the locations of two beacons and the angle between them.
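Concretely (a minimal sketch; the names are my own placeholders): by the law of sines, the chord d between the two beacons and the subtended angle A give a circumradius R = d / (2 sin A), and the center sits d / (2 tan A) off the chord midpoint along its perpendicular bisector:

```python
import math

def arc_circle(b1, b2, angle):
    """Center and radius of one of the two inscribed-angle circles for
    beacons b1, b2 and subtended angle 'angle' (radians).  The mirror
    circle is obtained by negating h."""
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    d = math.hypot(dx, dy)            # chord length between the beacons
    r = d / (2 * math.sin(angle))     # law of sines: d / sin(A) = 2R
    h = d / (2 * math.tan(angle))     # center offset from chord midpoint
    cx = (b1[0] + b2[0]) / 2 - h * dy / d
    cy = (b1[1] + b2[1]) / 2 + h * dx / d
    return (cx, cy), r
```

As a sanity check, with beacons at (-1, 0) and (1, 0) and a 60-degree angle this gives the circumcircle of the equilateral triangle on that base, and any point on the resulting arc sees the beacons 60 degrees apart.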