In unextended OpenGL that's indeed a little difficult, if possible at all. There is no real notion of "GLSL assembly code", or at least not one that's remotely vendor-independent, let alone a standard way to retrieve it.
There are a few extensions that might get you that information, though, albeit in a not entirely intuitive and decidedly platform-dependent way (but then, you're asking about micro-optimizations, so we're firmly in platform-dependent territory anyway).
First of all, there's GL_ARB_get_program_binary (core since 4.1), which lets you retrieve a platform-dependent binary version of a program (after linking). At least on NVidia hardware (maybe others, too, but I wouldn't know) this binary blob also contains ASCII assembly listings of all the contained shaders in NVidia-extended ARB syntax (continuously refined in the GL_NV_gpu_programX extensions). Those listings are quite easy to find when opening the binary in a normal text editor, and the syntax is quite straightforward if you know your way around shaders and general assembly programming principles. As mentioned, other vendors might offer similar listings inside their binaries.
Then there are also external programs that can compile your shader into some intermediate form, the most platform-independent of those being SPIR-V, for which Khronos-approved external compilers exist. Being an intermediate language, it is somewhat of a "high-level assembly language" with an assembly-like structure but many high-level abstractions, striking a compromise between the easy interpretability of assembly and the easy programmability and platform-independence of a high-level language (think of Java bytecode).
With the GL_ARB_gl_spirv extension (core since 4.6) you can directly use (externally precompiled) SPIR-V modules in OpenGL in place of GLSL shaders. I don't know of a way to make your driver compile GLSL into SPIR-V and then retrieve that, though, so an external SPIR-V module created by a reference compiler would not necessarily match the machine code that your driver's actual GLSL compiler produces. It might also be that such an intermediate representation is already too high-level for analysing micro-optimizations anyway.
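For completeness, the external route might look like the following, using the Khronos reference compiler glslangValidator and the spirv-dis disassembler from SPIRV-Tools (the file names are placeholders; check your installed versions for the exact flags):

```shell
# Compile a GLSL fragment shader to SPIR-V with OpenGL semantics
# (-G targets OpenGL; -V would target Vulkan):
glslangValidator -G shader.frag -o shader.spv

# Disassemble the module into human-readable SPIR-V assembly:
spirv-dis shader.spv -o shader.spvasm
```

The resulting module could then be loaded in place of GLSL source via glShaderBinary with GL_SHADER_BINARY_FORMAT_SPIR_V followed by glSpecializeShader, but as noted above, what you'd be reading is the reference compiler's output, not your driver's.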
All in all, and especially if you just want to use it for debugging/analyzing purposes on a dedicated machine, looking for a whatever-natured assembly listing in the binary blob retrieved with glGetProgramBinary might be the easiest and most "close-to-the-metal" way to get some insight into the compiled product. I use it occasionally, just out of interest.
But of course no answer to this question would be complete without a warning not to read too much into it, either. As on the CPU, trusting your compiler is a good default approach, and you shouldn't let micro-optimizations like this guide your practice too much, let alone rely on any of that guidance being remotely portable (or assume that the assembly listed in the binary is actually what's used as machine code). But if you want to know what your driver might make of your code, it can be quite interesting and helpful, especially when deciding between two equally meaningful ways to write some function that might have unexpected differences in the "assembly".
OSG is not exactly known for being performance-friendly; that decision is not in my hands, I have to work with it. My optimization is branch elimination - while I wasn't aware how hardware-dependent optimizations are, I was under the assumption that branch elimination is generally preferable. However, I now have three sign() calls, which is why I wanted to do this in the first place. Nevertheless, thank you for the input. – Tare May 08 '18 at 18:19