Just an update on the issue: I think the constant-factor efficiency problem is mainly qualitative rather than quantitative. A high constant factor mostly comes from the language implementation interpreting the programmer's intent inefficiently.
In this sense, it's more a problem of semantics than of technical implementation. With a low-level language such as C, the implementation doesn't have to guess anything: you control the most basic instructions directly, and are therefore responsible for the whole program flow. With a high-level language, you get more abstract control over the program flow, which makes it easier and faster to design complex programs, but at the cost of delegating part of the program-flow design to the language implementation. It now has to guess what you meant, and that can be very costly when you write the kind of code the language wasn't primarily designed for (such as linear algebra in pure Python).
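To make that concrete, here is a minimal sketch of what "linear algebra in pure Python" looks like when everything is left to the interpreter (the function name and test values are just illustrative):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of lists, in pure Python."""
    n, m, p = len(a), len(b), len(b[0])
    result = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = 0
            for k in range(m):
                # Every "+" and "*" here goes through dynamic dispatch:
                # the interpreter re-checks the operand types on each
                # iteration. That per-operation guessing, not the
                # algorithm, is where the large constant factor comes from.
                acc += a[i][k] * b[k][j]
            result[i][j] = acc
    return result

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```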
This is not for lack of investment by the language authors; rather, as with any level of abstraction, you choose a balance between the conciseness of your words and their precision. A word like "humans" is very abstract and describes a whole species, but it doesn't account for the particularities and culture of each individual in that group. Like scientific jargon, high-level languages are designed to describe the operations of their target paradigm concisely and efficiently, but they cannot describe every other paradigm as concisely.
However, a new class of languages and tools now seems to be emerging: annotated languages, such as Julia, Cython, or Numba. They are neither really interpreted nor compiled, but in between. You can write high-level code that the implementation must guess how to run as efficiently (and, of course, correctly) as possible, or you can get close to compiled code through annotations or other specialized constructs, such as multiple dispatch or semi-automatic loop unrolling/vectorization. The annotations spare the implementation from having to guess what you meant at lower levels of abstraction; in short, they let you attach precision to your high-level abstractions. We will see in the future whether this concept works out.
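As one possible illustration of the annotation idea, here is a small sketch using Numba (assuming NumPy and Numba are installed; Julia's type declarations or Cython's cdef play the same role):

```python
import numpy as np
from numba import njit

@njit  # the annotation: lets the JIT specialize this function to
       # concrete machine types instead of guessing at run time
def dot(a, b):
    # With the annotation, this compiles to a tight machine-code loop
    # over floats; without it, each iteration pays interpreter overhead.
    acc = 0.0
    for i in range(a.shape[0]):
        acc += a[i] * b[i]
    return acc

x = np.arange(1_000_000, dtype=np.float64)
print(dot(x, x))
```

The high-level source stays the same; the annotation is the extra precision that removes the guesswork.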