
This answer makes me wonder why the sensitivity to gravitational waves decreases proportionally to the distance.

Since gravitational waves extend in all directions, my (uneducated) guess would be that the same argument can be made as for the decrease in sensitivity for electromagnetic waves. Why is this not the case? Why does the sensitivity to GWs decrease in a linear fashion?

called2voyage
usernumber

1 Answer


EDIT: I'm leaving the original, highly upvoted answer below, but I've had a fundamental rethink about this, prompted by questions from Keith McClary and a helpful clarification from a Physics SE question.

The original answer I gave is the reason that we can detect gravitational waves (GWs) at all. Their coherent nature as single oscillators means that, despite their comparatively low powers, they can be detected right across the universe. In comparison, electromagnetic sources are usually the superposed light from countless incoherent emission sources. On average this superposition has a partially destructive interference effect that reduces the power (intensity) received, and because the signal varies so rapidly, it is usually intensity that is measured.
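To see the coherent/incoherent distinction schematically (an idealised sketch with $N$ equal emitters of amplitude $a$, not a model of any real source): if the phases are locked, the amplitudes add, whereas if the phases are random, only the mean powers add:

$$\underbrace{\left|\sum_{i=1}^{N} a\,e^{i\phi}\right| = Na}_{\text{coherent}}\, , \qquad \underbrace{\left\langle\left|\sum_{i=1}^{N} a\,e^{i\phi_i}\right|^{2}\right\rangle = Na^{2}}_{\text{incoherent, random } \phi_i}\, ,$$

so a coherent source delivers a power $\propto N^{2}a^{2}$, while an incoherent one delivers only $\propto Na^{2}$.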

However, the answer to the present question is actually just how "sensitivity" is defined.

In order to detect a source, we must identify it against a background of noise. This is done by defining a signal-to-noise ratio. The signal is the product of your source strength (more on that in a minute) and how long you observe it for. The noise is a property of your instrument. The sensitivity of the instrument is then something like the minimum (source strength $\times$ observation time) that will produce a significant detection.
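Schematically (a back-of-the-envelope sketch following the wording above, not any detector's actual analysis pipeline):

$$\mathrm{SNR} \sim \frac{(\text{source strength}) \times (\text{observation time})}{(\text{instrument noise})}\, , \qquad \text{detection} \iff \mathrm{SNR} \gtrsim \text{threshold}\, ,$$

so the minimum detectable source strength scales with the instrument noise divided by the observation time.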

In astronomy it is conventional to express the signal in terms of power received since, due to the arguments given above and in the original answer, power (intensity) is generally what is measured. The sources of noise are therefore also defined in terms of a power, and the sensitivity has units of something like watts $\times$ seconds or, more conventionally, W/Hz.

In gravitational wave astronomy, because it is the amplitude that is directly detected, GW astronomers express their source signal in terms of amplitude (which is proportional to the square root of the detected power), and their sensitivities are accordingly expressed as a gravitational wave amplitude (which is dimensionless) per $\sqrt{\mathrm{Hz}}$.

That is, we are not comparing like with like. Doubling the sensitivity of a gravitational wave detector is actually like quadrupling the sensitivity of an electromagnetic wave detector. Thus there is no fundamental difference here; the apparent difference in behaviour is merely a result of how sensitivity is defined. The reason for the different definition is as per my original answer below.
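To make the comparison concrete (a worked scaling based on the definitions above): for an intensity-limited EM instrument with minimum detectable flux $F_{\min}$, the maximum range obeys $F \propto 1/d^{2} \Rightarrow d_{\max} \propto F_{\min}^{-1/2}$, so halving $F_{\min}$ only extends the range by $\sqrt{2}$. For an amplitude-limited GW detector with minimum detectable strain $h_{\min}$,

$$h \propto \frac{1}{d} \;\Rightarrow\; d_{\max} \propto \frac{1}{h_{\min}}\, , \qquad V \propto d_{\max}^{3}\, ,$$

so halving $h_{\min}$ doubles the range and increases the surveyed volume (and hence the expected event rate, for uniformly distributed sources) by a factor of $2^{3} = 8$.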

Original answer

The difference is that usually when we detect sources of electromagnetic waves, we are detecting intensity, which obeys the inverse square law.

In contrast, we are detecting the amplitude of gravitational waves, and amplitude only scales as the inverse of distance.
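The two pictures are consistent, since intensity goes as the square of the amplitude (as noted in the comments below):

$$h \propto \frac{1}{r} \quad\Longrightarrow\quad F \propto h^{2} \propto \frac{1}{r^{2}}\, ;$$

it is just that a GW interferometer responds to $h$ itself rather than to $F$.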

Why the difference? Sources of gravitational waves are coherent oscillators. A merging binary produces a single coherent wave train with an amplitude that can be defined and measured. By contrast, when we look at a distant star or galaxy in electromagnetic waves, we are seeing the incoherent contribution from countless accelerating particles and atoms, and all we can detect is the resultant summed intensity. There is no coherent electromagnetic wave with an amplitude that can be measured.

This difference in behaviour is fundamentally because, whilst there are positive and negative electric charges, which require contrived circumstances in which to behave coherently (e.g. in a laser), gravitational waves are produced by accelerating masses, and since there is only one sign of "gravitational charge", the individual parts of a gravitational wave source are able to act in concert quite naturally to produce a coherent waveform with a wavelength larger than the body itself.

An excellent discussion of these points can be found on the first page of the review article by Hendry & Woan (2007).

In principle, if we were looking at a single coherent source of electromagnetic waves, then we could detect the amplitude (for example, by the force it exerts on charged particles), and the sensitivity would just reduce as the inverse of distance. At optical frequencies the electric field varies so rapidly that this cannot be done, but it is possible at radio frequencies. Unfortunately, the coherence length and coherence time (the time over which the phase of the wave is predictable) are so short that this is rarely practical in laboratory, let alone astronomical, sources.
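A minimal sketch of that scaling (textbook dipole radiation, not a description of any real receiver): the far field of an oscillating dipole falls off as $1/r$, so a detector sensing the force on a test charge $q$ responds to $1/r$, whereas one that integrates the flux responds to $1/r^{2}$:

$$E \propto \frac{1}{r}\, , \qquad F = qE \propto \frac{1}{r}\, , \qquad \langle S\rangle \propto E^{2} \propto \frac{1}{r^{2}}\, .$$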

ProfRob
  • Good explanation. Am I wrong in thinking that the energy flux of the wave scales with the square of the amplitude, so the energy still follows the inverse square law? – antlersoft Jan 18 '20 at 17:01
  • @antlersoft Yes, intensity goes as the square of the amplitude. – ProfRob Jan 18 '20 at 17:22
  • Answers to Transfer of energy from gravity back to other “more familiar” forms of energy? are generally yes, and I assume that energy conservation still applies. So I assume that a device that extracts energy from a gravitational wave will also respond to the square of the amplitude. – uhoh Jan 21 '20 at 05:24
  • SETI is looking for coherent sources, but many sources say they are limited by inverse square. – Keith McClary Jan 21 '20 at 18:27
  • @KeithMcClary give me an actual reference that isn't a blog or internet discussion site that talks about coherent RF emission, SETI and the inverse square law and I'll have a look. For example the first thing that came up was discussing aliens detecting our incoherent RF noise. – ProfRob Jan 21 '20 at 19:18
  • There were plenty of better quality sources after that. I can't find anything about coherent RF not being inverse square (although some people conflate "coherent" with "beamed"). – Keith McClary Jan 21 '20 at 22:27
  • @KeithMcClary well let me know when you do. Since the implication of your comment is that there is something wrong with my answer. – ProfRob Jan 21 '20 at 22:29
  • The answer would be improved if it had a source for: "In principle, if we were looking at a single coherent source of electromagnetic waves then we can detect the amplitude, and then the sensitivity just reduces as the inverse of distance." I think NASA would find such a phenomenon useful and would mention it on their site. – Keith McClary Jan 21 '20 at 23:23
  • The link didn't work for me: "Token was not provided". I found Gravitational astrophysics. – Keith McClary Jan 22 '20 at 00:47
  • It is certainly not true that we can only measure the intensity of EM waves. Radio antennae measure the coherent phase information of a wave. Being able to do so is the main working principle behind large area radio arrays (such as LOFAR and SKA) that can be pointed to a source after the fact. – TimRias Jan 22 '20 at 15:00
  • As far as I know, the difference comes from the fact that in the EM case we measure the EM field strength, whereas in the case of GWs we measure the metric, which is analogous to measuring the EM vector potential. Since the field strength is the gradient of potential it drops off faster. If we could build EM telescopes that measure the vector potential directly, their range would scale directly with their sensitivity. – TimRias Jan 22 '20 at 15:05
  • @mmeent both the vector potential and (far) field strength of an oscillating dipole go as $1/r$. – ProfRob Jan 22 '20 at 17:54
  • @mmeent and when you talk about coherent detection across an array, that isn't what is meant by coherent emission of waves. The atoms in a radio galaxy do not oscillate in sync. – ProfRob Jan 22 '20 at 18:40
  • I don't get it: when the sensitivity of LIGO is doubled (e.g. in the next maintenance phase), will the volume and thus the number of detectable events increase 2x, 4x, or 8x? – Hartmut Braun Jan 23 '20 at 11:03
  • @HartmutBraun 8 times. – ProfRob Jan 23 '20 at 12:00
  • @RobJeffries "well let me know when you do." I am looking for a source that supports your extraordinary claim that "In principle, if we were looking at a single coherent source of electromagnetic waves then we can detect the amplitude, and then the sensitivity just reduces as the inverse of distance." but can't find any. Should I ask a separate reference-request Question about this? – Keith McClary Mar 06 '20 at 05:32
  • @KeithMcClary There is in fact no difference between an electric dipole and a GW source. It turns out that the apparent difference is all to do with how you define "sensitivity". GW astronomers do it in terms of amplitude; EM astronomers do it in terms of intensity (power). Major edit above. – ProfRob Mar 07 '20 at 10:41