49

I know this question sounds dumb, but please bear with me. It came to mind while I was looking at the photos in an astronomy book. How is it possible that IR and UV photos of stars and nebulae are taken if our eyes cannot detect that light?

apaderno
  • 105
  • 23
    "I know this question sounds dumb" not at all! If anything, the question demonstrates an above-average level of intelligence: a lot of people wouldn't even think to ask... – Aaron F Feb 02 '20 at 20:30
  • 7
    I've deleted some comments that didn't seem to be targeted at improving the question. – David Z Feb 03 '20 at 03:01
  • 2
    Have you ever heard someone play a song on a piano, but an octave higher or lower? It's like that, but with a different kind of waves. – Mason Wheeler Feb 04 '20 at 15:26
  • A nice book is Alien Vision by Austin Richards. https://www.amazon.com/Alien-Vision-Exploring-Electromagnetic-Technology/dp/0819441422. It's somewhat old, but it nicely shows what you can see outside the visible spectrum. – user3860088 Feb 04 '20 at 10:03
  • It's not really an answer to the question but it might interest you that some animals including human women are tetrachromats able to see in the UV range: https://www.bbc.com/future/article/20140905-the-women-with-super-human-vision – JimmyJames Feb 04 '20 at 18:11
  • @JimmyJames It doesn't seem like it gives people vision in the UV range; it seems to usually manifest in humans in the visible light spectrum still. Also UV is apparently blocked by our eye lens. – JMac Feb 04 '20 at 20:21
  • @JMac Did you read the article? Also, the Wikipedia article cites a source. I'm not making the claims, just providing references. If UV is blocked by our eyes, are you saying UV-blocking sunglasses are a scam? – JimmyJames Feb 04 '20 at 20:31
  • @JimmyJames You don't really want the UV hitting your eye lenses either. It still does damage even if most doesn't reach your retina and thus you can't really see it. The Wikipedia article only talks about humans having extra vision in the visible range from being a tetrachromat. I skimmed the article, and searched it for "UV" or "Ultra" and got no hits, so it doesn't seem to talk about humans getting vision in the UV spectrum either. The Wikipedia article talks about how people who aren't tetrachromats can see some of the UV spectrum if they don't have eye lenses. – JMac Feb 04 '20 at 20:41
  • @JMac "In 2010, after twenty years of study of women with four types of cones (non-functional tetrachromats), neuroscientist Dr. Gabriele Jordan identified a woman (subject cDa29) who could detect a greater variety of colors than trichromats could, corresponding with a functional tetrachromat (or true tetrachromat)." Am I parsing this incorrectly? It seems to be hedging quite but 'true tetrachromat' seems unambiguous. – JimmyJames Feb 04 '20 at 21:04
  • @JMac You are right about the BBC article, I made a leap there that wasn't in the text. Given that birds, fish, and insects can see in the UV range, though, and that primates re-evolved trichromacy after mammals in general lost it, it seems possible that we could re-evolve tetrachromacy as well. – JimmyJames Feb 04 '20 at 21:08
  • @JimmyJames tetrachromacy in humans is believed to mean that they can differentiate more colors within the visible spectrum; it doesn't mean that they see into the UV. – gormadoc Feb 04 '20 at 21:09
  • @JimmyJames I don't see anywhere that suggests it's outside the visible range. It sounds like the true tetrachromats have an extra colour channel within the visible range. So for example a colourblind person has trouble telling the difference between some hues (the hues depend on how they are colourblind), because they are only a dichromat. A tetrachromat like they describe is able to better differentiate hues than a trichromat; so they get a greater variety of colours because they have 4 different wavelength intensities to compare. – JMac Feb 04 '20 at 21:10
  • @JMac Yeah, on rereading, it seems you are right. But it is true that some animals can see in that range based on what I have read there and elsewhere. – JimmyJames Feb 04 '20 at 21:12
  • You can see the effects of UV once skin gets tanned (or things weather). You can feel IR as warmth coming from a particular direction. – Sadaharu Wakisaka Feb 05 '20 at 02:14
  • What exactly is the question? How it is possible to make UV detectors? How to represent UV light in visible colors? I'm having trouble understanding why the OP is confused. We can't see x-rays, but we obviously can make x-ray images. – Wood Feb 05 '20 at 02:23
  • Film material ("Analogue photography") and image sensors are imperfect. Film will react to UV light, which is unwanted as it will make the picture "hazy" (also, the lenses are not optimized to produce a sharp UV image). Hence the skylight filter, which will block UV. An image sensor will react to IR, with similar "unnaturalness" in pictures. However, that can ve used to an advantage: surveillance cameras will illuminate the scene with IR at night, unnoticable for humans. Unless you watch the scene via your mobile phone's camera; you will clearly see the location of such IR sources. – Klaws Feb 09 '20 at 08:26

8 Answers

84

The images are taken by UV/IR cameras, but the frequencies are mapped down/up into the visible region using some scheme. If you want to preserve the ratios of the frequencies, you apply a linear scaling. Generally the scaling is done in a way that strikes a balance between aesthetics and informativeness.
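As a rough illustration of what such a remapping might look like, here is a small Python/NumPy sketch; the detector frequencies and band limits are made up for the example and are not taken from any real instrument. A purely multiplicative scaling preserves the ratios between the captured frequencies, much like transposing a melody by whole octaves.

```python
import numpy as np

# Hypothetical infrared frequencies captured by a detector, in THz
# (visible light spans roughly 430-750 THz).
ir_freqs = np.array([150.0, 180.0, 210.0, 240.0])

# A purely multiplicative scaling preserves the *ratios* between channels,
# analogous to shifting a melody up by a fixed number of octaves.
scale = 430.0 / ir_freqs.min()   # move the lowest channel up to ~430 THz (deep red)
visible_freqs = ir_freqs * scale

print(visible_freqs)             # ratios between the channels are unchanged
```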

In light of the overwhelming attention to this question, I have decided to add the recent image of a black hole taken by the Event Horizon Telescope. This image was captured in radio waves by an array of radio telescopes at eight different places on Earth, and the data was combined and represented in the following way.

[Image: Event Horizon Telescope radio image of the black hole]

A point that I forgot to mention, which was pointed out by @thegreatemu in the comments below, is that the black hole image data was all collected at just one radio wavelength (of $1.3$ mm). The colour in this image signifies the intensity of the radio wave, with brighter colours indicating a stronger signal.
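For a single-wavelength data set like this one, the mapping is purely intensity to colour. A minimal sketch of such a "heat map" rendering, using a random array as a stand-in for real interferometer data, might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder 2-D array of radio intensities at one wavelength
# (random numbers, standing in for real interferometer data).
intensity = np.random.rand(64, 64)

# Normalize and let a colormap assign each intensity a visible colour:
# brighter/hotter colours simply mean a stronger radio signal at that pixel.
norm = (intensity - intensity.min()) / (intensity.max() - intensity.min())
plt.imshow(norm, cmap="inferno")
plt.colorbar(label="relative radio intensity")
plt.show()
```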

  • 71
    Maybe an analogy to music will make this more understandable to a layman: I'm not a musician, but I remember a teacher of mine playing songs one octave lower than they were usually played in. To a species unable to hear the higher octave this would be a representation of the original song, just as the picture provided is a representation of the original data. Both representations essentially contain the same information. –  Feb 03 '20 at 05:50
  • 4
    Note that sight and sound are very different senses. In sound we are very sensitive to spectrum, but have only minimal ability to resolve spatially. In sight we are very sensitive to spatial variation, but the spectrum is crushed down into trichromatic color. – Peter Green Feb 03 '20 at 16:36
  • 23
    While they are different senses, the analogy holds. – infinitezero Feb 03 '20 at 18:28
  • 2
    This is good info, but misleading in most cases. Usually the frequency information is NOT preserved, as is the case in the example figure. The colors presented are a so-called heat map, where the INTENSITY at that pixel is mapped onto a color. "Hotter" pixels (i.e. whiter) received more radio power, averaged over the entire spectrum. Occasionally you will see different frequency bands mapped onto RGB intensities separately, but this is rare. – thegreatemu Feb 03 '20 at 22:01
  • 2
    In every article I read on that topic, it seemed the journalist was under the impression that it was an actual optical photograph of some sort rather than a visualisation. You can visualise many things as a pseudo-photograph: cellphone radio coverage, or something not electromagnetic at all like noise or pollution levels. – Rich Feb 04 '20 at 03:51
  • 1
    @TheoreticalMinimum In the case of light, ultraviolet would more accurately correspond to playing an octave higher (since UV has a shorter wavelength, and so a higher frequency). – trolley813 Feb 04 '20 at 08:52
  • @thegreatemu It's called a map because Geographers make maps this way. Altitude, for example, is a height above sea-level, but is often represented by colour on a map. On this one it's intensity or "brightness", of radiation of a frequency we cannot see directly by eye. – nigel222 Feb 04 '20 at 16:47
  • @nigel222 right. My point was that the map is "intensity->color" NOT "frequency->color", and that in practice "frequency->color" maps are nearly nonexistent – thegreatemu Feb 04 '20 at 20:11
45

When you are looking at a UV or IR photo, the intensities of these (invisible) rays are represented by different (visible) colors and brightnesses in the photo. This technique of rendering things our eyes cannot see into images that we can see is common, and the images thus prepared are called false color images.

niels nielsen
  • 92,630
  • 1
    TL;DR: every answer: 'UV/IR cameras take greyscale images and the images thus prepared are called false color images. The frequencies are mapped down/up to visible region using some scheme.' – Mazura Feb 03 '20 at 01:28
  • 13
    @Mazura not necessarily greyscale; UV/IR sensors can capture light intensity at multiple wavelengths in the UV or IR spectrum, which can then be artificially shifted to the visible spectrum. – zakinster Feb 03 '20 at 10:03
  • @zakinster in principle yes, but in practice this is pretty much never done. Your monitor has only three colors, so at best you can map the average intensity over 3 different bands to RGB – thegreatemu Feb 03 '20 at 22:02
  • Why stop there? You can map any kind of wave frequency to colors, which means you can see (a picture representation of) sounds, barometric pressure, WiFi signals, etc. – refaelio Feb 04 '20 at 09:59
  • @thegreatemu Actually we often make RGB images using more than one non-visible color. One example is seen in Hayes et al. (2013), where Hα, far-UV, and Lyα (which is also in the UV) is mapped to R, G, and B, respectively. – pela Feb 04 '20 at 20:48
45

Because you can build a camera that can.

The sensitivity of a camera is not determined by human eyes, but by the construction of the camera's sensor. Given that in most common applications we want the camera to capture something that mimics what our eyes see, we generally build them to likewise be sensitive to approximately the same frequencies of light that our eyes are sensitive to.

However, there's nothing to prevent you from building a camera that is sensitive to a different frequency spectrum, and so we do. Not only ultraviolet, but infrared, X-rays, and more are all possible targets.

If you're asking why the pictures are visible, well that's because we need to see them with our eyes, so we paint them using visible pigments or display them on displays that emit visible light. However, this doesn't make the pictures "wrong" - at the basic level, both visible light and UV/IR pictures taken by modern cameras are the same thing: long strings of binary bits, not "colors". They take interpretation to make them useful to us.

Typically, UV/IR cameras take greyscale images, because there are no sensible "colors" to assign to the different frequencies - or rather, "color" is just a made-up thing that comes from our brains, and is not a property of light. So coloring all "invisible" light grey is no more "wrong" than anything else - and it's easier to build the sensors (which means "cheaper"), because the way you make a color-discriminating camera is effectively the same as the way your eyes are made: you have sub-pixel elements that are sensitive to different frequency ranges.
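To make the sub-pixel idea concrete, here is a toy sketch of a Bayer-style mosaic, where a single monochrome sensor sits behind a repeating pattern of colour filters. The real pattern, spectral responses, and demosaicing are more involved; everything here is illustrative.

```python
import numpy as np

# Toy 4x4 monochrome sensor readout (made-up numbers).
raw = np.random.rand(4, 4)

# Repeating 2x2 filter pattern printed over the sensor: R G / G B.
filters = np.tile(np.array([["R", "G"],
                            ["G", "B"]]), (2, 2))

# Each photosite only records light that passed its own filter, so the
# camera must later interpolate full RGB values from neighbouring sites.
red_samples   = raw[filters == "R"]
green_samples = raw[filters == "G"]
blue_samples  = raw[filters == "B"]
```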

  • 10
    You can even build "cameras" that don't use electromagnetic radiation at all, for instance ultrasound or electron miroscopes. – jamesqf Feb 02 '20 at 18:07
  • 1
    Many ordinary cameras can detect some infrared, and they typically display it as a pink or purplish color. To see this, point your phone camera at the end of a TV remote control. – Jeanne Pindar Feb 02 '20 at 19:04
  • 2
    @Jeanne Pindar: Most digital cameras (including those in phones) have a filter that blocks a lot of the infrared, though. It's possible to remove those filters and take interesting infrared photos: a web search for "digital infrared conversion" will return lots of hits. It can be done for UV, too, though it's more complicated. – jamesqf Feb 02 '20 at 23:44
  • @Jeanne Pindar : Yes, indeed. So you can use your digital camera - though you have to modify it by removing and replacing the filter, because they include one to keep that from "contaminating" the visible picture - to film from about 750-1100 nm (most sensitive at 750, least at 1100), however there will be (I believe, at least) no color discrimination by frequency/wavelength. – The_Sympathizer Feb 03 '20 at 04:28
  • @jamesqf The "phone camera" trick worked well with early phone cameras; the idea persisted even though it doesn't really work anymore :D – Luaan Feb 03 '20 at 09:09
  • It works with every phone I've tried it with, including an LG G7 ThinQ which I wouldn't call an early phone, I think it came out in 2017. – Jeanne Pindar Feb 03 '20 at 14:35
  • @Luaan: I really don't know about phone cameras, since I've never considered it worth disassembling a working phone to try. Indeed, I've only done it myself with a 10-15 year old digital camera. – jamesqf Feb 03 '20 at 18:11
  • @The_Sympathizer - I remember the Fujifilm digital camera scandal: a camera so sensitive to IR that it could "see through" clothes. It turns out most fabrics are transparent to IR – slebetman Feb 03 '20 at 20:12
  • OK, the cameras I'm talking about aren't THAT sensitive! The IR I've been able to photograph has been from leds, heating elements, flames, and reflected or refracted sunlight. – Jeanne Pindar Feb 04 '20 at 17:11
  • Damn! Too bad my old Fujifilm camera is broken :(. Anyway, remember the Heaven's Gate mass suicide, and the "Saturn-Like Object" that triggered it? I vaguely recall that the astrophotographer had removed (or didn't have) an IR filter on the camera he used and produced an image overexposed in IR. – Phil Perry Feb 05 '20 at 17:15
12

Think of shining an intense infrared beam onto wood. The wood is scorched. You can see the scorching with visible light, even though the infrared beam is not visible. Do it again, but put some metal in the way to block part of the beam. Now you can see the shadow of the metal.

This is much like how X-ray film shows bones.

Cameras are similar. When sensors in the camera are stimulated by UV/IR/X-rays, they produce an electrical signal. These signals are stored as pixels in an image. You can display the image on a monitor, and choose to make the pixels whatever color you like.

mmesser314
  • 38,487
  • 5
  • 49
  • 129
  • 4
    Don't actually do this experiment in case you accidentally reflect the infrared beam into your eyes with the metal. – user253751 Feb 03 '20 at 11:10
8

Imagine writing a program that listens to sounds through your computer's microphone and paints the screen a different color for every different note (or frequency). Suddenly you can point it at somebody singing and "see" the sound. You can even show it to a deaf person, and they can have an idea of what kind of song they're watching through the colors they see. None of their senses can capture sound, yet they're seeing it on the screen, because the thing that's capturing the sound is the microphone.

A UV camera is pretty much the same. A sensor captures some light that your eyes can't, and a program paints the screen a different color for every UV frequency you can't see.
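A bare-bones version of that idea might look like the Python sketch below. It assumes the audio buffer has already been captured somehow (no microphone code is shown), and the frequency-to-hue mapping is an arbitrary choice, just like the colour scheme of a UV image.

```python
import colorsys
import numpy as np

def frequency_to_colour(samples, sample_rate=44100):
    """Find the loudest frequency in an audio buffer and map it to an RGB
    colour by spreading the audible range (~20 Hz - 20 kHz) over the hues."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]

    # Three decades of audible frequency mapped logarithmically onto hue 0..1.
    hue = np.clip(np.log10(max(dominant, 20.0) / 20.0) / 3.0, 0.0, 1.0)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# A 440 Hz sine wave (the note A) gets its own colour on screen.
t = np.linspace(0, 1, 44100, endpoint=False)
print(frequency_to_colour(np.sin(2 * np.pi * 440 * t)))
```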

user2723984
  • 4,686
7

A camera does not store light. The light you see when you look at a photo is not the same as the light that was captured when the photo was taken.

When light enters a digital camera it triggers electrical changes in the image sensor, which are converted to digital data by an ADC. In a film camera the light instead causes chemical changes in the film emulsion which are retained until the film is developed.

Humans are basically trichromats. We have three different types of "cones" in our eyes with different responses to light. So we can get away with representing color with three numbers per pixel in a digital imaging system or three layers in a chemical film.

Some time later, we reconstruct an image for a person to view. In a simplistic digital camera we would take the red, green, and blue values for each pixel and use them to light the red, green, and blue pixels on our display. In reality there is usually some adjustment involved, because the filters in the camera don't precisely match the human eye's responses and because the frequency bands overlap, so it is not possible to find "primary colors" that trigger only one cone.
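As a sketch of that adjustment step, a camera pipeline typically applies something like a 3x3 colour-correction matrix to each pixel's raw values. The matrix below is invented purely for illustration; real matrices are calibrated per sensor and illuminant.

```python
import numpy as np

# Invented colour-correction matrix; rows sum to 1 so white stays white.
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

raw_rgb = np.array([0.30, 0.55, 0.20])          # hypothetical sensor values for one pixel
display_rgb = np.clip(ccm @ raw_rgb, 0.0, 1.0)  # values actually sent to the display
print(display_rgb)
```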

Regular digital cameras are designed to approximate our eyes, because that is what most people want. But there is no fundamental reason why cameras have to be that way. As long as we can build a lens to focus the rays and a sensor that will respond to them we can capture an image.

It's possible to build a camera to work with multiple wavebands at the same time, and this is how regular cameras work, but it's not a great choice for scientific imaging for a few reasons. Firstly, the "Bayer filter" has to be printed directly onto the sensor, meaning it can't be changed. Secondly, it means that your pixels for different wavebands have slightly different spatial locations.

So for obscure wavebands a more common solution is to capture one waveband at a time; the images can then be combined into a single multi-channel image after capture. Or only one monochrome image may be captured; it all depends on the goal of the imaging.

Of course we humans can still only see visible light, and we can only see that trichromatically, so at some point the creator of an image has to make a judgement call on how to map the scientific image data (which may have an arbitrary number of channels) to an RGB image (which has exactly 3 channels) for display. Note that color in the final image does not necessarily imply that there were multiple channels in the original image data; it is not uncommon to use a mapping process that maps a single-channel input to a color output.
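As a small sketch of that judgement call, here is how three separately captured narrowband frames might be assigned to display channels. The frames are random placeholders, and the assignment shown is one common "Hubble palette" choice; nothing about the light itself dictates it.

```python
import numpy as np

h, w = 128, 128
frame_sii  = np.random.rand(h, w)   # stand-in for an [S II] narrowband exposure
frame_ha   = np.random.rand(h, w)   # stand-in for an H-alpha exposure
frame_oiii = np.random.rand(h, w)   # stand-in for an [O III] exposure

# Map the three single-waveband frames onto the display's R, G, B channels.
rgb = np.dstack([frame_sii, frame_ha, frame_oiii])
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())   # normalize for display
```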

Peter Green
  • 1,171
  • Also, with multiband (e.g., RGB) imaging chips, the sensitivity is lower because the individual sensors (of the triad at each pixel) are smaller, and resolution is lower because the pixel is spread out over three subsensors. A gray-scale (monochromatic) imager is going to be more sensitive to low light and have better resolution, but if you put filters in front to just pass one color at a time, you miss out on changes over time (only one band at a time can be recorded). – Phil Perry Feb 05 '20 at 17:21
1

Let's follow on with your logic. You're positing that nothing can happen that our bodies can't do. OK:

  1. How can a car drive 200+ miles an hour if we can't run that fast?

  2. How can a plane fly if we can't?

  3. How can a submarine spend weeks underwater if we can't stay underwater (and alive :-) that long?

The answer is that the machines we build can do things that we cannot do without their aid. We build these machines to expand our capabilities.

Cameras are machines that can do things, such as respond to radiation in the ultraviolet and infrared ranges of the spectrum, which our eyes cannot.

  • 3
    while this answers the question at a philosophical level, I doubt it is the kind of answer that would be helpful to the person asking it here. – jwenting Feb 05 '20 at 06:17
  • @jwenting : On the other hand, it's exactly the right answer. – WillO Mar 01 '21 at 15:17
0

Answer to the question: how is it possible that there are UV photos while our eyes cannot detect UV waves? We are made mainly of water, and water's absorption coefficient has an "optical window": absorption is huge at the ends of the window (infrared and ultraviolet) and low at green wavelengths. This is the reason our eyes are most sensitive around green.