Many years ago we received a set of aerial infrared images. They were developed on transparent film, like a slide but much bigger: 20 cm x 35 cm. If I now take a digital photo of these images, are they true digital infrared images? I am trying to ascertain what level of error occurs when using conventional photography to capture analog infrared images, and whether I can confidently analyse these images to compare chlorophyll levels and land use.

Robert Buckley
  • Normally these photos would be scanned by a pass-through scanner. It is impossible to ascertain the error you would get using a camera without knowing what camera, what lighting conditions, and the distance to the lens... you can minimize error by using a fixed camera (not an iPhone) and standardized lighting, but you would be much better off getting them scanned by a company that does aerial imagery and still maintains analog equipment. Conventional camera lenses can have quite severe edge aberrations and parallax, so the image is not constant across the frame; on a flatbed scanner the image would be consistent. – Michael Stimson Oct 22 '15 at 21:25

1 Answer
Leaving geometric considerations aside and speaking spectrally, it seems to me that such a digital photo would be a product of the radiance curve (brightness vs. wavelength) of the lamp in the lightbox and the RGB characteristics of the camera sensor (whether it boosts red vs. blue, etc.).

A reasonable way to handle the first would be to take a photo of the lightbox without one of your slides on it, to capture the radiance curve as your camera sees it, and divide the photo of each slide by THAT photo to cancel out its influence.

To get a handle on the second, you could look for areas of a slide that you expect to be spectrally flat, ideally in both bright and dark areas. For dark areas, flat water lacking specular reflections might be good. For bright areas, maybe concrete or building rooftops? That will give you an idea of the brightness vs. wavelength curve your camera reports where it should see a flat line vs. wavelength. Hopefully you'll find that curve is the same for both your bright and dark fields. You could then divide your slide images by THAT to flatten them out.
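The two corrections above can be sketched in a few lines of NumPy. This is a minimal illustration, not a calibration pipeline: it assumes the slide photo and the empty-lightbox photo are already aligned float RGB arrays taken under identical exposure, and the function names and the `flat_mask` region are hypothetical.

```python
import numpy as np

def flat_field_correct(slide, lightbox, eps=1e-6):
    """Divide the slide photo by the empty-lightbox photo (per pixel,
    per channel) to cancel the lamp's radiance curve, then rescale
    the result into [0, 1]."""
    out = slide / np.maximum(lightbox, eps)
    return out / out.max()

def channel_flatten(img, flat_mask):
    """Estimate a per-channel gain from pixels expected to be
    spectrally flat (e.g. concrete, calm water) and divide it out,
    keeping overall brightness unchanged."""
    gain = img[flat_mask].reshape(-1, 3).mean(axis=0)
    gain = gain / gain.mean()
    return img / gain

# --- Tiny synthetic demo ---
rng = np.random.default_rng(0)
true_img = rng.uniform(0.1, 1.0, size=(4, 4, 3))      # "ideal" slide
lamp = np.array([1.0, 0.8, 0.6])                      # lamp favours red
lightbox = np.ones((4, 4, 3)) * lamp                  # blank-lightbox shot
slide = true_img * lamp                               # what the camera sees

corrected = flat_field_correct(slide, lightbox)
# After correction the image is proportional to the ideal slide again
print(np.allclose(corrected, true_img / true_img.max()))  # True
```

The same idea extends to the sensor correction: build `flat_mask` from your spectrally flat reference areas and apply `channel_flatten` after the flat-field step.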

That all said, I am also of the opinion that you'll be OK regardless if your analysis is going to be plain old statistical classification, rather than something that does band math or ratioing to quantify things from spectral values. Things that "look different" in the original slide will still look different in the photo, and thus should classify as different. But the convolutions with the lamp and camera sensor may distort spectral relations enough to compromise anything quantitative.
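To see why classification survives but band ratios don't, here is a small numeric sketch. The band values and the per-band gains are invented for illustration; the point is only that a per-band multiplicative distortion rescales a ratio by a constant, preserving the ordering of classes but not the absolute values.

```python
import numpy as np

# Hypothetical true per-pixel band values (e.g. red, NIR)
# for three land-cover types
true_bands = np.array([[0.2, 0.8],   # healthy vegetation
                       [0.4, 0.5],   # stressed vegetation
                       [0.6, 0.3]])  # bare soil

# Lamp + sensor apply a different (unknown) gain to each band
gains = np.array([1.3, 0.7])
observed = true_bands * gains

true_ratio = true_bands[:, 1] / true_bands[:, 0]
obs_ratio = observed[:, 1] / observed[:, 0]

print(true_ratio)  # [4.   1.25 0.5 ]
print(obs_ratio)   # every value scaled by 0.7 / 1.3

# Absolute ratios change, so quantitative thresholds are off...
print(np.allclose(obs_ratio, true_ratio * 0.7 / 1.3))  # True
# ...but the rank ordering of the classes is preserved
print((np.argsort(true_ratio) == np.argsort(obs_ratio)).all())  # True
```

So a classifier that separates classes by relative differences still works, while a ratio calibrated against published thresholds would need the corrections described above first.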

MC5