
Currently I am using supervised classification algorithms to calculate the land use within an image, the issue being that the images are sourced from high-oblique photography. I know the focal length and dimensions of the camera used, and the GPS position from which the photographs were taken.

I am looking for resources that will enable me to calculate the areas of each land-use class from the information I already have. The idea is that, knowing the geometry of the photograph, I should be able to work out the ground distance represented by each pixel. Thoughts?

Mark Iliffe

6 Answers


The general approach to this problem is orthorectification: transforming the image into a Cartesian coordinate space in which each cell represents approximately the same spatial extent. The OSSIM package provides orthorectification, as does GRASS. You'll need ancillary data to perform the rectification, and it can be a time-consuming process, but it yields excellent results. After transformation to a projected space, computing area is a straightforward operation in any GIS.
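To illustrate the last step: once the image is orthorectified into a projected CRS, every cell covers (roughly) the same ground area, so per-class area is just a pixel count times the cell area. A minimal sketch, assuming you have already exported the classified raster as a NumPy array and know its cell size (function and variable names here are made up for the example):

```python
import numpy as np

def class_areas(classified, cell_size_m):
    """Sum the area of each land-use class in an orthorectified raster.

    classified  : 2-D array of integer class labels, one per cell
    cell_size_m : ground size of one (square) cell in metres --
                  only meaningful after orthorectification, when
                  each cell covers about the same spatial extent
    """
    labels, counts = np.unique(classified, return_counts=True)
    cell_area = cell_size_m ** 2
    return {int(k): int(n) * cell_area for k, n in zip(labels, counts)}

# e.g. a 3x3 classified raster at 10 m resolution
grid = np.array([[1, 1, 2],
                 [1, 2, 2],
                 [2, 2, 2]])
areas = class_areas(grid, 10.0)  # {1: 300.0, 2: 600.0} square metres
```

Any GIS will do the same tally for you (e.g. a raster report tool), but the arithmetic really is this simple once the data is in a projected space.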

scw

If you can work in a GIS environment, then GRASS can do the job. We sketched the procedure in our paper:

M. Neteler, D. Grasso, I. Michelazzi, L. Miori, S. Merler, and C. Furlanello, 2005: An integrated toolbox for image registration, fusion and classification. International Journal of Geoinformatics, 1(1), pp. 51-61 http://www.grassbook.org/neteler/papers/neteler2005_IJG_051-061_draft.pdf

markusN

I think you'll need a DEM too. Using DEMs to Register Oblique Photographs and Web Cameras is interesting.

Update: On second thought, if you have multiple photos of the same scene, you could create a 3D point cloud of known objects without a DEM, the way Photosynth does. I've never seen Photosynth used as a GIS tool, though.

Update 2: Here's a Photosynth built from aerial images.

Kirk Kuykendall

Usually, as scw suggests, it is best to orthorectify and then perform the analysis. Fortunately, numerous GIS packages support such a workflow. For a brief quantitative background on orthorectification, the webpage Review of Digital Image Orthorectification Techniques by Dr. F. I. Okeke provides an overview of several popular algorithms.

If you are building an automated or augmented-reality system that performs such measurements, two open-source projects provide useful foundational libraries: NASA Vision Workbench and Photosynth's cousin, Bundler.

glennon

If the oblique photograph was taken from a sufficient distance, the transformation of the central areas of the image can be approximated with a parallel projection, which is much easier to deal with than the full perspective projection.

As Kirk mentioned, you'll need a DEM to account for the varying slope of the terrain. Of course, this will only give approximate results, so if you need more accurate measurements, take the orthorectification approach from scw's answer.
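To make the parallel-projection idea concrete, here is a back-of-the-envelope sketch of the ground footprint of one pixel near the image centre. It assumes flat ground, a camera far enough away that scale is roughly constant, and that you know the sensor's pixel pitch and the depression angle of the line of sight; all names and numbers are illustrative, not from the question:

```python
import math

def approx_pixel_footprint(distance_m, focal_mm, pixel_pitch_um,
                           depression_deg):
    """Approximate ground footprint of one central pixel under a
    parallel-projection assumption over locally flat ground.

    distance_m     : camera-to-scene distance along the line of sight
    focal_mm       : lens focal length
    pixel_pitch_um : physical size of one sensor pixel
    depression_deg : angle of the line of sight below horizontal
                     (90 = straight down, small = very oblique)
    """
    # ground sample distance perpendicular to the view direction
    gsd = distance_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
    # along the view direction the ground is foreshortened, so one
    # pixel spans a longer ground distance
    along = gsd / math.sin(math.radians(depression_deg))
    return gsd, along

# e.g. 2 km away, 50 mm lens, 6 um pixels, 30 degree depression
across, along = approx_pixel_footprint(2000.0, 50.0, 6.0, 30.0)
# across = 0.24 m, along = 0.48 m per pixel
```

Note how quickly the along-view footprint grows as the depression angle shrinks; that anisotropy (and any relief) is exactly what full orthorectification corrects for.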

mkadunc

Oblique photographs will generally need a DEM; otherwise, georeferencing tools warp the image very badly.

You might want to read this article... www.uibk.ac.at/geographie/personal/corripio/.../corripio04_ijrs.pdf

timemirror
  • Your URL does not appear complete. Can you please edit it ? – GuillaumeC Aug 26 '10 at 11:40
  • The referenced article is: Corripio, J. G.: 2004, Snow surface albedo estimation using terrestrial photography, International Journal of Remote Sensing . 25(24), 5705–5729. http://www.uibk.ac.at/geographie/personal/corripio/publications/corripio04_ijrs.pdf – Matěj Šmíd Apr 15 '14 at 11:44