
Principles and Applications of Laser Photogrammetry



Abstract

This paper explains the working principles and applications of laser photogrammetry. The word photogrammetry derives from Greek roots: "photos" (light), "gramma" (something drawn or recorded), and "metron" (measure); hence, measuring with photographs. Photogrammetry can thus be defined as a three-dimensional coordinate measuring technique that uses photographs as the fundamental medium for measurement. It estimates the geometric and semantic properties of objects from images or from observations made by similar sensors, such as conventional cameras, laser scanners and smartphones. Measurements are made to support location recognition and the interpretation of an image or scene. The technology has been used for decades to extract information about an object from an image; for instance, autonomous cars need a good understanding of the objects in front of them. The working principle is aerial triangulation, in which photographs are taken from at least two different locations and lines of sight are developed from each camera position to points on the object. This paper mainly addresses the applications of laser photogrammetry: recent advances of photogrammetry in robot vision; remote sensing applications and how that technology aligns with photogrammetry; and the application of photogrammetry in computer vision, including the relationship between the two fields. The robotics application of photogrammetry is a young discipline in which maps of the environment are built and interpretations of the scene are performed, usually with small drones that deliver accurate results, updated maps and terrain models. Another application is remote sensing which, as the name indicates, is performed remotely, without touching the object or scene. Remote sensors are used to cover large areas and wherever contact-free sensing is desired, for instance where objects are inaccessible, delicate or toxic to touch. Remote sensors can therefore be placed as far away as satellites in orbit, and photogrammetry plays an important role in interpreting the resulting scenes and objects. The third application of photogrammetry is in computer vision, where the applications addressed in this paper include image-based cartography, aerial reconnaissance and simulated environments.

Introduction

Photogrammetry means obtaining reliable information about physical objects and their environments by measuring and interpreting photographs. It is the science and art of determining qualitative and quantitative characteristics of objects from the images recorded on photographic emulsions. Laser photogrammetry and 3D laser scanning are different technologies suited to different project purposes: in 3D laser scanning, a laser captures each individual measurement directly, whereas photogrammetry extracts 3D information from a series of photographs with overlapping pixels. Qualitative observations include identifying deciduous versus coniferous trees, delineating geologic landforms, and inventorying existing land use, whereas quantitative observations concern size, orientation, and position. Objects are identified and described by observing the shape, tone and texture of the photographic image.

Vertical photographs, exposed with the optical axis vertical or as nearly vertical as possible, are the principal kind of photographs used for mapping [2]; the geometry of a single vertical aerial photograph is illustrated in Figure 1. In a vertical aerial photograph, the exposure station of the photograph is the front nodal point of the camera lens. The nodal points are points in the camera lens system such that any light ray entering the lens and passing through the front nodal point will emerge from the rear nodal point travelling parallel to the incident ray [2]. On the object side of the camera lens, the positive photograph is placed such that the object point, the image point, and the exposure station all lie on the same straight line [2]. The line through the lens nodal points and perpendicular to the image plane intersects the image plane at the principal point [2]. The distance measured from the rear nodal point to the negative principal point, or from the front nodal point to the positive principal point, is equal to the focal length f of the camera lens [2].


The ratio between an image distance on the photograph and the corresponding horizontal ground distance is the scale of an aerial photograph [1]. For a correct photographic scale ratio, the image distance and the ground distance must be measured in parallel horizontal planes [1]. However, this condition rarely occurs, because most photographs are tilted and ground surfaces are not flat horizontal planes. As a result, scale varies throughout the format of a photograph and can be defined only at a point; it is given by equation 1 [1]. Equation 1 is used to calculate scale on vertical photographs and is exact for truly vertical photographs [1].

                                              S = f / (H - h)                                   (1)

where:
         S = photographic scale at a point
         f = camera focal length
         H = flying height above datum
         h = elevation of the point above datum

                          Figure 1: Geometry of a single vertical aerial photograph [2]
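As a small illustration of equation 1, the following Python sketch evaluates the scale at a point; the focal length, flying height and elevation are assumed sample values, not data taken from the text.

    def photo_scale(f, H, h):
        """Photographic scale at a point (equation 1): S = f / (H - h).
        All arguments must be in the same units (e.g. metres)."""
        return f / (H - h)

    # Assumed sample values: 152 mm lens, 1500 m flying height above datum,
    # point elevation 300 m above datum.
    S = photo_scale(0.152, 1500.0, 300.0)
    print(f"scale = 1:{1 / S:.0f}")  # about 1:7895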

For flight planning calculations, approximate scale distances are adequate for direct measurement of ground distances. Average scale is found using equation 2 [1].

                                              S_ave = f / (H - h_ave)                           (2)

where h_ave is the average ground elevation in the photo. Referring to the vertical photograph shown in Figure 2 below, the approximate horizontal length of the line AB is given by equation 3 [1].

                                              D = d (H - h_ave) / f                             (3)

where
         D = horizontal ground distance
         d = photograph image distance

Figure 2: Horizontal ground coordinates from a single vertical photograph [1]
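Equations 2 and 3 can be sketched the same way; the numbers below are invented sample values used only to exercise the formulas.

    def average_scale(f, H, h_ave):
        """Average photo scale (equation 2): S_ave = f / (H - h_ave)."""
        return f / (H - h_ave)

    def ground_distance(d, f, H, h_ave):
        """Approximate horizontal ground length of a line (equation 3):
        D = d (H - h_ave) / f, where d is the measured image distance."""
        return d * (H - h_ave) / f

    # Assumed sample values: 152 mm lens, 2000 m flying height, 400 m average
    # terrain elevation, 84.5 mm image distance between two points.
    f, H, h_ave = 0.152, 2000.0, 400.0
    print(f"average scale = 1:{1 / average_scale(f, H, h_ave):.0f}")     # 1:10526
    print(f"AB on ground = {ground_distance(0.0845, f, H, h_ave):.1f} m")  # 889.5 m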

Again, to obtain accurate measurements of horizontal distances and angles, the scale variations caused by elevation differences between points must be considered [2].

Horizontal ground coordinates are calculated by dividing each photocoordinate by the true photographic scale at the image point [2]. In equation form, the horizontal ground coordinates of any point are given by equation 4.

                                              X_p = x_p (H - h_p) / f                           (4)
                                              Y_p = y_p (H - h_p) / f

where
         X_p, Y_p = ground coordinates of point p
         x_p, y_p = photocoordinates of point p
         h_p = ground elevation of point p

Equation 4 uses a coordinate system defined by the photocoordinate axes, with the origin at the photo principal point and the x-axis typically through the mid-side fiducial in the direction of flight [2]. The local ground coordinate axes are placed parallel to the photocoordinate axes, with the origin at the ground principal point [2]. These equations are exact for truly vertical photographs and are typically used for near-vertical photographs. After the horizontal ground coordinates of points A and B in Figure 2 are computed, the horizontal distance between them is given by equation 5.

                                              D_AB = [(X_A - X_B)^2 + (Y_A - Y_B)^2]^0.5        (5)

The elevations h_A and h_B must be known before the horizontal ground coordinates can be calculated [2]. If a stereo solution is used, there is no need to know the elevations h_A and h_B [2]. The solution given by equation 5 is not an approximation, because the effect of scale variation caused by unequal elevations is included in the computation of the ground coordinates [2].
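A minimal Python sketch of equations 4 and 5 follows; the photocoordinates and elevations are assumed values chosen only for illustration.

    import math

    def ground_coords(x_p, y_p, f, H, h_p):
        """Ground coordinates of an image point (equation 4):
        X_p = x_p (H - h_p) / f and Y_p = y_p (H - h_p) / f."""
        k = (H - h_p) / f
        return x_p * k, y_p * k

    # Assumed photocoordinates (metres) and ground elevations for A and B.
    f, H = 0.152, 2000.0
    XA, YA = ground_coords(0.042, 0.051, f, H, 350.0)
    XB, YB = ground_coords(-0.060, 0.038, f, H, 420.0)

    # Horizontal ground distance between A and B (equation 5).
    D_AB = math.hypot(XA - XB, YA - YB)
    print(f"D_AB = {D_AB:.1f} m")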

Another characteristic of the perspective geometry recorded by an aerial photograph is relief displacement. Relief displacement is evaluated when analyzing or planning mosaic or orthophoto projects [4]. It can also be used in photo interpretation to obtain the heights of vertical objects [4]. This displacement is shown in Figure 3 and is calculated by equation 6 [2].

                                              d = r_top h_t / (H - h_base)                      (6)

where:
         d = image displacement
         r_top = radial distance from the principal point to the image of the top of the object
         H = flying height above datum

Since the image displacement of a vertical object can be measured on the photograph, equation 6 can be solved for the height of the object, h_t, which is given by equation 7.

                                              h_t = d (H - h_base) / r_top                      (7)

where:
         h_base = elevation of the object base above datum

Figure 3: Relief displacement on a vertical photograph [1]
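The relief-displacement relationship can be made concrete with the short Python sketch below; the flying height, base elevation, radial distance and displacement are assumed sample values.

    def relief_displacement(h_t, r_top, H, h_base):
        """Image displacement of a vertical object (equation 6):
        d = r_top h_t / (H - h_base)."""
        return r_top * h_t / (H - h_base)

    def object_height(d, r_top, H, h_base):
        """Object height recovered from a measured displacement (equation 7):
        h_t = d (H - h_base) / r_top."""
        return d * (H - h_base) / r_top

    # Assumed sample values: flying height 1200 m above datum, object base at
    # 250 m, image of the top 70 mm from the principal point, 3.5 mm displacement.
    H, h_base, r_top, d = 1200.0, 250.0, 0.070, 0.0035
    print(f"object height = {object_height(d, r_top, H, h_base):.1f} m")  # 47.5 m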

All photogrammetric procedures reduce to two basic problems, resection and intersection, which can be solved by analog or analytical methods. Resection is the process of recovering the exterior orientation of a single photograph from image measurements of ground control points [4]. In a spatial resection, the image rays from full ground control points (horizontal position and elevation known) are made to resect through the lens nodal point (exposure station) to their image positions on the photograph [4]. The resection process restores the spatial position and angular orientation the photograph had when the exposure was taken. Intersection is the process of photogrammetrically determining the spatial position of ground points by intersecting image rays from two or more photographs [4]. If the interior and exterior orientation parameters of the photographs are known, conjugate image rays can be projected from the photograph through the lens nodal point (exposure station) to the ground space. Two or more image rays intersecting at a common point determine the horizontal position and elevation of that point. Map positions of points are determined by the intersection principle from correctly oriented photographs.

The analog solution is one method of solving these fundamental photogrammetric problems: it uses optical or mechanical instruments to form a scale model of the image rays recorded by the camera [2]. However, the physical constraints of the analog mechanism, the calibration, and unmodeled systematic errors limit the function and accuracy of the solution [4]. The analytical solution instead employs a mathematical model of the image rays recorded by the camera [2]. The collinearity condition equations include all interior and exterior orientation parameters required to solve the resection and intersection problems accurately [2]. Analytical solutions consist of systems of collinearity equations relating measured image photocoordinates to the known and unknown parameters of the photogrammetric problem [4].
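The text names the collinearity condition equations without writing them out, so the sketch below shows their standard form: projecting a ground point into photocoordinates given the exterior orientation of the photograph. The sequential rotation convention and all numeric values are assumptions made for illustration.

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Sequential rotation matrix M = M_kappa M_phi M_omega
        (one common photogrammetric convention; angles in radians)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
        Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
        Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
        return Mk @ Mp @ Mo

    def collinearity(f, ground_pt, station, omega, phi, kappa):
        """Photocoordinates (x, y) of a ground point seen from an exposure
        station, via the standard collinearity equations."""
        m = rotation_matrix(omega, phi, kappa)
        u, v, w = m @ (np.asarray(ground_pt) - np.asarray(station))
        return -f * u / w, -f * v / w

    # Assumed example: near-vertical photo, 152 mm lens, station 1500 m up.
    x, y = collinearity(0.152, (120.0, -80.0, 300.0), (0.0, 0.0, 1500.0),
                        omega=0.001, phi=-0.002, kappa=0.010)
    print(x, y)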

Working Principles of Photogrammetry: Aerotriangulation

Aerial triangulation is the process of determining the X, Y and Z ground coordinates of individual points based on measurements from photographs [4]. The aerotriangulation geometry along a strip of photography is illustrated in Figure 6 [4]. Photogrammetric control extension requires that a group of photographs be oriented with respect to one another in a continuous strip or block configuration [4]. A pass point is an image point that is shared by three consecutive photographs (two consecutive stereomodels) along a strip. The exterior orientation of any photograph that does not contain ground control is determined entirely by the orientation of the adjacent photographs. The benefits of aerial triangulation include: minimizing delays and hardships due to adverse weather conditions; not requiring access to much of the property within the project area; and minimizing field surveying in difficult areas, such as marshes, extreme slopes and hazardous rock formations. Aerial triangulation is classified into three categories according to:

  1. Photogrammetric projection method (analog or analytical) [4].
  2. Strip or block formation and adjustment method (sequential or simultaneous) [4].
  3. Basic unit of adjustment (strip, stereomodel, or image rays) [4].

Figure 6: Aerotriangulation geometry [4]

Applications of Photogrammetry

Robot Vision

Robot vision systems are an important part of modern robots, enabling the machine to interact with and understand its environment and to take the necessary measurements. The instantaneous feedback from the vision system, which is the main requirement of most robots, is achieved by applying very simple vision-processing functions and/or through hardware implementation of algorithms [3]. One example of this application is close-range photogrammetry, which is used in time-constrained modes in robotics and for target tracking [3].

Photogrammetry and Remote Sensing Applications

Remote sensing collects information about objects and features from imagery without touching them. It is mainly used to collect and derive 2D data, such as slope, from all types of imagery. Photogrammetry is associated with the production of topographic mapping, generally from conventional aerial stereo photography [5]. Today photographs are taken with high-precision aerial cameras, and most maps are compiled by stereophotogrammetric methods. The advantage of aerial photogrammetry and topographic mapping is that it is cost effective where ground survey methods cannot cover large areas. The resulting map shows land contours, site conditions and details over large areas. Conventional aerial photography can produce accurate mapping at scales as large as 1:200; this accuracy is achieved by employing improved cameras and photogrammetric instrumentation.


After an area has been authorized for mapping, the planning and procurement of photography are the first steps in the mapping process. The necessary calculations are made on a flight design worksheet. The flight planner chooses the best available base map on which to delineate the designed flight lines. The final plan gives the location, length, and spacing of the flight strips.

Computer Vision

The goals of computer vision are object recognition, navigation, and object modeling. Today's object-recognition algorithms operate according to the data flow shown in Figure 7 below. Image features are extracted from the image intensity data: regions of uniform intensity, boundaries along high image-intensity gradients, curves of local intensity maxima or minima (line features), and other image-intensity events defined by specific filters (corners) [4,6]. These features are processed further to obtain higher-level measurements. For instance, part of a step intensity boundary may be approximated by a straight-line segment, and the properties of the resulting line are used to define the boundary segment. The next step in recognition is the formation of a model for each class: the algorithms store the feature measurements for a particular object, or a set of object instances for a given class, and then use statistical classification methods to classify a set of features in a new image according to the stored feature measurements [4,6]. A sketch of the low-level feature-extraction step appears after the figure captions below.

The second goal of computer vision is navigation. The goal of navigation is to provide guidance to an autonomous vehicle, which must follow a defined path accurately. In the case of a road, the aim is to maintain a smooth path with the vehicle staying safely within the defined lanes; in the case of off-road travel, the vehicle must maintain a given route, and navigation is carried out with respect to landmarks [6].

The third goal of computer vision is object modeling, in which a complete and accurate 3D model of an object is recovered [6]. The model can then be used for different applications, such as supporting object recognition and image simulation. In image simulation, the image intensity data is projected onto the surface of the object to provide a realistic image of the object from any desired viewpoint [6].

Computer vision methods are also used for defect detection and assessment, as illustrated in Figure 8. The top part of Figure 8 shows the general computer vision pipeline, from low-level to high-level processing. Correspondingly, the bottom part of Figure 8 groups specific methods for the detection, classification and assessment of defects on civil infrastructure into pre-processing methods, feature-based methods, model-based methods, pattern-based methods, and 3D reconstruction [6]. These methods cannot be considered fully separately; rather, they build on top of each other. For example, extracted features are learned to support the classification process in pattern-based methods [6].

Figure 7: The operational structure of object recognition algorithms.

Figure 8: Categorizing general computer vision methods (top) and specific methods for the detection, classification and assessment of defects on civil infrastructure (bottom).
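As a concrete, hedged illustration of the low-level feature-extraction stage described above, the following Python sketch uses OpenCV (a library choice not made in the original text) to extract edge and corner features; the image file name is a placeholder.

    import cv2

    # Placeholder input; substitute a real image path.
    image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    # Boundaries along high image-intensity gradients (edge features).
    edges = cv2.Canny(image, threshold1=100, threshold2=200)

    # Image-intensity events picked out by a corner-specific filter.
    corners = cv2.goodFeaturesToTrack(image, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)

    print(int(edges.sum() / 255), "edge pixels,",
          0 if corners is None else len(corners), "corner features")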

Future Innovations and Developments

These days, close-range photogrammetry uses digital cameras whose capabilities yield moderate to very high measurement accuracies. To improve robots' vision capabilities, two alternatives have been suggested and studied for the future: (a) hardware implementation of more complex image-analysis functions with consideration of photogrammetric methodology, or (b) design of a robot "insect-level" intelligent system based on a great variety of different, simultaneous, but simple sensor functions [3]. In computer vision, the long-term goal with respect to aerial reconnaissance applications is change detection [6]. In this case, the changes of interest from one observation to the next are significant changes, that is, significant from the human point of view [6]. Thus, in order to detect only significant change, it is essential to be able to characterize human perceptual organization and representation.

Conclusion

When deciding whether to deploy one technology over the other for a given project, the question is how large an area must be covered and how accurately it needs to be captured. Photogrammetry can easily acquire large-scale data, can record dynamic scenes, records images that document the measuring process, and can process data automatically, possibly in real time. Its disadvantages are the need for a light source, flaws in measurement accuracy, and occlusion and visibility constraints. The performance of photogrammetry can be improved using computer simulation, which allows more automated deployment in places that are difficult to operate in. Its enormous contribution to heritage conservation cannot be overstated, since photogrammetry is particularly well suited to monitoring purposes, such as construction sites.

Works Cited

  1. Hamilton Research Group. "Chapter 10: Principles of Photogrammetry." Physical Principles of Remote Sensing, 3rd ed., Cambridge University Press, New York, 2013.
  2. Lillesand, Thomas M., et al. Remote Sensing and Image Interpretation. 6th ed., John Wiley & Sons, 2008.
  3. Gruen, Armin. "Recent Advances of Photogrammetry in Robot Vision." ISPRS Journal of Photogrammetry and Remote Sensing, vol. 47, 1992, pp. 307-323. doi:10.1016/0924-2716(92)90021-Z.
  4. Linder, Wilfried. Digital Photogrammetry: A Practical Course. Springer Berlin Heidelberg, 2009.
  5. CICES. "Photogrammetry and Remote Sensing." Chartered Institution of Civil Engineering Surveyors, www.cices.org/.
  6. Heller, A., and J. L. Mundy. "The Evolution and Testing of a Model-Based Object Recognition System." Computer Vision and Applications, edited by R. Kasturi and R. Jain, IEEE Computer Society Press, 1991.

 
