Review Article
Sensor modelling and camera calibration for close-range photogrammetry

https://doi.org/10.1016/j.isprsjprs.2015.10.006

Abstract

Metric calibration is a critical prerequisite to the application of modern, mostly consumer-grade digital cameras for close-range photogrammetric measurement. This paper reviews aspects of sensor modelling and photogrammetric calibration, with attention being focussed on techniques of automated self-calibration. Following an initial overview of the history and the state of the art, selected topics of current interest within calibration for close-range photogrammetry are addressed. These include sensor modelling, with standard, extended and generic calibration models being summarised, along with non-traditional camera systems. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed, after which aspects of calibration and measurement accuracy are covered. Whereas camera self-calibration is largely a mature technology, there is always scope for additional research to enhance the models and processes employed with the many camera systems nowadays utilised in close-range photogrammetry.

Introduction

This article provides a review of the state of the art in photogrammetric camera calibration with specific focus upon close-range applications. It also analyses selected recent developments in camera technology, sensor modelling, automated calibration approaches and photogrammetric accuracy aspects. Although fully automatic camera calibration is now a routine procedure for many users of photogrammetry, the calibration issue continues to receive research attention due to the ongoing emergence of new camera systems. These developments in turn open up new practical applications and requirements.

Analytical camera calibration by means of the self-calibrating bundle adjustment was developed in the early 1970s (Brown, 1971) and became a standard tool in close-range photogrammetry (CRP) systems in the 1980s (Fraser and Brown, 1986, Wester-Ebbinghaus, 1988). In contrast to aerial camera calibration in topographic photogrammetry, the approaches adopted for close-range applications, and in particular for industrial vision metrology, are characterised by

  • High-accuracy requirements in object space, translating to image space precision at the 0.1 pixel level or better.

  • Single and multi-sensor imaging configurations with a large variety of different camera and lens types.

  • Geometrically irregular imaging networks (e.g. multi-station convergent imaging).

  • An absence of control point and camera station constraints.

  • Project-specific operational conditions.

Based on the utility and flexibility of self-calibration, a wide variety of cameras ranging from point-and-shoot, to ‘bridge’ cameras, to DSLRs (digital single-lens reflex cameras) and to purpose-built metric cameras are nowadays employed across the broad arena of CRP applications. However, it has long been recognised that cameras and lenses of high metric quality provide the best results, due in large part to the fact that they lend themselves to calibration of high metric integrity and stability. Also widely recognised are well-proven rules that apply in self-calibration to the recovery of camera parameters of optimal accuracy and reliability, where maximum reliability in this context implies a minimisation of the impact of gross observation errors upon the estimation of calibration parameters. These rules include:

  • Adoption of multi-station convergent imaging networks incorporating a diversity of camera roll angles (i.e. mixed landscape and portrait orientations).

  • Fixed zoom/focus and aperture settings with no lens change or adjustment during image acquisition, with a preference for unifocal lenses.

  • Well-defined object points amenable to high-accuracy image measurement (e.g. targets on a test field, often realised as automation-friendly coded targets, or natural feature points supporting high-accuracy image matching).

  • Sufficient variation in scale within the imagery to support the reliable recovery of the camera interior orientation (IO) parameters of principal distance and principal point offset (e.g. scale variation provided through the provision of depth variation in the object point array).

  • Comprehensive and unsystematic coverage of the available sensor format in order to enhance determination of lens distortion parameters.

  • Incorporation of high observational redundancy to enhance the reliability of calibration parameter estimation (automated processes mean that a trivial amount of extra work is involved in measuring 10-, 20- or 50-image networks); a simple redundancy count for such networks is sketched after this list.

  • Adoption of an appropriate and complete camera model within the self-calibrating bundle adjustment, which in turn should incorporate robust outlier detection.
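To make the redundancy recommendation concrete, the following is a minimal sketch of a redundancy count for a single-camera, free-network self-calibration. The network sizes, the 10-term additional parameter set and the treatment of the seven inner constraints are assumptions chosen only to illustrate the order of magnitude involved, not figures taken from the paper.

```python
def self_calibration_redundancy(n_image_points, n_images, n_object_points,
                                n_additional_params=10, n_inner_constraints=7):
    """Rough redundancy count for a single-camera self-calibrating bundle adjustment.

    Assumes 2 observations (x, y) per measured image point, 6 exterior
    orientation unknowns per image, 3 coordinates per object point and a
    free-network datum supplied by 7 inner constraints (counted here with
    the observations). The default of 10 additional parameters corresponds
    to a typical Brown-type set (c, xp, yp, K1-K3, P1-P2, plus two
    affinity/shear terms).
    """
    observations = 2 * n_image_points + n_inner_constraints
    unknowns = 6 * n_images + 3 * n_object_points + n_additional_params
    return observations - unknowns

# Example: a 30-image network over 100 coded targets, ~90 targets seen per image.
print(self_calibration_redundancy(n_image_points=30 * 90,
                                  n_images=30, n_object_points=100))   # 4917
```

Even this modest hypothetical network yields several thousand degrees of freedom, which is why adding further images costs little yet markedly strengthens parameter estimation and outlier detection.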

Although the workflow for digital camera self-calibration can be conducted as an on-line or off-line process, it is generally the latter which is employed. Image acquisition and image measurement/data processing invariably happen at different times and in different locations, though with fully automatic camera calibration the issue of human operator skill level is fortunately no longer a factor influencing results.

The introduction in the early 1990s of the so-called ‘still video’ digital cameras had a dramatic and positive impact upon CRP, immediately extending the fields of application and providing the means for full measurement automation in niche areas such as industrial and engineering metrology. The evolution of consumer-grade digital cameras, and especially DSLRs, afforded a significant acceleration of the processing pipeline in off-line measurement applications. Moreover, subpixel image operators, e.g. centroiding or template matching, provided a new level of image measurement accuracy, routinely better than the 0.1 pixel level, which corresponds to <1 μm. This in turn led to a new challenge in calibration, namely the question of whether the physical relationship between the camera body, lens and sensor could provide metric stability at the same accuracy level.
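By way of illustration of the centroiding operators mentioned above, the sketch below computes an intensity-weighted centroid over a small target window, which under favourable imaging conditions provides subpixel estimates of a target centre. The window handling and the simple background subtraction are assumptions made for the example rather than a description of any particular metrology system.

```python
import numpy as np

def weighted_centroid(window, background=0.0):
    """Intensity-weighted centroid of a bright target in a small image window.

    `window` is a 2-D array of grey values; `background` is an assumed
    background level subtracted before weighting. Returns (row, col) in
    window coordinates with subpixel resolution.
    """
    w = np.clip(window.astype(float) - background, 0.0, None)
    total = w.sum()
    if total == 0:
        raise ValueError("window contains no signal above background")
    rows, cols = np.indices(w.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

# Example: a synthetic 11x11 Gaussian blob centred at (5.3, 4.7).
y, x = np.mgrid[0:11, 0:11]
blob = 255.0 * np.exp(-((y - 5.3) ** 2 + (x - 4.7) ** 2) / 4.0)
print(weighted_centroid(blob))   # close to (5.3, 4.7)
```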

Beyond incorporating integrated removable memory, digital cameras provide the option of a direct transfer of image data to a processing unit that can then process the images automatically in real time. The processor can be integrated into the camera body or placed in a separate computing unit, e.g. a connected computer. In both cases, the acquired images can be processed at a certain frame rate, say between 1 Hz and 500 Hz, depending upon the complexity of the scene being imaged, the amount of data and the data interface specifications. This on-line capability has led to a large number of new applications, for instance in industrial process control, medical surgery and robotics. These applications are usually characterised by a set of cameras that are mounted in fixed positions relative to each other, which are generally employed for a given period of time without re-calibration. Consequently, the cameras have to be calibrated in advance, with the calibration remaining stable over a long period of time. An aspect of note here is that the parameters of IO are often calculated independently from the exterior orientation (EO) parameters, thus removing the issue of possible correlations between interior and exterior parameters in the initial bundle adjustment, as will be discussed in later sections of this paper. The question of pre-calibration also arises for those single-camera applications where a suitable imaging network geometry cannot be provided, e.g. for unmanned aerial vehicle (UAV) flights above flat terrain or in very narrow imaging configurations with small intersection angles in object space.
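In such on-line systems, a typical pattern is to load a previously determined IO (camera matrix and distortion coefficients) and apply it to each incoming frame before measurement. The sketch below illustrates this with OpenCV's undistortion call; the parameter values are placeholders and not taken from the paper.

```python
import numpy as np
import cv2

# Hypothetical pre-calibration, e.g. produced earlier by an off-line
# self-calibration run (the values below are placeholders only).
K = np.array([[2400.0,    0.0, 960.0],
              [   0.0, 2400.0, 600.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.12, 0.03, 0.0002, -0.0001, 0.0])   # k1, k2, p1, p2, k3

def undistort_frame(frame):
    """Remove lens distortion from an incoming frame using the stored IO."""
    return cv2.undistort(frame, K, dist)

# In the on-line loop, each grabbed frame would pass through undistort_frame()
# before image measurement and subsequent pose estimation.
```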

Whereas the foregoing discussion has concentrated on cameras that might well be regarded as ‘standard’, since these are the most commonly adopted in practical photogrammetry, the development of digital cameras has reached a point where it is nearly impossible to list all available classes, permutations and combinations of cameras and imaging sensors. Among this huge variety, many camera types have found relevance and application in CRP. Categories include:

  • Multi-camera set-ups (e.g. stereo cameras, multi-camera systems).

  • High-speed cameras, e.g. with 12,000 fps at 1000 × 1000 pixel resolution.

  • Panoramic cameras equipped either with fisheye or spherical lenses, or driven by a scanning unit (linear or circular).

  • Zoom lenses, as are commonly integrated into consumer-grade cameras.

  • Fisheye lenses.

  • Miniaturised cameras (e.g. mobile phones, endoscopes, augmented reality glasses etc.).

  • Cameras with mirrors.

  • Multispectral and thermal imagers.

  • 3D imaging devices (e.g. time-of-flight cameras, light-field cameras).

As long as cameras can be modelled according to the principle of a pinhole camera, with additional parameters to model lens distortion, they can be calibrated via standard photogrammetric means. Extended calibration approaches need to be applied in cases where specific accuracy demands require more sophisticated parameter models, where changes in IO occur during acquisition of an image series, or where the physical imaging model cannot be described by central projection, as with wide field-of-view fisheye cameras.
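To make the ‘pinhole plus additional parameters’ idea concrete, the following is a minimal sketch that projects an object point by central projection and then perturbs the result with a single radial distortion term. The sign convention, the parameter values and the restriction to one K1 term are assumptions made purely for illustration, not a model prescribed by the paper.

```python
import numpy as np

def project(X, R, X0, c, xp=0.0, yp=0.0, k1=0.0):
    """Central projection of an object point with one radial distortion term.

    X, X0 : 3-vectors (object point, projection centre); R : 3x3 rotation
    from object to camera frame; c : principal distance; (xp, yp) :
    principal point offset; k1 : first radial distortion coefficient.
    Sign conventions vary between implementations; here the image plane is
    assumed to lie at distance c in front of the projection centre.
    """
    Xc = R @ (np.asarray(X, float) - np.asarray(X0, float))
    x = c * Xc[0] / Xc[2]            # ideal (collinearity) image coordinates
    y = c * Xc[1] / Xc[2]
    r2 = x * x + y * y
    dr = 1.0 + k1 * r2               # 'additional parameter': K1 radial term
    return xp + x * dr, yp + y * dr

# Example: camera at the origin looking along +Z (identity rotation assumed),
# 35 mm principal distance, object point roughly 5 m away.
print(project([0.2, -0.1, 5.0], np.eye(3), [0.0, 0.0, 0.0], c=0.035, k1=-50.0))
```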

The state of the art in classical photogrammetric camera calibration has been addressed in several publications and textbooks, e.g. Fryer (1996), Remondino and Fraser (2006), Fraser (2013) and Luhmann et al. (2014). These reports are based upon experience gained over the 25 years that digital cameras have been employed for photogrammetric measurement, and in them specific parameter sets, configurations and analysis techniques for camera calibration are recommended. The reported approaches are also implemented within well-known photogrammetric systems, providing both a dedicated camera calibration capability and a mechanism for 3D accuracy enhancement via the self-calibrating bundle adjustment.

The task of camera calibration has also been addressed within the large computer vision (CV) community. CV researchers have concentrated on developing easy-to-use and fully automated calibration procedures based primarily on linear approaches with simplified imaging models (e.g. Tsai, 1986, Lenz, 1987, Zhang, 2000). Open-source or free software packages, exemplified by the OpenCV library, have become very popular. A key tool of these approaches is the flat chessboard-type object field, which can be produced easily (e.g. by printing a pattern on paper). As will be mentioned later in this paper, however, the use of flat objects for camera calibration is disadvantageous in terms of accuracy and projective coupling (correlation) between parameters, and it is generally a suboptimal approach to producing a metrically adequate, scene-independent calibration (see Section 3.1).
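As an illustration only, a chessboard calibration of the kind just described can be run with a few OpenCV calls, as sketched below. The board size, image directory and termination criteria are assumptions for the example, and, as noted in the text, a single flat board remains a suboptimal substitute for a 3D, multi-scale self-calibration network.

```python
import glob
import numpy as np
import cv2

board = (9, 6)                       # inner corners of the assumed chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)  # unit squares

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.jpg"):        # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board, None)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error [pixel]:", rms)
```

Note that the low RMS reprojection error typically reported by such a run reflects internal precision only; it does not guarantee a scene-independent calibration of the kind discussed in the following paragraph.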

From a photogrammetric point of view, the issue of camera calibration could in many respects be said to be solved. As mentioned above, following the well-proven recommendations for object design, imaging configuration, image measurement and bundle adjustment will lead to a successful and accurate calibration of most of the cameras used today in CRP. Moreover, with modern structure-from-motion (SfM) approaches, calibration might appear to be a minor issue, since camera parameters can be readily calculated as part of the projection matrix of every single image. In addition, the use of a huge number of feature points, rather than a small number of targets or discrete object points, can yield a high internal precision (small standard error) in the recovery of the calibration parameters within a bundle adjustment. Measures of high precision, which at first glance might indicate a very good quality of calibration, orientation and 3D point generation, can, however, mask deficiencies in external accuracy and reliability. It is important to keep in mind that, within computer vision, SfM approaches to 3D shape and camera pose determination in multi-view stereo networks often imply the recovery of camera ‘calibrations’ which are neither metrically precise, scene independent nor image invariant. Such characteristics are unacceptable for photogrammetric calibration. In addition, SfM software is mostly operated as a ‘black box’ solution, without any clear explanation of the implemented imaging model and calculation procedures. Hence, from an engineering point of view it is difficult to analyse results and to avoid weaknesses in the project set-up. Nevertheless, SfM methods combined with photogrammetric algorithms offer a powerful technique for target-free self-calibration, as discussed in Section 4.

Aspects of current self-calibration in CRP are summarised within the following sections. Camera modelling is briefly discussed, after which example non-traditional imaging systems that require extended calibration models are touched upon. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed. Finally, aspects of calibration and measurement accuracy are covered.

Section snippets

Standard models

Most camera modelling approaches are based on the introduction of additional parameters for modelling deviations between the ideal mathematical model of central perspective and the physical reality of the camera. The models account for perturbations to collinearity due to the lens, sensor, electronics and camera body. While a large number of additional parameter sets has been published in the literature, the parameter set for lens distortion introduced by Brown (1971) has become an accepted
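For reference, the radial and decentring lens distortion terms of this Brown-type parameter set are commonly written as below, with \(\bar{x}, \bar{y}\) the image coordinates reduced to the principal point and \(r\) their radial distance; affinity and shear terms that are sometimes added to the set are omitted here.

```latex
\begin{aligned}
\Delta x &= \bar{x}\,\bigl(K_1 r^2 + K_2 r^4 + K_3 r^6\bigr)
          + P_1\,\bigl(r^2 + 2\bar{x}^2\bigr) + 2 P_2\,\bar{x}\bar{y},\\
\Delta y &= \bar{y}\,\bigl(K_1 r^2 + K_2 r^4 + K_3 r^6\bigr)
          + P_2\,\bigl(r^2 + 2\bar{y}^2\bigr) + 2 P_1\,\bar{x}\bar{y},
\qquad r^2 = \bar{x}^2 + \bar{y}^2 .
\end{aligned}
```

These corrections are applied to the reduced image coordinates within the collinearity equations, alongside the interior orientation parameters of principal distance and principal point offset.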

Point distribution in object space

As mentioned above, the size and design of the object or test field is of significant importance for camera calibration (Fraser, 1996, Remondino and Fraser, 2006). Except in cases of special application, e.g. for single image calibration by extended space resection or calibration of stereo pairs with only one exposure, multiple images are employed for self-calibration. In principle, it does not matter if the camera is moved around an object or if the object is moved around a fixed camera

Calibration accuracy

The question of camera calibration accuracy has two distinct parts. The first concerns the accuracy of recovery of the individual parameters of the calibration model, while the second relates to the accuracy impact in any subsequent photogrammetric measurement, i.e. the effect of calibration errors on object point determination. Within a self-calibrating bundle adjustment, measures of precision are essentially the only internal means by which accuracy can be quantified. Thus, we may see the

Concluding remarks

Camera calibration has traditionally been and continues to be the single most significant factor determining the accuracy potential, and to a large extent also the reliability of close-range photogrammetric measurement. For much of the present era of digital photogrammetry, the well-known ‘standard’ physical calibration model attributable to Brown (1971) has served metric purposes for consumer-grade and professional photogrammetric cameras alike, and this model is yet to be found wanting in

References (35)

  • Barazzetti, L., 2011a. Automatic tie point extraction from markerless image blocks in close-range photogrammetry. Ph.D....

  • Barazzetti, L. et al., 2011. Targetless camera calibration. Int. Arch. Photogramm. Rem. Sens. Spatial Inform. Sci., Trento.

  • Beyer, H., 1987. Some aspects on the geometric calibration of CCD-cameras. In: Proc. ISPRS Intercommission Symposium on...

  • Brown, D.C., 1971. Close-range camera calibration. Photogramm. Eng.

  • El-Hakim, S., 1986. Real-time image metrology with CCD cameras. Photogramm. Eng. Rem. Sens.

  • Fraser, C.S., 1987. Limiting error propagation in network design. Photogramm. Eng. Rem. Sens.

  • Fraser, C.S. Network design.

  • Fraser, C.S., 1997. Digital camera self-calibration. ISPRS J. Photogramm. Rem. Sens.

  • Fraser, C.S., 2013. Automatic camera calibration in close-range photogrammetry. Photogramm. Eng. Rem. Sens.

  • Fraser, C.S. et al., 1986. Industrial photogrammetry – new developments and recent applications. Photogramm. Rec.

  • Fryer, J.G. Camera calibration.

  • Hastedt, H., Luhmann, T., 2015. Investigations on the quality of the interior orientation and its impact in object...

  • Hastedt, H., Luhmann, T., Raguse, K., 2005. Three-dimensional acquisition of high-dynamic processes with a...

  • Kahlmann, T. et al., 2006. Calibration for increased accuracy of the range imaging camera SwissRanger™. Int. Arch. Photogramm. Rem. Sens. Spatial Inform. Sci.

  • Lenz, R.K., 1987. Lens distortion corrected CCD camera calibration with co-planar calibration points for real-time...

  • Richter, K. et al., 2013. Development of a geometric model for an all-reflective camera system. ISPRS J. Photogramm. Rem. Sens.

  • Schneider, D. et al., 2009. Validation of geometric models for fisheye lenses. ISPRS J. Photogramm. Rem. Sens.