
Point cloud densification
Mona Forsman
February 11, 2011
Master’s Thesis in Engineering Physics, 30 ECTS-credits
Supervisor at CS-UmU: Niclas Börlin
Examiner: Christina Igasto
UMEÅ UNIVERSITY
DEPARTMENT OF PHYSICS
SE-901 87 UMEÅ
SWEDEN
Abstract
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many
cases, the result is a sparse point cloud, unevenly distributed over the scene.
After determining the coordinates of the same point in two images of an object, the 3D position
of that point can be calculated using knowledge of camera data and relative orientation.
A model created from an unevenly distributed point cloud may lose detail and precision in the
sparse areas. The aim of this thesis is to study methods for densification of point clouds.
This thesis contains a literature study of different methods for extracting matched point pairs,
and an implementation of Least Squares Template Matching (LSTM) with a set of improvement
techniques. The implementation is evaluated on a set of scenes of varying difficulty.
LSTM is implemented by working on a dense grid of points in an image and Wallis filtering is
used to enhance contrast. The matched point correspondences are evaluated with parameters from
the optimization in order to keep good matches and discard bad ones. The purpose is to find details
close to a plane in the images, or on plane-like surfaces.
A set of extensions to LSTM is implemented with the aim of improving the quality of the matched
points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and
by Multiple Seed Points (MSP) for the same template, which are tested to see if they converge to
the same result. Wallis filtering is used to increase the contrast in the images. The quality of the
extracted points is evaluated with respect to correlation with other optimization parameters and
by comparing the standard deviations in the x- and y-directions. If a point is rejected, there is the
option to try again with a larger template size, called Adaptive Template Size (ATS).
Contents

1 Introduction
1.1 Background
1.2 Aims
1.3 Related Work
1.4 Organization of Thesis

2 Theory
2.1 The 3D modeling process
2.2 Projective geometry
2.2.1 Homogeneous coordinates
2.2.2 Transformations of P2
2.3 The pinhole camera model
2.4 Stereo view geometry
2.4.1 Epipolar geometry
2.4.2 The Fundamental Matrix, F
2.4.3 Triangulation
2.4.4 Image rectification
2.5 Estimation
2.5.1 Statistics
2.5.2 Optimization
2.5.3 Rank N-1 approximation
2.6 Least Squares Template Matching

3 Overview of methods for densification
3.1 Introduction
3.2 Various kinds of input
3.2.1 Video data
3.2.2 Laser scanner data
3.2.3 Still images
3.3 Matching
3.3.1 SIFT, Scale-Invariant Feature Transform
3.3.2 Maximally Stable Extremal Regions
3.3.3 Distinctive Similarity Measure
3.3.4 Multi-View Stereo reconstruction algorithms
3.4 Quality of matches

4 Implementation
4.1 Method
4.2 Implementation details
4.2.1 Algorithm overview
4.2.2 Adaptive Template Size (ATS)
4.2.3 Wallis filtering
4.2.4 Transformed Normalized Cross Correlation (TNCC)
4.2.5 Multiple Seed Points (MSP)
4.2.6 Acceptance criteria
4.2.7 Error codes
4.3 Choice of template size
4.3.1 Calculation of z-coordinate from perturbed input data

5 Experiments
5.1 Image sets
5.1.1 Image pair A, the loading dock
5.1.2 Image pair B, “Sliperiet”
5.1.3 Image pair C, “Elgiganten”
5.2 Experiments
5.2.1 Experiment 1, Asphalt
5.2.2 Experiment 2, Brick walls
5.2.3 Experiment 3, Door
5.2.4 Experiment 4, Lawn
5.2.5 Experiment 5, Corrugated plate

6 Results
6.1 Experiments
6.1.1 Asphalt
6.1.2 Brick walls
6.1.3 Door
6.1.4 Lawn
6.1.5 Corrugated plate

7 Discussion
7.1 Evaluation of aims
7.2 Additional analysis
7.2.1 Point cloud density
7.2.2 Runtime
7.2.3 Error codes
7.2.4 Homographies

8 Conclusions

9 Future work

10 Acknowledgements

References

A Homographies
A.1 Loading dock
A.2 Building Sliperiet
A.3 Building Elgiganten

B Abbreviations
List of Figures

2.1 Similarity, affine and projective transform of the same pattern.
2.2 Schematic view of a pinhole camera.
2.3 The epipolar line connects the cameras’ focal points.
2.4 Lens distortion
2.5 Normal distribution
2.6 Correlation
4.1 Grid points
4.2 Seed points
4.3 Template and search patch
4.4 Grass without Wallis filtering
4.5 Grass with Wallis filtering
4.6 Normalized Cross Correlation
4.7 Search part for multiple seed points
5.1 Left image of image pair A
5.2 Right image of image pair A
5.3 Left image of image pair B
5.4 Right image of image pair B
5.5 Left image of image pair C
5.6 Right image of image pair C
6.1 Detected points of example 1 c+w
6.2 Asphalt in A.5
6.3 Wallis filtered asphalt in A.5
6.4 Point cloud of areas A.1, A.2 and A.4
6.5 Result of MSP in area A.1
6.6 Results of Experiment 3
6.7 Results of Experiment 3
6.8 Results of Experiment 3
6.9 Histogram of used template sizes in Experiment 3
6.10 Used seed points in Experiment 4
6.11 Used seed points in Experiment 4
6.12 Used seed points in Experiment 5
6.13 Used seed points in Experiment 5
6.14 Histogram of template sizes in Experiment 5
7.1 Histogram of error codes
Chapter 1
Introduction
1.1 Background
Several automatic methods exist for creating 3D point clouds extracted from sets of images. In many
cases, they create sparse point clouds which are unevenly distributed over the objects. The task of
this thesis is to evaluate, compare and develop routines and theory for densification of 3D point
clouds obtained from images.
Point clouds are used in 3D modeling for the generation of accurate models of real-world items or
scenes. If the point cloud is sparse, the detail of the model will suffer, as well as the precision of
approximated geometric primitives; therefore, methods for densification are of interest to study.
1.2 Aims
The aims of this thesis are to evaluate some methods for generation of point clouds and to find
possible refinements that result in more detailed 3D reconstructions of, for example, buildings
and ground. Some important aspects are speed, robustness, and quality of the output.
The following reconstruction cases are of special interest:
– Some 3D points on a surface, e.g. a wall of a building, have been reconstructed. The goal is to
extract more points on the wall to determine intrusions/extrusions from e.g. window frames.
– A sparse 3D point cloud has been automatically reconstructed on the ground. The ground
topography is represented as a 2.5D mesh. The goal is to extract more points to obtain a
topography of higher resolution.
1.3 Related Work
Several techniques for constructing detailed 3D point clouds exist. With the aim of documenting
and reconstructing detailed heritage objects, the papers by El-Hakim et al. [2004], Grün et al.
[2004], Remondino et al. [2008] and Remondino et al. [2009] describe reconstruction of detailed
models from image data.
Methods based on video input are described in Gallup et al. [2007] and Frahm et al. [2009].
The papers by D’Apuzzo [2003] and Blostein and Huang [1987] deal with quality evaluation of
the generated point clouds.
A prototype of a computer application for photogrammetric reconstruction of textured 3D models
of buildings is presented in Fors Nilsson and Grundberg [2009], where the necessity of point cloud
densification is noted.
An overview of the literature on 3D reconstruction algorithms is given by Börlin and Igasto [2009],
and a deeper evaluation of algorithms can be found in Seitz et al. [2006].
1.4 Organization of Thesis
The focus of this thesis is creating dense point clouds of reliable points extracted from digital images.
In Chapter 2, theory from photogrammetry, 3D reconstruction and statistics is introduced. Chapter 3
presents an overview of other methods used for point matching, densification of point clouds and
related subjects. The implemented method and its implementation details are described in
Chapter 4. A set of experiments designed to evaluate the implemented methods is presented in
Chapter 5. The results of the experiments are presented in Chapter 6, followed by discussion and
evaluation of the aims in Chapter 7. Conclusions are drawn in Chapter 8, future work is outlined
in Chapter 9, and Chapter 10 contains the acknowledgements.
Chapter 2
Theory
Photogrammetry deals with finding the geometric properties of objects, starting from a set of images
of the object. As mentioned in McGlone et al. [2004], the subject of photogrammetry was born
in the 1850s, when the ability to take aerial photographs from hot-air balloons inspired techniques
for making measurements in aerial photographs, with the aim of making maps of forests and terrain.
The technique is today used in applications such as computer and robot vision, see for
example Hartley and Zisserman [2003], in creating models of objects and landscapes, and in creating
models of buildings for simulators and virtual reality.
2.1 The 3D modeling process
The 3D modeling process can be described in various ways depending on methods and aims. The
following structure is based on Börlin and Igasto [2009]:
1. Image acquisition is the task of planning the camera network, taking photos, calibrating the
cameras, and rectifying the images. Different kinds of input images require different handling;
some examples are images from single cameras, images from a stereo rig, different angles
between the camera positions, video data, and combinations with laser scanner data of
the objects.
2. Feature point detection in images. Feature points are points that are likely to be detected in
corresponding images.
3. Matching of feature points is required to know which points correspond to each other
in the pair of images.
4. Relative orientation between images calculates the relative positions of the cameras where
the images were taken.
5. Triangulation is used to calculate the 3D point corresponding to each pair of matched points.
6. Co-registration is done to organize point clouds from different sets in the same coordinate
system.
7. Point cloud densification is used to find more details and retrieve more points for better
estimation of planes and geometry.
8. Segmentation and structuring in order to separate different objects in the images.
9. Texturing the model with textures extracted from the images makes the model photorealistic and
complete.
This thesis focuses on step 7, in close connection with steps 2 and 3.
2.2 Projective geometry
Projective geometry is an extension of Euclidean geometry that includes, e.g., ideal points
corresponding to the intersections of parallel lines. The following introduction to the subject covers
the concepts necessary to understand the pinhole camera model and geometric transformations. The
notation in this section follows Hartley and Zisserman [2000].
2.2.1 Homogeneous coordinates
A line in the 2D plane determined by the equation
$$ax + by + c = 0$$
can be represented as
$$\mathbf{l} = [a, b, c]^T,$$
meaning that the line consists of all points $\mathbf{x} = [x, y]^T$ that satisfy $ax + by + c = 0$. In
homogeneous coordinates the point becomes $[x, y, 1]^T$. Two lines $\mathbf{l}$ and $\mathbf{l}'$ intersect in the point $\mathbf{x}$ given by the
cross product of the lines,
$$\mathbf{x} = \mathbf{l} \times \mathbf{l}'.$$
In 3D, a space point is similarly given by
$$\mathbf{p} = [x, y, z, 1]^T$$
and a plane by
$$\mathbf{l} = [a, b, c, d]^T.$$
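
As a concrete illustration (a minimal numpy sketch, not code from the thesis), the intersection of two lines represented in homogeneous coordinates is a single cross product:

    import numpy as np

    # Two lines in homogeneous form l = [a, b, c]^T, i.e. ax + by + c = 0.
    l1 = np.array([1.0, -1.0, 0.0])   # the line y = x
    l2 = np.array([0.0, 1.0, -2.0])   # the line y = 2

    # Their intersection is the cross product of the two line vectors.
    x = np.cross(l1, l2)

    # Convert from homogeneous [x, y, w]^T to Euclidean by dividing by w.
    print(x / x[2])   # -> [2. 2. 1.], i.e. the point (2, 2)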
2.2.2 Transformations of P2
Transformations of the projective plane P2 are classified into four classes: isometries, similarity
transformations, affine transformations and projective transformations. A transformation is performed
by multiplying a transformation matrix H with the points x to be transformed,
$$\mathbf{x}' = H\mathbf{x}.$$
Figure 2.1 shows the effects of some different transformations.
Isometries
Isometries are the simplest kind of transformations. They consist of a translation and a rotation of
the plane, which means that distances and angles are preserved. The transformation is represented
by
$$\mathbf{x}' = \begin{bmatrix} R & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix} \mathbf{x},$$
where R is a 2D rotation matrix including optional mirroring, and t is a 2 × 1 vector determining
the translation. This transformation has three degrees of freedom, corresponding to rotation angle
and translation.
6 Chapter 2. Theory
Figure 2.1: Similarity, affine and projective transform of the same pattern.
Similarity transformations
Combining the rotation of an isometry with a scaling factor s gives a similarity transform
$$\mathbf{x}' = \begin{bmatrix} sR & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix} \mathbf{x}.$$
A similarity transform preserves angles between lines, the shape of an object and the ratios between
distances and areas. This transform has four degrees of freedom.
Affine transformations
An affine transformation combines the similarity transform with a deformation of the plane, which
in block matrix form is
$$\mathbf{x}' = \begin{bmatrix} A & \mathbf{t} \\ \mathbf{0}^T & 1 \end{bmatrix} \mathbf{x},$$
where $A$ is a composition of rotation matrices and a deformation matrix, which is diagonal and
contains the scaling factors for $x$ and $y$,
$$D = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}.$$
$A$ is then composed as $A = R(\theta)R(-\phi)DR(\phi)$. An affine transformation preserves parallel lines,
ratios of lengths of parallel line segments and ratios of areas, as well as directions in the rotated
plane. The affine transformation has six degrees of freedom.
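
As a quick numerical check of this decomposition (a sketch assuming numpy; not code from the thesis): the singular value decomposition A = UDV^T yields the factors by regrouping, with R(θ) = UV^T and R(φ) = V^T:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 2))

    # SVD gives A = U D V^T with U, V orthogonal and D diagonal.
    U, d, Vt = np.linalg.svd(A)
    D = np.diag(d)

    # Regroup as A = (U V^T)(V D V^T) = R(theta) R(-phi) D R(phi).
    # (If det(U) or det(V) is -1 the factors include a mirroring;
    # the sign fix-up is omitted in this sketch.)
    R_theta = U @ Vt            # R(theta)
    R_phi = Vt                  # R(phi), so R(-phi) = R_phi.T

    A_rebuilt = R_theta @ R_phi.T @ D @ R_phi
    print(np.allclose(A, A_rebuilt))   # True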
Projective transformations
Projective transformations give perspective views, where objects far away appear smaller than close
ones. The transformation is represented by
$$\mathbf{x}' = \begin{bmatrix} A & \mathbf{t} \\ \mathbf{v}^T & v \end{bmatrix} \mathbf{x},$$
where the vector $\mathbf{v}^T$ determines the transformation of the ideal points in which parallel lines
intersect. The projective transformation has eight degrees of freedom; only the ratios between the
elements of the matrix matter. This makes it possible to determine the transformation between two
planes from four pairs of corresponding points.
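
A minimal sketch of this (assuming numpy; the function name homography_from_points is a hypothetical helper, not from the thesis) estimates the plane-to-plane transformation from four point pairs using the standard direct linear transform (DLT):

    import numpy as np

    def homography_from_points(src, dst):
        """Estimate H such that dst ~ H src (homogeneous), from >= 4 point pairs."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The solution is the right singular vector of the smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return Vt[-1].reshape(3, 3)

    # Four corners of the unit square mapped to a perspective-skewed quadrilateral.
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = [(0, 0), (2, 0.1), (1.8, 1.2), (0.2, 1.0)]
    H = homography_from_points(src, dst)

    p = H @ np.array([1.0, 0.0, 1.0])   # map the corner (1, 0)
    print(p / p[2])                     # ~ [2.0, 0.1, 1.0]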
2.3 The pinhole camera model
Figure 2.2: Schematic view of a pinhole camera. The image plane is shown in front of the camera
centre to simplify the figure; in real cameras the image plane (the image sensor) is behind the
camera centre.
A simple camera model is the pinhole camera. A 3D point X in world coordinates is mapped onto
the 2D point x on the image plane of the camera where the ray between X and the camera centre
C intersects the plane. The focal distance f is the distance between the image plane and the camera
centre, which is the focal point of the lens. The principal axis is the ray orthogonal to the image
plane through the camera centre; it intersects the image plane in the principal point. The principal
plane is the plane parallel to the image plane through the camera centre.
Figure 2.2 shows a schematic view of the pinhole camera model.
The projection x of a 3D point X onto the image plane of the camera is given by
$$\mathbf{x} = P\mathbf{X},$$
where the camera matrix $P$ is the $3 \times 4$ matrix
$$P = KR[I \mid -\mathbf{C}].$$
The camera matrix describes a camera setup composed of internal and external camera parameters.
The internal parameters are the focal length $f$ of the camera, the principal point $(P_x, P_y)$, the
resolution $m_x, m_y$ and an optional skew $s$. The focal length and the principal point are converted
to pixels using the resolution parameters: $\alpha_x = f m_x$, $\alpha_y = f m_y$ is the focal length in pixels and
$x_0 = m_x P_x$, $y_0 = m_y P_y$ is the principal point in pixels. The internal parameters are stored in the
camera calibration matrix
$$K = \begin{bmatrix} \alpha_x & s & x_0 \\ & \alpha_y & y_0 \\ & & 1 \end{bmatrix}. \quad (2.1)$$
The external parameters determine the camera position relative to the world. These are the position
of the camera centre $\mathbf{C}$ and the orientation of the camera, given by a rotation matrix $R$.
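
A minimal sketch of the projection pipeline (assuming numpy; all parameter values below are made up for illustration, not calibration data from the thesis):

    import numpy as np

    # Hypothetical internal parameters.
    f, mx, my = 0.05, 8000.0, 8000.0      # focal length [m], resolution [pixels/m]
    x0, y0 = 2000.0, 1500.0               # principal point [pixels]
    K = np.array([[f * mx, 0.0, x0],
                  [0.0, f * my, y0],
                  [0.0, 0.0, 1.0]])       # calibration matrix, eq. (2.1) with s = 0

    # Hypothetical external parameters.
    R = np.eye(3)                         # camera looking along the world z-axis
    C = np.array([0.0, 0.0, -10.0])       # camera centre 10 m behind the origin

    # P = K R [I | -C]
    P = K @ R @ np.hstack([np.eye(3), -C[:, None]])

    X = np.array([1.0, 0.5, 0.0, 1.0])    # homogeneous 3D point
    x = P @ X
    print(x / x[2])                       # pixel coordinates, [2040, 1520, 1]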
2.4 Stereo view geometry
2.4.1 Epipolar geometry
The relationship between two images of the same object, taken from different points of view, is
described by the epipolar geometry of the images. The two camera centres, C and C', span
the baseline, see Figure 2.3 (left). Each camera has an epipole, e and e', see Figure 2.3 (right), which is
the projection of the other camera's centre onto its image plane.
Every plane determined by an arbitrary point X and the baseline between C and C' is an epipolar plane.
The line of intersection between an image plane and an epipolar plane is called an epipolar line. When
the projection x of a point X is known in one image, the corresponding projection x' is restricted to lie
on the epipolar line in the second image, which is the projection of the ray through C and x. This
line passes through the epipole e'.
Figure 2.3: The epipolar line connects the cameras’ focal points.
2.4.2 The Fundamental Matrix, F
The fundamental matrix is an algebraic representation of the epipolar geometry. The fundamental
matrix $F$ is defined by
$$\mathbf{x}'^T F \mathbf{x} = 0$$
for all corresponding points. $F$ is a $3 \times 3$ matrix of rank 2. Given at least seven
or eight pairs of points, the fundamental matrix can be calculated using the seven-point or the
eight-point algorithm, respectively; see Hartley and Zisserman [2000] for details.
The relationship between the fundamental matrix and the camera matrices is given by
$$\mathbf{x} = P\mathbf{X}, \qquad \mathbf{x}' = P'\mathbf{X}, \qquad F = [\mathbf{e}']_\times P' P^+,$$
where $[\mathbf{e}']_\times$ is the skew-symmetric matrix representing the cross product with $\mathbf{e}'$ as a matrix-vector
multiplication, and $P^+$ is the pseudoinverse of the matrix $P$.
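
The relationship can be verified numerically (a numpy sketch with two hypothetical cameras, not data from the thesis):

    import numpy as np

    def skew(v):
        # Matrix [v]_x such that skew(v) @ u == np.cross(v, u).
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    # Two simple cameras, P = [I | 0] and P' = [I | t], with identity calibration.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    t = np.array([1.0, 0.0, 0.0])               # baseline along the x-axis
    P2 = np.hstack([np.eye(3), t[:, None]])

    # The epipole e' is the image in the second camera of the first camera centre.
    C = np.array([0.0, 0.0, 0.0, 1.0])
    e2 = P2 @ C

    # F = [e']_x P' P^+
    F = skew(e2) @ P2 @ np.linalg.pinv(P1)

    # The epipolar constraint x'^T F x = 0 holds for any 3D point.
    X = np.array([0.3, -0.2, 5.0, 1.0])
    x1, x2 = P1 @ X, P2 @ X
    print(x2 @ F @ x1)                          # ~ 0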
2.4.3 Triangulation
When the camera matrices have been calculated, and the coordinates of a point correspondence are
known, the 3D point can be calculated by solving the equation system
$$\mathbf{x} = P\mathbf{X}, \qquad \mathbf{x}' = P'\mathbf{X}$$
for $\mathbf{X}$.
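
The system only holds up to scale, so it is usually solved with the homogeneous linear (DLT) method described by Hartley and Zisserman [2000]. The thesis does not state which solver it uses; the following numpy sketch is therefore only illustrative:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # x ~ PX gives two independent linear equations in X per image.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # X is the null vector of A, i.e. the last right singular vector.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X / X[3]

    # Check on the simple camera pair above and a known point.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 5.0, 1.0])
    x1, x2 = P1 @ X_true, P2 @ X_true

    print(triangulate(P1, P2, x1 / x1[2], x2 / x2[2]))   # ~ [0.3, -0.2, 5.0, 1.0]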
2.4.4 Image rectification
The lens in a camera causes some distortion in the images: straight lines near the borders of the
image are projected as curves, because the 3D world is mapped onto a 2D sensor through a spherical
lens. This error can be reduced by rectification of the image. In this work, only pre-rectified images
are used. Figure 2.4 illustrates the effect of lens distortion and rectification. The topics of image
rectification and lens distortion are thoroughly explained in Hartley and Zisserman [2000] and
Remondino [2006].
Figure 2.4: The grid to the left is curved as in a lens-distorted image; the right image shows the
rectified grid.
2.5 Estimation
This section mostly follows the notation of Montgomery et al. [2004].
2.5.1 Statistics
Origins of errors
In tasks involving measurements, some errors usually occur. The quality of the measurements is
affected by systematic errors (bias) and unstructured errors (variance).
In photogrammetry, the usual sources of error are the camera calibration, the quality of the point
extraction, the quality of the model function, and numerical errors in the triangulation and optimization.
Normal and $\chi^2$ distributions
The normal distribution describes how many random errors affect a result. The most probable
values are close to the expected value $\mu$; few values are far away. The standard deviation $\sigma$ (and
the variance $\sigma^2$) describes the dispersion of the values. The distribution of a normal random variable
is defined by the probability density function of $N(\mu, \sigma^2)$,
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad -\infty < x < \infty,$$
where $\mu$ is the expectation value of the distribution and $\sigma^2$ is the variance.
Figure 2.5: Normal distribution with expectation value µ = 5 and variance σ 2 = 10.
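
As a quick sanity check of the density function (a sketch assuming numpy and scipy are available; not code from the thesis):

    import numpy as np
    from scipy.stats import norm

    mu, sigma = 5.0, np.sqrt(10.0)    # the parameters of Figure 2.5

    def normal_pdf(x, mu, sigma):
        return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

    x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 9)
    print(np.allclose(normal_pdf(x, mu, sigma), norm.pdf(x, loc=mu, scale=sigma)))  # True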
The $\chi^2$ distribution is defined by
$$p_y(y, n) = \frac{y^{n/2 - 1}\, e^{-y/2}}{2^{n/2}\, \Gamma(n/2)}, \qquad n \in \mathbb{N},\; y > 0,$$
where $\Gamma(\cdot)$ is the Gamma function. A particular case is the sum of squares of independent random
variables $z_i \sim N(0, 1)$,
$$y = \sum_{i=1}^{n} z_i^2,$$
which is $\chi^2$ distributed with $n$ degrees of freedom [Förstner and Wrobel, 2004].
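
A small Monte Carlo illustration of this fact (assuming numpy and scipy; not code from the thesis):

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    n = 5

    # Sums of n squared standard-normal variables ...
    y = (rng.standard_normal((100_000, n)) ** 2).sum(axis=1)

    # ... follow a chi-squared distribution with n degrees of freedom.
    print(y.mean(), chi2.mean(n))   # both ~ 5
    print(y.var(), chi2.var(n))     # both ~ 10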
Variance and standard deviation
The variance is a measure of the width of a distribution. It is defined as
$$\sigma^2 = \int_{-\infty}^{\infty} x^2 f(x)\,dx - \mu^2,$$
where $f(x)$ is the probability density function of the distribution and $\mu$ is the expected value. The
standard deviation $\sigma$ is the square root of the variance.
Covariance
Covariance is a measure of how two variables interact with each other. A covariance of zero implies
that the variables are uncorrelated. For two variables $x$ and $y$ with expected values $E(x) = \mu_x$ and
$E(y) = \mu_y$, the covariance is
$$\mathrm{Cov}(x, y) = E(xy) - \mu_x \mu_y.$$
Correlation coefficients
The correlation coefficient is the normalized covariance and determines the strength of the linear
relationship between the variables. It is given by
$$\rho_{xy} = \frac{\mathrm{Cov}(x, y)}{\sqrt{\sigma_x^2 \sigma_y^2}}.$$
From observations, the covariance is estimated using $S_{xy}$, the corrected sum of cross products,
defined by
$$S_{xy} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}).$$
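
These quantities are straightforward to compute (a numpy sketch on synthetic data, not data from the thesis):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.standard_normal(10_000)
    y = 0.8 * x + 0.6 * rng.standard_normal(10_000)   # constructed so that rho = 0.8

    Sxy = np.sum((x - x.mean()) * (y - y.mean()))     # corrected sum of cross products
    cov = Sxy / (len(x) - 1)                          # sample covariance
    rho = cov / (x.std(ddof=1) * y.std(ddof=1))       # sample correlation coefficient

    print(rho, np.corrcoef(x, y)[0, 1])               # both ~ 0.8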
Correlated and non-correlated errors
A positive correlation coefficient between x and y implies that given a small value of x, a small
value of y is likely. If the coefficient is zero, there is no linear relationship. In a plot of the
observations, ρ ≈ 0 when the points show no directional trend, while points lying close to a line
with positive slope give a ρ close to 1.
Figure 2.6: Values in the left image are correlated with a correlation coefficient close to 1. Values in
the right image are not correlated, and hence the correlation coefficient is close to 0.
The covariance matrix
The covariance matrix is composed of the variances of the individual variables and their covariances,
$$C = \begin{bmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{bmatrix}.$$
Error propagation of linear combinations of random variables
As presented in Förstner and Wrobel [2004, ch. 2.2.1.7.3], a set of $n$ normally distributed random
variables with covariance matrix $C_{xx}$ can be collected in a vector $\mathbf{x}$; a linear combination
$\mathbf{y} = A\mathbf{x}$ of these variables is then normally distributed with covariance matrix
$$C_{yy} = A C_{xx} A^T.$$
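
A short numerical illustration of the propagation law (a numpy sketch with made-up matrices, not from the thesis):

    import numpy as np

    rng = np.random.default_rng(3)

    # Covariance of the input variables and a linear mapping y = A x.
    Cxx = np.array([[2.0, 0.5],
                    [0.5, 1.0]])
    A = np.array([[1.0, -1.0],
                  [0.5, 2.0]])

    # Propagated covariance, C_yy = A C_xx A^T.
    Cyy = A @ Cxx @ A.T

    # Monte Carlo check: sample x ~ N(0, Cxx), map through A, compare covariances.
    x = rng.multivariate_normal(np.zeros(2), Cxx, size=200_000)
    y = x @ A.T
    print(np.allclose(np.cov(y.T), Cyy, atol=0.05))   # True, up to sampling noise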