Estimating rotations and translations from optical targets

One of the main problems in tracking and navigation involves using data from targets rigidly attached to a body to determine the orientation and location of the body relative to a reference attitude and location. In many applications, the targets are optical and the data are recorded with a computer vision system. For example, in a variety of biomechanics applications, sets of optical targets are placed on two anatomical segments and the relative displacements of the targets are then used to measure the relative rotation {\bf R} and translation {\bf d} of the two segments. It has long been known that the measurement of the rotation is difficult and prone to error. Several studies in the biomechanics community have discussed and quantified these errors for a variety of situations (e.g., see [1, 2, 3]). We devote this page to discussing various methods used to estimate {\bf R} and {\bf d} from sets of experimental data. These estimates are denoted by an asterisk, i.e., {\bf R}^{*} and {\bf d}^{*}. This problem is sometimes known as the Wahba problem [4] or the Procrustes problem [5].

Measurements and a cost function

Suppose we have a series of N current position measurement vectors \left ( {\bf m}_1, \, \ldots, \, {\bf m}_N \right ) associated with N vectors of reference position measurements, \left ( {\bf r}_1, \, \ldots, \, {\bf r}_N \right ). As shown in Figure 1, the reference and current measurements typically pertain to a set of targets, or landmarks, that are rigidly attached to the bodies of interest, and the motion of these objects is detected by cameras. The targets often come in fours so that they can be used to form a triad of vectors.

Figure 1. Illustration of the reference and current position measurements of a set of optical targets (or landmarks) attached to a rigid body. For the motion  {\bf m}_3 = {\bf R}{\bf r}_3 + {\bf d}, the screw axis, the rotation axis  {\bf s}, and the angle of rotation,  \theta, are also shown.

Typically, the position measurements are all taken with respect to the same frame, which is often known as the laboratory frame:

(1)   \begin{eqnarray*} && {\bf r}_K = \sum_{i \, \, = \, 1}^3 r_{{K}_i} {\bf E}_i \quad \left( K = 1, \, \ldots, \, N\right), \\ \\ \\ && {\bf m}_K = \sum_{i \, \, = \, 1}^3 m_{{K}_i} {\bf E}_i. \end{eqnarray*}

We wish to determine the rotation tensor {\bf R} = {\bf R}^{*} and translation {\bf d} = {\bf d}^{*} that solves the N equations

(2)   \begin{equation*} {\bf m}_K = {\bf R}{\bf r}_K + {\bf d}. \end{equation*}

Because of measurement errors, no solution \left( {\bf R}^{*}, \, {\bf d}^{*}\right) that satisfies all of these equations exactly will exist in general. The best we can do is to find the solution that minimizes the cost function

(3)   \begin{equation*} W = \frac{1}{2} \sum_{K \, \, = \, 1}^N \alpha_K\lnorm {\bf m}_K - {\bf R}{\bf r}_K - {\bf d} \rnorm^2 , \end{equation*}

where \alpha_K are weights assigned to the individual measurements. Because \mathbf{R} is a rotation tensor, and thus proper-orthogonal, the cost function (3) is subject to the constraints

(4)   \begin{eqnarray*} && {\bf R}^T{\bf R} = {\bf I}, \\ \\ && \mbox{det}\left({\bf R}\right) = 1. \end{eqnarray*}
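
To make the estimation problem concrete, a minimal MATLAB sketch that evaluates the cost function (3) for candidate estimates of {\bf R} and {\bf d} is given below. The function name costW is illustrative rather than part of any published code, and this sketch (like the others on this page) relies on MATLAB's implicit array expansion (R2016b or later).

function W = costW(R, d, r, m, alpha)
    % r and m are 3-by-N arrays whose columns hold the reference and current
    % position measurements, and alpha is a 1-by-N vector of weights.
    residuals = m - (R*r + d);                    % columnwise fit errors, cf. (2)
    W = 0.5*sum(alpha.*sum(residuals.^2, 1));     % weighted sum of squared norms, (3)
end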

An alternative decomposition of the motion given by (2) expresses this transformation in terms of the parameters of a helical (or screw) motion. As denoted in Figure 1, the four parameters for this motion are the rotation axis {\bf s} that is parallel to the screw axis; the angle of rotation, \theta, about this axis; the position vector of a point on the screw axis, \brho; and the translation \tau along this axis. It is straightforward to show that \brho and \tau can be determined by solving the equation

(5)   \begin{equation*} \left( {\bf I} - {\bf R}\right) \brho + \tau {\bf s} = {\bf d}. \end{equation*}

However, the tensor {\bf I} - {\bf R} is not invertible, so the solution for \brho is not uniquely defined and indeed can be indeterminate when {\bf R} = {\bf I}. One possibility is to choose \brho to be the position vector from the origin to the screw axis that is normal to {\bf s}. Then, from (5), we find that

(6)   \begin{eqnarray*} && {\brho} = \frac{1}{2}\left( {\bf d}_\perp + \cot\left( \frac{\theta}{2}\right) {\bf s}\times{\bf d} \right), \\ \\[0.10in] && \tau = {\bf s}\cdot{\bf d}, \end{eqnarray*}

where

(7)   \begin{equation*} {\bf d}_\perp = {\bf d} - \left({\bf d}\cdot{\bf s}\right){\bf s}. \end{equation*}

This solution for the parameters of the screw motion is equivalent to the one chosen by Spoor and Veldpaus in [6].
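
As a brief aside, the screw parameters can be recovered from a given {\bf R} and {\bf d} using (5)-(7). A minimal MATLAB sketch is shown below; the function name screwParameters is illustrative, and the sketch assumes that the rotation angle satisfies 0 < \theta < \pi.

function [s, theta, rho, tau] = screwParameters(R, d)
    % R is a 3-by-3 proper-orthogonal matrix and d is a 3-by-1 translation.
    theta = acos((trace(R) - 1)/2);                % rotation angle from Euler's formula
    axial = [R(3,2) - R(2,3); R(1,3) - R(3,1); R(2,1) - R(1,2)];
    s = axial/(2*sin(theta));                      % unit vector along the rotation axis
    tau = dot(s, d);                               % translation along the screw axis, (6)_2
    dPerp = d - dot(d, s)*s;                       % component of d normal to s, (7)
    rho = 0.5*(dPerp + cot(theta/2)*cross(s, d));  % point on the screw axis normal to s, (6)_1
end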

Solution methods

We now discuss four well-known solutions to the problem of estimating the rotation tensor and translation. We start with a naive approach that assumes there is no noise, measurement error, or finite precision issues. Next, a method from the satellite dynamics community known as the TRIAD method is discussed [7]. We then present a popular algorithm that uses the singular value decomposition of a matrix [8, 9]. We close with a discussion of the {\sf q}-method, which employs unit quaternions (i.e., Euler-Rodrigues symmetric parameters) and features a novel eigenvalue problem [10, 11]. Comparisons of the methods we present can be found in works by Eggert et al. [12] and Metzger et al. [13]. We do not discuss error estimates for any of these methods. Instead, we refer the interested reader to Dorst’s paper [14]; his work enables one to correlate errors in the estimates {\bf R}^{*} and {\bf d}^{*} to noise and target distributions. Additional works in this area with specific application to biomechanics include studies by Panjabi [1], Spoor and Veldpaus [6], and Woltring et al. [2], among others.

A naive method

A naive method is to ignore errors in the measurement data and assume that one has four targets. First, we define the following augmented arrays for each target’s reference and current position measurement vectors:

(8)   \begin{eqnarray*} && {\sf r}^{\prime}_K = \left[ \begin{array}{c} 1 \\ {\bf r}_K \cdot{\bf E}_1 \\ {\bf r}_K \cdot{\bf E}_2 \\ {\bf r}_K \cdot{\bf E}_3 \end{array} \right], \\ \\ \\ && {\sf m}^{\prime}_K = \left[\begin{array}{c} 1 \\ {\bf m}_K \cdot{\bf E}_1 \\ {\bf m}_K \cdot{\bf E}_2 \\ {\bf m}_K \cdot{\bf E}_3 \end{array} \right]. \end{eqnarray*}

Assuming the measurement data are free of error, one can rewrite each of the four motion equations given by (2) in matrix-vector form as

(9)   \begin{equation*} {\sf m}^{\prime}_K = \left[ \begin{array}{cc} 1 & {\sf 0} \\ {\sf d} & {\sf R} \end{array} \right]{\sf r}^{\prime}_K, \end{equation*}

where

(10)   \begin{eqnarray*} && {\sf R} = \left[ \begin{array}{c c c} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{array} \right] , \\ \\ \\ && {\sf d} = \left[ \begin{array}{c} {\bf d}\cdot{\bf E}_1 \\ {\bf d}\cdot{\bf E}_2 \\ {\bf d}\cdot{\bf E}_3 \end{array} \right]. \end{eqnarray*}

Given four non-coplanar targets, (9) can be used to solve for {\bf R} and {\bf d}:

(11)   \begin{equation*} \left[\begin{array}{cc} 1 & {\sf 0} \\ {\sf d}^{*} & {\sf R}^{*} \end{array} \right] = \left[\begin{array}{cccc} {\sf m}^{\prime}_1 & {\sf m}^{\prime}_2 & {\sf m}^{\prime}_3 & {\sf m}^{\prime}_4 \end{array} \right] \left[\begin{array}{cccc} {\sf r}^{\prime}_1 & {\sf r}^{\prime}_2 & {\sf r}^{\prime}_3 & {\sf r}^{\prime}_4 \end{array} \right]^{-1}. \end{equation*}

Because there will always be some amount of noise and measurement error, the matrix {\sf R}^{*} in (11) will typically not be proper-orthogonal. This fact contributes substantially to the errors observed when using this naive method.
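
A minimal MATLAB sketch of the naive estimate (11) is given below; the function name naiveEstimate is illustrative.

function [Rstar, dstar] = naiveEstimate(r, m)
    % r and m are 3-by-4 arrays whose columns hold the reference and current
    % positions of four non-coplanar targets.
    rAug = [ones(1, 4); r];       % augmented reference arrays, (8)
    mAug = [ones(1, 4); m];       % augmented current arrays, (8)
    T = mAug/rAug;                % right division: T = mAug*inv(rAug), cf. (11)
    dstar = T(2:4, 1);            % translation estimate
    Rstar = T(2:4, 2:4);          % rotation estimate; not proper-orthogonal when the data are noisy
end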

The TRIAD method

The TRIAD algorithm is credited to Shuster and Oh [7] and dates to the early 1980s. It is often used with magnetometer, gravity, and star-pointing data, but here we consider the case where it is used with data obtained from optical targets. Given three sets of reference and current target position data, say, \left({\bf r}_1, \, {\bf r}_2, \, {\bf r}_3\right) and \left({\bf m}_1, \, {\bf m}_2, \, {\bf m}_3\right), we can use these sets of vectors to define the right-handed orthonormal bases \left \{ {\bf V}_1, \, {\bf V}_2, \, {\bf V}_3 \right \} and \left \{ {\bf v}_1, \, {\bf v}_2, \, {\bf v}_3 \right \}, respectively:

(12)   \begin{eqnarray*} && {\bf V}_1 = \frac{{\bf r}_2 - {\bf r}_1}{\lnorm {\bf r}_2 - {\bf r}_1\rnorm} , \\ \\ \\ && {\bf V}_2 = \frac{{\bf V}_1\times\left({\bf r}_3 - {\bf r}_1\right)}{\lnorm {\bf V}_1\times\left({\bf r}_3 - {\bf r}_1\right)\rnorm} , \\ \\ \\ && {\bf V}_3 = {\bf V}_1 \times {\bf V}_2, \end{eqnarray*}

and

(13)   \begin{eqnarray*} && {\bf v}_1 = \frac{{\bf m}_2 - {\bf m}_1}{\lnorm {\bf m}_2 - {\bf m}_1\rnorm}, \\ \\ \\ && {\bf v}_2 = \frac{{\bf v}_1\times\left({\bf m}_3 - {\bf m}_1\right)}{\lnorm {\bf v}_1\times\left({\bf m}_3 - {\bf m}_1\right)\rnorm} , \\ \\ \\ && {\bf v}_3 = {\bf v}_1 \times {\bf v}_2. \end{eqnarray*}

These unit vectors can be used to define two proper-orthogonal matrices:

(14)   \begin{eqnarray*} && \mathsf{V} = \left[\begin{array}{c c c} {\bf V}_1\cdot{\bf E}_1 & {\bf V}_2\cdot{\bf E}_1 & {\bf V}_3\cdot{\bf E}_1\\ {\bf V}_1\cdot{\bf E}_2 & {\bf V}_2\cdot{\bf E}_2 & {\bf V}_3\cdot{\bf E}_2 \\ {\bf V}_1\cdot{\bf E}_3 & {\bf V}_2\cdot{\bf E}_3 & {\bf V}_3\cdot{\bf E}_3 \end{array} \right], \\ \\ \\ && \mathsf{v} = \left[\begin{array}{c c c} {\bf v}_1\cdot{\bf E}_1 & {\bf v}_2\cdot{\bf E}_1 & {\bf v}_3\cdot{\bf E}_1\\ {\bf v}_1\cdot{\bf E}_2 & {\bf v}_2\cdot{\bf E}_2 & {\bf v}_3\cdot{\bf E}_2 \\ {\bf v}_1\cdot{\bf E}_3 & {\bf v}_2\cdot{\bf E}_3 & {\bf v}_3\cdot{\bf E}_3 \end{array} \right]. \end{eqnarray*}

With the help of the motion equations (2), we can manipulate \mathsf{V} and \mathsf{v} to solve for the matrix \mathsf{R}:

(15)   \begin{equation*} \mathsf{R}^* = \mathsf{v} \mathsf{V}^{T}. \end{equation*}

The translation vector \mathbf{d} is then calculated using the mean positions of the targets before and after the motion:

(16)   \begin{equation*} {\bf{d}}^* = {\overline{\bf{m}}} - {\bf{R}}^* {\overline{\bf{r}}} , \end{equation*}

where

(17)   \begin{eqnarray*} && {\overline{\bf{r}}} = \frac{1}{3}\left({\bf r}_1 + {\bf r}_2 + {\bf r}_3\right), \\ \\ \\ && {\overline{\bf{m}}} = \frac{1}{3}\left({\bf m}_1 + {\bf m}_2 + {\bf m}_3\right). \end{eqnarray*}

For a set of four targets, there are 24 possible constructions of the matrices {\sf V} and {\sf v}, and thus 24 possible estimates of {\bf R} and {\bf d}. To obtain the optimal {\bf R} and {\bf d}, we examine the error e between the true and calculated positions of the targets after the motion:

(18)   \begin{equation*} {e}^2=\lnorm \left[{\bf m}_1, \, {\bf m}_2, \, {\bf m}_3, \, {\bf m}_4 \right]-\left({\bf R}\left[{\bf r}_1, \, {\bf r}_2, \, {\bf r}_3, \, {\bf r}_4 \right] + \left[{\bf d}, \, {\bf d}, \, {\bf d}, \, {\bf d} \right]\right) \rnorm^2. \end{equation*}

The rotation tensor and translation that yield the smallest error are considered the optimal results \left ( {\bf R}^{*}, \, {\bf d}^{*} \right ).
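
To make the construction (12)-(17) concrete, a minimal MATLAB sketch for a single ordered triple of targets is given below; the function name triadEstimate is illustrative.

function [Rstar, dstar] = triadEstimate(r, m)
    % r and m are 3-by-3 arrays whose columns hold the reference and current
    % positions of one ordered triple of targets.
    V1 = (r(:,2) - r(:,1))/norm(r(:,2) - r(:,1));
    V2 = cross(V1, r(:,3) - r(:,1));
    V2 = V2/norm(V2);
    V3 = cross(V1, V2);                        % right-handed triad, (12)
    v1 = (m(:,2) - m(:,1))/norm(m(:,2) - m(:,1));
    v2 = cross(v1, m(:,3) - m(:,1));
    v2 = v2/norm(v2);
    v3 = cross(v1, v2);                        % right-handed triad, (13)
    Rstar = [v1, v2, v3]*[V1, V2, V3].';       % (14) and (15)
    dstar = mean(m, 2) - Rstar*mean(r, 2);     % (16) and (17)
end

For four targets stored as 3-by-4 arrays r and m, one might then loop over the 24 ordered triples and keep the estimate that minimizes the error (18), for example as follows.

P = perms(1:4);                                % first three entries of each row give one of the 24 ordered triples
bestError = inf;
for k = 1:size(P, 1)
    idx = P(k, 1:3);
    [Rk, dk] = triadEstimate(r(:, idx), m(:, idx));
    errorK = norm(m - (Rk*r + dk), 'fro');     % error measure (18)
    if errorK < bestError
        bestError = errorK;  Rstar = Rk;  dstar = dk;
    end
end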

A method based on the singular value decomposition

The method featuring the singular value decomposition (SVD) is credited to Arun et al. [8] and to Hanson and Norris [9] in the 1980s. Additional discussions of this method can be found in [15, 16, 17]. The first step of the SVD method involves computing the mean positions for a set of four targets before and after the motion:

(19)   \begin{eqnarray*} && {\overline{\bf{r}}} = \frac{1}{4}\left({\bf r}_1 + {\bf r}_2 + {\bf r}_3 + {\bf r}_4\right), \\ \\ \\ && {\overline{\bf{m}}} = \frac{1}{4}\left({\bf m}_1 + {\bf m}_2 + {\bf m}_3 + {\bf m}_4\right). \end{eqnarray*}

Next, a matrix is formed using position vectors relative to these mean positions:

(20)   \begin{equation*} {\sf S} = {\sf C}{\sf D}^{T}, \end{equation*}

where

(21)   \begin{eqnarray*} && {\sf C} = \left[\begin{array}{c c c c} \left({\bf m}_1 - \overline{\bf m}\right)\cdot{\bf E}_1 & \left({\bf m}_2 - \overline{\bf m}\right)\cdot{\bf E}_1 & \left({\bf m}_3 - \overline{\bf m}\right)\cdot{\bf E}_1 & \left({\bf m}_4 - \overline{\bf m}\right) \cdot{\bf E}_1 \\ \left({\bf m}_1 - \overline{\bf m}\right)\cdot{\bf E}_2 & \left({\bf m}_2 - \overline{\bf m}\right)\cdot{\bf E}_2 & \left({\bf m}_3 - \overline{\bf m}\right)\cdot{\bf E}_2 & \left({\bf m}_4 - \overline{\bf m}\right) \cdot{\bf E}_2 \\ \left({\bf m}_1 - \overline{\bf m}\right)\cdot{\bf E}_3 & \left({\bf m}_2 - \overline{\bf m}\right)\cdot{\bf E}_3 & \left({\bf m}_3 - \overline{\bf m}\right)\cdot{\bf E}_3 & \left({\bf m}_4 - \overline{\bf m}\right) \cdot{\bf E}_3 \end{array} \right], \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \\ \\ \\ && {\sf D} = \left[\begin{array}{c c c c} \left({\bf r}_1 - \overline{\bf r}\right)\cdot{\bf E}_1 & \left({\bf r}_2 - \overline{\bf r}\right)\cdot{\bf E}_1 & \left({\bf r}_3 - \overline{\bf r}\right)\cdot{\bf E}_1 & \left({\bf r}_4 - \overline{\bf r}\right) \cdot{\bf E}_1 \\ \left({\bf r}_1 - \overline{\bf r}\right)\cdot{\bf E}_2 & \left({\bf r}_2 - \overline{\bf r}\right)\cdot{\bf E}_2 & \left({\bf r}_3 - \overline{\bf r}\right)\cdot{\bf E}_2 & \left({\bf r}_4 - \overline{\bf r}\right) \cdot{\bf E}_2 \\ \left({\bf r}_1 - \overline{\bf r}\right)\cdot{\bf E}_3 & \left({\bf r}_2 - \overline{\bf r}\right)\cdot{\bf E}_3 & \left({\bf r}_3 - \overline{\bf r}\right)\cdot{\bf E}_3 & \left({\bf r}_4 - \overline{\bf r}\right) \cdot{\bf E}_3 \end{array} \right]. \end{eqnarray*}

We then calculate the singular value decomposition of {\sf S}:

(22)   \begin{equation*} {\sf S} = {\sf R}_1 \mathsf{\Lambda} {\sf R}_2, \end{equation*}

where {\sf R}_1 and {\sf R}_2 are orthogonal matrices, and {\mathsf \Lambda} is a 3 \times 3 diagonal matrix. Finally, the optimal estimates of the rotation and translation are given by

(23)   \begin{eqnarray*} && {\sf R}^{*} = {\sf R}_1 \left[ \begin{array}{c c c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \mbox{det}\left({\sf R}_1{\sf R}_2\right) \end{array} \right] {\sf R}_2, \\ \\[0.15in] && {\bf d}^{*} = \overline{\bf m} - {\bf R}^{*}{\overline{\bf r}}. \end{eqnarray*}
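
A minimal MATLAB sketch of the SVD-based estimate (19)-(23) is given below; the function name svdEstimate is illustrative.

function [Rstar, dstar] = svdEstimate(r, m)
    % r and m are 3-by-N arrays of reference and current positions (N >= 3).
    rBar = mean(r, 2);                          % mean positions, (19)
    mBar = mean(m, 2);
    C = m - mBar;                               % centered current positions, (21)
    D = r - rBar;                               % centered reference positions, (21)
    S = C*D.';                                  % (20)
    [R1, ~, V] = svd(S);                        % S = R1*Lambda*R2 with R2 = V.'
    R2 = V.';
    Rstar = R1*diag([1, 1, det(R1*R2)])*R2;     % proper-orthogonal estimate, (23)_1
    dstar = mBar - Rstar*rBar;                  % translation estimate, (23)_2
end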

The q-method

The {\sf q}-method dates to the latter half of the 20th century. The method uses unit quaternions (i.e., Euler-Rodrigues symmetric parameters) to parameterize the rotation and features a novel eigenvalue problem to arrive at an estimate for the optimal rotation. This solution was developed independently by Davenport [10] and later in a well-cited paper by Horn [11]. To show how the \mathsf{q}-method works, we start by defining the parameter \alpha and the “centers” of the N reference and current position measurements based on assigned weights \alpha_K:

(24)   \begin{eqnarray*} && \alpha = \sum_{K \, \, = \, 1}^N \alpha_K, \\ \\ \\ && {\bf r} = \frac{1}{\alpha} \sum_{K \, \, = \, 1}^N \alpha_K {\bf r}_K, \\ \\ \\ && {\bf m} = \frac{1}{\alpha} \sum_{K \, \, = \, 1}^N \alpha_K {\bf m}_K. \end{eqnarray*}

Expanding the cost function given in (3), we find with some straightforward manipulations that

(25)   \begin{equation*} W = \frac{ \alpha}{2} \lnorm {\bf m} - {\bf R}{\bf r} - {\bf d} \rnorm^2 + \alpha {\bf m} \cdot{\bf R}{\bf r} - \sum_{K \, \, = \, 1}^N \alpha_K {\bf m}_K\cdot{\bf R}{\bf r}_K + \underbrace{\frac{1}{2} \sum_{K \, \, = \, 1}^N \alpha_K \left( \lnorm{\bf m}_K\rnorm^2 + \lnorm{\bf r}_K\rnorm^2 \right) - \frac{\alpha}{2}\left( \lnorm{\bf m} \rnorm^2 + \lnorm{\bf r}\rnorm^2 \right)}_{\textrm{independent of $\mathbf{R}$ and $\mathbf{d}$}} . \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \end{equation*}

We are interested in extremizing W as a function of the rotation tensor {\bf R} and translation {\bf d}, so we can ignore the underbraced terms in (25) that are independent of {\bf R} and {\bf d}. In short, we wish to find {\bf R} and {\bf d} that maximize

(26)   \begin{equation*} I\left({\bf R}, \, {\bf d} \right) = \frac{1}{\alpha}\sum_{K \, \, = \, 1}^N \alpha_K {\bf m}_K\cdot{\bf R}{\bf r}_K - {\bf m} \cdot{\bf R}{\bf r} - \frac{1}{2} \lnorm {\bf m} - {\bf R}{\bf r} - {\bf d} \rnorm^2. \end{equation*}

We now parameterize {\bf R} using a set of Euler-Rodrigues symmetric parameters, \left ( e_0, \, {\bf e} \right ):

(27)   \begin{equation*} \mathsf{R} = \left(e^2_0 - e^2_1 - e^2_2 - e^2_3\right) \left[ \begin{array}{c c c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right] + \left[ \begin{array}{c c c} 2e^2_1 & 2e_1e_2 & 2e_1e_3 \\ 2e_1e_2 & 2e^2_2 & 2e_2e_3 \\ 2e_1e_3 & 2e_2e_3 & 2e^2_3 \\ \end{array} \right] + \left[ \begin{array}{c c c} 0 & - 2 e_0 e_3 & 2 e_0 e_2 \\ 2 e_0 e_3 & 0 & - 2 e_0 e_1 \\ - 2 e_0 e_2 & 2 e_0 e_1 & 0 \\ \end{array} \right]. \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \end{equation*}
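
For later use, (27) can be evaluated with a short MATLAB helper; the function name rotationFromEulerRodrigues is illustrative.

function R = rotationFromEulerRodrigues(e0, e)
    % e0 is a scalar and e is a 3-by-1 column vector with e0^2 + e.'*e = 1.
    E = [ 0,    -e(3),  e(2);
          e(3),  0,    -e(1);
         -e(2),  e(1),  0 ];                    % skew-symmetric matrix associated with e
    R = (e0^2 - dot(e, e))*eye(3) + 2*(e*e.') + 2*e0*E;   % (27)
end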

This representation enables us to express (26) in a more compact form:

(28)   \begin{equation*} I\left({\bf R}, \, {\bf d} \right) = - \frac{1}{2} \lnorm {\bf m} - {\bf R}{\bf r} - {\bf d} \rnorm^2 + \mathsf{q}\cdot\left( \mathsf{K} \mathsf{q}\right) , \end{equation*}

where the 4 \times 4 symmetric matrix \mathsf{K} and the column vector \mathsf{q} are given by

(29)   \begin{eqnarray*} && \mathsf{K} = \left[ \begin{array}{c c} \mathsf{B} + \mathsf{B}^T - \mbox{tr}\left(\mathsf{B}\right) \mathsf{I} & \mathsf{z} \\ \mathsf{z}^T & \mbox{tr}\left(\mathsf{B}\right) \end{array} \right], \\ \\ \\ && \mathsf{q} = \left[ \begin{array}{c} e_1 \\ e_2 \\ e_3 \\ e_0 \end{array} \right]. \end{eqnarray*}

Here, \mathsf{I} is the 3 \times 3 identity matrix, the column vector \mathsf{z} is given by

(30)   \begin{equation*} \renewcommand\arraystretch{1.5} \mathsf{z} = \left[\begin{array}{c} \left( {\bf m}\times{\bf r} - \frac{1}{\alpha} \sum_{K \, \, = \, 1}^N \alpha_K {\bf m}_K\times{\bf r}_K\right)\cdot{\bf E}_1 \\ \left( {\bf m}\times{\bf r} - \frac{1}{\alpha} \sum_{K \, \, = \, 1}^N \alpha_K {\bf m}_K\times{\bf r}_K\right)\cdot{\bf E}_2 \\ \left( {\bf m}\times{\bf r} - \frac{1}{\alpha} \sum_{K \, \, = \, 1}^N \alpha_K {\bf m}_K\times{\bf r}_K \right)\cdot{\bf E}_3 \end{array} \right], \end{equation*}

and the components of the 3 \times 3 matrix \mathsf{B} = \left [ B_{ij} \right ] (i, j = 1, \, 2, \, 3) are

(31)   \begin{equation*} B_{ij} = \left( \frac{1}{\alpha} \sum_{K \, \, = \, 1}^N \alpha_K \left({\bf m}_K\cdot{\bf E}_i\right)\left({\bf r}_K\cdot{\bf E}_j\right) \right) - \left({\bf m}\cdot{\bf E}_i\right)\left({\bf r}\cdot{\bf E}_j\right). \end{equation*}

Note that the matrices \mathsf{B} and \mathsf{K} are composed entirely of measurements. Our definition of the column vector \mathsf{z} differs by a minus sign from that found in the literature for the case when {\bf d} = {\bf 0} (e.g., see [7, 18]). This difference is expected and can be traced to our convention for the skew-symmetric part of the matrix \mathsf{R} in (27).

We are now in a position to solve for the optimal {\bf R} and {\bf d}. Specifically, we seek {\bf R} and {\bf d} that maximize (28), subject to the Euler parameter constraint e^2_0 + {\bf e}\cdot{\bf e} = 1. With the help of a Lagrange multiplier \lambda, one seeks {\bf R} = {\bf R}\left(e_0, \, {\bf e}\right) and {\bf d} that maximize

(32)   \begin{equation*} J\left(e_0, \, {\bf e}, \, {\bf d}, \lambda \right) = - \frac{1}{2} \lnorm {\bf m} - {\bf R}{\bf r} - {\bf d} \rnorm^2 + \mathsf{q}\cdot\left( \mathsf{K} \mathsf{q}\right) - \lambda \left(e^2_0 + {\bf e}\cdot{\bf e} - 1 \right). \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \end{equation*}

It is now straightforward to calculate the optimal solution by differentiating J with respect to its arguments. Doing so, one finds

(33)   \begin{eqnarray*} && \mathsf{K} \mathsf{q}^* = \lambda^* \mathsf{q}^*, \\ \\ && {\bf d}^* = {\bf m} - {\bf R}^{*}{\bf r}. \end{eqnarray*}

That is, the optimal \mathsf{q} is an eigenvector of \mathsf{K}; for J to be maximized, \mathsf{q}^* must be the eigenvector corresponding to the largest eigenvalue of \mathsf{K}. Once this eigenvector is found, {\bf R}^{*} can be constructed using (27). Notice that {\bf d}^* is simply the difference between the weighted centers {\bf m} and {\bf R}^*{\bf r}, but it can only be computed after {\bf R}^* has been determined. Note also that because the 4 \times 4 matrix \mathsf{K} is real and symmetric, it has four real eigenvalues and its eigenvectors are mutually orthogonal. One advantage of the \mathsf{q}-method is that the rotation axis {\bf s}^* and the corresponding rotation angle \theta^* for the screw-axis representation of the motion follow easily once \mathsf{q}^* is known:

(34)   \begin{eqnarray*} && \frac{\theta^*}{2} = \cos^{-1}\left(e^*_0\right), \\ \\ \\ && {\bf s}^* = \mbox{cosec}\left( \frac{\theta^*}{2} \right){\bf e}^*. \end{eqnarray*}
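
Putting these pieces together, a minimal MATLAB sketch of the \mathsf{q}-method is given below; the function name qMethodEstimate is illustrative, and the last two outputs assume \theta^* \neq 0.

function [Rstar, dstar, sStar, thetaStar] = qMethodEstimate(r, m, alpha)
    % r and m are 3-by-N arrays of reference and current positions, and
    % alpha is a 1-by-N vector of weights.
    a = sum(alpha);
    rBar = (r*alpha.')/a;                      % weighted centers, (24)
    mBar = (m*alpha.')/a;
    B = zeros(3);
    z = zeros(3, 1);
    for K = 1:size(r, 2)
        B = B + alpha(K)*(m(:,K)*r(:,K).')/a;
        z = z - alpha(K)*cross(m(:,K), r(:,K))/a;
    end
    B = B - mBar*rBar.';                       % (31)
    z = z + cross(mBar, rBar);                 % (30)
    K4 = [B + B.' - trace(B)*eye(3), z; z.', trace(B)];   % (29)_1
    [evecs, evals] = eig(K4);
    [~, idx] = max(diag(evals));
    q = evecs(:, idx);                         % eigenvector of the largest eigenvalue, (33)_1
    e = q(1:3);
    e0 = q(4);                                 % ordering of (29)_2
    if e0 < 0
        e = -e;  e0 = -e0;                     % choose the representation with 0 <= theta <= pi
    end
    E = [0, -e(3), e(2); e(3), 0, -e(1); -e(2), e(1), 0];
    Rstar = (e0^2 - dot(e, e))*eye(3) + 2*(e*e.') + 2*e0*E;   % (27)
    dstar = mBar - Rstar*rBar;                 % (33)_2
    thetaStar = 2*acos(e0);                    % (34)_1
    sStar = e/sin(thetaStar/2);                % (34)_2
end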

An example of estimating a rigid-body motion

To illustrate the four methods for estimating rotations and translations from measurement data, we consider the example of a tossed book. Referring to Figure 2, we choose four landmark points on a uniform rectangular prism of length \ell, width w, and thickness b such that their position vectors in the reference state are

(35)   \begin{eqnarray*} && {\bf r}_1 = - \frac{\ell}{2}{\bf E}_1 + \frac{w}{2}{\bf E}_2 - \frac{b}{2}{\bf E}_3, \\ \\ \\ && {\bf r}_2 = \frac{\ell}{2}{\bf E}_1 + \frac{w}{2}{\bf E}_2 - \frac{b}{2}{\bf E}_3, \\ \\ \\ && {\bf r}_3 = - \frac{\ell}{2}{\bf E}_1 - \frac{w}{2}{\bf E}_2 - \frac{b}{2}{\bf E}_3, \\ \\ \\ && {\bf r}_4 = - \frac{\ell}{2}{\bf E}_1 + \frac{w}{2}{\bf E}_2 + \frac{b}{2}{\bf E}_3. \end{eqnarray*}

For the example at hand, we choose the dimensions \ell = 8, w = 6, and b = 1 (with units of inches) to mimic a small book. Note that in the reference state, the body’s center of mass \overline{X} is located at the origin and the rotation tensor {\bf R} can be taken as the identity transformation, i.e., {\bf R} = {\bf I} in the reference configuration.

Figure 2. Illustrations of the reference configuration  \bkappa_0 and current configuration  \bkappa_t of a tossed book represented by a uniform rectangular prism, with the landmark points used for calculations labeled 1 through 4.

Suppose we rotate and translate the book such that at time t_1, the book’s orientation and translation are described by, respectively,

(36)   \begin{eqnarray*} && {\bf R} = {\bf L}\left( \phi = \frac{\pi}{4}, \, {\bf e}^{\prime\prime}_1\right){\bf L}\left( \theta = \frac{\pi}{6}, \, {\bf e}^{\prime}_2\right){\bf L}\left( \psi = \frac{\pi}{4}, \, {\bf E}_3\right), \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \\ \\[0.15in] && {\bf d} = {\bf E}_1 + {\bf E}_2 - 10{\bf E}_3, \end{eqnarray*}

where, using a 3-2-1 set of Euler angles to parameterize {\bf R}, the second rotation axis is {\bf e}^{\prime}_2 = \cos(\psi){\bf E}_2 - \sin(\psi){\bf E}_1 and the third rotation axis {\bf e}^{\prime\prime}_1 is the image of {\bf E}_1 under the first two rotations. A lengthy but straightforward calculation shows that the components of {\bf R} at time t_1 are

(37)   \begin{equation*} \mathsf{R} = \left[ \begin{array}{c c c} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \\ \end{array} \right] = \left[ \begin{array}{c c c} \sqrt{\frac{3}{8}} & - \frac{1}{4} & \frac{3}{4} \\ \sqrt{\frac{3}{8}} & \frac{3}{4} & - \frac{1}{4} \\ - \frac{1}{2} & \sqrt{\frac{3}{8}} & \sqrt{\frac{3}{8}} \\ \end{array} \right]. \end{equation*}

In terms of Euler’s representation, the rotation {\bf R} = {\bf L} \left ( \nu, \, \mathbf{s} \right ) is equivalent to a counterclockwise rotation of \nu = 60.8320^\circ about an axis

(38)   \begin{equation*} {\bf s} = \frac{{\bf E}_1 + \left(\sqrt{6} - 1\right){\bf E}_2 + {\bf E}_3}{\sqrt{9 - 2\sqrt{6}}}. \end{equation*}

We assume there is no noise in the measurements, in which case, using (2), the landmark points’ position vectors at time t_1 are

(39)   \begin{eqnarray*} && {\bf m}_1 = -2.5745 {\bf E}_1 + 0.9255 {\bf E}_2 - 6.4691{\bf E}_3, \\ \\ && {\bf m}_2 = 2.3245 {\bf E}_1 + 5.8245 {\bf E}_2 - 10.4691 {\bf E}_3, \\ \\ && {\bf m}_3 = -1.0745 {\bf E}_1 - 3.5745 {\bf E}_2 - 10.1433 {\bf E}_3, \\ \\ && {\bf m}_4 = - 1.8245 {\bf E}_1 + 0.6755 {\bf E}_2 - 5.8567 {\bf E}_3. \end{eqnarray*}

We now examine how the four methods described earlier estimate {\bf R} and {\bf d} at time t_1 using the position measurements (35) and (39). The results presented below were calculated in MATLAB; the associated code is available for download here.
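
For orientation, a minimal MATLAB sketch that generates the noiseless data (35) and (39) is shown below. This is not the downloadable script mentioned above, and the variable names are our own.

% Construct the noiseless example data for the tossed book.
ell = 8;  w = 6;  b = 1;                       % book dimensions (inches)
r = 0.5*[-ell,  ell, -ell, -ell;
           w,    w,   -w,    w;
          -b,   -b,   -b,    b];               % reference landmark positions, (35)
phi = pi/4;  theta = pi/6;  psi = pi/4;        % 3-2-1 Euler angles, (36)
Rz = [cos(psi), -sin(psi), 0; sin(psi), cos(psi), 0; 0, 0, 1];
Ry = [cos(theta), 0, sin(theta); 0, 1, 0; -sin(theta), 0, cos(theta)];
Rx = [1, 0, 0; 0, cos(phi), -sin(phi); 0, sin(phi), cos(phi)];
R = Rz*Ry*Rx;                                  % equivalent fixed-axis form of (36), cf. (37)
d = [1; 1; -10];                               % translation, (36)
m = R*r + d;                                   % noiseless current positions, (2) and (39)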

Solution using the naive method

The solution procedure for the naive method reveals that the augmented matrix in (11) is

(40)   \begin{equation*} \left[ \begin{array}{c c} 1 & \mathsf{0} \\ \mathsf{d}^* & \mathsf{R}^* \end{array} \right] = \left[ \begin{array}{c c c c} 1 & 0 & 0 & 0 \\ 1.0000 & 0.6124 & -0.2500 & 0.7500 \\ 1.0000 & 0.6124 & 0.7500 & -0.2500 \\ -10.0000 & -0.5000 & 0.6124 & 0.6124 \end{array} \right], \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \end{equation*}

and so

(41)   \begin{eqnarray*} && {\mathsf R}^* = \left[ \begin{array}{c c c} 0.6124 & -0.2500 & 0.7500 \\ 0.6124 & 0.7500 & -0.2500 \\ -0.5000 & 0.6124 & 0.6124 \end{array} \right], \\ \\ \\ && {\mathsf d}^* = \left[ \begin{array}{c} 1.0000 \\ 1.0000 \\ -10.0000 \end{array} \right]. \end{eqnarray*}

Within numerical precision, the resulting predictions of {\bf R} and {\bf d} are accurate.

Solution using the TRIAD method

Out of 24 possible permutations, the matrix {\mathsf R} that has the least error, as defined by (18), is given by

(42)   \begin{equation*} \mathsf{R}^* = \mathsf{v}\mathsf{V}^{T} = \left[\begin{array}{c c c} -0.2500 & -0.7500 & -0.6124 \\ 0.7500 & 0.2500 & -0.6124 \\ 0.6124 & -0.6124 & 0.5000 \end{array} \right] \left[ \begin{array}{c c c} 0 & 0 & -1 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \end{array} \right]^{T}. \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \end{equation*}

The resulting \mathsf{R}^* is identical to that estimated by the naive method; the same is true of the translation.

Solution using the method based on the singular value decomposition

The matrix {\mathsf S} in (20) is calculated as

(43)   \begin{equation*} {\mathsf S} = {\mathsf C}{\mathsf D}^T = \left[ \begin{array}{c c c c} -1.7872 & 3.1117 & -0.2872 & -1.0372 \\ -0.0372 & 4.8617 & -4.5372 & -0.2872 \\ 1.7655 & -2.2345 & -1.9088 & 2.3778 \end{array} \right] \left[ \begin{array}{c c c c} -2 & 6 & -2 & -2 \\ 1.5 & 1.5 & -4.5 & 1.5 \\ -0.25 & -0.25 & -0.25 & 0.75 \end{array} \right]^T. \hspace{1in} \scalebox{0.001}{\textrm{\textcolor{white}{.}}} \end{equation*}

To compute the singular value decomposition of {\mathsf S} in MATLAB, we use the built-in singular value decomposition function \texttt{[U,X,V] = svd(S)}, where {\mathsf U} = {\mathsf R}_1, {\mathsf X} = {\mathsf \Lambda}, and {\mathsf V} = {\mathsf R}_2^T. In this case,

(44)   \begin{eqnarray*} && {\mathsf R}_1 = \left[ \begin{array}{c c c} -0.4378 & 0.3987 & 0.8058 \\ -0.8724 & -0.4052 & -0.2735 \\ 0.2174 & -0.8227 & 0.5253 \end{array} \right], \\ \\ \\ && {\mathsf R}_2^T = \left[ \begin{array}{c c c} -0.9110 & 0.4074 & 0.0633 \\ -0.4117 & -0.9074 & -0.0849 \\ 0.0228 & -0.1035 & 0.9944 \end{array} \right]. \end{eqnarray*}

Both {\sf R}_1 and {\sf R}_2 have unit determinant, and so, from (23)1,

(45)   \begin{equation*} {\sf R}^* = {\sf R}_1{\sf R}_2, \end{equation*}

which agrees with the estimates from the naive and TRIAD methods. A subsequent calculation involving (23)2 shows that the translation estimate also agrees with the two earlier methods.

Solution using the q-method

Using (29), (30), and (31) with unit weights \alpha_K = 1 (K = 1, \, 2, \, 3, \, 4), the matrices {\mathsf B} and {\mathsf K} are given by

(46)   \begin{eqnarray*} && \mathsf{B} = \left[ \begin{array}{c c c} 6.2235 & 0.4309 & -0.2593 \\ 9.7235 & 6.8059 & -0.0718 \\ -4.4691 & 2.8632 & 0.5945 \end{array} \right], \\ \\ \\ && \mathsf{K} = \left[ \begin{array}{c c c c} -1.1769 & 10.1543 & -4.7284 & 2.9350 \\ 10.1543 & -0.0121 & 2.7913 & 4.2098 \\ -4.7284 & 2.7913 & -12.4349 & 9.2926 \\ 2.9350 & 4.2098 & 9.2926 & 13.6238 \end{array} \right]. \end{eqnarray*}

The eigenvalues of {\mathsf K} are all real. The eigenvector that corresponds to the largest eigenvalue is

(47)   \begin{equation*} {\mathsf q}^* = \left[ \begin{array}{c} 0.2500 \\ 0.3624 \\ 0.2500 \\ 0.8624 \end{array} \right]. \end{equation*}

When we construct {\mathsf R}^* from the components of {\mathsf q}^* using (27), we find that the rotation estimate matches those computed by the naive, TRIAD, and singular value decomposition methods; the same is true of the translation when we calculate it from (33)2 after obtaining {\mathsf R}^*. This equivalence can be attributed to the noiseless nature of the measurements. Lastly, when the rotation has the representation \mathbf{R}^* = \mathbf{L}(\nu^*, \, \mathbf{s}^*), it can be shown using (34) that the angle and axis of rotation are, respectively,

(48)   \begin{eqnarray*} && \nu^* = 60.8320^{\circ} , \\ \\ && \mathbf{s}^* = 0.4938 {\bf E}_1 + 0.7158 {\bf E}_2 + 0.4938 {\bf E}_3 , \end{eqnarray*}

which agree with their expected values from earlier.

The effect of measurement noise

In the absence of noise, it is well known that the four methods give equally accurate results, and this was observed in the previous calculations. However, in the presence of noise, this situation changes. To illustrate this point, we numerically added noise to the landmark points’ current position measurements {\bf m}_i \left (i = 1, \, 2, \, 3, \, 4 \right ) and recomputed the rotation and translation estimates. We found that the naive and TRIAD methods were susceptible to noise, while the SVD method and the \mathsf{q}-method were more robust and produced comparably accurate estimates. Figure 3 depicts the difference between the book’s actual orientation and location at time t_1 and its configuration estimated by the TRIAD method when {\bf m}_i are subject to noise.

Figure 3. Comparison of the book’s actual configuration (yellow) and the estimated configuration (blue) computed from the TRIAD method when noise was numerically added to the landmark points’ current position measurements  {\bf m}_i  \left (i = 1, \, 2, \, 3, \, 4 \right ).
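
As a rough sketch of this kind of numerical experiment (and not the script used to generate Figure 3), one might perturb the current positions from the earlier data-generation sketch with zero-mean Gaussian noise and compare the estimates returned by the svdEstimate and qMethodEstimate sketches; the noise level sigma below is an arbitrary choice.

sigma = 0.05;                                  % assumed noise level (inches)
mNoisy = m + sigma*randn(size(m));             % perturbed current positions
[Rsvd, dSvd] = svdEstimate(r, mNoisy);
[Rq, dQ] = qMethodEstimate(r, mNoisy, ones(1, 4));
angleError = @(Ra, Rb) acosd(min(max(0.5*(trace(Ra.'*Rb) - 1), -1), 1));
fprintf('Rotation error (SVD method): %.4f deg\n', angleError(R, Rsvd));
fprintf('Rotation error (q-method):   %.4f deg\n', angleError(R, Rq));
fprintf('Translation error (SVD method): %.4f in\n', norm(d - dSvd));
fprintf('Translation error (q-method):   %.4f in\n', norm(d - dQ));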

Downloads

The MATLAB files used to perform calculations and generate Figure 3 for the example we discussed in this page are available here and here. The former file is the main script that executes the calculations and plots the actual and estimated configurations; the latter is a function called by the main script to draw the book in these plots.

References

  1. Panjabi, M. M., Centers and angles of rotation of body joints: A study of errors and optimization, Journal of Biomechanics 12(12) 911–920 (1979).
  2. Woltring, H. J., Huiskes, R., de Lange, A., and Veldpaus, F. E., Finite centroid and helical axis estimation from noisy landmark measurements in the study of human joint kinematics, Journal of Biomechanics 18(5) 379–389 (1985).
  3. Yuan, X., Ryd, L., and Blankevoort, L., Error propagation for relative motion determined from marker positions, Journal of Biomechanics 30(9) 989–992 (1997).
  4. Wahba, G., Problem 65-1, A least squares estimate of satellite attitude, SIAM Review 8(3) 384–386 (1966).
  5. Schönemann, P. H., A generalized solution of the orthogonal Procrustes problem, Psychometrika 31(1) 1–10 (1966).
  6. Spoor, C. W., and Veldpaus, F. E., Rigid body motion calculated from spatial co-ordinates of markers, Journal of Biomechanics 13(4) 391–393 (1980).
  7. Shuster, M. D., and Oh, S. D., Three-axis attitude determination from vector observations, Journal of Guidance, Control, and Dynamics 4(1) 70–77 (1981).
  8. Arun, K. S., Huang, T. S., and Blostein, S. D., Least-squares fitting of two 3-D point sets, IEEE Transactions on Pattern Analysis and Machine Intelligence 9(5) 698–700 (1987).
  9. Hanson, R. J., and Norris, M. J., Analysis of measurements based on the singular value decomposition, SIAM Journal on Scientific and Statistical Computing 2(3) 363–373 (1981).
  10. Keat, J., Analysis of Least-Squares Attitude Determination Routine DOAOP, CSC Report CSC/TM-77/6034 (February 1977).
  11. Horn, B. K. P., Closed-form solution of absolute orientation using unit quaternions, Journal of the Optical Society of America A 4(4) 629–642 (1987).
  12. Eggert, D. W., Lorusso, A., and Fisher, R. B., Estimating 3-D rigid body transformations: A comparison of four major algorithms, Machine Vision and Applications 9(5–6) 272–290 (1997).
  13. Metzger, M. F., Faruk Senan, N. A., O’Reilly, O. M., and Lotz, J. C., Minimizing errors associated with calculating the helical axis for spinal motions, Journal of Biomechanics 43(14) 2822–2829 (2010).
  14. Dorst, L., First order error propagation of the Procrustes method for 3D attitude estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence 27(2) 221–229 (2005).
  15. Markley, F. L., Attitude determination using vector observations and the singular value decomposition, Journal of the Astronautical Sciences 36(3) 245–258 (1988).
  16. Markley, F. L., Attitude determination using vector observations: A fast optimal matrix algorithm, Journal of the Astronautical Sciences 41(2) 261–280 (1993).
  17. Söderkvist, I., and Wedin, P. A., Determining the movements of the skeleton using well-configured markers, Journal of Biomechanics 26(12) 1473–1477 (1993).
  18. Bar-Itzhack, I. Y., REQUEST: A recursive QUEST algorithm for sequential attitude determination, Journal of Guidance, Control, and Dynamics 19(5) 1034–1038 (1996).