Background on linear algebra

It is common practice to use matrices to represent transformations of a vector into another vector. Here, we discuss another quantity, known as a tensor, that achieves the same purpose. We generally denote tensors by uppercase boldfaced symbols, such as {\bf A}, and symbolize the transformation of a vector {\bf a} by {\bf A} to a vector {\bf b} as

(1)   \begin{equation*} \mathbf{b} = \mathbf{Aa} . \end{equation*}

The advantages of using tensors are that they are often far more compact than matrices, they are easier to differentiate, and their components transform transparently under changes of bases. The latter feature is very useful when interpreting results in rigid-body kinematics. Consequently, we choose to employ a tensor notation throughout this site for many of the developments we present. This page is intended as both a brief introduction to and review of tensors and their properties. The material provided in the forthcoming sections is standard background for courses in continuum mechanics, and, as such, the primary sources of this information are works by Casey [1, 2], Chadwick [3], and Gurtin [4].

Preliminaries

To lay the foundation for our upcoming exposition on tensors, we begin with a brief discussion of basis vectors and define two symbols that prove useful in subsequent sections.

Basis vectors

Euclidean three-space is denoted by \euclid^3. For this space, we define a fixed, right-handed orthonormal basis \{{\bf E}_1, \, {\bf E}_2, \, {\bf E}_3 \}. By orthonormal, we mean that a set of vectors \{{\bf b}_1, \, {\bf b}_2, \, {\bf b}_3 \} satisfies {\bf b}_i\cdot{\bf b}_j = 0 when i \ne j and {\bf b}_i\cdot{\bf b}_j = 1 when i = j. Unless indicated otherwise, lowercase italic Latin indices such as i, j, and k range from 1 to 3. If \{{\bf b}_1, \, {\bf b}_2, \, {\bf b}_3 \} is right-handed, then the following scalar triple product is positive:

(2)   \begin{equation*}[{\bf b}_1, \, {\bf b}_2, \, {\bf b}_3] = {\bf b}_3\cdot({\bf b}_1\times{\bf b}_2). \end{equation*}

We also make use of another right-handed orthonormal basis, \{{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3 \}, that is not necessarily fixed.  

The Kronecker delta

We make copious use of dot products, so it is convenient to define the Kronecker delta \delta_{ij}:

(3)   \begin{equation*}\delta_{ij} = \left\{ \begin{array}{ll} 1, & i = j,\\[0.075in] 0, & i \ne j. \end{array}\right. \end{equation*}

Clearly,

(4)   \begin{equation*}{\bf p}_i\cdot{\bf p}_j = \delta_{ij}.\end{equation*}

The alternating symbol

We also occasionally make use of the alternating (or Levi-Civita) symbol \altepsilon_{ijk}, which is defined such that

(5)   \begin{eqnarray*}&& \altepsilon_{123} = \altepsilon_{312} = \altepsilon_{231} = 1,\\\\&& \altepsilon_{213} = \altepsilon_{132} = \altepsilon_{321} = - 1,\\\\&& \altepsilon_{ijk} = 0 \ \, \mbox{otherwise}.\end{eqnarray*}

In words, \altepsilon_{ijk} = 1 if ijk is an even permutation of 1, 2, and 3; \altepsilon_{ijk} = -1 if ijk is an odd permutation of 1, 2, and 3; and \altepsilon_{ijk} = 0 if any two of the indices i, j, and k are equal. We also note that

(6)   \begin{equation*}[{\bf p}_i, \, {\bf p}_j, \, {\bf p}_k] = \altepsilon_{ijk},  \end{equation*}

which is simple to verify by using the definition of the scalar triple product.
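For readers who like to verify such identities numerically, the following minimal sketch (in Python with NumPy, using 0-based indices; the helper name alternator is ours) checks that the scalar triple products of the standard basis reproduce the alternating symbol, as in (6):

```python
import numpy as np

def alternator(i, j, k):
    """Levi-Civita symbol for 0-based indices i, j, k in {0, 1, 2}."""
    return (i - j) * (j - k) * (k - i) / 2

p = np.eye(3)  # rows are a fixed right-handed orthonormal basis

# check (6): [p_i, p_j, p_k] = epsilon_ijk, with [u, v, w] = w . (u x v) as in (2)
for i in range(3):
    for j in range(3):
        for k in range(3):
            triple = np.dot(p[k], np.cross(p[i], p[j]))
            assert np.isclose(triple, alternator(i, j, k))
```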

The tensor product of two vectors

The tensor (or cross-bun) product of any two vectors {\bf a} and {\bf b} in \euclid^3 is defined by

(7)   \begin{equation*}\left({\bf a}\otimes{\bf b} \right){\bf c} = \left({\bf b}\cdot{\bf c}\right){\bf a},  \end{equation*}

where {\bf c} is any vector in \euclid^3. That is, {\bf a}\otimes{\bf b} forms the dot product of {\bf c} with {\bf b} and multiplies {\bf a} by the resulting scalar. Put another way, {\bf a}\otimes{\bf b} transforms {\bf c} into a vector that is parallel to {\bf a}. A related tensor product is defined as follows:

(8)   \begin{equation*}{\bf c}\left({\bf a}\otimes{\bf b} \right) = \left({\bf a}\cdot{\bf c} \right){\bf b}. \end{equation*}

In either case, {\bf a}\otimes{\bf b} performs a linear transformation of the vector {\bf c} on which it acts. The tensor product has the following useful properties:

(9)   \begin{eqnarray*}&& \left(\alpha{\bf a} + \beta{\bf b} \right)\otimes{\bf c} = \alpha \left({\bf a}\otimes{\bf c}\right) + \beta\left({\bf b}\otimes{\bf c}\right),\\\\&& {\bf c}\otimes\left(\alpha{\bf a} + \beta{\bf b} \right) = \alpha \left({\bf c}\otimes{\bf a}\right) + \beta\left({\bf c}\otimes{\bf b}\right),\end{eqnarray*}

where \alpha and \beta are any two scalars. To prove these identities, one merely shows that the left- and right-hand sides provide the same transformation of any vector {\bf d} in \euclid^3.
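In a fixed orthonormal basis, the components of {\bf a}\otimes{\bf b} form the matrix of an outer product, which makes definitions (7) and (8) easy to check numerically. A minimal sketch, assuming NumPy:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

ab = np.outer(a, b)  # components of the tensor product a ⊗ b

assert np.allclose(ab @ c, np.dot(b, c) * a)  # (7): (a ⊗ b)c = (b · c)a
assert np.allclose(c @ ab, np.dot(a, c) * b)  # (8): c(a ⊗ b) = (a · c)b
```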

Second-order tensors

A second-order tensor {\bf A} is a linear transformation of \euclid^3 into itself. That is, for any two vectors {\bf a} and {\bf b} and any two scalars \alpha and \beta,

(10)   \begin{equation*}{\bf A}\left( \alpha{\bf a} + \beta{\bf b} \right) = \alpha{\bf A}{\bf a} + \beta{\bf A}{\bf b},\end{equation*}

where {\bf A}{\bf a} and {\bf A}{\bf b} are both vectors in \euclid^3. The tensor {\bf a}\otimes{\bf b} is a simple example of a second-order tensor. It is standard to define the following composition rules for second-order tensors:

(11)   \begin{eqnarray*}&& ({\bf A} + {\bf B}){\bf a} = {\bf A}{\bf a} + {\bf B}{\bf a},\\\\&& (\alpha{\bf A}){\bf a} = \alpha ({\bf A}{\bf a}),\\\\&& ({\bf A}{\bf B}){\bf a} = {\bf A}({\bf B}{\bf a}),\end{eqnarray*}

where {\bf A} and {\bf B} are any second-order tensors. To check if two second-order tensors {\bf A} and {\bf B} are identical, it suffices to show that {\bf A}{\bf a} = {\bf B}{\bf a} for every vector {\bf a}. We also define the identity tensor {\bf I} and the zero tensor {\bf O}:

(12)   \begin{eqnarray*}&& {\bf I}{\bf a} = {\bf a},\\\\&& {\bf O}{\bf a} = {\bf 0}.\end{eqnarray*}

Representations

It is convenient at this stage to establish the following representation for any second-order tensor {\bf A}:

(13)   \begin{equation*}{\bf A} = \sum_{i=1}^{3} \sum_{j=1}^{3} A_{ji}{\bf p}_j\otimes{\bf p}_i,\end{equation*}

where

(14)   \begin{equation*}A_{ji} = ({\bf A}{\bf p}_i)\cdot{\bf p}_j\end{equation*}

are the components of {\bf A} relative to the basis \{{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3\}. The order of the indices i and j is important. Initially, it is convenient to interpret a tensor using the representation

(15)   \begin{equation*}{\bf A} = \sum_{i=1}^{3} {\bf a}_i\otimes{\bf p}_i,\end{equation*}

for which

(16)   \begin{equation*}{\bf a}_i = \sum_{j=1}^{3} A_{ji}{\bf p}_j = {\bf A}{\bf p}_i .\end{equation*}

In this light, {\bf A} transforms {\bf p}_k into {\bf a}_k. Hence, if we know what {\bf A} does to three orthonormal vectors, then we can write its representation immediately. To arrive at representation (13), we examine the action of a second-order tensor {\bf A} on any vector {\bf b} = \sum_{i=1}^{3} b_i {\bf p}_i:

(17)   \begin{eqnarray*}{\bf A}{\bf b} &=& \sum_{i=1}^{3}{\bf A} (b_i {\bf p}_i) = \sum_{i=1}^{3} b_i ({\bf A} {\bf p}_i) = \sum_{i=1}^{3} b_i \left(\sum_{j=1}^{3} A_{ji}{\bf p}_j \right) = \sum_{i=1}^{3} \sum_{j=1}^{3} b_i \left( A_{ji}{\bf p}_j \right) = \left(\sum_{i=1}^{3} \sum_{j=1}^{3} A_{ji}{\bf p}_j\otimes{\bf p}_i \right) \left( \sum_{k=1}^{3} b_k{\bf p}_k \right)\\\\[0.10in]&=& \left( \sum_{i=1}^{3} \sum_{j=1}^{3} A_{ji}{\bf p}_j\otimes{\bf p}_i \right) {\bf b} ,\end{eqnarray*}

where we used the definition of the tensor product of two vectors in the next-to-last step. Thus, we infer that {\bf A} has the representation given by (13). We can use this representation to establish expressions for the transformation induced by {\bf A}. To proceed, define {\bf c} = {\bf A}{\bf b}, in which case, from (17),

(18)   \begin{equation*}{\bf c} = {\bf A}{\bf b} = \sum_{i=1}^{3} \sum_{j=1}^{3} A_{ji}b_i {\bf p}_j. \end{equation*}

The components c_k = {\bf c}\cdot{\bf p}_k of {\bf c} are then given by

(19)   \begin{equation*}c_k = \sum_{i=1}^{3} A_{ki}b_i. \end{equation*}

When expressed in matrix notation, (19) has a familiar form:

(20)   \begin{equation*}\left[ \begin{array}{c}c_{1} \\c_{2} \\c_{3}\end{array} \right] =\left[ \begin{array}{c c c}A_{11} & A_{12} & A_{13} \\A_{21} & A_{22} & A_{23} \\A_{31} & A_{32} & A_{33}\end{array} \right]\left[ \begin{array}{c}b_{1} \\b_{2} \\b_{3}\end{array} \right].\end{equation*}

Note that (13) and (14) imply that the identity tensor has the representation {\bf I} = \sum_{i=1}^{3} {\bf p}_i\otimes{\bf p}_i.
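The component extraction (14) and the representation (13) are straightforward to exercise numerically. In the sketch below (our own illustration, assuming NumPy), the basis \{{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3\} is taken to be a rotated copy of the fixed basis, and {\bf A} is rebuilt from its components relative to that basis:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # components of A in the fixed basis

# a right-handed orthonormal basis: the fixed basis rotated about E_3
theta = 0.3
p = np.array([[np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])  # rows are p_1, p_2, p_3

# (14): A_ji = (A p_i) . p_j
A_ji = np.array([[np.dot(A @ p[i], p[j]) for i in range(3)]
                 for j in range(3)])

# (13): A = sum_i sum_j A_ji p_j ⊗ p_i
A_rebuilt = sum(A_ji[j, i] * np.outer(p[j], p[i])
                for i in range(3) for j in range(3))
assert np.allclose(A_rebuilt, A)
```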

The product of two second-order tensors

We now turn to the product of two second-order tensors {\bf A} and {\bf B}. The product {\bf A}{\bf B} is defined here to be a second-order tensor {\bf C}. First, let

(21)   \begin{eqnarray*}&& {\bf A} = \sum_{i=1}^{3}\sum_{j=1}^{3} A_{ij} {\bf p}_i\otimes{\bf p}_j,\\\\&& {\bf B} = \sum_{i=1}^{3}\sum_{j=1}^{3} B_{ij} {\bf p}_i\otimes{\bf p}_j,\\\\&& {\bf C} = \sum_{i=1}^{3}\sum_{j=1}^{3} C_{ij} {\bf p}_i\otimes{\bf p}_j. \end{eqnarray*}

We then solve the equations

(22)   \begin{equation*}{\bf C}{\bf a} = ({\bf A}{\bf B}){\bf a} \end{equation*}

for the nine components of {\bf C}, where {\bf a} is any vector. Using the arbitrariness of {\bf a}, we conclude that the components of the three tensors, which are all expressed in the same basis, are related by

(23)   \begin{equation*}C_{ij} = \sum_{k=1}^{3} A_{ik}B_{kj}. \end{equation*}

This result is identical to that for matrix multiplication. Indeed, if we define three matrices whose components are A_{ij}, B_{ij}, and C_{ij}, then we find the representation

(24)   \begin{equation*}\left[ \begin{array}{c c c}C_{11} & C_{12} & C_{13} \\C_{21} & C_{22} & C_{23} \\C_{31} & C_{32} & C_{33}\end{array} \right] =\left[ \begin{array}{c c c}A_{11} & A_{12} & A_{13} \\A_{21} & A_{22} & A_{23} \\A_{31} & A_{32} & A_{33}\end{array} \right]\left[ \begin{array}{c c c}B_{11} & B_{12} & B_{13} \\B_{21} & B_{22} & B_{23} \\B_{31} & B_{32} & B_{33}\end{array} \right].\end{equation*}

It is straightforward to establish a similar representation for the product {\bf B}{\bf A}. Finally, consider the product of two second-order tensors {\bf a}\otimes{\bf b} and {\bf c}\otimes{\bf d}:

(25)   \begin{equation*}({\bf a}\otimes{\bf b}) ({\bf c}\otimes{\bf d}) = ({\bf b}\cdot{\bf c}) ({\bf a}\otimes{\bf d}) .\end{equation*}

This result is the simplest way to remember how to multiply two second-order tensors.
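A short numerical check of identity (25), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

lhs = np.outer(a, b) @ np.outer(c, d)  # (a ⊗ b)(c ⊗ d)
rhs = np.dot(b, c) * np.outer(a, d)    # (b · c)(a ⊗ d)
assert np.allclose(lhs, rhs)           # (25)
```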

Symmetric and skew-symmetric tensors

The transpose {\bf A}^T of a second-order tensor {\bf A} is defined such that

(26)   \begin{equation*}{\bf b}\cdot({\bf A}{\bf a}) = ({\bf A}^T{\bf b})\cdot{\bf a} \end{equation*}

for any two vectors {\bf a} and {\bf b}. If we consider the second-order tensor {\bf c}\otimes{\bf d}, then we can use definition (26) to show that

(27)   \begin{eqnarray*}  ({\bf c}\otimes{\bf d})^T = {\bf d}\otimes{\bf c},\\\\  ({\bf a}\otimes{\bf b} + {\bf c}\otimes{\bf d})^T = {\bf b}\otimes{\bf a} + {\bf d}\otimes{\bf c}. \end{eqnarray*}

Given any two second-order tensors {\bf A} and {\bf B}, it can be shown that the transpose ({\bf A}{\bf B})^T = {\bf B}^T{\bf A}^T. If {\bf A} = {\bf A}^T, then {\bf A} is said to be symmetric.  On the other hand, {\bf A} is skew-symmetric if {\bf A} = -  {\bf A}^T. Using the representation (13) for {\bf A} and the identity (27)1, we find that the tensor components A_{ij} = A_{ji} when {\bf A} is symmetric and A_{ij} = -A_{ji} when {\bf A} is skew-symmetric. These results imply that {\bf A} has six independent components when it is symmetric but only three independent components when skew-symmetric. Lastly, it is always possible to decompose any second-order tensor {\bf A} into the sum of a symmetric tensor and a skew-symmetric tensor:

(28)   \begin{equation*}{\bf A} = \underbrace{ \frac{1}{2}\left( {\bf A} + {\bf A}^T \right) }_{\textrm{symmetric}} + \underbrace{ \frac{1}{2}\left( {\bf A} - {\bf A}^T \right) }_{\textrm{skew-symmetric}}. \end{equation*}
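The decomposition (28) is immediate to compute. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

sym = 0.5 * (A + A.T)    # symmetric part of A
skew = 0.5 * (A - A.T)   # skew-symmetric part of A

assert np.allclose(sym, sym.T)
assert np.allclose(skew, -skew.T)
assert np.allclose(sym + skew, A)  # (28): their sum recovers A
```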

Invariants

There are three scalar quantities associated with a second-order tensor that are independent of the right-handed orthonormal basis used for \euclid^3. Because these quantities are independent of the basis, they are known as the (principal) invariants of a second-order tensor. Given a second-order tensor {\bf A}, the invariants I_{\bf A}, II_{\bf A}, and III_{\bf A} of {\bf A} are defined as

(29)   \begin{eqnarray*}&& I_{\bf A}[{\bf a}, \, {\bf b}, \, {\bf c}] = [{\bf A}{\bf a}, \, {\bf b}, \, {\bf c}] + [{\bf a}, \, {\bf A}{\bf b}, \, {\bf c}] + [{\bf a}, \, {\bf b}, \, {\bf A}{\bf c}] ,\\\\&& II_{\bf A}[{\bf a}, \, {\bf b}, \, {\bf c}] = [{\bf a}, \, {\bf A}{\bf b}, \, {\bf A}{\bf c}] + [{\bf A}{\bf a}, \, {\bf b}, \, {\bf A}{\bf c}] + [{\bf A}{\bf a}, \, {\bf A}{\bf b}, \, {\bf c}] , \\\\&& III_{\bf A}[{\bf a}, \, {\bf b}, \, {\bf c}] = [{\bf A}{\bf a}, \, {\bf A}{\bf b}, \, {\bf A}{\bf c}] ,\end{eqnarray*}

where {\bf a}, {\bf b}, and {\bf c} are any three vectors. The first invariant I_{\bf A} is known as the trace of a tensor {\bf A}, and the third invariant III_{\bf A} is known as the determinant of {\bf A}:

(30)   \begin{eqnarray*}&& \mbox{tr}({\bf A}) = I_{\bf A},\\\\&& \mbox{det}({\bf A}) = III_{\bf A}. \end{eqnarray*}

Suppose we represent {\bf A} in terms of a right-handed orthonormal basis \{{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3 \} in \euclid^3, such as in representation (13). If we take {\bf a} = {\bf p}_1, {\bf b} = {\bf p}_2, and {\bf c} = {\bf p}_3, then

(31)   \begin{eqnarray*}&& [{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3] = 1,\\\\[0.10in]&& [{\bf A}{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3] + [{\bf p}_1, \, {\bf A}{\bf p}_2, \, {\bf p}_3] + [{\bf p}_1, \, {\bf p}_2, \, {\bf A}{\bf p}_3] = \sum_{i=1}^{3} ({\bf A}{\bf p}_i)\cdot{\bf p}_i = A_{11} + A_{22} + A_{33},\\\\&& [{\bf A}{\bf p}_1, \, {\bf A}{\bf p}_2, \, {\bf A}{\bf p}_3] = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} \altepsilon_{ijk} A_{i1}A_{j2}A_{k3} = \det \left(\left[ \begin{array}{c c c}A_{11} & A_{12} & A_{13} \\A_{21} & A_{22} & A_{23} \\A_{31} & A_{32} & A_{33}\end{array} \right] \right) .\end{eqnarray*}

Consequently, the trace of {\bf A} is given by

(32)   \begin{equation*}\mbox{tr}({\bf A}) = A_{11} + A_{22} + A_{33}. \end{equation*}

A similar result holds for the trace of a matrix. We also note the related result \mbox{tr}({\bf a}\otimes{\bf b}) = {\bf a}\cdot{\bf b}. In addition, we find that the determinant of {\bf A} can be computed using a familiar matrix representation:

(33)   \begin{equation*}\det({\bf A}) = \sum_{i=1}^{3} \sum_{j=1}^{3} \sum_{k=1}^{3} \altepsilon_{ijk} A_{i1}A_{j2}A_{k3} = \det \left(\left[ \begin{array}{c c c}A_{11} & A_{12} & A_{13} \\A_{21} & A_{22} & A_{23} \\A_{31} & A_{32} & A_{33}\end{array} \right]\right).\end{equation*}
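The definitions (29) can be checked against the familiar matrix formulas for the trace and determinant. In the sketch below (ours, assuming NumPy), the vectors {\bf a}, {\bf b}, and {\bf c} are taken to be the standard basis, for which [{\bf a}, \, {\bf b}, \, {\bf c}] = 1:

```python
import numpy as np

def triple(u, v, w):
    """Scalar triple product [u, v, w] = w . (u x v), as in (2)."""
    return np.dot(w, np.cross(u, v))

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
a, b, c = np.eye(3)  # [a, b, c] = 1

I1 = triple(A @ a, b, c) + triple(a, A @ b, c) + triple(a, b, A @ c)
I2 = triple(a, A @ b, A @ c) + triple(A @ a, b, A @ c) + triple(A @ a, A @ b, c)
I3 = triple(A @ a, A @ b, A @ c)

assert np.isclose(I1, np.trace(A))                               # (32)
assert np.isclose(I2, 0.5 * (np.trace(A)**2 - np.trace(A @ A)))  # (39)_2
assert np.isclose(I3, np.linalg.det(A))                          # (33)
```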

Inverses and adjugates

The inverse {\bf A}^{-1} of a second-order tensor {\bf A} is the tensor that satisfies

(34)   \begin{equation*}{\bf A}^{-1}{\bf A} = {\bf A}{\bf A}^{-1} = {\bf I}. \end{equation*}

For the inverse of {\bf A} to exist, its determinant must be nonzero: \mbox{det}({\bf A}) \ne 0. Taking the transpose of (34), we find that the inverse of the transpose of {\bf A} is the transpose of the inverse. The adjugate {\bf A}^{*} satisfies

(35)   \begin{equation*}{\bf A}^{*}({\bf a}\times{\bf b}) = ({\bf A}{\bf a})\times({\bf A}{\bf b}) \end{equation*}

for any two vectors {\bf a} and {\bf b}. If {\bf A} is invertible, then (35) yields a relationship between {\bf A}^{*} and {\bf A}^{-1}:

(36)   \begin{equation*}{\bf A}^{*} = \det({\bf A}) ({\bf A}^{-1})^T. \end{equation*}
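Relation (36) offers a direct way to compute the adjugate of an invertible tensor and to verify (35). A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # safely invertible
a, b = rng.standard_normal((2, 3))

A_star = np.linalg.det(A) * np.linalg.inv(A).T     # (36)

# (35): A*(a × b) = (A a) × (A b)
assert np.allclose(A_star @ np.cross(a, b), np.cross(A @ a, A @ b))
```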

Eigenvalues and eigenvectors

The eigenvalues (or characteristic values, or principal values) of a second-order tensor {\bf A} are defined as the roots \lambda of the characteristic equation

(37)   \begin{equation*}\det(\lambda {\bf I} - {\bf A}) = 0. \end{equation*}

The three roots of this equation are denoted by \lambda_1, \lambda_2, and \lambda_3. On expanding the characteristic equation (37), we find that

(38)   \begin{equation*}\lambda^3 - I_{\bf A} \lambda^2 + II_{\bf A}\lambda - III_{\bf A} = 0, \end{equation*}

where

(39)   \begin{eqnarray*}&& I_{\bf A} = \mbox{tr}({\bf A}) = \lambda_1  + \lambda_2 + \lambda_3 ,\\\\[0.15in]&& II_{\bf A} = \frac{1}{2}(\mbox{tr}({\bf A})^2 - \mbox{tr}({\bf A}^2) ) = \lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_1\lambda_3 ,\\\\[0.15in]&& III_{\bf A} = \det({\bf A}) = \lambda_1\lambda_2\lambda_3 .\end{eqnarray*}

The corresponding eigenvectors (or characteristic directions, or principal directions) of {\bf A} are the vectors {\bf u}_i that satisfy

(40)   \begin{equation*}\mathbf{A} {\bf u}_i = \lambda_i {\bf u}_i.\end{equation*}

A second-order tensor has three eigenvalues and at least one eigenvector associated with each distinct eigenvalue. To determine these eigenvectors, we express (40), with the help of (20), in matrix-vector form and then use standard solution techniques from linear algebra. Note that each eigenvector is unique only up to a multiplicative constant.
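Numerically, the eigenvalues returned by a standard solver satisfy the characteristic equation (38) with coefficients given by the invariants (39). A sketch, assuming NumPy and using a symmetric tensor so that the eigenvalues are real:

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)  # symmetric, so the eigenvalues are real

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

lam = np.linalg.eigvals(A)  # roots of the characteristic equation (37)
for x in lam:
    assert np.isclose(x**3 - I1 * x**2 + I2 * x - I3, 0.0)  # (38)
```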

Proper-orthogonal tensors

A second-order tensor {\bf A} is said to be orthogonal if {\bf A}{\bf A}^T = {\bf A}^T{\bf A} = {\bf I}. That is, the transpose of an orthogonal tensor is its inverse. It also follows that \det({\bf A}) = \pm 1. An orthogonal tensor {\bf A} has the characteristic property that ({\bf A}{\bf a}) \cdot ({\bf A}{\bf a}) = {\bf a}\cdot{\bf a} for any vector {\bf a}, and so it preserves the length of any vector it transforms. A tensor {\bf A} is proper-orthogonal if it is orthogonal and, in addition, \det({\bf A}) = 1. Thus, the proper-orthogonal second-order tensors are a subclass of the orthogonal second-order tensors.

Positive-definite tensors

A second-order tensor {\bf A} is said to be positive-definite if ({\bf A}{\bf a})\cdot{\bf a} > 0 for every vector {\bf a} \ne {\bf 0} and ({\bf A}{\bf a})\cdot{\bf a} = 0 if, and only if, {\bf a} = {\bf 0}. A consequence of this definition is that a skew-symmetric second-order tensor can never be positive-definite. If {\bf A} is symmetric and positive-definite, then all three of its eigenvalues are positive and, furthermore, the tensor has the representation

(41)   \begin{equation*}{\bf A} = \lambda_1{\bf u}_1\otimes{\bf u}_1 + \lambda_2{\bf u}_2\otimes{\bf u}_2 + \lambda_3{\bf u}_3\otimes{\bf u}_3, \end{equation*}

where \lambda_i and {\bf u}_i are the eigenvalues and corresponding orthonormal eigenvectors of {\bf A}, respectively. This representation is often known as the spectral decomposition.
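A minimal numerical illustration of the spectral decomposition (41), assuming NumPy; the tensor is built as {\bf B}^T{\bf B} + {\bf I} so that it is symmetric and positive-definite by construction:

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((3, 3))
A = B.T @ B + np.eye(3)  # symmetric positive-definite by construction

lam, u = np.linalg.eigh(A)  # columns of u are orthonormal eigenvectors
assert np.all(lam > 0)      # positive-definiteness => positive eigenvalues

# (41): A = sum_i lambda_i u_i ⊗ u_i
A_rebuilt = sum(lam[i] * np.outer(u[:, i], u[:, i]) for i in range(3))
assert np.allclose(A_rebuilt, A)
```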

Third-order tensors

A third-order tensor transforms vectors into second-order tensors and second-order tensors into vectors. With respect to a right-handed orthonormal basis \{{\bf p}_1, \, {\bf p}_2, \, {\bf p}_3\}, any third-order tensor \mathcal{A} can be represented as

(42)   \begin{equation*}\mathcal{A} = \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3} \mathsf{A}_{ijk} {\bf p}_i\otimes{\bf p}_j\otimes{\bf p}_k, \end{equation*}

and we define the following two tensor products:

(43)   \begin{eqnarray*}&& ({\bf a}\otimes{\bf b}\otimes{\bf c})\left[{\bf d}\otimes{\bf e}\right] =({\bf b}\cdot{\bf d})({\bf c}\cdot{\bf e}) {\bf a}, \\\\&& ({\bf a}\otimes{\bf b}\otimes{\bf c}){\bf d} = ({\bf c}\cdot{\bf d}) ({\bf a}\otimes{\bf b}).\end{eqnarray*}

Note the presence of the brackets [ \, \cdot \, ] in (43)1. The main example of a third-order tensor we use throughout this site is the alternator {\bepsilon}:

(44)   \begin{equation*}{\bepsilon} = \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3} \altepsilon_{ijk} {\bf p}_i\otimes{\bf p}_j\otimes{\bf p}_k . \end{equation*}

This tensor has some useful features. First, if {\bf A} is a symmetric second-order tensor, then {\bepsilon}\left[{\bf A}\right] = {\bf 0}. Second, for any vector {\bf c} = \sum_{k=1}^{3} c_k {\bf p}_k,

(45)   \begin{eqnarray*}{\bepsilon}{\bf c} &=& \sum_{i=1}^{3}\sum_{j=1}^{3}\sum_{k=1}^{3} \altepsilon_{ijk} c_k \, {\bf p}_i\otimes{\bf p}_j\\\\&=& c_3({\bf p}_1\otimes{\bf p}_2 - {\bf p}_2\otimes{\bf p}_1) + c_2({\bf p}_3\otimes{\bf p}_1 - {\bf p}_1\otimes{\bf p}_3) + c_1({\bf p}_2\otimes{\bf p}_3 - {\bf p}_3\otimes{\bf p}_2),\end{eqnarray*}

which is a skew-symmetric second-order tensor. Thus, \bepsilon can be used to transform a vector into a second-order skew-symmetric tensor and transform the skew-symmetric part of a second-order tensor into a vector.
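Storing the alternator (44) as a 3x3x3 array makes the action (45) a single contraction. A sketch, assuming NumPy and 0-based indices:

```python
import numpy as np

# components epsilon_ijk of the alternator (44), 0-based
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[k, j, i] = -1.0  # odd permutations

c = np.array([1.0, 2.0, 3.0])
eps_c = np.einsum('ijk,k->ij', eps, c)  # components of epsilon c, as in (45)
assert np.allclose(eps_c, -eps_c.T)     # a skew-symmetric second-order tensor
```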

Axial vectors

The fact that the third-order alternating tensor {\bepsilon} acts on a vector to produce a skew-symmetric second-order tensor enables us to define a skew-symmetric tensor {\bf C} for every vector {\bf c}, and vice versa:

(46)   \begin{eqnarray*}&& {\bf C} = - {\bepsilon}{\bf c},\\\\[0.10in]&& {\bf c} = - \frac{1}{2}{\bepsilon}\left[{\bf C}\right]. \end{eqnarray*}

The vector {\bf c} is known as the axial vector of {\bf C}. Notice that if {\bf C} has the representation

(47)   \begin{equation*}{\bf C} = c_{21}({\bf p}_2\otimes{\bf p}_1 - {\bf p}_1\otimes{\bf p}_2) + c_{32}({\bf p}_3\otimes{\bf p}_2 - {\bf p}_2\otimes{\bf p}_3) + c_{13}({\bf p}_1\otimes{\bf p}_3 - {\bf p}_3\otimes{\bf p}_1),\end{equation*}

then, with the help of (43)1, its axial vector

(48)   \begin{equation*}{\bf c} = - \frac{1}{2}{\bepsilon}\left[{\bf C}\right] = c_{21}{\bf p}_3 + c_{13}{\bf p}_2 + c_{32}{\bf p}_1. \end{equation*}

We also note the important result

(49)   \begin{equation*}{\bf C}{\bf a} = (- {\bepsilon}{\bf c}){\bf a} = {\bf c}\times{\bf a} \end{equation*}

for any other vector {\bf a}. This identity allows us to replace cross products with tensor products, and vice versa. We often express the relationship between a skew-symmetric tensor {\bf C} and its axial vector {\bf c} without explicit mention of the alternator \bepsilon:

(50)   \begin{eqnarray*}&& {\bf c} = \mbox{ax}\left({\bf C}\right),\\\\&& {\bf C} = \mbox{skewt}\left({\bf c}\right). \end{eqnarray*}
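The correspondence between a vector and its skew-symmetric tensor in (46) and (50) is often coded directly. A minimal sketch, assuming NumPy; the function names follow (50):

```python
import numpy as np

def skewt(c):
    """Skew-symmetric tensor C = -epsilon c whose axial vector is c."""
    return np.array([[0.0, -c[2], c[1]],
                     [c[2], 0.0, -c[0]],
                     [-c[1], c[0], 0.0]])

def ax(C):
    """Axial vector c = ax(C) of a skew-symmetric tensor C, as in (48)."""
    return np.array([C[2, 1], C[0, 2], C[1, 0]])

rng = np.random.default_rng(7)
c, a = rng.standard_normal((2, 3))

assert np.allclose(skewt(c) @ a, np.cross(c, a))  # (49): C a = c × a
assert np.allclose(ax(skewt(c)), c)               # (46)/(50)
```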

Differentiation of tensors

One often encounters derivatives of tensors. Suppose a second-order tensor {\bf A} has the representation (13), where the tensor components A_{ij} and the basis vectors {\bf p}_i are functions of time. The time derivative of {\bf A} is defined as

(51)   \begin{equation*}\dot{\bf A} = \sum_{i=1}^{3} \sum_{j=1}^{3} \dot{A}_{ij} {\bf p}_i\otimes{\bf p}_j + \sum_{i=1}^{3} \sum_{j=1}^{3} A_{ij} \dot{\bf p}_i\otimes{\bf p}_j + \sum_{i=1}^{3} \sum_{j=1}^{3} A_{ij} {\bf p}_i\otimes\dot{\bf p}_j.\end{equation*}

Notice that we differentiate both the components and the basis vectors. We can also define a chain rule and product rules. If the tensors {\bf A} = {\bf A}(q(t)) and {\bf B} = {\bf B}(t) and the vector {\bf c} = {\bf c}(t), then

(52)   \begin{eqnarray*}&& \dot{\bf A} = \frac{\partial {\bf A}}{\partial q} \dot{q},\\\\[0.10in]&& \dot{ \overline{ {\bf A}{\bf B} } } = \dot{\bf A}{\bf B} + {\bf A}\dot{\bf B}, \\\\&& \dot{ \overline{ {\bf A}{\bf c} } } = \dot{\bf A}{\bf c} + {\bf A}\dot{\bf c}.\end{eqnarray*}

Now, consider a function \psi = \psi({\bf A}). The derivative of this function with respect to {\bf A} is defined to be the second-order tensor

(53)   \begin{equation*}\frac{\partial \psi}{\partial {\bf A}} = \sum_{i=1}^{3}\sum_{j=1}^{3} \frac{\partial \psi}{\partial A_{ij}} {\bf p}_i\otimes{\bf p}_j. \end{equation*}

In addition, if the basis vectors {\bf p}_i are constant, then

(54)   \begin{equation*}\dot{\psi} = \sum_{i=1}^{3}\sum_{j=1}^{3} \frac{\partial \psi}{\partial A_{ij}} \dot{A}_{ij} = \mbox{tr} \left( \frac{\partial \psi}{\partial {\bf A}}\dot{\bf A}^T \right). \end{equation*}
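As a concrete check of (54), take \psi({\bf A}) = \det({\bf A}), whose derivative with respect to {\bf A} is \det({\bf A})({\bf A}^{-1})^T (compare (36)). The sketch below (ours, assuming NumPy) confirms the formula against a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # safely invertible
A_dot = rng.standard_normal((3, 3))                # an arbitrary rate of change of A

dpsi_dA = np.linalg.det(A) * np.linalg.inv(A).T    # d(det A)/dA, Jacobi's formula
psi_dot = np.trace(dpsi_dA @ A_dot.T)              # right-hand side of (54)

# central difference of t -> det(A + t A_dot) at t = 0
h = 1e-6
fd = (np.linalg.det(A + h * A_dot) - np.linalg.det(A - h * A_dot)) / (2 * h)
assert np.isclose(psi_dot, fd)
```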

References

  1. Casey, J., A treatment of rigid body dynamics, ASME Journal of Applied Mechanics 50(4a) 905–907 and 51 227 (1983).
  2. Casey, J., On the advantages of a geometrical viewpoint in the derivation of Lagrange’s equations for a rigid continuum, Zeitschrift für angewandte Mathematik und Physik 46 S805–S847 (1995).
  3. Chadwick, P., Continuum Mechanics: Concise Theory and Problems, Dover Publications, New York (1999). Reprint of the George Allen & Unwin Ltd., London, 1976 edition.
  4. Gurtin, M. E., An Introduction to Continuum Mechanics, Academic Press, New York (1981).