Eigenvalue Analysis

Generalized Eigenvalue Problems

In free oscillation analysis of continuous bodies, a spatial discretization is performed, and the body is modeled as a multi-DOF system with concentrated mass points, as shown in Fig. 2.3.1. For free oscillation problems without damping, the governing equation (equation of motion) is as follows:

\[ M\ddot{u} + Ku = 0 \tag{2.3.1} \]

where \(u\) is the generalized displacement vector, \(M\) is the mass matrix, and \(K\) is the stiffness matrix. Further, the function \(u\) is defined with \(\omega\) as the natural angular frequency, \(a\) and \(b\) as arbitrary constants, and \(y\) as a constant vector:

\[ u(t) = y\,(a\sin\omega t + b\cos\omega t) \tag{2.3.2} \]

In this case, differentiating Eq. (2.3.2) twice with respect to time gives

\[ \ddot{u}(t) = -\omega^{2}\,y\,(a\sin\omega t + b\cos\omega t) = -\omega^{2}u(t) \tag{2.3.3} \]

Substituting Eqs. (2.3.2) and (2.3.3) into Eq. (2.3.1) yields

\[ \left(K - \omega^{2}M\right)y\,(a\sin\omega t + b\cos\omega t) = 0 \tag{2.3.4} \]

Since this must hold at every time \(t\), the following equation is obtained by setting \(\lambda = \omega^{2}\):

\[ Ky = \lambda My \tag{2.3.5} \]

Therefore, if a coefficient \(\lambda\) and a vector \(y\) that satisfy Eq. (2.3.5) can be determined, the function \(u\) of Eq. (2.3.2) becomes a solution of Eq. (2.3.1).

The coefficient \(\lambda\) and the vector \(y\) are called the eigenvalue and the eigenvector, respectively, and the problem of determining them from Eq. (2.3.5) is known as a generalized eigenvalue problem.

Fig. 2.3.1: Example of a multi-DOF system of free oscillation without damping
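
As a concrete illustration of Eq. (2.3.5) (a minimal SciPy sketch; the 2-DOF spring–mass values are invented, not taken from HEC-MW), `scipy.linalg.eigh` solves the generalized symmetric problem directly, and the natural angular frequencies follow as \(\omega = \sqrt{\lambda}\):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF spring-mass chain (masses m1, m2; springs k1, k2),
# invented purely to illustrate K y = lambda M y (Eq. 2.3.5).
m1, m2 = 1.0, 2.0
k1, k2 = 100.0, 50.0

M = np.array([[m1, 0.0],
              [0.0, m2]])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# eigh solves the generalized symmetric problem K y = lambda M y.
lam, Y = eigh(K, M)
omega = np.sqrt(lam)   # natural angular frequencies (rad/s)
print(lam)             # eigenvalues lambda = omega^2
print(omega)           # corresponding angular frequencies
```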

Problem Settings

Eq. (2.3.5), which may arise at any order \(n\), appears in many situations. When dealing with physical problems, the matrix is often Hermitian (symmetric in the real case): in a complex Hermitian matrix, each component equals the complex conjugate of its transposed component, and a real Hermitian matrix is simply a symmetric matrix. Therefore, when the components of the matrix \(K\) are written \(k_{ij}\) and the complex conjugate of \(k_{ij}\) is written \(\bar{k}_{ij}\), the relationship becomes

\[ k_{ij} = \bar{k}_{ji} \tag{2.3.6} \]

Here, it is assumed that the matrices are symmetric and positive definite. A positive definite matrix is a symmetric matrix whose eigenvalues are all positive; equivalently, it satisfies Eq. (2.3.7) for every nonzero vector \(x\):

\[ x^{T}Kx > 0 \tag{2.3.7} \]
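
As a quick check (a minimal NumPy sketch; the 2×2 matrix is invented for illustration), positive definiteness of a symmetric matrix can be tested with a Cholesky factorization, which succeeds exactly when the matrix is positive definite:

```python
import numpy as np

# Hypothetical symmetric 2x2 stiffness matrix, used only for illustration.
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

def is_positive_definite(A):
    """Return True if the symmetric matrix A is positive definite.

    np.linalg.cholesky succeeds exactly for positive definite matrices,
    which is equivalent to x^T A x > 0 for all nonzero x (Eq. 2.3.7).
    """
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(K))  # True: the eigenvalues of K are 1 and 3
```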

Shifted Inverse Iteration Method

Structural analyses with the finite element method do not require all eigenvalues; in many cases, just a few low-order eigenvalues are sufficient. Moreover, HEC-MW is designed for large-scale problems, so the matrices are large and very sparse (containing many zeros). It is therefore important to exploit these properties and determine the low-order eigenvalues efficiently.

When the lower limit of the eigenvalues is denoted by \(\sigma\), Eq. (2.3.5) can be rewritten in the following, mathematically equivalent, form:

\[ \left(K - \sigma M\right)^{-1}My = \frac{1}{\lambda - \sigma}\,y \tag{2.3.8} \]

This equation has the following convenient properties for calculation:

  1. The order of the eigenvalues (modes) is inverted.
  2. The eigenvalues around \(\sigma\) are mapped to the largest eigenvalues \(1/(\lambda - \sigma)\) of the transformed problem.

In iterative calculations of this kind, the eigenvalue of maximum magnitude is determined first. Therefore, the main convergence calculation is applied to Eq. (2.3.8) rather than Eq. (2.3.5), so that the eigenvalues around \(\sigma\) are obtained first. This method is called shifted inverse iteration.
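
The same idea is available in general-purpose sparse eigensolvers. As an illustration (a minimal SciPy sketch, not the HEC-MW implementation; the matrices are invented stand-ins), `scipy.sparse.linalg.eigsh` applies shift-invert internally when a shift `sigma` is supplied:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 100
# Hypothetical sparse stiffness (1D Laplacian-like) and lumped mass matrices.
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
M = diags([1.0], offsets=[0], shape=(n, n), format="csc")

sigma = 0.01  # shift near the low end of the spectrum
# In shift-invert mode, eigsh iterates on (K - sigma*M)^{-1} M, as in
# Eq. (2.3.8), and returns the eigenvalues of K y = lambda M y nearest sigma.
vals, vecs = eigsh(K, k=5, M=M, sigma=sigma, which="LM")
print(np.sort(vals))  # the five eigenvalues closest to sigma
```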

Algorithm for Eigenvalue Solution

Various methods exist for solving the eigenvalue problem of Eq. (2.3.5); the Jacobi method is an orthodox and popular example.

It is effective for small, dense matrices; however, the matrices handled by HEC-MW are large and sparse, so the Lanczos iterative method is preferred.

Lanczos Method

The Lanczos method was proposed by C. Lanczos in the 1950s and is an algorithm for tridiagonalizing a matrix. The following are some of its characteristics:

  1. It is an iterative convergence method that can be applied even when the matrix is sparse.
  2. The algorithm is built around matrix–vector products, which makes it well suited for parallelization.
  3. It is suitable for the geometric domain decomposition associated with a finite element mesh.
  4. The number of eigenvalues to be determined and the mode range can be restricted, making the calculation more efficient.

The Lanczos method generates a sequence of orthogonal vectors, starting from an initial vector, to construct the basis of a subspace. It is faster than other subspace methods and is widely used in finite element programs. However, the method is easily affected by rounding errors, which can destroy the orthogonality of the vectors and force the process to break off partway through; countermeasures against such errors are therefore essential, as the sketch below illustrates.
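
To make the rounding-error problem concrete, the following sketch (a plain three-term recurrence without any countermeasures; the recurrence itself is formalized in the Tridiagonalization section below, and the diagonal test matrix is invented) measures how far the computed vectors drift from orthonormality:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 100
# Hypothetical symmetric test matrix with well-separated eigenvalues 1..n.
A = np.diag(np.arange(1.0, n + 1.0))

Q = np.zeros((n, m))
q0 = rng.standard_normal(n)
Q[:, 0] = q0 / np.linalg.norm(q0)
beta, q_prev = 0.0, np.zeros(n)
for k in range(m - 1):
    w = A @ Q[:, k] - beta * q_prev      # three-term recurrence, no fixes
    alpha = Q[:, k] @ w
    w -= alpha * Q[:, k]
    beta = np.linalg.norm(w)
    q_prev = Q[:, k]
    Q[:, k + 1] = w / beta

# In exact arithmetic Q^T Q would be the identity; in floating point the
# deviation typically grows far above machine precision as the extreme
# eigenvalues converge -- the loss of orthogonality noted above.
print(np.max(np.abs(Q.T @ Q - np.eye(m))))
```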

Geometric Significance of the Lanczos Method

Rewriting Eq. (2.3.8) in terms of the single matrix \(A = (K - \sigma M)^{-1}M\) and the variable \(\mu = 1/(\lambda - \sigma)\), the problem takes the form of a standard eigenvalue problem:

\[ Aq = \mu q \tag{2.3.9} \]

An appropriate starting vector \(q_0\) is linearly transformed with the matrix \(A\) (see Fig. 2.3.2).

Fig. 2.3.2: Linear Transformation of \(q_0\) with Matrix \(A\)

The transformed vector \(Aq_0\) is orthogonalized within the space created by the original vector; that is, it is subjected to the Gram–Schmidt orthogonalization shown in Fig. 2.3.2. If the vector thus obtained is normalized (to length 1), the vector \(q_1\) is generated (Fig. 2.3.3). A similar calculation on \(Aq_1\) yields \(q_2\) (Fig. 2.3.4), which is orthogonal to both \(q_0\) and \(q_1\). If the same calculation is repeated, mutually orthogonal vectors are determined up to the order of the matrix.

Fig. 2.3.3: Vector \(q_1\) orthogonal to \(q_0\)

Fig. 2.3.4: Vector \(q_2\) Orthogonal to \(q_1\) and \(q_0\)
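
These first few steps are easy to reproduce directly. The sketch below (a minimal NumPy illustration with an invented symmetric matrix standing in for \(A\)) constructs \(q_1\) and \(q_2\) exactly as in Figs. 2.3.3 and 2.3.4:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                 # invented symmetric matrix for illustration
q0 = rng.standard_normal(n)
q0 /= np.linalg.norm(q0)

# q1: orthogonalize A q0 against q0, then normalize (Fig. 2.3.3).
w = A @ q0
w -= (q0 @ w) * q0
q1 = w / np.linalg.norm(w)

# q2: orthogonalize A q1 against q1 and q0, then normalize (Fig. 2.3.4).
w = A @ q1
w -= (q1 @ w) * q1
w -= (q0 @ w) * q0
q2 = w / np.linalg.norm(w)

print(q0 @ q1, q0 @ q2, q1 @ q2)  # all ~0: mutually orthogonal vectors
```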

The algorithm of the Lanczos method is therefore a Gram–Schmidt orthogonalization on the vector sequence \(q_0, Aq_0, A^{2}q_0, \ldots\) or, in other words, on \(\{A^{k}q_0\}_{k=0,1,2,\ldots}\). This vector sequence is called a Krylov sequence, and the space it creates is called a Krylov subspace. If Gram–Schmidt orthogonalization is performed in this space, two adjacent vectors determine the next vector. This is called the principle of Lanczos.
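
The reason only the two adjacent vectors survive can be stated in one step (a short derivation, assuming \(A\) symmetric; for the generalized problem the same argument holds in the appropriate inner product). When \(Aq_k\) is orthogonalized against all previous vectors, the coefficients against \(q_i\) with \(i < k-1\) vanish:

\[ q_i^{T}Aq_k = \left(Aq_i\right)^{T}q_k = 0 \qquad (i < k-1), \]

because \(Aq_i\) lies in the span of \(q_0, \ldots, q_{i+1}\), all of which are orthogonal to \(q_k\). Hence only the components along \(q_k\) and \(q_{k-1}\) remain, which yields the three-term recurrence used below.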

Tridiagonalization

The \(k\)th step of the iteration above can be expressed as

\[ \beta_{k+1}q_{k+1} = Aq_k - \alpha_k q_k - \beta_k q_{k-1} \tag{2.3.10} \]

In this case,

\[ \alpha_k = q_k^{T}Aq_k, \qquad \beta_{k+1} = \left\|Aq_k - \alpha_k q_k - \beta_k q_{k-1}\right\| \tag{2.3.11} \]

with \(\beta_0 q_{-1} = 0\). In matrix notation, after \(m\) steps this becomes

\[ AQ_m = Q_m T_m + \beta_m q_m e_m^{T} \tag{2.3.12} \]

In this case,

\[ Q_m = \left(q_0, q_1, \ldots, q_{m-1}\right), \qquad T_m = \begin{pmatrix} \alpha_0 & \beta_1 & & \\ \beta_1 & \alpha_1 & \ddots & \\ & \ddots & \ddots & \beta_{m-1} \\ & & \beta_{m-1} & \alpha_{m-1} \end{pmatrix} \tag{2.3.13} \]

where \(e_m\) is the \(m\)th unit vector. That is, the eigenvalues are obtained through an eigenvalue calculation on the tridiagonal matrix \(T_m\) obtained with Eq. (2.3.12).
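
A compact way to verify Eqs. (2.3.10)–(2.3.13) numerically is sketched below (assumptions: a dense random symmetric test matrix stands in for \(A\), and full reorthogonalization is used as a simple, if costly, countermeasure against the rounding errors discussed earlier; `scipy.linalg.eigh_tridiagonal` computes the eigenvalues of \(T_m\)):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
n, m = 300, 60
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                 # invented symmetric stand-in for A

Q = np.zeros((n, m + 1))
alpha = np.zeros(m)
beta = np.zeros(m)                # beta[k] couples q_k and q_{k+1}
q0 = rng.standard_normal(n)
Q[:, 0] = q0 / np.linalg.norm(q0)

for k in range(m):
    w = A @ Q[:, k]
    if k > 0:
        w -= beta[k - 1] * Q[:, k - 1]
    alpha[k] = Q[:, k] @ w
    w -= alpha[k] * Q[:, k]
    # Full reorthogonalization against all previous vectors.
    w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)
    beta[k] = np.linalg.norm(w)
    Q[:, k + 1] = w / beta[k]

# Eigenvalues of the tridiagonal T_m (Ritz values) approximate the
# extreme eigenvalues of A well before m reaches n.
ritz = eigh_tridiagonal(alpha, beta[:m - 1], eigvals_only=True)
exact = np.linalg.eigvalsh(A)
print(ritz[-3:])   # largest Ritz values
print(exact[-3:])  # largest exact eigenvalues (should agree closely)
```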