Module 4: Advanced Lesson 1 of 3

Eigenvalues & Eigenvectors

When a matrix multiplies most vectors, the result points in a completely new direction. But some special vectors only get scaled — their direction stays the same (or flips). These are eigenvectors, and the scale factors are eigenvalues. Together, they reveal the intrinsic behavior of a linear transformation.

Definition

Given a square matrix $A$, a nonzero vector $\vec{v}$ is an eigenvector of $A$ if:

$$A\vec{v} = \lambda \vec{v}$$

where $\lambda$ is a scalar called the eigenvalue. The matrix $A$ acts on $\vec{v}$ by simply stretching (or compressing, or flipping) it — no rotation, no shearing, just scaling by $\lambda$.

Geometric Intuition

The eigen-decomposition reveals the “natural axes” of a transformation — the eigenvectors are the axes, and the eigenvalues are the scale factors along each:

  • $\lambda > 1$: the eigenvector direction is stretched
  • $0 < \lambda < 1$: the eigenvector direction is compressed
  • $\lambda < 0$: the eigenvector direction is flipped and scaled
  • $\lambda = 0$: the eigenvector direction is collapsed (the matrix is singular along this direction)

Computing Eigenvalues

Rearrange $A\vec{v} = \lambda\vec{v}$:

$$(A - \lambda I)\vec{v} = \vec{0}$$

For a nonzero $\vec{v}$ to exist, the matrix $(A - \lambda I)$ must be singular — its determinant must be zero:

$$\det(A - \lambda I) = 0$$

This is the characteristic equation. For an $n \times n$ matrix, it yields a polynomial of degree $n$ in $\lambda$.

2×2 Example

Find the eigenvalues of:

$$A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$$

The characteristic equation:

$$\det\begin{bmatrix} 3 - \lambda & 1 \\ 0 & 2 - \lambda \end{bmatrix} = (3 - \lambda)(2 - \lambda) - 0 = 0$$

$$\lambda^2 - 5\lambda + 6 = 0 \implies (\lambda - 3)(\lambda - 2) = 0$$

So $\lambda_1 = 3$ and $\lambda_2 = 2$.

Computing Eigenvectors

For each eigenvalue $\lambda_i$, solve $(A - \lambda_i I)\vec{v} = \vec{0}$ for $\vec{v}$.

For $\lambda_1 = 3$:

$$(A - 3I)\vec{v} = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}\vec{v} = \vec{0} \implies v_2 = 0$$

Eigenvector: $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ (or any scalar multiple)

For $\lambda_2 = 2$:

$$(A - 2I)\vec{v} = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}\vec{v} = \vec{0} \implies v_1 + v_2 = 0$$

Eigenvector: $\vec{v}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$ (or any scalar multiple)
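The worked example can be checked numerically. A minimal sketch with NumPy (`np.linalg.eig` pairs each eigenvalue with the corresponding column of the returned eigenvector matrix):

```python
import numpy as np

# The example matrix from above.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenvalue pairs with the corresponding COLUMN of `eigenvectors`,
# so iterate over the transpose to walk eigenpairs together.
for lam, v in zip(eigenvalues, eigenvectors.T):
    # The defining property: A v = lambda v.
    assert np.allclose(A @ v, lam * v)

# The eigenvalues come out as 3 and 2 (the order is not guaranteed).
assert np.allclose(sorted(eigenvalues), [2.0, 3.0])
```

Note that NumPy normalizes eigenvectors to unit length and picks an arbitrary sign, so the second eigenvector may appear as roughly $(0.707, -0.707)$ rather than $(-1, 1)$ — both are valid, as the next section explains.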

Eigenvectors Are Not Unique

If $\vec{v}$ is an eigenvector, so is $c\vec{v}$ for any nonzero scalar $c$. We typically normalize eigenvectors to unit length, but the choice of sign/scale is arbitrary. What matters is the direction (or subspace).

Symmetric Matrices — A Special Case

Symmetric matrices ($A = A^T$) arise frequently in robotics — inertia tensors, covariance matrices, stiffness matrices. They have two powerful guarantees:

  1. All eigenvalues are real (no complex numbers)
  2. Eigenvectors are orthogonal (perpendicular to each other)

This means a symmetric matrix can always be decomposed as:

$$A = Q \Lambda Q^T$$

where $Q$ is an orthogonal matrix of eigenvectors and $\Lambda$ is a diagonal matrix of eigenvalues. This is the spectral decomposition — it decomposes the transformation into independent scalings along perpendicular axes.
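In code, the symmetric case has a dedicated solver, `np.linalg.eigh`, which returns real eigenvalues in ascending order and an orthonormal eigenvector matrix. A sketch with an arbitrary symmetric matrix:

```python
import numpy as np

# An arbitrary symmetric 2x2 matrix.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

eigenvalues, Q = np.linalg.eigh(A)  # symmetric solver: real eigenvalues, ascending
Lambda = np.diag(eigenvalues)

# Spectral decomposition: A = Q Lambda Q^T.
assert np.allclose(Q @ Lambda @ Q.T, A)

# Q is orthogonal: its columns are perpendicular unit vectors.
assert np.allclose(Q.T @ Q, np.eye(2))
```

Prefer `eigh` over the general `eig` for symmetric matrices — it is faster, numerically more stable, and guarantees real output.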

Robotics Application: The Inertia Tensor

The inertia tensor of a rigid body describes how mass is distributed around each axis:

$$I = \begin{bmatrix} I_{xx} & -I_{xy} & -I_{xz} \\ -I_{xy} & I_{yy} & -I_{yz} \\ -I_{xz} & -I_{yz} & I_{zz} \end{bmatrix}$$

This matrix is symmetric ($I = I^T$). Its eigenvectors are the principal axes of inertia — the directions around which the body rotates most naturally (without wobbling). Its eigenvalues are the principal moments of inertia.

When you design a robot link, aligning the coordinate frame with the principal axes simplifies the dynamics equations because the inertia tensor becomes diagonal:

$$I_{\text{principal}} = \begin{bmatrix} I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3 \end{bmatrix}$$
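Numerically, the principal moments and axes come straight out of the symmetric eigen-solver. A sketch with made-up inertia values (the tensor below is illustrative, not a real link):

```python
import numpy as np

# Illustrative inertia tensor in the link frame (made-up values, kg*m^2).
I = np.array([[0.30, -0.02, -0.01],
              [-0.02, 0.25, -0.03],
              [-0.01, -0.03, 0.40]])

moments, axes = np.linalg.eigh(I)  # principal moments (ascending), axes as columns

# Rotating into the principal-axis frame diagonalizes the tensor.
I_principal = axes.T @ I @ axes
assert np.allclose(I_principal, np.diag(moments))
```

The columns of `axes` form the rotation that aligns the link frame with the principal axes, which is exactly the frame choice the text recommends.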

Eigenvalues of Rotation Matrices

Rotation matrices have a distinctive eigenvalue signature. For a 2D rotation by angle $\theta$ (where $\theta \neq 0, \pi$):

$$R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

The characteristic equation gives:

$$\lambda^2 - 2\cos\theta \cdot \lambda + 1 = 0 \implies \lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$$

The eigenvalues are complex — which makes sense geometrically: a nontrivial rotation in 2D doesn’t preserve any real direction.

For a 3D rotation by angle $\theta$ around axis $\hat{k}$, the eigenvalues are:

$$\lambda_1 = 1, \quad \lambda_{2,3} = e^{\pm i\theta}$$

The eigenvector for $\lambda = 1$ is the rotation axis $\hat{k}$ itself — the one direction that the rotation leaves unchanged. This fact is the foundation of the axis-angle representation (covered in Lesson 3).

Key Insight

Every 3D rotation matrix has eigenvalue $\lambda = 1$. The corresponding eigenvector is the rotation axis. This is an elegant way to extract the axis from a rotation matrix.
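A sketch of that extraction, using a rotation about the z-axis so the expected answer is known (the construction itself works for any rotation matrix):

```python
import numpy as np

theta = 0.8  # any nontrivial rotation angle

# Rotation about the z-axis by theta.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

eigenvalues, eigenvectors = np.linalg.eig(R)

# Pick the eigenvalue closest to 1; the other two are e^{+-i theta}.
k = np.argmin(np.abs(eigenvalues - 1.0))
axis = np.real(eigenvectors[:, k])
axis /= np.linalg.norm(axis)

# The recovered axis is the z-axis (up to sign).
assert np.allclose(np.abs(axis), [0.0, 0.0, 1.0])
```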

Diagonalization

A matrix $A$ is diagonalizable if it has $n$ linearly independent eigenvectors. In that case:

$$A = P \Lambda P^{-1}$$

where $P = [\vec{v}_1 \mid \vec{v}_2 \mid \ldots \mid \vec{v}_n]$ has eigenvectors as columns, and $\Lambda = \text{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$.

This decomposition is powerful because functions of $A$ reduce to functions of the eigenvalues: for example, $A^k = P \Lambda^k P^{-1}$, so computing matrix powers only requires raising the diagonal entries of $\Lambda$ to the $k$-th power.
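A quick sketch of the payoff, reusing the 2×2 example matrix from earlier: raising the matrix to the 5th power directly agrees with raising only the eigenvalues.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])  # the 2x2 example from earlier

eigenvalues, P = np.linalg.eig(A)

k = 5
# A^k via repeated multiplication...
A_k_direct = np.linalg.matrix_power(A, k)
# ...and via the eigen-decomposition: A^k = P Lambda^k P^{-1}.
A_k_eigen = P @ np.diag(eigenvalues**k) @ np.linalg.inv(P)

assert np.allclose(A_k_direct, A_k_eigen)
```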

Robotics Applications

1. Stability of Linear Systems

A linear dynamical system $\dot{\mathbf{x}} = A\mathbf{x}$ has behavior determined entirely by the eigenvalues of $A$:

| Eigenvalue property | System behavior |
| --- | --- |
| All $\mathrm{Re}(\lambda_i) < 0$ | Stable — all states decay to zero |
| Any $\mathrm{Re}(\lambda_i) > 0$ | Unstable — at least one state grows unbounded |
| Some $\mathrm{Re}(\lambda_i) = 0$ (others $< 0$) | Marginally stable — oscillates without growing or decaying |

Robotics Application: Joint Controller Stability

A PD controller for a single robot joint gives the closed-loop dynamics:

$$\begin{bmatrix} \dot{e} \\ \ddot{e} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -k_p/m & -k_d/m \end{bmatrix} \begin{bmatrix} e \\ \dot{e} \end{bmatrix}$$

where $e$ is the position error, $k_p$ is the proportional gain, and $k_d$ is the derivative gain.

The characteristic equation is $\lambda^2 + (k_d/m)\lambda + k_p/m = 0$. For stability, both eigenvalues must have negative real parts. By the quadratic formula, this requires:

  • $k_p > 0$ (positive stiffness)
  • $k_d > 0$ (positive damping)

The eigenvalues also reveal the response character: real eigenvalues give an overdamped response (sluggish), complex eigenvalues give an underdamped response (oscillatory), and the boundary is critical damping at $k_d^2 = 4mk_p$.
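A sketch with illustrative numbers (the mass and gains below are made up): building the closed-loop matrix and inspecting its eigenvalues confirms both stability and the damping character.

```python
import numpy as np

# Illustrative values: unit mass, kp = 16, kd = 4.
m, kp, kd = 1.0, 16.0, 4.0

A = np.array([[0.0, 1.0],
              [-kp / m, -kd / m]])

eigenvalues = np.linalg.eigvals(A)

# Stable: every eigenvalue has a negative real part.
assert np.all(eigenvalues.real < 0)

# kd^2 = 16 < 4*m*kp = 64, so the eigenvalues form a complex-conjugate
# pair: an underdamped (oscillatory) response.
assert np.all(eigenvalues.imag != 0)
```

Raising `kd` to 8 would put the gains exactly at critical damping, collapsing the pair onto a repeated real eigenvalue.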

2. Principal Component Analysis (PCA)

Given a point cloud from a LiDAR sensor, the covariance matrix captures how the data is spread:

$$\Sigma = \frac{1}{n}\sum_{i=1}^{n} (\mathbf{p}_i - \bar{\mathbf{p}})(\mathbf{p}_i - \bar{\mathbf{p}})^T$$

The eigenvectors of $\Sigma$ point along the directions of maximum variance (the principal axes of the data cloud), and the eigenvalues measure how much variance there is along each direction.

Robotics Application: Surface Normal Estimation

A mobile robot’s LiDAR returns a cluster of 3D points near a surface. To estimate the surface normal:

  1. Compute the covariance matrix $\Sigma$ of the local neighborhood
  2. Find the eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$
  3. The eigenvector corresponding to $\lambda_3$ (the smallest eigenvalue) is the surface normal

Why? The smallest eigenvalue direction has the least spread — that’s the direction perpendicular to the surface where points vary the least.

If $\lambda_3 \approx 0$ and $\lambda_1, \lambda_2 \gg 0$: clearly a flat surface. If $\lambda_2 \approx \lambda_3 \approx 0$: the points lie along a line (edge). If $\lambda_1 \approx \lambda_2 \approx \lambda_3$: the points form a blob (corner or noise).
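The three-step recipe above, sketched on synthetic data: points spread widely in a plane with slight noise in z, so the recovered normal should be (close to) the z-axis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic planar patch: wide spread in x and y, tiny noise in z.
points = np.column_stack([rng.uniform(-1, 1, n),
                          rng.uniform(-1, 1, n),
                          rng.normal(0.0, 0.01, n)])

# Step 1: covariance of the neighborhood.
centered = points - points.mean(axis=0)
Sigma = centered.T @ centered / n

# Step 2: eigenvalues (eigh returns them in ascending order).
eigenvalues, eigenvectors = np.linalg.eigh(Sigma)

# Step 3: smallest-eigenvalue direction = surface normal.
normal = eigenvectors[:, 0]
assert abs(normal[2]) > 0.99  # essentially the z-axis, up to sign
```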

3. Vibration Analysis

The generalized eigenvalue problem for a mechanical system:

$$K\vec{v} = \omega^2 M\vec{v}$$

where $K$ is the stiffness matrix and $M$ is the mass matrix. The eigenvalues $\omega^2$ give the natural frequencies and the eigenvectors give the mode shapes — the patterns of vibration the system naturally exhibits.

Robotics Application: Robot Arm Resonance

A 2-DOF robot arm has mass matrix $M$ and stiffness matrix $K$. Solving the generalized eigenvalue problem yields two natural frequencies $\omega_1, \omega_2$ and their mode shapes.

If the controller commands motion at a frequency near $\omega_1$ or $\omega_2$, the arm resonates — vibrations amplify and can damage the system. Knowledge of these eigenvalues lets you:

  • Design controllers that avoid exciting resonant modes
  • Add damping targeted at problematic frequencies
  • Set trajectory acceleration limits to stay below resonance
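A sketch of the computation with made-up stiffness and mass values for a 2-DOF system, recasting $K\vec{v} = \omega^2 M\vec{v}$ as a standard eigenproblem on $M^{-1}K$:

```python
import numpy as np

# Illustrative 2-DOF system (made-up values).
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])  # stiffness, N/m
M = np.diag([2.0, 1.0])          # mass, kg

# Generalized problem K v = w^2 M v, recast as (M^-1 K) v = w^2 v.
w_squared, modes = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(w_squared)
w_squared, modes = w_squared[order].real, modes[:, order].real

frequencies = np.sqrt(w_squared)  # natural frequencies, rad/s

# Each column of `modes` is a mode shape; check K v = w^2 M v.
for w2, v in zip(w_squared, modes.T):
    assert np.allclose(K @ v, w2 * (M @ v))
```

For larger systems, a dedicated generalized symmetric solver (e.g. SciPy's `scipy.linalg.eigh(K, M)`) is the more numerically robust choice; the plain recast above is fine for a small illustration.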

4. Manipulability Ellipsoid

The matrix $JJ^T$ (where $J$ is the robot Jacobian) encodes the end-effector’s ability to move and exert forces in different directions. Its eigenvectors define the axes of the manipulability ellipsoid, and the square roots of its eigenvalues are the semi-axis lengths.

Yoshikawa's Manipulability Measure

A single scalar summarizing overall dexterity is $w = \sqrt{\det(JJ^T)} = \sqrt{\lambda_1 \lambda_2 \cdots \lambda_n}$. When $w = 0$, the robot is at a singularity. Since $w$ is proportional to the ellipsoid's volume, maximizing $w$ drives the robot away from singular configurations and toward ones with large overall motion capability.
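Both quantities fall out of a few lines of NumPy. A sketch with a hypothetical 2×2 Jacobian:

```python
import numpy as np

# Hypothetical Jacobian at some configuration.
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])

JJt = J @ J.T
eigenvalues, eigenvectors = np.linalg.eigh(JJt)

semi_axes = np.sqrt(eigenvalues)   # manipulability ellipsoid semi-axes
w = np.sqrt(np.linalg.det(JJt))    # Yoshikawa's measure

# w is the product of the semi-axis lengths...
assert np.isclose(w, np.prod(semi_axes))
# ...and for a square Jacobian it equals |det J|.
assert np.isclose(w, abs(np.linalg.det(J)))
```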

The Eigenvalue Decomposition Visually

Consider a symmetric 2×2 matrix acting on the unit circle. Because symmetric matrices have orthogonal eigenvectors and real eigenvalues, the eigenvectors define the axes of the resulting ellipse, and the absolute eigenvalues are the semi-axis lengths.

For non-symmetric matrices, the ellipse axes are instead determined by the singular vectors (from the SVD), and the semi-axis lengths are the singular values — a topic closely related to eigenvalues but beyond this lesson’s scope.
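Both statements can be sanity-checked numerically: for a symmetric matrix the singular values (the ellipse semi-axes) coincide with the absolute eigenvalues, which need not hold for a general matrix. A sketch with an arbitrary symmetric example:

```python
import numpy as np

# Arbitrary symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

eigenvalues = np.linalg.eigvalsh(A)                   # real eigenvalues
singular_values = np.linalg.svd(A, compute_uv=False)  # ellipse semi-axes

# Symmetric case: semi-axis lengths = |eigenvalues|.
assert np.allclose(sorted(np.abs(eigenvalues)), sorted(singular_values))
```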


Practice Problems

  1. Find the eigenvalues and eigenvectors of $A = \begin{bmatrix} 4 & 2 \\ 1 & 3 \end{bmatrix}$.

  2. A system matrix is $A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}$. Is the system stable?

  3. The covariance matrix of a 2D point cloud is $\Sigma = \begin{bmatrix} 5 & 2 \\ 2 & 2 \end{bmatrix}$. Find the principal directions and their variances.

  4. A 2D rotation matrix $R(60°)$ has eigenvalues $e^{\pm i\pi/3}$. Verify this by computing $\det(R - \lambda I) = 0$.

  5. A robot’s Jacobian at a particular configuration gives $JJ^T = \begin{bmatrix} 9 & 0 \\ 0 & 1 \end{bmatrix}$. Describe the manipulability ellipsoid. In which direction is the robot most capable?

Answers
  1. Characteristic equation: $(4-\lambda)(3-\lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda-5)(\lambda-2) = 0$. So $\lambda_1 = 5$, $\lambda_2 = 2$. For $\lambda_1 = 5$: $(A-5I)\vec{v}=\vec{0} \Rightarrow \vec{v}_1 = (2, 1)$. For $\lambda_2 = 2$: $(A-2I)\vec{v}=\vec{0} \Rightarrow \vec{v}_2 = (-1, 1)$.

  2. Characteristic equation: $\lambda^2 + 3\lambda + 2 = (\lambda+1)(\lambda+2) = 0$. Eigenvalues: $\lambda_1 = -1$, $\lambda_2 = -2$. Both are negative, so the system is stable.

  3. Characteristic equation: $(5-\lambda)(2-\lambda) - 4 = \lambda^2 - 7\lambda + 6 = (\lambda-6)(\lambda-1) = 0$. So $\lambda_1 = 6$ (direction of max variance), $\lambda_2 = 1$. For $\lambda_1 = 6$: $\vec{v}_1 = (2, 1)/\sqrt{5}$. For $\lambda_2 = 1$: $\vec{v}_2 = (-1, 2)/\sqrt{5}$. The data is most spread along the direction $(2,1)$.

  4. $R(60°) = \begin{bmatrix} 1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{bmatrix}$. $\det(R - \lambda I) = (1/2 - \lambda)^2 + 3/4 = \lambda^2 - \lambda + 1 = 0$, so $\lambda = \frac{1 \pm \sqrt{1-4}}{2} = \frac{1 \pm i\sqrt{3}}{2} = e^{\pm i\pi/3}$. Confirmed.

  5. Eigenvalues are 9 and 1, so the ellipsoid has semi-axis $\sqrt{9}=3$ along $x$ and $\sqrt{1}=1$ along $y$. The robot is 3x more capable in the x-direction than the y-direction; the ratio of semi-axes (the condition number of $J$) is 3.

Key Takeaways

  1. Eigenvectors are the directions a matrix preserves; eigenvalues are the scale factors
  2. The characteristic equation $\det(A - \lambda I) = 0$ yields the eigenvalues
  3. Symmetric matrices have real eigenvalues and orthogonal eigenvectors
  4. Eigenvalues of the system matrix determine stability (negative real parts = stable)
  5. Eigenvalues of covariance matrices reveal principal directions in sensor data
  6. The manipulability ellipsoid uses eigenanalysis to assess robot dexterity

Next Steps

Now that you can analyze the intrinsic properties of matrices, the next lesson covers coordinate frames — how to manage the multiple reference frames that every real robot must juggle.