
How to Re-Express a Vector in an Orthogonal Basis Using Dot Products

When working in two-dimensional space, we typically start by describing vectors in the “standard basis” (the familiar x-y coordinate system). But there are many situations—especially in linear algebra, data science, and geometry—where switching to a different basis can simplify calculations or reveal hidden structure. This blog post explains how to re-express a vector in a new, orthogonal basis by using the dot product (or “projection”).


1. Why Bother Changing Bases?

Vectors exist independently of the coordinate system we use to describe them. A vector is simply a quantity that points from the origin to some location in space. However, the numerical representation of that vector does depend on the axes (basis vectors) we choose.

  • Different viewpoints: Sometimes we pick a new basis that aligns with a particular direction of interest—like principal components in data analysis or “principal axes” in physics.
  • Simplified math: If our new basis is orthogonal, dot products become straightforward, and certain calculations (like projections or rotations) are much easier.

2. What Does Orthogonal Mean?

Two vectors $b_1$ and $b_2$ are orthogonal if their dot product is zero:

$$ b_1 \cdot b_2 \;=\; 0. $$

In 2D, “orthogonal” is another word for “perpendicular.” If we draw $b_1$ and $b_2$, they form a right angle. This right-angle relationship makes it possible to do coordinate transformations with simple dot-product formulas.
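
If you’d like to verify orthogonality numerically, here’s a quick NumPy check using the same vectors that appear in the worked example of Section 4:

```python
import numpy as np

b1 = np.array([-3.0, 1.0])
b2 = np.array([1.0, 3.0])

# Orthogonal means the dot product is exactly zero.
print(np.dot(b1, b2))  # 0.0
```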


3. The Projection Trick

Suppose you have a vector $v = \begin{bmatrix} v_x \\ v_y \end{bmatrix}$ in the standard basis. You want to rewrite $v$ in terms of a new orthogonal basis $\{b_1, b_2\}$. Mathematically, this means you want to find scalars $\alpha$ and $\beta$ such that:

$$ v \;=\; \alpha\,b_1 \;+\; \beta\,b_2. $$

If $b_1$ and $b_2$ are orthogonal, there’s a direct formula for $\alpha$ and $\beta$. Take the dot product of both sides with $b_1$. Because $b_1$ is orthogonal to $b_2$, the dot product $b_1 \cdot b_2$ vanishes:

$$ b_1 \cdot v \;=\; b_1 \cdot (\alpha\,b_1 + \beta\,b_2) \;=\; \alpha \, (b_1 \cdot b_1) \;+\; \beta \, (b_1 \cdot b_2). $$

Since $b_1 \cdot b_2 = 0$, we get:

$$ b_1 \cdot v \;=\; \alpha \, (b_1 \cdot b_1). $$

Thus,

$$ \alpha = \frac{b_1 \cdot v}{b_1 \cdot b_1}. $$

Similarly, by dotting both sides with $b_2$, we find:

$$ \beta = \frac{b_2 \cdot v}{b_2 \cdot b_2}. $$
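
To make the recipe concrete, here is a minimal NumPy sketch of the two formulas; the function name `orthogonal_coordinates` is my own label for illustration, not a standard API:

```python
import numpy as np

def orthogonal_coordinates(v, b1, b2):
    """Return the coordinates (alpha, beta) of v in the orthogonal basis {b1, b2}."""
    # The projection formula is only valid when b1 and b2 are orthogonal.
    if not np.isclose(np.dot(b1, b2), 0.0):
        raise ValueError("b1 and b2 must be orthogonal")
    alpha = np.dot(v, b1) / np.dot(b1, b1)  # component of v along b1
    beta = np.dot(v, b2) / np.dot(b2, b2)   # component of v along b2
    return alpha, beta
```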


4. A Step-by-Step Example

Let’s walk through a concrete example. Suppose:

$$ v = \begin{bmatrix} 2 \\ 2 \end{bmatrix}, \quad b_1 = \begin{bmatrix} -3 \\ 1 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 1 \\ 3 \end{bmatrix}. $$

  1. Check orthogonality:
    $$ b_1 \cdot b_2 = (-3)(1) + (1)(3) = -3 + 3 = 0. $$
    So $b_1$ and $b_2$ are indeed perpendicular.

  2. Compute $\alpha$:
    $$ v \cdot b_1 = (2)(-3) + (2)(1) = -6 + 2 = -4, \quad b_1 \cdot b_1 = (-3)^2 + 1^2 = 9 + 1 = 10. $$
    Therefore, $\alpha = \frac{-4}{10} = -\tfrac{2}{5}$.

  3. Compute $\beta$:
    $$ v \cdot b_2 = (2)(1) + (2)(3) = 2 + 6 = 8, \quad b_2 \cdot b_2 = 1^2 + 3^2 = 1 + 9 = 10. $$
    Hence, $\beta = \frac{8}{10} = \tfrac{4}{5}$.

  4. Final answer in the new basis:
    $$ v = -\tfrac{2}{5}\,b_1 + \tfrac{4}{5}\,b_2. $$
    The coordinates of $v$ in the $\{b_1, b_2\}$ basis are $\begin{bmatrix} -\tfrac{2}{5} \\[6pt] \tfrac{4}{5} \end{bmatrix}$.

  5. Check (optional): Reconstruct $v$ from $\alpha$ and $\beta$:
    $$ -\tfrac{2}{5}\begin{bmatrix} -3 \\ 1 \end{bmatrix} + \tfrac{4}{5}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} \tfrac{6}{5} \\[4pt] -\tfrac{2}{5} \end{bmatrix} + \begin{bmatrix} \tfrac{4}{5} \\[4pt] \tfrac{12}{5} \end{bmatrix} = \begin{bmatrix} \tfrac{10}{5} \\[4pt] \tfrac{10}{5} \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}. $$

Perfect!
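
If you’d rather check this on a computer, here is the same arithmetic in NumPy; it simply inlines the projection formulas from Section 3:

```python
import numpy as np

v = np.array([2.0, 2.0])
b1 = np.array([-3.0, 1.0])
b2 = np.array([1.0, 3.0])

alpha = np.dot(v, b1) / np.dot(b1, b1)  # -4/10 = -2/5
beta = np.dot(v, b2) / np.dot(b2, b2)   #  8/10 =  4/5
print(alpha, beta)                      # -0.4 0.8

# Reconstruct v to confirm the coordinates are correct.
print(alpha * b1 + beta * b2)           # [2. 2.]
```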


5. Common Pitfalls

  1. Forgetting to Divide by $b_i \cdot b_i$
    Computing $v \cdot b_i$ gives only the numerator. You must also divide by $b_i \cdot b_i$, the squared length of $b_i$.

  2. Sign Errors
    Keep track of negative signs carefully when multiplying components.

  3. Mixing Up $b_1$ and $b_2$
    If you label them in the wrong order or swap them, you’ll get the coordinates in the wrong order.

  4. Not Orthogonal
    The simple formula $\alpha = \frac{v \cdot b_1}{b_1 \cdot b_1}$ only works when the basis vectors are orthogonal. If $b_1$ and $b_2$ are not orthogonal, you need a more general approach: collect the basis vectors as the columns of a matrix $B$ and solve the linear system $B\,c = v$ for the coordinate vector $c$ (see the sketch below).
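
As a minimal sketch of that more general approach, assuming only that the basis vectors are linearly independent (the non-orthogonal `b2` below is hypothetical, chosen for illustration):

```python
import numpy as np

v = np.array([2.0, 2.0])
b1 = np.array([-3.0, 1.0])
b2 = np.array([1.0, 2.0])   # deliberately NOT orthogonal to b1

# Put the basis vectors in the columns of B and solve B @ c = v.
B = np.column_stack([b1, b2])
alpha, beta = np.linalg.solve(B, v)

print(np.allclose(alpha * b1 + beta * b2, v))  # True
```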


6. Conclusion

Re-expressing a vector in a new orthogonal basis is straightforward once you know the dot-product projection trick. Orthogonality ensures that the basis vectors don’t interfere with each other: every cross term $b_1 \cdot b_2$ vanishes, so each coordinate can be computed on its own. This technique underlies many core ideas in linear algebra, from finding coordinates in rotated axes to projecting data onto principal components.

Key takeaway:
$$ \boxed{ v = \alpha\,b_1 + \beta\,b_2 \quad\text{with}\quad \alpha = \frac{v \cdot b_1}{b_1 \cdot b_1}, \;\; \beta = \frac{v \cdot b_2}{b_2 \cdot b_2}. } $$

With a bit of careful dot-product arithmetic, you can move seamlessly between different orthogonal bases!


Further Reading:

  • Gilbert Strang’s Introduction to Linear Algebra for a deeper dive into projections and orthogonal transformations.
  • Principal Component Analysis (PCA) tutorials to see how changing to an orthogonal basis can simplify high-dimensional data.
  • Orthogonal transformations and their applications in graphics and robotics, where rotations and reflections are expressed in convenient coordinate systems.