## Cross Product Considered Harmful

#### Stefan Gössner

Department of Mechanical Engineering, University of Applied Sciences, Dortmund, Germany.

Keywords: Cross product; 2D vectors; planar vector equations; orthogonal operator; perp operator; perp dot product; polar vectors

### 1. Introduction

The cross product is frequently used in physics and engineering mechanics. However, the majority of problems in education and practice are planar by nature. Therefore, they involve only 2D vectors, whereas the cross product is defined only in 3-space $\\mathbb R^3$.

So, to be a little more specific here:

Cross product is considered harmful with vectors in $\\mathbb R^2$.

As this paper focuses solely on vectors in Euclidean 2-space, we need to discuss possible representations of the cross product in two dimensions.

A well-known alternative to vectors are complex numbers. While complex numbers are quite useful for solving planar problems, they cannot easily be generalized to three dimensions. In contrast, 2D vectors are just a particular case of vectors in $\\mathbb R^3$.

It will be shown below that coordinate-free vector algebra in $\\mathbb R^2$ can be done entirely without the cross product, which was introduced by J. Willard Gibbs in his note *Elements of Vector Analysis* around 1880 [8,10,18].

### 2. Cross Product Matrix in $\\mathbb R^3$

As we intend to eliminate the need for the cross product, a closer examination of it in $\\mathbb R^3$ is in order first.

$\\bold c = \\bold a \\times \\bold b = \\begin{pmatrix}a\_x \\\\ a\_y \\\\a\_z\\end{pmatrix} \\times \\begin{pmatrix}b\_x \\\\ b\_y \\\\b\_z\\end{pmatrix} = \\begin{pmatrix}a\_y b\_z - a\_z b\_y \\\\ a\_z b\_x - a\_x b\_z \\\\ a\_x b\_y - a\_y b\_x\\end{pmatrix}$(1)

Vector $\\bold c$ resulting from the cross product $\\bold a \\times \\bold b$ is directed normal to the plane determined by $\\bold a$ and $\\bold b$, so that $\\bold a$, $\\bold b$ and $\\bold c$ build a right-handed system (Fig.1). It is obvious that in 3-space we cannot uniquely define a vector $\\bold c$ orthogonal to a single vector $\\bold a$. We rather need two linearly independent vectors $\\bold a$ and $\\bold b$ to get a vector orthogonal to both.

The length of $\\bold c$ corresponds to the area of the parallelogram spanned by $\\bold a$ and $\\bold b$.

Interestingly, there is an alternative, purely algebraic approach to the cross product. Any vector $\\bold a$ in $\\mathbb R^3$ can be mapped to a skew-symmetric matrix $\\tilde\\bold a$ by partial differentiation of the cross product [1]

$\\tilde\\bold a = \\frac{\\partial(\\bold a \\times \\bold b)}{\\partial \\bold b} = \\begin{pmatrix} 0 & -a\_z & a\_y \\\\ a\_z & 0 & -a\_x \\\\ -a\_y & a\_x & 0 \\end{pmatrix}.$(2)

Now multiplying that matrix with vector $\\bold b$, i.e. $\\tilde \\bold a\\bold b$, yields the same result as the cross product $\\bold a \\times \\bold b$. For that reason this matrix is also termed cross product matrix. The tilde operator, which creates the cross product matrix from any 3D vector, is popular in kinematics, multibody dynamics and robotics [20,21] as well as in vector graphics [15,25].
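
The equivalence of the cross product and the cross product matrix can be spot-checked numerically. The following is a minimal Python sketch (function names are illustrative, not from the paper):

```python
# Sketch of (1) and (2) in plain Python; illustrative, not from the paper.

def cross(a, b):
    """Cross product a x b, componentwise as in (1)."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def tilde(a):
    """Skew-symmetric cross product matrix of a 3D vector, as in (2)."""
    return [[    0, -a[2],  a[1]],
            [ a[2],     0, -a[0]],
            [-a[1],  a[0],     0]]

def matvec(M, v):
    """Matrix-vector product."""
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert matvec(tilde(a), b) == cross(a, b)  # tilde(a) times b equals a x b
```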

### 3. Orthogonal Operator in $\\mathbb R^2$

So how might that cross product matrix help us with vector space $\\mathbb R^2$?

In contrast to 3-space, in 2-space we can find for any single vector its unique orthogonal companion, as it has to lie in the x/y-plane as well. One way to get the orthogonal vector $\\bold a\_\\bot$ to any vector $\\bold a$ is to utilize the cross product in combination with the unit vector $\\bold e\_z$ normal to the x/y-plane

$\\bold a\_\\bot = \\bold e\_z \\times \\bold a = \\begin{pmatrix}0 \\\\ 0 \\\\ 1\\end{pmatrix} \\times \\begin{pmatrix}a\_x \\\\ a\_y \\\\ 0 \\end{pmatrix} = \\begin{pmatrix}-a\_y \\\\ a\_x \\\\ 0\\end{pmatrix}\\,.$(3)

Chace [9] took advantage of this notation for generalizing the solution of planar vector equations in closed form. The cross product matrix in this case can be determined analogously to (2)

${\\bold E} = \\tilde {\\bold e}\_z = \\frac{\\partial(\\bold e\_z \\times \\bold a)}{\\partial \\bold a} = \\begin{pmatrix} 0 & -1 \\\\ 1 & 0 \\end{pmatrix}\\,,$(4)

which looks somewhat like a skew-symmetric counterpart of the 2D unit matrix $\\bold I$. Applying this matrix to vector $\\bold a$ yields its orthogonal vector. So $\\bold E$ is an orthogonal operator in $\\mathbb R^2$, for which $(\\bold E\\bold a)(\\bold E\\bold b) = \\bold a\\bold b$ holds, as is easily shown by direct calculation.

The orthogonal operator $\\bold E$, first introduced by Angeles [1-3,10], has interesting properties. Its transposed matrix is equal to its negative matrix $(\\bold E^T = -\\bold E)$ due to skew symmetry. Its determinant is $\\det\\bold E = 1$, so $\\bold E$ is not only an orthogonal matrix but also a rotation matrix, rotating any 2D vector by 90° counterclockwise. Multiplication by itself results in the negative unit matrix ($\\bold E^2 = -\\bold I$). The eigenvalues of $\\bold E$ are $\\pm i\\,$, so we have a direct relationship to the imaginary unit of complex algebra with $i^2 = -1$.

With introduction of an orthogonal operator, vectors in $\\mathbb R^2$ have been equipped with a complex structure [19].

The skew-symmetric nature of the orthogonal operator suggests denoting orthogonal 2D vectors by simply writing the ~ symbol over them, as in

${\\tilde\\bold a} = {\\bold E\\bold a}\\,.$

So the tilde can be viewed as an orthogonal operator by itself, and explicit use of the skew-symmetric matrix is not needed anymore.

Getting the orthogonal 2D vector of another in practice is easy. According to (3) the components simply have to be swapped while the new first component is negated.

${\\tilde\\bold a} = \\tilde{\\begin{pmatrix}a\_x \\\\ a\_y\\end{pmatrix}} = \\begin{pmatrix}-a\_y \\\\ a\_x\\end{pmatrix}\\,.$

Note that this is different from $\\mathbb R^3$, where $\\tilde\\bold a$ is a skew symmetric matrix. In $\\mathbb R^2$ it is rather the orthogonal vector to $\\bold a$. This, by the way, corresponds to the fact discussed above, that in order to create an orthogonal vector in $\\mathbb R^2$, a single vector is sufficient, whereas in $\\mathbb R^3$ two linearly independent vectors are needed.

The use of orthogonal operators, also called perp operators, is of course not new and is documented in [1-4,12,13,16,19,24,25,27]. Interestingly, the result of the dot product

$\\tilde\\bold a \\bold b = a\_x b\_y - a\_y b\_x$(5)

is identical to the third component of the cross product in (1), which, as a remarkable result, means:

The cross product in particular, as well as any explicit outer product in general, can be avoided in vector space $\\mathbb R^2$ by using an orthogonal operator in combination with the dot product instead.

The dot product $\\tilde\\bold a \\bold b$ is a suitable substitute for the cross product in 2-space. Its scalar value is inherited from the cross product and therefore geometrically equivalent to the signed parallelogram area spanned by vectors $\\bold a$ and $\\bold b$. This form of the dot product is commonly called perp dot product [16,27], sometimes skew product, or, due to the latter fact, area product [4].
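
The perp operator and perp dot product translate directly into code. A minimal Python sketch (names are illustrative, not from the paper):

```python
# Sketch of the perp operator and the perp dot product in R^2;
# illustrative code, not part of the paper.

def perp(a):
    """Orthogonal companion: swap components, negate the new first one."""
    return [-a[1], a[0]]

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

def perp_dot(a, b):
    """2D substitute for the cross product: signed parallelogram area."""
    return dot(perp(a), b)

a, b = [3.0, 1.0], [1.0, 2.0]
assert perp_dot(a, b) == a[0]*b[1] - a[1]*b[0]  # z-component of a x b
```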

Orthogonal vectors obey the following rules:

$\\begin{matrix} {\\tilde {\\tilde \\bold a}} & = & -\\bold a \\\\ {\\tilde \\bold a}{\\bold a} & = & 0 \\\\ {\\tilde \\bold a}{\\bold b} & = & -{\\bold a}{\\tilde \\bold b} & \\text{(antisymmetry)} \\\\ {\\tilde \\bold a}{\\tilde \\bold b} & = & {\\bold a}{\\bold b} \\end{matrix}$(6)

Apart from that, all other familiar vector rules, such as the commutative, associative and distributive laws, continue to apply.

Regarding the analogy with complex numbers, $\\bold a\\bold b$ corresponds to the real part and $\\tilde\\bold a\\bold b$ to the imaginary part of the complex product $a^\*b$, where $a^\*$ is the complex conjugate of $a$.
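
The rules (6) and the correspondence with the complex product can be spot-checked numerically, here with Python's built-in complex type (an illustrative sketch, not from the paper):

```python
# Numeric spot check of the rules (6) and of the correspondence with the
# complex product conj(a)*b; illustrative sketch, not from the paper.

def perp(v):   return (-v[1], v[0])
def dot(u, v): return u[0]*v[0] + u[1]*v[1]

a, b = (2.0, -1.0), (0.5, 3.0)

assert perp(perp(a)) == (-a[0], -a[1])        # ~~a = -a
assert dot(perp(a), a) == 0.0                 # ~a . a = 0
assert dot(perp(a), b) == -dot(a, perp(b))    # antisymmetry
assert dot(perp(a), perp(b)) == dot(a, b)     # ~a . ~b = a . b

# complex correspondence: conj(a)*b has real part a.b, imaginary part ~a.b
za, zb = complex(*a), complex(*b)
prod = za.conjugate() * zb
assert prod.real == dot(a, b) and prod.imag == dot(perp(a), b)
```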

### 4. Vector Equations in $\\mathbb R^2$

Vector equations can be treated much like algebraic equations: they may be added, subtracted and squared, and they can be multiplied by a scalar quantity. Multiplying a vector equation by a vector results in a scalar equation; multiplying that by a vector again yields a vector equation, and so on, alternating. Additionally, the orthogonal operator can be applied to planar vector equations.

Any vector $\\bold a$ and its orthogonal companion $\\tilde\\bold a$ together form an orthogonal basis. Any other vector may then be expressed as a linear combination of these two

$\\bold b = \\lambda\\bold a + \\mu\\tilde\\bold a\\,.$(7)

In order to resolve equation (7) for $\\bold a$ the perp operator is applied first

$\\tilde\\bold b = \\lambda\\tilde\\bold a - \\mu\\bold a \\,.$(8)

Equation (7) is multiplied by $\\lambda$ and equation (8) by $\\mu$; then the latter is subtracted from the former

$\\lambda\\bold b - \\mu\\tilde\\bold b = (\\lambda^2 + \\mu^2)\\bold a$

which yields the desired result

$\\bold a = \\dfrac{\\lambda\\bold b - \\mu\\tilde\\bold b}{\\lambda^2 + \\mu^2} \\,.$(9)

Equation (7) corresponds to the complex product $ab$ and therefore represents a similarity transformation. Equation (9) is equivalent to complex division. Both relations can be shown by direct calculation in coordinate representation.
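
The round trip (7) to (9) can be verified numerically, as in the following Python sketch with hypothetical helper names:

```python
import math

# Build b from a via (7), then recover a via (9); illustrative sketch.

def perp(v): return (-v[1], v[0])

a, lam, mu = (3.0, 2.0), 2.0, 0.5

# (7): b = lam*a + mu*~a
b = (lam*a[0] + mu*perp(a)[0], lam*a[1] + mu*perp(a)[1])

# (9): a = (lam*b - mu*~b) / (lam^2 + mu^2)
den = lam*lam + mu*mu
a_rec = ((lam*b[0] - mu*perp(b)[0]) / den,
         (lam*b[1] - mu*perp(b)[1]) / den)

assert math.isclose(a_rec[0], a[0]) and math.isclose(a_rec[1], a[1])
```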

### 5. Identities

Any vector $\\bold c$ may also be written as a linear combination of two other non-collinear vectors $\\bold a$ and $\\bold b$.

$\\bold c = \\lambda\\bold a + \\mu\\bold b$(10)

The scalar coefficients $\\lambda$ and $\\mu$ for given vectors $\\bold a$, $\\bold b$ and $\\bold c$ can be resolved by multiplying with $\\tilde\\bold b$ and $\\tilde\\bold a$ respectively, which elegantly eliminates the other term in each case.

$\\lambda = -\\dfrac{\\tilde\\bold b \\bold c}{\\tilde\\bold a \\bold b} \\quad and \\quad \\mu = -\\dfrac{\\tilde\\bold c \\bold a}{\\tilde\\bold a \\bold b}$

Reinserting these results into (10) and multiplying by the common denominator $\\tilde\\bold a\\bold b$ yields

$\\bold a(\\tilde\\bold b\\bold c) + \\bold b(\\tilde\\bold c\\bold a) + \\bold c(\\tilde\\bold a\\bold b) = \\bold 0\\,,$

which finally results, after applying the perp operator to that equation, in the Jacobi identity

$\\tilde\\bold a(\\tilde\\bold b\\bold c) + \\tilde\\bold b(\\tilde\\bold c\\bold a) + \\tilde\\bold c(\\tilde\\bold a\\bold b) = \\bold 0\\,.$(11)

Vector space $\\mathbb R^2$ thus obeys the antisymmetry of its outer product (6) as well as the Jacobi identity (11), and thereby complies with the requirements of a Lie algebra [22].

We might proceed to find further identities. Expressing the first term in (11) as a linear combination of $\\bold b$ and $\\bold c$, i.e.

$\\tilde\\bold a(\\tilde\\bold b\\bold c) = \\kappa\\bold b + \\nu\\bold c$

and subsequently multiplying by $\\tilde\\bold c$ and $\\tilde\\bold b$ respectively yields the scalar coefficients $\\kappa=-\\bold a\\bold c$ and $\\nu = \\bold a\\bold b$. Reusing these coefficients in the original equation then leads to the Grassmann identity.

$\\tilde\\bold a(\\tilde\\bold b\\bold c) = \\bold c(\\bold a\\bold b) - \\bold b(\\bold a\\bold c)$(12)

Cyclic permutation of the vectors in (12) with $\\bold a\\bold b\\bold c \\mapsto \\bold c\\bold a\\bold b$ and multiplication by vector $\\bold d$ leads to the Binet-Cauchy identity.

$(\\tilde\\bold a \\bold b)(\\tilde\\bold c\\bold d) = (\\bold a \\bold c)(\\bold b\\bold d) - (\\bold a\\bold d)(\\bold b\\bold c)$(13)

For the special case $\\bold c = \\bold a$ and $\\bold d = \\bold b$ equation (13) becomes Lagrange's Identity.

$(\\bold a \\bold b)^2 + (\\tilde\\bold a\\bold b)^2 = a^2 b^2$(14)
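
All four identities can be spot-checked numerically. A Python sketch (illustrative helper names, not from the paper):

```python
# Numeric spot check of the Jacobi (11), Grassmann (12), Binet-Cauchy (13)
# and Lagrange (14) identities; illustrative sketch, not from the paper.

def perp(v):   return (-v[1], v[0])
def dot(u, v): return u[0]*v[0] + u[1]*v[1]
def axpy(s, v, w=(0.0, 0.0)):           # s*v + w
    return (s*v[0] + w[0], s*v[1] + w[1])

a, b, c, d = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0), (-2.0, 1.0)

# Jacobi identity (11)
jac = axpy(dot(perp(b), c), perp(a),
      axpy(dot(perp(c), a), perp(b),
      axpy(dot(perp(a), b), perp(c))))
assert jac == (0.0, 0.0)

# Grassmann identity (12)
lhs = axpy(dot(perp(b), c), perp(a))
rhs = axpy(dot(a, b), c, axpy(-dot(a, c), b))
assert lhs == rhs

# Binet-Cauchy identity (13)
assert dot(perp(a), b)*dot(perp(c), d) == dot(a, c)*dot(b, d) - dot(a, d)*dot(b, c)

# Lagrange's identity (14)
assert dot(a, b)**2 + dot(perp(a), b)**2 == dot(a, a)*dot(b, b)
```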

### 6. Polar Vector Representation

Any vector can be decomposed into its length and its direction, the latter represented by its unit vector.

$\\bold a = a \\bold e\_\\alpha \\quad with \\quad \\bold e\_\\alpha = \\begin{pmatrix}\\cos\\alpha \\\\ \\sin\\alpha\\end{pmatrix}\\,.$

This is the polar representation of the vector in 2-space, where $\\alpha$ is the angle from the positive x-axis to the vector. It corresponds to the complex polar notation $a e^{i\\alpha}$.

This notation is valuable in geometry and kinematics, as it cleanly separates lengths and orientations, which may then individually be treated as knowns or unknowns.

Examining the dot product of two vectors

$\\bold a \\bold b = a b \\,\\bold e\_{\\alpha}\\bold e\_{\\beta} = ab\\,(\\cos\\alpha\\cos\\beta + \\sin\\alpha\\sin\\beta)$

and their perp dot product

$\\tilde\\bold a \\bold b = a b \\,{\\tilde\\bold e}\_{\\alpha}\\bold e\_{\\beta} = ab\\,(\\cos\\alpha\\sin\\beta - \\sin\\alpha\\cos\\beta)\\,,$

we get trigonometric expressions for the angle $\\varphi=\\beta-\\alpha$ from vector $\\bold a$ to vector $\\bold b$ after applying the addition theorems of trigonometry

$\\cos\\varphi = \\dfrac{\\bold a \\bold b}{ab}\\,,\\quad\\sin\\varphi = \\dfrac{\\tilde\\bold a \\bold b}{ab}\\,,\\quad\\tan\\varphi = \\dfrac{\\tilde\\bold a \\bold b}{\\bold a \\bold b}.$(15)

Rotating vector $\\bold a$ into vector $\\bold b$ by angle $\\varphi$ is achieved via

$\\bold b = \\cos\\varphi\\,\\bold a + \\sin\\varphi\\,\\tilde\\bold a$(16)

which, as a rotation, is a special case of the similarity transformation (7).
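
Both directions, rotating via (16) and recovering the signed angle via (15), can be sketched in Python (illustrative, not from the paper; `atan2` of perp dot and dot is an assumption of this sketch, not a formula from the text):

```python
import math

# Angle from a to b via (15) and rotation of a by phi via (16);
# illustrative sketch, not from the paper.

def perp(v):   return (-v[1], v[0])
def dot(u, v): return u[0]*v[0] + u[1]*v[1]

a, phi = (2.0, 1.0), 0.7

# (16): rotate a by phi
b = (math.cos(phi)*a[0] + math.sin(phi)*perp(a)[0],
     math.cos(phi)*a[1] + math.sin(phi)*perp(a)[1])

# (15): recover the signed angle from perp dot and dot product
phi_rec = math.atan2(dot(perp(a), b), dot(a, b))
assert math.isclose(phi_rec, phi)
assert math.isclose(math.hypot(*b), math.hypot(*a))  # rotation keeps length
```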

### 7. Time Dependent Vectors

In kinematics and multibody dynamics we need to deal with time-dependent vectors. Here again the polar representation is quite useful, since the length and/or orientation of a vector may vary with time.

Differentiating the direction vector ${\\bold e}\_\\alpha$ with respect to time gives

$\\dot{\\bold e}\_\\alpha = \\frac{d}{dt}\\begin{pmatrix}\\cos\\alpha \\\\ \\sin\\alpha\\end{pmatrix} = \\dot\\alpha\\begin{pmatrix}-\\sin\\alpha \\\\ \\cos\\alpha\\end{pmatrix} = \\dot\\alpha\\,{\\tilde\\bold e}\_\\alpha\\,.$(17)

So the velocity of the time-dependent vector $\\bold a$ is

$\\dot{\\bold a} = \\frac{d}{dt}(a\\,{\\bold e}\_\\alpha) = \\dot a\\,{\\bold e}\_\\alpha + \\dot\\alpha\\, a\\,{\\tilde\\bold e}\_\\alpha\\,,$(18)

where the first summand gives the translational velocity in the vector's direction and the second the circumferential velocity of the vector rotating with angular velocity $\\dot\\alpha$.

Further differentiation leads us to the accelerations

$\\ddot{\\bold a} = \\frac{d^{\\,2}}{dt^{\\,2}}(a\\,{\\bold e}\_\\alpha) = (\\ddot a - \\dot\\alpha^2 a)\\,{\\bold e}\_\\alpha + (\\ddot\\alpha\\,a + 2\\,\\dot\\alpha\\,\\dot a)\\,{\\tilde\\bold e}\_\\alpha\\,.$(19)

Here again the first summand represents the translational/radial component, whereas the second one is the circumferential part, including the Coriolis term $2\\,\\dot\\alpha\\,\\dot a\\,{\\tilde\\bold e}\_\\alpha$.

Both the velocities (18) and accelerations (19) are variations of the similarity transformation (7) with respect to the orthonormal basis built by ${\\bold e}\_\\alpha$ and ${\\tilde\\bold e}\_\\alpha$.
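
The velocity formula (18) can be checked against a numerical derivative. The following Python sketch uses a hypothetical time law $a(t) = 1 + t^2$, $\alpha(t) = 0.5\,t$ chosen only for illustration:

```python
import math

# Finite-difference check of the velocity formula (18) for a hypothetical
# time law a(t) = 1 + t^2, alpha(t) = 0.5*t; sketch, not from the paper.

def vec(t):
    """a(t) * e_alpha(t)"""
    return ((1 + t*t)*math.cos(0.5*t), (1 + t*t)*math.sin(0.5*t))

def vel(t):
    """(18): adot*e_alpha + alphadot*a*~e_alpha"""
    a, adot, al, aldot = 1 + t*t, 2*t, 0.5*t, 0.5
    e  = (math.cos(al), math.sin(al))
    ep = (-e[1], e[0])                       # ~e_alpha
    return (adot*e[0] + aldot*a*ep[0],
            adot*e[1] + aldot*a*ep[1])

t, h = 1.0, 1e-6
num = ((vec(t+h)[0] - vec(t-h)[0])/(2*h),    # central difference
       (vec(t+h)[1] - vec(t-h)[1])/(2*h))
assert math.isclose(num[0], vel(t)[0], rel_tol=1e-6)
assert math.isclose(num[1], vel(t)[1], rel_tol=1e-6)
```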

### 8. Conclusion

Abandoning the cross product for planar vectors and simultaneously introducing an orthogonal operator is a beneficial approach. That small addition to the vector operations not only makes an explicit outer product obsolete, but also shows that vector space $\\mathbb R^2$

- is equipped with a complex structure,
- becomes a Lie algebra.

Decomposing a vector $\\bold a = a\\,\\bold e\_\\varphi$ into a scalar part and a unit vector component, representing length and orientation, is particularly convenient for solving geometric and mechanical problems in a coordinate-free manner, while considerably reducing the number of trigonometric functions involved.

The consistent use of the orthogonal operator results in a significant improvement for doing vector algebra in $\\mathbb R^2$. This approach has proven successful in engineering education and practice, as well as in computer graphics.

### 9. References

[1] J. Angeles, The role of the rotation matrix in the teaching of planar kinematics, Mechanism and Machine Theory, 2015.
[2] J. Angeles, R. Sinatra, A novel approach to the teaching of planar mechanism dynamics - a case study, Mechanism and Machine Theory, 2015.
[3] J. Angeles. Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms. Springer, 2007.
[4] B.B. Bantchev, Calculating with Vectors in Plane Geometry. Mathematics and Education in Mathematics, 2008. Proc. 37th Spring Conf. of the Union of Bulgarian Mathematicians, April 2008, pp.261-267.
[5] O. Bottema, B. Roth, Theoretical Kinematics, Dover, 1979
[6] R.G. Calvet, Treatise of plane geometry through geometric algebra. Eigenverlag, 2007.
[7] J.M. McCarthy et al., Geometric Design of Linkages, Springer, 2010
[8] M.J. Crowe, A History of Vector Analysis, Notre Dame, Indiana, 1967
[9] M. Chace, Vector analysis of linkages, ASME J. Eng. Ind., 1963.
[10] H.R.M. Daniali, Planar Vector Equations in Engineering, Tempus Pub., 2006
[11] J.W. Gibbs, Elements of Vector Analysis, New Haven, 1884
[12] S. Gössner, Analysis of Mechanisms in Vector Space R2, IFToMM D-A-CH conference, Innsbruck, Austria, 2016.
[13] S. Gössner, Mechanismentechnik – Vektorielle Analyse ebener Mechanismen, Logos, Berlin, 2016
[14] R.S. Hartenberg, J. Denavit, Kinematic Synthesis of Linkages, McGraw-Hill, 1964
[15] C. Hecker, Physics, Part 4: The Third Dimension, 2007.
[16] F.S. Hill Jr., The Pleasures of 'Perp Dot' Products, Graphics Gems IV, Academic Press, pp. 138-148, 1994.
[17] M. Husty et al., Kinematik und Robotik. Springer, 1997
[18] P. Lynch, Matthew O'Brien: An Inventor of Vector Analysis, Irish Math. Soc. Bulletin, 2014
[19] D. Mathews, Complex vector spaces, duals, and duels, 2007.
[20] P.E. Nikravesh, Computer-Aided Analysis of Mechanical Systems. Prentice-Hall, New Jersey, 1988
[21] G. Orzechowski et al., Inertia forces and shape integrals in the floating frame of reference formulation, Springer, 2017.
[22] H. Samelson, Notes on Lie Algebras, 1989.
[23] J.J. Uicker et al., Theory of Machines and Mechanisms. Oxford Press, 2011
[24] VDI-Richtlinie 2120, Vektorrechnung – Grundlagen für die praktische Anwendung, Beuth Berlin, 2005.
[25] J. Vince, Vector Analysis for Computer Graphics, Springer, London, 2007
[26] O. Vinogradov, Fundamentals of Kinematics and Dynamics of Machines and Mechanisms. CRC Press, London, 2000
[27] Wolfram MathWorld, Perp Dot Product.