## Cross Product Considered Harmful

#### Stefan Gössner

*Department of Mechanical Engineering, University of Applied Sciences, Dortmund, Germany.*

**Keywords**: Cross product; 2D vectors; planar vector equations; orthogonal operator; perp operator; perp dot product; polar vectors

### 1. Introduction

The cross product is frequently used in Physics and Engineering Mechanics. However, the majority of problems in education and practice are planar by nature. They therefore involve only 2D vectors, while the cross product is defined in 3-space only.

So, to be a little more specific here:

The cross product is considered harmful with vectors in $\mathbb R^2$.

As this paper focuses solely on vectors in Euclidean 2-space, we need to discuss possible representations of the cross product in two dimensions.

A well known alternative to vectors are *complex numbers*. While complex numbers are quite useful for solving planar problems, they cannot be easily generalized to three dimensions. 2D vectors, on the contrary, are just a particular case of vectors in $\mathbb R^3$.

It will be shown below that coordinate-free vector algebra, as established with the *Elements of Vector Analysis* around 1880 [8,10,18], handles planar problems perfectly well without a cross product.

### 2. Cross Product Matrix in $\mathbb R^3$

As we intend to eliminate the need for the cross product, a closer examination of it in $\mathbb R^3$ is worthwhile first.

Vector $\mathbf c = \mathbf a\times\mathbf b$ as the result of the cross product of two vectors is orthogonal to both of them and reads in components

$$\mathbf a\times\mathbf b = \begin{pmatrix}a_y b_z - a_z b_y\\ a_z b_x - a_x b_z\\ a_x b_y - a_y b_x\end{pmatrix}. \tag{1}$$

The length of $\mathbf a\times\mathbf b$ equals the area of the parallelogram spanned by $\mathbf a$ and $\mathbf b$, i.e. $|\mathbf a\times\mathbf b| = |\mathbf a|\,|\mathbf b|\sin\varphi$ with $\varphi$ as the enclosed angle.

Interestingly there is an alternative, purely algebraic approach to the cross product. Any vector $\mathbf a$ can be mapped to a skew-symmetric matrix

$$\tilde{\mathbf a} = \begin{pmatrix}0 & -a_z & a_y\\ a_z & 0 & -a_x\\ -a_y & a_x & 0\end{pmatrix}. \tag{2}$$

Now multiplying that matrix with vector $\mathbf b$ reproduces the cross product, i.e. $\tilde{\mathbf a}\,\mathbf b = \mathbf a\times\mathbf b$, which is why $\tilde{\mathbf a}$ is called the *cross product matrix*. The *tilde operator*, which creates the *cross product matrix* from any 3D vector, is popular in kinematics, multi body dynamics and robotics [20,21] as well as in vector graphics [15,25].
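The equivalence of matrix product and cross product is easy to verify numerically. A minimal NumPy sketch (the function name `tilde3` is mine, not from the paper):

```python
import numpy as np

def tilde3(a):
    """Cross product matrix (tilde operator) of a 3D vector a."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# multiplying the cross product matrix with b reproduces a x b
print(tilde3(a) @ b)          # [-3.  6. -3.]
print(np.cross(a, b))         # [-3.  6. -3.]
```

Note that `tilde3(a)` is skew-symmetric, i.e. equal to the negative of its transpose.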

### 3. Orthogonal Operator in $\mathbb R^2$

So how might that cross product matrix help us with vector space $\mathbb R^2$?

In contrast to 3-space, we can find for any single vector in 2-space its unique orthogonal companion, as it has to lie in the x/y-plane also. One way to get the orthogonal vector is to treat the planar vector as a 3D vector with vanishing z-component and apply the tilde operator to it.

Chace [9] took advantage of this notation for generalizing the solution of planar vector equations in closed form. The cross product matrix in this case can be determined analogously to (2) as

$$\tilde{\mathbf a} = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}\begin{pmatrix}a_x\\ a_y\end{pmatrix} = \begin{pmatrix}-a_y\\ a_x\end{pmatrix}, \tag{3}$$

which looks somewhat like a skew-symmetric version of the 2D unit matrix. We therefore call the matrix in (3) the *Orthogonal Operator* in $\mathbb R^2$.

The orthogonal operator rotates a vector counterclockwise by 90°, and applying it twice returns the negated original vector.

With the introduction of an orthogonal operator, vectors in $\mathbb R^2$ have been equipped with a *complex structure* [19].

The skew-symmetric nature of the orthogonal operator is taken as a motivation to denote orthogonal 2D vectors by simply writing the tilde symbol over them from now on, as in $\tilde{\mathbf a}$.

So the tilde can be viewed as an orthogonal operator by itself, and explicit use of the skew-symmetric matrix isn't needed anymore.

Getting the orthogonal 2D vector from another in practice is easy. According to (3) the components simply have to be exchanged while the new first component is negated.

Note that this is different from negating the new *second* component instead, which yields $-\tilde{\mathbf a}$, the clockwise orthogonal vector.
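In code the tilde operator amounts to a one-liner. A plain-Python sketch (the function name `perp` is an assumption of mine):

```python
def perp(a):
    """Counterclockwise orthogonal vector: exchange components, negate the new first one."""
    x, y = a
    return (-y, x)

print(perp((3, 4)))        # (-4, 3)
print(perp(perp((3, 4))))  # (-3, -4), applying the tilde twice negates the vector
```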

Use of orthogonal operators, also called the *perp operator*, isn't new of course and is documented in [1-4,12,13,16,19,24,25,27]. Interestingly, the result of the dot product

$$\tilde{\mathbf a}\cdot\mathbf b = a_x b_y - a_y b_x \tag{4}$$

is identical to the third component of the cross product in (1), which, as a remarkable result, means:

The cross product in particular, as well as any explicit outer product in general, can be avoided in vector space $\mathbb R^2$ by using an *orthogonal operator* in combination with the dot product instead.

The dot product $\tilde{\mathbf a}\cdot\mathbf b$ is called the *perp dot product* [16,27], sometimes the *skew product* or, since it measures the signed parallelogram area, the *area product* [4].
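The identity with the third cross product component can be checked numerically. A small NumPy sketch (the helper name `perp_dot` is mine):

```python
import numpy as np

def perp_dot(a, b):
    """Perp dot product: perp(a) . b = a_x*b_y - a_y*b_x."""
    return a[0]*b[1] - a[1]*b[0]

a = np.array([3.0, 4.0])
b = np.array([2.0, 1.0])

# identical to the z-component of the 3D cross product of the embedded vectors
a3 = np.append(a, 0.0)
b3 = np.append(b, 0.0)
print(perp_dot(a, b))        # -5.0
print(np.cross(a3, b3)[2])   # -5.0
```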

Orthogonal vectors obey the following rules:

$$\tilde{\tilde{\mathbf a}} = -\mathbf a\,,\qquad \tilde{\mathbf a}\cdot\mathbf a = 0\,,\qquad \tilde{\mathbf a}\cdot\tilde{\mathbf b} = \mathbf a\cdot\mathbf b\,, \tag{5}$$

$$\tilde{\mathbf a}\cdot\mathbf b = -\mathbf a\cdot\tilde{\mathbf b}\,. \tag{6}$$

Apart from that, all other known vector rules, such as the *commutative*, *associative* and *distributive* laws, continue to apply.

Regarding the analogy with complex numbers, applying the tilde operator corresponds to multiplication by the imaginary unit, as $i\,(a_x + i\,a_y) = -a_y + i\,a_x$.

### 4. Vector Equations in $\mathbb R^2$

Vector equations can be treated much like algebraic equations: they may be added, subtracted and squared, and they can be multiplied by a scalar quantity. Dot multiplication of a vector equation with a vector results in a scalar equation; multiplying a scalar equation with a vector yields a vector equation again, and so on, alternating. Additionally, the orthogonal operator can be applied to planar vector equations.

Any vector $\mathbf c$ can be expressed as a linear combination of another vector $\mathbf a$ and its orthogonal companion $\tilde{\mathbf a}$:

$$\mathbf c = \lambda\,\mathbf a + \mu\,\tilde{\mathbf a}\,. \tag{7}$$

In order to resolve equation (7) for the unknown coefficients $\lambda$ and $\mu$,

equation (7) is multiplied by $\mathbf a$ and $\tilde{\mathbf a}$ in turn,

$$\mathbf a\cdot\mathbf c = \lambda\,\mathbf a^2\,,\qquad \tilde{\mathbf a}\cdot\mathbf c = \mu\,\mathbf a^2\,, \tag{8}$$

which yields the desired result

$$\lambda = \frac{\mathbf a\cdot\mathbf c}{\mathbf a^2}\,,\qquad \mu = \frac{\tilde{\mathbf a}\cdot\mathbf c}{\mathbf a^2}\,. \tag{9}$$

Equation (7) corresponds to the complex product and is known as a *similarity transformation*. Equation (9) is equivalent to the complex division. Both relations can be shown by direct calculation in coordinate representation.
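The coefficient formulas translate directly to a small "complex division" helper. A NumPy sketch under my own naming (`perp`, `decompose`), not the paper's code:

```python
import numpy as np

def perp(a):
    """Counterclockwise orthogonal vector."""
    return np.array([-a[1], a[0]])

def decompose(a, c):
    """Coefficients (lam, mu) with c = lam*a + mu*perp(a)."""
    a2 = a @ a
    return (a @ c) / a2, (perp(a) @ c) / a2

a = np.array([2.0, 1.0])
c = np.array([1.0, 3.0])
lam, mu = decompose(a, c)
print(lam, mu)                               # 1.0 1.0
print(np.allclose(lam*a + mu*perp(a), c))    # True
```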

### 5. Identities

Any vector $\mathbf c$ can also be expressed as a linear combination of two non-parallel vectors $\mathbf a$ and $\mathbf b$:

$$\mathbf c = \lambda\,\mathbf a + \mu\,\mathbf b\,. \tag{10}$$

The scalar coefficients $\lambda$ and $\mu$ are obtained by multiplying (10) with $\tilde{\mathbf b}$ and $\tilde{\mathbf a}$ respectively, i.e. $\lambda = \dfrac{\tilde{\mathbf b}\cdot\mathbf c}{\tilde{\mathbf b}\cdot\mathbf a}$ and $\mu = \dfrac{\tilde{\mathbf a}\cdot\mathbf c}{\tilde{\mathbf a}\cdot\mathbf b}$.

Reintroducing these results to (10) and multiplying by the common denominator $\tilde{\mathbf a}\cdot\mathbf b = -\tilde{\mathbf b}\cdot\mathbf a$

finally results, after applying the perp operator to that equation, in the *Jacobi Identity*

$$\tilde{\mathbf a}\,(\tilde{\mathbf b}\cdot\mathbf c) + \tilde{\mathbf b}\,(\tilde{\mathbf c}\cdot\mathbf a) + \tilde{\mathbf c}\,(\tilde{\mathbf a}\cdot\mathbf b) = \mathbf 0\,. \tag{11}$$

Vector space $\mathbb R^2$ obeys the antisymmetry of its outer product (6) as well as the Jacobi identity (11). By that it complies with the requirements of being a *Lie Algebra* [22].

We might proceed in finding identities. An attempt to express the first term in (11) as a linear combination of $\tilde{\mathbf b}$ and $\tilde{\mathbf c}$, i.e. rearranging (11) by means of the antisymmetry (6), yields

$$\tilde{\mathbf a}\,(\tilde{\mathbf b}\cdot\mathbf c) = \tilde{\mathbf b}\,(\tilde{\mathbf a}\cdot\mathbf c) - \tilde{\mathbf c}\,(\tilde{\mathbf a}\cdot\mathbf b)\,, \tag{12}$$

the planar counterpart of the *Grassmann Identity* then.

Cyclic commutation of vectors in (12) with subsequent dot multiplication by a fourth vector $\mathbf d$ leads to the *Binet-Cauchy Identity*

$$(\tilde{\mathbf a}\cdot\mathbf b)(\tilde{\mathbf c}\cdot\mathbf d) = (\mathbf a\cdot\mathbf c)(\mathbf b\cdot\mathbf d) - (\mathbf a\cdot\mathbf d)(\mathbf b\cdot\mathbf c)\,. \tag{13}$$

For the special case $\mathbf c = \mathbf a$ and $\mathbf d = \mathbf b$ we obtain *Lagrange's Identity*

$$(\tilde{\mathbf a}\cdot\mathbf b)^2 = \mathbf a^2\,\mathbf b^2 - (\mathbf a\cdot\mathbf b)^2\,. \tag{14}$$
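All of these identities are easy to validate with random planar vectors. A NumPy sketch (assuming a helper `perp` for the tilde operator):

```python
import numpy as np
rng = np.random.default_rng(0)

def perp(v):
    """Counterclockwise orthogonal vector."""
    return np.array([-v[1], v[0]])

a, b, c, d = rng.random((4, 2))

# Jacobi identity: the three perp-weighted terms sum to the zero vector
jacobi = perp(a)*(perp(b) @ c) + perp(b)*(perp(c) @ a) + perp(c)*(perp(a) @ b)
print(np.allclose(jacobi, 0))   # True

# Binet-Cauchy identity
lhs = (perp(a) @ b) * (perp(c) @ d)
rhs = (a @ c)*(b @ d) - (a @ d)*(b @ c)
print(np.isclose(lhs, rhs))     # True
```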

### 6. Polar Vector Representation

Any vector can be decomposed into its *length* and its *direction*, represented by its unit vector:

$$\mathbf a = a\,\mathbf e_a \quad\text{with}\quad \mathbf e_a = \begin{pmatrix}\cos\varphi\\ \sin\varphi\end{pmatrix}\,. \tag{15}$$

This is the *polar representation* of the vector in 2-space, with $a = |\mathbf a|$ and $\varphi$ as the angle from the positive *x*-axis to that vector. It corresponds to the complex polar notation $a\,e^{i\varphi}$.

This notation is valuable in geometry and kinematics, as it cleanly separates lengths and orientations, which may then be assigned individual knowns and unknowns.

Examining the dot product of two vectors $\mathbf a = a\,\mathbf e_a$ and $\mathbf b = b\,\mathbf e_b$

$$\mathbf a\cdot\mathbf b = a\,b\,\cos(\varphi_b - \varphi_a) \tag{16}$$

and their perp dot product

$$\tilde{\mathbf a}\cdot\mathbf b = a\,b\,\sin(\varphi_b - \varphi_a)\,, \tag{17}$$

we get trigonometric expressions for the angle between them, i.e. $\tan(\varphi_b - \varphi_a) = \dfrac{\tilde{\mathbf a}\cdot\mathbf b}{\mathbf a\cdot\mathbf b}$.

Rotating vector $\mathbf a$ by an angle $\Delta\varphi$ is achieved by

$$\mathbf b = \cos\Delta\varphi\,\mathbf a + \sin\Delta\varphi\,\tilde{\mathbf a}\,,$$

which, as a rotation, is a special case of the similarity transformation (7).
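The rotation formula translates directly to code. A minimal stdlib-only sketch (function names are mine):

```python
import math

def perp(v):
    """Counterclockwise orthogonal vector."""
    return (-v[1], v[0])

def rotate(v, dphi):
    """Rotate v by dphi (radians): b = cos(dphi)*v + sin(dphi)*perp(v)."""
    c, s = math.cos(dphi), math.sin(dphi)
    p = perp(v)
    return (c*v[0] + s*p[0], c*v[1] + s*p[1])

print(rotate((1.0, 0.0), math.pi/2))   # ~(0.0, 1.0), up to float rounding
```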

### 7. Time Dependent Vectors

In *Kinematics* and *Multi Body Dynamics* we need to deal with time-dependent vectors. Here again the polar representation is quite useful, since length and/or orientation of a vector may vary with time.

Differentiating the direction vector $\mathbf e_a$ with respect to time yields $\dot{\mathbf e}_a = \dot\varphi\,\tilde{\mathbf e}_a$, i.e. the unit vector changes in its orthogonal direction only.

So the velocities of the time-dependent vector $\mathbf a = a\,\mathbf e_a$ read

$$\dot{\mathbf a} = \dot a\,\mathbf e_a + a\,\dot\varphi\,\tilde{\mathbf e}_a\,, \tag{18}$$

where the first summand gives the translational velocity in vector direction and the second summand is the circumferential velocity of the vector rotating with angular velocity $\dot\varphi$.

Further differentiation leads us to the accelerations

$$\ddot{\mathbf a} = (\ddot a - a\,\dot\varphi^2)\,\mathbf e_a + (a\,\ddot\varphi + 2\,\dot a\,\dot\varphi)\,\tilde{\mathbf e}_a\,. \tag{19}$$

Here again the first summand represents the translational/radial component, whereas the second one is the circumferential part including the Coriolis term (second summand in parentheses).

Both velocities (18) and accelerations (19) are variations of the similarity transformation (7) with respect to the *orthonormal basis* built by $\mathbf e_a$ and $\tilde{\mathbf e}_a$.
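The velocity formula (18) can be cross-checked against a central finite difference of a concrete time-dependent vector. A plain-Python sketch with arbitrarily chosen functions `r(t)` and `phi(t)` (my assumptions, for illustration only):

```python
import math

def adot(r, rdot, phi, phidot):
    """Velocity per (18): a' = r' e_a + r phi' perp(e_a)."""
    e = (math.cos(phi), math.sin(phi))
    ep = (-e[1], e[0])                       # orthogonal unit vector
    return (rdot*e[0] + r*phidot*ep[0], rdot*e[1] + r*phidot*ep[1])

# example motion: r(t) = 1 + 0.5 t,  phi(t) = 0.3 t^2
r = lambda t: 1.0 + 0.5*t
phi = lambda t: 0.3*t**2
a = lambda t: (r(t)*math.cos(phi(t)), r(t)*math.sin(phi(t)))

t, h = 1.0, 1e-6
num = tuple((a(t + h)[i] - a(t - h)[i]) / (2*h) for i in range(2))
ana = adot(r(t), 0.5, phi(t), 0.6*t)         # r' = 0.5, phi' = 0.6 t
print(all(abs(n - m) < 1e-6 for n, m in zip(num, ana)))   # True
```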

### 8. Conclusion

Abandoning the cross product with planar vectors and simultaneously introducing an orthogonal operator is a beneficial approach. That small addition to the set of vector operations not only makes an explicit *outer product* obsolete, but also shows that vector space $\mathbb R^2$

- is equipped with a complex structure
- becomes a Lie Algebra

Decomposing a vector into its *length* and its *direction* via the polar representation additionally proves valuable, especially with time-dependent vectors in kinematics.

The proposal to consistently use the orthogonal operator results in a significant improvement for doing vector algebra in $\mathbb R^2$.

### 9. References

[1] J. Angeles, The role of the rotation matrix in the teaching of planar kinematics, Mechanism and Machine Theory, 2015.

[2] J. Angeles, R. Sinatra, A novel approach to the teaching of planar mechanism dynamics - a case study, Mechanism and Machine Theory, 2015.

[3] J. Angeles. Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms. Springer, 2007.

[4] B.B. Bantchev, Calculating with Vectors in Plane Geometry. Mathematics and Education in Mathematics, 2008. Proc. 37th Spring Conf. of the Union of Bulgarian Mathematicians, April 2008, pp.261-267.

[5] O. Bottema, B. Roth, Theoretical Kinematics, Dover, 1979

[6] R.G. Calvet, Treatise of plane geometry through geometric algebra. Eigenverlag, 2007.

[7] J.M. McCarthy et al., Geometric Design of Linkages, Springer, 2010

[8] M.J. Crowe, A History of Vector Analysis, Notre Dame, Indiana, 1967

[9] M. Chace, Vector analysis of linkages, ASME J. Eng. Ind., 1963.

[10] H.R.M. Daniali, Planar Vector Equations in Engineering, Tempus Pub., 2006

[11] J.W. Gibbs, Elements of Vector Analysis, New Haven, 1884

[12] S. Gössner, Analysis of Mechanisms in Vector Space R^{2}, IFToMM D-A-CH conference, Innsbruck, Austria, 2016.

[13] S. Gössner, Mechanismentechnik – Vektorielle Analyse ebener Mechanismen, Logos, Berlin, 2016

[14] R.S. Hartenberg, J. Denavit, Kinematic Synthesis of Linkages, McGraw-Hill, 1964

[15] C. Hecker, Physics, Part 4: The Third Dimension, 2007.

[16] F.S. Hill Jr., The Pleasures of 'Perp Dot' Products, Graphics Gems IV, Academic Press, pp. 138-148, 1994.

[17] M. Husty et al., Kinematik und Robotik. Springer, 1997

[18] P. Lynch, Matthew O'Brian: An Inventor Of Vector Analysis, Irish Math. Soc. Bulletin, 2014

[19] D. Mathews, Complex vector spaces, duals, and duels, 2007.

[20] P.E. Nikravesh, Computer-Aided Analysis of Mechanical Systems. Prentice-Hall, New Jersey, 1988

[21] G. Orzechowski et al., Inertia forces and shape integrals in the floating frame of reference formulation, Springer, 2017.

[22] H. Samelson, Notes on Lie Algebras, 1989.

[23] J.J. Uicker et al., Theory of Machines and Mechanisms. Oxford Press, 2011

[24] VDI-Richtlinie 2120, Vektorrechnung – Grundlagen für die praktische Anwendung, Beuth Berlin, 2005.

[25] J. Vince, Vector Analysis for Computer Graphics, Springer, London, 2007

[26] O. Vinogradov, Fundamentals of Kinematics and Dynamics of Machines and Mechanisms. CRC Press, London, 2000

[27] Wolfram MathWorld, Perp Dot Product.