I know it is a controversial opinion, but you might need to know where things are when running a simulation. As a bonus, it helps you know what to draw on the screen.
Of course, you can just use a position vector (x, y, z), where x, y and z are the coordinates of the object. But things have the inconvenient tendency not to stay in the same place for all eternity. So we’ll want to know where that object is at any given time.
Now, the more generic approach is to run a physics simulation. We compute the acceleration of the object, update its velocity from the acceleration, and update its position from the velocity. By the way, use Runge-Kutta 4 for this, not Euler’s method.
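For illustration, a single RK4 step for a position/velocity state could look like the sketch below (Python, with a hypothetical `acceleration` function standing in for whatever force model your simulation uses; here, a single point mass at the origin):

```python
import numpy as np

def acceleration(position):
    """Hypothetical force model: a single point mass at the origin."""
    mu = 3.986004418e14  # gravitational parameter of the Earth, in m^3/s^2
    r = np.linalg.norm(position)
    return -mu * position / r**3

def rk4_step(position, velocity, dt):
    """Advance (position, velocity) by dt with the classical Runge-Kutta 4 scheme."""
    def derivative(pos, vel):
        # the state derivative: position changes with velocity, velocity with acceleration
        return vel, acceleration(pos)

    k1v, k1a = derivative(position, velocity)
    k2v, k2a = derivative(position + 0.5 * dt * k1v, velocity + 0.5 * dt * k1a)
    k3v, k3a = derivative(position + 0.5 * dt * k2v, velocity + 0.5 * dt * k2a)
    k4v, k4a = derivative(position + dt * k3v, velocity + dt * k3a)

    position = position + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    velocity = velocity + dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
    return position, velocity
```

Calling `rk4_step` in a loop with a small enough `dt` gives you the trajectory; Euler’s method amounts to keeping only the k1 terms, and its error grows much faster.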
That would work for things in orbit too. But we can do much better in this case. This one guy, Johannes Kepler, found simpler formulas to describe the motion of two orbiting bodies (e.g. the Sun and the Earth, or the Earth and the Moon). Basically, the trajectories will be ellipses, and we can deduce the position of an object in its orbit at any given time.
We can describe the position of the object in its orbit by the angle, measured at the relevant focus, between the direction of the periapsis (where the planet is closest to its sun) and the direction to the object. This is the true anomaly. Everything is an illusion, so we also like to pretend the ellipse is actually a circle and define the eccentric anomaly. But these angles are not linear with time. So we also define the mean anomaly, which works like an angle (it goes from 0 to 2π over one orbit) but is linear with time. Once we know the direction of the object, we only need its distance from the focus, r.
And here are the formulas to get from the time to the true anomaly, and how we then obtain the distance from the focus (M₀ is the mean anomaly at the epoch t₀, n the mean motion, e the eccentricity of the ellipse, and p its semi-latus rectum):
- mean anomaly (M): M = M₀ + n × (t – t₀)
- eccentric anomaly (E): M = E – e × sin E
- true anomaly (f): $latex \displaystyle f=2\arctan\left(\sqrt{\frac{1+e}{1-e}}\,\tan\frac E2\right)$
- distance from focus (r): $latex \displaystyle r=\frac p{1 + e \cos f}$
You won’t be calculating this in your head, but the expressions are pretty easy to translate to working code. Well, except for one: notice how the eccentric anomaly is not expressed as a function of the mean anomaly? That’s the hard part, and it’s called Kepler’s equation.
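To illustrate, here is what the easy part could look like in Python (a sketch; `solve_eccentric_anomaly` is a stand-in for the Kepler solver discussed below, and the orbital elements are assumed to be known):

```python
from math import atan, cos, pi, sqrt, tan

def position_in_orbit(t, t0, M0, n, e, p):
    """True anomaly f and distance from focus r at time t.

    M0 is the mean anomaly at epoch t0, n the mean motion, e the eccentricity
    and p the semi-latus rectum of the orbit.
    """
    # mean anomaly: linear with time, reduced to a single revolution
    M = (M0 + n * (t - t0)) % (2 * pi)
    # eccentric anomaly: this requires solving Kepler's equation M = E - e sin E
    E = solve_eccentric_anomaly(M, e)
    # true anomaly
    f = 2 * atan(sqrt((1 + e) / (1 - e)) * tan(E / 2))
    # distance from focus
    r = p / (1 + e * cos(f))
    return f, r
```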
We do not know a simple way to express E as a formula depending only on e and M. Instead, we must proceed by successive approximations until we decide we are close enough. For this, we will need two things:
- a starting value E₀ for E
- a way to improve the current approximation of E
Most introductory books and online articles I have found will tell you to take E₀ = M. This has been obsolete since 1978: it works well enough most of the time, but fails in corner cases (near-parabolic orbits around periapsis), and modern starting values are close enough to limit the number of improvement steps you then need. I reviewed 21 papers, selected the 3 most relevant ones, and implemented 21 methods for picking starting values (plus, for the hyperbolic case, 5 methods from 3 papers).
To improve the current approximation, old references will have you iterate E := M + e sin E, which is pretty naive. Most modern references use Newton’s method instead, which converges faster as long as you know the first derivative of the function whose root you are looking for. You can go further with Halley’s method, which also uses the second derivative, and arbitrarily far with Householder’s higher-order methods.
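To make that concrete, here is a minimal sketch of Newton’s method on Kepler’s equation with the classic starter E₀ = M (my own illustration, not one of the benchmarked implementations); the commented line shows how Halley’s method would change the update:

```python
from math import cos, sin

def solve_eccentric_anomaly(M, e, tolerance=1e-15, max_iterations=50):
    """Solve Kepler's equation M = E - e sin E for E, using Newton's method."""
    E = M  # the classic (and obsolete) starting value
    for _ in range(max_iterations):
        g = E - e * sin(E) - M          # the function whose root we want
        g_prime = 1.0 - e * cos(E)      # its first derivative
        delta = g / g_prime             # Newton step
        # Halley's method also uses the second derivative, e sin E:
        # delta = g / (g_prime - g * e * sin(E) / (2.0 * g_prime))
        E -= delta
        if abs(delta) < tolerance:
            break
    return E
```

With this naive starter, near-parabolic orbits (e close to 1) with the object near periapsis are exactly the corner cases where convergence degrades, which is why the choice of E₀ matters.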
Great, now I have 21 starting values × 4 improvement methods = 84 approaches to try (5 × 3 = 15 for the hyperbolic case). Which one is the most accurate? Which one is the fastest? And, more importantly, can I find an approach that is both fast and accurate?
You could spend years studying the theoretical properties of these methods. Or you can just try them for real: it is the most efficient way to know how they actually behave on real hardware, with real floating-point numbers.
I wrote a benchmark tool to test both the accuracy and the speed of each approach. It turns out that only a handful of approaches are accurate enough for my taste:
# Elliptical case
## Naive method
(nothing accurate enough)
## Newton's method
(nothing accurate enough)
## Halley's method
580 c 3.580e-16 gooding_10
764 c 3.926e-16 gooding_11
577 c 3.604e-16 mikkola_1
742 c 3.653e-16 mikkola_2
## Householder's third order method
606 c 3.645e-16 gooding_10
809 c 3.623e-16 gooding_11
641 c 3.935e-16 mikkola_1
764 c 3.904e-16 mikkola_2
# Hyperbolic case
## Newton's method
(nothing accurate enough)
## Halley's method
1454 c 3.731e-15 mikkola_1
1578 c 3.725e-15 mikkola_2
## Householder's third order method
1704 c 3.618e-15 mikkola_1
1757 c 3.618e-15 mikkola_2
The first column is the number of CPU cycles per solve of Kepler’s equation. The second column is the worst relative error over the test set. The third column is a short name for the method for picking a starting value for E.
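For reference, the accuracy half of such a benchmark can be as simple as comparing each solver to a trusted reference over a grid of eccentricities and mean anomalies. The sketch below is not the actual tool: it measures the error on E itself (which may not match the exact metric used here) and leaves out the cycle counting, but it shows the idea:

```python
from math import pi

def worst_relative_error(solver, reference_solver, steps=500):
    """Worst relative error of `solver` against `reference_solver` over a grid of (e, M)."""
    worst = 0.0
    for i in range(1, steps):
        e = i / steps                    # eccentricities in (0, 1)
        for j in range(1, steps):
            M = 2 * pi * j / steps       # mean anomalies in (0, 2*pi)
            E = solver(M, e)
            E_reference = reference_solver(M, e)
            worst = max(worst, abs(E - E_reference) / abs(E_reference))
    return worst
```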
Looking at these results, no approach seems to work well enough with the naive improvement or with Newton’s method, and going up to the third derivative with Householder’s method does not buy much over Halley’s. gooding_11 and mikkola_2 are relatively slow, which leaves gooding_10 and mikkola_1. I picked mikkola_1 because the paper also treats the hyperbolic case. It comes from A cubic approximation for Kepler’s equation by Seppo Mikkola.
And after all this, I have a very robust way to know where stuff is at any time! Of course, you should not forget to take care of floating point precision issues when implementing all of this…
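One example of the kind of issue I mean (an assumption on my part about which problem bites first): for long propagation times, M₀ + n × (t − t₀) grows large, and you want to bring it back to a single revolution without throwing away precision before handing it to the solver:

```python
from math import remainder, tau

def reduced_mean_anomaly(M0, n, t, t0):
    """Mean anomaly at time t, reduced to [-pi, pi] before solving Kepler's equation.

    math.remainder is the IEEE 754 remainder, which performs this reduction
    exactly with respect to the floating-point value of 2*pi (tau).
    """
    return remainder(M0 + n * (t - t0), tau)
```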
Appendix: References
- smith_* are from A simple, efficient starting value for the iterative solution of Kepler’s equation (1979), by Gary R. Smith
- mikkola_* are from A cubic approximation for Kepler’s equation (1987), by Seppo Mikkola
- gooding_* are from Procedures for Solving Kepler’s Equation (1986) for the elliptic case, and from The hyperbolic Kepler equation (and the elliptic equation revisited) (1988) for the hyperbolic case, by A. W. Odell and R. H. Gooding
- pfelger is from Astronomy on the Personal Computer (2000) by Oliver Montenbruck and Thomas Pfleger
Appendix: Full benchmark
# Elliptical case
## Naive method
264 c 2.500e-01 smith_1
276 c 5.303e-01 smith_2
360 c 2.000e-01 smith_3
444 c 9.701e-02 smith_4
385 c 1.667e-01 smith_5
480 c 6.495e+01 smith_6
246 c 2.500e-01 gooding_1
299 c 2.000e-01 gooding_2
356 c 1.667e-01 gooding_3
246 c 5.303e-01 gooding_4
460 c 9.701e-02 gooding_5
253 c 2.417e-01 gooding_6
325 c 7.497e-02 gooding_7
334 c 7.497e-02 gooding_7b
378 c 1.277e-01 gooding_8
423 c 7.499e-02 gooding_9
455 c 2.376e-01 gooding_10
608 c 8.020e-02 gooding_11
286 c 1.439e-01 gooding_12
448 c 4.342e-01 mikkola_1
629 c 4.426e-01 mikkola_2
## Newton's method
522 c 8.517e-03 smith_1
539 c 1.379e-02 smith_2
578 c 8.111e-03 smith_3
697 c 8.111e-03 smith_4
706 c 8.111e-03 smith_5
900 c 1.028e+02 smith_6
481 c 8.517e-03 gooding_1
583 c 8.111e-03 gooding_2
561 c 8.111e-03 gooding_3
476 c 1.379e-02 gooding_4
686 c 8.111e-03 gooding_5
537 c 3.528e-02 gooding_6
536 c 1.378e-02 gooding_7
585 c 1.378e-02 gooding_7b
702 c 2.524e-04 gooding_8
680 c 1.379e-02 gooding_9
731 c 1.117e-10 gooding_10
940 c 2.072e-08 gooding_11
528 c 5.673e-03 gooding_12
697 c 1.825e-06 mikkola_1
815 c 1.825e-06 mikkola_2
## Halley's method
395 c 1.250e-01 smith_1
419 c 2.553e-03 smith_2
488 c 6.250e-02 smith_3
596 c 1.710e-02 smith_4
557 c 4.167e-02 smith_5
989 c 5.130e+01 smith_6
429 c 1.250e-01 gooding_1
503 c 6.250e-02 gooding_2
524 c 4.167e-02 gooding_3
426 c 2.553e-03 gooding_4
587 c 1.710e-02 gooding_5
410 c 6.127e-03 gooding_6
467 c 2.548e-03 gooding_7
440 c 2.548e-03 gooding_7b
508 c 9.153e-05 gooding_8
552 c 2.550e-03 gooding_9
598 c 3.580e-16 gooding_10
821 c 3.926e-16 gooding_11
430 c 1.028e-03 gooding_12
594 c 3.604e-16 mikkola_1
742 c 3.653e-16 mikkola_2
## Householder's third order method
438 c 1.562e-02 smith_1
439 c 6.779e-04 smith_2
501 c 7.812e-03 smith_3
667 c 2.138e-03 smith_4
533 c 5.208e-03 smith_5
1053 c 6.535e+01 smith_6
453 c 1.562e-02 gooding_1
501 c 7.812e-03 gooding_2
594 c 5.208e-03 gooding_3
429 c 6.779e-04 gooding_4
595 c 2.138e-03 gooding_5
454 c 1.638e-03 gooding_6
526 c 6.736e-04 gooding_7
585 c 6.736e-04 gooding_7b
583 c 9.493e-09 gooding_8
652 c 6.741e-04 gooding_9
681 c 3.645e-16 gooding_10
846 c 3.623e-16 gooding_11
486 c 2.724e-04 gooding_12
651 c 3.935e-16 mikkola_1
752 c 3.904e-16 mikkola_2
# Hyperbolic case
## Newton's method
780 c 7.245e+25 basic
1064 c 1.421e-03 mikkola_1
1201 c 1.970e-04 mikkola_2
1014 c 8.559e-01 gooding
892 c 1.515e+19 pfelger
## Halley's method
1134 c 1.729e+17 basic
1396 c 3.731e-15 mikkola_1
1526 c 3.725e-15 mikkola_2
1265 c 9.160e+10 gooding
1237 c 1.791e+13 pfelger
## Householder's third order method
1260 c 1.323e+07 basic
1631 c 3.618e-15 mikkola_1
1595 c 3.618e-15 mikkola_2
1467 c 2.276e+17 gooding
1365 c 9.997e+01 pfelger