## Numerical Integration of Newton's Equation of Motion

Posted January 30, 2021, by H. Gould, J. Tobochnik, and W. Christian

Appendix 3B of An Introduction to Computer Simulation Methods 3rd Ed., by H. Gould, J. Tobochnik, and W. Christian (2007).


We summarize several of the common finite difference methods for the solution of Newton's equations of motion with continuous force functions. The number and variety of algorithms currently in use are evidence that no single method is superior under all conditions.

To simplify the notation, we consider the motion of a particle in one dimension and write Newton's equations of motion in the form \label{eq:motion/eqnewton} \begin{align} {dv \over dt} &= a(t),\\ {dx \over dt} &= v(t), \end{align} where $a(t) \equiv a\left(x(t),v(t),t\right)$. The goal of finite difference methods is to determine the values of $x_{n+1}$ and $v_{n+1}$ at time $t_{n+1} = t_n + dt$. We already have seen that $dt$ must be chosen so that the integration method generates a stable solution. If the system is conservative, $dt$ must be sufficiently small so that the total energy is conserved to the desired accuracy.

The nature of many of the integration algorithms can be understood by expanding $v_{n+1} = v(t_n + dt)$ and $x_{n+1} = x(t_n + dt)$ in a Taylor series. We write \begin{align} v_{n+1} &= v_n + a_n dt + O \big ((dt)^2\big), \label{eq:motion/taylorv} \\ x_{n+1} &= x_n + v_n dt + \frac{1}{2} a_n (dt)^2 + O \left((dt)^3\right). \label{eq:motion/taylorx} \end{align} The familiar Euler algorithm is equivalent to retaining the $O(dt)$ terms in Eqs. (\ref{eq:motion/taylorv}) and (\ref{eq:motion/taylorx}): \label{eq:motion/firstorder} \begin{align} v_{n+1} &= v_n + a_n dt \\ x_{n+1} &= x_n + v_n dt. \qquad \mbox{(Euler algorithm)} \end{align} Because order $dt$ terms are retained in (\ref{eq:motion/firstorder}), the local truncation error, the error in one time step, is order $(dt)^2$. The global error, the total error over the time of interest, due to the accumulation of errors from step to step is order $dt$. This estimate of the global error follows from the fact that the number of steps into which the total time is divided is proportional to $1/dt$. Hence, the order of the global error is reduced by a factor of $1/dt$ relative to the local error. We say that an algorithm is $n$th order if its global error is order $(dt)^n$. The Euler algorithm is an example of a *first-order* algorithm.
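As a concrete illustration, here is a minimal Python sketch of the Euler algorithm. The harmonic-oscillator acceleration $a = -x$ and the numerical values are illustrative choices, not taken from the text:

```python
import math

def euler_step(x, v, a, t, dt):
    """One Euler step: both x and v are advanced using rates
    evaluated at the beginning of the interval (global error O(dt))."""
    return x + v * dt, v + a(x, v, t) * dt

# Simple harmonic oscillator a = -x (illustrative choice).
x, v, t, dt = 1.0, 0.0, 0.0, 0.001
for _ in range(1000):
    x, v = euler_step(x, v, lambda x, v, t: -x, t, dt)
    t += dt
error = abs(x - math.cos(t))    # small here, but grows with each step
energy = 0.5 * (x * x + v * v)  # drifts upward from the exact value 0.5
```

The energy variable makes the instability visible: for this oscillator each Euler step multiplies $x^2 + v^2$ by $1 + (dt)^2$, so the computed energy grows monotonically.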

The Euler algorithm is asymmetrical because it advances the solution by a time step $dt$, but uses information about the derivative only at the beginning of the interval. We already have found that the accuracy of the Euler algorithm is limited and that frequently its solutions are not stable. We also found that a simple modification of (\ref{eq:motion/firstorder}) yields solutions that are stable for oscillatory systems. For completeness, we repeat the Euler-Cromer algorithm here: \begin{align} v_{n+1} &= v_n + a_n dt, \\ x_{n+1} &= x_n + v_{n+1}dt. \qquad \mbox{(Euler-Cromer algorithm)} \end{align}
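The one-line change that turns Euler into Euler-Cromer can be sketched as follows (the oscillator $a = -x$ and the step count are illustrative choices):

```python
import math

def euler_cromer_step(x, v, a, t, dt):
    """One Euler-Cromer step: v is updated first, and the *new*
    velocity is used to advance x."""
    v_new = v + a(x, v, t) * dt
    return x + v_new * dt, v_new

# Harmonic oscillator a = -x (illustrative): the energy now
# oscillates about the true value instead of growing steadily.
x, v, t, dt = 1.0, 0.0, 0.0, 0.01
for _ in range(10000):
    x, v = euler_cromer_step(x, v, lambda x, v, t: -x, t, dt)
    t += dt
energy = 0.5 * (x * x + v * v)  # stays close to 0.5 even after many periods
```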

An obvious way to improve the Euler algorithm is to use the mean velocity during the interval to obtain the new position. The corresponding *midpoint* algorithm can be written as \begin{align} v_{n+1} &= v_n + a_n dt,\quad\mbox{and} \label{eq:motion/midv}\\ x_{n+1} &= x_n + \frac{1}{2} \left(v_{n+1} + v_n\right) dt. \qquad \mbox{(midpoint algorithm)} \label{eq:motion/midx} \end{align} Note that if we substitute (\ref{eq:motion/midv}) for $v_{n+1}$ into (\ref{eq:motion/midx}), we obtain $$x_{n+1} = x_n + v_n dt + \frac{1}{2} a_n \,dt^2.$$ Hence, the midpoint algorithm yields second-order accuracy for the position and first-order accuracy for the velocity. Although the midpoint approximation yields exact results for constant acceleration, it usually does not yield much better results than the Euler algorithm. In fact, both algorithms are equally poor, because the errors increase with each time step.
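The claim that the midpoint rule is exact for constant acceleration is easy to check numerically; the free-fall values below are illustrative choices:

```python
def midpoint_step(x, v, a, t, dt):
    """One midpoint step: advance v with the Euler rule, then use the
    average of the old and new velocities to advance x."""
    v_new = v + a(x, v, t) * dt
    return x + 0.5 * (v_new + v) * dt, v_new

# Free fall with constant acceleration g (illustrative): the midpoint
# rule reproduces x = x0 + v0*t + g*t^2/2 exactly, step by step.
g = -9.8
x, v, t, dt = 0.0, 0.0, 0.0, 0.1
for _ in range(10):
    x, v = midpoint_step(x, v, lambda x, v, t: g, t, dt)
    t += dt
# After t = 1: exact values are x = -4.9 and v = -9.8.
```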

A higher order algorithm whose error is bounded is the *half-step* algorithm. In this algorithm the average velocity during an interval is taken to be the velocity in the middle of the interval. The half-step algorithm can be written as \begin{align} v_{n + \frac{1}{2}} &= v_{n-\frac{1}{2}} + a_n dt, \label{eq:motion/halfv}\\ x_{n+1} &= x_n + v_{n+\frac{1}{2}} dt. \qquad \qquad \mbox{(half-step algorithm)} \end{align}

Note that the half-step algorithm is not self-starting, that is, (\ref{eq:motion/halfv}) does not allow us to calculate $v_\frac{1}{2}$. This problem can be overcome by adopting the Euler algorithm for the first half step:

$$v_\frac{1}{2} = v_0 + \frac{1}{2} a_0 dt.$$
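A short Python sketch of the half-step algorithm, including the Euler starter for $v_{1/2}$, might look like this (the oscillator $a = -x$ is an illustrative choice, and the force is assumed to depend only on position):

```python
import math

def half_step_integrate(x0, v0, a, dt, nsteps):
    """Half-step integration: the velocity is stored at the midpoints
    of the intervals.  The algorithm is not self-starting, so the
    first half step v_{1/2} = v_0 + a_0*dt/2 uses the Euler rule."""
    x = x0
    v_half = v0 + 0.5 * a(x0) * dt   # Euler start for v_{1/2}
    t = 0.0
    for _ in range(nsteps):
        x = x + v_half * dt          # x_{n+1} = x_n + v_{n+1/2} dt
        t += dt
        v_half = v_half + a(x) * dt  # v_{n+3/2} = v_{n+1/2} + a_{n+1} dt
    return x, t

# Harmonic oscillator a = -x (illustrative choice).
x, t = half_step_integrate(1.0, 0.0, lambda x: -x, 0.01, 100)
```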

Because the half-step algorithm is stable, it is a common textbook algorithm. The Euler-Richardson algorithm, a widely used half-step algorithm, can be motivated as follows. We first write $x(t+dt)$ as $$x_1 = x(t+dt) \approx x(t) + v(t)dt + \frac{1}{2} a(t) (dt)^2. \label{eq:motion/taylor11}$$ The notation $x_1$ implies that $x(t+dt)$ is related to $x(t)$ by one time step. We also may divide the step $dt$ into half steps and write the first half step, $x(t+\frac{1}{2} dt)$, as $$\label{eq:motion/taylor2} x(t+\frac{1}{2}dt) \approx x(t) + v(t){dt \over 2}+ \frac{1}{2} a(t) \big ({dt \over 2} \big)^2.$$ The second half step, $x_2(t+dt)$, may be written as $$\label{eq:motion/taylor3} x_2(t+dt) \approx x(t+\frac{1}{2} dt) + v(t+\frac{1}{2} dt){dt \over 2} + \frac{1}{2} a(t + \frac{1}{2} dt)\big ({dt\over 2}\big)^2.$$ We substitute (\ref{eq:motion/taylor2}) into (\ref{eq:motion/taylor3}) and obtain $$x_2(t+dt) \approx x(t) + \frac{1}{2} \big [v(t) + v(t+\frac{1}{2} dt) \big] dt + \frac{1}{2} \big [a(t) + a(t + \frac{1}{2} dt) \big]\big (\frac{1}{2} dt\big)^2. \label{eq:motion/taylor4}$$ Now $a(t+\frac{1}{2} dt) \approx a(t) + \frac{1}{2} a'(t)dt$. Hence, to order $(dt)^2$, (\ref{eq:motion/taylor4}) becomes $$x_2(t+dt) = x(t) + \frac{1}{2} \big [v(t) + v(t+\frac{1}{2}dt)\big]dt + \frac{1}{2} \big [2a(t) \big] \big (\frac{1}{2}dt\big)^2. \label{eq:motion/taylor5}$$

We can find an approximation that is accurate to order $(dt)^3$ by combining (\ref{eq:motion/taylor11}) and (\ref{eq:motion/taylor5}) so that the terms to order $(dt)^2$ cancel. The combination that works is $2x_2 - x_1$, which gives the Euler-Richardson result: $$x_{\rm er}(t +dt) \equiv 2x_2(t+dt)- x_1(t+dt) = x(t) + v(t+\frac{1}{2}dt)dt + O\big((dt)^3\big).\label{eq:motion/ermy}$$ The same reasoning leads to an approximation for the velocity that is accurate to order $(dt)^3$: $$v_{\rm er}(t+dt) \equiv 2v_2(t+dt)- v_1(t+dt) = v(t) + a(t+\frac{1}{2}dt)dt + O\big((dt)^3\big). \label{eq:motion/ermv}$$

A bonus of the Euler-Richardson algorithm is that the quantities $|x_2-x_1|$ and $|v_2-v_1|$ give an estimate for the error. We can use these estimates to change the time step so that the error is always within some desired level of precision. We will see that the Euler-Richardson algorithm is equivalent to the second-order Runge-Kutta algorithm (see (\ref{eq:motion/rktwo})).
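The Euler-Richardson step can be sketched compactly in Python; the oscillator $a = -x$ is again an illustrative choice:

```python
import math

def euler_richardson_step(x, v, a, t, dt):
    """One Euler-Richardson step: use Euler to reach the midpoint of
    the interval, then apply the midpoint rates across the full step."""
    a0 = a(x, v, t)
    xmid = x + 0.5 * v * dt
    vmid = v + 0.5 * a0 * dt
    amid = a(xmid, vmid, t + 0.5 * dt)
    return x + vmid * dt, v + amid * dt

# Harmonic oscillator a = -x (illustrative): second-order accuracy.
x, v, t, dt = 1.0, 0.0, 0.0, 0.01
for _ in range(100):
    x, v = euler_richardson_step(x, v, lambda x, v, t: -x, t, dt)
    t += dt
```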

One of the most widely used energy drift-free higher order algorithms is commonly attributed to Verlet. We write the Taylor series expansion for $x_{n-1}$ in a form similar to (\ref{eq:motion/taylorx}): $$x_{n-1} = x_n - v_n dt + \frac{1}{2} a_n (dt)^2. \label{eq:motion/forward}$$ If we add the forward and reverse forms, (\ref{eq:motion/taylorx}) and (\ref{eq:motion/forward}) respectively, we obtain $$x_{n+1} + x_{n-1} = 2x_n + a_n (dt)^2 + O \big ((dt)^4 \big)$$ or $$x_{n+1} = 2x_n - x_{n-1} + a_n (dt)^2. \qquad \mbox{(leapfrog algorithm)} \label{eq:motion/verletx}$$

Similarly, the subtraction of the Taylor series for $x_{n+1}$ and $x_{n-1}$ yields

$$v_n = {x_{n+1} - x_{n-1} \over 2dt}. \qquad \qquad \mbox{(leapfrog algorithm)} \label{eq:motion/verletv}$$ Note that the global error associated with the leapfrog algorithm is third-order for the position and second-order for the velocity. However, the velocity plays no part in the integration of the equations of motion. The leapfrog algorithm is also known as the explicit central difference algorithm. Because this form of the leapfrog algorithm is not self-starting, another algorithm must be used to obtain the first few terms. An additional problem is that the new velocity (\ref{eq:motion/verletv}) is found by computing the difference between two quantities of the same order of magnitude. Such an operation results in a loss of numerical precision and may give rise to roundoff errors.
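A Python sketch of the leapfrog (position Verlet) algorithm, with a Taylor step to supply $x_1$ because the algorithm is not self-starting; the oscillator $a = -x$ is an illustrative choice and the force is assumed to depend only on position:

```python
import math

def leapfrog_integrate(x0, v0, a, dt, nsteps):
    """Leapfrog: x_{n+1} = 2x_n - x_{n-1} + a_n dt^2.  A Taylor step
    supplies x_1 from x_0 and v_0 because the algorithm is not
    self-starting.  Velocities come from the central difference."""
    x_prev = x0
    x = x0 + v0 * dt + 0.5 * a(x0) * dt * dt  # starter step for x_1
    positions = [x_prev, x]
    for _ in range(nsteps - 1):
        x_next = 2 * x - x_prev + a(x) * dt * dt
        positions.append(x_next)
        x_prev, x = x, x_next
    # velocities from (x_{n+1} - x_{n-1}) / (2 dt), one order less accurate
    velocities = [(positions[i + 1] - positions[i - 1]) / (2 * dt)
                  for i in range(1, len(positions) - 1)]
    return positions, velocities

pos, vel = leapfrog_integrate(1.0, 0.0, lambda x: -x, 0.01, 100)
```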

A mathematically equivalent version of the leapfrog algorithm is given by \begin{align} x_{n+1} &= x_n + v_n dt + \frac{1}{2} a_n (dt)^2 \label{eq:motion/velverletx} \\ v_{n+1} &= v_n + \frac{1}{2} \left(a_{n+1} + a_n\right) dt. \qquad \mbox{(velocity Verlet algorithm)} \label{eq:motion/velverletv} \end{align} We see that Eqs. (\ref{eq:motion/velverletx}) and (\ref{eq:motion/velverletv}), known as the *velocity* form of the Verlet algorithm, are self-starting and minimize roundoff errors. We will refer to Eqs. (\ref{eq:motion/velverletx}) and (\ref{eq:motion/velverletv}) as the Verlet algorithm.

We can derive Eqs. (\ref{eq:motion/velverletx}) and (\ref{eq:motion/velverletv}) from the leapfrog algorithm by the following considerations. We first solve (\ref{eq:motion/verletv}) for $x_{n-1}$ and write $x_{n-1}=x_{n+1} - 2v_n dt$. If we substitute this expression for $x_{n-1}$ into (\ref{eq:motion/verletx}) and solve for $x_{n+1}$, we find the form (\ref{eq:motion/velverletx}). Then we use (\ref{eq:motion/verletv}) to write $v_{n+1}$ as: $$v_{n+1} = {x_{n+2} - x_n \over 2 dt}, \label{eq:motion/vderiv}$$ and use (\ref{eq:motion/verletx}) to obtain $x_{n+2}=2x_{n+1} - x_n + a_{n+1} (dt)^2$. If we substitute this form for $x_{n+2}$ into (\ref{eq:motion/vderiv}), we obtain $$v_{n+1}= {x_{n+1} - x_n \over dt} + \frac{1}{2} a_{n+1} dt. \label{eq:motion/vinterm}$$ Finally, we use (\ref{eq:motion/velverletx}) for $x_{n+1}$ to eliminate $x_{n+1} - x_n$ from (\ref{eq:motion/vinterm}); after some algebra we obtain the desired result (\ref{eq:motion/velverletv}).
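The velocity Verlet update is short enough to sketch directly; the oscillator $a = -x$, unit mass, and the run length are illustrative choices:

```python
import math

def velocity_verlet_step(x, v, a, dt):
    """One velocity-Verlet step for a force that depends only on x."""
    a_old = a(x)
    x_new = x + v * dt + 0.5 * a_old * dt * dt
    v_new = v + 0.5 * (a(x_new) + a_old) * dt
    return x_new, v_new

# Harmonic oscillator a = -x: the energy error stays bounded even
# over many periods, which is why Verlet is popular for long runs.
x, v, dt = 1.0, 0.0, 0.01
for _ in range(10000):   # roughly 16 periods
    x, v = velocity_verlet_step(x, v, lambda x: -x, dt)
energy = 0.5 * (x * x + v * v)
```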

Another useful algorithm that avoids the roundoff errors of the leapfrog algorithm is due to Beeman and Schofield. We write the *Beeman* algorithm in the form: \label{eq:motion/beeman} \begin{align} x_{n+1} &= x_n + v_n dt + {1 \over 6}(4a_n -a_{n-1}) (dt)^2 \\ v_{n+1} &= v_n + {1 \over 6} (2a_{n+1} + 5a_n - a_{n-1}) dt. \qquad \mbox{(Beeman algorithm)} \end{align} Note that (\ref{eq:motion/beeman}) does not calculate particle trajectories more accurately than the Verlet algorithm. Its advantage is that it generally does a better job of maintaining energy conservation. However, the Beeman algorithm is not self-starting.
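Because the Beeman algorithm is not self-starting, a sketch needs one backward Taylor step to supply $a_{n-1}$; that starter, the oscillator $a = -x$, and the unit mass are illustrative choices:

```python
import math

def beeman_integrate(x0, v0, a, dt, nsteps):
    """Beeman's algorithm for a force that depends only on x.  It is
    not self-starting, so a backward Taylor step supplies a_{n-1}."""
    a_curr = a(x0)
    x_prev = x0 - v0 * dt + 0.5 * a_curr * dt * dt  # starter for x_{-1}
    a_prev = a(x_prev)
    x, v = x0, v0
    for _ in range(nsteps):
        x_new = x + v * dt + (4 * a_curr - a_prev) * dt * dt / 6
        a_new = a(x_new)
        v = v + (2 * a_new + 5 * a_curr - a_prev) * dt / 6
        x = x_new
        a_prev, a_curr = a_curr, a_new
    return x, v

x, v = beeman_integrate(1.0, 0.0, lambda x: -x, 0.01, 100)
```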

The most common finite difference method for solving ordinary differential equations is the *Runge-Kutta* method. To explain the many algorithms based on this method, we consider the solution of the first-order differential equation $${dx \over dt} = f(x,t). \label{eq:motion/rk}$$ Runge-Kutta algorithms evaluate the rate, $f(x,t)$, multiple times in the interval $[t, t+\Delta t]$. For example, the classic fourth-order Runge-Kutta algorithm, which we discuss below, evaluates $f(x,t)$ at the four times $t_n$, $t_n +a_1\Delta t$, $t_n +a_2\Delta t$, and $t_n +a_3 \Delta t$. Each evaluation of $f(x,t)$ produces a slightly different rate $r_1$, $r_2$, $r_3$, and $r_4$. The idea is to advance the solution using a weighted average of the intermediate rates: $$x_{n+1} = x_n + (c_1 r_1 + c_2 r_2 +c_3 r_3 + c_4 r_4)\Delta t.$$

The various Runge-Kutta algorithms correspond to different choices for the constants $a_i$ and $c_i$. These algorithms are classified by the number of intermediate rates $\{r_i\}$, where $i=1, \cdots, N$. The determination of the Runge-Kutta coefficients is difficult for all but the lowest order methods, because the coefficients must be chosen to cancel as many terms in the Taylor series expansion of $f(x,t)$ as possible. The first non-zero expansion coefficient determines the order of the Runge-Kutta algorithm. Fortunately, these coefficients are tabulated in most numerical analysis books.

To illustrate how the various sets of Runge-Kutta constants arise, consider the case $N=2$. The second-order Runge-Kutta solution of (\ref{eq:motion/rk}) can be written using standard notation as: \label{eq:motion/rktwo} \begin{align} x_{n+1} &= x_n + k_2 + O \left( (dt)^3 \right), \quad\mbox{where}\\ k_1 &= f\left(x_n,t_n\right) dt \\ k_2 &= f\left(x_n + {k_1 \over 2}, t_n + {dt \over 2}\right) dt. \end{align} Note that the weighted average uses $c_1=0$ and $c_2=1$. The interpretation of (\ref{eq:motion/rktwo}) is as follows. The Euler algorithm assumes that the slope $f(x_n,t_n)$ at $(x_n,t_n)$ can be used to extrapolate to the next step, that is, $x_{n+1} = x_n + f(x_n,t_n) dt$. A plausible way of making a more accurate determination of the slope is to use the Euler algorithm to extrapolate to the midpoint of the interval and then to use the midpoint slope across the full width of the interval. Hence, the Runge-Kutta estimate for the rate is $f(x^*, t_n + \frac{1}{2} dt)$, where $x^* = x_n + \frac{1}{2} f(x_n,t_n) dt$ (see (\ref{eq:motion/rktwo})).
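The midpoint form of second-order Runge-Kutta is only a few lines of Python; the decay equation $dx/dt = -x$ is an illustrative test case with the known solution $e^{-t}$:

```python
import math

def rk2_step(x, t, f, dt):
    """One second-order (midpoint) Runge-Kutta step for dx/dt = f(x, t),
    using the weights c1 = 0, c2 = 1."""
    k1 = f(x, t) * dt
    k2 = f(x + k1 / 2, t + dt / 2) * dt
    return x + k2

# Exponential decay dx/dt = -x (illustrative): compare with exp(-t).
x, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    x = rk2_step(x, t, lambda x, t: -x, dt)
    t += dt
```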

The application of the second-order Runge-Kutta algorithm to Newton's equation of motion (\ref{eq:motion/eqnewton}) yields \label{eq:motion/rk2Newton} \begin{align} k_{1v}& = a\left(x_n,v_n,t_n\right) dt \\ k_{1x}& = v_n dt \\ k_{2v}& = a\left(x_n + {k_{1x} \over 2},v_n + { k_{1v} \over 2}, t_n + {dt \over 2}\right)dt \\ k_{2x}& = \left(v_n + {k_{1v} \over 2}\right) dt, \end{align} and \label{eq:motion/rk2} \begin{align} v_{n+1}& = v_n + k_{2v} \\ x_{n+1}& = x_n + k_{2x}. \qquad \mbox{(second-order Runge-Kutta)} \end{align} Note that the second-order Runge-Kutta algorithm in (\ref{eq:motion/rk2Newton}) and (\ref{eq:motion/rk2}) is identical to the Euler-Richardson algorithm.

Other second-order Runge-Kutta type algorithms exist. For example, if we set $c_1=c_2=\frac{1}{2}$ we obtain the endpoint method: \begin{align} x_{n+1} &= x_n + \frac{1}{2} k_1+ \frac{1}{2} k_2, \quad\mbox{where}\\ k_{1}& = f\left(x_n,t_n\right) dt \\ k_{2}& = f\left(x_n+k_1,t_n+dt\right) dt. \end{align} And if we set $c_1=\frac{1}{3}$ and $c_2=\frac{2}{3}$, we obtain Ralston's method: \begin{align} x_{n+1} &= x_n + \frac{1}{3} k_1+ \frac{2}{3} k_2, \quad\mbox{where}\\ k_{1}& = f\left(x_n,t_n\right) dt \\ k_{2}& = f\left(x_n+\frac{3}{4}k_1,t_n+\frac{3}{4}dt\right) dt. \end{align} Note that Ralston's method does not calculate the rate at uniformly spaced subintervals of $dt$. In general, a Runge-Kutta method adjusts the partition of $dt$ as well as the constants $a_i$ and $c_i$ so as to minimize the error.

In the *fourth-order* Runge-Kutta algorithm, the derivative is computed at the beginning of the time interval, in two different ways at the middle of the interval, and again at the end of the interval. The two estimates of the derivative at the middle of the interval are given twice the weight of the other two estimates. The algorithm for the solution of (\ref{eq:motion/rk}) can be written in standard notation as \label{eq:motion/rkfour} \begin{align} k_1&= f\left(x_n,t_n\right) dt \\ k_2& = f\left(x_n + {k_1 \over 2},t_n + {dt \over 2}\right) dt \\ k_3& = f\left(x_n+{k_2 \over 2},t_n + {dt \over 2}\right) dt \\ k_4& = f\left(x_n + k_3,t_n +dt\right) dt, \end{align} and $$x_{n+1}= x_n + {1 \over 6}(k_1 + 2k_2 + 2k_3 + k_4).$$
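A direct transcription of the classic fourth-order scheme in Python, again using the illustrative decay equation $dx/dt = -x$:

```python
import math

def rk4_step(x, t, f, dt):
    """One classic fourth-order Runge-Kutta step for dx/dt = f(x, t).
    The two midpoint rates k2 and k3 carry twice the weight of k1 and k4."""
    k1 = f(x, t) * dt
    k2 = f(x + k1 / 2, t + dt / 2) * dt
    k3 = f(x + k2 / 2, t + dt / 2) * dt
    k4 = f(x + k3, t + dt) * dt
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Exponential decay dx/dt = -x (illustrative): even a coarse step
# gives high accuracy because the global error is order (dt)^4.
x, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    x = rk4_step(x, t, lambda x, t: -x, dt)
    t += dt
```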

The application of the fourth-order Runge-Kutta algorithm to Newton's equation of motion (\ref{eq:motion/eqnewton}) yields \begin{align} k_{1v}&= a\left(x_n,v_n,t_n\right) dt \\ k_{1x}& = v_n dt \\ k_{2v}& = a\left(x_n+{k_{1x} \over 2},v_n+{ k_{1v} \over 2},t_n+{ dt \over 2}\right) dt \\ k_{2x}& = \left(v_n + {k_{1v} \over 2}\right) dt \\ k_{3v}& = a\left(x_n + {k_{2x} \over 2}, v_n + {k_{2v} \over 2}, t_n + {dt \over 2}\right) dt \\ k_{3x}& = \left(v_n + {k_{2v} \over 2}\right) dt \\ k_{4v}& = a\left(x_n + k_{3x}, v_n+ k_{3v}, t_n+dt\right) dt \\ k_{4x}& = \left(v_n + k_{3v}\right) dt, \end{align} and \begin{align} v_{n+1}& = v_n + {1 \over 6}(k_{1v} + 2k_{2v}+2k_{3v}+k_{4v}) \\ x_{n+1}& = x_n + {1 \over 6}(k_{1x} + 2k_{2x}+2k_{3x}+k_{4x}). \quad \mbox{(fourth-order Runge-Kutta)} \end{align} Because Runge-Kutta algorithms are self-starting, they are frequently used to obtain the first few iterations for an algorithm that is not self-starting.
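The equations above translate line for line into Python; the oscillator $a = -x$ is an illustrative choice:

```python
import math

def rk4_newton_step(x, v, a, t, dt):
    """One fourth-order Runge-Kutta step for Newton's equations,
    advancing x and v together with paired intermediate rates."""
    k1v = a(x, v, t) * dt
    k1x = v * dt
    k2v = a(x + k1x / 2, v + k1v / 2, t + dt / 2) * dt
    k2x = (v + k1v / 2) * dt
    k3v = a(x + k2x / 2, v + k2v / 2, t + dt / 2) * dt
    k3x = (v + k2v / 2) * dt
    k4v = a(x + k3x, v + k3v, t + dt) * dt
    k4x = (v + k3v) * dt
    v_new = v + (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    x_new = x + (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    return x_new, v_new

# Harmonic oscillator a = -x (illustrative choice).
x, v, t, dt = 1.0, 0.0, 0.0, 0.05
for _ in range(20):
    x, v = rk4_newton_step(x, v, lambda x, v, t: -x, t, dt)
    t += dt
```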

As we have discussed, one way to determine the accuracy of a solution is to calculate it twice with two different values of the time step. One way to make this comparison is to choose time steps $dt$ and $dt/2$ and compare the solution at the desired time. If the difference is small, the error is assumed to be small. This estimate of the error can be used to adjust the value of the time step. If the error is too large, then the time step can be halved. And if the error is much less than the desired value, the time step can be increased so that the program runs faster.
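The step-doubling idea can be sketched as follows; the tolerance, halving/doubling thresholds, and the decay equation are all illustrative choices, not a prescription from the text:

```python
import math

def rk4_step(x, t, f, dt):
    """One classic fourth-order Runge-Kutta step for dx/dt = f(x, t)."""
    k1 = f(x, t) * dt
    k2 = f(x + k1 / 2, t + dt / 2) * dt
    k3 = f(x + k2 / 2, t + dt / 2) * dt
    k4 = f(x + k3, t + dt) * dt
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def adaptive_step(x, t, f, dt, tol=1e-8):
    """Step doubling: compare one step of size dt with two steps of
    size dt/2; halve dt until the difference is below tol, and double
    it for the next call when the difference is far below tol."""
    while True:
        one = rk4_step(x, t, f, dt)
        half = rk4_step(x, t, f, dt / 2)
        two = rk4_step(half, t + dt / 2, f, dt / 2)
        if abs(two - one) <= tol:
            t_new = t + dt
            if abs(two - one) < tol / 100:
                dt *= 2            # error much smaller than needed
            return two, t_new, dt
        dt /= 2                    # error too large: halve and retry

# Integrate dx/dt = -x from t = 0 to 1 (illustrative example).
x, t, dt = 1.0, 0.0, 0.5
while t < 1.0:
    dt = min(dt, 1.0 - t)          # do not overshoot the end time
    x, t, dt = adaptive_step(x, t, lambda x, t: -x, dt)
```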

A better way of controlling the step size was developed by Fehlberg, who showed that it is possible to evaluate the rate in such a way as to simultaneously obtain two Runge-Kutta approximations with different orders. For example, it is possible to run a fourth-order and a fifth-order algorithm in tandem by evaluating five rates. We thus obtain different estimates of the true solution using different weighted averages of these rates: \begin{align} y_{n+1} &= y_n + c_1 k_1 + c_2 k_2 +c_3 k_3 + c_4 k_4 +c_5 k_5 \\ y_{n+1}^* &= y_n + c_1^* k_1 + c_2^* k_2 +c_3^* k_3 + c_4^* k_4. \end{align} Because we can assume that the fifth-order solution is closer to the true solution than the fourth-order one, the difference $|y_{n+1}-y_{n+1}^*|$ provides a good estimate of the error of the fourth-order method. If this estimated error is larger than the desired tolerance, then the step size is decreased. If the error is smaller than the desired tolerance, the step size is increased. The Cash-Karp 5(4) ODE solver implements this technique for choosing the optimal step size.

In applications where the accuracy of the numerical solution is important, adaptive time step algorithms should always be used. As stated in *Numerical Recipes*:

*Many small steps should tiptoe through treacherous terrain, while a few great strides should speed through uninteresting countryside. The resulting gains in efficiency are not mere tens of percents or factors of two; they can sometimes be factors of ten, a hundred, or more.*

Adaptive step size algorithms are not well suited for tabulating functions or for simulation because the intervals between data points are not constant. An easy way to circumvent this problem is to use a method that takes multiple adaptive steps while checking to ensure that the last step does not overshoot the desired fixed step size. ODE solvers often implement this technique. The solver then acts like a fixed step size solver, even though it adjusts its internal step size so as to achieve the desired accuracy.

It also is possible to combine the results from a calculation using two different values of the time step to yield a more accurate expression. Consider the Taylor series expansion of $f(t+dt)$ about $t$: $$\label{eq:motion/taylor1} f(t + dt) = f(t) + f'(t)dt + {1 \over 2!}f''(t) (dt)^2 + \ldots$$ Similarly, we have $$\label{eq:motion/taylor-1} f(t - dt) = f(t) - f'(t)dt + {1 \over 2!}f''(t) (dt)^2 + \ldots$$ We subtract (\ref{eq:motion/taylor-1}) from (\ref{eq:motion/taylor1}) to find the usual central difference approximation for the derivative: $$\label{eq:motion/centraldifference} f'(t) = D_1(dt) - {(dt)^2 \over 6}f'''(t) + \ldots, \quad\mbox{where}\quad D_1(dt) \equiv {f(t + dt) - f(t - dt) \over 2dt}.$$ The truncation error is order $(dt)^2$. Next consider the same relation, but for a time step that is twice as large: $$\label{eq:motion/4more} f'(t) = D_1(2dt) - {4(dt)^2 \over 6}f'''(t) + \ldots, \quad\mbox{where}\quad D_1(2dt) \equiv {f(t + 2dt) - f(t - 2dt) \over 4dt}.$$ Note that the truncation error is again order $(dt)^2$, but is four times bigger. We can eliminate this error to leading order by dividing (\ref{eq:motion/4more}) by 4 and subtracting it from (\ref{eq:motion/centraldifference}): \begin{align} f'(t) - {1 \over 4} f'(t) &= {3 \over 4} f'(t) \approx D_1(dt)- {1 \over 4} D_1(2dt), \nonumber \\ \mbox{or}\quad f'(t) &\approx {4D_1(dt) - D_1(2dt) \over 3}. \label{eq:motion/5point} \end{align} It is easy to show that the error for $f'(t)$ is order $(dt)^4$. Recursive difference formulas for derivatives can be obtained by canceling the truncation error at each order. This method is called *Richardson extrapolation*.
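The cancellation is easy to demonstrate numerically; the test function $\sin t$ and the step size are illustrative choices:

```python
import math

def d1(f, t, h):
    """Central difference estimate of f'(t) with step h."""
    return (f(t + h) - f(t - h)) / (2 * h)

def richardson(f, t, h):
    """Combine D1(h) and D1(2h) to cancel the order h^2 error,
    leaving an error of order h^4."""
    return (4 * d1(f, t, h) - d1(f, t, 2 * h)) / 3

# Differentiate sin at t = 1 (illustrative): exact answer is cos(1).
h = 0.1
err_central = abs(d1(math.sin, 1.0, h) - math.cos(1.0))
err_rich = abs(richardson(math.sin, 1.0, h) - math.cos(1.0))
```

Even with the coarse step $h = 0.1$, the extrapolated estimate is orders of magnitude closer to $\cos(1)$ than the plain central difference.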

Another class of algorithms are *predictor-corrector* algorithms. The idea is to first *predict* the value of the new position: $$\label{eq:motion/xp} x_p = x_{n-1} + 2v_n dt. \qquad \qquad \mbox{(predictor)}$$ The predicted value of the position allows us to predict the acceleration $a_p$. Then using $a_p$, we obtain the *corrected* values of $v_{n+1}$ and $x_{n+1}$: \begin{align} v_{n+1}& = v_n + \frac{1}{2}(a_p + a_n) dt \\ x_{n+1}& = x_n + \frac{1}{2}(v_{n+1} + v_n) dt. \qquad \mbox{(corrected)} \end{align} The corrected values of $x_{n+1}$ and $v_{n+1}$ are used to obtain a new predicted value of $a_{n+1}$, and hence a new predicted value of $v_{n+1}$ and $x_{n+1}$. This process is repeated until the predicted and corrected values of $x_{n+1}$ differ by less than the desired value.

Note that the predictor-corrector algorithm is not self-starting. The predictor-corrector algorithm gives more accurate positions and velocities than the leapfrog algorithm and is suitable for very accurate calculations. However, it is computationally expensive, needs significant storage (the forces at the last two stages, and the coordinates and velocities at the last step), and becomes unstable for large time steps.
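A sketch of the iterated predictor-corrector step in Python; the convergence tolerance, the Taylor/trapezoid starter for $x_1$ and $v_1$, and the oscillator $a = -x$ are illustrative choices:

```python
import math

def pc_step(x, x_prev, v, a, dt, tol=1e-12):
    """One predictor-corrector step for a = a(x); the corrector is
    iterated until successive position estimates agree to tol."""
    a_n = a(x)
    x_p = x_prev + 2 * v * dt                  # predictor
    while True:
        a_p = a(x_p)
        v_new = v + 0.5 * (a_p + a_n) * dt     # corrector for v
        x_c = x + 0.5 * (v_new + v) * dt       # corrector for x
        if abs(x_c - x_p) < tol:
            return x_c, v_new
        x_p = x_c                              # iterate the corrector

# Harmonic oscillator a = -x; a Taylor step supplies x_1 and v_1
# because the algorithm is not self-starting.
accel = lambda x: -x
dt = 0.01
x_prev, v0 = 1.0, 0.0
x = x_prev + v0 * dt + 0.5 * accel(x_prev) * dt * dt
v = v0 + 0.5 * (accel(x_prev) + accel(x)) * dt
for _ in range(99):                            # advance to t = 1
    x_new, v = pc_step(x, x_prev, v, accel, dt)
    x_prev, x = x, x_new
```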

As we have emphasized, there is no single algorithm for solving Newton's equations of motion that is superior under all conditions. It is usually a good idea to start with a simple algorithm, and then to try a higher order algorithm to see if any real improvement is obtained.

We now discuss an important class of algorithms, known as *symplectic* algorithms, which are particularly suitable for doing long time simulations of Newton's equations of motion when the force is only a function of position. The basic idea of these algorithms derives from the Hamiltonian theory of classical mechanics. We first give some basic results needed from this theory to understand the importance of symplectic algorithms.

In Hamiltonian theory the generalized coordinates, $q_i$, and momenta, $p_i$, take the place of the usual positions and velocities familiar from Newtonian theory. The index $i$ labels both a particle and a component of the motion. For example, in a two particle system in two dimensions, $i$ would run from 1 to 4. The Hamiltonian (which for our purposes can be thought of as the total energy) is written as $$H({q_i,p_i}) = T + V, \label{eq:motion/hamiltonian}$$ where $T$ is the kinetic energy and $V$ is the potential energy. Hamilton's theory is most relevant for non-dissipative systems, which we consider here. For example, for a two particle system in two dimensions connected by a spring, $H$ would take the form: $$H = \frac{p_{1}^2}{2m} + \frac{p_{2}^2}{2m} + \frac{p_{3}^2}{2m} + \frac{p_{4}^2}{2m} + \frac{1}{2}k(q_{1} - q_{3})^2 + \frac{1}{2}k(q_{2} - q_{4})^2, \label{eq:motion/springH}$$ where if the particles are labeled as $A$ and $B$, we have $p_1 = p_{x,A}$, $p_2 = p_{y,A}$, $p_3 = p_{x,B}$, $p_4 = p_{y,B}$, and similarly for the $q_i$. The equations of motion are written as first-order differential equations known as Hamilton's equations: \label{eq:motion/hameq} \begin{align} {\dot p}_{i}& = -\frac{\partial H}{\partial q_i} \\ {\dot q}_{i}& = \frac{\partial H}{\partial p_i}, \end{align} which are equivalent to Newton's second law and an equation relating the velocity to the momentum. The beauty of Hamiltonian theory is that these equations are correct for other coordinate systems such as polar coordinates, and they also describe rotating systems where the momenta become angular momenta, and the position coordinates become angles.

Because the coordinates and momenta are treated on an equal footing, we can consider the properties of flow in phase space where the dimension of phase space includes both the coordinates and momenta. Thus, one particle moving in one dimension corresponds to a two-dimensional phase space. If we imagine a collection of initial conditions in phase space forming a volume in phase space, then one of the results of Hamiltonian theory is that this volume does not change as the system evolves. A slightly different result, called the *symplectic* property, is that the sum of the areas formed by the projection of the phase space volume onto the planes $(q_i,p_i)$ for each pair of coordinates and momenta also does not change with time. Numerical algorithms that have this property are called symplectic algorithms. These algorithms are built from the following two statements, which are repeated $M$ times for each time step: \begin{align} p_i^{(k+1)}& =p_i^{(k)} + a_{k} F_i^{(k)}\delta t \label{eq:motion/symplectic.a} \\ q_i^{(k+1)}& =q_i^{(k)} + b_{k} p_i^{(k+1)} \delta t, \label{eq:motion/symplectic.b} \end{align} where $F_i^{(k)} \equiv - \partial V(q_i^{(k)})/\partial q_i^{(k)}$. The label $k$ runs from 0 to $M-1$, and one time step is given by $\Delta t = M \delta t$. (We will see that $\delta t$ is the time step of an intermediate calculation that is made during the time step $\Delta t$.) Note that in the update for $q_i$, the already updated $p_i$ is used. For simplicity, we assume that the mass is unity.

Different algorithms correspond to different values of $M$, $a_k$, and $b_k$. For example, $a_0 = b_0 = M = 1$ corresponds to the Euler-Cromer algorithm, and $M = 2$, $a_0 = a_1 = 1$, $b_0 = 2$, and $b_1 = 0$ is equivalent to the Verlet algorithm as we will now show. If we substitute in the appropriate values for $a_k$ and $b_k$ into Eqs. (\ref{eq:motion/symplectic.a}) and (\ref{eq:motion/symplectic.b}), we have \begin{align} p_i^{(1)}& = p_i^{(0)} + F_i^{(0)}\delta t \label{eq:motion/verletsym.a}\\ q_i^{(1)}& = q_i^{(0)} + 2p_i^{(1)}\delta t \label{eq:motion/verletsym.b}\\ p_i^{(2)}& = p_i^{(1)} + F_i^{(1)}\delta t \label{eq:motion/verletsym.c}\\ q_i^{(2)}& = q_i^{(1)}. \label{eq:motion/verletsym.d} \end{align} We next combine (\ref{eq:motion/verletsym.a}) and (\ref{eq:motion/verletsym.c}) for the momentum coordinate and (\ref{eq:motion/verletsym.b}) and (\ref{eq:motion/verletsym.d}) for the position, and obtain \begin{align} p_i^{(2)}&= p_i^{(0)} + (F_i^{(0)} + F_i^{(1)})\delta t \label{eq:motion/verletsym2.a}\\ q_i^{(2)}&= q_i^{(0)} + 2p_i^{(1)}\delta t. \label{eq:motion/verletsym2.b} \end{align} We take $\delta t = \Delta t/2$ and combine (\ref{eq:motion/verletsym2.b}) with (\ref{eq:motion/verletsym.a}) and find \begin{align} p_i^{(2)}&= p_i^{(0)} + \frac{1}{2}(F_i^{(0)} + F_i^{(1)})\Delta t \label{eq:motion/verletsym3.a}\\ q_i^{(2)}&= q_i^{(0)} +p_i^{(0)}\Delta t + \frac{1}{2} F_i^{(0)}(\Delta t)^2, \label{eq:motion/verletsym3.b} \end{align} which is identical to the Verlet algorithm, Eqs. (\ref{eq:motion/velverletx}) and (\ref{eq:motion/velverletv}), because for unit mass the force and acceleration are equal.
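The coefficient scheme can be sketched as a single Python routine; the harmonic-oscillator force $F = -q$, unit mass, and the step counts are illustrative choices:

```python
import math

def symplectic_step(q, p, force, dtau, a_coef, b_coef):
    """One full time step built from M sub-steps of the form
    p += a_k * F(q) * dtau;  q += b_k * p * dtau  (unit mass).
    Note that each q update uses the already updated p."""
    for a_k, b_k in zip(a_coef, b_coef):
        p = p + a_k * force(q) * dtau
        q = q + b_k * p * dtau
    return q, p

force = lambda q: -q            # harmonic oscillator, unit mass
Dt = 0.01

# M = 1, a0 = b0 = 1: the Euler-Cromer algorithm.
q, p = 1.0, 0.0
for _ in range(100):
    q, p = symplectic_step(q, p, force, Dt, [1.0], [1.0])

# M = 2, a = (1, 1), b = (2, 0), dtau = Dt/2: the Verlet algorithm.
q2, p2 = 1.0, 0.0
for _ in range(100):
    q2, p2 = symplectic_step(q2, p2, force, Dt / 2, [1.0, 1.0], [2.0, 0.0])
```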

Reversing the order of the updates for the coordinates and the momenta also leads to symplectic algorithms: \begin{align} q_i^{(k+1)}& =q_i^{(k)} + b_{k}\delta t\, p_i^{(k)}, \label{eq:motion/symplectic2.a}\\ p_i^{(k+1)}& =p_i^{(k)} + a_{k}\delta t\, F_i^{(k+1)}. \label{eq:motion/symplectic2.b} \end{align} A third variation uses different values of $k$ in one algorithm. Thus, if $M = 2$, which corresponds to two intermediate calculations per time step, we could use Eqs. (\ref{eq:motion/symplectic.a}) and (\ref{eq:motion/symplectic.b}) for the first intermediate calculation and Eqs. (\ref{eq:motion/symplectic2.a}) and (\ref{eq:motion/symplectic2.b}) for the second.

Why are these algorithms important? Because of the symplectic property, these algorithms will simulate an exact Hamiltonian, although not the one we started with in general. However, this Hamiltonian will be close to the one we wish to simulate if the $a_k$ and $b_k$ are properly chosen.