Good question, thanks for asking. The algorithm's not at all obvious and not (yet) posted anywhere!
Here's an outline of the template-matching process:
1. At each test position in the search area, compute the RGB square deviation ("RGBSqD"): the sum
of squared RGB differences, over all pixels, between the template and the video image.
Note that the RGBSqD is zero for a perfect match and grows larger for poorer matches.
2. Determine the average RGBSqD for all test positions.
3. Define the position for which the RGBSqD is minimum as the "working" best match.
Convert the RGBSqD to a "peak height" (PH) using PH = (avgRGBSqD/matchRGBSqD)-1.
Note that the PH is infinite for a perfect match, greatest for the best match, and zero for
an average match.
4. If the PH exceeds the "Automark" setting, the match is deemed to be a good one
(i.e., significantly better than average).
5. For sub-pixel accuracy, fit a Gaussian curve to the PHs of the working best match
and its immediate vertical and horizontal neighbors. Since a Gaussian has three free
parameters, each 3-point fit is exact.
6. The final best match (sub-pixel) is the position of the peak of the Gaussian fit.
7. The width of the Gaussian fit is indicative of the uncertainty of the match position,
but it is not used to explicitly estimate this uncertainty.
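To make steps 1-4 concrete, here is a minimal sketch in Python/NumPy. The function name, the brute-force double loop, and the array layout are my own choices for illustration, not taken from the actual implementation (which may use a faster correlation-based computation):

```python
import numpy as np

def peak_height_map(image, template):
    """Peak-height (PH) map over all test positions (steps 1-3).

    image and template are H x W x 3 uint8 (or float) RGB arrays.
    Hypothetical helper for illustration only.
    """
    th, tw = template.shape[:2]
    rows = image.shape[0] - th + 1
    cols = image.shape[1] - tw + 1
    sqd = np.empty((rows, cols))
    t = template.astype(np.float64)
    for y in range(rows):
        for x in range(cols):
            d = image[y:y + th, x:x + tw].astype(np.float64) - t
            sqd[y, x] = np.sum(d * d)  # RGBSqD: zero for a perfect match
    avg = sqd.mean()
    # PH = (avgRGBSqD / matchRGBSqD) - 1: infinite for a perfect match,
    # zero for an average one (a perfect match divides by zero, hence errstate)
    with np.errstate(divide="ignore"):
        ph = avg / sqd - 1.0
    return ph

# Step 4 would then be: accept the working best match (the argmax of ph)
# only if its PH exceeds the Automark setting.
```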
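And the 3-point Gaussian fit of steps 5-6 can be sketched as follows. A Gaussian through three points is equivalent to a parabola through their logarithms, which is why the fit is exact. The function name is hypothetical, and the guard for non-positive PH values is my own assumption about how degenerate cases might be handled:

```python
import math

def gaussian_subpixel_offset(ph_minus, ph_center, ph_plus):
    """Sub-pixel offset of the Gaussian peak from the center pixel,
    given the PHs at positions -1, 0, +1 along one axis.

    Assumes all three PH values are positive (an assumption on my part;
    a real implementation would need to handle non-positive neighbors).
    """
    la = math.log(ph_minus)
    lb = math.log(ph_center)
    lc = math.log(ph_plus)
    denom = la - 2.0 * lb + lc  # curvature of the log-parabola
    if denom >= 0.0:
        return 0.0  # degenerate: no interior maximum
    # Vertex of the parabola through (-1, la), (0, lb), (1, lc)
    return 0.5 * (la - lc) / denom
```

Applying this once along x and once along y gives the final sub-pixel best-match position of step 6.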
I hope this answers your question in enough detail. If not, or if you have more questions, please don't hesitate to ask :-)