for the *n* variables *x*_{1}, …, *x*_{n}.
An approximate solution *x* must be known.
Then a better approximation *x* + Δ*x* is found from the approximate equations

*f*_{j}(*x* + Δ*x*) ≈ *f*_{j}(*x*) + Σ_{k} (∂*f*_{j}/∂*x*_{k}) Δ*x*_{k} = 0 ,   *j* = 1, …, *n*,

which are linear equations in the unknown Δ*x*. The matrix *J* is the Jacobi matrix,

*J*_{jk} = ∂*f*_{j}/∂*x*_{k} .

The process is iterated until it converges, usually until |Δ*x*| is smaller than the accuracy wanted in the solution, or until all the *f*_{j}(*x*) are "sufficiently close to 0" (general criteria are difficult to define). Convergence may, of course, not be obtained if the first approximation is poor (again, this is difficult to define in general).
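
As a concrete sketch (not part of the original text), the iteration can be written in plain Python. The test system here, the unit circle *x*² + *y*² = 1 intersected with the line *x* = *y*, is an illustrative assumption; each step solves the linearized equations *J* Δ*x* = −*f*(*x*) by Gaussian elimination and updates *x*:

```python
def solve_linear(J, r):
    # Gaussian elimination with partial pivoting for an n x n system J x = r.
    n = len(r)
    M = [row[:] + [r[i]] for i, row in enumerate(J)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(c + 1, n):
            m = M[k][c] / M[c][c]
            for j in range(c, n + 1):
                M[k][j] -= m * M[c][j]
    x = [0.0] * n
    for c in range(n - 1, -1, -1):
        x[c] = (M[c][n] - sum(M[c][j] * x[j] for j in range(c + 1, n))) / M[c][c]
    return x

def newton_system(f, J, x, tol=1e-12, max_iter=50):
    # Newton-Raphson for n equations: solve J(x) dx = -f(x), then x <- x + dx,
    # iterating until the correction dx is below the wanted accuracy.
    for _ in range(max_iter):
        fx = f(x)
        dx = solve_linear(J(x), [-v for v in fx])
        x = [xi + di for xi, di in zip(x, dx)]
        if max(abs(d) for d in dx) < tol:
            break
    return x

# Illustrative system: x^2 + y^2 = 1 and x = y, root (1/sqrt(2), 1/sqrt(2)).
f = lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]]
J = lambda v: [[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]]
root = newton_system(f, J, [1.0, 0.5])
```
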

In the one-dimensional case the Newton-Raphson formula

*x*_{m+1} = *x*_{m} − *f*(*x*_{m}) / *f*′(*x*_{m})

has a very simple geometrical interpretation: it is the extrapolation to 0 along the tangent to the graph of *f*(*x*) (also called *Newton's rule*).
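
For example, a minimal Python sketch of the one-dimensional iteration; the test function *f*(*x*) = *x*² − 2 (root √2) is an assumed example:

```python
def newton_1d(f, fprime, x, tol=1e-12, max_iter=50):
    # One-dimensional Newton-Raphson: x_{m+1} = x_m - f(x_m)/f'(x_m).
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

# Illustrative use: f(x) = x^2 - 2, f'(x) = 2x, starting from 1.5.
root = newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```
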

The convergence is quadratic, ε_{m+1} ≈ *c* ε_{m}², where ε_{m} is the error after *m* iterations. Note that only approximate solutions for Δ*x* are required. A small error in Δ*x* will not destroy the convergence completely, but may make it linear instead of quadratic. Hence the Jacobian matrix *J* also needs to be calculated only approximately; in particular, it often need not be recalculated for each iteration. Double computer precision for *x* and *f*(*x*) but single precision for *J* and Δ*x* may give double precision for the final solution.

In fact, the Newton-Raphson method may be applied even to linear equations, in order to obtain double-precision solutions using single-precision subroutines.
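
A minimal sketch of this idea (all names are illustrative, and the single-precision arithmetic is simulated by rounding every operation to IEEE single precision): a 2×2 system is first solved by a "single-precision subroutine", and Newton-Raphson steps with double-precision residuals then recover a double-precision solution:

```python
import struct

def f32(x):
    # Round a Python double to IEEE single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

def solve2_single(A, b):
    # 2x2 solve by Cramer's rule with every operation rounded to single
    # precision -- stands in for a single-precision subroutine.
    det = f32(f32(A[0][0] * A[1][1]) - f32(A[0][1] * A[1][0]))
    x0 = f32(f32(f32(b[0] * A[1][1]) - f32(A[0][1] * b[1])) / det)
    x1 = f32(f32(f32(A[0][0] * b[1]) - f32(b[0] * A[1][0])) / det)
    return [x0, x1]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]                  # exact solution: x = (1/11, 7/11)

x = solve2_single(A, b)         # single-precision starting approximation
for _ in range(3):
    # Residual of the linear f(x) = A x - b, computed in double precision.
    r = [b[i] - (A[i][0] * x[0] + A[i][1] * x[1]) for i in range(2)]
    dx = solve2_single(A, r)    # Newton-Raphson correction, single precision
    x = [x[0] + dx[0], x[1] + dx[1]]
```

Each step reduces the error by roughly the single-precision rounding level, so a few iterations suffice to reach double precision.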

Numerical differentiation might be used to approximate the derivatives; this is then essentially the *secant method*. Some care may be needed, since numerical differentiation becomes inaccurate both for small and for large steps; see [Press95].
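
A sketch of the secant variant in Python (the test equation cos *x* = *x* is an assumed example): the derivative *f*′ is replaced by the difference quotient through the two most recent iterates, so no explicit derivative is needed:

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    # Secant method: replace f'(x) by the difference quotient through
    # the last two iterates (x0, f0) and (x1, f1).
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                # flat difference quotient; give up
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Illustrative use: solve cos(x) = x, i.e. f(x) = cos(x) - x.
root = secant(lambda x: math.cos(x) - x, 0.5, 1.0)
```
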

Rudolf K. Bock, 7 April 1998