Interpolation polynomials

It is often necessary to replace a real function \(f\) with another, simpler function \(g\) which is, in some sense, close to \(f\). The need for such a replacement arises frequently in practice.

There are many ways to determine a function \(g\) which will "replace" the initial function \(f.\) Most often, \(g\) is sought as a linear combination \(g\left(x\right)= c_1\phi_1\left(x\right)+c_2\phi_2\left(x\right)+\dots+c_n\phi_n\left(x\right)\), where \(\phi_1,\dots,\phi_n\) are some real functions. Depending on the properties of the function \(f\), different systems of functions \(\left\{\phi_n\right\}\) are used. (For example, trigonometric functions are used to approximate a periodic function.) In this text, it will be shown how an arbitrary smooth real function can be approximated by a polynomial. Such a polynomial \(g\) will be determined so that the equality \(f\left(x_i\right)= g\left(x_i\right)\) holds for \(0\le i\le n\), where \(x_0, x_1,\dots,x_n\) are distinct points from the domain of the function \(f\). This way of approximation is called interpolation, and the points \(x_0,x_1,\dots,x_n\) are called interpolation nodes.

Lagrange interpolation polynomial

After constants, the simplest polynomials are the linear ones, that is, polynomials of the form \(a_0+a_1x\). If we know at least two values of the function \(f\), then with the help of a linear polynomial we can make a simple approximation using only elementary-school mathematics. If \(y_0 = f(x_0)\) and \(y_1=f(x_1)\) for some points \(x_0\ne x_1\), then there exists a unique linear function \(L\) such that the equalities \(L(x_0)= y_0\) and \(L(x_1)= y_1\) hold. This function \(L\) is given by the equation \[y=\frac{y_1-y_0}{x_1-x_0}\left(x-x_0\right)+y_0,\] which is obtained by solving the linear system \[y_0=kx_0+m, \qquad y_1=kx_1+m.\]

In some situations, a linear approximation is quite sufficient for our needs:

Example 1. Let's approximate the function \(\log_{10}\left(x\right)\) by a linear function on the interval \(\left [10,100\right]\). Using the formula above with the nodes \(x_0=10\) and \(x_1=100\), we obtain the polynomial \(y=\frac{x}{90}+\frac{8}{9}\). With a little calculus, it can be shown that the error of this approximation is less than \(1/3\) on the interval \(\left [10,100\right]\). ▲
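The stated bound is easy to check numerically. The following sketch (in Python, purely as an illustration; the variable names are ours) evaluates the linear interpolant on a fine grid:

```python
import math

# Linear interpolant of log10 on [10, 100] through (10, 1) and (100, 2):
# y = (x - 10)/90 + 1 = x/90 + 8/9, as derived above.
def L(x):
    return (x - 10.0) / 90.0 + 1.0

# Maximum error on a fine grid; calculus places the true maximum at x = 90/ln(10).
xs = [10 + 90 * k / 10000 for k in range(10001)]
max_err = max(abs(math.log10(x) - L(x)) for x in xs)
print(max_err)  # roughly 0.27, indeed less than 1/3
```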

However, sometimes linear interpolation is not satisfactory:

Example 2. If we approximate the same function \(\log_{10}\left(x\right)\) on the interval \(\left[\frac{1}{10000},10\right]\), using the ends of the interval as interpolation nodes, we obtain a linear function that deviates from \(\log_{10}\) by more than \(4\) at some point of the interval. ▲

Intuition tells us that if we use more information about the function \(f\), that is, if we use more nodes, then the interpolation error should decrease.

Example 3. Let's approximate the function \(\sin\left(x\right)\) on the interval \(\left [0,\pi\right]\) using the function values at the points \(x_0=0, x_1 =\pi/2\) and \(x_2 =\pi\). The function \(\sin\left(x\right)\) takes the values \(0,1\) and \(0\) at these points. The points \(\left(0, 0\right), \left(\pi/2, 1\right)\) and \(\left(\pi, 0\right)\) are not collinear, so the interpolation polynomial cannot be a linear function. Suppose now that the required interpolation polynomial \(L\) is of second degree. Then \(L\) is of the form \(ax^2 + bx + c\) for some real coefficients \(a,b\) and \(c\). It follows from the definition of the interpolation polynomial that the equalities \(\sin\left(0\right)=L\left(0\right), \sin\left(\frac{\pi}{2}\right)= L\left(\frac{\pi}{2}\right)\) and \(\sin\left(\pi\right)= L\left(\pi\right)\) hold, i.e. \[\begin{aligned} 0&=a\cdot0^2+b\cdot0+c \\ 1&=a\left(\frac{\pi}{2}\right)^2+b\frac{\pi}{2}+c\\ 0&=a\pi^2+b\pi+c.\end{aligned}\] This system has the solution \(a = -\frac{4}{\pi^2}, b =\frac{4}{\pi}\) and \(c=0\), so the second degree interpolation polynomial is \[L\left(x\right)= -\frac{4}{\pi^2}x^2 +\frac{4}{\pi}x,\] which approximates the given function with an error less than \(1/10\) on the specified interval. Graphs of the functions \(\sin\left(x\right)\) and \(L(x)\) are shown in Figure 1. ▲

Figure 1. Graphs of the function \(\sin\left(x\right)\) (red line) and the interpolation polynomial \(-\frac{4}{\pi^2}x^2 +\frac{4}{\pi}x\) (dashed black line).

We can proceed analogously to the procedure described in the previous example whenever several interpolation nodes are given. In such a procedure, it is necessary to solve the linear system \[\tag{1}\left[\begin{matrix}f\left(x_0\right)\\f\left(x_1\right)\\\vdots\\f\left(x_n\right)\end{matrix}\right]=\left[\begin{matrix}1 & x_0 & x_0^2 &\dots & x_0^n\\1 & x_1 & x_1^2 &\dots & x_1^n\\\vdots &\vdots &\vdots & &\vdots\\1 & x_n & x_n^2 &\dots & x_n^n\end{matrix}\right]\left [\begin{matrix}a_0\\a_1\\\vdots\\a_n\end{matrix}\right],\] where \(x_0,x_1,x_2,\dots,x_n\) are (distinct) interpolation nodes, and \(a_0,a_1,\dots,a_n\) are the coefficients of the interpolation polynomial. In other words, it is necessary to find the coordinates of the interpolation polynomial in the basis \(\left[1\; x\; x^2\;\dots\; x^n\right]\), if such a polynomial exists.

The reader may recognize the square matrix from system (1) as the so-called Vandermonde matrix of order \(n+1\); it has a non-zero determinant whenever \(x_0, x_1,\dots,x_n\) are mutually distinct numbers. In our case, \(x_0, x_1,\dots,x_n\) are always mutually distinct, so the solution of system (1) always exists and is unique. This shows that for each sequence \(x_0, x_1,\dots,x_n\) of \(n+1\) distinct points there is a unique interpolation polynomial \(L_n\) of degree at most \(n\) which satisfies the equality \(L_n\left(x_i\right)= f\left(x_i\right)\) for \(0\le i\le n\).
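As a sanity check, system (1) can be solved directly with a linear-algebra package. A sketch in Python/NumPy (an illustration with our own names, not part of the text), using the nodes of Example 3:

```python
import numpy as np

# Nodes and values of sin(x) at 0, pi/2, pi, as in Example 3.
x = np.array([0.0, np.pi / 2, np.pi])
y = np.sin(x)

# Vandermonde matrix with rows [1, x_i, x_i^2]; increasing=True orders the
# columns by ascending power, matching the unknowns a_0, a_1, a_2.
V = np.vander(x, increasing=True)
a = np.linalg.solve(V, y)  # expected: a_0 = 0, a_1 = 4/pi, a_2 = -4/pi^2
```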

However, there is a more "natural" procedure for finding the interpolation polynomial \(L\). This procedure is based on a basis of polynomial functions that depends on the choice of interpolation nodes. Let us therefore fix the interpolation nodes \(x_0, x_1,\dots,x_n\). The new basis consists of the polynomials \[l_i\left(x\right)=\prod\limits _{{j = 0\atop j\ne i}}^n\frac{x-x_j}{x_i-x_j}.\] From the definition of the polynomials \(l_i\) it follows that \(l_i\left(x_j\right)= 1\) if \(j=i\), and \(l_i\left(x_j\right)= 0\) otherwise. With this property, it is now easy to determine the interpolation polynomial \(L_n\) as a linear combination of the polynomials \(l_i\): \[L_n\left(x\right)= f\left(x_0\right)l_0\left(x\right)+ f\left(x_1\right)l_1\left(x\right)+\dots + f\left(x_n\right)l_n\left(x\right).\] This definition really makes sense, because if \(x\) equals one of the nodes \(x_i\), then every function \(l_k\) takes the value zero except for the function \(l_i\), which takes the value \(1\). It follows that \(L_n\left(x_i\right)= l_i\left(x_i\right)f(x_i)= f\left(x_i\right)\) for every \(0\le i\le n\). Using the definition of the functions \(l_i\left(x\right)\) we get \[\tag{2}L_n\left(x\right)=\sum\limits_{i = 0}^n\left(f\left(x_i\right)\prod\limits_{j = 0\atop j\ne i}^n\frac{x-x_j}{x_i-x_j}\right).\] The interpolation polynomial, written in the form (2), is called the Lagrange interpolation polynomial.
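Formula (2) translates directly into code. A small Python sketch (ours, for illustration only):

```python
def lagrange_eval(x, X, Y):
    """Value at x of the Lagrange interpolation polynomial (2)
    with nodes X and function values Y."""
    total = 0.0
    for i in range(len(X)):
        # Basis polynomial l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j).
        basis = 1.0
        for j in range(len(X)):
            if j != i:
                basis *= (x - X[j]) / (X[i] - X[j])
        total += Y[i] * basis
    return total
```

For instance, with the nodes \(0,1,2\) and the values of \(f(x)=x^2+x+1\), the polynomial reproduces \(f\) exactly, since \(f\) itself has degree two.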

In the comments after equation (1), it was already shown that the interpolation polynomial of degree at most \(n\) must be unique. But let's prove uniqueness in another way. Suppose there are two different polynomials \(L^1_n\left(x\right)\) and \(L^2_n\left(x\right)\) such that the equation \(L^1_n\left(x_i\right)= f\left(x_i\right)= L^2_n\left(x_i\right)\) holds for \(0\le i\le n\). Then the polynomial \(L^1_n\left(x\right)-L^2_n\left(x\right)\) is not the zero polynomial, has degree not greater than \(n\), and has \(n+1\) zeros. This is impossible by the fundamental theorem of algebra, so the polynomials \(L^1_n\left(x\right)\) and \(L^2_n\left(x\right)\) must be equal.

From all of the above, we conclude that the following theorem holds.

Theorem 1. For any real function \(f\) defined at distinct points \(x_0, x_1,\dots,x_n\), there exists a unique polynomial \(L\left(x\right)\) of degree at most \(n\) which satisfies the equality \(L\left(x_i\right)= f\left(x_i\right)\) for \(0\le i\le n\).

The Lagrange polynomial can be written in a slightly different form. If we set \[\omega_{n + 1}\left(x\right)=\prod\limits_{j = 0}^{n}\left(x-x_j\right),\] then \(\omega_{n + 1}'\left(x\right)=\sum_{k = 0}^{n}\left(\prod _{{j = 0\atop j\ne k}}^{n}\left(x-x_j\right)\right)\), so \(\omega_{n + 1}'\left(x_i\right)=\prod _{{j = 0\atop j\ne i}}^{n}\left(x_i-x_j\right)\). It now follows that the polynomial \(l_i\left(x\right)\) can be written as \[l_i\left(x\right)=\prod\limits _{{j = 0\atop j\ne i}}^n\frac{x-x_j}{x_i-x_j}=\frac{\omega_{n + 1}\left(x\right)}{\left(x-x_i\right)\omega_{n + 1}'\left(x_i\right)},\] so (2) becomes \[\tag{3}L_n\left(x\right)=\sum\limits_{i = 0}^{n}\frac{f\left(x_i\right)\omega_{n + 1}\left(x\right)}{\left(x-x_i\right)\omega_{n + 1}'\left(x_i\right)}.\]

The following theorem gives an estimate of the error of Lagrange interpolation.

Theorem 2. Let \(f\colon\mathbb{R}\rightarrow\mathbb{R}\) be an \(n+1\) times differentiable function, and let \(L_n\left(x\right)\) be the Lagrange interpolation polynomial based on the values of the function \(f\) at the nodes \(x_0, x_1,\dots,x_n\). Then for every \(x\in\mathbb{R}\) there exists a point \(\xi\in\mathbb{R}\) belonging to the minimal interval containing the points \(x_0, x_1,\dots,x_n\) and \(x\), such that \[\tag{4}f\left(x\right)- L_n\left(x\right)=\frac{f^{\left(n + 1\right)}\left(\xi\right)}{\left(n + 1\right)!}\omega_{n + 1}\left(x\right).\]

Proof. Let \(\bar{x}\) be an arbitrary point. If \(\bar{x}= x_i\) for some \(0\le i\le n\), then equation (4) is trivially valid at \(x=\bar{x}\). Suppose therefore that the point \(\bar{x}\) is different from all interpolation nodes. Let \(F\left(x\right)= f\left(x\right)-L_n\left(x\right)-C\omega_{n + 1}\left(x\right)\), where the constant \(C\) is determined so that the function \(F\) has a zero at the point \(\bar{x}\): \[\tag{5}C =\frac{f\left(\bar{x}\right)-L_{n}\left(\bar{x}\right)}{\omega_{n + 1}\left(\bar{x}\right)}.\] The function \(F\) also has zeros at the points \(x_0, x_1,\dots,x_n\). By successive application of Rolle's theorem to the functions \(F,F',F'',\dots,F^{\left(n\right)}\), we conclude that \(F^{\left(n + 1\right)}\) has at least one zero \(\xi\) in the minimal interval containing the points \(x_0, x_1,\dots,x_n, \bar{x}\). From \(L_{n}^{\left(n + 1\right)}\equiv 0 \) and \(\omega_{n + 1}^{\left(n + 1\right)}\equiv\left(n + 1\right)!\) it follows that \[0 = F^{\left(n + 1\right)}\left(\xi\right)= f^{\left(n + 1\right)}\left(\xi\right)-C\left(n + 1\right)!,\] i.e. \[\tag{6}C =\frac{f^{\left(n + 1\right)}\left(\xi\right)}{\left(n + 1\right)!}.\] Equation (4) for \(x=\bar{x}\) now follows from equations (5) and (6). □

As equation (4) shows, the error of the Lagrange interpolation polynomial depends on the number of interpolation nodes, as well as on the derivatives of the function \(f\). However, increasing the number of interpolation nodes does not guarantee that the interpolation error will decrease.

Example 4. Let's find the Lagrange polynomial of the function \(\sin\left(x\right)\) using values at \(0,\frac{\pi}{2}\) and \(\pi\): \[\begin{aligned}L\left(x\right)& =\sin\left(0\right)\frac{\left(x-\frac{\pi}{2}\right)\left(x-\pi\right)}{\left(0-\frac{\pi}{2}\right)\left(0-\pi\right)}+\sin\left(\frac{\pi}{2}\right)\frac{\left(x-0\right)\left(x-\pi\right)}{\left(\frac{\pi}{2}-0\right)\left(\frac{\pi}{2}-\pi\right)}+\sin\left(\pi\right)\frac{\left(x-0\right)\left(x-\frac{\pi}{2}\right)}{\left(\pi-0\right)\left(\pi-\frac{\pi}{2}\right)}\\& =\sin\left(\frac{\pi}{2}\right)\frac{\left(x-0\right)\left(x-\pi\right)}{\left(\frac{\pi}{2}-0\right)\left(\frac{\pi}{2}-\pi\right)}= -\frac{4}{\pi^2}x^2 +\frac{4}{\pi}x.\end{aligned}\] As we can see, we obtained the same polynomial as in Example 3, which should not surprise us because we proved that the interpolation polynomial is unique.

With formula (4), we can estimate the interpolation error on the interval \(\left [0,\pi\right]\): \[\left\lvert \sin\left(x\right) - L\left(x\right)\right\rvert= \left\lvert\frac{ \sin^{\left(3\right)}\left(\xi\right)}{3!} \omega_3\left(x\right) \right\rvert \le \frac{\max_{x\in\left[0,\pi\right]}\left\lvert\cos\left(x\right)\right\rvert}{6}\max_{x\in\left[0,\pi\right]}\left\lvert\omega_{3}\left(x\right)\right\rvert = \frac{1}{6}\cdot\frac{\sqrt{3}\pi^3}{36}\approx0.248632,\] where we used \(\sin^{\left(3\right)}\left(x\right)=-\cos\left(x\right)\). For specific values of \(x\), the expression \(\left\lvert\omega_{3}\left(x\right)\right\rvert\) is smaller than \(\max_{x\in\left[0,\pi\right]}\left\lvert\omega_{3}\left(x\right)\right\rvert\), so it is possible to get a much better error estimate at such points. ▲
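The actual maximum error is, in fact, considerably smaller than this a-priori bound, as a quick numerical check shows (a Python sketch; the names are ours):

```python
import math

# Interpolation polynomial from Examples 3 and 4.
def L(x):
    return -4 / math.pi ** 2 * x ** 2 + 4 / math.pi * x

# Maximum actual error over a fine grid on [0, pi].
xs = [math.pi * k / 10000 for k in range(10001)]
max_err = max(abs(math.sin(x) - L(x)) for x in xs)

# A-priori bound from formula (4): sqrt(3) * pi^3 / 216.
bound = math.sqrt(3) * math.pi ** 3 / 216
```

On this grid the actual maximum error stays below \(0.06\), while the bound is about \(0.2486\).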

With the help of formula (2), it is easy to write, in any programming language, a function that forms the Lagrange polynomial. One such function, written in MATLAB, is given below. The function LagrangePolynomial(X, Y) returns the coefficients of the Lagrange interpolation polynomial formed from the vector of nodes X and the vector of corresponding function values Y. It is assumed that the vectors have the same length and that the vector X does not contain repeated values. The built-in MATLAB function conv is used to multiply two polynomials given by their coefficient vectors.

function [L] = LagrangePolynomial(X, Y)
n = length(X);
L = zeros(1, n);
for i = 1:n
    Q = 1;
    for j = 1:n
        if j ~= i
            Q = conv(Q, [1, -X(j)] / (X(i) - X(j)));
        end
    end
    L = L + Y(i)*Q;
end


If only the value of the interpolation polynomial at a point needs to be returned, then the algorithm looks even simpler.
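For readers working outside MATLAB, the coefficient-building routine above translates almost line for line. A Python/NumPy sketch (our own translation, with np.convolve playing the role of conv):

```python
import numpy as np

def lagrange_coefficients(X, Y):
    """Coefficients (highest power first) of the Lagrange interpolation
    polynomial, built as in LagrangePolynomial: one convolution per
    linear factor (x - X[j]) / (X[i] - X[j])."""
    n = len(X)
    L = np.zeros(n)
    for i in range(n):
        Q = np.array([1.0])
        for j in range(n):
            if j != i:
                Q = np.convolve(Q, np.array([1.0, -X[j]]) / (X[i] - X[j]))
        L = L + Y[i] * Q
    return L
```

On the data of Example 3 this reproduces the coefficients \(-4/\pi^2\), \(4/\pi\) and \(0\).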

Newton's interpolation polynomial

The interpolation polynomial can also be written in the form of the Newton interpolation polynomial, which can be understood as a generalization of the Taylor polynomial: derivatives in the Taylor polynomial correspond to divided differences in the Newton polynomial.

Let distinct nodes \(x_0, x_1,\dots,x_n\) be given. The divided differences of the function \(f\) are defined inductively: the divided difference of order zero is \(f\left[x_i\right]= f\left(x_i\right)\), and the divided difference of order \(k\) is \[f\left[x_i,\dots,x_{i+k}\right]=\frac{f\left[x_{i+1},\dots,x_{i+k}\right]-f\left[x_i,\dots,x_{i+k-1}\right]}{x_{i+k}-x_i}.\]

Schematically, the divided differences can be arranged in a staggered table, in which each entry is computed from the two nearest entries of the column to its left:

\[\begin{array}{llll}
f\left[x_i\right] & & & \\
 & f\left[x_{i},x_{i+1}\right] & & \\
f\left[x_{i+1}\right] & & f\left[x_{i},x_{i+1},x_{i+2}\right] & \\
 & f\left[x_{i+1},x_{i+2}\right] & & f\left[x_{i},x_{i+1},x_{i+2},x_{i+3}\right]\\
f\left[x_{i+2}\right] & & f\left[x_{i+1},x_{i+2},x_{i+3}\right] & \\
 & f\left[x_{i+2},x_{i+3}\right] & & \\
f\left[x_{i+3}\right] & & &
\end{array}\]

With induction, it is easy to prove that the divided difference of order \(k\) is given by the formula \[\tag{7}f\left[x_0,\dots,x_k\right]=\sum\limits_{i=0}^{k}\frac{f\left(x_i\right)}{\prod_{{j=0\atop j\ne i}}^{k}\left(x_i-x_j\right)}.\]

It follows from this formula that the divided difference is a linear operator, that is, \[\left(\alpha f +\beta g\right)\left [x_0,\dots, x_k\right] =\alpha f\left [x_0,\dots, x_k\right] +\beta g\left [x_0,\dots, x_k\right]\] for arbitrary real numbers \(\alpha\) and \(\beta\) and arbitrary real functions \(f\) and \(g\). It also follows from formula (7) that the divided difference is a symmetric function of its arguments (its value does not change if we permute the nodes).

The Leibniz formula for divided differences is also proved by induction: \[\left(fg\right)\left [x_0,\dots, x_k\right] =\sum_{i = 0}^{k}f\left [x_0,\dots, x_{i}\right] g\left [x_{i},\dots, x_k\right].\]

The definition of divided differences can easily be implemented in MATLAB. Below is the code of a function which, from the vector of interpolation nodes X and the vector of corresponding function values Y, returns an upper triangular matrix with all divided differences of order less than n (where n is the length of the vector X). As in the previous algorithm, it is assumed that the vectors X and Y have the same length and that the vector X does not contain repeated values.

function [D] = DividedDifferences(X, Y)
% D(j, i) contains the divided difference f[x_j, ..., x_{j+i-1}] of order i-1.
n = length(X);
D = [Y', zeros(n, n-1)];
for i = 2:n
  for j = 1:n-i+1
    D(j, i) = (D(j+1, i-1) - D(j, i-1))/(X(j+i-1)-X(j));
  end
end
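The same table builds just as easily in Python (a sketch with our own names; here D[j][i] holds the divided difference \(f[x_j,\dots,x_{j+i}]\) of order \(i\)):

```python
def divided_differences(X, Y):
    """Upper-triangular table of divided differences:
    D[j][i] = f[x_j, ..., x_{j+i}] (divided difference of order i)."""
    n = len(X)
    D = [[0.0] * n for _ in range(n)]
    for j in range(n):
        D[j][0] = Y[j]
    for i in range(1, n):
        for j in range(n - i):
            D[j][i] = (D[j + 1][i - 1] - D[j][i - 1]) / (X[j + i] - X[j])
    return D
```

For \(f(x)=x^2\) on the nodes \(0,1,3\), the top row of the table is \(f[x_0]=0\), \(f[x_0,x_1]=1\), \(f[x_0,x_1,x_2]=1\); the second divided difference of a quadratic is its leading coefficient.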

Let us return to interpolation polynomials. To find Newton's interpolation polynomial, we first express the interpolation error using divided differences. Using the form (3) of the Lagrange interpolation polynomial, we obtain \[\begin{aligned} f\left(x\right)-L_{k}\left(x\right)&=f\left(x\right)-\sum\limits_{i=0}^{k}\frac{f\left(x_i\right)\omega_{k+1}\left(x\right)}{\left(x-x_i\right)\omega_{k+1}'\left(x_i\right)}\\ &=\omega_{k+1}\left(x\right)\left(\frac{f\left(x\right)}{\prod_{j=0}^{k}\left(x-x_j\right)}+\sum\limits_{i=0}^{k}\frac{f\left(x_i\right)}{\left(x_i-x\right)\prod_{{j=0\atop j\ne i}}^{k}\left(x_i-x_j\right)}\right).\end{aligned}\] Applying (7) to the nodes \(x, x_0,\dots,x_k\), it follows that \[\tag{8}f\left(x\right)-L_{k}\left(x\right)= f\left [x, x_0,\dots, x_k\right]\omega_{k + 1}\left(x\right).\] Let us now write the Lagrange interpolation polynomial of order \(n\) as \[\tag{9}L_n\left(x\right)=\left(L_n\left(x\right)-L_{n-1}\left(x\right)\right)+\dots +\left(L_{1}\left(x\right)-L_{0}\left(x\right)\right)+ L_0\left(x\right),\] where \(L_k\) is the polynomial determined by the nodes \(x_0,\dots, x_k\). Each of the differences \(L_k\left(x\right)-L_{k-1}\left(x\right)\) is a polynomial of degree at most \(k\) which has zeros at the points \(x_0,\dots, x_{k-1}\). It follows that \[\tag{10}L_k\left(x\right)-L_{k-1}\left(x\right)=\alpha_k\omega_k\left(x\right)\] for some real constant \(\alpha_k\). Substituting the value \(x_k\) into equation (10) we get \[L_k\left(x_k\right)-L_{k-1}\left(x_k\right)= f\left(x_k\right)-L_{k-1}\left(x_k\right)=\alpha_k\omega_k\left(x_k\right),\] which, in comparison with (8) and the symmetry of the divided difference, gives \(\alpha_k = f\left [x_0,\dots, x_k\right]\).
Substituting into (10), we have \(L_k\left(x\right)-L_{k-1}\left(x\right)=f\left[x_0,\dots,x_k\right]\omega_k\left(x\right)\), so, based on (9), \[\tag{11}L_n\left(x\right)= f\left [x_0\right] + f\left [x_0, x_1\right]\left(x-x_0\right)+\dots + f\left [ x_0,\dots, x_n\right]\left(x-x_0\right)\cdots\left(x-x_{n-1}\right).\] An interpolation polynomial written in the form (11) is called a Newton interpolation polynomial.

Comparing expressions (4) and (8), we conclude that the connection between divided differences and derivatives is given by \[f\left [x_0,\dots, x_n, x\right] =\frac{f^{\left(n + 1\right)}\left(\xi\right)}{\left(n + 1\right)!},\] where \(\xi\) belongs to the minimal interval containing the numbers \(x_0, x_1,\dots,x_n\) and \(x\).

The reader should keep in mind that the Lagrange and Newton interpolation polynomials are exactly the same polynomials, only written in different forms. For specific interpolation nodes \(x_i\) and corresponding values \(f(x_i)\), these two forms are reduced to exactly the same polynomial functions.

Using the MATLAB function for calculating divided differences described earlier, we can easily construct a MATLAB function that calculates the value of the interpolation polynomial at a point \(x\) using the Newton form (11).

function [y] = NewtonInterpolation(x, X, Y)
n = length(X);
D = DividedDifferences(X, Y);
y = 0;
Q = 1;
for i = 1:n
  y = y + Q * D(1, i);
  Q = Q * (x - X(i));
end

The coefficients of the interpolation polynomial can be calculated by a slight modification of the above algorithm.
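One way to carry out such a modification, sketched here in Python rather than MATLAB (the function names are ours), is to run the divided-difference sweep and then accumulate the Newton factors \((x-x_0)\cdots(x-x_{i-1})\) as coefficient vectors:

```python
import numpy as np

def newton_to_monomial(X, Y):
    """Monomial coefficients (highest power first) of the Newton polynomial (11)."""
    n = len(X)
    # In-place sweep: coef[i] becomes f[x_0, ..., x_i].
    coef = list(Y)
    for i in range(1, n):
        for j in range(n - 1, i - 1, -1):
            coef[j] = (coef[j] - coef[j - 1]) / (X[j] - X[j - i])
    # Accumulate coef[i] * (x - x_0) ... (x - x_{i-1}) as coefficient vectors,
    # aligned so that the constant term sits at the end of the result.
    result = np.zeros(n)
    Q = np.array([1.0])
    for i in range(n):
        result[n - len(Q):] += coef[i] * Q
        Q = np.convolve(Q, [1.0, -X[i]])
    return result
```

For \(f(x)=x^2+x+1\) on the nodes \(0,1,2\), this returns the coefficients \([1, 1, 1]\).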

To conclude this section, let us give an example of the construction of Newton's interpolation polynomial.

Example 5. Let's interpolate the function \(f\left(x\right)=\sin\left(\frac{\pi x}{2}\right)+0.2e^{-0.2x}\sin\left(2\pi x+1\right)\) using the interpolation nodes \(-2, -1.5, -1, 0\) and \(2\). The divided differences table looks like this:

\[\begin{array}{cccccc}
x_i & f\left(x_i\right) & f\left[x_i,x_{i+1}\right] & f\left[x_i,x_{i+1},x_{i+2}\right] & f\left[x_i,\dots,x_{i+3}\right] & f\left[x_i,\dots,x_{i+4}\right]\\
-2 & 0.251 & & & & \\
 & & -2.371 & & & \\
-1.5 & -0.934 & & 2.65 & & \\
 & & 0.28 & & -1.097 & \\
-1 & -0.794 & & 0.455 & & 0.218\\
 & & 0.963 & & -0.224 & \\
0 & 0.168 & & -0.33 & & \\
 & & -0.028 & & & \\
2 & 0.113 & & & &
\end{array}\]

Using (11) we get \[\begin{aligned}L_{4}(x)& = 0.251-2.371\left(x + 2\right)+2.65\left(x + 2\right)\left(x + 1.5\right)-1.097\left(x + 2\right)\left(x + 1.5\right)\left(x + 1\right)\\&\quad + 0.218\left(x + 2\right)\left(x + 1.5\right)\left(x + 1\right)\left(x-0\right)\\& = 0.218x^4-0.116x^3-0.8695x^2 + 0.4285x + 0.17.\end{aligned}\]

Figure 2

As we can see from Figure 2, the interpolation error \(\left\lvert f\left(x\right)-L_4\left(x\right)\right\rvert\) is relatively large for some values of the variable \(x\). Therefore, let us add another interpolation node, \(x_5 = 1\). By calculating the five new divided differences, we get \(f\left [x_0, x_1, x_2, x_3, x_4, x_5\right] = -0.08741\), so (11) gives us \[\begin{aligned}L_5\left(x\right)&=L_4\left(x\right)+f\left[x_0,x_1,x_2,x_3,x_4,x_5\right]\left(x-x_0\right)\left(x-x_1\right)\left(x-x_2\right)\left(x-x_3\right)\left(x-x_4\right)\\ &=L_4\left(x\right)-0.08741\left(x+2\right)\left(x+1.5\right)\left(x+1\right)\left(x-0\right)\left(x-2\right)\\ &=-0.08741x^5-0.000135x^4+0.10362x^3+0.004165x^2+0.94984x+0.16802.\end{aligned}\] By adding one node, the interpolation error is reduced, as can be seen from Figure 3.

Figure 3

More interpolation nodes can be added by an analogous procedure. Note, however, that adding more nodes does not guarantee a reduction of the interpolation error. ▲

As can be seen from the previous example and formula (11), when an interpolation node is added, the Newton polynomial does not need to be recalculated from scratch; it suffices to calculate a few new divided differences and the last term in formula (11). In contrast, when a node is added to the Lagrange form, the interpolation polynomial needs to be completely recalculated. Therefore, Newton's interpolation polynomial is used when the required number of interpolation nodes is not known in advance.
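This incremental property is easy to see in code. In the following Python sketch (names ours), the divided-difference coefficients of the old nodes are unchanged when a node is appended, so only one new coefficient and one new term are needed:

```python
def newton_coeffs(X, Y):
    """Coefficients f[x_0], f[x_0,x_1], ... of the Newton form (11),
    computed by an in-place divided-difference sweep."""
    coef = list(Y)
    n = len(X)
    for i in range(1, n):
        for j in range(n - 1, i - 1, -1):
            coef[j] = (coef[j] - coef[j - 1]) / (X[j] - X[j - i])
    return coef

# Interpolate f(x) = x^3 on three nodes, then append a fourth node.
X3, Y3 = [0.0, 1.0, 3.0], [0.0, 1.0, 27.0]
X4, Y4 = X3 + [2.0], Y3 + [8.0]
c3 = newton_coeffs(X3, Y3)
c4 = newton_coeffs(X4, Y4)
# The first three coefficients coincide: only the top divided difference
# f[x_0, x_1, x_2, x_3] is new.
```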

Interpolation polynomial with evenly spaced nodes

In the Lagrange and Newton interpolation polynomials, the order and relative position of the interpolation nodes were not important (all that mattered was that all nodes were distinct). In the case when the interpolation nodes are evenly spaced, the interpolation polynomial can be written in several different forms based on Newton's interpolation polynomial. However, instead of divided differences, these forms use finite differences. In the following, we will use two types of finite differences. Each type is defined inductively, just like the divided difference:

The forward and backward differences of the first order are defined as differences of values of the function \(f\) at the nodes: \(\Delta^{1}f_{i}=\nabla^{1}f_{i + 1}= f(x_{i + 1})- f(x_{i})\). Differences of higher orders are defined inductively: \(\Delta^{k+1}f_i =\Delta^{k}f_{i+1}-\Delta^{k}f_i\) and \(\nabla^{k+1}f_i =\nabla^{k}f_{i}-\nabla^{k}f_{i-1}\).

Directly from the definition, we get the connection between forward and backward differences formed over the same set of nodes: \(\Delta^k f_i =\nabla^{k}f_{i + k}\).

Using induction, it is easy to derive a formula that expresses the finite difference of order \(k\) through the values of the function.

Lemma 1. Finite differences of order \(k\) can be calculated using a formula \[\Delta^{k}f_i =\sum\limits_{j=0}^{k}\left(-1\right)^{j}{k\choose j}f_{i+k-j}.\]

Proof. We will prove the statement by mathematical induction. For \(k = 1\), the stated formula comes down to the definition of a finite difference. Suppose, therefore, that the formula holds for finite differences of order at most \(k\). It follows that \[\begin{aligned}\Delta^{k+1}f_i&=\Delta^{k}f_{i+1}-\Delta^{k}f_i \\ &=\sum\limits_{j=0}^{k}\left(-1\right)^{j}{k\choose j}f_{i+k+1-j} - \sum\limits_{j=0}^{k}\left(-1\right)^{j}{k\choose j}f_{i+k-j} \\ &= {k \choose 0}f_{i+k+1}+\sum\limits_{j=0}^{k-1}\left(-1\right)^{j+1}{k\choose j+1}f_{i+k-j} - \sum\limits_{j=0}^{k-1}\left(-1\right)^{j}{k\choose j}f_{i+k-j}-\left(-1\right)^{k}{k \choose k}f_{i}\\ &= {k+1 \choose 0}f_{i+k+1}+\sum\limits_{j=1}^{k}\left(-1\right)^{j}{k+1\choose j}f_{i+k+1-j}+\left(-1\right)^{k+1}{k+1 \choose k+1}f_{i}\\ &=\sum\limits_{j=0}^{k+1}\left(-1\right)^{j}{k+1\choose j}f_{i+k+1-j}. \end{aligned}\] The fourth equality holds because \((-1)^{j + 1}{k\choose j + 1}-(-1)^j{k\choose j}=(-1)^{j + 1}{k + 1\choose j + 1}\). □
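Lemma 1 is easy to confirm numerically. A Python sketch (function names ours) compares the inductive definition with the closed formula:

```python
from math import comb

def forward_diff(f_values, k, i):
    """Delta^k f_i computed from the inductive definition."""
    if k == 0:
        return f_values[i]
    return forward_diff(f_values, k - 1, i + 1) - forward_diff(f_values, k - 1, i)

def forward_diff_binomial(f_values, k, i):
    """Delta^k f_i computed from the closed formula of Lemma 1."""
    return sum((-1) ** j * comb(k, j) * f_values[i + k - j] for j in range(k + 1))
```

For the sequence \(f_i = i^3\), both give, for instance, \(\Delta^3 f_0 = 3! = 6\).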

Lemma 2. If we set \(x_i = x_0 + ih\), then \[\tag{12}f\left [x_i,\dots, x_{i + k}\right] =\frac{\Delta^{k}f_i}{h^kk!}.\]

Proof. We prove the statement by induction. For \(k=1\), the claim holds by the definitions of the divided and the finite difference. Suppose, therefore, that the statement is true for all divided differences of order not greater than \(k\). For \(f\left [x_i,\dots, x_{i + k + 1}\right]\) we have \[\begin{aligned} f\left[x_i,\dots,x_{i+k+1}\right] &= \frac{f\left[x_{i+1},\dots,x_{i+k+1}\right]-f\left[x_{i},\dots,x_{i+k}\right]}{x_{i+k+1}-x_{i}} \\ &= \frac{1}{(k+1)h}\left(\frac{\Delta^{k}f_{i+1}}{h^kk!}-\frac{\Delta^{k}f_{i}}{h^kk!}\right)=\frac{\Delta^{k+1}f_{i}}{h^{k+1}\left(k+1\right)!}. \end{aligned}\] □

If the interpolation nodes are evenly spaced, then the interpolation polynomial can be written in several different forms in which finite differences appear. Suppose, therefore, that we are given evenly spaced interpolation nodes and a point \(x\), belonging to the smallest interval containing all the nodes, at which we want to calculate the approximate value of the function. Denote by \(x_0\) a node such that \(x\) belongs to the interval \(\left [x_0-h, x_0 + h\right]\), where \(h\) is the spacing between consecutive nodes. Nodes greater than \(x_0\) are denoted by the indices \(1,2,3,\dots\), and nodes less than \(x_0\) by the indices \(-1, -2, -3,\dots\), so that the nodes are arranged in increasing order along the real line.


The formula \(x_i = x_0 + ih\) applies to this arrangement of nodes. Let's also introduce a new variable \(q\in\left(-1, 1\right)\) such that \[\tag{13}x = x_0 + qh.\]

If all nodes have a positive index, then polynomial (11) can be written in the form of Newton's interpolation polynomial for forward interpolation: \[\tag{14}L(x_0 + qh)= f_0 +\frac{q}{1!}\Delta^1 f_0 +\frac{q(q-1)}{2!}\Delta^2 f_0 +\dots +\frac{q\left(q-1\right)\dots\left(q-n + 1\right)}{n!}\Delta^n f_0.\] This form is derived from equation (12) and the equality \(\omega_k\left(x_0 + qh\right)= h^k\prod_{i = 0}^{k-1}(q-i)\), which follows from equation (13).
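Formula (14) can be sketched in a few lines of Python (names ours); for a polynomial of low degree sampled at equally spaced nodes, the formula reproduces it exactly:

```python
def newton_forward(q, f_values):
    """Evaluate formula (14): nodes x_i = x_0 + i*h, point x = x_0 + q*h,
    given the function values f_values = [f_0, f_1, ...]."""
    n = len(f_values)
    # Forward differences Delta^k f_0, built by repeated differencing.
    row = list(f_values)
    delta0 = [row[0]]
    for _ in range(1, n):
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        delta0.append(row[0])
    # Accumulate f_0 + q*Delta f_0 + q(q-1)/2! * Delta^2 f_0 + ...
    total = 0.0
    factor = 1.0
    for k in range(n):
        total += factor * delta0[k]
        factor *= (q - k) / (k + 1)
    return total
```

With \(f(x)=x^2\) sampled at \(0, 0.5, 1, 1.5\) (so \(h=0.5\)), the value at \(q=1.5\), i.e. \(x=0.75\), is \(0.5625=0.75^2\).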

If all nodes have a negative index, then Newton's polynomial (11) can be formed by introducing the interpolation nodes in the order \(x_0, x_{-1},x _{-2},\dots,x_{-n}\). By using (12) and (13) again, we obtain the Newton interpolation polynomial for backward interpolation: \[\tag{15}L(x_0 + qh)= f_0 +\frac{q}{1!}\Delta^{1}f _{- 1}+\frac{q\left(q + 1\right)}{2 !}\Delta^2 f _{-2}+\dots +\frac{q\left(q + 1\right)\dots\left(q + n-1\right)}{n!}\Delta^n f_{-n}.\]

Greater interpolation accuracy is achieved if nodes on both sides of the point \(x\) are taken into account. Suppose that we know the values of the function \(f\) at the \(2n + 2\) nodes \(x _{-n},\dots, x_{n+1}\), and suppose that we want to approximate \(f\) at some point \(x\) which belongs to the interval \(\left [x_0, x_0 +\frac{h}{2}\right]\). Then, by forming the Newton polynomial (11) using the sequence of nodes \(x_0, x_1, x _{- 1}, x_2, x_{-2},\dots, x_n, x_{-n}, x_{n + 1}\) (the order of the nodes is determined by their distance from \(x_0\)), and then using the connections (12) and (13), we obtain the Gaussian interpolation polynomial for forward interpolation: \[\tag{16}L(x_0+qh)=f_0+\frac{q}{1!}\Delta^{1} f_{0}+\frac{q\left(q-1\right)}{2!}\Delta^2 f_{-1}+\dots+\frac{q\left(q^2-1\right)\dots\left(q^2-n^2\right)}{\left(2n+1\right)!}\Delta^{2n+1} f_{-n}.\]

If \(x\in\left [x_0 +\frac{h}{2}, x_1\right)\), then we form the polynomial (11) by introducing the sequence of nodes \(x_1, x_0, x_2, x _{- 1},\dots, x_{n + 1}, x _{- n}\). Again using (12) and (13), we obtain the first Gaussian interpolation polynomial for backward interpolation: \[\tag{17}L(x_0 + qh)= f_1 +\frac{q-1}{1!}\Delta^{1}f_{0}+\dots +\frac{q\left(q^2-1\right)\dots\left(q^2-(n-1)^2\right)(q-n)(q-(n + 1))}{\left(2n + 1\right)!}\Delta^{2n + 1}f_{-n}.\]

If \(x\in\left [x_0-\frac{h}{2},x_0\right)\), then the nodes \(x_0,x _{-1},x_1,x_{-2},\dots,x_{n},x_{-(n+1)}\) are introduced in that order into the polynomial (11), and by a procedure analogous to that for the previous polynomials we obtain the second Gaussian interpolation polynomial for backward interpolation: \[\tag{18}L(x_0 + qh)= f_0 +\frac{q}{1!}\Delta^{1}f _{- 1}+\frac{q\left(q + 1\right)}{2!}\Delta^2 f _{- 1}+\dots +\frac{q\left(q^2-1\right)\dots\left(q^2-n^2\right)}{\left(2n + 1\right)!}\Delta^{2n + 1}f _{-(n + 1)}.\]

The arithmetic mean of polynomials (16) and (17) is the Bessel interpolation polynomial, which is used when \(0.25\le q\le 0.75\). The arithmetic mean of polynomials (16) and (18) is the Stirling interpolation polynomial, which is used when \(\left\lvert q\right\rvert\le 0.25\).

Formula (4) can be used to estimate the interpolation error of the polynomials mentioned above.

Interpolation error

Let \(f\colon I\rightarrow\mathbb{R}\) be an arbitrary function. We define the uniform norm \(\left\lVert f\right\rVert_{\infty}\) of the function \(f\) as \[\left\lVert f\right\rVert _{\infty}=\sup_{x\in I}\left\lvert f\left(x\right)\right\rvert.\] If the function \(f\) is continuous and \(I\) is a compact set (e.g. \(I=[a, b]\)), then the supremum is attained, so we can write \(\left\lVert f\right\rVert _{\infty}=\max_{x\in I}\left\lvert f\left(x\right)\right\rvert\).

The uniform norm describes how much a function deviates from the zero function. Consequently, the uniform norm of the difference of two functions, \(\left\lVert f-g\right\rVert _{\infty}\), is one way to describe how much the two functions deviate from each other, that is, one way to define the distance between the functions \(f\) and \(g\).

Let \(f\) be an arbitrary function and \(L_n\) the corresponding interpolation polynomial of degree at most \(n\). We will analyse \(\left\lVert f-L_n\right\rVert _{\infty}\), which tells us how well the interpolation polynomial "mimics" the function \(f\). If \(\left\lVert f-L_n\right\rVert _{\infty}\rightarrow 0\) when \(n\rightarrow\infty\), then we say that the interpolation polynomial converges uniformly to the function \(f\). In this section, we will determine sufficient conditions for uniform convergence of the interpolation polynomial.

It is quite easy to find examples of functions \(f\) which "deviate" from the corresponding interpolation polynomial \(L_n\) regardless of the choice of the number and position of the interpolation nodes (e.g. discontinuous functions, functions with horizontal asymptotes, etc.). Therefore, no estimate of \(\left\lVert f-L_n\right\rVert_{\infty}\) can be expected without imposing additional conditions on the function \(f\). If \(f\) is smooth enough, then Theorem 2 guarantees formula (4): \[f\left(x\right) - L_n\left(x\right) = \frac{f^{\left(n+1\right)}\left(\xi\right)} {\left(n+1\right)!}\omega_{n+1}\left(x\right),\] where \(\xi\) depends on \(x\) and belongs to the minimal interval that contains all interpolation nodes and the point \(x\). By taking absolute values and maximizing over \(\xi\) and \(x\) on both sides of the above equation, we obtain the bound on the interpolation error \[\tag{19}\left\lVert f-L_n\right\rVert_{\infty}=\max_{x\in\left [a, b\right]}\left\lvert f\left(x\right)-L_n\left(x\right)\right\rvert\le\frac{\max_{x\in\left [a, b\right]}\left\lvert f^{\left(n+1\right)}\left(x\right)\right\rvert}{\left(n+1\right)!}\max_{x\in\left[a,b\right]}\left\lvert\omega_{n+1}\left(x\right)\right\rvert,\] where \(\left [a, b\right]\) is the smallest interval containing all the points \(x_0,\dots,x_n\).

The following question now arises: are there conditions ensuring that \(\left\lVert f-L_n\right\rVert _{\infty}\) tends to zero as the number of interpolation nodes increases? The German mathematician Carl Runge showed by a simple example that even infinite smoothness of the function \(f\) does not provide an affirmative answer to this question:

Example 6. For the infinitely differentiable function \(f\left(x\right)=\left(1 + x^2\right)^{- 2}\) it holds that \(\left\lVert f-L_n\right\rVert _{\infty}\rightarrow\infty\) as \(n\rightarrow\infty\), where \(L_n\) is the interpolation polynomial formed from the values of the function \(f\) at \(n+1\) evenly spaced nodes in the interval \(\left [-5,5\right]\).

Figure 4. Graph of the function \(f\left(x\right)=\left(1 + x^2\right)^{- 2}\) (red line), and the corresponding interpolation polynomial \(L_{11}\) (dashed line) formed on the basis of evenly distributed nodes \(x = -5, -4,\dots, 5\).

By plotting several more interpolation polynomials formed in this way, one can observe that increasing the number of interpolation nodes makes the interpolation error at the ends of the interval \(\left[-5, 5\right]\) grow uncontrollably, while the interpolation error in the middle of the interval decreases. This phenomenon is called the Runge phenomenon. A precise proof of these claims can be found in the literature. ▲
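The Runge phenomenon can be reproduced numerically. In the sketch below (the node counts and the evaluation grid are chosen only for illustration), the maximum error of interpolation in equally spaced nodes on \(\left[-5,5\right]\) is computed for several values of \(n\); the error grows as \(n\) increases.

```python
import math

def lagrange(nodes, values, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        term = values[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda x: (1 + x * x) ** -2
grid = [-5 + 10 * k / 2000 for k in range(2001)]

errors = {}
for n in (4, 10, 20):
    nodes = [-5 + 10 * i / n for i in range(n + 1)]   # equally spaced nodes
    values = [f(x) for x in nodes]
    errors[n] = max(abs(f(x) - lagrange(nodes, values, x)) for x in grid)

print(errors)  # the maximum error grows with the number of nodes
```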

However, if the function \(f\) is analytic in a large enough region, then \(\left\lVert f-L_n\right\rVert _{\infty}\rightarrow 0\) as \(n\rightarrow\infty\) regardless of the choice of interpolation nodes. More precisely, suppose that the function \(f\) is analytic on the disk of radius \(R\) centered at the midpoint of the interval \(\left [a, b\right]\). Suppose also that the radius of the disk is large enough that the whole interval \(\left[a, b\right]\) lies inside it, that is, that \(R\ge\frac{1}{2}\left(b-a\right)\). Then, for every natural number \(m\), Cauchy's inequality gives \[\tag{20}\left\lvert f^{\left(m\right)}\left(x\right)\right\rvert\le\frac{m!}{2\pi}\oint\limits_{C_R}\left\lvert\frac{f\left(z\right)}{\left(z-x\right)^{m+1}}\right\rvert dz\le \frac{m!}{2\pi}\frac{\max_{z\in C_R} \left\lvert f\left(z\right) \right\rvert}{(R-\frac{1}{2}\left(b-a\right))^{m+1}} 2R\pi,\] where \(C_R\) is the circle of radius \(R\) around the midpoint of the interval \(\left[a, b\right]\). By substituting (20) for \(m = n + 1\) into (19), and by noticing that \[\tag{21}\left\lvert\omega_{n+1}\left(x\right)\right\rvert=\left\lvert\left(x-x_0\right)\left(x-x_1\right)\dots\left(x-x_{n}\right)\right\rvert\le(b-a)^{n+1}\] for every \(x\in [a, b]\), we get that \[\tag{22}\left\lVert f - L_n \right\rVert_{\infty} \le \frac{R\cdot\max_{z\in C_R} \left\lvert f\left(z\right)\right\rvert}{R-\frac{1}{2}\left(b-a\right)} \left(\frac{b-a}{R-\frac{1}{2}\left(b-a\right)}\right)^{n+1}.\] The first factor on the right-hand side of inequality (22) is constant for fixed \(R\), while the second tends to zero if \(b-a\lt R-\frac{1}{2}\left(b-a\right)\), that is, if \(R\gt\frac{3}{2}(b-a)\). This proves the following theorem:

Theorem 4. Let \(f\) be analytic on a disk of radius \(R\) centered at the point \(\frac{1}{2}(b + a)\), and let the radius \(R\) be large enough that the inequality \(R\gt\frac{3}{2}(b-a)\) holds. Then the interpolation polynomials \(L_n\) converge uniformly to \(f\) on the interval \(\left[a,b\right]\) regardless of the choice of interpolation nodes.

The stated theorem and its proof make it partially clear why the Runge phenomenon occurred in Example 6. Namely, the function \(f\left(x\right)=\left(1 + x^2\right)^{- 2}\) has poles at the points \(i\) and \(-i\), so the previous theorem cannot be applied to the whole interval \(\left [-5, 5\right]\).

The estimates used in the proof of Theorem 4, especially inequality (21), are crude. If the interpolation nodes are fixed, then the expression \(\left\lVert\omega_{n + 1}\right\rVert_\infty\) can be bounded much more precisely. It turns out that there is a unique distribution of nodes on the interval \([a, b]\) for which the expression \(\left\lVert\omega_{n + 1}\right\rVert_\infty\) is smallest. These nodes are called Chebyshev nodes, after the Russian mathematician Pafnuty Lvovich Chebyshev.

In the following, we define the Chebyshev nodes. Without loss of generality, we can define the Chebyshev nodes only on the interval \([-1,1]\); Chebyshev nodes on an arbitrary interval are then obtained by a simple linear transformation. Let us first define the Chebyshev polynomial of degree \(n\) as the polynomial \(T_n\) which satisfies the equality \[\tag{23}\cos n\theta = T_n\left(\cos\theta\right).\] Using the identity \(\cos\left(k + 1\right)\theta +\cos\left(k-1\right)\theta = 2\cos\theta\cos k\theta\), it follows that \[\tag{24}\cos\left(k + 1\right)\theta = 2\cos\theta\cos k\theta -\cos\left(k-1\right)\theta,\] which, applied inductively, shows that \(\cos n\theta\) can be expressed as a polynomial of degree \(n\) in \(\cos\theta\). Hence definition (23) is well posed.

If we put \(x =\cos\theta\), it follows from (23) and (24) that the Chebyshev polynomials are defined by the recurrence relation \[\begin{aligned}T_{0}\left(x\right)&=1\\T_{1}\left(x\right)&=x\\T_{k+1}\left(x\right)&=2xT_{k}\left(x\right)-T_{k-1}\left(x\right)\quad\text{for } k\ge1.\end{aligned}\] From the stated recurrence relation it can be seen that the leading coefficient of \(T_n\) is \(2^{n-1}\) for \(n\ge1\). That is why we define the monic Chebyshev polynomials \(t_n\) as \(t_n\left(x\right)=\frac{1}{2^{n-1}}T_n\left(x\right)\) for \(n\ge1\), and \(t_0\left(x\right)= T_0\left(x\right)\).
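The recurrence and the defining property (23) are easy to verify in code. The sketch below (the test angle is arbitrary) builds the coefficient list of \(T_n\) from the recurrence and checks that \(T_5\left(\cos\theta\right)=\cos 5\theta\).

```python
import math

def chebyshev_coeffs(n):
    """Coefficients of T_n, lowest power first, from the recurrence
    T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_curr = [1.0], [0.0, 1.0]        # T_0 = 1, T_1 = x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        # multiplying by 2x shifts coefficients up one power and doubles them
        t_next = [0.0] + [2.0 * c for c in t_curr]
        for i, c in enumerate(t_prev):
            t_next[i] -= c
        t_prev, t_curr = t_curr, t_next
    return t_curr

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

T5 = chebyshev_coeffs(5)
print(T5)                  # T_5(x) = 16x^5 - 20x^3 + 5x, leading coefficient 2^4
theta = 0.7                # arbitrary test angle
print(poly_eval(T5, math.cos(theta)), math.cos(5 * theta))  # the two values agree
```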

Two important properties of the Chebyshev polynomials follow directly from formula (23). First, \(\left\lVert T_n\right\rVert_\infty = 1\) for every natural number \(n\), and consequently \(\left\lVert t_n\right\rVert_\infty =\frac{1}{2^{n-1}}\left\lVert T_n\right\rVert_\infty =\frac{1}{2^{n-1}}\). Second, the equality \(\cos n\theta = 0\) holds if and only if \(n\theta =\left(2k-1\right)\pi / 2\) for some integer \(k\), so the zeros of the Chebyshev polynomial \(T_n\) are given by \[\tag{25}x_k =\cos\left(\frac{2k-1}{2n}\pi\right)\quad\text{for}\;1\le k\le n.\] Obviously these are also the zeros of the monic Chebyshev polynomial \(t_n\). Therefore, all zeros of the (monic) Chebyshev polynomial are distinct, real, and belong to the interval \(\left(-1, 1\right)\). For a given natural number \(n\), we define the Chebyshev nodes of order \(n\) as the set of zeros of the polynomial \(T_n\).

Figure 5. From formula (25) it follows that the Chebyshev nodes are the projections onto the \(x\)-axis of \(n\) evenly distributed points on the upper unit semicircle.

Let us return to the problem of reducing the interpolation error. If we take the Chebyshev nodes of order \(n + 1\) as interpolation nodes, then the polynomial \(\omega_{n + 1}\) in expression (4) will be the monic Chebyshev polynomial \(t_{n + 1}\). Using the fact that \(\left\lVert t_{n + 1}\right\rVert_\infty =\frac{1}{2^{n}}\) together with (19), we get the inequality \[\left\lVert f - L_n\right\rVert _{\infty}\le\frac{\max_{x\in\left [-1, 1\right]}\left\lvert f^{\left(n + 1\right)}\left(x\right)\right\rvert}{2^{n}\left(n + 1\right)!},\] which is quite sharp. Naturally, the question now arises whether an even smaller value of \(\left\lVert\omega_{n + 1}\right\rVert_\infty\) can be achieved for some other choice of nodes. The following theorem gives a negative answer to this question.
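The gain from the Chebyshev nodes can be quantified numerically. In the sketch below (the degree \(n=11\) and the grid are chosen only for illustration), \(\max\left\lvert\omega_{n+1}\right\rvert\) on \(\left[-1,1\right]\) is compared for equally spaced nodes and for the Chebyshev nodes of order \(n+1\); for the latter, the maximum equals \(2^{-n}\) up to rounding.

```python
import math

def max_omega(nodes, grid):
    """Maximum of |omega(x)| = |prod (x - x_k)| over the grid."""
    return max(abs(math.prod(x - xi for xi in nodes)) for x in grid)

n = 11                                          # n + 1 = 12 nodes
grid = [-1 + 2 * k / 5000 for k in range(5001)]

equi = [-1 + 2 * i / n for i in range(n + 1)]
# Chebyshev nodes of order n + 1, formula (25)
cheb = [math.cos((2 * k - 1) * math.pi / (2 * (n + 1))) for k in range(1, n + 2)]

equi_max = max_omega(equi, grid)
cheb_max = max_omega(cheb, grid)
print(equi_max)   # noticeably larger
print(cheb_max)   # close to 2**(-11), about 4.88e-04
```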

Theorem 5. For an arbitrary monic polynomial \(P_n\) of degree \(n\ge1\), it holds that \(\left\lVert P_n\right\rVert _{\infty}\ge\left\lVert t_n\right\rVert _{\infty}=\frac{1}{2^{n-1}}\), where \(t_n\) is the monic Chebyshev polynomial of degree \(n\).

Proof. Suppose, on the contrary, that there exists a monic polynomial \(P_n\) of degree \(n\) for which \(\max _{- 1\le x\le1}\left\lvert P_n\left(x\right)\right\rvert =\left\lVert P_n\right\rVert _{\infty}\lt\frac{1}{2^{n-1}}\). At the \(n+1\) points \(y_k=\cos\left(k\pi/n\right)\), \(0\le k\le n\), the polynomial \(t_n\) alternately attains the values \(\pm\frac{1}{2^{n-1}}\), so the polynomial \(t_n-P_n\) has the same sign as \(t_n\) at these points. Hence \(t_n -P_n\) changes sign at least \(n\) times and therefore has at least \(n\) zeros. On the other hand, \(t_n-P_n\) is the difference of two monic polynomials of degree \(n\), so its degree is strictly less than \(n\), which implies that \(t_n -P_n\) is identically equal to zero. This contradicts the fact that \(t_n -P_n\) changes sign at least \(n\) times. So the assumption \(\left\lVert P_n\right\rVert_{\infty}\lt\frac{1}{2^{n-1}}\) cannot be true. □

Figure 6. Graph of the function \(f\left(x\right)=\left(1 + x^2\right)^{- 2}\), and the corresponding interpolation polynomial \(L_{11}\) formed using Chebyshev nodes of the 14th order. Compare with Figure 4.

It is possible to prove that the interpolation polynomials formed with Chebyshev nodes converge to the function \(f\) uniformly, assuming that \(f\) belongs to the class \(C^1\). The proof is not elementary, so we omit it.
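Although we omit the general proof, convergence for the function from Example 6 can be illustrated numerically. In the sketch below (node counts chosen only for illustration), \(f\left(x\right)=\left(1+x^2\right)^{-2}\) is interpolated in Chebyshev nodes mapped to \(\left[-5,5\right]\); in contrast to the equispaced case, the maximum error now decreases as the number of nodes grows.

```python
import math

def lagrange(nodes, values, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        term = values[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda x: (1 + x * x) ** -2
grid = [-5 + 10 * k / 2000 for k in range(2001)]

errors = {}
for n in (8, 16, 32):
    # Chebyshev nodes of order n + 1 on [-1, 1], scaled to [-5, 5]
    nodes = [5 * math.cos((2 * k - 1) * math.pi / (2 * (n + 1)))
             for k in range(1, n + 2)]
    values = [f(x) for x in nodes]
    errors[n] = max(abs(f(x) - lagrange(nodes, values, x)) for x in grid)

print(errors)  # the maximum error shrinks as n grows
```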

Hermite interpolation polynomial

The interpolation polynomial \(L_n\) is constructed so that the conditions \(L_n\left(x_i\right)= f\left(x_i\right)\) hold for \(0\le i\le n\). However, often a little more is known than just the values of the function at the nodes. For example, the first few derivatives at the interpolation nodes may also be known. By using this additional information about the derivatives of the function \(f\), it is possible to construct a polynomial that approximates the function \(f\) better than the ordinary interpolation polynomial.

The Hermite interpolation polynomial \(H_n\) is a polynomial of degree at most \(n\) such that the following \(n+1\) conditions hold: \[\tag{26}H_n^{\left(j\right)}\left(x_i\right)= f^{\left(j\right)}\left(x_i\right)\quad\text{for}\;0\le j\le n_i-1,\, 0\le i\le m,\] where \(n =\sum_{i=0}^{m}n_i-1\).

For fixed conditions (26), the Hermite polynomial \(H_n\) must be unique. Namely, if there were another polynomial \(\bar{H}_n\) satisfying conditions (26), then \(H_n-\bar{H}_n\) would be a polynomial of degree at most \(n\) with at least \(n+1\) zeros counted with multiplicity (at each node \(x_i\) a zero of order \(n_i\)), so it would have to be identically zero. Uniqueness of the Hermite polynomial implies its existence: since the Hermite polynomial is unique, the linear system (26) is determined by a nonsingular matrix. Therefore, this system must always have a solution, independent of the choice of the vector on the right-hand side.

The Hermite polynomial \(H_n\) which satisfies conditions (26) will be constructed with the help of Newton's interpolation polynomial as follows: for each node \(x_i\), let us choose \(n_i-1\) additional distinct points \(x_i^1, x_i^{2},\dots, x_i^{n_i-1}\) near the node \(x_i\). Based on the comments after formula (11), we conclude that \[f\left[x_i, x_i^1,x_i^{2},\dots,x_i^{n_i-1}\right]=\frac{f^{\left(n_i-1\right)}\left(\xi\right)}{\left(n_i-1\right)!},\] where \(\xi\) belongs to the smallest interval which contains \(x_i, x_i^1, x_i^{2},\dots, x_i^{n_i-1}\). By letting each of the points \(x_i^1, x_i^{2},\dots, x_i^{n_i-1}\) in the previous equation tend to \(x_i\), we get that \[\tag{27}f\left [\underbrace{x_i,\dots, x_i}_{n_i\text{ times}}\right] =\frac{f^{\left(n_i-1\right)}\left(x_i\right)}{\left(n_i-1\right)!}.\] From here, by induction, one concludes that every divided difference of the form \[f\left [\underbrace{x_i,\dots, x_i}_{n_i\text{ times}},\underbrace{x_{i+1},\dots, x_{i+1}}_{n_{i+1}\text{ times}},\dots,\underbrace{x_{i + k-1},\dots, x_{i + k-1}}_{n_{i + k-1}\text{ times}},\underbrace{x_{i + k},\dots , x_{i + k}}_{\le n_{i + k}\text{ times}}\right]\] exists. If in Newton's polynomial \[\tag{28}\begin{aligned}L_n\left(x\right)= & f\left [x_0\right] + f\left [x_0, x_0^{1}\right]\left(x-x_0\right)+ f\left [x_0, x_0^{1}, x_0^{2}\right]\left(x-x_0\right)\left(x-x_0^{1}\right)+\cdots\\& + f\left [x_0, x_0^{1}, x_0^{2},\dots, x_0^{n_0-1}\right]\left(x-x_0\right)\left(x-x_0^{1}\right)\cdots\left(x-x_0^{n_0-2}\right)+\dots\\& + f\left [x_0,\dots, x_0^{n_0-1},\dots, x_{m},\dots, x_{m}^{n_m-1}\right]\left(x-x_0\right)\cdots\left(x-x_m^{n_m-2}\right),\end{aligned}\] formed from the nodes \(x_0, x_0^1,\dots, x_m^{n_m-1}\), we let \(x_{i}^{j}\rightarrow x_i\) for every \(0\le i\le m\), we obtain the Hermite interpolation polynomial \[\tag{29}\begin{aligned}H_n\left(x\right)= & f\left [x_0\right] + f\left [x_0, x_0\right]\left(x-x_0\right)+ f\left [x_0, x_0, x_0\right]\left(x-x_0\right)^2 +\cdots\\& + f\left [\underbrace{x_0,\dots, x_0}_{n_0\text{ times}}\right]\left(x-x_0\right)^{n_0-1}+ f\left [\underbrace{x_0,\dots, x_0}_{n_0\text{ times}}, x_1\right]\left(x-x_0\right)^{n_0}+\dots\\& + f\left [\underbrace{x_0,\dots, x_0}_{n_0\text{ times}},\dots,\underbrace{x_{m},\dots, x_{m}}_{n_{m}\text{ times}}\right]\left(x-x_0\right)^{n_0}\cdots\left(x-x_m\right)^{n_m-1}.\end{aligned}\]

The estimate of the Hermite interpolation error can be derived from the error of the polynomial (28) by taking the limit \(x_{i}^{j}\rightarrow x_i\) for every \(0\le i\le m\). In this way we obtain \[\tag{30}f\left(x\right)- H_n\left(x\right)=\frac{f^{\left(n + 1\right)}\left(\xi\right)}{\left(n + 1\right)!}\omega_{n + 1}\left(x\right)=\frac{f^{\left(n + 1\right)}\left(\xi\right)}{\left(n + 1\right)!}\prod\limits_{k = 0}^{m}\left(x-x_k\right)^{n_k},\] where \(\xi\) belongs to the smallest interval containing the points \(x, x_0, x_1,\dots, x_m\).

In practice, the Hermite interpolation polynomial is constructed by first making a table of divided differences. The table is formed by placing each node \(x_i\) in \(n_i\) rows, after which the divided differences corresponding to known derivatives are entered according to formula (27). This gives an incomplete table of divided differences like the following one (in the table below, the node \(x_i\) is listed four times, after which the divided differences \(f\left[x_i,x_i\right]=f'\left(x_i\right),\) \(f\left[x_i,x_i,x_i\right]=\frac{f''\left(x_i\right)}{2!}\) and \(f\left[x_i,x_i,x_i,x_i\right]=\frac{f^{\left(3\right)}\left(x_i\right)}{3!}\) are entered).

\(\vdots\)        \(\vdots\)
\(x_{i-1}\)    \(f\left(x_{i-1}\right)\)
\(x_i\)            \(f\left(x_i\right)\)
                                  \(f'\left(x_i\right)\)
\(x_i\)            \(f\left(x_i\right)\)          \(\frac{f''\left(x_i\right)}{2!}\)
                                  \(f'\left(x_i\right)\)          \(\frac{f^{\left(3\right)}\left(x_i\right)}{3!}\)
\(x_i\)            \(f\left(x_i\right)\)          \(\frac{f''\left(x_i\right)}{2!}\)
                                  \(f'\left(x_i\right)\)
\(x_i\)            \(f\left(x_i\right)\)
\(x_{i+1}\)    \(f\left(x_{i+1}\right)\)
\(\vdots\)        \(\vdots\)

After that, the rest of the table is filled in according to the definition of divided differences. Finally, we write the polynomial (29) based on the data from the constructed table.
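The described procedure can be sketched in code. In the outline below (a minimal sketch; the function names and the input format are illustrative, not from the text), `data` lists each node together with the derivatives \(f, f',\dots\) known at it; entries between rows of a repeated node are filled by formula (27), and the remaining entries by the definition of divided differences.

```python
import math

def hermite_newton(data):
    """Newton form of the Hermite polynomial.

    data -- list of pairs (x_i, [f(x_i), f'(x_i), ..., f^(n_i-1)(x_i)]).
    Returns the repeated-node list z and the Newton coefficients."""
    z, col = [], []
    for x, derivs in data:
        z.extend([x] * len(derivs))              # node x_i placed in n_i rows
        col.extend([derivs[0]] * len(derivs))    # column of function values
    m = len(z)
    coeffs = [col[0]]
    for order in range(1, m):
        new_col = []
        for i in range(m - order):
            if z[i + order] == z[i]:
                # repeated node: entry is f^(order)(x_i) / order!, formula (27)
                derivs = next(d for (x, d) in data if x == z[i])
                new_col.append(derivs[order] / math.factorial(order))
            else:
                new_col.append((col[i + 1] - col[i]) / (z[i + order] - z[i]))
        col = new_col
        coeffs.append(col[0])
    return z, coeffs

def newton_eval(z, coeffs, x):
    """Evaluate the Newton-form polynomial by Horner's scheme."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - z[k]) + coeffs[k]
    return result

# values and first derivatives of sin at the nodes 0.5 and 5.5
data = [(0.5, [math.sin(0.5), math.cos(0.5)]),
        (5.5, [math.sin(5.5), math.cos(5.5)])]
z, coeffs = hermite_newton(data)
print(coeffs)   # Newton coefficients of the cubic Hermite polynomial
```

The resulting polynomial reproduces both the values and the first derivatives of \(\sin\) at the two nodes.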

Example 7. Find the Hermite interpolation polynomial \(H\) such that \[H\left(x_0\right)=f\left(x_0\right),\quad H'\left(x_0\right)=f'\left(x_0\right),\quad H''\left(x_0\right)=f''\left(x_0\right),\quad H'''\left(x_0\right)=f'''\left(x_0\right).\] Based on the described procedure, we get that \[H\left(x\right)=f\left(x_0\right)+f'\left(x_0\right)\left(x-x_0\right)+\frac{f''\left(x_0\right)}{2!}\left(x-x_0\right)^2+\frac{f'''\left(x_0\right)}{3!}\left(x-x_0\right)^3.\] As can be seen, the Taylor polynomial is a special case of the Hermite polynomial. Based on formula (30), the interpolation error is \(\frac{f^{\left(4\right)}\left(\xi\right)}{4!}\left(x-x_0\right)^4\), where \(\xi\) is between \(x_0\) and \(x\); this is exactly the Lagrange form of the remainder in Taylor's formula. ▲

Example 8. Find the Hermite interpolation polynomial \(H\) such that \[H\left(0.5\right)=\sin\left(0.5\right),\quad H'\left(0.5\right)=\sin'\left(0.5\right),\quad H\left(5.5\right)=\sin\left(5.5\right),\quad H'\left(5.5\right)=\sin'\left(5.5\right).\] The table of divided differences looks like this:

0.5    0.4794
                  0.8776
0.5    0.4794             -0.2229
                 -0.2370              0.0824
5.5   -0.7055              0.1891
                  0.7087
5.5   -0.7055

It follows that the Hermite interpolation polynomial is \[\begin{aligned}H\left(x\right)& = 0.4794 + 0.8776\left(x-0.5\right)-0.2229\left(x-0.5\right)^2 + 0.0824\left(x-0.5\right)^2\left(x-5.5\right)\\& = 0.0824x^3-0.7585x^2 + 1.5743x-0.128425.\end{aligned}\]
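As a quick numerical check (a sketch; the tolerance reflects the four-decimal rounding of the table entries), the expanded cubic can be compared against all four interpolation conditions:

```python
import math

# expanded Hermite cubic from the example and its derivative
H  = lambda x: 0.0824 * x**3 - 0.7585 * x**2 + 1.5743 * x - 0.128425
dH = lambda x: 3 * 0.0824 * x**2 - 2 * 0.7585 * x + 1.5743

for x in (0.5, 5.5):
    # deviations from sin and sin' = cos at the two nodes
    print(abs(H(x) - math.sin(x)), abs(dH(x) - math.cos(x)))  # all below 1e-3
```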

Figure 7. Graph of the function \(\sin\left(x\right)\) (red line) and the Hermite interpolation polynomial \(H\left(x\right)\) (black dashed line). The gray dashed line represents the graph of the ordinary interpolation polynomial formed from the two nodes alone.


  1. Roland Bulirsch, Josef Stoer, Introduction to Numerical Analysis
  2. Walter Gautschi, Numerical Analysis
  3. Desanka Radunović, Numeričke metode