4

I have calculated the first derivative of the following function using the Euler method (first-order forward difference), the three-point finite difference method (second order), and the four-point finite difference method (third order): $$f(x)=e^{-32(x-5)^2}$$ on the domain $0 \leq x \leq 10$.

I discretized the domain with 50, 500, 5000, and 50,000 points and computed the numerical derivative. I then compared it with the analytical derivative, took the maximum absolute difference between the analytical and numerical results over the domain, and plotted the graph shown below, where the y-axis is log(error) and the x-axis is log(number of grid points).

My question is:

Higher-order methods are supposed to be more accurate than lower-order methods. In the graph, the error of the third-order method is larger than that of the second- and first-order methods on coarse grids (50 and 500 points), but the third-order method is more accurate than the second- and first-order methods on fine grids (5000 and 50,000 points). Can you please explain why this is happening?

Thank you!

AGN
  • This is a followup to http://scicomp.stackexchange.com/questions/20172/why-are-runge-kutta-and-eulers-method-are-so-different (mentioning this so the two questions are linked). – Christian Clason Jul 16 '15 at 17:23
  • Yes sir, I want to discuss the cause of this error; I didn't find this problem discussed in books, so I decided to ask the experts. – AGN Jul 16 '15 at 23:12
  • I have linked this question in answer to that question. – AGN Jul 17 '15 at 06:32
  • Could you please clarify what error metric you are using and which finite difference formulas? One sided differences? – Doug Lipinski Jul 17 '15 at 11:25
  • Subtracted the numerical result from analytical one. Then I applied absolute operator on the result. Then I took maximum of the error in the domain. – AGN Jul 17 '15 at 11:35
  • Separate point: Never create graphs in which you show $\ln(x)$ and $\ln y$ on the axes. Nobody knows what $x$ corresponds to $\ln(x)=9.2$ (the last cross-over point). Instead use a log or loglog plot. All reasonable software has this. It uses a logarithmic scaling of the axes, but the labels printed next to the axes are of $x$, not $\ln(x)$. – Wolfgang Bangerth Nov 05 '20 at 05:32

2 Answers

7

High order methods are not necessarily always more accurate than low order methods; they simply have a more rapid convergence rate. For example, if you take the Taylor expansion of a function at a point $x$ and evaluate it at $x+h$, you have

$$f(x+h) = f(x) + f'(x)h + \frac{f''(x)}{2}h^2 + \ldots$$

The order of a finite difference method is determined by the power of $h$ that is eliminated using finite difference coefficients; for example, rearranging gives

$$\left|\frac{f(x+h)-f(x)}{h} - f'(x)\right| = \left| \frac{f''(x)}{2}h + \frac{f'''(x)}{6}h^2 + \ldots\right|$$

The left-hand side is the error, and you can make the asymptotic argument that if $h \ll 1$, then $h \gg h^2$, so that for $h$ small enough,

$$\left|\frac{f(x+h)-f(x)}{h} - f'(x)\right| \approx \left| \frac{f''(x)}{2}\right| h.$$

This truncation error term implies that the error decreases linearly with the grid size $h$, i.e., the error is $O(h)$. Higher-order methods give an error that is $O(h^r)$ for some order $r$; again, this doesn't imply that a higher-order method's errors are smaller, just that the error decreases faster with $h$ than for a lower-order method. The magnitude of the truncation error depends specifically on the magnitudes of the truncated derivatives $f''$, $f'''$, etc., and on the power of $h$.

The problem you have considered here has derivatives with large magnitudes. If your grid size is not small enough to offset the magnitude of the derivative terms you truncated, the error will be large. Because of the higher convergence rate, the error of your higher-order method decreases with a steeper slope than that of the lower-order methods.
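To see the crossover concretely, here is a short sketch (my own illustration in Python rather than the original MATLAB, and not the asker's code) that measures the maximum absolute error of a first-order forward difference and a fourth-order central difference for this Gaussian on successively finer grids. The point is not which error is smaller on any given grid, but that the higher-order error falls much faster as $h$ shrinks:

```python
import numpy as np

def f(x):
    return np.exp(-32.0 * (x - 5.0) ** 2)

def df(x):
    # analytical derivative of f
    return -64.0 * (x - 5.0) * f(x)

def max_errors(n):
    """Max abs error of a 1st-order forward and a 4th-order central difference."""
    x = np.linspace(0.0, 10.0, n)
    h = x[1] - x[0]
    # first-order forward difference, evaluated at x[0..n-2]
    e1 = np.max(np.abs((f(x[1:]) - f(x[:-1])) / h - df(x[:-1])))
    # fourth-order central difference (5-point stencil), evaluated at x[2..n-3]
    d4 = (-f(x[4:]) + 8.0 * f(x[3:-1]) - 8.0 * f(x[1:-3]) + f(x[:-4])) / (12.0 * h)
    e4 = np.max(np.abs(d4 - df(x[2:-2])))
    return e1, e4

for n in (50, 500, 5000, 50000):
    e1, e4 = max_errors(n)
    print(f"n={n:6d}  first-order error={e1:.3e}  fourth-order error={e4:.3e}")
```

Between n = 5000 and n = 50000 the first-order error should drop by roughly one decade per decade of refinement, while the fourth-order error drops by roughly four, which is exactly the slope difference visible in the asker's log-log plot.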

Jesse Chan
  • Thanks for your answer. As you mentioned, the accuracy of all these numerical methods is based on the grid size, i.e., the truncation error is reduced because of the order (power) of the grid size; they don't touch on the impact of the magnitude of the truncated derivative terms like $f''$, $f'''$, etc. Is there any numerical method whose convergence rate is based on both the grid size and the magnitude of the truncated higher-order derivatives? – AGN Jul 17 '15 at 02:08
  • Sir, I have made small changes to your answer; please check them. – AGN Jul 17 '15 at 06:46
  • Hi Arun - not that I know of. Since $f(x)$ is arbitrary, it's usually difficult to have a method take into account the magnitude of the derivatives. – Jesse Chan Jul 17 '15 at 14:24
  • Ok thank you sir. I hope if that is possible this may give a new family of numerical scheme. – AGN Jul 17 '15 at 14:59
4

You've chosen a very specific example where the function in question has a single extremely narrow peak and is almost zero everywhere else in the domain. This is an example of a stiff problem where you must take a very tight discretization to resolve the peak. Finite difference approximations are based on polynomial approximations to a curve. In the case where the curve is not locally well-approximated by a polynomial (such as the several grid points that straddle the peak in the Gaussian function here) this can lead to oscillations in the approximating polynomial (see this answer) resulting in a poor estimate of the derivative. Refining the grid eventually leads to a situation where the function is well approximated by a polynomial on the scale of the finite difference stencil and accuracy improves.
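The scale mismatch is easy to quantify. Writing the exponent as $-(x-5)^2/(2\sigma^2)$ gives $\sigma = 1/8$, so essentially all of the peak fits inside about half a unit. This small check (my own illustration in Python, not part of the original post) compares the grid spacing to $\sigma$:

```python
import numpy as np

# exp(-32(x-5)^2) = exp(-(x-5)^2 / (2 sigma^2))  =>  2 sigma^2 = 1/32
sigma = 1.0 / np.sqrt(64.0)          # = 0.125, the Gaussian's standard deviation
for n in (50, 500, 5000, 50000):
    h = 10.0 / (n - 1)               # spacing of linspace(0, 10, n)
    print(f"n={n:6d}  h={h:.5f}  grid points per std dev = {sigma / h:.1f}")
```

With 50 points the spacing is about 0.2, i.e., less than one grid point per standard deviation, so no stencil of any order can resolve the peak; with 5000 points there are dozens of points per standard deviation and the asymptotic convergence rates take over.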

As discussed in Jesse Chan's answer, low order methods may actually have lower magnitude errors in some cases. However, I would be skeptical of the accuracy of either method in that case and suggest that you simply need higher resolution.

I tried to replicate your results, but I found that higher-order finite differences were actually more accurate (based on RMS error) even in this example. Were you using one-sided differences? Here's my MATLAB script if you want to experiment with it.

clear

y = @(x) exp(-32*(x-5).^2);
dydx = @(x) -(64*(x-5)).*y(x);

x=linspace(0,10,50);

%first order (one sided) difference
x1 = x(1:end-1);
dy1 = (y(x(2:end))-y(x(1:end-1)))/diff(x(1:2));

%second order centered difference
x2 = x(2:end-1);
dy2 = (y(x(3:end))-y(x(1:end-2)))/(2*diff(x(1:2)));

%fourth order centered difference
x3 = x(3:end-2);
dy3 = (-y(x(5:end))+8*y(x(4:end-1))-8*y(x(2:end-3))+y(x(1:end-4)))/(12*diff(x(1:2)));

xRef = linspace(0,10,1e5);
dyRef = dydx(xRef);

figure(1)
clf
hold on
plot( xRef, dyRef, 'k' )
plot( x1, dy1,'o')
plot( x2, dy2,'x' )
plot( x3, dy3,'^' )
legend('Analytical','First order','Second order','Fourth order')
title('Computed derivative values')

figure(2)
clf
hold on
plot( x1, abs(dy1-dydx(x1)),'o')
plot( x2, abs(dy2-dydx(x2)),'x' )
plot( x3, abs(dy3-dydx(x3)),'^' )
legend('First order','Second order','Fourth order')
title('Error in the derivative computation')

% rms errors:
sqrt( sum((dy1-dydx(x1)).^2 ) / numel(x1) ) %first order
sqrt( sum((dy2-dydx(x2)).^2 ) / numel(x2) ) %second order
sqrt( sum((dy3-dydx(x3)).^2 ) / numel(x3) ) %fourth order

Please note that this is a slightly different situation than that in the related question. That question was about ODE solvers (Euler and RK). The reason I edited your answer there was to highlight the fact that this case is not typical and that in almost all cases where RK4 is stable it will outperform Euler for a given step size. The ODE analogue to your example here is a stiff ODE where you are not in the stability region of explicit solvers. As I said there, if your ODE solver is unstable all bets are off as far as accuracy (just as all the finite difference methods give poor results on a coarse grid here). Of course a stable low-order implicit method will typically have less error than an unstable explicit method.
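The explicit-instability analogue mentioned above can be demonstrated with a tiny sketch (my own illustration in Python, using the standard test equation $y' = -\lambda y$, which is not from the original post): forward Euler with a step outside its stability region ($h > 2/\lambda$) blows up, while backward Euler with the same step decays like the true solution.

```python
# Forward vs. backward Euler on the stiff test equation y' = -lam*y, y(0) = 1,
# with a step size h outside the explicit stability region (h > 2/lam).
lam, h, steps = 100.0, 0.05, 40   # 2/lam = 0.02 < h, so forward Euler is unstable
y_fwd, y_bwd = 1.0, 1.0
for _ in range(steps):
    y_fwd = y_fwd * (1.0 - lam * h)   # forward (explicit) Euler update
    y_bwd = y_bwd / (1.0 + lam * h)   # backward (implicit) Euler update
print(abs(y_fwd))  # huge: the explicit iterate has blown up
print(abs(y_bwd))  # tiny: the implicit iterate decays like the true solution
```

Here the explicit update multiplies by $(1 - \lambda h) = -4$ each step, so the iterate grows without bound even though the exact solution decays; the implicit update divides by $(1 + \lambda h) = 6$ and stays stable at any step size.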

Even though you may have "less error" on coarse grids with the lower-order finite differences, the actual errors in your example are huge. In that case, neither solution should be used and you have no choice but to refine the grid or use a different method entirely.

Doug Lipinski
  • Thank you for this much effort. I have learned a new thing from your answer that I hadn't thought about: the RMS error is lower for the higher-order methods. I'm new to this forum; I'm learning a lot from great people. – AGN Jul 16 '15 at 23:46
  • The reason I chose differentiation is that it is much simpler than integration; otherwise the question would look similar to the Euler and RK integration question (I already got a warning-like message from one user). In addition, we can make a relative comparison between numerical time integration (which can be viewed as extrapolation) and differentiation (which can be viewed as interpolation) as long as the numerical integration is stable. I hope you noticed that this equation is not a stiff equation, at least in the first derivative. – AGN Jul 17 '15 at 01:58
  • Measuring the stiffness of a given equation is only loosely defined in the literature, mostly based on the relative values of the eigenvalues of the ODE. I have one question in mind about measuring the stiffness of an ODE; I will post it soon. What do you think about Jesse Chan's answer? That looks like the more relevant one; most of the time we don't think about it! – AGN Jul 17 '15 at 01:59
  • Please also go through my comment on Jesse Chan's answer. – AGN Jul 17 '15 at 02:22
  • @ArunGovindNeelanA Jesse Chan's answer is good, and more directly answers your question here. To clarify, RMS error is not always lower for higher-order methods, just as point-wise error is not always lower. – Doug Lipinski Jul 17 '15 at 11:23
  • @DougLipinski, you measure RMS-error incorrectly. You need to scale an error by the square root of the spatial step. RMS-error is a discrete analog of an integral and you omit discrete version of the differential incorrectly. – Dmitry Kabanov Jul 25 '16 at 14:44
  • @DmitryKabanov Good point. I've fixed that and it makes no difference to my conclusions since that value is almost the same for all three grids. – Doug Lipinski Jul 25 '16 at 18:43
  • @DougLipinski, I'm not sure what you mean by "that value". In my experience, when you refine resolution and do not include scaling factor into the error calculation, you get wrong values for the observed order of accuracy. – Dmitry Kabanov Jul 26 '16 at 20:44
  • @DmitryKabanov by "that value" i meant the terms like (diff(x1(1:2)) * numel(x1)) which I added to my code in the RMS error computations. This divides by the grid spacing times the number of grid points. Since all three grids I was comparing had the same resolution those terms are all about the same. Only the number of grid points is slightly different. – Doug Lipinski Jul 27 '16 at 01:16
  • @DougLipinski, why do you multiply by numel(x1)? You need to multiply by diff(x1(1:2)) if you measure the error in L1-norm and by sqrt(diff(x1(1:2))) if you measure the error in L2-norm (what you call RMS error). – Dmitry Kabanov Jul 28 '16 at 19:12
  • @DmitryKabanov See the definition of RMS error and note that this is different than the $l2$ norm. Dividing by the number of points is necessary for the "mean" part of the computation. – Doug Lipinski Jul 28 '16 at 20:11
  • @DougLipinski, OK, I was wrong in thinking that the L2-error and RMS error are the same. Still, I'm not sure I understand the way you compute errors (I've noticed also that you edited the formulas in your code). It is clear that diff(x1(1:2)) = 10 / numel(x1) because your domain length is 10 (the linspace(0, 10... line). So, under the sqrt in the last three lines you effectively multiply the sum by 10/(numel(x1)**2). Can you show the relationship between the L2-norm and RMS? – Dmitry Kabanov Jul 31 '16 at 08:31
  • @DmitryKabanov I think I've confused myself about things at this point. You're correct again that this still wasn't quite right, I'll edit yet again. The RMS error should not have the diff(x(1:2)) terms in there. At this point I wish I had just used the l2 norm and eliminated all this confusion. If you normalize the l2-error by the grid spacing (effectively the discrete analogue of the L2-error) then it is effectively the same thing as RMS error, just scaled by a constant, since $\delta x$ and the number of grid points are related. – Doug Lipinski Aug 01 '16 at 13:01
  • @DougLipinski, OK, then, thanks for the discussion! – Dmitry Kabanov Aug 01 '16 at 18:50