# Solving ODEs Numerically

In this post we will look at two methods for approximating the solution of an ordinary differential equation. We will only consider first-order separable initial value problems. Here, ‘first order’ means that the problem contains only a first derivative. ‘Separable’ means that the equation can be rearranged so that all the terms involving y sit on one side and all the terms involving x on the other. Finally, an initial value problem is one for which an initial condition is given, so we have starting values for both x and y.

The following are examples of the types of problems we are considering:

$\frac{dy}{dx}=xy^{\frac{1}{3}}, y(1)=1$
$\frac{dy}{dx}=14x-3, y(0)=0$
$\frac{dy}{dx}=\cos{x}-3, y(0)=0$

The above examples give expressions for the derivative of y(x). Thus, we are given the derivative, which is some function f(x, y), and we are also given the initial value: the value of x at which y(x) is already known. Please note that we do not know the exact expression for y(x) itself; we only have the expression for the derivative of y w.r.t. x.

Solving a differential equation means finding y(x). Numerical methods usually approximate y(x) at some required point. So, a numerical solution does not produce an expression for y(x); it just approximates the value of y(x) at some point x. Given enough such approximations, we can also graph y(x) on an interval.

There are many methods for approximating solutions to differential equations. Here we will look at Euler’s method and the Runge-Kutta method. Both are based on the Taylor series expansion. Since the function for the 1st derivative is known, we have, at every point, the slope of the curve that y(x) makes in the (x,y) plane.

The basic definition of the first derivative gives $\frac{dy}{dx}\approx \frac{\Delta y}{\Delta x}= \frac{y(x+h)-y(x)}{h}$, which says that for some small step h, we can approximate the rate of change of y with respect to x using this difference ratio. This can be rearranged to give us $y(x+h)$. Thus: $y(x+h)=y(x)+h[\frac{dy}{dx}]$. This expression can be used to solve a differential equation at some other value of x, since we can start with our initial condition, take a small step h, and iteratively approximate y(x). The smaller the step, the better the approximation. This is the fundamental idea behind Euler’s method.
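To make the iteration concrete, here is a minimal sketch of Euler’s method in Python (the function and parameter names are mine, not part of the C# implementation below). It solves the third example problem, dy/dx = cos x − 3 with y(0) = 0, whose exact solution is y(x) = sin x − 3x:

```python
import math

def euler(f, x0, y0, x_end, h=0.01):
    """Step from (x0, y0) to x_end using y(x+h) = y(x) + h*f(x, y)."""
    x, y = x0, y0
    for _ in range(round((x_end - x0) / h)):
        y += h * f(x, y)
        x += h
    return y

# dy/dx = cos(x) - 3, y(0) = 0; exact solution is y(x) = sin(x) - 3x
approx = euler(lambda x, y: math.cos(x) - 3.0, 0.0, 0.0, 5.0)
exact = math.sin(5.0) - 15.0
```

With h = 0.01 the approximation lands within a few thousandths of the exact value, consistent with Euler’s first-order accuracy.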

The Runge-Kutta method is a little more complex, but it is more accurate as well. It builds on the fact that $y(x+h)=y(x)+h[\frac{dy}{dx}]$ is an approximation that can be improved upon. In particular, the classical Runge-Kutta method is a 4th order approximation to the ordinary differential equation, thus we have: $y(x+h)=y(x)+\frac{h}{6}(k_1+2k_2+2k_3+k_4)$ where
$k_1=f(x,y)$, $k_2=f(x+ \frac{h}{2},\,y+ \frac{h}{2}k_1)$, $k_3=f(x+ \frac{h}{2},\,y+ \frac{h}{2}k_2)$ and $k_4=f(x+h,\,y+hk_3)$.
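As a quick sketch (again in Python, separate from the C# below), one RK4 step evaluates the slope four times and combines the estimates with weights 1/6, 2/6, 2/6, 1/6:

```python
import math

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4(f, x0, y0, x_end, h=0.01):
    x, y = x0, y0
    for _ in range(round((x_end - x0) / h)):
        y = rk4_step(f, x, y, h)
        x += h
    return y

# same test problem: dy/dx = cos(x) - 3, y(0) = 0, exact y(5) = sin(5) - 15
approx = rk4(lambda x, y: math.cos(x) - 3.0, 0.0, 0.0, 5.0)
```

The fourth-order accuracy shows up immediately: with the same h = 0.01 the error drops from a few thousandths (Euler) to below 1e-6.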

It comes as no surprise that Runge-Kutta is more accurate than Euler’s method, since it does a better job of following the curvature of y(x).

We will now look at an implementation of these two methods in C#. My code is given below. Note that I am following my usual coding style by deriving the individual solution classes from an abstract InitialValue class. Most of the variables and functionality common to both methods are contained in the parent class. The differential function is implemented as a delegate. Finally, I am also populating the XML documentation for the class constructors and the solution methods this time, as this is good practice.

using System;
using System.Collections.Generic;

namespace DifferentialEquations
{

abstract class InitialValue
{
internal delegate double DerivativeFunction(double x, double y);

internal InitialValue(DerivativeFunction func, double initValX, double initValY, double solPoint)
{
givenFunction = func;
this.initValX = initValX;
this.initValY = initValY;
this.solPoint = solPoint;
CalcDistance();
}

internal double Delta
{
get
{
return delta;
}
}

internal int Distance
{
get
{
return distance;
}
}

//calculates absolute distance between the x point for required solution and the initial value of x
private void CalcDistance()
{
double res = 0.0;

if (this.initValX < 0 && this.solPoint < 0)
{
res = Math.Abs(Math.Min(this.initValX, this.solPoint) - Math.Max(this.initValX, this.solPoint));
}
else if (this.initValX > 0 && this.solPoint > 0)
{
res = Math.Max(this.initValX, this.solPoint) - Math.Min(this.initValX, this.solPoint);
}
else
{
res = Math.Abs(this.solPoint) + Math.Abs(this.initValX);
}

this.distance = (int)res;
}

internal abstract double[] SolveDifferentialEquation();

protected DerivativeFunction givenFunction;
private const double  delta=0.01;
protected double initValX;
protected double initValY;
protected double solPoint;
private int distance=0;

}

sealed class EulerMethod:InitialValue
{

/// <summary>
/// Initialises the Euler method
/// </summary>
/// <param name="func">differential function to approximate</param>
/// <param name="initValX">initial value for x</param>
/// <param name="initValY">initial value for y</param>
/// <param name="solPoint">solution point: value of x for which y=f(x) is required</param>
internal EulerMethod(DerivativeFunction func, double initValX, double initValY, double solPoint):base(func,initValX, initValY, solPoint)
{
altDelta = base.Delta;
}

/// <summary>
/// Initialises the Euler method
/// </summary>
/// <param name="func">differential function to approximate</param>
/// <param name="initValX">initial value for x</param>
/// <param name="initValY">initial value for y</param>
/// <param name="solPoint">solution point: value of x for which y=f(x) is required</param>
/// <param name="delta">alternative value for delta</param>
internal EulerMethod(DerivativeFunction func, double initValX, double initValY, double solPoint, double delta)
: base(func, initValX, initValY, solPoint)
{
//this check does not cover all cases but is good enough for demonstration
if (delta < base.Distance)
altDelta = delta;
else
{
Console.WriteLine("Delta value too large. Using default delta of 0.01.");
altDelta = base.Delta;
}
}

/// <summary>
/// Solves a differential equation using the Euler method
/// </summary>
/// <returns>y=f(x) solution at the required point of x</returns>
internal override double[] SolveDifferentialEquation()
{
direction = this.initValX < base.solPoint;

//the position at index 0 will be taken by initial value
int numPoints = (int)(base.Distance/ this.altDelta)+1;
double[] solutionPoints = new double[numPoints];
double x = this.initValX;

solutionPoints[0] = this.initValY;

for (int i = 0; i < numPoints-1; i++)
{
solutionPoints[i + 1] = solutionPoints[i] + altDelta * this.givenFunction(x, solutionPoints[i]);
if (direction)
x = x + altDelta;
else
x = x - altDelta;
}
Console.WriteLine("Euler solution at x = " + base.solPoint + " is " + solutionPoints[numPoints - 1]);
return solutionPoints;
}

private bool direction;
private double altDelta;
}

sealed class RungeKuttaMethod : InitialValue
{
/// <summary>
/// Initialises the Runge-Kutta method
/// </summary>
/// <param name="func">differential function to approximate</param>
/// <param name="initValX">initial value for x</param>
/// <param name="initValY">initial value for y</param>
/// <param name="solPoint">solution point: value of x for which y=f(x) is required</param>
internal RungeKuttaMethod(DerivativeFunction func, double initValX, double initValY, double solPoint)
: base(func, initValX, initValY, solPoint)
{
altDelta = base.Delta;
}

/// <summary>
/// Initialises the Runge-Kutta method
/// </summary>
/// <param name="func">differential function to approximate</param>
/// <param name="initValX">initial value for x</param>
/// <param name="initValY">initial value for y</param>
/// <param name="solPoint">solution point: value of x for which y=f(x) is required</param>
/// <param name="delta">alternative delta value</param>
internal RungeKuttaMethod(DerivativeFunction func, double initValX, double initValY, double solPoint, double delta)
: base(func, initValX, initValY, solPoint)
{
//this check does not cover all cases but is good enough for demonstration
if (delta < base.Distance)
altDelta = delta;
else
{
Console.WriteLine("Delta value too large. Using default delta of 0.01.");
altDelta = base.Delta;
}
}

/// <summary>
/// Solves a differential equation using Runge-Kutta method
/// </summary>
/// <returns>y=f(x) solution at the required point of x</returns>
internal override double[] SolveDifferentialEquation()
{
direction = this.initValX < base.solPoint;

//the position at index 0 will be taken by initial value
int numPoints = (int)(base.Distance/ this.altDelta)+1;
double[] solutionPoints = new double[numPoints];
double x = this.initValX;
double k1, k2, k3, k4;
double scale = 1.0/6.0;
double increment = altDelta/2;

solutionPoints[0] = this.initValY;

for (int i = 0; i < numPoints - 1; i++)
{
k1 = this.givenFunction(x, solutionPoints[i]);
k2 = this.givenFunction(x + increment, solutionPoints[i] + increment * k1);
k3 = this.givenFunction(x + increment, solutionPoints[i] + increment * k2);
k4 = this.givenFunction(x + altDelta, solutionPoints[i] + altDelta * k3);
solutionPoints[i + 1] = solutionPoints[i] + altDelta*scale *(k1+2.0*k2+2.0*k3+k4);
if (direction)
x = x + altDelta;
else
x = x - altDelta;
}
Console.WriteLine("Runge-Kutta solution at x = " + base.solPoint + " is " + solutionPoints[numPoints - 1]);
return solutionPoints;
}

private bool direction;
private double altDelta;
}

class EntryPoint
{
static void Main(string[] args)
{
EulerMethod eu = new EulerMethod(DerivFunction, 0, 0,5);
double[]resEU = eu.SolveDifferentialEquation();

RungeKuttaMethod rk = new RungeKuttaMethod(DerivFunction, 0, 0,5);
double[] resRK = rk.SolveDifferentialEquation();

}

static double DerivFunction(double x, double y)
{
//double pow = (1.0 / 3.0);
//return x * Math.Pow(y, pow);
//return 14.0 * x - 3.0;
return Math.Cos(x) - 3.0;
}
}
}


Below are the results for three initial value problems. Here the h increment is 0.01. The Runge-Kutta method provides a very good approximation.

Table 1

| Differential Equation | Euler’s | Runge-Kutta | Exact Solution (at x=5) |
|---|---|---|---|
| $\frac{dy}{dx}=xy^{\frac{1}{3}},\ y(1)=1$ | 26.8956 | 26.9999 | 27 |
| $\frac{dy}{dx}=14x-3,\ y(0)=0$ | 159.6499 | 159.9999 | 160 |
| $\frac{dy}{dx}=\cos{x}-3,\ y(0)=0$ | -15.9553 | -15.9589 | -15.9589 |

# Solving Nonlinear Equations Numerically

Knowing the roots of a nonlinear equation allows us to answer the question: which values of x make $f(x)$ equal to zero? There are many approaches to finding the roots of a nonlinear equation. Some of the well-known ones include the bisection method, linear interpolation (false position), Newton’s method, and Bernoulli’s method and its variations. In this post I am demonstrating an implementation of two basic methods, bisection and Newton’s, and comparing their performance.

The bisection method for finding the roots of some function $f(x)$ starts from two values a and b, where a&lt;b and the function is defined on [a,b]. The solution is approximated by iteratively halving the distance between a and b and resetting one of the endpoints. Which endpoint to reset depends on both the sign of the slope of $f(x)$ on [a,b] and the sign of $f(x_1)$, where $x_1$ is the current approximation. The process terminates when either (a) we find a solution to some acceptable accuracy, or (b) we fail to converge, because the width of the current interval has fallen below some error value or we have exceeded the allowed number of iterations. A solution may not be found if the starting points are chosen poorly and there are no roots on [a,b]. For example, take a look at Figure 1, which is a plot of $f(x)=x^2-2$ with one possible [a,b] interval marked. Imagine we instead pick [a,b] to be [2,3]. Since $f(x)\neq 0$ everywhere on [2,3], this interval contains no roots.
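A minimal sketch of the idea in Python (note that this variant brackets a sign change, a slightly different test from the slope-based one in my C# code below, so it requires f(a) and f(b) to have opposite signs):

```python
def bisect(f, a, b, err=1e-6, max_iter=100):
    """Halve [a, b] until its width falls below err, keeping a sign change bracketed."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must differ in sign")
    for _ in range(max_iter):
        mid = (a + b) / 2.0
        if (b - a) < err or f(mid) == 0.0:
            return mid
        if f(a) * f(mid) <= 0:
            b = mid   # root lies in [a, mid]
        else:
            a = mid   # root lies in [mid, b]
    return (a + b) / 2.0

# f(x) = x^2 - 2 on [0, 2]: brackets the positive root sqrt(2)
root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
```

Starting from [0, 2] isolates the positive root; the interval [-2, 2] from the discussion below would fail this variant’s sign test, which is exactly why the C# implementation uses the slope of f instead.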

Which of the two roots do you think this method finds if we take a and b to be -2.0 and 2.0? The answer depends on the method used to approximate the slope of the function, and on what we do when this slope is zero. That is, it would be equally correct to move to the right or to the left, so the choice is ours.

Newton’s method is a little more complicated. Essentially, starting with some guess value $x_0$, we improve upon the guess by moving to the value of x at which the tangent line to $f$ at $x_0$ crosses zero. We repeat until convergence to a solution is achieved. Mathematically, this can be expressed as:

$m(x)=f'(x_0)(x-x_0) +f(x_0)$
$x_1=x_0-\frac{f(x_0)}{f'(x_0)}$

where $m(x)$ is the tangent line at $x_0$ and $x_1$, the solution of $m(x)=0$, is the next approximation.
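The update rule is short enough to sketch directly (Python, with an explicit derivative supplied rather than the numerical estimate used in the C# below):

```python
def newton(f, df, x, err=1e-3, max_iter=100):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| <= err."""
    for _ in range(max_iter):
        if abs(f(x)) <= err:
            return x
        d = df(x)
        if d == 0.0:
            raise ZeroDivisionError("Newton's method fails on a zero derivative")
        x -= f(x) / d
    return x

# same test problem as below: f(x) = x^2 - 2 with starting guess 0.5
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 0.5)
```

From the guess 0.5 this converges to the positive root in a handful of iterations, matching the behaviour discussed at the end of this post.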

As you can see, Newton’s method requires us to approximate the first derivative of $f(x)$, which is why I think it is a bit more complex. There are also several conditions under which this method fails to find a solution: one is that $f'$ is undefined at the root; another is that $f'(x)=0$ at some iterate. More conditions can be found on Wikipedia.

Below I am providing one possible way to implement these two methods. Please note the following points about my implementation:

1. One can never provide too many checks for potential run-time failures; I chose to stop at just a few. For any candidate solution x, $f(x)$ should be defined, and we should be able to estimate the 1st derivative of $f(x)$. While testing the implementation on some functions I noticed that if the 1st derivative is close to zero, Newton’s method tends to diverge. That is why I round the derivative to 3 decimal places before checking it against zero.
2. My approximation to the first derivative of $f(x)$ uses four points.
3. I have left a few Debug.Assert calls in the code to help with debugging ill-behaved functions.
4. The function for which we need to find the roots, is passed as a delegate.
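Regarding point 2, the four-point central-difference estimate of the first derivative can be sketched like this (Python, mirroring the numFirstDerivative routine in the code below):

```python
import math

def first_derivative(f, x, delta=0.01):
    """Four-point central-difference estimate of f'(x); error is O(delta^4)."""
    return (f(x - 2 * delta) - 8 * f(x - delta)
            + 8 * f(x + delta) - f(x + 2 * delta)) / (12 * delta)

# sanity check: the derivative of sin(x) is cos(x)
approx = first_derivative(math.sin, 1.0)
```

With delta = 0.01 the fourth-order error term makes this estimate accurate to roughly 1e-9 for smooth functions, far better than the simple forward difference.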

Below is my implementation along with the run example:

using System;
using System.Collections.Generic;

namespace SolvingNonLinearEquations
{
public abstract class NonlinearEquationSolutionMethod
{
public delegate double NonLinearFunction(double x);

public NonlinearEquationSolutionMethod(NonLinearFunction func)
{
this.func = func;
}

protected NonLinearFunction func;
protected const int maxiter = 100;
}

public sealed class BisectionMethod: NonlinearEquationSolutionMethod
{
private BisectionMethod(double from, double to, double err, NonLinearFunction func)
:base(func)
{
this.from = from;
this.to = to;
this.err = err;
}

//ensure we correctly initialise the from and to points, where from < to
public static BisectionMethod SetBisectionMethod(double from, double to, double err, NonLinearFunction func)
{
if (from >= to)
return null;
else
return  new BisectionMethod(from, to, err, func);
}

public double Solve()
{
double solution = (from+to)*0.5;
int iter = 1;

//determine if the function has a positive or negative slope on the given interval
double slope = func(to) - func(from);
double temp=0.0;

while ((to - from) > err && iter < maxiter)
{
temp = func(solution);
System.Diagnostics.Debug.Assert(!double.IsNaN(temp), "Division by zero or another illegal operation has occurred.");

if (temp < 0.0)
{
if (slope > 0)
from = solution;
else
to = solution;
}
else
{
if (slope > 0)
to = solution;
else
from = solution;
}
solution = (from + to) * 0.5;
++iter;
}
if (iter == maxiter || Math.Abs(func(solution)) > Math.Abs(err))
{
Console.WriteLine("Failed to converge.");
solution = default(double);
}
return solution;
}

private double from;
private double to;
private double err;
}

public sealed class NewtonMethod:NonlinearEquationSolutionMethod
{
public NewtonMethod(double guess,  double err, NonLinearFunction func)
: base(func)
{
this.guess = guess;
this.err = err;
}

public double Solve()
{
//Newton's method requires computing the function's first derivative
int iter = 0;
double temp = func(guess);
double tempdev = numFirstDerivative(func, guess);

if (temp==0)
return guess;

//check if the derivative exists at the initial guess
//if not, we cannot use Newton's method with this guess
if (double.IsNaN(tempdev) || Math.Round(tempdev, 3) == 0.0)
{
Console.WriteLine("Derivative of this function is undefined or zero at this point.");
return default(double);
}

while(iter<=maxiter && Math.Abs(func(guess))>err && numFirstDerivative(func, guess)!=0)
{
System.Diagnostics.Debug.Assert(!double.IsNaN(temp), "Division by zero or another illegal operation has occurred.");
guess = guess - (temp/numFirstDerivative(func, guess));
++iter;
temp = func(guess);
}

if (double.IsNaN(func(guess)) || double.IsNaN(numFirstDerivative(func, guess)))
guess = default(double);

return guess;
}

//approximates the first derivative of a single-variabled function
private double numFirstDerivative(NonLinearFunction func, double x)
{
double delta = 0.01;
return (func(x - 2 * delta) - 8 * func(x - delta) + 8 * func(x + delta) - func(x + 2 * delta)) / (12 * delta);
}

private double guess;
private double err;
}

class EntryPoint
{
static void Main(string[] args)
{
BisectionMethod bm = BisectionMethod.SetBisectionMethod(-2.0, 2.0, 0.001, funcToSolve);
NewtonMethod nm = new NewtonMethod(0.5, 0.001, funcToSolve);

Console.WriteLine(bm.Solve());
Console.WriteLine(nm.Solve());
}

static double funcToSolve(double x)
{
return x*x-2.0;
}
}
}


For the function $f(x)=x^2-2$, the two roots are $\pm\sqrt{2}$. In my implementation, the bisection method finds the negative root, while Newton’s method finds the positive root, since the initial guess is closer to it. It takes 13 iterations to approximate the solution with the bisection method, but only 4 with Newton’s.

# Random Numbers Generator

At the heart of any Monte Carlo method are two fundamental building blocks: the generation of uniform random variables, and the transformation of the generated uniform draws into whichever distribution the task at hand requires, for example a normal one. The quality of the random number generator determines the quality of the final distribution.
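As a sketch of the second building block, here is one common choice of transformation, the Box-Muller transform (my choice for illustration; the post itself does not commit to a particular method), which maps two independent uniform draws to two independent standard normal draws:

```python
import math
import random

def box_muller(u1, u2):
    """Map two independent U(0,1) draws to two independent N(0,1) draws."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

random.seed(42)
samples = []
for _ in range(50000):
    z1, z2 = box_muller(random.random(), random.random())
    samples.append(z1)
    samples.append(z2)

# the sample mean and variance should be close to 0 and 1
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Note that the quality of the normals produced this way is only as good as the uniforms fed in, which is exactly the point made above.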

No matter how good the underlying random number generation algorithm is, the outcome is not truly random, since the algorithm that produces the numbers is fully deterministic. In fact, I believe that very few things in life are truly random. Even the seemingly random roulette wheel is influenced by physical properties such as friction, weight and skewness. So, let’s embrace the fact that we’ll only ever be able to achieve pseudo-randomness. This should be good enough for all practical purposes. Thus, I will stop writing ‘pseudo-random’ and use the word ‘random’ instead.

In this post I will show you some very basic algorithms for generating random numbers. If you do further research, you will find that the more complex algorithms are extensions of the basic ones.

Take int x = 1. We can use x to generate the next random number. How? Well, take, for example:

x_next = (25173*x + 13849) % 201

where % is the modulus operator. x_next (also an integer) is 28. Ok, 28 is kind-of random. Let’s repeat the above calculation with x=28: x_next becomes 118. If I keep repeating the same ‘substitute/calculate’ steps 10 times, I get:

28, 118, 16, 145, 106, 43, 34, 4, 172, 196

Well, that looks pretty random to me! I used this algorithm to generate 200 integers, which are shown in Figure 1.
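The recurrence is a one-liner in Python; this sketch reproduces the ten values above exactly:

```python
def lcg(x, a=25173, c=13849, m=201):
    """One step of the linear congruential recurrence x_next = (a*x + c) mod m."""
    return (a * x + c) % m

x = 1
sequence = []
for _ in range(10):
    x = lcg(x)
    sequence.append(x)

# first ten values starting from x = 1:
# 28, 118, 16, 145, 106, 43, 34, 4, 172, 196
```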

Before I point out a problem with this random sample, let me explain what is going on here. The generator I used is called a linear congruential generator, originally introduced by D. H. Lehmer in 1951. It has the general form

$x_{i+1}=(a \times x_{i}+c) \mod{m}$

The above produces a sequence of integers between 0 and m-1. If the required uniform variables should be in the unit interval, we can scale by 1/m. Referring back to our initial example, a, c and m are constants, and it turns out that their values are critical in determining the quality of the generated sequence. Looking at Figure 1 you should be able to spot that the sequence repeats itself exactly after only 65 iterations. Clearly, this is not a very good random number generator! Ideally, we want an algorithm that visits all m possible values before repeating itself. This condition is known as ‘having a full period’, and it is one of the standard criteria for selecting a random number generator.

In (1), we find the list of conditions a, c and m must satisfy to ensure we obtain a full period length (if c!=0):

• c and m are relatively prime (their only common divisor is 1);
• every prime number that divides m, also divides a-1;
• a-1 is divisible by 4, if m is;

So, the reason we did not obtain a full period is that the chosen constants do not meet these conditions. If we change m to 217, the conditions are satisfied and we obtain a full period of length 217.
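We can check this claim by brute force: starting from any seed, count the steps until the state first repeats. A small Python sketch (with our constants a = 25173 and c = 13849, m = 217 gives the full period):

```python
def lcg_period(a, c, m, x0=0):
    """Count steps until the LCG state returns to x0 (capped at m + 1 steps)."""
    x = (a * x0 + c) % m
    n = 1
    while x != x0 and n <= m:
        x = (a * x + c) % m
        n += 1
    return n

# a full-period generator visits every residue 0..m-1 exactly once per cycle
period = lcg_period(25173, 13849, 217)
```

With a full-period generator the state map is a bijection on the residues, so the cycle length is the same from any starting seed.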

The demonstrated random number generator is the basis for more complex generators. A good random number generator should be able to generate millions of numbers without repeating the sequence. If we are to stick to the int representation, we also have to take care not to overflow. Reminder: a System.Int32 variable can hold a maximum value of 2^31 - 1 = 2,147,483,647.

Many extensions to the linear congruential generator have been developed over the years, many of them combining several linear generators to obtain a much longer period. In C#, the System.Random class provides an easy way to generate random variables. The Random class can be initialised with or without a seed. When no seed is provided, the default seed is Environment.TickCount, which is the time in milliseconds since the computer was booted. The algorithm behind System.Random is based on a routine in “Numerical Recipes in C: The Art of Scientific Computing”, which, according to the authors, was proposed by D. Knuth. The algorithm has elements of the linear congruential generator, but is essentially a ‘subtractive method’ where an array of 55 pseudo-random integers is maintained. At class initialisation, the array is populated in two passes. First, the array is filled with values between 0 and max(System.Int32)-1, where the index of the array which receives each value is selected using

$x=(21 \times i)\mod{55}$

This is ‘the element’ of the linear congruential generator I mentioned above; here i is in [1,54]. Note that c is set to zero, so the three conditions listed above do not apply. In the second pass, the array’s values are further randomised by cycling through the array four times and reassigning each value with the difference between its current value and a value at another index, approximately half the array’s length away. Positivity is ensured by adding max(System.Int32) to any negative number.

Every time a user asks for the next random number, the difference between two values in the array is returned, and at the same time one of the values in the array is replaced with the value returned to the user. Again, positivity is ensured by adding max(System.Int32) to any negative number.
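To give the flavour of such a subtractive scheme (this is my own simplified sketch of a lagged subtractive generator, not the exact .NET implementation; the lags 55 and 24 and the LCG used for seeding are illustrative choices):

```python
M = 2 ** 31 - 1  # keep values in [0, M)

def subtractive_generator(seed, long_lag=55, short_lag=24):
    """Yield x_n = (x_(n-55) - x_(n-24)) mod M, with history seeded by a simple LCG."""
    # seed the 55-element history with an auxiliary LCG (an arbitrary choice here)
    state = []
    x = seed
    for _ in range(long_lag):
        x = (25173 * x + 13849) % M
        state.append(x)
    while True:
        nxt = (state[-long_lag] - state[-short_lag]) % M  # mod keeps the result positive
        state.append(nxt)
        state.pop(0)
        yield nxt

gen = subtractive_generator(1)
values = [next(gen) for _ in range(1000)]
```

Because the state is a whole array rather than a single integer, the period of such generators is vastly longer than that of a single linear congruential generator.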

The Random class has additional methods for simulating random numbers in some range, or even random doubles. The latter returns a double equal to a randomly generated integer scaled by 1/max(System.Int32).

References:
(1) P. Glasserman. “Monte Carlo Methods in Financial Engineering”, 2004. Springer-Verlag.

# Comparing Numerical Integration Methods

In this post I will re-introduce and compare four well-known numerical integration techniques for approximating definite integrals of single-variable functions (e.g. polynomials). The compared techniques, in order from least to most accurate, are:

• Rectangle rule;
• Mid-point rule;
• Trapezoid rule;
• and Simpson’s rule;

Let me begin with an informal description of the idea behind numerical integration. All numerical integration involves approximating the area under the function’s curve. For example, Figure 1 is a plot of f(x)=2+cos(2*sqrt(x)) for x in the interval between 0 and 2. The area under the curve can be approximated in different ways; the simplest is to break it up into many rectangles and compute their areas instead.

Figure 2 illustrates this simple ‘rectangle rule’ approach, where I show only three rectangles. Clearly, for this function, the rectangle rule will overestimate the area. This is because I broke up the area using each interval’s left endpoint: the area of the first rectangle is the function evaluated at x=a, scaled by the distance between a and b, where b is the starting point of the next rectangle. Had I used the right endpoints instead, as shown in Figure 3, I would have underestimated the area. If you are thinking that some kind of average of the two would give a more accurate approximation, you are absolutely correct. This, in fact, is what the trapezoid rule does. It is also clear that the more rectangles are used to break up the area, the more accurate the approximation.

It is probably a good time to formally define these numerical methods.
For any two adjacent points a and b, the area of a single rectangle is approximated as follows:

• Rectangle rule: $\int_a^b \! f(x) \, \mathrm{d}x \approx f(a)(b-a)$
• Mid-point rule: $\int_a^b \! f(x) \, \mathrm{d}x \approx f(\frac{a+b}{2})(b-a)$
• Trapezoid rule: $\int_a^b \! f(x) \, \mathrm{d}x \approx \frac{1}{2}[f(a)+f(b)](b-a)$
• and Simpson’s rule: $\int_a^b \! f(x) \, \mathrm{d}x \approx \frac{b-a}{6}[f(a)+4f(\frac{a+b}{2})+f(b)]$

Note that in the above definitions I am assuming that b>a. Below I am providing a rather naïve implementation of these four numerical integration methods in C#:

using System;

//functionality to perform several types of single-variable function numerical integration
namespace NumericalIntegration
{
public abstract class NumericalIntegration
{
public delegate double SingleVarFunction(double x);

public NumericalIntegration(double from, double to, double steps, SingleVarFunction func)
{
this.fromPoint = from;
this.toPoint = to;
this.numSteps = steps;
eps=0.0005;
h = (toPoint - fromPoint) / (numSteps + 1.0);
this.func=func;
}

public abstract double Integration();

protected double fromPoint;
protected double toPoint;
protected SingleVarFunction func;
protected double numSteps;
protected double eps;
protected double h;
}

public sealed class RectangleRule: NumericalIntegration
{
public RectangleRule(double from, double to, double steps, SingleVarFunction func)
:base(from,to,steps,func)
{ }

public override double Integration()
{
double a = fromPoint;
double value = func(fromPoint);
for (int i = 1; i <= numSteps; i++)
{
a = a + h;
value += func(a);
}
return h*value;
}
}
public sealed class MidPointRule: NumericalIntegration
{
public MidPointRule(double from, double to, double steps, SingleVarFunction func)
:base(from,to,steps,func)
{}

public override double Integration()
{
double a = fromPoint;
double value = func((2.0 * fromPoint + h) / 2.0);
for (int i = 1; i <= numSteps; i++)
{
a = a + h;
value += func((2.0 * a + h) / 2.0);
}
return h * value;
}
}

public sealed class TrapezoidRule: NumericalIntegration
{
public TrapezoidRule(double from, double to, double steps, SingleVarFunction func)
: base(from, to, steps, func)
{}
public override double Integration()
{
double a = fromPoint;
double value = (func(fromPoint)+func(fromPoint+h));
for (int i = 1; i <= numSteps; i++)
{
a = a + h;
value += (func(a) + func(a + h));
}
return 0.5 * h * value;
}
}

public sealed class SimpsonRule: NumericalIntegration
{
public SimpsonRule(double from, double to, double steps, SingleVarFunction func)
: base(from, to, steps, func)
{
h = (toPoint - fromPoint) / (numSteps * 2.0);
}

public override double Integration()
{
double valueEven = 0;
double valueOdd = 0;

for (int i = 1; i <= numSteps; i++)
{
valueOdd += func(fromPoint + this.h * (2.0 * i - 1));
if (i <= numSteps-1)
valueEven += func(fromPoint + this.h * 2.0 * i);
}
return (func(fromPoint)+func(toPoint)+2.0*valueEven+4.0*valueOdd)*(this.h/3.0);
}
}
}


In summary, the code defines an abstract NumericalIntegration class with two key members: a delegate pointing to the function over which we need to integrate, and an abstract Integration method which must be overridden by the deriving classes. The assignment of the integration parameters occurs through the base class constructor. However, in the case of the SimpsonRule class, I override the definition of h.

Please note that for the sake of this comparison, I will not attempt to estimate the number of rectangles or partitions needed to achieve some convergence criterion. However, I still set eps in the base class constructor, which could be used for this purpose.

Ok, so let’s evaluate the Integration method of each rule with the following parameters: a=0, b=2, number of partitions = 8. We’ll compare the performance on the following three functions:

• func1: $f(x)=2+\cos(2\times\sqrt{x})$
• func2: $f(x)=2+\sin(2\times\sqrt{x})$
• func3: $f(x)=0.4\times\ln(x^2)$

The last function’s lower bound is set to 0.1 instead of zero, since ln is undefined at zero.
Table 1 summarises the results of each numerical integration rule. The last column shows the analytic solution. Clearly, Simpson’s rule is the most accurate here.

Table 1

| func | rectangle | mid-point | trapezoid | Simpson’s | solution (to 5 d.p.) |
|---|---|---|---|---|---|
| 1 | 3.68415 | 3.45633 | 3.46733 | 3.46000 | 3.46000 |
| 2 | 5.41971 | 5.51309 | 5.45394 | 5.49218 | 5.49946 |
| 3 | -0.50542 | -0.21474 | -0.25244 | -0.22756 | -0.22676 |
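For the curious, the four composite rules are compact enough to sketch in Python as well (my own conventions here: n equal partitions, so the numbers will differ slightly from the C# above). Evaluating func1 on [0, 2] with n = 20 reproduces the analytic value of about 3.46:

```python
import math

def rectangle(f, a, b, n):
    """Composite left-endpoint rectangle rule with n equal partitions."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson's rule over 2n subintervals, mirroring the C# layout."""
    h = (b - a) / (2 * n)
    odd = sum(f(a + h * (2 * i - 1)) for i in range(1, n + 1))
    even = sum(f(a + h * 2 * i) for i in range(1, n))
    return (f(a) + f(b) + 2 * even + 4 * odd) * h / 3

f = lambda x: 2.0 + math.cos(2.0 * math.sqrt(x))  # func1
rect = rectangle(f, 0.0, 2.0, 20)
mid = midpoint(f, 0.0, 2.0, 20)
trap = trapezoid(f, 0.0, 2.0, 20)
simp = simpson(f, 0.0, 2.0, 20)
```

The accuracy ordering from the table shows up here too: the rectangle rule overshoots by several hundredths, the mid-point and trapezoid rules are within a few thousandths, and Simpson’s rule is accurate to several more digits.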

For the sake of completeness I provide the Main function code as well:

namespace TestNumericalIntegration
{
using NumericalIntegration;

class Program
{
static void Main(string[] args)
{
double from = 0.1;
double to = 2.0;
double steps = 20.0;

RectangleRule ri = new RectangleRule(from, to, steps, funcToIntegrate);
MidPointRule mi = new MidPointRule(from, to, steps, funcToIntegrate);
TrapezoidRule ti = new TrapezoidRule(from, to, steps, funcToIntegrate);
SimpsonRule si = new SimpsonRule(from, to, steps, funcToIntegrate);

double[] res = { ri.Integration(), mi.Integration(), ti.Integration(), si.Integration() };
foreach (double r in res)
Console.WriteLine(r.ToString("F5", System.Globalization.CultureInfo.InvariantCulture));
}

static double funcToIntegrate(double x)
{
//return 2.0+System.Math.Cos(2.0*System.Math.Sqrt(x));       //first function
//return 2.0 + System.Math.Sin(2.0 * System.Math.Sqrt(x)); //second function
return 0.4 * System.Math.Log(x*x);                       //third function

}
}
}


I hope you’ve found this useful. In my next post I will examine various approaches to numerical integration of multi-dimensional integrals.