
I am trying to implement a vanilla European option pricer with Monte Carlo and compare its result to the Black-Scholes (BS) analytical price.

I noticed that as I increase the number of simulations (from 1 million to 10 million), the MC result starts to diverge.

Note that I deliberately use only one variance reduction technique: antithetic variables. I was hoping that by merely increasing the number of simulations, I would manage to increase the precision.

Can anyone please give me clues or pointers as to why my result diverges?

Included below is the C# code for the pricer:

using System;
using System.Threading.Tasks;
using MathNet.Numerics.Distributions;
using MathNet.Numerics.Random;

namespace MonteCarlo
{
    class VanillaEuropeanCallMonteCarlo
    {
        static void Main(string[] args)
        {
            const int NUM_SIMULATIONS = 10000000;
            const decimal strike = 50m;
            const decimal initialStockPrice = 52m;
            const decimal volatility = 0.2m;
            const decimal riskFreeRate = 0.05m;
            const decimal maturity = 0.5m;
            Normal n = new Normal();
            n.RandomSource = new MersenneTwister();


            VanillaEuropeanCallMonteCarlo vanillaCallMonteCarlo = new VanillaEuropeanCallMonteCarlo();

            Task<decimal>[] simulations = new Task<decimal>[NUM_SIMULATIONS];

            for (int i = 0; i < simulations.Length; i++)
            {
                simulations[i] = new Task<decimal>(() => vanillaCallMonteCarlo.RunMonteCarloSimulation(strike, initialStockPrice, volatility, riskFreeRate, maturity, n));
                simulations[i].Start();
            }

            Task.WaitAll(simulations);

            decimal total = 0m;

            for (int i = 0; i < simulations.Length; i++)
            {
                total += simulations[i].Result;
            }

            decimal callPrice = (decimal)(Math.Exp((double)(-riskFreeRate * maturity)) * (double)total / (NUM_SIMULATIONS * 2));

            Console.WriteLine("Call Price: " + callPrice);
            Console.WriteLine("Difference: " + Math.Abs(callPrice - 4.744741008m));
        }


        decimal RunMonteCarloSimulation(decimal strike, decimal initialStockPrice, decimal volatility, decimal riskFreeRate, decimal maturity, Normal n)
        {
            decimal randGaussian = (decimal)n.Sample();
            decimal endStockPriceA = initialStockPrice * (decimal)Math.Exp((double)((riskFreeRate - (decimal)(0.5 * Math.Pow((double)volatility, 2))) * maturity + volatility * (decimal)Math.Sqrt((double)maturity) * randGaussian));
            decimal endStockPriceB = initialStockPrice * (decimal)Math.Exp((double)((riskFreeRate - (decimal)(0.5 * Math.Pow((double)volatility, 2))) * maturity + volatility * (decimal)Math.Sqrt((double)maturity) * (-randGaussian)));
            decimal sumPayoffs = (decimal)(Math.Max(0, endStockPriceA - strike) + Math.Max(0, endStockPriceB - strike));
            return sumPayoffs;
        }
    }
}
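For reference, the constant 4.744741008 that the code compares against is the Black-Scholes closed-form price for these parameters. It can be reproduced with a short sketch (Python here rather than C#, using the standard closed-form call formula; this is my own illustration, not part of the question's code):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, sigma, r, t):
    # Black-Scholes price of a European call
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Parameters from the question: S0=52, K=50, sigma=0.2, r=0.05, T=0.5
price = bs_call(52.0, 50.0, 0.2, 0.05, 0.5)
print(price)
```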
balteo (edited by chrisaycock)

1 Answer


This is essentially the same as your previous question, and the issue is still the same: variability does not go away just because you use 100 million draws once. Compare the distribution of results of $N$ Monte Carlo simulations at $n_1 = 1,000,000$ with those at $n_2 = 10,000,000$. You will see a reduction in spread, but that does not imply that every single run gets a tighter answer.
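To illustrate the point numerically, here is a minimal Python sketch (mine, not from the answer) that repeats the antithetic Monte Carlo estimator many times at two sample sizes, scaled down from the question's 1e6/1e7 for runtime, and compares the spread of the resulting estimates. Parameters match the question: S0=52, K=50, sigma=0.2, r=0.05, T=0.5.

```python
import math
import random

S0, K, SIGMA, R, T = 52.0, 50.0, 0.2, 0.05, 0.5
DRIFT = (R - 0.5 * SIGMA ** 2) * T   # risk-neutral drift over [0, T]
VOL = SIGMA * math.sqrt(T)           # diffusion scale over [0, T]
DISC = math.exp(-R * T)              # discount factor

def mc_call(n_pairs, rng):
    """Antithetic Monte Carlo call price using n_pairs (z, -z) pairs."""
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        for s in (z, -z):            # antithetic pair
            st = S0 * math.exp(DRIFT + VOL * s)
            total += max(st - K, 0.0)
    return DISC * total / (2 * n_pairs)

rng = random.Random(42)
results = {}
for n in (10_000, 100_000):
    runs = [mc_call(n, rng) for _ in range(20)]
    mean = sum(runs) / len(runs)
    sd = (sum((x - mean) ** 2 for x in runs) / (len(runs) - 1)) ** 0.5
    results[n] = (mean, sd)
    print(f"n={n}: mean estimate {mean:.4f}, sd across runs {sd:.4f}")
```

Each individual run at the larger sample size is itself a random draw: the standard deviation across runs shrinks roughly like $1/\sqrt{n}$, but any single run can still land farther from the analytical price than a lucky run with fewer samples.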

Dirk Eddelbuettel