I am playing around with writing a daily stock price prediction algo in Python using a Monte Carlo/GBM methodology. I know there are many other questions on here about this topic (here, and here), but I'm super confused about the inputs and the choice of time delta needed to get sensible results. It seems that unless the time delta (dt) increment is suitably tiny, the results are garbage and vary wildly with the choice of dt. Obviously I can see that dt appears in the exponential component, so any change to it will have a big effect. My question is: how does one choose dt?
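For reference, the per-step update I'm using (as I understand the standard GBM discretization, which is what the example below implements) is

$$S_{t+\Delta t} = S_t \exp\!\left(\left(\mu - \tfrac{\sigma^2}{2}\right)\Delta t + \sigma\sqrt{\Delta t}\,Z\right), \qquad Z \sim \mathcal{N}(0,1),$$

so dt appears both in the drift term and under the square root in the diffusion term.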
I'm basically using the great example here, which uses NumPy (apologies to the non-Python people).
I want to model daily price movements and look at the results after a certain number of steps (n). I see a lot of examples use a dt of 1/252 (the number of trading days in a year), then look at the n-th index of each simulation's array to see the values.
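Stripped down, my setup looks roughly like this (the function name and structure are mine, adapted from that example):

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, dt, n_steps, n_paths, seed=None):
    """Simulate GBM price paths; returns an (n_paths, n_steps) array."""
    rng = np.random.default_rng(seed)
    # one standard normal shock per path per step
    z = rng.standard_normal((n_paths, n_steps))
    # per-step log-return under the GBM discretization above
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    # cumulative sum of log-returns, exponentiated, gives the price paths
    return s0 * np.exp(np.cumsum(log_returns, axis=1))
```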
Why does dt need to be 1/252 to model daily movements? Can we use 1/50, say? Does dt have to be 'suitably tiny'?
As an example, let's use FB. It is trading at 118.72, and I want to know the probability of it being above 125 after 60 trading days. I will run 10,000 paths. FB's standard deviation is 0.12.
Using a dt of 1/252, and looking at the 60th value of each result, gives me 1912 paths with a value above 125, so a 19% chance.
Using a dt of 1/50 gives me 3294 paths, a 33% chance.
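Here is roughly how I'm getting those two numbers (I'm assuming zero drift here purely for illustration, since I haven't settled on a mu; exact counts will vary with the seed):

```python
import numpy as np

s0, sigma, target = 118.72, 0.12, 125.0
n_steps, n_paths, mu = 60, 10_000, 0.0  # mu = 0 assumed for illustration

for dt in (1 / 252, 1 / 50):
    rng = np.random.default_rng(0)
    z = rng.standard_normal((n_paths, n_steps))
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    # price after 60 steps for each path
    final = s0 * np.exp(log_returns.sum(axis=1))
    print(f"dt = {dt:.4f}: P(S_60 > {target}) = {np.mean(final > target):.2%}")
```

The only thing that changes between the two runs is dt, yet the estimated probability changes substantially.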
Very confused.
Apologies to all the quants on here if this is a stupid question - I'm primarily a coder.