Apologies in advance: I've never (seriously) studied statistics and I'm out of my depth, so this may not be a simple question.
I wrote a program in Java to determine the likelihood of a Tenhou in Mahjong - it's basically just a very rare hand. I set it to run batches of simulated hands, a million at a time. It deals out tiles; if the hand is a Tenhou, that's a pass, otherwise it's a fail. Sometimes there is only one Tenhou out of a million hands. Other times there are ten or more out of a million. Even if I run it for, say, 50 million hands, the results still vary quite a bit.
I'd like to get an accurate probability. Is there some kind of formula or principle that I can apply? Even a link to an article to read would be helpful - I just don't know where to start.
Edit: Imagine you are playing five-card poker. You are dealt five cards. If the hand isn't a royal flush, you shuffle the cards and try again. That's essentially all my program is doing, except it repeats the process a million times per run, over several runs.
To be more specific: it shuffles the 136 tiles in a Mahjong set and deals out 14 of them. If those 14 don't constitute a winning hand, it shuffles everything again and deals a new hand of 14. It does that a million times per run, over several runs. So it's not simulating a real game with many different variables - there is just one random variable (the hand that is dealt) and one condition to check (whether the hand is a winning hand).
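For reference, here is a stripped-down sketch of what that loop looks like. The actual winning-hand check is long, so `isWinningHand` below is just a placeholder for it, and the tile representation (integers 0-135) is only for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TenhouSimulation {

    // Placeholder for the real Tenhou / winning-hand check, which is omitted here.
    static boolean isWinningHand(List<Integer> hand) {
        return false;
    }

    public static void main(String[] args) {
        final int TRIALS_PER_RUN = 1_000_000;
        final int RUNS = 10;

        // Represent the 136 tiles as integers 0..135 (34 tile kinds x 4 copies).
        List<Integer> wall = new ArrayList<>();
        for (int t = 0; t < 136; t++) {
            wall.add(t);
        }

        for (int run = 1; run <= RUNS; run++) {
            int wins = 0;
            for (int trial = 0; trial < TRIALS_PER_RUN; trial++) {
                Collections.shuffle(wall);                 // reshuffle the whole wall
                List<Integer> hand = wall.subList(0, 14);  // deal 14 tiles
                if (isWinningHand(hand)) {
                    wins++;
                }
            }
            System.out.println("Run " + run + ": " + wins + " winning hands out of " + TRIALS_PER_RUN);
        }
    }
}
```

Each run prints its own count of winning hands, and those per-run counts are the numbers that vary so much.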
My issue is that when I run that same process over and over, the count sometimes varies by a factor of ten or more - that is, one run might produce a single winning hand out of a million deals, while another produces ten or more. My question is: how do I reconcile a spread that large? Please let me know if more clarification is needed.