Statistical significance is a way of talking about the uncertainty associated with using the results of a random sample to make predictions about the characteristics of the larger population that sample was drawn from. If you aren't using a sample to generalize about a population, then all of this is irrelevant.
So if you actually ran an election and counted the votes, and A got 5,000,001 votes and B got 4,999,999, then A wins. Statistics of any kind doesn't come into the discussion at all.
However, if you wanted to predict who WOULD win the election before it happened, then you could try asking 100 RANDOM citizens who they would vote for. Let's say you find that 55 of them say they will vote for A and 45 say they will vote for B. How much should you trust that this result, which comes from just 100 people, actually reflects what's going on in the entire population? Maybe you got "unlucky" with your 100 random people and just happened to draw a higher share of A supporters than actually exists in the population.
It's that question that "statistical significance" is trying to answer. Sure it LOOKS like A is winning in the survey we did, but is the difference between A's and B's votes just due to the random error associated with using a sample, or is it "statistically significant"? (And if the sample of people isn't random, then none of this works.)
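To see how plausible that "unlucky" scenario actually is, here's a minimal Python sketch. The numbers are assumptions for illustration: it supposes the population is TRULY split 50/50 and simulates drawing samples of 100 voters, counting how often a sample shows 55 or more A supporters anyway.

```python
import random

# Assumed for illustration: the TRUE population is split exactly 50/50.
# How often does a random sample of 100 voters show 55+ A supporters
# purely by chance?
TRIALS = 100_000
SAMPLE_SIZE = 100

hits = 0
for _ in range(TRIALS):
    # Each simulated voter supports A with probability 0.5.
    a_votes = sum(random.random() < 0.5 for _ in range(SAMPLE_SIZE))
    if a_votes >= 55:
        hits += 1

print(f"P(sample shows >= 55 for A | true 50/50): {hits / TRIALS:.3f}")
# Comes out around 0.18 -- roughly 1 in 5 samples look this lopsided
# even when the race is actually tied.
```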
If that's really what you're interested in, then there are lots of different ways to calculate statistical significance for a result like this. For example, if you only had two candidates, you could do a binomial test to check whether the distribution of votes is "significantly different" from 50/50. Or, more generally, you could just calculate 95% confidence intervals around the estimate for each candidate and see whether they overlap. That isn't a formal statistical test, but it's essentially what's going on when pollsters refer to a race being "within the margin of error," and it relies on the same assumptions about the reliability of random samples.
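Here's a rough sketch of both approaches applied to the 55/45 survey above, assuming Python with scipy installed (`binomtest` needs scipy 1.7 or newer):

```python
from math import sqrt
from scipy.stats import binomtest  # requires scipy >= 1.7

n = 100        # sample size from the survey above
a_votes = 55   # votes for candidate A in the sample

# 1) Binomial test: is 55 out of 100 "significantly different" from 50/50?
result = binomtest(k=a_votes, n=n, p=0.5, alternative="two-sided")
print(f"binomial test p-value: {result.pvalue:.3f}")  # ~0.37, not significant

# 2) Overlapping 95% confidence intervals (normal approximation).
def ci_95(p_hat, n):
    margin = 1.96 * sqrt(p_hat * (1 - p_hat) / n)  # the "margin of error"
    return (p_hat - margin, p_hat + margin)

print(f"A: {ci_95(0.55, n)}")  # roughly (0.45, 0.65)
print(f"B: {ci_95(0.45, n)}")  # roughly (0.35, 0.55)
# The intervals overlap, so this race is "within the margin of error."
```

Notice that both approaches agree here: a 55/45 split from only 100 people isn't strong enough evidence to conclude A is really ahead.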