There are two questions here, and although I don't disagree with the comments, I think the tone could be friendlier and a more helpful response could be given.
(1) The first is a basic question from a beginner who wants to understand the puzzle of sample size determination. We should be honest and admit that it is a perplexing problem and the answer seems almost like circular reasoning.
(2) The second is a simple question with a simple answer.
Let me address (2) first, since it is simple. In practice you never know the true variance; what you do is guess at it. You can check how sensitive the answer is by letting the population variance vary over a range of plausible values. Plausible values might come from the literature on similar studies, from your own pilot study, or from a two-stage adaptive design in which the first stage is used primarily to determine whether you need additional data and, if so, how much. The first stage of an adaptive design is similar to a pilot study, but the statistics behind it and the method for the final determination of the sample size are more complex.
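To make the sensitivity idea concrete, here is a minimal Python sketch using the standard normal-approximation formula $n = (z_{1-\alpha/2}\,\sigma/E)^2$ for estimating a mean to within a margin of error $E$. The margin of error and the range of $\sigma$ values below are illustrative placeholders, not recommendations.

```python
# Sketch: how the required sample size reacts as the guessed population
# standard deviation varies over a range of plausible values.
# Formula: n = (z * sigma / E)^2 for estimating a mean with margin of error E
# at 95% confidence. All numbers below are purely illustrative.
import math
from scipy.stats import norm

E = 2.0                       # desired margin of error (hypothetical choice)
z = norm.ppf(0.975)           # two-sided 95% critical value, about 1.96

for sigma in [5.0, 7.5, 10.0, 12.5]:        # plausible values for sigma
    n = math.ceil((z * sigma / E) ** 2)
    print(f"sigma = {sigma:5.1f}  ->  n = {n}")
```

Seeing how fast $n$ grows as $\sigma$ increases tells you how much a rough guess at the variance actually matters for your study.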
Now about (1): determining the sample size (or a formula for it) when comparing a hypothesized value of a parameter with the true value involves knowing something called the effect size, a normalized difference between the hypothesized value and the true value. But knowing that requires knowing the true value, and that was your original problem! You want a good estimate of the parameter; if you knew the true value you would not have a problem and would not need to take a sample to estimate it. It is natural, as a first reaction, to see this as a circular argument that says determining the sample size is impossible.
But that is not really so. What you really want to do is specify how big a difference between the hypothesized value and the true value (in terms of effect size) would be large enough for you to want to declare it significant. You then pretend that this effect size holds and compute the probability of rejecting the null hypothesis under it. That probability is a function of the sample size and of the critical value for your test statistic; the critical value is determined by the significance level (a.k.a. type I error) you set for the test. This you can calculate, but it does depend on some unknowns that you have to guess at, such as the population standard deviation discussed in the answer to question (2).
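As a rough illustration, the usual normal-approximation sample-size formula for a two-sided one-sample test is $n = \big((z_{1-\alpha/2} + z_{1-\beta})/d\big)^2$, where $d = \Delta/\sigma$ is the effect size. Here is a short sketch; every numerical choice (difference, sigma, alpha, power) is an illustrative guess, not a prescription.

```python
# Sketch: sample size for a two-sided one-sample z-test via the usual
# normal approximation. All inputs below are illustrative guesses.
import math
from scipy.stats import norm

delta = 3.0      # smallest difference you would care to declare significant
sigma = 10.0     # guessed population standard deviation (see question (2))
alpha = 0.05     # significance level (type I error)
power = 0.80     # desired probability of rejecting H0 at this effect size

d = delta / sigma                     # normalized effect size
z_alpha = norm.ppf(1 - alpha / 2)     # critical value of the test
z_beta = norm.ppf(power)              # quantile corresponding to the power
n = math.ceil(((z_alpha + z_beta) / d) ** 2)
print(n)                              # 88 with these particular guesses
```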
This then describes sample size determination for hypothesis testing. A similar thing can be done for confidence intervals, where the effect size and the type I error are replaced by a specified interval width and a confidence level, respectively.
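If it helps to see the parallel, the usual normal-approximation formulas (again assuming a known or guessed $\sigma$) are

$$ n_{\text{test}} = \left(\frac{z_{1-\alpha/2} + z_{1-\beta}}{\Delta/\sigma}\right)^{2}, \qquad n_{\text{CI}} = \left(\frac{2\, z_{1-\alpha/2}\, \sigma}{W}\right)^{2}, $$

where $\Delta$ is the difference you want to be able to detect, $W$ is the desired total width of the interval, and $1-\alpha$ is the confidence level. The effect size in the testing formula plays the same role that the specified width plays in the interval formula.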