This problem is not amenable to null hypothesis significance testing because a test cannot tell you the probability that all 29 containers lack the coating. It is an estimation problem. Sample size for an estimate is calculated from a margin of error: the more you sample, the tighter the margin of error, and that is how a statistician controls the confidence of the estimate.
When every sampled container comes back negative, however, the 95% CIs based on the normal approximation to the maximum likelihood estimator will all be 0-0 (see the quick check after the definition below). A way around this is to use a median unbiased estimator (MUE): any value $\tilde{p}$ such that:
\begin{equation}
Pr(Y \ge n\tilde{p}) \ge 0.5 \quad \text{and} \quad Pr(Y \le n\tilde{p}) \ge 0.5
\end{equation}
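As a quick check of that first point, here is a minimal sketch of the normal-approximation (Wald) interval, assuming purely for illustration that 10 containers are sampled and none has the coating:

phat <- 0/10                                           # 0 coated out of 10 sampled
phat + c(-1, 1)*qnorm(0.975)*sqrt(phat*(1 - phat)/10)  # Wald 95% CI
# [1] 0 0  -- the interval collapses to 0-0 whenever phat is 0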
The small sample size correction for discrete data suggests using a "mid-p" (mid-point) cumulative probability function, in which $Pr(Y=n\tilde{p})$ is weighted by 0.5, so that a single value rather than a range of values satisfies the criterion (written out explicitly after this paragraph). Using a hypergeometric probability function accounts for the finite population. The sample size calculation is too complex to do analytically, so computing the estimates over a range of sample sizes and eyeballing the results is warranted. Using the code below, I generated MUEs and looked at the halfwidths of their confidence intervals.
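Concretely, as I read that correction, the mid-p tail probabilities that the hyperprob() function below computes are
\begin{equation}
Pr_{\text{mid}}(Y \le y) = Pr(Y < y) + \tfrac{1}{2} Pr(Y = y), \qquad Pr_{\text{mid}}(Y \ge y) = Pr(Y > y) + \tfrac{1}{2} Pr(Y = y)
\end{equation}
with $Y$ the hypergeometric count of coated containers among those sampled.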

As expected, the estimates are highly variable and the intervals undesirably wide until you sample a very sizable fraction of all the containers. It is a problem probably better suited to logistics than statistics.
# Mid-p hypergeometric tail probability: Pr(Y < y) + 0.5*Pr(Y = y) when
# lower.tail = TRUE, Pr(Y > y) + 0.5*Pr(Y = y) when lower.tail = FALSE.
# Averaging two phyper() calls avoids dhyper(), which does not deal with the
# fractional counts (nt*p) well.
hyperprob <- function(y, nd, nt, p, lower.tail = TRUE, offset = 0) {
  (phyper(y,     nt*p, nt*(1 - p), nd, lower.tail = lower.tail) +
   phyper(y - 1, nt*p, nt*(1 - p), nd, lower.tail = lower.tail))/2 +
    offset
}
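# Hypothetical inputs, purely for illustration (not given in the original
# problem beyond the 29 containers): nd of the nt containers are sampled and
# y of them turn out to have the coating.
nt <- 29   # total number of containers
nd <- 10   # number sampled (assumed)
y  <- 0    # number of sampled containers with the coating (assumed)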
# Median unbiased estimate: the p at which the upper-tail mid-p probability is 0.5
est <- if (y == 0) 0 else
  uniroot(hyperprob, c(0, 1), y=y, nd=nd, nt=nt, lower.tail=FALSE, offset=-0.5)$root
# 95% CI by inverting the mid-p tails; estimate and lower are 0 if every sample is 0
lower <- if (y == 0) 0 else
  uniroot(hyperprob, c(0, 1), y=y, nd=nd, nt=nt, lower.tail=FALSE, offset=-0.025)$root
upper <- uniroot(hyperprob, c(0, 1), y=y, nd=nd, nt=nt, lower.tail=TRUE, offset=-0.025)$root
c(estimate = est, lower = lower, upper = upper)
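To do the eyeballing over a range of sample sizes, you can wrap the interval calculation in a small helper and watch how the halfwidth shrinks. A rough sketch, again assuming an all-clear sample (y = 0), so the estimate and lower bound are pinned at 0 and the upper bound is the whole halfwidth; halfwidth_if_all_clear is just an illustrative name:

# Upper 95% bound (and hence CI halfwidth when y = 0) as a function of how
# many of the nt containers are sampled
halfwidth_if_all_clear <- function(nd, nt = 29) {
  uniroot(hyperprob, c(0, 1), y = 0, nd = nd, nt = nt,
          lower.tail = TRUE, offset = -0.025)$root
}
sapply(c(5, 10, 15, 20, 25), halfwidth_if_all_clear)

Watching how slowly that upper bound comes down as nd grows is what drives the conclusion above about needing to sample a sizable fraction of the containers.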