It depends... But on what exactly?
The answer depends on the assumptions that you make.
1. If I read your question most literally: you know all data values.
In that case, there is no need for bounds (minimum or maximum), as you can simply calculate the variance of the data values in the array with:
$$ \text{var}(\mathbf{x}) = \frac{1}{N} \sum_{i=1}^N (x_i - \bar{x})^2 \, . $$
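For completeness, a minimal Python sketch of that calculation (the data array is made up for illustration); note that `np.var` uses the same $1/N$ convention by default:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # hypothetical data

var_x = np.mean((x - x.mean()) ** 2)  # (1/N) * sum((x_i - mean)^2)
print(var_x)                          # 4.0
print(np.var(x))                      # same value: np.var defaults to ddof=0
```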
2. Now suppose you do not know any of the values, only that there are $N$ of them. In other words: you have not seen the sample, but you know the sample size.
Then the answer depends on what you assume about where the sample came from, i.e. the population.
2.1 If you make no assumptions about the population (equivalently: the underlying distribution), then you cannot say anything about an upper bound for the sample variance.
Take the example of a $t$-distribution with $\nu \le 2$ degrees of freedom. The population variance is infinite, and the sample variance cannot be bounded from above (unless $N = 1$). Why? Because for every sample you provide, I can increase its variance by pushing the minimum and maximum values farther from the mean.
Please note that the same argument holds for a standard normal distribution! Even though it has population variance equal to $1$, one can create samples with arbitrarily large sample variance.
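A toy numerical check of this argument (the seed, sample size, and shifts are arbitrary choices): start from a standard normal sample and push its largest value farther from the mean; the sample variance grows without limit.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)      # a sample from N(0, 1)

for shift in [0, 10, 1_000, 100_000]:
    y = x.copy()
    y[y.argmax()] += shift        # push the maximum farther from the mean
    print(shift, np.var(y))       # the sample variance increases without bound
```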
2.2 If you assume that the population "lives on" bounded support, then Dilip Sarwate's answer will suffice: on support $[0, \, c]$ the sample variance (as defined in point 1) is at most $c^2 / 4$, attained by placing half of the observations at $0$ and half at $c$; for odd $N$ the maximum is $c^2/4$ multiplied by $(N^2 - 1)/N^2$.
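A quick sanity check of that maximum (the values of $c$ and $N$ are arbitrary), using the $1/N$ variance from point 1 on the extremal sample with the observations split between the two endpoints:

```python
import numpy as np

c = 3.0
for N in [10, 11]:                            # one even, one odd sample size
    k = N // 2
    x = np.array([0.0] * (N - k) + [c] * k)   # extremal sample on [0, c]
    bound = c**2 / 4 * (1 if N % 2 == 0 else (N**2 - 1) / N**2)
    print(N, np.var(x), bound)                # the two numbers agree
```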
P.S. Since the variance is essentially a weighted sum (integral) of non-negative terms (integrand), it is non-negative itself and bounded from below by $0$. I therefore concentrated on the upper bound in my answer.