There are two different distributions in play - the distribution of your data (or strictly speaking, the hypothesised distribution of the population they are drawn from), and the distribution of your test statistic under the assumption that the null hypothesis is correct. This subtlety often catches people out when they are new to the concept: in fact it is usually the case that the null distribution of your test statistic is quite different from the (hypothesised or actual) distribution of your data.*
Using a chi-squared goodness-of-fit test means that if your null hypothesis is true, then your test statistic, $\chi^2 = \sum_{i=1}^n \frac{(O_i - E_i)^2}{E_i}$, would follow a chi-squared distribution (at least, approximately - you may hear it called "asymptotic", i.e. for large samples it should be close enough to chi-squared for practical purposes). This is why you use the chi-squared distribution tables. Remember that $\chi^2$ is larger if your data are a bad fit to the hypothesised distribution, since in that case the squared differences between observed and expected frequencies, the $(O_i - E_i)^2$ in the numerators, are large. So if $\chi^2$ exceeds the critical value in the tables, you have significant evidence against the null hypothesis, in the sense that a $\chi^2$ value that large, and hence a fit that poor, would be unlikely if the hypothesised model were true.
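If you want to check the arithmetic yourself, here is a minimal Python sketch. The observed counts are made-up numbers for a die rolled 120 times, with a fair-die null; your own table and null model will differ.

```python
import numpy as np
from scipy.stats import chisquare, chi2

# Hypothetical observed counts for 120 rolls of a die (made-up data for illustration)
observed = np.array([18, 22, 21, 19, 24, 16])
expected = np.full(6, observed.sum() / 6)   # 20 per face under the fair-die null

# Test statistic: sum over cells of (O_i - E_i)^2 / E_i
stat = ((observed - expected) ** 2 / expected).sum()

# Compare against the chi-squared distribution with k - 1 = 5 degrees of freedom
df = len(observed) - 1
critical = chi2.ppf(0.95, df)   # the 5% critical value you would read from the tables
p_value = chi2.sf(stat, df)     # upper-tail probability of a statistic at least this large

print(stat, critical, p_value)
print(chisquare(observed, expected))   # scipy's built-in version gives the same answer
```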
None of this means that your data are chi-squared distributed, or that you expect them to be. The expected frequencies $E_i$ used in the $\chi^2$ calculation are premised on the population having whatever distribution is specified by $H_0$.
We can look at this in more detail. Suppose that $H_0$ is true, so the expected frequencies are "correct" (they may not match the observed frequencies precisely, but if we were to take hundreds of samples and tabulate each one, then in each category the average of our observed frequencies should be very close to the expected one). Then the quantity $\frac{O_i - E_i}{\sqrt{E_i}}$ behaves approximately like a z-score for that cell, and the sum over all the cells, $\sum_{i=1}^n \frac{(O_i - E_i)^2}{E_i}$, is, roughly, a sum of squared z-scores. Perhaps you already know that a chi-squared random variable with $\nu$ degrees of freedom, $\chi^2_\nu$, is the sum of the squares of $\nu$ independent standard normal variables. You may now be able to see that, so long as your data were drawn from a population whose distribution matched your null model, your test statistic will approximately follow a chi-squared distribution.
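You can watch this happen by simulation: repeatedly draw samples from a hypothesised null model, form the cell "z-scores" $(O_i - E_i)/\sqrt{E_i}$ for each sample, and look at how the sum of their squares is distributed. A rough sketch, where the null probabilities and sample size are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # hypothetical null model (made-up probabilities)
n, reps = 200, 10_000
expected = n * probs

stats = np.empty(reps)
for r in range(reps):
    observed = rng.multinomial(n, probs)             # a sample drawn from the null model itself
    z = (observed - expected) / np.sqrt(expected)    # cell-by-cell "z-scores"
    stats[r] = (z ** 2).sum()                        # the chi-squared test statistic

# Compare simulated quantiles with those of a chi-squared distribution
# (4 degrees of freedom, one fewer than the 5 cells - see the next paragraph for why)
for q in (0.50, 0.90, 0.95, 0.99):
    print(f"{q:.0%}: simulated {np.quantile(stats, q):.2f}, chi-squared(4) {chi2.ppf(q, 4):.2f}")
```

The quantiles of the simulated statistics line up closely with the chi-squared ones, even though the data themselves were drawn from a perfectly ordinary multinomial model.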
Things are actually slightly more complicated - since the total of the expected frequencies equals the total of the observed frequencies, the total of $O_i - E_i$ must be zero. Hence the value of $O_i - E_i$ in the last cell is completely determined by what happened in the previous cells, so our "z-scores" are not quite independent. Fortunately we can compensate for this by subtracting one from the degrees of freedom, which explains why in your case, with 5 cells in your table and hence 5 values of $\frac{(O_i - E_i)^2}{E_i}$ in your sum, you'd compare your test statistic to the critical value listed in the tables for $\chi^2_4$. If your test statistic is above the critical value, this tells you that a sum of squared z-scores would be unlikely to come out so high. Were your null hypothesis true, your test statistic should have behaved like such a sum, so the fact that it came out so high constitutes evidence against the null hypothesis - i.e. it suggests your population did not follow the hypothesised distribution.
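For your 5-cell case the decision step then looks like this; the test statistic value below is just a placeholder, so substitute your own:

```python
from scipy.stats import chi2

df = 5 - 1                      # 5 cells, minus 1 for the "totals must match" constraint
critical = chi2.ppf(0.95, df)   # the 5% critical value you would read from the tables
stat = 11.3                     # hypothetical test statistic (substitute your own value)

print(f"critical value for chi-squared(4) at the 5% level: {critical:.3f}")   # about 9.49
print("reject H0" if stat > critical else "fail to reject H0")
print(f"p-value: {chi2.sf(stat, df):.4f}")   # probability of a value at least this large under H0
```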
$*$ The very first hypothesis test that many people learn is the Z test for the mean, where the data are sampled from a normal distribution of known variance $\sigma^2$ and the null hypothesis is $H_0: \mu = \mu_0$. In this case, all three of the population distribution, the sample mean and the z-score (the test statistic for this test) are normally distributed. But this is rarely true for hypothesis tests in general. Moreover, for the Z test, assuming the null hypothesis is true, their distributions are $X \sim \mathcal{N}(\mu_0, \sigma^2)$, $\bar{X} \sim \mathcal{N}(\mu_0, \frac{\sigma^2}{n})$ and $Z \sim \mathcal{N}(0, 1)$. So on closer inspection we see that, even here, the distributions of the data and the test statistic are quite different after all.
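Even in that familiar case the difference is easy to verify by simulation; a quick sketch with made-up values of $\mu_0$, $\sigma$ and $n$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, sigma, n, reps = 10.0, 2.0, 25, 100_000   # hypothetical values for illustration

# Draw many samples under H0 and form the sample mean and z-score for each
samples = rng.normal(mu0, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
z = (xbar - mu0) / (sigma / np.sqrt(n))

print(samples.mean(), samples.std())   # roughly 10 and 2:   data    ~ N(mu0, sigma^2)
print(xbar.mean(), xbar.std())         # roughly 10 and 0.4: mean    ~ N(mu0, sigma^2 / n)
print(z.mean(), z.std())               # roughly 0 and 1:    z-score ~ N(0, 1)
```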