The idea of Bonferroni is to make sure that the combined type I error probability, i.e., the probability of rejecting at least one $H_0$ if all tested $H_0$ are true, does not exceed the desired significance level $\alpha$, say 0.05. This is done using a worst-case computation, meaning that even if the tests are dependent in the worst possible way, the error probability $\le\alpha$ is still respected. This is only possible if the rejection probability for each individual test is very low; in fact, for $\alpha=0.05$ and $k=4$ tests, the Bonferroni p-value threshold for rejection (or, equivalently, one minus the adjusted confidence level) is $\alpha/k=0.05/4=0.0125$. It is much harder to achieve $p<0.0125$ than $p<0.05$, and this means that the Bonferroni-corrected tests have low power, i.e., a relatively low probability to reject even if the $H_0$ is actually false. In other words, the procedure is quite conservative.
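The worst-case (union bound) logic can be checked with a small simulation; here is a minimal sketch, assuming $k=4$ independent tests and uniformly distributed p-values under true null hypotheses (a hypothetical setup for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, k, reps = 0.05, 4, 100_000

# Under a true H_0, a p-value is uniformly distributed on [0, 1].
p = rng.uniform(size=(reps, k))

# Family-wise error rate: probability that at least one of the k
# tests rejects at the Bonferroni threshold alpha / k.
fwer = np.mean((p < alpha / k).any(axis=1))
print(fwer)  # close to 1 - (1 - alpha/k)**k, i.e. about 0.049, below alpha
```

With dependent tests the family-wise error rate can only be smaller than in this independent case, which is exactly why the correction is safe but conservative.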
I assume your original $\chi^2$-test was carried out at level 0.05, as a single test with no adjustment, so it is easier for the data to reject the $H_0$. It may well be that this was easy enough to come out significant even though none of the four tests for single proportions, each tested at the effective level 0.0125, was significant, as the latter are conservative.
I guess that your original $\chi^2$ p-value may have been smaller than 0.05 but larger than 0.0125, which would illustrate the problem well. However, this does not necessarily have to be the case; there is a further possible explanation.
The $\chi^2$-test statistic sums up deviations from what is expected under $H_0$ over all four proportions. A significant result means that the combined variation is too big compared to what is expected under $H_0$. This can happen because one of the four individual deviations is far too big, in which case that individual proportion may well also come out significant when running the post-hoc tests (conservatism of the Bonferroni correction aside, see above). It may however also be that all four deviations are slightly too big, but no single one stands out. In that case nothing significant may be seen when looking at the individual proportions in isolation; they may all deliver p-values that are small but bigger than 0.05, even though the overall $\chi^2$-test has $p<0.05$ from putting all four deviations together (which uses more information to find any problem with the $H_0$ than any single post-hoc test; the same argument of course applies to the Bonferroni threshold 0.0125).
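This situation can be reproduced with made-up counts (a hypothetical example, not your data): assume $n=400$ observations, an $H_0$ of four equal proportions of 0.25 each (expected count 100 per category), and all four observed counts slightly off:

```python
from scipy.stats import chisquare, binomtest

n = 400
observed = [118, 84, 116, 82]  # hypothetical counts, all slightly off 100

# Overall chi-squared goodness-of-fit test against equal expected counts
chi2_p = chisquare(observed).pvalue
print(chi2_p)  # significant at level 0.05

# Individual two-sided tests of each single proportion against 0.25
post_hoc_p = [binomtest(x, n, 0.25).pvalue for x in observed]
print(post_hoc_p)  # none below the Bonferroni threshold 0.0125
```

The overall test picks up the accumulated deviation across all four categories, while no individual deviation is large enough to clear the corrected threshold on its own.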
Note by the way that, as the proportions have to sum to 1, it is not possible for your null hypothesis to be truly violated for just one proportion. If one proportion is too large, there must be at least one other that is too small. It is however possible that the data single out just one proportion as significantly different from $H_0$ in the post-hoc tests, which may happen if one proportion is strongly wrong in one direction and all the others are "less wrong" (say, each of the other three by 1/3 of the difference) in the other direction. There may be enough data to find the problem with the one proportion in such a situation, but not enough to find the problem with the others. (Keep in mind that not rejecting never means that the $H_0$ is true.)
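This asymmetric scenario can also be sketched with hypothetical counts (again assuming $n=400$ and $H_0$ proportions of 0.25 each): one count is off by 36 in one direction, and the other three are each off by 12, i.e., 1/3 of that difference, in the other direction:

```python
from scipy.stats import binomtest

n = 400
observed = [136, 88, 88, 88]  # hypothetical: one strongly too large, three mildly too small

# Two-sided tests of each single proportion against 0.25
p_values = [binomtest(x, n, 0.25).pvalue for x in observed]
print(p_values)
# Only the first proportion rejects at the Bonferroni threshold 0.0125;
# the other three do not, although H_0 is wrong for them as well.
```

The counts still sum to $n$, so the $H_0$ is violated for all four proportions, yet the post-hoc tests only have enough evidence to flag the one large deviation.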