I will attempt a different approach to explaining this.
- Let's call the grey/blue area in the graph the exclusion region.
- Let's call the intervals returned by the acf function the confidence intervals.
So, the null hypothesis of the test behind the acf plot is that the coefficient is equal to 0, as stated before. Alpha controls the significance level, and in your case alpha=0.05. This means the exclusion region is the area where, with 95% confidence, we say the coefficient is not significantly different from 0 if it falls inside for the given data; if it falls outside that area, it is significantly different from 0 (under the null there is only a 5% chance of landing outside for the given data, so it is unlikely to be a coincidence and the null hypothesis is rejected).
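A minimal numpy sketch of this test on hypothetical data (the +/- 1.96/sqrt(n) band is the standard large-sample white-noise approximation most acf plots draw; your plotting library may use a slightly different formula):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=500)            # hypothetical white-noise series
n = len(x)

# Sample autocorrelation at lag k: r_k = c_k / c_0
xc = x - x.mean()
acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
r = acov / acov[0]

# Under H0 (true coefficient = 0), r_k is approximately N(0, 1/n),
# so the 95% exclusion region is +/- 1.96 / sqrt(n).
band = 1.96 / np.sqrt(n)

# Reject H0 for lags whose coefficient falls outside the band
significant = [k for k in range(1, 21) if abs(r[k]) > band]
print(band)
print(significant)
```

For genuine white noise, roughly 1 lag in 20 will land outside the band by chance alone, which is exactly what alpha=0.05 means.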
Now, for the second part: the confidence intervals show the range that gives you the defined confidence, based on the alpha value, for your given data. In your case, with alpha=0.05, this means that for the second observation (first lag) there is 95% confidence that the actual value falls within [0.25381333, 0.33221189]. To verify this, for the first observation (no lag) this range is [1, 1], as we are 100% sure that we know the present value (no surprise with 100% autocorrelation with itself).
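Here is a sketch of how such intervals come about, mimicking that style of output where each interval is centred at the estimated coefficient. The data and the simple 1/sqrt(n) standard error are assumptions for illustration, so the numbers will differ from yours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Hypothetical AR(1)-like series so lag 1 is clearly autocorrelated
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = 0.3 * x[t - 1] + eps[t]

xc = x - x.mean()
acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
r = acov / acov[0]

z = 1.96                              # two-sided 95% quantile
se = np.full_like(r, 1 / np.sqrt(n))  # simple large-sample standard error
se[0] = 0.0                           # lag 0 is always exactly 1: no uncertainty

lower, upper = r - z * se, r + z * se
print(lower[0], upper[0])   # [1, 1]: a series always matches itself
print(lower[1], upper[1])   # 95% interval around the lag-1 coefficient
```

Note that each interval is centred at the estimate r[k], not at zero; that is the key difference from the exclusion region.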
Lastly, what might help in this case is the link between those two values: the +/- interval of the exclusion region and the confidence intervals given by acf. For your first given data point (first lag):
(0.33221189 - 0.25381333) / 2 = 0.03919928, which is the +/- half-width of your exclusion region in the graph.
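Numerically, using the interval you reported for lag 1 (the half-width equality holds by construction, since both bands are built from the same standard error):

```python
lower, upper = 0.25381333, 0.33221189  # your reported 95% interval for lag 1

half_width = (upper - lower) / 2  # +/- of the exclusion region in the graph
centre = (upper + lower) / 2      # the estimated lag-1 coefficient itself

print(round(half_width, 8))  # 0.03919928
print(round(centre, 8))      # 0.29301261
```

So sliding the exclusion band from zero up to the estimate gives you exactly the reported confidence interval, and vice versa.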
In conclusion, the width of the confidence range around the actual value that we are hypothesis testing with, say, 95% confidence remains the same, as stated by @Sextus Empiricus, based on the statistics of the given sample. What changes, though, is the centre of the interval and the perspective:
- Are we testing whether the coefficient is significantly different from something (e.g. zero)? Then it should fall outside the exclusion region for the null hypothesis to be rejected.
- Do we want to know the region inside which the actual value should fall with a certain confidence (e.g. 95%)? Then we take the +/- interval that represents this confidence, centred at the estimate.
Think of it like the z-score for a normal distribution: +/- 1.96*std, for example, to get 95% confidence. Based on your sample's mean and std, the interval gets computed, and you either test a hypothesis or display a certain confidence (95% in this case) that the actual value lies around your sample's mean.
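The same z-score mechanics for a plain sample mean, as a sketch on hypothetical data (the true mean of 10.0 is an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=2.0, size=400)  # hypothetical sample

mean = sample.mean()
std_err = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean
z = 1.96                                             # two-sided 95% quantile

# Confidence-interval view: a 95% range centred at the estimate
ci = (mean - z * std_err, mean + z * std_err)
print(ci)

# Hypothesis-test view: is the mean significantly different from, say, 0?
z_stat = (mean - 0.0) / std_err
print(abs(z_stat) > z)  # True here: clearly rejects mean == 0
```

Same interval width both times; only the centre (the estimate vs. the hypothesised value) and the question change.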
I hope this helps; I tried to explain with several comparisons. Sorry if I repeated a few things.