ADF tests are used to assess the order of integration of a time series. With the function ur.df (from the urca package in R), three different specifications can be used, called "trend", "drift" and "none". These are explained quite well here.
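For concreteness, the three specifications correspond to calls like the following (sketched on a simulated random walk, since my actual data are not relevant to the question):

```r
library(urca)

set.seed(123)
y <- cumsum(rnorm(250))   # simulated random walk, just to illustrate the calls

summary(ur.df(y, type = "none",  lags = 8, selectlags = "AIC"))   # reports tau1
summary(ur.df(y, type = "drift", lags = 8, selectlags = "AIC"))   # reports tau2, phi1
summary(ur.df(y, type = "trend", lags = 8, selectlags = "AIC"))   # reports tau3, phi2, phi3
```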
Now, how would you explain the following changes in the test statistics when more and more differences are taken?
| Series (type = "trend") | tau3 | phi2 | phi3 |
|---|---|---|---|
| Original data | -3.020791 | 6.173046 | 4.907725 |
| First differences | -3.862442 | 5.086268 | 7.626563 |
| Second differences | -5.005805 | 8.376976 | 12.55419 |
| Third differences | -6.741858 | 15.19711 | 22.75069 |
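Something like the following produces output of this kind (again with a simulated series standing in for my data, and using the @teststat slot as one convenient way to pull out the numbers):

```r
library(urca)

set.seed(123)
y <- cumsum(rnorm(250))   # stand-in series; not my actual data

# Helper: tau3, phi2, phi3 from an ADF test with constant and trend
adf_trend_stats <- function(x) ur.df(x, type = "trend", lags = 8, selectlags = "AIC")@teststat

adf_trend_stats(y)                          # original data
adf_trend_stats(diff(y))                    # first differences
adf_trend_stats(diff(y, differences = 2))   # second differences
adf_trend_stats(diff(y, differences = 3))   # third differences
```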
The pattern is very clear: the more rounds of first differences are taken, the more strongly the null of a unit root can be rejected. At the same time, the test statistics phi2 and phi3 explode. I am not entirely sure why this is the case. How can the test statistic for phi2 increase after differencing?
From the other thread: "Rejecting this null implies that one, two, OR all three of these terms was NOT zero."
Do the statistics for phi2 and phi3 increase only because the series becomes more strongly stationary with each round of differencing? Or is this an artifact of the test interpreting the differenced data as containing some kind of trend?
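For reference, my understanding of the regression that ur.df fits with type = "trend", and of what the three reported statistics test (please correct me if this is wrong):

$$\Delta y_t = \beta_1 + \beta_2 t + \pi y_{t-1} + \sum_{j=1}^{k} \gamma_j \Delta y_{t-j} + \varepsilon_t$$

- tau3 is the t-statistic for $H_0\colon \pi = 0$ (a unit root);
- phi2 is the F-statistic for $H_0\colon \beta_1 = \beta_2 = \pi = 0$ (unit root, no drift, no trend);
- phi3 is the F-statistic for $H_0\colon \beta_2 = \pi = 0$ (unit root and no trend).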