I have a quarterly time series with seasonality that I am trying to fit a SARIMA model to. The seasonal variation appears to be decreasing over time, and I am wondering how best to model this within a SARIMA framework. Here is the decomposed series:
My confusion is about when it is appropriate to take the first seasonal difference. In the literature, taking the first seasonal difference seems to be the standard way to handle seasonality. But if I do that, I am modelling the seasonal pattern as constant over time, which it does not appear to be. If I instead try to model the series without taking the seasonal difference, all reasonable models violate the stationarity condition that the AR coefficients lie between -1 and 1. Here are the four quarters plotted separately:

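Since the seasonal amplitude is shrinking over time, one common option (not necessarily what you ended up doing) is to transform the series first so the seasonality becomes roughly constant on the transformed scale, and only then difference and fit the SARIMA. A minimal sketch in R with the forecast package, where y is a simulated placeholder for your quarterly series:

    library(forecast)

    # placeholder: simulated quarterly series whose seasonal amplitude decays over time
    # (replace with your own ts object, e.g. y <- ts(your_data, frequency = 4))
    set.seed(1); t <- 1:80
    y <- ts(10 + 0.05*t + (3 - 0.03*t)*sin(pi*t/2) + rnorm(80, sd = 0.5), frequency = 4)

    # Box-Cox lambda; a value near 0 is essentially a log transform, which turns
    # multiplicative (shrinking or growing) seasonality into additive seasonality
    lam <- BoxCox.lambda(y)

    # fit on the transformed scale; forecasts are back-transformed automatically
    fit <- auto.arima(y, lambda = lam)

    # or specify a SARIMA by hand, e.g. ARIMA(0,1,1)(0,1,1)[4] on the transformed scale
    fit2 <- Arima(y, order = c(0, 1, 1), seasonal = c(0, 1, 1), lambda = lam)

    checkresiduals(fit)          # residual diagnostics
    plot(forecast(fit, h = 8))   # two years of quarterly forecasts

With the seasonality stabilised this way, taking one seasonal difference is usually no longer at odds with the amplitude changing in the original units.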
However, could this be due to lower power of the HEGY test? When I add back the part of the time series that I had removed for the pseudo-out-of-sample forecast, I get the result that, according to CH, I should use one seasonal difference; according to OCSB, I shouldn't; and according to HEGY, I should.
– Dididi Mar 19 '17 at 12:54
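As an aside on comparing the tests: in recent versions of the forecast package the same nsdiffs() call can be run with each test in turn, and the uroot package gives the full test output. A sketch, again with a simulated placeholder y:

    library(forecast)   # the "hegy" and "ch" options use the uroot package under the hood

    # placeholder quarterly series with decaying seasonal amplitude (use your own data here)
    set.seed(1); t <- 1:80
    y <- ts(10 + 0.05*t + (3 - 0.03*t)*sin(pi*t/2) + rnorm(80, sd = 0.5), frequency = 4)

    nsdiffs(y, test = "ocsb")   # seasonal differences suggested by OCSB
    nsdiffs(y, test = "ch")     # ... by Canova-Hansen
    nsdiffs(y, test = "hegy")   # ... by HEGY

    # full test output with statistics and p-values
    library(uroot)
    hegy.test(y)
    ch.test(y)

Disagreement among the three on a short sample (and a change in verdict when the held-out observations are added back) is not unusual, since they test different null hypotheses and have limited power.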
https://www.otexts.org/fpp/8/9 It seems that seasonality is checked for first and then ordinary stationarity. But I have seen other people do it the other way around. This is important for me because, when I take the seasonal difference first, ndiffs() says that I shouldn't take a second difference.
– Dididi Mar 21 '17 at 10:39
forecast::auto.arima in R. Also, note what I said above: the Canova-Hansen test will misbehave if you present it with a series that has a nonseasonal unit root or a time trend. So this provides some guidance on what you need to do first. – Richard Hardy Mar 21 '17 at 10:50
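On the ordering question: the approach in the FPP chapter linked above, and (as far as I understand it) what forecast::auto.arima does internally, is to settle the seasonal difference first and then choose the ordinary difference on the seasonally differenced series, which matches the observation that ndiffs() no longer asks for a second difference afterwards. A sketch of that order, with y again a simulated placeholder:

    library(forecast)

    # placeholder quarterly series (use your own data here)
    set.seed(1); t <- 1:80
    y <- ts(10 + 0.05*t + (3 - 0.03*t)*sin(pi*t/2) + rnorm(80, sd = 0.5), frequency = 4)

    # 1. decide the seasonal difference first
    D <- nsdiffs(y)

    # 2. then decide the ordinary difference on the seasonally differenced series
    y_sd <- if (D > 0) diff(y, lag = frequency(y), differences = D) else y
    d <- ndiffs(y_sd)            # KPSS test by default

    c(d = d, D = D)

    # auto.arima follows essentially this order internally
    summary(auto.arima(y))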