Although the paper is well written and contains many very interesting results, I think one of its premises is flawed, and hence it is not clear that Tononi's criteria need to be amended.
First, Tegmark argues that a hypothetical state of matter with the minimum necessary conditions for self-awareness, hereafter called Perceptronium, must not merely have information and integration; it must also have autonomy, which arises from dynamics (future state is not the same as past state, though it is computed from it) and independence (insensitivity to external input).
But this is not, to my mind, very solid. It's an interesting idea to explore, and Tegmark does a good job of it in my opinion, but there is another equally plausible scenario: the system doesn't have much independence (it is highly sensitive to input) but is oblivious to this fact. And without a strong case for independence, the case for utility falters also.
One of the central claims in the paper is that there is an apparent conflict between the information and integration requirements: you need a large number of states, but this information must be integrated in a way that is not easily separable into independent components. The reason this appears problematic is that our information is stored in our brains, and the best-known example of a neural network with fully integrated states is a Hopfield network, where it turns out that with N neurons you can maintain only about N/7 distinct attractors: states toward which the whole network converges.
Although a Hopfield network brilliantly delivers on the idea of integration (each state is a global attractor; any neuron that starts going off on its own independent way gets pulled back into the state it's supposed to be in), and it has a measure of biological plausibility, the whole brain is certainly not actually a Hopfield network. The most obvious discrepancy is that in a Hopfield network every neuron provides input to every other, whereas in our brain one neuron provides input to only about 10,000 others out of roughly 10^11 (about a hundred-thousandth of one percent).
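To make the integration point concrete, here is a minimal toy sketch of a Hopfield network (my own illustration, not anything from Tegmark's paper): a few patterns are stored with the standard Hebbian rule, well under the ~N/7 capacity limit, and a corrupted version of a stored pattern is pulled back to its attractor by the whole network's dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100   # neurons (tiny compared to a brain, but enough to show the effect)
P = 5     # stored patterns, well under the ~N/7 capacity limit

# Random +/-1 patterns stored with the standard Hebbian rule.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)   # no self-connections

def recall(state, sweeps=20):
    """Asynchronous updates; each sweep visits every neuron once."""
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(N):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:   # fixed point reached: this is an attractor
            break
    return state

# Corrupt 10% of one stored pattern; the network pulls it back.
probe = patterns[0].copy()
probe[rng.choice(N, size=10, replace=False)] *= -1
restored = recall(probe)
print((restored == patterns[0]).mean())   # overlap with the stored pattern
```

Note how the correction is genuinely global: each neuron's update depends on every other neuron's state, which is exactly the full connectivity that the brain lacks.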
But let's suppose that a Hopfield network is a good model. If so, we get on the order of 10 billion possible attractors, which we can encode in about 34 bits (Tegmark's math is a little different). So consciousness can only flicker among 10 billion different states? That seems wrong somehow: surely there is more variety of experience than this, and so surely the criteria have to be altered somehow.
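The arithmetic behind that 34-bit figure, assuming roughly 10^11 neurons and the ~N/7 Hopfield capacity, is just:

```python
import math

neurons = 1e11            # rough neuron count for a human brain
attractors = neurons / 7  # ~N/7 Hopfield attractor capacity
bits = math.log2(attractors)
print(round(bits))        # about 34 bits, i.e. ~10 billion attractor states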
Tegmark acknowledges that there may be a way to encode more state, but doesn't really address the issue. Yet there is a very easy way to do it.
In computer science there is the idea of a hash code: a deterministic function that scrambles its input so much that even a small change in input results in a very different output. Let's imagine that instead of one monolithic Hopfield network, we have a million Hopfield networks of size 10,000, each holding about 10 bits of information. Now suppose we compute from these networks a hash code that combines their states in such a way that half of each network's states are disabled. We then get 9 bits of information per Hopfield network, each of the million networks depends rather strongly on all the others (because of this hash-code-based selection function), and yet at any instant we have 9 x 1,000,000 bits of state represented. The state might be dynamic: the hash coding invalidates certain states, which causes a new hash code to be computed, which invalidates other states, and so on. This dynamism may cause the network to settle into one of fewer than 2^(9,000,000) states, but it seems exceedingly unlikely that the count would collapse all the way from 9 million bits to 34 or whatever.
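The counting argument can be checked by brute force at toy scale. In this sketch (my own illustration; `hashlib.sha256` merely stands in for the hash-code-based selection function, and the sub-network "states" are bare integers rather than actual Hopfield attractors), a hash of a coarse summary of the other sub-networks selects which half of each sub-network's states remains valid, and the number of mutually consistent joint states is still 2^(K*(B-1)), not something log-sized:

```python
from itertools import product
import hashlib

K, B = 3, 3   # 3 toy sub-networks with 2**B = 8 states each

def allowed(i, others, state):
    # Hash a coarse summary (here: the high bits) of the other sub-networks'
    # states; the hash picks which half of sub-network i's states stays valid.
    summary = bytes([i] + [o >> 1 for o in others])
    h = hashlib.sha256(summary).digest()[0]
    return state % 2 == h % 2   # keep the half whose low bit matches the hash

consistent = sum(
    all(allowed(i, joint[:i] + joint[i + 1:], joint[i]) for i in range(K))
    for joint in product(range(2 ** B), repeat=K)
)

# Half of each sub-network's states survive, so 2**(K*(B-1)) joint states
# remain mutually consistent -- one bit lost per sub-network, no more:
print(consistent, 2 ** (K * (B - 1)))
```

Scaling the same count up to a million sub-networks of ~10 bits each gives the 9,000,000-bit figure above: integration via the hash coupling costs one bit per sub-network, not everything beyond 34 bits.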
And if we look at the architecture of the cortex, we find that it supports this kind of model better than a single universal Hopfield network: connectivity is dense locally and sparse over long range, which looks much more like many coupled sub-networks than one fully connected net.
So I don't think Tegmark provides a very compelling case that there's even a problem to solve, and without a problem to solve, the dynamics principle isn't needed. That said, intuitively it seems that Tegmark is right that dynamics are needed, or at least that we in practice are dynamic and therefore our kind of Perceptronium is one that contains dynamics, so I'm not arguing that the paper is wrong, either; just that there is plenty of wiggle room if you want to stick to Tononi's original criteria.