The argument can be outlined as follows (I presume it is not hard to supply the philosophical and mathematical details):
Ex hypothesi, P(A → B) = P(B | A) = P(A & B) / P(A), assuming P(A) > 0.
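For concreteness, here is a minimal numerical sketch of the right-hand side. The model is my own hypothetical toy example, not part of the original argument: a fair six-sided die, with A = "the roll is even" and B = "the roll is at least 4".

```python
# Hypothetical toy model: a fair six-sided die.
from fractions import Fraction

P = {w: Fraction(1, 6) for w in range(1, 7)}   # uniform measure on faces 1..6
A = {2, 4, 6}                                  # event: the roll is even
B = {4, 5, 6}                                  # event: the roll is at least 4

def prob(event):
    """P(E): total mass of the outcomes in E."""
    return sum(P[w] for w in event)

# Ex hypothesi, P(A -> B) is identified with P(B | A) = P(A & B) / P(A):
print(prob(A & B) / prob(A))   # (1/3) / (1/2) = 2/3
```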
By the rule of total probability, we have
(*) P(A → B) = P(A → B | B) P(B) + P(A → B | ~B) P(~B)
where the propositions B and ~B are complementary (more generally, one may take any relevant set of mutually exclusive and collectively exhaustive propositions).
Consider multiple conditioning (i.e., iterated conditionalising): suppose we have P(A | B) and want to impose a further condition C; that is, we seek the probability of A given B, given C. In ill-formed notation (just to illustrate the idea!), what we want is P(A | B | C). Properly, this is P(A | B & C), which equals P(A & B & C) / P(B & C) – you can easily check this with the conditional probability formula.
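This identity is easy to check numerically. Here is a sketch on the same hypothetical die model, with a further event C = "the roll is 3, 4, or 5" (again my own choice): conditionalising on C first and then conditioning on B gives the same value as conditioning on B & C directly.

```python
from fractions import Fraction

P = {w: Fraction(1, 6) for w in range(1, 7)}   # fair die, as before
A, B = {2, 4, 6}, {4, 5, 6}
C = {3, 4, 5}                                  # hypothetical further condition

def mass(measure, event):
    """Total mass the measure assigns to the event."""
    return sum(p for w, p in measure.items() if w in event)

def conditionalise(measure, given):
    """The measure P(. | given), assuming `given` has positive mass."""
    z = mass(measure, given)
    return {w: measure[w] / z for w in given}

P_C = conditionalise(P, C)                 # condition on C first ...
lhs = mass(P_C, A & B) / mass(P_C, B)      # ... then on B: the informal P(A | B | C)
rhs = mass(P, A & B & C) / mass(P, B & C)  # condition on the conjunction instead
print(lhs, rhs, lhs == rhs)                # 1/2 1/2 True
```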
We apply multiple conditioning to P(A → B), with B and then ~B as the further condition. (This step tacitly assumes that the hypothesis P(A → B) = P(B | A) still holds after conditionalising, i.e., for the measures P(· | B) and P(· | ~B) as well.)
P(A → B | B) = P(B | A & B)
P(A → B | ~B) = P(B | A & ~B)
Substituting into (*), we get
P(A → B) = P(B | A & B) P(B) + P(B | A & ~B) P(~B)
Observe that, provided P(A & B) > 0 and P(A & ~B) > 0 so that both conditional probabilities are defined,
P(B | A & B) = P(B & A & B) / P(A & B) = P(A & B) / P(A & B) = 1
P(B | A & ~B) = P(B & A & ~B) / P(A & ~B) = P(⊥) / P(A & ~B) = 0
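Both degenerate values can be confirmed on the hypothetical die model, where P(A & B) and P(A & ~B) are indeed positive:

```python
from fractions import Fraction

P = {w: Fraction(1, 6) for w in range(1, 7)}   # fair die, as before
A, B = {2, 4, 6}, {4, 5, 6}
notB = set(P) - B                              # complement of B

def cond(event, given):
    """P(event | given) = P(event & given) / P(given)."""
    num = sum(P[w] for w in event & given)
    den = sum(P[w] for w in given)
    return num / den

print(cond(B, A & B))      # 1: conditioning on A & B makes B certain
print(cond(B, A & notB))   # 0: conditioning on A & ~B rules B out
```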
So (*) reduces to P(A → B) = 1 · P(B) + 0 · P(~B) = P(B), and hence, by the hypothesis, P(B | A) = P(B).
Therefore, the equation P(A & B) = P(A)P(B) holds and the propositions A and B are independent, whereas in general the probability of B may well depend on A; the hypothesis thus forces independence where there should be none.
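The hypothetical die model makes the absurdity concrete: there P(B | A) = 2/3 while P(B) = 1/2, so B plainly depends on A, contrary to the independence the argument forces.

```python
from fractions import Fraction

P = {w: Fraction(1, 6) for w in range(1, 7)}   # fair die, as before
A, B = {2, 4, 6}, {4, 5, 6}

def prob(event):
    return sum(P[w] for w in event)

print(prob(A & B) / prob(A))   # P(B | A) = 2/3
print(prob(B))                 # P(B)     = 1/2 != 2/3, so A and B are dependent
```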