I recently inherited a large body of legacy code that solves a very stiff, transient problem. I would like to demonstrate that the spatial and temporal step sizes are small enough that the qualitative nature of the computed solution would not change if they were decreased. In other words, I'd like to show that the solution is "converged" in a qualitative sense. Since I can explicitly set the spatial mesh size, that part is straightforward. However, as the code uses automatic time-step size control, I cannot set the time step size directly.
The algorithm adjusts the time step between two bounds based on the number of Jacobian iterations needed to reach an error tolerance over the last $n$ time steps. The fact that it uses Jacobian iteration makes me fairly certain it is some sort of implicit scheme, but I cannot be absolutely certain. It does not account for the error it sees in the current time step, which occasionally causes it to hit the iteration limit (maybe a dozen times over the course of several thousand time steps, almost always during the most dynamic portions of the simulation). In my current runs, I am setting the time-step bounds nearly three orders of magnitude apart ($10^{-13}$ to $5 \cdot 10^{-11}$).
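To make sure I am describing the behavior correctly, here is a minimal sketch of the kind of iteration-count-based controller I *believe* is in use; the function and parameter names (`propose_dt`, `target_iters`, `max_ratio`, the window length `n`) are my own, not the legacy code's:

```python
from collections import deque

def propose_dt(dt, iter_history, target_iters=5,
               dt_min=1e-13, dt_max=5e-11, max_ratio=2.0):
    """Adjust dt from the recent nonlinear-iteration counts.

    If the solver has recently needed more iterations than the target,
    shrink the step; if fewer, grow it. The change is capped by
    max_ratio and the result is clamped to [dt_min, dt_max].
    """
    avg_iters = sum(iter_history) / len(iter_history)
    # Fewer iterations than the target -> the step is "easy", so grow it;
    # more iterations than the target -> shrink it.
    ratio = target_iters / avg_iters
    ratio = min(max(ratio, 1.0 / max_ratio), max_ratio)
    return min(max(dt * ratio, dt_min), dt_max)

# Keep a rolling window of the last n iteration counts.
n = 5
history = deque(maxlen=n)
dt = 1e-12
for iters in (4, 6, 9, 12, 7):   # iteration counts reported by the solver
    history.append(iters)
    dt = propose_dt(dt, history)
    print(f"iters={iters:2d}  dt={dt:.3e}")
```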
In these runs, I have control over the time-step bounds, the number of past time steps used to choose the current step, the maximum time-step change (as ratios), the target number of Jacobian iterations, the maximum number of iterations, and the error bound. I would appreciate it if someone could put me on the right path toward analyzing time-step independence, or at the very least toward identifying which algorithm is being used.
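For reference, the kind of check I have in mind is sketched below: successively tighten the upper time-step bound (forcing the controller toward smaller steps) and compare a quantity of interest between runs. The `run_simulation` wrapper and the stand-in quantity of interest are placeholders for the legacy code's actual interface.

```python
import numpy as np

def run_simulation(dt_max, t_out=np.linspace(0.0, 1e-8, 200)):
    """Placeholder for the legacy solver: run with the given upper
    time-step bound and return a quantity of interest sampled at fixed
    output times t_out. Replace the body with a call to the actual
    code; the decaying exponential here is just a stand-in."""
    return np.exp(-t_out / 1e-9) * (1.0 + dt_max / 1e-10)

# Successively halve the cap on the time step (dt_min stays at 1e-13).
dt_max_values = [5e-11, 2.5e-11, 1.25e-11]
results = [run_simulation(dt) for dt in dt_max_values]

# Compare each run against the next-finer one. If the relative change
# keeps shrinking as the cap is lowered, the computed solution is
# effectively independent of the time-step bound.
for coarse, fine, dt_cap in zip(results[:-1], results[1:], dt_max_values):
    rel_change = np.linalg.norm(coarse - fine) / np.linalg.norm(fine)
    print(f"dt_max = {dt_cap:.2e}  ->  relative change = {rel_change:.3e}")
```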