The best way by far is to include all of your code as supplementary material. If possible, also include files with the relevant random seeds needed to recreate your results. Not only does this let people reproduce your results (which you might not care about), it also lets them more easily continue where you left off, which can lead to new collaborations and citations of your work. Unfortunately, this comes with the burden of forcing you to clean up your code and make sure it's bug-free, so it is more an ideal than common practice. But at the very least, you should archive the version of your code used to produce your results, so that if another researcher asks for it, you can provide it.
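To make the "record your seeds" part concrete, here is a minimal sketch of one way to do it, assuming a Python/NumPy workflow; `run_experiment` and the metadata file name are placeholders, not anything specific to your project:

```python
import json
import numpy as np

def run_experiment(seed):
    """Stand-in for the real simulation; only the seeding pattern matters here."""
    rng = np.random.default_rng(seed)    # all randomness flows from this one generator
    return rng.normal(size=1000).mean()  # placeholder for the actual result

seed = 12345  # or int(np.random.SeedSequence().entropy) to draw a fresh one
result = run_experiment(seed)

# Archive the seed next to the result, so the exact run can be reproduced later.
with open("run_metadata.json", "w") as f:
    json.dump({"seed": seed, "result": float(result)}, f)
```

The point is simply that every run leaves behind the seed it used, so "recreate my results" becomes re-running a command rather than guesswork.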
As for the description in your paper, I would concentrate on a high-level, implementation-independent description of the key novel features of the model (this is the practical level most good papers achieve). Concentrate on the features that will change the results qualitatively if they are tweaked. Most models I work with produce quantitative results, but the specific quantities are usually not of interest, only the qualitative behavior (since the parameters are usually far from those observable in nature). Thus, I focus on describing the parts of the model that, if changed, would change the qualitative behavior of the system. If this mindset forces me to describe every last detail of my model down to the implementation, then I know that my model is not very robust and should probably be scrapped.
A good way to test whether your in-paper description is sufficient is to ask a friend (or student) who did not work on the project with you to describe, in pseudo-code, how they might implement your model. If they don't get stuck while trying this (i.e. they arrive at a sketch of a model that should produce the same qualitative results), then you know you have done a good job with the description.
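For illustration, the kind of sketch you hope they hand back looks something like the toy below; it is purely hypothetical (a generic coupling-plus-noise loop, not any particular model), and what matters is only that the reader can name the key ingredients and the order in which they interact:

```python
import numpy as np

def simulate(params, n_steps, seed):
    """Hypothetical reader's sketch: placeholder ingredients, correct overall structure."""
    rng = np.random.default_rng(seed)
    state = np.zeros(params["n_agents"])  # initial condition described in the paper
    for _ in range(n_steps):
        state += params["coupling"] * (state.mean() - state)             # the novel mechanism
        state += params["noise"] * rng.standard_normal(state.shape)      # standard noise term
    return state.std()  # the qualitative observable the paper reports

print(simulate({"n_agents": 100, "coupling": 0.1, "noise": 0.01}, n_steps=500, seed=0))
```

If their sketch is this complete, at this level of abstraction, your description has done its job.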