The number of parameters often turns out not to be a good measure of the complexity of a function or model. There are several ways of measuring complexity in different scenarios - one of the simplest is the Vapnik–Chervonenkis (VC) dimension. The basic idea is to imagine a set of points scattered in the xy-plane, some labeled + and some labeled -. Can a curve from a particular model class be chosen so that all the + points lie above the curve and all the - points lie below it? The largest number of points for which you can realize *every* possible +/- labeling this way (called "shattering" the points) measures how "wiggly" the model class is, and thus how complex it is.
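To make "shattering" concrete, here's a minimal brute-force sketch in Python. The names `realized_labelings` and `is_shattered`, and the grid-sweep over the parameter, are my own illustration, not a standard API:

```python
def realized_labelings(f, a_grid, points):
    """Every +/- labeling the model class {y = f(a, x)} can produce on the
    given (x, y) points, sweeping the parameter a over a grid. A point is
    '+' when it lies above the curve y = f(a, x)."""
    return {tuple(y > f(a, x) for x, y in points) for a in a_grid}

def is_shattered(f, a_grid, points):
    """The points are shattered iff every one of the 2^n labelings shows up.
    The grid sweep is an approximation: a finer grid can only discover more
    labelings, so a True result is trustworthy, but a False result might
    just mean the grid missed the right value of a."""
    return len(realized_labelings(f, a_grid, points)) == 2 ** len(points)
```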
For example, say that you have two models, $f_1(x) = a x^2$ and $f_2(x) = 1 - \cos(a x)$, each of which has one tunable parameter $a$. For how many points (say, with $x$ between $-0.5$ and $0.5$) can we produce every +/- labeling with each model? $f_1$ is very poor by this measure: it can't even shatter two points, since if $y_1/x_1^2 > y_2/x_2^2$ there is no $a$ that puts point 1 below the curve and point 2 above it (that would require $a > y_1/x_1^2$ and $a < y_2/x_2^2$ simultaneously). $f_2$, however, has infinite VC dimension, since it can produce any labeling of any number of points (see solution B2b here). So if $f_1$ and $f_2$ both provide good fits to some data (i.e., we can find an $a$ for each that fits well), we should strongly prefer $f_1$, since it comes from a simpler model class and is therefore more likely to generalize well.
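A quick numeric check of this claim, repeating the helper from the sketch above so it runs standalone. The specific points and parameter grid are arbitrary choices of mine:

```python
import numpy as np

# Same brute-force helper as in the earlier sketch.
def realized_labelings(f, a_grid, points):
    return {tuple(y > f(a, x) for x, y in points) for a in a_grid}

f1 = lambda a, x: a * x**2           # can't shatter even two points
f2 = lambda a, x: 1 - np.cos(a * x)  # infinite VC dimension

# Two arbitrary points with x in (-0.5, 0.5); note y1/x1^2 > y2/x2^2 here,
# so f1 can never put point 1 below the curve and point 2 above it.
points = [(0.2, 0.3), (0.4, 0.1)]
a_grid = np.linspace(-50, 50, 200001)

print(len(realized_labelings(f1, a_grid, points)))  # 3 -- misses (below, above)
print(len(realized_labelings(f2, a_grid, points)))  # 4 -- all 2^2 labelings
```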
Note that we're talking about the complexity of function classes here, not of specific functions as your question implied: typically "parameters" refers not to the input $x$ of a function, but to the tunable coefficients of the model ($a$ in my example).