[update] It seems the original title of my question was a bit misfocused; concerning the use of partial derivatives I found some explanation/confirmation in Wikipedia's partial derivative article. There it is stated, and illustrated with a 3-d picture of a surface, that the three partial derivatives do indeed suffice to find the common extremum. (In hindsight this makes sense: the sum of squares is a convex quadratic function of $a,b,c$, so any stationary point is automatically a global minimum.) In my problem I seem to have had a misleading intuition: I observed that the $x$ and $x^2$ data are not independent and therefore felt cautious (too much so, apparently).
A full exposition of my procedure/reasoning, and the source of this question, is here.
I have adapted the subject of my question; if this seems inappropriate, please feel free to roll it back. The original question follows below. [end update]
I'm deliberately trying to reinvent the wheel: I want to understand (and implement, for an example) the computation of polynomial regression. For the simplest nonlinear case, let's use the estimated model
$$\hat y_k=a+b x_k + c x_k^2 \qquad (k=1,\dots,n)$$
with the minimizing criterion
$$\sum (y_k - \hat y_k)^2 =\sum (y_k - (a+bx_k+cx_k^2))^2= \text{Min}$$
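Writing $S(a,b,c)=\sum_k \bigl(y_k-(a+bx_k+cx_k^2)\bigr)^2$ for this criterion, the three separate partial derivatives, each set to zero, would (as far as I can tell) be
$$\frac{\partial S}{\partial a}=-2\sum (y_k-a-bx_k-cx_k^2)=0,\quad
\frac{\partial S}{\partial b}=-2\sum x_k(y_k-a-bx_k-cx_k^2)=0,\quad
\frac{\partial S}{\partial c}=-2\sum x_k^2(y_k-a-bx_k-cx_k^2)=0,$$
which rearrange to a linear system in $a,b,c$:
$$\begin{aligned}
an+b\sum x_k+c\sum x_k^2&=\sum y_k\\
a\sum x_k+b\sum x_k^2+c\sum x_k^3&=\sum x_k y_k\\
a\sum x_k^2+b\sum x_k^3+c\sum x_k^4&=\sum x_k^2 y_k
\end{aligned}$$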
I have expressed all the occurring and needed variances and covariances in a matrix, cofactored with $a,a^2,b,b^2,c \text{ and }c^2$. To find the minimum I have to use derivatives; however, I am not very familiar with partial derivatives.
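To make concrete what I mean by the matrix expression, here is a minimal sketch of the computation in code (assuming NumPy; the data values are made up, and `np.polyfit` appears only as an independent cross-check, not as part of my approach):

```python
import numpy as np

# Made-up example data, purely for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1])

n = len(x)

# Moment matrix of the linear system: entry (i, j) is sum(x**(i + j)).
M = np.array([
    [n,            x.sum(),       (x**2).sum()],
    [x.sum(),      (x**2).sum(),  (x**3).sum()],
    [(x**2).sum(), (x**3).sum(),  (x**4).sum()],
])

# Right-hand side: sums of y weighted by powers of x.
rhs = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])

# Solving the three equations simultaneously gives a, b, c.
a, b, c = np.linalg.solve(M, rhs)
print(a, b, c)

# Cross-check with a standard quadratic fit
# (polyfit returns coefficients highest power first: c, b, a).
print(np.polyfit(x, y, 2))
```

If the three equations have a common solution, this should reproduce the coefficients of the standard quadratic fit.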
So my question is: do I get the global minimum if I set the partial derivatives with respect to $a,b,c$ separately to zero in my matrix expression (and then find the common solution of the three equations)? Or do I need to involve some "joint derivative" or "common derivative" in which, for instance, $a$ and $b$ are used simultaneously in one derivative (something like $\partial ab$)?
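By the latter I mean, for instance, a mixed second derivative such as
$$\frac{\partial^2 S}{\partial a\,\partial b},$$
i.e. an entry of the matrix of all second partial derivatives (the Hessian), in case something of that kind is needed to guarantee that the common solution is really a minimum.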