This question was edited to clarify my question; for the old question, see the edit log.
I found this about regression and correlation:
Regression is different from correlation because it tries to put the variables into an equation and thus explain a causal relationship between them; for example, the simplest linear equation is written: Y = aX + b
Based on the information mentioned here:
However, such results do not allow any causal explanation of the effect of x on y; indeed, x could act on y in various ways that are not always direct. All we can say from the correlation is that these two variables are linked somehow. To really explain and measure the causal effect of x on y, we need to use the regression method, which will come next.
If we plot the data and it shows a clear linear trend, we can test whether this trend is significant using a correlation test (I assume). If it is, we can fit a linear model to the data and then inspect the p-value of the slope to determine whether the slope we estimated is likely to hold in the population, i.e. whether it differs significantly from zero.
I'm not sure whether the above assumption is correct, so I'm wondering: what does the p-value of the correlation test tell us, and what does the p-value of the slope tell us?
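As a quick numerical sketch of the comparison the question asks about: the code below (Python with SciPy rather than R's `cor.test`/`lm`; the data are synthetic and just for illustration) runs a correlation test and a simple linear regression on the same data and prints both p-values side by side.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)  # linear trend plus noise

# Correlation test: H0 is "true correlation equals 0"
r, p_cor = stats.pearsonr(x, y)

# Simple linear regression: the reported p-value tests H0 "slope equals 0"
fit = stats.linregress(x, y)

# With a single predictor, both hypotheses reduce to the same t statistic,
# so the two p-values agree (up to floating-point error).
print("correlation test p-value:", p_cor)
print("slope p-value:           ", fit.pvalue)
```

With one predictor the two tests are mathematically equivalent, which is why the p-values coincide; this stops being true once the regression has several predictors.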
Comments:

– whuber: The p-values from R's `lm` and `cor.test` functions will be identical (provided neither variable is a constant). (Jan 18 '17 at 23:46)

– Stefan: The p-values (from `lm()` and `cor.test()`) must be the same because both are testing the same thing, i.e. whether the slope is different from zero. Was there something wrong with what I said? (Jan 19 '17 at 01:42)

– Stefan: `cor.test()` actually tests whether `alternative hypothesis: true correlation is not equal to 0`, so it doesn't test whether the slope is equal to 0. However, I found out that the correlation coefficient, $r$, is the slope of the regression line when both variables have been standardized first, and "How does the correlation coefficient differ from regression slope?". Thanks for poking me! Learned something new! (Jan 19 '17 at 05:28)
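The fact mentioned in the last comment, that $r$ equals the regression slope once both variables are standardized, can be checked numerically. A minimal sketch (Python with SciPy, synthetic data chosen for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

# Pearson correlation coefficient
r, _ = stats.pearsonr(x, y)

# Standardize both variables to mean 0 and standard deviation 1
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)

# The slope of the regression on standardized variables equals r,
# because slope = r * sd(y) / sd(x) and both sds are now 1.
slope = stats.linregress(zx, zy).slope
print(r, slope)
```

The identity `slope = r * sd(y) / sd(x)` also explains why the correlation test and the slope test agree in simple regression: the slope is zero exactly when the correlation is zero.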