I thought I'd post R code here to illustrate the point I made in the comments to Placidia's answer above (which I like, but I wanted to go into more detail on this particular point). This is the working that went into that comment.
The following code imposes random pairings on observations from two independent samples. If we repeat this imposition of random pairs 1000 times, running a paired t-test each time, you'll see considerable fluctuation in the observed p-value. The fluctuation arises from variation in the estimated standard error, and hence in the t-statistic, but I'm only showing the p-values here. Note that this problem also has implications for the construction of confidence intervals, not just for hypothesis tests per se.
I selected the random seed below so that the unpaired t-test (the "correct" analysis for these data, arising as they do from independent samples) comes out "significant", but there is nothing canonical about that choice.
## Code written in R 2.15.2
set.seed(2464858)
## Draw a and b as independent samples from two normal populations
a <- rnorm(20, mean = 0.5, sd = 1)
b <- rnorm(20, mean = 1, sd = 1)
# "Appropriate" unpaired t-test
t.test(a, b, paired = FALSE)
# should return: t = -2.2397, df = 37.458, p-value = 0.03113
## Now 1000 iterations for imposing "fake" pairs at random:
## with apologies for inefficient looping of code.
paired.ps <- array(NA, 1000)
for (i in 1:1000){
  b2 <- sample(b, length(b))  # shuffle b to impose a random pairing
  t.result <- t.test(a, b2, paired = TRUE)
  paired.ps[i] <- t.result$p.value
}
## Summary and histogram of distribution of p-values returned.
summary(paired.ps)
## returns Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00106 0.02533 0.03876 0.03936 0.05106 0.10000
hist(paired.ps)

Note that you can tweak the mean/standard deviation of the two populations in the code above to see the impact of these parameters on the fluctuation of p-values from an (inappropriately applied) paired t-test.
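To make that kind of experimentation easier, the whole simulation can be wrapped in a small function with the population parameters as arguments. A minimal sketch (the function name, argument names, and defaults here are my own, not part of the code above):

```r
## Sketch: the simulation above, parameterised by sample size, means,
## standard deviations, number of repetitions, and seed.
## simulate_fake_pairs() is a hypothetical helper, not from the original answer.
simulate_fake_pairs <- function(n = 20, m1 = 0.5, m2 = 1, s1 = 1, s2 = 1,
                                reps = 1000, seed = 2464858) {
  set.seed(seed)
  a <- rnorm(n, mean = m1, sd = s1)
  b <- rnorm(n, mean = m2, sd = s2)
  ps <- numeric(reps)
  for (i in seq_len(reps)) {
    b2 <- sample(b)                       # impose a random pairing
    ps[i] <- t.test(a, b2, paired = TRUE)$p.value
  }
  ps
}

## Example: widen the separation of means and inspect the p-value spread
summary(simulate_fake_pairs(m2 = 1.5))
```

Returning the full vector of p-values (rather than a summary) lets you pass it straight to `summary()` or `hist()` as above.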